Chatbots: A Long and Complex History

Widely known as the first chatbot, Eliza wasn't nearly as versatile as similar services today. Rather than relying on natural language understanding, the program reacted to keywords and essentially reflected the dialogue back to the user. Nevertheless, Joseph Weizenbaum, the MIT computer scientist who created Eliza, wrote in a 1966 research paper: "Some subjects have been very hard to convince that ELIZA (with its present script) is not human."
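The keyword-and-reflection technique described above can be sketched in a few lines. This is a minimal illustration in Python, with made-up rules in the spirit of Eliza's DOCTOR script; it is not Weizenbaum's original program, whose script was far more elaborate.

```python
import re

# Illustrative rules, not Weizenbaum's originals: each pattern looks for a
# keyword, and the template turns the user's own words back into a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r".*\bi feel (.+)", "Why do you feel {0}?"),
    (r".*\bmy (.+)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # fallback simply keeps the conversation alive
]

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel anxious about my exams"))
# -> Why do you feel anxious about your exams?
```

Even this toy version shows why the illusion works: the program understands nothing, yet mirroring a user's own words can feel uncannily attentive.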

For Weizenbaum, that fact was cause for concern, according to a 2008 MIT obituary. Those who interacted with Eliza were willing to open their hearts to it, even knowing it was a computer program. "ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment," Weizenbaum wrote in 1966. "A certain danger lurks there." Toward the end of his life, he became a critic of artificial intelligence.

The complicated relationship between humans and artificial intelligence machines has since become evident in the plots of Hollywood films such as Her and Ex Machina, not to mention the polite debates with people who insist on saying "thank you" to voice assistants like Alexa or Siri.

Modern chatbots can also elicit strong emotional reactions when they don't perform as expected, or when they become so good at imitating the flawed human speech they were trained on that they begin spitting out racist and inflammatory comments. Meta's new chatbot sparked controversy earlier this month by spouting patently false political commentary and antisemitic remarks in conversations with users.
Still, proponents of the technology argue that it can streamline customer service jobs and increase efficiency across a wide range of industries. It powers the digital assistants that many people use daily to play music, order deliveries, fact-check homework, and more. Some also argue that these chatbots offer comfort to the lonely, elderly, or isolated. At least one startup has gone so far as to use them as a way to seemingly keep dead relatives alive, by creating computer-generated versions of them based on uploaded chat histories.

Others warn that the technology behind AI-powered chatbots remains far more limited than some would like it to be. "These technologies are really good at faking out humans and sounding human-like, but they're not deep," said Gary Marcus, an AI researcher and professor emeritus at New York University. "They're imitators, these systems, but they're very superficial imitators. They don't really understand what they're talking about."

Still, as these services expand into more corners of our lives, and as companies take steps to personalize these tools further, our relationships with them may become more complicated as well.

Evolution of chatbots

Sanjeev P. Khudanpur remembers chatting with Eliza in graduate school. For all its historical importance in the tech industry, he said, it didn't take long to see its limitations.

It could only convincingly imitate a text conversation for about a dozen exchanges before "you realize, no, it's not really smart, it's just trying to prolong the conversation one way or the other," said Khudanpur, an expert in the application of information-theoretic methods to human language technologies and a professor at Johns Hopkins University.

Eliza's inventor, Joseph Weizenbaum, sits at a computer at the Computer Museum in Paderborn, Germany, in May 2005.
Another early chatbot was developed by Stanford University psychiatrist Kenneth Colby in 1971. Intended to mimic a patient with paranoid schizophrenia, it was named "Parry." (The New York Times' 2001 obituary for Colby included a colorful transcript of the chat that ensued when researchers brought Eliza and Parry together.)

In the decades that followed these tools, however, there was a shift away from the idea of "conversing with computers." "Because the problem turned out to be very, very hard," Khudanpur said. Instead, he said, the focus turned to "goal-oriented dialogue."


To understand the difference, think about the conversations you may have with Alexa or Siri. Typically, you ask these digital assistants for help buying a ticket, checking the weather, or playing a song. That's goal-oriented dialogue, and it became the main focus of academic and industry research as computer scientists sought to glean something useful from computers' ability to scan human language.

While these tools use technology similar to that of the earlier social chatbots, Khudanpur said, they serve a narrower, task-focused purpose.

There was a decades-long "lull" in the technology before the internet became widespread, he added. "The big breakthroughs came probably in this millennium," Khudanpur said, "with the rise of companies that successfully employed things like computerized agents to carry out routine tasks."

With the rise of smart speakers like Alexa, it has become even more common for people to chat with machines.

"People are always upset when their bags go missing, and the human agents who deal with them are always stressed out by all the negativity, so they said, 'Let's give it to a computer,'" Khudanpur said. "You could yell all you wanted at the computer, but all it wanted to know was, 'Do you have your tag number so I can tell you where your bag is?'"

In 2008, for example, Alaska Airlines launched "Jenn," a digital assistant to help travelers. In a sign of our tendency to humanize these tools, an early review of the service in The New York Times noted that if the wise guys fooling around with the site inevitably tried clumsy bar pickup lines on her, she would politely suggest getting back to business.

Back to social chatbots and social issues

In the early 2000s, researchers began revisiting the development of social chatbots capable of carrying on extended conversations with humans. Often trained on large amounts of data from the internet, these chatbots learned to be extremely good mimics of how humans speak, but they also risked echoing some of the worst of the internet.

In 2016, for example, Microsoft's public experiment with an AI chatbot called Tay crashed and burned in less than 24 hours. Tay was designed to talk like a teenager, but it quickly began spewing racist and hateful comments, and Microsoft shut it down. (The company said there was also a coordinated effort by humans to trick Tay into making certain offensive comments.)

“The more you chat with Tay, the smarter she gets, making the experience more personalized,” Microsoft said at the time.

That refrain would be repeated by other tech giants that released public chatbots, including with Meta's BlenderBot3, released earlier this month. The Meta chatbot falsely claimed, among other controversial statements, that Donald Trump is still president and that there is "definitely a lot of evidence" that the election was stolen.

BlenderBot3 also professed to be more than a bot. In one conversation, it claimed that "the fact that I am alive and conscious right now makes me human."

Meta's new chatbot, BlenderBot3, explains to users why it is actually human. It didn't take long, however, for the chatbot to spark controversy with inflammatory remarks.

Despite all the progress since Eliza, and the massive amounts of new data available to train these language processing programs, NYU professor Marcus said it remains unclear whether a truly reliable and safe chatbot can be built.

He cites "M," a Facebook project launched in 2015: an automated personal assistant that was supposed to be the company's text-based answer to services like Siri and Alexa. "The idea was it was going to be this universal assistant that went way beyond what Siri could do, helping you order a romantic dinner, get musicians to play for you, or have flowers delivered," Marcus said. Instead, the service was shut down in 2018 after an underwhelming run.

Khudanpur, for his part, remains optimistic about potential use cases. "I have this whole vision of how AI is going to empower humans at an individual level," he said. "Imagine if my bot could read all the scientific articles in my field; then I wouldn't have to go read them all. I could simply think, ask questions, and engage in dialogue," he said. "In other words, I will have an alter ego of mine with complementary superpowers."

Source: www.cnn.com
