We Should Stop Pretending that Chatbots Are Artificial Intelligence

Chatbots are not artificial intelligence — while a tool such as ChatGPT technically uses AI techniques like natural language processing (NLP) to function, it cannot be considered a complete AI solution for a simple reason: it doesn’t think.

Today’s large language models (LLMs) are adept at predicting responses to user inputs in a convincing way, but there’s no cognitive consciousness under the hood. This is part of the reason why even the most trustworthy AI tools have a tendency to hallucinate when you ask them factual questions.

Despite these limitations, many users hold the misconception that these models are capable of thinking, which is a recipe for disaster.

Key Takeaways

  • Chatbots like ChatGPT use AI techniques but cannot think or possess cognitive consciousness.
  • LLMs (Large Language Models) are ‘Stochastic Parrots’ — able to simulate human responses by predicting text without actual reasoning.
  • Two-thirds (67%) of respondents to a University of Waterloo survey thought that ChatGPT could experience consciousness.
  • AI experts emphasize that chatbots only imitate thought without purpose or meaning.
  • It’s essential for the AI industry to communicate the limitations of chatbots more clearly.

Can LLMs Think?

One of the biggest questions surrounding LLMs, from the average person’s perspective, is whether or not they’re capable of autonomous thought. To examine this further, Techopedia reached out to a number of AI experts — who gave us an unequivocal “No”.

Thomas Randall, director of AI Market Research at Info-Tech Research Group, told Techopedia:

“LLM chatbots are trained and fine-tuned to be very good at predicting what word/sentence/paragraph should follow the previous.

“The larger the model, the more data can be drawn on to provide more accurate responses. The larger the computing power, the more parameters can be set to have nuanced weightings in responses.

“There is no ‘ghost in the machine’ doing any abstract thinking or deliberation. It is a blind process; LLMs merely simulate human cognition.”
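To make this concrete, here is a minimal sketch of what “predicting the next word” looks like in practice. It assumes the open-source Hugging Face transformers and PyTorch Python packages and the small, publicly available GPT-2 model; the prompt is an invented example. The model does not reason about the prompt at all: it simply assigns a probability to every token in its vocabulary, and the statistically plausible continuations float to the top.

    # A minimal sketch of next-token prediction, assuming the Hugging Face
    # `transformers` and `torch` packages and the small public GPT-2 model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The cat sat on the"  # invented example prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

    # Convert the scores at the final position into next-token probabilities
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")

Every word a chatbot produces comes from a draw like this one: the most probable continuation, not a reasoned answer.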

Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, shared a similar assessment of generative AI’s lack of cognitive capabilities.

“Chatbots are not thinking. They are just following a program which statistically associates words to form predictions about what might reasonably come next in a sequence.”

In this case, “reasonably” is defined by the amount of text the LLM was trained on.
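A toy version of this statistical association fits in a few lines of Python. The sketch below (its tiny training corpus is invented for illustration) counts which word follows which in its training text, then generates a sentence by repeatedly sampling a likely next word. The output can look plausible even though nothing in the program understands a single word of it.

    # A toy "stochastic parrot": a bigram model that picks each next word
    # purely from observed frequencies. The training text is an invented example.
    import random
    from collections import defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # Count which words follow which in the training text
    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)

    # Generate text by repeatedly sampling a statistically likely next word
    word, output = "the", ["the"]
    for _ in range(8):
        word = random.choice(follows[word])
        output.append(word)
    print(" ".join(output))  # e.g. "the dog sat on the mat . the cat"

Scale the same idea up to billions of parameters and trillions of words of training text, and you get the fluent output of a modern chatbot; the underlying process is still prediction, not deliberation.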

Green also argues that “from the human side, thinking involves purpose and meaning.

“LLMs can only imitate that, they do not have actual purpose or meaning in any kind of mind which they are trying to express.”

The Stochastic Parrot: The Myth of Sentient Chatbots

It’s fairly obvious where the misconception arises.

While decades of sci-fi films featuring AI and centuries of science fiction in print will have helped, the ability of AI tools to emulate natural language has to be a large factor. Ask ChatGPT a question and you’ll usually get a verbose, confident-sounding answer.

Randall said:

“Given the unprecedented amount of data and computing power that LLMs now have, users unfamiliar with how LLMs are set up could be convinced LLMs ‘think’ based on the high-quality responses users receive.

“I believe a major reason why ChatGPT became popular overnight is because many people wanted to find the limits of when they could ‘break’ the machine and show it doesn’t pass the Turing test.”

This natural language prowess was enough to build a wave of hype surrounding generative AI following ChatGPT’s initial release in November 2022. It’s worth noting that the release came just months after a Google software engineer claimed that the tech giant’s LaMDA chatbot had become self-aware.

How many people think that LLMs can think? In a study conducted by the University of Waterloo, which surveyed 300 people, two-thirds of respondents (67%) said they thought that ChatGPT could experience consciousness.

In reality, chatbots are “Stochastic Parrots”, a term coined by computational linguist Emily Bender and her co-authors in 2021: they can generate plausible dialogue without understanding what they are saying.

While this is just a single survey, it’s interesting that so many people thought that a limited chatbot could think.

This raises the question of whether the misconception has also influenced enterprise decision-makers to invest in the technology and contributed to a potential overvaluation of generative AI.

The Bottom Line

LLM-powered chatbots are useful tools, but it’s time to stop pretending that they are sentient artificial intelligence. Above all, the AI industry needs to do a more effective job of communicating this technology’s limitations to those outside the world of machine learning.

Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, he wrote for VentureBeat, Forbes Advisor, and other notable technology platforms, covering the latest trends and innovations in technology.