Generative AI Apps and the Hallucination Problem

Generative AI apps such as ChatGPT, Bard, and Claude 2 are very helpful tools, but they frequently make things up. A recent article from CNN Business describes hallucination as one of the most significant hazards associated with this technology.

AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt. But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up.

– From CNN

Some social media users have described AI chatbots as “pathological liars,” CNN writes.

Suresh Venkatasubramanian, a professor at Brown University who helped co-author the White House’s Blueprint for an AI Bill of Rights, said: “The reality is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to ‘produce a plausible sounding answer’ to user prompts. So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces. There is no knowledge of truth there.”

I probably trust the answers that come out of ChatGPT the least of anybody on Earth.

– Sam Altman, CEO of ChatGPT-maker OpenAI (source: CNN)

Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public, was quoted as saying: “Simply put, a hallucination refers to when an AI model starts to make up stuff — stuff that is not in line with reality. But it does it with pure confidence, and it does it with the same confidence that it would if you asked a very simple question like, ‘What’s the capital of the United States?’ ”

Asked whether hallucinations can be prevented, Venkatasubramanian said: “These models are so complex, and so intricate, but because of this, they’re also very fragile. This means that very small changes in inputs can have changes in the output that are quite dramatic. And that’s just the nature of the beast: if something is that sensitive and that complicated, that comes along with it. Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”
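
To see what this input sensitivity looks like in practice, the sketch below sends two prompts that differ by a single word to the same model and prints both answers. This is a minimal illustration, assuming the OpenAI Python SDK; the model name and prompts are placeholder assumptions, not details from the CNN article.

```python
# Minimal sketch of input sensitivity: two nearly identical prompts
# can produce noticeably different answers from the same model.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two prompts that differ by a single word.
prompts = [
    "List the three largest moons of Saturn.",
    "List the three biggest moons of Saturn.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # even at low temperature, small prompt changes can shift the output
    )
    print(f"Prompt: {prompt!r}")
    print(response.choices[0].message.content)
    print("---")
```

Comparing the two outputs for wording and factual drift gives a rough, hands-on feel for the fragility Venkatasubramanian describes.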