
The Intractable Problem of AI Hallucinations

Solutions to Gen AI's 'Creative' Errors Not Enterprise-Ready, Say Experts

The tech industry is rushing out products faster than you can say "hallucinations" to tamp down artificial intelligence models' propensity to lie. But many experts caution these fixes haven't made generative AI ready for scalable, high-precision enterprise use.


Hallucinations arguably are gen AI's greatest problem - the sometimes laughably wrong, sometimes viral and sometimes dangerous or misleading responses that large language models spit out because they don't know better. "The challenge is that these models predict word sequences without truly understanding the data, making errors unavoidable," said Stephen Kowski, field CTO at SlashNext.

Tech companies' solution has been to look for ways to stop hallucinations from reaching users - layering tech on top of tech. Offerings such as Google's Vertex AI, Microsoft's correction capability and Voyage AI take varied approaches to improving the accuracy of LLM outputs.

The correction capability aims to curb hallucinations and boost output reliability by "grounding" responses in specific sources of trustworthy information the LLMs can access, Microsoft told Information Security Media Group. "For example, we ground Copilot's model with Bing search data to help deliver more accurate and relevant responses, along with citations that allow users to look up and verify information," a spokesperson said.
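In practice, grounding of this sort typically means retrieving trusted snippets and instructing the model to answer only from them, with citations. Below is a minimal sketch of that pattern; the `search_index` and `call_llm` helpers are hypothetical placeholders, not any vendor's actual API.

```python
# Sketch: "grounding" an LLM answer in retrieved sources with citations.
# `search_index` and `call_llm` are hypothetical stand-ins for a real retrieval
# service and model API.

def grounded_answer(question: str, search_index, call_llm) -> str:
    # Retrieve a handful of trusted snippets relevant to the question.
    snippets = search_index(question, top_k=3)  # e.g. [{"id": "S1", "text": "..."}, ...]

    # Present the snippets as numbered sources the model must cite.
    sources = "\n".join(f"[{s['id']}] {s['text']}" for s in snippets)
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite source IDs in brackets, and say 'not found in sources' "
        "if the answer is not supported.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```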

But these approaches fall short when the expectation is high-precision outcomes, said Ram Bala, associate professor of AI and analytics at Santa Clara University.

"Think of it this way: LLMs are always dreaming. Sometimes these dreams are real. How is this helpful? It is incredibly useful when you want creative output like writing a leave of absence letter, but enterprise applications do not always need this creativity," he told ISMG.

Implementing safeguards for all use cases is often cost-prohibitive. Many companies prefer to prioritize speed and breadth of deployment over accuracy, said Kowski.

Experts said a layered approach can stymie hallucinations in common consumer use cases where an incorrect response could cause harm or be inappropriate, since developers have enough data to, for instance, stop models from again advising users to put glue on pizza to keep cheese from sliding off (see: Breach Roundup: Google AI Blunders Go Viral). "It's one of the reasons we don't hear as many complaints about ChatGPT as we did two years ago," Bala said.

But the approach of layering anti-hallucination solutions in AI models is inadequate for nuanced, enterprise-specific demands. "Enterprises have many complex problems to solve and plenty of nuanced rules and policies to follow. This requires a deeper custom approach that many of the big tech companies may not be ready to invest in," he said.

Experts also argue that no advancements in technology can fully obliterate hallucinations. This is because hallucinations aren't bugs in the system but byproducts of how AI models are trained to operate, said Nicole Carignan, vice president of strategic cyber AI at Darktrace.

Hallucinations occur because gen AI models, particularly LLMs, use probabilistic modeling to generate output based on semantic patterns in their training data. Unlike traditional data retrieval, which pulls verified information from established sources, models generate content by predicting what is likely to be correct based on previous data. Kowski said some research concludes it may be mathematically impossible for LLMs to learn all computable functions.
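As a toy illustration of that probabilistic mechanism, the sketch below samples a next token from softmaxed scores. The candidate words and scores are invented for illustration and do not come from any real model.

```python
import math
import random

# Toy illustration of probabilistic generation: the model scores every candidate
# next token and samples from that distribution - it never "looks up" a fact.
def sample_next_token(logits: dict[str, float]) -> str:
    # Softmax turns raw scores into a probability distribution.
    exp_scores = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exp_scores.values())
    probs = {tok: v / total for tok, v in exp_scores.items()}

    # Sample proportionally: a plausible-but-wrong token can still be chosen.
    r, cumulative = random.random(), 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok
    return tok  # guard against floating-point rounding

# Hypothetical scores for completing "The capital of Australia is ...":
print(sample_next_token({"Canberra": 2.0, "Sydney": 1.6, "Melbourne": 0.9}))
```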

Alternative Approaches

While big tech has largely focused on broad-scale, generalized solutions, several startups are taking a more targeted approach to tackle hallucinations. Bala described two primary strategies emerging among these smaller players: allowing enterprises to build custom rules and prompts, and developing domain-specific applications with curated knowledge bases. Some startups enable companies to encode their own rules within LLMs, adapting AI to meet particular needs. Other startups deploy domain expertise to create knowledge graphs that are paired with retrieval-augmented generation, further anchoring AI responses in verified information. RAG lets LLMs reference documents outside training data sources when responding to queries. While these methods are still nascent, Bala said he anticipated rapid advancements in the coming year.
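A bare-bones sketch of the RAG pattern described above: rank documents by embedding similarity to the query, then ask the model to answer only from the top matches. The `embed` and `call_llm` functions are hypothetical stand-ins for whatever embedding and generation services an enterprise actually uses.

```python
# Sketch of retrieval-augmented generation (RAG): answer from retrieved documents,
# not from the model's parametric memory. `embed` and `call_llm` are hypothetical.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def rag_answer(query: str, documents: list[str], embed, call_llm, top_k: int = 3) -> str:
    # Rank documents by similarity between their embeddings and the query embedding.
    q_vec = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)

    # Build a context window from the best matches and keep the model inside it.
    context = "\n---\n".join(ranked[:top_k])
    prompt = (
        "Using only the context below, answer the question. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```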

Experts said that supervised machine learning, which is more structured than the probabilistic approach of gen AI, tends to yield more reliable results for applications requiring high accuracy.
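By way of contrast, here is a minimal supervised-learning example using scikit-learn - a generic illustration, not tied to any product mentioned above. The output space is a fixed set of labels, and accuracy can be measured directly against held-out ground truth.

```python
# Minimal supervised-learning example: unlike free-form generation, the output
# space is fixed (known labels) and accuracy is measurable on held-out data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```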

To harness AI's benefits while mitigating hallucinations, Carignan recommends a multi-faceted approach. Robust data science principles such as rigorous testing and verification, combined with layered machine learning approaches, can help reduce errors. But technology alone isn't enough, she said. Security teams must be embedded throughout the process to ensure AI safety, and employees must be educated about AI's limitations.


About the Author

Rashmi Ramesh


Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.




