Tech Term Decoded: Hallucination

Definition

An AI hallucination is an output produced by an AI model that is detached from reality or lacks credibility. Simply put, it is a situation where a model gives a wrong answer, makes up a story, or produces output that doesn’t make sense. Hallucinations vary in severity, ranging from a minor factual mistake to an entirely fabricated claim. Hallucinations are not limited to text-based LLMs; they also occur in image and video AI generators, resulting in visually implausible or contextually inaccurate outputs [1].

AI hallucinations are usually caused by flawed training data and a lack of proper grounding. An AI model may struggle to comprehend real-world knowledge, physical properties, or factual information. This lack of grounding can lead the model to produce outputs that, while seemingly plausible, are factually incorrect, irrelevant, or nonsensical. It can even go as far as fabricating links to web pages that do not exist.

An example of AI hallucination would be a model designed to answer questions about Nigerian history claiming that "Nnamdi Azikiwe was Nigeria's first military head of state in 1960" when he was actually the first President, a civilian position. The model produces false information by confusing roles and mixing up historical facts about Nigeria's independence.
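The grounding idea discussed above can be illustrated with a toy check: compare each sentence of a model's answer against retrieved source passages and flag sentences with little lexical support. This is a minimal sketch with hypothetical function names, not a production technique; real systems pair retrieval with entailment or fact-verification models rather than word overlap.

```python
# Toy grounding check: flag answer sentences whose content words
# have low overlap with every retrieved source passage.
# Illustrative only; word overlap is a crude hallucination signal.

def content_words(text: str) -> set[str]:
    """Lowercase tokens longer than 3 characters (crude content-word filter)."""
    return {w.lower().strip(".,") for w in text.split() if len(w.strip(".,")) > 3}

def unsupported_sentences(answer: str, sources: list[str],
                          threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose best content-word overlap with any
    source passage falls below `threshold` (a possible hallucination)."""
    flagged = []
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        words = content_words(sentence)
        if not words:
            continue
        best = max(len(words & content_words(src)) / len(words) for src in sources)
        if best < threshold:
            flagged.append(sentence)
    return flagged

sources = ["Nnamdi Azikiwe became Nigeria's first President in 1963, a civilian position."]
answer = ("Nnamdi Azikiwe was Nigeria's first President. "
          "He was Nigeria's first military head of state in 1960.")
print(unsupported_sentences(answer, sources))
# → ["He was Nigeria's first military head of state in 1960"]
```

The first sentence is well supported by the source passage and passes; the second, echoing the fabricated claim from the example above, shares few content words with the source and is flagged.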

Figure: Explaining AI hallucination [2].

Origin

The term “hallucination” was first introduced by researchers Baker and Kanade in the context of computer vision (CV). Around 2000, hallucination carried a positive connotation in CV, referring to techniques to be leveraged, such as super-resolution, image inpainting, and image synthesis.

Then in 2018, Google researchers applied the term to neural machine translation, describing hallucinations as "highly pathological translations that are completely untethered from the source material." The term gained prominence with ChatGPT’s release, which brought LLMs to mainstream audiences while simultaneously exposing their tendency to generate incorrect output.

Context and Usage

The beauty of generative AI is its potential for new content, so hallucinations can sometimes be welcome. According to Swayamdipta, assistant professor of computer science at the USC Viterbi School of Engineering and leader of the Data, Interpretability, Language and Learning (DILL) lab, "We want these models to come up with new scenarios, or maybe new ideas for stories or … to write a sonnet in the style of Donald Trump. We don't want it to produce exactly what it has seen before."

And so there's an important distinction between using an AI model as a content generator and using it to factually answer questions.

"It's really not fair to ask generative models to not hallucinate, because that's what we train them for. That's their job," added Soatto, vice president and distinguished scientist at Amazon Web Services [3].

Why it Matters

The generative artificial intelligence (GenAI) market is projected to expand significantly over the next five years. With over 5.3 billion internet users globally, the LLMs powering these systems continuously absorb vast amounts of data: billions of videos, photos, emails, social posts, and other content created daily.

Ideally, generative AI tools like Google’s Gemini and OpenAI’s ChatGPT would accurately address every user query. However, these systems sometimes produce incorrect responses or fabricate information entirely. It’s worth noting that ChatGPT now includes a disclaimer just below the open text field: “ChatGPT can make mistakes. Consider checking important information.” And Google’s Gemini recommends that users double-check their responses.

AI hallucinations represent one of the most significant challenges in GenAI development. They are not minor errors: they can result in wrong medical advice, fabricated citations, or even brand-damaging customer responses [4].

Related AI Ethics and Governance Terms

  • Grounding: Technique of connecting AI outputs to verifiable sources or real-world data to reduce hallucinations.
  • Prompt Injection: Security attack where malicious inputs manipulate AI system behavior.
  • Prompt Leaking: Security vulnerability where AI systems inadvertently reveal their internal instructions.
  • Responsible AI by Design: Approach to building AI systems with ethical considerations from the start.

In Practice

A real-life case study of AI hallucination is the significant backlash Google faced after Gemini’s image generation model produced historically inaccurate and culturally insensitive images. For instance, it generated images of the Pope as a woman and of World War II-era German Nazi soldiers with dark skin. These occurrences embarrassed Google, leading to a temporary pause of Gemini’s image generation feature so the company could fix what it called a “fine-tuning” issue in the model’s programming [5].

References

  1. Farnschläder, T. (2025). AI Hallucination: A Guide With Examples.
  2. NTT Data. (n.d.). Not All Hallucinations Are Bad: The Constraints and Benefits of Generative AI.
  3. Lacy, L. (2024). Hallucinations: Why AI Makes Stuff Up, and What's Being Done About It.
  4. Hamill, E. (2026). What are AI hallucinations?
  5. Allyn, B. (2024). Google CEO Pichai says Gemini's AI image results "offended our users".


Kelechi Egegbara

Kelechi Egegbara is a Computer Science lecturer with over 13 years of experience, an award-winning Academic Adviser, a member of the Computer Professionals of Nigeria, and the founder of Kelegan.com. With a background in tech education, he has dedicated the later years of his career to making technology education accessible to everyone by publishing papers that explore how emerging technologies transform sectors such as education, healthcare, the economy, agriculture, governance, the environment, and photography. Beyond tech, he is passionate about documentaries, sports, and storytelling, interests that help him create engaging technical content. You can connect with him at kegegbara@fpno.edu.ng to explore the exciting world of technology together.
