Council Post: Does AI Hallucinate Electric Sheep?

Brian Neely is CIO and CISO of American Systems.

In his influential 1968 sci-fi novel Do Androids Dream of Electric Sheep?, the basis for the movie Blade Runner, Philip K. Dick explored the philosophical and ethical questions that arise when artificial beings approach human-like consciousness. In probing this visceral, thought-provoking territory, Dick asks whether a non-biological entity is capable of taking on sentient characteristics, such as the ability to perceive, feel and be self-aware, blurring the line between human and artificial intelligence. While we hope that AI is still a long way from turning on its creators like the “replicants” in Dick’s story, concerns over what can happen at the intersection of technology and humanity are more relevant today than ever.

One of the best-known issues with widely used large language models (LLMs) like OpenAI’s ChatGPT, Meta’s Llama or Google’s Gemini is their tendency to **“hallucinate,”** or present false, misleading or illogical information as if it were factual. Whether it’s citing non-existent legal cases or claiming Acapulco is the capital of Antarctica, hallucinations typically happen when an LLM isn’t given proper context or enough high-quality data.

Because LLMs use linguistic statistics to generate responses about an underlying reality they don’t actually understand, their answers may be grammatically and syntactically correct yet make no sense at all. A poorly trained AI model with inherent biases or blind spots will try to **“fill in the blanks”** but end up producing gibberish, or worse.

Some have compared AI hallucinations to human dreaming: creatively connecting seemingly unrelated data without logical grounding. Irrational, unexpected AI responses can spark new ideas in creative writing, art or music, perhaps during an **“outside-the-box”** brainstorming session, but they can also be dangerous. They can erode trust in AI, lead to poor decision-making and even result in harmful outcomes; you certainly don’t want hallucinations in a medical diagnosis or a self-driving car’s decisions.

That’s why leading AI scientists continue to battle hallucinations, and why three popular approaches can be used to reduce or even eliminate them.

Prompt Engineering

The simplest approach to reducing the odds of AI hallucinations is to put careful thought into how we interact with the model. This means shifting our thinking from **“can AI properly answer my question?”** to **“can I ask AI the right question?”** Engineering a better prompt doesn’t require any specialized skills; it’s something anyone can do, and it works with virtually all GenAI models.

A prompt is simply the set of instructions you give a GenAI model to get an intended response. Good prompt engineering involves providing proper context and perspective on what you’re asking, assigning the AI model a role and even telling it how you want the response to be structured. Don’t just write a few words or a single sentence like a traditional Google search. Instead, be as detailed and comprehensive as possible, using multiple prompts if necessary. Essentially, you are shaping the input to influence the output, guiding the LLM’s behavior without actually altering the underlying model. The better the prompt, the better the response.
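
To make this concrete, here is a minimal Python sketch contrasting a bare prompt with an engineered one. The OpenAI Python SDK call and the model name are illustrative assumptions; the same structured-prompt pattern works with any chat-style GenAI interface.

```python
# A minimal prompt-engineering sketch, assuming the OpenAI Python SDK
# (pip install openai) and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Bare prompt, shown only for contrast: little context, so the model is
# free to guess (and hallucinate).
bare_prompt = "Tell me about the capital of Antarctica."

# Engineered prompt: assigns a role, states the task, constrains the output
# and explicitly allows "I don't know" instead of a fabricated answer.
engineered_prompt = (
    "You are a careful geography fact-checker.\n"
    "Task: state whether Antarctica has a capital city.\n"
    "Rules:\n"
    "- Answer in two sentences or fewer.\n"
    "- If the premise of the question is false, say so directly.\n"
    "- If you are not certain, say so rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": engineered_prompt}],
)
print(response.choices[0].message.content)
```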

Retrieval-Augmented Generation (RAG)

This approach constrains an LLM to specific data sources or proprietary content so it provides more relevant answers. Instead of relying solely on what it absorbed during training, the LLM retrieves a predetermined subset of trusted content and uses it as context when formulating its responses. RAG works well for organizations with highly dynamic data and well-defined content sources. In specialized areas, such as the medical field, a doctor could use an off-the-shelf LLM like ChatGPT configured to draw on their hospital’s medical library, so responses are grounded in vetted clinical sources.

From a user’s perspective, RAG implementation is seamless and transparent, generally only requiring lightweight setup by admins to connect relevant data sources. The result is better, more accurate and, most importantly, context-aware responses.
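
For those curious about the mechanics, the sketch below shows the core RAG loop under simple assumptions: documents are embedded once, the most relevant ones are retrieved by cosine similarity and the retrieved text is injected into the prompt. The sentence-transformers library, the document snippets and the prompt wording are illustrative stand-ins for a real embedding model and a real content source.

```python
# A minimal retrieval-augmented generation (RAG) sketch.
import numpy as np
from sentence_transformers import SentenceTransformer

# 1. A tiny stand-in for a curated, well-defined content source.
documents = [
    "Hospital policy: adult acetaminophen dosing must not exceed 4 g per day.",
    "Radiology guideline: contrast agents require a recent creatinine result.",
    "Pharmacy note: warfarin interacts with many common antibiotics.",
]

# 2. Embed the documents once, up front.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec          # dot product of normalized vectors
    top_k = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top_k]

# 3. Build a grounded prompt: the model is told to answer only from the
#    retrieved context, which is what keeps responses tied to the source.
question = "What is the maximum daily acetaminophen dose for adults?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using only the context below. If the context does not contain "
    f"the answer, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # this prompt would then be sent to the LLM of your choice
```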

Fine-Tuning

The most involved solution starts with a pre-trained LLM that has a general understanding of language. That model is then fine-tuned on smaller, task-specific datasets containing high-quality, relevant examples on a certain subject (say, history). The goal is to sharpen the model’s knowledge for the target task, in part by penalizing it during training for outputs that stray from the curated examples into irrelevant or implausible content.

This approach can be more expensive and time-consuming, and it requires specialized skills. You often have to use your own data (slow-changing data works best) and provide hands-on supervision to make sure you’re getting the best results. But if done correctly, fine-tuning can teach LLMs to give more precise and appropriate responses on narrow, specialized topics.
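
As a rough illustration of what this looks like in practice, the sketch below uses the Hugging Face transformers and datasets libraries to fine-tune a small open model on a curated dataset. The model choice, the history_qa.jsonl file name and the hyperparameters are assumptions for illustration, not a production recipe.

```python
# A minimal supervised fine-tuning sketch with Hugging Face transformers/datasets.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "distilgpt2"  # small open model, used here only for illustration
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical JSONL file of curated, high-quality examples on one subject,
# e.g. {"text": "Q: When did the Roman Republic fall? A: 27 BC ..."} per line.
dataset = load_dataset("json", data_files="history_qa.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# The standard causal-LM loss penalizes tokens that deviate from the curated
# examples, which is what nudges the model toward on-topic, plausible output.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="history-tuned",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("history-tuned")
tokenizer.save_pretrained("history-tuned")
```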

Better Together

These three approaches aren’t mutually exclusive; in fact, they work best in unison. By engineering good prompts, pointing an LLM at relevant data sources and then fine-tuning it, you can eliminate nearly all hallucinations and end up with results that are both context-aware and highly accurate.
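
To tie the pieces together, here is a brief, hypothetical sketch of the combined flow: an engineered prompt, populated with retrieved context, sent to a fine-tuned model (reusing illustrative names from the sketches above).

```python
# Combining the three approaches: a fine-tuned model (the "history-tuned"
# output from the sketch above), retrieved context and an engineered prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="history-tuned")

# In a full pipeline this context would come from a retrieval step like the
# retrieve() helper in the RAG sketch; here it is a hard-coded placeholder.
context = "The Roman Republic ended in 27 BC when Octavian became Augustus."
question = "When did the Roman Republic become an empire?"

prompt = (
    "You are a careful history tutor. Answer only from the context below; "
    "if the context is insufficient, say you are not sure.\n\n"
    f"Context: {context}\nQuestion: {question}\nAnswer:"
)
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```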

Ultimately, we need to stay mindful of hallucinations as AI-powered technologies are increasingly relied on for quick results and powerful decision-making, a reliance whose risks are only amplified in time-critical, high-pressure situations.

In the meantime, as we ask AIs for answers on a daily basis, it’s probably not a bad idea to be thorough, provide context and review the results with a critical eye. You don’t want to be like the lawyers in 2023 who relied on ChatGPT to help prepare a case—only to find the documents were filled with entirely made-up legal citations.

And just as in Dick’s book, where replicants aren’t inherently good or bad but simply respond to how they’re used, we need to make sure we put AI in the best possible position to be a benefit, not a hazard.
