![](https://www.visualcapitalist.com/wp-content/uploads/2025/01/AI-Models-with-the-Lowest-Hallucination-Rates_website_Jan7.jpg)
AI Models With the Lowest Hallucination Rates
This was originally posted on our Voronoi app. Download the app for free on iOS or Android and discover incredible data-driven charts from a variety of trusted sources.
As AI-powered tools and applications become more integrated into our daily lives, it’s important to keep in mind that models may sometimes generate incorrect information.
This phenomenon, known as “hallucination,” is described by IBM as occurring when a large language model (LLM)—such as a generative AI chatbot or computer vision tool—perceives patterns or objects that do not exist or are imperceptible to humans, producing outputs that are inaccurate or nonsensical.
This chart visualizes the top 15 AI large language models with the lowest hallucination rates.
The hallucination rate is the frequency with which an LLM generates false or unsupported information in its outputs.
The data comes from Vectara and is current as of Dec. 11, 2024. Hallucination rates were calculated by having each LLM summarize 1,000 short documents and then using a hallucination detection model to flag summaries that are factually inconsistent with their source documents; the rate is the percentage of summaries flagged as inconsistent.
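To make the methodology concrete, here is a minimal sketch of how such a rate could be computed, assuming you supply your own summarization call for the model under test and a black-box consistency checker. The function names `summarize` and `is_factually_consistent` are hypothetical stand-ins, not Vectara's actual API.

```python
from typing import Callable, List


def hallucination_rate(
    documents: List[str],
    summarize: Callable[[str], str],                      # the LLM under test (hypothetical callable)
    is_factually_consistent: Callable[[str, str], bool],  # hallucination detection model (hypothetical callable)
) -> float:
    """Return the share of summaries judged factually inconsistent with their source."""
    inconsistent = 0
    for doc in documents:
        summary = summarize(doc)
        if not is_factually_consistent(doc, summary):
            inconsistent += 1
    return inconsistent / len(documents)


# Example: if 13 of 1,000 summaries are flagged, the hallucination rate is 1.3%.
# rate = hallucination_rate(docs, my_llm_summarize, my_consistency_checker)
# print(f"{rate:.1%}")
```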
Which AI Models Have the Lowest Hallucination Rates?
Below, we show the top 15 AI models with the lowest hallucination rates, their company, and their country of origin.
| Model | Company | Country | Hallucination Rate |
| --- | --- | --- | --- |
| Zhipu AI GLM-4-9B-Chat | Zhipu AI | 🇨🇳 China | 1.3% |
| Google Gemini-2.0-Flash-Exp | Google | 🇺🇸 United States | 1.3% |
| OpenAI-o1-mini | OpenAI | 🇺🇸 United States | 1.4% |
| GPT-4o | OpenAI | 🇺🇸 United States | 1.5% |
| GPT-4o-mini | OpenAI | 🇺🇸 United States | 1.7% |
| GPT-4-Turbo | OpenAI | 🇺🇸 United States | 1.7% |
| GPT-4 | OpenAI | 🇺🇸 United States | 1.8% |
| GPT-3.5-Turbo | OpenAI | 🇺🇸 United States | 1.9% |
| DeepSeek-V2.5 | DeepSeek | 🇨🇳 China | 2.4% |
| Microsoft Orca-2-13b | Microsoft | 🇺🇸 United States | 2.5% |
| Microsoft Phi-3.5-MoE-instruct | Microsoft | 🇺🇸 United States | 2.5% |
| Intel Neural-Chat-7B-v3-3 | Intel | 🇺🇸 United States | 2.6% |
| Qwen2.5-7B-Instruct | Alibaba Cloud | 🇨🇳 China | 2.8% |
| AI21 Jamba-1.5-Mini | AI21 Labs | 🇮🇱 Israel | 2.9% |
| Snowflake-Arctic-Instruct | Snowflake | 🇺🇸 United States | 3.0% |
Smaller or more specialized models, such as Zhipu AI GLM-4-9B-Chat, OpenAI-o1-mini, and GPT-4o-mini, have some of the lowest hallucination rates among all models. Intel’s Neural-Chat-7B is also a smaller model.
According to Vectara, small-size models can “achieve hallucination rates comparable or even better (lower) than LLMs that are much larger in size.”
Measuring hallucination rates is becoming increasingly critical as AI systems are deployed in high-stakes applications across fields such as medicine, law, and finance.
While larger models generally outperform smaller ones and are continually scaled up for better results, they come with drawbacks like high costs, slow inference, and complexity.
Smaller models, however, are closing the gap, with many performing well on specific tasks. For example, a study showed that the smaller Mixtral 8x7B model successfully reduced hallucinations in AI-generated text.
In terms of foundation models, Google’s Gemini-2.0-Flash-Exp slightly outperforms OpenAI’s GPT-4o, with a hallucination rate just 0.2 percentage points lower.
Overall, however, several GPT-4 variants (4o, 4o-mini, Turbo, and the original GPT-4) fall within the 1.5%–1.8% range, highlighting a strong focus on accuracy across different tiers of the same architecture.
Learn More on the Voronoi App ![](https://www.visualcapitalist.com/wp-content/uploads/2023/12/voronoi-icon-transparent.png)
To learn more about the artificial intelligence industry, check out this graphic that visualizes how much big tech giants are spending on AI data centers.