MIT Spinout Develops AI That Can Recognize When It's Unsure to Address Hallucinations
Wednesday, Jun 4, 2025

As AI models increasingly take on critical tasks and responsibilities, the issue of AI hallucinations grows more concerning.
We all know someone who pretends to know everything but gives unreliable advice based on sketchy online sources. AI hallucinations are similar, except this unreliable friend could be tasked with something as crucial as designing your cancer treatment plan.
Enter Themis AI. Spun out of MIT, this company has achieved what seems simple in theory but complicated in practice: teaching AI to admit, “I’m not certain about this.”
AI systems often appear overly confident. Themis’ Capsa platform serves as a reality check, helping models identify when they’re guessing instead of relying on factual data.
Themis AI was founded in 2021 by MIT Professor Daniela Rus and her former research colleagues Alexander Amini and Elaheh Ahmadi. They have developed a platform that can integrate with nearly any AI system to spot moments of uncertainty before they lead to errors.
Capsa essentially teaches a model to recognize inconsistencies in how it processes data, patterns that may signal confusion, bias, or incomplete information and could result in hallucinations.
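To make the idea concrete, one common way to surface a model's uncertainty is Monte Carlo dropout: run the same input through the network several times with dropout left active and treat the disagreement between passes as a confidence signal. The sketch below illustrates that general technique in PyTorch; it is a minimal illustration of uncertainty estimation, not Capsa's actual API, and the model and numbers are hypothetical.

```python
# Minimal illustration of uncertainty estimation via Monte Carlo dropout.
# This is NOT Capsa's API; it only demonstrates the general idea of a model
# reporting how much it is "guessing" on a given input.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=16, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Dropout(p=0.2),            # kept active at inference for MC dropout
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=30):
    model.train()                          # keep dropout on so repeated passes differ
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)                 # averaged prediction
    uncertainty = probs.std(dim=0).mean(dim=-1)    # disagreement across passes
    return mean_probs, uncertainty

model = SmallClassifier()
x = torch.randn(4, 16)                     # a batch of hypothetical inputs
preds, unc = predict_with_uncertainty(model, x)
print(preds.argmax(dim=-1), unc)           # flag high-uncertainty rows for review
```

Inputs whose predictions vary wildly across passes are the ones the model is effectively guessing on, and they can be routed to a human or a larger model instead of being answered confidently.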
Themis asserts that since its inception it has helped telecom companies avoid pricey network-planning mistakes, assisted the oil and gas sector in interpreting intricate seismic data, and produced research on building chatbots that don't confidently fabricate answers.
Many remain unaware of how often AI systems operate on educated guesses. As these technologies handle increasingly important responsibilities, such guesses could lead to severe repercussions. Themis AI’s software provides a missing layer of self-awareness.
Themis AI's journey began in Professor Rus's lab at MIT, where the team tackled a fundamental issue: how to make machines aware of their limitations.
In 2018, Toyota funded their research into dependable AI for self-driving cars, a field where mistakes can be fatal because autonomous vehicles must reliably identify pedestrians and road hazards.
The breakthrough was developing an algorithm that identified racial and gender biases in facial recognition systems and proactively addressed them by rebalancing the training data, essentially enabling the AI to correct its own biases. A simplified sketch of the rebalancing idea follows below.
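The sketch reweights training samples so that underrepresented groups are drawn more often during training. It is only a toy version with hypothetical, explicitly labeled groups, not the team's actual method, which did not rely on hand-labeled categories.

```python
# Toy sketch of rebalancing training data by reweighting rare groups.
# Group labels here are hypothetical; this only illustrates the general idea
# of oversampling underrepresented data, not the team's published algorithm.
import numpy as np

def rebalancing_weights(group_labels):
    """Give each sample a weight inversely proportional to its group's frequency."""
    groups, counts = np.unique(group_labels, return_counts=True)
    freq = dict(zip(groups, counts / len(group_labels)))
    weights = np.array([1.0 / freq[g] for g in group_labels])
    return weights / weights.sum()         # normalize into a sampling distribution

# Hypothetical example: group 0 is heavily overrepresented in the training set.
labels = np.array([0] * 900 + [1] * 80 + [2] * 20)
weights = rebalancing_weights(labels)
resampled = np.random.choice(len(labels), size=len(labels), p=weights, replace=True)
print(np.bincount(labels[resampled]))      # groups are now roughly balanced
```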
By 2021, the team showed how this approach could transform drug discovery. AI systems could evaluate candidate medications while indicating whether their conclusions rested on robust evidence or mere guesswork. This capability could help the pharmaceutical industry save time and money by concentrating on the candidates the AI flags with high confidence.
The technology also benefits devices with limited computing power. While edge devices may lack the accuracy of large server-hosted models, Themis AI's technology enables them to handle most tasks locally, reaching out to a server only when truly necessary.
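The sketch below shows how such an uncertainty-gated fallback might look. Both model functions and the threshold are hypothetical placeholders; in practice the edge model would report uncertainty (for example, via the MC-dropout sketch above) and the server call would be a network request to a larger model.

```python
# Minimal sketch of uncertainty-gated fallback from an edge device to a server.
# edge_predict, server_predict, and the threshold are hypothetical stand-ins.
UNCERTAINTY_THRESHOLD = 0.15   # hypothetical cutoff, tuned per application

def edge_predict(query):
    # Placeholder: return (answer, uncertainty) from the small on-device model.
    return "on-device answer for " + query, 0.05

def server_predict(query):
    # Placeholder: return the answer from the large server-hosted model.
    return "server answer for " + query

def answer(query):
    prediction, uncertainty = edge_predict(query)
    if uncertainty <= UNCERTAINTY_THRESHOLD:
        return prediction                  # confident enough to answer locally
    return server_predict(query)           # defer the hard cases to the server

print(answer("classify this sensor reading"))
```

The design keeps latency and bandwidth low for routine inputs while reserving the expensive server round-trip for the cases the edge model is genuinely unsure about.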
AI has incredible potential to transform our lives but comes with risks. As AI systems deeply integrate into critical infrastructure and decision-making, acknowledging uncertainty might be their most human—and essential—trait. Themis AI is ensuring they acquire this vital skill.