In the race to scale AI capabilities, we’ve trained models to speak fluently and respond confidently — even when they have no idea what they’re talking about. The result? Hallucinations. Not in the metaphysical sense, but in the form of fabricated facts, false confidence, and dangerous assumptions.
A recent article on AI News highlights a company trying to fix this at the root. Themis AI, an MIT spinout, is building software that helps AI systems recognize when they're unsure, and say so.
Their platform, Capsa, is a kind of meta-layer for machine learning: it doesn't make predictions itself, but it monitors how and when a model is likely to go off the rails. Think of it as a built-in BS detector. It looks for patterns that suggest confusion, bias, or incomplete data, and flags them in real time, before the model can confidently hallucinate its way into a decision.
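The article doesn't describe Capsa's internals, but the general idea of wrapping a model so it reports its own uncertainty can be sketched in a few lines. Below is a minimal, hypothetical illustration using Monte Carlo dropout in PyTorch; the class name, the number of passes, and the flagging threshold are my own assumptions for illustration, not Capsa's actual API.

```python
# Illustrative sketch only: a generic "uncertainty wrapper" using Monte Carlo dropout.
# This is NOT Capsa's API; UncertaintyWrapper, n_samples, and the 0.1 threshold are
# hypothetical, chosen just to show the idea of flagging low-confidence outputs.
import torch
import torch.nn as nn


class UncertaintyWrapper(nn.Module):
    """Wraps an existing model and estimates predictive uncertainty
    by running several stochastic forward passes (MC dropout)."""

    def __init__(self, model: nn.Module, n_samples: int = 20):
        super().__init__()
        self.model = model
        self.n_samples = n_samples

    def forward(self, x: torch.Tensor):
        # Keep dropout active at inference time so each pass differs slightly.
        self.model.train()
        with torch.no_grad():
            preds = torch.stack([self.model(x) for _ in range(self.n_samples)])
        mean = preds.mean(dim=0)   # the usual prediction
        std = preds.std(dim=0)     # disagreement across passes ~ uncertainty
        return mean, std


# Usage: flag predictions where the model effectively "doesn't know".
base = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))
wrapped = UncertaintyWrapper(base)
mean, std = wrapped(torch.randn(8, 16))
uncertain = std.squeeze(-1) > 0.1  # hypothetical threshold for "unsure"
print(f"{uncertain.sum().item()} of 8 predictions flagged as low-confidence")
```

The point of the sketch is the shape of the solution: the wrapper leaves the underlying model alone and adds a second output, an uncertainty signal, that downstream systems can use to defer, escalate, or discard a prediction.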
The article outlines some compelling use cases:
- In telecoms, Capsa has helped companies avoid costly network planning errors.
- In oil and gas, it’s improved interpretation of seismic data.
- In pharma, it’s filtered out weak drug candidates, saving time and money on trials that would’ve gone nowhere.
What strikes me most is how Themis frames this: not as a patch, but as a core competency. The future of trustworthy AI may not lie in smarter predictions, but in models that can quantify their own ignorance.
In a sense, Themis is reintroducing a very human trait — epistemic humility — into systems we’re increasingly trusting to act without oversight. Admitting uncertainty isn’t a weakness. It’s what separates a reliable assistant from a charismatic guesser.
As we move toward AI embedded in health, infrastructure, and governance, this shift could be one of the most important evolutions in the field.
Reference
Daws, R. (2025, June 3). Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless. AI News. https://www.artificialintelligence-news.com/news/tackling-hallucinations-mit-spinout-ai-to-admit-when-clueless/