Hallucination is fundamental to how transformer-based language models work. In fact, it’s their greatest asset: it is how language models draw connections between sometimes disparate concepts.
But hallucination becomes a curse when language models are applied in domains where the truth matters. Examples range from questions about health-care policies to code that correctly uses third-party APIs.
With agentic AI, the stakes are even higher, as the autonomous bots can take irreversible action—like sending money—on our behalf.
The good news is that we have methods for making AI systems follow the rules, and the underlying engines of those tools are also scaling dramatically each year.
This branch of AI is called automated reasoning (also known as symbolic AI). It symbolically searches for proofs in mathematical logic, determining which statements are true or false under axiomatically defined policies.
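As a minimal sketch of what that looks like in practice, the example below uses the open-source Z3 solver (a widely used automated-reasoning engine) to prove that a claim follows from a toy refund policy. The policy rules and variable names here are illustrative assumptions, not drawn from any real system.

```python
# A minimal sketch of policy entailment checking with the Z3 SMT solver.
# The refund policy below is a hypothetical example for illustration.
# Requires: pip install z3-solver
from z3 import Bools, Implies, And, Not, Solver, unsat

purchase_within_30_days, has_receipt, refund_allowed = Bools(
    "purchase_within_30_days has_receipt refund_allowed"
)

# Axioms: the policy, written as logical formulas.
axioms = [
    # A refund is allowed only if the purchase is recent AND a receipt exists.
    Implies(refund_allowed, And(purchase_within_30_days, has_receipt)),
]

# Claim to verify: "no receipt implies no refund."
claim = Implies(Not(has_receipt), Not(refund_allowed))

# Proof by refutation: assert the axioms plus the NEGATION of the claim.
# If that combination is unsatisfiable, the claim follows from the policy.
s = Solver()
s.add(*axioms)
s.add(Not(claim))

if s.check() == unsat:
    print("Proved: the claim follows from the policy axioms.")
else:
    print("Not proved; counterexample:", s.model())
```

The key move is proof by refutation: if the axioms together with the negation of the claim admit no satisfying assignment, the claim is mathematically guaranteed to hold under the policy, rather than merely being a plausible-sounding answer.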