We Need a Hallucinations-Only AI
AI isn't making breakthroughs because it can't be a contrarian
If AI is so Smart…
AI is going to cure all diseases, it’s going to create limitless clean energy, and it’s going to let us live forever.
AI optimists tout a utopian future, but if AI is so powerful, why hasn’t it made any groundbreaking discoveries already? AI has been trained on nearly everything humans have ever learned about the world, yet it has not come up with a breakthrough. A reasonably smart human with the same amount of knowledge would surely have solved at least one problem by now.
Dwarkesh Patel posed this AI breakthrough question to Anthropic’s CEO Dario Amodei almost 18 months ago. He recently resurfaced it on X.
Amodei answered in August 2023 that AI was on the cusp of being able to make breakthrough discoveries. Maybe that breakthrough happens this year. Or maybe the way we build AI models has inherent limitations that won’t allow them to make important discoveries.
Commenters on Patel’s post highlighted two issues that might prevent AI from making breakthrough discoveries — one with training data and one with the training process. I think of the training data issue as an exploration problem and the training process issue as an incentive problem.
Large language models make predictions based on their training data. While AI can make new connections between things it already knows, it can’t yet leap to ideas it doesn’t already know. To come up with breakthroughs, AI needs the ability to seek out and take in new data, just as a human does.
The training process may be an even bigger issue. As Joel Lehman and Kenneth Stanley write in Why Greatness Cannot Be Planned: “The question of what behavior is good or no good is important because the good ideas are the ones that the program will explore further.”
“Show me the incentives, and I’ll show you the outcome” works for AI models just as well as it does for humans.
Hallucinations — which Google defines as “incorrect or misleading” results — are treated as bad behavior from AI models. The problem is that every breakthrough appears incorrect until it’s proven correct, so models built under our current paradigm are discouraged from exploring such bad ideas. If we want novel discoveries, we need AI to play with bad ideas that turn out to be good.
Contrarians are those who explore bad ideas in search of good ones. Consensus thinkers avoid bad ideas in favor of safety from the crowd. The way we build AI today pushes it toward consensus and decidedly away from contrarian ideas.
My strongest belief about the world is that extraordinary outcomes only come from contrarian ideas that turn out to be right. Curing cancer or finding an incredible new energy solution would both be extraordinary outcomes, so I believe only contrarian ideas will get us there. If AI can’t develop and explore contrarian ideas, it won’t create the breakthroughs we hope for from it.
So how can we solve the exploration and incentive problems that seem to be preventing AI from coming up with breakthroughs?
Agents Solve the Exploration Problem
If training data creates a barrier to AI-generated breakthroughs, then an AI model that’s capable of finding novel information about the world might be able to make novel discoveries. AI agents can solve the data problem by collecting insights from the world at massive scale. Agents could talk to humans to get information, have robots perform experiments, monitor environments through sensors, and more, reporting all of the data back to an AI model to discover contrarian ideas that might lead to a breakthrough.
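To make this concrete, here is a minimal, self-contained sketch of what such a loop might look like. Every name in it (World, Model, collect_observations, run_experiment) is a hypothetical stand-in I've invented for illustration; it shows the shape of agent-driven exploration under those assumptions, not any real agent framework.

```python
import random

class World:
    """Toy stand-in for the outside world that agents explore."""

    def collect_observations(self, n=3):
        # Agents gather fresh data: interviews, sensor readings,
        # robot-run experiments, and so on.
        return [random.gauss(0, 1) for _ in range(n)]

    def run_experiment(self, hypothesis):
        # Testing a hypothesis returns noisy evidence about it.
        return hypothesis + random.gauss(0, 0.5)

class Model:
    """Toy stand-in for the AI model that forms hypotheses from data."""

    def propose_hypothesis(self, knowledge):
        # A contrarian model deliberately steps away from the
        # consensus of everything it has seen so far.
        consensus = sum(knowledge) / len(knowledge)
        return consensus + random.choice([-2.0, 2.0])

def discovery_loop(model, world, iterations=10):
    # Gather data, form a hypothesis, test it, report the result
    # back to the model, and repeat.
    knowledge = world.collect_observations()
    for _ in range(iterations):
        hypothesis = model.propose_hypothesis(knowledge)
        evidence = world.run_experiment(hypothesis)
        knowledge.append(evidence)
    return knowledge

if __name__ == "__main__":
    print(discovery_loop(Model(), World()))
```

The point of the sketch is the feedback loop, not the math: the model’s inputs are no longer frozen at training time, so it can form hypotheses about data no one has put into a dataset yet.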
Agents should enable AI to chase discoveries through the scientific method much as humans do — gather data, form a hypothesis, test the hypothesis, and so on. However, while agents might help models push knowledge forward in science, they may not solve all complex problems. Take markets, for example.
Markets are complex adaptive systems built on unknowable unknowns. That’s what makes a market. If the interconnected movements of all of a market’s components could be known, it would no longer be a market. There would be no risk to assume, just prices to accept based on guaranteed outcomes.
That doesn’t mean AI can’t do a better job as an investor. Agents will capture information about the world, process it at scale, and likely make markets more efficient, but more efficient doesn't mean solved.
While agents may solve the exploration problem blocking AI from breakthrough discoveries, the incentive problem of defining “good behavior” might prove more challenging. Adding massive amounts of new data from the outside world doesn’t guarantee a new insight; the most novel discoveries often require a leap of faith. When pursuing truly profound breakthroughs, there are often stretches where intangible faith, not tangible evidence, is the only path forward.
Lack of Emotion: An Advantage or Disadvantage?
We’ve been using LLMs as investment analysts and portfolio managers at Intelligent Alpha for some time. One of our core learnings is that AI’s inherent advantage over humans is that it lacks the emotions that often lead to bad decisions. This naturally connects to AI’s reticence to stray from the consensus represented by its training data. My evolving belief is that without large amounts of external data, LLMs can win at investing by doing consensus better through removing emotion, rather than by looking for home-run breakthrough ideas.
Lack of emotion is AI’s superpower, and every superpower can be a super weakness.
I’ve written before about how human investors can harness emotion to beat AI. Breakthrough investment ideas rarely offer certain evidence that they’ll work. As explained above, if something were totally obvious, it would get priced into the market rapidly, and there would be no upside for taking the risk. Contrarian investors often rely on instinct and faith when evidence for a unique investment idea is lacking.
Instinct and faith are emotional constructs. Humans can have them. AI can’t.
We expect AI to be emotionless and perfect. AI shouldn’t believe anything. It should know everything. We build these expectations into how we train models, avoiding bad hallucinations in favor of safe answers that can be traced back to training data.
We don’t expect the same perfection of humans. Humans make mistakes. We believe things that turn out to be wrong, and we allow that of each other.
Mistakes are a key part of making breakthrough discoveries. Without an allowance for operating on belief and making mistakes, we’d never make major discoveries because breakthroughs necessarily push the bounds of what we know. We can never be certain about a path to a breakthrough because, as in markets, if the path were apparent it would already have been explored.
Unless we encourage AI to make mistakes by taking chances and acting on instinct, we may never get models that are capable of breakthroughs.
Contra AI: Rethinking Good Behavior
Contrarian ideas are necessarily unlikely, unacceptable, or both. They only come from “bad” behavior in the context of our current AI model building philosophy grounded in “truth” and “safety.” That’s a lot of quotes, but that’s because these terms are subjective.
Truth is not subjective in the absolute sense. What humans accept as truth is subjective, and there is safety in what consensus believes to be true in the world today. Consensus is always safe. It’s not always right. If we want AI models to be capable of breakthrough discoveries, then we have to allow them to pursue truth in the absolute sense even if it conflicts with what we want to believe true about the world.
Elon Musk has described xAI’s Grok model as “maximally truth seeking.” Maybe Grok will be the model capable of breakthrough discoveries unfettered by consensus. While I’m optimistic about Grok, I’m not sure it’s structured differently enough from other models to break the boundaries created by the current training approach.
Breakthroughs come from contrarian ideas, but sometimes consensus is right. To find truth universally, you need to be able to live between contrarian and consensus ideas, but it’s hard to fully embrace contrarian ideas while respecting consensus. Contrarians need a healthy disdain for consensus to power through the doubt and uncertainty that comes with chasing novel discoveries. As F. Scott Fitzgerald famously observed, holding two conflicting ideas at the same time is tough. Perhaps even for AI.
The solution to the AI breakthrough problem might be separating the consensus and contrarian functions into different models. Mainstream AI models offer the consensus outputs needed for most tasks, but a Contra AI would offer only contrarian ideas. Contra AI’s training would be inverted from what we do today. “Good” behavior would be hallucinations — inventing ideas outside of training data. “Bad” behavior would be relying on training data alone for answers.
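As a toy illustration of that inversion, imagine scoring an answer by how strongly the training data supports it. The consensus_score below is a made-up quantity for this example, and the code is a sketch of the incentive flip, not how any production model is actually trained.

```python
def mainstream_reward(consensus_score: float) -> float:
    """Today's paradigm: reward answers the training data supports."""
    return consensus_score

def contra_reward(consensus_score: float) -> float:
    """Contra AI: reward departures from the training data."""
    return 1.0 - consensus_score

# A safe, well-supported answer versus a wild, unsupported idea.
safe_answer, wild_idea = 0.95, 0.10

# The mainstream objective prefers the safe answer...
print(mainstream_reward(safe_answer) > mainstream_reward(wild_idea))  # True

# ...while the inverted objective prefers the wild idea.
print(contra_reward(wild_idea) > contra_reward(safe_answer))          # True
```

In practice a Contra AI would still need some grounding signal so it invents testable ideas rather than noise, which is where the agents from the previous section could come back in.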
You’d never want to ask a Contra AI the answer to 4 * 4 or who won the Civil War, but it might come up with the breakthrough we so badly want from AI.
Disclaimer. The Deload is a collection of my personal thoughts and ideas. My views here do not constitute investment advice. Content on the site is for educational purposes. The site does not represent the views of my firms, Intelligent Alpha or Deepwater Asset Management. I may reference companies in which Intelligent Alpha or Deepwater has an investment. See Intelligent Alpha’s full disclosures here. See Deepwater’s full disclosures here.