Infinite Intelligence > Superintelligence
We're only years away from infinite intelligence that can solve the world's problems
The Era of Infinite Intelligence
Superintelligent AI might solve all the world’s problems. It could cure cancer, eliminate human aging, create a world of abundance for all.
Superintelligent AI might also prove completely uncontrollable and destroy humanity, whether intentionally or as mere collateral damage in the path of achieving other goals.
The clashing viewpoints about the potential and dangers of peak AI sit at the heart of the battle between techno-optimists and doomsayers: accelerationists vs. doomers.
Whichever side you take on the promise or danger of superintelligence, we don't need it to achieve the extraordinary outcomes in the first scenario. We only need infinite intelligence, and there's a path to get there in just a few years.
Here’s the difference between the two:
Superintelligence is an AI vastly more intelligent than the most intelligent human. It is a singular, centralized system capable of extreme knowledge and understanding across any and every domain. Superintelligence solves problems with apparent elegance through superior intellect and without human involvement.
Infinite intelligence is a massively scaled system with intelligence in a specific field or fields equivalent to a human college graduate. It is more specialized and decentralized than superintelligence. Infinite intelligence solves problems with brute force alongside some human guidance.
There are about 35,000 mathematicians in the US workforce today and 1 million workers with a math degree. Imagine a world where AI can do graduate-level math and we could spin up a billion instances of those digital mathematicians. We'd have 1,000x more artificial brains working on mathematical problems than we do now. If we aimed a firehose of AI instances with graduate-level math knowledge at open questions, we'd probably solve some important problems, and the same approach extends to chemistry (only about 80,000 chemists in the US workforce), physics (20,000 physicists), biology (100,000 biologists), and more.
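The back-of-envelope arithmetic behind that 1,000x claim is simple enough to write down (the workforce figures are the ones cited above; the billion-instance count is the hypothetical):

```python
# Back-of-envelope scaling for "infinite intelligence" in math.
# Workforce figures are the US estimates cited above; the instance
# count is a hypothetical, not a forecast.
us_math_degree_holders = 1_000_000    # US workers holding a math degree
ai_instances = 1_000_000_000          # hypothetical digital mathematicians

multiplier = ai_instances / us_math_degree_holders
print(f"{multiplier:,.0f}x more brains on math problems")  # → 1,000x
```

The same ratio gets even more lopsided for smaller fields: against the ~35,000 working mathematicians, a billion instances is closer to a 28,000x multiplier.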
That’s infinite intelligence. It wins through brute force rather than superintelligent elegance, but winning is winning. All human excellence is brute force. There are 8 billion of us, and only a handful participate in meaningful breakthroughs. We’re surrounded by the output of brute-force intelligence. Infinite intelligence is a natural extension of the brute-force human experience.
How Far are We From Infinite Intelligence?
Not far. We may already be living it.
Google’s DeepMind used a large language model to solve a stubborn math problem through what amounts to evolutionary brute force. The model treated the math problem like a puzzle. Per the Nature paper describing the breakthrough, the system “searches for programs that describe how to solve a problem, rather than what the solution is.”
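The core loop is evolutionary: a language model proposes candidate programs, an automatic evaluator scores them, and the best survive to seed the next round. Here is a toy sketch of that select–propose–score loop, with random token edits standing in for the LLM proposer and a made-up scoring target; all names and the toy problem are my own, not DeepMind's:

```python
import random

def evaluate(program: str) -> float:
    """Automatic evaluator: score how well a candidate expression in x
    matches the toy target x**2 + 1 over a test range. Higher is better."""
    try:
        f = eval(f"lambda x: {program}")  # compile the candidate program
        return -sum(abs(f(x) - (x * x + 1)) for x in range(-5, 6))
    except Exception:
        return float("-inf")  # broken programs score worst

def mutate(program: str) -> str:
    """Stand-in for the LLM proposer: append one small random edit.
    The real system instead asks an LLM for improved program variants."""
    return program + " " + random.choice("+*") + " " + random.choice(["x", "1", "2"])

def search(generations: int = 200, pop_size: int = 20) -> str:
    """Evolutionary loop: keep the fittest program, breed variants, repeat."""
    population = ["x", "1", "x * x"]
    for _ in range(generations):
        parent = max(population, key=evaluate)  # select the best so far
        population = [parent] + [mutate(parent) for _ in range(pop_size)]
    return max(population, key=evaluate)
```

Calling `search()` typically recovers `x * x + 1` within a few generations. Nothing here is clever; the loop wins by cheaply generating and scoring many candidates, which is exactly the brute-force shape of infinite intelligence.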
With better models and more efficient compute, infinite intelligence may only be a few years away.
One catalyst is achieving AGI, which could happen soon. Jack Kendall, CTO at Rain AI, predicted in my interview with him that we’ll have AGI by 2025 using a specific definition:
A single network architecture that can in principle learn any combination of data modalities - e.g. speech, vision, language. We have this now.
The ability to use prior experience to learn new things rapidly. That’s causal understanding.
The ability to reason about things and perform tasks it hasn’t seen before. That’s symbolic reasoning.
Symbolic reasoning includes the ability to do math, which is critical to graduate level intelligence in any field.
Note that Jack’s definition of AGI doesn’t require the AI to understand like a human. That’s human-level intelligence. AGI only requires that a machine can learn new things and perform tasks it hasn’t seen before, whether that means the AI thinks like a human or not.
By achieving Jack’s AGI, we have all the pieces for infinite intelligence.
Who Benefits from Infinite Intelligence?
Don’t overthink it. The beneficiaries of infinite intelligence are:
AI infrastructure providers
Companies and states with resources to buy compute
Infinite intelligence is, by definition, limited only by compute. No wonder AMD now sees a $400 billion AI accelerator market opportunity in 2027, up from its $150 billion estimate at the beginning of 2023. AI infrastructure companies, especially chipmakers, are bound to get a massive tailwind from the companies and nations that harness infinite intelligence.
If compute is the only limiting factor in solving problems with infinite intelligence, the companies and nations with the most resources are the most likely to benefit. The mega-sized companies that understand this future will become even more mega-sized, and not just in big tech. Pfizer, with $40 billion+ in cash, is more likely to brute-force an infinite intelligence solution to cancer than a startup with $1 billion in funding.
While skeptics might criticize the big-get-bigger reality of infinite intelligence, humanity is still the big winner. I am an accelerationist, albeit a rational one. It’s hard to find good arguments for why curing diseases and solving our most complex problems with infinite intelligence aren’t good outcomes for humans, no matter who solves them. Again, winning is winning.
What’s After Infinite Intelligence?
Once we have infinite intelligence, someone will aim a firehose of millions of digital college-level mathematicians and programmers at cracking superintelligence. They’ll probably figure it out, at least eventually.
Setting aside tribal associations with accelerationism or doomerism, it’s worth wondering: Should we stop at infinite intelligence?
In many ways, infinite intelligence seems a superior outcome to superintelligence. It doesn’t carry the same runaway risk. Humans should be able to control infinite intelligence systems in a way they couldn’t control a superintelligence. Infinite intelligence also wouldn’t concentrate power in a single entity, whether an uncontrollable machine or a nation state that invents the superintelligence and maintains some influence over it.
Maybe we should stop with infinite intelligence, but we won’t.
It’s human nature to use brute force to solve impossible problems. Perhaps it’s destiny that the ultimate act of brute force will give way to the elegance of a superintelligent world.
Disclaimer: My views here do not constitute investment advice. They are for educational purposes only. My firm, Deepwater Asset Management, may hold positions in securities I write about. See our full disclaimer.
Regular AI is Enough to Beat Markets
We don’t need superintelligence for AI to beat markets. Intelligent Alpha is already doing that with today’s state-of-the-art AI models: ChatGPT, Bard, and Claude.
The most recent updates on Intelligent Alpha:
~70% of our 35+ AI-powered strategies are beating benchmarks. Many are ahead of benchmarks by 300-500 bps.
Our strategies are also largely beating comparable AI-powered ETFs like AIEQ, AIVL, ECML, AMOM, QRFT, and others.
We're running two AI-powered funds within Deepwater: A concentrated Large Cap Tech strategy (12 stocks) that's beating the Nasdaq by over 900 bps since inception, and a Long/Short strategy that's beating the BarclayHedge L/S Index by 200+ bps.
Citywire just did a feature on our AI work.
Maybe an infinite intelligence would come up with some novel approach to investing that crushes markets every year. Of course, markets are dynamic. Such a strategy would likely be replicated and the alpha eliminated in time, just as we’ve seen throughout history with human investors.
Investing in fundamentals for the long run is the only unbeaten investment strategy. The reason Intelligent Alpha works is because it doesn’t try to do too much. It doesn’t try to be too smart. It focuses on the big important things and ignores the rest.
I believe even with infinite intelligence, the same patient investment philosophy will endure, even as machines get better at identifying superior companies that prove to be great and durable fundamental investments.