I've been watching the AI industry evolve for years now, and I need to tell you something that's been keeping me up at night. We're living through what might be the most significant financial event of our generation, and most people have no idea what's actually happening behind the scenes.
This isn't just about technology. It's about money, power, incentives, and what happens when humanity's biggest ambitions collide with its oldest weaknesses.
Let me walk you through this story, because once you see it, you can't unsee it.
The Beginning: When Idealism Met Reality
December 2015. OpenAI launches with a mission statement so clean and pure it almost seems naive now: build safe artificial general intelligence and share the benefits with all of humanity. Sam Altman and Elon Musk co-chair the venture. Nine founding members collectively pledge up to one billion dollars.
One billion dollars to change the world. To build something that could fundamentally transform human civilization. It sounds like science fiction, but they were serious.
For the first two and a half years, OpenAI operated exactly as you'd expect a research nonprofit to operate. They burned through money doing research. They published papers. They experimented. They failed a lot. They spent roughly 130 million dollars just trying to figure out if this whole thing was even possible.
And then 2018 happened.
The Moment Everything Changed
OpenAI cracked something. They took the transformer architecture Google published in 2017 and built GPT. Suddenly, they had a language model that could do things nobody expected. It wasn't perfect, but it was a glimpse of something much bigger.
And that's the moment when the incentives started to shift.
Think about it from their perspective. You've spent years and tens of millions of dollars on what everyone said was a moonshot. Most people thought you were chasing a fantasy. And then suddenly you have proof. Real, tangible proof that this might actually work.
What do you do?
OpenAI announced GPT-2 in February 2019 but withheld the full model, citing concerns about "harmful misuse" and potential dangers. Only after months of public pressure did they release the complete version, in November 2019. But the message was already clear to anyone paying attention: this wasn't just a research project anymore. This was something valuable. Something worth protecting.
The Corporate Structure That Changed Everything
March 2019. OpenAI announces they're restructuring. The nonprofit stays, but now there's a for-profit arm too. They called it a "capped profit" organization, which meant investors could make up to 100 times their initial investment, but no more. Everything beyond that would go to the nonprofit.
One hundred times return. Think about that number for a second.
If you invest one million dollars and make 100 times that, you walk away with 100 million. If you invest 10 million, you could make a billion. It sounds absurd, right? Who needs that kind of return? What kind of cap is that really?
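If you want to see the mechanics concretely, here's a minimal sketch of how a 100x cap splits a payout. The 100x multiplier is the only number OpenAI actually disclosed; the split logic below is my illustration, not their legal formula.

```python
def capped_profit_payout(investment: float, stake_value: float,
                         cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split the value of an investor's stake under a profit cap.

    Everything up to cap_multiple * investment goes to the investor;
    any overflow is redirected to the nonprofit. Illustrative only --
    the actual OpenAI LP terms were never published in full.
    """
    cap = cap_multiple * investment
    investor_share = min(stake_value, cap)
    nonprofit_share = max(stake_value - cap, 0.0)
    return investor_share, nonprofit_share

# A $10M investment whose stake grows to $1.5B: the investor keeps
# $1B (the 100x cap) and the remaining $0.5B flows to the nonprofit.
print(capped_profit_payout(10e6, 1.5e9))  # (1000000000.0, 500000000.0)
```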
But here's what actually happened.
They split the company into two entities. OpenAI Inc remained as the nonprofit and maintained control over everything. But OpenAI LP (later reorganized as OpenAI Global, LLC) was born as the for-profit subsidiary. The nonprofit controlled the for-profit side, but the for-profit side could now raise money from venture capitalists, banks, and private investors. Employees could get equity. The machine could grow.
On paper, the nonprofit was still in charge. The mission was still the same. But the incentive structure had fundamentally changed.
Now you had investors who wanted returns. Employees who wanted their equity to be worth something. A company that needed to justify larger and larger valuations to keep the money flowing.
The scientific mission was still there, but it now had to coexist with something much more primal: the desire for wealth.
The Microsoft Effect and the Valuation Explosion
July 2019. Microsoft makes the first major investment into the new structure. One billion dollars. At the time, OpenAI was probably valued somewhere around 5 billion dollars total.
That was just the beginning.
Over the next few years, investors piled in. Thrive Capital. Sequoia. Andreessen Horowitz. All the big names in venture capital and tech. Each funding round pushed the valuation higher and higher.
And here we are today. October 2025. OpenAI is worth 500 billion dollars.
Let's do that math again because it's important. Microsoft invested 1 billion when OpenAI was worth roughly 5 billion. Today it's worth 500 billion. That's a 100x return on paper. If Microsoft's original stake had never been diluted and they could sell it today, they'd walk away with something like 100 billion dollars. In just six years. From a company that still doesn't turn a profit.
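You can sanity-check that back-of-the-envelope math yourself. One big assumption is baked in: it treats Microsoft's stake as if it were never diluted by later rounds, which it certainly was, so read the result as an upper bound on paper gains.

```python
# Back-of-the-envelope: Microsoft's 2019 stake marked to today's valuation.
# Assumes zero dilution across later rounds, so this is an upper bound.
investment = 1e9         # Microsoft's July 2019 investment
valuation_then = 5e9     # rough OpenAI valuation at the time
valuation_now = 500e9    # reported October 2025 valuation

stake = investment / valuation_then   # ~20% of the company
paper_value = stake * valuation_now   # ~$100B on paper
print(f"stake: {stake:.0%}, paper value: ${paper_value / 1e9:.0f}B, "
      f"multiple: {paper_value / investment:.0f}x")
# stake: 20%, paper value: $100B, multiple: 100x
```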
This is the game of hot potato playing out in real-time.
Here's what's wild: OpenAI loses money. They spend more than they make. But it doesn't matter. Each round of investors doesn't need OpenAI to be profitable. They just need the next round of investors to value the company higher than they did. As long as the music keeps playing, everyone can keep passing the potato and walking away with massive gains.
The 2008 subprime mortgage crisis erased 8 trillion dollars from the US economy. We're building something that could be significantly larger, and the foundation is speculation about a technology that hasn't been fully realized yet.
The PBC Pivot: Removing the Cap
September 2024. OpenAI made a bold move. They pushed to abandon the nonprofit structure entirely and become a fully for-profit company. The public response was intense. People felt betrayed. This wasn't what they signed up for.
A letter called "Not For Private Gain" circulated. It outlined OpenAI's original mission and demanded they stay true to being a charitable organization. The pressure worked.
Sort of.
May 2025. OpenAI backed down from going fully for-profit. Instead, they restructured as a Public Benefit Corporation or PBC. On the surface, this sounds good. PBCs can prioritize their mission over investor returns. They have more freedom to focus on what matters.
But here's the catch: they removed the 100x cap entirely.
Think about that progression. Started as a nonprofit. Added a capped profit structure with a 100x limit. Then became a PBC with no cap at all. The founding members and early investors like Microsoft probably already hit that 100x return, and now there's no ceiling on how high things can go.
The incentives keep shifting further from the original mission.
Anthropic: History Repeating Itself
If you're hoping this is just an OpenAI problem, I have bad news.
In 2021, seven employees left OpenAI to start Anthropic. Their stated mission? AI safety: studying the safety properties of models at the technological frontier. It sounded like a return to the original OpenAI vision.
They needed funding. They found an early investor: Sam Bankman-Fried. Yes, that Sam Bankman-Fried. The guy who would later be convicted of massive fraud and sentenced to 25 years in prison. He invested 500 million dollars of FTX money into Anthropic in April 2022.
When FTX collapsed and filed for bankruptcy, the estate needed to liquidate assets to pay back customers. In 2024, it sold roughly two-thirds of the Anthropic stake for 884 million dollars. Even in a desperate fire sale, even after everything went wrong, that partial sale alone returned more than 76% on the entire original investment.
If Sam Bankman-Fried hadn't committed fraud and FTX had held onto those shares until today, that 500 million dollar investment from 2022 would be worth approximately 15 billion dollars. That's a 30x return in three and a half years.
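The arithmetic is easy to check. The counterfactual number leans on two assumptions that were never publicly disclosed: that FTX's stake was roughly 8% of the company, and that it was never diluted.

```python
# FTX's Anthropic position: realized fire-sale return vs. the counterfactual.
invested = 500e6               # SBF's April 2022 investment via FTX
partial_sale_proceeds = 884e6  # 2024 bankruptcy-estate sale of ~2/3 of the stake

realized = partial_sale_proceeds / invested - 1
print(f"partial fire-sale return: {realized:.1%}")   # 76.8%

# Counterfactual: hold everything until today's $183B valuation.
# Assumes an ~8% stake and no dilution -- illustrative, not disclosed.
assumed_stake = 0.08
hold_value = assumed_stake * 183e9
print(f"counterfactual: ${hold_value / 1e9:.1f}B, "
      f"multiple: {hold_value / invested:.0f}x")     # ~$14.6B, ~29x
```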
Anthropic is currently valued at 183 billion dollars. The company is four years old.
The same pattern. The same explosive growth. The same game of hot potato with ever-increasing stakes.
xAI and Mistral: Speed Running the Playbook
xAI was founded in 2023. Elon Musk doing what Elon does: moving fast and building big. He constructed the Colossus data center facility in just four months, which is insane by any standard. In May 2024, xAI quietly dropped its public-benefit-corporation status to become a standard for-profit, and in March 2025 it merged with X (formerly Twitter).
Their most recent fundraising round brought in about 20 billion dollars, with Nvidia among the biggest backers. Current valuation? 200 billion dollars.
The company is two years old.
Mistral, based in Paris, started in June 2023 with 117 million dollars in initial funding. Two years later, they're valued at nearly 14 billion dollars.
Do you see the pattern? These companies aren't old enough to have a track record of sustained profitability. They're not generating massive revenues. But the valuations keep exploding upward because everyone believes this technology will eventually be worth trillions.
And maybe they're right. But maybe they're not.
The China Question: Open Source as Strategy
Meanwhile, something interesting is happening in China. Chinese AI labs are releasing their models as open source. Completely free for anyone to download and use. This seems counterintuitive. Why would you spend billions of dollars developing cutting-edge technology just to give it away?
The answer is actually quite clever.
If Chinese models weren't open source, mainstream media probably wouldn't care about them. By making them freely available, they generate enormous amounts of attention, publicity, and goodwill. Developers around the world start using them. Researchers study them. Tech journalists write about them.
China can't compete with American venture capital money. So they're competing for attention, reputation, and talent instead. And it's working. Chinese models are now beating many American proprietary models in benchmarks and user preference tests.
This strategy has served its purpose. Now that Chinese AI has credibility and mindshare, don't be surprised if the era of open-source Chinese models starts to wind down. Once you're ahead, you don't need to give everything away anymore.
DeepSeek: The Exception That Proves the Rule
But there's one Chinese company that stands apart from all the others: DeepSeek.
DeepSeek's CEO is Liang Wenfeng. Before starting DeepSeek, Liang founded a hedge fund called High-Flyer that uses machine learning techniques to make investment decisions. High-Flyer now manages assets worth over 10 billion dollars, and Liang controls 99% of the voting rights with 55% ownership.
Here's why this matters: High-Flyer was already working with machine learning, so they'd already accumulated massive amounts of GPU computing power. In 2021 alone, they purchased over 10,000 Nvidia A100 chips. This was before the US banned sales of advanced chips to China. Some estimates suggest DeepSeek now has access to around 50,000 Hopper-generation chips.
DeepSeek is predominantly funded by High-Flyer itself. They've stated publicly that they're not looking for outside financing in the short term and that "money has never been the problem." They're focused on hiring the right people and driving genuine innovation in AI.
This puts DeepSeek in an extraordinary position. They're not beholden to outside investors demanding returns. They're not playing the valuation game. They're not preparing for an exit or an IPO. They're just building because they want to build.
It's probably the cleanest incentive structure I've seen in this entire industry.
The Companies We Haven't Talked About
You might be wondering why I haven't mentioned Meta, Google, Amazon, or Microsoft's own AI efforts in depth. These companies are spending just as much money on AI research and development as the pure-play AI companies. Sometimes more.
But there's a crucial difference: they're diversified.
Meta has advertising. Google has search and cloud services. Amazon has e-commerce and AWS. Microsoft has enterprise software and Azure. These companies are all publicly traded with established revenue streams. If AGI takes longer than expected, or doesn't pan out the way everyone hopes, or turns out to be less transformative than anticipated, these companies will be fine. They'll take a hit, sure, but they'll survive.
OpenAI, Anthropic, xAI, Mistral, DeepSeek—these are pure-play AI companies. If AGI doesn't materialize, or if it takes 20 more years instead of 5, or if the technology plateaus before reaching the promised land, these companies are in serious trouble. And all those investors who paid billions based on speculation are going to want answers.
This is the fundamental risk nobody wants to talk about loudly.
The Constitutional Moment We Missed
There's a comparison that keeps running through my head. The founding fathers of America faced a similar challenge when drafting the Constitution. They were building a nation that had enormous potential to create wealth and prosperity. But they also understood human nature. They knew that concentrated power corrupts. They knew that people would try to game the system for personal gain.
So they built checks and balances. Separation of powers. A structure designed to keep ambition in check while still allowing for growth, innovation, and freedom.
AGI presents a similar moment. It has massive potential to improve human life and create unprecedented wealth. But the incentive to capture that wealth and power is equally massive. We needed structures that would keep those incentives aligned with humanity's broader interests.
Instead, we got standard corporate structures optimized for investor returns. We got capped profits that became uncapped. We got nonprofits that became for-profits. We got mission statements that gradually got subordinated to financial pressures.
The corporate structures around AGI development have been, frankly, inadequate. Not because anyone had bad intentions, but because AGI creates such strong incentives that even well-intentioned structures tend to warp under the pressure.
What Happens When the Music Stops?
I don't know when this ends. I don't know if it ends with AGI being achieved and everyone celebrates, or if it ends with a massive financial collapse that makes 2008 look quaint. Maybe it's something in between.
But I do know this: we're living through something unprecedented. Companies that don't turn a profit are valued at hundreds of billions of dollars. Early investors are seeing 30x, 50x, 100x returns in just a few years. The valuations keep climbing based on the belief that AGI will eventually be worth trillions.
This belief might be correct. AGI might transform everything. It might cure diseases, solve climate change, unlock abundance, and usher in a new era of human flourishing.
Or it might take much longer than anyone expects. It might plateau at narrow AI that's useful but not transformative. It might turn out that the last 10% of the journey to AGI is harder than the first 90%.
The problem is that the financial structures we've built don't have much room for uncertainty. They're built on the assumption that this technology will deliver, and deliver soon. Every funding round at a higher valuation reinforces that assumption and raises the stakes.
The Admission Fee
Maybe this is just the price we have to pay. Maybe achieving something as profound as AGI requires this kind of financial speculation and risk-taking. Maybe you can't get enough smart people working on hard problems without massive financial incentives. Maybe the chaos and messiness are just part of the process.
There's a certain logic to that argument. Technological revolutions are rarely neat and orderly. The railroad boom of the 1800s created massive wealth and massive waste. The dot-com bubble destroyed billions in value but also funded the infrastructure that powers today's internet. Maybe the AGI race will follow the same pattern—lots of casualties, lots of wasted capital, but ultimately worth it for the breakthrough.
Or maybe we're just repeating the same mistakes humans always make when we see the chance to get rich quick. Maybe we're caught in a speculative bubble that will eventually pop, destroying wealth and setting the field back by years.
I don't have the answer. Nobody does. That's what makes this so fascinating and terrifying at the same time.
What This Means for You
If you're not directly involved in AI, you might be thinking this doesn't affect you. But it does.
The amount of capital flowing into AI is unprecedented. The valuations we're seeing dwarf most historical bubbles. When you have companies worth hundreds of billions based largely on future promises rather than current profits, that affects the entire economy. It affects where talent goes. It affects what gets funded and what doesn't. It affects the opportunity cost of all that capital.
If this works out, we might genuinely be on the verge of a transformation in human civilization. Your life, my life, everyone's life could be radically different in ways we can barely imagine.
If it doesn't work out, or if it works out much slower than expected, we're looking at a financial reckoning that will touch every part of the economy. Pension funds, endowments, retail investors—everyone who's betting on tech and growth is indirectly betting on this AGI narrative.
The Questions We Should Be Asking
Instead of just riding this wave and hoping for the best, maybe we should be asking harder questions:
What happens if AGI takes 30 years instead of 5? What if it requires breakthroughs we can't currently foresee? What if the current approach hits diminishing returns?
How do we keep the incentives aligned with the stated mission when billions of dollars are at stake? Can a nonprofit structure actually control a for-profit subsidiary when the for-profit side has access to that much capital?
What are the systemic risks if multiple 200+ billion dollar AI companies fail at roughly the same time? Who's on the hook? What's the contagion potential?
Are we building AGI for humanity, or are we building it for whoever can capture the most value from it? And if it's the latter, what does that mean for the technology's development and deployment?
These aren't comfortable questions. But they're necessary ones.
Living in the Before Time
Here's what keeps me up at night: we're living in the "before" time. Before AGI, or before the bubble pops, or before whatever comes next. Future historians will look back at this period with perfect hindsight and wonder what we were thinking.
Either they'll marvel at how we funded the most important technological breakthrough in human history despite all the messiness and misaligned incentives, or they'll shake their heads at how obvious the warning signs were and how we ignored them all in our rush to get rich.
I genuinely don't know which story they'll tell.
What I do know is that we're witnesses to something extraordinary. Companies going from zero to 200 billion in valuation in two years. Investments returning 30x in under four years. Fundamental questions about intelligence, consciousness, and the nature of thinking being pursued by for-profit entities valued like sovereign nations.
This is the AGI gold rush. And like all gold rushes, it's chaotic, exciting, dangerous, and probably going to look very different in retrospect than it does from inside the maelstrom.
The Only Certainty
The only thing I'm certain of is this: the story isn't over. We're somewhere in the middle chapters, and the ending hasn't been written yet.
Maybe OpenAI achieves AGI next year and transforms everything. Maybe Anthropic or DeepSeek or some company that doesn't exist yet gets there first. Maybe it takes another decade and dozens of failures and billions more in burned capital.
Or maybe we're chasing something that's always going to be just out of reach, and future generations will look back at this era the way we look at the Dutch tulip mania or the South Sea Bubble—as an object lesson in human nature and the danger of collective delusion.
I hope it's the former. I really do. The potential upside of AGI is too enormous to dismiss. But I've seen enough of human history to know that hope isn't a strategy, and good intentions don't override bad incentives.
All we can do is watch, question, and try to understand what's really happening beneath the surface. And maybe, just maybe, we can avoid the worst outcomes by being honest about the risks we're taking and the prices we're paying.
This is the admission fee humanity is paying for AGI. Whether it's worth it or not, we're about to find out.