AI’s Breakneck Expansion: Six Stark Realities in 60 Days
AI is the fastest-growing product category in world history. In just the past two months, six undeniable facts have emerged—each a warning sign of the technology’s rapid, unchecked evolution.
1. AI Models Are Growing Too Powerful to Release
One of the latest AI models is so advanced that its creator has chosen not to make it publicly available, citing uncontrollable risks.
2. AI Is Now Coding Itself
OpenAI and Anthropic confirm that their most powerful AI coding models can autonomously build and improve upon their own code.
3. Transparency Is Disappearing as Power Grows
As AI models become more potent, AI companies are becoming less transparent. The U.S. federal government imposes zero transparency requirements on these systems.
4. Public Fear and Resentment Are Escalating
In early April, OpenAI CEO Sam Altman’s San Francisco home was targeted in two separate attacks within a week. Altman responded with a stark admission:
"The fear and anxiety about AI is justified ... Power cannot be too concentrated."
5. Financial Markets Are Already Reeling
This year’s AI-driven market volatility erased $2 trillion in value as investors rapidly reassessed what AI models can do across industries, from coding to real estate, legal research, and financial management.
6. No One Is Truly Prepared
Despite these warnings, American society, workers, academic institutions, and governments remain unprepared for AI’s disruptive potential.
Why This Matters: A New Atomic Age
This moment echoes the dawn of the Atomic Age in 1945—a time when humanity first confronted a technology with both transformative promise and catastrophic risk. Like nuclear power, AI carries the potential for both utopia and apocalypse, yet its trajectory remains poorly understood by those in power.
The Science Fiction of AI’s Future
Much of today’s most viral AI discourse reads like modern science fiction. Consider these examples:
- "AI 2027" (2025): A forecasting scenario from a team led by a former OpenAI researcher, predicting either a pro-democracy revolution across the solar system or the harvesting of humanity’s brains by AI.
- Matt Shumer’s "Something Big Is Happening": A viral post conflating AI’s code generation with the arrival of an intelligence capable of genuine creativity and taste.
- Citrini Research’s "The 2028 Global Intelligence Crisis": A worst-case economic forecast in which governments and markets fail to respond effectively to AI’s disruption.
These narratives drive debate and, in some cases, market movements because they could be right. But they are speculative—edge cases, not inevitabilities. Yet no one can guarantee they’re wrong. Not the president. Not AI company leaders. If anyone claims certainty, they’re engaging in science fiction themselves.
The Unanswerable Question: Where Does This End?
Humanity has no clear vision of AI’s ultimate destination. The technology’s exponential growth outpaces our ability to predict, regulate, or even fully comprehend its implications. Without better leadership, collaboration, and public understanding, the risks—both known and unknown—will only intensify.
Anthropic’s Explosive Growth: A Case Study in Unchecked Ambition
Anthropic, one of AI’s most prominent players, has achieved the fastest revenue growth in American business history. Its annualized revenue has skyrocketed:
- $1 billion (end of 2024)
- $9 billion (one year later)
- $30 billion (current estimate)
This staggering expansion underscores the urgency of addressing AI’s risks before its power becomes unmanageable.
A Final Warning: The Time to Act Is Now
A year ago, business leaders were urged to wake up to AI’s dangers. Today, the message is for everyone: We’ve been warned—by the data, by the technology itself, and by the very people building it. The question is whether we will listen before it’s too late.