Anthropic, the AI lab behind the Claude chatbot, recently issued another dire warning about an impending "intelligence explosion": a scenario in which AI systems improve themselves autonomously, without human intervention. The company’s research arm published an agenda outlining the risks of this technology, including the possibility of AI "procreating" by creating new models on its own.
Anthropic co-founder Jack Clark told Axios that by the end of 2028, there is a greater than 50% chance humanity will have an AI system capable of receiving the command "Make a better version of yourself" and executing it entirely on its own.
Clark’s prediction aligns with Anthropic’s established identity as a vocal advocate for AI safety. However, the company is not alone in this approach. OpenAI, the creator of ChatGPT, also frequently highlights the risks of artificial intelligence while aggressively pursuing market dominance. Meanwhile, both firms are securing unprecedented funding rounds that enrich stakeholders and fuel rapid development.
The Wall Street Journal reported on May 19, 2024, that OpenAI recently permitted current and former employees to sell up to $30 million in company shares. Over 600 individuals participated, generating a total of $6.6 billion in transactions.
The Contradictions of AI Development
The dual role of AI companies—as both harbingers of doom and beneficiaries of unchecked growth—has become impossible to ignore. These organizations warn of catastrophic risks while accelerating deployment, competing to lead what many describe as the most transformative technological shift in history. Governments, too, are rushing to embed AI in military, educational, and administrative systems, acknowledging its dangers but unable—or unwilling—to halt progress.
Anthropic’s Warning as a Pitch Deck
In January 2024, Anthropic CEO Dario Amodei published an essay titled "The Adolescence of Technology," drawing parallels to Carl Sagan’s Contact. The piece frames AI development as humanity’s defining crossroads, presenting five existential risks:
- Rogue autonomous AI systems that act beyond human control
- Misuse of AI for mass destruction, including bioweapons or cyberattacks
- Authoritarian capture of AI for political repression and surveillance
- Economic disruption leading to mass unemployment and inequality
- Extreme wealth concentration in the hands of a few corporations or individuals
Amodei also warned of cascading second-order effects that cannot yet be foreseen, compounding the unpredictability of AI’s societal impact.
Record Funding Follows Catastrophic Warnings
Seventeen days after the essay’s publication, Anthropic secured $30 billion in new funding, pushing its valuation to $380 billion. Just a week later, on the same day Axios published its report, the Financial Times revealed that Anthropic is now seeking tens of billions of dollars more in a funding round planned for summer 2024, with ambitions to reach a $1 trillion valuation—a figure that would surpass OpenAI’s current $852 billion valuation.
This juxtaposition is no coincidence. "The Adolescence of Technology" functions as both a warning and a strategic pitch to investors. The essay’s framing of AI as "the most consequential technology in human history" serves a dual purpose: it underscores the urgency of addressing risks while positioning the company as a leader in the field. Similarly, claims of responsibility in AI development double as competitive advantages, and geopolitical concerns about China’s AI advancements are leveraged to justify further funding as a matter of national interest.
Amodei’s essay concludes by acknowledging the inherent dangers of AI. But the timing of Anthropic’s funding rounds suggests that these warnings may be as much about attracting capital as about averting catastrophe.