Artificial intelligence—hailed as the most transformative technology of a generation—faces growing skepticism from the American public. While a majority acknowledge its significance, recent surveys reveal more concern than enthusiasm, particularly regarding its impact on creativity and personal relationships.
Pew Research Center polling indicates public apprehension is rising even as AI usage spreads. The technology is increasingly linked to job displacement, academic dishonesty, unreliable advice, excessive energy consumption, and existential threats, up to and including the potential annihilation of humanity. In March, 57% of respondents to an NBC News poll said the risks of AI outweigh its benefits.
Several factors contribute to this skepticism, but one stands out: the messaging from the industry’s most prominent leaders. Last month, Anthropic, a leading AI firm, restricted access to its new Mythos cybersecurity tool, citing concerns it could be exploited by malicious actors. Sam Altman, CEO of rival OpenAI, dismissed the move as “fear-based marketing.” Yet shortly after, OpenAI released its own security tool—and similarly limited its availability.
This pattern reflects a broader trend: AI companies frequently highlight the dangers of their own products. While such warnings may align with corporate responsibility narratives, they also risk undermining consumer confidence. The public’s growing distrust suggests that constant reminders of AI’s perils may not be an effective branding strategy. (A recent Molotov cocktail attack on Altman’s home underscores how charged public sentiment has become.)
AI Leaders’ Pessimism Isn’t New—or Isolated
This alarmist tone dates back to at least March 2023, when OpenAI unveiled GPT-4. Alongside technical breakthroughs, the company’s report included a section detailing potential misuse—such as instructions for building bombs or synthesizing hazardous chemicals.
Soon after, hundreds of AI researchers and executives—including representatives from Anthropic, Google DeepMind, and OpenAI—signed an open letter warning that AI posed “extinction-level risks” comparable to nuclear war.
Many executives have since advocated for government regulation. Elon Musk’s ongoing legal dispute with OpenAI highlights the company’s original mission: it was founded as a nonprofit precisely because its founders believed the technology was too dangerous to be driven solely by profit motives.
Is Fear the New Selling Point?
While AI firms are right to acknowledge risks, their repeated emphasis on doom may be counterproductive, and their attempts at upbeat messaging have fared little better. During this year’s Super Bowl, a wave of AI-themed ads promised grand visions (“You can just build things”), but these broad strokes lacked tangible consumer appeal.
On a practical level, the benefits AI delivers in daily life—such as productivity tools or creative assistance—rarely receive the same spotlight as warnings of catastrophe. The result? A growing disconnect between the technology’s promise and the public’s perception.