Just a few years ago, while reporting on the rise of AI-generated spam on Facebook, I asked friends and family to share examples of suspicious content they encountered in their timelines. Some sent me glaringly artificial images—AI-generated sci-fi landscapes, bizarre creations like "Shrimp Jesus," and even fabricated images of starving children begging for help. But others sent me images they thought were AI but weren't. Their skepticism had grown so acute that they dismissed genuine, human-made art and photos as potentially fake rather than risk being misled.
Browsing the internet today means navigating a landscape saturated with AI-generated content. The lines between real and fake have blurred, fueling what some call "AI psychosis"—a colloquial term for the disorientation people feel when bombarded with synthetic media. Yet the greater issue is the cognitive load imposed by other people's indiscriminate use of AI, which now infiltrates every corner of our digital lives.
Our brains now perform countless calculations daily: Is this AI? Does it matter? Why does this look or sound so unnatural? Is the person behind this content even real? We're conditioned to expect—and ignore—AI-generated material in predictable places: Google's AI Overviews (remember the infamous "glue pizza" advice?), engagement-bait posts on LinkedIn, and the endless scroll of Facebook and Instagram feeds. But lately, it feels inescapable, creeping in from every direction we turn.
It's not that I reject AI-assisted content or fear being deceived. The problem runs deeper: my brain has become a vigilant AI detective, scanning everything I consume for anomalies. One false note, and I'm left questioning my own judgment. Take last week, for example. Desperate to avoid yet another analysis of the White House Correspondents' Dinner shooting, I tuned into an episode of Everyone's Talkin' Money, a long-running podcast about taxes hosted by Shari Rash. The episode's intro script, which Rash read aloud, was riddled with AI-generated tropes: "The shift I want you to make today—and this is the shift that changes everything—is starting to see your tax return as information—not a bill, not a badge of shame, but information."
The script's unnatural cadence and overused phrases set off alarm bells. My focus shifted from the content to the delivery itself. Was Rash using AI to script her episodes? The question lingered, undermining my trust in a show I'd followed for years. And this isn't an isolated incident; it's a symptom of a larger problem: the erosion of authenticity in digital spaces through lazy, unchecked AI use.
We're no longer just consumers of content; we're detectives, forced to interrogate every image, video, and text for signs of artificiality. The mental fatigue is real, and the collateral damage—distrust, overwhelm, and a creeping sense of helplessness—is only growing.