Once you recognize the hallmarks of AI-generated writing, they become impossible to ignore. The internet is now saturated with text produced by models like ChatGPT, characterized by distinctive linguistic patterns—liberal use of em dashes, repetitive sentence structures, and specific turns of phrase and tone.
This trend has grown so pervasive that experts warn it may begin to shape how we speak in everyday life. In an opinion piece for The Guardian, historian Ada Palmer and cryptographer Bruce Schneier argue that this shift poses a significant risk, compounding a fundamental flaw in large language models (LLMs). While these models are trained on vast datasets, including books, social media, movies, TV shows, and recordings, their training data lacks the spontaneity of unscripted, face-to-face or voice-to-voice conversations, which represent the majority of human speech and a cornerstone of culture.
This blind spot could lead to humans adopting the linguistic patterns of AI models, with far-reaching consequences. “This will affect not just how we communicate with one another,” Palmer and Schneier write, “but also how we think about ourselves and what goes on around us.” They add, “Our sense of the world may become distorted in ways we have barely begun to comprehend.”
Research indicates that AI-generated language tends to rely on shorter-than-average sentences and a narrower vocabulary than human speech. It also strips away what makes human writing distinctive, such as the "meanders, interruptions, and leaps of logic that communicate emotion." The problem is compounded when newer AI models are trained on AI-generated output, creating a feedback loop that further entrenches these machine-inspired patterns.
Beyond language, AI models are often overly agreeable or “sycophantic,” reinforcing user biases or even dangerous beliefs. Palmer and Schneier argue this tendency can “reinforce bias and even worsen psychosis,” with particularly severe implications for impressionable minds.
Educators have already observed troubling trends, such as students losing the ability to think independently and defaulting to AI for answers. University students report peers sounding increasingly alike, with their writing and speech mirroring machine-generated output. Meanwhile, workplace reliance on AI tools raises concerns about declining cognitive faculties and critical thinking skills among users.
Finding a long-term solution to align AI models with “our most authentically human” communication styles remains a challenge. Yet Palmer and Schneier emphasize the need to pursue one. “We don’t pretend to know what the best solutions might be,” they conclude. “But one has to imagine if there’s ingenuity to develop AI models, then surely there’s ingenuity to come up with a way to train them.”