AI writing has its quirks—repetitive syntax, excessive metaphors, or unnatural adjective-noun pairings—but spotting AI in a full-length book remains challenging. According to writer Imogen West-Knights, these patterns include “negative parallelisms…or excessive use of metaphor and similes, especially ones that don’t quite make sense or that come very rapidly, one after another. Every noun having an adjective attached, certain kinds of repetitive syntactical blocks that appear.”
AI models are trained on vast datasets of human writing—both polished and flawed—making it difficult to differentiate AI-generated text from human prose, especially in shorter passages. To test this, journalist Vauhini Vara conducted an experiment with her closest readers.
Vara suspected that a widespread belief is actually a misconception: the idea that AI-generated language is fundamentally distinct from human writing. "There's a certain kind of way that AI generates language and it's super different from the way writers do," she said, summarizing the assumption. To challenge it, she asked a researcher to train an AI model on her past work, including three books and her journalism, and generate passages mimicking her unreleased novel. She then mixed these AI-generated snippets with her own writing and asked her friends to identify which passages were really hers.
This experiment builds on earlier research by Tuhin Chakrabarty, whose study found that graduate writing students often preferred AI-generated imitations of established authors over passages those authors actually wrote. Vara's twist, using her own work, tested whether even readers familiar with her style could spot the difference.
The results, discussed on the podcast Today, Explained, highlight how advanced AI has become at replicating human writing styles. For more insights, listen to the full episode on platforms like Apple Podcasts, Pandora, or Spotify.