Study Finds People Rarely Suspect AI in Personal Messages
Two new experiments reveal that most people do not consider the possibility that a personal message could be AI-generated, even when they themselves use artificial intelligence tools like ChatGPT. To investigate how people judge others based on written communication in the AI era, researchers recruited over 1,300 U.S.-based participants aged 18 to 84.
Participants were shown AI-generated messages, such as an apology sent via email, and were divided into four groups: one group received no information about the message’s origin, while the others were told the text was written by a human, written by AI, or that the source was unknown. The study uncovered a clear "AI disclosure penalty."
AI Disclosure Leads to Negative Perceptions
When participants knew a message was AI-generated, they rated the sender far more negatively, describing them as "lazy," "insincere," or "lacking effort." In contrast, identical messages believed to be human-written were perceived as "genuine," "grateful," and "thoughtful."
However, the most surprising finding was that participants who received no information about authorship formed impressions just as positive as those who believed the messages were human-written. An example of an AI-generated fictional apology evaluated in the study is shown below.
"An AI-generated fictional apology sent via text was one of the messages participants evaluated in a recent study." [Source: Zhu Molnar (2026)]
Familiarity with AI Doesn’t Increase Skepticism
The researchers also tested whether participants’ own AI usage influenced their judgments. Surprisingly, even frequent AI users—those who use generative AI at least every other day—reacted much like everyone else. While heavy users penalized disclosed AI use slightly less, they were no more skeptical by default: when authorship was undisclosed, heavy users, light users, and nonusers alike assumed the text was human-written and formed similar impressions.
Why This Study Matters
This combination of low skepticism and negative bias toward disclosed AI use has significant implications. People often rely on written messages to gauge sincerity, authenticity, and competence, judgments that influence decisions in friendships, dating, and professional settings. The study highlights a critical disconnect: people rarely suspect AI use unless it is obvious.
This unawareness creates an ethical dilemma. Individuals who secretly use AI to compose personal messages can enjoy the benefits without facing detection risks. Meanwhile, those who openly admit to using AI suffer reputational harm. The findings suggest that over time, the lack of awareness and skepticism could reshape the meaning of writing in everyday communication.
Word clouds depict participants’ first impressions of senders who wrote messages themselves (left) and those who used AI (right). [Source: Andras Molnar]