The New York Times has issued a stern reminder to its freelance contributors, reinforcing its strict policy against the use of artificial intelligence in submitted work. The email, sent on May 14, 2025, serves as a formal clarification of the paper’s longstanding rules regarding generative AI tools.

According to the notice, reviewed by Futurism, all content submitted by freelancers must be entirely the product of human creativity and original reporting. The policy explicitly states:

"All writing and visuals that freelancers submit to The Times must be the product of human creativity and craft, and all submissions must consist solely of their original reporting, writing and other work."

Freelancers are prohibited from using generative AI tools to create, modify, or enhance any part of their submissions. The policy states:

"Freelance contributors must not submit any material for publication that contains content generated, modified or enhanced by [generative AI] tools, or that has been input into these tools."

The reminder directs contributors to a detailed internal document outlining the paper’s stance on AI. While the policy allows AI tools for high-level brainstorming, it strictly forbids their use in drafting, editing, or refining any portion of a story. The document specifies:

"Using [generative AI] tools to create, draft, guide, clean up, edit, improve, or rephrase your writing is strictly prohibited."

The policy also names specific tools that are off-limits, including chatbots like Gemini, Claude, ChatGPT, and Perplexity; AI-powered search products such as Google AI Overviews; and image generators like Adobe Firefly, DALL-E, and Midjourney.

Recent AI-Related Incidents at The New York Times

The reminder follows a series of high-profile controversies involving AI-generated content published by the newspaper. In March 2025, a contributor to the Modern Love column was publicly accused of using AI to generate an emotional personal essay. The writer later confirmed to Futurism that she had used chatbots to conceptualize and edit the piece.

In April 2025, the NYT severed ties with a freelancer who admitted to using AI to produce a book review that was later found to contain plagiarized content.

Most recently, on May 7, 2025, the newspaper issued a substantial correction after discovering that a quote attributed to Canadian Conservative leader Pierre Poilievre in an April 15 article was, in fact, an AI-generated summary misrepresented as a direct quotation. The article, which discussed the political success of Liberal Prime Minister Mark Carney, was updated to reflect the error. The correction noted:

"An article on April 15 about the success that Mark Carney, the Liberal prime minister of Canada, has had in building cross-party alliances was updated after The Times learned that a remark attributed to Pierre Poilievre, the Conservative leader, was in fact an AI-generated summary of his views about Canadian politics that AI rendered as a quotation. The reporter should have checked the accuracy of what the AI tool returned."

When contacted by Futurism, the NYT did not immediately respond to inquiries about whether this reminder was a direct response to the recent AI-related scandals.

Source: Futurism