NYT Terminates Freelancer Over AI-Generated Review

The line between human creativity and algorithmic assistance just got a lot thinner, and someone got cut. The New York Times has permanently ended its relationship with freelance book reviewer Alex Preston after an investigation revealed he used artificial intelligence to draft a review that accidentally plagiarized work from a competitor. It’s a stark reminder that even in the age of generative AI, accuracy and originality remain the gold standard for journalism.

How the Mistake Unfolded

Investigation and Consequences
An internal investigation confirmed the suspicions. The Times found the language in Preston’s review “uncomfortably similar” to the piece in The Guardian. When confronted, Preston reportedly admitted to using AI assistance during the writing process and conceded that he failed to catch the overlapping material before it went live. The paper appended an editor’s note to the review acknowledging the AI use and the plagiarism, then severed ties with the freelancer for good.

What This Means for the Industry

This incident highlights a growing anxiety in the newsroom: how do you police creativity when a tool can mimic it? Media outlets are scrambling to adapt, and this case shows that without strict guidelines, AI can easily blur the lines of attribution. The New York Times clearly felt Preston’s work failed the smell test, a reminder that relying on machines without proper vetting is a liability, not an assist.

Practitioner’s Perspective

From my seat in the newsroom, this is a cautionary tale. We can’t just let the machines do the heavy lifting and hope for the best. If a freelancer can’t distinguish between their own work and a bot-generated draft, that’s a fundamental failure of quality control. Transparency matters, but in this case the damage was already done, and the only fix left was to cut the cord.