The New York Times has severed ties with freelance writer Alex Preston after discovering he relied on an AI tool to draft a book review. The resulting content inadvertently mirrored a Guardian article, marking a serious breach of journalistic standards. This incident sends a sharp message: using AI without transparency and proper attribution carries immediate, severe consequences for writers.
Why the New York Times Fired the Freelancer
On March 30, the newspaper formally ended its relationship with Preston. The decision followed the discovery that a January 6 review contained striking similarities to an August Guardian piece. The paper didn’t just let it slide. It appended an editors’ note to the original article, notified the Guardian, and made clear that unauthorized AI use and unattributed text simply won’t be tolerated.
Preston admitted to the error, but his confession didn’t save his gig. The Times insists that relying on AI for drafting, without proper disclosure, crossed a line it won’t let writers cross. You need to understand that even if you think the tool is just “helping,” the output may be a remix of the machine’s training data, not your unique voice.
The Trust Gap in Modern Journalism
This incident didn’t happen in a vacuum. It arrives just as the industry grapples with growing anxiety about AI. While some writers, like James Frey, proudly admit to using AI to mimic their style, the newsroom view is much stricter. If a freelancer uses AI to “help” and that help results in unattributed copying, the consequences are immediate and severe.
When a review is supposed to be a fresh, human perspective, and it turns out to be a patchwork of machine-generated text and stolen ideas, that trust evaporates. The freelancers who want to work with the Times now know the rules: no more hidden prompts, no more unattributed drafts, and definitely no more accidental Guardian reviews.
What This Means for Your Writing
Is this the moment the industry finally draws a hard line in the sand? The Times’ response suggests yes. They aren’t banning AI outright, but they are demanding transparency. If you use these tools, you have to say so. You can’t hide behind the “helpful assistant” defense when the output overlaps with another publication’s work.
The Future of Book Criticism
For editors, the implications are huge. They now have to decide how much they can trust their freelancers. Do they need to implement new checks? Do they need to mandate that any AI usage be disclosed upfront? The Times says they won’t tolerate unattributed text, but enforcing that in a world where AI can generate thousands of words in seconds is a logistical nightmare.
We’re entering a new era where the definition of “plagiarism” is being rewritten in real time. It’s no longer just about copying a paragraph. It’s about using a tool that ingests everything and spits out something that looks like a human thought but is actually a statistical recombination of other people’s words. The stakes are too high to let AI run wild without guardrails.
For now, the answer seems to be a hard “no” to the kind of shortcuts Alex Preston took. The Times made their stance clear: you can use the tools, but you can’t let them do the thinking for you, and you certainly can’t let them steal your voice. As the technology evolves, so must the ethics around it. And right now, the New York Times is leading the charge.
