Why LLMs Ruin Everything
Reading, Writing: feat. Matt Goodwin, Alex Preston and the Bookpocalypse
How did you go bankrupt? Gradually, then suddenly. How did the negative effects of AI affect the world of books? Same.
First there was that steady drip-drip about students using LLMs to cheat on their essays, or hapless solicitors in provincial court hearings filing hallucinated briefs. Then there was the long and winding discussion about the ethics and legality of LLM training datasets. Now, bam: AI fuckery is breaking through into the mainstream of the world of books, and at pace. It was, what, last week when Mia Ballard was dropped by her publisher after her novel Shy Girl was discovered to have been written with the “help” of large language models?
Then this week I was writing in the Spectator about the hoo-hah over Matt Goodwin’s use of AI to help in preparing his new book Suicide of a Nation. The piece I wrote is here. In it I mostly wanted, as usual, to use the misfortunes of others to make a couple of childish jokes which some of my more hostile readers would fail to get. But I also wanted to pull focus a bit and notice that what I called “bookiness” — ie the kudos attaching to books and book-learning — seems at least for now to be surviving the dramatic decline in the reading of actual books.
I wrote in my piece:
Ever fewer students, we’re told, read books, and ever more outsource the research and writing of their essays to ChatGPT. Their teachers, likewise, let ChatGPT design their courses and mark their essays. It doesn’t get any better when these golden boys and girls graduate and go into their influential jobs in the big world. The Daily Telegraph last week published a survey on the country’s new class set-up containing the dismaying finding that today’s elites are the least likely of all the demographics surveyed to read books at all.
Yet even as we’re in wholesale retreat from books themselves, we’re still in thrall as a society to the idea of books. You could call it “bookiness”. Those made-up quotes, that pilfered subtitle, those questionable footnotes, and the fact of Goodwin’s argument being in a codex at all: these are clumsy tributes to the kudos that still attaches to real scholarship. The quotes are, at least ostensibly, from famous authors. The existence of footnotes and references implies a library. Intellectual pride, even when you’ve made the transition to full-time Twitter warrior, asks you to be the author of a proper book.
And isn’t bookiness exactly what ChatGPT and its LLM (large language model) cousins are selling? They’re selling – to the student, the self-published author and perhaps the sometime academic – the illusion that you’ve read the books they quote and allude to on your behalf. They’re selling the idea that you have the store of knowledge and understanding, or the facility with prose, that people get from reading books. They’re just saving you the bother of doing the work. Meanwhile the LLMs that are killing off our interest in books (and, judging by this, our trust in them) can only exist in the first place because of the books that they have stolen.
LLMs — since they read books for us and threaten to be able to write them too — are in an odd position. Their value as cheating machines, and indeed their very existence, depends on our continued esteem for bookiness. But as they take over the tasks both of reading and (we will increasingly suspect) writing, they necessarily eat away at that esteem. They are a parasite that threatens to kill the host organism.
Anyway, the very afternoon my piece was published, another apparent LLM cheating story broke. It emerged that the New York Times had cut ties with a freelance critic after elements of his review were found to have been lifted from a review of the same book already published in the Guardian by another critic. The critic, on being challenged, had admitted that he used AI in composing the review, and that he didn’t notice it had lifted the material from the Guardian. The NYT, quite properly, dropped the hammer.
I wasn’t at all surprised that such a story broke. It was only a matter of time. We’ve already seen crappy AI-generated book lists punted out by local syndication services; it would be foolish to assume that higher-profile outlets would be immune forever. But I was not expecting the story to be about Alex Preston, who established his chops as a critic before ChatGPT was even a twinkle in Sam Altman’s crypto wallet. My little jaw dropped.
Full disclosure: I have been a friendly acquaintance of Alex Preston for many years, and I’m an admirer of his work. He has written for me from time to time at the Spectator, and I’ve always been pleased when he’s done so. There really aren’t many good fiction critics about, and Alex is one of them. You don’t generally get to review fiction for the New York Times if you aren’t.1
So I was — like others with whom I compared notes — utterly baffled. Why would a critic of his calibre want or need to use AI to help with a review? And why — lord, why? — would he risk doing so in a space as high-profile as the New York Times? This is one of the most prestigious book-reviewing spaces in the English-speaking world. It’s a showcase — and they pay. In the UK, if you can get £300 for a book review you are doing extremely well. I have looked with sour envy — sorry, fraternal admiration — at friends and colleagues from these islands who, over the years, have managed to snag regular work at the NYT, NYRB or New Yorker. It’s not the kind of work, having obtained it, that you’d want to outsource to the slop machine.
A little bit of me briefly wondered — as I imagine some other critics will have too — whether “I used AI and it messed up” might be cover for the still more embarrassing “I plagiarised someone else’s review and when I was challenged I claimed it was AI”, but, again, that doesn’t make sense. No sane person would expect nobody to notice if they plagiarised the Guardian in the New York Times.
So I asked Alex what on earth he’d been thinking. He replied to me very quickly and fully, as follows:
Oh god it’s awful and I’m so ashamed. Such a total car crash.
The short version is that I had written a draft review of the book, but it was under length and I was rushing badly and drowning slightly. I made the stupid decision to use an AI tool to help expand and smooth it, with instructions about US spelling and house style at the NYT which I always get wrong - Mr so and so etc. I looked at how it had tidied up the end of the review but didn’t realise that it had also dropped in language from Christobel Kent’s Guardian review. I was rushed and stupid and I’m so sorry. That is the heart of it.
So this was not a case of asking a bot to write a review from scratch, nor of deliberately lifting from the Guardian. But it was absolutely an improper use of A.I., and a complete failure of judgment on my part. I was ashamed when it was put to me, and I admitted it straight away. It was the case of someone naively and clumsily using a tool they didn’t understand and this is where we are.
I’m not trying to minimise it. It’s a serious lapse, and deeply embarrassing. But nor do I think the pure AI-ghostwriting version of the story is quite right. The review was mine in the vast majority of its substance, and the error was in using the tool at all and then failing to catch what it had done.
I put it here, with his permission, for the record. Personally, I believe him. I don’t think it at all gets him off the hook, and nor, to judge by this, does he. But it seems to me to make a tiny bit more sense than the idea that he’d simply got ChatGPT to write a review for him. Still, that someone with his reputation would risk trashing it by using AI in his writing at all slightly astonishes me. It is a gloomy testament to how tempting these tools seem to be as a shortcut.
Funnily enough, and this is a cause for at least some optimism, I think ChatGPT is going to tend to be a poor tool for cheating at book reviews anyway. Leaving aside all the obvious things about the human experience of reading a book, there’s the practical point that by the time most critics file book reviews (a few weeks ahead of publication) there won’t be many, or any, other reviews to plagiarise from. The Guardian’s lines will have come up nearly undigested in Alex’s review, I suspect, because the AI didn’t have lots of reviews to subtly compost in the way it subtly composts billions of sources for most of its work.
Bottom line, though, kids, these tools are undermining not just the work and livelihoods of writers: they are undermining the very things that make the world trust and value what writers do. Don’t touch the fuckers with a disinfected plastic wand.
***
In other news, my event with Philip Pullman at the British Library is now available to stream on the BL’s website. I think (and this is not a boast because it was mostly down to Philip’s full-spectrum excellence and the generous and thoughtful steering of our chair Julia Eccleshare) it was excellent. We were talking about fairytales and folktales and the roots of storytelling. You get Philip reading his own version of a Grimm story and reflecting captivatingly on panpsychism, consciousness and his theory of the imagination, shoutouts to Bruno Bettelheim and Vladimir Propp, and me telling my, like, heartwrenching story about Charlotte’s Web.
***
This week’s Book Club podcast was with Yann Martel, talking about coming late to Homer, definitely not being influenced by Pale Fire, why he can’t resist a silly animal, and his new book Son of Nobody. Which is, in my view, a banger.
I’m trying to work out whether I’m too favourably disposed to Alex Preston and too quick to assume bad things of Matt Goodwin because of personal acquaintance and/or political sympathy. Inasmuch as they both used AI to some extent, and that has cratered both of their reputations by opening everything they do to question, the cases are certainly comparable. That one of them is mortified and the other blusteringly defiant also tells a story, though.
Oh dear. How very sad. And your point is well taken. Writers shouldn’t touch these tools with a bargepole.
Good treatment. And the Alex Preston story is stunning, especially his forthright admission of fault. Refreshing, actually. I wouldn’t hold it against him. Where this gets complicated is that LLMs, well used, can be exceptionally helpful in the research process. As I write in my history of books as an information technology, The Idea Machine, the desire for this kind of tool goes back to the days of Vannevar Bush and J. C. R. Licklider, 1930s–1960s.
These weren’t men trying to cheat the system or have a machine handle the thinking and writing for them. They were looking for tools to help them do more of what they already did well. LLMs represent a fulfillment of those hopes. To the extent that people are using them to read for them and write for them, they’re shortchanging themselves and producing work of little value; that bears its own costs and consequences.
But I don’t think that condemns the entire effort to produce and use “thinking” machines. They have a million uses beyond reading and writing, many of which are of genuine value. What’s also clear is that anyone who aspires to “write” and be taken as someone making an intellectual and artistic contribution now has a tool they can use to fake the work. It’s appropriate that they would be called out.
Given the provenance of the work, it’s not surprising that the work itself would expose the fakery, and anyone who defaults to using an LLM to avoid the effort of actually producing something of value is the least equipped to judge whether the product of the LLM is any good.