28 Comments
Joel J Miller

Good treatment. And the Alex Preston story is stunning, especially his forthright admission of fault. Refreshing, actually. I wouldn’t hold it against him. Where this gets complicated is that LLMs well used can be exceptionally helpful in the research process. As I write in my history of books as an information technology, The Idea Machine, the desire for this kind of tool goes back to the days of Vannevar Bush and JCR Licklider, 1930s–1960s.

These weren’t men trying to cheat the system or have a machine handle the thinking and writing for them. They were looking for tools to help them do more of what they already did well. LLMs represent a fulfillment of those hopes. To the extent that people are using them to read for them and write for them, they’re shortchanging themselves and producing work of little value; that bears its own costs and consequences.

But I don’t think that condemns the entire effort to produce and use “thinking” machines. They have a million uses beyond reading and writing, many of which are of genuine value. What’s also clear is that anyone who aspires to “write” and be taken as someone making an intellectual and artistic contribution now has a tool they can use to fake the work. It’s appropriate that they would be called out.

Given the provenance of the work, it’s not surprising that the work itself would expose the fakery, and anyone who defaults to using an LLM to avoid the effort of actually producing something of value is the least equipped to judge whether the product of the LLM is any good.

Susan Knopfelmacher

See Arnold Kling’s Substack today - AI and research papers

the future is here (pretty much)

Lucy Seton-Watson

Oh dear. How very sad. And your point is well taken. Writers shouldn’t touch these tools with a bargepole.

Adam Morgan

It genuinely seems like Preston didn't realize this, but asking an AI tool to "extend" a book review, especially of a new release, is asking for plagiarism even more directly than other prompts. ChatGPT can't read the book, and it can't "generate" its own opinions — it can only look at what other critics have said.

winter king

Fortunately I am of the generation born in the mid-70s, for whom bookishness was not exactly hidden from public view... but it certainly never helped me socially, er... advance.

Philip Womack

I think what's really insidious is the way that someone who is clearly excellent at his job - like Alex - would, in a hurry, turn to it, even though he knows he can do the work without it.

Susan Knopfelmacher

I've been taking a close look at its use in schools... very easy to throw together a lesson plan or two, churn out an essay, etc. Difficult to resist that temptation, no matter all the high talk of just using it as a "tool". Actually, we have one inbuilt - it's called the brain.

David Quantick

"Bookishness" is a great phrase and makes me think of an AI illustration of lovely leather bound books in a cosy AI bookshop. And those novels called things like THE REASSURING BOOKSTORE OF SEOUL OR TOKYO THAT ALSO SELLS HOT CHOCOLATE.

Sam Leith

Many leather-bound books. The smell of mahogany.

Susan Knopfelmacher

These Chatbots are insidious - just imagine how they are being used in education!

Marschal

I think in general it is important to be as forgiving as possible and to resist joining in a public savaging. But really, no amount of explanation here mitigates what this is. It doesn’t matter how “behind” you get on an assignment; better to ask for an extension, or simply miss your deadline, than to abdicate the fundamental responsibility of writing. (And by “writing,” I mean the real thing, not the ubiquitous “content” that is produced only to sell something—that stuff *should* be AI-generated, being so manipulative and degrading.) If you use AI “to help and expand and smooth” any combination of words that will go under your name, it is a trespass that negates the whole endeavor. Scandals like this just make blatant what had already been detectable for some time: there’s an incredible number of people who are concerned primarily with being called “writer” rather than with writing.

Kate Clanchy

Thank you for giving Alex Preston space to speak. It must be awful for him. Book reviews can give you a desperate feeling, especially if you don't really like the book, because of the cruel impact they can have. I can imagine getting behindhand and then turning to AI as a sort of spellcheck. I hope it doesn't ruin his career.

Sam Leith

Thanks so much, Kate. Agreed.

Amanda Craig

Yes, same. I could not understand how it could be true. We must all beware of software.

Kate Clanchy

And I have also written for the NYT and been baffled and bewildered by the style guide and US spellings. Beware of software indeed.

Susan Knopfelmacher

Chat platforms are far more problematic than the average software. The interlocutory aspect has many ramifications, especially but not only for young people.

Joshua Gaskell

I really feel for Preston based on the message you quote.

Sam Leith

Ngl, I do too, very much - but, as he has the grace to recognise, it was a daft thing to do.

Katrina Gulliver

I see it in submissions all the time now. I ask the author and I’m told “I just got it to help me with the outline” or “I just got it to polish it up”. The key element here, to me, is that humans are good at two things: taking the path of least resistance, and deceiving ourselves. “I just use it to brainstorm” becomes “I just used it to help with structure” becomes ChatGPT wrote the whole thing. But they kid themselves that they are still the author.

Sam Leith

Property of slopes: slipperiness

Rhiannon Lucy Cosslett

I’m honestly very surprised to read this, Sam. As someone who also reviews books, there isn’t a universe in which I would even open ChatGPT to “help” me, whether I was under word count or struggling with Americanisms, or whatever the “reason”. Even if we take these claims at face value, who doesn’t read their work before submitting it? I don’t buy it and I don’t think any of it is a valid excuse. I think it makes a mockery of criticism.

Sam Leith

Me neither. As I hope I made clear in the piece, I don’t think his explanation - if we credit it, which I tend to for the reasons I explain, but many won’t - in any way gets him off the hook. I thought it was of value to have that explanation and quote it, is all. For me, LLMs are poison: using them to “write” is unethical, full stop, and using them to write criticism is absurd. I’m curious to understand (and a bit horrified by) how someone with the facility to write well themselves could come to be using one.

Rhiannon Lucy Cosslett

Yes, I do think that’s a worthy subject of interrogation! I do think you are more generous than I would be - I have to say I don’t really believe the justifications given. I did have to sit through the Guardian’s extensive and mandatory AI training, which was excellent. It sounds like freelancers are even more in need of it than staffers.

Amanda Craig

Enlightened and relieved by this explanation of an incomprehensible mess-up.

Comment deleted
Tom Foster

You seem very certain that the guardrails you describe work perfectly. Given past history, it seems much more plausible to me that they don't than that a respected author would straight rip from a publication that is only a short hop from the NYT.

Sam Leith

I incline to this view. Also, AI clutching its pearls over copyright would be quite the irony.

Sam Leith

That’s interesting to hear. Thank you. I can’t claim to know how AI could/couldn’t have been responsible, but as I said in the piece it absolutely beggars belief that he’d have plonked a more or less unaltered piece of prose from the Guardian into the NYT and expected nobody to notice. I mean, he’d have to be not only a scoundrel but an absolute fool to do that.

Elnathan John

Then I hope I am wrong.