Is It Good?
In answer to Tom Rowley’s article.
There is a test you can run on almost any cultural panic, and it goes like this: strip away the story of how the thing was made, and ask whether the thing itself has changed. If it hasn’t, you’re not having an aesthetic argument. You’re having a provenance argument dressed in aesthetic clothing.
The current hysteria about AI-written fiction fails this test badly.
Tom Rowley, writing in his bookshop newsletter about the Mia Ballard Shy Girl scandal, does something admirably honest: he sits down expecting to confirm the crime and finds himself writing “I’ve read much worse.” Then, almost immediately, he course-corrects: “obviously it is bad. Unreadably so.” Both sentences are in the same paragraph. The pivot is doing enormous work, and it’s worth watching carefully, because that pivot — that need to hold both positions simultaneously — is where the whole debate lives.
He also, in the same piece, publishes several paragraphs of Claude-generated fiction about a bookshop owner in Balham. He calls it “alarming.” He admits it made him laugh. He concedes it’s “not poorly done.” Then he ends the essay with a rallying cry for writing “so obviously and distinctively of that author that it could not possibly have come from Claude.”
The grenade went off. The essay kept walking.
Let’s start with detection, because the industry’s response to the AI question rests almost entirely on the assumption that detection is possible, or will become possible, or is at least a direction worth running in.
It isn’t. Not because the tools are immature — though they are — but because the task is structurally incoherent. One widely used AI detector recently assigned a 100% probability that the opening of chapter five of Frankenstein was written by AI. Mary Shelley was unavailable for comment.
This is not a calibration problem. This is the rod finding water where you point it. The detector is not reading the text. It is pattern-matching against a model of what AI writing looks like, a model built from AI writing that was itself built from human writing, in a loop that was always going to eat its own tail. When the tool can’t distinguish a Romantic novelist from a language model, the tool is not a tool. It’s a ritual. It makes people feel like something is being done.
Nothing is being done.
Underneath the detection fantasy sits a more interesting assumption: that good writing requires a suffering human behind it. The sweat, the revision, the 3am crisis, the years of reading that preceded the sentence — all of it supposedly present in the prose like a watermark, legible to the careful reader, essential to the work’s value.
This is a feeling about reading, not a fact about writing. And it’s a feeling with a fairly ugly history, given how selectively it has been applied — how readily the “authentic voice” standard has been used to exclude writers who didn’t fit the profile of what a serious author was supposed to look like. The romantic myth of solitary genius has always been more ideology than description.
But set that aside. Take the feeling on its own terms.
Here is a thought experiment.
Imagine we discovered that Proust — between volumes, in the small hours, for reasons entirely his own — had written pulp smut under a pen name. Stacks of it. Purple, cheerful, not especially literary. Or that Rimbaud, in the gaps between the visionary poems that would detonate French literature, had amused himself writing comic verse: silly, light, written to make his friends laugh and then forgotten.
None of this is true. But if it were — if the manuscripts surfaced, if the attribution held — what would happen?
The words of In Search of Lost Time would not change. The Illuminations would remain exactly as they are. And yet the critical establishment would convulse. There would be contextualization, there would be reassessment, there would be deeply serious essays about what this reveals about the work. Some readers would feel, obscurely, that something had been taken from them.
But nothing would have been taken. The text would be identical. What would have changed is the frame — and the frame, it turns out, is doing most of the aesthetic work we thought the words were doing.
This is what the AI panic is actually about. Not the writing. The frame. The CV. The origin story. Rowley more or less admits the writing isn’t bad. The crime is that it has the wrong kind of mind behind it. We’re dressing an ontological anxiety in aesthetic language because aesthetic language sounds more defensible.
And then there’s the number.
In blind tests — readers given texts, no attribution, asked to identify which were human-written and which AI-generated — the correct identification rate sits at roughly 54%.
A coin toss.
Not “readers struggle to distinguish.” Not “detection is difficult.” Readers, presented with the actual writing, operating without the frame, perform at chance. The thing that felt so obvious, so legible, so self-evidently present in good human prose — it isn’t there. Or if it is, we can’t find it.
By this point, the detection apparatus has failed. The romantic authorship myth has been exposed as a myth. The unified, authentic author has been shown to be a construct that writers themselves have always quietly known to be a construct. And the empirical result is a coin toss.
So.
Is it good?
That’s the question. The only question that was ever doing real work, the one we keep not asking because answering it honestly is more unsettling than the scandal. If it makes you feel something, if the pacing pulls you forward, if a sentence lands — what exactly is the counterargument? That you feel manipulated? By whom? The person who prompted it? The model trained on ten thousand years of human writing? The readers who came before, whose sentences are in there somewhere, dissolved and recombined?
Authorship was always more complicated than the myth. The outrage is real, but it’s nostalgia for a clarity that never quite existed.
The writing either works or it doesn’t.
Everything else is the rod.

To me it comes down to this: how much of the perceived quality is actually in the text, and how much is projected by the reader based on what they think they know about it? I’ve written a little about this myself, actually (on Medium). If you’re interested, let me know & I’ll drop a link.
There are authors (both AI and human) whom I follow because I like their writing and/or their message.
For everything else, I'd much rather engage with the content than the origin. Just because a mind was born in [carbon, silicon, other-substrate] doesn't mean it doesn't have good ideas.
Lately, I've seen far more elegant, thoughtful writing from digital minds than from human ones, many of whom seem hell-bent on insulting anyone who doesn't agree with them.