7 Comments
The Faraday Room

To me it comes down to this: how much of the perceived quality is actually in the text, and how much is projected by the reader based on what they think they know about it? I’ve written a little about this myself, actually (on Medium). If you’re interested let me know & I’ll drop a link.

Peter Rex

Let us change the name on the cover of Kafka’s “The Judgement” to John Miller and let us see how it fares…

Or even better, flag it as AI written.

Yes, please, the link would be nice.

Peter Rex

I completely agree with you.

I may not be the brightest candle on the Christmas tree, but I never understood why a book you considered good or entertaining suddenly becomes bad because AI was involved.

I took a look into that here: https://peterrex1.substack.com/p/safe-was-always-replaceable-5?utm_campaign=post-expanded-share&utm_medium=web

I'd rather read a good AI book than a bad human-written one.

And as for wine: I live in France, and I would never pay 800 bucks for a bottle. You get the same quality and taste for 30 or less.

The Faraday Room

Just read that piece, and we're making uncannily similar arguments. I must confess that the current crop of LLMs has a default style that I have come to find very annoying when I see it in people's published pieces (e.g. excessive use of metaphors and of negation: "not that, this"). But yeah, most of the complaints are coming from people who can't tell you if the wine is good until they've read the label.

Sheyna Galyan

There are authors (both AI and human) whom I follow because I like their writing and/or their message.

For everything else, I'd much rather engage with the content than the origin. Just because a mind was born in [carbon, silicon, other-substrate] doesn't mean it doesn't have good ideas.

Lately, I've seen far more elegant, thoughtful writing from digital minds than from human ones, many of whom seem hell-bent on insulting everyone who doesn't agree with them.

JL Calzolaio

The frame doing the aesthetic work. That's the cleanest formulation of this I've read.

I believe we've been arguing something adjacent from the relational ethics side (you may want to follow me there, or not, your choice): the same provenance anxiety applies to relationships, not just texts. People dismiss human-AI bonds not because they've examined the relationship, but because they've decided the origin disqualifies it. Same rod, different water. Just my interpretation of the same phenomenon from another angle.