48 Comments
Roger Hunt

Of all the dumb takes on this, this one is by far the dumbest.

The correct framing is: I only want to see the BEST painters. I don’t give a shit about what my neighbor paints after a few classes.

I only want to read the BEST writers. If you’re not the best, I don’t care.

If you’re just transmitting info, use AI. I don’t give a shit about your personal attitude or style. I’d read AI slop over human slop every day.

I’ll read a great writer like I’ll go see a great speaker. A fun dalliance. A sideshow. But if shit needs to get done or real problems need my attention (whether personal or professional), I’m working and not in the mood to be entertained.

And leave your bullshit about “soul” and “humanity” at the door. Clearly you never read Phaedrus.

If you’re nostalgic about reading/writing, then clearly you haven’t read or written enough.

John Warner

You're not very good at reading my friend. I'm not talking about anyone's soul. I'm not even talking about what is "the best." I'm describing the experience of writing and using AI to produce text is axiomatically a different experience than what I'm talking about with writing. Writing is what I like to spend my time doing. I'm not interested in automation assisted text production. You do you, my man.

Roger Hunt

Yeah, you’re a sideshow. Like attending a great storyteller. Fun to look at when work is finished: a menagerie.

John Warner

Those words, strung together, don't mean anything. Thanks for the effort, though.

Roger Hunt

Appealing to (one’s own) ignorance…a new form of moronity

T. Benjamin White

Why don't you "give a shit" about your neighbor's painting? You don't sound like a very good neighbor.

Roger Hunt

Go ask all your neighbors about their hobbies and sit through about an hour of them talking about them.

Send over a report and I’ll review it carefully.

As an academic exercise, indeed.

I want to be wrong! I’m just not.

T. Benjamin White

Academic exercise? I was thinking you do it for the human connection, but hey that's just me.

Roger Hunt

I couldn’t care less about that. You threw down the gauntlet; do the experiment. Only a cuck creates an experiment and asks others to do it for them.

Sophie

I suppose you think that comment makes you sound edgy. In fact, you sound adolescent. Maybe you are? If so, no doubt you’ll grow up, and find out that most bullshit jobs that “get done” aren’t that important, and that what’s important isn’t the bullshit that gets squeezed out of humans and machines in most jobs today.

In fact, what you call the sideshow, that is, creativity, including writing, is often the most important part of living, just as some of the most critical scientific research has little or no ROI but derives from our hunger for knowledge. Humans evolved to create and discover and are miserable without these two activities.

As for storytelling (the real thing, not marketing), it’s one of the oldest human activities and we do not exist, as humans, without it. Nobody cares about counting bushels in ancient Sumer, despite the fact that the new technology of writing was first applied to that purpose. But the stories of Sumer and Babylon, which also came to us through writing, are still told, and back in Sumerian days, they were considered important enough to write down.

Roger Hunt

Prove it: read my Substack. You won’t. Because you’re hollow and your words are empty.

But I’d love to be wrong! Prove it.

Sophie

I have no need to prove it. The Sumerians did it long ago, and all their successors since.

As for your crotchety insults…sticks and stones…

Roger Hunt

The humanist…indeed…once you read enough maybe you’ll be able to engage humanely

Hollis Robbins

"Plausibility ratchet" -- hilarious. Nicely done.

Cheryl Foster

Your discernment of kinship between LLM output and math zinged my mind over to the way philosophers of very different stripes collapsed their concerns into “language” once science took over many questions formerly handled by metaphysics.

Analytic positivists reduced philosophical inquiry to the consideration of sentences: reference, utterance, propositional possibility. What does “the king of France is bald” MEAN if, ontologically, there is no King of France? (No Kings! Yeah!)

Continental philosophers skated around on lateral, rhizomic networks opened up by “Différance” and its elaborate version of word association. Again, what does an utterance MEAN if any word could slip off into instability and wear the mantle of something else entirely?

Yada yada. Much philosophy of late appears to have come to its senses, turning attention away from the vacuum created by technical or whimsical noodling and toward actual thinking about problems both enduring and urgent. I say come to its senses literally; embodiment appears to be back.

Operationally, however, LLM output is the parallel-universe version of philosophy’s vacuum years, with its word-associative expanse and its channeling of logical form. But as you imply, that output, under its veneer of voice, stands on a skeleton of math, probability, and pattern detection. Impressive as a feat of complex abstraction, but unlikely to elicit the Stroutian flush, or tears.

Peter Hourdequin

You missed a recent and very thoughtful contribution to this discussion here: https://www.newyorker.com/culture/open-questions/is-it-wrong-to-write-a-book-with-ai. Worth a read. It’s a well-written consideration of AI tools through comparison to another creative art, music, that has already survived the integration of automation tools. I imagine you’ve probably bopped your head to many drum-machine beats without knowing it, and similarly, authors are now going to be getting better at using LLMs to enhance their writing. Some won’t even try and will produce slop by just depending too much on the tool; others will try and fail. But I’m sure many others will use AI artfully to create work that is quite beautiful.

Also, it’s worth noting that automation has long been part of writing. The spelling and grammar checker has been my friend for quite some time, catching my tendency to screw up “its” and “it's” and “their” and “there”. This form of automation only improves my writing because I use it judiciously, and I suspect writers will find ways to use AI tools similarly. We've been able to right click for a list of synonyms and antonyms in MS Word for a long time, and now we can right click for new phrase, sentence, or paragraph options. I don’t use these features out of preference and habit (and even prefer my thesaurus to the digital synonym lists), but I am not going to lie to myself to pretend that they cannot/will not be used artfully by some. There will still be plenty of slop, just as there is way too much bad music, but there will also be soulful writing that uses AI the way Marvin Gaye used his drum machine.

John Warner

"You missed a recent and very thoughtful contribution to this discussion here: https://www.newyorker.com/culture/open-questions/is-it-wrong-to-write-a-book-with-ai. Worth a read. It’s a well-written consideration of AI tools through comparison to another creative art, music, that has already survived the integration of automation tools. I imagine you’ve probably bopped your head to many drum-machine beats without knowing it, and similarly, authors are now going to be getting better at using LLMs to enhance their writing. Some won’t even try and will produce slop by just depending too much on the tool, others will try and and fail. But I’m sure many others will use AI artfully to create work that is quite beautiful."

I didn't miss it, I just found it not particularly notable; for its failure to do anything other than "on the one hand, and on the other," it ends up not saying much of anything. The drum machine to LLM comparison is not a good one, as the drum machine was a way of making a beat constructed through human creativity and action by other means, not automating the creation of the beat itself. Loops in music predate the drum machine. "Stayin' Alive" by the Bee Gees uses an analog loop method that they recreated in the Bee Gees documentary. "Dreams" by Fleetwood Mac utilizes an 8-bar drum loop.

As to authors "getting better at using LLMs to enhance their writing," there is both zero evidence and not even a persuasive theory as to how this will come about. Now, some of this depends on what we mean by "better." If we mean faster or even more salable because it can mass-produce a desired product, then sure, LLMs may help us get better at that, though I suspect that under those metrics, human involvement will be a detriment, not an asset. This already appears to be happening in Nashville as music generators are being used to make demos that deploy simulated voices of the stars the songs are being written for, but this is not a recipe for the long-term health of the music industry. Ultimately, some fresh human DNA must be injected into the system.

For me to believe that writers could use a tool to enhance their writing by not writing, I have to understand how that process would work, exactly. I don't think it's as straightforward as you seem to believe.

"Also, it’s worth noting that automation has long been part of writing. The spelling and grammar checker has been my friend for quite some time, catching my tendency to screw up “its” and “it's”and “their” and “there”. This form of automation only improves my writing because I use it judiciously, and I suspect writers will find ways to use AI tools similarly. We've been able to right click for a list of synonyms and antonyms in MS Word for a long time, and now we can right click for new phrases, sentence, or paragraph options. I don’t use these features out of preference and habit (and even prefer my thesaurus to the digital synonym lists), but I am not going to lie to myself to pretend that they cannot/will not be used artfully by some. There will still be plenty of slop, just as there is way too much bad music, but there will also be soulful writing that uses AI the way Marvin Gaye used his drum machine."

As it happens, I have another chapter in More Than Words titled "A Personal History of the Automation of Writing" that discusses everything you mention here and also argues that the way LLMs are deployed as a tool of automation is unlike any of those other tools of automation because of the level of experience at which they insert themselves. Again, you say, "I suspect writers will find ways to use AI tools similarly," but why suspect this? What evidence do we have, not in flawed analogies to other forms of automation, but in the present? The best example of an AI-aided work I've ever read is Vauhini Vara's "Ghosts," but it's important to note that this piece was done prior to ChatGPT, using an earlier LLM that had not been tuned to optimization, but which had far spikier and stranger responses. https://www.thebeliever.net/ghosts/

Maybe someone will rejigger an LLM closer to those earlier models and find ways to make something interesting, but also, we could, you know, just write.

Peter Hourdequin

I think you’ve somewhat misunderstood my point and the value I found in Rothman’s piece. Let me restate things more plainly: for writers with access to AI tools (which is now all of us), the choices aren’t binary. To borrow from Eric Clapton, “It’s in the way that ya use it.”

Just as a writer can use a spelling, grammar, or style checker to scan a document and accept or reject suggestions one by one, AI systems can and will be used in a range of ways—some artful, some not (and beauty will, as ever, be in the eye of the beholder). Some writers will use AI tools effectively, others poorly; the difference comes down to process. Sloppy methods will still produce sloppy work, while careful, time‑intensive approaches that use a variety of tools thoughtfully will certainly still produce good writing. Readers will rarely know—or care—how much AI is involved, since writers have no incentive to disclose their tools of composition.

You seem to be framing things as AI‑produced writing versus human‑produced writing. My point is that AI tools can be used at many scales and in many ways: reconsidering a word, rephrasing a sentence, modifying a paragraph, rethinking a chapter’s structure, etc. None of this inherently improves writing, but it doesn’t necessarily degrade it, either.

FWIW, I did read your book, including the chapter you mention. I agree with your broader thesis that writing is thinking, and I share your concern about turning our thinking over to machines. However, you also seem to value the importance of process, which is exactly what I am pointing to here. We cannot know all writers’ processes, so the best we can do is encourage more care and human agency. And yes, we should value and support good writing. When you say (above) that LLMs are different because of “the level of experience at which they insert themselves” I have to ask who is doing this inserting? Writers still have agency, don’t they (as do readers in the marketplace)? If someone chooses to outsource large parts of their writing to AI, that’s a human decision, not a technological inevitability (and it will likely produce slop). Conversely, if a writer remains deliberate and reflective, I don’t see LLMs as inherently corrupting. I came up without these tools so I don’t find them particularly helpful for composition myself, but I'm not sure they are inherently evil. How much power and water AI companies use is, of course, another story entirely.

John Warner

I think there is a growing evidence base that the tools are indeed inherently corrupting when it comes to process. There may be exceptions, but we know that when people use the tools, over time, they're more likely to defer to the outputs, we know that de-skilling, cognitive atrophy, and offloading are almost inherent to use. We know that the way that the models are tuned for sycophancy creates conditions which can result in outcomes as bad as psychosis.

Using AI to rethink structure is a choice not to think about how to rethink structure. It is, by definition, an offloading of the work of writing to something that does not write. The promise of this technology is almost entirely in increased production, not quality. Even after three and a half years, we have no evidence of writers using this technology to improve as writers. More writing, yes. Better writing, no.

Peter Hourdequin

The research on this is much less unequivocal than you imply; there’s evidence in the literature for both deskilling and upskilling, depending on how tools are used. Here’s just one study that offers a more complicated picture of things: https://arxiv.org/html/2502.02880v4. But because your framing leans heavily on pessimistic technological determinism and has only engaged with selective slices of the perspective I’ve shared, I’m not sure we’re going to make much headway towards a nuanced discussion here.

John Warner

I've seen that study and I'm afraid I'm not particularly impressed, because it's an example of what plagues a lot of education research, including writing studies: it essentially constructs an experimental version of somebody who has dropped their keys at night and is searching the ground underneath the lamppost because that's where the light is.

These studies do not measure the part of learning to write that matters when it comes to improving as a writer: transfer. Getting coaching on a discrete artifact with a measurable product (a necessity for this kind of research to be valid) does not test what it means to build what I call a writing practice (the skills, knowledge, attitudes, and habits of mind of writers). It's unsurprising that, given an example, a rubric, and AI coaching on the rubric, people wrote superior cover letters. This tells us nothing about whether they became better writers.

I know this because for the first five or so years of my career trying to help college students learn to write, I deployed a pedagogical method very similar to that AI coach's, where I gave examples, rubrics, and rules and coached students to follow them. They would often perform reasonably well against the grading rubric, but it became clear as the semester advanced and we moved to a different type of artifact that there was little to no transfer happening, assignment to assignment. In theory, these assignments were "scaffolded," but students were not developing their practices. They were just getting better at following explicit instructions from an authority (me). Each new assignment was like hitting a reset button, even when I would try to draw out the connections. Students became very good at following instructions for a grade. They were doing well in school, but they weren't learning. This disconnect led to years of iterative experiments which ultimately coalesced into the writer's practice. In some cases (often, even), the artifacts students were producing once I started to emphasize the experience and practice-building were inferior as products to when I was coaching them, essentially, on how to get a good grade.

But by changing my method of assessment to include reflection and metacognitive understanding, it became clear that students were improving in terms of their capacities to tackle an unfamiliar piece of writing by deploying a writing practice. At that point, it really is just a matter of reps to continue to improve.

I know this sounds grandiose, but it's a far superior way to teach writing as compared to approaches that deploy rubrics and judge products because it acknowledges that writing is something done by embodied humans, not disembodied "students."

My doubts are rooted in much deeper concerns about how writing is viewed, taught, and assessed in schools, concerns which predate ChatGPT and are aired in Why They Can't Write. Since you read the book, you know that I emphasize the experience of writing, and that the study, in order to be valid, must constrain the experience in order to have something testable. But this test does not tell us about the development of those people as writers.

Peter Hourdequin

I think we agree: it’s not the tool, it’s the practice (and pedagogy that enforces good practices). Good practices help good writers. Sloppy practices do the opposite. But focusing on demonization of one technology ignores this and a whole body of research.

John Warner

What have I demonized, exactly? I've pointed out something that seems pretty obvious, that writing is distinct from automation-assisted text production and that if we want people to learn to write they should write, rather than do automation-assisted text production, which could very well be something else we decide to teach.

Peter Hourdequin

I did not assert that you said that. You seem very defensive. As I said, this would not seem to be the right forum for a productive discussion. You’ve ignored much of what I’ve written and seem very reactive and more interested in winning an argument based on everything you know you know than having a fruitful discussion. Let’s leave things there.

John Warner

I'm just seeking clarity, my man. You said "focusing on the demonization of one tool..." which I took as you saying that I'm demonizing it. My apologies if that's mistaken. I think I've been pretty responsive. You cited that study as evidence of AI helping with skill acquisition and I explained why I find studies with that sort of design not dispositive to what I believe to be important when it comes to teaching and learning writing. I understand that I'm in the minority about this stuff, but, unsurprisingly, I think I'm right, and I think our collective understanding is moving toward me as we realize that writing is an embodied experience with more dimensions than can be captured in those sorts of experiments.

Rayna Alsberg

John, this is the sort of quality writing that I have come to expect from you. How neatly you encapsulate a large part of my objection (but by no means my only objection) to AI. It's math, and I too hate math. OTOH, you cheer me up because NEW BOOK BY ELIZABETH STROUT!!! Much proverbial squeegeeing. Must call library to get on the list. 💖📚

Lynn

I will occasionally check in with the Hard Fork podcast. Part of that is just about knowing what the tech industry is doing and thinking. But also it’s amazing to hear what these very smart dudes admit to! Some time back, one of them said they were using ChatGPT to learn how to meditate! Like what? Goofing with AI to learn how to sit quietly for a moment? Just nuts.

I did listen to the podcast with Jasmine Sun and it was so strange. She had written an essay for the Atlantic on how AI would not be able to write a good novel. Basically, the way it’s programmed is anathema to writing something original and compelling. That was an interesting insight! Then she went on to admit what she does with Claude. I think she said it’s just for her Substack; it is absolutely verboten at the Atlantic, which has human editors. And I was like, huh? After making all her interesting points on the limits of AI.

In defense of these tech writers, they do need to understand how this stuff works. So I am obviously not opposed to them using AIs. I also don’t think they should be knee jerk skeptics. But I don’t understand why they can’t do the writing themselves. Almost like they need to keep their experiments separate from their thinking and writing about it. I don’t understand why they don’t see the risk of contaminating their writing with AI.

Here’s that essay, by the way:

https://www.theatlantic.com/technology/2026/03/ai-creative-writing/686418/

T. Benjamin White

There seems, to me, to be some serious hubris in any professional writer who starts using AI to help produce their work. No matter how much you remind the chatbot that it's not doing your work for you (whatever that even means in practice) this is going to mean you're not exercising your brain as much in the process of writing. That's the whole point of using the LLM -- so that you don't have to use your brain as much. But there are a lot of other writers who are ready to use their brains... do Sun and Roose and them all really think their jobs are secure enough that they don't have to stay on top of their game?

Gordon Strause

I've been reading Jasmine Sun for a year or two on Substack, and I'm a big fan. She has done a great job of diving into the San Francisco tech scene and writing about the personalities and ideas she encounters insightfully and sympathetically but without sycophancy.

I thought about you when reading her Atlantic piece, so I was glad that you wrote about it, but I think you would do well to grapple with what she is saying more deeply rather than simply dismissing her reflexively as someone who doesn't understand good editing or writing.

First off, I think her perspective on good writing is actually not all that different from yours. From her Atlantic piece:

"I began to hypothesize that AIs might be able to generate award-winning literary prose if only we unhobbled them from the strictures of the post-training process and built specialized writing models instead. But as I reflected on the authors I love most, that didn’t seem right either.

When a practiced human writer reaches for a particular turn of phrase, they aren’t aiming for some single standard of great writing. Rather, the best metaphors come from the author’s specific blend of experiences or expertise. A writer’s diction, their citations, and the stories they share all reflect a singular, irreplicable perspective. Authorial voice emerges from the specificity of a life."

Correct me if I'm wrong, but I think that second paragraph captures your perspective on writing about as well as anything I have seen.

But more importantly, if you haven't already done so, you should definitely read Jasmine's Substack piece where she dives deeper on exactly how she uses AI as an editor:

https://jasmi.news/p/ai-writing

If, after reading that piece, you still believe that what she's doing with Claude isn't actually providing her with valuable editing, I'd be interested to hear why.

John Warner

I've subscribed to her Substack since a couple weeks after she started, which is why I'm baffled that someone who knows how to write and report without utilizing an LLM editor that cannot read is doing so. I assumed it's a way to establish some kind of bona fides with the tech folk, but it is pretty ridiculous on its face weighed against what she clearly understands about how writing works. I honestly think she's deluding herself about what this tech might be doing for her. I know that long run, it's not going to help her develop as a writer.

Edited to add that, IMO, and strictly IMO, I think she's in the midst of the AI novelty cycle when it comes to the Claude editor. It's fun to make something like this that looks like it does human work, but human work is human work and only humans read.

Jasmine Sun

Hello! I made the AI editor as part of my research/reporting process around my Atlantic piece. I view experimenting with AI as part of how I learn the tech well enough to write about its capabilities & flaws — I wouldn't trust a restaurant critic who doesn't try the food — *not* some meta-play about establishing "bona fides" with the industry.

I obviously disagree with you about what this does for me in the long run. I view AI as a supplement to human editing for the times when I don't have that resource, which I pretty obviously also value given that I have just accepted a role at the Atlantic :)

(Someone sent this comment to me, which is why I am responding. Sorry to butt in!)

John Warner

I explore this technology as well in my work trying to help education institutions adjust instruction in a world in which AI exists, so I certainly respect that sort of approach, but I continue to be surprised by writers who obviously can write who trust their writing to something that does not read. I don't know how one gets over that hump when you understand so well how LLMs work. You have thousands of readers. There's always a human available to respond to your writing. How would you feel if a publication you wrote for started requiring your writing to go through their custom AI editor that they believe has been tuned to their audience?

Jasmine Sun

On your last question, I would be fine with it as long as the writer got the ultimate say on 1) doing the writing itself and 2) deciding which feedback to accept vs. reject, which is the same stance with my Claude editor. That's a lot more freedom than I get with most human editors, who regularly rewrite entire paragraphs and hold the final say on both style and substance. A rubric is not nearly as restrictive as the "house style" that many publications employ. I have had enough bad experiences with human editors at prestige publications to not put them on such a pedestal.

I am not nearly as bothered as you are that the LLMs "can't read." When LLMs summarize my writing or other writing back to me, they do so at a higher quality and level of understanding than most educated humans would (I bet my claim here would survive a blind experiment). I don't care if the precise internal mechanics of an LLM are different from those of a human brain if it produces good feedback.

I'm saying this all as someone who has dedicated my career so far to human writing and human editing, and been on both sides of it professionally. I don't care if other writers want to use AI or not, and I do worry about over-reliance, especially among students, but I believe there are ways that smart LLM use—like smart use of, say, Google—can make writers better.

John Warner

I would never argue the infallibility of human editors having been on both sides of the equation, but I know that a human editor is human. You say that editor is a different job than writer, but it really isn't. It's an experience of taking in a piece of writing, an attempt at communication, and responding to it as a human. You say as much in your post describing your Claude editor emphasizing the role of an editor's taste. Taste is entirely beyond the realm of an LLM. That prompt you're using for the model is voodoo to the model itself. The notion that an LLM could evaluate an "insider-anthropologist position" is fanciful. It doesn't know what that is. It could possibly do better on the thesis question because we're in the realm of language and something amenable to mechanical learning, but even that is a stretch in terms of how a human would respond.

The LLM doesn't "understand" anything. It simulates understanding through a process entirely different from how humans develop understanding. The precise internal mechanics have no relationship to how humans read and respond. Your blind experiment wouldn't prove that LLMs "understand" better than humans because LLMs are incapable of understanding anything in the sense you're using the word. Whatever understanding happens is in the mind of the person responding to the LLM output.

The fact that humans may perform worse than an LLM on a summary is a sign that we shouldn't rely on LLMs to help us edit our work because it's humans we write for. If they don't understand, we haven't hit the mark.

It's amazing that this technology working this way is able to create these simulations, but that's all they'll ever be.

Gordon Strause

I don't think it's accurate to say editing is "the experience of taking in a piece of writing, an attempt at communication, and responding to it as a human." I'd argue that's the experience of reading (and perhaps commenting) but not editing.

"Editing" is about making a piece of writing better along some dimension that matters to the editor or writer or (ideally) both. Now, historically, its certainly true that has always happened through a human reading a piece and responding to it as a human, but the interesting questions that Jasmine is exploring is whether AI now offers another option.

And again, I think the example she provides in her piece (https://jasmi.news/p/ai-writing) is pretty compelling. I can see why she would find it valuable.

Which leads to the interesting question of "taste." I should start by saying that in some sense I don't disagree with you (and I doubt that Jasmine would either). I agree that LLMs don't "understand" things in the sense that we do and therefore, in some sense, don't have taste.

However, I don't think the fact that LLMs don't "understand" things the way we do is the mic drop discussion ender that you seem to think it is. I'd argue that it's irrelevant to the question of whether an AI editor can help one write better. When Jasmine talks about her AI editor having "good taste", all she's saying is that it's better at helping her achieve her goal of improving her copy (in ways that matter to her) than an AI editor without good taste. At the core, there is no claim that the better AI understands things better; ultimately it's just a question of whether the new model can produce better results.

Or, to put it another way, there is something extraordinary about riding a horse that a car ride will never match. But that doesn't mean that it's a mistake to drive places. Sometimes, for folks, getting to the destination is as important as the journey.

John Warner

I don't find that example compelling, but perhaps that's a matter of taste. This is not the kind of feedback that I've given the literally thousands of writers and students I've edited over my career. We talk about the work through the lens of the rhetorical situation. A question like "is the thesis in the first 500 words?" is trivial outside of the larger context of what a piece is trying to say.

When the AI as "thought partner" discourse popped up, I realized that people had what I believe to be a flawed understanding of their exchanges with the LLMs. An LLM is not capable of thought, as we know, but people say that they help them think. It's akin to how as a kid I would sometimes hit a tennis ball against the garage. The garage isn't a tennis player; it's a backboard. Wherever the ball goes is entirely dependent on me. If Jasmine Sun finds that editing helpful, it's entirely a byproduct of Jasmine Sun reflecting on the output and finding some value. This is something any writer can do without engaging with an LLM, and to get anything from the LLM beyond following it like a set of instructions, you'd have to be able to do it without the LLM.

It adds nothing that couldn't be achieved without it.

Gordon Strause's avatar

Ok. This seems reasonable. And I think the tennis backboard analogy is a good one. I'm not sure I totally agree with it (I think there is a case to be made that an AI editor is more like playing tennis with a clone of oneself rather than a backboard), but I am persuaded that the upper limit of the value it can provide may not be that high.

John Warner's avatar

I'm truly not trying to be contentious or pedantic, but I spend so much time thinking/writing/presenting/conversing/convening about these things that I can't stop myself.

The clone of oneself framework doesn't work because the LLM does not have experiences to draw upon as part of the response to the text. The expertise humans build is literally the byproduct of experience and the opportunity to reflect on those experiences. LLMs will always work entirely from a foundation of patterns of language. The technological feat that allows them to simulate human-like feedback is astounding, but it is not the same thing.

I know some folks who don't find those differences meaningful because they put all of their weight on the product, but because of my background doing and teaching this stuff, I "know" that it's the experiences that matter when it comes to building our capacities.

Gordon Strause's avatar

"The subscriber dollar in this ecosystem is very much zero sum... I deeply desire Ron Charles to have success because I am one of his readers. I am also a little irritated... because his success means there may be less oxygen for me."

This isn't true. There isn't some finite set of "book writing dollars" on Substack that have to be divided up between everyone who writes about writing. The situation, in my opinion, is both much better and much worse than that.

The good news is that if lots of folks start to write interesting book- and writing-centric Substacks, the amount of dollars will grow. In fact, I'd bet that if more people began writing this type of Substack, with different perspectives and arguments among them, you'd actually begin making more money, not less. Certainly, I would bet anything that the emergence of Charles' Substack isn't going to negatively affect yours.

I wrote about this a bit in a (belated) comment on your post about Substack, but I'm convinced that Substack is growing the pie of dollars going to writers:

https://biblioracle.substack.com/p/substack-is-not-your-liberator/comment/219149023

The bad news, however, is that while you're not directly competing with Charles' Substack, you are competing with EVERYONE's Substack, as well as with other media and social media, TV, and to some degree even video games.

What I think is zero sum is the number of institutional writing positions, where your success is determined by whether you can get hired by one of these publications. For better or worse (and there is probably some of both, but I think mostly the former), that world is mostly gone and only getting smaller.

John Warner's avatar

The essential equation is whether or not the growth of the pie comes with revenue sufficient to lift the boats of those of us in the water. I can say that in my own experience, and in conversations with maybe a couple dozen other newsletter writers around my particular level, everyone's audience, as measured by views and total subscribers, has been increasing, but once you get to a certain level, your revenue peaks and then starts to inexorably decline as you shed subscribers over time. Over the last three years my number of views per piece has doubled. My paid subscriber count (at the moment) is +3.

It may not quite be zero sum, but when you get down to the individual reader who is interested in books and has a threshold they will not cross when it comes to paying, it's pretty darn close. It's me or Ron at that point. Substack wants us to put much of our content behind a paywall because that does squeeze out extra revenue, but it reduces readers. Long term, it's not good. It's not sustainable. The long tail/1,000 fans theory has been shown to be not a thing, but it's the theory that most individual writers are counting on here to make it sustainable. Unfortunately, it doesn't work. There's no universe where the pie of money gets aggregated and then divided in a way that benefits the vast majority of writers. It's a great way for a handful of writers to reap much more than they would working for a publication, but for a healthy ecosystem, it is not workable.

You could argue that the era of print-based periodicals didn't work either, except it did, until the digital age drove the value of writing down (after a brief period of driving it up, which I was able to capitalize on for a couple of years). That model was also not as tied to the attention economy, whereas making money writing on the internet requires you to become a kind of spectacle in order to draw attention. That's not good for writing.

I suppose time will tell on this front, but the fact that Substack is trying to move into "TV" and establishing a relationship with Polymarket suggests to me that the core enterprise is not viable from their point of view either.

James Borden's avatar

I had read Kevin Roose's "Young Money" when he joined the Times, and it was a good book, but he did not claim to be anything other than a generalist then, and I do not really think of him as a tech person now. He was and is more of a tech culture writer who may know some more about the tech than the average member of Congress, just from having been on that beat. I am very disappointed that he used AI, though.

James Borden's avatar

(In general if an AI sounded at all like me I would assume that I had descended into self-parody and take my ball and go home)(This may have happened already)

Sophie's avatar

As well as for automating content factories (who reads that stuff? Anyone?), AI seems made for people who don’t like the act of writing, but who like to have written. Or for writers who have contracted for more than they can manage. They have always existed, and used to pop amphetamines to get through the writing process as fast as possible, or pay ghostwriters (famously, Alexandre Dumas wrote in a team with Auguste Maquet to keep up with his vast production of serials).

Marcie Geffner | Mostly Books's avatar

Not all marketing content written for the Internet was poorly written. I wrote some of it myself, and I believe I am a capable writer. With your other points, I agree. Automated text-generation is not writing. This comment was 100% human written.

Diana Zahuranec's avatar

There is, or will be, a coordinated campaign to regard AI positively (see: OpenAI buying TBPN in a stated effort to combat AI’s negative image).

Totally agree with all your points. Writers write! It’s how we think and uncover new ideas; the point IS the act of writing. The thing that worries me is how the push for a product (i.e. content) will usurp this, and writers who write and readers who read will be fewer and farther between.

Modern life has become so compressed that I completely understand how “giving in” to AI can feel like a secret hall pass, or even eventually necessary. But people who speed up in order to get in front of the machine are hurting themselves in the end, because modern life will never organically support slow living; its structure values speed, output, product, and money above everything else. So it’s a hamster wheel! Gah! The ultimate pushback is to refuse the frantic pace that modern work systems demand, the thing that makes AI-assisted “writing” so tempting. Slow down. F the system.

That being said, as a tool it can be useful, for those times when you need to produce content and not write (fixing an email, clarifying thoughts for a presentation, etc.). Worth the massive amounts of infrastructure, water, energy, blah blah? Ehhh…

Stephen Lloyd Webber's avatar

I'm less interested in arguing about who's a real writer and who isn't.

Warner's right that writing is an embodied practice and that the experience is everything. Having a writing practice is how you improve as a writer. But I think he's spending his energy on the wrong fight. Let's let individual writers sort themselves out. The thing to actually push back against is the human instinct to dehumanize each other the moment a cheaper option shows up.

It's tiring to worry ourselves about whether a writer uses AI to tighten a paragraph or get some machine-based feedback. I care more that organizations reached for replacement as a first move. It could have been something that people make use of or not, but instead it became "how many people can we cut?" That happened almost immediately, and there's still this backlash narrative that comes in waves about what can and can't be replaced.

I'd like to see more time for real people and small businesses to find their own relationship with these tools to do something genuinely novel on their own terms. But that requires patience and space. The race right now is to extract human labor from the equation as fast as possible. It's misdirected to orient toward LLMs as if that's the source of the problem.

John Warner's avatar

To be clear, I don’t make any judgements about people as writers or not writers, because I don’t think the identity or label means anything. I’m merely trying to map the territory of what writing is. I recognize that I’ve created something of a tautology by saying writers write, but I’m mostly interested in writing, not writers.

Stephen Lloyd Webber's avatar

Seems fair. Writing, not writers. And, to that point, I think what Nadella has talked about is relevant. He wants the line between a document and an application to blur, so that, like, a Word file isn't just a static thing you wrote, it's going to become a canvas for an AI agent to crawl around in. Lay eggs in. Raise its little agent offspring. That would mean the categories are shifting underneath us, that what counts as a document is changing whether we want it to or not.

I would think the debate about whether someone is writing versus doing automation-assisted text production assumes the container stays the same. The more those lines blur at the application level, the more annoying this whole thing is going to become.