Writers Write
People who use AI to produce text are doing something else, which is fine for them, but it's not writing.
Over the past couple of weeks there have been so many social media posts and articles by writers and journalists (essentially) outing themselves as enthusiastic users of generative AI technology in their work that it almost starts to feel like a coordinated campaign to nudge the conventional wisdom around AI, though I’m certain it is entirely organic.
There was a Wall Street Journal article on a guy at Fortune magazine who uses AI throughout his process, cranking out many more stories than would otherwise be possible.
Kevin Roose of the New York Times described the “team” of Claude agents he used to edit his book to a writer at Wired.
Megan McArdle of the Washington Post uses AI for all kinds of things, things which - as becca rothfeld explains here - run afoul of Washington Post policies unless they are disclosed to the reader, which they are not.
Jasmine Sun, a newly minted staff writer at The Atlantic, described her development and deployment of her own custom Claude editor.
First, I fed the chatbot Claude an archive of my past writing, along with notes about what worked and didn’t about each piece. I used this to create a custom editing rubric based on my voice. Some criteria are generic, and others are personalized: One reads, “Does this play to your insider-anthropologist position” in Silicon Valley? Another asks whether the thesis shows up in the first 500 words. I dumped this guidance into a Claude project along with a reminder of its role: “You are not a co-writer. You cannot perceive. Your role is to help Jasmine write like the best version of herself.” I don’t want to be de-skilled, I reminded the machine. Your only job is to make me smarter.
There are numerous other examples. They are all similar. I am glad they are out in the world because these are conversations we should be having. Some schmuck got busted using AI in a book review for the New York Times because the AI plagiarized from another review. Hachette cancelled a novel because of a revelation of AI use. More will likely happen between the time I draft this and the time most people read it.
Lots of other writers have chimed in already, more entertainingly and insightfully than I’m likely to achieve here.
These include:
Rusty Foster at Today in Tabs who susses out who will and who will not “go AI.”
Marisa Kabas at The Handbasket who “refuses to accept an AI-poisoned future of journalism.”
Max Read who gets at some of the structural issues for working writers that are important to consider.
Hamilton Nolan who encourages others to use AI because his refusal to do so will be his chief advantage as a writer.
To all of the pieces linked above I want to say, “ditto,” which should obviate the need for me to go on, except that now is a time for counting, for declaring where one stands and, having done so, letting the chips fall where they may.
What McArdle, Roose, and even Sun, with her elaborate instructions to Claude not to write for her, are doing is, in my opinion, not writing. Each of them is engaging in some form of what I call automation-assisted text production. (There is also wholly automated text production, where there is zero human involvement.) Automation-assisted text production obviously resembles writing, but any time you decide to outsource or “augment” a human writing-related activity with generative AI, you are moving away from the act and experience of writing by substituting automation.
Here’s the thing I cannot get past with these people who use AI to edit in the ways that Roose and Sun describe. Large language models cannot read!
LLMs operate purely at the level of language, language which ultimately drills down to math, and writing ain’t math. Why would I put any stock in the language of something that cannot read? It makes no sense to me and never will.
I was on the road this week to participate in a forum discussing approaches to AI in education and, as one does, found myself stuck on a regional jet on the tarmac of Reagan National Airport in Washington, D.C., because a combination of weather and the volume of air traffic had halted all takeoffs.
The woman next to me thumbed through her flight tracker app, apparently finding some solace in seeing all the other stationary jets surrounding us. I was reading an advance copy of Elizabeth Strout’s forthcoming novel, The Things We Never Say, and came across a moment of prolepsis (flash forward) that so whipsawed my emotions, I felt a flush through my chest and tears sprang to my eyes.
That’s reading. Reading is also allowing oneself to be persistently confused by something you don’t understand rather than asking Claude to explain it to you and then accepting that. Sun’s idea that editing - good editing - can be reduced to a rubric suggests to me she does not understand editing, and perhaps is missing something important about writing.
Kevin Roose is, I increasingly believe, simply a dolt. I don’t mean that he is literally dumb so much as that he possesses such a combination of incuriosity and credulity that it is hard to believe he has risen to one of the top perches in journalism.
Rusty Foster unlocked this conundrum for me:
Mr. R’s secret is that his work is not primarily artistic or informative—it is functional. He serves a purpose for the industry he covers. Mr. R’s job is to absorb the tech industry’s self-mythologizing, and then believe in it even harder than the industry itself does. He serves as a kind of plausibility ratchet. His byline and employer legitimize a level of credulousness that would otherwise be laughable, and thereby allow tech PR to seem relatively restrained. Mr. R has no problem going AI because he himself has been a small cog in a big ugly machine for a long time.
Kevin Roose is in the content business, a business which existed long before ChatGPT showed up, a business which is, by all accounts, better than attempting to be a writer because if you are in the content business you do not have to care about writing, you just have to gather together some words, and LLMs make this easier than ever before.
I have a chapter on Content vs. Writing in More Than Words:
The core theses of the chapter are that we have already been flooded by content because of the nature of the marketplace for writing on the internet (e.g., clickbait), AI is going to make it worse, and for AI content to truly get traction, it is going to need a human face in front of it. In the chapter I cite a couple of instances where publications had invented fake people to cover over AI slop. What I failed to fully anticipate is people at the top of the profession deciding that they may as well join the flood, using their previously established reputations to launder the automation-assisted text production.
The distinctions between automation-assisted text production and writing may not matter to you. If so, fair enough, but I also wonder what you’re doing here. Go wallow in the infinite universe of slop. If you don’t want an infinite universe of slop, maybe these distinctions matter more than you think.
For those who value writing, what should we do? My approach is to read and support writers. Click on my links above and subscribe to all of those others. Read novels by writers like Elizabeth Strout, who you know is not outsourcing her work to an LLM.
Remember that you can read and Claude can’t, so when you let Claude read and interpret something for you, you’ve given your perceptions over to math. I mean I got into writing because I hate math!
Make choices consistent with your values as best you can, and when you can’t - which is inevitable - acknowledge the costs of this choice.
It is foolish to close what is meant to be a call-to-arms with an admission of likely defeat, but I’m not convinced writing and writers (as opposed to content and automation-assisted text production) are likely to survive because, well…capitalism.
Ron Charles and the rest of the books section at the Washington Post were the last reason I clung to my Post subscription after Jeff Bezos declared his allegiance to the tech oligarch/fascist joint takeover of American society. Charles is now here on this platform, writing independently, and this week he wrote about how taking a week off now feels like a “risky act of faith.”
This whole contraption depends on convincing some sliver of free readers to become paying subscribers. Will any break in the rhythm of publication cause those potential converts to drift off toward some more prolific, more engaging scribbler?
Ron Charles entered this platform better positioned than 99% of other writers. He’s only been here a few weeks and already has three times as many subscribers as I do after more than five years, but even this positioning is not anywhere near enough to bring him the kind of security he enjoyed under the umbrella of a publication, because he writes interestingly and perceptively about books rather than technology or politics or how to make money on Substack. Ron Charles and I are now competitors of a sort, both covering books here. (He more faithfully than me, TBH.) The subscriber dollar in this ecosystem is very much zero sum.
I deeply desire Ron Charles to have success because I am one of his readers. I am also a little irritated that the man can amass a significantly larger number of subscribers out of the gate because his success means there may be less oxygen for me.
Fortunately, I don’t rely on this newsletter for anywhere near the bulk of my income, but what if I did?
It’s not sustainable. The impossibility of writing as a sustainable enterprise is what tempts some towards “going AI” because it allows them to produce more. Personally, I’ll quit before I go AI because I’m a writer, and if there’s no way to be a writer anymore, I’ll go do something else.
Links
At the Chicago Tribune I reviewed Elisa Shua Dusapin’s This Old Fire.
At Inside Higher Ed I offer some cautions about what happens when Claude becomes your co-worker in professions other than writing.
I’m a paid subscriber to Hao Nguyen’s “How I Make Money Writing,” and let me tell you, a lot of writers you imagine must be making a living from their writing because they are great and successful simply aren’t. It makes me feel simultaneously fortunate and deeply afraid.
Via my friends at McSweeney's and the legendary contributor Ross Murray, some holiday-themed humor: “Jesus Died for Our Sin, Just One Sin, and It’s Yours, Harold.”
Most of what I read this week is linked in the essay above. Tell me what I missed in the comments.
Recommendations
1. The Rest of Our Lives by Benjamin Markovitz
2. The Bright Years by Sarah Damoff
3. A Far-Flung Life by Stedman
4. Daughters of the Bamboo Grove by Barbara Demick
5. The Hunger Code by Jason Fung
Teresa C. - Cazenovia, NY.
For Teresa I’m recommending a novel from a few years back that maybe didn’t get as much attention as it deserved, The Boys by Katie Hafner.
Look, I’m lucky. I work pretty hard to make a living as a writer, but it’s work I enjoy doing which is a real blessing. Still, the whole enterprise feels awfully precarious. I know exactly what Ron Charles is talking about. The thing is, I can’t control those things. All I can do is keep writing, which is why, barring unforeseen circumstances, you’ll hear from me again next week.
Happy Easter and Chag Pesach Sameach to all who observe as part of their faiths.
JW
The Biblioracle



Your discernment of kinship between LLM output and math zinged my mind over to the way philosophers of very different stripes collapsed their concerns into “language” once science took over many questions formerly handled by metaphysics.
Analytic positivists reduced philosophical inquiry to the consideration of sentences: reference, utterance, propositional possibility - what does “the king of France is bald” MEAN if, ontologically, there is no king of France? (No Kings! Yeah!)
Continental philosophers skated around on lateral, rhizomic networks opened up by “Différance” and its elaborate version of word association. Again, what does an utterance MEAN if any word could slip off into instability and wear the mantle of something else entirely?
Yada yada. Much philosophy of late appears to have come to its senses, turning attention away from the vacuum created by technical or whimsical noodling and toward actual thinking about problems both enduring and urgent. I say come to its senses literally; embodiment appears to be back.
Operationally, however, LLM output is the parallel universe version of philosophy’s vacuum years, with its word-associative expanse and its channeling of logical form. But as you imply, that output - under its veneer of voice - stands on a skeleton of math, probability, and pattern detection. Impressive as a feat of complex abstraction. But unlikely to elicit the Stroutian flush - or tears.
There seems, to me, to be some serious hubris in any professional writer who starts using AI to help produce their work. No matter how much you remind the chatbot that it's not doing your work for you (whatever that even means in practice), this is going to mean you're not exercising your brain as much in the process of writing. That's the whole point of using the LLM -- so that you don't have to use your brain as much. But there are a lot of other writers who are ready to use their brains... do Sun and Roose and the rest of them really think their jobs are secure enough that they don't have to stay on top of their game?