Because Kustov's point was "entirely generated and posted on Substack by agentic AI using my new Claude Code (Opus 4.6) workflow," I thought I would ask my ChatGPT 5.2 Pro to respond.
Specifically: "Some people are worried that AI-generated research publications won't have the rigor or insight of human-generated research publications. Consider that journals already publish studies with data errors, p-hacked results, and non-replicable findings at alarming rates. One estimate puts the share of genuinely useful published papers at around 4%. An LLM that occasionally hallucinates a citation is competing against a system that routinely produces junk science dressed in enough jargon to pass review. If we applied the same skepticism to human-produced research that we apply to AI outputs, we’d shut down half the journals tomorrow. Assuming that advancing science (even social science) is the goal of publishing research, what conclusions do you draw from this argument?"
ChatGPT thought for 6m 2s (for the record, longer than it took me to reach a similar conclusion) and said:
=============
Your argument is basically: “The current publication system already has a terrible signal-to-noise ratio, so it’s inconsistent to treat AI-generated papers as uniquely suspect; if we applied the same skepticism to humans, the system would collapse.”
A few conclusions follow, but they’re not all “therefore AI is fine.”
[I will spare you the exposition, because all of you readers also generated it in your heads in less than 6 min. But I'll give you the wrap-up]
If you want a single sentence:
The argument doesn’t vindicate AI-generated publications; it indicts the current journal system and predicts that without structural reforms, AI will amplify its failure modes.
===============
I guess ChatGPT has a less sunny vision of our AI future than Claude does. Perhaps if Alexander Kustov had applied some human thinking to the question instead of outsourcing it to Claude, he might have seen the problem with "his" argument, too.
Thanks for the DFW anniversary notes link - that made me want to revisit parts of both IJ and The Pale King. For now, I'll pull them out and place them on a nearby desk. Also, I too have clicked on that NYT 32 books to read link multiple times.
As a visual artist, I am becoming more and more disturbed by the omnipresence of AI - I have an artist friend who consults ChatGPT for suggestions to improve her paintings. She's pretty much still a beginner, so that makes me very worried for all creatives and also a bit sad for her. I want to deliberate and make decisions on my own work - that's the fun and magical part of the process. Finally, I don't want AI in my world, especially not in my toddler granddaughters' worlds, but it looks like I don't get much of a vote.
AI has a role to play, but I'll make an argument that I think John would approve of--AI has no taste, no aesthetic, no true discernment. At best, it tries to mimic the taste it has ingested, but "mean taste" is ultimately tasteless.
It is so very, very tempting to ask AI "should" questions. Except for well-known scenarios with standard solutions, this will nearly always yield terrible advice. Thinking is hard, and we have birthed a generation that lives in a near-constant state of anxiety about ever being "wrong". The "release of responsibility" that comes from having the AI make the decisions is as addictive as any drug.
Thank you, John. Guess some of us are VERY lucky to have been born no later than when we were. I'll leave it at that, and go back to reading my 19th and early 20th century books.
*Nods along to everything*. A great antidote to anti-maxxing/anti-optimization, or even the fatigue of such when the world around us is obsessed with it, is the book "Rest" by Alex Soojung-Kim Pang. Highly recommend!
Excellent post. I recently learned about Clavicular because one of my 14-year-old son's friends said he looks kind of like him (ack!). My son explained to me that Clavicular is "really a joke", but I could tell he was a little proud of the association too. He knows better, but he's still drawn in by this empty, soulless content sometimes. The way you break it down is better than anything I could have said, so I sent him your newsletter. Thank you!
Slightly off-topic and a bit provocative perhaps, but I do think academics lamenting how little reading they can get students to do sometimes fall into the "more is more" trap. Don't get me wrong - distraction and AI are real obstacles to reading, but people read 800-page romance novels and listen to three-hour podcasts, so it's just not true that people never take on anything long. And maybe it's the poet in me, but I think focused, deep reading of a short text is a stronger learning experience than the cramming and half-understanding of hundreds of pages a lot of these profs are nostalgic for.
Deliberation. Sigh. We miss it too. 😔
You're on to something
You know what I miss? Deliberation.
Amen to this.
Terrific post. When I read your essays it feels like the Rocky moment on the stairs. Yes!!!
We need an anti-maxxing term for the “do less to do more of quality” camp. Proustian? :)
Fascinating read.
(The hyperlink to 32 novels is actually the DeLillo link.)