The Limits of AI Research for Real Writers
When it comes to producing interesting writing, more is not necessarily better.
The fundamental mistake in how writing is framed in school contexts is the suggestion that the goal of writing is to capture a correct, or at least “approved,” answer on the page.
I was guilty of using this framing for many years, if not explicitly then at least implicitly. Even though I was a believer in teaching the process of writing, the way I presented the challenges to students suggested that when it came to what they should be producing, I had a pretty specific end in mind.
But finding the answer someone else wants is not how writing works, or at least it isn’t how it should work, because that process isn’t particularly interesting to the writer. Writing should be about mining your unique intelligence for something worth saying to the world. This was the privilege I had in my own work, so why couldn’t I give it to students?
Once I allowed students the freedom to write from their unique intelligences, I saw the shift in their attitudes toward writing immediately. The experiences were still highly structured, but the mindset, attention, and rigor they brought to them were significantly different and, to my eye, much more aligned with their learning.
This writing was also clearly better - perhaps not grammatically, or as measured against the criteria we often apply in school contexts - but it was inevitably more interesting to read. I don’t know why we should have a goal for a piece of writing other than for it to be read, engaged with, and enjoyed by an audience of other unique intelligences.
If we want students to write - as opposed to outsourcing document production to an AI - I think this must be our relentless focus. Writing is to know and express your mind, not to figure out what someone else wants you to say for a grade.
I would tell students that their work was meant to be part of an ongoing, never-ending “academic conversation,” and that the goal therefore was not to deliver something with some kind of definitive answer, but instead to take sufficient care to produce writing that continues the conversation and induces someone else to respond.
This was largely a foreign idea to them, as it was to me until I started to embrace it for myself in my teaching.
One of the enthusiasms among educators who are not AI boosters, but who nonetheless see potential for AI as some kind of extension of our human capacities, is using generative AI tools for “research.” As humans, we simply don’t have the capacity to access and consider the whole scope of available information. Generative AI, on the other hand, is literally made of that stuff and is more than happy to recombine it on a probabilistic basis for our consumption.
Surely having an assistant that can survey the whole field and report back to us is an improvement?
I’m not so sure about this, and my uncertainty is rooted in the mistaken framing I talk about above, as well as in what I observed of my students’ behaviors when they were called upon to research.
A couple of weeks ago I lamented the years I spent engaging my students in a forced march through a “research paper” writing process.
One of my laments was the way students would engage in “research” - essentially a pro-forma, box-checking process meant to satisfy an instructor’s grading criteria (number and quality of sources, etc.) rather than an exploration of a topic in search of the stuff that would fuel their thinking.
In fact, there was very little thinking going on, which is why the end result was often so disappointing to both them and me.
The ultimate promise of AI is an endpoint: a superintelligence that produces outputs far superior to anything humans are capable of. I think something similar is at work when we talk about AI research, particularly among those who are most enthusiastic about it. We are meant to believe that, using AI, we can get the “best” stuff, that it can digest everything for us, leaving behind the choicest bits. Research can be optimized.
I think this is flawed on several levels. For one, there’s not a lot of evidence that we write more interestingly because we have access to more stuff.
I’m here to testify that when it comes to creating an interesting, persuasive piece of writing, one’s research does not need to be wholly comprehensive. It merely needs to be sufficient. At one of my recent campus visits, I had an interesting conversation with a PhD student who was experiencing anxiety over the fact that they could not possibly read everything they thought they should before producing their dissertation, and part of that anxiety was wrapped up in the notion of AI’s bottomless capacity for doing “research.”
I tried to be reassuring, to remind this person that there will always be something we don’t know, that we have to get comfortable with not knowing. The stuff I don’t know - including in areas where I have experience and expertise - is bottomless, but what I do know, I know, and I know it because I’ve read it myself. I’ve thought about it, and I’ve used it as part of my own attempted contributions to the academic conversation.
Another problem is that AI models do not actually digest the material. They do not summarize using the kind of considered thinking humans apply when we summarize; they compress based on probabilities. I would not say this is useless, but just as syntax production is not writing, AI summarization is not the same as human summarization.
This is not to say that generative AI cannot be useful, but as opposed to seeing it as a “co-intelligence” or research partner, we should instead see generative AI as something closer to this:
A Lab will bring us anything: a toy, a dead squirrel, a living squirrel, a rock, a stick, a gold bar. They are friendly, indefatigable and non-discerning.
While the way generative AI operates delivers outputs that make plenty of surface-level sense, we know those outputs are no more the byproduct of thought and consideration than the offerings of a Labrador Retriever. These are syntax-fetching machines - big, high-tech Labradors that will eagerly bring things of potential utility to us for inspection, but that ultimately should not be trusted on the value of those things to our own thinking and writing until we think about them for ourselves.
All research is ultimately just fodder for our own thinking. To see AI-assisted research as inherently superior is to embrace the values of optimization in an arena - at least when we’re talking about the academic conversation in humanistic studies - where optimization bears no recognizable relationship to the quality of the work.
When I engage in research for something requiring depth and consideration - as I’ve been doing recently for a project I’m not sure I’m allowed to announce yet - I always wind up with more sources than I’m ultimately able to make use of. This unannounced thing, for which I just finished a draft of around 3800 words, cites 30 sources, and I see there are another couple dozen possible sources that I left out. I found these sources through the continual process of reading, exploring, searching, and rabbit-holing that I engage in on a daily basis. I gathered the vast majority of them over time, setting them aside for later consideration because I found each and every one interesting relative to a subject I suspected I was going to write about one day.
I augmented that process with targeted research as I was writing where I recognized I needed to fill a hole in the conversation. In a couple of instances my AI Labrador was useful in finding information to plug those holes, but not more useful than Google used to be before its enshittification.
I am not here to argue that my research method is superior and people should follow my lead. I can say that my method works pretty well for me, and one of the reasons it works for me is because it’s evolved to put as much thinking fodder as I can handle in front of me. My curiosity has led every step of the journey.
This is not at all impossible to do in school contexts, but it does mean centering curiosity and sustained inquiry as the fuel of research, rather than speed and comprehensiveness. The students who reflexively outsource the work to a large language model cannot conceive of a world where there is interest in what they specifically have to say.
I wonder where they got that idea. (No I don’t.)
In the end, the choices we make in utilizing this technology depend on what we value in both our experiences and our outputs, and I’m always going to be team humans first, mostly because I think that makes for a more interesting and engaging world.
Links
This week at the Chicago Tribune I wrote about Angela Flournoy’s stunner of a novel, The Wilderness.
At Inside Higher Ed I used a series of propositions to argue that if we say we’re teaching writing, we should teach writing, not AI-aided document production.
Here’s a very interesting conversation between literary agent Alia Hanna Habib and Simon & Schuster publisher Alessandra Bastagli.
A great, short example of reading closely from Derek Neal, looking at Kazuo Ishiguro and others.
The New York Times has 23 books coming in November.
Via my friends at McSweeney's, a little schadenfreude commenting on the sudden ubiquity of AI slop, “Hi, It’s Me, Wikipedia, and I’m Ready for Your Apology” by Tom Ellison.
Recommendations
1. Michelangelo: The Artist, The Man, and His Times by William Wallace.
2. Independence Day by Steve Lopez
3. The Ancient Art of Thinking For Yourself by Robin Reames
4. Proof by Adam Kucharski
5. More Than Words by John Warner
Patrick P. - Menomonee Falls, WI
These are mostly big idea books (including my own), so I want to honor that, but perhaps by pivoting to something that uses small observations to get to the big ones: The Peace of Wild Things by Wendell Berry.
This past week I had my first full week at home in quite some time, and it was glorious.
That said, if you are looking for help at your school, college/university, or organization as to how to navigate the world of writing in the age of AI, I’m eager to talk.
Take care, enjoy your weeks, and see you next Sunday.
JW
The Biblioracle


