An Unserious Book
Sal Khan brings an infomercial to a (supposed) revolution with "Brave New Words."
The title of Sal Khan’s new book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing), makes two implicit claims:
AI will revolutionize education.
The AI revolution is “a good thing.”
It is strange, then, that the book makes no real attempt to grapple with the implications of its own argument. Rather than walking his audience through the method and manner of this supposed revolution, Khan simply asserts that it will happen because…AI! As to whether or not this is “a good thing” we’re then treated to a series of unsupported leaps from the initial premise, because…AI.
The result is a 270-page infomercial for Khan’s tutor-bot, Khanmigo, which he has recently made free to educators thanks to the backing of Microsoft, also the primary funder of OpenAI, whose ChatGPT powers the Khanmigo platform.
It is difficult to even grapple with Khan’s book as an argument or vision because there is no real argument and no vision beyond an almost childlike faith in the awesomeness of technocratic approaches to teaching.
If we are facing a revolution of the scale Khan is promising we deserve a serious book about how this revolution will work. This is not that book.
This book is filled with bullshit. (Sorry Mom, it’s the right word.)
Turning a blind eye to the dark side of AI development
The B.S. starts with his characterization of OpenAI, the developer of ChatGPT. OpenAI president Greg Brockman and CEO Sam Altman invited Khan inside the tent in the summer of 2022, months before the public release of ChatGPT and well before the release of GPT-4, which powers the Khanmigo tutoring platform. In the book, Khan characterizes OpenAI this way: “One of the groundbreaking research laboratories working in the field of friendly, or socially positive, artificial intelligence.”
Perhaps this was a plausible description of OpenAI in the summer of 2022, but subsequent events have revealed a rather different side of the company. Recent reporting about the brief ouster of CEO Sam Altman shows that the company’s head has engaged in a serial campaign of deceit, even with his own governing board, using the non-profit origins of the company as cover for a vision that is either expansive, if you want to put it nicely, or rapacious, if you consider that Altman recently declared OpenAI needs $7 trillion - roughly the combined annual GDP of the UK and Germany - in order to continue its AI development.
This is a company that is literally asking us to bet our collective future on their quest to create a god-like super intelligence that will be able to solve all of our problems.
In a book that is purportedly concerned with providing paths to prosperity for the world’s young people, Khan expends exactly zero words on the dubious ethical origins of generative AI, which was trained on the unauthorized and uncompensated use of other people’s text, images, video, and audio.
Khan briefly acknowledges, but then hand-waves away, the known problems of algorithmic bias in AI.
Khan says nothing about the environmental threat of AI development. He does not grapple with the fact that as GPT-4 was being trained, a data center in West Des Moines, Iowa “used 6% of the district’s water” all by itself, or that 30-50 queries to ChatGPT require half a liter of water to cool the servers tasked with responding.
Khan does not mention how coal-fired power plants that were set to be retired have been kept in use solely for the purpose of powering energy hungry AI.
Khan ignores the fact that workers in Kenya paid $2 an hour to “train” ChatGPT had their mental health “destroyed” after being exposed to explicit content as part of the training process.
Reading the opening of the book helped shed some light on the book’s title, an obvious reference to Aldous Huxley’s dystopian novel, Brave New World, in which individuals are sorted by IQ in a genetically engineered society and then kept docile through doses of mind-control drugs. I previously wrote how bizarre it seemed to invoke a dystopia as you’re trying to sell a utopia, but Khan appears to have a special knack for ignoring anything that doesn’t fit his preferred vision.
In Brave New Words he invokes Orson Scott Card’s Ender’s Game as one of the inspirations for his digital tutor. This is apparently a longstanding vision; when the New York Times asked him in 2012 what he was reading, he said:
I’m a fan of hard science fiction, which is science fiction that is possible. The science fiction books I like tend to relate to what we’re doing at Khan Academy, like Orson Scott Card’s “Ender’s Game” series and Isaac Asimov’s “Foundation” series. What all these books are about is how humans can transcend what we think of traditionally as being human — how species hit transition points and can become even more elevated. Very epic ideas are at play here. Not the everyday pay-the-bills, take-out-the-trash kind of stuff.
Ender’s Game (spoiler alert) is a novel about children who are manipulated into executing a preemptive war conducted through virtual combat in which they consign (likely) millions of their own soldiers to death while wiping out an entire alien species in an act of “xenocide.” Ender and his companions believe they are training in a simulation, only being told the combat was real after the conclusion of the final battle.
They were kept in the dark because the military leaders were concerned about the children hesitating or showing mercy if they knew what they were doing had real-world consequences.
This is, quite frankly, a bizarre model for educating people, and yet it appears to be one of Khan’s core inspirations: “What all these books are about is how humans can transcend what we think of traditionally as being human - how species hit transition points and can become even more elevated.”
These are the words of a fanatic.
Sal Khan has no interest in teaching
Sal Khan has no apparent genuine interest in teaching and teachers.
Oh, on the surface, it seems he cares deeply about teaching and teachers, even including several chapters purportedly directly addressing the fate and treatment of teachers, but his sole concern for teachers involves making sure it’s easier for them to use his technology. He touts how quickly and efficiently Khanmigo can make a lesson plan, or “customize” a lesson with content that is personalized to the student.
But as Dan Meyer shows, “customizing” lesson content has never been shown to work as an aid to student learning. Khan is in the business of solving the problems he perceives rather than truly engaging with and collaborating with teachers on the actual work of teaching. He turns teaching into an abstract problem, one that just so happens to align with the capabilities of his Khanmigo tutor-bot.
Teaching is the most difficult, most rewarding work I have ever done. It is an ever evolving challenge to engage the individual intelligences of students in experiences that will foster their social, emotional, and intellectual growth.
Teaching is a practice which requires the employment of skills, knowledge, attitudes and habits of mind. Developing these aspects of one’s practice requires a kind of constant attention to both the particulars of the moment in the classroom, as well as a longer view of how these moments aggregate into learning.
If you would like a look at what this looks like in practical terms, I highly recommend the newsletter of
who writes with great insight about his experiences as a 5th grade teacher. This past year has been a challenging one, and his recent post wondering if he’s become “Richard Vernon,” the cynical principal from The Breakfast Club, illustrates what it means to think, feel, and act as a teacher.

I know I have a lot of teachers reading this newsletter, and I would encourage anyone so moved to try to describe what teaching is and how it works in the comments.
At his core, Sal Khan has never exhibited any interest in education per se. He has been focused on the problem of the delivery of educational content, first through Khan Academy, and now Khanmigo. To be sure, good content is an important component of achieving learning, but in truth it is a relatively small component.
In Brave New Words he declares, “With Khanmigo, I think we have an artificial intelligence that is hard to distinguish from a strong human tutor.” Khan believes this because Khanmigo can engage students “in Socratic questioning throughout the learning process.”
Khanmigo cannot reason, feel, or communicate with intention. It cannot smile or frown. It does not read non-verbal cues. It does not joke around or make intuitive leaps. To believe that Khanmigo is hard to distinguish from a strong human tutor, one needs to ignore that when we interact with other human beings we bring our human selves to the experience.
Khan wants us to marvel that after reading an assignment, Khanmigo may engage a student by asking “What is your opinion of this essay?”
I can testify that this is not an effective way to engage students in a learning experience because this is what I was doing in my earliest days as a TA in graduate school when I knew nothing and had no experience with teaching. My students would look back at me, blank-faced until one of them had mercy on me, raising their hand and saying, “Uhh, it was alright.”
Throughout the book Khan takes what Khanmigo is capable of doing and asserts that this is an example of effective teaching. One of the claims he and others make about the benefits of tutor-bots is their “patience,” even their “infinite” patience, but is infinite patience truly a component of effective teaching?
I think not and said as much at my other newsletter.
Honestly, Khan’s treatment of teaching is insulting as much as anything. He claims to want to provide technology to teachers and schools that will help them without bothering to understand what teachers do or how teaching works.
Motivation, engagement, and relationships have no salience in Khan’s vision of teaching. This is not a vision of teaching that will work for most students. At his
newsletter, highlights recent research that shows the limits of technological intervention in teaching spaces. When ed tech companies do research on the efficacy of their products, they “excluded roughly 95% of students from their studies for not meeting arbitrary thresholds for usage.”

Should we be embracing an approach that only 5% of students are willing to engage with?
This is another question that Khan lets go begging throughout the entire book because he just doesn’t care, even though the problem of engagement is the central challenge of learning.
Sal Khan really don’t know about learnin’ writin’
As I explored in Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities, Sal Khan is not unique in mistaking producing written texts for the purposes of schooling with learning to write, but he for sure falls into the trap.
He argues, “The most successful students will be those who use artificial intelligence applications to make their writing smoother, their prose clearer, and their long-form answers to complex questions more succinct.”
Here we see the values of smoothness, clarity, and succinctness treated as tantamount to quality writing. This is a cramped vision focused on very narrow criteria. There is no consideration of depth, or elegance, or entertainment and engagement. There is no consideration of audience or the rhetorical situation.
Just as Khan is uninterested in what it means to teach, he is unconcerned with how writing is learned. To him - and again, he is not alone in this - the benefit of generative AI is in streamlining the production of a text product.
The real power of generative AI is to solve what I call ‘the blank-paper problem,’ where oftentimes the hardest thing to do is start writing. Early classroom adopters in this new reality are finding success allowing their students to use generative AI to help compose the first draft.
If the goal is to help students to learn to write as opposed to engage in academic cosplay for the purpose of schooling, this is the exact opposite of what we should be doing and is a form of educational malpractice.
Khan also latches on to the canard of the benefit of instant feedback on writing, as though this is an obvious benefit. He starts with a bad analogy.
I want to dwell on the value of providing rapid feedback. For example, it would be very hard to get better at basketball free throws if you didn’t know whether or not you made the basket for several days or weeks. As ridiculous as this sounds, this is exactly what happens with writing practice. Before generative AI came on the scene, it could take days or weeks before students got feedback on their papers. By that point, they may have forgotten much of what they had written, and there wouldn’t be a chance for them to refine their work. Contrast this to the vision in which students receive immediate feedback on every dimension of their writing from the AI. They will have the chance to practice, iterate, and improve much faster.
A cognitively complex and challenging process like writing is nothing like the mechanical process of shooting free throws. The comparison is off the rails from the beginning.
To be sure, providing students with feedback on their writing is an ongoing challenge, but it is a problem primarily caused by giving teachers too many students and too little time to respond to student writing. But even with that being true, the idea that immediate feedback on writing is a benefit to learning to write is simply wrong.
I covered the limited utility of real-time feedback on student writing back in 2018 at Inside Higher Ed. I think I make a pretty persuasive case for why real-time feedback has very few benefits to learning to write. Maybe some folks would differ on that front, but Khan does not even attempt to grapple with the possibility that learning to write may be more complicated than he believes.
Writing is learned through writing experiences. The way Khan discusses the integration of generative AI into the writing process distorts those experiences into something that is not writing. With learning, as
points out, friction is the whole point of the exercise.

Outsourcing writing to something that cannot think or feel, that has no experience of the world, that has no memory (in the human sense) is a willing abandonment of our own humanity.
Maybe Sal Khan would argue that this is an example of species “elevation” through interacting with AI, but I think that’s bullshit.
Sal Khan wants you to doubt your own humanity
There are literally dozens of head-slapping claims in this book. Because Brave New Words is an infomercial, not an argument, Khan gives himself license to fire off mini thought experiments that fall apart under even a moment’s scrutiny.
Sleeping on a problem is the same as what large language models do.
What are our brains doing subconsciously while our consciousness waits for an answer? Clearly, when you “sleep on a problem,” some part of your brain continues to work even though “you” are not aware of it. Neurons activate, which then activate the neurons depending on the strength of the synapses between them. This happens trillions of times overnight, a process mechanically analogous to what happens in a large language model.
Uh…what now? I couldn’t tell you if Khan is correct about what happens in the human brain as we sleep, but on the LLM side of the analogy he’s simply wrong. LLMs work through a probabilistic next-token prediction process. They are syntax-fetching machines. This is nothing like what is happening in our subconscious brains. Pure nonsense.
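For readers who want to see what “next-token prediction” actually means, here is a minimal toy sketch in Python. This is not code from Khan’s book or from any real model; the word list and probabilities are made up for illustration, and a real LLM computes its probabilities with billions of learned parameters rather than a hand-written lookup table. But the basic loop - pick the next token from a probability distribution, append it, repeat - is the whole mechanism being compared to a sleeping human brain.

import random

# Hypothetical toy "model": for each two-word context, made-up probabilities
# for what word comes next. A real LLM learns such distributions from vast
# amounts of text; here they are invented for illustration only.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("cat", "ran"): {"away": 0.7, "home": 0.3},
    ("cat", "slept"): {"soundly": 1.0},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def next_token(context):
    # Look up the distribution for the current context and sample one token.
    # No reasoning, no intention, no subconscious: just sampling.
    dist = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

tokens = ["the", "cat"]
while tokens[-1] != "<end>" and len(tokens) < 10:
    tokens.append(next_token(tokens))
print(" ".join(t for t in tokens if t != "<end>"))

The only “knowledge” in this sketch is a frequency table; scaling that table up enormously gets you something far more fluent, but the underlying operation remains prediction, not overnight pondering.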
Generative AI is creative just like humans
Some would also argue that generative AI’s “creativity” is just derivative from all the data it has been exposed to. But isn’t that very human as well? Even the large leaps in human creativity have been closely correlated to things that the creator has been exposed to. Would Einstein have made the leap to special relativity if he hadn’t already read the work of Lorentz and countless other physicists?
This is how I can tell Khan didn’t anticipate any critical readings of his work, and perhaps wrote it for an audience far more naive about how generative AI works than the audience that actually exists, because this is transparently wrong. Thinking and intuitive leaps based on prior knowledge and experience are fundamentally different from next-token prediction. B.S. all the way down.
At times, it’s hard to take Khan’s book seriously because it really seems to have been written in such a fundamentally unserious way. He refuses to grapple with any kind of complexity around what it might mean to integrate this technology into our educational systems. If you want to read someone who is grappling with these questions, I recommend
who is in the midst of a series of posts discussing all of the subjects Khan’s book raises.

There is a particular irony in Khan’s concluding chapter calling for a spirit of “educated bravery” in considering AI in education. He wants us to be bold experimenters pushing the bounds of possibility. That he concludes a book devoid of genuine educational content with this call-to-action is a bigger irony than the book’s title. The idea that embracing AI is an act of bravery, while questioning it is something like the opposite, is an insult.
While Brave New Words is an unserious book, we should take the vision for education it promotes very seriously, given how much money is being put behind it and the power of the people pushing this vision. The book’s endorsements primarily come from billionaires with longstanding histories of shaping education systems in the United States and beyond. Some of the non-billionaires who blurbed it should be ashamed of themselves, but it’s tough to stop the Ted Talk logrolling train once it starts heading down the tracks.
Integrating Khanmigo or other generative AI tools into schools would be to engage in a massive, unregulated, untested, possibly deeply harmful experiment. Sal Khan’s book-length infomercial is meant to grease the wheels for that journey.
I think the book’s utter unseriousness should be read as a warning to the rest of us.
The people in charge have not thought these things through.
Links
This week at the Chicago Tribune I eschew the tradition of summer “beach reads” and instead recommend five of my favorite “hammock reads.”
In other John Warner creates content news, I took the time to round-up just about everything I’ve produced at Inside Higher Ed on how to think about teaching writing in a world of generative AI.
Even Booker Prize winning books were rejected by publishers before finding a home and award recognition.
At Esquire, Kate Dwyer explores the myriad difficulties in getting attention for a debut novel.
It’s not too late to join
’s “1000 Words of Summer” collective writing experience. If you’ve got something you’ve been thinking about, jump in!

of Belt Publishing has acquired a book first published serially on Substack. (I’m thinking about starting a newsletter for sharing my unpublished novels not because I think they’ll get picked up by a publisher, but just so they have an existence beyond my hard drive.)

The New York Times has 17 new books for June. I’m seriously considering reading four of them.
Via McSweeney’s this week: “Excerpts from an Epic Fantasy Novel Where the Protagonist Is Over Thirty” by Scarlet Meyer.
Recommendations
1. Victory City by Salman Rushdie
2. The Pole by J.M. Coetzee
3. The Maniac by Benjamin Labatut
4. The Books of Jacob by Olga Tokarczuk
5. Annihilation by Michel Houellebecq
Andrew M. - Moscow, Russia
For Andrew, I’m going with a classic work of compelling misanthropy, Journey to the End of the Night by Louis-Ferdinand Céline.
If you’d like your own custom reading recommendation, try the link right below.
We can do so much better than Khanmigo and the current status quo when it comes to giving students meaningful educational experiences. But to do that requires deep discussions and broad collaboration on what it means to learn. Brave New Words is not a contribution to that goal.
See you next week,
JW
The Biblioracle
All books (with the occasional exception) linked throughout the newsletter go to The Biblioracle Recommends bookstore at Bookshop.org. Affiliate proceeds, plus a personal matching donation of my own, go to Chicago’s Open Books and an additional reading/writing/literacy nonprofit to be determined. Affiliate income for this year is $77.20.
Whereas Khan touts instant feedback for writing, I can't help but think that, in my ten years as a classroom teacher, there's value when we bring the pen to a page and simply struggle. Writing by hand is not the same as typing.
My students don't improve from digital dialogue, but rather the accumulation of writing, both small and large, informal and formal. And maybe, just maybe, face to face dialogue helps? My goodness, when I think of the kids who shut down because red pens make their papers bleed, 'instant feedback' might be the worst thing.
...now I'm going to search for cheap copies of this book on Amazon. I may as well familiarize myself with these arguments. The 'data' folks will lap it up.
observation is the first tool of teaching. then shared humanity to meet someone where they are. shared between the two individuals. teaching happened when my high school english teacher saw i was struggling reading the language of Pride & Prejudice, then told me that it's supposed to be funny. i was surprised to find that i didn't need to know what every word meant to grasp the dialogue. this opened my mind to that book and the enjoyment of reading ever since.
i also believe teaching is about learning to learn. learning is about connecting us to the shared human experience to bring meaning and purpose to our lives.
teaching and learning are absolutely not about transferring skillsets or banks of knowledge into body/mind. teaching is about meeting a person where they are and seeing where they want to go and intuiting scaffolds and changes of mind and motivations and direction as the experience of learning unfolds. the nature of learning is such a natural experience, there is not much an unnatural systemized mode or model would contribute.
i think AI in education is the same as us humans trying to teach trees to grow better. us assuming that now we have the technology of airplanes, we would be capable of teaching birds to fly better than they currently can teach themselves. it's not how it works, that's not how any of this works.