21 Comments

Whereas Khan touts instant feedback for writing, I can't help but think, after ten years as a classroom teacher, that there's value when we bring pen to page and simply struggle. Writing by hand is not the same as typing.

My students don't improve from digital dialogue, but rather from the accumulation of writing, both small and large, informal and formal. And maybe, just maybe, face-to-face dialogue helps? My goodness, when I think of the kids who shut down because red pens make their papers bleed, 'instant feedback' might be the worst thing.

...now I'm going to search for cheap copies of this book on Amazon. I may as well familiarize myself with these arguments. The 'data' folks will lap it up.


If a machine could be designed to check all your technical boxes (e.g., read nonverbal cues, offer its own affect, etc.), would it matter that it was a machine and not a human? A bit like the Blade Runner of education.

It seems to me that there’s something essential in a human relationship that machines can’t replicate even if they replicate everything else that relationship is meant to offer.

Jun 2 · Liked by John Warner

observation is the first tool of teaching. then shared humanity to meet someone where they are. shared between the two individuals. teaching happened when my high school english teacher saw i was struggling reading the language of Pride & Prejudice, then told me that it's supposed to be funny. i was surprised to find that i didn't need to know what every word meant to grasp the dialogue. this opened my mind to that book and the enjoyment of reading ever since.

i also believe teaching is about learning to learn. learning is about connecting us to the shared human experience to bring meaning and purpose to our lives.

teaching and learning are absolutely not about transferring skillsets or banks of knowledge into body/mind. teaching is about meeting a person where they are and seeing where they want to go and intuiting scaffolds and changes of mind and motivations and direction as the experience of learning unfolds. the nature of learning is such a natural experience, there is not much an unnatural systemized mode or model would contribute.

i think AI in education is the same as us humans trying to teach trees to grow better. us assuming that now we have the technology of airplanes, we would be capable of teaching birds to fly better than they currently can teach themselves. it's not how it works, that's not how any of this works.

Jun 2 · Liked by John Warner

Brave New World and Ender's Game as positive examples is a choice. Khan seems to be the one who not only welcomes the invention of the Torment Nexus, but also praises it in a book.

(For those unfamiliar with the meme: https://www.reddit.com/r/Cyberpunk/comments/sa0eh3/dont_create_the_torment_nexus/ )

Jun 2 · Liked by John Warner

So much to unpack here, but I'm grateful to hear some pushback and criticism of the tech booster vision. One thing I kept thinking while reading was how ideologically similar so many tech bros are and how difficult it is to argue against them without making explicit the implicit assumptions in their ideology. Dr. Emile Torres has done a great job uncovering the connections between the various Silicon Valley ideologies and the implicit eugenics and anti-humanism in them (https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/).

Jun 5 · Liked by John Warner

I'm glad you mentioned how poorly titled his book was, as that's the first thing that jumped out at me. Forget Khan -- did no one involved in publishing this actually think it through?

On the other hand, "Khanmigo" is a great name for a chatbot, even if the product itself shouldn't exist.


Business writing generally dispenses with inefficiencies like "having an argument" and "evidence for an argument".

author

Yes, I think the mistake I made going into the book was treating it as though it were a book on education, but it really is not that, even though it's categorized as "education policy." To have it for sale now, it obviously would have had to be written pretty much before anyone had a chance to truly use Khanmigo.

Jun 4 · edited Jun 4 · Liked by John Warner

Thank you for your wonderful review. I deeply appreciate all that you wrote, and would like to add a bit more. Disclaimer--I have not read the book, but I have read extensively about it and listened to hours of him on his recent tour describing his book and vision.

I fear that his book is deadly serious according to his values, desires and potential impacts. Maybe we, as reflective practitioners who have dedicated our lives to the craft of teaching and caring for children and young adults, aren't the audience? He doesn't have to be serious about the pedagogy, practicality, or impact. If he desires fame, status, and profit, this book will seriously help him do that to the detriment of the rest of us. Sal Khan is a nicer techbro, but a techbro nonetheless. Whether it was out of hubris, greed, ignorance or evil, Zuckerberg's pronouncement that Facebook would "connect the world" has failed spectacularly--while it has made him one of the richest men in the world. Khan has a softer image, but I project no benevolence onto him. He's motivated to provide a "free, world-class education for all". Maybe it's more hubris than evil, but most of us here know his tech utopianism isn't the path.

There are so many flaws in this vision, and we should discuss them here. As you stated, the impact of this book will be seriously tragic for students and teachers alike. One of the questions I have is what's the Purpose of an AI tutor? He's a smart guy, so I would presume he's thought about it beyond just the technical capacity of it. If it's successful in his utopian vision, and kids don't cheat, and they actually receive a world-class education, what will they do with it? Helping students be gainfully employed is a primary goal of education--maybe one that most of us here lament, but none can deny it's fundamental to schooling. Education in America (everywhere?) is more economic endeavor than humanist endeavor.

So my question is, "If students can learn all they need from a personal AI tutor, why the hell would a business ever hire the student when they could employ the tutor???" Some may say he still recognizes the role that teachers play and that he doesn't believe AI can teach it all. That's mere lip service that he's repeated for 20 years. He believes teaching is a problem and that technology is the solution. Bye, us.

Economics, not pedagogy, will drive this. He's giving the keynote address at the NASSP convention this summer in Nashville, and they will LOVE his optimism and simple panacea. Experienced educators of conscience must confront his vision at every turn.


Thank you for this coherent rebuttal of the tech-booster approach. Very few teachers ever think like the tech boosters, because they have experienced the messy and humane reality of the classroom. Will definitely add this piece to my own summary of the implications of AI and English teaching. https://www.juliangirdham.com/blog/english-ai-and-the-thermostatic-principle


Thanks for the detailed review!

I haven't read your book, but this from the blurb caught my eye: "We have done no more, Warner argues, than conditioned students to perform "writing-related simulations," which pass temporary muster but do little to help students develop their writing abilities."

I suspect here we are quite literally staring at a "teaching-related simulation". This new fad will die with a whimper. It boggles the mind that something this bad at understanding (exhibit A: inability to understand even numbers https://amahabal.substack.com/p/gpt-4-still-doesnt-understand-even; exhibit B: https://x.com/colin_fraser/status/1785132544482226679) is expected both to understand the content and to have a rich theory-of-mind to understand where the student is and what they are struggling with.


This is so grim. I hate hate haaaaaate when people reference a thing when they obviously haven't read it and/or understood it on any level. I've been thinking a lot about how techbros are trying to rewire our brains to believe bullshit, and this seems to be a cortical shunt piping in stupidity.

Article about how big tech is failing us on so many levels here: https://www.webworm.co/p/reptilianfacebookpage


While I agree with most of your argument, as a computer engineering master's student I can say that "LLMs work on a probabilistic next token prediction process. They are a syntax fetching machine" is an oversimplification of what LLMs do.
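
For anyone who wants to see concretely what "probabilistic next token prediction" refers to, here is a minimal sketch of sampling one token at a time from an open causal language model using the Hugging Face transformers library. The GPT-2 checkpoint, the prompt, and the ten-token loop are illustrative assumptions, not anything from Khan's book or this thread, and production chat systems layer a great deal on top of this bare loop.

```python
# Minimal sketch (illustrative only): generate text by repeatedly sampling the
# next token from a causal language model's probability distribution.
# Assumes the Hugging Face `transformers` library and the small GPT-2 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The teacher asked the student to"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # extend the prompt by ten tokens, one at a time
        logits = model(input_ids).logits              # scores for every vocabulary token
        probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the *next* token
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token id
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Whether that loop amounts to "syntax fetching" or something richer is exactly what the rest of this exchange is arguing about.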


Came here to say something like this, too. I will likely agree with much of what is said here once I read the book, but it's unfortunate that, on this issue, John is making the same kind of overconfident and unsupported claims he accuses Khan of making.

You don't have to be a tech bro or a computer scientist to know that next token prediction is merely a component of how LLMs were *trained,* and that how the models were trained actually tells us very little about their post-training processes. To get better at predicting the next word while training, the models have developed structures, processes, and abilities we are only starting to understand (see the research on mechanistic interpretability). And the little bit we do know suggests they have developed rudimentary world models, engage in a kind of reasoning, and have a sense of when they are being deceptive.

It is thus *far* too early to say with any degree of confidence that LLMs do not think or feel. They are obviously not identical to humans, but neither are animals, and we (hopefully!) grant that animals think and feel, as well. And, again, this is not just the narrative of the tech bro. Philosophers of mind, cognitive scientists, and most importantly neuroscientists, are doing really important work on this front right now.

To be clear, I'm not saying they *are* conscious, reflective entities. I'm just saying we don't actually know and should be less confident about pronouncing it "pure nonsense" to suggest otherwise (or even to suggest it might someday be possible). None of this is an argument in favor of integrating LLMs into our teaching and learning practice, let alone adopting Khan's recommendations. But because I agree with John that naivete about how LLMs work is a serious challenge to thinking carefully about these issues, I wanted to raise this issue to encourage all of us to read more widely and demonstrate more intellectual humility about this issue.

For what it's worth, of all the ethical risks AI presents (and there are *many*), one of the biggest is that our deep desire to preserve the privileged status of the human will lead us to actively ignore or deny evidence that suggests we have moral obligations to treat any new entities we create as something more than mere "tools." I actually don't think we're there yet, but strong claims that such speculation is "pure nonsense," are not setting us up for success if we ever get there.

author

I appreciate this perspective, but the idea that we should treat an algorithm that cannot and never will (at least if we're talking about generative AI) think, feel, or communicate with intention as having some kind of status on par with biological life forms is pretty nonsensical to me. To believe otherwise is to willingly embrace an illusion, or worse, a delusion.


You may be right that generative AI cannot and never will think, feel, or communicate with intention or have some kind of status on par with biological life forms. But to say it is an illusion or delusion presumes we know this to be the case. My point is simply that we don't have enough evidence to give us this confidence, and if we aren't confident, there are real risks to saying otherwise. At the very least, we should withhold judgment. Of course, if we are confident, there are also real risks to saying "we don't know," so I understand why you're worried (and just as we have evidence that people are likely to deny moral status to other entities, we also have evidence that people are likely to anthropomorphize in inaccurate ways).

So I guess the real question I have for you and others who make this kind of claim is: what gives you such confidence that this will never be possible?

author

I would never say never, but we do know that it is the case with generative AI. It has no capacity for feeling or memory (in the non-computing sense). It has no capacity for reflection or metacognition. I know there are unknowns around exactly how these algorithms produce their outputs and finding answers to those questions will likely reveal some interesting previously unknown and unappreciated things about language, but that doesn't mean we're going to find any form of consciousness.

I strongly doubt we will ever be visited by extraterrestrials, but I also wouldn't go so far as to say "never." At the same time, to be actively concerned about extraterrestrials visiting us in the face of what's actually going on in the here and now would be strange. I feel similarly about generative AI and even AGI. I may withhold certainty, but that doesn't mean I should withhold all judgment based on what is known and what's likely to be knowable. To plan for a future around AGI when we have lives to live in the present makes no sense to me. It feels literally anti-human.


Fair enough. As an ethicist, I have often found this line of argument curious because it seems to presume that our concern is zero sum. While that's true to some extent (we only have so much time and attention), I would like to think that it is possible to be deeply concerned about the harms in the here and now without dismissing or denying potential risk. And, unlike aliens, the development of general artificial intelligence is something over which we have some control, and which some humans are actively trying to bring about, so it seems like something we should be talking about. To me, it is the opposite of anti-human to worry about whether we should be trying to develop non-human intelligence over which we have little control. But I take the point that we should *also*--and indeed, primarily--be concerned about the risk of these models as they currently exist. Fully agreed.

I suppose I am just struck by the number of very smart people who dismiss these comparisons and/or worries as absurd on their face (and this includes computer scientists arguing AGI is possible but that it's absurd to think they could be conscious). Every time that happens I am genuinely curious about what they have seen or read to make them so confident, because I would actually *love* to be confident that these things are not and will not be possible. So when I asked about evidence I didn't mean to imply you needed to prove your claim with a reference list. I was mostly curious if there were one or two things you've read that were foundational for you on this front. On my side, here are a handful of the many things I've read that I find deeply unsettling/concerning:

1. Chalmers, "Could a Large Language Model Be Conscious?" https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/?

2. Watts, "Conscious AI is the Second-Scariest Kind" https://www.theatlantic.com/ideas/archive/2024/03/ai-consciousness-science-fiction/677659/?gift=b1NRd76gsoYc6famf9q-8kj6fpF7gj7gmqzVaJn8rdg&utm_source=copy-link&utm_medium=social&utm_campaign=share

3. The interpretability segment of the May 31st Hard Fork episode: https://www.nytimes.com/2024/05/31/podcasts/hardfork-google-overviews-anthropic-interpretability.html (and the research it is based on: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html --note particularly the "features" related to emotions and deception)

One final point (and then I'll be finished, I promise!): Because I have not yet read anything convincing on this front (and I follow Gary Marcus and have read Emily Bender's work), my sense is that the primary reason folks dismiss the possibility of AGI and/or AI consciousness is to shore up our commitment to addressing immediate concerns. If that's true, I'd much rather folks make this argument explicitly, as you have done in your response. It's more compelling, and it doesn't risk making the casual reader overconfident about what is and is not possible in the future!

author

In my book that'll be coming out next year, I essentially take a stance similar to yours, that we can't know the future and we may be surprised by some things, but that stance is primarily in the interest of pivoting to a much longer (really book-length) exploration of what we do know about reading and writing and learning and how we find meaning as individual humans in those experiences, and the kinds of threats that generative AI poses to them, not by the nature of the technology, but because of how those who are developing and selling the technology are operating.

The ethical concerns you raise are interesting to me, but for now, and by all reasonable measurements for my lifetime, they will remain thought experiments, literally the stuff of science fiction. On the one hand, yes, I recognize the potential/theoretical risk, but that risk is, IMO, for all practical purposes irrelevant. As you cause me to consider my own position (much appreciated), I find my feelings similar to my sentiments about the Effective Altruism movement, which offers a compelling theoretical framework, but when applied at the level the most powerful adherents suggest (and do), it becomes a permission structure to prioritize the purely hypothetical over the concrete issues we face now.

So, I would say that yes, my dismissal of these things is driven by a desire to focus the discussion where I think it belongs, but...I also find the speculations about AI consciousness unconvincing from a philosophical angle, that the search for consciousness is a product of motivated reasoning and the same kind of delusion which has us (me included) looking at the outputs of LLMs and seeing "intelligence." It could be, though, that as those articles you link to (particularly Chalmers) point out, we don't have a good enough understanding of consciousness to really make these distinctions.

The biological role in consciousness weighs especially heavily for me. I am aligned with a distinction Wendell Berry draws in his essay on Edward O. Wilson's "Consilience" between "creatures" (which includes, but is not limited to, humans) and "machines." AI will always be a machine. Maybe I'm engaging in a form of discrimination in finding this distinction meaningful, but it's a stance I'm comfortable with.


Thank you for the advice, John. But I don’t remember how many years have passed since I read it. And, by the way, I really liked it. So the advice turned out to be subtle, but very late....

Andrew, Moscow
