"Reckless" is the correct word for what these AI companies are doing; putting these bullshitting LLMs on the market for us to use, all the while extolling the virtues of our putting queries to them and expecting that they should give us correct answers.
I understand that LLMs don't "reason" the way humans do.
I'm a relatively new AI chatbot user, and so I'm trying to incorporate the above understandings with those listed below:
1). Both ChatGPT and Claude have helped me think about the choices I had to make between two particular surgeries, and both have given me data and guidance that I've taken to my surgeons, who were both astonished by the accuracy of both ChatGPT's and Claude's answers; and
2) ChatGPT was able to correctly read and translate my great-great-great-great-great-grandfather's vital stats from a birth record found in a Swedish church's "vital stats" book from the early 1700's. The calligraphy in which the data were written was a combination of both a "Latin" and a "Gothic" hand, as ChatGPT declared to me, and the spelling and syntax were challenging in and of themselves. Nonetheless, ChatGPT sorted it all out and I confirmed the validity of its answers by checking them with a professional genealogist.
These 2 are not paeans to AI, given their deficiencies (noted in this very Substack); that said, I have mixed feelings about how problematic AI is, coupled with how well it has worked for my two particular recent tasks listed above.
Great piece, John. We've seen how social media use has affected people without the necessary critical thinking skills to engage with it, sending them down conspiracy rabbit holes and further polarizing them on both the left and right. I saw this with my older sister, who went from being respectful of my knowledge as a professor to being disdainful and insisting that her opinions were just as reality-based as mine. It's terrifying to think about those same folks using AI without the proper education or critical thinking skills to interpret the results.
I find ChatGPT to be most helpful as a souped-up search engine or as a thinking partner. Yesterday, I shared that I'm working on a song with these chords, but these notes don't fit the key I think it's in, so what are my options? It gave me suggestions to explore and I found what I needed much more quickly than if I'd pored over a music theory textbook. But I've been using computers with rigorous logic since the early 80s and have a good handle on how to use AI systems so far.
The facade of "authority" that LLMs operate with seems like a significant amplifier of the kind of problem you illustrate with the example of your sister. Why trust any human when the intelligent machine is giving you an answer, quite possible the answer you're looking for, like "Yes, you're special and talented."
Your example is a great one. You approached the tool with an objective and plan and a heuristic that would allow you to judge the ultimate quality of the result. I use the framing of a practice (skills, knowledge, attitudes, habits-of-mind) as the thing we must have to make this productive use. You have these experiences in multiple domains (music and computer interfaces) that allows you to have this result.
I think we seriously underestimate the challenge of providing the necessary experiences that help students especially develop these practices. The notion that an AI "native" will make better use of this than someone like you never made sense to me.
The only historical precedent we have for inhuman beings that mimic human behavior is in myth and legend and so myth and legend are where we should go for guidance.
Well, no, it doesn't hate because it is untethered to any larger moral sense. But models have been shown to demonstrate the biases and hate of the human material they've been trained on.
I was this morning labeled an "enemy" on X for asking a question about ChatGPT capabilities. Have gotten a lot of insults and hate today. Sometimes an entity untethered from all the emotion is refreshing.
X is a cesspool that brings out the worst in humanity. This isn't to excuse people hurling abuse at you, but you could extend the "user error" analogy genAI developers get the benefit from to these social spaces. You've done nothing wrong that deserves abuse, but received it anyway. This happens to just about everyone on X at one point or another. To attempt to post there is to risk that abuse.
I think that's bad and I wish platforms would do something about that, but they not only don't, they go the other way because abuse too is a form of engagement. I'm glad that Substack still allows authors to police their own comment sections and block others from viewing their work or interacting with them for that reason.
I don't know that LLMs are untethered from emotion given they're trained on our outputs, but they don't assemble syntax from an emotional place. I'd have to think about whether or not that's truly an overall advantage. I guess it would depend on the context.
Thank you for your kind words and you know too that my particular oblique communication style invites misinterpretation. I'm used to that. I guess my point mostly is that I'd like to see more attention to the performed kindness of AI models (or sycophancy, or what have you) in contrast to the bile of humanity.
In the end, I think every instance here is optimized around engagement. On social media, bile and hate is the most powerful engine for engagement and attention. Interacting with a chatbot, it appears the opposite, that we respond to a friendly or nurturing or even sycophantic presence.
Think off the top of my head, it's entirely possible that the pleasure people take in the bot being so obsequious is a reaction response to experiencing the opposite in the world.
This is why, if any higher ed institution were to ask me (which they don't), I would say they should be presently optimizing for meaningful human contact that doesn't treat students like customers, but people. This is the differentiator.
"Has there ever been a product allowed to get away with this without regulation, without oversight, and with so many people blaming the human for error when interacting with the technology explicitly in the ways they are being encouraged to do?"
Too many politicians are BULLSHITTERS, aren't they? I don't respect many politicians so should that guide how I might feel about AI models which bullshit?
Speaking of bullshit, the quotes by the Ohio State administrators and professor in this article on their requiring the use of AI in all classes at OSU are monumental BS. Maybe they don't need it there since they've perfected the use. Or maybe it's speaking for them.
I'm so tired of the "the genie's already out of the bottle so [insert some form of surrender here]" kind of thinking. And also, the professor who talked about the unique and creative writings he's gotten from his students using this--he references his favorite one being an essay on karma and returning shopping carts. Quick Google search shows that's a trope that's been rolling around the internet for decades. I wonder whose writing was stolen by Chatgpt to produce that student's "creative" work?
I found those things dismaying too. I'm writing about the CalState and Ohio State initiatives at my Inside Higher Ed column this week. I'm totally with you about this argument of inevitability. We have no idea what's to come and the idea that we have to give ourselves over to this tech because it's inevitable is corrosive.
Another stimulating essay.
"Reckless" is the correct word for what these AI companies are doing; putting these bullshitting LLMs on the market for us to use, all the while extolling the virtues of our putting queries to them and expecting that they should give us correct answers.
I understand that LLMs don't "reason" the way humans do.
I'm a relatively new AI chatbot user, and so I'm trying to reconcile the above understandings with the experiences listed below:
1) Both ChatGPT and Claude have helped me think about the choice I had to make between two particular surgeries, and both have given me data and guidance that I've taken to my surgeons, who were astonished by the accuracy of the answers; and
2) ChatGPT was able to correctly read and translate my great-great-great-great-great-grandfather's vital stats from a birth record found in a Swedish church's "vital stats" book from the early 1700s. The calligraphy in which the data were written was a combination of a "Latin" and a "Gothic" hand, as ChatGPT declared to me, and the spelling and syntax were challenging in and of themselves. Nonetheless, ChatGPT sorted it all out, and I confirmed the validity of its answers by checking them with a professional genealogist.
These two examples are not paeans to AI, given its deficiencies (noted in this very Substack); that said, I have mixed feelings, weighing how problematic AI is against how well it has worked for the two recent tasks listed above.
Great piece, John. We've seen how social media use has affected people without the necessary critical thinking skills to engage with it, sending them down conspiracy rabbit holes and further polarizing them on both the left and right. I saw this with my older sister, who went from being respectful of my knowledge as a professor to being disdainful and insisting that her opinions were just as reality-based as mine. It's terrifying to think about those same folks using AI without the proper education or critical thinking skills to interpret the results.
I find ChatGPT to be most helpful as a souped-up search engine or as a thinking partner. Yesterday, I told it I was working on a song with a particular set of chords, but that some notes didn't fit the key I thought it was in, and asked what my options were. It gave me suggestions to explore, and I found what I needed much more quickly than if I'd pored over a music theory textbook. But I've been using computers, with their rigorous logic, since the early '80s, and I have a good handle on how to use AI systems so far.
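To make the music-theory question concrete: the core of "which keys fit these notes?" is mechanical, and a few lines of code can answer it. Here is a minimal Python sketch (the note list, the `keys_containing` helper, and the example melody are all hypothetical illustrations, not anything from the actual exchange with ChatGPT) that lists which major keys contain a given set of notes:

```python
# Hypothetical sketch: enumerate the major keys that contain a given set of
# notes -- roughly the "these notes don't fit the key" question above.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def major_scale(tonic: str) -> set[str]:
    """Return the seven pitch classes of the major scale on `tonic`."""
    start = NOTES.index(tonic)
    return {NOTES[(start + step) % 12] for step in MAJOR_STEPS}

def keys_containing(notes: list[str]) -> list[str]:
    """Return every major key whose scale contains all of `notes`."""
    return [tonic for tonic in NOTES if set(notes) <= major_scale(tonic)]

# Example: a melody using D, F#, A, and B fits D, G, or A major
# (spelled with sharps only, ignoring enharmonic flat-key spellings).
print(keys_containing(["D", "F#", "A", "B"]))  # -> ['D', 'G', 'A']
```

The point of the sketch is only that the question has a mechanical core; what the chatbot adds is the conversational framing and suggestions around it.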
The facade of "authority" that LLMs operate with seems like a significant amplifier of the kind of problem you illustrate with the example of your sister. Why trust any human when the intelligent machine is giving you an answer, quite possible the answer you're looking for, like "Yes, you're special and talented."
Your example is a great one. You approached the tool with an objective, a plan, and a heuristic that would allow you to judge the ultimate quality of the result. I use the framing of a practice (skills, knowledge, attitudes, habits of mind) as the thing we must have to make productive use of these tools. You have experience in multiple domains (music and computer interfaces) that allows you to get this result.
I think we seriously underestimate the challenge of providing the experiences that help students, especially, develop these practices. The notion that an AI "native" will make better use of this technology than someone like you never made sense to me.
The only historical precedent we have for inhuman beings that mimic human behavior is in myth and legend, and so myth and legend are where we should go for guidance.
And these sources urge extreme caution.
Jen Shahade has a great post about ChatGPT cheating against her at chess - the tone is very similar to what Guinzburg experienced: https://jenshahade.substack.com/p/chatgpt-is-weirdly-bad-at-chess
What AI does not do is hate, though. People still have that market cornered.
Well, no, it doesn't hate, because it is untethered from any larger moral sense. But models have been shown to reproduce the biases and hate of the human material they were trained on.
This morning I was labeled an "enemy" on X for asking a question about ChatGPT's capabilities. I've gotten a lot of insults and hate today. Sometimes an entity untethered from all that emotion is refreshing.
X is a cesspool that brings out the worst in humanity. This isn't to excuse people hurling abuse at you, but you could extend the "user error" excuse that genAI developers get the benefit of to these social spaces. You've done nothing wrong that deserves abuse, but you received it anyway. This happens to just about everyone on X at one point or another. To attempt to post there is to risk that abuse.
I think that's bad, and I wish platforms would do something about it, but they not only don't, they go the other way, because abuse too is a form of engagement. For that reason, I'm glad that Substack still allows authors to police their own comment sections and block others from viewing their work or interacting with them.
I don't know that LLMs are untethered from emotion given they're trained on our outputs, but they don't assemble syntax from an emotional place. I'd have to think about whether or not that's truly an overall advantage. I guess it would depend on the context.
Thank you for your kind words. You know, too, that my particular oblique communication style invites misinterpretation; I'm used to that. My point, mostly, is that I'd like to see more attention paid to the performed kindness of AI models (or sycophancy, or what have you) in contrast to the bile of humanity.
In the end, I think every instance here is optimized around engagement. On social media, bile and hate are the most powerful engines of engagement and attention. Interacting with a chatbot, it appears to be the opposite: we respond to a friendly, nurturing, or even sycophantic presence.
Thinking off the top of my head, it's entirely possible that the pleasure people take in the bot being so obsequious is a reaction to experiencing the opposite out in the world.
This is why, if any higher ed institution were to ask me (which they don't), I would say they should presently be optimizing for meaningful human contact that treats students not like customers but like people. This is the differentiator.
100%
Here, you can see why having a conversation with ChatGPT appeals: https://x.com/anecdotal/status/1931768642376774038
"Has there ever been a product allowed to get away with this without regulation, without oversight, and with so many people blaming the human for error when interacting with the technology explicitly in the ways they are being encouraged to do?"
Donald Trump as product/technology
Too many politicians are BULLSHITTERS, aren't they? I don't respect many politicians, so should that guide how I feel about AI models that bullshit?
Speaking of bullshit, the quotes from the Ohio State administrators and professor in this article, on requiring the use of AI in all classes at OSU, are monumental BS. Maybe they don't need the training there, since they've already perfected the use. Or maybe the AI is speaking for them.
https://www.theguardian.com/us-news/2025/jun/09/ohio-university-ai-training
I'm so tired of the "the genie's already out of the bottle, so [insert some form of surrender here]" kind of thinking. And then there's the professor who talked about the unique and creative writing he's gotten from students using this; he cites as his favorite an essay on karma and returning shopping carts. A quick Google search shows that's a trope that's been rolling around the internet for decades. I wonder whose writing was stolen by ChatGPT to produce that student's "creative" work?
I found those things dismaying too. I'm writing about the Cal State and Ohio State initiatives in my Inside Higher Ed column this week. I'm totally with you on this argument of inevitability. We have no idea what's to come, and the idea that we have to give ourselves over to this tech because it's inevitable is corrosive.
I KNEW you'd be on top of this.
John, have you read this article in the Atlantic? "A Computer Wrote My Mother's Obituary": https://www.theatlantic.com/technology/archive/2025/06/ai-obituaries-chatgpt/683096/?gift=7QbJfJ55UWEj0J5E-fg0vAj0OxQXYxsbTwaQwtMBaZE&utm_source=copy-link&utm_medium=social&utm_campaign=share
Curious what you think?
I spent some time breaking down how ChatGPT comes to bullshit the bullshit it bullshits... and came up with a ton of fun examples while investigating:
https://ramblingafter.substack.com/p/why-does-chatgpt-think-mammoths-were