I can't believe you are having trouble giving away books. I want free books . Exciting news about your book. I admire anyone who can put thoughts together in such a way as to produce a book.
Congratulations on the new book!
While I agree with most of this, using AI is also a form of thinking. The economic lens is not the only way to approach a technology, whether it be writing or AI.
AI can be used to deepen thought and be more creative, not just do things faster.
When you say "AI is a form of thinking," are you referring to the AI itself or its use as a tool? To say that AI can be used to deepen thought isn't much of a substantive claim. Anything can be used to deepen thought. I could hire someone to punch me in the face and claim it helps me deepen thought. What is possible with a piece of technology isn't particularly meaningful in the face of how it is actually being used and positioned by the structural forces that significantly govern its use. Right now, generative AI is being deployed almost exclusively as a substitute for human activity, as opposed to an enhancement of it.
I'm with John on this one. In a letter that I gave to my high school students last week (after we had discussed ethical uses of AI), I wrote: "Here is the bottom line as I see it: thinking is life. And writing is thinking. So if you ask an AI to do the writing for you, you outsource your thinking. And if you farm out your thinking, you farm out your life."
There is a whole lot of space between writing without AI and outsourcing my thinking to AI. It is a false dichotomy.
Using AI isn't just about having it do stuff for us. If you want a glimpse of AI outside the economic lens, stop reading articles written by investors 😆 (I really hate that oft-referenced article by Andreessen - it has little to do with LLM technology).
There are a whole slew of people in AI Operations exploring how AI can help us think and communicate in creative, not just expedient, ways.
Fair enough, Lance. But for high school sophomores, most of whom are really just beginning to think with any kind of depth or nuance, it seems nearly impossible to have them enter into a writing project and have AI be a tool that doesn't short-circuit essential steps in the learning process.
And I'm not suggesting that writing is the only tool for thinking. But it's one with many important steps, and I fear that farming out things like brainstorming, making and refining an argument, and finding and analyzing evidence will greatly atrophy those skills, especially for young people who are still in the midst of developing them.
Using AI correctly isn't about farming out any of those things ... it's about integrating. For example, my students have learned more about the invention process using AI than they ever would just brainstorming in a notebook.
The writing process doesn't disappear with AI.
Does writing collaboratively with other students atrophy these skills?
I put my cards on the table and said I was talking about 15-year-olds; it looks like you teach at the college level, Lance. That seems like a different ballgame to me.
But I remain open. If you'd describe here or send along (jbergerwhite@dist113.org) any of your assignments in some detail, I'd be grateful. I'd love to know what productively integrating AI into the writing process looks like in your classrooms.
I think there is some question-begging going on in lots of the discussions about using this technology, and this is an example. What do we mean when we say we're using AI "correctly"? You have one definition, rooted in your system of values, but in much of the actual use of AI (both in and out of education), "correct" use is really about speed, efficiency, and replacing human labor with machines. "Correct" is a highly context-dependent concept, not just something floating around that there's actually widespread agreement on.
Its use as a tool. Many of the outcomes we hope to achieve in writing courses can be achieved by using AI ... sometimes better than with writing. Writing a good prompt takes just as much rhetorical knowledge, if not more, than writing. Building a knowledge graph that gets AI to analyze literature a certain way requires a lot of thinking.
But I'm not saying it replaces writing. The two together are powerful. My own thoughts and writing, combined with thoughtful use of AI can do powerful things ... and yes, help me think through my thoughts.
Writing has never been the sole tool for thinking ... and it isn't always the best one, either. We just like to say so, because we teach writing. 😉
You write "Many of the outcomes we hope to achieve in writing courses can be achieved by using AI ... sometimes better than with writing" - and I think that gets to the crux of the matter. What are those outcomes?
I don’t want bad/mediocre/good essays, even though that’s the current “product” students produce - rather, those essays are means to an end: to deepen their thinking, to practice various habits of mind when engaging complex topics, to learn to consider audience, to expand their vocabulary and rhetorical competence, and to wrestle with reality, etc.
Whether using AI as a teaching tool can do that better than my existing pedagogy is the question. And I haven’t seen it yet. But I am open to being convinced, all while skeptically noting the roots of LLMs in exploitation and their mirroring of existing biases.
Yep. Nearly all of those outcomes can either be achieved or enhanced using AI.
I'm not saying you have to use AI in your class ... just realize that the teachers who are using it have good reasons.
In this case, "enhanced" is begging the question. What, exactly, is enhanced?
There is process and there is product, yes? Introducing generative AI tools to the process inevitably changes that process. As to how we judge what that introduction means, we must look at both the product and the process. What are these powerful things you speak of? What distinguishes them from what can be produced without the additional tool of generative AI? I never made a claim that writing is the sole tool for thinking, just that writing is best characterized as a form of thinking. I think the burden should be on those who champion the use of AI to demonstrate its efficacy, not just in the generation of products, but in the enhancement of process, rather than the other way around as it seems to be currently.
A ton of people out there are showing how AI enhances process - AI Operations. It is all about people and processes. That's actually the best part of working with AI ... not necessarily the product. Maybe it's getting lost in all the investor rhetoric and bro culture.
There are product- and process-oriented approaches to writing. There are also product- and process-oriented approaches to AI. They are both technologies.
Can you show me some of these people who are doing this? It's not that I doubt their existence, but mostly what I see are experiments that result in outcomes that seem fine, but which also have obvious trade-offs. They are doing something different, which, again, okay, but this is a different frame from what I'm talking about when I'm working through what thinking through writing entails.
"Writing a good prompt takes just as much rhetorical knowledge, if not more, than writing."
I have a hard time believing that you actually believe this. But I'm curious why you feel that developing skills matters less than "outcomes"? Would you say that we shouldn't teach students math skills but just teach them how to copy and paste a math question into ChatGPT?
Try writing a good prompt! Not really the same as a calculator.
I'm not saying that students should auto-generate all their writing ... just that you can learn a lot about rhetoric through prompt crafting.
I'm not sure where you got the idea that I'm focused on outcomes more than skills ... except maybe because that's how they assess us in academia.
Outcome: Define and apply key rhetorical concepts to communication
Skills: Giving a speech, Writing a blog, Creating a video game, Designing a website, Creating a poster, Crafting a prompt
This seems mostly hypothetical. Existing public chatbots are still quite bad at understanding anything and tend to abandon prompts quickly in their output. Even very simple prompts are hard for current chatbots to handle. E.g., I did a test asking chatbots to write "six word stories" about baby shoes and not a single one gave me six words. Instead I got five word slogans or eight word sentences.
But I also think there's no reason to believe future chatbots will function in the same way as current ones, including what inputs work for them. Prompts might not even be all or mostly text in the future. So trying to teach to current chatbot prompts seems like teaching something that will be immediately outdated.
Oh, it's not hypothetical. It's demonstrable on a lot of different levels.
That's why crafting prompts is a skill ... it is about structuring information in a way that LLMs can process. They don't really "understand," they process.
I agree ... prompting will not stay the same. It may not even exist in a few years (though I am skeptical on that account). But structuring writing, content, and knowledge for AI systems isn't going away. Nor is building AI tools to enhance writing.
Writers (or rhetoricians) should be the ones doing this, not computer engineers.
I’m very excited for this new book! If you’re delving into the connections between war and technology and AI, I really recommend looking at the history of cybernetics (and thinking about why the AI people broke with the cybernetics folks, e.g., Norbert Wiener). The Cybernetics Moment by Ronald Kline is pretty good. There are also some interesting figures in and around early computing who asked good questions about the hype (Weizenbaum, Dreyfus). I’m sure you know all this. I wanted to talk more in Teaching Machines about the cybernetics stuff, but it felt a little tangential. :)
I had to cut myself off because the rabbit hole became very fascinating and it really is only a tangential point, but as you know, at every one of these technological turns, there was someone, or several someones, saying "not so fast" even as those warnings were barreled past. I'm pretty sure the best-case scenario for this book is someone finding it after I'm dead and saying, "that guy really was on to something."
Fascinated by your AI insights and thinking around its uses.
FYI: I want free books.
Congratulations! Great post. I can’t seem to give away books either. Wanna swap?! 😂
Congrats on the new book! Yes, I want free books.
Also, thinking is writing. Having mostly retired, I find myself feeling compelled to write what I’m thinking about. Thus, my blog. A great column, John. I’m interested in free books, but I’ll pay for yours. I’m anxious to read it.
As someone who never got good writing instruction it's reassuring to hear that it's good to change one's mind multiple times throughout the process :)
I'll take free books!
I’m excited for your thinking on this to develop as you flesh out this new book!
I just want to double down on the "writing is thinking" concept for a sec. I write every day, ergo I think every day. My thought process is being sharpened by deliberate thoughts about deep, important subjects (or just goofy thought experiments) every day, and that really helps form my own worldview over time. Spelling it out for others and getting feedback has been unimaginably helpful.
Free books are excellent, but oh my gosh, I do not need more books! I went to the library yesterday to pick up three holds that came in, and now have seven books checked out, not to mention piles lying around, and the Hyde Park-Kenwood book sale is in a couple of weeks. I'm doomed, I think.
I hear you!
I want Free Books
I want free books!
Such a good idea for a book, excited for it!
The idea that writing is thinking has been so freeing for me personally and for my students. Thank you for bringing it to the foreground in different contexts.
PS. Free books?!?! I’d love one!