I'm interested in AI as I've always looked at new technology to see how it can help my life be better. I admit to being a computer nerd but there is no part of me that would want it to make decisions for me based on my past actions or thoughts. I would hope that I've matured and changed over the 80+ years I've been alive. I do think I know myself. I loved working and still enjoy a mental challenge. Can't wait to get your book.
You're describing a life of being truly engaged, which to me is the goal of life, but it seems like there's a lot of sentiment for a life of passivity, of disengagement. Where did we get the idea that this should be desirable?
I've had this background thought while listening to Hoffman--what's off? Thank you for putting words around it. I'm interested in where his ideas about outsourcing life decisions intersect with Zuckerberg's male energy. Makes me think of bell hooks, who wrote that patriarchy demands "psychic self-mutilation." Allowing an LLM to search your email to find an answer to an emotional question? Sheesh. Anyway, thank you!
Hoffman is a much "friendlier" face than some other tech titans, but he is indeed "off." His book is an infomercial and the way he blows past complications makes you wonder what he's hiding. It feels like a snow job, but what is underneath it?
the lived experience of not wanting to live with the consequences of one's own decisions or feelings or thoughts is what drives him (and others) to outsource this kind of personal decision making/feeling/expression. i wonder if it is really more positive and productive for him to rely on this kind of AI and normalize it among those of us who prefer to experience life through natural consequences. i have a father who is abusive and impulsive and all the things. he tends to be a "keyboard warrior" and has what i would call fits where he says or texts or emails things that are hurtful and harmful to others and to his relationships. before AI as we know it now, i used to wish for a "text robot" that would answer his texts for me -- it would help him feel engaged with while sparing me the mental/emotional/relational damage done during these exchanges. now he admits to chatting for hours with AI. it is perhaps my wish come true & i have very complicated feelings about it.
I'm a veteran English teacher, too. Warren would not send that message to his boss if he thought his boss cared. My students and I work with the bot together. No student works alone. And to Mike Kentz's persistent point, with AI on the scene, we will have to grade the process work (say, good journaling) instead of the product. Once the student feels like a product--or lets his work stand in for him--it's game over, and the Hoffmans have won. So the redesign of education will perforce take AI into account, and the structures--including little phrases like "my students"--will have to change to be sure *caring* is still what that possessive means.
I tried to listen to Hoffman on a London School of Economics podcast of some lecture he gave there last year, figuring maybe he'd try harder in front of an academic audience. Nope. I swear his talk sounded like ChatGPT wrote it. Nothing interesting or original in the fifteen minutes I listened to it before I couldn't take it anymore.
Thanks for the pointer to Mike Caulfield. I had not seen his posts about the Toulminzer. Good stuff!
I keep thinking about the comparison that was made between hallucination and imagination, and it really bothers me (which is why I needed to come put this in words). You addressed it correctly and made a great point, and I've been trying to work out why he even made this comparison. To me, hallucination is something we see or perceive that isn't real but that we believe is real. Imagination is letting the mind create ideas we know aren't real (yet) but that are possibilities. The crucial difference is awareness. With hallucination there is no awareness that the hallucination isn't real, until maybe afterwards, if we discover or are told it isn't. With imagination we have complete awareness that what we are imagining is not real (yet); it is a conscious creative process. AI, as far as I know, makes no distinction between what is real and what is not -- it just presents output. I'm sure I will learn much more when I read your book, but that distinction seems tremendously important to me. Is it correct to say that AI generates "answers" or "content" but does not specify the veracity or reality of what it produces? Does its usefulness rely on human intuition to catch an error or something untrue or unreal? That doesn't seem very useful to me. Maybe I am misunderstanding AI?
Since Jan 20th it’s harder to stay invested. The sorrow is overwhelming. But, thanks to social media, I’ve found a group of women who feel as I do and want to fight for democracy.
Interesting to compare those ads to the Google commercial “Reunion” from a decade ago: an altogether more human-centered vision of how technology could improve lives. https://www.youtube.com/watch?v=gHGDN9-oFJE
another podcast, another AI booster: Sam Altman's response to the worry that students can't start with a blank page anymore: "for me writing is outsourced thinking and very important. But as long as people replace a better way to do their thinking with a new kind of writing, that seems directionally fine."
https://www.ted.com/talks/rethinking_with_adam_grant_sam_altman_on_the_future_of_ai_and_humanity
:(
Oy vey. This is not directionally fine. I had to stop listening to that episode because, much like the Reid Hoffman one I cite in the newsletter, it made me too agitated to continue.
I made it all the way through but I'm still agitated
I like Adam Grant because he's generous to the people he talks to, but there has to be a limit to that generosity, IMO. It seems clear he knows that Altman is spouting shit, but then gives him an out by accepting that Altman is simply thinking out loud. I don't believe that. Altman's stated goal is to create a godlike superintelligence. He should be held to the strictest scrutiny.
I write and teach writing. For me, it is a skill and an art that is enjoyable. My students feel the same way, in large part because they are homeschoolers. We are not pressured to meet someone else’s expectation of what they should be writing.
That said, I tried AI for writing a couple of times when I had to prepare a research resource for my students on short notice. Since I was asking for something specific that has an accepted set of “steps,” it worked well to give me the basics, which I could then tweak and format. Other than that purpose, I haven’t found it very helpful. I write because I have something to say, so having something else write it for me just doesn’t make sense.
When I try to think ahead to how this will look in the offices of Warrens everywhere, I wonder if anxiety and imposter syndrome will reach a new high. There will always (I hope!) be people who write well naturally. Those who hide behind AI to the extent Warren does will most likely live in fear of being found out!
One last thing - when I married my husband, his writing was pretty atrocious, although pretty typical of office communication (i.e., completely unclear due to wording and punctuation errors). I proofread his writing but didn’t fix it for him. I insisted on going over every mistake I found, and explaining why it needed to be changed. In a couple of months, his writing had improved so much that he no longer needed me. AI would be more useful if it could edit and explain errors to students! I do use Grammarly (free edition) at times for this purpose.
Thank you for writing this post. It’s the best description of what I see wrong with the way AI is being sold, in words better than mine. I also just ordered your book. Looking forward to reading it this weekend.
Thanks! I think the AI companies are overplaying their hand and turning people off because deep down, we don't want to be absent from our own lives. I hope the book helps you with your own thinking on what this tech means.