I’m increasingly convinced that the biggest AI boosters think everyone is dumb, and also want to keep them that way.
This week, Reid Hoffman, co-founder of LinkedIn and part of the original “PayPal Mafia” (along with Elon Musk and Peter Thiel), was on the Armchair Expert podcast to promote his new book, Superagency: What Could Possibly Go Right with Our AI Future? Hoffman is both a big investor in AI technology - including early funding for OpenAI - and an evangelist for our AI-mediated future.
I’m going to assume that Hoffman is sincere in his beliefs that AI has the potential to enhance human lives on a mass scale - putting aside what his vision of those lives will be like for the moment - but I wonder why, in pursuit of his vision, he has to peddle obvious bullshit.
For example, in the podcast, he’s asked about the problem of generative AI making stuff up, and he responds at the 40:51 mark:
Hoffman: The fear is…hallucination. (Hallucination is pronounced with a kind of sarcastic delivery.)
Monica Padman (co-host): What does that mean? I mean, I know what it means, but what does it mean in this context?
Hoffman: I invented shit…right?
Padman: Oh…
Hoffman: I invented shit, and sometimes it’s wrong. By the way, hallucination, imagination…same thing!
Padman: Right…
Hoffman: And imagination is sometimes a really good thing. So I want to keep all the imagination and use all the imagination.
This is wrong. The hallucinations of statistical models like ChatGPT are nothing like the imagination of humans. Human imagination is not the byproduct of statistical probabilities. A large language model hallucination happens through the exact same process as its non-hallucinatory output. You couldn’t even call it a glitch; the LLM is simply doing what it does, and we humans notice when what it generates through its probabilistic process does not align with what we know to be true.
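To make concrete how un-mysterious this is, here’s a toy sketch in Python (the vocabulary, probabilities, and scale are invented for illustration; a real model works over tens of thousands of tokens) of the one step an LLM repeats over and over: take a probability distribution over possible next tokens and sample from it. Note that there is no separate branch in the code for hallucinating; a right answer and a wrong one come out of the identical process.

```python
import random

def next_token(vocab, probs, temperature=1.0):
    """Sample one 'next token' from a model's predicted distribution.

    Toy illustration only: a real LLM computes these probabilities
    with a neural network over a vocabulary of tens of thousands of
    tokens, but the sampling step is the same in spirit.
    """
    # Temperature reshapes the distribution: higher values flatten it,
    # making less likely (more "imaginative") tokens more probable.
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    return random.choices(vocab, weights=[w / total for w in weights], k=1)[0]

# Invented distribution for a prompt like "The capital of Illinois is..."
vocab = ["Springfield", "Chicago", "Peoria"]
probs = [0.80, 0.15, 0.05]

print(next_token(vocab, probs))  # usually "Springfield," occasionally not
```

Run it enough times and it will eventually say Chicago. Nothing glitched; the program did exactly what it always does. We’re the ones who label the output a “hallucination” after the fact.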
Calling these things hallucinations is itself a mistake, because it suggests a kind of mind at work inside the large language model, and Hoffman takes advantage of that suggestion to confuse people.
Human imagination, on the other hand, involves letting our minds run freely in a truly creative, generative (pun intended) way.
Hoffman, who knows that large language models run on probabilistic processes, knows that what he is saying is fundamentally wrong, but he is counting on his audience not knowing it in order to sell his vision for the future, a vision in which we live lives entirely mediated by artificial intelligence.
Hoffman believes that AI will enhance human agency by giving us additional insight into ourselves through the collection and analysis of the data we generate by shopping, interacting on social media, surfing the web, and so on.
In a recent op-ed at the New York Times, Hoffman asked us to picture the future:
This monitoring will deliver this benefit:
Pause for a second to consider what Reid Hoffman must believe about our capacity to know ourselves: that our own desires are somehow mysterious to us until they are revealed by our data as interpreted by an algorithm. As it turns out, we already have very good technology for recording and reflecting on the “small moments” of our lives so they may be accessible and usable to our future selves.
I believe the technology is called…a journal.
Writing about Hoffman’s Times op-ed at her newsletter, Audrey Watters shows that Hoffman’s worldview is the opposite of increased agency and empowerment:
Imagine a world in which AI dictates your decision-making, limits your options about what you can and should learn, and thus forecloses your future. This is the disempowering and dehumanizing future of education and AI, one in which students' futures are constrained by the past – by their own past decisions and by the data trail of other students, those that the algorithms decree to have similar profiles.
Hoffman apparently believes humans are incapable of acting in their own interests in the absence of algorithmic surveillance. In order to sell his vision of the future, he is willing to obfuscate about how large language models actually work.
This is a snow job being perpetrated by people who think we’re too dumb to know better.
It is interesting to consider why the public may be so ripe for this message.
There is a writing experience in The Writer’s Practice that I have done dozens of times with students over the years titled “Who Are We?”
The experience is a rhetorical analysis of a commercial, an exercise in observing and then drawing inferences that reveal the subtext of what the commercial is suggesting about the American culture it portrays. Students always enjoyed this experience because it revealed how easy it is to identify the subtext if you take a handful of moments to look just under the surface of the message and presentation. One of the examples I’d use is a Bud Light ad called “Swear Jar” that was planned for the Super Bowl but pulled because of content objections.
The ad takes place in a typical, lifeless corporate office and opens with one employee asking about a jar partway filled with money sitting on another employee’s desk. She says it’s a swear jar that people will pay a quarter into any time they use profanity. The first employee asks who gets the money, and she replies, “I don’t know. We’ll use it to buy something for the office, like a case of Bud Light or something.”
"Fuckin’ Awesome,” the guy replies with the profanity just barely bleeped, while reaching into his pocket for change.
The rest of the ad plays out with every employee swearing up a storm in every context. When one employee asks another to borrow a pen, he sighs and ignores her until she says, “Can I borrow your fu**ing pen!” In another moment, when the copier jams, an employee exclaims, “poop.” Another employee pops his head out of his office to scold her for not swearing, and she screams, “Shut the f*ck up!”
The ad concludes with the employees gathered together, Bud Lights in hand, as the boss concludes his toast, “I’m so proud of you mother**ing c*cksuckers!”
Students always laughed at the ad. The jokes are pretty obvious and well-executed, but after our initial reaction, I’d ask them to look underneath. What do they notice about this office, the work these people are doing, how they seem to feel about it? What is the source of motivation that keeps them going?
Again, it’s not hard to see that the ad deliberately paints office work as mindless drudgery where the only source of pleasure is the idea that if you all swear enough, you’ll get to have a Bud Light.
Once we go through this exercise, students would report back that it seemed like every ad that showed people working suggested that work is supposed to be misery, and the only relief was to be found in post-work consumption of (most often) beer, but also sometimes driving your new car around a gently curving coastal road without anyone else in sight.
(The exception in how work is portrayed is ads for credit cards supporting small-business entrepreneurs who are living life doing what they love, thanks to the benevolence of banks.)
In their essays, after students described the subtext of the ad, I’d ask them to comment on how they felt about what they’d revealed, and for those using ads that related to work, many of them would say that even at age 18-19, while they hoped to like their jobs, they expected they would not.
This never stopped being sad to me, but considering the messages the culture was providing them, these feelings were not surprising.
How we see “work” is central to a series of ads for the Apple Intelligence AI “enhancements” for the iPhone, and the main message from Apple appears to be that the people who use its new product are idiots, but thanks to Apple Intelligence you can fool your boss (who is also an idiot) into thinking you’re a star.
The central character of “Write Smarter” is Warren, who works in a generic open office environment, though as we see from the outset, Warren does not work. He is shown:
Bouncing up and down in his chair.
Playing with a chain of paperclips while making sound effects.
Playing with the tape dispenser.
Licking an envelope, pausing to appreciate the apparently desirable flavor, and then going in for another helping.
Warren is an idiot and a doofus. He’s a little chubby, which would be fine, except his clothes also don’t fit, the waist of his pants is too tight, his hair is mussed, and the coworkers around him appear so inured to his farting around that he barely has a presence in the world.
Warren pauses from his activities to type a short email addressed to his boss into the phone. The email apparently references a current project, but Warren can only write in slang and text speak. Sending this kind of message to a supervisor would be a disaster.
But wait! Warren has Apple Intelligence! He hits the “professional” button on the phone’s “rewrite” feature, and sends it to the boss.
Cut to the boss, who gets an office with walls and a window to the interior, reading Warren’s email out loud in amazement, as though he did not know Warren was capable of such brilliance. “Warren?” he asks himself. “Huh,” he replies to himself, clearly at least a little impressed.
Cut back to Warren now wielding his paperclip chain like a martial artist using nunchucks as a chorus declares Warren to be a “genius.”
This Apple Intelligence ad is not quite as dispiriting as the Google Gemini ad that suggested a good way for a young girl to show her appreciation for a gold-medal-winning athlete would be to have AI write her fan letter, but it’s pretty close.
Apple is selling its product on the back of a message that its users are idiots and don’t mind staying that way since AI is here to help. The average person knows they’re not nearly as big a waste of space as Warren, so just imagine what you could do with this amazing tool!
But that’s the thing. What anyone could do with that tool is the exact same thing as Warren. Apple is suggesting that to some degree, we are all Warrens, and we may as well just get with the program (literally) and start relying on this technology to do our work for us. After all, the boss will think you’re a genius!
It’s being sold as liberating, but it is truly dystopian. It is, as Audrey Watters shows us, a life that is being “foreclosed,” as your fortunes at work are tied to technology you don’t understand, and which requires you to maintain a consumer relationship to the technology.
It’s gross and anti-human, but this is the game at the moment: to convince people that they will be better off abandoning their humanity to AI, because failing to do so will leave them behind even the losers (like Warren) of the world.
There’s a reason the AI boosters want us to be ignorant about how their products work, and that’s because, according to new research, the less you know about AI, the more receptive you are to its use. They want us to just accept that it’s magic and move on already. This is the future!
I recommend
’s breakdown of these issues at his newsletter, where he explores how this product positioning relates to education. The very core of education is to empower students to exert agency over their lives, to actively work at gradually making themselves less ignorant. It’s very likely that this technology could be helpful in pursuit of that goal, but the way AI in education is being positioned is the opposite of it. The tech companies see picking off teachers and replacing them with chatbots as low-hanging fruit, but it doesn’t need to be this way.
To see one example of what genuine innovation with these tools might look like, I recommend checking out what Mike Caulfield is up to at his newsletter as he explores how to use LLMs as a Toulmin analysis machine. Mike’s work is objectively cool and potentially hugely useful, but in order to do this work he needed to spend years of not being Warren: years of reading, thinking, and learning, years of sharing what he knows with a community of other scholars.
We should see some of these advertising and marketing strategies as indicators of a certain level of desperation within the industry. Lots of people who try out generative AI find it not actually that useful. Google and Microsoft are so desperate that they’re shoving their unwanted AI products into users’ faces, making them front and center by default and giving individuals limited autonomy to make them go away. If we’re not yet Warrens, maybe they can turn us into him through deskilling.
If this technology is ever going to be of any genuine use, we must not accept tech companies treating us like ignoramuses.
I have a way of approaching these issues that I think works pretty well.
The last three chapters of my imminently forthcoming book, More Than Words: How to Think About Writing in the Age of AI are titled, “Resist,” “Renew,” and “Explore.”
I start with resist because if we’re ever going to truly benefit from generative AI, or whatever comes next, we must not give in to the demand that we remain ignorant about how this stuff works. We have to call Reid Hoffman out when he lies to us. We have to reject products like Apple Intelligence or Microsoft Copilot when they’re forced into our lives and work without invitation.
Unless you’re interested in living life as a Warren, or truly believe that Reid Hoffman’s world of knowing yourself better through data is desirable, our first duty is to remember that we are humans.
Links
This week at the Chicago Tribune I reviewed Memorial Days, Geraldine Brooks’s powerful exploration of the death of her husband, Tony Horwitz, and her work of grieving and remembering him.
At Inside Higher Ed I wrote about how I think it’s a mistake for higher education institutions to duck and cover from the early assault of the Trump Administration; instead, they need to fight, not just for their own sake, but in the interests of our democracy.
On Tuesday, in addition to it being the release date for More Than Words, I’ll be helping launch a new newsletter under the umbrella of the Center for the Defense of Academic Freedom (for which I’m one of the fellows). If you care about higher education, and its role in preserving democracy and providing people access to opportunity, give it a subscribe. All content will be free.
Not really about books, per se, but I found this piece from
a beautiful reflection on the work of teaching and learning.

I found this piece by
about how some journalists are viewed as “talent,” giving them an advantage in the market, insightful. Mostly what I took away is that we don’t have structures that allow the people who are doing the work to just do the work and not worry about becoming “talent.”

Here’s another great post from
at
on the fake book covers you see on television or in movies.

examines the question of whether or not “the woke mob is ruining publishing.”

Via my friends
, “Forgotten Literary Moments in Which a Cat Throws Up and No One Wants to Deal with It” by Katie Burgess.

Recommendations
1. Shriver by Chris Bowen
2. The Best American Short Stories 2024 ed. by Lauren Groff
3. The Best American Mystery & Suspense ed. by S.A. Cosby
4. Conversations With Friends by Sally Rooney
5. Lucky Jim by Kingsley Amis
Joe F. - Channahon, IL
This is a book I like to recommend every so often because I think it’s under-read: The Italian Teacher by Tom Rachman.
1. The Log from the Sea of Cortez by John Steinbeck
2. Barchester Towers by Anthony Trollope
3. Swallows and Amazons by Arthur Ransome
4. Bridge to Terabithia by Katherine Paterson
5. The Nine Tailors by Dorothy Sayers
Dawn (on behalf of her husband, Ian R.)
This is a tough one, but I feel like I’ve got a good fit: The Line of Beauty by Alan Hollinghurst.
Alrighty, Tuesday is indeed the release day for More Than Words: How to Think About Writing in the Age of AI, and I’ll be coming to you with a reflection on the book and the writers and thinkers who influenced it as a gesture of thanks and form of celebration.
For now, this is really your last chance to pre-order. Soon, you’ll just have to buy it.
Who has been using these generative AI tools? How have you been using them? Are you worried about becoming a Warren, or perhaps raising future generations of Warrens? Weigh in in the comments.
Thank you, as always, for taking the time to read this newsletter. If you did make it all the way here, perhaps you’d like to subscribe?
Take care,
JW
The Biblioracle