We already knew where OpenAI’s CEO, Sam Altman, stands on artificial intelligence vis-à-vis the human saga: It will be transformative, historic, and overwhelmingly beneficial. He has been nothing but consistent across countless interviews. For some reason, this week he felt it necessary to distill those opinions into a succinct blog post. “The Intelligence Age,” as he calls it, will be a time of abundance. “We can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now,” he writes. “Although it will happen incrementally, astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.”
Maybe he published this to dispute a train of thought that dismisses the apparent gains of large language models as something of an illusion. Nuh-uh, he says. We’re getting this big AI bonus because “deep learning works,” as he said in an interview later in the week, mocking those who said that programs like OpenAI’s GPT-4o were simply stupid engines delivering the next token in a queue. “Once it can start to prove unproven mathematical theorems, do we really still want to debate: ‘Oh, but it’s just predicting the next token?'” he said.
No matter what you think of Sam Altman, it’s indisputable that this is his truth: Artificial general intelligence–AI that matches and then exceeds human capabilities–is going to obliterate the problems plaguing humanity and usher in a golden age. I suggest we dub this deus ex machina concept The Strawberry Shortcut, in honor of the codename for OpenAI’s recent breakthrough in artificial reasoning. Like the shortcake, the premise looks appetizing but is less substantial in the eating.
Altman correctly notes that the march of technology has brought what were once luxuries to everyday people—including some unavailable to pharaohs and lords. Charlemagne never enjoyed air-conditioning! Working-class people and even some on public assistance have dishwashers, TVs with giant screens, iPhones, and delivery services that bring pumpkin lattes and pet food to their doors. But Altman is not acknowledging the whole story. Despite massive wealth, not everyone is thriving, and many are homeless or severely impoverished. To paraphrase William Gibson, paradise is here; it’s just not evenly distributed. That’s not because technology has failed—we have. I suspect the same will be true if AGI arrives, especially since so many jobs will be automated.
Altman isn’t terribly specific about what life will be like when many of our current jobs go the way of 18th-century lamplighters. We did get a hint of his vision in a podcast this week that asked tech luminaries and celebrities to share their Spotify playlists. When explaining why he chose the tune “Underwater” by Rüfüs du Sol, Altman said it was a tribute to Burning Man, which he has attended several times. The festival, he says, “is part of what the post-AGI can look like, where people are just focused on doing stuff for each other, caring for each other and making incredible gifts to get each other.”
Altman is a big fan of universal basic income, which he seems to think will cushion the blow of lost wages. Artificial intelligence might indeed generate the wealth to make such a plan feasible, but there’s little evidence that the people who amass fortunes—or even those who still eke out a modest living—will be inclined to embrace the concept. Altman might have had a great experience at Burning Man, but some kind souls of the Playa seem to be up in arms about a proposal, affecting only people worth over $100 million, to tax some of their unrealized capital gains. It’s a dubious premise that such people—or others who become super rich working at AI companies—will crack open their coffers to fund leisure time for the masses. One of the US’s major political parties can’t stand Medicaid, so one can only imagine how populist demagogues will regard UBI.
I’m also wary of the supposed bonanza that will come when all of our big problems are solved. Let’s concede that AI might actually crack humanity’s biggest conundrums. We humans would have to actually implement those solutions, and that’s where we’ve failed time and again. We don’t need a large language model to tell us war is hell and we shouldn’t kill each other. Yet wars keep happening.
It’s exciting to envision AI tackling diseases. But if a model from OpenAI, Google, or Anthropic came up with an injectable cure for Covid tomorrow, you know exactly what would happen. Large segments of the population would warn that it’s some insidious plot to wipe out everyone. Likewise, we already know how to mitigate the climate crisis, but we’re consuming more energy than ever. Altman dreams of trillions of dollars devoted to clean fusion plants. Even if AI provides a blueprint for how to pull that off, Exxon and OPEC might still figure out a way to kill it.
Altman need only look at his own company to see how well-laid plans can go awry. This week several key employees abruptly left his firm. Of the company’s 11 founders, Altman is now one of two remaining. One defector was CTO Mira Murati, who left “to create the time and space to do my own exploration.” Murati did critical work: If you talk to people at Microsoft, OpenAI’s most important partner, they will gush about everything she has done to coordinate the collaboration. Also this week we learned that OpenAI is reportedly going to change its status to a conventional for-profit entity. On one hand, this makes sense. OpenAI started as a nonprofit but later designated part of the company—actually almost all of it—as a commercial enterprise, to get funding to build and run its models. It was an awkward compromise, and that tension will now be eased. Remember, though, that OpenAI began specifically to counter the prospect that a profit-seeking corporation might end up developing—and controlling—AGI. Back in 2015, Altman and his cofounders feared a situation like that of the fossil fuel companies, which knew the right thing to do but didn’t do it because they answer to shareholders, not those of us merely trying to breathe.
I am not a foe of AI, and I agree with Altman that it’s ridiculous to dismiss this astonishing technological development by calling it a parlor trick. Like Altman, I expect that it will improve many, many aspects of our lives. That’s where our views diverge. Altman predicts some bumps along the way, and that the goodness in people will prevail. But the story of humanity, and much of its beauty, is the struggle of the good against the powerful forces that generate misery. The ugly part is how often the good side loses. That’s why it’s so discordant to rely on the Strawberry Shortcut, as Altman does when he proclaims, “The future is going to be so bright that no one can do it justice by trying to write about it now.” Altman should read Voltaire, or at least ask GPT-4o whether the author’s hyper-optimistic Pangloss character was wise. This is what he’d find: “His refusal to engage critically with the world and his blind adherence to his philosophy make him a figure of ridicule rather than respect.”
AI scientist Danny Hillis once said that his goal was to design a computer that would be proud of him. If we indeed develop AGI in a few thousand days, as Altman predicts, would it be proud of us? More likely, it would take one look at the news of the day and perform the silicon equivalent of vomiting. The human problem that AI will never solve is humanity itself, in all its glory and shame. Unless AGI decides that the age of intelligence will commence only when it gets rid of us.
Time Travel
I couldn’t locate the first time that Danny Hillis mentioned designing a computer that would be proud of him. But I cited the quote in my introduction to a conversation I moderated between Hillis and legendary computer visionary Alan Kay. (Actually I just sat back and let those big brains interact.) The dialog appeared in WIRED 30 years ago, and in light of what we know now about AI, it’s fascinating and prophetic.
Hillis: When I first came into the MIT Artificial Intelligence Lab, it was during golden days when language programs were sort of working and it looked like if you just kept on heading in that same direction then you could just engineer something that thought. But we reached a wall where things became more fragile and more difficult to change as they got more complex, and in fact we never really got much beyond that point. The state of natural language understanding today is not a whole lot advanced in terms of performance above what it was back then. Now, you could conclude that artificial intelligence is just an impossible task. Marvin [Minsky], who still imagines engineering AI, certainly has come to the conclusion that the brain is a very complex kludge. So you might conclude that we can never build one. But you can also conclude that it’s simply the techniques we’re using to approach AI that just aren’t powerful enough.
Kay: Well, the problem is that nobody knows how to do it the other way. But that doesn’t mean you shouldn’t try it.
Hillis: If we’re ever going to make a thinking machine, we’re going to have to face the problem of being able to build things that are more complex than we can understand. That means we have to build things by some method other than engineering them. And the only candidate that I’m aware of for that is biological evolution.
Ask Me One Thing
Elijah asks, “We are approaching the 18th anniversary of ‘The Perfect Thing,’ your book all about the cultural impact of iPods. Given today’s prominence of smartphones, and the direction the music industry went with streaming services, how do you view the book today—outdated or prophetic?”
Thanks for the question, Elijah, especially for shouting out that not-exactly-milestone anniversary. Maybe I should mention that this is the 40th anniversary of my book Hackers, and the 30th of Insanely Great, which was about how the Macintosh and its interface changed everything? All available at online bookstores near you! The iPod book was different from my others—it’s a series of long essays that can stand on their own, addressing subjects like personal music (with a mini-history of Sony’s Walkman), how music and our gadgets are tied to our identities, what coolness means, the birth of podcasting, and of course the creation story of a gadget that thrills and satisfies users to the point where it’s a cultural phenomenon.
You’ll not be surprised to hear that though the iPod as a singular gadget is outdated, I think the book does look forward—not only to the iPhone but also to how our love of cool technology shapes us. In terms of prophecy, The Perfect Thing covers a lot of the “celestial jukebox” issues of today’s streaming experience.
One of those issues involves the experience of shuffling a large corpus of songs. I was baffled that when I shuffled my thousands of songs on my iPod, Steely Dan tunes appeared far more often than their share of the collection would suggest. When I first wrote about this in Newsweek, and then in the book, a lot of people reported similar experiences. There was even an academic study on the phenomenon. Steve Jobs himself once put an engineer on the line to assure me that the iPod shuffling was random. But because of a groundswell of complaints from an ear-wormed public, the company eventually made a setting called Smart Shuffle that allowed users to space out artists. “Rather than argue whether it’s random or not, we can give them the outcome they want,” Jobs told me. Journalism that made a difference!
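If you want to see why a truly random shuffle feels rigged, here is a minimal Python sketch. It compares a uniform shuffle with a toy "spread the artists out" heuristic; the function names and the spacing trick are my own illustration, since Apple never published how Smart Shuffle actually works.

```python
import random

def true_shuffle(tracks):
    """A uniformly random permutation -- the 'it really is random' shuffle."""
    order = tracks[:]
    random.shuffle(order)
    return order

def spaced_shuffle(tracks, key=lambda t: t[0]):
    """Hypothetical spacing heuristic (not Apple's actual algorithm):
    place each artist's songs at roughly even intervals along the
    playlist, add a little jitter, then sort by position."""
    n = len(tracks)
    by_artist = {}
    for t in tracks:
        by_artist.setdefault(key(t), []).append(t)
    placed = []
    for songs in by_artist.values():
        random.shuffle(songs)
        step = n / len(songs)
        offset = random.uniform(0, step)
        for i, song in enumerate(songs):
            jitter = random.uniform(-step / 4, step / 4)
            placed.append((offset + i * step + jitter, song))
    return [song for _, song in sorted(placed)]

def longest_same_artist_streak(order, key=lambda t: t[0]):
    """Length of the longest run of consecutive tracks by one artist."""
    best = run = 1
    for prev, cur in zip(order, order[1:]):
        run = run + 1 if key(prev) == key(cur) else 1
        best = max(best, run)
    return best

# A toy library: 10 artists with 20 songs each.
library = [(f"artist{a}", f"song{s}") for a in range(10) for s in range(20)]
print("true shuffle, longest same-artist streak:",
      longest_same_artist_streak(true_shuffle(library)))
print("spaced shuffle, longest same-artist streak:",
      longest_same_artist_streak(spaced_shuffle(library)))
```

Run it a few times: the genuinely random shuffle routinely deals back-to-back tracks from the same artist, which is exactly the clustering listeners read as "not random," while the spacing heuristic almost never does. That is the outcome, not the math, that Jobs decided to give people.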
You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.
End Times Chronicle
Hello, Hurricane Helene, this month’s version of a storm we rarely see the likes of.
Last but Not Least
Founders Fund investor and Anduril cofounder Trae Stephens explains to me why Jesus loves VCs–and why he might serve in the next Trump administration.
Mark Zuckerberg, whose clothes and grooming give off a Spicoli vibe, shows off Meta’s amazing VR glasses. No, you can’t buy them yet.
The creepy online behavior of a former Trump aide.
A great lineup for WIRED’s Big Interview conference on December 3 in San Francisco. Can you attend?
Don’t miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.