In June I had a conversation with chief scientist Ilya Sutskever at OpenAI’s headquarters, while I was reporting WIRED’s October cover story. Among the topics we discussed was the unusual structure of the company.
OpenAI began as a nonprofit research lab whose mission was to develop artificial intelligence on par with or beyond human level—termed artificial general intelligence, or AGI—in a safe way. The company discovered a promising path in large language models that generate strikingly fluid text, but developing and implementing those models required huge amounts of computing infrastructure and mountains of cash. This led OpenAI to create a commercial entity to draw outside investors, and it netted a major partner: Microsoft. Virtually everyone in the company worked for this new for-profit arm. But limits were placed on the company’s commercial life. The profit delivered to investors was to be capped—for the first backers at 100 times what they put in—after which OpenAI would revert to a pure nonprofit. The whole shebang was governed by the original nonprofit’s board, which answered only to the goals of the original mission and maybe God.
Sutskever did not appreciate it when I joked that the bizarre org chart that mapped out this relationship looked like something a future GPT might come up with when prompted to design a tax dodge. “We are the only company in the world which has a capped profit structure,” he admonished me. “Here is the reason it makes sense: If you believe, like we do, that if we succeed really well, then these GPUs are going to take my job and your job and everyone’s jobs, it seems nice if that company would not make truly unlimited amounts of returns.” In the meantime, to make sure that the profit-seeking part of the company doesn’t shirk its commitment to ensuring the AI doesn’t get out of control, there’s that board, keeping an eye on things.
This would-be guardian of humanity is the same board that fired Sam Altman last Friday, saying that it no longer had confidence in the CEO because “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” No examples of that alleged behavior were provided, and almost no one at the company knew about the firing until just before it was publicly announced. Microsoft CEO Satya Nadella and other investors got no advance notice. The four directors, representing a majority of the six-person board, also kicked OpenAI president and chairman Greg Brockman off the board. Brockman quickly resigned.
After speaking to someone familiar with the board’s thinking, I believe that in firing Altman the directors thought they were executing their mission of making sure the company develops powerful AI safely—which was its sole reason for existing. Increasing profits or ChatGPT usage, maintaining workplace comity, and keeping Microsoft and other investors happy were not their concern. In the view of directors Adam D’Angelo, Helen Toner, and Tasha McCauley—and Sutskever—Altman didn’t deal straight with them. Bottom line: The board no longer trusted Altman to pursue OpenAI’s mission. If the board can’t trust the CEO, how can it protect or even monitor progress on the mission?
I can’t say whether Altman’s conduct truly endangered OpenAI’s mission, but I do know this: The board seems to have missed the possibility that a poorly explained execution of a beloved and charismatic leader might harm that mission. The directors appear to have thought that they would give Altman his walking papers and unfussily slot in a replacement. Instead, the consequences were immediate and volcanic. Altman, already something of a cult hero, became even more revered in this new narrative. He did little or nothing to dissuade the outcry that followed. To the board, Altman’s effort to reclaim his post and the employee revolt of the past few days were a kind of vindication that it had been right to dismiss him. Clever Sam is still up to something! Meanwhile, all of Silicon Valley blew up, tarnishing OpenAI’s status, maybe permanently.
Altman’s fingerprints do not appear on the open letter released Monday morning and signed by more than 95 percent of OpenAI’s roughly 770 employees, which says the directors are “incapable of overseeing OpenAI.” It says that if the board members don’t reinstate Altman and resign, the workers who signed may quit and join a new advanced AI research division at Microsoft, formed by Altman and Brockman. At first, this threat did not seem to dent the resolve of the directors, who apparently felt like they were being asked to negotiate with terrorists. Presumably one director felt differently: Sutskever, who now says he regrets his actions. His signature appeared on the you-quit-or-we’ll-quit letter. His distrust of Altman apparently deleted, Sutskever and Altman have been sending love notes to each other on X, the platform owned by Elon Musk, a fellow OpenAI cofounder now estranged from the project.
For a tense day, the board called the bluff of the massed employees, virtually daring OpenAI’s workforce to stream out the door and join Altman at Microsoft. According to the letter, the directors told OpenAI leaders that allowing the company to be destroyed “would be consistent with the mission.” (The New York Times later attributed those words to Helen Toner.) That seems extreme. If everyone walks, it’s hard to imagine how OpenAI continues to be a leader in hastening the singularity—and if someone else brings it about, the company’s board will have no say in how safely it occurs. Even OpenAI’s excellent free coffee and lunches would not draw the top machine-learning researchers needed to fill the suddenly vacant workspaces of the company’s current wizards.
So it wasn’t shocking to see the board negotiating with the Altman camp, and finally agreeing late Tuesday to let Altman return as CEO. Two directors resigned, leaving only D’Angelo on the board, joined by former Salesforce co-CEO Bret Taylor as its new chair and former treasury secretary and guy-who-got-in-hot-water-by-insulting-women Lawrence Summers. Presumably, as the company rebuilds the board, it will become more diverse. Helping soften the blow for the outgoing directors was a commitment for OpenAI to launch an internal investigation of its restored CEO’s behavior.
Looked at with the benefit of brief hindsight, the idea of OpenAI’s entire workforce joining Microsoft seems like something even ChatGPT would never dare to hallucinate. Sure, it would have been a fantastic coup for Microsoft to grab the cream of the AI research talent pool. But it would have been extremely expensive, and many of those OpenAI employees work on more pedestrian things like interface design, product management, and developer relations, areas where Microsoft already has plenty of people. And wouldn’t OpenAI’s products compete with all the Copilot apps that Microsoft is launching based on OpenAI technology?
But the craziest thing of all would have been the simple reality that OpenAI, literally formed to thwart companies like Microsoft from dominating AI technology, would have delivered its elite talent—lock, stock, and data set—into the hands of a multitrillion-dollar giant. Microsoft would have no qualms whatsoever about pocketing “truly unlimited amounts of returns” from future breakthroughs from the ex-OpenAI staff—something that anyone who was thinking of following Altman over there might have pondered, in light of their previous time at a company with different founding principles. (I do think that a high percentage of OpenAI researchers would have wound up moving to other rising AI startups or striking out on their own to start new ones.)
This whole story had so many hellzapoppin twists and turns over the past five days that it was tempting to sit back and enjoy the fun, like that ubiquitous GIF of Michael Jackson tossing popcorn kernels in his mouth. But serious attention must be paid. We are not disinterested spectators in this drama. Lurking behind this geeky real-life episode of Succession are issues that will determine what our collective future looks like.
“I see the problem posed by superintelligence to be humanity’s final challenge,” Sutskever said to me in that June interview. “So what do we do with it? My view is if there is high-quality understanding, if all the smart intellectuals and people who are thinkers on different domains come together, if there are discussions and powerful ideas floating around, maybe something good will happen.” That’s an optimistic view of what’s ahead. Right now, however, the guardians of today’s leading candidate for nascent superintelligence are reeling from a lunatic boardroom knife fight, and our trust in them has softened. As we tiptoe toward AGI, we must always make sure that the bots are aligned with the best human values. This won’t happen unless humans are aligned with those values, too.
This story was originally published hours before the agreement to restore Sam Altman as OpenAI CEO and has been updated to reflect this.