Elon Musk started the week by posting testily on X about his struggles to set up a new laptop running Windows. He ended it by filing a lawsuit accusing OpenAI of recklessly developing human-level AI and handing it over to Microsoft.
Musk’s lawsuit is filed against OpenAI and two of its executives, CEO Sam Altman and president Greg Brockman, both of whom worked with the rocket and car entrepreneur to found the company in 2015. A large part of the case turns on a bold and questionable technical claim: that OpenAI has developed so-called artificial general intelligence, or AGI, a term generally used to refer to machines that can comprehensively match or outsmart humans.
The case claims that Altman and Brockman have breached the original “Founding Agreement” for OpenAI worked out with Musk, which it says pledged the company to develop AGI openly and “for the benefit of humanity.” Musk’s suit alleges that the for-profit arm of the company, established in 2019 after he parted ways with OpenAI, has instead created AGI without proper transparency and licensed it to Microsoft, which has invested billions in the company. It demands that OpenAI be forced to release its technology openly and that it be barred from using it to financially benefit Microsoft, Altman, or Brockman.
“On information and belief, GPT-4 is an AGI algorithm,” the lawsuit states, referring to the large language model that sits behind OpenAI’s ChatGPT. It cites studies that found the system can get a passing grade on the Uniform Bar Exam and other standard tests as proof that it has surpassed some fundamental human abilities. “GPT-4 is not just capable of reasoning. It is better at reasoning than average humans,” the suit claims.
Although GPT-4 was heralded as a major breakthrough when it was launched in March 2023, most AI experts do not see it as proof that AGI has been achieved. “GPT-4 is general, but it’s obviously not AGI in the way that people typically use the term,” says Oren Etzioni, a professor emeritus at the University of Washington and an expert on AI.
“It will be viewed as a wild claim,” says Christopher Manning, a professor at Stanford University who specializes in AI and language, of the AGI assertion in Musk’s suit. Manning says there are divergent views of what constitutes AGI within the AI community. Some experts might set the bar lower, arguing that GPT-4’s ability to perform a wide range of functions would justify calling it AGI, while others prefer to reserve the term for algorithms that can outsmart most or all humans at anything. “Under this definition, I think we very clearly don’t have AGI and are indeed still quite far from it,” he says.
GPT-4 won notice—and new customers for OpenAI—because it can answer a wide range of questions, while older AI programs were generally dedicated to specific tasks like playing chess or tagging images. Musk’s lawsuit refers to assertions from Microsoft researchers, in a paper from March 2023, that “given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Despite its impressive abilities, GPT-4 still makes mistakes and has significant limits on its ability to correctly parse complex questions.
“I have the sense that most of us researchers on the ground think that large language models [like GPT-4] are a very significant tool for allowing humans to do much more but that they are limited in ways that make them far from stand-alone intelligences,” adds Michael Jordan, a professor at UC Berkeley and an influential figure in the field of machine learning.
Jordan adds that he prefers to avoid the term AGI entirely because it is so vague. “I’ve never found Elon Musk to have anything to say about AI that was very calibrated or based on research reality,” he adds.
Another difficulty for Musk’s lawsuit is that OpenAI has long used its own definition of AGI, describing it as “a highly autonomous system that outperforms humans at most economically valuable work.” GPT-4 seems far short of that mark today.
Musk has offered different definitions for AGI in the past that would disqualify GPT-4 for that honor. In December 2022, shortly after he declared OpenAI’s newly launched ChatGPT “scary good,” the entrepreneur suggested that an algorithm would need to “invent amazing things or discover deeper physics” to deserve the moniker. “I’m not seeing that potential yet,” Musk wrote.
OpenAI’s first release of ChatGPT was built on top of an AI model called GPT-3. It and GPT-4, which powers the premium version of ChatGPT today, are the latest in a series of programs pioneered by OpenAI known as large language models. They learn to predict the text that should follow a string by training on huge amounts of text sourced from the web, books, and other places. Although GPT-4—and rivals such as Google’s Gemini—have stunned AI researchers with their flexibility and power, they remain prone to fabricating information, blurting out unpleasantries, or becoming confused and incoherent.
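The next-word prediction described above can be illustrated with a toy counting model. This sketch is purely illustrative and far simpler than any neural network behind ChatGPT: it counts which word follows which in a tiny invented corpus (the corpus and function names here are assumptions, not anything from OpenAI), then predicts the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction, the core training objective
# of large language models. Real systems use neural networks trained on
# vast text corpora; this just counts word pairs (a bigram table).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat", the most frequent follower
```

A large language model does the same kind of prediction, but with a learned statistical model over whole contexts rather than a lookup table of word pairs, which is why it can generalize to text it has never seen while still sometimes fabricating plausible-sounding falsehoods.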
Recognizing GPT-4 as AGI is a central part of Musk’s lawsuit. It underpins both the claim that OpenAI has betrayed its founding principles and the claim that the for-profit arm violated its own licensing agreement with Microsoft, which stipulates that the company can only receive “pre-AGI” technology.
Mark Lemley, a professor at Stanford Law School, is doubtful of both the AGI claim and the suit’s broader legal merits. While OpenAI does seem less open and has become more profit-focused, it is far from clear what rights that gives Musk.
“Notably, the complaint does not include any contract between Musk and the company or the text of any rights he has to enforce those principles or get his money back,” Lemley says. “If those documents existed I would expect they would be prominently featured in the complaint.” Although the suit refers to a “Founding Agreement,” it cites only an email between Musk and Altman before the company was founded and its brief certificate of incorporation, not any specific contract.
The lawsuit may stumble on other grounds, like the claims about OpenAI’s creation of a for-profit arm. Although that structure is unusual for a technology company, many corporations are controlled by nonprofits.
“I’m really skeptical that the case is meritorious or that it has any chance of success,” says Samuel Brunson, an associate dean at Loyola University Chicago who teaches about nonprofit law. “In large part, Musk is arguing that OpenAI’s pursuit of profits and its coinvestment with for-profit entities has caused it to stop being a nonprofit. And that’s just wrong.”
Additional reporting by Paresh Dave.
Source: Wired