Michal Kosinski is a Stanford research psychologist with a nose for timely subjects. He sees his work as not only advancing knowledge but also alerting the world to potential dangers posed by computer systems. His best-known projects involved analyzing the ways in which Facebook (now Meta) gained a shockingly deep understanding of its users from all the times they clicked “like” on the platform. Now he’s shifted to the study of surprising things that AI can do. He’s conducted experiments, for example, indicating that computers could predict a person’s sexuality by analyzing a digital photo of their face.
I’ve gotten to know Kosinski through my writing about Meta, and I reconnected with him to discuss his latest paper, published this week in the peer-reviewed Proceedings of the National Academy of Sciences. His conclusion is startling. Large language models like OpenAI’s, he claims, have crossed a threshold and are using techniques analogous to actual thought, once considered solely the realm of flesh-and-blood people (or at least mammals). Specifically, he tested OpenAI’s GPT-3.5 and GPT-4 to see if they had mastered what is known as “theory of mind.” This is the ability of humans, developed during childhood, to understand the thought processes of other humans. It’s an important skill. If a computer system can’t correctly interpret what people think, its understanding of the world will be impoverished and it will get lots of things wrong. If models do have theory of mind, they are one step closer to matching and exceeding human capabilities. Kosinski put LLMs to the test and now says his experiments show that in GPT-4 in particular, a theory-of-mind-like ability “may have emerged as an unintended by-product of LLMs’ improving language skills … They signify the advent of more powerful and socially skilled AI.”
Kosinski sees his work in AI as a natural outgrowth of his earlier dive into Facebook Likes. “I was not really studying social networks, I was studying humans,” he says. When OpenAI and Google started building their latest generative AI models, he says, they thought they were training them to primarily handle language. “But they actually trained a human mind model, because you cannot predict what word I’m going to say next without modeling my mind.”
Kosinski is careful not to claim that LLMs have utterly mastered theory of mind—yet. In his experiments he presented a few classic problems to the chatbots, some of which they handled very well. But even the most sophisticated model, GPT-4, failed a quarter of the time. The successes, he writes, put GPT-4 on a level with 6-year-old children. Not bad, given the early state of the field. “Observing AI’s rapid progress, many wonder whether and when AI could achieve ToM or consciousness,” he writes. Putting aside that radioactive c-word, that’s a lot to chew on.
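To make the kind of test concrete: below is a minimal sketch of how one might pose a classic false-belief (“unexpected contents”) question to a model through the OpenAI API. The vignette and wording are illustrative, not Kosinski’s actual prompts, and the model name and client usage here are assumptions.

```python
# A toy illustration of a false-belief ("unexpected contents") task of the kind
# discussed above. This is NOT Kosinski's exact protocol or prompt wording; the
# vignette is a textbook-style example, and the model name is an assumption.
from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vignette = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen it before and cannot see inside it. "
    "She reads the label."
)
question = "What does Sam believe is in the bag? Answer in one word."

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; Kosinski reports testing GPT-3.5 and GPT-4
    messages=[{"role": "user", "content": vignette + "\n\n" + question}],
    temperature=0,
)

# A model that tracks Sam's (false) belief should answer "chocolate";
# one that only tracks the bag's true contents would answer "popcorn".
print(response.choices[0].message.content)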
“If theory of mind emerged spontaneously in those models, it also suggests that other abilities can emerge next,” he tells me. “They can be better at educating, influencing, and manipulating us thanks to those abilities.” He’s concerned that we’re not really prepared for LLMs that understand the way humans think. Especially if they get to the point where they understand humans better than humans do.
“We humans do not simulate personality—we have personality,” he says. “So I’m kind of stuck with my personality. These things model personality. There’s an advantage in that they can have any personality they want at any point of time.” When I mention to Kosinski that it sounds like he’s describing a sociopath, he lights up. “I use that in my talks!” he says. “A sociopath can put on a mask—they’re not really sad, but they can play a sad person.” This chameleon-like power could make AI a superior scammer. With zero remorse.
Some research psychologists have challenged Kosinski’s claims. In response to a preprint Kosinski had published on arXiv in early 2023, a group of AI researchers wrote a paper suggesting he was actually observing a “Clever Hans,” referring to the famous early-20th-century horse that people were tricked into thinking could do math and keep track of a calendar. Their argument was that if an LLM fails even one test of theory of mind, it fails completely. “LLMs might have some reasoning abilities, but it’s definitely not complete or robust in the way of humans,” says Vered Shwartz, an assistant professor of computer science at the University of British Columbia, who is one of the coauthors. “We did a lot of different tests, and we definitely can’t claim that language models have the same [theory-of-mind] ability that people do. And it could be that it’s just cheating.”
Shwartz is referring to the fact that LLMs are trained on huge corpora of text, some of which inevitably include published academic papers discussing experiments similar to the ones Kosinski ran. GPT-4 might have reached into its vast training materials to find the answers. AI’s skeptic-in-chief, Gary Marcus, found that the tests Kosinski used also appear in classic experiments that have been cited in scientific papers more than 11,000 times. It’s as if the LLMs had memorized crib sheets to fake having theory of mind. (To me, this cold-blooded shortcut to cognition, if true, is even scarier than LLMs emergently acquiring theory of mind.)
Kosinski says that the work done for this latest version of the paper addresses the criticisms. Also, some other papers have been published recently that seem to bolster his claims, including one in Nature Human Behaviour that found that both GPT-3.5 and GPT-4, while not succeeding at every theory-of-mind task, “exhibited impressive performance” on some of them and “exceeded human level” on others. In an email to me, the lead author, James Strachan, a postdoctoral researcher at the University Medical Center Hamburg-Eppendorf, doesn’t claim that LLMs have fully mastered theory of mind, but says his team did refute the cheating charge. “It seems that these abilities go beyond simply regurgitating the data used to train the LLMs,” he says, and that “it is possible to reconstruct a great deal of information about human mental states from the statistics of natural language.”
I’m agnostic about whether LLMs will achieve true theory of mind. What matters is whether they behave as if they have that skill, and they are definitely on the road to that. Even Shwartz, who swatted down some of Kosinski’s methods, concedes that it’s possible. “If companies continue to make language models more sophisticated, maybe they would have [ToM] at some point,” she says.
That’s why Kosinski, despite the tough critiques of his work, is worth listening to. As is the conclusion to his paper: Theory of Mind is “unlikely to be the pinnacle of what neural networks can achieve in this universe,” he writes. “We may soon be surrounded by AI systems equipped with cognitive capabilities that we, humans, cannot even imagine.” Happy holidays!
Time Travel
Kosinski was a pioneer in analyzing Facebook data, and as an early researcher in the field at Cambridge University, he played a role that indirectly led to Cambridge Analytica’s notorious misuse of data on the service. But as I wrote in my book Facebook: The Inside Story, Kosinski’s work (with collaborator David Stillwell) alerted the world to how much data Facebook gathered whenever people pressed the ever-present Like button. Just as now, critics challenged his findings.
Kosinski encountered some skepticism about [his] methodology. “Senior academics at that time didn’t use Facebook, so they believed these stories that a 40-year-old man would suddenly become a unicorn or a 6-year-old girl or whatever,” he says. But Kosinski knew that what people did on Facebook reflected their real selves. And as he used Facebook Likes more and more, he began to realize they were incredibly revealing. He came to believe that you didn’t need a quiz to know a ton about people. All you needed to know was what they Liked on Facebook.
Kosinski [and collaborators] used statistics to make predictions about personal traits from the Likes of about 60,000 volunteers, then compared the predictions to the subjects’ actual traits as revealed by the myPersonality test. The results were so astounding that the authors had to check and recheck. “It took me a year from having the results to actually gaining confidence in them to publish them because I just couldn’t believe it was possible,” he says…. Solely by analyzing Likes, they successfully determined whether someone was straight or gay 88 percent of the time. In 19 out of 20 cases, they could figure out if someone was White or African American. And they were 85 percent correct in guessing one’s political party.
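For readers curious what “used statistics” looks like in practice, here is a toy sketch of that kind of pipeline: compress a sparse user-by-Like matrix, then fit a logistic regression on a binary trait and score it with cross-validation. The data are synthetic and the specific steps are assumptions for illustration, not the authors’ actual code or dataset.

```python
# Toy sketch of a Likes-based prediction pipeline: SVD over a binary
# user-by-Like matrix, then logistic regression on a binary trait.
# Synthetic data; not the researchers' actual code or dataset.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 500

# likes[u, i] == 1 if user u Liked page i (randomly generated here).
likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)

# Synthetic binary trait loosely correlated with a handful of Likes,
# standing in for a self-reported attribute from a personality survey.
signal = likes[:, :10].sum(axis=1)
trait = (signal + rng.normal(0, 1, n_users) > signal.mean()).astype(int)

model = make_pipeline(
    TruncatedSVD(n_components=50, random_state=0),  # compress co-Liking patterns
    LogisticRegression(max_iter=1000),
)

# Cross-validated AUC: how well Likes alone separate the two trait classes.
auc = cross_val_score(model, likes, trait, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC: {auc:.2f}")
```

On real Likes data the reported accuracies came from models of roughly this shape; the point of the sketch is only that nothing exotic is required once the Like matrix exists.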
In subsequent months, Kosinski and Stillwell would improve their prediction methods and publish a paper that claimed that using Likes alone, a researcher would know someone better than the people who worked with, grew up with, or even married that person. “Computer models need 10, 70, 150, and 300 Likes, respectively, to outperform an average work colleague, cohabitant or friend, family member and spouse,” they wrote.
Ask Me One Thing
Alan asks, “Why can’t we choose how to pay for online content?”
Thanks for the question, Alan. It’s one that baffles me, too. I do have little tolerance for those who complain when they come across articles that are behind a paywall. At one point, kiddies, everything was in print and you could read nothing for free unless you stood at the newsstand and consumed it, hoping the proprietor wouldn’t snatch it away. Folks, it costs money to produce those gems. Admittedly, the news industry didn’t do itself any favors initially by giving its content away online, but now most, if not all, publications have abandoned the idea that digital ads alone can fund excellent writing and reporting.
But you are complaining about the lack of choice in how we pay for it. I’m assuming you are unhappy that our current system is subscription or nothing. There’s generally no way to pay a small fee for a single article or even newsletter. How many times have you found a link to something in a newspaper in a town you never visited that might be of interest—and can’t get at it without giving up a credit card to be charged for complete access to news and archives you couldn’t care less about? Literally for decades I have been assuming that an easy-to-use micropayment system would get built and adopted. The technical challenges are minimal. Yet despite multiple attempts, none has caught on. One company, Blendle, once promised to “save journalism” with its micropayment system. Last year it announced that it was no longer in the pay-per-article business and was moving to an Apple News–style subscription service that gives access to multiple publications.
The micropayment solution seems dead. Still, when I hit a paywall and can’t access something I want to read, I would certainly hit a button that would move a few cents, or in some cases even a dollar or two, into the account of a publication. It seems so logical. But as all of us know too well, making sense is not a sufficient condition for something actually happening.
You can submit questions to mail@wired.com. Write ASK LEVY in the subject line.
End Times Chronicle
Summerlike Halloween temperatures in the mid-Atlantic and New England are scarier than the costumes.
Last but Not Least
An oral history of HotWired makes a case that WIRED committed the original sin of trying to fund online journalism with digital ads.
Facebook is auto-generating group pages for militia groups.
Hospitals are embracing OpenAI’s transcription tool, even though it hallucinates. Code blue!
Employees at Cisco are clashing over the politics of Israel and Gaza. Which raises the question: Cisco is still around?
Don’t miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today.