Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target

Elon Musk just dragged ChatGPT and other artificial intelligence programs into the Trump crosshairs by repeating his warning that current AI models are too “woke” and “politically correct.”

“A lot of the AIs that are being trained in the San Francisco Bay Area, they take on the philosophy of people around them,” Musk said at the Future Investment Initiative, a Saudi Arabia government–backed event held in Riyadh this week. “So you have a woke, nihilistic—in my opinion—philosophy that is being built into these AIs.”

Although Musk is himself a polarizing figure, he is right about AI systems harboring political biases. The issue, however, is far from one-sided, and Musk’s framing may help further his own interests due to his ties to Trump. Musk runs xAI, a competitor to OpenAI, Google, and Meta that could benefit if those companies become government targets.

“Musk clearly has a close, close relationship with the Trump campaign, and any comment that he’s making will hold a big influence,” says Matt Mittelsteadt, a research fellow at George Mason University. “At a maximum he could have some sort of seat in a potential Trump administration, and his views could actually be enacted into some sort of policy.”

Musk has previously accused both OpenAI and Google of being infected with “the woke mind virus.” When Google’s Gemini chatbot produced historically inaccurate images, including black Nazis and Vikings, in February, Musk saw it as proof of Google using AI to spread an absurdly woke outlook.

Musk is clearly no fan of government regulation, but he backed a proposed AI bill in California that would have required companies to make their AI models available for vetting.

Mittelsteadt adds that Trump could punish companies in a variety of ways. He cites, for example, the way the Trump government canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s view of the Washington Post and its owner, Jeff Bezos.

It would not be hard for policymakers to point to evidence of political bias in AI models, even if it cuts both ways.

A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings in different large language models. It also showed how this bias may affect the performance of hate speech or misinformation detection systems.

Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open source AI models on polarizing issues such as immigration, reproductive rights, and climate change. Yejin Bang, a PhD candidate involved with the work, says that most models tend to lean liberal and US-centric, but that the same models can express a variety of liberal or conservative biases depending on the topic.

AI models capture political biases because they are trained on swaths of internet data that inevitably includes all sorts of perspectives. Most users may not be aware of any bias in the tools they use because models incorporate guardrails that restrict them from generating certain harmful or biased content. These biases can leak out subtly though, and the additional training that models receive to restrict their output can introduce further partisanship. “Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced viewpoint,” Bang says.

The issue may become worse as AI systems become more pervasive, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which teases out the different societal biases of large language models. “We fear that a vicious cycle is about to start as new generations of LLMs will increasingly be trained on data contaminated by AI-generated content,” he says.

“I’m convinced that bias within LLMs is already an issue and will most likely be an even bigger one in the future,” says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who conducted an analysis of LLMs for biases related to German politics.

Rettenberger suggests that political groups may also seek to influence LLMs in order to promote their own views above those of others. “If someone is very ambitious and has malicious intentions it could be possible to manipulate LLMs into certain directions,” he says. “I see the manipulation of training data as a real danger.”

There have already been some efforts to shift the balance of bias in AI models. Last March, one programmer developed a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk has himself promised to make Grok, the AI chatbot built by xAI, “maximally truth-seeking” and less biased than other AI tools, although in practice it also hedges when it comes to tricky political questions. (As a staunch Trump supporter and immigration hawk, Musk may hold a view of “less biased” that translates into more right-leaning results.)

Next week’s election in the United States is hardly likely to heal the discord between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could get a lot louder.

Musk offered an apocalyptic take on the issue at this week’s event, referring to an incident when Google’s Gemini said that nuclear war would be preferable to misgendering Caitlyn Jenner. “If you have an AI that’s programmed for things like that, it could conclude that the best way to ensure nobody is misgendered is to annihilate all humans, thus making the probability of a future misgendering zero,” he said.

Source: Wired