OpenAI Employees Warn of a Culture of Risk and Retaliation

A group of current and former OpenAI employees has issued a public letter warning that the company and its rivals are building artificial intelligence with undue risk, without sufficient oversight, and while muzzling employees who might witness irresponsible activities.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” reads the letter published at righttowarn.ai. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable.”

The letter calls on not just OpenAI but all AI companies to commit to not punishing employees who speak out about their activities. It also calls for companies to establish “verifiable” ways for workers to provide anonymous feedback on those activities. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the letter reads. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

OpenAI came under criticism last month after a Vox article revealed that the company had threatened to claw back employees’ equity if they did not sign non-disparagement agreements forbidding them from criticizing the company or even mentioning the existence of such an agreement. OpenAI’s CEO, Sam Altman, recently said on X that he had been unaware of such arrangements and that the company had never clawed back anyone’s equity. Altman also said the clause would be removed, freeing employees to speak out. OpenAI did not respond to a request for comment by the time of publication.

OpenAI has also recently changed its approach to managing safety. Last month, an OpenAI research group responsible for assessing and countering the long-term risks posed by the company’s more powerful AI models was effectively dissolved after several prominent figures left and the remaining members of the team were absorbed into other groups. A few weeks later, the company announced that it had created a Safety and Security Committee, led by Altman and other board members.

Source: Wired