
Former Employees Accuse OpenAI of Prioritizing Profit Over AI Safety

Friday, Jun 20, 2025

Former employees of OpenAI claim that the world's leading AI research lab is sacrificing safety for financial gain, according to a report titled 'The OpenAI Files'. What was initially a mission to benefit humanity is allegedly turning into a commercial enterprise focused on profit, sidelining safety and ethical considerations.

Initially, OpenAI promised to limit investor profits, ensuring that the advantages of advanced AI would benefit all of humanity, not just a few wealthy individuals. However, that promise is reportedly at risk of being discarded to appease investors seeking greater returns.

To those who helped establish OpenAI, this shift away from prioritizing AI safety feels like a profound betrayal. Former team member Carroll Wainwright said, "The nonprofit mission was a pledge to do the right thing when the stakes were high." With that nonprofit structure now seemingly being dismantled, the original promise rings hollow.

Many concerned voices point to CEO Sam Altman. Doubts about his leadership are long-standing: former colleagues have alleged "deceptive and chaotic" behavior in his past roles.

The sentiment of distrust carried over to OpenAI, where even co-founder Ilya Sutskever, who has since started his own company, voiced strong doubts: "I don't think Sam is the right person to handle AGI." Sutskever believed Altman exhibited dishonesty and created disorder, which could be dangerous for someone in control of the future of AI.

Mira Murati, previously the CTO, also expressed concerns. "I don't feel comfortable with Sam leading us to AGI," she said. She described a manipulative dynamic in which Altman would make false assurances and then betray those who opposed him. Former board member Tasha McCauley called such manipulation "unacceptable," especially given the high stakes of AI safety.

This erosion of trust has had tangible implications. Insiders report a cultural shift at OpenAI, where the essential task of ensuring AI safety has taken a backseat to launching "glitzy products". Jan Leike, who managed the long-term safety team, described their efforts as "sailing against the wind," facing difficulties in obtaining necessary resources for critical research.

William Saunders, another ex-employee, even testified before the US Senate, disclosing severe security lapses that could have allowed engineers to misappropriate the firm's sophisticated AIs, such as GPT-4.

Despite having left the company, the former employees have put forward a plan to steer OpenAI back to its original mission.

They advocate restoring the nonprofit's decision-making power over safety matters and insist on transparent leadership, including a thorough review of Sam Altman's conduct.

The proposal also calls for genuine, independent regulatory oversight, so that OpenAI cannot be left to assess its own safety measures. They demand a culture in which staff can voice concerns without fear of retaliation: a safe environment for whistleblowers.

Lastly, they emphasize the importance of adhering to the initial financial commitments, insisting that profit limits remain in place, prioritizing public welfare over unlimited private earnings.

Ultimately, this extends beyond just internal conflicts at a Silicon Valley tech firm. OpenAI is creating transformative technology with immense potential impact. The former staff members are compelling all of us to consider an essential question: whom do we trust to shape our future?

In the words of former board member Helen Toner, "internal guardrails can be fragile when financial interests are involved."

The people who know OpenAI best are now warning that those safety mechanisms have largely failed.
