🌍 Landscape

Senior Safety Researchers Departing AI Labs

✓ Verified
Multiple senior safety researchers have left major AI companies, citing concerns about the prioritisation of capabilities over safety, including high-profile departures from OpenAI.
Verified: 1 March 2026 · Last updated: 1 March 2026 · Region: international

A pattern of senior safety researchers leaving major AI labs, particularly OpenAI, has raised serious concerns:

  • Jan Leike (co-lead of OpenAI’s Superalignment team) resigned, stating that “safety culture and processes have taken a back seat to shiny products”
  • Ilya Sutskever (OpenAI co-founder and chief scientist) departed after reportedly clashing with leadership over safety priorities
  • OpenAI’s entire Superalignment team was effectively dissolved

When the people hired specifically to ensure AI safety leave because they believe safety is not being prioritised, this is a significant warning signal. Voluntary safety commitments are only as strong as the internal culture that supports them.

Sources (1)

OpenAI · safety team · departures · industry