šŸŒ Landscape

The People Who Built These Systems Are Leaving Over Safety Concerns

◐ Moderate
An unprecedented concentration of departures from frontier AI companies spans military governance, privacy, content safety, and civilisational concerns - the people closest to the technology are raising alarms across multiple dimensions simultaneously.
Verified: 17 March 2026 · Last updated: 17 March 2026

Note: This milestone is marked 'needs update' because the evidence is primarily news-based (resignation statements, open letters) rather than peer-reviewed research. The departures are well-documented, but academic analysis of the pattern does not yet exist.

The AI industry is under simultaneous ethical strain across military, privacy, content safety, and civilisational dimensions. Key departures include:

  • Sharma: Philosophical/civilisational concerns about AI trajectory
  • Kalinowski: Military governance concerns (closest to direct safety)
  • Hitzig: Advertising and privacy concerns
  • Beiermeister: Content moderation (circumstances of departure disputed)
  • xAI founders: Leadership dysfunction (not directly safety-related)

Honest Framing

This milestone needs careful framing. The departures are real, but they do not represent a single unified "the builders think it's dangerous" narrative. The concerns are diverse and span different aspects of AI development and deployment.

The honest version is stronger: the AI industry is under unprecedented ethical strain across multiple dimensions simultaneously. The concentration of departures is the signal - not any single narrative thread.

The Signal

When people who understand a technology best begin leaving the organisations building it - for varied but serious reasons - this is a signal that warrants attention regardless of whether the concerns are unified. The concentration of departures across multiple companies and multiple dimensions is the pattern that distinguishes this from normal industry turnover.

Counterarguments

The strongest objections to this entry, with sources.

High-profile departures can be self-exonerating, and turnover at tech startups is generally high. OpenAI leadership stated they are 'willing to push out release timelines if they haven't reached their safety bar for releases.'

Source: Sam Altman and Greg Brockman (via Axios, May 2024)

Response: No substantive academic analysis systematically makes the 'normal churn' argument. The concentration of departures across multiple companies and multiple dimensions (military, privacy, content safety, civilisational) is the signal.

Sources (4)

  • Primary Source
    Jan Leike - Public resignation statement
    'Safety culture and processes have taken a backseat to shiny products.' Co-head of OpenAI's Superalignment team; team dissolved after departure
  • Primary Source
    Kokotajlo, Hilton, Saunders et al. - 'A Right to Warn about Advanced AI' open letter
    13 signatories (11 current/former OpenAI employees). Endorsed by Geoffrey Hinton, Yoshua Bengio, and Stuart Russell. Warns companies have 'substantial non-public information about risks' and 'strong financial incentives to avoid effective oversight'
  • Analysis
    FLI AI Safety Index 2024
    'All flagship models were found to be vulnerable to adversarial attacks.' 'Companies were unable to resist profit-driven incentives to cut corners on safety in the absence of independent oversight'
  • Primary Source
    Mrinank Sharma - Resignation from Anthropic
    Head of Anthropic's Safeguards Research team. 'I've repeatedly seen how hard it is to truly let our values govern actions.' Warning extends pattern beyond OpenAI to Anthropic
safety-departures · whistleblowers · industry-ethics · OpenAI