AI Culture

Cognitive Elites and the 'God Machine' Narrative

Who promotes AGI as inevitable and messianic? This section traces the claims of Sam Altman and Demis Hassabis, the influence of effective altruism on AI labs, the imbalance between billionaire funding and democratic institutions, and the internal debates over AGI timelines.

The AGI Prophecy: Who Benefits from Inevitability?

Sam Altman's Roadmap

On January 6, 2025, OpenAI CEO Sam Altman published "Reflections," declaring: "We are now confident we know how to build AGI as we have traditionally understood it." In his June 2025 blog post "The Gentle Singularity," which he said "may be the last like this I write with no AI help at all," Altman laid out a timeline: agents that can do real cognitive work (2025), systems that can figure out "novel insights" (2026), robots that can do tasks in the real world (2027), and, by the end of 2028, "more of the world's intellectual capacity could reside inside data centers than outside of them." By 2030, he predicted, AI will surpass humans "in all dimensions"; he said he would be "very surprised if we haven't developed a superintelligent model" by then. He acknowledged that "AGI has become a very sloppy term" while continuing to use it as the organizing principle for OpenAI's corporate strategy.

Demis Hassabis (Google DeepMind)

Hassabis predicts AI will be "10 times bigger than the industrial revolution and 10 times faster." He argues AI will solve climate change, disease, and scientific challenges that have stumped humans. His framing positions AI development as fundamentally beneficent and historically inevitable—a narrative that conveniently aligns with Google's investment in AI infrastructure.

Elon Musk and Larry Page

Musk consistently warns about AI existential risk while simultaneously building xAI, pursuing military contracts, and racing to deploy Grok. This dual posture—prophet of doom and active accelerationist—attracts both regulatory deference and investor capital. Google co-founder Larry Page has reportedly expressed the view that AIs are humanity's "rightful heirs" and the next step of cosmic evolution, considering human control over AI to be "speciesist."

What They Stand to Gain

OpenAI's trajectory illustrates the financial stakes: a $40 billion funding round in March 2025, a $300 billion valuation by August 2025, $500 billion by October 2025, and a reported $850 billion+ by February 2026. OpenAI pays workers an average of $1.5 million each in stock-based compensation, the highest of any tech startup in history. Promoting AGI inevitability attracts investment, talent, and regulatory deference while deflecting criticism of current harms as mere "growing pains" on the path to transcendence.
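To put the pace of that trajectory in perspective, here is a minimal back-of-the-envelope sketch in Python. It uses only the valuation figures quoted above; the day-of-month for each milestone is an assumption made solely for the arithmetic.

```python
from datetime import date

# Valuation milestones quoted above (USD billions); day-of-month assumed.
milestones = [
    (date(2025, 8, 1), 300),
    (date(2025, 10, 1), 500),
    (date(2026, 2, 1), 850),
]

# Growth multiple between each consecutive pair of milestones.
for (d0, v0), (d1, v1) in zip(milestones, milestones[1:]):
    months = (d1 - d0).days / 30.44  # average month length
    print(f"{d0} -> {d1}: {v1 / v0:.2f}x in {months:.1f} months")

# Overall multiple and its annualized equivalent.
(d0, v0), (d1, v1) = milestones[0], milestones[-1]
years = (d1 - d0).days / 365.25
print(f"Overall: {v1 / v0:.2f}x, ~{(v1 / v0) ** (1 / years) - 1:.0%} annualized")
```

On these assumed dates, the quoted figures imply roughly a 2.8x increase in about six months, an annualized growth rate on the order of several hundred percent.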

OpenAI's Transformation: From Safety Lab to $850 Billion Corporation

The For-Profit Conversion

In December 2024, OpenAI announced plans to remove the nonprofit's controlling status over its for-profit entity. After significant backlash, it reversed course on May 5, 2025. On October 28, 2025, the restructuring was completed, splitting OpenAI into the OpenAI Foundation (nonprofit, 26% equity) and OpenAI Group PBC (public benefit corporation). Microsoft received 27% equity; employees and investors hold 47%. The Foundation retains "special voting and governance rights" to appoint all board members. Sam Altman received no equity stake and drew a salary of $76,001 in 2024.

Deleting "Safely"

OpenAI's November 2025 IRS disclosure revealed the company removed the word "safely" from its mission statement. The old mission: "to build artificial intelligence that safely benefits humanity, unconstrained by a need to generate financial return." The new mission: "to ensure that artificial general intelligence benefits all of humanity." The commitment to being "unconstrained" by financial incentives was also dropped. This change was not publicly announced; it was discovered by reporters examining tax filings.

Board Composition

OpenAI's board includes Bret Taylor (Chair, former Salesforce Co-CEO), Adam D'Angelo (Quora CEO), Paul M. Nakasone (retired U.S. Army General, former NSA director), and Adebayo Ogunlesi (BlackRock senior managing director). Lawrence Summers resigned from the board in November 2025. The board composition reflects corporate and national security interests, not AI safety expertise.

The Safety Exodus: Researchers Who Walked Out

OpenAI's Revolving Door of Safety Teams

OpenAI has disbanded three safety-focused teams in succession. The Superalignment team, formed mid-2023 with a promise of 20% of compute, was disbanded in May 2024 after Ilya Sutskever and Jan Leike departed. The AGI Readiness team was disbanded in October 2024 after Miles Brundage resigned. The Mission Alignment team, created September 2024 under Joshua Achiam, was disbanded in February 2026 after approximately 16 months; its 7 employees were transferred and Achiam's title was changed to "chief futurist."

Named Departures

Ilya Sutskever, OpenAI co-founder and chief scientist, departed May 14, 2024, following his role in the failed November 2023 boardroom coup against Altman. He founded Safe Superintelligence Inc (SSI) in June 2024 and by April 2025 had raised $2 billion at a $32 billion valuation, roughly sixfold its reported $5 billion valuation of September 2024, with no product released.

Jan Leike, who co-led Superalignment, resigned May 15, 2024 and joined Anthropic on May 28. He stated: "Over the past years, safety culture and processes have taken a backseat to shiny products." The Superalignment team had been "struggling for compute."

Miles Brundage, head of AGI Readiness, resigned in October 2024, writing: "Neither OpenAI nor any other frontier lab is ready, and the world is also not ready." Steven Adler, safety researcher, left in November 2024, calling the AGI race "a very risky gamble" with "huge downside." Zoe Hitzig resigned February 11, 2026, publishing a New York Times op-ed titled "OpenAI Is Making the Mistakes Facebook Made. I Quit," warning that ChatGPT ads would exploit "the most detailed record of private human thought ever assembled."

Ryan Beiermeister, VP of product policy, was fired in February 2026 after opposing ChatGPT's planned "adult mode" that would allow sexually explicit conversations.

Anthropic's Safety Lead Resigns

Mrinank Sharma, leader of Anthropic's Safeguards Research team, resigned February 9, 2026, publishing a letter stating "the world is in peril" and "I've repeatedly seen how hard it is to truly let our values govern our actions." He described systemic pressures—economic, geopolitical, institutional—that prioritize short-term growth over long-term risk mitigation. He plans to write poetry and do community work in the UK.

xAI's Founding Team Collapse

Six of the twelve co-founders of Elon Musk's xAI have departed, five of them within the past year. Christian Szegedy left in February 2025 and Igor Babuschkin in August 2025. Greg Yang left in January 2026, citing health issues. Yuhuai "Tony" Wu and Jimmy Ba resigned within 24 hours of each other in February 2026. The departures coincide with regulatory probes into Grok's role in generating non-consensual explicit images.

The Suchir Balaji Case

Whistleblowing and Death

Suchir Balaji, 26, an OpenAI researcher, left the company in August 2024 after becoming disillusioned. On October 23, 2024, he gave an interview to the New York Times alleging that OpenAI's products violate U.S. copyright law because they are trained on copyrighted material and generate outputs that can substitute for the original works. He stated he intended to testify against OpenAI. He was found dead in his San Francisco apartment on November 26, 2024, one month after his public accusations.

Investigation and Family Response

The San Francisco Chief Medical Examiner's autopsy, released February 14, 2025, ruled the death a suicide by self-inflicted gunshot wound, and the SFPD found "no evidence of foul play." His parents, who described his mood as "cheerful" two weeks prior, cited the absence of a suicide note and what they characterized as blood-spatter anomalies. They hired private investigators in December 2024 and commissioned a second autopsy. In September 2025, the family filed a wrongful death lawsuit against his apartment complex, alleging tampering with surveillance footage, destruction of evidence, and obstruction of the investigation.

Mira Murati and the Talent Wars

From OpenAI CTO to Founder

Mira Murati, OpenAI's CTO for over six years, departed on September 25, 2024. In February 2025, she launched Thinking Machines Lab, a public benefit corporation focused on accessible, customizable, and human-aligned AI. The startup raised $2 billion in seed funding led by Andreessen Horowitz at a $12 billion valuation and assembled a team of approximately 30 researchers and engineers drawn from OpenAI, Google, Meta, Mistral, and Character AI.

The Raid

In January 2026, multiple co-founders left to return to OpenAI, including CTO Barret Zoph, Luke Metz, and Sam Schoenholz. Mark Zuckerberg then launched a reported "full-scale raid," approaching more than a dozen employees of the roughly 50-person company with packages worth up to $200 million. The episode illustrates the winner-take-all dynamics of AI talent competition: a handful of individuals command compensation that would be an extraordinary valuation for an entire company in another sector.

Ethics Washing: Teams Created and Destroyed

The Pattern

Every major AI company has created an ethics or safety team. Every major AI company has either disbanded, downsized, or marginalized that team when it conflicted with commercial objectives. Microsoft dissolved its entire Ethics & Society team in early 2023 as part of 10,000-person layoffs; the team had already been reduced from 30 members to 7 in October 2022. Meta disbanded its Responsible Innovation team (~24 engineers and ethics researchers) in September 2022, reassigning most to product teams under a "decentralization" rationale. Twitter/X cut its AI ethics team from 17 people to 1 in November 2022 during Musk's layoffs; the remaining person later quit.

Google's DEI Rollback

In March 2025, Google scrubbed mentions of "diversity" and "equity" from its Responsible AI team's mission description, as part of broader DEI rollbacks. Diversity hiring targets were deleted. Google also cut about one-third of a unit aimed at protecting society from misinformation, radicalization, toxicity, and censorship. The pattern is consistent: ethics teams are created during periods of public scrutiny and quietly dismantled when attention shifts.

Public Perception: The Growing Concern

Americans Worry About AI

According to Pew Research (June 2025), 50% of Americans say they are more concerned than excited about AI in daily life, up from 37% in 2021. Some 57% rate the societal risks of AI as high, while only 25% say the benefits are high. Among workers, 52% worry about AI's workplace impact, and 56% are "extremely or very concerned" about AI eliminating jobs; 53% say AI will worsen people's ability to think creatively. The U.S. tops the list of concern among the 25 countries surveyed.

The Expert-Public Gap

The chasm between expert optimism and public skepticism is stark. Among the general public, 51% are more concerned than excited about AI. Among AI experts, only 15% are more concerned than excited. This gap enables a self-reinforcing cycle: experts who are bullish on AI shape policy and investment, while the public's concerns are dismissed as uninformed. Only 44% of Americans trust the U.S. government to regulate AI effectively; 47% have little to no trust.

Partisan Convergence

As of November 2025, Republicans and Democrats are now equally concerned about AI in daily life, though they diverge on regulation. This bipartisan concern has not yet translated into legislative action: the AI Whistleblower Protection Act remains pending, KOSA has not passed, and no federal AI liability statute exists.

Who Builds AI: Demographics of Power

Gender in AI

Women hold just 22% of AI roles globally and less than 14% of senior executive AI roles. Women make up approximately 26% of the U.S. STEM workforce and 24% in core tech fields. In Big Tech leadership specifically: Amazon 29% women, Meta 34%, Apple 31%, Google 28%, Microsoft 26%. The Stanford AI Index 2025 found that the AI workforce "has experienced little change in its makeup over the past decade" and remains "predominantly male and lacking in diversity regarding race, ethnicity, sexual orientation, and gender identity."

Race and Intersectionality

The share of Black, Latina, and Native American women in the tech workforce dropped from 4.6% to 4.1% between 2018 and 2022, and only 1 in 20 top tech executives is a woman of color. Many organizations hesitate to share workforce diversity data because it "often reflects poorly or shows little progress." AI systems are built by and for a narrow demographic, and the values, assumptions, and blind spots of that demographic are encoded into systems used by billions.

Ideological Ties: Effective Altruism and Longtermism

The EA-AI Lab Pipeline

Effective Altruism, a philosophical movement emphasizing evidence-based approaches to doing good, has been deeply influential in AI safety prioritization. Anthropic was founded by former OpenAI employees concerned with existential risk. Open Philanthropy (Dustin Moskovitz, Cari Tuna) has contributed $336+ million to AI safety research. Multiple AI lab employees are active in EA communities. An Open Philanthropy Technology Policy Fellow was placed with Senator Martin Heinrich, who became one of four senators leading U.S. AI policy—demonstrating direct influence channels between EA organizations and policymakers.

Post-SBF Reckoning

Sam Bankman-Fried's conviction (25 years in prison, $11 billion in forfeiture) damaged the EA movement's credibility but did not dismantle its institutional influence. EA engagement has grown 20-25% since the conviction, though GiveWell giving dropped 51%. The movement's core argument—that preventing future existential risks is more important than addressing present-day suffering—continues to shape how AI companies frame their missions, justify their valuations, and defer accountability for current harms. When a company can claim it is "saving the future of humanity," criticism of current labor practices, environmental damage, or privacy violations can be dismissed as insufficiently ambitious.

Funding Asymmetry

Tech companies invest tens of billions annually in AI capabilities. Government funding for AI research through NSF and DARPA is dwarfed by corporate spending. A substantial majority of AI research funding comes from corporations and billionaires rather than democratic institutions. Big Tech collectively spent $380 billion+ on AI infrastructure in 2025 while 95% of corporate AI projects show no measurable profit—a capital allocation driven by AGI faith rather than demonstrated returns.