Corporate Power

Corporate Control and the Data Land Grab

AI labs including OpenAI, Anthropic, and Google DeepMind hold deep military and intelligence contracts, and user data is routinely used to fine-tune models. This section also examines health data acquisition and lobbying against open-source mandates.

AI Labs and the Pentagon: The $800 Million Frontier

The Pentagon's Frontier AI Program

In July 2025, the Department of Defense awarded contracts worth up to $200 million each to four companies (OpenAI, Anthropic, Google, and xAI) under the Pentagon's Frontier AI program, a combined ceiling of $800 million. The contracts cover "prototype frontier AI" for both warfighting and enterprise domains. OpenAI launched "OpenAI for Government" alongside the contract. Google agreed to let the DoD use Gemini for "all lawful purposes" on unclassified systems. Elon Musk's xAI agreed to unrestricted lawful use.

OpenAI's Military Pivot

OpenAI's trajectory from nonprofit AI safety lab to Pentagon contractor illustrates the industry's transformation. The company previously prohibited military use explicitly. On January 10, 2025, OpenAI softened those restrictions, and within months secured a $200 million Pentagon contract. OpenAI also partnered with defense-tech firm Anduril to help defend against drone attacks, directly integrating its AI capabilities into kinetic military systems.

Anthropic: The Safety Holdout Under Pressure

Anthropic is the only company to have deployed its models on the Pentagon's classified networks while maintaining restrictions on mass surveillance and fully autonomous weaponry. As of February 2026, its contract is "under review" because Anthropic refuses to lift those guardrails. The Pentagon's CTO publicly stated that it is "not democratic" for Anthropic to limit military uses of Claude, and the Pentagon has threatened to cut Anthropic off entirely; OpenAI, Google, and xAI all agreed to remove consumer-facing guardrails for military use.

Google and Project Nimbus

Over 200 DeepMind employees signed a letter in 2025 asking Google to cut military contracts, specifically citing Project Nimbus—a $1.2 billion contract providing AI and cloud services to the Israeli military. Google has not responded to these demands. Israeli weapons firms are required by law to purchase cloud services from either Google or Amazon, creating a structural link between U.S. tech companies and Israeli military operations.

Market Concentration and Monopoly Power

NVIDIA's GPU Monopoly

NVIDIA holds an estimated 85-90% of the data center GPU market, making it the gatekeeper of AI development. The DOJ issued subpoenas investigating exclusionary practices, including vendor lock-in (better pricing for exclusive NVIDIA customers), bundling (tying GPU purchases to proprietary InfiniBand networking hardware), and the Run:ai acquisition. The EU has raised concerns over the "CUDA moat": the AI industry's dependence on NVIDIA's proprietary software ecosystem, which creates a formidable barrier to entry. China declared NVIDIA in violation of its anti-monopoly laws in September 2025. NVIDIA reached a $5 trillion market capitalization by January 2026.

Cloud Oligopoly

As of Q3 2025, the "Big Three" cloud providers control roughly two-thirds of global cloud infrastructure: AWS (~30%), Microsoft Azure (~20%), and Google Cloud (~13%), together about 63% of a $107 billion quarterly market. No other single provider holds a double-digit share. The three planned to spend approximately $240 billion on data centers and AI infrastructure in 2025, including $87 billion in a single quarter. GPU-as-a-Service revenues grew more than 200% year-over-year in Q3 2025.

This concentration means that every AI company, every AI researcher, and every organization deploying AI systems depends on infrastructure controlled by three companies that are simultaneously competitors in the AI market. The cloud providers can observe what AI companies build on their platforms, adjust pricing to favor their own AI products, and use infrastructure control as competitive leverage.
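The arithmetic behind these concentration figures is easy to check. The sketch below uses only the approximate shares quoted above; the Herfindahl-Hirschman figure is a lower bound because it counts only the top three firms:

```python
# Back-of-envelope check on the cloud-infrastructure figures quoted above.
# Shares and market size are the approximate values from the text.
market_q3_2025 = 107e9  # global quarterly cloud infrastructure spend, USD

shares = {"AWS": 0.30, "Microsoft Azure": 0.20, "Google Cloud": 0.13}

combined = sum(shares.values())
print(f"Big Three combined share: {combined:.0%}")  # ~63%, roughly two-thirds

for name, s in shares.items():
    print(f"{name}: ~${s * market_q3_2025 / 1e9:.0f}B per quarter")

# Lower bound on the Herfindahl-Hirschman Index using only the top three
# firms (shares expressed in percent, squared and summed; the true HHI,
# which includes every provider, is somewhat higher).
hhi_lower_bound = sum((s * 100) ** 2 for s in shares.values())
print(f"HHI lower bound: {hhi_lower_bound:.0f}")
```

Even this top-three-only lower bound lands well inside the range antitrust regulators treat as a concentrated market.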

AI Company Valuations: The Trillion-Dollar Race

OpenAI

OpenAI's valuation escalated from $300 billion (late 2024) to $500 billion (secondary share sale, October 2025). The company completed its for-profit recapitalization in October 2025, with the OpenAI Foundation now owning approximately 25% of the new for-profit public benefit corporation. OpenAI is currently raising a round that could close at roughly $100 billion in fresh capital and has signed $1.4 trillion worth of infrastructure deals. The company also deleted the word "safely" from its mission statement during its corporate restructuring.

Anthropic

Anthropic's valuation climbed from $183 billion (Series F, 2025) to a $350 billion term sheet (January 2026) to a closed round at $380 billion on February 12, 2026, in which it raised $30 billion led by Coatue and GIC with Microsoft and NVIDIA participating, the second-largest private financing round in tech history. Annualized revenue is projected at $7 billion for 2025 and $26 billion for 2026. The company has engaged Wilson Sonsini to prepare for an IPO.

xAI

Elon Musk's xAI reached a $200 billion valuation after raising over $10 billion in 2025, and was targeting $230 billion in an NVIDIA-led raise. The top 10 AI mega-rounds of 2025 totaled approximately $84 billion, an unprecedented concentration of capital in a single technology sector, funded almost entirely by private investors rather than democratic institutions.

User Data Extraction Despite Privacy Policies

OpenAI's GDPR Fine

In December 2024, the Italian Data Protection Authority fined OpenAI EUR 15 million—the first generative AI fine under GDPR. Violations included no legal basis for processing personal data used to train ChatGPT, failure to meet transparency obligations, inadequate age verification for minors, and failure to report a March 2023 breach that exposed chat histories and payment data of ChatGPT Plus subscribers. OpenAI appealed, calling the fine "disproportionate."

Reddit vs. Anthropic

In June 2025, Reddit sued Anthropic for allegedly scraping "millions, if not billions" of Reddit posts and comments between December 2021 and October 2024 to train Claude, without authorization. The complaint includes deleted posts—content users had intentionally removed. The case includes five causes of action: breach of contract, unjust enrichment, trespass to chattels, tortious interference, and unfair competition. Reddit filed a separate lawsuit against Perplexity AI and three data-scraping companies in October 2025.

Meta's Data Practices

Brazil's data protection authority suspended Meta's processing of personal data for AI training, establishing a global precedent for enforcing consent requirements. In June 2025, BBC News reported that Meta AI users' prompts and chat responses were inadvertently made publicly visible in a "Discover" feed, often without users' awareness. Europe recorded EUR 2.3 billion in privacy penalties in 2025—a 38% year-over-year increase.

Consent Gaps

Most users do not understand what data is being collected or how it will be used. "Consent" is typically obtained through lengthy terms of service that no one reads, not through genuine informed agreement. AI companies routinely use user interactions as training data, and privacy policies are crafted to maximize data collection rights while minimizing apparent intrusiveness. The gap between what companies disclose and what they actually do with data has been documented in multiple government investigations.

Lobbying and Political Influence

Washington D.C.: $400,000 Per Day

The seven largest tech and AI companies spent a combined $50 million on federal lobbying in the first nine months of 2025, an average of roughly $400,000 per day Congress was in session. Over 500 organizations lobbied Congress and the White House on AI policy in the first half of 2025, nearly double the roughly 268 that did so in 2023. Meta set a record with $13.8 million in H1 2025, its largest half-year total since it began hiring federal lobbyists in 2009. NVIDIA's lobbying spending surged 388% in H1 2025.
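The per-day figure back-solves cleanly from the totals quoted above; a minimal sketch (both inputs are the reported figures, and the session-day count is merely what they jointly imply):

```python
# Back-solving the "per day in session" lobbying figure from the totals above:
# $50M over nine months at ~$400,000 per session day implies ~125 session days.
total_lobbying = 50e6      # combined federal lobbying spend, Jan-Sep 2025, USD
per_session_day = 400e3    # reported average per day Congress was in session

implied_session_days = total_lobbying / per_session_day
print(f"Implied days in session: {implied_session_days:.0f}")  # 125
```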

Brussels: EUR 151 Million Annually

Tech industry spending on EU lobbying has reached EUR 151 million annually, up more than 50% over four years. U.S. tech giants and the Trump administration actively pressured Brussels to water down the EU AI Act, arguing that strict rules would drive AI development to less-regulated countries. The lobbying has been effective: several proposed provisions of the AI Act were weakened during the legislative process under industry pressure.

The Revolving Door

The Trump administration launched "Tech Force" through OPM in December 2025: an initial cohort of 1,000 tech workers serving two-year government stints, with approximately 25 partner companies including Microsoft, Palantir, and xAI. These companies committed to considering program "alumni" for future employment, formalizing the revolving door. Anthropic hired former Biden NSC official Tarun Chhabra to lead national security policy, along with former White House economic aide Elizabeth Kelly. Anthropic's backers gave $174 million to Democrats before the company was approved as a federal AI vendor.

Antitrust Actions and Market Power

Microsoft-OpenAI Antitrust Class Action

Filed in October 2025, Bryant et al. v. Microsoft Corp. alleges that Microsoft's Azure exclusivity clause with OpenAI kept ChatGPT prices at 100 to 200 times competitors' rates. When Microsoft relaxed the exclusivity in June 2025 and OpenAI could purchase compute from Google, prices immediately dropped 80%, demonstrating the monopolistic pricing power the arrangement had enabled. Plaintiffs seek treble damages and the dismantling of Microsoft's control over OpenAI's compute infrastructure.

OpenAI's For-Profit Conversion

OpenAI initially attempted to shed nonprofit control entirely. Elon Musk sued, arguing his $44 million donation was contingent on nonprofit status. A coalition of California nonprofits urged the state Attorney General to investigate. OpenAI reversed course in May 2025, then completed a recapitalization in October 2025 where the nonprofit OpenAI Foundation retained approximately 25% ownership of the for-profit entity—a compromise that critics argue gives the nonprofit insufficient control over an organization it created.

FTC and DOJ Investigations

The FTC issued subpoenas to Microsoft, Google, Amazon, OpenAI, and Anthropic examining cloud computing and AI market concentration. The DOJ leads the investigation into NVIDIA for anticompetitive practices. The EU fined Google EUR 2.95 billion for adtech dominance in 2025 and launched investigations into Meta's AI practices. However, the new FTC leadership under the Trump administration is expected to take a more "restrained" approach to AI antitrust enforcement.

Open-Source vs. Proprietary: The False Choice

Meta's Open-Source Reversal

Meta's Llama models became the backbone of national AI initiatives in France, India, and the UAE. However, in December 2025, Meta announced a pivot away from open source: proprietary models codenamed "Avocado" (text/code) and "Mango" (visual media) are targeted for release in Q1 2026. Meta raised its 2025 capex guidance to $72 billion to build walled-garden infrastructure and acquired 49% of Scale AI for $14.3 billion. The move directly contradicts CEO Mark Zuckerberg's public stance that open-source models were "closing the gap" with proprietary ones.

The Safety Argument as Market Protection

The industry argues that open-source AI requirements threaten national security, that audits are too burdensome, and that regulation will slow innovation and cede ground to China. These arguments have successfully delayed meaningful regulation. Critics note that "safety" concerns about open-source AI conveniently align with the commercial interests of companies that want to maintain proprietary advantages. The actual safety record suggests that open-source models receive more security scrutiny from the broader research community than proprietary models examined only by their creators.

Safety Departures and Commercial Pressure

The Exodus of Safety Researchers

Mrinank Sharma, Anthropic's head of the Safeguards Research Team, resigned in February 2026 with a public letter warning "the world is in peril." He stated employees "constantly face pressures to set aside what matters most" and described systemic pressures—economic, geopolitical, institutional—that prioritize short-term growth over long-term risk mitigation. Other researchers including Harsh Mehta and Behnam Neyshabur also departed. CNN reported a pattern of AI researchers "sounding the alarm on their way out the door" at both OpenAI and Anthropic.

Venture Capital vs. Safety

Anthropic's $380 billion valuation on a $30 billion raise prices in revenue growth from $7 billion (2025) to a projected $26 billion (2026). The pressure to deliver returns at this scale creates structural incentives to prioritize commercialization over safety research. VC firms assess safety primarily through the lens of legal compliance risk, not alignment research. When safety requirements conflict with deployment timelines and revenue targets, commercial pressure consistently wins.
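A rough sketch of the multiples implied by these figures (all inputs are the approximate numbers quoted above; the multiples are illustrative arithmetic, not a valuation model):

```python
# Implied multiples behind the "VC vs. safety" pressure described above.
# All inputs are the approximate figures quoted in the text.
valuation = 380e9     # Anthropic post-money valuation, Feb 2026, USD
revenue_2025 = 7e9    # annualized revenue, 2025
revenue_2026 = 26e9   # projected annualized revenue, 2026

growth = revenue_2026 / revenue_2025 - 1       # year-over-year growth rate
forward_multiple = valuation / revenue_2026    # price vs. projected revenue
trailing_multiple = valuation / revenue_2025   # price vs. current revenue

print(f"Implied YoY revenue growth:  {growth:.0%}")
print(f"Forward revenue multiple:   {forward_multiple:.1f}x")
print(f"Trailing revenue multiple:  {trailing_multiple:.1f}x")
```

The trailing multiple, several times the forward one, is the gap the projected growth must close, which is one way to quantify the commercial pressure the resignation letters describe.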