Labor Extraction and the Illusion of Automation
Gig platforms like Uber, Amazon, and Deliveroo use AI for algorithmic management. This section examines algorithmic wage theft, data labeling exploitation at Sama and Remotasks, and the hidden workforce behind AI.
White-Collar Job Displacement
The Scale of AI-Driven Layoffs
Challenger, Gray & Christmas reported 54,836 job cuts explicitly attributed to AI in 2025, part of a broader 1,206,374 total announced cuts representing a 58% increase over 2024. The technology sector led private-sector cuts with 154,445 positions eliminated. Over 100,000 employees were affected by AI-driven layoffs in 2025, and more than 22,000 had already been affected in the first weeks of 2026.
Specific Company Announcements
Amazon eliminated approximately 14,000 corporate roles in October 2025, citing AI efficiency, then announced cuts to an additional 16,000 positions, bringing the total to roughly 30,000, or approximately 10% of its office workforce. Microsoft cut around 15,000 jobs, with AI central to reshaping its operations; Microsoft AI CEO Mustafa Suleyman stated in February 2026 that AI could automate all work involving "sitting down at a computer" within 12-18 months.
Salesforce cut 4,000 customer support roles, reducing support staff from 9,000 to approximately 5,000; CEO Marc Benioff stated that 50% of queries are now resolved by AI agents. Chegg, the education technology company, laid off 22% of its workforce in May 2025, then another 45% (388 employees) in October 2025, as its stock collapsed 99% from $113.51 to approximately $0.50 and its market capitalization fell from $14.7 billion to $156 million, a decline largely attributed to ChatGPT replacing its core tutoring business.
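The Chegg decline figures can be sanity-checked directly from the cited share prices and valuations; a brief Python sketch, using only the numbers stated above:

```python
# Verifying the Chegg decline percentages from the figures cited in the text.

def pct_decline(start: float, end: float) -> float:
    """Percentage decline from start value to end value."""
    return (start - end) / start * 100

share_drop = pct_decline(113.51, 0.50)                  # stock price, dollars
cap_drop = pct_decline(14_700_000_000, 156_000_000)     # market cap, dollars

print(f"Share price decline: {share_drop:.1f}%")   # ~99.6%, i.e. the "99%" cited
print(f"Market cap decline:  {cap_drop:.1f}%")     # ~98.9%
```

Both figures are consistent with the roughly 99% collapse described.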
Duolingo declared itself "AI-first," cutting 10% of contractors followed by another round of layoffs, stating it would "gradually stop using contractors to do work that AI can handle." IBM CEO Arvind Krishna announced a hiring freeze on approximately 7,800 back-office roles expected to be replaced by AI within five years.
The Customer Service Boomerang
Klarna replaced approximately 700 customer service positions with an OpenAI-powered chatbot, claiming AI handled 75% of customer interactions across 2.3 million conversations in 35+ languages. CEO Sebastian Siemiatkowski declared AI had shrunk the workforce by 40%. By mid-2025, Klarna reversed course and began rehiring human agents after customer satisfaction dropped and complaints increased. Gartner predicts that by 2027, half of companies that cut customer service staff due to AI will rehire, as the gap between AI capability and customer expectations proves larger than anticipated.
Sectors at Risk
Anthropic CEO Dario Amodei warned AI could eliminate half of entry-level white-collar roles. Accounting, basic legal drafting, contract review, compliance monitoring, junior software development, financial modeling, and paralegal work are identified as directly exposed. The entertainment and media sector saw layoffs increase 18% in 2025 with over 17,000 jobs eliminated, while the news sector suffered 2,254 job cuts.
Creative Industry Under Siege
Writers' and Actors' Strike Outcomes
The 2023 WGA strike secured a guarantee that only human writers are credited as authors; AI-generated material cannot be treated as source material and cannot receive writing credit. SAG-AFTRA's contract established consent and compensation requirements for digital replicas, 48-hour notice requirements, and residuals for digital replica use. However, neither union was able to secure protections against members' works being used to train AI systems in the 2023 contracts.
New negotiations are underway: SAG-AFTRA began talks with AMPTP on February 9, 2026, and WGA talks are set for March 16, 2026. AI training data is expected to be the central issue in both negotiations, as studios have continued using copyrighted creative works to train AI models that may eventually replace the workers who created those works.
AI Art and Copyright Lawsuits
Over 50 AI copyright infringement lawsuits have been filed in U.S. federal courts, with approximately 30 active cases after consolidations and settlements. In Andersen v. Stability AI, Judge William Orrick denied motions to dismiss direct and induced copyright infringement claims against Stability AI and Midjourney, allowing discovery to proceed. Anthropic settled the Bartz case for $1.5 billion covering approximately 500,000 pirated works—roughly $3,000 per work.
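The per-work figure in the Anthropic settlement follows directly from the stated terms; as simple arithmetic:

```python
# Arithmetic behind the per-work figure in the Bartz settlement (from the text).
settlement_total = 1_500_000_000   # $1.5 billion settlement
works_covered = 500_000            # approximate number of pirated works covered

per_work = settlement_total / works_covered
print(f"${per_work:,.0f} per work")  # $3,000 per work
```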
In the music industry, the RIAA (Universal, Sony, Warner) sued Suno and Udio in June 2024, seeking up to $150,000 per infringing song. Warner Music Group settled with both companies in November 2025, entering licensing deals for AI music platforms launching in 2026.
Gig Platforms and Algorithmic Management
Amazon Worker Surveillance
Amazon uses AI-powered cameras (Netradyne) in delivery vans to monitor driver behavior. An 18-month Senate investigation found Amazon workers nearly twice as likely to be injured as workers at other warehouses. OSHA found Amazon's injury rate at 6.5 per 100 employees—71% higher than the 3.8 rate at all other non-Amazon warehouses with over 1,000 employees. Amazon's injury rate was triple that of Walmart in 2023.
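The "71% higher" claim follows from the two OSHA rates cited; a quick check:

```python
# Comparing the OSHA-recorded injury rates cited above.
amazon_rate = 6.5   # injuries per 100 employees at Amazon warehouses
other_rate = 3.8    # rate at non-Amazon warehouses with over 1,000 employees

excess = (amazon_rate - other_rate) / other_rate * 100
print(f"Amazon's rate is {excess:.0f}% higher")  # 71% higher
```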
A University of Illinois Chicago survey found 41% of Amazon workers report being injured on the job, and 51% of those employed more than three years have been injured. Workers perform twisting, bending, and long reaches as much as 9 times per minute. In December 2024, OSHA and Amazon entered a corporate-wide settlement resolving 10 ergonomics cases across 10 facilities. The U.S. Attorney for the Southern District of New York is investigating whether Amazon engaged in a fraudulent scheme to hide true injury rates.
Gig Worker Exploitation: The Gig Trap
Human Rights Watch's May 2025 report "The Gig Trap" examined seven companies—Amazon Flex, DoorDash, Favor, Instacart, Lyft, Shipt, and Uber—based on interviews with 95 platform workers. Texas gig workers earned nearly 30% below the federal minimum wage, with net pay as low as $5.12 per hour after expenses (against a federal minimum of $7.25 and an estimated Texas living wage of approximately $17). Six of seven companies use opaque algorithms to assign jobs and determine wages, and workers do not know their pay until after completing jobs.
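The "nearly 30% below the federal minimum wage" figure can be reproduced from the cited numbers; a brief check (the $17 living wage is the report's approximation):

```python
# Reproducing the wage-gap figures from "The Gig Trap" as cited above.
net_hourly = 5.12     # lowest observed net pay per hour, after expenses
federal_min = 7.25    # federal minimum wage
tx_living = 17.00     # estimated Texas living wage (approximate)

below_min = (federal_min - net_hourly) / federal_min * 100
share_of_living = net_hourly / tx_living * 100
print(f"{below_min:.0f}% below the federal minimum")   # ~29%, "nearly 30%"
print(f"{share_of_living:.0f}% of the living wage")    # ~30%
```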
These companies generate enormous revenue from this labor model: Uber reported $43.9 billion in revenue (2024) and $9.8 billion net income with a market capitalization of $169.4 billion. DoorDash reported $10.72 billion revenue and was valued at $81 billion. The disparity between corporate valuations and worker compensation illustrates how algorithmic management extracts maximum value from labor while minimizing returns to workers.
Deliveroo and Food Delivery Apps
Algorithmic systems determine which workers get orders using metrics that are opaque to workers. Workers report being "deactivated" (fired) by algorithms without explanation or recourse. The algorithms can change pay rates dynamically without notice, and workers who decline low-paying orders see their future order allocation reduced—a form of algorithmic punishment for exercising the independence that "independent contractor" status supposedly guarantees.
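No platform publishes its dispatch algorithm, so the following is a purely hypothetical sketch of how acceptance-weighted allocation could produce the punishment dynamic described; the `dispatch_priority` weighting and all names are invented for illustration:

```python
# Hypothetical sketch only: platform dispatch algorithms are proprietary.
# This illustrates the feedback loop described above, in which declining
# low-paying orders reduces a worker's future order allocation.
from dataclasses import dataclass

@dataclass
class Courier:
    name: str
    acceptance_rate: float  # fraction of recent offers accepted (0..1)

def dispatch_priority(courier: Courier, distance_km: float) -> float:
    """Higher score = offered the order first. Weighting by acceptance rate
    penalizes couriers who decline offers, however low-paying those offers."""
    return courier.acceptance_rate / (1.0 + distance_km)

couriers = [Courier("A", 0.95), Courier("B", 0.60)]
# Same distance to pickup: the courier who declined more offers ranks lower,
# even though declining is nominally a contractor's right.
ranked = sorted(couriers, key=lambda c: dispatch_priority(c, 2.0), reverse=True)
print([c.name for c in ranked])  # ['A', 'B']
```

The point of the sketch is that the penalty need not be explicit: a single innocuous-looking weighting term is enough to make declining orders economically costly.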
Algorithmic Wage Theft and Task Fragmentation
Time Off Task (TOT)
Amazon warehouses track every minute of "time off task" via handheld scanners. Bathroom breaks, talking to colleagues, or navigational errors count against workers. Accumulating too much TOT results in automatic termination—wage theft through algorithmic time theft. The system treats human biological needs as inefficiencies to be eliminated, measuring workers against a standard of continuous machine-like productivity.
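Amazon's actual TOT system is not public; the following hypothetical sketch only illustrates the threshold mechanism described, with an invented 30-minute daily threshold:

```python
# Hypothetical sketch of threshold-based "time off task" flagging.
# Amazon's real system is not public; this only makes the described
# mechanism concrete: gaps between scans accumulate regardless of cause.

def flag_for_termination(scan_gaps_minutes, daily_threshold=30):
    """Sum every gap between handheld-scanner events over a shift.
    Gaps count as 'off task' whether the cause was a bathroom break,
    a conversation, or a navigational error."""
    total_tot = sum(scan_gaps_minutes)
    return total_tot, total_tot > daily_threshold

tot, flagged = flag_for_termination([4, 7, 12, 9])  # 32 minutes of gaps
print(tot, flagged)  # 32 True
```

Because the system sees only gaps, not reasons, any such threshold mechanically converts biological necessity into a disciplinary record.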
Task Fragmentation
AI systems break complex jobs into simple, repetitive micro-tasks that can be done by less-skilled, lower-paid workers. This deskilling reduces worker bargaining power and wages while making work more monotonous and psychologically harmful. The strategy is explicit: rather than paying one skilled worker a living wage, companies use AI to decompose work into components cheap enough to outsource to the lowest bidder anywhere in the world.
Opacity as Control
Workers cannot see how algorithms determine their pay or assignments, making it impossible to challenge unfair treatment or understand how to improve performance. This opacity is not a bug—it is a deliberate design choice that serves as a tool of labor control. Transparency would enable collective bargaining and legal challenges; opacity prevents both.
Data Labeling and Content Moderation Exploitation
Scale AI / Remotasks
Remotasks, Scale AI's subsidiary, shut down operations in Kenya on March 7, 2024, without proper notice, also exiting Nigeria and Pakistan. Workers in Rwanda and South Africa were also blocked from accessing the platform. Kenyan workers earned approximately $2 per hour versus $20 for American counterparts performing identical tasks. Some micro-tasks paid as little as $0.01 each. Workers reported accounts closed before payday with claims of "policy violations." The Oxford Internet Institute gave Remotasks a score of 1 out of 10 on fair labor practices.
In contrast, Scale AI's Outlier subsidiary lists pay of $30-$50 per hour for expert work including biology and coding—a stark illustration of how the same company applies radically different compensation standards based on workers' geographic location and economic vulnerability.
Sama and OpenAI
Workers labeling data for OpenAI's ChatGPT through Sama in Kenya reported traumatic working conditions, including exposure to graphic violence, child sexual abuse material, and self-harm content. OpenAI paid Sama $12 per hour per worker; Kenyan workers received only $2. Daniel Motaung, recruited from South Africa to Nairobi in 2019, alleges he was not told the nature of the work before arrival—his first assignment was moderating a beheading video. Motaung was fired shortly after attempting to form an employee union.
In September 2024, 185 former Facebook content moderators working through Sama won the right to take their mass firing case to trial after courts rejected Meta's appeal. A 2025 Equidem survey of 76 workers from Colombia, Ghana, and Kenya reported 60 independent incidents of psychological harm including anxiety, depression, panic attacks, PTSD, and substance dependence.
The Hidden Workforce
Millions of workers in the Global South are estimated to power AI systems through invisible labor: labeling images, transcribing audio, moderating content, and providing the reinforcement learning from human feedback (RLHF) that makes AI systems usable. They receive minimal pay and no recognition, and many suffer psychological trauma from exposure to the worst content the internet produces. The AI industry's dependence on this exploited labor force is systematically concealed from users and investors who prefer the narrative that AI systems are purely technological achievements.
Worker Organizing and Union Responses
Hollywood Unions and AI
SAG-AFTRA ratified a new Interactive Media Agreement in July 2025 with AI digital replica consent and disclosure requirements. The current TV/Theatrical Agreement covers through June 30, 2026, with new negotiations beginning February 2026. WGA's contract expires May 2026 with talks set for March. In both negotiations, AI training data protections—preventing studios from using actors' performances and writers' scripts to train AI systems that replace them—are expected to be the primary demand.
Tech Worker Organizing
ZeniMax (Microsoft-owned) QA workers announced a tentative contract agreement in May 2025—Microsoft's first-ever U.S. union contract. CWA, OPEIU, and the Teamsters launched coordinated tech sector organizing campaigns. Mass layoffs at Google, Meta, Microsoft, and Amazon in 2023-2024 despite record profits have driven unionization interest among tech workers who previously considered themselves immune to labor displacement.
Gig Worker Legislation
A bill introduced in July 2025 aims to curb exploitation of U.S. gig workers by establishing minimum pay standards and algorithmic transparency requirements. The International Labour Organization called in November 2025 for strengthened global rules to protect gig workers from algorithmic management. Worker-organizing platforms like Gig2Gether enable workers across multiple platforms to share earnings data and build solidarity against opaque compensation practices.
Net Job Displacement: The Structural Debate
Corporate Claims vs. Reality
Companies consistently claim AI "augmentation" makes workers more productive rather than replacing them, and that automation creates new jobs. These claims are used to deflect regulatory scrutiny and worker organizing. The reality is more nuanced: AI does create some new roles (AI trainers, prompt engineers, automation supervisors), but these positions require different skills, are far fewer in number, and are concentrated in high-income countries, while the positions eliminated span the global workforce.
McKinsey and Economic Projections
McKinsey projects AI could replace up to 45% of U.S. jobs, while other estimates suggest that for every robot deployed, multiple jobs are eliminated. The new wave of generative AI threatens knowledge workers who were previously considered safe from automation—lawyers, accountants, journalists, programmers, and analysts. Unlike previous waves of automation that primarily affected manual and routine cognitive work, generative AI targets the creative and analytical tasks that define professional work.
Universal Basic Income Debates
The prospect of mass AI-driven displacement has revived UBI discussions. Sam Altman's OpenResearch study gave 3,000 participants $1,000 per month for three years; recipients worked 1.3-1.4 fewer hours per week and directed extra spending to essentials. In early 2026, UK Minister for Investment Lord Jason Stockwood told the Financial Times the government is weighing UBI introduction to support workers in AI-threatened industries. Proposals for "robot taxes" or automation taxes that create a direct link between AI deployment and social support funding are gaining traction in multiple jurisdictions.