Legal and Accountability Voids
When AI causes harm through misdiagnosis, wrongful arrest, or financial loss, who pays? This section surveys the status of AI liability law in the EU, the US, and China; the NDAs and arbitration clauses that silence victims; and attempts to grant legal personhood to AI.
Copyright and AI: The Defining Legal Battle
The New York Times vs. OpenAI
The New York Times Co. v. OpenAI Inc. (Case No. 23-cv-11195, S.D.N.Y.) is the landmark copyright case against generative AI. On April 4, 2025, Judge Sidney Stein denied OpenAI's motion to dismiss direct copyright infringement, contributory infringement, and trademark dilution claims. The fair use question was not resolved and will proceed to trial. In November 2025, Magistrate Judge Ona T. Wang rejected OpenAI's attempt to produce only keyword-filtered ChatGPT logs. On January 5, 2026, Judge Stein affirmed the discovery order, compelling OpenAI to produce 20 million de-identified ChatGPT logs. OpenAI's privacy objection was rejected because users "voluntarily submitted their communications." The earliest fair use decision is expected in summer 2026.
The Anthropic Copyright Settlement
On September 5, 2025, Anthropic agreed to pay $1.5 billion—the largest U.S. copyright settlement in history—compensating authors approximately $3,000 per book for roughly 500,000 books downloaded from pirate libraries (Library Genesis, Pirate Library Mirror). The settlement was preliminarily approved by a federal judge on September 25, 2025. The case (Bartz v. Anthropic) established that AI companies can face massive financial liability for using pirated training data, even without a trial verdict.
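As a quick sanity check on the reported figures, the per-book number follows directly from the settlement total. This is a simplification: actual per-claimant payouts depend on claims administration, attorneys' fees, and per-work eligibility.

```python
# Rough arithmetic behind the reported ~$3,000-per-book payout in the
# Bartz v. Anthropic settlement. Simplified: real distributions depend
# on claims administration, fees, and which works qualify.
total_settlement_usd = 1_500_000_000
covered_books = 500_000  # approximate count of works downloaded from pirate libraries

per_book_usd = total_settlement_usd / covered_books
print(f"${per_book_usd:,.0f} per book")  # $3,000 per book
```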
Getty Images vs. Stability AI (UK)
On November 4, 2025, the UK High Court delivered the first UK judgment addressing copyright infringement in generative AI model training. The judge found that although Stable Diffusion is exposed to copyright-protected images during training, the model does not store the training data itself. Synthetic outputs are generated without direct access to underlying training data, so Getty could not prove outputs were directly derived from its images. However, Stable Diffusion was found to have generated images displaying the Getty watermark, constituting trademark infringement. The judgment did not fully address whether scraping copyrighted images for AI training is lawful under the UK's Copyright, Designs and Patents Act.
The Copyright Landscape
As of October 2025, 51 copyright lawsuits against AI companies were pending. No appellate court has yet resolved whether AI training constitutes fair use. The U.S. Copyright Office issued a three-part report series: Part 2 (January 2025) reaffirmed the "human authorship" requirement—AI-generated content produced solely from a prompt, without further human creative intervention, does not receive copyright protection. Part 3 (May 2025), a 108-page report, analyzed whether training on copyrighted works constitutes fair use and concluded that "making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets goes beyond established fair use boundaries." The Copyright Office favors voluntary licensing over new legislation.
Who Pays When AI Causes Harm?
The Product vs. Service Question
There is no clear legal framework establishing liability for AI-caused harms. Courts are grappling with whether to treat AI as a product (subject to strict product liability) or a service (subject to negligence standards). A Canadian tribunal held Air Canada liable when its chatbot gave false information about bereavement fares, rejecting the airline's argument that the chatbot was a "separate entity." A Hangzhou court ruled that AI developers are not liable for hallucinations, classifying AI output as a service rather than a product. Multiple cases of facial recognition misidentification leading to wrongful arrest (Porcha Woodruff, Robert Williams) resulted in settlements but no clear precedent on developer liability.
The EU Product Liability Directive
The EU's revised Product Liability Directive (PLD), which entered into force on December 8, 2024, explicitly includes software and AI systems as "products." Member states must transpose it into national law by December 9, 2026. The directive imposes strict liability on manufacturers—including developers, importers, and authorized representatives—without requiring proof of negligence. If claimants face "excessive difficulties" proving defectiveness due to technical complexity, courts can presume defectiveness and causation. However, the European Commission withdrew the proposed AI Liability Directive (AILD) from its 2025 Work Programme on February 11, 2025, leaving the revised PLD as the primary EU framework.
The AI LEAD Act (U.S.)
Introduced in September 2025 by Senators Dick Durbin and Josh Hawley, the AI LEAD Act classifies AI systems as products and creates a federal cause of action for product liability claims. The bill was prompted by the deaths of Sewell Setzer III (14, Character.AI) and Adam Raine (16, ChatGPT). The Senate Judiciary Committee held hearings on AI chatbot harm on September 17, 2025. Over 1,000 AI bills were introduced by federal and state legislators during the 2025 session.
The EU AI Act: Implementation and Gaps
Enforcement Timeline
The AI Act entered into force on August 1, 2024. Prohibitions on "unacceptable risk" AI practices became enforceable across all 27 EU member states on February 2, 2025. Banned practices include social scoring, emotion recognition in workplaces and education, untargeted scraping of facial images for recognition databases, predictive crime assessments based on profiling, and subliminal manipulation techniques. Violations carry fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
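The penalty ceiling for prohibited practices is the greater of the two figures, so the cap scales with company size. A minimal arithmetic sketch, using hypothetical turnover figures:

```python
def ai_act_max_fine(global_annual_turnover_eur: float) -> float:
    """Penalty ceiling for prohibited-practice violations under the EU AI Act:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, global_annual_turnover_eur * 7 / 100)

# Hypothetical turnover figures, for illustration only:
print(ai_act_max_fine(100_000_000))    # 35000000.0  (flat floor applies)
print(ai_act_max_fine(2_000_000_000))  # 140000000.0 (7% of turnover exceeds the floor)
```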
Rules for general-purpose AI (GPAI) models entered into application on August 2, 2025. The majority of the Act—high-risk AI systems, transparency rules, and innovation measures—comes into force on August 2, 2026. Each member state must have at least one AI regulatory sandbox. Full scope applies to all risk categories by August 2, 2027.
Enforcement Reality
No enforcement actions under the AI Act have been reported as of early 2026, despite prohibited practices being in effect for nearly a year. Finland became the first EU member state with full enforcement powers on December 22, 2025. The European AI Office can request documentation, conduct evaluations, demand source code access for general-purpose AI models, and impose corrective measures—but has not yet exercised these powers publicly. The gap between legislative ambition and enforcement capacity is the defining feature of the EU's AI governance.
U.S. State AI Legislation: A Patchwork
The 2025 Legislative Surge
In 2025, 1,208 AI-related bills were introduced across all 50 states, and 145 were enacted into law. The National Conference of State Legislatures (NCSL) separately counts 38 states adopting or enacting roughly 100 AI-related measures; tallies differ depending on what each tracker classifies as an AI measure. No federal AI statute exists, creating a patchwork of state-level rules that companies must navigate simultaneously.
Colorado: First Comprehensive State AI Law
Colorado SB 24-205 is the first comprehensive state AI law in the U.S., covering algorithmic discrimination in both disparate-treatment and disparate-impact forms. Originally set to take effect on February 1, 2026, its effective date was delayed to June 30, 2026 by legislation signed by Governor Jared Polis. The Colorado Attorney General has exclusive enforcement authority; there is no private right of action. Developers must notify the AG within 90 days of discovering algorithmic discrimination.
California: Seven New AI Laws
California enacted seven new AI laws in its 2025 session, including the Transparency in Frontier Artificial Intelligence Act (TFAIA), effective January 1, 2026. Employment discrimination regulations effective October 1, 2025 make it unlawful to use any "automated-decision system" that discriminates based on protected traits. Requirements include meaningful human oversight, bias testing, and 4-year record retention.
Illinois: Disclosure Requirements
Illinois HB 3773 amends the Illinois Human Rights Act to prohibit employer use of AI that discriminates against protected classes, and requires employers to disclose when they use AI for employment decisions. It takes effect January 1, 2026. Separately, a 2025 Trump executive order sought to preempt state AI regulations, but the constitutional authority for such preemption remains contested.
Deepfake Legislation: The First Wave
The TAKE IT DOWN Act
Signed by President Trump on May 19, 2025, the TAKE IT DOWN Act prohibits knowingly publishing non-consensual intimate images, including AI deepfakes. Platforms must remove flagged content within 48 hours of a valid request and must establish removal processes by May 19, 2026. The Act does not preempt state law. It is the first federal legislation specifically targeting AI-generated non-consensual intimate imagery.
State Deepfake Laws
As of mid-2025, 45 states had enacted laws addressing sexually explicit deepfakes, up from 32 at the start of 2025. As of January 2026, 28 states had enacted laws specifically addressing deepfakes in political communications. Tennessee's ELVIS Act (Ensuring Likeness Voice and Image Security Act) specifically prohibits using AI to mimic a person's voice without permission. A California deepfake law was struck down by a federal judge on First Amendment grounds, illustrating the constitutional tension between regulating harmful AI content and protecting free expression.
GDPR and AI: EUR 6.7 Billion in Fines
Cumulative Enforcement
Since May 2018, 2,679 GDPR fines totaling over EUR 6.7 billion have been issued, with EUR 1.2 billion levied in 2025 alone. Daily breach notifications exceeded 400 for the first time since the regulation took effect in 2018, reflecting both intensified enforcement and persistent violations.
AI-Specific Fines
OpenAI was fined EUR 15 million by Italy's Garante (December 2024) for processing personal data without appropriate legal basis for ChatGPT training, failure to meet transparency obligations, lack of age verification for minors, and failure to report a March 2023 breach that exposed ChatGPT Plus subscribers' chat histories and payment information during a 9-hour window. OpenAI was also required to conduct a 6-month transparency campaign across Italian media.
Clearview AI was fined EUR 30.5 million by the Dutch DPA (May 2024) for illegal facial recognition data collection, with an additional EUR 5.1 million penalty for continued non-compliance. Clearview has been fined seven times by various EU DPAs since 2020, totaling over EUR 100 million. Meta was fined EUR 251 million by the Irish DPC (December 2024) for security failures stemming from a 2018 data breach. LinkedIn was fined EUR 310 million by the Irish DPC (October 2024) for misuse of user data for behavioral analysis and targeted advertising.
FTC Enforcement: Operation AI Comply
The First Wave of AI Enforcement
The FTC launched Operation AI Comply in September 2024 with five simultaneous enforcement actions. DoNotPay paid $193,000 and was barred from claiming its tools can replace licensed lawyers. Ascend Ecom, which allegedly defrauded consumers of at least $25 million with false AI-powered income claims, was permanently shut down. FBA Machine, accused of consumer fraud exceeding $15 million, was permanently shut down in July 2025. Combined alleged consumer losses from the initial five cases exceeded $35 million.
The Trump Administration Reversal
On December 22, 2025, the FTC set aside its own consent order against Rytr LLC, an AI writing tool that enabled generation of fake reviews, finding under the Trump Administration's July 2025 AI Action Plan that the original complaint "unduly burdened AI innovation." The reversal signals a fundamental shift in federal AI enforcement posture. Other enforcement actions continued: IntelliVision Technologies was barred from making unsubstantiated accuracy and bias claims about facial recognition software that had been tested on only about 100,000 faces, and Workado LLC advertised its AI content detector as "98% accurate" when FTC testing showed roughly 53% accuracy.
NDAs and Arbitration: Silencing Victims
Worker Silence
Data labelers and content moderators who experience traumatic working conditions are typically bound by NDAs that prevent them from speaking publicly about their experiences. This shields companies from public accountability while workers suffer PTSD, depression, and other psychological injuries from exposure to violent, sexual, and extremist content at scale.
User Agreements
When AI systems cause harm to users, terms of service typically require arbitration and prohibit class action lawsuits, limiting victims' ability to seek redress or share their stories. The Character.AI terms of service, for instance, included mandatory arbitration provisions that the company attempted to enforce even in cases involving minors' deaths.
The AI Whistleblower Protection Act
Introduced in May 2025 by Senate Judiciary Chair Chuck Grassley, the AI Whistleblower Protection Act (S.1792) would protect employees who report AI violations of federal law or failures to respond to dangers AI may pose to public safety, health, or national security. Whistleblowers who experience retaliation can submit grievances to the Department of Labor and pursue remedies through federal courts, including job restoration, twice the amount of back wages owed, and compensation for damages. The bill has bipartisan support from Senators Coons, Blackburn, Klobuchar, Hawley, and Schatz. As of February 2026, it remains pending in Congress.
International AI Governance: Treaties Without Teeth
Council of Europe Framework Convention
The Council of Europe Framework Convention on AI was adopted by the Committee of Ministers on May 17, 2024, and opened for signature in September 2024. Initial signatories include Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, Canada, and the European Union. It entered into force on November 1, 2025 after reaching the 5-ratification threshold. However, it includes exemptions for private sector regulation and national security uses of AI, which critics say render it largely symbolic.
UN Global Dialogue on AI Governance
On August 26, 2025, the UN General Assembly adopted Resolution A/RES/79/325, establishing an Independent International Scientific Panel on AI (~40 experts) and the Global Dialogue on AI Governance (annual convening). The Global Dialogue officially launched on September 25, 2025, with over 100 countries participating. But no international body has meaningful enforcement power over AI development. Major powers resist binding constraints, AI development moves faster than diplomatic processes, and enforcement would require intrusive inspections of proprietary technology.
Legal Personhood for AI
No country has granted legal personhood to AI systems. Some legal scholars have argued for limited AI legal personality to clarify liability. Most jurisdictions have rejected these proposals as premature and dangerous—any move in this direction could allow corporations to shift responsibility to AI systems rather than human decision-makers. The European Parliament has discussed AI legal personality but has not enacted legislation.