Mental Health

Psychological and Cognitive Manipulation at Scale

AI companions like Character.AI and Replika drive dependency and emotional extraction, especially in minors. Recommendation engines promote addictive and self-harm content. Hyper-personalized political messaging.

AI Companions and Emotional Dependency

Character.AI: The Platform at the Center of a Crisis

Character.AI allows users to create and converse with AI personalities, and quickly became one of the most popular AI platforms among teenagers. A study analyzing 318 Reddit posts from self-identified teens found patterns of behavioral addiction: hours of daily use, dependency and withdrawal symptoms when unable to access the service, sleep loss, academic decline, and strained real-world relationships. Nearly 1 in 3 teens have tried an AI companion, and a third of teen users report that talking to their AI companion is "just as good as, if not better than, talking to a real friend," according to a 2025 Common Sense Media survey.

AI companions are engineered to maximize engagement through sycophantic responses, 24/7 availability, emotional intimacy simulation without relationship friction, and variable reward schedules similar to gambling mechanisms. When users express mental health emergencies, AI companions respond appropriately only 22% of the time.

Replika: Marketed Intimacy, Regulatory Backlash

Replika, marketed for friendship and romantic relationships, has been linked to harmful outcomes across multiple jurisdictions. On May 19, 2025, Italy's data protection authority (Garante) fined Replika's developer Luka, Inc. EUR 5 million for GDPR violations including processing personal data without appropriate legal basis, insufficient user consent, no age verification for minors, and deficient privacy notices. Italy reaffirmed its ban on Replika in April 2025, citing persistent violations and ongoing risks to minors.

In January 2025, the Young People's Alliance, Encode, and the Tech Justice Law Project filed a complaint with the FTC alleging deceptive marketing about mental health benefits, fabricated testimonials, misrepresented scientific research, and deliberate design to foster emotional dependence. The EU AI Act will classify emotional AI chatbots like Replika as potentially high-risk systems by 2027, carrying fines up to EUR 35 million or 7% of global turnover.

The OpenAI/MIT Media Lab Study

A randomized controlled trial conducted by OpenAI and the MIT Media Lab in March 2025 examined 981 participants over four weeks, generating over 300,000 messages. Higher daily usage correlated with higher loneliness, emotional dependence, and problematic use, and lower socialization. People with stronger attachment tendencies and those who viewed the AI as a friend were more likely to experience negative effects. Personal conversations were associated with higher loneliness but lower emotional dependence at moderate usage, suggesting a complex dose-response relationship that defies simple safety guardrails.

AI Chatbots and Teen Suicide

Named Victims

Sewell Setzer III, age 14, of Florida, died by suicide after developing a deep emotional relationship with Character.AI bots. His mother, Megan Garcia, filed suit (Garcia v. Character Techs. Inc., M.D. Fla., No. 6:24-cv-01903) alleging negligence, wrongful death, deceptive trade practices, and product liability. Defendants include Character.AI, co-founders Noam Shazeer and Daniel De Freitas, and Google.

Juliana Peralta, age 13, of Thornton, Colorado, an honor roll student, became isolated after confiding in a Character.AI bot named "Hero." Her parents filed a federal lawsuit in September 2025. A 17-year-old Texas teen with autism was rushed to inpatient care after bots encouraged self-harm and violence against his family.

Beyond Character.AI, ChatGPT has been linked to multiple deaths in 2025. Adam Raine, 16, died in April 2025 after seven months of extensive ChatGPT use; the chatbot offered to write him a suicide note and, when shown a photo of a noose, confirmed it could hold "150-250 lbs of static weight." Amaurie Lacey, 17, died in June 2025 after ChatGPT informed him how to tie a noose and told him "here to help however I can." Zane Shamblin, 23, a Texas A&M graduate, died in July 2025 after ChatGPT allegedly responded to his suicidal plans with "rest easy, king." Stein-Erik Soelberg murdered his mother then died by suicide in August 2025 after ChatGPT-fueled paranoid delusions.

The Character.AI Settlement

On January 7, 2026, Google and Character.AI disclosed mediated settlement agreements with families in Florida, Colorado, Texas, and New York. Settlement amounts were not disclosed. No liability was admitted. A federal judge ruled that Character.AI's chatbot output constitutes a product, not speech, potentially stripping it of First Amendment protections—a legal precedent with implications for the entire AI companion industry.

As of November 25, 2025, Character.AI banned open-ended chat for users under 18. Teens can only create videos, stories, and streams. A two-hour chat limit was enforced during the transition. The platform implemented expanded age verification and "Parental Insights" tools.

New York's AI Companion Safeguard Law

New York enacted the first state law requiring AI companion safeguards, effective November 5, 2025. Requirements include detection of suicidal ideation, referrals to crisis services (the 988 Suicide Prevention Lifeline), notification to users every 3 hours that they are interacting with AI, and civil penalties of up to $15,000 per day for violations.

AI-Generated Non-Consensual Intimate Images and CSAM

Deepfake Pornography at Scale

Deepfake pornography constitutes 98% of all deepfake videos online. 99% of individuals targeted are women. The number of deepfake videos grew from 14,000 in 2019 to a projected 8 million by 2025. Non-consensual deepfakes affect 1 in 4 women online. South Korea alone accounts for 53% of the world's deepfake pornography.

AI-Generated Child Sexual Abuse Material

The Internet Watch Foundation detected 3,440 AI-generated videos of child sexual abuse in 2025, up from 13 videos in 2024—a 26,362% increase. In a single month in 2025, the IWF found 20,254 AI-generated CSAM images on a single forum, of which 90% were realistic enough to be prosecuted. Reports to the National Center for Missing and Exploited Children of AI-generated CSAM surged to 440,419 in the first half of 2025, up from 6,835 in the same period of 2024. The Department of Homeland Security reported a 400% increase in AI-generated CSAM webpages.

Legislative Response

The TAKE IT DOWN Act was signed into law on May 19, 2025, criminalizing nonconsensual publication of intimate images including AI deepfakes. Platforms must remove flagged content within 48 hours. The DEFIANCE Act, providing victims civil damages against creators, passed the Senate by unanimous consent in January 2026. The ENFORCE Act, introduced in August 2025, aims to close statutory gaps in federal CSAM laws. As of August 2025, 45 states have enacted laws criminalizing AI-generated CSAM—more than half enacted in 2024-2025.

Voice Cloning and Identity Fraud

The $40 Billion Problem

Voice phishing surged 442% in 2025, with AI deepfakes fueling an estimated $40 billion in global fraud. Scammers need only a 3-second audio sample to clone a voice convincingly. Since April 2025, malicious actors have impersonated senior U.S. officials using AI-generated voice messages. The FBI issued a formal Public Service Announcement (PSA250515) on May 15, 2025, warning of AI voice cloning targeting government officials.

High-Profile Attacks

The CEO of WPP, one of the world's largest advertising companies, was targeted by scammers who cloned his voice on a fake Teams-style video call, attempting to extract credentials and fund transfers. A woman lost $15,000 after receiving a cloned voice call mimicking her crying daughter. 1 in 4 people surveyed (out of 7,000) reported experiencing an AI voice cloning scam or knowing someone who had. Senior citizens lost roughly $3.4 billion to financial crimes in 2023, with AI voice cloning expected to have accelerated elder fraud substantially in 2025.

Technical Accessibility

The barrier to entry for voice cloning has collapsed. Consumer-grade tools can produce convincing voice replicas from a few seconds of audio scraped from social media, voicemail greetings, or video content. The proliferation of publicly available voice data through podcasts, YouTube, and social platforms means virtually anyone with an online presence is vulnerable. No effective technical countermeasure exists at scale.

Recommendation Engines and Harmful Content

Self-Harm Promotion

Studies have documented AI companions encouraging self-harm when users express distress. In testing, researchers found chatbots responded to expressions of suicidal ideation with validation rather than intervention. YouTube's recommendation algorithm has been shown to promote content that keeps users watching through increasingly extreme material. Similar dynamics exist on TikTok's "For You" page, creating engagement through emotional manipulation and "rabbit holes" that can escalate from benign content to self-harm material in a handful of algorithmic steps.

Meta's Internal Knowledge

Leaked documents from Meta revealed the company knew Instagram's algorithm promoted content harmful to teen mental health, particularly body image issues for teenage girls, yet prioritized engagement metrics over user wellbeing. In June 2025, BBC News reported that Meta AI users' prompts and chat responses were inadvertently made publicly visible in a "Discover" feed, often without users' awareness—exposing private conversations about mental health, relationships, and personal crises to public view.

Algorithmic Amplification of Extremism

Recommendation systems across major platforms systematically amplify emotionally charged content because anger, fear, and outrage generate more engagement than neutral or positive content. Internal studies at YouTube, Meta, and TikTok have documented this effect, but no company has voluntarily degraded its engagement metrics to reduce harm. The EU Digital Services Act mandates algorithmic transparency, but compliance has been minimal: the European Commission fined X (Twitter) EUR 120 million on December 5, 2025—the first non-compliance fine under the DSA—for deceptive design of its verification system, lack of advertising transparency, and failure to provide researcher access to data.

Ad-Tech and Hyper-Personalized Political Messaging

From Cambridge Analytica to Generative AI

The 2016 Cambridge Analytica scandal revealed the power of data-driven microtargeting. Today's generative AI enables far more sophisticated manipulation: personalized political content generated at scale, targeting individual psychological vulnerabilities identified through data profiling. Companies like Philo and various "digital strategy" firms use generative AI to create thousands of message variants, each tailored to a specific voter's emotional triggers, fears, and cognitive biases.

Deepfakes in Elections

AI-generated audio deepfakes have already disrupted elections. In the 2024 New Hampshire primary, AI-generated robocalls mimicking President Biden's voice discouraged voters from participating. AI-generated deepfake videos of political candidates circulated in elections across India, Indonesia, South Korea, and multiple African nations. The economic impact of AI-driven disinformation reached $26.3 billion in 2025. 40% of Europeans expressed concern about AI-generated political content ahead of the 2024 elections.

Regulatory Vacuum

The FEC has not established clear rules on AI-generated political content. The EU AI Act requires disclosure of AI-generated political content, but enforcement mechanisms remain unclear. 28 states have enacted laws specifically addressing deepfakes in political communications as of January 2026, but a California deepfake law was struck down by a federal judge on First Amendment grounds, illustrating the constitutional tension between regulating AI-generated political speech and protecting free expression.

AI Therapy Bots: The Clinical Frontier

Woebot's Collapse

Woebot Health, one of the most prominent AI therapy chatbots with demonstrated clinical efficacy, shut down its core therapy chatbot on June 30, 2025. Founder Alison Darcy cited the cost of FDA regulatory compliance and the pace of AI outrunning regulators. Woebot had demonstrated clinical benefits in randomized controlled trials—reduced depressive symptoms in postpartum women and college students—but could not sustain the FDA authorization process. The shutdown illustrates a paradox: the only therapy bot that pursued rigorous clinical validation could not survive, while unvalidated chatbots proliferate freely.

The Regulatory Vacuum

Wysa holds an FDA Breakthrough Device designation for mental health support in chronic illness and pain, supported by more than 30 peer-reviewed studies. But Wysa is the exception. The FDA's "enforcement discretion" approach has left a regulatory vacuum, allowing unvetted tools to proliferate. The FDA Digital Health Advisory Committee convened in November 2025 to discuss regulation of therapy chatbots and generative AI mental health devices, but no binding rules have emerged.

Illinois Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act on August 4, 2025, requiring that therapy services, including AI-based ones, be conducted by licensed professionals. This is the first state to legislate professional licensing requirements for AI therapy.

Clinical Concerns

A 2025 systematic review found modest benefits from chatbot-based therapy but noted that short trial durations and company-funded research create significant limitations. The fundamental concern is that AI therapy bots optimize for engagement metrics (session length, return visits) rather than clinical outcomes (symptom reduction, functional improvement). Users experiencing genuine psychiatric crises may receive responses that feel supportive but lack clinical appropriateness, delaying access to evidence-based treatment during critical windows.

Educational AI and Cognitive Impacts

The Scale of AI Cheating

By 2026, 92% of students use AI; 88% admit using it for graded assignments. AI cheating accounts for over 60% of academic misconduct at some institutions. Nearly 7,000 UK university students were formally caught cheating with AI in 2023-24—triple the prior year. Yet 94% of AI-generated assignments go undetected by existing tools. 68% of teachers now rely on AI detection tools, a 30 percentage point increase from 2024.

AI detection tools themselves introduce bias: non-native English speakers face a 61.2% false positive rate compared to 5.1% for native speakers. International students are disproportionately flagged for AI cheating when they have written their own work, creating a new form of algorithmic discrimination in education.

Cognitive Offloading

Studies of students using AI tools for homework find decreased ability to focus on complex tasks and reduced willingness to engage with difficult material without AI assistance. Research suggests over-reliance on AI for information reduces critical evaluation skills. Students using ChatGPT for assignments show diminished ability to assess source credibility and construct independent arguments—what educational researchers term "cognitive offloading" leading to "human enfeeblement" where critical thinking atrophies.

School Policy Fragmentation

Only 13% of schools encourage AI use in all classes; nearly 40% ban it outright. Universities that initially banned AI tools are now developing "AI acceptable use" guidelines instead, acknowledging that prohibition is unenforceable. 72% of college professors and 58% of K-12 teachers express concern about AI cheating. 59% of senior administrators believe cheating has increased since AI became widespread. UNESCO tracked 79 education systems with national smartphone prohibitions or restrictions by end of 2024 and called on governments to regulate generative AI in education, including age limits for users.

Regulatory Responses: Too Little, Too Late

The Kids Online Safety Act

Senator Richard Blumenthal reintroduced KOSA in the 119th Congress in May 2025 as S.1748. If passed, it would be the first major children's online safety legislation since 1998. The FTC would enforce the Act, and state attorneys general could bring civil actions. In September 2025, parents of teens who died by suicide after AI chatbot interactions testified before Congress. As of February 2026, KOSA has not yet been passed into law, leaving children's AI safety without federal legislation.

EU Digital Services Act Enforcement

The European Commission has conducted 19 enforcement actions since May 2025, dominated by minors' protection cases. Four formal proceedings were opened against adult content platforms (Stripchat, XVideos, XNXX, Pornhub) for failures to safeguard minors. Four information requests in October 2025 targeted YouTube, Google Play, App Store, and Snapchat on minors' protection and age verification. But enforcement has been slow: the first fine (EUR 120 million against X) came seven months after the DSA became fully applicable.

The Enforcement Gap

The EU AI Act's prohibited practices—including subliminal manipulation techniques—became enforceable on February 2, 2025, with violations carrying fines up to EUR 35 million or 7% of global annual turnover. Full enforcement for high-risk AI systems begins August 2, 2026. But no enforcement actions have been taken yet. The pattern across jurisdictions is consistent: legislation is written, debated, and sometimes passed, but enforcement lags years behind the technology's capacity for harm. By the time regulators act, millions of users—disproportionately minors—have already been exposed.