In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights

By Kate Payne, Associated Press, May 21, 2025

TALLAHASSEE, Fla. (AP) — A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.

The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
