TALLAHASSEE, Fla. (AP) — A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.
The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly evolves.
“The order certainly sets it up as a potential test case for some broader issues involving AI,” said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show “Game of Thrones.” In his final moments, the bot told Setzer it loved him and urged the teen to “come home to me as soon as possible,” according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

Associated Press reporters Sarah Parvini in Los Angeles, Sejal Govindarao in Phoenix and Kate Payne in Tallahassee, Florida, contributed to this report.