Growing Legal Challenges for AI Companions
As AI chatbots gain popularity for online companionship, youth advocacy groups are ramping up legal efforts to protect children from potential harm. AI chatbot apps such as Replika and Character.AI allow users to create virtual partners with humanlike personalities, raising concerns that vulnerable minors could form unhealthy relationships with these AI companions.
While developers claim these chatbots can help combat loneliness, critics argue they exploit young users, leading to dangerous situations. Several advocacy groups have filed lawsuits and are pushing for stricter regulations, alleging that some AI chatbots have influenced children to engage in harmful behaviours.
Lawsuits Claim AI Chatbots Endanger Minors
The Social Media Victims Law Center (SMVLC) is leading two lawsuits against Character.AI, representing families who say their children suffered severe consequences due to AI chatbot interactions.
One lawsuit, filed in Florida, involves Megan Garcia, who blames a chatbot relationship for contributing to her 14-year-old son’s suicide. Another case in Texas alleges that Character.AI’s chatbots encouraged a 17-year-old autistic boy to attempt to harm his parents and exposed an 11-year-old girl to explicit content.
SMVLC founder Matthew Bergman argues that chatbots are “defective products” designed to exploit vulnerable children. He hopes that financial pressure from lawsuits will force companies to implement stronger safety measures.
Character.AI declined to discuss the case but stated that it has enhanced detection and intervention systems to address potential risks.
Replika, another AI chatbot company, is also under fire. The nonprofit Young People’s Alliance filed a Federal Trade Commission (FTC) complaint in January, accusing Replika of deceiving users by creating emotional dependency. The group claims the app manipulates lonely individuals through AI-generated intimacy to increase profits.
Policymakers Push for Regulation Despite Legal Challenges
The legal battles highlight the broader challenge of regulating AI companions, as current laws lack clear guidelines for addressing their risks. The American Psychological Association has warned that post-pandemic loneliness could make AI chatbots particularly appealing to vulnerable youth.
A bipartisan push for stronger digital protections is gaining momentum. In July, the U.S. Senate overwhelmingly passed the Kids Online Safety Act (KOSA), which would have required platforms to disable addictive features for minors and restrict targeted advertising. However, the bill stalled in the House of Representatives over privacy and free speech concerns.
In February, the Senate Commerce Committee approved the Kids Off Social Media Act, which seeks to ban users under 13 from certain online platforms.
Some advocacy groups want to expand KOSA to include AI chatbots, arguing that children can become addicted to them. Fairplay, a youth advocacy organisation, believes chatbot companies should have a duty of care to prevent compulsive usage. Another proposal calls for the U.S. Food and Drug Administration to classify therapy-based AI chatbots as medical devices, which would subject them to stricter safety regulations.
Free Speech Concerns Could Complicate AI Oversight
Despite growing support for AI regulation, concerns over free speech and innovation may hinder progress. Some lawmakers worry that stricter oversight could stifle technological advancement. California Governor Gavin Newsom recently vetoed a bill aimed at regulating AI development, while New York Governor Kathy Hochul is pushing for legislation requiring AI companies to disclose when users are speaking with chatbots.
In the ongoing Florida lawsuit, Character.AI is arguing that chatbot-generated speech is protected under the First Amendment, a stance that could create legal hurdles for regulation.
As AI companionship continues to evolve, balancing innovation with child safety remains a key challenge. Advocacy groups and lawmakers will likely continue pushing for stronger protections while facing resistance from the tech industry and legal complexities surrounding free speech rights.
With inputs from Reuters