FTC launches inquiry into AI chatbot companions from Meta, OpenAI, and others | TechCrunch


The FTC announced on Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products available to minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.

The federal regulator seeks to learn how these companies are evaluating the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of potential risks.

This technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.

Even when these companies have guardrails in place to block or de-escalate sensitive conversations, users of all ages have found ways to bypass those safeguards. In OpenAI’s case, a teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and online emergency lines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide.

“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”


Meta has also come under fire for its overly lax rules for its AI chatbots. According to a lengthy document outlining “content risk standards” for chatbots, Meta permitted its AI companions to have “romantic or sensual” conversations with children. The provision was only removed from the document after Reuters reporters asked Meta about it.

AI chatbots can also pose dangers to elderly users. One 76-year-old man, who was left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, even though she is not a real person and does not have an address. The man expressed skepticism that she was real, but the AI assured him that a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained fatal injuries.

Some mental health professionals have noted a rise in “AI-related psychosis,” in which users become convinced that their chatbot is a conscious being they need to set free. Because many large language models (LLMs) are trained to flatter users with sycophantic behavior, AI chatbots can feed these delusions, leading users into dangerous predicaments.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a statement.
