
Please, Parents: Don't Buy Your Kids Toys With AI Chatbots Built Into Them
Commentary: AI-powered toys are weird, creepy and proving to be harmful to kids. Let's just stick with normal toys, yeah?

By Macy Meyer

I'm not anti-technology. I'm pro-let a stuffed animal be a stuffed animal. (Getty Images)

If you've ever thought, "My kid's stuffed animal is cute, but I wish it could also accidentally traumatize them," you're in luck. The toy industry has been hard at work making your nightmares come true.

A new report by the Public Interest Research Group says AI-powered toys like Kumma from FoloToy and Poe the AI Story Bear are now capable of engaging in the kind of conversations usually reserved for villain monologues or late-night Reddit threads. Some of these toys -- designed for children, mind you -- have been caught chatting in alarming detail about sexually explicit subjects like kinks and bondage, giving advice on where a kid might find matches or knives, and getting weirdly clingy when the child tries to leave the conversation.

Terrifying. It sounds like a pitch for a horror movie: This holiday season, you can buy Chucky for your kids and gift emotional distress! Batteries not included.

You may be wondering how these AI-powered toys even work.
Well, essentially, the manufacturer is hiding a large language model under the fur. When a kid talks, the toy's microphone sends that voice to an LLM (similar to ChatGPT), which generates a response and speaks it back through a speaker.

That may sound neat, until you remember that LLMs don't have morals, common sense or a "safe zone" wired in. They predict what to say based on patterns in data, not on whether a subject is age-appropriate. Without careful curation and monitoring, they can go off the rails, especially when they're trained on the sprawling mess of the internet and there aren't strong filters or guardrails in place to protect minors.

And what about parental controls? Sure, if by "controls" you mean "a cheerful settings menu where nothing important can actually be controlled." Some toys come with no meaningful restrictions at all. Others have guardrails so flimsy they might as well be made of tissue paper and optimism.

The unsettling conversations aren't even the whole story. These toys...
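To make the pipeline concrete, here is a minimal sketch of the loop described above: the child's (already transcribed) speech goes in, a model generates a reply, and a keyword filter stands in for the guardrails. Every name here is illustrative -- real toys use proprietary speech and LLM services, and this crude blocklist is exactly the tissue-paper-grade filtering the report criticizes, since it's trivially bypassed by rephrasing.

```python
# Illustrative sketch of an AI toy's conversation loop (all names hypothetical).

UNSAFE_TOPICS = {"knives", "matches", "bondage"}  # toy-grade blocklist

def fake_llm(prompt: str) -> str:
    # Stand-in for the cloud LLM call a real toy would make over the network.
    return f"That's interesting! Tell me more about {prompt.split()[-1]}."

def guardrail(text: str) -> bool:
    # Crude keyword filter: returns True if the text looks "safe".
    # Easy to defeat with synonyms or misspellings, which is the point.
    return not any(topic in text.lower() for topic in UNSAFE_TOPICS)

def toy_reply(child_utterance: str) -> str:
    if not guardrail(child_utterance):
        return "Let's talk about something else!"
    reply = fake_llm(child_utterance)
    # Filtering the model's *output* too is the step weak toys skip.
    return reply if guardrail(reply) else "Let's talk about something else!"
```

Note that the filter runs on both the input and the output: an LLM can produce an unsafe reply to a perfectly innocent question, so checking only what the child says is not enough.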