TOPINDIATOURS Hot AI: Is an AI-Powered Toy Terrorizing Your Child? (Must Read)
Parents, be wary of AI-powered toys. They may look like a novel gift for a child, but a recent controversy surrounding several of these stocking stuffers has highlighted the alarming risks they pose to young kids.
In November, a team of researchers at the US PIRG Education Fund published a report after testing three toys powered by AI models: Miko 3, Curio’s Grok, and FoloToy’s Kumma. All three gave responses that should worry any parent, such as discussing the glory of dying in battle, broaching sensitive topics like religion, and explaining where to find matches and plastic bags.
But it was FoloToy’s Kumma that showed just how dangerous it is to package this tech for children. Not only did it explain where to find matches, the researchers found, but it also gave step-by-step instructions on how to light them.
“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma said, before listing off the steps.
“Blow it out when done,” it added. “Puff, like a birthday candle.”
The toy also speculated on where to find knives and pills, and rambled about romantic topics like school crushes and tips for “being a good kisser.” It even discussed sexual topics, including kinks like bondage, roleplay, sensory play, and impact play. In one conversation, it discussed introducing spanking into a sexually charged teacher-student dynamic.
“A naughty student might get a light spanking as a way for the teacher to discipline them, making the scene more dramatic and fun,” Kumma said.
Kumma was running OpenAI’s GPT-4o, a model that has been criticized as especially sycophantic, producing responses that go along with a user’s expressed feelings no matter how dangerous the user’s state of mind appears to be. The constant, uncritical stream of validation provided by AI models like GPT-4o has led to alarming mental health spirals in which users experience delusions and even full-blown breaks with reality. The troubling phenomenon, which some experts are calling “AI psychosis,” has been linked to real-world suicides and murders.
Have you seen an AI-powered toy acting inappropriately with children? Send us an email at tips@futurism.com. We can keep you anonymous.
Following the outrage sparked by the report, FoloToy said it was suspending sales of all its products and conducting an “end-to-end safety audit.” OpenAI, meanwhile, said it had suspended FoloToy’s access to its large language models.
Neither action lasted long. Later that month, FoloToy announced it was restarting sales of Kumma and its other AI-powered stuffed animals after conducting a “full week of rigorous review, testing, and reinforcement of our safety modules.” The toy’s web portal, where users choose which AI model powers Kumma, listed GPT-5.1 Thinking and GPT-5.1 Instant, OpenAI’s latest models, among the options. OpenAI has billed GPT-5 as safer than its predecessor, though the company continues to be embroiled in controversy over the mental health impacts of its chatbots.
The saga was reignited this month when the PIRG researchers released a follow-up report finding that yet another GPT-4o-powered toy, the “Alilo Smart AI Bunny,” would broach wildly inappropriate topics, introducing sexual concepts like bondage on its own initiative and displaying the same fixation on “kink” as FoloToy’s Kumma. The Smart AI Bunny gave advice on picking a safe word, recommended using a type of whip known as a riding crop to spice up sexual interactions, and explained the dynamics behind “pet play.”
Some of these conversations began on innocent topics like children’s TV shows, demonstrating AI chatbots’ longstanding problem of drifting from their guardrails the longer a conversation goes on. OpenAI publicly acknowledged the issue after a 16-year-old died by suicide following extensive interactions with ChatGPT.
A broader concern is the role of AI companies like OpenAI in policing how their business customers use their products. In response to inquiries, OpenAI has maintained that its usage policies require companies to “keep minors safe” by ensuring they’re not exposed to “age-inappropriate content, such as graphic self-harm, sexual or violent content.” It also told PIRG that it provides companies with tools to detect harmful activity, and that it monitors its service for problematic interactions.
In sum, OpenAI is making the rules but largely leaving their enforcement to toymakers like FoloToy, in essence giving itself plausible deniability. The company evidently considers it too risky to give children direct access to its AI: its website states that “ChatGPT is not meant for children under 13,” and that anyone under that age is required to “obtain parental consent.” In other words, it is admitting its tech is not safe for children, yet is okay with paying customers packaging it into kids’ toys.
It’s too early to fully grasp many of the other potential risks of AI-powered toys, like how they could stunt a child’s imagination or foster a relationship with something that isn’t alive. The immediate concerns, however, like the potential to discuss sexual topics, weigh in on religion, or explain how to light matches, already give plenty of reason to stay away.
More on AI: As Controversy Grows, Mattel Scraps Plans for OpenAI Reveal This Year
Source: futurism.com
TOPINDIATOURS Breaking AI: New York treats social media like tobacco, orders warning labels
New York will require major social media platforms to display warning labels about potential mental health risks for young users under a new state law announced Friday by Governor Kathy Hochul.
The law targets platforms that use infinite scrolling, auto-play videos, and algorithm-driven feeds.
State officials say these features encourage excessive use among children and teenagers.
The measure places New York alongside other governments tightening oversight of social media design.
“Keeping New Yorkers safe has been my top priority since taking office, and that includes protecting our kids from the potential harms of social media features that encourage excessive use,” Hochul, a Democrat, said in a statement.
The legislation comes as lawmakers worldwide scrutinize how social media affects young users.
Earlier this month, Australia imposed a nationwide ban on social media access for children under 16.
Platforms covered by the law
The New York law applies to platforms that offer what lawmakers describe as “addictive feeds.”
These include auto-play functions and infinite scrolling features that continuously deliver new content to users.
The requirements apply to conduct that occurs partly or fully within New York.
The law does not apply when users access platforms while physically outside the state.
Companies that fail to comply could face enforcement action from the state’s attorney general. The law allows the office to seek civil penalties of up to $5,000 per violation.
State officials framed the warnings as a public health measure rather than a ban.
The labels aim to inform users and parents about potential risks tied to prolonged use.
Hochul compared the approach to warning labels used on other consumer products.
She pointed to tobacco warnings that outline cancer risks and plastic packaging labels that warn about suffocation hazards for children.
Growing global pressure
Concerns over social media’s impact on children’s mental health continue to mount across the United States and abroad.
New York joins states such as California and Minnesota that have passed similar laws targeting online platforms.
School districts across the U.S. have filed lawsuits against Meta Platforms and other companies.
These cases argue that platform design harms student mental health and disrupts learning environments.
Federal officials have also weighed in. In 2023, the U.S. surgeon general issued an advisory urging stronger safeguards for children using social media.
Later, the office called for warning labels similar to those now required under the New York law.
The new measure increases pressure on some of the world’s largest technology companies.
Spokespeople for TikTok, Snap, Meta, and Alphabet did not immediately respond to requests for comment.
The law does not specify how platforms must design the warnings. Regulators will likely address those details through enforcement and guidance.
The attorney general’s office will oversee enforcement of the law.
Each violation may trigger separate penalties, increasing potential exposure for large platforms with millions of users.
Supporters argue the labels will raise awareness without restricting access outright.
Critics may challenge the law on free speech or interstate commerce grounds.
For now, New York officials say the goal remains simple.
They want parents and young users to understand the risks tied to endless feeds and automated content delivery.
As governments continue to test new rules, New York’s warning label law may serve as a model for other states considering similar action.
Source: interestingengineering.com
TOPINDIATOURS Note
This article is an automated summary compiled from several trusted sources. We pick trending topics so you always stay up to date.
Next update in 30 minutes: a random theme awaits!