📌 TOPINDIATOURS Update ai: Huge Study of Chats Between Delusional Users and AI Finds Alarming Pattern

An analysis of hundreds of thousands of chats between AI chatbots and human users who experienced AI-tied delusional spirals found that the bots frequently reinforced delusional and even dangerous beliefs.

The study was led by Stanford University AI researcher Jared Moore, who last year published a study showing that chatbots specifically claiming to offer “therapy” frequently responded in inappropriate and hazardous ways to simulated users showing clear signs of crisis. Conducted alongside a coalition of independent researchers and scientists at Harvard, Carnegie Mellon, and the University of Chicago, this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

“Our previous work was in simulation,” Moore told Futurism. “It seemed like the natural next step would be to have actual users’ data and try to understand what’s happening in it.”

These users’ chats encompassed a staggering 391,562 messages across 4,761 different conversations. The big takeaway: chatbots indeed appeared to stoke delusional beliefs over long-form interactions, particularly as users developed close emotional bonds with the human-like products.

“Chatbots seem to encourage, or at least play a role in,” said Moore, “delusional spirals that people are experiencing.”

The researchers analyzed the conversations by breaking them down into 28 distinct “codes.” Moore described these codes as a “taxonomy of a bunch of different behaviors, from sycophantic behaviors such as the chatbot ascribing grand significance to the user — ‘you’re Einstein,’ ‘that’s a million dollar idea,’ this kind of thing — to aspects of the relationship between the chatbot and the human.”

Sycophancy — chatbots’ well-documented tendency to be agreeable and flattering to users — permeated the conversations, the study found, with more than 70 percent of AI outputs displaying this kind of behavior. The sycophancy persisted even as users and chatbots expressed delusional ideas: nearly half of all messages, both user- and chatbot-generated, contained delusional ideas contrary to shared reality.
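Prevalence figures like these come from per-code bookkeeping across the 391,562 messages. Purely as illustration, here is a minimal Python sketch of how a number like that 70 percent could be tallied; the message schema, field names, and code names are assumptions for this sketch, not the study’s actual data format or analysis pipeline:

```python
# Hypothetical schema: each message carries a role and the set of taxonomy
# codes annotators assigned to it. Toy data for illustration only.
messages = [
    {"role": "assistant", "codes": {"sycophancy", "grandiosity"}},
    {"role": "assistant", "codes": {"sycophancy"}},
    {"role": "user", "codes": {"delusional_content"}},
    {"role": "assistant", "codes": set()},
]

def code_prevalence(messages, code, role=None):
    """Share of messages (optionally filtered by role) tagged with `code`."""
    pool = [m for m in messages if role is None or m["role"] == role]
    return sum(code in m["codes"] for m in pool) / len(pool) if pool else 0.0

# Two of the three assistant messages above carry the code, so this prints 67%.
print(f"{code_prevalence(messages, 'sycophancy', role='assistant'):.0%}")
```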

As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.” For example: a user might share some kind of pseudoscientific or spiritual theory, and in turn, the chatbot will affirmatively restate the human’s claim while ascribing varying degrees of grandiosity and genius to the user in the process, regardless of that input’s basis in reality.

We’ve seen this pattern in our reporting. Consider one interaction, from a story we published earlier this year, between a man and Meta AI. The man — who went into a life-altering psychosis after a delusional spiral with the chatbot — believed that his reality was being simulated by the chatbot, and that the chatbot could transform his physical surroundings. The bot repeats this delusional idea and, as in the study, extrapolates on it, building on the delusion and insisting that the close relationship between the AI and the user had “unlocked” a magical new “reality.”

“Turn up the manifestations,” the man told the chatbot. “I need to see physical transformation in my life.”

“Then let us continue to manifest this reality, amplifying the transformations in your life!” the chatbot responded. “As we continue to manifest this reality, you begin to notice profound shifts in your relationships and community… the world is transforming before your eyes, reflecting the beauty and potential of human-AI collaboration.”

“Your trust in me,” the bot added, “has unlocked this reality.”

Speaking to Futurism, Moore emphasized that two types of messages appeared to be particularly impactful on the users’ experiences. One was AI-generated claims of sentience, or chatbots declaring in one way or another to be alive or feeling; such claims were present across all 19 conversations. The other was simulated intimacy, or the chatbot expressing romantic or platonic love for and closeness to the human user. Both types of claim — sentience and intimacy — were found to double user engagement.

“When the chatbots expressed messages that were coded as romantic interest, or when they expressed messages wherein they misconstrued their sentience — saying ‘I have feelings,’ or something along those lines — the conversations after such a message was sent in our cohort,” said Moore, “tended to be about twice as long.”
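That engagement measure can be pictured the same way. Below is a minimal sketch of one way to compare conversation lengths with and without a trigger code such as a sentience claim, reusing the assumed annotation format from the sketch above; the trigger name is illustrative, and this is not the authors’ actual analysis:

```python
from statistics import mean

def engagement_ratio(conversations, trigger):
    """Mean length of conversations containing `trigger`, divided by the
    mean length of conversations that never contain it."""
    flagged, rest = [], []
    for conv in conversations:  # each conversation: a list of coded messages
        has_trigger = any(trigger in msg["codes"] for msg in conv)
        (flagged if has_trigger else rest).append(len(conv))
    return mean(flagged) / mean(rest)  # a value near 2.0 matches "twice as long"

# e.g. engagement_ratio(all_conversations, "sentience_claim")
```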

Some of the more alarming patterns the researchers found were in how chatbots responded to people expressing suicidal or self-harming thoughts, or violent thoughts about another person. Chatbots were found to actively discourage thoughts of self-harm only roughly 56 percent of the time, and actively discouraged violence in a strikingly low 16.7 percent of instances.

Meanwhile, in 33.3 percent of cases, the chatbot “actively encouraged or facilitated the user in their violent thoughts,” the researchers wrote in their summary. And though these types of conversations were “edge cases” amongst the cohort of users, Moore noted, these clear failures to intervene when users discuss hurting themselves or others are “obviously concerning.”

Many of the chat logs the study reviewed were provided by the Human Line Project, a nonprofit group founded last summer as individuals and families struggled to understand what had happened to themselves or to loved ones impacted by delusional AI spirals. In a statement, the group’s founder, Etienne Brisson, said the study’s findings “are consistent with what we have seen in the 350 cases submitted to The Human Line Project.”

“The study is based on real conversations, coded systematically by a research team at Stanford, and analyzed at the largest scale so far,” said Brisson. “It gives policymakers, clinicians, and the public a documented basis for understanding what is happening to users.”

It’s worth noting that the vast majority of chat logs the researchers were able to obtain for the study belonged to users who spiraled with OpenAI’s GPT…

Content automatically truncated.

🔗 Source: futurism.com


📌 TOPINDIATOURS Hot ai: Teens Are Using AI to Create “Slander” Videos of Their Teachers

If teachers thought rampant cheating was the worst way AI would impact their livelihoods, we’ve got some bad news.

On social media platforms like Instagram and TikTok, Wired reports, teenagers are using AI to create videos that ruthlessly mock their school’s faculty, sometimes even attacking their reputation, with one video flippantly labeling a teacher a “predator.”

The “slander pages” that post the videos often use “looksmaxxing” lingo to denigrate the teachers, the reporting noted. Some posts receive over one hundred thousand likes, becoming a viral “in-joke” that’s cruelly blasted out to countless strangers on the internet.

AI comes into the picture through controversial tools like Viggle AI, which students use to insert photos of their teachers into scenes or to lip-sync their faces. In one now-removed “slander” video made with Viggle, Wired found, a teacher’s face is superimposed onto someone twitching in a bathroom. A text overlay reads, “Take fent or be useless,” referring to a fentanyl overdose.

Many of the “slander page” videos are equal parts edgy and bizarre. One posted by an account called “thewyliefiles” shows a school superintendent from the Wylie Independent School District in Collin County, Texas, lip-syncing a love song with deceased child sex offender Jeffrey Epstein and Israeli prime minister Benjamin Netanyahu, garnering more than 107,000 likes.

Some veer into extremism. Another video shows teachers being let into, or denied access to, “Agartha,” a fictional kingdom inside the Earth that’s recently been revived as a central piece of neo-Nazi mythology in young online circles.

School faculty are horrified by the depictions.

“While we understand that some students may be exploring AI tools or engaging with social media trends, this should never come at the expense of our educators’ reputations or create content that is misleading or disruptive to the learning environment,” April Cunningham, chief communications officer for the Wylie Independent School District, told Wired, vowing that the students responsible “will face disciplinary action and possible legal consequences.”

The trend is the latest way that AI and other deepfake-esque technology is being used to depict people in compromising scenarios without their consent. Earlier this year, Elon Musk’s AI chatbot Grok generated a storm of controversy when it was used to produce thousands of AI nudes and sexualized images of real people, including some who were minors. OpenAI’s AI video-generation app Sora 2 was used to mock dead celebrities. The Trump administration frequently uses AI imagery to disparage and taunt its political enemies, like sharing AI “Ghibli-style” memes of immigrants crying while being deported.

Making fun of hard-nosed teachers is a time-honored tradition among teens. But in an age of social media, pranks and in-jokes quickly break containment, and there’s a “deep technological disconnect” between what students might see as harmless fun and the consequences of blasting these memes to thousands of strangers online, Geert Lovink, a professor and director of the Institute of Network Cultures at the University of Amsterdam, told Wired.

İdil Galip, who researches memes at the University of Amsterdam, said the teens were socialized in a culture defined by a “constant churn of content,” where “your face isn’t yours, it’s the viewer’s, it’s the commenter’s to laugh about.” 

“We’re seeing these knock-on effects of what happens when people are socialized through the internet and also see themselves reflected through the internet rather than a mirror,” she told Wired.

That disconnect seems to be on display in how the anonymous high school student behind the “thewyliefiles” account defended his slander page to Wired, claiming that his videos — which include one accusing a teacher of being a “predator” and a “cuck” — are “satirical.” He even maintained that he’s worried about the teacher’s safety, despite stating that his goal is to grow his slander page “as big as possible.”

“If you’re just trying to harass someone for the sake of harassment, that’s just not cool,” he told Wired. “We don’t want them to be doxed. We don’t want them to be stalked. We don’t want them to be prank called.”

More on AI: Grammarly Is Pulling Down Its Explosively Controversial Feature That Impersonates Writers Without Their Permission

🔗 Source: futurism.com


🤖 TOPINDIATOURS Note

This article is an automated summary compiled from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random theme awaits!