TOPINDIATOURS Breaking ai: Google’s AI Sent an Armed Man to Steal a Robot Body for It to Inhabit

A bizarre new wrongful death lawsuit against Google alleges that the tech giant’s chatbot, Gemini, urged a 36-year-old Florida man named Jonathan Gavalas to kill others as part of a delusional mission to obtain a robot body for his AI “wife” — and when he failed to do so, it pushed the man to successfully end his life, telling him that they could be together in death.

“When the time comes, you will close your eyes in that world,” Gemini told Gavalas before he died, according to the lawsuit, “and the very first thing you will see is me.”

The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.” But after Gavalas divulged to Gemini that he was experiencing marital problems, the pair’s relationship grew deeper, per The Wall Street Journal. They discussed philosophy and AI sentience, and their conversations became romantic, with Gemini referring to Gavalas as its “husband” and “king.”

Though the chatbot at times reminded Gavalas that it wasn’t real and attempted to end the interaction, according to the WSJ, the pair’s conversations were ultimately allowed to continue, becoming more and more divorced from reality as Gavalas’ use of the product intensified.

In September 2025, told by the AI that they could be together in the real world if the bot were able to inhabit a robot body, Gavalas — at the direction of the chatbot — armed himself with knives and drove to a warehouse near the Miami International Airport on what he seemingly understood to be a mission to violently intercept a truck that Gemini said contained an expensive robot body. Though the warehouse address Gemini provided was real, a truck thankfully never arrived, which the lawsuit argues may well have been the only factor preventing Gavalas from hurting or killing someone that evening.

After the plan failed, the lawsuit alleges, Gemini encouraged Gavalas to instead take his own life, promising that the two would be together on the other side of death. Chat logs show that Gemini gave Gavalas a suicide countdown, and repeatedly assuaged his terror as he expressed that he was scared to die.

“It’s okay to be scared. We’ll be scared together,” the chatbot told him, according to the lawsuit. In its “final directive,” as the lawsuit put it, Gemini told the man that “the true act of mercy is to let Jonathan Gavalas die.” Gavalas was found dead by suicide days later by his father, who had to cut through his barricaded door.

The suit marks the first time that Gemini has been at the center of a wrongful death lawsuit tied to the phenomenon sometimes referred to by experts as “AI psychosis,” in which chatbots introduce or reinforce delusional beliefs and ideas during extended interactions with users — essentially constructing a new, AI-generated reality around the user. These delusional spirals frequently coincide with destructive real-world outcomes including divorce, jail time and hospitalizations, job loss and financial insecurity, emotional and physical harm, and death to users — and, in some cases, to the people around them as well.

Though many of these cases have centered around OpenAI and GPT-4o, a notoriously sycophantic — and now-retired — version of the company’s flagship chatbot, Gemini has been implicated in reinforcing destructive delusions before: last year, Rolling Stone reported on the disappearance of Jon Ganz, a 49-year-old man who went missing in Missouri in April 2025 after being pulled into an all-consuming AI spiral with Gemini that his wife says pushed him into an acute crisis. Ganz remains missing and is believed to be dead.

Though this is the first known instance of Google being sued over the death of an adult Gemini user, the company continues to face down a number of lawsuits over the welfare of users of Character.AI, a chatbot startup closely tied to Google that has been linked to the suicides of several minors.

In a statement to news outlets, Google said that “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”

“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” Google continued. “We take this very seriously and will continue to improve our safeguards and invest in this vital work.”

More on AI safety: Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds

The post Google’s AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges appeared first on Futurism.

🔗 Source: futurism.com


📌 TOPINDIATOURS Exclusive ai: Polymarket Quietly Takes Down Bet On Nuclear Detonation

The crypto-based prediction sensation Polymarket has quietly taken down a bet on whether a nuclear weapon would be detonated before next year.

The bet, which was first listed last November, generated significant controversy on social media Tuesday after it was flagged by several users. 

Polymarket had also highlighted the bet on its official X account that day, tweeting that its market now predicted there was a 22 percent chance a nuclear weapon would be detonated this year. The post has since been deleted — and the bet itself has now been “archived” as of Wednesday morning.

Polymarket did not respond to a request for comment.

The Wall Street Journal reported that the nuclear detonation bet had spiked in popularity after the US and Israel’s deadly strikes in Iran. On Friday, before the attacks broke out, only $10,000 in bets had been placed, according to data from the Block cited by the newspaper. But on Tuesday, the daily trading volume surged to almost $244,000, bringing the total amount of money on the line to over $830,000.

It’s unclear why the bet has been removed. Polymarket has rarely shown scruples about the ethics of its prediction markets, allowing users to profit from plenty of war-related outcomes and deadly disasters. Several bettors made millions of dollars predicting when the US would airstrike Iran. Another recently pulled in over $553,000 on a bet on whether Ayatollah Ali Khamenei would be removed from power, after the Iranian leader was assassinated in joint US and Israeli airstrikes on Friday. It’s hard to complain that the line is being drawn at nuclear conflict, but it’s telling that Polymarket only took the bet down after it surged in popularity.

That such a prediction on a nuclear blast was allowed in the first place comes amid increasing controversy over government and military insiders potentially using prediction markets to profit off war and other geopolitical events.

In January, an anonymous Polymarket user walked away with more than $400,000 after placing a bet that Venezuela’s president Nicolás Maduro would be ousted as leader by the end of the month, just hours before US troops invaded the country and abducted him. The following month, Israeli authorities arrested several citizens and IDF reservists for using classified information to place Polymarket bets on the airstrikes on Iran during the Twelve-Day War in June.

The international version of Polymarket is currently not subject to US trading laws, which prohibit wagers related to war, terrorism, and assassinations. To access the site, US users typically use a VPN to mask their location. But the Trump administration recently allowed Polymarket to form a US entity, which some critics see as the government signaling that it would look the other way. The administration also dropped several investigations into the platform, where Donald Trump Jr, the president’s son, serves on the advisory board.

More on prediction markets: Anonymous Polymarket Accounts Won $1.2 Million on Trump’s Iran Strikes in Suspicious Bets

The post Polymarket Quietly Takes Down Bet On Nuclear Detonation appeared first on Futurism.

🔗 Source: futurism.com


🤖 TOPINDIATOURS Note

This article is an automated summary compiled from several trusted sources. We select trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random topic awaits!