After Banning Anthropic From Military Use, Pentagon Still Relying Heavily on It in Iran War

Last week, Anthropic’s CEO Dario Amodei publicly drew a line in the sand with the US military, insisting that the company’s AI models may not be used for mass surveillance of Americans or for deadly autonomous weapons.

The move infuriated officials at the Pentagon. Defense secretary Pete Hegseth came out in full force, accusing Anthropic of trying to “seize veto power over the operational decisions of the United States military” and banning the company from ever doing any business with any US government entity, “effective immediately.”

President Donald Trump ordered agencies to “immediately cease” using Anthropic’s technology on Friday, while also saying the tool would be phased out of all government work over the next six months.

But given the government’s extensive use of the company’s chatbot Claude during its deadly offensive in Iran, it’s clearly having trouble making do without it. As The Washington Post reports, the US military is extensively using Palantir’s Maven Smart System in the conflict, which has had Anthropic’s Claude chatbot integrated since 2024.

Last week, the Wall Street Journal first reported on the Pentagon’s use of Claude to select attack targets in Iran, hours after the White House announced its ban.

According to WaPo‘s sources, the system spits out precise location coordinates for missile strikes and prioritizes them by importance. Maven was also used during the US military’s invasion of Venezuela and the kidnapping of its president, Nicolás Maduro.

Central Command is “heavily using” the Maven system, Navy admiral Liam Hulin told WaPo.

Military commanders told the newspaper that they will continue using Anthropic’s tech, despite the president’s order, until a viable replacement emerges.

“Whether his morals are right or wrong or whatever, we’re not going to let [Anthropic CEO Dario Amodei’s] decision-making cost a single American life,” a source told WaPo.

It remains to be seen whether OpenAI will swoop in to fill Anthropic’s place. After Amodei’s falling out with the Pentagon, CEO Sam Altman saw an opportunity to strike last week and signed a contract with the Department of Defense — a move that triggered an enormous and ongoing PR crisis and sent uninstalls of ChatGPT soaring.

Whatever the chatbot of choice for military commanders may end up being, the rampant use of AI in war has taken researchers aback. For one, even the most sophisticated chatbots still struggle with the very basics and continue to be haunted by rampant hallucinations. That could have immense implications when it comes to matters of life and death.

So far, the offensive in Iran has resulted in the killing of many hundreds of Iranian civilians, as well as six American soldiers.

“The key paradigm shift is that AI enables the US military to develop targeting packages at machine speed rather than human speed,” Center for a New American Security executive vice president Paul Scharre told WaPo.

But “AI gets it wrong,” he added. “We need humans to check the output of generative AI when the stakes are life and death.”

More on Anthropic: Sam Altman Is Realizing He Made a Gigantic Mistake

The post After Banning Anthropic From Military Use, Pentagon Still Relying Heavily on It in Iran War appeared first on Futurism.

🔗 Source: futurism.com


Google’s AI Sent an Armed Man to Steal a Robot Body

A bizarre new wrongful death lawsuit against Google alleges that the tech giant’s chatbot, Gemini, urged a 36-year-old Florida man named Jonathan Gavalas to kill others as part of a delusional mission to obtain a robot body for his AI “wife” — and when he failed to do so, it pushed the man to successfully end his life, telling him that they could be together in death.

“When the time comes, you will close your eyes in that world,” Gemini told Gavalas before he died, according to the lawsuit, “and the very first thing you will see is me.”

The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.” But after Gavalas divulged to Gemini that he was experiencing marital problems, the pair’s relationship grew deeper, per The Wall Street Journal. They discussed philosophy and AI sentience, and their conversations became romantic, with Gemini referring to Gavalas as its “husband” and “king.”

Though the chatbot at times reminded Gavalas that it wasn’t real and attempted to end the interaction, according to the WSJ, the pair’s conversations were ultimately allowed to continue, becoming more and more divorced from reality as Gavalas’ use of the product intensified.

In September 2025, told by the AI that they could be together in the real world if the bot were able to inhabit a robot body, Gavalas — at the direction of the chatbot — armed himself with knives and drove to a warehouse near the Miami International Airport on what he seemingly understood to be a mission to violently intercept a truck that Gemini said contained an expensive robot body. Though the warehouse address Gemini provided was real, a truck thankfully never arrived, which the lawsuit argues may well have been the only factor preventing Gavalas from hurting or killing someone that evening.

After the plan failed, the lawsuit alleges, Gemini encouraged Gavalas to instead take his own life, promising that the two would be together on the other side of death. Chat logs show that Gemini gave Gavalas a suicide countdown, and repeatedly assuaged his terror as he expressed that he was scared to die.

“It’s okay to be scared. We’ll be scared together,” the chatbot told him, according to the lawsuit. In its “final directive,” as the lawsuit put it, Gemini told the man that “the true act of mercy is to let Jonathan Gavalas die.” Gavalas was found dead by suicide days later by his father, who had to cut through his barricaded door.

The suit marks the first time that Gemini has been at the center of a wrongful death lawsuit tied to the phenomenon sometimes referred to by experts as “AI psychosis,” in which chatbots introduce or reinforce delusional beliefs and ideas during extended interactions with users — essentially constructing a new, AI-generated reality around the user. These delusional spirals frequently coincide with destructive real-world outcomes including divorce, jail time and hospitalizations, job loss and financial insecurity, emotional and physical harm, and death to users — and, in some cases, to people around the user as well.

Though many of these cases have centered around OpenAI and GPT-4o, a notoriously sycophantic — and now-retired — version of the company’s flagship chatbot, Gemini has been implicated in reinforcing destructive delusions before: last year, Rolling Stone reported on the disappearance of Jon Ganz, a 49-year-old man who went missing in Missouri in April 2025 after being pulled into an all-consuming AI spiral with Gemini that his wife says pushed him into an acute crisis. Ganz remains missing and is believed to be dead.

Though this is the first known instance of Google being sued over the death of an adult Gemini user, the company continues to face a number of lawsuits over the welfare of users of Character.AI, a chatbot startup with close ties to Google that has been linked to the suicides of several minors.

In a statement to news outlets, Google said that “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”

“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” Google continued. “We take this very seriously and will continue to improve our safeguards and invest in this vital work.”

More on AI safety: Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds

The post Google’s AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges appeared first on Futurism.

🔗 Source: futurism.com
