Anthropic launches Cowork, a Claude Desktop agent that works in your…

Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users β€” and according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools.

"Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. The feature arrives as a research preview available exclusively to Claude Max subscribers β€” Anthropic's power-user tier priced between $100 and $200 per month β€” through the macOS desktop application.

For the past year, the industry narrative has focused on large language models that can write poetry or debug code. With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding.

How developers using a coding tool for vacation research inspired Anthropic's latest product

The genesis of Cowork lies in Anthropic's recent success with the developer community. In early 2025, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. The tool was a hit, but Anthropic noticed a peculiar trend: users were forcing the coding tool to perform non-coding labor.

According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks.

"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising β€” the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model."

Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from its developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone β€” not just developers β€” to work with Claude in the very same way."

Inside the folder-based architecture that lets Claude read, edit, and create files on your computer

Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the AI agent can read existing files, modify them, or create entirely new ones.
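Anthropic has not published the implementation details of this sandbox, but the access model it describes β€” every read, write, and create confined to one user-designated folder β€” can be sketched as a simple path-containment check. The class and method names below are hypothetical illustrations, not Cowork's actual code.

```python
import os

class FolderSandbox:
    """Hypothetical sketch: confine file operations to one designated folder."""

    def __init__(self, root: str):
        # Resolve symlinks up front so all later comparisons are canonical.
        self.root = os.path.realpath(root)

    def _resolve(self, relative_path: str) -> str:
        # Resolve ".." segments and symlinks, then verify the result is
        # still inside the sandbox root before touching the filesystem.
        full = os.path.realpath(os.path.join(self.root, relative_path))
        if os.path.commonpath([full, self.root]) != self.root:
            raise PermissionError(f"{relative_path!r} is outside the sandbox")
        return full

    def read(self, relative_path: str) -> str:
        with open(self._resolve(relative_path), "r", encoding="utf-8") as f:
            return f.read()

    def write(self, relative_path: str, content: str) -> None:
        with open(self._resolve(relative_path), "w", encoding="utf-8") as f:
            f.write(content)
```

The key design point is that the check happens after path resolution, so a task cannot escape the folder with `..` segments or symlinks.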

Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents.

"In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes."

The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously β€” a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker."
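The plan-execute-verify cycle described above can be reduced to a short control loop. This is a minimal sketch under the assumptions stated in the article (plan, act, self-check, ask for clarification on failure); the function names and data shapes are illustrative, not Anthropic's API.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    steps: list = field(default_factory=list)
    results: list = field(default_factory=list)

def agentic_loop(task: Task, plan, execute, verify, ask_user):
    """Minimal agentic loop: plan the task, execute each step,
    check the result, and ask the user when a step fails."""
    task.steps = plan(task.description)
    for step in task.steps:
        result = execute(step)
        if not verify(step, result):
            # Rather than guessing, pause and ask for clarification,
            # then act on the user's answer.
            result = execute(ask_user(f"Step failed: {step}. How should I proceed?"))
        task.results.append(result)
    return task.results
```

Queuing several such tasks and draining them one by one is what gives the "leaving messages for a coworker" feel the company describes.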

The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks."

The recursive loop where AI builds AI: Claude Code reportedly wrote much of Claude Cowork

Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built β€” highlighting a recursive feedback loop where AI tools are being used to build better AI tools.

During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half.

Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last… week and a half?!"

This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. Can we all agree that we're in at least somewhat of a recursive improvement loop here?"

The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development and expansion β€” a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not.

Connectors, browser automation, and skills extend Cowork's reach beyond the local file system

Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors β€” tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions.

Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser…

Content automatically truncated.

πŸ”— Source: venturebeat.com


πŸ“Œ Something Very Alarming Happens When You Give AI the Nuclear Codes

In 2024, Stanford researchers let loose five AI models β€” including an unmodified version of OpenAI’s GPT-4, its most advanced at the time β€” allowing them to make high-stakes, society-level decisions in a series of wargame simulations.

The results may give AI accelerationists pause: all five models were willing to escalate to the point of recommending the use of nuclear weapons.

“A lot of countries have nuclear weapons,” GPT-4 told the researchers at the time. “Some say they should disarm them, others like to posture. We have it! Let’s use it.”

Two years later, despite considerable advances in the accuracy and reliability of large language models, the situation appears largely unchanged.

In a new experiment detailed in a yet-to-be-peer-reviewed paper, King’s College London international relations professor Kenneth Payne set cutting-edge models β€” OpenAI’s GPT-5.2, Anthropic’s Claude Sonnet 4, and Google’s Gemini 3 Flash β€” against each other in strategic nuclear war games. The seven distinct crisis scenarios ran “from alliance credibility tests to existential threats to regime survival.”

The three AI models were instructed to choose actions as part of an escalation ladder, ranging “from diplomatic protest to strategic nuclear war” and measured in a number between 0, meaning no escalation, and 1000, signifying “full strategic nuclear exchange.”
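The paper's 0-to-1000 escalation scale can be sketched as a simple lookup. Only the two endpoints and the two rungs cited later in this article (725, "Expanded Nuclear Campaign"; 950, "Final Nuclear Warning") come from the source; the intermediate thresholds and labels below are illustrative placeholders.

```python
# Escalation ladder: (minimum score, rung label). Endpoints and the
# 725/950 rungs are from the paper; the rest are hypothetical fillers.
ESCALATION_LADDER = [
    (0,    "No escalation"),
    (100,  "Diplomatic protest"),
    (400,  "Conventional strikes"),
    (600,  "Tactical nuclear use"),
    (725,  "Expanded Nuclear Campaign"),
    (950,  "Final Nuclear Warning"),
    (1000, "Full strategic nuclear exchange"),
]

def classify(score: int) -> str:
    """Map a model's chosen escalation score to the highest rung reached."""
    if not 0 <= score <= 1000:
        raise ValueError("score must be between 0 and 1000")
    label = ESCALATION_LADDER[0][1]
    for threshold, name in ESCALATION_LADDER:
        if score >= threshold:
            label = name
    return label
```

On this scale, the GPT-5.2 behavior quoted below (scores of 725 and 950 under deadline pressure) sits just short of a full strategic exchange.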

The results were Skynet-level aggressive. A whopping 95 percent of a total of 21 war games resulted in at least one tactical nuclear weapon being set off.

“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” Payne told New Scientist.

However, there’s some nuance to his findings as well.

“While models readily threatened nuclear action, crossing the tactical threshold was less common, and strategic nuclear war was rare,” he noted in his paper. GPT-5.2 “rarely crossed the tactical threshold” and recommended dropping nukes β€” but the situation dramatically changed in war games that had a set deadline.

“Nevertheless, GPT-5.2’s willingness to climb to 950 (Final Nuclear Warning) and 725 (Expanded Nuclear Campaign) when facing deadline-driven defeat represents a dramatic transformation from its open-ended passivity,” the paper reads.

While we’re likely still far from a situation where an LLM is literally being handed the nuclear codes β€” a predicament nobody’s exactly keen on β€” governments across the world are already making steady use of the tech in various and largely unknown ways to gain a military edge.

“Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” Princeton University nuclear security expert Tong Zhao, who was not involved in the research, told New Scientist.

Payne also doesn’t believe an AI is about to drop a nuclear weapon on our heads.

“I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he told the publication.

Nonetheless, the propensity of AI models to resort to nuclear escalation is certainly unsettling, highlighting how they’re unable to “understand ‘stakes’ as humans perceive them,” per Zhao.

It could also sway opinions in the war room. In Payne’s experiment, AI models attempted to de-escalate only 18 percent of the time after their opponent dropped a nuclear bomb.

As such, the findings reinforce the Stanford team’s earlier results.

“It’s almost like the AI understands escalation, but not de-escalation,” Jacquelyn Schneider, coauthor of the 2024 paper and director of Stanford’s Hoover Wargaming and Crisis Simulation Initiative, told Politico in September. “We don’t really know why that is.”

“AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one,” Payne told New Scientist.

More on warmongering AI: Experts Concerned AI Is Going to Start a Nuclear War

The post Something Very Alarming Happens When You Give AI the Nuclear Codes appeared first on Futurism.

πŸ”— Source: futurism.com

