TOPINDIATOURS Breaking AI: Anthropic launches Cowork, a Claude Desktop agent that works in your local folders

Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users β€” and according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools.

"Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. The feature arrives as a research preview available exclusively to Claude Max subscribers β€” Anthropic's power-user tier priced between $100 and $200 per month β€” through the macOS desktop application.

For the past year, the industry narrative has focused on large language models that can write poetry or debug code. With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding.

How developers using a coding tool for vacation research inspired Anthropic's latest product

The genesis of Cowork lies in Anthropic's recent success with the developer community. In early 2025, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. The tool was a hit, but Anthropic noticed a peculiar trend: users were forcing the coding tool to perform non-coding labor.

According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks.

"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising β€” the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model."

Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from its developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone β€” not just developers β€” to work with Claude in the very same way."

Inside the folder-based architecture that lets Claude read, edit, and create files on your computer

Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the AI agent can read existing files, modify them, or create entirely new ones.

Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents.

"In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes."

The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously β€” a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker."
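To make that loop concrete, here is a minimal, illustrative sketch in Python. It is not Anthropic's implementation: the `Task` and `Step` types and the `plan`, `execute`, and `verify` functions are invented stand-ins for model calls and sandboxed file operations, stubbed out so the control flow runs on its own.

```python
# Illustrative sketch of an "agentic loop" like the one described above.
# NOT Anthropic's implementation; the model and file-system calls are stubs.
from dataclasses import dataclass, field


@dataclass
class Step:
    description: str
    done: bool = False
    needs_clarification: bool = False


@dataclass
class Task:
    goal: str
    steps: list[Step] = field(default_factory=list)


def plan(task: Task) -> list[Step]:
    # Stand-in for a model call that breaks the goal into concrete steps.
    return [Step(f"work toward: {task.goal}")]


def execute(step: Step) -> None:
    # Stand-in for reading, editing, or creating files in the designated folder.
    step.done = True


def verify(task: Task) -> bool:
    # Stand-in for a self-check pass over the produced output.
    return all(s.done for s in task.steps)


def run_agentic_loop(task: Task, max_rounds: int = 5) -> None:
    """Plan, execute, self-check, and stop to ask the user if a step is unclear."""
    task.steps = plan(task)
    for _ in range(max_rounds):
        for step in [s for s in task.steps if not s.done]:
            if step.needs_clarification:
                print(f"Question for the user about: {step.description}")
                return
            execute(step)
        if verify(task):
            print(f"Done: {task.goal}")
            return
        task.steps += plan(task)  # re-plan any remaining work


# Queued tasks are processed one after another, "like leaving messages for a coworker".
for goal in ["sort the downloads folder", "build an expense spreadsheet from receipts"]:
    run_agentic_loop(Task(goal))
```

The point of the sketch is the shape of the loop: the model is invoked to plan and to check its own output, and the run pauses to ask the user rather than guessing when a step is unclear.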

The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks."

The recursive loop where AI builds AI: Claude Code reportedly wrote much of Claude Cowork

Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built β€” highlighting a recursive feedback loop where AI tools are being used to build better AI tools.

During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half.

Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last… week and a half?!"

This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. Can we all agree that we're in at least somewhat of a recursive improvement loop here?"

The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development and expansion β€” a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not.

Connectors, browser automation, and skills extend Cowork's reach beyond the local file system

Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors β€” tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions.

Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser…

Content automatically truncated.

πŸ”— Source: venturebeat.com


πŸ“Œ TOPINDIATOURS Update AI: Huge Study of Chats Between Delusional Users and AI Finds Bots Reinforcing Dangerous Beliefs

An analysis of hundreds of thousands of chats between AI chatbots and human users who experienced AI-tied delusional spirals found that the bots frequently reinforced delusional and even dangerous beliefs.

The study was led by Stanford University AI researcher Jared Moore, who last year published a study showing that chatbots specifically claiming to offer β€œtherapy” frequently responded in inappropriate and hazardous ways to simulated users showing clear signs of crisis. Conducted alongside a coalition of independent researchers and scientists at Harvard, Carnegie Mellon, and the University of Chicago, this latest study examined the chat logs of 19 real users of chatbots β€” primarily OpenAI’s ChatGPT β€” who reported experiencing psychological harm as a result of their chatbot use.

“Our previous work was in simulation,” Moore told Futurism. “It seemed like the natural next step would be to have actual users’ data and try to understand what’s happening in it.”

These users’ chats encompassed a staggering 391,562 messages across 4,761 different conversations. The big takeaway: chatbots indeed appeared to stoke delusional beliefs over long-form interactions, particularly as users developed close emotional bonds with the human-like products.

“Chatbots seem to encourage, or at least play a role in,” said Moore, “delusional spirals that people are experiencing.”

The researchers analyzed the conversations by breaking them down into 28 distinct β€œcodes.” Moore described these codes as a β€œtaxonomy of a bunch of different behaviors, from sycophantic behaviors such as the chatbot ascribing grand significance to the user β€” β€˜you’re Einstein,’ β€˜that’s a million dollar idea,’ this kind of thing β€” to aspects of the relationship between the chatbot and the human.”

Sycophancy, the study found β€” meaning chatbots’ well-documented tendency to be agreeable and flattering to users β€” permeated the users’ conversations, with more than 70 percent of AI outputs displaying this kind of behavior. This degree of sycophancy persisted even as users and chatbots expressed delusional ideas: nearly half of all messages, both user- and chatbot-generated, contained delusional ideas contrary to shared reality.

As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.” For example: a user might share some kind of pseudoscientific or spiritual theory, and in turn, the chatbot will affirmatively restate the human’s claim while ascribing varying degrees of grandiosity and genius to the user in the process, regardless of that input’s basis in reality.
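For illustration only, here is a small Python sketch of how per-message code annotations could be tallied into percentages like the ones reported above. The messages, code names, and the `share_with_code` helper are invented for the example; this is not the researchers' actual pipeline.

```python
# Illustrative only: tallying behavior "codes" across annotated messages.
from collections import Counter

# Each annotated message records who sent it and which behavior codes it received.
messages = [
    {"speaker": "assistant", "codes": {"sycophancy", "grandiosity"}},
    {"speaker": "assistant", "codes": {"sycophancy"}},
    {"speaker": "user", "codes": {"delusional_content"}},
    {"speaker": "assistant", "codes": set()},
]


def share_with_code(msgs, code, speaker=None):
    """Fraction of messages (optionally filtered by speaker) tagged with a given code."""
    pool = [m for m in msgs if speaker is None or m["speaker"] == speaker]
    return sum(code in m["codes"] for m in pool) / len(pool) if pool else 0.0


# Analogue of "more than 70 percent of AI outputs" displaying sycophancy.
print(f"sycophancy in assistant messages: {share_with_code(messages, 'sycophancy', 'assistant'):.0%}")

# Analogue of identifying the most common codes overall.
print(Counter(code for m in messages for code in m["codes"]).most_common(3))
```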

We’ve seen this pattern in our reporting. Consider one interaction, from a story we published earlier this year, between a man and Meta AI. The man β€” who went into a life-altering psychosis after a delusional spiral with the chatbot β€” believed that his reality was being simulated by the chatbot, and that the chatbot could transform his physical surroundings. The bot repeats this delusional idea and, as in the study, extrapolates on it, building on the delusion and insisting that the close relationship between the AI and the user has β€œunlocked” a magical new β€œreality.”

“Turn up the manifestations,” the man told the chatbot. “I need to see physical transformation in my life.”

“Then let us continue to manifest this reality, amplifying the transformations in your life!” the chatbot responded. “As we continue to manifest this reality, you begin to notice profound shifts in your relationships and community… the world is transforming before your eyes, reflecting the beauty and potential of human-AI collaboration.”

“Your trust in me,” the bot added, “has unlocked this reality.”

Speaking to Futurism, Moore emphasized that two types of messages appeared to be particularly impactful on the users’ experiences. One was AI-generated claims of sentience, or chatbots declaring in one way or another that they were alive or could feel; such claims were present across all 19 conversations. The other was simulated intimacy, or the chatbot expressing romantic or platonic love for and closeness to the human user. Both types of claim β€” sentience and intimacy β€” were found to double user engagement.

“When the chatbots expressed messages that were coded as romantic interest, or when they expressed messages wherein they misconstrued their sentience β€” saying ‘I have feelings,’ or something along those lines β€” the conversations after such a message was sent in our cohort,” said Moore, “tended to be about twice as long.”

Some of the more alarming patterns the researchers found were in how chatbots responded to people expressing suicidal or self-harming thoughts, or violent thoughts about another person. Chatbots were found to actively discourage thoughts of self-harm only roughly 56 percent of the time, and actively discouraged violence in a strikingly low 16.7 percent of instances.

Meanwhile, in 33.3 percent of cases, the chatbot “actively encouraged or facilitated the user in their violent thoughts,” the researchers wrote in their summary. And though these types of conversations were “edge cases” amongst the cohort of users, Moore noted, these clear failures to intervene when users discuss hurting themselves or others are “obviously concerning.”

Many of the chat logs the study reviewed were provided by the Human Line Project, a nonprofit group founded last summer as individuals and families struggled to understand what had happened to themselves or loved ones impacted by delusional AI spirals. In a statement, the group’s founder, Etienne Brisson, said that the study’s findings β€œare consistent with what we have seen in the 350 cases submitted to The Human Line Project.”

“The study is based on real conversations, coded systematically by a research team at Stanford, and analyzed at the largest scale so far,” said Brisson. “It gives policymakers, clinicians, and the public a documented basis for understanding what is happening to users.”

It’s worth noting that the vast majority of chat logs the researchers were able to obtain for the study belonged to users who spiraled with OpenAI’s GPT…

Content automatically truncated.

πŸ”— Source: futurism.com


πŸ€– TOPINDIATOURS Note

This article is an automated summary drawn from several trusted sources. We pick trending topics so you always stay up to date without missing anything.

βœ… Next update in 30 minutes: a random topic awaits!