📌 TOPINDIATOURS Update ai: OpenAI hardware chief resigns after AI models deployed on Pentagon networks
OpenAI’s hardware lead, Caitlin Kalinowski, said she has stepped down from the company after raising concerns about its recent agreement with the US Department of War. In a post shared on X, Kalinowski said the decision followed OpenAI’s move to deploy its AI models on the Pentagon’s classified cloud networks.
According to her, the company moved too quickly to finalize the arrangement without allowing sufficient time for broader internal and public discussion about the implications. Kalinowski noted that while AI can play an important role in national security, certain boundaries require far more scrutiny.
Surveillance of Americans without judicial oversight and the development of lethal autonomous systems without clear human authorization, she said, are issues that deserved more deliberation before the partnership moved forward.
OpenAI defends safeguards after Kalinowski’s resignation
The former hardware lead said she holds “deep respect” for OpenAI CEO Sam Altman and the broader team, but argued the Pentagon partnership was announced before clear safeguards had been defined. “It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed,” Kalinowski wrote in a follow-up post.
OpenAI said the day after the agreement was announced that the partnership includes additional safeguards designed to limit how its technology can be used. The company reiterated on Saturday that its “red lines” prohibit applications such as domestic surveillance or the deployment of autonomous weapons.
In a statement to Reuters, the company said it understands that its work in this area can generate strong opinions and debate. It added that it plans to continue engaging with employees, government representatives, civil society groups, and communities worldwide as the conversation evolves.
Just over a week ago, OpenAI revealed its partnership with the Pentagon, following failed talks between the Department of War and Anthropic, which had sought safeguards to prevent its AI from being used for mass domestic surveillance or fully autonomous weapons.
Altman stresses human oversight and legal protections
After the company signed the agreement last week, Altman explained that the contract incorporates protections similar to those that were a point of contention in Anthropic’s negotiations.
The tech leader emphasized that two of OpenAI’s core safety principles – bans on domestic mass surveillance and ensuring human responsibility for the use of force, including in autonomous weapons systems – are reflected in the Pentagon agreement. He added that the Department of War agrees with these principles, has codified them in law and policy, and that the contract puts them into practice.
“We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only,” Altman wrote on X.
He added that the company is urging the US Department of War to extend the same terms to all AI firms, arguing that these conditions should be acceptable across the industry, and noted that OpenAI prefers resolving tensions through practical agreements rather than legal or governmental action.
According to media reports, Altman also told employees during an all-hands meeting that the government will let OpenAI develop its own “safety stack” to prevent misuse. He emphasized that if the AI model declines a task, the government would not compel the company to override that refusal.
🔗 Source: interestingengineering.com
📌 TOPINDIATOURS Hot ai: Which Agent Causes Task Failures and When?
Share My Research is Synced’s column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Contact us: chain.zhang@jiqizhixin.com
Meet the authors
Institutions: Penn State University, Duke University, Google DeepMind, University of Washington, Meta, Nanyang Technological University, and Oregon State University. The co-first authors are Shaokun Zhang of Penn State University and Ming Yin of Duke University.
In recent years, LLM Multi-Agent systems have garnered widespread attention for their collaborative approach to solving complex problems. However, these systems commonly fail at a task despite a flurry of activity, leaving developers with a critical question: which agent, at what point, was responsible for the failure? Sifting through vast interaction logs to pinpoint the root cause feels like finding a needle in a haystack: a time-consuming and labor-intensive effort.
This is a familiar frustration for developers. In increasingly complex Multi-Agent systems, failures are not only common but also incredibly difficult to diagnose due to the autonomous nature of agent collaboration and long information chains. Without a way to quickly identify the source of a failure, system iteration and optimization grind to a halt.
To address this challenge, researchers from Penn State University and Duke University, in collaboration with institutions including Google DeepMind, have introduced the novel research problem of “Automated Failure Attribution.” They have constructed the first benchmark dataset for this task, Who&When, and have developed and evaluated several automated attribution methods. This work not only highlights the complexity of the task but also paves a new path toward enhancing the reliability of LLM Multi-Agent systems.
The paper has been accepted as a Spotlight presentation at the top-tier machine learning conference, ICML 2025, and the code and dataset are now fully open-source.
Paper: https://arxiv.org/pdf/2505.00212
Code: https://github.com/mingyin1/Agents_Failure_Attribution
Dataset: https://huggingface.co/datasets/Kevin355/Who_and_When
Research Background and Challenges
LLM-driven Multi-Agent systems have demonstrated immense potential across many domains. However, these systems are fragile; errors by a single agent, misunderstandings between agents, or mistakes in information transmission can lead to the failure of the entire task.
Currently, when a system fails, developers are often left with manual and inefficient methods for debugging:
Manual Log Archaeology: Developers must manually review lengthy interaction logs to find the source of the problem.
Reliance on Expertise: The debugging process is highly dependent on the developer’s deep understanding of the system and the task at hand.
This “needle in a haystack” approach to debugging is not only inefficient but also severely hinders rapid system iteration and the improvement of system reliability. There is an urgent need for an automated, systematic method to pinpoint the cause of failures, effectively bridging the gap between “evaluation results” and “system improvement.”
Core Contributions
This paper makes several groundbreaking contributions to address the challenges above:
1. Defining a New Problem: The paper is the first to formalize “automated failure attribution” as a specific research task, defined as identifying both the failure-responsible agent and the decisive error step that led to the task’s failure.
2. Constructing the First Benchmark Dataset, Who&When: This dataset includes a wide range of failure logs collected from 127 LLM Multi-Agent systems, which were either algorithmically generated or hand-crafted by experts to ensure realism and diversity. Each failure log is accompanied by fine-grained human annotations (an illustrative record is sketched after this list) for:
Who: The agent responsible for the failure.
When: The specific interaction step where the decisive error occurred.
Why: A natural language explanation of the cause of the failure.
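As a concrete illustration, a record in this style might look like the following Python sketch. The field names (`query`, `history`, `who`, `when`, `why`) and the example task are hypothetical, chosen to mirror the Who/When/Why annotations described above rather than the dataset’s actual schema; consult the Hugging Face dataset card for the real format.

```python
# Illustrative Who&When-style record (field names are assumptions for
# illustration only; see the Hugging Face dataset card for the real schema).
failure_log = {
    "query": "Book the cheapest flight from NYC to SF next Friday.",
    "history": [
        {"step": 0, "agent": "Planner",  "content": "Delegating flight search to WebAgent."},
        {"step": 1, "agent": "WebAgent", "content": "Found flights departing Thursday."},
        {"step": 2, "agent": "Booker",   "content": "Booked the Thursday flight."},
    ],
    # Fine-grained human annotations:
    "who":  "WebAgent",  # the failure-responsible agent
    "when": 1,           # the decisive error step (wrong date searched)
    "why":  "WebAgent queried the wrong date, and every later step inherited the error.",
}
```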
3. Exploring Initial “Automated Attribution” Methods: Using the Who&When dataset, the paper designs and assesses three distinct methods for automated failure attribution, each sketched in code after this list:
– All-at-Once: This method provides the LLM with the user query and the complete failure log, asking it to identify the responsible agent and the decisive error step in a single pass. While cost-effective, it may struggle to pinpoint precise errors in long contexts.
– Step-by-Step: This approach mimics manual debugging by having the LLM review the interaction log sequentially, making a judgment at each step until the error is found. It is more precise at locating the error step but incurs higher costs and risks accumulating errors.
– Binary Search: A compromise between the first two methods, this strategy repeatedly divides the log in half, using the LLM to determine which segment contains the error. It then recursively searches the identified segment, offering a balance of cost and performance.
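Below is a minimal Python sketch of all three strategies. The `llm` callable stands in for any chat-completion API (the paper primarily used GPT-4o), and the prompts and yes/no parsing are simplified assumptions, not the paper’s exact implementation.

```python
from typing import Callable, Dict, List

Step = Dict[str, str]        # e.g. {"agent": "WebAgent", "content": "..."}
LLM = Callable[[str], str]   # prompt in, raw model text out (e.g. a GPT-4o call)

def fmt(history: List[Step]) -> str:
    """Render a log segment as numbered '[i] agent: content' lines."""
    return "\n".join(f"[{i}] {s['agent']}: {s['content']}" for i, s in enumerate(history))

def all_at_once(llm: LLM, query: str, history: List[Step]) -> str:
    """One LLM call over the full log: cheap, but precision drops on long contexts."""
    return llm(
        f"Task: {query}\nFull log:\n{fmt(history)}\n"
        "Name the agent responsible for the failure and the decisive error step."
    )

def step_by_step(llm: LLM, query: str, history: List[Step]) -> int:
    """One LLM call per step, mimicking manual debugging: precise on 'when',
    but costs up to len(history) calls and can accumulate judgment errors."""
    for i in range(len(history)):
        verdict = llm(
            f"Task: {query}\nLog so far:\n{fmt(history[: i + 1])}\n"
            f"Is step {i} the decisive error? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            return i
    return len(history) - 1  # no step flagged: fall back to the last one

def binary_search(llm: LLM, query: str, history: List[Step]) -> int:
    """Halve the candidate range each round (~log2(len) calls): a compromise.
    This sketch shows the full log and narrows the question; asking whether the
    error lies at or before the midpoint decides which half to keep."""
    lo, hi = 0, len(history) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        verdict = llm(
            f"Task: {query}\nFull log:\n{fmt(history)}\n"
            f"Does the decisive error occur at or before step {mid}? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            hi = mid       # keep searching the earlier half
        else:
            lo = mid + 1   # keep searching the later half
    return lo
```

As the docstrings note, the three sketches differ mainly in call count: one call for All-at-Once, up to one per step for Step-by-Step, and roughly log2 of the log length for Binary Search, which is where the cost/precision trade-off described above comes from.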
Experimental Results and Key Findings
Experiments were conducted in two settings: one where the LLM knows the ground truth answer to the problem the Multi-Agent system is trying to solve (With Ground Truth) and one where it does not (Without Ground Truth). The primary model used was GPT-4o, though other models were also tested. The systematic evaluation of these methods on the Who&When dataset yielded several important insights:
– A Long Way to Go: Current methods are far from perfect. Even the best-performing single method achieved an accuracy of only about 53.5% in identifying the responsible agent and a mere 14.2% in pinpointing the exact error step. Some methods performed even worse than random guessing, underscoring the difficulty of the task.
– No “All-in-One” Solution: Different methods excel at different aspects of the problem. The All-at-Once method is better at identifying “Who,” while the Step-by-Step method is more effective at determining “When.” The Binary Search method provides a middle-ground performance.
– Hybrid Approaches Show Promise but at a High Cost: The researchers found that combining different methods, such as using the All-at-Once approach to identify a potential agent and then applying the Step-by-Step method to find the error, can improve overall performance (a minimal sketch follows this list). However, this comes with a significant increase in computational cost.
– State-of-the-Art Models Struggle: Surprisingly, even the most advanced reasoning models, such as OpenAI o1 and DeepSeek R1, find this task challenging.
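A minimal sketch of such a hybrid follows, under the same assumptions as the earlier sketch (an `llm` callable and simplified, hypothetical prompts): an All-at-Once pass proposes a suspect agent, then a Step-by-Step pass judges only that agent’s steps.

```python
from typing import Callable, Dict, List, Tuple

Step = Dict[str, str]        # e.g. {"agent": "WebAgent", "content": "..."}
LLM = Callable[[str], str]   # prompt in, raw model text out

def fmt(history: List[Step]) -> str:
    """Render a log segment as numbered '[i] agent: content' lines."""
    return "\n".join(f"[{i}] {s['agent']}: {s['content']}" for i, s in enumerate(history))

def hybrid(llm: LLM, query: str, history: List[Step]) -> Tuple[str, int]:
    """Stage 1 (All-at-Once style): one cheap full-log pass names a suspect agent.
    Stage 2 (Step-by-Step style): sequentially judge only the suspect's steps."""
    suspect = llm(
        f"Task: {query}\nFull log:\n{fmt(history)}\n"
        "Reply with only the name of the agent most responsible for the failure."
    ).strip()
    for i, step in enumerate(history):
        if step["agent"] != suspect:
            continue  # skip other agents' steps to keep the call count down
        verdict = llm(
            f"Task: {query}\nLog so far:\n{fmt(history[: i + 1])}\n"
            f"Is step {i} (by {suspect}) the decisive error? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            return suspect, i
    return suspect, len(history) - 1  # nothing flagged: default to the last step
```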
Content automatically shortened.
🔗 Source: syncedreview.com
🤖 TOPINDIATOURS Note
This article is an automated summary compiled from several trusted sources. We select trending topics so you always stay up to date without missing anything.
✅ Next update in 30 minutes, with a random theme!