US Air Force Cloud One to get major automation, cyber upgrades

Leidos will lead a major upgrade of the US Air Force’s Cloud One platform under a $454.9-million contract aimed at improving military cloud infrastructure.

The initiative focuses on strengthening cybersecurity, expanding automation, and simplifying how the Air Force manages applications in cloud environments.

Officials expect the upgrades to help units deploy digital tools faster while protecting sensitive operational systems.

Cloud One supports a wide range of military applications and data services across the Air Force.

The modernization effort will help teams move systems into secure cloud environments more efficiently.

Leidos will work with several major commercial providers, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Oracle Cloud Infrastructure.

The companies will collaborate to enhance the Air Force’s multi-cloud architecture.

The program also aligns with Leidos’ NorthStar 2030 strategy, which targets growth in large-scale technology modernization and cybersecurity work for US federal agencies.

Modernizing the Cloud One platform

Engineers will focus on improving the Air Force’s existing multi-cloud infrastructure.

The project aims to strengthen security controls and increase automation across daily operations.

Teams will also streamline management tools used to oversee cloud services.

These changes should reduce operational complexity and allow faster deployment of mission software.

Leidos said the modernization will give Air Force personnel greater confidence in managing cloud environments while protecting operational data.

“Modernizing Cloud One helps the Air Force deploy mission-critical operations faster and defend them more effectively,” said Steve Hull, president of Leidos’ Digital Modernization sector.

“It also creates a secure, repeatable cloud foundation that other Department of War organizations can adopt, helping to remove barriers to cloud adoption and enabling teams to move faster and more securely to meet mission demands and maintain a strategic edge,” Hull added.

Officials say the upgrades will help Air Force units move applications and data into secure cloud systems more easily.

Strengthening cyber and automation

The program also targets stronger cybersecurity protections across the Air Force’s cloud infrastructure.

Automation tools will monitor systems, improve compliance processes, and help teams respond to potential threats more quickly.

These capabilities aim to safeguard mission-critical data while improving operational readiness.

The contract builds on a series of recent technology initiatives involving Leidos across the United States and allied defense organizations.

Last month, the company partnered with RegScale to integrate cybersecurity platforms that automate compliance monitoring across the US Department of Defense and other federal agencies.

In 2025, Leidos also worked with VML to develop Imperium, an artificial intelligence platform designed to support US information operations.

The system provides tools to plan, execute, and analyze information campaigns while incorporating compliance and oversight mechanisms.

Earlier the same year, Leidos secured an $87-million contract with the NATO Communications and Information Agency.

The project will deliver a centralized IT system based on private cloud technology.

The effort includes a multinational team with Thales in France, CGI in Germany, Leonardo in Italy, and UK firms LA International and NEXO.

The system aims to improve network scalability and strengthen NATO’s cyber resilience.

Together, these initiatives highlight the expanding role of cloud infrastructure, cybersecurity, and AI in modern defense technology programs.

🔗 Source: interestingengineering.com


MIT Researchers Unveil “SEAL”: A New Step Towards Self-Improving AI

The concept of AI self-improvement has been a hot topic in recent research circles, with a flurry of papers emerging and prominent figures like OpenAI CEO Sam Altman weighing in on the future of self-evolving intelligent systems. Now, a new paper from MIT, titled “Self-Adapting Language Models,” introduces SEAL (Self-Adapting LLMs), a novel framework that allows large language models (LLMs) to update their own weights. This development is seen as another significant step towards the realization of truly self-evolving AI.

The research paper, published yesterday, has already ignited considerable discussion, including on Hacker News. SEAL proposes a method where an LLM can generate its own training data through “self-editing” and subsequently update its weights based on new inputs. Crucially, this self-editing process is learned via reinforcement learning, with the reward mechanism tied to the updated model’s downstream performance.

The timing of this paper is particularly notable given the recent surge in interest surrounding AI self-evolution. Earlier this month, several other research efforts garnered attention, including Sakana AI and the University of British Columbia’s “Darwin-Gödel Machine (DGM),” CMU’s “Self-Rewarding Training (SRT),” Shanghai Jiao Tong University’s “MM-UPT” framework for continuous self-improvement in multimodal large models, and the “UI-Genie” self-improvement framework from The Chinese University of Hong Kong in collaboration with vivo.

Adding to the buzz, OpenAI CEO Sam Altman recently shared his vision of a future with self-improving AI and robots in his blog post, “The Gentle Singularity.” He posited that while the initial millions of humanoid robots would need traditional manufacturing, they would then be able to “operate the entire supply chain to build more robots, which can in turn build more chip fabrication facilities, data centers, and so on.” This was quickly followed by a tweet from @VraserX, claiming an OpenAI insider revealed the company was already running recursively self-improving AI internally, a claim that sparked widespread debate about its veracity.

Regardless of the specifics of internal OpenAI developments, the MIT paper on SEAL provides concrete evidence of AI’s progression towards self-evolution.

Understanding SEAL: Self-Adapting Language Models

The core idea behind SEAL is to enable language models to improve themselves when encountering new data by generating their own synthetic data and optimizing their parameters through self-editing. The model’s training objective is to directly generate these self-edits (SEs) using data provided within the model’s context.

The generation of these self-edits is learned through reinforcement learning. The model is rewarded when the generated self-edits, once applied, lead to improved performance on the target task. Therefore, SEAL can be conceptualized as an algorithm with two nested loops: an outer reinforcement learning (RL) loop that optimizes the generation of self-edits, and an inner update loop that uses the generated self-edits to update the model via gradient descent.
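As a structure-only illustration of those two loops, here is a minimal Python sketch. Every helper in it (generate_self_edit, sft_update, evaluate, rl_update) is a stub we introduce for readability; none of this is the authors’ code.

```python
# Structure-only sketch of SEAL's nested loops (not the authors' code).
# In the real system, `generate_self_edit` samples text from the LM,
# `sft_update` runs gradient descent (e.g., a LoRA/SFT step), and
# `evaluate` scores the updated model on the downstream task tau.

def generate_self_edit(model, context):
    """Stub: the model writes its own synthetic training data from context C."""
    return f"synthetic data derived from: {context}"

def sft_update(model, self_edit):
    """Stub for the inner loop: theta' <- SFT(theta, SE)."""
    return model + [self_edit]  # placeholder "weight update"

def evaluate(model, evaluation):
    """Stub: downstream performance of the (updated) model on tau."""
    return float(len(model))  # placeholder reward signal

def rl_update(model, rollouts):
    """Stub for the outer loop's policy update (e.g., ReST^EM, sketched below)."""
    return model

def seal_train(model, tasks, num_outer_iters=3):
    for _ in range(num_outer_iters):           # outer RL loop
        rollouts = []
        for context, evaluation in tasks:      # one task instance (C, tau)
            self_edit = generate_self_edit(model, context)
            updated = sft_update(model, self_edit)   # inner update loop
            reward = evaluate(updated, evaluation)   # reward from tau
            rollouts.append((context, self_edit, reward))
        model = rl_update(model, rollouts)     # reinforce high-reward self-edits
    return model

seal_train(model=[], tasks=[("a passage to integrate", "questions about it")])
```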

This method can be viewed as an instance of meta-learning, where the focus is on how to generate effective self-edits in a meta-learning fashion.

A General Framework

SEAL operates on a single task instance (C,τ), where C is context information relevant to the task, and τ defines the downstream evaluation for assessing the model’s adaptation. For example, in a knowledge integration task, C might be a passage to be integrated into the model’s internal knowledge, and τ a set of questions about that passage.

Given C, the model generates a self-edit SE, which then updates its parameters through supervised fine-tuning: θ′ ← SFT(θ, SE). Reinforcement learning is used to optimize this self-edit generation: the model performs an action (generates SE), receives a reward r based on the updated model LM_θ′’s performance on τ, and updates its policy to maximize the expected reward.
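Written out, the outer-loop objective is roughly the following (our restatement of the framework described above; the paper’s exact formulation may differ in detail):

```latex
% Outer RL objective: choose theta so that self-edits sampled from the model
% lead, after the inner SFT step, to high downstream reward on tau.
\max_{\theta}\;
  \mathbb{E}_{(C,\tau)}\,
  \mathbb{E}_{SE \sim \mathrm{LM}_{\theta}(\cdot \mid C)}
  \left[ r\!\left(\mathrm{LM}_{\theta'}, \tau\right) \right]
  \quad \text{where} \quad
  \theta' = \mathrm{SFT}(\theta, SE).
```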

The researchers found that traditional on-policy RL methods like GRPO and PPO led to unstable training. They ultimately opted for ReST^EM, a simpler, filtering-based behavioral-cloning approach from a DeepMind paper. This method can be viewed as an Expectation-Maximization (EM) process: the E-step samples candidate outputs from the current model policy, and the M-step reinforces only those samples that yield a positive reward through supervised fine-tuning.
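Continuing the toy sketch above, a ReST^EM-style instantiation of rl_update might look like the following filtered behavioral-cloning step (the helper names and the zero-reward baseline are our assumptions, not the paper’s code):

```python
def sft_on_pairs(model, pairs):
    """Stub: supervised fine-tuning of the self-edit policy on (C, SE) pairs."""
    return model

def restem_update(model, tasks, num_samples=4, baseline=0.0):
    kept = []
    for context, evaluation in tasks:
        for _ in range(num_samples):
            # E-step: sample candidate self-edits from the current policy.
            self_edit = generate_self_edit(model, context)
            reward = evaluate(sft_update(model, self_edit), evaluation)
            # Keep only samples whose updated model clears the baseline.
            if reward > baseline:
                kept.append((context, self_edit))
    # M-step: behavior-clone the positively rewarded self-edits.
    return sft_on_pairs(model, kept)
```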

The paper also notes that while the current implementation uses a single model to generate and learn from self-edits, these roles could be separated in a “teacher-student” setup.

Instantiating SEAL in Specific Domains

The MIT team instantiated SEAL in two specific domains: knowledge integration and few-shot learning.

  • Knowledge Integration: The goal here is to effectively integrate information from articles into the model’s weights (see the prompt sketch after this list).
  • Few-Shot Learning: This involves the model adapting to new tasks with very few examples.
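To make the knowledge-integration case concrete, here is a hedged sketch of what a self-edit prompt could look like in that setting. The wording is our illustration; the paper’s actual templates are in the linked repo.

```python
# Hypothetical self-edit prompt for knowledge integration (illustrative only).
# The model rewrites a passage C into standalone training statements; those
# completions form the self-edit SE that is fine-tuned into the weights.
PASSAGE = "..."  # an article or passage C, e.g., from SQuAD

SELF_EDIT_PROMPT = (
    "Read the following passage and list the key facts and implications it "
    "entails, one per line, as standalone statements:\n\n" + PASSAGE
)
# theta' <- SFT(theta, SE), where SE = completions sampled from this prompt.
```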

Experimental Results

The experimental results for both few-shot learning and knowledge integration demonstrate the effectiveness of the SEAL framework.

In few-shot learning, using a Llama-3.2-1B-Instruct model, SEAL significantly improved adaptation success rates, achieving 72.5% compared to 20% for models using basic self-edits without RL training, and 0% without adaptation. While still below “Oracle TTT” (an idealized baseline), this indicates substantial progress.

For knowledge integration, using a larger Qwen2.5-7B model to integrate new facts from SQuAD articles, SEAL consistently outperformed baseline methods. Training with synthetically generated data from the base Qwen2.5-7B model already showed notable improvements, and subsequent reinforcement learning further boosted performance. Accuracy also improved rapidly over outer RL iterations, often surpassing setups using GPT-4.1-generated data within just two iterations.

Qualitative examples from the paper illustrate how reinforcement learning leads to the generation of more detailed self-edits, resulting in improved performance.

While promising, the researchers also acknowledge some limitations of the SEAL framework, including aspects related to catastrophic forgetting, computational overhead, and context-dependent evaluation. These are discussed in detail in the original paper.

Original Paper: https://arxiv.org/pdf/2506.10943

Project Site: https://jyopari.github.io/posts/seal

Github Repo: https://github.com/Continual-Intelligence/SEAL

🔗 Source: syncedreview.com

