📌 TOPINDIATOURS Breaking AI: Google’s ‘Nested Learning’ paradigm could solve AI’s memory problem

Researchers at Google have developed a new AI paradigm aimed at solving one of the biggest limitations in today’s large language models: their inability to learn or update their knowledge after training. The paradigm, called Nested Learning, reframes a model and its training not as a single process, but as a system of nested, multi-level optimization problems. The researchers argue that this approach can unlock more expressive learning algorithms, leading to better in-context learning and memory.

To prove their concept, the researchers used Nested Learning to develop a new model, called Hope. Initial experiments show that it has superior performance on language modeling, continual learning, and long-context reasoning tasks, potentially paving the way for efficient AI systems that can adapt to real-world environments.

The memory problem of large language models

Deep learning algorithms largely removed the need for the careful feature engineering and domain expertise that traditional machine learning required: fed vast amounts of data, models could learn the necessary representations on their own. However, this approach brought its own challenges that couldn’t be solved simply by stacking more layers or building larger networks, such as generalizing to new data, continually learning new tasks, and avoiding suboptimal solutions during training.

Efforts to overcome these challenges produced the innovations behind Transformers, the foundation of today's large language models (LLMs). These models have ushered in "a paradigm shift from task-specific models to more general-purpose systems with various emergent capabilities as a result of scaling the 'right' architectures," the researchers write. Still, a fundamental limitation remains: LLMs are largely static after training and can't update their core knowledge or acquire new skills from new interactions.

The only adaptable component of an LLM is its in-context learning ability, which allows it to perform tasks based on information provided in its immediate prompt. This makes current LLMs analogous to a person who can't form new long-term memories. Their knowledge is limited to what they learned during pre-training (the distant past) and what's in their current context window (the immediate present). Once a conversation exceeds the context window, that information is lost forever.

The problem is that today’s transformer-based LLMs have no mechanism for “online” consolidation. Information in the context window never updates the model’s long-term parameters — the weights stored in its feed-forward layers. As a result, the model can’t permanently acquire new knowledge or skills from interactions; anything it learns disappears as soon as the context window rolls over.
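
To make this limitation concrete, here is a minimal sketch. It uses a toy PyTorch module as a stand-in for a frozen network, not any real LLM: serving runs under torch.no_grad(), so nothing in the prompt ever modifies the stored weights.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM's feed-forward block; weights are fixed after training.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
model.eval()

before = [p.clone() for p in model.parameters()]

# "Serving" the model: every prompt is processed without gradient tracking,
# so no matter how much new information the context contains, the
# long-term parameters (the weights) are never updated.
with torch.no_grad():
    for _ in range(100):              # 100 user prompts
        prompt = torch.randn(1, 16)   # stand-in for tokenized context
        _ = model(prompt)

after = list(model.parameters())
print(all(torch.equal(b, a) for b, a in zip(before, after)))  # True: nothing learned
```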

A nested approach to learning

Nested Learning (NL) is designed to allow computational models to learn from data using different levels of abstraction and time-scales, much like the brain. It treats a single machine learning model not as one continuous process, but as a system of interconnected learning problems that are optimized simultaneously at different speeds. This is a departure from the classic view, which treats a model's architecture and its optimization algorithm as two separate components.

Under this paradigm, the training process is viewed as developing an "associative memory," the ability to connect and recall related pieces of information. The model learns to map a data point to its local error, which measures how "surprising" that data point was. Even key architectural components like the attention mechanism in transformers can be seen as simple associative memory modules that learn mappings between tokens. By defining an update frequency for each component, these nested optimization problems can be ordered into different "levels," forming the core of the NL paradigm.
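
Google's formal treatment is not reproduced in the article, but the core idea of a single model whose components are optimized at different update frequencies can be illustrated with a toy sketch. The fast/slow split, the learning rates, and the linear-regression task below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = x @ w_true + noise.
w_true = rng.normal(size=8)
X = rng.normal(size=(512, 8))
y = X @ w_true + 0.01 * rng.normal(size=512)

# Two "levels": a fast component updated every step and a slow component
# updated every SLOW_PERIOD steps from accumulated gradients. The update
# frequency is what defines the level, mirroring the NL framing.
w_fast = np.zeros(8)
w_slow = np.zeros(8)
slow_grad_acc = np.zeros(8)
SLOW_PERIOD, LR_FAST, LR_SLOW = 16, 0.05, 0.01

for step in range(512):
    x, target = X[step], y[step]
    pred = x @ (w_fast + w_slow)           # both levels contribute to the prediction
    err = pred - target                    # local "surprise" for this data point
    grad = err * x
    w_fast -= LR_FAST * grad               # inner level: adapts immediately
    slow_grad_acc += grad
    if (step + 1) % SLOW_PERIOD == 0:      # outer level: consolidates slowly
        w_slow -= LR_SLOW * slow_grad_acc / SLOW_PERIOD
        slow_grad_acc[:] = 0.0

print(np.abs((w_fast + w_slow) - w_true).mean())
```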

Hope for continual learning

The researchers put these principles into practice with Hope, an architecture designed to embody Nested Learning. Hope is a modified version of Titans, an architecture Google introduced in January to address the transformer model's memory limitations. While Titans had a powerful memory system, its parameters were updated at only two speeds, via a long-term memory module and a short-term memory mechanism.

Hope is a self-modifying architecture augmented with a "Continuum Memory System" (CMS) that enables unbounded levels of in-context learning and scales to larger context windows. The CMS acts like a series of memory banks, each updating at a different frequency. Faster-updating banks handle immediate information, while slower ones consolidate more abstract knowledge over longer periods. This allows the model to optimize its own memory in a self-referential loop, creating an architecture with theoretically infinite learning levels.
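
Hope's implementation is not included in the source article, so the following is only an illustrative sketch of the memory-bank idea: several banks, each refreshed at its own period, with slower banks consolidating summaries of faster ones. The class name, the periods, and the moving-average update rule are assumptions made for the example, not the actual CMS.

```python
import numpy as np

class ToyContinuumMemory:
    """Illustrative chain of memory banks, each refreshed at its own frequency.

    Fast banks track the immediate token stream; slower banks only fold in
    a summary of the faster bank every `period` steps, consolidating more
    abstract, longer-lived information.
    """

    def __init__(self, dim, periods=(1, 8, 64)):
        self.periods = periods
        self.banks = [np.zeros(dim) for _ in periods]
        self.step = 0

    def update(self, token_vec):
        self.step += 1
        signal = token_vec
        for i, period in enumerate(self.periods):
            if self.step % period == 0:
                # Exponential moving average; slower banks update far less often.
                self.banks[i] = 0.9 * self.banks[i] + 0.1 * signal
            signal = self.banks[i]  # each slower bank consolidates the level below

    def read(self):
        # Simple read-out: concatenate all levels for downstream use.
        return np.concatenate(self.banks)

mem = ToyContinuumMemory(dim=4)
rng = np.random.default_rng(1)
for _ in range(200):
    mem.update(rng.normal(size=4))
print(mem.read().shape)  # (12,): one 4-dim slot per level
```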

On a diverse set of language modeling and common-sense reasoning tasks, Hope demonstrated lower perplexity (a measure of how well a model predicts the next word in a sequence; lower is better) and higher accuracy compared to both standard transformers and other modern recurrent models. Hope also performed better on long-context "Needle-in-a-Haystack" tasks, where a model must find and use a specific piece of information hidden within a large volume of text. This suggests its CMS offers a more efficient way to handle long information sequences.
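
For reference, perplexity is the exponential of the average negative log-likelihood the model assigns to the actual next tokens. The short sketch below computes it from per-token probabilities; the probability values are made up for illustration.

```python
import math

def perplexity(token_probs):
    """Perplexity from the model's probabilities for each reference token.

    token_probs[i] is the probability the model assigned to the i-th
    actual next token; lower perplexity means better prediction.
    """
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns higher probability to the observed tokens scores lower.
print(perplexity([0.5, 0.4, 0.6]))   # ~2.03
print(perplexity([0.1, 0.05, 0.2]))  # ~10.0
```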

This is one of several efforts to create AI systems that process information at different levels. The Hierarchical Reasoning Model (HRM) from Sapient Intelligence uses a hierarchical architecture to learn reasoning tasks more efficiently, while Samsung's Tiny Reasoning Model (TRM) improves on HRM with architectural changes that boost performance and efficiency.

While promising, Nested Learning faces some of the same challenges as these other paradigms in realizing its full potential. Current AI hardware and software stacks are heavily optimized for classic deep learning architectures, and for Transformer models in particular, so adopting Nested Learning at scale may require fundamental changes to that stack. If it gains traction, however, it could lead to far more efficient LLMs that can continually learn, a capability crucial for real-world enterprise applications where environments, data, and user needs are in constant flux.

🔗 Source: venturebeat.com


📌 TOPINDIATOURS Exclusive AI: Lithium batteries could last longer in extreme cold with new electrolytes

Researchers have examined how lithium-ion batteries (LIBs) perform in extreme cold, reviewing the low-temperature (LT) electrolytes that keep LIBs charging and discharging under such conditions.

The new work offers a systematic roadmap for next-generation energy-storage systems that thrive in the cold.

It could also help these batteries operate reliably in space environments.

Low-temperature (LT) electrolytes

Researchers from Chang’an University and Queensland University of Technology, led by Professor Limin Geng, Professor Weijia Meng and Dr Jiaye Ye, have published their forward-looking review of low-temperature (LT) electrolytes in a Springer Nature journal.

The work elaborates on innovative optimization approaches encompassing lithium salt molecular design with tailored dissociation characteristics, solvent matrix optimization through dielectric constant and viscosity regulation, interfacial engineering additives for constructing low-impedance solid-electrolyte interphase (SEI) layers, and gel-polymer composite electrolyte systems, according to the study.

The review covers ester-based (methyl acetate, ethyl difluoroacetate), ether-based (DOL/DME, THF, CPME), nitrile-based (fluoroacetonitrile) and gel-polymer systems, detailing how freezing point, dielectric constant, and donor number dictate Li⁺ solvation structure.

The researchers report that machine-learning models trained on more than 150,000 molecular candidates can predict melting point, viscosity and LUMO energy to within 5 K or 0.1 eV, accelerating electrolyte discovery from months to hours.

Machine learning-guided electrolyte formulation strategies

The team also pointed out that high-throughput DFT plus SHAP interpretability identifies dipole moment and molecular radius as key descriptors, delivering non-fluorinated ethers that cycle 300 times at −30 °C with 99% capacity retention. LT gel-polymer electrolytes enable flexible printed-circuit modules that operate at −40 °C, providing a new route for cold-weather in-memory computing and IoT sensors.

The team emphasized emerging machine learning-guided electrolyte formulation strategies that enable high-throughput virtual screening of constituent combinations and prediction of structure–property relationships.
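
The paper's actual models and datasets are not included in the source article, so the sketch below only illustrates the general shape of such a pipeline: fit a regressor on molecular descriptors, rank the descriptors with SHAP, then screen candidates by a predicted property. The descriptors, the synthetic data, the 240 K threshold, and the use of scikit-learn and the shap package are all assumptions made for the example.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a candidate-solvent table; a real screen would use
# computed descriptors (e.g. from DFT) for over a hundred thousand molecules.
n = 2000
X = pd.DataFrame({
    "dipole_moment_D":     rng.uniform(0.5, 5.0, n),
    "molecular_radius_A":  rng.uniform(2.0, 6.0, n),
    "donor_number":        rng.uniform(5.0, 40.0, n),
    "dielectric_constant": rng.uniform(2.0, 90.0, n),
})
# Fake target: melting point (K), loosely driven by size and polarity.
y = 150 + 25 * X["molecular_radius_A"] - 8 * X["dipole_moment_D"] + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP values rank which descriptors drive the predicted melting point.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:22s} mean |SHAP| = {score:.2f}")

# Virtual screening: keep candidates whose predicted melting point stays
# below an (assumed) threshold chosen for low-temperature operation.
predicted_mp = model.predict(X)
hits = X[predicted_mp < 240]
print(f"{len(hits)} of {n} candidates pass the melting-point screen")
```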

These artificial intelligence-assisted rational design frameworks demonstrate significant potential for accelerating the development of next-generation LT electrolytes by establishing quantitative composition-performance correlations through advanced data-driven methodologies, according to the paper.

The work pinpoints the need for standardized LT testing protocols, physics-informed neural networks that couple solvation structure to plating propensity, and automated robotic platforms that translate AI predictions into litre-scale synthesis. Future work will target high-entropy electrolytes, phase-diagram-guided formulations and cryogenic in-situ NMR to close the gap between lab demos and commercial 8 Ah pouch cells, according to a press release.

From Mars rovers to Arctic drones, the team noted, LT electrolytes unlock reliable power where traditional LIBs cease to function.

The team’s work reviews recent advancements in electrolyte engineering strategies aimed at improving the low-temperature operational capabilities of LIBs. The study methodically examines critical performance-limiting mechanisms through fundamental analysis of four primary challenges: insufficient ionic conductivity under cryogenic conditions, kinetically hindered charge transfer processes, Li⁺ transport limitations across the solid-electrolyte interphase (SEI), and uncontrolled lithium dendrite growth.

🔗 Source: interestingengineering.com


🤖 TOPINDIATOURS Note

This article is an automated summary drawn from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random topic awaits!