NASA’s Library Shutdown Scandal Is Ballooning
By March of this year, the Trump administration is hoping to shutter over a dozen buildings and more than 100 labs at NASA’s iconic Goddard Space Flight Center (GSFC) in Maryland, the space agency’s first space flight center and a keystone of space research since shortly after NASA’s founding in the late 1950s.
As part of its plans to gut the center, the administration shut NASA’s largest library, housed at the GSFC, last month, threatening to trash important documents in the process.
The news was met with outrage by NASA insiders, who pointed out that many of these historical documents remain undigitized. The scandal has metastasized into the first major PR challenge for recently sworn-in NASA administrator and billionaire space tourist Jared Isaacman.
While agency officials have maintained that the move was part of a 2022 master plan, a “consolidation, not a closure” per NASA press secretary Bethany Stevens, and that researchers would continue to have access to the library’s archives as NASA teams conduct a 60-day review of all materials, GSFC staffers were taken aback by the abrupt and careless nature of the Trump administration’s actions.
“I feel like crying,” planetary scientist David Williams, who has curated space mission data for NASA’s archives, told NPR this week. “I mean, it’s horrible. I’m so frustrated. I’m so mad, and I’m just so upset.”
According to staffers interviewed by the broadcaster, the closures of buildings at Goddard were “rushed and disorganized, with no clear blueprint to replace important spaces.”
The rollout has been reminiscent of the Trump administration’s vindictive slashing of agency budgets, exemplified by Elon Musk’s bull-in-a-china-shop tenure at the so-called Department of Government Efficiency last year.
NASA’s budget, in particular, has become a major target of the administration, with the proposed fiscal year 2026 budget potentially cutting the agency’s science directorate by more than half, an “extinction-level event,” as critics have characterized it. (Congress has since rejected Trump’s proposed cuts outright as NASA’s budget continues to be hotly debated.)
“The way that they’ve gone about it has just been extraordinarily haphazard and chaotic, and really to the point of being cruel to the people who work in these buildings,” Goddard operations research analyst Monica Gorman told NPR.
The library’s closure is only the latest in a string of what appear to be tumultuously implemented consolidation efforts. Gorman recalls “chaotic disruptions” and lab equipment being “treated carelessly.”
The agency has since attempted to douse the flames, with Isaacman accusing the New York Times of sensationalizing its reporting and mischaracterizing what NASA intends to do with the documents currently stored at the GSFC library.
“The [NYT] story does not fully reflect the context NASA shared,” he tweeted on January 2. “At no point is NASA ‘tossing out’ important scientific or historical materials, and that framing has led to several other misleading headlines.”
But NASA veterans and advisors say the Trump administration’s approach could lead to important documents being lost.
“Libraries and archives around the world are being ‘consolidated’ out of existence,” spaceflight engineering expert Dennis Wingo, who has advised NASA as a subject matter expert for decades, wrote in a reply to Isaacman. “I can tell you for an absolute fact that many of the people making the determinations on what is historically valuable or not are not qualified to do so.”
Lawmakers are equally taken aback by the gutting of the GSFC.
“NASA Goddard really is the crown jewel of the NASA facilities when it comes to space science,” senator Chris Van Hollen (D-MD) told NPR. “This administration is essentially doing things without letting the team at NASA Goddard know what their intentions are.”
More on the situation: NASA Veterans Disgusted by Plans to Shut Down Its Largest Library
Source: futurism.com
Nous Research's NousCoder-14B is an open-source coding model
Nous Research, the open-source artificial intelligence startup backed by crypto venture firm Paradigm, released a new competitive programming model on Monday that it says matches or exceeds several larger proprietary systems, trained in just four days using 48 of Nvidia's latest B200 graphics processors.
The model, called NousCoder-14B, is another entry in a crowded field of AI coding assistants, but arrives at a particularly charged moment: Claude Code, the agentic programming tool from rival Anthropic, has dominated social media discussion since New Year's Day, with developers posting breathless testimonials about its capabilities. The simultaneous developments underscore how quickly AI-assisted software development is evolving, and how fiercely companies large and small are competing to capture what many believe will become a foundational technology for how software gets written.
NousCoder-14B achieves a 67.87 percent accuracy rate on LiveCodeBench v6, a standardized evaluation that tests models on competitive programming problems published between August 2024 and May 2025. That figure represents a 7.08 percentage point improvement over the base model it was trained from, Alibaba's Qwen3-14B, according to Nous Research's technical report published alongside the release.
"I gave Claude Code a description of the problem, it generated what we built last year in an hour," wrote Jaana Dogan, a principal engineer at Google responsible for the Gemini API, in a viral post on X last week that captured the prevailing mood around AI coding tools. Dogan was describing a distributed agent orchestration system her team had spent a year developing โ a system Claude Code approximated from a three-paragraph prompt.
The juxtaposition is instructive: while Anthropic's Claude Code has captured imaginations with demonstrations of end-to-end software development, Nous Research is betting that open-source alternatives trained on verifiable problems can close the gap, and that transparency in how these models are built matters as much as raw capability.
How Nous Research built an AI coding model that anyone can replicate
What distinguishes the NousCoder-14B release from many competitor announcements is its radical openness. Nous Research published not just the model weights but the complete reinforcement learning environment, benchmark suite, and training harness, built on the company's Atropos framework, enabling any researcher with sufficient compute to reproduce or extend the work.
"Open-sourcing the Atropos stack provides the necessary infrastructure for reproducible olympiad-level reasoning research," noted one observer on X, summarizing the significance for the academic and open-source communities.
The model was trained by Joe Li, a researcher in residence at Nous Research and a former competitive programmer himself. Li's technical report reveals an unexpectedly personal dimension: he compared the model's improvement trajectory to his own journey on Codeforces, the competitive programming platform where participants earn ratings based on contest performance.
Based on rough estimates mapping LiveCodeBench scores to Codeforces ratings, Li calculated that NousCoder-14B's improvement, from approximately the 1600-1750 rating range to 2100-2200, mirrors a leap that took him nearly two years of sustained practice between ages 14 and 16. The model accomplished the equivalent in four days.
"Watching that final training run unfold was quite a surreal experience," Li wrote in the technical report.
But Li was quick to note an important caveat that speaks to broader questions about AI efficiency: he solved roughly 1,000 problems during those two years, while the model required 24,000. Humans, at least for now, remain dramatically more sample-efficient learners.
Inside the reinforcement learning system that trains on 24,000 competitive programming problems
NousCoder-14B's training process offers a window into the increasingly sophisticated techniques researchers use to improve AI reasoning capabilities through reinforcement learning.
The approach relies on what researchers call "verifiable rewards," a system where the model generates code solutions, those solutions are executed against test cases, and the model receives a simple binary signal: correct or incorrect. This feedback loop, while conceptually straightforward, requires significant infrastructure to execute at scale.
Nous Research used Modal, a cloud computing platform, to run sandboxed code execution in parallel. Each of the 24,000 training problems contains hundreds of test cases on average, and the system must verify that generated code produces correct outputs within time and memory constraints: 15 seconds and 4 gigabytes, respectively.
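To make the reward loop concrete, here is a minimal local sketch of a verifier: it runs a candidate program against each test case in a subprocess, enforces rough time and memory limits, and returns the binary reward. The function names and structure are illustrative assumptions; Nous Research's actual harness runs sandboxed workers on Modal rather than locally.

```python
import resource
import subprocess

TIME_LIMIT_S = 15            # per-run wall-clock limit, as described in the article
MEMORY_LIMIT_B = 4 * 2**30   # 4 GB address-space cap

def _limit_memory():
    # Runs in the child process just before the solution executes (Unix-only).
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_LIMIT_B, MEMORY_LIMIT_B))

def binary_reward(solution_path: str, test_cases: list[tuple[str, str]]) -> float:
    """Hypothetical verifier: 1.0 if the program passes every test case, else 0.0."""
    for stdin_text, expected_stdout in test_cases:
        try:
            result = subprocess.run(
                ["python", solution_path],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=TIME_LIMIT_S,
                preexec_fn=_limit_memory,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # time limit exceeded
        if result.returncode != 0:
            return 0.0  # runtime error (possibly triggered by the memory cap)
        if result.stdout.strip() != expected_stdout.strip():
            return 0.0  # wrong answer
    return 1.0
```

Because the reward is all-or-nothing per problem, a single failing test case zeroes out the signal, which is what makes the reward "verifiable" rather than judged.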
The training employed a technique called DAPO (Dynamic Sampling Policy Optimization), which the researchers found performed slightly better than alternatives in their experiments. A key innovation involves "dynamic sampling": discarding training examples where the model either solves all attempts or fails all attempts, since these provide no useful gradient signal for learning.
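The filter itself is simple to sketch. In group-based policy optimization, each problem is sampled several times; if every rollout in a group passes, or every rollout fails, the group-relative advantage is zero and the group is dropped. The data layout below is an assumption for illustration, not the Atropos API.

```python
def dynamic_sampling_filter(groups: list[dict]) -> list[dict]:
    """Keep only prompt groups with mixed outcomes (some passes, some failures).

    Each group is assumed to look like {"prompt": str, "rewards": [1.0, 0.0, ...]}.
    All-pass and all-fail groups carry zero group-relative advantage, so they are
    discarded before the policy update.
    """
    kept = []
    for group in groups:
        rewards = group["rewards"]
        if 0.0 < sum(rewards) < len(rewards):  # at least one pass and one fail
            kept.append(group)
    return kept
```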
The researchers also adopted "iterative context extension," first training the model with a 32,000-token context window before expanding to 40,000 tokens. During evaluation, extending the context further to approximately 80,000 tokens produced the best results, with accuracy reaching 67.87 percent.
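Read one way, "iterative context extension" is simply a staged cap on generation length: train at the shorter window first, then raise the limit and continue. The step counts and trainer interface below are assumptions for illustration; the report only specifies the 32,000- and 40,000-token training windows and the roughly 80,000-token evaluation setting.

```python
# Hypothetical staged schedule: step counts and the trainer API are illustrative.
CONTEXT_SCHEDULE = [
    {"max_tokens": 32_000, "train_steps": 300},
    {"max_tokens": 40_000, "train_steps": 200},
]

def run_staged_training(trainer, schedule=CONTEXT_SCHEDULE):
    for stage in schedule:
        trainer.set_max_generation_length(stage["max_tokens"])  # assumed trainer method
        for _ in range(stage["train_steps"]):
            trainer.step()
```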
Perhaps most significantly, the training pipeline overlaps inference and verification: as soon as the model generates a solution, it begins work on the next problem while the previous solution is being checked. This pipelining, combined with asynchronous training where multiple model instances work in parallel, maximizes hardware utilization on expensive GPU clusters.
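A single-process sketch of that overlap uses a thread pool: verification of solution i runs as a background job while generation moves on to problem i+1. This is an assumed simplification of the idea, not the distributed Atropos pipeline; generate_solution and verify_solution stand in for the real components.

```python
from concurrent.futures import ThreadPoolExecutor

def pipelined_rollout(problems, generate_solution, verify_solution, max_workers=8):
    """Overlap GPU-bound generation with CPU/sandbox-bound verification."""
    rewards = [None] * len(problems)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        pending = []
        for i, problem in enumerate(problems):
            solution = generate_solution(problem)               # generate solution i
            pending.append((i, pool.submit(verify_solution, solution, problem)))
            # generation of problem i+1 proceeds while verification of i runs
        for i, future in pending:
            rewards[i] = future.result()                        # collect binary rewards
    return rewards
```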
The looming data shortage that could slow AI coding model progress
Buried in Li's report (https://nousresearch.com/nouscoder-14b-a-co…) …
[Content truncated automatically.]
Source: venturebeat.com
TOPINDIATOURS note
This article is an automated summary compiled from several trusted sources. We pick trending topics so you can stay up to date without missing anything.