📌 TOPINDIATOURS Breaking ai: Why Trump’s Greenland ambitions won’t quickly solve America’s rare earth problem

President Donald Trump has once again raised the prospect of the United States taking control of Greenland, arguing that the Arctic island could help break China’s dominance over rare earth elements critical to modern technologies. But experts say Greenland’s harsh environment, limited infrastructure, and complex geology make mining those materials extraordinarily difficult, regardless of who controls the territory.

Rare earth elements are essential for products ranging from electric vehicles and wind turbines to robots, advanced electronics, and military hardware. Trump has repeatedly framed Greenland as a potential solution to U.S. supply chain vulnerabilities, even suggesting military action if Denmark, which oversees the semi-autonomous territory, refuses to sell it. 

However, industry analysts and geologists caution that Greenland is far from a ready answer to America’s rare earth ambitions.

Trump’s renewed focus on Greenland

Trump’s comments come amid heightened concern in Washington over China’s grip on the rare earth supply chain. After the U.S. imposed broad tariffs last spring, China sharply restricted who could buy these materials, prompting the Trump administration to invest hundreds of millions of dollars in alternative supply efforts and take stakes in several companies.

According to the Associated Press, Trump has increasingly linked these concerns to Greenland, which holds significant but largely untapped deposits of critical raw materials. “We are going to do something on Greenland whether they like it or not,” Trump said on Friday.

The president has also framed Greenland as a geopolitical necessity, arguing that U.S. control is needed to prevent China and Russia from expanding their influence in the Arctic. “We don’t want Russia or China going to Greenland,” Trump said, adding that allowing either power to establish a foothold there “is not going to happen.”

Trump’s interest in Greenland is not new. In 2019, he described acquiring the island as “essentially a real estate deal,” while in 1946, President Harry Truman unsuccessfully offered Denmark $100 million in gold for Greenland following World War II. Trump’s latest remarks come on the heels of U.S. involvement in the capture of Venezuelan leader Nicolás Maduro, fueling concern in Europe that rhetoric around Greenland could escalate further.

Why mining rare earths in Greenland is so hard

Despite Greenland’s estimated 1.5 million tons of rare earths locked in rock, no commercial mine has yet been built. A detailed Associated Press report by journalists Josh Funk and Suman Naishadham notes that most projects have stalled at the exploratory stage, held back by geography, cost, and environmental risks.

“The main challenge is, of course, the remoteness,” said Diogo Rosa, an economic geology researcher at the Geological Survey of Denmark and Greenland. Even in southern Greenland, where population centers exist, there are few roads and no railways, meaning mining companies would need to build basic infrastructure from scratch. Power generation would also have to be created locally, while skilled workers would need to be flown in.

Environmental concerns further complicate the picture. In the report, Patrick Schröder, a senior fellow at Chatham House, warned that rare-earth mining often involves the use of toxic chemicals to separate minerals from rock, posing serious risks to Greenland’s fragile Arctic ecosystem.

Rare-earth deposits are also frequently found alongside radioactive uranium, heightening environmental and political sensitivities as Greenland seeks to grow its tourism industry.

A risky bet compared with other supply options

Even outside Greenland, rare earth mining is notoriously difficult to make profitable. Prices are highly volatile, and China has historically flooded the market with cheaper material to drive competitors out of business. Today, most rare earths are still processed in China, limiting the impact of new mining projects elsewhere.

“The fixation on Greenland has always been more about geopolitical posturing than a realistic supply solution,” said Tracy Hughes, founder of the Critical Minerals Institute, adding that the economic and scientific challenges are often underestimated.

Industry experts argue the U.S. would be better served by expanding production at existing projects in more accessible locations. The U.S. government has already invested in MP Materials, which operates the country’s only rare-earth mine, as well as in companies involved in lithium mining and in recycling rare-earth-containing products.

While Greenland remains strategically important, analysts say its role in solving America’s rare earth problem is likely to remain limited for years, if it materializes at all.

🔗 Source: interestingengineering.com


📌 TOPINDIATOURS Hot ai: Nous Research's NousCoder-14B is an open-source coding model

Nous Research, the open-source artificial intelligence startup backed by crypto venture firm Paradigm, released a new competitive programming model on Monday that it says matches or exceeds several larger proprietary systems — trained in just four days using 48 of Nvidia's latest B200 graphics processors.

The model, called NousCoder-14B, is another entry in a crowded field of AI coding assistants, but arrives at a particularly charged moment: Claude Code, the agentic programming tool from rival Anthropic, has dominated social media discussion since New Year's Day, with developers posting breathless testimonials about its capabilities. The simultaneous developments underscore how quickly AI-assisted software development is evolving — and how fiercely companies large and small are competing to capture what many believe will become a foundational technology for how software gets written.


NousCoder-14B achieves a 67.87 percent accuracy rate on LiveCodeBench v6, a standardized evaluation that tests models on competitive programming problems published between August 2024 and May 2025. That figure represents a 7.08 percentage point improvement over the base model it was trained from, Alibaba's Qwen3-14B, according to Nous Research's technical report published alongside the release.

"I gave Claude Code a description of the problem, it generated what we built last year in an hour," wrote Jaana Dogan, a principal engineer at Google responsible for the Gemini API, in a viral post on X last week that captured the prevailing mood around AI coding tools. Dogan was describing a distributed agent orchestration system her team had spent a year developing — a system Claude Code approximated from a three-paragraph prompt.

The juxtaposition is instructive: while Anthropic's Claude Code has captured imaginations with demonstrations of end-to-end software development, Nous Research is betting that open-source alternatives trained on verifiable problems can close the gap — and that transparency in how these models are built matters as much as raw capability.


How Nous Research built an AI coding model that anyone can replicate

What distinguishes the NousCoder-14B release from many competitor announcements is its radical openness. Nous Research published not just the model weights but the complete reinforcement learning environment, benchmark suite, and training harness — built on the company's Atropos framework — enabling any researcher with sufficient compute to reproduce or extend the work.

"Open-sourcing the Atropos stack provides the necessary infrastructure for reproducible olympiad-level reasoning research," noted one observer on X, summarizing the significance for the academic and open-source communities.

The model was trained by Joe Li, a researcher in residence at Nous Research and a former competitive programmer himself. Li's technical report reveals an unexpectedly personal dimension: he compared the model's improvement trajectory to his own journey on Codeforces, the competitive programming platform where participants earn ratings based on contest performance.

Based on rough estimates mapping LiveCodeBench scores to Codeforces ratings, Li calculated that NousCoder-14B's improvement — from approximately the 1600-1750 rating range to 2100-2200 — mirrors a leap that took him nearly two years of sustained practice between ages 14 and 16. The model accomplished the equivalent in four days.

"Watching that final training run unfold was quite a surreal experience," Li wrote in the technical report.

But Li was quick to note an important caveat that speaks to broader questions about AI efficiency: he solved roughly 1,000 problems during those two years, while the model required 24,000. Humans, at least for now, remain dramatically more sample-efficient learners.


Inside the reinforcement learning system that trains on 24,000 competitive programming problems

NousCoder-14B's training process offers a window into the increasingly sophisticated techniques researchers use to improve AI reasoning capabilities through reinforcement learning.

The approach relies on what researchers call "verifiable rewards" — a system where the model generates code solutions, those solutions are executed against test cases, and the model receives a simple binary signal: correct or incorrect. This feedback loop, while conceptually straightforward, requires significant infrastructure to execute at scale.
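The loop described above can be sketched in a few lines of Python. The `binary_reward` helper and the toy problem below are illustrative stand-ins under the article's description, not code from the Nous release:

```python
import subprocess
import sys

def binary_reward(solution_code: str, test_cases: list[tuple[str, str]]) -> float:
    """Run candidate code against test cases; reward is 1.0 only if all pass."""
    for stdin_text, expected_stdout in test_cases:
        try:
            result = subprocess.run(
                [sys.executable, "-c", solution_code],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=15,  # per-solution time limit cited in the report
            )
        except subprocess.TimeoutExpired:
            return 0.0  # a timeout counts as an incorrect solution
        if result.stdout.strip() != expected_stdout.strip():
            return 0.0  # any wrong output fails the whole problem
    return 1.0  # correct on every test case

# A toy problem: print the sum of two integers read from stdin.
candidate = "a, b = map(int, input().split()); print(a + b)"
tests = [("2 3", "5"), ("10 -4", "6")]
print(binary_reward(candidate, tests))  # → 1.0
```

The binary signal is deliberately coarse: partial credit on some test cases still yields zero reward, which is what makes the reward verifiable rather than judged.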

Nous Research used Modal, a cloud computing platform, to run sandboxed code execution in parallel. Each of the 24,000 training problems contains hundreds of test cases on average, and the system must verify that generated code produces correct outputs within time and memory constraints — 15 seconds and 4 gigabytes, respectively.
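On a Unix host, the per-solution limits cited above (15 seconds, 4 gigabytes) could be enforced roughly as follows; `run_limited` is a hypothetical helper for illustration, not Modal's actual sandbox API:

```python
import resource
import subprocess
import sys

MEM_LIMIT_BYTES = 4 * 1024**3  # 4 GB, per the report
TIME_LIMIT_SEC = 15

def limit_memory():
    # Runs in the child process just before exec; Unix-only.
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_BYTES, MEM_LIMIT_BYTES))

def run_limited(code: str, stdin_text: str) -> str:
    """Execute untrusted code with a wall-clock and address-space cap."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=TIME_LIMIT_SEC,       # enforced by the parent
        preexec_fn=limit_memory,      # enforced by the kernel in the child
    )
    return result.stdout

print(run_limited("print(input()[::-1])", "hello"))  # reversed input
```

A production sandbox would add filesystem and network isolation on top of these limits; the point here is only how the two constraints from the report map onto OS primitives.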

The training employed a technique called DAPO (Dynamic Sampling Policy Optimization), which the researchers found performed slightly better than alternatives in their experiments. A key innovation involves "dynamic sampling" — discarding training examples where the model either solves all attempts or fails all attempts, since these provide no useful gradient signal for learning.
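The dynamic-sampling filter might look like the sketch below, where each problem maps to the binary rewards of its rollouts; the function name and batch layout are assumptions for illustration:

```python
def dynamic_sample_filter(groups: dict[str, list[float]]) -> dict[str, list[float]]:
    """Keep only problems whose rollout rewards are mixed.

    A group where every attempt succeeded (all 1.0) or every attempt
    failed (all 0.0) yields zero relative advantage between rollouts,
    so it contributes no gradient signal and is dropped from the batch.
    """
    return {
        problem_id: rewards
        for problem_id, rewards in groups.items()
        if 0.0 < sum(rewards) / len(rewards) < 1.0
    }

batch = {
    "p1": [1.0, 1.0, 1.0, 1.0],  # solved every time: no signal
    "p2": [0.0, 0.0, 0.0, 0.0],  # failed every time: no signal
    "p3": [1.0, 0.0, 1.0, 0.0],  # mixed outcomes: useful signal
}
print(list(dynamic_sample_filter(batch)))  # → ['p3']
```

In practice this means the trainer keeps sampling until a batch is full of informative groups, spending compute only where the model is neither hopeless nor already perfect.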

The researchers also adopted "iterative context extension," first training the model with a 32,000-token context window before expanding to 40,000 tokens. During evaluation, extending the context further to approximately 80,000 tokens produced the best results, with accuracy reaching 67.87 percent.
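The staged schedule could be captured in a simple config; only the token counts come from the report, while the phase names are illustrative:

```python
# Sketch of the iterative context-extension schedule described above.
CONTEXT_SCHEDULE = [
    {"phase": "rl-stage-1", "max_context_tokens": 32_000},  # initial training window
    {"phase": "rl-stage-2", "max_context_tokens": 40_000},  # extended training window
    {"phase": "evaluation", "max_context_tokens": 80_000},  # best accuracy: 67.87%
]

for stage in CONTEXT_SCHEDULE:
    print(f"{stage['phase']}: {stage['max_context_tokens']:,} tokens")
```

Starting small and extending the window lets the model learn short, dense reasoning first before being given room for longer chains at evaluation time.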

Perhaps most significantly, the training pipeline overlaps inference and verification — as soon as the model generates a solution, it begins work on the next problem while the previous solution is being checked. This pipelining, combined with asynchronous training where multiple model instances work in parallel, maximizes hardware utilization on expensive GPU clusters.
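A minimal sketch of that overlap, using a thread pool and sleep-based stand-ins for inference and verification (both functions are placeholders, not the actual training harness):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def generate(problem: str) -> str:
    time.sleep(0.05)  # stand-in for model inference
    return f"solution-for-{problem}"

def verify(solution: str) -> bool:
    time.sleep(0.05)  # stand-in for sandboxed test execution
    return True

problems = [f"p{i}" for i in range(4)]

# Sequential baseline: generate, wait for verification, repeat.
start = time.perf_counter()
for p in problems:
    verify(generate(p))
sequential = time.perf_counter() - start

# Pipelined: hand each solution to a verifier thread and immediately
# start generating the next problem's solution.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(verify, generate(p)) for p in problems]
    results = [f.result() for f in futures]
pipelined = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, pipelined: {pipelined:.2f}s")
```

Because verification runs while the next generation is already underway, the pipelined loop finishes in roughly the generation time plus one verification, rather than the sum of both at every step.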


The looming data shortage that could slow AI coding model progress

Buried in Li's technical report…

[Content shortened automatically.]

🔗 Source: venturebeat.com


🤖 TOPINDIATOURS Note

This article is an automated summary compiled from several trusted sources, covering trending topics so you can stay up to date.
