TOPINDIATOURS Update ai: You’ll Snort-Laugh When You Learn How Much AI Actually Added to the US Economy Last Year

📌 TOPINDIATOURS Exclusive ai: You’ll Snort-Laugh When You Learn How Much AI Actually Added to the US Economy Last Year

Scanning the headlines, it can be easy to get the impression that every investor, banker, and financial analyst is enamored with AI. Yet this simplified view obscures a more complicated story: the US economy isn’t where tech companies say it is.

By and large, businesses have gone bonkers over automation, lavishing $410 billion on AI in 2025 alone. To them, it’s a productivity miracle. AI should obviously make everybody work faster, reducing the need for human labor as it takes fewer staff to do more, saving companies gobs of cash in the long run.

At least, that’s the narrative in the corporate world. In banking, however, Goldman Sachs is spinning another yarn. After months of carefully worded warnings about the dangers of over-investing in AI, Goldman has dramatically escalated its rhetoric: the bank’s analysts now claim that AI has had zero impact on US economic growth over 2025.

The disconnect between AI investment and growth comes down to two structural issues. The first is geographic: when US companies buy chips from Taiwan, for example, that money boosts Taiwan’s economy, not the US. The second is productivity: AI might make some workers faster, sure, but that speed doesn’t automatically make supply chains more efficient; so far, those productivity gains are largely trapped inside company walls.

This pushback on AI’s economic impact marks a sharp break from even the most cynical analyses of 2025, in which doomers still credited the technology with single-handedly keeping US GDP growth afloat. Though the market more broadly has yet to see things Goldman’s way (investors are projected to spend $660 billion on AI across 2026), a growing number of analysts are starting to cry foul.

Dario Perkins, head of macroeconomics at consulting firm TS Lombard, agrees that AI’s effects on productivity are nonexistent, even as massive layoffs have the workforce reeling. He was recently quoted in the Financial Times, arguing that “there is no evidence that AI deployment is either boosting productivity or damaging US employment.”

“While US productivity has been strong and hiring weak, our analysis finds that cyclical forces, not automation, are to blame,” Perkins concluded.

Meanwhile, Brian Peters, a former bank regulator at the New York Fed, recently wrote that, while AI’s “capabilities are extraordinary” and the “capital deployment is unprecedented,” the “near-term economic payoff is, at best, debatable.”

At the National Bureau of Economic Research, economists studying the effects of AI on productivity recently published a working paper identifying a “productivity paradox,” where “perceived productivity gains are larger than measured productivity gains, likely reflecting a delay in revenue realizations.”

The implications of all this are stark. An investment boom measured in the hundreds of billions has, by Goldman’s accounting, generated essentially no measurable economic return for the US. The question facing us now in 2026 is whether $660 billion more of the same will produce anything other than an even bigger AI bubble.

More on AI: Mark Zuckerberg Secretly Training an AI Agent to Do CEO Job

The post You’ll Snort-Laugh When You Learn How Much AI Actually Added to the US Economy Last Year appeared first on Futurism.

🔗 Source: futurism.com


📌 TOPINDIATOURS Exclusive ai: Adobe Research Unlocking Long-Term Memory in Video World Models with State-Space Models

Video world models, which predict future frames conditioned on actions, hold immense promise for artificial intelligence, enabling agents to plan and reason in dynamic environments. Recent advancements, particularly with video diffusion models, have shown impressive capabilities in generating realistic future sequences. However, a significant bottleneck remains: maintaining long-term memory. Current models struggle to remember events and states from far in the past due to the high computational cost associated with processing extended sequences using traditional attention layers. This limits their ability to perform complex tasks requiring sustained understanding of a scene.

A new paper, “Long-Context State-Space Video World Models” by researchers from Stanford University, Princeton University, and Adobe Research, proposes an innovative solution to this challenge. They introduce a novel architecture that leverages State-Space Models (SSMs) to extend temporal memory without sacrificing computational efficiency.

The core problem lies in the quadratic computational complexity of attention mechanisms with respect to sequence length. As the video context grows, the resources required for attention layers explode, making long-term memory impractical for real-world applications. This means that after a certain number of frames, the model effectively “forgets” earlier events, hindering its performance on tasks that demand long-range coherence or reasoning over extended periods.
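To make that scaling concrete, here is a back-of-envelope sketch in Python; the token-per-frame and state-size numbers are illustrative assumptions, not values from the paper. It shows the attention score matrix growing quadratically with context length while an SSM summarizes the past in a fixed-size state, no matter how long the video runs.

```python
# Back-of-envelope scaling comparison (illustrative numbers, not from the paper).
# Attention scores every token pair, so its cost grows quadratically with
# context length; an SSM compresses the past into a fixed-size state.

TOKENS_PER_FRAME = 256   # assumed latent tokens per video frame
SSM_STATE_SIZE = 4096    # assumed size of the carried SSM state (constant)

for num_frames in (16, 64, 256, 1024):
    seq_len = num_frames * TOKENS_PER_FRAME
    attn_entries = seq_len ** 2  # entries in the full attention score matrix
    print(f"{num_frames:5d} frames: attention ~{attn_entries:>16,} score entries, "
          f"SSM state stays at {SSM_STATE_SIZE:,} numbers")
```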

The authors’ key insight is to leverage the inherent strengths of State-Space Models (SSMs) for causal sequence modeling. Unlike previous attempts that retrofitted SSMs for non-causal vision tasks, this work fully exploits their advantages in processing sequences efficiently.

The proposed Long-Context State-Space Video World Model (LSSVWM) incorporates several crucial design choices:

  1. Block-wise SSM Scanning Scheme: This is central to their design. Instead of processing the entire video sequence with a single SSM scan, they employ a block-wise scheme. This strategically trades off some spatial consistency (within a block) for significantly extended temporal memory. By breaking down the long sequence into manageable blocks, they can maintain a compressed “state” that carries information across blocks, effectively extending the model’s memory horizon.
  2. Dense Local Attention: To compensate for the potential loss of spatial coherence introduced by the block-wise SSM scanning, the model incorporates dense local attention. This ensures that consecutive frames within and across blocks maintain strong relationships, preserving the fine-grained details and consistency necessary for realistic video generation. This dual approach of global (SSM) and local (attention) processing allows them to achieve both long-term memory and local fidelity; a toy sketch of the carried-state scan follows this list.
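As a rough illustration of the carried-state idea, here is a minimal PyTorch sketch using a toy diagonal linear SSM and made-up dimensions, not the paper's actual layer. The point is only that splitting the scan into blocks leaves a single compressed state flowing across block boundaries, which is what extends the memory horizon:

```python
import torch

def ssm_scan_block(x, h, A, B, C):
    """Run a toy diagonal linear SSM over one block, starting from state h.

    x: (block_len, d_in) inputs; h: (d_state,) state carried in from the past.
    Recurrence: h_t = A * h_{t-1} + B @ x_t ;  y_t = C @ h_t.
    """
    ys = []
    for x_t in x:
        h = A * h + B @ x_t        # diagonal transition keeps the update cheap
        ys.append(C @ h)
    return torch.stack(ys), h

def blockwise_scan(x, block_len, A, B, C):
    """Scan a long sequence block by block, carrying the compressed state h
    across blocks so distant frames can still influence the output."""
    h = torch.zeros(A.shape[0])
    outs = []
    for start in range(0, x.shape[0], block_len):
        y, h = ssm_scan_block(x[start:start + block_len], h, A, B, C)
        outs.append(y)
    return torch.cat(outs)

# Toy usage with assumed sizes: 32 tokens, scanned in blocks of 8.
d_in, d_state, d_out = 16, 32, 16
A = torch.rand(d_state) * 0.9           # stable diagonal dynamics
B = torch.randn(d_state, d_in) * 0.1
C = torch.randn(d_out, d_state) * 0.1
x = torch.randn(32, d_in)
print(blockwise_scan(x, block_len=8, A=A, B=B, C=C).shape)  # torch.Size([32, 16])
```

The design trade-off described above falls out of this structure: within a block the scan is a plain recurrence, while everything older than the current block survives only through the compressed state h, which is why the dense local attention is needed to restore fine-grained spatial consistency.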

The paper also introduces two key training strategies to further improve long-context performance:

  • Diffusion Forcing: This technique encourages the model to generate frames conditioned on a prefix of the input, effectively forcing it to learn to maintain consistency over longer durations. By sometimes sampling no prefix at all and keeping every token noised, training reduces to diffusion forcing, the special case of long-context training where the prefix length is zero. This pushes the model to generate coherent sequences even from minimal initial context (a minimal sketch of this noising scheme follows the list).
  • Frame Local Attention: For faster training and sampling, the authors implemented a “frame local attention” mechanism. It uses FlexAttention to achieve significant speedups compared to a fully causal mask. By grouping frames into chunks (e.g., chunks of 5 with a frame window size of 10), frames within a chunk remain bidirectional while also attending to frames in the previous chunk. This yields an effective receptive field while keeping the computational load manageable (a hedged mask sketch also follows the list).
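First, a minimal sketch of the prefix-conditioned noising described above. The frame shapes, flat noise schedule, and zero-prefix probability here are assumptions for illustration, not the paper's settings; the only property being demonstrated is that an empty prefix leaves every frame noised, recovering diffusion forcing:

```python
import torch

def noise_frames_with_prefix(frames, p_zero_prefix=0.2):
    """Per-frame noising for long-context training (hypothetical helper).

    frames: (T, C, H, W) clean latent frames. A random-length clean prefix is
    kept as conditioning; with probability p_zero_prefix the prefix is empty
    and every frame is noised, which reduces to diffusion forcing.
    """
    T = frames.shape[0]
    prefix_len = 0 if torch.rand(()) < p_zero_prefix else int(torch.randint(1, T, (1,)))
    # Independent noise level per frame (a flat schedule is assumed here).
    t = torch.rand(T).view(T, *([1] * (frames.dim() - 1)))
    t[:prefix_len] = 0.0                    # the conditioning prefix stays clean
    noised = (1 - t) * frames + t * torch.randn_like(frames)
    return noised, t, prefix_len

# Toy usage with assumed shapes: 12 frames of 4x8x8 latents.
frames = torch.randn(12, 4, 8, 8)
noised, t, prefix_len = noise_frames_with_prefix(frames)
print(noised.shape, prefix_len)
```

And here is a hedged sketch of the frame-local mask using PyTorch's FlexAttention. The token layout and chunk arithmetic are assumptions inferred from the description (chunks of 5 frames, bidirectional within the current chunk plus full access to the previous chunk, for an effective 10-frame window); the paper's exact masking may differ:

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask

TOKENS_PER_FRAME = 256  # assumed latent tokens per frame (not stated in the post)
CHUNK_FRAMES = 5        # chunks of 5 frames, per the description above

def frame_local_mask(b, h, q_idx, kv_idx):
    # Map flat token indices to frame indices, then to chunk indices.
    q_chunk = (q_idx // TOKENS_PER_FRAME) // CHUNK_FRAMES
    kv_chunk = (kv_idx // TOKENS_PER_FRAME) // CHUNK_FRAMES
    # Bidirectional within the current chunk, plus the previous chunk:
    # a 2-chunk (10-frame) effective attention window.
    return (kv_chunk == q_chunk) | (kv_chunk == q_chunk - 1)

seq_len = 20 * TOKENS_PER_FRAME  # e.g. a 20-frame clip
block_mask = create_block_mask(frame_local_mask, B=None, H=None,
                               Q_LEN=seq_len, KV_LEN=seq_len, device="cpu")
# The resulting block_mask would then be passed to
# torch.nn.attention.flex_attention.flex_attention(q, k, v, block_mask=block_mask).
```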

The researchers evaluated their LSSVWM on challenging datasets, including Memory Maze and Minecraft, which are specifically designed to test long-term memory capabilities through spatial retrieval and reasoning tasks.

The experiments demonstrate that their approach substantially surpasses baselines in preserving long-range memory. Qualitative results, as shown in supplementary figures (e.g., S1, S2, S3), illustrate that LSSVWM can generate more coherent and accurate sequences over extended periods compared to models relying solely on causal attention or even Mamba2 without frame local attention. For instance, on reasoning tasks for the maze dataset, their model maintains better consistency and accuracy over long horizons. Similarly, for retrieval tasks, LSSVWM shows improved ability to recall and utilize information from distant past frames. Crucially, these improvements are achieved while maintaining practical inference speeds, making the models suitable for interactive applications.

The paper “Long-Context State-Space Video World Models” is available on arXiv.

The post Adobe Research Unlocking Long-Term Memory in Video World Models with State-Space Models first appeared on Synced.

🔗 Source: syncedreview.com


🤖 TOPINDIATOURS Note

This article is an automatically generated summary drawn from several trusted sources. We pick trending topics so you always stay up to date.

✅ Next update in 30 minutes: a random theme awaits!