📌 New AI app helps visually impaired users find everyday objects

Penn State researchers have developed a new navigation tool that could reshape digital assistance for people with visual impairments.

The smartphone-based system, called NaviSense, uses artificial intelligence to identify objects in real time and guide users toward them with audio and haptic cues.

The team unveiled the technology at the ACM SIGACCESS ASSETS ’25 conference in Denver, where it won the Best Audience Choice Poster Award.

NaviSense aims to fix long-standing problems in assistive navigation software. Many current tools depend on human support teams. Some rely on object libraries that must be preloaded in advance.

Vijaykrishnan Narayanan, Evan Pugh University Professor and A. Robert Noll Chair Professor of Electrical Engineering, said this limits flexibility.

“Previously, models of objects needed to be preloaded into the service’s memory to be recognized,” Narayanan said.

“This is highly inefficient and gives users much less flexibility when using these tools.” He said the team turned to AI to break this bottleneck.

The app connects to an external server running large language models (LLMs) and vision-language models (VLMs).

These systems allow NaviSense to interpret voice prompts, scan the surroundings, and identify targets without relying on static object databases.

“Using VLMs and LLMs, NaviSense can recognize objects in its environment in real-time based on voice commands, without needing to preload models of objects,” Narayanan said. “This is a major milestone for this technology.”
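The team has not published NaviSense's client-server protocol, but the flow described above (a phone sending a transcribed voice request plus a camera frame to a remote VLM and getting detections back) can be sketched roughly as follows. The endpoint URL, payload fields, and response shape are illustrative assumptions, not the actual API:

```python
import base64
import requests

# Hypothetical endpoint: NaviSense's real server API is not public.
SERVER_URL = "https://example.com/navisense/locate"

def locate_object(image_path: str, voice_command: str) -> dict:
    """Send one camera frame plus the transcribed voice command to a
    remote vision-language model and return candidate detections."""
    with open(image_path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        # The LLM parses the free-form request into a target object,
        # so no object library has to be preloaded on the phone.
        "command": voice_command,
        "frame": frame_b64,
    }
    response = requests.post(SERVER_URL, json=payload, timeout=10)
    response.raise_for_status()
    # Assumed response shape: {"label": str, "bbox": [x, y, w, h]}
    return response.json()

# Usage: locate_object("kitchen.jpg", "find my coffee mug")
```

A general trade-off of this server-side design is that the heavy models can improve without on-device updates, at the cost of requiring connectivity.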

Built with user input

The team shaped the app after extensive interviews with visually impaired participants.

Ajay Narayanan Sridhar, a computer engineering doctoral student and lead student investigator, said these sessions helped map out real-world needs.

“These interviews gave us a good sense of the actual challenges visually impaired people face,” Sridhar said.

NaviSense listens for a user’s spoken request, searches the space, and filters out irrelevant objects. When the system needs clarification, it asks follow-up questions.

The conversational feedback offers flexibility that many existing tools struggle to provide.
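As a rough illustration of that filter-then-clarify behavior (the detection format, wording, and matching logic here are hypothetical, not taken from the app):

```python
def speak(message: str) -> None:
    # Stand-in for the app's text-to-speech output.
    print(f"[TTS] {message}")

def resolve_target(requested: str, detections: list[dict]) -> dict | None:
    """Keep only detections matching the spoken request, and ask a
    follow-up question instead of guessing when the result is ambiguous."""
    matches = [d for d in detections if d["label"] == requested]
    if len(matches) == 1:
        return matches[0]  # unambiguous: start guiding the user
    if not matches:
        speak(f"I don't see a {requested} yet. Should I keep scanning?")
    else:
        speak(f"I see {len(matches)} of those. Which one do you mean?")
    return None  # wait for the user's answer before acting
```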

One of the app’s standout features is hand guidance. The system tracks the user’s hand by monitoring the movement of the phone they hold, then gives directional audio and haptic cues that help them reach the object.

Sridhar said this capability filled a major gap. “There was really no off-the-shelf solution that actively guided users’ hands to objects, but this feature was continually requested in our survey,” he said.
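The report doesn't detail the guidance math, but a minimal version of such directional cues can be computed from where the detected object sits relative to the center of the camera frame. This sketch assumes a pixel bounding box and mirrors the left/right, up/down, and "bullseye" cues a participant describes below:

```python
def direction_cue(bbox: tuple[float, float, float, float],
                  frame_w: int, frame_h: int,
                  tolerance: float = 0.1) -> str:
    """Turn an object's position in the camera frame into a spoken cue.
    bbox is (x, y, w, h) in pixels; tolerance is the 'centered' dead
    zone as a fraction of the frame size."""
    cx = bbox[0] + bbox[2] / 2          # object center, x
    cy = bbox[1] + bbox[3] / 2          # object center, y
    dx = (cx - frame_w / 2) / frame_w   # -0.5..0.5, positive = right
    dy = (cy - frame_h / 2) / frame_h   # positive = down

    cues = []
    if dx > tolerance:
        cues.append("right")
    elif dx < -tolerance:
        cues.append("left")
    if dy > tolerance:
        cues.append("down")
    elif dy < -tolerance:
        cues.append("up")
    return " and ".join(cues) if cues else "bullseye"

# direction_cue((900, 200, 100, 100), 1080, 1920) -> "right and up"
```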

Strong early performance

The team tested NaviSense with 12 participants in a controlled environment. Users compared it with two commercial options.

Researchers measured how quickly each tool identified objects and how accurately it guided users.

NaviSense reduced search time and provided more precise detection. Participants also reported a better overall experience.

One user wrote, “I like the fact that it is giving you cues to the location of where the object is, whether it is left or right, up or down, and then bullseye, boom, you got it.”

The team is now working to reduce power consumption and improve model efficiency. Narayanan said the tool is nearing commercial readiness.

“This technology is quite close to commercial release, and we’re working to make it even more accessible,” he said.

The project received support from the U.S. National Science Foundation.

🔗 Source: interestingengineering.com


📌 Disney debuts its most lifelike Olaf robot yet

Olaf just stepped out of the screen and into real life.

Disneyland Paris unveiled a next-generation robotic Olaf this week, marking one of Disney Imagineering’s most ambitious technological leaps as the animated snowman takes form as a fully expressive, physical character.

The debut was led by Bruce Vaughn, President and Chief Creative Officer of Walt Disney Imagineering (WDI), and Natacha Rafalski, President of Disneyland Paris.

The moment signals a new chapter where advances in robotics, AI, and simulation fuse with Disney’s storytelling tradition to bring iconic characters into the real world.

Olaf’s appearance builds on the newest episode of WDI’s R&D showcase series, We Call It Imagineering, which dives deep into the technologies reshaping future Disney experiences.

It also highlights years of behind-the-scenes collaboration between engineers, animators, and AI researchers, working to build characters that feel as alive as their animated counterparts.

At the heart of it all is a simple idea: make the technology disappear and let the emotional performance shine.

Animating real motion

Kyle Laughlin, SVP of Walt Disney Imagineering Research & Development, described the approach: “Like everything at Disney, we always start with the story. We think about how we want the guest to feel.”

That philosophy guided the transformation of Olaf from a digital creation into a physical character capable of eye contact, stylized movement, and conversation.

Every gesture, and even his snow-like shimmer, was crafted to match what audiences know from the films. Iridescent fibers capture light like real snow, while a deformable “snow” costume lets Olaf move in ways robotic shells typically can’t.

But unlike the BDX droids from Star Wars, which already roam Disney parks, Olaf demanded a different level of motion realism.

As Laughlin noted, “A key technology in our platform is deep reinforcement learning that enables robotic characters to learn to imitate artist-provided motion in simulation.”

This marriage of art and AI allows engineers to iterate quickly, refining gait, style, and personality until Olaf moves exactly as imagined by animators.
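The article doesn't give Disney's actual training objective, but the motion-imitation setup Laughlin describes is well established in character-animation research. A common pose-matching reward, in the style of DeepMimic, looks like this (the scale factor and pose representation are illustrative):

```python
import numpy as np

def imitation_reward(sim_pose: np.ndarray, ref_pose: np.ndarray,
                     scale: float = 2.0) -> float:
    """Reward the simulated character for matching the artist-authored
    reference pose at the same point in the motion clip: 1.0 means a
    perfect match, and the reward decays as joint angles drift."""
    error = np.sum((sim_pose - ref_pose) ** 2)  # squared joint-angle error
    return float(np.exp(-scale * error))
```

A policy trained to maximize this reward over a clip learns a physically valid version of the animated motion, which is how stylized, hand-animated gaits survive the transfer to a real robot.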

AI powers the magic

To scale that process, WDI has been developing Newton, an open-source simulation framework built with NVIDIA and Google DeepMind.

Laughlin described it as a system where “building blocks enable the rapid development of GPU-accelerated simulators.”

One key component, a simulator called Kamino, boosts the speed at which robots learn. With it, characters like Olaf can master complex motion—walking, gesturing, interacting—in dramatically less time.

These breakthroughs help translate animated, often physically impossible movements into convincing real-world performances.
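Neither Newton nor Kamino is detailed in the article, so the following is only a toy illustration of the underlying idea: stepping thousands of environment copies as one batched array operation (on a GPU, in the real systems) instead of one environment at a time is what compresses training time. Every name and number here is invented:

```python
import numpy as np

N_ENVS, STATE_DIM, ACTION_DIM = 4096, 32, 12

# One row per simulated environment; a GPU framework would keep these
# arrays on the device and fuse the physics step into a few kernels.
states = np.zeros((N_ENVS, STATE_DIM), dtype=np.float32)
policy = (0.01 * np.random.randn(STATE_DIM, ACTION_DIM)).astype(np.float32)

def step_batch(states: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Advance all environments one tick in a single vectorized call
    (a stand-in for a real physics update)."""
    states[:, :ACTION_DIM] += 0.01 * actions
    return states

for _ in range(100):
    actions = np.tanh(states @ policy)    # batched policy evaluation
    states = step_batch(states, actions)  # 4,096 rollouts advance at once
```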

Olaf’s fully articulating mouth, expressive eyes, removable carrot nose, and conversational abilities are supported by these AI-trained motion layers.

And the process keeps accelerating. “What’s so exciting is that we’re just getting started,” Laughlin said. The rapid evolution from the BDX droids to self-balancing H.E.R.B.I.E. and now Olaf shows how quickly Disney can now prototype and deploy new characters.

Soon, Olaf will meet guests at the upcoming Arendelle Bay Show in World of Frozen at Disneyland Paris, as well as in limited-time appearances at Hong Kong Disneyland’s World of Frozen.

The technological deep dive behind Olaf’s creation appears in the latest episode of We Call It Imagineering, released alongside the announcement in Nature Machine Intelligence.

🔗 Source: interestingengineering.com

