Nvidia CEO explains how new AI models could work on future smart glasses

Tech gadgets – whether they’re phones, robots or autonomous vehicles – are getting better at understanding the world around us, thanks to AI. That message rang out loud and clear in 2024 and became even stronger at CES 2025, where chipmaker Nvidia unveiled a new AI model for understanding the physical world and a family of large language models to power future AI agents.

Nvidia CEO Jensen Huang positioned these world foundation models as ideal for robots and autonomous vehicles. But there’s another class of devices that could benefit from a better understanding of the real world: smart glasses. Tech-enabled glasses like Meta’s Ray-Bans are quickly becoming the hot new AI gadget, with shipments of Meta’s glasses passing the 1 million mark in November, according to Counterpoint Research.

Such devices seem like the perfect vessel for AI agents or assistants that can understand the world around you, using cameras and processing speech and visual input to help you get things done instead of just answering questions.

Huang did not say whether Nvidia-powered smart glasses are on the horizon. But he explained how the company’s new models could power future smart glasses if partners embrace the technology for that purpose.

“The use of AI as it connects to wearables and virtual presence technology like glasses is all super exciting,” Huang said when asked if his models would work with smart glasses during a press Q&A at CES.

Read more: Smart glasses will work this time, Google’s Android president told CNET

Watch this: These new smart glasses want to be your next AI companion

Huang pointed to cloud processing as one option, meaning requests that use Nvidia’s Cosmos model would be processed in the cloud rather than on the device itself. Compact devices such as smartphones often take this approach to ease the processing load of demanding AI models. If a device maker wants to create glasses that run Nvidia’s AI models on-device instead of relying on the cloud, Huang said Cosmos can distill its knowledge into a smaller model that’s less generalized and optimized for specific tasks.
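
To illustrate what that distillation step generally looks like, here’s a minimal sketch of knowledge distillation in PyTorch. This is a generic example, not Nvidia’s Cosmos tooling: the teacher and student networks, the temperature and the random training data are all placeholder assumptions.

```python
# Generic knowledge-distillation sketch -- not Nvidia's Cosmos API.
# The "teacher" stands in for a large cloud-hosted model; the "student"
# for a compact on-device model. Sizes, temperature and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins: a large generalist teacher, a small task-specific student.
teacher = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution

teacher.eval()
for step in range(100):
    x = torch.randn(32, 64)  # placeholder inputs; real training would use task data
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # Train the student to match the teacher's softened output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The result is a smaller, cheaper model that keeps much of the larger model’s behavior on a narrow task, which is what makes on-device use on power-constrained hardware like glasses plausible.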

Nvidia’s new Cosmos model is being touted as a platform for collecting data about the physical world to train models for robots and self-driving cars – much like how a large language model learns to generate text responses after being trained on written media.

“The ChatGPT moment for robotics is coming,” Huang said in a press release.

Nvidia also announced a set of new AI models built with Meta’s Llama technology, called Llama Nemotron, designed to accelerate the development of AI agents. But it’s interesting to think about how these AI tools and models could potentially be applied to smart glasses as well.

A recent Nvidia patent filing fueled speculation about upcoming smart glasses, even though the chipmaker hasn’t made any announcements about future products in the space. But Nvidia’s new models and Huang’s comments come as Google, Samsung and Qualcomm announced last month that they’re building a new mixed reality platform for smart glasses and headsets called Android XR, hinting that smart glasses may soon become more prominent.

Several new smart glasses were also shown at CES 2025, such as the RayNeo X3 Pro and the Halliday smart glasses. The International Data Corporation also predicted in September that shipments of smart glasses would grow 73.1% in 2024. Nvidia’s moves make the chipmaker another company worth watching in the space.



 