Samsung’s new Galaxy phones lay the groundwork for upcoming headsets and glasses
Samsung and Google are working on a mixed reality VR headset, similar to the Apple Vision Pro, that runs Android XR and Google Gemini. We already knew this, and even got a demonstration last year. But Samsung also revealed a little more at its phone-focused winter Unpacked event: specifically, a broad partnership between Google and Samsung's AI ecosystem, which could be the missing piece that ties all of this together. This AI-infused experience will arrive on next-gen VR/AR headsets this year, but expect it to also work on Galaxy S25 phones and the glasses that will connect to them.
In a way, I already got a preview of what the future holds at the end of last year.
Samsung's vision for its products is connected through AI, and that AI is now becoming consistent across devices.
Seeing AI that works in real time
Samsung briefly addressed its upcoming VR/AR headsets and glasses at its latest Unpacked event, but we pretty much knew about them already. Still, Samsung's demonstration of real-time AI that can see things on your phone, or through cameras, is exactly the trend I expected to arrive in 2025.
Project Moohan (which means “infinity” in Korean) is a VR headset with pass-through cameras that blend the virtual and the real, similar to the Vision Pro or the Meta Quest 3. The design looks a lot like that of the discontinued Meta Quest Pro, but with much better specs. The headset has hand and eye tracking, runs Android apps through the Android XR operating system that will be fully revealed later this year, and uses Google Gemini AI as an assist layer throughout. Google's Project Astra technology, which enables this real-time assistance on glasses, phones and headsets, is debuting on Samsung's Galaxy S25 series of phones. But I've already seen it in action on my face.
My demos last year let me use Gemini for help while I was looking around a room, watching YouTube videos, or doing something else. The AI had to be started in this live mode first; once running, it could see and hear whatever I was looking at or listening to. There were also pause modes to temporarily stop the live help.
Samsung showed off what appeared to be similar real-time AI features on the Galaxy S25 phones, and more was promised. I expect it will be able to work while watching YouTube videos, similar to my Android XR demo. And according to Samsung and Google executives working on Android XR, it can even be used for live assistance while playing games.
Gemini's on-the-go visual recognition skills may start to feel the same across glasses and phones.
Better battery life and processing…for glasses?
Samsung and Google have also confirmed they're working on smart glasses that use Gemini AI, set to compete with Meta's Ray-Bans and a wave of other emerging glasses. AR glasses are apparently in the works as well.
While Project Moohan is a standalone VR headset with its own battery pack and processors, like Apple’s Vision Pro, the smaller smart glasses that Google and Samsung are working on — and all glasses after that — will rely on connections and processing help from phones to work. That’s how smart glasses like Meta’s Ray-Bans already work.
But more features could mean the need for more intensive phone processing. Live AI could become an increasingly used feature, relying on phones constantly working to assist these glasses. Better processing, better graphics and, most importantly, improved battery life and cooling sounded to me like ways to make these phones better pocket PCs for eventual glasses.
Personal data clouds are what Samsung and Google will rely on to power smarter AI assistants on both glasses and phones.
A set of personal data that these AI gadgets will need
Samsung also announced a vague-sounding Personal Data Engine that Google's and Samsung's AI will take advantage of, gathering personal data into one place where AI can eventually draw richer conclusions about, and connections between, all the things that are part of your life.
How this works, how it's secured and where its limits lie all remained extremely unclear. But it sounds like a repository of personal data that Samsung's and Google's AI can train on and work with across connected products, including watches, rings and glasses.
Camera-enabled wearables are only as good as the data that can help them, which is why so many of these devices currently feel clunky and weird to use, including Meta's Ray-Bans in their AI modes. These AI devices typically stumble when they need to know things your existing apps already know better. Google and Samsung are apparently trying to fix that.
Do I want to trust this process to Google and Samsung, or to anyone else? How will these phones and future glasses make the relationship between AI and our data clearer and more manageable? It feels like we're watching one shoe drop here, with more to come: Google's I/O developer conference will likely discuss Android XR and Gemini's progress in much greater depth.
Samsung is making Project Moohan its first headset, with glasses to follow sometime after that. Expect Google and Samsung to go into more detail at the developer-focused Google I/O conference around May or June, and possibly a full reveal in the summer at Samsung's next anticipated Unpacked event. By then, we may know a lot more about why this seemingly boring new wave of Galaxy S25 phones may be building infrastructure that will play out in clearer detail by the end of the year… or beyond.