Meta Ray-Bans’ New Live AI and Translation, Hands-On: Signs of Upcoming AR Glasses
I activated Meta Ray-Bans’ new Live AI feature and took a morning walk around Manhattan. It was a strange experience. A white LED in the corner of my eye stayed lit as the glasses took in my surroundings. I awkwardly asked questions: about the pigeons, about the construction workers, about whether it knew what car was parked nearby or whose trucks were across the street. I got mixed responses, and sometimes no response at all. And then the connection dropped entirely, thanks to spotty Bluetooth in the city.
My first steps with an always-aware AI companion were strange, and even more science-fictional than anything I’d experienced in the past year. Much like a recent demo I had of Google’s always-on Gemini glasses, Meta’s Ray-Bans — which are already widely available — are taking the next steps toward being an always-informed assistant, or agent, as the AI landscape now calls it. Live AI and Live Translation, once turned on, stay on. The AI is supposed to be able to see what you see. And maybe it will help you do something you don’t know how to do.
But these features also look like previews of what could be a whole new set of Meta glasses coming next year, ones that could have their own display and maybe even a gesture-control wristband, based on hints Mark Zuckerberg dropped last week in a story by The Wall Street Journal’s Joanna Stern.
Right now, Live AI feels like a strange glimpse into a more ever-present, more intrusive AI future; in my very early experiences, it’s more of a companion than a helper. Still, the translation, when it works, feels surprisingly useful… even if it does run a bit laggy.
Live AI mode is part of a suite of Early Access features, and it turns on and off separately.
Live AI: a mode that’s constantly listening and watching
Turning on Live AI means starting a live video recording. Although the video isn’t saved for you to watch later, it’s processed by Meta’s AI via your phone, with responses relayed back to the glasses. The LED stays lit to let people know it’s on, but in my experience people don’t notice the LED all that much, or don’t seem to care. Everything you say can be interpreted by Meta AI, so forget about having conversations with anyone else while it’s running. Around the office, I just seemed like a weirdo, talking to myself or maybe appearing to talk to others (only to have people try to talk to me and realize I wasn’t talking to them). Live AI can be paused, though, by tapping the side of the glasses.
Stopping Live AI can be done by saying “Stop Live AI,” but sometimes Meta AI thought I was asking whether it was Live AI — a “Who’s on First?” moment. I had to yell it several times before it stopped.
With the Meta Ray-Bans on, it’s hard for anyone to know you’re wearing smart tech… or having a conversation with an AI.
The challenge with Live AI is figuring out what to do with it. I walked around the office asking about furniture placement and was told everything looked fine: “The room looks well designed and functional, with no obvious changes needed.” I asked about a story I was writing on my laptop, and it said, “The text feels like a cohesive and well-structured piece, with no parts that feel unnecessary.” I kept trying to get constructive feedback, and it was hard to get anything that wasn’t generic, although it did point out some salient lines and summarize my points.
As I walked out, it told me what street I was on, but it was wrong — I corrected it, and it simply acknowledged the correction and moved on. It knew the Chase bank I was looking at and told me the bank’s hours, and it knew Joe’s Pub when I was standing at the entrance to the Public Theater, but it couldn’t tell me what was playing that night. It recognized common pigeons, misidentified a car at the curb as a Mercedes (it was a Lincoln), and for some reason recommended a bar down the street that, according to Meta AI itself, no longer exists.
Live AI is in early-access beta right now, but I still need to figure out what I’ll actually do with it. The early-beta feel and the unclear purpose can combine to make it feel absurd, or unexpectedly profound. Either way, keeping it running takes a toll on battery life: expect about 30 minutes of use, instead of the hours the Ray-Bans usually last.
Live Translation requires downloading separate language packs to work.
Translation: useful, for a few languages
Live translation works the same way, starting on demand. But language packs must be downloaded for the specific languages you want to translate: Spanish to English, for example. Currently only Spanish, French, Italian and English are supported, which is disappointing.
I chatted with CNET colleague Danny Santana in bustling Astor Place, near our New York office. He spoke Dominican Spanish; I spoke English. The translated replies arrived in my ears a few seconds after he spoke, and over the course of our chat I felt like I caught enough to follow along. It wasn’t perfect: the translation AI didn’t seem to handle some phrases or idioms. The lag made it hard to know when a translation had finished or whether more was still coming, and I had trouble timing my responses to Danny as he patiently waited for me to speak from across the table.
Meta also shows a live transcription of the conversation in the Meta View phone app, which you can refer to while wearing the glasses to show the person you’re talking to, or to clarify what was said.
The Ray-Bans’ translation feature feels far more immediately useful than Live AI, but that’s also because Live AI still doesn’t make clear what I’m supposed to use it for. Maybe I could turn it on while cooking, or building IKEA furniture, or playing a board game? I don’t know. Help me figure this out, Meta. Also, without a heads-up display, using Live AI means guessing at what the glasses can actually see.
You could, of course, just use Google Translate on your phone instead. Meta uses its glasses for translation much the way you’d use a pair of earbuds. But Meta’s glasses can also see and translate written text, though that isn’t part of the live conversational translation mode.
Meta’s AR glasses moonshot, Orion, has its own neural-input wristband and heads-up 3D displays. When will those features gradually make their way to the Ray-Bans?
What’s Next: Display or Gestures? Or both?
Over the past year, Meta’s Ray-Bans have gained multiple core AI functions, each changing the equation in surprising ways. The latest Live AI additions, however, seem to push the limits of the hardware, cutting into battery life. I wish I had better ways of knowing what the AI can see, or of pointing with my hand to indicate what I want to ask about.
Future glasses seem headed in that direction, with both heads-up displays and gesture recognition. Meta’s CTO, Andrew Bosworth, in an end-of-year conversation with me, acknowledged that these are the next steps, but the time frame is unclear. Meta’s Orion glasses — an ambitious pair of futuristic glasses with 3D displays and a wrist-worn gesture tracker that I demoed earlier this year, one that can recognize finger taps and pinches — are still a long way from reality. But Meta’s wrist-worn neural band could arrive sooner, or perhaps some way for camera-equipped glasses to recognize hand gestures. And as for displays in smart glasses, Meta could explore a smaller heads-up display for showing information before moving to larger, more immersive AR displays. Bosworth pointed to next-generation AR glasses in a recent blog post, but will any of this be possible in next year’s generation of Ray-Ban-like glasses?
“Gesture-based controls require downward-facing cameras and possibly some lighting,” Bosworth says of future Meta glasses. “You can do it in the current Ray-Ban Metas — in Live AI, we’ve played with it — but you just have to do it (in) the camera’s field of view.” He does, however, acknowledge the possibility of adding an EMG wristband to the glasses sooner or later. “Now you’re adding a device that has to be charged, it’s extra cost, extra weight, but it’s so convenient.” But Bosworth sees an EMG band as useful only once the glasses have a display — something the Ray-Bans don’t have… yet. It seems likely that when the Ray-Bans do get some kind of heads-up display, an input band could debut alongside it. I’ve seen stabs at ideas like this in other products already.
And then there’s the question of battery life: How would these more always-on glasses last for more than a few hours at a time? And how much would all of this add to the cost of a next-generation pair of glasses?
Meanwhile, Meta is also exploring uses for its AI in areas like fitness, and as something that bridges over to VR, where Meta has another version of Meta AI. “It would be very unusual if, a year from now, the AI you’re using to track your steps in the world and give you advice isn’t aware that you’re also doing these workouts (in VR),” Bosworth says.
As Live AI continues to evolve, better ways to add gestures may become absolutely necessary. Bosworth sees pointing at things as a key way to train the AI to get better over time. “As artificial intelligence improves, the need for these much simpler, more intuitive gestures actually increases significantly.”
Meta’s Ray-Bans don’t let me point at things right now, and that makes Live AI a bit confusing to use at times. Perhaps it will take newer hardware, with added gestures and displays, to make the next leap.