Why Context-Aware AI Agents Will Give Us Superpowers in 2025
2025 will be the year that big tech moves from selling ever more powerful tools to selling ever more powerful capabilities. The difference between a tool and an ability is subtle but profound. We use tools as external artifacts that help us overcome our organic limitations. From cars and airplanes to phones and computers, tools greatly expand what we can achieve as individuals, in large teams, and as vast civilizations.
Abilities are different. We experience abilities in the first person; they feel internal and immediately accessible to our conscious minds. For example, language and mathematics are man-made technologies that we load into our brains and carry with us throughout our lives, expanding our abilities to think, create, and collaborate. They are superpowers that feel so intrinsic to our existence that we rarely think of them as technology at all. Fortunately, we don’t need to buy a service plan.
The next wave of superpowers, however, won’t be free. But just like our abilities to think verbally and numerically, we will experience these powers as self-embodied abilities that we carry with us throughout our lives. I refer to this new technological discipline as augmented mentality, and it will arise from the convergence of AI, conversational computing and augmented reality. In 2025, it will kick off an arms race among the biggest companies in the world to sell us superhuman abilities.
These new superpowers will be unleashed by context-aware AI agents loaded into body-worn devices (like AI glasses) that travel with us throughout our lives, see what we see, hear what we hear, experience what we experience, and provide us with enhanced abilities to perceive and interpret our world. In fact, I predict that by 2030, the majority of us will live our lives with the help of context-aware AI agents that weave digital superpowers into our normal everyday experiences.
How will our superhuman future unfold?
First, we will whisper to these intelligent agents, and they will whisper back, acting as an omniscient alter ego that gives us context-sensitive recommendations, knowledge, guidance, advice, spatial reminders, directional cues, haptic nudges and other verbal and perceptual content that will coach us through our days and educate us about our world.
Consider this simple scenario: You’re walking downtown and notice a store across the street. You wonder what time it opens, so you grab your phone and type (or say) the name of the store. You quickly find the opening hours on a website, where you can also view other information about the store. This is the tool-based computing model prevalent today.
Now, let’s see how big tech will move to a capability computing model.
Stage 1: You wear AI-powered glasses that can see what you see, hear what you hear, and process your surroundings through a multimodal large language model (LLM). Now, when you spot that store across the street, you just whisper to yourself, “I wonder when it opens?” and a voice will immediately ring in your ears, “10:30 am”.
I know it’s a subtle change from asking your phone to look up the name of a store, but it will feel profound. The reason is that the context-aware AI agent will share your reality. It’s not just tracking your location like GPS; it’s seeing, hearing and paying attention to what you’re paying attention to. This will make it feel much less like a tool and much more like an internal ability linked to your first-person reality.
And when the AI-powered alter ego in our ears asks us a question, we’ll often answer by simply nodding to confirm (detected by sensors in the glasses) or shaking our heads to refuse. It will feel so natural and seamless that we may not even consciously realize we’ve responded.
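To make the Stage 1 loop concrete, here is a minimal sketch of how such an agent might be wired together. Everything in it is hypothetical: the device functions (capture_observation, speak) and the model call (query_multimodal_llm) are placeholder names invented for illustration, not APIs from any real product or from this article.

```python
# A minimal sketch of the Stage 1 loop described above, assuming
# hypothetical device and model APIs. None of these names correspond
# to a real SDK; they only illustrate the flow of first-person context.

from dataclasses import dataclass


@dataclass
class Observation:
    image_jpeg: bytes   # what the glasses currently see
    utterance: str      # what the wearer just whispered


def capture_observation() -> Observation:
    """Placeholder for the glasses' camera and microphone pipeline."""
    return Observation(image_jpeg=b"", utterance="I wonder when it opens?")


def query_multimodal_llm(obs: Observation) -> str:
    """Placeholder for a multimodal LLM call: the model receives the
    live camera frame plus the transcribed whisper and returns a short,
    speakable answer grounded in what the wearer is looking at."""
    return "10:30 am"


def speak(text: str) -> None:
    """Placeholder for the glasses' text-to-speech output."""
    print(f"(in your ear) {text}")


def assistant_loop() -> None:
    # The agent shares the wearer's first-person reality: every query is
    # answered against the current camera frame, not a typed search term.
    obs = capture_observation()
    speak(query_multimodal_llm(obs))


if __name__ == "__main__":
    assistant_loop()
```

The design point is the one made above: the query carries the wearer’s live first-person context (the camera frame) rather than a typed search term, which is what shifts the experience from tool to ability.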
Stage 2: By 2030 there will be no need to whisper to the AI agents traveling with us through our lives. Instead, we’ll be able to simply mouth the words silently, and the AI will know what we’re saying by reading our lips and detecting activation signals from our muscles. I am convinced that mouthed words will be deployed because they are more private, more resilient in noisy spaces and, most importantly, will feel more personal, visceral and self-embodied.
Stage 3: By 2035, you may not even need to mouth the words. That’s because the AI will learn to interpret the signals in our muscles with such subtlety and precision that we’ll only need to think about mouthing words to convey our intent. We will be able to focus our attention on any object or activity in our world, think something, and useful information will ring out from our AI glasses like an omniscient voice in our heads.
Of course, the possibilities will go far beyond simply marveling at the things around you. That’s because the onboard AI that shares your first-person reality will learn to anticipate the information you want before you even ask for it. For example, when a co-worker approaches from down the hall and you can’t remember his name, the AI will sense your anxiety and chime in: “Greg from engineering.”
Or when you pick up a can of soup at the store and are curious about the carbs or wonder if it’s cheaper at Walmart, the answers will just ring in your ears or appear visually. It will even give you superhuman abilities to judge the emotions on other people’s faces, predict their moods, goals or intentions, and coach you during real-time conversations to make you more engaging, attractive or persuasive (check out this fun video example).
I know some people will be skeptical about the level of adoption I predict above and the rapid time frame, but I don’t make these claims lightly. I have spent much of my career working on technologies that augment and expand human capabilities, and I can say without a doubt that the mobile computing market is about to move in this direction in a very big way.
In the past 12 months, two of the world’s most influential and innovative companies, Meta and Google, have revealed their intentions to give us embodied superpowers. Meta made the first big move by adding AI to its Ray-Ban glasses and by showing off its Orion mixed reality prototype, which adds impressive visual capabilities. Meta is now in a very good position to leverage its heavy investments in AI and extended reality (XR) to become a major player in the mobile computing market, and it will likely do so by selling us superpowers we can’t resist.
Not to be outdone, Google recently announced Android XR, a new AI-powered operating system for augmenting our world with seamless, context-aware content. It also announced a partnership with Samsung to launch new glasses and headsets. With more than 70% market share for mobile operating systems and an increasingly strong AI presence with Gemini, I believe Google is well positioned to become the leading provider of technology-enabled human superpowers over the next few years.
Of course, we have to consider the risks
To quote the famous 1962 Spider-Man comic, “with great power comes great responsibility.” This wisdom is literally about superpowers. The difference is that much of the responsibility will fall not on the consumers who buy these technological powers, but on the companies that provide them and the regulators that oversee them.
After all, when wearing AI-powered augmented reality (AR) glasses, each of us can find ourselves in a new reality where technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper advice, information and guidance into our ears. While the intentions could be positive, even magical, the potential for abuse is just as profound.
To avoid dystopian outcomes, my main recommendation to both consumers and producers is to adopt a subscription business model. If the arms race to sell superpowers is driven by which company can deliver the most amazing new abilities for a reasonable monthly fee, we all win. If the business model instead becomes a competition to monetize superpowers by delivering the most effective targeted influence to our eyes and ears throughout our daily lives, consumers could easily be manipulated with a precision and pervasiveness never before encountered.
And these superpowers won’t feel optional. After all, their absence could put us at a cognitive disadvantage. It is now up to industry and regulators to ensure that these new capabilities are deployed in ways that are not intrusive, manipulative or dangerous. I am convinced this could be a magical new direction for computing, but it requires careful planning and oversight.
Louis Rosenberg founded Immersion Corp, Outland Research and Unanimous AI, and is the author of Our Next Reality.