We need to think again about what the ‘A’ in AI signifies
AI bots have slipped into Santa’s grotto this month. For one thing, AI-enabled gifts are on the rise, as I know myself, having just received an impressive AI dictation device.
Meanwhile, retailers such as Walmart are offering AI tools to provide holiday assistance to shoppers, and these seem to be working pretty well, judging by recent reviews.
But here is the paradox: even as artificial intelligence spreads into our lives and Christmas stockings, hostility remains high. Earlier this month, for example, a UK government inquiry found that four in ten people expect AI to bring benefits, while three in ten expect significant harm from “data security” breaches, the “spread of misinformation” and “job displacement”.
That is perhaps not surprising: the risks are real and well-publicized. But as we move toward 2025, it is worth reflecting on three oft-overlooked points about the current anthropology of AI that can help frame this paradox more constructively.
First, we need to rethink which “A” we mean when we use “AI” today. Yes, machine learning systems are “artificial.” But bots do not always, or even usually, replace our human brains. Instead, they typically enable us to perform tasks more quickly and efficiently. Shopping is just one example.
So perhaps we should reframe AI as ‘augmented’ or ‘accelerated’ intelligence, or else ‘agentic’ intelligence, to use the buzzword that a recent Nvidia blog calls the “next frontier” of AI: bots that can act as autonomous agents for humans. That will be a key theme in 2025. Or as Google declared recently when announcing its Gemini AI model: “The age of AI agency is here.”
Second, we need to think outside the cultural confines of Silicon Valley. Until now, “Anglophone actors” have “dominated the debate” around intelligent machines on the world stage, as the academics Stephen Cave and Kanta Dihal note in the introduction to their book Imagining AI, a reflection of US technological dominance.
However, other cultures view AI a little differently. In developing countries, for example, attitudes are far more positive than in developed ones, as James Manyika, co-chair of the UN advisory body on AI and a senior Google official, told Chatham House recently.
Countries such as Japan are also distinctive. The Japanese public has long shown a much more positive attitude towards robots than its Anglophone counterparts, and this is now reflected in attitudes towards AI systems.
One factor is Japan’s labor shortage (and the fact that many Japanese are wary of immigrants filling that gap). Another is popular culture: in the second half of the 20th century, while Hollywood movies such as The Terminator or 2001: A Space Odyssey spread fear of smart machines among Anglophone audiences, the Japanese public was enamored with Astro Boy, a saga that portrayed robots in a favorable light.
Its creator, Osamu Tezuka, once attributed this to the influence of the Shinto religion, which, unlike Judeo-Christian traditions, does not draw strict boundaries between animate and inanimate objects. “The Japanese make no distinction between man, the superior creature, and the world about him,” he once observed. “Everything is one.”
And that is reflected in how companies such as Sony or SoftBank design AI products today, as one of the essays in Imagining AI notes: they are trying to create “robots of the heart” in a way that American consumers might find unnerving.
Third, these cultural variations show that our reactions to artificial intelligence need not be set in stone; they can evolve as technologies change and cross-cultural influences spread. When the anthropologist Ken Anderson and colleagues studied Chinese and American consumers’ attitudes toward facial recognition tools, they found that while the former accepted the technology for everyday tasks such as banking, the latter did not.
That distinction seemed to reflect American concerns about privacy. Yet in the same year that the study was published, Apple introduced facial recognition tools to US consumers. The key point is that “cultures” are not like Tupperware boxes, sealed and static; they are more like slow-moving rivers with muddy banks, into which new streams flow.
So whatever 2025 brings, the one thing that can be predicted is that our attitudes towards AI will keep changing subtly as the technology becomes ever more normal. That may alarm some. But it may also help us reframe the technology debate more constructively, and focus on ensuring that people control their digital “agents,” not the other way around. Investors today may be rushing into artificial intelligence, but they should ask which “A” they want in that AI label.