It’s time to overcome AI nationalism
In 2025 there will be a course correction in AI and geopolitics as world leaders increasingly understand that their national interests are best served by the promise of a more positive and cooperative future.
The post-ChatGPT years in AI discourse can be characterized as something between a gold rush and a moral panic. In 2023, even as investment in AI reached record levels, tech figures including Elon Musk and Steve Wozniak signed an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, while others compared the risks of AI to those of ‘nuclear war’ and ‘pandemics’.
This has understandably clouded the judgment of political leaders, pushing the geopolitical conversation about AI into some uncomfortable places. At the AI & Geopolitics Project, my research organization at the University of Cambridge, our analysis clearly shows a growing trend towards AI nationalism.
In 2017, for example, President Xi Jinping announced plans for China to become an AI superpower by 2030. The Chinese “Next Generation AI Development Plan” aimed for the country to reach a “world-leading level” of AI by 2025 and to become a major hub for AI innovation by 2030.
The CHIPS and Science Act of 2022, together with sweeping US export controls on advanced semiconductors, was a direct response to this, designed to favor US domestic AI capabilities and constrain China’s. In 2024, following an executive order signed by President Biden, the US Treasury Department also published draft rules to ban or limit US investment in Chinese artificial intelligence.
AI nationalism portrays AI as a battle to be won, not an opportunity to be seized. Those who support this approach, however, would do well to draw deeper lessons from the Cold War than the notion of an arms race. At the time, the United States, while striving to become the most technologically advanced nation, was able to use politics, diplomacy, and statesmanship to create a positive and ambitious vision for space exploration. Successive US administrations also managed to win support at the United Nations for a treaty that protected outer space from nuclear destruction, specified that no nation could colonize the moon, and guaranteed that outer space was “the province of all mankind.”
The same political leadership is lacking in AI. In 2025, however, we will begin to see a shift toward cooperation and diplomacy.
France’s AI summit in 2025 will be part of that change. President Macron is already reimagining his event, moving from a strictly “safe” framing of AI risk to one that he says focuses on more pragmatic “solutions and standards.” In a virtual address to the Seoul summit, the French president made it clear that he intends to tackle a much broader set of policy issues, including how to actually ensure that society benefits from AI.
The UN, acknowledging the exclusion of some countries from the AI debate, also published its own plans in 2024 aimed at a more collaborative global approach.
Even the US and China have begun tentative diplomacy, establishing a channel for bilateral consultations on AI in 2024. Although the impact of these initiatives remains uncertain, they clearly indicate that in 2025 the world’s AI superpowers will likely pursue diplomacy over nationalism.