UPDATED 21:55 EDT / FEBRUARY 10 2024


The rising tide of sovereign AI

In an interview late last year, Nvidia Corp. CEO Jensen Huang declared that the world is going through a second wave of artificial intelligence.

The first, he argued, was driven by the private sector – companies across the tech industry spectrum, from large multinationals to nascent startups. A major catalyst for the second wave, he said, is government engagement in AI and the “recognition that every region and every country needs to build their sovereign AI.” Only in this way, he suggested, can nation-states serve their specific language and cultural needs and leverage their particular business strengths in the age of AI.

It’s not the first time a leading tech CEO has called for sovereign AI. IBM Corp. CEO Arvind Krishna has made a similar call to action: “I am a firm believer that every country ought to have some sovereign capability on artificial intelligence, including large language models for AI,” Krishna stated, encouraging the Indian government – and presumably others – to set up national AI computing centers and common data sets for specific use cases.

So what precisely is “sovereign AI”? The concept is an offshoot of digital sovereignty – the idea that because digital technology shapes an ever-increasing number of significant political, economic, military and societal trends and outcomes, controlling such technology is critical to defending and promoting the national interest. (Certain forms of digital sovereignty have many critics, including the U.S. government, which advocates for an alternative approach called digital solidarity.)

In practice, sovereign AI involves national governments’ strategic development and deployment of AI technologies to protect national sovereignty, security, economic competitiveness and societal well-being. Sovereign AI encompasses a nation’s approach to harnessing AI for its socioeconomic, cultural and geopolitical context and – in theory – stands in contrast with corporate-controlled AI, which is developed and deployed less in any nation-state’s interest and more to generate profits and gain market share.

Several examples of sovereign AI strategies are emerging around the world:

  • India’s Sovereign AI Plan: Though Minister of State for Electronics and IT Rajeev Chandrasekhar has stated that the Indian government is not planning to compete against private-sector AI actors, recent reports indicate plans to organize Indian data and make it available for building AI models, and to form public-private partnerships to develop the infrastructure needed to train and deploy AI in India and perhaps abroad as well.

  • Singapore’s Southeast Asia AI Plan: In December, the Singaporean government announced plans to build an LLM, prompted in part by the “strategic need to develop sovereign capabilities in LLMs. Singapore and the region’s local and regional cultures, values and norms differ from those of Western countries, where most large language models originate.” The LLM project also supports Singapore’s National AI Strategy 2.0.

  • The Netherlands’ Generative AI Vision: In January, the Dutch government released its generative AI plan, which includes further development of the Netherlands’ open large language model, GPT-NL (supported by the Dutch Ministry of Economic Affairs and Climate Policy). The government will also pursue investments in large-scale scientific and technological infrastructure – including supercomputers – at the national and European Union levels. (The EU itself is pursuing sovereign AI plans by investing in supercomputers to support European AI startups.)

  • Taiwan’s Sovereign Model Strategy: Taiwan is building an LLM called the Trustworthy AI Dialogue Engine, or Taide, funded primarily by Taiwan’s government. Its main purpose is to counter the influence of Chinese AI tools such as Baidu’s Ernie bot, which provides politically biased information. The model is based on Meta’s open-source Llama 2, fine-tuned on licensed content from local media and government agencies (an illustrative sketch of that kind of fine-tuning follows this list).
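
To make the Taide-style approach concrete, here is a minimal, hypothetical sketch of fine-tuning an open-weight base model on a licensed local-language corpus using low-rank adapters, with the Hugging Face transformers, peft and datasets libraries. This is not Taide’s actual pipeline: the model name, the file licensed_corpus.txt and all hyperparameters are assumptions made purely for illustration.

    # Illustrative sketch only: adapt an open-weight base model (here, Llama 2)
    # to a licensed local-language corpus. Names and settings are assumptions,
    # not the actual Taide configuration.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    base = "meta-llama/Llama-2-7b-hf"  # gated model; access must be requested from Meta
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

    model = AutoModelForCausalLM.from_pretrained(base)
    # Low-rank adapters (LoRA) keep most base weights frozen, cutting compute needs.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                             target_modules=["q_proj", "v_proj"],
                                             task_type="CAUSAL_LM"))

    # Hypothetical corpus of licensed local-language text, one document per line.
    corpus = load_dataset("text", data_files={"train": "licensed_corpus.txt"})["train"]
    corpus = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                        remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="sovereign-lm-adapter",
                               per_device_train_batch_size=1,
                               num_train_epochs=1),
        train_dataset=corpus,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("sovereign-lm-adapter")  # save only the small adapter weights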

As the Taiwan example shows, and Nvidia’s and IBM’s advocacy suggests, the private sector – and especially the U.S. tech industry – plays a critical role in the move toward sovereign AI. These companies are key to what the Dutch government and others call “open strategic autonomy” – a willingness to partner and cooperate with foreign governments and companies, combined with the ability to act alone and independently if necessary.

Importantly, sovereign AI encompasses but isn’t limited to regulating the technology. Governments embarking on the strategy are thinking about AI as infrastructure rather than just a problem to solve with laws. They are starting to plan, build and enhance their efforts in the four key areas of technology development: physical infrastructure, software, capital and workforce development. Furthermore, governments pursuing sovereign AI strategies are focusing on AI as critical to their overall national efforts relating to defense, public safety, economic security and development, and even foreign policy.

Sovereign AI represents a significant shift in how nations approach technology in the age of AI. Much of this could be a positive development, especially if the U.S., its allies and like-minded partners work together to shape the development of AI in a manner that encourages self-sufficiency but discourages the creation of digital walls among allies and partners – and also brings more participants into what might otherwise be a two-way AI race between the U.S. and China.

But the move to sovereign AI risks further fragmenting the global digital ecosystem, which is already breaking up into regions. That fragmentation could heighten geopolitical tensions as nations vie for technological superiority, and AI competition among nation-states could lead to conflicts over intellectual property, trade disputes or, in extreme cases, even military confrontation. The potential for friction and conflict is one deeply important reason why it remains critical for the U.S. and other countries engaged in AI development and deployment to continue international cooperation and coordination efforts through the G7 and other international forums.

Pablo Chavez is an adjunct senior fellow with the Center for a New American Security’s Technology and National Security Program and a technology policy expert who served as vice president of global government affairs and public policy for Google Cloud until February 2022. He wrote this article for SiliconANGLE.
Image: SiliconANGLE/Microsoft Copilot Designer
