The author, Dr Cristina Vanberghen, is a Senior Expert at the European Commission and a Distinguished Expert on the Advisory Council of the Indian Society of Artificial Intelligence and Law.
Artificial Intelligence is defining a new international order. Cyberspace is reshaping the geopolitical map and the global balance of power. Europe, coming late to the game, is struggling to achieve strategic sovereignty in an interconnected world characterized by growing competition and conflict between states. Do not think that cyberspace is an abstract concept. It has a very solid architecture composed of a physical infrastructure (submarine and terrestrial cables, satellites, data centers, etc.), a software infrastructure (information systems and programs, and the languages and protocols, such as the Internet protocol suite (TCP/IP), that allow data transfer and communication between machines), and a cognitive infrastructure encompassing the massive exchange of data, content and information beyond classic “humint”. Cyberspace is the fifth dimension: an emerging geopolitical space which complements land, sea, air and space, a dimension undergoing rapid militarization and in consequence deepening the divide between distinct ideological blocs at the international level. In this conundrum, the use and misuse of data – transparency, invisibility, manipulation, deletion – has become a new form of geopolitical power, and increasingly a weapon of war. The use of data is shifting the gravitational center of geopolitical power. This geopolitical reordering is taking place not only between states but also between technological giants and states. The Westphalian confidence in the nation state is being eroded by the dominance of these giants, which are oblivious to national borders and which develop technology too quickly for states to understand, let alone regulate.
What we are starting to experience is practically an invisible war characterized by data theft, manipulation and suppression, in which the chaotic nature of cyberspace leads to a mobilization of nationalism, and in which cyberweapons – now part of the military arsenal of countries such as China, Israel, Iran, South Korea, the United States and Russia – increase the unpredictability of political decision-making. The absence of common standards means undefined risks, leading to a level of international disorder with new borders across which the free flow of information cannot be guaranteed. There is a risk of fragmentation into networks based on the same protocols as the Internet but where the information that circulates is confined to what governments or the big tech companies allow you to see.
Whither Europe in this international landscape?
The new instruments for geopolitical dominance in today’s world are AI, 5G and 6G, quantum computing, semiconductors, biotechnology, and green energy. Technology investment is increasingly driven by the need to counter Chinese investment. In August 2022, President Joe Biden signed the CHIPS and Science Act, granting US$280 billion to the American tech industry, with US$52.7 billion devoted to semiconductors. Europe is hardly following suit. European technological trends do not reflect a very optimistic view of its future technological influence and power. The share of European countries’ investment in tech R&D, relative to total global tech R&D, has been declining rapidly for 15 years: Germany went from 8% to 2%, France from 6% to 2%. The European Union invests five times less in private tech R&D than the United States. Starting from ground zero 20 years ago, China has now greatly overtaken Europe and may catch up with the US. The question we face is whether, given this virtual arms race, each country will continue to develop its own AI ecosystem with its own (barely visible) borders, or whether mankind can create a globally shared AI space anchored in common rules and assumptions. The jury is out. In the beginning, the World Wide Web was supposed to be an open Internet. But the recent trend has been centrifugal. There are many illustrations of this point: from Russian efforts to build its own Internet network to OpenAI threatening to withdraw from Europe; from Meta threatening to withdraw its social networks from Europe over controversies about user data, to Google building an independent technical infrastructure. This fragmentation advances through a diversity of methods, ranging from content blocking to official corporate declarations.
But could the tide be turning? With the war in Ukraine we have seen a rapid acceleration in the use of AI, along with growing competition from the private sector, and this is now triggering more calls for international regulation of AI. And of course, any adherence to a globally accepted regulatory and technological model entails adherence to a specific set of values and interests.
Faced with this anarchic cyberspace, instead of increasing non-interoperability, it would be better to build on Internationalized Domain Names (IDNs), encompassing scripts such as Arabic, Cyrillic, Devanagari and Chinese, and avoiding linguistic silos. Otherwise, we run the clear risk of the globality of the Internet being undermined by a sum of closed national networks. And how can we ensure a fair technological revolution? Whereas military research was once the origin of technological revolutions, emerging and disruptive technologies (EDTs) – including dual-use technologies such as artificial intelligence, quantum technology and biotechnology – are now mainly being developed by Big Tech, and sometimes by start-ups. It is the private sector that is generating military innovation, to the point that private companies are becoming both the instruments and the targets of war. The provision by Elon Musk of Starlink to the Ukrainian army is the most recent illustration of this situation. This makes it almost compulsory for governments to work in lockstep with the private sector, at the risk of otherwise missing the next technological revolution.
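To make the Internationalized Domain Name mechanism concrete: IDNs keep the global DNS unified by mapping non-Latin labels onto an ASCII-compatible "xn--" (Punycode) form that any resolver can handle. A minimal sketch, using Python's standard-library idna codec (an illustrative choice, not something prescribed by the text):

```python
# Internationalized Domain Names in practice: the built-in "idna" codec
# converts non-Latin labels to the ASCII-compatible "xn--" form used in
# the global DNS, and converts them back again.

# A German domain label containing a non-ASCII character
ascii_form = "münchen.de".encode("idna")
print(ascii_form)  # b'xn--mnchen-3ya.de'

# A Cyrillic domain round-trips through the same mechanism,
# so the one global namespace can carry many scripts.
cyrillic = "пример.рф"
encoded = cyrillic.encode("idna")
print(encoded)
assert encoded.decode("idna") == cyrillic
```

Because both forms name the same resource in the same namespace, multilingual domains need not imply separate national networks, which is the point the paragraph above makes.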
The AI war
At the center of the AI war is the fight over standardization, which allows a technological ecosystem to operate according to common, interoperable standards. The government or economic operator that writes the rules of the game will automatically influence the balance of power and gain a competitive economic advantage. In a globalized world, however, what we need is not continued fragmentation or an AI arms race but a new international pact. Not a gentlemen’s pact based on goodwill, because goodwill simply does not exist in our eclectic, multipolar international (dis)order. We need a regulatory AI pact that, instead of increasing polarization in a difficult context characterized by a race for strategic autonomy, war, pandemics, climate change and economic crises, reflects a common humanity and equal partnerships. Such an approach would lead to joint investment in green technology and biotechnology with no need for national cyberspace borders.
EU AI Act
The emergence of ChatGPT has posed a challenge for EU policymakers in defining how such advanced artificial intelligence should be addressed within the framework of the EU's AI regulation.
An example of a foundation model is the one underlying ChatGPT, developed by OpenAI, which has been widely used as a basis for a variety of natural language processing tasks, including text completion, translation, summarization and more. It serves as a starting point for building more specialized models tailored to specific applications. According to the EU AI Act, these foundation models must adhere to transparency obligations, providing technical documentation and respecting copyright law in relation to data-mining activities. But we should bear in mind that the regulatory choices surrounding advanced artificial intelligence, exemplified by the treatment of models like ChatGPT under the EU's AI regulation, carry significant geopolitical implications.
The EU's regulatory stance on this aspect will shape its position in the global race for technological leadership. A balance must be struck between fostering innovation and ensuring ethical, transparent, and accountable use of AI. It is this regulatory framework that will influence how attractive the EU becomes for AI research, development, and investment. Stricter regulations on high-impact foundational models may impact the competitiveness of EU-based companies in the global AI market. It could either spur innovation by pushing companies to develop more responsible and secure AI technologies or potentially hinder competitiveness if the regulatory burden is perceived as too restrictive.
At the international level, the EU's regulatory choices will influence the development of international standards for AI. If the EU adopts a robust and widely accepted regulatory framework, it may encourage other regions and countries to follow suit, fostering global cooperation in addressing the challenges associated with advanced AI technologies. The treatment of AI models under the regulation also has implications for data governance and privacy standards. Regulations addressing data usage, transparency, and protection are critical not only for AI development but also for safeguarding individuals' privacy and rights. The EU's AI regulations will also have an impact on its relationships with other countries, particularly those with differing regulatory approaches. Alignment or divergence in AI regulation could become a factor in trade negotiations and geopolitical alliances.
Last but not least, these regulatory decisions reflect the EU's pursuit of strategic technological autonomy. By establishing control over the development and deployment of advanced AI, the EU intends to reinforce its strategic autonomy and reduce dependence on non-European technologies, ensuring that its values and standards are embedded in the AI systems used within its borders. The EU AI Act can also contribute to the ongoing global dialogue on AI governance, influencing discussions in international forums where countries are working to develop shared principles for the responsible use of AI.
The EU's regulatory choices regarding advanced AI models like ChatGPT are thus intertwined with broader geopolitical dynamics, influencing technological leadership, international standards, data governance, and global cooperation in the AI domain. Notably, a few days before the discussion on the final format of the EU AI Act, the OECD adjusted its definition of AI in anticipation of the European Union's regulation, demonstrating a commitment to keeping pace with the evolving landscape of AI technologies.
The revised definition of AI by the Organisation for Economic Co-operation and Development (OECD) appears to be a significant step in aligning global perspectives on artificial intelligence. The updated definition, designed to embrace technological progress and eliminate human-centric limitations, demonstrates a dedication to staying abreast of AI's rapid evolution.
At the international level, the G7 has also reached agreement on an AI Code of Conduct. This marks a critical milestone: the principles laid out by the G7 pertain to advanced AI systems, encompassing foundation models and generative AI, with a central focus on enhancing the safety and trustworthiness of this transformative technology. In my view, it is imperative to monitor closely the implementation of these principles and to explore the specific measures that will be essential to their realization. The success of this Code of Conduct depends greatly on its effective implementation. Its principles are established to guide behavior, ensure compliance, and safeguard against potential risks. Specifically, we require institutions with the authority and resources to enforce the rules and hold violators accountable. This may involve inspections, audits, fines and other enforcement mechanisms; educating people about these principles, their implications and how to comply with them is also essential. Regular monitoring of compliance, and reporting mechanisms that provide insight into the effectiveness of the regulations, will be needed. Data collection and analysis are crucial for making informed decisions and adjustments, and periodic reviews and updates are necessary to keep pace with developments. Effective implementation often necessitates collaboration among governments, regulatory bodies, industry stakeholders, and the public, and transparent communication about these principles is crucial to build trust and ensure that citizens understand the rules. As the AI landscape evolves, it becomes increasingly vital for regulators and policymakers to remain attuned to the latest developments in this dynamic field.
Active engagement with AI experts and a readiness to adapt regulatory frameworks are prerequisites for ensuring that AI technologies are harnessed to their full potential while effectively mitigating potential risks. An adaptable and ongoing regulatory approach is paramount in the pursuit of maximizing the benefits of AI and effectively addressing the challenges it presents.
First, the ideological differences between countries on whether and how to regulate AI will have broader geopolitical consequences for managing AI and information technology in the years to come. Control over strategic resources, such as data, software, and hardware has become important for all nations. This is demonstrated by discussions over international data transfers, resources linked to cloud computing, the use of open-source software, and so on.
Secondly, the strategic competition for control of cyberspace and AI seems at least for now to increase fragmentation, mistrust, and geopolitical competition, and as such poses enormous challenges to the goal of establishing an agreed approach to Artificial Intelligence based on respect for human rights.
Thirdly, despite this, there is a glimmer of light emerging. To some extent, values are evolving into an ideological approach that aims to ensure a human rights-centered view of the role and use of AI. Put differently, an alliance is gingerly forming around a human rights-oriented view of socio-technical governance, embraced and encouraged by like-minded democratic nations: Europe, the USA, Japan and India. These regions have an opportunity to set the direction through greater coordination in developing the evaluation and measurement tools that contribute to credible AI regulation, risk management, and privacy-enhancing technologies. Both the EU AI Act and the US Algorithmic Accountability Act of 2022, for example, require organizations to perform impact assessments of their AI systems before and after deployment, including providing more detailed descriptions of data, algorithmic behavior, and forms of oversight. India is taking its first steps in the same direction.
The three regions are starting to understand the need to avoid the fragmentation of technological ecosystems, and that securing AI alignment at the international level is likely to be the major challenge of our century.
Fourthly, undoubtedly, AI will continue to revolutionize society in the coming decades. However, it remains uncertain whether the world's countries can agree on how technology should be implemented for the greatest possible societal benefit or what should be the relationship between governments and Big Tech.
Finally, no matter how AI governance is ultimately designed, it must be understandable to the average citizen, to businesses, and to the policymakers and regulators confronted today with a plethora of initiatives at all levels. AI regulations and standards need to be in line with our reality. Taking AI to the next level means increasing the digital prowess of global citizens, fixing the rules for the market power of tech giants, and understanding that transparency is part of the responsible governance of AI.
The governance of tomorrow’s AI will be defined by the art of building bridges today. If AI research and development remain unregulated, ensuring adherence to ethical standards becomes a challenging task. Relying solely on guidelines may not be sufficient, as guidelines lack enforceability. To prevent AI research from posing significant risks to safety and security, there is a need to consider more robust measures beyond general guidance.
One potential solution is to establish a framework that combines guidelines with certain prescriptive rules. These rules could set clear boundaries and standards for the development and deployment of AI systems. They might address specific ethical considerations, safety protocols, and security measures, providing a more structured approach to ensure responsible AI practices.
However, a major obstacle lies in the potential chaos resulting from uncoordinated regulations across different countries. This lack of harmonization can create challenges for developers, impede international collaboration, and limit the overall benefits of AI research and development. To address this issue, a global entity like the United Nations could play a significant role in coordinating efforts and establishing a cohesive international framework. A unified approach to AI regulation under the auspices of the UN could help mitigate the competition in regulation or self-regulation among different nations. Such collaboration would enable the development of common standards that respect cultural differences but provide a foundational framework for ethical and responsible AI. This approach would not only foster global cooperation but also streamline processes for developers, ensuring they can navigate regulations more seamlessly across borders.
In conclusion, a combination of guidelines, prescriptive rules, and international collaboration, potentially spearheaded by a global entity like the United Nations, could contribute to a more cohesive and effective regulatory framework for AI research and development, one that addresses ethical concerns and safety risks while fostering cooperation across borders.