Around the world, international organizations and transnational initiatives are working on standards for AI development and use. Some outline abstract principles; others negotiate much more concrete regulations. On the one hand, global AI standards can help establish public sovereignty over these technologies. On the other hand, they may prove to be paper tigers and lock the EU into agreements that fall short of its own ambitions. Project IV asks: when are global AI rules the right answer to regulatory concerns?
The impact of digital technologies cannot be contained within national borders. In principle, international regulatory cooperation can help to maximize AI's positive effects and limit its harmful ones. It may not only allow countries to share experiences and best practices; it could also forestall deleterious regulatory races to the bottom, in which jurisdictions underbid each other with comparatively lax rules to eke out advantages for local firms.
Multilateral initiatives for AI governance are currently proliferating. They include, for example, the non-binding G20 AI Principles and UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence. Eight international organizations, prominently including the EU, have launched globalpolicy.ai, a platform for global standard development.
At the same time, the Global Partnership on AI (GPAI), launched in 2020, brings together roughly two dozen “like-minded” countries plus the EU, aspiring to build a shared vision of AI around democratic principles and liberal values. The OECD has emerged as a knowledge hub supporting this work. As if that were not enough, rising geopolitical tensions, fuelled by the Russian war against Ukraine, have boosted international AI alliances that place the technology's military dimension front and centre, with NATO currently leading the way. Truly global efforts, crucially involving China and Russia, thus compete with alliances of “like-minded” countries.
The EU has diverse motivations to engage in global rule-setting. Such rule-setting might help to universalize EU rules and circumvent regulatory competition. It might also allow the EU to step outside asymmetrical EU-US power relations in AI regulation and tie the United States into a multilateral regime that might favour, in relative terms, European perspectives.
Indeed, the EU has repeatedly managed to export its regulatory preferences to the rest of the world. Sometimes its rules, developed ahead of those elsewhere, simply became a welcome template to copy. In other instances, access to EU markets was conditional on compliance with EU rules, and countries dependent on exports into the EU embraced European regulation more or less wholesale.
Some in Brussels once more hope for a relatively easy export of EU regulatory preferences. It is uncertain, however, how much scope there really is for such a “Brussels effect” in AI regulation, given the geostrategic charge attached to AI and the fact that, in technological terms, the EU clearly lags behind China and the USA.
In any case, a political determination to reach multilateral agreement might dilute high regulatory ambitions. After all, global, or at least multilateral, AI rules would require either substantive agreement or political compromise. It is ex ante unclear to what degree cross-border agreement justifies substantive concessions in regulatory negotiations. Given all these competing initiatives, which ones does, and should, the EU push as solutions to regulatory challenges?