EU regulation of artificial intelligence in the shadow of global interdependence

V

Effective AI regulation is not just a question of ambitious high-level principles. It also requires the much more technical, and seemingly mundane, standardization that underpins a globally integrated digital space and technology sector in the first place. Typically, this work is done in private transnational organizations. But the line between technical and political standards is blurrier than it may seem, especially when technical definitions codify politically sensitive characteristics of AI systems. Project V asks: to what degree should the EU outsource AI standard setting to private organizations?

In technical domains, regulation is often not just a question of legal rules derived from politically charged principles. It also involves seemingly far more mundane shared specifications that allow technologies to be standardized and interoperable. Just about any product that is for sale is underpinned, as a whole or through its components, by standards that specify things like its safety, the quality of its materials, and its interoperability. The standards that hide behind the myriad small labels we find on any electrical device, or behind the technical specifications in user manuals, are essential grease for the mass production of both goods and services.

These technical standards are typically set through organizations that are, at least formally, private. On the international level, they include the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) - two bodies whose joint subcommittee JTC 1/SC 42 began work on AI standards in 2017. The European counterparts to ISO and IEC - CEN and CENELEC - are central actors in global standardization dynamics, but they, too, are formally private bodies, not arms of the European Union.

The role of bodies such as CEN and CENELEC goes far beyond technical matters. At least in the original AI Act proposal tabled by the European Commission, standard-setting organizations were charged with formalizing the criteria by which regulatory compliance would be judged. For example, if European rules required algorithms used by public bodies to be free of bias, it might fall to private standard setters to develop the technical procedures according to which such freedom from bias would be certified. Opinions differ about just how serious a problem deep-seated and hard-to-detect algorithmic biases are. Any technical procedure to detect them or certify their absence is therefore likely to draw intense political fire. What looks like a technical question turns into a politically fraught topic.
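A minimal sketch can illustrate why such certification is political rather than purely technical. The fairness metric used here (demographic parity difference) and the tolerance threshold are illustrative assumptions, not taken from any actual CEN/CENELEC standard; a real certification procedure would have to choose and defend both, and different choices can certify different systems.

```python
# Minimal sketch: one way a "freedom from bias" test might be operationalized.
# Both the metric (demographic parity difference) and the tolerance threshold
# are hypothetical choices for illustration, not part of any actual standard.

from typing import Sequence


def demographic_parity_difference(
    decisions: Sequence[int],  # 1 = favourable outcome, 0 = unfavourable
    groups: Sequence[str],     # protected-group label for each decision
) -> float:
    """Largest gap in favourable-outcome rates between any two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())


# Hypothetical tolerance: certify only if group rates differ by < 5 points.
TOLERANCE = 0.05

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(decisions, groups)
print(f"parity gap = {gap:.2f}; certified = {gap < TOLERANCE}")
```

Because fairness metrics such as demographic parity and equalized odds can be mutually incompatible when groups differ in underlying outcome rates, whoever writes the standard effectively decides which notion of fairness compliance will enforce.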

At the same time, there are good reasons for outsourcing technical standard setting to experts. Algorithms are sufficiently complicated that only people with in-depth knowledge can assess whether any particular set of rules would indeed have the desired effect. The higher a field's technical complexity, the higher the chance that standards drafted without such expertise are ineffective or have unintended consequences. On top of that, outsourcing standard setting to experts once the political goalposts have been set can avoid an endless political tug-of-war in which conflicting interests keep trying to pull regulatory outcomes their way. The challenge, then, is to square the efficiency of standard setting and the effectiveness of the standards themselves with political sovereignty over policy.

Alongside technical standard setters, both alliances of firms central to the AI field and NGOs seek to shape AI governance, for example by developing certification procedures and labels comparable to those of the Forest Stewardship Council. Here, too, the question is to what degree EU cooperation with, or co-optation of, such initiatives can promote its regulatory aims, or instead create a form of regulatory capture, especially when large corporations are involved.
