
Why cooperation on AI rules is a game of multidimensional chess


The second finding of our AI series shows how governments draw from ten different policy areas to establish AI rules.

Authors

Johannes Fritz, Tommaso Giardini

Date Published

03 May 2024

Governments draw from ten different policy areas to establish AI rules and impose different requirements to operationalise each OECD AI principle. Today, governments have a unique opportunity to learn from diverse regulatory approaches and avoid fragmentation risk.

AI rules draw from diverse policy areas

Our comparative analysis of eleven AI rulebooks reveals that AI rules do not form a single, delineated policy area but draw from almost a dozen existing ones. Over half of the 720 total requirements concern either regulatory compliance and transparency or design and testing standards. Less frequently used policy areas include consumer protection, data governance, and content moderation. This diversity is mirrored in the more than 550 AI regulation and enforcement developments documented by the Digital Policy Alert since January 2020. AI rules are diverse because AI is a multifaceted technology: data governance rules regulate the data with which AI is trained and protect each AI user’s privacy, content moderation rules set guardrails for AI-generated output, and transparency rules address the opacity of AI systems. While these policy areas all pursue legitimate objectives, their interplay complicates international alignment.

Multiple policy areas intersect within each OECD AI Principle

When governments operationalise the OECD AI Principles, they combine regulatory requirements from different policy areas. To implement the principle of human-centred values and fairness (1.2), governments draw from six policy areas. The principle of transparency and explainability (1.3) is implemented through rules on regulatory compliance and transparency, consumer protection, and content moderation. The rules implementing the other principles span at least four policy areas.

In turn, several policy areas implement multiple OECD AI Principles. Regulatory compliance and transparency, for instance, is relevant to all five principles. Design and testing standards, as well as consumer protection, are pertinent to the implementation of four principles. Other policy areas, namely competition, data governance, intellectual property, and labour law, implement only one principle.

AI rules create a risk of multidimensional divergence

The diversity of AI rules creates a risk of divergence in the implementation of the OECD AI Principles on three levels:

  • Governments prioritise the OECD AI Principles differently, as demonstrated in our first piece.

  • When implementing the same principle, governments may focus on different policy areas.

  • When using the same policy area to implement the same principle, governments impose different requirements.

For example, multidimensional divergence is visible in how governments implement the principle of human-centred values and fairness (1.2). 

  • China and the United States emphasise this principle more than other governments. 

  • Some governments establish rules regarding data governance, such as data protection requirements. Other governments demand consumer protection, for example through non-discrimination obligations. 

  • Even within these policy areas, a patchwork of divergent requirements emerges. Within data governance, only some governments establish data subject rights and data security requirements. Within consumer protection, some governments establish rights (to object to or contest AI), while others establish age verification requirements. 

Multidimensional divergence across the OECD AI Principles is evidenced by how rarely a single regulatory requirement is used across borders. Our comparative analysis found 75 different regulatory requirements, applied a total of 720 times across the seven studied jurisdictions. Only one requirement, the disclosure of technical documentation about the AI system, features in all the AI rulebooks we studied. In contrast, for every principle except 1.2 (where the share is 29 percent), over a third of all regulatory requirements appear in only one or two jurisdictions.

The opportunity to coordinate AI rules resembles a multidimensional chess game

Governments working towards international alignment on AI rules face a unique opportunity. The diversity of AI rules enables governments to learn from both previous experience and each other. They can draw from their experience in other policy areas, including the expertise accumulated by national regulators. In addition, governments are currently experimenting to find effective AI rules. Studying and comparing different approaches to operationalising the OECD AI Principles is an opportunity for rapid learning.

The urgency of international alignment on AI rules is underestimated: multidimensional divergence can amplify the risk of digital fragmentation. The global digital economy is already struggling with differing national rules on data transfers. For AI, such differences multiply because they can occur within each pertinent policy area. It is imperative that governments study different regulatory approaches to start formulating best practices and to avoid fragmentation arising from AI rules.

 

When governments pursue the coordination of AI rules, they should approach it like a game of multidimensional chess:

  • Understand how the pieces move, by knowing the relevant policy areas in AI rules and their particularities.

  • Be aware of all the dimensions, by differentiating between the high-level OECD principles and the granular requirements that implement them in national AI rules.

  • Know their counterparts, by studying and learning from national regulatory approaches.

 

To support preparation for this complex chess game, the DPA provides:

  • An analytical series synthesising our findings on two further levels: 

    • OECD Principle level: Which requirements are used to implement each principle?

    • Requirement level: What are the differences within the requirements that implement the same principle?

  • CLaiRK: A suite of public tools to analyse global AI rules, which allows users to:

    • Navigate each AI rulebook with our tagging of requirements and OECD principles;

    • Compare different rulebooks with chromatic highlighting; and 

    • Explore the state of AI regulation using our high-accuracy chat.
