History so far: The global governance of artificial intelligence (AI) is growing increasingly intricate, even as countries attempt to manage AI within their borders in a variety of ways, from legislation to executive orders. Many experts (and the Pope) have called for a global treaty to this end, but the obstacles in its path are daunting.
What is the European Artificial Intelligence Convention?
While many documents enshrine ethical guidelines, soft-law tools and governance principles, none of them is binding, and none is on course to become a global treaty. Nor are any artificial intelligence treaty negotiations taking place anywhere at the global or regional level.
In this context, the Council of Europe (COE) took a major step by adopting, on May 17, the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law – known as the “Artificial Intelligence Convention”. The COE is an intergovernmental organisation created in 1949, currently with 46 member states, including the countries of the EU bloc; observer states such as the Holy See, Japan and the United States also took part in drafting the convention.
The agreement is a comprehensive convention covering the governance of artificial intelligence and its links to human rights, democracy and the responsible use of artificial intelligence. The Framework Convention will be opened for signature in Vilnius, Lithuania, on September 5.
What is the Framework Convention?
A ‘framework convention’ is a legally binding treaty that sets out the broader obligations and objectives of the Convention and establishes mechanisms for achieving them. The task of setting specific goals, where necessary, is left to subsequent agreements.
The agreements negotiated under a framework convention are called protocols. For example, the Convention on Biological Diversity is a framework convention, while the Cartagena Protocol on Biosafety is its protocol on living modified organisms. Similarly, a ‘Protocol on Artificial Intelligence Risks’ may be drawn up in the future under the Artificial Intelligence Convention.
The framework approach is useful because it allows for flexibility even as it encodes the basic principles and processes by which the goals are to be achieved. The parties to the Convention are free to decide on how to achieve the goals, depending on their capabilities and priorities.
The Artificial Intelligence Convention could catalyse negotiations on similar regional conventions elsewhere. On the other hand, since the United States participated in the negotiations as a COE observer state, the convention could indirectly influence AI governance in the US as well, which is significant since the country is currently a hotbed of AI innovation. A related, if partial, disadvantage of the Artificial Intelligence Convention is that it can be seen as being shaped more by European values and norms in technology governance.
What is the scope of the convention?
Article 1 of the Convention states:
“The provisions of this Convention are intended to ensure that activities within the lifecycle of artificial intelligence systems are fully compatible with human rights, democracy and the rule of law.”
The definition of artificial intelligence is similar to that in the EU AI Act, which is based on the OECD definition of an AI system: “An artificial intelligence system is a machine-based system that, for explicit or implicit objectives, infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments.”
Article 3 states:
“The scope of this Convention covers the following activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law:
a. Each Party shall apply this Convention to the activities within the lifecycle of artificial intelligence systems undertaken by public authorities, or private actors acting on their behalf.
b. Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered by subparagraph a, in a manner conforming with the object and purpose of this Convention.”
How does the text relate to national security?
The exclusion of the private sector from the scope of the Convention was a contentious issue, and the text reflects a compromise between two contrasting positions: complete exclusion of the private sector, and no exemption at all. Subparagraphs (a) and (b) of Article 3 give Parties flexibility in this matter but do not allow them to exempt the private sector entirely.
Moreover, the exceptions in Articles 3.2, 3.3 and 3.4 are broad in scope, covering the protection of national security interests; research, development and testing; and national defence, respectively. As a result, military applications of AI are not covered by the Artificial Intelligence Convention. While this raises concerns, it is a pragmatic move given the lack of consensus on regulating such applications. Even so, the exceptions in Articles 3.2 and 3.3, though broad, do not completely exclude the application of the Convention to national security and testing purposes, respectively.
Finally, the “general obligations” of the convention concern the protection of human rights (Article 4) and the integrity of democratic processes and respect for the rule of law (Article 5). While disinformation and fake news are not specifically addressed, Parties to the Convention are expected to act against them under Article 5, just as they are expected to assess the risks of artificial intelligence use and adopt mitigation measures.
In fact, the Convention also indicates (in Article 22) that Parties may go beyond the obligations it sets out.
Why do we need an AI convention?
The Artificial Intelligence Convention does not create new, AI-specific human rights. Instead, it affirms that the existing human rights and fundamental rights protected under international and national law must remain protected when AI systems are used. The obligations are aimed primarily at governments, which are expected to provide effective remedies (Article 14) and procedural safeguards (Article 15).
In summary, the Convention adopts a comprehensive approach to reducing the risks that the use of artificial intelligence systems poses to human rights, democracy and the rule of law. Its implementation certainly presents many challenges, especially at a time when AI regulatory regimes are not yet fully established and technology continues to outpace law and policy.
However, while the European concept of the rule of law can be debated, the convention itself is the need of the hour because of the balance it strikes between AI innovation and threats to human rights.
Krishna Ravi Srinivas is Assistant Professor of Law at NALSAR University of Law, Hyderabad, and Associate Fellow of CeRAI, IIT Madras.