“AI Human Rights” are being negotiated, you should be interested

It is a text on artificial intelligence that has so far been overshadowed by the “AI Act”, the European regulation on AI, but which deserves our full attention. On March 14, the fifty countries meeting in Strasbourg to discuss the “Framework Convention on AI” reached a provisional agreement on what is considered the very first international treaty on AI.

Within the Council of Europe – an institution which has nothing to do with the European Union – the United States, Canada, Japan, the European Union and many other countries have been trying for almost two and a half years to agree on fourteen pages devoted to human rights and what could be called “ethical AI”.

“Some AI-related activities could undermine human dignity and individual autonomy, human rights, democracy and the rule of law,” they write in the preamble. There are in fact “risks of discrimination” and of “misuse of AI” – such as systems “used for repressive purposes, arbitrary or illegal surveillance and censorship,” they add.

What is this new text on AI?

This is the very first international treaty on AI, developed by the Council of Europe (not to be confused with the “Council”, the body representing the 27 countries of the European Union), an international institution with 46 member states whose objective is to protect human rights. Although an agreement has been reached within its “artificial intelligence committee”, it must still be endorsed by its “committee of ministers”, a step expected to take place in May.

Unlike the recent United Nations resolution calling for artificial intelligence to be regulated, this text will ultimately be binding. But it will have to follow a long process to become applicable: it must be ratified by each signatory state, then transposed into national law. Once all these steps have been completed, it could apply to the European Union, but also to the United States, Australia, Canada, Japan, Mexico, Costa Rica, Argentina… and to any other country that wishes to join.

Its goal is “to align the development, design and application of artificial intelligence with the principles of the Council of Europe,” underlines its Secretary General, Marija Pejčinović Burić, in a press release. For its drafters, artificial intelligence – which includes both generative AI like ChatGPT and predictive AI, used for example to make recommendations on social networks – must not infringe on human rights, democracy and the rule of law.

The treaty aims to fill a void. So far, the rules have mainly been defined by the companies that develop these AIs – such as OpenAI, Google and Mistral. And respect for human rights is far from being among their priorities. “The idea is not to wait for their systems to be put on the market (and accessible to the general public, editor’s note), it is to put in place democratic rules as quickly as possible and upstream,” explains Katharina Zügel, Policy Manager at the Forum on Information and Democracy, whom we interviewed.

Among the principles to be respected are human rights, democracy, human dignity, transparency, equality and non-discrimination, respect for the rules on personal data and privacy, as well as the idea of safe innovation. For example, artificial intelligence systems should not be used to “undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of separation of powers, respect for judicial independence and access to justice”.

What is the difference from the AI Act?

Unlike the European AI regulation, this is an international treaty. “Its scope goes beyond the European Union, since the negotiations involve countries such as the United States, which is home to many companies in the sector, Japan and Canada, all of which can sign the agreement,” specifies Katharina Zügel, who follows the text closely.

Another difference: unlike the AI Act, which lists over 459 pages the rules that AI tools must respect to be placed on the market, the framework convention is a fairly short document. The ten pages of the provisional agreement of March 14 were published by a former EURACTIV journalist on LinkedIn, then by Context. The text defines major principles, standards and rights to be respected in the creation, application and development of AI. But it is up to the states that ratify it to transpose it into their own law and to tackle the question of implementation measures – even if the convention provides for a monitoring mechanism.

The latter could draw on the recommendations published last February by the Forum on Information and Democracy, which proposes measures to implement the principles of equality and non-discrimination. Among them, we find, for example, “making the teams developing AI more inclusive and diverse, deciding with civil society and researchers on the choice of data used to train the AI, implementing impact analyses of these systems…”, lists its policy manager.


Will it be applicable to the private sector?

On paper, the intention of the treaty is clear, but in practice, would private companies that develop AI tools really be required to apply this text? This point has been hotly debated within the institution. In the initial version, the drafters proposed applying the convention to the private and public sectors without distinction, with exceptions for national defense. But in the version published last December, the lines moved: an exception for the private sector was introduced.

This alarmed rights defenders, including NGOs such as the Human Rights League, Reporters Without Borders, Feminists Against Cyberharassment, Access Now and Public Eye. In an open letter, they deplored this possible exclusion of Big Tech and AI-sector companies. This would amount to “emptying the convention of its substance,” offering “little meaningful protection to individuals who are increasingly subject to powerful AI systems, prone to bias, human manipulation and the destabilization of democratic institutions,” they wrote.

The provisional agreement of March 14 ultimately opted for a middle ground: the private sector is not excluded by default, but the signatory states will be able to choose how the convention applies to it. Negotiators proposed that countries could choose between:

  • applying the convention as it stands to private actors, or
  • taking “other appropriate measures” to achieve the objectives and purposes of the text.

“It is a huge missed opportunity,” regrets Katharina Zügel. “It is obviously important to have this convention applicable to the public sector.” But for the private sector, if the provisions of the provisional agreement become final, “each country will decide which rules apply to businesses” – that is, to the very players who develop AI.

Exclusion of the defense sector has also been a source of tension. Here too, the NGOs asked that the convention at least require AI used for national security purposes – military applications or those used by intelligence services – to respect international law. Some countries, for their part, defended the idea of a general exemption. In the end, the first approach was adopted: activities related to the protection of national security will not be required to comply with the Framework Convention, but they must be carried out in accordance with international law.

Will these compromises be maintained in the final version of the text? The next stage of its adoption will take place next May, before the institution’s Committee of Ministers.
