Proposed legislation to expand surveillance powers risks transforming the Olympics into an assault on privacy.
This month, French lawmakers are expected to pass legislation for the 2024 Paris Olympics, which, for the first time in France’s history, will permit mass video surveillance powered by artificial intelligence (AI) systems.
When governments embark on the slippery slope of expanding surveillance powers, there are dire consequences for fundamental human rights, including the rights to privacy, equality and non-discrimination, as well as freedom of expression and peaceful assembly. Under the guise of ensuring security and fighting terrorism, the French authorities will be able to monitor the movements of millions of people from around the world, whether they are heading to or near stadiums, or using public transport to and from the venues of this grand sporting event.
The need for security during the Games is understandable, but transparency and legal justification are needed at every step. Any proposal concerning security must comply with fundamental rights. International human rights law still applies to the Olympics, and rigorous review of such measures is vital.
So far, the bill fails to demonstrate how such AI-powered video surveillance would be consistent with human rights standards. The French government has not shown how the measures meet the principle of proportionality, nor what safeguards would prevent a permanent surveillance infrastructure, such as privacy protections, strict purpose limitation and data minimisation.
This is a pernicious, blanket application of AI-driven mass surveillance that cannot be justified. The human rights threats posed by AI development and usage by private companies and public authorities in the European Union are well documented. The technology is used to the detriment of marginalised groups, including migrants, and Black and Brown people. In an open letter initiated by the European Center for Not-for-Profit Law, 38 civil society organisations, including Amnesty International, have called on French policymakers to reject the draft legislation allowing invasive surveillance, as it would pose a monumental threat to fundamental rights and freedoms.
The draft legislation would subject spectators heading to sporting events in Paris to unjustifiable surveillance, from ubiquitous fixed CCTV cameras to drones set to detect “abnormal or suspicious” activity in crowds. Such overly broad definitions must be contested, and we must ask ourselves some urgent questions: Who sets the norm for what is “normal”? Officials who control the designations of “abnormal or suspicious” activities also have the power to exacerbate a chilling effect on dissent and protest, and to supercharge discrimination against communities already targeted.
States have used major sporting events to introduce and embed a panopticon of surveillance measures, moving societies toward an Orwellian dystopia. While French authorities claim that this is a short-term experimental move, Amnesty International fears that this bill will silently extend mass surveillance and police powers permanently in France.
The London Olympics of 2012 stands as a vivid example of how states have used major sporting events to install and expand intrusive, permanent and oppressive surveillance measures. In 2017, at the UEFA Champions League final in Cardiff, the South Wales Police used facial recognition cameras and wrongfully flagged 2,000 people as possible criminals, showing how such measures are intrusive and unreliable.
At Amnesty International, we have extensively documented how thousands of facial recognition-capable CCTV cameras have been deployed across New York City – most of them concentrated in communities of colour, amplifying racially discriminatory policing. The technology has led to the harassment of Black Lives Matter protesters and wrongful arrests of predominantly Black residents.
Not only is this bill a dangerous step for privacy and human rights, but it also betrays the very spirit of the European Union’s (EU) AI Act – a globally significant piece of legislation that aims to regulate AI and protect fundamental rights in the EU, of which France is an influential member.
France’s plan to deploy such staggering measures during the Olympic Games could shape how AI systems and mass surveillance are regulated and governed in the EU. Amnesty International believes that the EU, through its AI Act negotiations, should put an end to rampant, abusive and discriminatory artificial intelligence-based practices, including all facial recognition systems used for mass surveillance.
Together with a coalition of civil society actors campaigning for a human-rights-compliant European AI Regulation, Amnesty International has called for a complete ban on facial recognition technologies that enable mass and discriminatory surveillance, as well as systems that categorise people based on protected characteristics or gender identity. We have also called for the prohibition of emotion recognition systems that claim to infer people’s emotions and mental states, given these technologies’ lack of scientific validity and their extreme intrusiveness.
As an EU member state, France would have to abide by the EU’s AI regulation. This new bill will bring French law into direct conflict with the pending EU legislation. In the meantime, as an influential member state, France is attempting to lower the high bar that the EU AI Act aims to set for the protection of human rights.
If France goes ahead with legalising mass surveillance at the national level, one of the biggest sporting events on Earth risks becoming the stage for one of the most significant abuses of the right to privacy globally.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.