EU, Court of Justice of the EU, 21 June 2022, C-817/19 – Ligue des droits humains ASBL v Conseil des ministres

Case Overview

Country: Belgium

Deciding Body: CJEU

Area: Migration

User: Public

Case Name: C-817/19 – Ligue des droits humains ASBL v Conseil des ministres

Authority (English): Court of Justice of the EU

Technology: Risk Assessment

Provider: Public

Decision Date: 21 June 2022

Authority (Original): Court of Justice of the EU

Grounds for Decision: EU Law, Human Rights Law

Legal Requirement: Lawfulness, Human Oversight, Transparency, Remedial Fairness

Case Summary

The case originates from a reference for a preliminary ruling on the interpretation of Directive 2016/681 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime (PNR Directive), and on its validity in light of Articles 7 and 8 of the CFREU. The PNR Directive regulates the use of PNR data from passengers on extra-EU flights to prevent, detect and prosecute terrorist offences and serious crime. For this purpose, passengers' PNR data are analysed by automated means to identify persons who require further examination by the authorities (Article 6 PNR Directive). In the judgment, the Court recalls Opinion 1/15 and the requirements it set for the risk assessment of PNR data, particularly the principles of non-discrimination, reliability and specificity of pre-established models and criteria (§106), the need for a connection between the use of the data and the objectives pursued (§118), and the requirement of reasonable suspicion justifying follow-up actions (§219). In this sense, the Court also clarifies that the reliability of pre-established models and criteria means taking into account not just incriminating but also exonerating circumstances (§200). The Court also highlights that the obligation of individual review of positive results by non-automated means requires Member States to provide their national authorities with the material and human resources to carry out such review (§180).

Additionally, the Court considers, for the first time, the compatibility of automated analyses based on AI systems with the right to an effective remedy. According to the judgment, the issues are twofold. Firstly, the use of "artificial intelligence technology in self-learning systems ('machine learning'), capable of modifying without human intervention or review the assessment process" does not provide sufficient certainty either for the human reviewer or for the data subject and should, therefore, be precluded (§194). Secondly, opacity in AI systems prevents understanding the reasons "why a given program arrived at a positive match", hence depriving data subjects of their right to an effective judicial remedy enshrined in Article 47 of the Charter (§195). In this regard, the Court sets out transparency rights for data subjects to foster their right to an effective remedy against decisions based on automated analyses. Firstly, in administrative procedures, the person concerned must be able to "understand how those criteria and those programs work, so that it is possible for that person to decide with full knowledge of the relevant facts whether or not to exercise his or her right to the judicial redress" (§210). Secondly, in the context of judicial redress, the person and the court involved "must have had an opportunity to examine both all the grounds and the evidence on the basis of which the decision was taken including the pre-determined assessment criteria and the operation of the programs applying those criteria" (emphasis added, §211).

Access to the full judgment

Further notes on contested technology

  • → Partly Automated Decision
  • → The technology is deployed

Author of the case note

Francesca Palmiotto