Justice by algorithm – The role of artificial intelligence in policing and criminal justice systems
Author(s): Parliamentary Assembly

Text adopted by the Standing Committee, acting on behalf of the Assembly, on 22 October 2020 (see Doc. 15156, report of the Committee on Legal Affairs and Human Rights, rapporteur: Mr Boriss Cilevičs). See also Recommendation 2182 (2020).
1. Artificial intelligence (AI) applications
can now be found in many spheres of human activity, from pharmaceutical
research to social media, agriculture to online shopping, medical
diagnosis to finance, and musical composition to criminal justice.
They are increasingly powerful and influential, and the public is
often unaware of when, where and how they are being used.
2. The criminal justice system represents one of the key areas
of the State’s responsibilities, ensuring public order and preventing
violations of various fundamental rights by detecting and investigating
criminal offences, and prosecuting and punishing their perpetrators.
It gives the authorities significant intrusive and coercive powers,
including surveillance, arrest, search and seizure, detention, and
the use of physical and even lethal force. It is no accident that
international human rights law requires judicial oversight of all
of these powers: effective, independent, impartial scrutiny of the
authorities’ exercise of criminal law powers with the potential
to interfere profoundly with fundamental human rights. The introduction
of non-human elements into decision making within the criminal justice
system may thus create particular risks.
3. If the public is to accept the use of AI and enjoy the potential
benefits that AI can bring, it must have confidence that any risks
are being properly managed. If AI is to be introduced with the public’s
informed consent, as one would expect in a democracy, then effective,
proportionate regulation is a necessary precondition.
4. Regulation of AI, whether voluntary self-regulation or mandatory
legal regulation, should be based on universally accepted and applicable
core ethical principles. The Parliamentary Assembly considers that
these principles can be grouped under the following broad headings:
4.1 transparency, including accessibility
and explicability;
4.2 justice and fairness, including non-discrimination;
4.3 human responsibility for decisions, including liability
and the availability of remedies;
4.4 safety and security;
4.5 privacy and data protection.
5. The Assembly welcomes Committee of Ministers Recommendation
CM/Rec(2020)1 on the human rights impacts of algorithmic systems,
along with its accompanying guidelines, and the recommendation of
the Council of Europe Commissioner for Human Rights entitled “Unboxing
Artificial Intelligence: 10 steps to protect Human Rights”. The
Assembly endorses the general proposals made in these texts for
application also in the areas of policing and criminal justice systems.
6. The Assembly notes that a large number of applications of
AI for use by the police and criminal justice systems have been
developed around the world. Some of these have been used, or their
introduction is being considered, in Council of Europe member States.
The applications include facial recognition, predictive policing,
the identification of potential victims of crime, risk assessment
in decision making on remand, sentencing and parole, and identification
of “cold cases” that could now be solved using modern forensic technology.
7. The Assembly finds that there are many ways in which the use
of AI in policing and criminal justice systems may be inconsistent
with the above-mentioned core ethical principles. Of particular
concern are the following:
7.1 AI
systems can be provided by private companies, which may rely on
their intellectual property rights to deny access to the source
code. A company may even acquire ownership of data being processed
by the system, to the detriment of the public body that employs
its services. The users and subjects of a system may not be given
the information or explanations necessary to have a basic understanding
of its operation. Certain processes involved in the operation of
an AI system may not be fully penetrable to human understanding.
Such considerations raise transparency (and, as a result, responsibility/accountability)
issues;
7.2 AI systems are trained on massive datasets, which may
be tainted by historical bias, including through indirect correlation
between certain predictor variables and discriminatory practices
(such as postcode being a proxy identifier for an ethnic community
historically subject to discriminatory treatment). This is a particular
concern in relation to policing and criminal justice, because of
both the prevalence of discrimination on various grounds in this
context and the significance of the decisions that may be taken. The
apparent mechanical objectivity of AI may obscure this bias (“techwashing”)
and reinforce or even perpetuate it. Certain AI techniques may not
be readily amenable to challenge by subjects of their application.
Such considerations raise issues of justice and fairness;
7.3 resource constraints, time pressure, lack of understanding,
and deference to or reluctance to deviate from the recommendations
of an AI system may lead police officers and judges to become overly reliant
on such systems, in effect abdicating their professional responsibilities.
Such considerations raise issues of responsibility for decision
making;
7.4 these considerations also affect one another. Lack of
transparency in an AI application reduces the ability of human users
to take fully informed decisions. Lack of transparency and uncertain
human responsibility undermine the ability of oversight and remedial
mechanisms to ensure justice and fairness;
7.5 the application of AI systems in separate but related
contexts, especially by different agencies relying sequentially
on one another’s work, may have unexpected, even unforeseeable cumulative impacts;
7.6 the addition of AI-based elements to existing technology
may also have consequences of unforeseen or unintended gravity.
8. The Assembly concludes that, whilst the use of AI in policing
and criminal justice systems may have significant benefits if it
is properly regulated, it risks having a particularly serious impact
on human rights if it is not.
9. The Assembly therefore calls upon member States, in the context
of policing and criminal justice systems, to:
9.1 adopt a national legal framework to regulate the use of
AI, based on the core ethical principles mentioned above;
9.2 maintain a register of all AI applications in use in the
public sector and refer to this when considering new applications,
so as to identify and evaluate possible cumulative impacts;
9.3 ensure that AI serves overall policy goals, and that policy
goals are not limited to areas where AI can be applied;
9.4 ensure that there is a sufficient legal basis for every
AI application and for the processing of the relevant data;
9.5 ensure that all public bodies implementing AI applications
have internal expertise able to evaluate and advise on the introduction,
operation and impact of such systems;
9.6 meaningfully consult the public, including civil society
organisations and community representatives, before introducing
AI applications;
9.7 ensure that every new application of AI is justified,
its purpose specified and its effectiveness confirmed before being
brought into operation, taking into account the particular operational
context;
9.8 conduct initial and periodic, transparent human rights
impact assessments of AI applications, to assess, amongst other
things, privacy and data protection issues, risks of bias/discrimination
and the consequences for individuals of decisions based on the AI’s
operation, with particular attention to the situation of minorities
and vulnerable and disadvantaged groups;
9.9 ensure that the essential decision-making processes of
AI applications are explicable to their users and those affected
by their operation;
9.10 only implement AI applications that can be scrutinised
and tested from within the place of operation;
9.11 carefully consider the possible consequences of adding
AI-based elements to existing technologies;
9.12 establish effective, independent ethical oversight mechanisms
for the introduction and operation of AI systems;
9.13 ensure that the introduction, operation and use of AI
applications can be subject to effective judicial review.