The criminal justice system represents one of the central areas of state activity, ensuring public order, preventing violations of fundamental rights, and detecting, investigating, prosecuting and punishing criminal offences. It gives the authorities significant intrusive and coercive powers, including surveillance, arrest, search and seizure, detention, and the use of physical and even lethal force.
Data processing tools are increasingly being used in criminal justice systems. The most advanced systems use predictive algorithms to inform decision-making in areas including policing patterns, bail and sentencing. They have in many ways proved effective and are often valued by the authorities that use them.
There are, however, grounds for concern. These systems are usually provided by private companies, in which case the algorithms are commercial secrets: "black boxes" that cannot be subjected to public scrutiny. The quality of an algorithm's output depends on the quality of its input data: if the input data reflect, for example, racial bias, however inadvertently, so will the output, despite the algorithm's apparent neutrality and objectivity. Decision makers may be reluctant to depart from recommendations generated by algorithms, to the detriment of the often important role of individual judgment and discretion. Police departments may also lose control over their own data, becoming dependent on the private companies that have acquired it, with little choice but to maintain contractual relations whatever the cost.
The Parliamentary Assembly should examine the role of algorithms and artificial intelligence in criminal justice systems from the perspective of Council of Europe standards on human rights and the rule of law, with a view to making recommendations, as appropriate, to member States and to the Committee of Ministers for further action.