PACE is calling for national legal frameworks to regulate the use of Artificial Intelligence in police and criminal justice work, based on core principles of transparency, fairness, safety, privacy, and the clear attribution of human responsibility for all decisions in this area.
While the use of Artificial Intelligence by police, prosecutors and courts may have “significant benefits if it is properly regulated”, the Assembly’s Standing Committee said in a resolution based on a report by Boriss Cilevičs (Latvia, SOC), “it risks having a particularly serious impact on human rights if it is not”.
Such AI systems, in use or under development, include facial recognition, predictive policing, tools for identifying potential victims of crime, risk assessment of remand prisoners, support for sentencing and parole decisions, and the identification of “cold cases” that could now be solved using modern forensic technology.
The Assembly pointed to existing concerns, such as private companies denying access to the source code of their systems on intellectual property grounds, hoarding data, or failing to explain their systems fully to the public authorities who use them. Other concerns include AI systems being trained on massive datasets tainted by historical bias, leading to discrimination, or “tech-washing”, which obscures and therefore perpetuates bias.
“If the public is to accept the use of AI and enjoy its potential benefits, it must have confidence that any risks are being properly managed,” the parliamentarians said. “If AI is to be introduced with the public’s informed consent, as one would expect in a democracy, then effective, proportionate regulation is a necessary condition.”