Need for democratic governance of artificial intelligence
- Author(s): Parliamentary Assembly
- Origin: Text adopted by the Standing Committee, acting on behalf of the Assembly, on 22 October 2020 (see Doc. 15150, report of the Committee on Political Affairs and Democracy, rapporteur: Ms Deborah Bergamini). See also Recommendation 2181 (2020).
1. Technology has always had a strong
impact on the course of human history. Yet, the pace of technological
progress has never been as swift, and its effects on humans never
as direct, tangible and wide-ranging as they are now, on the verge
of the fourth industrial revolution. Artificial intelligence (AI), the key driver of this revolution, is broadly considered to be a determining factor in the future of humanity, as it will substantially transform individual lives and have an impact on human communities.
2. AI-powered devices are already widely present in our daily
lives and carry out multiple tasks previously fulfilled by individuals,
both in a personal and an official capacity. Predictive algorithms,
inherent to AI, are frequently deployed for important decisions,
such as university admissions, loan decisions and human resources
management, but also for border control (including at airports)
and crime prevention (through predictive policing practices and
the use, within the criminal justice system, of risk-assessment instruments to predict repeat offending). As all our societies are struggling
to fight the ongoing Covid-19 pandemic, AI is also used to enhance
pharmaceutical research and help analyse medical data.
3. However, the long-term effects of AI on humans and society
are still far from being clear. While AI may generate great opportunities
to advance economic and social progress, it also presents a series
of complex challenges. On the one hand, it is hoped that AI will
bring about a substantial increase in productivity and economic
growth, scientific breakthroughs, improvements in healthcare, higher
life expectancy, security and ever-increasing convenience. On the
other hand, there are fears that AI might severely disrupt labour
markets around the globe, lead to greater inequality of income and wealth, deepen social inequality, and jeopardise social and political stability, as well as international security.
4. AI-based technologies have an impact on the functioning of
democratic institutions and processes, as well as on the social
and political behaviour of citizens. Their use may have both beneficial and damaging effects on democracy. Indeed, the rapid integration
of AI technologies into modern communication tools and social media
platforms provides unique opportunities for targeted, personalised
and often unnoticed influence on individuals and social groups,
which different political actors may be tempted to use for their
own benefit.
5. On the positive side, AI can be used to improve government
accountability and transparency, help fight corruption and produce
many benefits for democratic action, participation and pluralism,
making democracy more direct, efficient and responsive to citizens’
needs. AI-based technologies can broaden the space for democratic
representation by decentralising information systems and communication
platforms. AI can strengthen informational autonomy for citizens,
improve the way they collect information about political processes
and help them participate in these processes remotely by facilitating
political expression and providing feedback channels with political
actors. It can also help to establish greater trust between the
State and society and between citizens themselves.
6. However, AI can be – and reportedly is – used to disrupt democracy
through interference in electoral processes, personalised political
targeting, shaping voters’ behaviour and manipulating public opinion. Furthermore, AI has seemingly been used to amplify the spread of misinformation, propaganda and hate speech and to reinforce “echo chambers”, thus eroding critical thinking and contributing to rising populism and the polarisation of democratic societies.
7. Moreover, the broad use by States and private actors of AI-based
technologies to control individuals, such as the automated filtering
of information amounting to censorship, mass surveillance using
smartphones, the gathering of personal data and the tracking of individuals’ activity online and offline, may lead to the erosion of citizens’ psychological integrity, civil rights and political freedoms and to the emergence
of digital authoritarianism – a new social order competing with
democracy.
8. The concentration of data, information, power and influence
in the hands of a few major private companies involved in developing
and providing AI-based technologies and services, and the growing dependence
of individuals, institutions and society as a whole on these services,
are also a cause for concern. These big companies no longer serve
as simple channels of communication between individuals and institutions
but play an increasingly prominent role on their own, controlling
and filtering information flows, exercising automated censorship
of content published on social media, setting the agenda and shaping
and transforming social and political models. Acting on the basis
of business models that prioritise the profits of shareholders over
the common good, these actors may be a threat to democratic order
and should be subject to democratic oversight.
9. The Assembly notes that, in recent years, governments, civil
society, international institutions and companies have been engaged
in extensive discussions with a view to identifying a set of commonly
accepted principles on how to respond to concerns related to AI
use. It welcomes the fact that the Council of Europe, as a leading
human rights organisation, has been actively involved in these discussions
on the future of AI and its governance, and in particular welcomes
the contribution to this process by the Committee of Ministers,
the Commissioner for Human Rights and the intergovernmental co-operation
bodies.
10. The Assembly considers that self-regulatory ethical principles
and policies voluntarily introduced by private actors are inadequate
and insufficient tools to regulate AI, as they do not necessarily
lead to democratic oversight and accountability. Europe needs to
ensure that the power of AI is regulated and used for the common
good.
11. Therefore, the Assembly strongly believes that there is a
need to create a cross-cutting regulatory framework for AI, with
specific principles based on the protection of human rights, democracy
and the rule of law. Any work in this area needs to involve all
stakeholders, including, in particular, citizens and major private companies
involved in developing and providing AI-based technologies and services.
12. The Council of Europe, as a leading international standard-setting
organisation in the field of democracy, must play a pioneering role
in designing procedures and formats to ensure that AI-based technologies
are used to enhance, and not to damage, democracy.
13. In this context, it welcomes the setting up, by the Committee
of Ministers, of an Ad hoc Committee on Artificial Intelligence
(CAHAI), to examine, based on broad multistakeholder consultations,
the feasibility and potential elements of a legal framework for
the design, development and application of AI. It calls on the Council of Europe member States and observer States participating in CAHAI to work together towards a legally binding instrument aimed at ensuring democratic governance of AI and, where necessary, to complement it with sectoral legal instruments.
14. The Assembly deems that such an instrument should:
14.1 guarantee that AI-based technologies
are designed, developed and operated in full compliance with, and
in support of, the Council of Europe’s standards on human rights,
democracy and the rule of law;
14.2 promote a common understanding and provide for the respect
of key ethical principles and concepts and the implementation of
the above-mentioned standards, including:
14.2.1 transparency,
including accessibility and explicability;
14.2.2 justice and fairness, including non-discrimination;
14.2.3 human responsibility for decisions, including liability
and the availability of remedies;
14.2.4 safety and security;
14.2.5 privacy and data protection;
14.3 seek to maximise the possible positive impact of AI on
the functioning of democratic institutions and processes, including,
inter alia, to:
14.3.1 improve
government accountability;
14.3.2 help fight corruption and economic crime;
14.3.3 facilitate democratic action, participation and pluralism;
14.3.4 make democracy more direct, efficient and responsive to
citizens’ needs;
14.3.5 broaden the space for democratic representation by decentralising
information systems and communication platforms;
14.3.6 strengthen informational autonomy for citizens, improve
the way they collect information about political processes and help
them participate in these processes remotely by facilitating political
expression and providing feedback channels with political actors;
14.3.7 improve transparency in public life and help to establish
greater trust between the State and society and between citizens
themselves;
14.4 contain provisions to prevent and/or limit the risk of AI being misused to weaken and disrupt democracy, including, inter alia, through:
14.4.1 interference
in electoral processes, personalised political targeting, shaping
voters’ political behaviours and manipulating public opinion;
14.4.2 amplifying the spread of misinformation and propaganda and reinforcing “echo chambers”;
14.4.3 eroding individual and societal critical thinking;
14.4.4 contributing to rising populism and the polarisation of
democratic societies;
14.5 contain provisions to limit the risks of the use of AI-based
technologies by States and private actors to control people, which
may lead to an erosion of citizens’ psychological integrity, civil
rights and political freedoms;
14.6 contain safeguards to prevent the threats to democratic
order resulting from the concentration of data, information, power
and influence in the hands of a few major private companies involved
in developing and providing AI-based technologies and services,
and the growing dependence of individuals, institutions and society
as a whole on these services, as well as provisions ensuring that the activity of such actors is subject to democratic oversight.
15. Furthermore, the Assembly believes that, in order to ensure
accountability, the legal framework to be put in place should provide
for an independent and proactive oversight mechanism, involving
all relevant stakeholders, which would guarantee effective compliance
with its provisions. Such a mechanism would require a highly competent
body (inter alia in technical,
legal and ethical terms), capable of following new developments in digital technology and evaluating their risks and consequences accurately and authoritatively.
16. With regard to algorithms and social media platforms, the
Assembly deems it necessary to:
16.1 make
more transparent the decision-making factors behind algorithmically
generated content;
16.2 give users more flexibility to decide how algorithms shape
their online experience;
16.3 urge platforms to conduct more systematic human rights
due diligence in order to understand the social impact of their
algorithms;
16.4 consider establishing an independent expert body to provide
oversight over technological platforms and the operation of their
algorithms;
16.5 tighten privacy controls on user data so that algorithms
have less ability to exploit data in the first place.