The control of online communication: a threat to media pluralism, freedom of information and human dignity
- Author(s): Parliamentary Assembly
- Origin: Assembly debate on 23 June 2022 (24th sitting) (see Doc. 15537, report of the Committee on Culture, Science, Education and Media, rapporteur: Mr Frédéric Reiss). Text adopted by the Assembly on 23 June 2022 (24th sitting).
1. The Parliamentary Assembly holds
that communication policies must be open, transparent and pluralistic,
and that they must build on unhindered access to information of
public interest and the responsibility of those disseminating information
to society. It notes that online communication has become an essential
part of people’s daily lives and is concerned that a handful of
internet intermediaries are de facto controlling
online information flows. This concentration in the hands of a few
private corporations gives them huge economic and technological
power, in addition to the ability to influence almost every aspect
of people’s private and social lives.
2. There are questions about the capacity and willingness of
an economic and technological oligopoly to ensure diversity of information
sources and pluralism of ideas and opinions online; about the expediency
of entrusting artificial intelligence with the task of monitoring
online pluralism; and about the real capacity of existing legal frameworks
and democratic institutions to prevent the concentration of
economic, technological and informational power from being converted
into non-democratic political power. Indeed, as electoral communication
shifts to the digital sphere, whoever controls online communication
during election campaigns may become a formidable political force.
Voters' choices may be seriously distorted by misleading, manipulative
or false information.
3. Among the main risk factors in this context are: the lack
of transparency of new forms of online advertising, which can too
easily escape the restrictions applicable to advertising on traditional
media, such as those intended to protect children, public morals
or other social values; the fact that journalists, whose behaviour
is guided by sound editorial practices and ethical obligations,
are no longer the ones playing the gatekeeper role; and the growing
amount of disinformation available online, in particular when it
is strategically disseminated with the intent to influence election
results.
4. In contrast, WikiLeaks has developed a virtuous model of interrelation
between users and intermediaries. It is therefore
unacceptable that States tend to punish this horizontal and democratic
model of information sharing instead of encouraging it, with a
view to gradually implementing a comprehensive legal declassification
mechanism.
5. From an economic point of view, network effects and economies
of scale create a strong tendency towards market concentration.
In the context of oligopolistic competition driven by technology,
inefficiencies and market failures may come from the use of market
power to discourage the arrival of new competitors, from the creation
of barriers to switching services or from information asymmetries.
Therefore, to address the dominance of a few internet intermediaries
in the digital marketplace, member States should use antitrust legislation.
This may give citizens greater choice in deciding, to the extent possible,
which platforms are best placed to protect their privacy and dignity.
6. Innovative remedies to mitigate the power of internet
intermediaries include giving users, where possible, the option
of accessing, consulting and receiving services from third-party
providers of their choice, which would rank and/or deliver content
according to a classification previously set by the user and could
alert them to violent, shocking or otherwise dangerous
content.
7. Beyond the business model, crucial issues for internet intermediaries
and for the public are the quality and variety of information, and
the plurality of sources available online. Internet intermediaries
increasingly use algorithmic systems, which are helpful for searching
the internet, automatically creating and distributing content, identifying
potentially illegal content, verifying information published online
and moderating online communication. However, algorithmic systems
can be abused or used dishonestly to shape information, knowledge,
the formation of individual and collective opinions and even emotions
and actions. Coupled with the economic and technological power of
big platforms, this risk becomes particularly serious.
8. With the emergence of internet intermediaries, harmful content
spreads across the web at very high speed. Internet intermediaries
should be particularly mindful of their duty of care where they
produce or manage the content available on their platforms, or where
they play a curatorial or editorial role, while avoiding taking down
third-party content, except for clearly illegal content.
9. The use of artificial intelligence and automated filters for
content moderation is neither reliable nor effective. Big platforms
already have a long record of mistaken or harmful content moderation
decisions in areas such as terrorist or extremist content. Solutions
to policy challenges such as hate speech, terrorist propaganda and
disinformation are often multifactorial; therefore, mandating automated
moderation by law is an inappropriate and incomplete solution. It
is important to acknowledge and properly explain the role and necessary
presence of human decision makers, in addition to the participation
of users in the establishment and assessment of content moderation
policies.
10. Today there is a trend towards the regulation of social media
platforms. While increased democratic oversight is necessary, regulation
as enacted in practice often grants government authorities overly broad
powers and discretion over information flows, which endangers
freedom of expression. Lawmakers should aim at reinforcing transparency
and focus on companies’ due processes and operations, rather than
on content itself. Moreover, legislation should deal with “illegal
content” and avoid using broader notions such as “harmful content”.
11. If lawmakers choose to impose very heavy regulations on all
internet intermediaries, including new smaller companies, this might
consolidate the position of big actors which are already in the
market. In such a case, new actors would have little chance of entering
the market. Therefore, a gradual approach is needed, applying
different types of regulation to different types of
platforms.
12. The Parliamentary Assembly recalls that, in Recommendation
CM/Rec(2018)2 on the roles and responsibilities of internet intermediaries,
the Committee of Ministers of the Council of Europe indicates that any
legislation should clearly define the powers granted to public authorities
as they relate to internet intermediaries; and Recommendation CM/Rec(2020)1
on the human rights impacts of algorithmic systems confirms that
rule-of-law standards must be maintained in the context of algorithmic
systems.
13. Internet intermediaries must ensure a certain degree of transparency
of the algorithmic systems they use, because this may have an impact
on our freedom of expression. At the same time, in their capacity
as private companies, they should enjoy, without prejudice to effective
transparency and to human rights, their legitimate right to commercial
secrecy. Member States must strike a balance between the freedom
of private economic undertakings, with the right to develop their
own commercial strategies, including the use of algorithmic systems,
and the right of the public to communicate freely online, with access
to a wide range of information sources. They should also recognise
that content removal is not in itself a solution for societal harm, as
more rigorous content moderation may displace the problem of online
hate speech to less popular platforms rather than address its causes.
14. Internet intermediaries have the responsibility to ensure
the protection of users’ rights, including freedom of expression.
Therefore, member States should ensure internet intermediaries’
accountability for the algorithmic systems they develop and use
in the automated production and distribution of information, as well
as for their lines of funding and the policies they implement for creating
information flows and dealing with illegal content.
15. In particular, internet intermediaries should assume specific
responsibilities based on international standards and national legislation
regarding users’ protection against manipulation, disinformation, harassment,
hate speech and any expression which infringes privacy and human
dignity. The functioning of internet intermediaries and technological
developments behind their operation must be guided by high ethical principles.
It is from both a legal and ethical perspective that internet intermediaries
must assume their responsibility to ensure a free and pluralistic
flow of information online which is respectful of human rights.
16. When they deal with information, internet intermediaries have
to act in accordance with the principles set out in
Resolution 2382 (2021) “Media
freedom, public trust and the people’s right to know”. The right
to free and pluralist information is enhanced by adherence to journalists’
professional rules and ethics, which require sources to be cross-checked
and subjected to scrutiny. These standards were scrupulously respected
by Julian Assange, and member States must act in line with
Resolution 2300 (2019) “Improving
the protection of whistle-blowers all over Europe”. Each member
State must recognise, and ensure respect for, the right of journalists
to protect their sources, and develop an appropriate normative,
judicial and institutional framework to protect whistle-blowers
and those who facilitate whistle-blowing. It is unfair that this
right was not taken into account in the extradition ruling against
Julian Assange. In accordance with
Resolution 2317 (2020) “Threats
to media freedom and journalists’ security in Europe”, the detention
and criminal prosecution of Mr Assange constitute a dangerous precedent
for journalists. As the UN Special Rapporteur on Torture and Other
Cruel, Inhuman or Degrading Treatment or Punishment declared on
1 November 2019, Mr Assange’s extradition to the United States must
be barred and he must be promptly released.
17. Consequently, the Assembly calls on Council of Europe member
States to:
17.1 bring their legislation
and practice into line with Recommendation CM/Rec(2020)1 on the
human rights impacts of algorithmic systems, and Recommendation
CM/Rec(2018)2 on the roles and responsibilities of internet intermediaries;
17.2 consider whether the concentration of economic and technological
power in the hands of a few internet intermediaries can be properly
dealt with via general and already existing competition regulations
and tools;
17.3 use antitrust legislation to force monopolies to divest
a part of their assets and reduce their dominance in the digital
markets;
17.4 develop a gradual regulatory approach, applying different
types of regulation to different types of internet intermediaries,
with the aim of enabling new actors to enter the market rather than
pushing them out of it;
17.5 address the issue of anticompetitive conduct in digital
markets by strengthening the enforcement of regulations on mergers
and the abuse of dominant positions;
17.6 guarantee that any legislation imposing duties and restrictions
on internet intermediaries with an impact on users’ freedom of expression
is exclusively aimed at dealing with “illegal content”, thus avoiding
broader notions such as “harmful content”;
17.7 ensure that legislation does not permit purely automated
content moderation; in this context, encourage internet intermediaries,
via legal and policy measures, to:
17.7.1 allow users to
choose means of direct and efficient communication which do not
solely rely on automated tools;
17.7.2 ensure that, where automated means are used, the technology
is sufficiently reliable to limit the rate at which content
is wrongly classified as illegal;
17.8 guarantee that legally mandated content moderation provides
for the necessary presence of human decision makers, and incorporates
sufficient safeguards so that freedom of expression is not hampered;
17.9 encourage, via legal and policy measures, the participation
of users in the establishment and assessment of content moderation
policies;
17.10 ensure that regulations intended to guarantee transparency
of automated content moderation systems are based on a clear definition
of the information that is necessary and useful to disclose and of the public
interest that legitimises such disclosure;
17.11 support the drafting of, and respect for, a general framework
of internet intermediaries’ ethics, including the principles of
transparency, justice, non-maleficence, responsibility, privacy,
rights and freedoms of users;
17.12 encourage internet intermediaries, via legal and policy
measures, to counteract online hate speech by issuing warning messages
to those who spread it or by inviting users to review their
messages before sending them; encourage internet intermediaries
to incorporate such guidelines into their codes of conduct on hate
speech;
17.13 consider adapting election legislation and policies to
the new digital environment by reviewing provisions on electoral
communication; in this respect, reinforce accountability of internet
intermediaries in terms of transparency and access to data, promote
quality journalism, empower voters to evaluate electoral communication
critically and develop media literacy.