C Explanatory memorandum
by Mr Stefan Schennach, rapporteur
1 Introduction
1. In December 2018, the Committee
on Social Affairs, Health and Sustainable Development (“the committee”)
discussed the opportunities and challenges that the spread of artificial
intelligence (AI) technologies is expected to bring into our lives
and the way we work. It then decided to further pursue the consideration
of this issue and tabled a motion for a resolution on the subject
(Doc. 14778). The motion points to the game-changing aspects of
AI technologies which “may offer countless new opportunities and
benefits for many”, but “may also significantly disrupt the current
patterns of work” and “affect workers’ rights”. We, as policy makers
at national and European level, have to take a strategic look at
the challenges in the making and propose adequate regulatory options.
When the motion was referred to the committee for report, I was appointed
rapporteur on 9 April 2019.
2. The committee then held exchanges of views with Ms Judith
Pühringer, Managing Director of Arbeit plus (Austria), on 14 May
2019 and with Ms Corinna Engelhardt-Nowitzki, Head of the Industrial
Engineering department at the University of Applied Sciences, Vienna,
on 3 December 2019.
I also
participated in the OECD Global Parliamentary Network meeting on
10-11 October 2019 and carried out a fact-finding visit – together
with our Committee’s rapporteur on “Artificial intelligence in health
care: medical, legal and ethical challenges ahead” – to the International
Labour Organization (ILO) and the World Health Organization (WHO) in
Geneva on 16-17 January 2020.
3. This report will seek to present a global picture of the opportunities
and challenges in the making for the world of work due to AI and
will look at the policy implications from the Council of Europe
perspective. It will review options for organising man-machine working
patterns and the existing examples of such practice so as to draw
lessons and recommendations for national and European decision-makers
with a view to smoothing the transition to a different world of
work and minimising disruptions in society. The report should also
explore which limits should not be transgressed in order to uphold
the fundamental rights of people at work, and how to address or
prevent potential inequalities, prejudices and stereotypes in this
context.
2 AI: what is it?
4. As the Council of Europe Commissioner
for Human Rights notes in her Recommendation “Unboxing Artificial
Intelligence: 10 steps to protect Human Rights” (May 2019),
there is no agreed definition of
AI. The term is commonly used to describe automated data-processing
techniques that significantly improve the ability of machines to
perform tasks requiring intelligence – something that has been a
nearly exclusive domain of humans so far. Equipped with algorithms,
modern machines and robots can effectively “learn” new things, change
the way they perform tasks and implement their own decisions without
any human intervention. The appendix to this report contains a tentative
description of AI, “machine learning” and “deep learning” concepts, as
well as an overview of key ethical principles for trustworthy AI,
from the Council of Europe perspective.
5. The Organisation for Economic Co-operation and Development
(OECD) and United Nations Conference on Trade and Development (UNCTAD)
have described AI as “the ability of machines and systems to acquire and
apply knowledge, and to carry out intelligent behaviour”. This definition
shows that AI comprises a variety of technologies capable of manipulating
objects and of performing cognitive tasks (such as sensing, reasoning, learning, making
decisions). In its factsheet on a Digital Single Market, the European
Commission refers to AI as “systems that show intelligent behaviour;
by analysing their environment they can perform various tasks with some
degree of autonomy to achieve specific goals”.
More recently, the European Union
described AI simply as “a collection of technologies that combine
data, algorithms and computing power”. Moreover, as one
ILO research paper notes, AI aims to replace humans in strenuous
mental tasks rather than physical ones, which were largely taken over
by previous waves of automation and robotisation.
3 Ethical
aspects of AI in general and in relation to human work
6. AI technologies and practical
applications are developing fast: they no longer belong to a science
fiction domain, and we are increasingly likely to encounter them
in daily life, sometimes even without realising it. Both commercial
and public entities already employ AI to analyse, predict, reinforce
and even control human behaviour via surveillance techniques. These technologies can assist
and facilitate our work and render it more efficient, but they can also
manipulate our decisions, or decisions affecting us, in the context
of employment.
7. Concerned about legal and ethical aspects of AI within the
existing human rights framework, the Council of Europe, through
its Ad Hoc Committee on Artificial Intelligence (CAHAI),
has undertaken a comprehensive mapping
exercise with a view to exploring the feasibility of a standard-setting
instrument, possibly a convention. Its inventory includes amongst
others the first European Ethical Charter on the use of AI in judicial
systems, the Guidelines on AI and data protection, the Declaration
by the Committee of Ministers on manipulative capabilities of algorithmic
processes, and the Study on human rights dimensions of automated
data processing techniques, as well as the above-mentioned recommendation
by the Commissioner for Human Rights and, more recently, Recommendation
CM/Rec(2020)1 of the Committee of Ministers to member States on
the human rights impacts of algorithmic systems. The Commissioner’s
recommendation refers to the need to “monitor the potential negative
impacts on the right to work and plan for mitigation, including
through schooling”. The Council of Europe’s overview of international
studies on ethical principles of AI has identified some core benchmarks
(see the appendix), notably transparency, justice and fairness,
responsibility, safety and security, and privacy.
8. Considering AI as a strategic technology that can benefit
society and the economy, the European Commission published a European
strategy (April 2018), a co-ordinated plan (December 2018) and a Communication
(April 2019)
putting
emphasis on human-centred development of AI “with the ultimate aim
of increasing human well-being”. The Commission’s High-Level Expert
Group on AI issued guidelines for trustworthy AI that stress seven
major requirements that AI applications should respect: human oversight, technical
safety, personal data governance, transparency, diversity and non-discrimination,
societal (and environmental) well-being, and accountability.
9. Since June 2019, these guidelines have been tested and assessed
by various stakeholders and individuals from both the private and
the public sector, and a white paper was issued in February 2020.
The latter singles out the issue of employment equality for all
sectors, underscoring that the use of AI for recruitment and in
situations affecting workers’ rights should always be treated as
“high-risk” and hence heightened regulatory requirements should
apply. In July 2020, the High-Level Expert Group launched the Assessment List
for Trustworthy AI (ALTAI) to help AI developers and users check
AI applications against the requirements of trustworthy AI. The
European Union has been advocating the elaboration of international
AI ethics guidelines, including through multilateral fora such as
the G7 and the G20; the latter endorsed “Principles on Human-centred
AI” in June 2019.
10. The Foresight Brief on “Labour in the age of AI” by the European
Trade Union Institute (ETUI)
warns about potential violations of human
dignity caused by AI-powered surveillance technologies in the workplace, which
illustrates the need to better protect the right to privacy and
personal data protection in line with requirements of the Council
of Europe Convention for the Protection of Individuals with regard
to Automatic Processing of Personal Data (ETS No. 108) and its amending
Protocol (CETS No. 223, “Convention 108+”) and the General Data
Protection Regulation (GDPR). The ETUI also calls on European countries
to guarantee the right to explanation of decisions made by AI, since
algorithmic decisions are based on large data sets that may reflect
human biases and prejudice, thus inheriting them and potentially
producing discriminatory decisions. We should recall that article
12 of the GDPR guarantees the right to obtain information that is understandable,
meaningful and actionable, while article 9.1.a. of Convention 108+
insists on every individual’s right “not to be subject to a decision
significantly affecting him or her based solely on an automated
processing of data without having his or her views taken into consideration”.
11. The OECD, for its part, adopted (on 22 May 2019) the first intergovernmental
policy guidelines on AI, which seek to uphold international standards
ensuring the design and operation of “robust, safe, fair
and trustworthy” AI systems that could deliver “the best outcomes
for all”. The OECD principles on AI have the support of the European
Commission and echo the latter’s guidelines for trustworthy AI.
Although not legally binding, they have a very strong potential
to become a global benchmark and to influence national legislation across
the world. As rapporteur, I note the OECD’s recommendation to governments
to “equip people with the skills for AI and support workers to ensure
a fair transition”.
12. In this context, the ILO’s Global Commission on the Future
of Work has proposed a human-centred strategy to cushion the impact
of AI. It urges investment in people’s skills, lifelong learning
(acquiring skills, reskilling and upskilling) and institutions for
learning, as well as in decent and sustainable work. The latter aspects
imply additional efforts to ensure “work with freedom, dignity,
economic security and equality”. This is a tall order: as we saw
during the exchange of views with the representative of Arbeit plus,
the first algorithmic applications used in Austria by employment
agencies do not have the trust of civil society or of the social partners
as to their capacity to make adequate assessments of human potential
and motivation to work, and risk perpetuating gender bias, entrenched
stereotypes and inequalities.
13. Some States have already published national strategies for
responsible AI; they include France, Germany, Italy, New Zealand
and the United States of America. The Italian AI strategy is viewed
as one of the most comprehensive: it takes a human-centric
approach, considering that AI should be designed as a service to
humans and should not seek to replace them, but merely to enhance
their capacities and lives. This strategy also highlights the need
for strong government guidance and regulation of the labour market
to preserve employment quality, to mainstream sustainability (notably
inclusiveness and equal opportunities) and to prevent high levels
of unemployment; it also pleads for systemic changes in the education
system so as to provide for robust lifelong learning paths for workers.
4 Growing
job insecurity and the transformation of jobs
14. AI clearly crystallises fears
that it may replace humans in more jobs than
it creates new ones, and thus uncertainty about how we are
going to earn our living in a world with super-smart robots and
“black-box” applications everywhere. Various studies point to a
potential rise in income and wealth inequalities as a result of
increased automation. Some reports find that up to 35% of workers
in the United Kingdom and 47% in the United States risk being ousted
from their jobs by AI over the next 20 years or so.
The
World Bank predicts an even gloomier scenario for developing
countries, with 69% of jobs at risk in India and 77% in China,
where multinational companies may be tempted to rely more and more on
automation despite an abundance of labour that is still cheap but gradually becoming more expensive.
Not all researchers are alarmist, though, pointing rather to job
displacement and transformation. The OECD, for instance, estimates
that about 14% of jobs in its member countries are “highly automatable”,
whilst another 32% are likely to be substantially transformed due
to advanced technologies.
15. ILO research shows that, unsurprisingly, businesses tend to
introduce smart technologies for highly skilled tasks as a substitute
for workers if such changes are profitable and 24-hour service is
necessary; AI applications also promise to optimise the
performance of low-skilled workers by speeding up their work and
reducing errors. As jobs consist of a set of tasks, if some
of these tasks are automated, job profiles might change by adding
new tasks or modifying existing ones instead of suppressing a (human)
job entirely. According to many observers, AI has the potential
of a new “general purpose technology” (such as electricity, computerisation, the
Internet) that could permeate our lives via multiple applications
in different activities and occupations. At this stage, the ILO
observes that there is little ‘hard evidence’ of net job displacements
or actual job destruction. This, however, should not stop policy
makers from anticipating deep, wide and multi-faceted impact of
AI on human jobs.
16. The OECD foresees a very uneven spread of AI applications
across countries, sectors and jobs. The most advanced AI systems
appear still to be narrow in scope, insofar as they are designed
to carry out specific problem-solving or reasoning tasks. This is,
however, no consolation to translators, whose jobs are increasingly threatened
by highly accurate, quasi-instantaneous and cheap, if not free,
translation applications such as Google Translate (covering
more than 130 languages), DeepL, Dict Box, Microsoft Translator,
Day Translations, Waygo and iTranslate, to mention just a few of the most
widely used systems globally. This will probably not render translators’
jobs totally obsolete but might gradually transform them into proof-reading
jobs, at least for some, to make sure that important aspects do
not get “lost in [machine] translation”.
17. A study on the ethics of artificial intelligence by the Scientific
Foresight Unit (STOA Panel) of the European Parliament
predicts that AI and automation
may exacerbate existing social and economic inequalities, emphasising
the disproportionate impact on young people entering the labour
market with little technical experience and minorities lacking high-skill
training.
18. This job-replacement trend can already be observed
in European pharmacies with pharmaceuticals-dispensing robots, whilst
in the USA the trend is well advanced and the pharmaceutical
robots’ market is estimated to be worth over USD 430 million by 2025.
The best performing pharma-machines are now capable of dispensing
about 225 types of drugs, allegedly make fewer mistakes than humans
and cost about USD 12 per hour compared to about USD 18 per hour
for a human pharmacist in the US. In addition, pharma-robots are being
trained to help identify counterfeit or fraudulent drugs, can reduce contamination
for locally packed drugs and are available to serve clients round
the clock. The risk, though, is the devaluation of human skills
and the diminishing of human responsibility, control and advice, which are
so important in the medical sector.
19. While the earlier waves of automation put mostly low-skilled jobs
at risk, AI-driven machines will also affect so-called white-collar
jobs, namely those requiring high skills. As one expert explained during
our Committee’s recent exchange of views, technically
speaking, machines so far are not really creative; they are merely imitating
humans and their reasoning capacity. However, in certain sectors,
more advanced AI applications already provide a relatively good
basis for decision making even though they remain limited by algorithms’ probabilistic
nature and potential bias based on analysis of past or current behavioural
patterns. Indeed, AI machines and algorithms lack the disruptive
ability to enact positive change to eliminate bias or errors. In
some sectors, such as medicine, professionals’ reliance on AI could
even be dangerous (due to lack of understanding of machine- and
data-related limitations) and might gradually lead to deskilling
of professionals. I believe that rather than allowing smart machines
to take over human decisions completely, we should consider instead
how man-machine synergy could be optimised in order to facilitate
human work and to enhance the quality of the end result.
20. Given that AI could potentially boost labour productivity
by up to 40% by 2035 in developed countries, private sector enterprises
are not the only ones rushing to tap the potential of AI. Public
sector agencies, including in Europe, have been testing AI applications
for delivering services to the population, such as in Italy where the
Ministry of Economy and Finance has introduced an AI-driven help
desk for handling citizen calls and saw customer satisfaction grow
by 85%. In the United Kingdom, the Department for Work and
Pensions has started using AI to process incoming correspondence.
More and
more managers view AI as an “enabler” for working differently –
in new and better ways.
21. Demographic changes in Europe may also lead to the need to
embrace AI-enabled solutions more widely. On the one hand, Europe’s
population is ageing and there is already a workforce shortage
in the (social and medical) care-giving sector, including to assist
persons with disabilities; on the other hand, scores of young people
and the long-term unemployed are struggling to find jobs as their
skills do not necessarily match the employers’ and society’s needs.
AI can support care-giving jobs by relieving humans of strenuous physical
tasks and freeing them for more interactive and problem-solving
work that requires more emotional intelligence than smart machines
can currently offer. This means that European society will need
many new workers with skills allowing for a professional and responsible
use of AI options. At the same time, I must caution States against
the massive deployment of assistive AI technology if this is done
to the detriment of traditional care and if it deprives persons
with special needs of a meaningful choice of affordable and accessible
care.
5 Taming
AI-driven disruptions through social policy innovations: protect
people, not jobs?
22. While the future of work is
in the making following the successive waves of digitalisation of
data, computerisation of processes and big data management, then
automation, robotisation and smart-reasoning machines, the nature
of human work is undergoing rapid transformation; some even talk
about the ‘end of work’, perhaps not for all, but for increasing
numbers and types of jobs. Some researchers argue that we should
aim to better protect people/workers, not jobs. Unlike machines,
we as humans are more creative at work, seek and nourish social
contacts, have empathy, use critical thinking and are attentive
to avoid discrimination; but we also get tired, may project our
stereotypes on others and make mistakes. We defend our right to
work because work equals self-accomplishment and income. Work has
a major social value that we want to preserve for the current and
future generations of Europeans. As one observer puts it, “We have
a choice between a society that ‘works to live’ and one that ‘lives
to work’”, arguing that the latter is what makes humanity great and
calling on us to “preserve the social role of work”.
23. AI innovation brings with it new options for optimising and
organising our work, compelling us to diversify our skills, to be
more flexible and also to share some jobs with machines or other
humans (such as by reducing work hours or workloads). This trend
combines with continuing economic globalisation, which has already caused
the relocation of so many “European” jobs to developing countries
and the increasing precariousness of the remaining jobs due to the
global race to the bottom in terms of standards for ‘decent work’
and fierce global competition. Thus, in some countries, workers
in non-standard forms of work (including platform workers) are 40-50%
less likely to obtain social benefits when they lose their job than
those in “traditional” jobs.
The pace of
change with the deployment of intelligent machines has accelerated
so much that policy makers have to adapt the existing legal frameworks
and social systems without having a full picture of all the challenges
ahead.
24. We have been through the disaster of the global financial and
economic crisis brought about by derivative financial products that
their users did not really understand and high-frequency trading
that humans did not master; we now face the ‘black box’ of algorithmic
applications, which may lead to the best outcomes as much as to the worst.
Unlike the earlier generation of digital machines with linear lines
of action, AI technology can produce unpredictable outcomes, aggravate
and perpetuate existing biases and discrimination on the labour markets,
but it can also be more neutral than some humans and actually help
correct or prevent biases, discrimination and inequalities. The
quality of data and algorithms is key: from a human rights perspective,
we must ensure that fundamental ethical, legal and social safeguards
are in place through public policies. Policy makers should clarify
benchmarks on different sources of personal information to be used
by algorithmic systems for decision-making, especially in areas
that may be subject to discrimination (for instance, in recruitment
processes).
25. On a global scale, a few big countries currently dominate
the development of AI applications and patents (the United States,
China, Russia), and European countries are advancing at varying
speeds. We should note that total European investment (public and
private) in research and innovation is much lower than that of other regions
of the world: about €3.2 billion were invested in AI in Europe in
2016, compared to €12.1 billion in North America and €6.5 billion
in Asia.
In
the last three years, the European Union’s funding for AI research
and development increased substantially – by 70% – and reached €1.5
billion. This effort builds on a “Coordinated plan on AI”,
with the aim of mobilising some €20 billion annually in AI-related
investment across the European Union over the next decade.
26. In this context, the EU’s White Paper on
Artificial Intelligence - A European approach
to excellence and trust calls for a “common European
approach to AI”. It notably supports a dual approach based on regulation and
investment (through an ‘ecosystem of excellence and trust’) in order
to promote the uptake of AI and to tackle risks inherent in the
use of AI technology, and calls for Europe to become a global
leader in the field.
27. Uncertainties surrounding the future of human jobs with AI
and subsistence earnings that go with them should compel us to have
a fresh look at the idea of a basic income. This concept is no longer
dismissed as utopia and is gaining traction among experts,
business leaders and politicians as an alternative
system of income distribution. Past experiments with a limited-scale
basic income have largely demonstrated positive effects on human
well-being. The analysis of the most recent basic income experiment
in Finland over 2017-2018, published in June 2020, confirms improved
well-being and economic security among those concerned. I should
recall this Assembly’s Resolution 2197 (2018) “The case for a basic citizenship income”, which stated that
“introducing a basic income could guarantee equal opportunities
for all more effectively than the existing patchwork of social benefits,
services and programmes” and urged member States to study “the modalities
for such a permanently guaranteed income and the ways of funding
it as part of a new social contract between citizens and the State”.
28. Another option to alleviate the impact of automation on human
workers that is gaining consideration is a ‘robot tax’ or so-called ‘automation
tax’. Levying a tax on the use of robots raises questions around
the absence of an agreed definition of the term ‘robot’. OECD
researcher Xavier Oberson – building upon the definition provided
by a European Parliament resolution of February 2017 (2015/2103(INL)) –
suggests taking a form-neutral approach to the term ‘robot’ in the
context of taxation, which would then include not only tangible
robots and smart machines but also virtual agents. The use of robots
could then be taxed according to the “imputed hypothetical salary
the robots should receive from equivalent work done by humans” or
based on the ratio of a company’s revenues to its number of robots.
ILO research also proposes exploring other promising solutions,
such as carbon taxes, which could foster resource-saving rather than
labour-saving innovation, thus helping to address climate
change and inequalities simultaneously (the latter would widen with AI-induced job polarisation
between safe, well-paying jobs and precarious, less well-paid jobs).
29. Considering that AI technology will eliminate some jobs and
significantly modify the organisation and structure of work around
the man-machine partnership, we should note some expert proposals
for regulatory frameworks. In line with requirements set out in
the European Social Charter (ETS Nos. 35 and 163), policy makers
should in particular consider the following regulatory priorities
with regard to AI systems in relation to human work:
- algorithms ought to be explainable,
transparent, ethical, gender-sensitive and, as far as possible, certified
at European level, with only mature algorithms being authorised
for use in the public sphere;
- AI applications should be complementary to human work
and should not be allowed to completely replace humans in decision-making;
- users should be notified whenever they are in contact
with AI applications and should consequently have the choice of
using them or not; any use of surveillance techniques at the workplace
should be subject to special precautions in terms of consent and
privacy protection;
- States should control algorithmic developments so that
existing legal norms and standards are respected by AI developers
and users, and regulatory capture by some AI business giants is
avoided;
- education and training systems should emphasise the differences
between human and artificial intelligence and cater for more expert-level
skill development.
6 Learning
systems for matching workplace needs, education and know-how
30. While business enterprises
have an appetite for AI as a way to increase productivity, economic competitiveness
and profits, public institutions can use AI to deliver public services
and to save resources more efficiently. The current wave of AI-driven
automation is innovative and unstoppable, creating many winners
but also losers, notably in terms of employment when there is a
mismatch between labour market needs and workers’ skills. To narrow
this gap and tap human potential fully, national education and training
systems need to learn and adapt so as to mainstream basic knowledge about
AI technology and its ethical implications across all generations;
they need to help cultivate creativity and social intelligence, which
are most likely to preserve one’s employability in the new era.
In fact, many educational tools on AI and using AI are already available.
31. Certainly, European countries need to focus more on AI literacy
– both through digital education programmes for young people and
life-long learning/training systems for all. Indeed, as life-long
jobs are disappearing, many will have to move from job to job during
their working years, and more time and resources will have to be
devoted to adapting one’s skills and competences. Public policies
have to accommodate this reality at several levels of governance
and involve the private sector more actively in supporting training
or retraining paths.
32. Some countries, such as France, have introduced the concept
of personal training accounts for all workers, which entails positive
obligations for all employers to set up skills development plans
or training.
Social
partners in other countries could replicate this approach for handling
technology change with AI and smoothing the transition to more fragmented
careers. Moreover, educational and training systems should be better
adapted to “fit the purpose of an aging society with (fast) technological
change”, with an increased focus on
a broad range of competences rather than skills. As the ILO recommends
from a regulatory point of view, it is important to ensure certification
and greater portability of competences and to soften some occupational licensing
requirements which hinder the cross-sector mobility of professionals.
33. OECD analysts also point out that our educational systems
need a vast overhaul to cater for the development of capabilities
which machines do not master. The traditional, widespread approach
tended to emphasise the memorisation of facts, rules, equations,
formulas and the like; what we need instead is to cultivate human
values such as creativity, inquisitiveness, empathy, negotiation,
social interaction, team-building and critical thinking. People
should prepare for “jobs that have not yet been created, to use
technologies that have not yet been invented, and to solve social
problems that we don’t yet know will arise […] amid unforeseeable disruptions”.
7 Our
ambition for the future – human-machine complementarity
34. Various States and, in particular,
businesses are racing to embrace AI technology as an innovation that
will transform the way we live, work and interact. What is good
for business and economic competitiveness might not necessarily
benefit people at work and may even threaten their well-being if they
drop out of the labour market or are unable to enter it. Although
hard evidence on the potential impacts of AI on labour markets is not
yet available, it is clear that the content of many human jobs
will change in that workers will have to team up with smart machines
in complementarity. Humans must be prepared for more fragmented
careers and life-long adjustment of their competences; they must
never allow AI to take over decision-making completely.
35. Many experts concur in saying that policy makers need to think
early about the economic strategies around AI, the continuous requalification
of workers and the rebalancing of social protection systems in order to
meet the challenges faced by our legal frameworks and our workforce,
while reaping the benefits AI may bring into our lives. Remaining
at the forefront of technological innovation and continuously reflecting
on how AI might put human rights as well as human work at risk and
how to respond to these hazards should be a collective priority
for Europeans.
36. In this context, we should reiterate the message contained
in Recommendation CM/Rec(2020)1 of the Committee of Ministers to
member States on the human rights impacts of algorithmic systems:
“private sector actors, in line with the United Nations Guiding
Principles on Business and Human Rights, have the corporate responsibility
to respect the human rights of their customers and of all affected
parties”. Understanding labour-related social rights as fundamental
human rights means an obligation for both States and businesses to adopt
“flexible governance models” so as to ensure that “responsibility
and accountability … are effectively and clearly distributed throughout
all stages of the process, from the proposal stage through to task
identification, data selection, collection and analysis, system
modelling and design, through to ongoing deployment, review and
reporting requirements”. As a further step, we should recall that
the Committee of Ministers has accepted this Assembly’s proposal
to consider “the feasibility and advisability of revising the recommendation
CM/Rec(2016)3” on Human rights and business, including as regards
challenges linked to AI deployment.
37. This report has outlined some policy options for organising
man-machine working patterns. As rapporteur, I wish to insist on
the human-centred approach to AI development and deployment that
is needed to protect human dignity, fundamental rights and the social
value of work. We should not underestimate the potential harm to
individuals and society at large if black-box algorithms get deployed
massively in an unethical manner, and vested business interests
prevail over the public interest. This points to the huge responsibility
of policy makers and States to better anticipate the transformative
effects of AI on human work and devise ambitious national strategies
to accompany transition towards greater human-machine complementarity
where rights-compliant AI is used as an enabler for working differently
– in new and more flexible ways, to positive effect. To confront
the uncertainties of the future with AI, we need public policies
that tap human potential fully, ensure substantive human oversight
of AI-based decision-making, help better match labour market needs
and workers’ qualifications, and cultivate essential ethical values,
such as inclusiveness and sustainability. I therefore believe that
it is time for the Council of Europe to start drafting a comprehensive
standard-setting instrument on AI, such as a convention open to
non-member States, that will build on our collective wisdom and
vision for the future we want.