B Explanatory memorandum
by Mr Damien Cottier, rapporteur
1 Introduction
1. On 4 July 2019, the motion
for a resolution entitled “Emergence of lethal autonomous systems
(LAWS) and their necessary apprehension through European human rights
law” (
Doc. 14945) was referred to the Committee on Legal Affairs and
Human Rights for report. The committee appointed me as rapporteur
on 23 June 2022, following the resignation of the previous rapporteur,
Fabien Gouttefarde (France, ALDE).
2. The motion for a resolution calls for an analysis of the ethical
and legal issues raised by the potential future use of lethal autonomous
weapons systems in armed conflicts, and more specifically their
compatibility and conformity with human rights, in particular the
European Convention for the Protection of Human Rights (ETS No.
5, “the Convention”). Attention will be drawn to the difficulties
encountered in developing a legal definition.
3. It should be pointed out that only LAWS will be examined here
and that these are not to be confused with automated or remote-controlled
weapons such as armed drones. Armed drones (UCAVs, Unmanned Combat
Aerial Vehicles) are unmanned aircraft that can be operated automatically
or remotely and can carry weapons as a payload. Although they have
no on-board pilot, they are remotely controlled by a pilot or can independently follow pre-programmed flight routes or even automatically track a target. They are automated or remote-controlled systems.
The choice of target or the decision to use lethal force is always
made by a human.
4. By contrast, according to some concepts, LAWS would be systems
that make decisions autonomously, namely without any human intervention,
concerning target selection or flight path, or the use of lethal
force. In the case of drones, this technology has not yet been used
to control the missile or operate the payload. LAWS, therefore,
are neither remote-controlled systems in which a human retains control
throughout, nor automatic systems in which a particular process
has been programmed in advance so that their action is totally predictable.
5. The military powers of the international community have significantly
different views as to the use of LAWS. Some consider that, at least initially, LAWS will not entirely replace human soldiers but will instead take over tasks suited to their specific capabilities.
They will most likely be used in some form of collaboration with
humans during armed conflict, although they will still be autonomous
in terms of their own functions. The existing legal framework needs
to be examined in the light of this scenario, therefore, along with the
scenario in which LAWS would be deployed without any human participation.
Note
6. Activists from 54 non-governmental organisations have launched
a campaign in favour of a preventive prohibition of research and
development of this emerging technology and thus, even more so,
of any deployment of what they call “Killer Robots”.
Note This position
of principle was endorsed by the European Parliament in its resolution
dated 12 September 2018 on autonomous weapons systems.
Note Since 2014, the States Parties to the
UN Convention on Certain Conventional Weapons (CCW) have been holding
regular rounds of discussions on autonomous weapons in order to
develop a common definition and the beginnings of some regulation.
In 2017, artificial intelligence experts sent an open letter calling
on governments and the UN to “prevent an arms race in these weapons”
and “to avoid the destabilising effects of these technologies” which pose
a threat to,
inter alia, international
humanitarian and human rights law.
7. One reason why this analysis is so urgently needed is that
current assessments of the future role of LAWS will affect the level
of investment of financial, human and other resources in the development
of this technology over the next few years. To some extent, therefore,
the current assessments – or lack of them – risk becoming self-fulfilling
prophecies.
Note On the other hand, the risk of a capability gap for global military powers that fail to invest in this new technological field, and thus the “arms race” logic at work in this field, prompts some researchers to consider LAWS the third military revolution in the history of international relations, after the invention of gunpowder and that of nuclear weapons.
8. This report focuses on the use of such weapons in the context
of armed conflict and thus primarily in the context of the application
of international humanitarian law (IHL). However, important ethical
and legal questions would also arise if such weapons were used by
civilian authorities, in particular police forces, outside the context
of conflict, for special operations (for example anti-terrorism).
This related issue, which does not appear to arise in Council of
Europe member States today, would involve a detailed analysis of
obligations under the European Convention on Human Rights and other
European and international human rights standards. This should be
the subject of a report in its own right.
2 Definition
of LAWS
9. Given the different aspects
of the technology and artificial intelligence, it remains difficult
to reach a consensus on the definition of LAWS. Most parties to
the discussions agree that the defining characteristics of LAWS
are their full autonomy and lethality, although the details of these
terms are the subject of much debate.
10. In his report to the United Nations General Assembly in 2013,
Christof Heyns talked about LAR or “lethal autonomous robotics”
Note which he defines in the same way that the
United States Department of Defense defines LAWS: “weapon systems
that, once activated, can select and engage targets without further
intervention by a human operator.”
11. The International Committee of the Red Cross (ICRC) takes
a similar approach with the following more detailed definition of
LAWS: “Any weapon system with autonomy in its critical functions.
That is, a weapon system that can select (i.e. search for or detect,
identify, track, select) and attack (i.e. use force against, neutralize,
damage or destroy) targets without human intervention.”
Note
12. What emerges from these definitions is that autonomous systems
are able to select and engage targets individually and independently
without any human involvement. Thus, crucial military targeting
decisions that would otherwise be made by humans will be made by
a machine. Human decisions are limited to the preliminary stages
such as programming and initial deployment; there is no human control
during missions, other than a potential general command capability
such as deactivation.
Note
13. The ICRC working definition encompasses any weapon system
capable of independently selecting and attacking targets and provides
a useful basis for legal analysis by delineating the broad scope
of the discussion about autonomous weapon systems without the need
to immediately identify the systems that raise legal concerns.
Note
2.1 Forms
of autonomy in context
14. The autonomy of weapon systems can be divided into three categories.
Note The degree of autonomy of these weapons systems, at today's state of technological maturity, depends on the scope of the human operator's intervention in their deployment and use.
Note
a “Human in the loop”: weapon systems
that use autonomy to engage individual targets or specific groups of
targets that a human can and must decide to engage,
Note for
example guided munitions where the weapon’s technology assists the
operator in striking the target. The person launching the weapon
knows what specific targets are to be engaged, and retains the conscious
decision that those targets should be destroyed.
Note
b “Human on the loop”: weapon systems that use autonomy
to select and engage targets, but human controllers can halt their
operation if necessary.
Note At
least 30 nations use human-supervised defensive systems with greater
autonomy, where humans are “on the loop” for selecting and engaging
specific targets.
Note To
date, these have been used for defensive situations where the reaction
time required for engagement is so short that it would be physically
impossible for humans to remain “in the loop” and take a deliberate
action before each engagement and still defend effectively. Human
operators supervise; they are aware of the criteria for the selection
of specific targets and the engagement of force follows pre-programmed
rules. Human controllers can intervene to deactivate the weapon
system, but do not make an active decision to engage specific targets.
Note
c “Human out of the loop”: weapon systems that use autonomy
to select and engage specific targets without any possible intervention
by human operators.
Note
15. The degree of autonomy depends on the extent of the human operator's intervention in the deployment and use of the weapon system, which can be highly variable depending on the complexity of the technology and the environment in which the weapon is used, ranging from remote-controlled systems through automation to full autonomy.
Note
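By way of illustration only, the following minimal Python sketch (with purely hypothetical names, not drawn from any actual weapon system) shows where the human decision sits in the engagement cycle for each of the three categories described above:

```python
from enum import Enum
from typing import Optional

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "in the loop"          # a human must decide each engagement
    HUMAN_ON_THE_LOOP = "on the loop"          # the system proceeds unless a supervisor halts it
    HUMAN_OUT_OF_THE_LOOP = "out of the loop"  # no human intervention after activation

def may_engage(mode: ControlMode,
               human_approval: Optional[bool],
               supervisor_halt: bool) -> bool:
    """Shows where the human decision sits for each category of autonomy."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Engagement requires a positive, deliberate human decision.
        return human_approval is True
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # The system engages on its own unless a supervising human intervenes.
        return not supervisor_halt
    # Out of the loop: once activated, engagement follows the pre-programmed
    # criteria with no possibility of human intervention.
    return True

# A supervised ("on the loop") system is stopped by its human supervisor.
assert may_engage(ControlMode.HUMAN_ON_THE_LOOP, None, supervisor_halt=True) is False
```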
2.2 Autonomous
or semi-autonomous weapons
16. The most likely near-term candidates
for autonomous weapons are not sentient or malevolent humanoid robots
but rather something more like wide-area search-and-destroy loitering
munitions, like those depicted in the video “Slaughterbots”.
Note Thus, the definitions must clearly
distinguish, in a way that is technically rigorous, between autonomous
weapons and the precision-guided homing munitions, known as semi-autonomous weapons systems (SAWS), that have been in use for over seventy years.
Note Unlike
autonomous systems which select and engage targets autonomously,
SAWS are weapon systems which incorporate autonomy into one or more
targeting functions and, once activated, are intended to only engage
individual targets or specific groups of targets that a human has
decided are to be engaged. Falling mid-way between the two are human-supervised
autonomous weapon systems, with the characteristics of LAWS, but
with the ability for human operators to monitor the weapon system’s
performance and intervene to halt its operation, if necessary.
Note
17. The idea of a human decision is embedded within each of the
above definitions. The decision to place an autonomous weapon into
operation versus a semi-autonomous one is a very different decision.
Even in the case of a fire-and-forget homing missile, which, once
launched, is capable of moving in total autonomy, without any human
intervention, the decision about which individual target or specific
group of targets is to be engaged by that homing missile was made
by a human operator. By contrast, in the case of an autonomous weapon, the
human has decided to launch a weapon to seek out and destroy a general
class of targets over a wide area but does not take a decision about
which specific targets are to be engaged. Both definitions, however,
focus on the decision the human is making or not making
Note and
do not apply the word “decision” to something the weapon itself
is doing. This could raise significant difficulties when it comes to taking into account the integration of the systems’ artificial intelligence into something that could be likened to free will.
Note
2.3 Human
control
18. The ICRC definition is not
intended to prejudge the level of autonomy in weapon systems but
its purpose is to help define an appropriate degree of human control
that may, or may not, be considered capable of guaranteeing the
respect of international humanitarian law.
Note In
the legal discussion, the analysis of the closeness of the link
between human decision making and the action of the machine is of
primary importance. Compliance with IHL can only be assured by maintaining
human control, the intensity of which varies according to the positions
taken by States and other actors of the international community.
Note
19. There is general agreement among CCW States Parties that “meaningful”
or “effective” human control, or “appropriate levels of human judgement”
Note must
be retained over lethal weapon systems.
Note
20. ARTICLE 36, an NGO, developed the concept of “meaningful” human control, arguing that other terms such as “important, appropriate, proper or necessary”
Note human involvement or control could equally serve to describe the concept, whose importance resides in the development of more precise criteria. This has been widely discussed ever since.
Note However, whatever terminology is eventually adopted, the criteria defining human control on which consensus is reached cannot render unlawful the use of certain weapons that have long been in use, including ones that drastically reduce the risk of civilian casualties, if the rules of international law are not to become divorced from the reality of war. For example, the definition of meaningful human
control proposed by the International Committee for Robot Arms Control
(ICRAC) includes a provision to the effect that, for there to be
meaningful human control, “a human commander (or operator) must
have full contextual and situational awareness of the target area
and be able to perceive and react to any change or unanticipated
situations that may have arisen since planning the attack.”
Note The fact is, however, that humans
have been using weapons where they do not have real-time sight of
the target area since at least the invention of the catapult.
Note Such
criteria therefore seem to be unrealistic. In this respect, it is
worrying that the definition of LAWS adopted by the group of experts
set up by the European Commission in its “Ethics Guidelines for
Trustworthy AI” includes weapons types that have been in use for
a long time.
Note
21. In its original 2013 document introducing the concept of meaningful
human control, ARTICLE 36 argues that there are three necessary
requirements for meaningful human control:
a Information – a human operator, and others responsible
for attack planning, need to have adequate contextual information
on the target area of an attack, information on why any specific
object has been suggested as a target for attack, information on
mission objectives, and information on the immediate and longer-term
weapon effects that will be created from an attack in that context.
b Action – initiating the attack should require a deliberate
action by a human operator.
c Accountability – those responsible for assessing the information
and executing the attack need to be accountable for the outcomes
of the attack.
Note
22. Both the ARTICLE 36 and ICRAC statements emphasise the general
notion of informed action by a human. While the standard for information
required may be unrealistic in these proposals, informed action
is central to the concept of meaningful human control. This raises
the question of how much information is required for a human operator
to make a meaningful decision about the use of force.
23. ARTICLE 36’s approach of “adequate” information might be the
most appropriate: in order to make a decision about the lawfulness
of their action, the person must have enough information about the
target, the weapon, and the context for engagement. This does not
mean that each human operator involved in the chain of decision
making need have the complete picture. As happens today for soldiers
intervening in a building or a pilot dropping a bomb on a pre-planned
target, human operators may rely on decisions that have been made by
other humans in the chain of command. However, relying on others does not mean blind trust or abdicating one's own moral judgement.
A single individual may not be responsible for all aspects of decision-making relating
to attacking a target, but any given person can be held accountable
for his or her own actions related to that attack.
Note
24. According to the study carried out in 2015 by the Centre for
a New American Security (CNAS), human control is meaningful when
humans make informed, conscious decisions about the use of the weapon
(no one is merely pushing a button when they see a light blink on)
and when the information they have to make that decision is sufficient
for them to ensure the lawfulness of the action they are taking,
given what they know about the target, the weapon, and the context
for action. This is important, especially in the context of responsibility for
errors. Human operators must have effective control over the use
of weapons. This is the case even if some of them are “fire and
forget” weapons that cannot be recalled after launch. This is because
trained human operators have a clear understanding of how the weapon
will function in certain environments as well as its limitations,
so they can use it appropriately.
Note
25. Human control can be exercised at the development stage, including
through technical design and programming of the weapon system. Decisions
taken during the development stage must ensure that the weapon system
can be used in accordance with IHL and other applicable international
law in the intended or expected circumstances of use. At this stage,
the predictability and reliability of the weapon system must be verified
through testing in realistic environments. Operational limits must
be set so that the weapon is only activated in situations where
its effects will be predictable. Also, the operational requirement
and technical mechanism for human supervision, as well as the ability
to deactivate the weapon, will need to be established.
26. Human control may also be exerted at the point of activation,
which involves the decision of the commander or operator to use
a particular weapon system for a particular purpose. This decision
must be based on sufficient knowledge and understanding of the weapon’s
functioning in the given circumstances to ensure that it will operate
as intended and in accordance with IHL. This knowledge must include
adequate situational awareness of the operational environment, especially
in relation to the potential risks to civilians and civilian objects.
It will also depend on various operational parameters, most of which
will be set at the development stage, and some that will be set
or adjusted at the activation stage:
a The
task the weapon system is assigned to,
b The type of target the weapon system may attack,
c The type of force and munitions it employs (and associated
effects),
d The environment in which the weapon system is to operate,
e The mobility of the weapon system in space,
f The time frame of its operation,
g The level of human supervision and ability to intervene
after activation.
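By way of illustration only, the parameters listed in points a to g could be recorded as an explicit activation-stage configuration. The following minimal Python sketch uses purely hypothetical field names and values and does not describe any actual weapon system:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ActivationParameters:
    """Operational parameters of the kind listed in points a to g above."""
    task: str                           # a. the task the weapon system is assigned to
    permitted_target_types: List[str]   # b. the type of target it may attack
    munition_type: str                  # c. the type of force and munitions employed
    operating_environment: str          # d. the environment in which it is to operate
    max_range_km: float                 # e. a spatial limit on its mobility
    max_duration_minutes: int           # f. the time frame of its operation
    supervision: str                    # g. level of human supervision and ability to intervene

# Illustrative values for a hypothetical defensive deployment.
mission = ActivationParameters(
    task="defend a fixed site against incoming projectiles",
    permitted_target_types=["incoming rocket", "incoming artillery shell"],
    munition_type="kinetic interceptor",
    operating_environment="fixed military site, no civilian air traffic expected",
    max_range_km=5.0,
    max_duration_minutes=120,
    supervision="human on the loop with a permanent ability to deactivate",
)
```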
27. In order to ensure compliance with IHL, there may need to
be additional human control during the operation stage, when the
weapon autonomously selects and attacks targets. Where the technical performance
of the weapon and operational parameters set during the development
and activation stages are insufficient to ensure compliance with
IHL in carrying out an attack, it will be necessary to define the
conditions in which the ability for human control and decision making
during the operation stage must be retained.
3 Legal
perspective
28. Autonomous weapon systems,
as defined, are not specifically regulated by international treaties.
It is the way in which they are used, against whom and for what
purposes that must be compliant with international humanitarian
and human rights law, however. The International Court of Justice
was clear in its 1996 Advisory Opinion that the established principles
and rules of humanitarian law applicable in armed conflict apply
to “all forms of warfare and to all kinds of weapons, those of the
past, those of the present and those of the future”.
Note
29. Given that lethal weapon systems are at stake, it needs to be considered whether and to what extent LAWS could interfere with the guarantees provided for by the European Convention on Human Rights, in particular the right to life (Article 2).
30. LAWS are most likely to be used in situations of armed conflict
rather than in any other situations, so international humanitarian
law would apply. Analysing the conformity of LAWS and the criteria
for meaningful human control through the lens of human rights is
also necessary, however, since human rights apply at all times and
in all places, whereas the application of humanitarian law depends
on the existence of armed conflict in which humanitarian law takes
precedence over human rights law as
lex
specialis. Human rights might nevertheless form the governing legal framework in many situations, for example during military operations that cannot be classified as an armed conflict, or in situations of occupation or armed conflict in which humanitarian law and human rights law often overlap in practice.
Note
31. The European Court of Human Rights has even pointed out that
Article 2 must be interpreted in so far as possible in light of
the general principles of international law, including the rules
of international humanitarian law which play an indispensable and
universally accepted role in mitigating the savagery and inhumanity
of armed conflict.
Note Consequently, even in situations of international
armed conflict, the safeguards under the Convention continue to
apply, albeit interpreted against the background of the provisions
of international humanitarian law.
Note
3.1 European
human rights law perspective
32. The first major requirement
of Article 2 is to clearly regulate the use of autonomous weapon
systems. The right to life contains two substantive obligations,
and one of them is the obligation to protect the right to life by
law. That means the State must put in place a legal framework which
defines the limited circumstances when the use of force is allowed.
With regard to weapons in general, the Court has emphasised that
it is of primary importance that domestic regulations exclude the
use of weapons that carry “unwarranted consequences”.
Note These requirements can be connected to the
concept of human control examined above, which aims to ensure that
humans can make context-based assessments and that the technology
will function reliably and predictably. National regulations will most likely be required to ensure that the use of autonomous weapon systems complies with requirements such as the exclusion of “unwarranted effects” and the provision of “safeguards against avoidable accidents”.
Note Even though existing case law concerns other kinds of weapons, such as firearms, it seems reasonable to assume that the Court would not apply less strict standards to autonomous weapon systems.
Note
33. The text of Article 2, read as a whole, demonstrates that
paragraph 2 does not primarily define instances where it is permitted
intentionally to kill an individual, but describes the situations
where it is permitted to “use force” which may result, as an unintended
outcome, in the deprivation of life. The use of force, however,
must not exceed what is “absolutely necessary”
Note to
preserve a person’s life or to defend a person from unlawful violence,
which is a stricter test of necessity than that applicable to most
of the other rights enshrined in the Convention when determining
whether State action is “necessary in a democratic society”.
Note
34. The European Court of Human Rights has emphasised that it
is acutely conscious of the difficulties faced by modern States
in the fight against terrorism and the dangers of hindsight analysis.
Consequently, the absolute necessity test formulated in Article
2 is bound to be applied with different degrees of scrutiny, depending
on whether and to what extent the authorities were in control of
the situation and other relevant constraints inherent in operative
decision making in this sensitive sphere.
Note
35. The Court makes a distinction between “routine police operations”
and situations of large-scale anti-terrorist operations. In the
latter case, often in situations of acute crisis requiring “tailor-made”
responses, States should be able to rely on solutions that would
be appropriate to the circumstances. That being said, in a lawful
security operation which is aimed, in the first place, at protecting
the lives of people who find themselves in danger of unlawful violence
from third parties, the use of lethal force remains governed by
the strict rules of “absolute necessity” within the meaning of Article
2 of the Convention. Thus, it is of primary importance that the
domestic regulations be guided by the same principle and contain clear indications to that effect, including the obligations to decrease the risk of unnecessary harm and to exclude the use of weapons and ammunition that carry unwarranted consequences.
Note
36. The case of
Streletz, Kessler and
Krenz v. Germany concerning the border-policing regime
of East Germany resulting in the killing of East Germans attempting
to escape to West Germany illustrates the need to make necessity
assessments in the light of automated use of force.
Note The weapons used
in this case, anti-personnel mines and automatic-fire systems, were
not autonomous in the sense of LAWS, but due to their automatic
and indiscriminate effect, together with the categorical nature
of the orders given to border guards to annihilate border violators
and protect the border at all costs, the Court considered that the
automated killing flagrantly infringed the fundamental rights of
the Constitution and violated the right to life.
Note This
case is not about the autonomy of the weapon technology itself,
but the organisation of the operation as such and the absence of
a necessity assessment when automating the killing. This particularly
needs to be considered when it comes to LAWS used for defence purposes.
Streletz, Kessler and Krenz v. Germany illustrates
that there must be control over the individuated use of the system
– in the sense of making, and complying with, necessity assessments
– because otherwise the use of lethal force will probably be considered
as having automated and indiscriminate effects which would flagrantly
violate the right to life.
Note
37. In the “Gibraltar case”, where British soldiers shot suspected
IRA terrorists, it was not the actions of the soldiers in themselves
which gave rise to a violation of the right to life, but the control
and organisation of the operation as a whole.
Note The case illustrates that
the planning stage of an operation is connected to whether the use
of force was absolutely necessary. Consequently, the condition of
meaningful human control for the compliance of LAWS with European
human rights law will have to integrate the criterion of the necessity
of the use of force in the planning of the operation.
Note The requirement
to plan and exercise “strict control” over operations possibly involving
the use of lethal force would probably place even stricter demands
on the planning stage before launching an autonomous weapon system
which could self-initiate the use of force, than when engaging State
agents.
Note
38. This aspect might be even more important in relation to LAWS
than in cases such as
McCann (Gibraltar case)
regarding the shooting by human agents.
Note The
reason why the actions of the soldiers did not, in themselves, give
rise to a violation in this case was the soldiers’ “honest belief
which [was] perceived, for good reasons, to be valid at the time
but subsequently [turned] out to be mistaken.”
Note Justifying an infringement based
on a mistaken honest belief will probably not be accepted when an
autonomous weapon system kills someone by mistake. The concept of
an “honest belief” would be difficult to apply to a machine, unless
the Court was to consider whether the human operator or military
organisation had an honest belief that the use of force was necessary.
Such an argument would most likely not be accepted since this belief
must be subjectively reasonable with regard to the circumstances
at the relevant time.
Note This
requirement will not be met in the case of autonomous weapon systems. The timespan between the human decision to launch the weapon system and the eventual use of force initiated by the system would be too great for such a belief to remain valid, unless there are possibilities for human supervision and intervention providing sufficient environmental understanding for an operator to form an honest and genuine belief valid at the relevant time.
Note
39. Beyond necessity, another required assessment is the one of
proportionality (the balance to be struck between, for example,
the value of life and military advantage). It is the responsibility
of humans using the weapons to make this assessment, which is another
necessary aspect of meaningful human control over LAWS. The Court
has emphasised that States that take on a pioneer role in the development
of new technologies have a special responsibility to strike the
right balance in their proportionality assessments.
Note
3.2 Compliance
with international humanitarian law
40. According to Article 36 of
the Additional Protocol to the Geneva Conventions of 12 August 1949,
and relating to the Protection of Victims of International Armed
Conflicts (Protocol I), States which develop, supply and use new
weapons must ensure their compliance with IHL rules. It is humans,
therefore, who are responsible for applying the law and who can
be held accountable for violations, not the weapon itself. These legal
requirements, notably the rule of distinction, the prohibition of
indiscriminate attacks, the rule of proportionality and precautions
in attack, must be fulfilled by those persons who plan, decide on
and carry out attacks.
Note
3.2.1 Rule
of distinction
41. Articles 48 and 51 paragraph
4 of Protocol I prohibit indiscriminate attacks, namely attacks
in which no distinction is made between civilian and military targets.
According to this rule of distinction, the system must have the
capacity to distinguish between active combatants and protected
persons, and between military and civilian objects, because the
attacks must never be directed against protected persons and objects.
This prohibition includes the prohibition of attacks which employ
inherently indiscriminate means of combat, whose effects cannot
be limited and which therefore affect legitimate objectives and
civilians without distinction (Article 51 paragraph 4 c) of Protocol
I). That includes, for example, biological weapons which, by their
nature, cannot distinguish between civilians and combatants. In
its advisory opinion on the legality of the threat or use of nuclear
weapons, however, the International Court of Justice did not rule
out the possibility that even nuclear weapons could be used in such
a way as to avoid violating the rule of distinction, for example,
by being directed against a military target in a vast desert, so
that their effects were confined to the military target alone and affected
neither civilians nor civilian objects.
Note
42. A particular category of protected persons is that of wounded
combatants (
hors de combat)
and those wishing to surrender. Any LAWS must therefore be able
to protect these persons.
Note In this context it is also worth
recalling the “Martens Clause”, which is part of customary international
law and according to which the “laws of humanity and the dictates
of public conscience” must be respected even in the absence of an
explicit prohibition.
Note
43. On the assumption that LAWS are specifically designed for
targeting and high precision, they would as such be fundamentally
capable of complying with the distinction rule. Even though a violation
of the distinction rule may occur through the actual use of LAWS,
in a specific situation, that does not a
priori appear sufficient to render the entire category
of weapons unlawful.
3.2.2 Rule
of proportionality
44. In addition, there must be
compliance with the principle of the law of war that any military
action must be necessary and proportionate to the damage (see in particular Article 50 of Geneva Convention I and Article 51, paragraph 5.b, of Protocol I).
45. The challenge arises because LAWS operate and act on the basis
of technical indicators, namely pre-programmed target profiles.
LAWS obtain information about their environment through sensors
and computer-generated analysis and apply it to the profiles. Many
experts agree that such processes do not in themselves constitute
the proportionality assessment and cannot replace the decisions
required of persons under the rule of proportionality.
Note
46. Qualitative and evaluative judgements as to whether an attack
complies with the rule of distinction or is proportionate are made
on the basis of values and interpretations of the particular situation
rather than numbers or technical indicators. For example, it is difficult to quantify civilian casualties or military necessity, or to reduce the wide variety of situations encountered to the numerical terms in which LAWS operate.
Making such assessments requires uniquely human judgement. Such
judgements, which also reflect ethical considerations, are or must
be part of military training.
Note
47. A human can certainly take advice from a system, however.
The algorithms’ assessment can be communicated to the human who
would take control, insofar as they would actually decide whether
the system attacks or not. In such a scenario, the judgment as to
whether an attack satisfied the IHL rule of proportionality would
remain within the realm of human decision making. Such mechanisms
referring to a human decision must therefore be put in place to
respect the principle of proportionality. In this sense, not only the States using such systems but also those manufacturing and supplying them have a responsibility under Protocol I (see paragraph 40 above).
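By way of illustration only, the following minimal Python sketch (with purely hypothetical names) shows the kind of mechanism described above, in which the algorithm's assessment is merely advisory and force is used only on an explicit human decision:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AdvisoryAssessment:
    """Machine-generated information offered to the human decision maker."""
    target_description: str
    estimated_military_advantage: str
    estimated_civilian_risk: str

def request_engagement(assessment: AdvisoryAssessment,
                       human_decision: Callable[[AdvisoryAssessment], bool]) -> bool:
    """The system never attacks on its own judgement: it forwards its assessment
    and proceeds only if the human operator explicitly approves."""
    return bool(human_decision(assessment))

# The operator weighs the assessment (the proportionality judgement remains
# a human one) and, in this example, declines the engagement.
assessment = AdvisoryAssessment(
    target_description="vehicle matching a pre-programmed target profile",
    estimated_military_advantage="low",
    estimated_civilian_risk="civilians reported in the vicinity",
)
assert request_engagement(assessment, human_decision=lambda a: False) is False
```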
3.2.3 Principle
of precaution
48. In order to comply with the
rules of distinction and proportionality, and also the requirement
to take precautions in attack, LAWS must be predictable to some
extent. The users must be capable of limiting or nullifying the
effects of the weapon systems, if necessary, something that is only
possible if they can reasonably foresee how a system will react.
49. All LAWS, even so-called “deterministic” systems, raise concerns
about unpredictability, because the consequences of any output will
vary depending on the circumstances in the environment at the time
of the attack. A LAWS will apply force at a specific time and place
unknown to the user when they activated the system. Moreover, the
environment may vary over time and the status and surroundings of
the target may change swiftly or frequently (for example if civilians
have moved into the immediate vicinity).
Note
3.3 Legal
and moral responsibility
50. The emergence and
a fortiori the use of LAWS during
conflicts raise new legal issues which are not directly and expressly
governed by the existing rules of the law of armed conflicts and
highlight potential gaps in terms of accountability. On the assumption
that future LAWS meet all the legal requirements of the laws of war
when they operate normally, malfunctions of the system could cause
an erroneous attack and thereby raise accountability issues. In
the case of a malfunctioning LAWS, it could be difficult, if not
impossible, to establish the responsibility of a human operator.
It must be possible to establish where responsibility lies in case
of malfunctioning LAWS by determining whether there was sufficient
control according to the criteria described above.
Note The
question of the manufacturer's responsibility will also arise in such a case, and the manufacturer will have to be able to demonstrate that it has taken sufficient precautions, at its level, to ensure compliance with IHL.
51. Any action aimed at establishing the responsibility of a LAWS itself would be futile, as the machine, despite its high degree of autonomy, is by nature neither designed for nor capable of understanding the consequences of its actions from the perspective of a criminal liability designed for humans. Consequently, it must be possible to link unlawful actions committed by a LAWS resulting in violations of IHL to the individual or group of individuals responsible for its design, programming or deployment and, ultimately, to the State of nationality of the armed forces which holds it.
Note Thus,
under the law of the international responsibility of States, a State
could be held responsible for violations of IHL resulting from the
use of an autonomous lethal weapon system. Indeed, under general
international law governing State responsibility, they would be
held responsible for internationally wrongful acts, such as violations
of IHL committed by their armed forces using an autonomous weapons
system. The main question is that of the capacity of the international
legal order to extend its material domain without the need to adopt
new formal guarantees of application.
Note
52. Some authors believe that in the current state of international
law, the responsibility of those in charge of political and military
decisions, of the operational part, of industrial design or programming
could always be established in case of a violation of IHL by a lethal
autonomous weapons system.
Note A
State would also be responsible if it were to use an autonomous
weapon system that has not been adequately designed, tested or reviewed
prior to deployment.
Note
53. Unlike humans, machines do not have feelings and are not moral
agents. Even if a person committed a war crime with an autonomous
weapon, it would be the human who committed the crime, using the autonomous
weapon as the tool for committing the crime. For this to remain true, however, humans must remain not only legally accountable but also morally responsible for the actions of autonomous weapons systems.
Furthermore, some decisions pertaining to the use of weapons require
legal and moral judgements, such as weighing likely expected civilian
casualties against military advantages from conducting attacks.
Some have argued that regardless of whether machines could perform
these functions in a legally compliant manner, humans ought to validate
them since they are also moral judgements. In this respect, “meaningful”
refers to humans retaining moral responsibility for the use of weapons,
even weapons that might incorporate high degrees of autonomy.
Note Otherwise, the decision of political or military leaders who agree to acquire such a weapon, or to use it in a particular context, with knowledge of the machine's decision-making systems and the resulting risks of violation, should give rise to their responsibility, and the obligation to test and verify the weapon and to determine in which contexts it can be used takes on particular importance within the meaning of Article 36 of Protocol I.
Note
54. Nevertheless, some underline that while it is perfectly possible
to hold a military authority responsible for an unlawful act committed
by a LAWS, just as it can be for the same type of act committed
by a soldier having acted under its orders,
Note there
is a high risk that the element of intention required to assign
responsibility is lacking. In fact, for a military authority to be held responsible, it must either have been aware of the planned wrongful acts and failed to intervene to prevent them, or have failed to punish the subordinate who committed the act. However, it is reasonable to doubt whether military leaders would be “able to have a sufficient understanding of the complex programming”
Note of the LAWS which perpetrated the unlawful act.
Note
55. By contrast, a programmer who intentionally programs an autonomous
weapon to function in violation of IHL or without taking IHL sufficiently
into account, or a commander who activates a weapon unable to function
in accordance with IHL in that environment would certainly be criminally
liable for any ensuing violation. Similarly, a commander who knowingly
decides to activate an autonomous weapon system whose performance
and effects they cannot reasonably foresee in a given situation
may be held criminally liable for any resulting violation of IHL,
to the extent that their decision to deploy the weapon is considered
reckless in the circumstances.
56. In addition, under national product liability laws, manufacturers
and programmers could also be held liable for programming errors
or the malfunction of an autonomous weapon system or for the absence
of sufficient precautionary measures. It should, however, be emphasised
in this regard that the responsibility then brought into play will
be of a civil and internal character and not of a penal and international
character as provided for in international humanitarian law or international
human rights law. Moreover, it seems useful to recall that international
law only marginally allows for international liability of companies
to be engaged and that consequently the companies which design and
manufacture LAWS are not formally subject to any obligation to comply
with IHL.
Note It is therefore the responsibility of the State that acquires and deploys LAWS to ensure that their design and programming meet strict criteria and to test and review their reliability. Otherwise, a design or programming problem, whether intentional or unintentional, could circumvent important IHL norms, for example on distinction or proportionality, without it being possible to hold anyone accountable under IHL.
4 Hearing
of 5 November 2021
57. At the meeting on 5 November
2021, I organised a hearing with four leading experts:
- Raja Chatila, Professor Emeritus,
former Director of the Institute of Intelligent Systems and Robotics, Sorbonne
University, Paris, France;
- Noel Sharkey, President of the NGO “The International Committee for Robot Arms Control”, computer scientist specialising in robotics, University of Sheffield, United Kingdom;
- Jean-Gabriel Ganascia, President of the Ethics Committee
of the “Centre National de la Recherche Scientifique” (Comets),
Paris, France;
- Jean-Baptiste Jeangène Vilmer, Director of the Institute
for Strategic Research at the Military School (IRSEM), Paris, France.
58. Dr Jeangène Vilmer focused on the ethical and diplomatic dimensions
of LAWS, with reference to discussions on the topic, beginning with
the first examination by the UN Human Rights Council in 2013 up
to the Group of Governmental Experts (GGE) as of 2016. The group
represented some 90 States, the ICRC and NGOs. It was to deliver
its final report in December 2021. Dr Jeangène Vilmer stressed that
most NGOs and some member States were urging a total ban on the
use of this type of weaponry. Other States took a different view:
some were “obstructionist” (Russia) while others were more constructive
(United States, United Kingdom, France, Israel, etc.). In all events,
no consensus had yet been reached, and two key questions were still
pending before the GGE: how to define LAWS and what ethical grounds
there might be to authorise the use of such weapons. Dr Jeangène
Vilmer then outlined the arguments for and against the use of LAWS,
from the ethical and utilitarian viewpoints. As the law currently
stood, some rules already applied to the use of these arms. Firstly,
LAWS could not be used either against non-military targets or against
certain military targets in specific contexts. Secondly, they should
be programmed to refrain from striking where there was any doubt
and generally used only on a subsidiary basis, as part of a human
decision-making process. To conclude, the expert strongly opposed
describing these arms as fully “autonomous”: the “human factor”
was in fact always necessary. Given the reluctance of major States
to agree to a ban on LAWS, he was in favour of drawing up a set
of guiding principles or a “code of conduct”.
Note
59. Mr Chatila mentioned the difficulties in defining this type
of weaponry. The term “autonomous” must not be understood as absolute
but rather as relating to computational intelligence. Mr Chatila
stressed that the autonomy of a machine had to be viewed in relation
to the tasks and the environments in which any intelligent computer
system operated. That was why the term “autonomous” meant both operational
autonomy and autonomy of decision-making. Mr Chatila then described
the characteristics of these forms of autonomy and listed a number
of issues linked to the use of LAWS, which included the lack of
contextual decision-making processes, the impossibility of predicting
developments unfolding on the battlefield and the inability of LAWS to
adapt to unforeseen circumstances. On top of these factors came
the excessive faith placed by humans in the data supplied by information
technologies (“automation bias”), differing moral values of individuals
and the general issue of the possibility of delegating responsibility
for “acts” committed by machines. Finally, it appeared that LAWS
were becoming easier to access, including, potentially, by non-state
players, which would make them even more difficult to control.
Note
60. Mr Ganascia spoke about the use of LAWS from the sociological
and ethical viewpoints. He mentioned the aspects of unpredictability,
lethality, autonomy and automaticity of these weapons and explained
that artificial intelligence was not a reliable tool in an armed
conflict, as it was incapable of taking decisions based on moral
grounds. He compared LAWS with other types of banned weapons, such
as chemical weapons or other weapons of mass destruction that were
incapable of discriminating between combatants and civilians. He
analysed LAWS from different points of view and concluded that more arguments had been put forward against their development than in favour of it. He referred to the initiatives in some countries, chiefly
in Europe, seeking to impose a moratorium on the development of
LAWS. However, in his view, certain major States would never feel
bound by an international moratorium and would continue to develop
these weapons. This would pose a threat to European values, and
the Council of Europe should oppose it.
Note
61. Mr Sharkey explained how he saw the issues posed by the use
of LAWS, from a somewhat more technical viewpoint. Firstly, there
was the meaning of “autonomous system”, which should be taken as
meaning a machine, robot or information system capable of acting
without human intervention. This notion raised numerous legal and
ethical questions. Mr Sharkey pointed out that the States were still
incapable of reaching a consensus on some of these questions. While
some States were in favour of a total ban on these weapons, others
had a preference for regulation by non-binding legal instruments
or rejected any restrictions outright. None of the “autonomous weapons” that
were available today or would be in the near future could offer
a full guarantee of complying with the laws of war, and more specifically
the principles of proportionality, distinction and precaution. Only
a human mind was capable of assessing these aspects, which could
not be transposed to a mathematical algorithm. Mr Sharkey highlighted
the phenomenon of “algorithmic bias” to be found in other sectors
that used artificial intelligence, such as law and order, health
and social protection. LAWS could also destabilise global security
by triggering a new arms race, geared in particular to developing
artificial intelligence for military purposes, which would not be
subject to any human control. Mr Sharkey agreed with the other experts
that ethical decisions on matters of life or death must not be delegated
to LAWS. In this regard, the aspect of respect for human rights
had never been properly examined by the UN negotiating group. LAWS opened
up a whole host of possibilities for oppressive regimes to violate
human rights with complete impunity. The proliferation of autonomous
weapons prevented humans from controlling them. These weapons operated at
such speed that the human brain could not keep up. In addition,
there was still a risk of several autonomous algorithms interacting
and shutting humans out of the equation, with disastrous consequences.
62. In reply to questions and comments from committee members,
Dr Jeangène Vilmer confirmed that there was a risk of “privatisation” of
these weapons, given the growing financial and economic power of
private companies. Another risk was that LAWS could be used by terrorists.
Dr Jeangène Vilmer reiterated that context was crucial for taking a decision
in the light of humanitarian and human rights law, and machines
were still unable to do this. It was impossible to give a definitive
answer as to whether LAWS should be banned or, failing that, regulated.
These weapons had their pros and cons. That said, the mood tended
towards drawing up a code of conduct governing their use, rather
than an outright ban. Mr Chatila confirmed that the risk of these
weapons becoming widespread persisted, in view of the enormous military
advantage they offered. Consequently, a ban had never been entertained
by most States, which had nevertheless agreed to sketch out some
rules limiting their use in temporal and spatial terms. Mr Ganascia
also agreed that there was a risk of privatisation, especially if
such weapons were banned. A straight ban would be declarative in
nature and would not stop private enterprises developing them in
secret. He stressed that the most important question in this context concerned the establishment of responsibility for the use of these weapons. The more autonomous the weapons, the less clear-cut the responsibility of humans was. Properly regulating the development of these weapons would be more effective than introducing a total ban. If humans had the will and the ability to keep control, these weapons would no longer be regarded as fully “autonomous”. For Mr Sharkey the
important question was definitely the responsibility for using LAWS,
and it was a question for which there was still no answer.
5 Current
state of discussions within the specialised Group of Governmental
Experts (GGE)
63. Within the framework of the
6th Review Conference of the States Parties
to the CCW on 13-17 December 2021, the States agreed that the work
of the GGE on emerging technologies in the area of LAWS should continue
in 2022.
64. In the final document of the 6th Conference,
Note the States Parties to the CCW reaffirmed
that international humanitarian law was also applicable to LAWS,
that such weapons systems must not be used if they are of a nature
to cause superfluous injury or unnecessary suffering, or are inherently
indiscriminate, or are otherwise incapable of being used in accordance
with international humanitarian law. The Conference further considered that
the CCW provided an appropriate framework for dealing with the issue
of emerging technologies in this area.
65. The NGO Stop Killer Robots rejects the outcome of this
conference and considers that “a minority of states including the
US and Russia, already investing heavily in the development of autonomous
weapons, are committed to using the consensus rule in the CCW to
hold the majority of States hostage and block progress towards the
international legal response that is urgently needed. The outcome
of the Review Conference falls drastically short, and does not reflect
the will of the vast majority of States, civil society, or international
public opinion”.
Note
66. The 6th Review Conference of the
CCW mandated the GGE to meet for 10 days in 2022
Note to consider proposals and elaborate
possible measures and other options related to the normative and
operational framework on emerging technologies in the area of LAWS,
building upon the earlier recommendations and conclusions of the
Group (notably the “11 Guiding Principles on LAWS” adopted in 2019)
Note and bringing in expertise on legal,
military and technological aspects while maintaining the principle
of decision by consensus
Note. After two meetings in March and
July 2022, the Group adopted a report
Note containing certain recommendations,
including that the Group's work be continued in 2023. In its conclusions,
the Group noted that it had discussed a number of options regarding
a future legal framework for LAWS: a legally binding instrument
under the framework of the CCW or a non-legally binding instrument;
clarity on the implementation of existing obligations under international
law, in particular IHL; an option that prohibits and regulates LAWS on
the basis of IHL; and the option that no further legal measures
are needed. Notwithstanding, the Group agreed that the right of
the parties to an armed conflict to choose methods or means of warfare
was not unlimited and that international humanitarian law was also
applicable to LAWS. Any violation of international law, including
those involving LAWS, incurred the international responsibility
of the State concerned. The Group's “recommendations” went no further,
however, than proposing that the work of the GGE be continued in
2023, employing the same working methods (notably the requirement
of a consensus) and with the same terms of reference that had governed
its meetings in 2022.
67. A working paper submitted to the GGE by a group of European
countries proposed a two-tier approach aimed at getting discussion
moving again.
Note The document pointed out that the
States Parties to the CCW should recognise that LAWS that cannot
comply with international law, including IHL, are
de facto prohibited; and, consequently,
that LAWS operating completely outside human control and a responsible
chain of command are unlawful. The second sphere of action entails
proposing international regulation of other weapons systems presenting
elements of autonomy to ensure compliance with IHL.
68. To operationalise these proposals, the authors of the document
invite the States Parties to
(1) commit to not developing, producing, acquiring, deploying
or using fully autonomous lethal weapons systems operating completely
outside human control and a responsible chain of command (see guiding principles
b, c, and d);
(2) commit to only developing, producing, acquiring, modifying,
deploying or using LAWS when two conditions are met: firstly, that
compliance with international law is ensured when studying, acquiring,
adopting or modifying and using lethal weapons systems featuring
autonomy and, secondly, that appropriate human control is retained
during the whole life cycle of the system in question by ensuring
that humans will be in a position to inter
alia:
- at all times:
have sufficient assurance that weapons systems, once activated,
act in a foreseeable manner in order to determine that their actions
are entirely in conformity with applicable national and international
law, rules of engagement and the intentions of their commanders
and operators. For this purpose, developers, commanders and operators
– depending on their role and level of responsibilities – must have
a sufficient understanding of the weapons systems’ way of operating,
effect and likely interaction with their environment. This would
enable the commanders and operators to predict (prospective focus)
and explain (retrospective) the behaviour of the weapons systems;
- during the development phase: evaluate the reliability
and predictability of the system, by applying appropriate testing
and certification procedures, and assess compliance with IHL through
legal reviews;
- during the deployment: define and validate rules of use
and of engagement as well as a precise framework for the mission
assigned to the system (objective, type of targets etc.), in particular
by setting spatial and temporal limits that may vary according to
the situation and context, and monitor the reliability and usability
of the system;
- when using: also exercise their judgement with regard
to compliance with rules and principles of IHL, in particular distinction,
proportionality and precautions in attack, and thus take critical
decisions over the use of force. This includes human approval for
any substantial modification of the mission’s parameters, communication
links and ability to de-activate the system if and when necessary,
unless technically not feasible;
(3) preserve human responsibility and accountability (see guiding principles b and d) at all times, in all circumstances and across the entire life cycle, as the basis for State and individual responsibilities
which can never be transferred to machines. To that end, the following
measures and policies should be implemented:
- where responsibility is concerned: doctrines and procedures
for the use of lethal weapons systems featuring autonomy; adequate
training for human decision-makers and operators to understand the system’s
effect and its likely interaction with its environment; operation
of the system within a responsible chain of human command, including
human responsibility for decisions to deploy and for the definition and
validation of the rules of operation, use and engagement;
- where accountability is concerned: measures enabling an
after-action review of the system to assess compliance of a system
with IHL; mechanisms to report violations; investigation by States
of credible allegations of IHL violations by their armed forces,
their nationals or on their territory; and disciplinary procedures
and prosecution of suspected perpetrators of grave breaches of IHL
as appropriate.
(4) adopt and implement tailored risk mitigation measures
and appropriate safeguards regarding safety and security.
6 Conclusion
69. By way of conclusion, it seems
clear that international regulation of LAWS does ultimately need
to be developed. International law, as it stands, does not provide
sufficient safeguards to deal with the new issues raised by LAWS.
It is undeniable that the latter may give rise to a new paradigm
in the governance of warfare, which could lower the threshold for
engaging in armed conflict, as States see a drastically reduced
threat of losses of their own human soldiers. Consequently, more
efforts are needed to find the right balance between military advantage
and human rights protection. Some of those involved in the debate
about LAWS argue that the requirement of meaningful human control
over the use of lethal force is already implied in international
law. This would mean that weapons that lack meaningful human control
are illegal. But it remains to be seen whether that requirement
must be made explicit. In my opinion, in the spirit of Article 7
of the European Convention on Human Rights (
nulla
poena sine lege), this condition should be made explicit,
with a clear and realistic definition of what meaningful human control
signifies.
Note
70. The aforementioned working paper submitted to the GGE in July 2022 (see paragraphs 67 and 68 above) advocates a two-tier approach, to move
forward efforts to reach a consensus. The first level of action
entails clarifying that certain systems operating completely outside
human control cannot comply with international humanitarian law
while other systems incorporating elements of autonomy can be governed
through positive obligations set out in a regulatory framework to
be defined in a second phase.
71. I tend to share this position, which seems to me pragmatic
and reasonable as well as mindful of important principles. Between, on the one side, the position of NGOs and a number of countries campaigning for an outright ban on the development, deployment and use of LAWS and, on the other, that of certain countries, including Russia and the United States, which refuse to submit to any legal regulation of this emerging technology, we should look for the right middle path. According to the above proposal, States (still within the GGE
framework) should commit to:
- recognising
that fully autonomous lethal weapons systems operating completely
outside human control and a responsible chain of command are prohibited
by current international law;
- regulating the other lethal weapons systems with autonomous
features in order to guarantee compliance with the rules and principles
of international humanitarian law, while preserving human responsibility
and accountability, ensuring appropriate human control, testing
and verifying weapon systems, and implementing measures to mitigate
the risks, including by setting up appropriate instruction and training systems
for the persons using them.
72. The ongoing work in the context of the CCW is encouraging and the framework for discussion is appropriate. However, the consensus rule that prevails in the CCW may considerably delay the conclusion of the process, or of one of its phases, or even block it altogether.
This risk increases in the current context of high international
tensions. I therefore recommend that, should this be the case, Council
of Europe member and observer States consider, as a subsidiary measure,
launching a process at the level of the Organisation that could
lead to a legal framework open to the participation of other States.
73. It is from this viewpoint that I have drawn up the draft resolution
preceding this report.