Lethal autonomous weapons systems (LAWS) raise high-stakes ethical and legal issues. For several years, NGOs and public decision-makers have been calling for the regulation, or even a ban, of LAWS. In 2017, artificial intelligence experts published an open letter urging governments and the United Nations to "prevent a race for autonomous weapons" and "avoid the destabilising effects of these technologies".
In 2018, a prohibition in principle of LAWS was supported by a majority of United Nations member States, and has been reiterated at several diplomatic meetings since then. However, several major economic and military powers (China, the Russian Federation, the United States), including some in Europe, are now providing public subsidies and attracting private-sector investment, resulting in a race towards this type of armament.
The deployment of LAWS could lead to profound changes, or even a sociological rupture, in the conduct of war, owing to the role assigned to the machine in what has hitherto been a human relationship at war. Governments' decision-making processes for going to war could eventually be drastically altered.
The wording "lethal autonomous weapon systems" itself reflects the difficulty of drawing up a legal definition, as it covers various levels of technology and artificial intelligence, particularly with regard to the degree of human intervention to be maintained ("man in the loop", "man on the loop" and "man off the loop"), making any international or regional consensus difficult to achieve.
Adopted on 8 April 2019, the European Commission's ethics guidelines for trustworthy artificial intelligence provide an ambiguous and hazardous definition of LAWS.
The Parliamentary Assembly should take up this subject matter without delay, in order to ensure that the realities of developments in weapons technology and artificial intelligence do not undermine the enforcement of international humanitarian law and human rights.