Regulation of Autonomous Weapons Faces Challenges
The prospects for banning the development and use of autonomous weapons are currently dim, and international negotiations on their regulation have been at a standstill for several years. However, nothing prevents existing international humanitarian law and norms on international responsibility from being applied to the use of these weapons. It would be advisable, though, for as many states as possible to confirm this possibility, providing a common interpretation of the existing rules in a new context.
Autonomous weapons, often referred to as lethal autonomous weapons systems (LAWS), differ from other weapons in that they are able to select and attack targets autonomously, without human intervention, using sensors and algorithm-based software or artificial intelligence (AI) to collect and process information. They include unmanned aerial vehicles, ships, and even land robots. Among their advantages are faster reaction times compared with human-operated devices and greater suitability for dangerous missions, since no human crew is put at risk. In recent years, there has been an ongoing discussion among states about regulating the use of these weapons.
Main Challenges
Two issues in particular are contentious. The first is the definition of autonomous weapons. States agree in principle that these are not weapons with only some automated functions, such as targeting or autopilot. However, France and China, for example, insist that only weapons that have the ability to adapt and learn, and are therefore based on AI, should be considered autonomous. Other countries, including the U.S., do not limit the definition in this way. There are also differences over the extent of human control required for a weapon to be considered autonomous while still being usable in accordance with applicable humanitarian law. France and Germany oppose weapons “that are completely beyond human control” (fully autonomous), and China insists that it must be possible for humans to withdraw them from the battlefield, while the U.S. would make exceptions to these rules in at least some cases. Russia, for its part, regards the concept of human control as politicised and of little use for regulation.
The second key issue is whether autonomous weapons should be banned altogether. Those in favour of the broadest possible ban argue, among other things, that machines should not be able to make life-and-death decisions about humans on their own, even soldiers of the other party to a conflict. Such autonomy creates the danger of these weapons escaping human control and dilutes the moral responsibility of political and military commanders for human losses, reducing the pressure to resolve conflicts. It also raises the problem of civilians being mistaken for soldiers and treated as military targets as a result of software errors or inadequacies, which can lead to serious violations of humanitarian law, including war crimes. At the same time, questions arise as to whether and how, for example, the author of faulty software or the manufacturer should be legally liable in such situations. The majority of Latin American countries and many countries from Africa, Asia, and the Middle East, but also Austria, Belgium, and Sweden, for example, would favour as broad a ban as possible. They advocate a legally binding document that would include a preventive ban, such as a protocol to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW) or a separate treaty.
Opponents of the ban argue that autonomous weapons make it possible to reduce the death toll (because they diminish the need for manpower) and to eliminate sexual violence and the emotional factor, which are often at the root of war crimes, from the battlefield. In their view, although machines cannot be held responsible for crimes or mistakes, the commanders who ordered their use and defined its scope, or who failed to stop unlawful actions, remain responsible (in this context, however, it is essential to ensure that it remains technically possible to stop a weapon's operation at any time). In turn, in the event of a hypothetical loss of control over such weapons, the state to which they belong could be held responsible for the damage in the same manner as for space objects, regardless of individual fault. Those in favour of at most regulation rather than prohibition include many European states and all states where autonomous weapons are being developed and deployed in the armed forces. However, even the opinions of the leaders in the development of these weapons differ strongly. Some, such as Australia, Israel, Russia, and the UK, see no need for further regulation. Others prefer the definition of basic standards (Poland supports this position), for example, in a protocol to the CCW, a legally non-binding political declaration (France, Germany), or a code of conduct (U.S.). In contrast, the positions of some producers opposed to a ban, such as Iran, Turkey, or South Korea, are less clear.
International Formats
The discussion on autonomous weapons at the international level has been ongoing since the 2013 publication of a report by the Special Rapporteur of the UN Human Rights Council presenting the dangers of their use. At that time, a group of NGOs, including Human Rights Watch, launched the Stop Killer Robots campaign, calling for a ban on the development and use of such weapons. In turn, at a meeting of the parties to the CCW (126 states), it was decided to convene an informal group of experts to address the use of autonomous weapons. It began meeting annually from 2014, taking the form of a formalised Group of Governmental Experts (GGE) in 2017. In 2019, the GGE announced 11 guiding principles to form the basis for further regulation. However, this was followed by an impasse, and the process has been deliberately blocked by Russia since 2022 (in the case of CCW negotiations, the opposition of one country is sufficient).
The issue is also raised in other forums. In 2018, the European Parliament adopted a resolution on autonomous weapons, calling on the EU and its members to work towards the adoption of a treaty banning these weapons outside the CCW framework. In its February 2022 resolution, however, it called only for a much narrower ban on “fully autonomous weapons” and expressed support for the work on the CCW. This suggests a growing awareness of the importance of these weapons for at least some of the EU countries. NATO also supported the CCW process in its October 2022 Autonomy Implementation Plan. In doing so, the Alliance stated that, while existing humanitarian law is fully applicable to autonomous weapons, it will further develop its 2021 Principles of Responsible Use of such weapons, emphasising the need for clear human responsibility and other guidelines. For its part, the Parliamentary Assembly of the Council of Europe, in its January 2023 resolution on autonomous weapons, emphasised that people remain morally and legally responsible for their use and expressed the view that such responsibility can be claimed not only from those who decide to use them but also from the designers, manufacturers, or developers. It also defined weapons devoid of any human control as illegal and called for the regulation of other such weapons in the form of a protocol to the CCW or a separate treaty.
Conclusions and Outlook
As long as the idea of a ban on autonomous weapons is rejected by the states whose development of them is already advanced and that are partially deploying them, including China, Israel, Russia, and the U.S., the chances of introducing one are slim. Even if a ban were enacted, these states would probably refuse to comply with it (as with the 1997 treaty banning anti-personnel mines or the 2008 convention on cluster munitions). Without a consensus involving them, it will also be difficult to reach universal agreement on the regulation of autonomous weapons, and cooperation between them is currently very difficult (not least because of the war in Ukraine and tensions over Taiwan). It is therefore to be expected that the rules for the development and use of autonomous weapons will continue to be standardised regionally, including within international organisations and alliances such as NATO. This can then serve as a basis for more detailed universal solutions. At the same time, in all likelihood, the debate on this matter will remain largely independent of that on the regulation of the civilian use of AI.
From the point of view of Poland, as a state on NATO’s Eastern Flank that is potentially more vulnerable to the use of autonomous weapons, clarification of the legal uncertainties would be a step forward. In view of Russia’s obstruction of the CCW process, it is advisable to promote the development of a common interpretation of how existing norms apply to autonomous weapons in the European and transatlantic dimensions.

