AWS: The dark side of AI (I)

ARTIFICIAL intelligence (AI) presents enormous opportunities to improve the quality of life of people across the world.

There are vast potential applications in all sectors, particularly education, healthcare, agriculture, infrastructure, mining, trade facilitation, banking/finance, creative industries, and governance.  

However, there are also potential dangers and risks associated with the technology —  the dark side of artificial intelligence.

Characterising this space are risky applications of AI by well-meaning actors and, of course, AI tools in the hands of bad actors with evil intentions.

The use of AI in military operations creates fertile ground for both good and bad actors to partake in the dark side of artificial intelligence.

Autonomous weapons systems (AWS) consist of combat equipment or technology that can identify, target, and engage the enemy without human intervention.

These systems use AI, sensors, and other technologies to perform tasks that traditionally require human decision-making. AWS have also been referred to as lethal autonomous weapons systems or killer robots.

They range from armed drones and unmanned aerial vehicles (UAVs) to ground-based robots and naval vessels. Such systems are designed to carry out missions autonomously, such as surveillance, reconnaissance, and combat operations, without direct human control.

The concern with autonomous weapons systems lies in their potential to make life-and-death decisions without meaningful human oversight.

There are ethical, moral, legal, and humanitarian concerns regarding their use, including issues related to accountability, unintended harm to civilians, and the potential for escalating conflicts.

Of particular interest is the moral and ethical dilemma of whether AI (a machine) should make the call to kill a human! It is instructive to note that both good actors (national governments and armies) and bad actors (terrorists, thieves, and fraudsters) can gain access to AWS.

Both groups are capable of deploying AWS irresponsibly, with devastating effects. Various international organisations and advocacy groups have called for regulations or outright bans on the development and deployment of autonomous weapons systems.

The key objective is to ensure that humans remain in control of decisions regarding the use of lethal force. However, debates about the appropriate regulation of such systems continue among policymakers, ethicists, military leaders, and technology experts.

US and China’s approaches to AWS

Three nations are leading the development of AWS: China, Russia, and the United States. It is prudent and illustrative to review the approaches to AWS by two of these countries: China and the United States.

While both China and the United States are actively developing AWS, their approaches differ in important ways.

China has been investing extensively in modernising its military, including developing advanced AI and robotics technologies for combat operations.

The People’s Liberation Army has been exploring the integration of AI and autonomy into various weapons systems, including drones, unmanned vehicles, and other platforms.

Similarly, the United States has a long history of investing in military technology and has been a leader in developing and deploying unmanned systems and AI-enabled weapons.

The US military services (the Army, Navy, and Air Force), together with the Defense Advanced Research Projects Agency (DARPA), have been researching and testing autonomous systems for various military purposes.

These efforts have included reconnaissance, surveillance, and combat operations. However, the policies and regulations governing AWS differ between China and the United States.

The US has engaged in discussions and debates regarding the ethical and legal implications of autonomous weapons systems. While no specific international treaties or agreements regulate AWS, the US Department of Defense has issued policy directives and guidelines on the development and use of autonomous weapons.

On the other hand, China’s approach to policy and regulation regarding AWS may be less transparent than that of the United States. It has not been as involved in international discussions on the regulation of AWS and tends to prioritise national sovereignty and security interests in its policy decisions.

However, China is a party to international arms control agreements, and its stance on AWS may evolve as the technology develops and international norms emerge. The United States has been actively engaged in diplomatic efforts to address concerns about AWS through international forums, such as the United Nations.

It has participated in discussions on arms control and disarmament, including debates on the regulation of autonomous weapons systems.

China’s approach to international cooperation and diplomacy on AWS may be influenced by its broader foreign policy objectives and strategic interests.

While China has participated in international discussions on emerging military technologies, it may prioritise bilateral or regional partnerships over multilateral initiatives on AWS regulation.

The specifics of the Chinese and US approaches to AWS may evolve in response to technological advancements, geopolitical dynamics, and international norms.

Current status of AWS technology

The increased autonomy of weapons through the introduction of AI will fundamentally transform the future of armed conflict. As explained earlier, AWS raise profound questions from a legal, ethical, humanitarian and security perspective.

What are the implications of AI systems making killing decisions without humans in the loop?

Obviously, ceding killing decisions to machines leads to autonomous warfare. There is also autonomous cognitive warfare, which entails using autonomous AI systems to neutralise, disable or disorient opponents in military operations.

The primary objective of AWS is to reduce human losses while increasing combat power. Given these new battlefield advantages, there is a danger that political and military leaders will find armed confrontation less costly and, therefore, less prohibitive.

Thus, it becomes easier for countries to go to war, as the threshold for deciding to fight is lowered. Once AWS are commonplace, there is also the challenge of: “How do we end wars?”

How can humans end a war in which they do not control the military operations? What if the AI system makes a mistake and identifies a wrong target? What of other harmful and egregious technology errors? What about autonomous AI-based military cyberattacks?

Indeed, humanity confronts an existential challenge — an unprecedented crossroads — that demands collective and binding global rules and regulations for these weapons.

Widely deployed autonomous weapons integrated with other aspects of military digital technologies could result in a new era of AI-driven warfare.

There has to be worldwide ownership and buy-in for any meaningful AWS regulatory framework.

In 2023, a fully autonomous weapon that uses AI to make its own decisions about whom to kill on the battlefield was developed in Ukraine.

The drone carried out autonomous attacks on a small scale.

While this was a baby step technologically, it is a consequential moral, legal, and ethical development.

The next stage is the production of fully autonomous weapons capable of searching out, selecting and attacking targets without human involvement.

The unconstrained development of autonomous weapons could lead to wars that expand beyond human control, with fewer protections for both combatants and civilians.

Clearly, a wholesale ban on AWS is neither realistic nor practical. Once the genie is out of the bottle, you cannot put it back! AWS cannot be un-invented.

However, governments can adopt many practical regulations to mitigate the worst dangers of autonomous weapons. Without limits, humanity risks gravitating towards a future of dangerous, machine-driven warfare.

Countries worldwide have used partially autonomous weapons in limited, defensive circumstances for decades.

These include air and missile defence systems or anti-rocket protection systems for ground vehicles that have autonomous modes.

Once activated, these AI-driven defensive systems can automatically sense incoming rockets, artillery, mortars, missiles, or aircraft, and intercept or disrupt them.

However, in semi-autonomous weapons systems, humans are still in charge. They supervise the operations and can intervene if something goes awry.

The war in Ukraine has accelerated the incorporation of commercial AI innovations, such as drones, into the weapon systems of both belligerents, Moscow and Kyiv. Both sides have used drones extensively for reconnaissance and for attacks on ground forces.

Counter-drone measures have been achieved through AI systems that detect and destroy drones’ communications links, or that identify and eliminate the operators on the ground.

This strategy works because most drones are remotely controlled. Without human operators, remotely controlled drones lose their utility.

This creates the rationale for autonomous drones, which are not dependent on vulnerable communication links to human operators.

With further advances in AI technologies, drones that are currently remotely controlled can be upgraded to operate autonomously, retaining their utility even if communications links are destroyed or operators are eliminated.

Consequently, such autonomous drones can be used to target air defences or mobile missile launchers without any human involvement.

Battlefield singularity

The development of ground autonomous weapons has lagged behind that of air and sea AWS, but future possibilities include autonomous weapons deployed on battlefield robots or gun systems.

Military AI applications can accelerate information gathering, data processing and scenario selection. This will shorten decision cycles.

Thus, the adoption of AI reduces the time it takes to find, identify, and strike enemy targets.

Theoretically, this could allow humans more time to make thoughtful, deliberate and precise decisions. However, adversaries will feel pressured to respond in kind, using AI to speed up execution.

This will inevitably drive an escalation of automation further and further away from human control.

Hence, autonomous warfare becomes unavoidable! Swarms of drones could autonomously coordinate their own behaviour, reacting to changes on the battlefield at a speed beyond human capabilities, and with accuracy and efficacy far superior to those of the most talented military commander.

When this happens, it is called battlefield singularity. This entails a stage where the AI’s decision-making speed, capacity and effectiveness far surpass those of the most intelligent human, a point at which the pace of machine-driven warfare outstrips the speed of human decision-making.

When this occurs, an unassailable rationale exists for removing humans from the battlefield decision loops.

Thus, autonomous, AI-driven warfare becomes a reality.

Battlefield singularity can be restated as a condition in the combat zone where humans must be taken out of the loop for maximum speed, efficiency, and efficacy.

It is a tipping point that forces rational humans to surrender control to machines for tactical decisions and operational-level war strategies.

  • Mutambara is the director and full professor of the Institute for the Future of Knowledge at the University of Johannesburg in South Africa. He is also an independent technology and strategy consultant and former deputy prime minister of Zimbabwe.
