
Killer Robots: How should we face them?



When discussing weaponry, the moment artificial intelligence is introduced into the equation is the moment alarm bells start ringing regarding the dreaded ‘killer robots’. To use more professional terminology, Lethal Autonomous Weapons Systems (LAWS) are currently among the most immediate yet most uncertain concerns of global governance.

On what grounds and with what qualifications can we - or should we - craft policy surrounding weaponised AI? Shall we be pragmatists, or idealists? How do we avoid fearmongering yet curb the wanton development of systems that challenge us on moral and ethical grounds? This governance issue is less a question of technological capacity than of our legal and ethical principles regarding the conduct of warfare.

Before discussing current and future attempts at governance, it is imperative to understand the key arguments for and against the use of LAWS in combat zones. The arguments in favour of these weapons are largely utilitarian, though most carry a moral undertone.

Robots mean fewer boots on the ground, with greater efficacy. Fewer, more effective warriors mean fewer casualties, both civilian and military. The human factors that can hinder soldiers - self-preservation, fear, hysteria, the flood of incoming sensory information - may be discounted completely: the processing of new information comes down to algorithms, not emotions, instincts or preconceptions. Further, as in sectors as diverse as mining and space exploration, robots are ready to do the dirty, dangerous jobs in place of humans, with greater efficiency and effectiveness yet at much lower risk.

But these arguments do not account for some of the moral and legal implications of LAWS use. The very concept of robots killing robots leaves room for the notion of perpetual war. Beyond that, we have no precise definition of the technology that constitutes LAWS - of what is or isn't permissible, and where to draw the line.

For example, close-in weapons systems (CIWS) have been operational for years in many navies. These systems, such as the well-known Phalanx, are designed as a last line of defence for warships, shooting down incoming missiles. Yet when the discussion turns to LAWS, systems like these are rarely counted as particularly problematic. Without such a distinction, it becomes difficult to draw any lines - and therefore to craft any legislation beyond guiding standards.

One of opponents' central objections concerns autonomy - the selecting and engaging of targets without human intervention. This so-called 'lethal autonomous targeting' raises serious moral questions surrounding the principle of accountability. How does it factor in collateral damage, or civilians? At what point does an action result from a fault in programming, and at what point from the outcome of autonomous, algorithmic decision-making?

That is, we must ask whether accountability lies at the programming stage, or at the critical moment. And even then, does responsibility rest with the officer who ordered the strike, the one who contracted the developer, or the developer themselves - who may well be a civilian who never created the technology with weaponisation in mind? Pinning down human involvement remains at the heart of the argument, and so we must be prepared to deal with the new human challenges of people making decisions of life and death from behind a computer screen.

There has naturally been massive public backlash against LAWS, and this has become a driver in determining the future of their governance. At the 2018 International Joint Conference on Artificial Intelligence, the Future of Life Institute launched a pledge against the development and use of LAWS, backed by industry heavyweights such as Google DeepMind, the XPRIZE Foundation, Elon Musk, UCL, and others. Many of the same voices first articulated these concerns in a 2015 open letter, which counted Stephen Hawking and Noam Chomsky among its signatories.

With all this in mind, a good starting point for a guiding framework may be found in the EU, which has drafted a strategy on AI due to be implemented by the end of 2018, based on four key principles: compliance with international law; human control over the use of lethal force and accountability for decisions of life and death; adherence to the UN Convention on Certain Conventional Weapons; and recognition of the dual use of emerging technology, so that civilian R&D is not hampered.

So, shall we follow a principle of 'wait and see', or should we take pre-emptive legislative measures? The EU's proposed guidelines are part of a broader effort to establish guiding norms for R&D - wide-reaching norms that must be respected by both civilian and military actors.

International law as we know it arose out of norms. Governance of weaponised AI - sensitive as it is to cross-sector relevance, ethics, and sovereignty - may need to undergo a similar process in order to establish a framework within which to manage policy. Policymakers can only hope that the R&D of these systems doesn't outflank them - which, almost fittingly, is rather a Catch-22.
