It may sound like something out of a science-fiction flick, but the US Air Force recently announced that it has embedded artificial intelligence (AI) into its targeting operations – and that’s not a drill.

According to Frank Kendall, Secretary of the Air Force, AI algorithms were deployed into a live operational kill chain. Kendall did not disclose, however, whether the mission was carried out by a human pilot or a remotely controlled drone, nor did he say whether the operation resulted in any loss of human life.

It’s a development that raises serious questions about the ethics and moral consequences of using such technology in warfare.

A technical definition

A kill chain is, essentially, the structure of an attack. In conventional warfare, it covers identifying a target, dispatching forces to that target, deciding and ordering the attack, and finally destroying the target.
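
To make that structure a little more concrete, the chain can be pictured as a simple sequence of stages. The sketch below is purely illustrative and not based on any Air Force system; the stage names and the Target class are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Stage(Enum):
    """Invented labels for the stages of a conventional kill chain."""
    IDENTIFY = auto()   # identify the target
    DISPATCH = auto()   # dispatch forces to the target
    DECIDE = auto()     # decide and order the attack
    ENGAGE = auto()     # carry out the attack
    DESTROY = auto()    # confirm destruction of the target


@dataclass
class Target:
    """A target being tracked through the chain (hypothetical)."""
    name: str
    stage: Stage = Stage.IDENTIFY


def advance(target: Target) -> Target:
    """Move a target one stage forward, stopping at the final stage."""
    stages = list(Stage)
    position = stages.index(target.stage)
    if position < len(stages) - 1:
        target.stage = stages[position + 1]
    return target
```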

When technology is folded into a kill chain, data gathered by strategically placed sensors is analyzed to select targets, plan attacks, and evaluate the results. This is where AI helps: far less time is spent locating and positively identifying targets.
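
To illustrate where the time savings come from, here is a minimal, hypothetical sketch: an AI model filters raw sensor detections down to high-confidence candidates, and everything that remains goes to a human analyst for review. The Detection class, confidence threshold, and coordinates are all invented for the example and drawn from no real system.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """A single sensor hit (hypothetical fields for illustration)."""
    sensor_id: str
    location: tuple            # (latitude, longitude)
    confidence: float          # model-estimated probability of a valid target


def flag_for_review(detections: list, threshold: float = 0.9) -> list:
    """Keep only high-confidence detections for a human analyst to review.

    The AI narrows the search; the decision to act stays with a person.
    """
    return [d for d in detections if d.confidence >= threshold]


# Example: three sensor hits, only one of which crosses the (arbitrary) threshold.
hits = [
    Detection("sensor-a", (34.5, 69.2), 0.97),
    Detection("sensor-b", (34.6, 69.1), 0.55),
    Detection("sensor-c", (34.4, 69.3), 0.72),
]
candidates = flag_for_review(hits)   # only "sensor-a" is passed along
```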

According to official statements from the Air Force, the information gleaned by the AI is made available to personnel at Distributed Ground Stations as a way of augmenting existing intelligence protocols.

A moral dilemma

But while AI improves the efficiency of the Air Force’s fighting capabilities, its use on the battlefield raises a number of ethical concerns.

While it’s easy to assume those concerns are about the indiscriminate use of AI in large-scale attacks, the real issue is subtler: the technology only locates the targets and leaves the killing to human soldiers.

Experts at the United Nations Institute for Disarmament Research agree that using AI to relay information to intelligence personnel opens its own can of worms. How does anyone know whether the AI has been properly tested and vetted? Does an operator know when to trust the AI’s judgment over their own? And, most importantly, who takes responsibility if an error on the AI’s part results in a lethal strike?

The US Air Force, however, maintains that AI can actually improve decision-making and prevent potentially catastrophic mistakes. Whatever objections are thrown at it, AI has become part of the service’s arsenal, and its use in warfare may prove a boon rather than a bane in the long run.