Photo/Illustration: Government representatives discuss guidelines for lethal autonomous weapons systems at the United Nations Office in Geneva on Aug. 20. (The Asahi Shimbun)

International efforts to set rules on whether machines should be allowed to decide on their own to take human lives are entering a crucial stage.

A panel of experts recently met in Geneva to discuss the regulation of robotic weapons and compiled a report on their three years of discussions. They presented a set of guidelines saying, among other things, that international humanitarian law should apply to the operation of robotic weapons and that humans should remain responsible for their use.

Those are the right conclusions.

The panel, however, failed to agree that the regulations should be legally binding, for example in the form of a convention.

The experts on the panel say they will continue with their talks.

They should deepen their discussions further, with the goal of establishing an effective and concrete regulatory framework.

The discussions target weapons systems equipped with artificial intelligence and designed to kill or wound enemies autonomously. These lethal autonomous weapons systems (LAWS) are also dubbed “killer robots.”

There is a deep rift of opinion between countries that are developing such weapons, including the United States and Russia, and countries calling for a ban treaty, including those in Latin America and Africa.

Nations in both camps mostly agree that humans should be involved in the use of robotic weapons, but their understandings of what that “involvement” means remain far apart.

Some argue, for example, that robots may be allowed to make individual decisions and movements on their own on the battlefield, provided that a human commander has given a comprehensive direction or order.

Deploying such robots in a conflict zone would bring to life a movie-like scene in which weapons that have no qualms about killing or wounding fight flesh-and-blood humans.

The question here, a profoundly existential and ethical one, is what war without a modicum of humanity would mean, and whether society should allow it.

There is also an argument that the use of robotic weapons would, in fact, facilitate compliance with international humanitarian law.

Its proponents say such weapons would improve the precision of target identification and attack, reducing the killing or wounding of unintended targets. They also say that detailed records would facilitate investigations into, and reports on, illegal acts, making warfare more humane after all.

The deeper an AI system's learning functions become, however, the more it resembles a black box: humans can never understand on what grounds the system has identified a target or made a decision.

Some also point out that biased data in the learning process could cause an AI system to make wrong decisions. In addition, more than a few experts are concerned that robots could behave unpredictably on real battlefields, where conditions are chaotic and confused.

We cannot readily accept the arguments in favor of robotic weapons.

To start with, a total ban should be achieved on fully autonomous weapons, which operate independently of humans.

Once that is done, we should identify, in each phase of target selection, identification and attack, the elements that would pose a danger if left to the discretion of AI systems, and seek to impose binding regulations on them.

What is being tested here is the wisdom of humanity.

--The Asahi Shimbun, Aug. 25