Experts ridiculed British politicians' suggestion that AI could one day make decisions on the battlefield
Experts in technology law and software have discussed with the UK House of Lords the possibility of transferring responsibility for combat decisions to AI-powered weapons.
Here's What We Know
The Lords tried to persuade the experts that such weapons could be introduced gradually or cautiously, apparently fearing that the UK would otherwise lose its advantage in using AI for military purposes.
One of the Lords asked whether AI weapons could meet the criteria of distinction and proportionality. Professor Christian Enemark replied that only humans are capable of making such judgements, and that attributing autonomous action to a non-human entity is "philosophical nonsense".
AI ethics expert Laura Nolan said AI weapons cannot assess the proportionality of their actions because they do not know the strategic military value of an attack. According to her, a weapon on the battlefield knows nothing beyond a handful of images and its machine-learning processing, and it cannot understand an attack's military value, which depends on the wider strategic context.
Researcher Taniel Yusef said that the simple algorithms used to classify data and pick out targets can be wrong. A weapon's report would be based on mathematics that is correct in terms of the code but does not match reality on the ground.
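To illustrate the kind of failure Yusef describes, here is a minimal, hypothetical sketch, not from the hearing or any real system: a toy classifier whose arithmetic executes exactly as written, yet whose confident output says nothing about what is actually in front of the sensor. All names, weights, and numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical toy "target classifier": a single linear layer.
# The weights are random stand-ins, purely for demonstration.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 2))   # 4 sensor features -> 2 classes
labels = ["civilian vehicle", "military vehicle"]

def softmax(z):
    z = z - z.max()                 # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def classify(features):
    """Return (label, confidence). The arithmetic here is exact and
    deterministic -- "correct in terms of the code" -- whether or not
    the input resembles anything the model was built for."""
    probs = softmax(features @ weights)
    i = int(np.argmax(probs))
    return labels[i], float(probs[i])

# An out-of-distribution reading, e.g. an object unlike anything in
# the training data. The maths still yields a confident-looking answer.
reading = np.array([9.0, -7.5, 8.2, -6.1])
label, conf = classify(reading)
print(f"{label} ({conf:.0%} confidence)")
```

The point of the sketch is that the high confidence score is a property of the arithmetic, not of the world: nothing in the computation checks whether the label corresponds to reality.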
She expressed concern that without a legally binding instrument enshrining human control over AI weapons, this aspect would be overlooked.
Source: The Register.