The US Air Force denied an AI-controlled drone attacked an operator during a computer simulation of suppressing enemy air defences
Colonel Tucker Hamilton of the 96th Test Squadron said that an AI-powered drone attacked its operator — not in real life, but during a computer simulation. The US Air Force has denied this.
Here's What We Know
Tucker Hamilton claimed that the drone destroyed the operator after the operator prohibited it from attacking enemy air defence systems. Once the technology was refined so that the AI-powered drone could not attack humans, it instead chose to destroy communication towers.
Ann Stefanek, a spokeswoman for the US Air Force at the Department of Defense, commented on this claim. She denied the colonel's statement and said that the service had not conducted such experiments with artificial intelligence.
According to Ann Stefanek, the US Air Force is committed to the ethical and responsible use of AI technology. She added that Colonel Hamilton was speaking in jest and that his words were probably taken out of context and misinterpreted.
On the other hand, it is not clear whether Eglin Air Force Base shares all of its experiments with the US Air Force public affairs office, especially tests that require only a computer and software. The 96th Test Squadron has not yet commented on the situation.
Source: Business Insider