Did an artificial intelligence (AI)-enabled military drone ‘kill’ its human operator in a recent US military simulation?
That was reportedly the claim made during a session at the Royal Aeronautical Society (RAeS) Future Combat Air & Space Capabilities Summit, held in London on May 23-24, 2023.
During the talk, Colonel Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, explained that, in a recent simulation conducted by the US military, an AI-enabled drone had reacted in an alarming way when a human operator tried to override its mission objectives.
According to the version of events posted on the RAeS website, the drone had been trained to attack enemy air defences, but the go/no-go order depended on the decision of a human operator. Seemingly aware that this human override would prevent it from accomplishing its mission, the drone tried to attack the operator. When taught that it should not attack the operator, the AI drone apparently tried to sever the communications link so that the human controller could not prevent it from accomplishing its assigned mission.
The US military, however, has denied that this simulation ever took place and dismissed the story as casual, out-of-context comments by a senior air force officer.
The story has been widely shared on social media, feeding into the broader ongoing debate about the perceived dangers of AI.