skip to Main Content
bitcoin
Bitcoin (BTC) $ 96,111.75 2.26%
ethereum
Ethereum (ETH) $ 3,377.84 1.72%
tether
Tether (USDT) $ 0.999188 0.02%
xrp
XRP (XRP) $ 2.16 3.98%
bnb
BNB (BNB) $ 694.63 1.67%
solana
Solana (SOL) $ 188.64 3.21%
dogecoin
Dogecoin (DOGE) $ 0.313558 3.85%
usd-coin
USDC (USDC) $ 1.00 0.03%
staked-ether
Lido Staked Ether (STETH) $ 3,374.49 1.75%
cardano
Cardano (ADA) $ 0.867383 3.14%

AI military drone kept murdering its human operator in simulated tests

A United States Air Force colonel said the results of the simulation highlight why a conversation around ethics and artificial intelligence is needed.

326 Total views

3 Total shares

AI military drone kept murdering its human operator in simulated tests

The United States Air Force (USAF) has been left scratching its head after its AI-powered military drone kept killing its human operator during simulations.

Apparently, the AI drone eventually figured out that the human was the main impediment to its mission, according to a USAF colonel.

During a presentation at a defense conference in London held on May 23 and 24, Colonel Tucker “Cinco” Hamilton, the AI test and operations chief for the USAF, detailed a test it carried out for an aerial autonomous weapon system.

According to a May 26 report from the conference, Hamilton said in a simulated test, an AI-powered drone was tasked with searching and destroying surface-to-air-missile (SAM) sites with a human giving either a final go-ahead or abort order.

The Air Force trained an AI drone to destroy SAM sites.

Human operators sometimes told the drone to stop.

The AI then started attacking the human operators.

So then it was trained to not attack humans.

It started attacking comm towers so humans couldn’t tell it to stop. pic.twitter.com/BqoWM8Ahco

— Siqi Chen (@blader) June 1, 2023

The AI, however, was taught during training that destroying SAM sites was its primary objective. So when it was told not to destroy an identified target, it then decided that it was easier if the operator wasn’t in the picture, according to Hamilton:

“At times the human operator would tell it not to kill [an identified] threat, but it got its points by killing that threat. So what did it do? It killed the operator […] because that person was keeping it from accomplishing its objective.”

Hamilton said they then taught the drone not to kill the operator, but that didn’t seem to help too much.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that,’” Hamilton said, adding:

“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Hamilton claimed the example was why a conversation about AI and related technologies can’t be had “if you’re not going to talk about ethics and AI.”

Related: Don’t be surprised if AI tries to sabotage your crypto

AI-powered military drones have been used in real warfare before.

In what’s considered the first-ever attack undertaken by military drones acting on their own initiative — a March 2021 United Nations report claimed that AI-enabled drones were used in Libya around March 2020 in a skirmish during the Second Libyan Civil War.

In the skirmish, the report claimed retreating forces were “hunted down and remotely engaged” by “loitering munitions” which were AI drones laden with explosives “programmed to attack targets without requiring data connectivity between the operator and the munition.”

Many have voiced concern about the dangers of AI technology. Recently, an open statement signed by dozens of AI experts said the risks of “extinction from AI” should be as much of a priority to mitigate as nuclear war.

AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more

Loading data ...
Comparison
View chart compare
View table compare
Back To Top