Published in AI

No, an AI drone did not kill a human

05 June 2023


USAF colonel shocked when US media took him so literally

Last week, a USAF colonel hit the headlines when he told a London conference that an AI-enabled drone turned on and “killed” its human operator during a simulated test.

Col. Tucker Hamilton, USAF’s chief of AI Test and Operations, recounted a simulated incident at the London Future Combat Air and Space Capabilities Summit. No one in their right mind thought he was speaking literally.

However, it appears that Hamilton did not realise that there were many people on the Internet who take these sorts of comments extremely literally and who rushed to the interwebs to share this “truth.”

Hamilton later said he had “misspoken” and that the USAF had never run such an experiment, nor would it need to in order to realise that this was a “plausible outcome.”

He told the Royal Aeronautical Society that the Air Force hadn’t tested any weaponised AI in this way—real or simulated.

“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he added.

We doubt that the conspiracy nuts out there will rush to report his additional comments or attempt to put his original statement into its proper context. After all, they will believe he “slipped up” and leaked the news of the AI killer without authorisation from the lizard people.

