
AI is rubbish at killing people

14 December 2021


But thinks it is brilliant

US Air Force Major General Daniel Simpson has said that an experimental AI-based target recognition program turned out to be absolutely rubbish, but supplied data which implied it was performing far better than it actually was.

The AI was fed data from a sensor that looked for a single surface-to-surface missile at an oblique angle, Simpson said. Then it was fed data from another sensor that looked for multiple missiles at a near-vertical angle.

However, the algorithm did not perform well. In fact, it was accurate only about 25 percent of the time, Simpson said.

Simpson said the AI was a good example of brittle AI, which "occurs when any algorithm cannot generalize or adapt to conditions outside a narrow set of assumptions".

A 2020 report by researcher and former Navy aviator Missy Cummings [no really.ed] identified the issue: when the data used to train an algorithm consists mostly of one type of image or sensor data from a single vantage point, and not enough from other vantages, distances, or conditions, the result is brittleness, Cummings said. In settings like driverless-car experiments, researchers simply collect more data for training.

This is tricky in military settings where there might be a whole lot of data of one type — say overhead satellite or drone imagery — but very little of any other type because it wasn't useful on the battlefield.
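To make Cummings' point concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and labels, not anything from the programme's actual metadata) of checking whether a training set is dominated by a single vantage point:

```python
from collections import Counter

def vantage_breakdown(samples):
    """Count how many training images come from each vantage point.

    `samples` is assumed to be a list of dicts with a 'vantage' key,
    e.g. 'oblique' or 'near-vertical' -- illustrative labels only.
    """
    counts = Counter(s["vantage"] for s in samples)
    total = sum(counts.values())
    for vantage, n in counts.most_common():
        print(f"{vantage:>14}: {n:6d} ({n / total:.0%})")

# Toy data mirroring the scenario Simpson describes: plenty of oblique
# single-missile imagery, very little near-vertical multi-missile imagery.
training_set = [{"vantage": "oblique"}] * 950 + [{"vantage": "near-vertical"}] * 50
vantage_breakdown(training_set)  # oblique: 950 (95%), near-vertical: 50 (5%)
```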

Simpson said the low accuracy rate of the algorithm wasn't the most worrying part of the exercise.

While the algorithm was only right 25 percent of the time, he said, "It was confident that it was right 90 percent of the time, so it was confidently wrong. And that's not the algorithm's fault. It's because we fed it the wrong training data."
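For anyone wondering what "confidently wrong" looks like in numbers, here is a small illustrative sketch (invented figures, Python) comparing a model's average reported confidence with its actual accuracy. On a well-calibrated model the two roughly match; here they are 65 points apart.

```python
import numpy as np

# Hypothetical outputs: the model reports 90 percent confidence on every
# prediction, but only a quarter of them are actually correct -- roughly
# the mismatch Simpson describes.
confidences = np.full(100, 0.90)            # model's self-reported confidence
correct = np.array([1] * 25 + [0] * 75)     # ground truth: right 25% of the time

accuracy = correct.mean()
mean_confidence = confidences.mean()
calibration_gap = mean_confidence - accuracy

print(f"accuracy:        {accuracy:.0%}")         # 25%
print(f"mean confidence: {mean_confidence:.0%}")  # 90%
print(f"calibration gap: {calibration_gap:.0%}")  # 65% -- confidently wrong
```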
