In February, DARPA issued a call for proposals for a new programme called GARD (Guaranteeing AI Robustness against Deception). It's a multimillion-dollar, four-year initiative aiming to create defences for sensor-based artificial intelligence: think facial recognition programs, voice recognition tools, self-driving cars, and weapon-detection software.
According to Protocol, 17 organisations are involved in GARD, including Johns Hopkins University, Intel, Georgia Tech, MIT, Carnegie Mellon University, SRI International and IBM's Almaden Research Center.
Intel will be leading one part of the project with Georgia Tech, focusing on defending against physical adversarial attacks.
The project is split amongst three groups. One set of organisations will look at the theoretical basis for attacks on AI: why they happen and how a system can be vulnerable. Another group will build the defences against these attacks, and the last set of teams will serve as evaluators. Every six months, they'll test the defences the others have built by throwing a new attack scenario their way and judging criteria like effectiveness and practicality.
Jason Martin, a senior staff research scientist at Intel Labs, said it was a rarity in research to be able to spend time worrying about tomorrow's problems.
"It's a nice place to be; it's not a 'panic now' sort of scenario. It's a 'calmly do the research and come up with the mitigations.'"
Intel and Georgia Tech have partnered on adversarial attack research for years. One of their focuses has been the ease with which bad actors can trick an algorithm into thinking a bird is a bicycle, for example, or mislabelling a stop sign — just by changing a few pixels.
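The sketch below is a minimal illustration of that kind of pixel-level trickery, not the teams' actual methods: it uses the well-known fast gradient sign method, which nudges each pixel slightly in whichever direction most increases a pretrained classifier's loss, so the prediction often flips while the image looks unchanged to a person. The model choice, the file name bird.jpg and the step size are assumptions made for the example.

```python
# Minimal sketch of a pixel-level adversarial perturbation (FGSM).
# Illustrative only; not the specific attack or defence work under GARD.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image


def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge each pixel a small step along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()


if __name__ == "__main__":
    # Pretrained ImageNet classifier (model choice is an assumption).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    # Input normalisation is omitted to keep the example short.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])
    img = preprocess(Image.open("bird.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image file

    original_pred = model(img).argmax(dim=1)
    adversarial = fgsm_attack(model, img, original_pred)
    adversarial_pred = model(adversarial).argmax(dim=1)
    print("label before:", original_pred.item(), "label after:", adversarial_pred.item())
```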