Published in AI

Microsoft developing a tool to fix malfunctioning AI

28 May 2018

Been watching Westworld

It appears that the software boffins at Microsoft have been watching Westworld and realised that they could have a problem on their hands if AI gets out of control.

Microsoft is working on a tool to tackle the issue of subpar data that mirrors the worst of society's prejudices or unfair perspectives. MIT Technology Review said that Vole wants to create a tool that will detect and alert people to AI algorithms that may be treating them unfairly based on their race or gender.

Microsoft's new algorithm for finding biased algorithms can only find and flag existing problems. That means programs that can lead to increased police prejudice, for example, will still be built and used, though perhaps not for as long as they would be if the bias went undetected.
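To give a flavour of what "flagging" bias can mean in practice, here is a minimal sketch of a demographic-parity check of the sort such a tool might run over a model's decisions. Everything in it — the function names, the group labels, and the 80 percent threshold (the so-called "four-fifths" rule of thumb) — is an illustrative assumption, not a detail of Microsoft's actual product.

```python
# Illustrative sketch only: a simple demographic-parity check.
# Names, data, and the 0.8 threshold are assumptions for the example,
# not details of Microsoft's tool.

def positive_rate(decisions):
    # Fraction of cases that received the favourable outcome (1).
    return sum(decisions) / len(decisions)

def flag_bias(decisions_by_group, threshold=0.8):
    # Flag any group whose favourable-outcome rate falls below
    # `threshold` times the best-treated group's rate.
    rates = {g: positive_rate(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r < threshold * best]

# Example: group B is approved far less often than group A.
decisions = {
    "A": [1, 1, 1, 0, 1, 1, 1, 1],  # 7 of 8 approved
    "B": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 approved
}
flagged = flag_bias(decisions)
# flagged -> ["B"]
```

Note that a check like this only detects a disparity after the fact; it says nothing about why the model behaves that way, which is exactly the limitation described above.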

One of the boffins behind the tool, Rich Caruana, said: "Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models."

Caruana says Microsoft's bias-catching product will help AI researchers catch more instances of unfairness, although not all of them.

“Of course, we can’t expect perfection—there’s always going to be some bias undetected or that can’t be eliminated—the goal is to do as well as we can”, he said.


Last modified on 28 May 2018
