A study published Monday in the journal Proceedings of the National Academy of Sciences found that the speech recognition systems misidentified words about 19 percent of the time for white speakers. For black speakers, the error rate jumped to 35 percent. About 2 percent of audio snippets from white people were considered unreadable by the systems, according to the study, which was conducted by researchers at Stanford University; for black people, that figure rose to 20 percent.
The study, which took an unusually comprehensive approach to measuring bias in speech recognition systems, offers another cautionary sign for AI technologies rapidly moving into everyday life.
The Stanford study indicated that leading speech recognition systems could be flawed because companies train the technology on data that is not as diverse as it could be: the systems learn their task mostly from audio of white speakers and relatively little from black speakers.
The best-performing system, from Microsoft, misidentified about 15 percent of words from white people and 27 percent from black people. Apple's system, the worst performer, failed 23 percent of the time with white people and 45 percent of the time with black people.
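Error rates like these are conventionally measured as word error rate (WER): the number of word-level substitutions, insertions, and deletions needed to turn a system's transcript into the human reference transcript, divided by the length of the reference. As a minimal sketch of how such a figure is computed (the example sentences are invented, not from the study):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic-programming table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,              # substitution (or match)
                          d[i - 1][j] + 1,  # deletion
                          d[i][j - 1] + 1)  # insertion
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word and one dropped word out of five: WER = 2/5.
print(word_error_rate("he said hello to me", "he said yellow me"))  # 0.4
```

A WER of 0.35, as the study reported on average for black speakers, means roughly one word in three was transcribed wrong.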