AI recruitment tools that depend on algorithms that read a person’s vocabulary, speech patterns, and even microchanges in facial expressions were supposed to help HR assess a massive pool of job applicants for the right “culture” fit and personality type.
But a new study published in the journal Philosophy & Technology argues that some uses of AI in recruitment are little more than an "automated pseudoscience" akin to physiognomy and phrenology, the discredited beliefs that character can be inferred from facial features and skull shape.
Boffins from Cambridge's Centre for Gender Studies found that an AI tool dubbed the "Personality Machine" returned different personality assessments in response to superficial changes in clothing, facial expression, background, and lighting.
Using AI recruitment tools to narrow candidate pools ultimately promotes uniformity rather than diversity in the workplace, because the tools are calibrated to search for the employer's fantasy "ideal candidate", which has usually meant the right man for the job.
Dr. Eleanor Drage, one of the study’s co-authors, believes that by claiming that sexism, racism, and other types of discrimination can be removed from the hiring process using AI tools, employers reduce gender and race to “insignificant data points” rather than systems of power.
The researchers warn that these AI tools are dangerous examples of "techno-solutionism": turning to technology for quick fixes to deep-rooted discrimination problems that actually require changes in company culture.