
Microsoft claims AI is neither artificial nor intelligent

08 June 2021


The Vole knows

Microsoft researcher Kate Crawford has told the Guardian that AI as we know it is neither artificial nor intelligent.

Crawford knows her brains - she studies the social and political implications of artificial intelligence, is a research professor of communication and science and technology studies at the University of Southern California, and is a senior principal researcher at Microsoft Research. She has just penned a new book, Atlas of AI, which looks at what it takes to make AI and what's at stake as it reshapes our world.

She said that AI is neither artificial nor intelligent. It is made from natural resources, and it is people who are performing the tasks to make the systems appear autonomous.

Crawford said that one of the issues with AI is “bias”, which is a very narrow term for the sorts of problems people are seeing with AI now.

“Time and again, we see these systems producing errors -- women offered less credit by credit-worthiness algorithms, black faces mislabelled -- and the response has been: 'We just need more data.' But I've tried to look at these deeper logics of classification and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world”, Crawford said.

Training datasets used for machine learning software casually categorise people into just one of two genders; label people according to their skin colour into one of five racial categories; and attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past, and unfortunately the politics of classification has become baked into the substrates of AI.

Crawford said that AI ethics are necessary but not sufficient. She thinks it is better to ask who benefits and who is harmed by a given AI system, and whether it puts power in the hands of the already powerful.

“What we see time and again, from facial recognition to tracking and surveillance in workplaces, is these systems are empowering already powerful institutions -- corporations, militaries and police”, she said.

Crawford said there needs to be stronger regulatory regimes and greater rigour and responsibility around how training datasets are constructed.

“We also need different voices in these debates -- including people who are seeing and living with the downsides of these systems. And we need a renewed politics of refusal that challenges the narrative that just because a technology can be built it should be deployed”, Crawford said.
