
EU to ban AI for mass surveillance or for ranking social behaviour

15 April 2021


Thanks to Brexit, the UK can still do it


The European Union is poised to ban governments and companies from using AI for mass surveillance or for ranking social behaviour.

Companies developing such AI could face fines of as much as four percent of global revenue if they fail to comply with new rules governing these software applications.

The rules are part of legislation set to be proposed by the European Commission, the bloc's executive body. The details could change before the commission unveils the measure, which is expected to be as soon as next week.

Under the EU proposal, AI systems used to manipulate human behaviour, exploit information about individuals or groups of individuals, carry out social scoring, or conduct indiscriminate surveillance would all be banned in the EU.

Remote biometric identification systems used in public places, like facial recognition, would need special authorisation from authorities.

AI applications considered to be 'high-risk' would have to undergo inspections before deployment to ensure systems are trained on unbiased data sets, in a traceable way and with human oversight.

High-risk AI would cover systems that could endanger people's safety, lives or fundamental rights, as well as the EU's democratic processes -- such as self-driving cars and remote surgery, among others.

Some companies will be allowed to undertake assessments themselves, whereas others will be subject to third party checks. Compliance certificates issued by assessment bodies will be valid for up to five years.

The rules will apply equally to companies based in the EU or abroad.

 
