
Gates says AI risks manageable

17 July 2023


It is not as grim as all that 

Software King of the World Sir William Gates III said that life under our AI overlords will not be as bad as all that. 

He said that there were more reasons than not to be optimistic that we can manage the risks of AI while maximizing its benefits.

"One thing that's clear from everything that has been written so far about the risks of AI — and a lot has been written — is that no one has all the answers. Another thing that's clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed," Gates said.

Gates said that many of the problems caused by AI have historical precedents. For example, AI will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, computers in the classroom. We can learn from what has worked in the past.

He pointed out that many of the "problems" caused by AI can also be managed with the help of AI.

"We'll need to adapt old laws and adopt new ones — just as existing laws against fraud had to be tailored to the online world," Gates said.

Gates said that governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology.

However, Gates didn't think it would all be plain sailing: AI needs to be taught to recognise its own hallucinations.

"OpenAI, for example, is doing promising work on this front," he said.

Gates believes AI tools can be used to plug AI-identified security holes and other vulnerabilities — and does not see an international AI arms race.

"Although the world's nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency."

He's "guardedly optimistic" about the dangers of deepfakes because "people are capable of learning not to take everything at face value" and because AI "can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the government agency DARPA is working on technology to identify whether video or audio has been manipulated."

"It is true that some workers will need support and retraining as we make this transition into an AI-powered workplace. That's a role for governments and businesses, and they'll need to manage it well so that workers aren't left behind — to avoid the kind of disruption in people's lives that has happened during the decline of manufacturing jobs in the United States."

 
