Published in AI

A third of companies will be hit by deepfakes

01 February 2024


We will not know who is real

By 2026, criminals using AI-generated deepfake faces will deceive 30 per cent of businesses that rely on face biometrics for identity verification, according to Gartner.

Gartner top analyst Akif Khan said deepfakes can be used by hackers to bypass biometric security or render it useless.

"This means that businesses may start to doubt their identity checks, as they won't know if the face they see is real or not."

Face biometrics today rely on presentation attack detection (PAD) to check that the user is a live person. "But PAD can't stop digital attacks using the deepfakes that AI can make now," said Khan.

Gartner research found that presentation attacks are the most common attack vector, but injection attacks rose 200 per cent in 2023. To stop these attacks, businesses will need to combine PAD, injection attack detection (IAD) and image inspection.

To protect their businesses from deepfakes, security bosses and risk leaders must pick vendors who can show they have the skills, and a plan, to go beyond the current standards and keep an eye on these new attacks.

"Businesses should start setting a minimum level of protection by working with vendors who have spent money on fighting the latest deepfake threats using IAD and image inspection," said Khan.

Once the plan is made and the level is set, security bosses and risk leaders must add more signals, such as device ID and behaviour analysis, to spot attacks on their identity checks.
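As a rough illustration of what layering those signals might look like, here is a minimal sketch in Python. Everything in it is a hypothetical assumption for illustration: the signal names, the thresholds and the weights are invented, and no real vendor API works this way.

```python
# Hypothetical sketch: combining layered identity-verification signals
# (PAD, IAD, device ID, behaviour analysis) into one decision.
# All names, weights and thresholds below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class VerificationSignals:
    pad_score: float        # presentation attack detection: 0 = spoof, 1 = live
    iad_score: float        # injection attack detection: 0 = injected, 1 = genuine capture
    known_device: bool      # has this device ID been seen on the account before?
    behaviour_score: float  # behaviour analysis: 0 = bot-like, 1 = human-like


def decide(signals: VerificationSignals) -> str:
    """Return 'accept', 'step_up' or 'reject' for an identity check."""
    # Hard fail: a likely injected or spoofed capture is rejected outright.
    if signals.iad_score < 0.3 or signals.pad_score < 0.3:
        return "reject"
    # Fold the softer signals into a simple weighted risk score.
    risk = (
        (1 - signals.pad_score) * 0.3
        + (1 - signals.iad_score) * 0.4
        + (0.0 if signals.known_device else 0.15)
        + (1 - signals.behaviour_score) * 0.15
    )
    if risk < 0.2:
        return "accept"
    return "step_up"  # e.g. demand an additional verification factor
```

For example, a live capture on a known device with human-like behaviour (`decide(VerificationSignals(0.9, 0.95, True, 0.8))`) is accepted, while a borderline capture on an unknown device is pushed to step-up verification. The point is the architecture, not the numbers: no single check decides the outcome.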

Above all, security and risk leaders in charge of identity and access management should act now to blunt AI-driven deepfake attacks by choosing technology that can prove genuine human presence and by adding extra measures against account takeover.

Last modified on 01 February 2024