
Microsoft wants to purge deepfakes from Bing

06 September 2024


Teams up with StopNCII

Software King of the World Microsoft has announced a partnership with StopNCII to help remove non-consensual intimate images, including deepfakes, from its Bing search engine.

When a victim opens a "case" with StopNCII, the tool creates a digital fingerprint, or "hash," of the intimate image or video directly on their device, without uploading the file itself.

This hash is then sent to participating industry partners, who can search for matches and remove them from their platforms if they violate content policies. This process also applies to AI-generated deepfakes of real people.
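The flow described above can be sketched in a few lines. This is a minimal illustration, not StopNCII's actual implementation: the real system uses perceptual hashing (so visually similar copies also match), whereas this sketch uses a simple cryptographic hash, and all function names here are hypothetical.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Compute a digital fingerprint (hash) of a media file.

    Only this hash is shared with partner platforms; the file itself
    never leaves the victim's device. (Illustrative: real systems use
    perceptual hashes so near-duplicate images also match.)
    """
    return hashlib.sha256(media_bytes).hexdigest()

# A participating platform keeps the set of hashes reported via cases.
reported_hashes: set[str] = set()

def open_case(media_bytes: bytes) -> str:
    """Victim-side step: hash the file locally and submit the hash."""
    h = fingerprint(media_bytes)
    reported_hashes.add(h)
    return h

def should_remove(media_bytes: bytes) -> bool:
    """Platform-side step: check uploaded content against reported hashes."""
    return fingerprint(media_bytes) in reported_hashes
```

Because only hashes travel between parties, a platform can detect exact re-uploads of a reported file without ever seeing the original image.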

Several other tech companies have jumped on the bandwagon, agreeing to work with StopNCII to scrub intimate images shared without permission. Meta helped build the tool and uses it on Facebook, Instagram, and Threads. Other services that have partnered with the effort include TikTok, Bumble, Reddit, Snap, Niantic, OnlyFans, PornHub, Playhouse, and Redgifs.

Absent from this list is Google. The tech giant has tools for reporting non-consensual images, including AI-generated deepfakes. However, by not participating in one of the few centralised efforts for scrubbing revenge porn and other private images, Google arguably places an additional burden on victims, forcing them to take a piecemeal approach to reclaim their privacy.

In addition to efforts like StopNCII, the US government has taken steps this year to address the harms caused by deepfake non-consensual images specifically. The US Copyright Office has called for new legislation on the subject, and a group of Senators introduced the NO FAKES Act in July to protect victims.
