
Apple admits it is a great platform for distributing child porn

23 August 2021


Gotta be the best at everything

Fruity cargo cult Apple’s anti-fraud chief Eric Friedman admitted the company is the “greatest platform for distributing child porn”.

The company has angered human rights groups and its own staff by wanting to check users' accounts for kiddie porn and report anything nasty to the authorities.

But Friedman’s comments have turned up in a pile of documents for a court case. Apparently the problem was spotted back in 2020, and his remarks raise more questions than they answer. How did he know that Apple had a kiddie porn problem unless Jobs' Mob had already been closely monitoring users?

The iMessage thread was spotted by The Verge as it works its way through the internal emails, messages, and other materials handed over by Apple as part of the discovery process in the Epic Games lawsuit.

Ironically, Friedman suggests that Facebook does a better job of detecting kiddie porn than Apple.

"The spotlight at Facebook etc. is all on trust and safety (fake accounts, etc). In privacy, they suck. Our priorities are the inverse. Which is why we are the greatest platform for distributing child porn, etc."

The Tame Apple Press has rushed to defend Friedman from spying claims by pointing out that iCloud photo storage is on by default, even if it's just the paltry 5GB the company gives everyone as standard.

This means the service may be the most-used cloud service for photos -- in contrast to competing ones where users have to opt in. Apple has said that it has been looking at the CSAM problem for some time, and was trying to figure out a privacy-protecting way to detect it. It may well be this specific conversation that led the company to prioritise these efforts.

Of course, this logic is pants. Just having a big database does not mean that your users are automatically going to fill it with kiddie porn, and 5GB is nothing when it comes to photo files. A simple scan of the database looking for porn before firing off that message would have been a much more logical way of confirming that there was a problem.

Meanwhile, the authors of the only peer-reviewed publication on how to build a system like Apple's have concluded that using such scanning technology is dangerous.

The report’s authors, Anunay Kulshrestha and Jonathan Mayer of Princeton University, said: “We're not concerned because we misunderstand how Apple's system works. The problem is, we understand exactly how it works.”

The two-year research project started as an experimental system to identify CSAM in end-to-end-encrypted online services.

“As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we're also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.”

They built a system where if someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn't read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.

However, they discovered that the system could be easily repurposed for surveillance and censorship. The design wasn't restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
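For the curious, the sketch below shows the bare bones of the matching step the researchers describe: hash the shared content and look it up in a database of known-bad hashes. It is a deliberately simplified illustration, not the Princeton or Apple design. Both of those use a perceptual hash (so re-encoded copies still match) wrapped in a cryptographic private set intersection protocol, whereas this sketch uses a plain SHA-256 lookup, and every name and hash value in it is a placeholder.

```python
import hashlib

# Placeholder database of hashes of known prohibited images.
# In a real deployment this list would come from a clearinghouse;
# the value here is made up for illustration only.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def content_hash(data: bytes) -> str:
    """Hash the raw bytes of a shared file.

    Real systems use a perceptual hash (e.g. PhotoDNA or Apple's
    NeuralHash) so that resized or re-encoded copies still match;
    SHA-256 stands in here purely to keep the sketch self-contained.
    """
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes, bad_hashes: set[str]) -> bool:
    """Return True if the uploaded content matches the blocklist.

    In the designs discussed above this comparison happens inside a
    cryptographic protocol, so the client never sees the database and
    the server learns nothing about non-matching content. That
    property is omitted here.
    """
    return content_hash(data) in bad_hashes

if __name__ == "__main__":
    sample = b"holiday photo bytes"
    print("flagged" if check_upload(sample, KNOWN_BAD_HASHES) else "clean")
```

Nothing in that matching step is specific to CSAM, which is exactly the researchers' repurposing worry: swap in a different hash set and the same machinery flags political images, leaked documents, or anything else a government fancies.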

 
