Apple To Begin Scanning iOS and iCloud for Harmful Content

Apple's protective measures for children might open a backdoor to worse

Apple will begin scanning photos stored on its cloud storage service, iCloud, as part of the upcoming iOS 15, iPadOS 15, watchOS 8, and macOS Monterey updates.

The feature, neuralMatch, is designed to scan photos and other material stored in the cloud for evidence of child abuse. When it detects a match, it alerts a team of human reviewers, who determine whether or not law enforcement should be involved.
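Apple hasn't published how neuralMatch works internally, but hash-matching systems of this kind usually follow the same broad shape: fingerprint each uploaded photo, compare the fingerprints against a database of known-abuse hashes, and escalate to human review once matches pass a threshold. The Swift sketch below is purely illustrative under that assumption; the PhotoScanner type, the hash values, and the threshold are all hypothetical.

```swift
// Purely illustrative -- Apple has not published neuralMatch's internals.
// Assumption: uploads are fingerprinted and compared against a database of
// known hashes, with matches escalated to human review past a threshold.
struct PhotoScanner {
    let knownHashes: Set<String>   // hypothetical database of flagged hashes
    let reviewThreshold: Int       // matches required before escalating

    func needsHumanReview(photoHashes: [String]) -> Bool {
        // Count uploads whose fingerprint matches a known hash.
        let matches = photoHashes.filter { knownHashes.contains($0) }.count
        // Human reviewers, not the system, then decide whether
        // law enforcement should be involved.
        return matches >= reviewThreshold
    }
}

let scanner = PhotoScanner(knownHashes: ["a1b2", "c3d4"], reviewThreshold: 1)
print(scanner.needsHumanReview(photoHashes: ["ffee", "a1b2"]))  // true
```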

This also applies to Siri, which will “intervene” whenever someone searches for such content on those systems. Siri will also assist in reporting content by pointing users to the appropriate websites.

With the scan in place, Apple will also begin warning younger users and their parents in the Messages app when something isn't suitable to be sent. Using machine learning, Apple intends to prevent explicit content from being sent to or from its younger customers; if a parent is connected via parental controls, they'll receive a notification, and the child will be warned that the notification is about to be sent.
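Apple hasn't detailed this mechanism either, but the behaviour it describes maps to a simple decision flow: an on-device classifier scores an image, and the score drives the warning and parental notification. The sketch below is an assumption-laden illustration; MessageFilter, the score, and the threshold are all hypothetical.

```swift
// Purely illustrative -- not Apple's implementation. Assumes an on-device
// classifier score drives the warning/notification flow described above.
struct MessageFilter {
    let explicitThreshold: Double  // hypothetical classifier cutoff

    func handleImage(score: Double, parentLinked: Bool) {
        // Below the cutoff, the image is delivered with no warning.
        guard score >= explicitThreshold else { return }
        print("Child warned: this image may not be suitable.")
        if parentLinked {
            // Per Apple's description, the child is told before the parent is notified.
            print("Child informed that a parent will be notified.")
            print("Notification sent to the linked parent account.")
        }
    }
}

MessageFilter(explicitThreshold: 0.8).handleImage(score: 0.92, parentLinked: true)
```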

Apple and Privacy

In 2016, Apple rejected the FBI’s request for a ‘backdoor’ into the San Bernardino shooter’s phone. Now, the Electronic Frontier Foundation (EFF) has raised concerns that this Apple-approved backdoor could be used by certain governments to censor users sharing LGBT+ content or to search for dissenting members of the population.

In a 2019 article, the EFF explained that adding a client-side scanning service for explicit content is a sure-fire way to break privacy protections.

In its 2021 article on Apple's new system, the EFF reiterates that this breaks the promise Apple has repeatedly stood by in regard to privacy.

Once opened, a backdoor can never be locked again, and a system intended to prevent nefarious behaviour can itself be put to nefarious use, turning a once-secure platform into something that actively harms its users.

Apple’s Accidental End of End-to-End Encryption

iMessage (or just Messages) is no longer the secure service it purports to be. The detection method gives Apple greater insight into its customers’ data and content; while the intention is good, the method is likely to be abused by powers above Apple in parts of the world less interested in ensuring the safety of children than in oppressing others.

However, speaking with The Guardian, Hany Farid – developer of PhotoDNA, a similar piece of software – says he isn’t worried about the inclusion, given that other programs already use comparable scanning, including WhatsApp, which scans messages for harmful links and files.

Meanwhile, Matthew Green, a professor of cryptography at Johns Hopkins University, claims that this is only the beginning of ‘mission creep’, a term describing objectives that gradually expand over time.

In a Twitter thread, Green also raises the fact that the American government has asked for this kind of access on behalf of other nations, letting them through the backdoor for ‘security’ purposes.

WhatsApp Weighs In on Apple Privacy

In a lengthy Twitter thread, Will Cathcart, the head of WhatsApp, commented on Apple’s photo-scanning plans, calling them a “surveillance system”.

He states that WhatsApp has managed to report over 400,000 incidents of child abuse shared on the platform over the last year without breaking end-to-end encryption. He fails to mention, however, that WhatsApp itself was recently in the news, with The Information reporting that Facebook (WhatsApp’s parent company) was trying to circumvent end-to-end encryption for advertising purposes.

Cathcart rejects the notion, but Facebook staff aren’t exactly the most trusted at the moment, considering the company’s banning of political researchers in the US (among other things), while none of the participants in that research were banned.