Apple is reportedly planning to scan your iPhone for child abuse imagery, a surprising move from a company that has built its reputation on privacy-minded features such as on-device Face ID and a built-in password manager. If this plan comes to fruition, it would be a huge violation of your privacy and could lead to serious consequences.
What Is Apple Doing?
The reports of Apple’s plans originated with the Financial Times and Johns Hopkins University professor Matthew Green, both generally reliable sources. Apple reportedly demonstrated the plan to some US academics earlier this week. Of course, until Apple confirms it, there’s always a chance this doesn’t happen.
According to the reports, Apple will use a system it calls “neuralMatch” to scan American iPhones for child abuse imagery.
Essentially, an automated system would alert a team of human reviewers if it believes it has detected illegal imagery. From there, a member of the team would review the imagery and contact law enforcement.
Technically, this isn’t anything new: cloud-based photo storage systems and social networks already do this sort of scanning. The difference here is that Apple is doing it at the device level. According to Matthew Green, it will initially scan only photos uploaded to iCloud, but the scan itself will run on the user’s phone. “Initially” is the key word there, as the same mechanism could very well be used to scan all photos locally at some point.
This is supposed to make the system less invasive: the scanning happens on the phone, and results are sent back only if there’s a match, meaning not every photo you upload is subject to the eyes of strangers.
According to those at the briefing, every photo uploaded to iCloud is given a “safety voucher” indicating whether or not it is suspect. Once a certain number of pictures are marked as suspect, Apple will decrypt them and send them to the authorities if anything involving child abuse is found.
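To make that voucher-and-threshold idea concrete, here is a minimal sketch of how the logic could work. The names (SafetyVoucher, reviewThreshold, accountNeedsReview) and the threshold value are invented for illustration; Apple has not published its actual implementation.

```swift
// A rough sketch of the reported "safety voucher" idea. The names and
// threshold below are hypothetical, not Apple's actual API or numbers.
struct SafetyVoucher {
    let photoID: String
    let isSuspect: Bool   // did this photo's hash match the known database?
}

// Hypothetical threshold; Apple has not published a real value.
let reviewThreshold = 10

// Photos stay untouched until enough suspect vouchers accumulate,
// at which point the flagged material would go to human reviewers.
func accountNeedsReview(_ vouchers: [SafetyVoucher]) -> Bool {
    return vouchers.filter { $0.isSuspect }.count >= reviewThreshold
}
```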
How will the system differentiate between child abuse images and other images? According to the report, it has been tested on 200,000 sex abuse images collected by the US non-profit National Center for Missing and Exploited Children.
Images are converted into a string of numbers through hashing, and those hashes are then compared against the hashes of the pictures in the database.
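As an illustration of the general idea, the sketch below fingerprints a photo and checks it against a set of known hashes. It uses an ordinary cryptographic hash (SHA-256) and a made-up hash list purely for demonstration; Apple’s reported system relies on a perceptual, “neuralMatch”-style hash designed to survive resizing and re-encoding, which is not shown here.

```swift
import CryptoKit
import Foundation

// Hypothetical set of fingerprints for known abuse imagery. In the reported
// system this data would come from NCMEC, not be hard-coded.
let knownHashes: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
]

// Convert an image's raw bytes into a "string of numbers".
// SHA-256 is used only for illustration; a perceptual hash would be
// needed to catch resized or re-encoded copies of the same picture.
func fingerprint(of imageData: Data) -> String {
    let digest = SHA256.hash(data: imageData)
    return digest.map { String(format: "%02x", $0) }.joined()
}

// A photo is flagged as suspect only if its fingerprint matches the database.
func isSuspect(_ imageData: Data) -> Bool {
    knownHashes.contains(fingerprint(of: imageData))
}
```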
After all of this came out, Apple declined to comment to the Financial Times about what is happening. However, we presume the company is working on an official statement before the messaging around this move gets out of hand.
This Sets a Dangerous Precedent
We probably don’t need to tell you how scary this could be. People who abuse children should be caught and punished, but it’s easy to see how something like this could be used for far more invasive purposes.
Will similar technology be rolled out on other platforms, like Macs, Windows PCs, and Android phones? Could countries like China use it to detect subversive imagery on their citizens’ phones? If it becomes widely accepted, could the copyright industry use it to start scanning for pirated content in a few years?
And even if it works as advertised, will innocent people get caught in the crossfire?
Hopefully, this isn’t as concerning as it looks.