Apple has announced new technology aimed at detecting child abuse photos in users’ iCloud Photo Libraries. That is, Apple will scan the photos we upload to iCloud or send through iMessage for images that may be related to child abuse.
Since Apple is one of the companies most concerned with user privacy, a measure like this could call all that effort into question. However, the technology Apple uses promises to protect privacy unless images of child abuse, known as child sexual abuse material or CSAM, are found.
How does Apple detect CSAM? Does it violate our privacy?
Apple describes that what it will basically do is analyze images uploaded to iCloud and compare them against a database of known CSAM images to see if there is a match. The whole process is done on the device itself: Apple turns the images into hashes, which are stored securely on the device.
A cryptographic hash function, known simply as a “hash,” is a mathematical algorithm that turns any block of data into a new, distinct string of characters, making it impossible for Apple to see your images. Only when a match occurs does Apple receive a warning and manually check whether it is a real match; if so, it puts the case in the hands of the authorities.
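To get a feel for what hashing means in practice, here is a minimal Swift sketch using Apple’s CryptoKit framework. It is only a conceptual illustration with an ordinary SHA-256 hash, not Apple’s actual matching algorithm, and the helper imageHash is a hypothetical name:

```swift
import CryptoKit
import Foundation

// Hypothetical helper: reduce an image file's raw bytes to a
// fixed-length SHA-256 digest. The same input always produces the
// same digest, but the digest cannot be reversed to recover the image.
func imageHash(at url: URL) throws -> String {
    let data = try Data(contentsOf: url)
    let digest = SHA256.hash(data: data)
    // Render the 32-byte digest as a hexadecimal string.
    return digest.map { String(format: "%02x", $0) }.joined()
}
```

Because only digests like these are compared, the matching can happen without anyone looking at the photos themselves.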
Here’s how Apple explains this technology:
“Before an image is stored in iCloud Photos, a matching process is performed on the device for that image against the unreadable set of known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines whether there is a match without revealing the result. Private set intersection (PSI) allows Apple to learn if an image hash matches known CSAM image hashes, without learning anything about image hashes that do not match. PSI also prevents the user from learning whether there was a match.”
In other words, our photos will be safe and no one will be able to see them, unless false positives occur. Apple has assured that there would have to be multiple matching images in a user’s library before any report is sent. Even so, the system carries other risks.
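To make that threshold idea concrete before turning to those risks, here is a deliberately simplified Swift sketch. Real private set intersection keeps individual match results hidden from both the device and the server, so this only illustrates the counting logic; MatchChecker, its fields, and the example threshold are all hypothetical:

```swift
// Simplified sketch of threshold-based matching (not actual PSI).
struct MatchChecker {
    let knownHashes: Set<String>  // hypothetical stand-in for the CSAM hash database
    let threshold: Int            // matches required before anything is reported

    func shouldEscalate(libraryHashes: [String]) -> Bool {
        // Count how many of the library's hashes appear in the known set.
        let matchCount = libraryHashes.filter { knownHashes.contains($0) }.count
        return matchCount >= threshold
    }
}

// A single match does not trigger a report; only crossing the threshold does.
let checker = MatchChecker(knownHashes: ["abc123", "def456"], threshold: 2)
print(checker.shouldEscalate(libraryHashes: ["abc123"]))            // false
print(checker.shouldEscalate(libraryHashes: ["abc123", "def456"]))  // true
```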
The main problem with the system, raised by the Electronic Frontier Foundation, is that, after all, Apple is opening a back door by scanning our device before our photos are ever sent.
The hash database Apple uses comes from a third party: the known child sexual abuse material (CSAM) is provided by the National Center for Missing & Exploited Children. But in other countries, totalitarian governments could supply different photos to detect dissidents or locate protesters. At the end of the day, however much we trust Apple, the data is obtained from a third party.
Despite these possible flaws, it must be said that this kind of technology has been used for many years on many content platforms. Some may not like it; others may think it is a good measure but that Apple should have kept it quiet, since the people it targets have now been warned.
In any case, Apple seems proud of its technology. In an internal memo, Sébastien Marineau-Mes, Apple’s vice president of software, said he was very pleased with this new feature, which will arrive at the end of the year.
“Keeping children safe is such an important mission. In true Apple fashion, pursuing this goal has required deep cross-functional commitment, spanning engineering, GA, HI, legal, product marketing, and public relations. What we announced today is the product of this incredible collaboration, one that delivers tools to protect children while also upholding Apple’s deep commitment to user privacy.”
Measures like this are difficult to assess, and there is no truly right way to do so. Still, Apple should be applauded for trying to help fight child abuse while not compromising the privacy of its users.