Apple yesterday unveiled a set of three measures designed to protect children in various ways: more information via Siri, scanning of iCloud Photos content for child abuse imagery, and a system that blocks explicit images in iMessage for users under the age of 13. These measures will arrive with iOS 15, iPadOS 15, and macOS 12 Monterey, and they extend protections for children while still respecting the confidentiality of all data.
Of the three measures Apple has presented to protect children, two interest us in this article: the detection of images of minors in iCloud Photos and the iMessage security measures against explicit photos. Let's start by explaining the latter.
Security in message communications
In a few words: the new security system in iMessage will detect and block explicit images received in conversations with children under 13. It is an optional setting that parents can enable if they wish, and it takes care of hiding inappropriate pictures in conversations.
The child, however, retains the ability to see the image. If the image is tapped, the system warns that it could be sensitive and explains the situation in three ways: sensitive photos show parts of the body that are covered by swimsuits, these images can be hurtful, and the person appearing in them may not want them to be seen. After this explanation, it offers "Not now" or "I'm sure".
By pressing the second option, the system warns that the parents "want to be sure that you are well", so they will receive a notification. It also tells the child not to share anything they don't want to share and to talk to someone they trust if they feel pressured, and finally it offers help, stating that they are not alone in these situations. After that, it offers "Don't view the photo" or "View the photo".
As we have already said, this system is designed and available only for users under 13, as long as their parents deem it appropriate to activate it. The wording, although in child-friendly language, is very clear and reveals a very simple operation: if an explicit image is detected, it is blocked, the child is given a choice, and the parents are warned if the child ends up viewing it.
This system works through machine learning on the device itself, which analyzes images received and sent via iMessage in order to present the appropriate notices. Since all processing happens locally on the device, no one, not even Apple, has access to the messages, maintaining the security of iMessage that we all rely on.
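The flow described above can be sketched in a few lines. This is purely illustrative: Apple has not published its model or any API, so the function names, the stub classifier, and the threshold value below are all assumptions, not real iMessage internals.

```python
# Hypothetical sketch of on-device image screening, NOT Apple's actual code.
# The key property is that classification runs locally: no network call is
# ever made, so the message content never leaves the device.

SENSITIVITY_THRESHOLD = 0.9  # assumed cutoff, not a documented value

def classify_locally(image_bytes):
    """Stand-in for the on-device ML model: returns a sensitivity score
    in [0, 1]. This toy version just reads the first byte so the example
    runs offline, mirroring the local-only processing described above."""
    return image_bytes[0] / 255

def handle_incoming_image(image_bytes):
    """Blur the image and show a warning if the local score is high."""
    score = classify_locally(image_bytes)
    if score >= SENSITIVITY_THRESHOLD:
        return {"blurred": True, "warning_shown": True}
    return {"blurred": False, "warning_shown": False}

print(handle_incoming_image(bytes([255])))  # flagged: blurred with warning
print(handle_incoming_image(bytes([0])))    # not flagged: shown normally
```

The point of the design, reflected in the sketch, is that the decision is made entirely from data already on the device.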
Detecting images of minors in iCloud
The other measure Apple will implement is the detection of images of minors in iCloud Photo Libraries. It is a system that, while preserving the privacy of all users, allows the competent authorities to be informed if images contrary to the law are detected.
Instead of scanning images in the cloud, with the resulting loss of privacy, Apple proposes a system that compares images against a database locally, on the device. This database is stored securely on the device and contains hashed versions of images reported by the responsible organizations, so its content is completely unreadable.
These hashes are designed in such a way that they match not only the original image but also variations of it, such as a black-and-white conversion or a cropped copy. This way, different versions of the same image can still be detected.
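To make the idea of such "perceptual" hashes concrete, here is a toy example. This is not Apple's NeuralHash (which is a neural-network-based hash); it is the much simpler "average hash" technique, shown only to illustrate how a hash can survive a visual transformation that would scramble any cryptographic hash.

```python
# Toy perceptual hash ("aHash"), for illustration only, NOT NeuralHash.
# Visually similar images produce the same fingerprint, unlike a
# cryptographic hash, where any pixel change yields a different digest.

def average_hash(pixels):
    """Hash an 8x8 grayscale image: one bit per pixel, set to 1 when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

# A small 8x8 "image" (brightness values 0-255).
image = [[(x * y) % 256 for x in range(8)] for y in range(8)]

# A uniformly brightened variant: every pixel raised by 40.
brighter = [[p + 40 for p in row] for row in image]

# The hash only records each pixel's relation to the mean, so a uniform
# brightness change leaves the fingerprint identical.
print(average_hash(image) == average_hash(brighter))  # True
```

Real perceptual hashes tolerate far heavier edits (grayscale conversion, cropping, recompression), but the principle is the same: the fingerprint captures the image's visual structure, not its exact bytes.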
The system works in such a way that, before an image is uploaded to iCloud Photos, the device generates a hash of it and checks it against the database. The result of that comparison is encrypted into a voucher that is uploaded alongside the image.
This is where another technology, called threshold secret sharing, comes in. In a nutshell, the key needed to decrypt the contents of the vouchers is divided into several pieces. Only when an account accumulates enough positive vouchers does Apple receive all the pieces of the full key, at which point it can decrypt the contents of those vouchers, as well as the images that were uploaded, and review them manually. The voucher threshold is tuned so that there is only a one-in-a-trillion chance per year of incorrectly flagging an account.
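The threshold mechanism can be illustrated with the classic scheme for this problem, Shamir's secret sharing. Note this is a generic textbook sketch, not Apple's implementation; it only demonstrates the property the article describes, that T shares reconstruct the key while fewer reveal nothing.

```python
# Minimal sketch of threshold secret sharing (Shamir's scheme): any
# `threshold` shares recover the secret; fewer give no information.
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def split_secret(secret, threshold, n_shares):
    """Hide the secret as the constant term of a random polynomial of
    degree threshold-1, and hand out one point per share."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
shares = split_secret(key, threshold=3, n_shares=10)  # e.g. one per voucher
print(reconstruct(shares[:3]) == key)  # True: any 3 shares suffice
print(reconstruct(shares[:2]) == key)  # 2 shares fail (except with
                                       # negligible probability)
```

In the article's terms: each positive voucher carries one share, and only once the account crosses the voucher threshold does Apple hold enough shares to assemble the decryption key.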
Once the voucher threshold is exceeded, Apple receives a report that it can decrypt and verify manually. If it confirms that the uploaded images match those in the database, Apple suspends the account and informs the authorities. However, it offers an appeals process to restore the account if the user believes an error has been made.
This mechanism, which only operates when iCloud Photos is enabled on the device, is intended to fully protect our privacy while still allowing illegal images to be detected on the system.
All of these protections will arrive this fall with iOS 15, iPadOS 15, and macOS Monterey. Protections that, while preserving the confidentiality of our content and our communications, can keep certain content from finding a place on the platform.
More information | Apple