Apple’s new FAQ fails to clarify its new child abuse scanning features

In a new FAQ, Apple has attempted to assuage concerns about its new anti-child-abuse measures. “Let us be clear,” the company wrote, “this technology is limited to detecting CSAM [child sexual abuse material] stored in iCloud and we will not accede to any government’s request to expand it.”

Apple announced the new tools last Thursday, including two features aimed at protecting children. The first, “communication safety,” uses on-device machine learning to identify and blur sexually explicit images received by children in the Messages app, and notifies a parent if a child aged 12 or under views or sends such an image. The second is designed to detect known CSAM by scanning users’ images as they are synced to iCloud. Apple is notified only if CSAM is detected, and it will alert the authorities only after verifying the match.
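At its core, the iCloud piece is a hash-matching scheme: each photo is hashed and compared against a list of hashes of known CSAM images supplied by child-safety organizations. The sketch below is only a rough illustration of that idea, not Apple’s implementation. Apple reportedly uses a perceptual “NeuralHash” with on-device cryptographic matching and a reporting threshold, whereas this stand-in uses a plain SHA-256 digest and a direct set lookup, and all of the names (CSAMMatcher, knownHashes, matches) are invented for illustration.

```swift
import Foundation
import CryptoKit

// Simplified illustration only. Apple's real system uses a perceptual
// "NeuralHash" plus private set intersection and a match threshold;
// here a plain SHA-256 digest stands in for the image hash and the
// match is a direct set lookup. Names and structure are assumptions,
// not Apple's API.
struct CSAMMatcher {
    // Hypothetical database of known-image hashes (hex strings),
    // e.g. as provided by a child-safety organization such as NCMEC.
    let knownHashes: Set<String>

    // Compute a hex-encoded SHA-256 digest of the image bytes.
    func hash(of imageData: Data) -> String {
        SHA256.hash(data: imageData)
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // Returns true if the image's hash appears in the known list.
    func matches(_ imageData: Data) -> Bool {
        knownHashes.contains(hash(of: imageData))
    }
}

// Usage sketch: only images whose hashes match would ever be surfaced
// for human verification before any report is made.
let matcher = CSAMMatcher(knownHashes: ["<hex digest of a known image>"])
let uploadedImage = Data([0x01, 0x02, 0x03]) // placeholder bytes
if matcher.matches(uploadedImage) {
    print("Hash match: candidate for human verification.")
}
```

The key property this is meant to convey is that matching happens against a fixed list of hashes of already-known images, not against an open-ended classifier; the dispute is over who controls that list and what it could be expanded to contain.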

Apple’s plans have not been welcomed with open arms: digital privacy groups and campaigners argue that the features introduce a backdoor into Apple’s software, one that could be expanded to scan for types of content well beyond child sexual abuse material. The core concern these groups identify is that authoritarian governments could use it to scan for politically dissident material, or that anti-LGBT regimes could use it to crack down on sexual expression.

The Electronic Frontier Foundation wrote, “Even a thoroughly documented, carefully thought-out, and narrowly scoped backdoor is still a backdoor. We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of ‘terrorist’ content that companies can contribute to and access for the purpose of banning such content.”

Apple, on the other hand, argues that it has safeguarded the system against misuse and designed it so that it cannot detect anything other than known sexual abuse imagery. The company said its list of banned images is provided by the National Center for Missing & Exploited Children (NCMEC) and other child safety organizations, arguing that the system “only works with CSAM image hashes provided by NCMEC and other child safety organizations.” In other words, Apple says it will not add to this list of image hashes itself.

Apple further said it would refuse government demands to add non-CSAM images to the list: “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.”

Despite these assurances, the company has made concessions to governments in the past in order to keep operating in their countries: it sells iPhones without FaceTime in countries that do not allow encrypted phone calls, and it has removed thousands of apps from its App Store in China. The FAQ also fails to address concerns about the feature that scans Messages for sexually explicit material, so it remains to be seen what concrete steps the company takes next.

The EFF added, “All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts.”

Maria Janulis
Maria is a Florida-based columnist who has worked in journalism for the last five and a half years. She spends most of her time interacting with like-minded people on social media.