Apple Walks a Privacy Tightrope to Spot Child Abuse in iCloud

For years, tech companies have been pulled between two impulses: the need to encrypt their users’ data to protect their privacy, and the need to detect the worst sorts of abuse on their platforms. Now Apple is debuting a new cryptographic system that seeks to thread that needle, detecting child abuse imagery stored on iCloud without, in theory, introducing new forms of privacy invasion. In doing so, it has also driven a wedge between privacy and cryptography experts who see its work as an innovative new solution and those who see it as a dangerous capitulation to government surveillance.

Today Apple introduced a new set of technological measures in iMessage, iCloud, Siri, and search, all of which the company says are designed to prevent the abuse of children. A new opt-in setting in family iCloud accounts will use machine learning to detect nudity in images sent in iMessage. The system can also block those images from being sent or received, display warnings, and in some cases alert parents that a child viewed or sent them. Siri and search will now display a warning if they detect that someone is searching for or viewing child sexual abuse material, also known as CSAM, and will offer options to seek help for that behavior or to report what was found.

But in Apple’s most technically innovative (and controversial) new feature, iPhones, iPads, and Macs will now also integrate a new system that checks images uploaded to iCloud in the US for known child sexual abuse images. That feature will use a cryptographic process that takes place partly on the device and partly on Apple’s servers to detect those images and report them to the National Center for Missing and Exploited Children, or NCMEC, and ultimately US law enforcement.

Apple argues that none of these new features for dealing with CSAM endanger user privacy; even the iCloud detection mechanism, it says, will use clever cryptography to prevent Apple’s scanning mechanism from accessing any visible images that aren’t CSAM. The system was designed and analyzed in collaboration with Stanford University cryptographer Dan Boneh, and Apple’s announcement of the feature includes endorsements from several other well-known cryptography experts.

“I believe that the Apple PSI system provides an excellent balance between privacy and utility, and will be extremely helpful in identifying CSAM content while maintaining a high level of user privacy and keeping false positives to a minimum,” Benny Pinkas, a cryptographer at Israel’s Bar-Ilan University who reviewed Apple’s system, wrote in a statement to WIRED.

Children’s safety groups, for their part, also immediately applauded Apple’s moves, arguing they strike a necessary balance that “brings us a step closer to justice for survivors whose most traumatic moments are disseminated online,” as Julie Cordua, the CEO of the child safety advocacy group Thorn wrote in a statement to WIRED.

Other cloud storage providers from Microsoft to Dropbox already perform detection on images uploaded to their servers. But by adding any sort of image analysis to user devices, some privacy critics argue, Apple has also taken a step towards a troubling new form of surveillance and weakened its historically strong privacy stance in the face of pressure from law enforcement.

“I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” says Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptography software firm Symbolic Software. “I definitely will be switching to an Android phone if this continues.”

Apple’s new system isn’t a straightforward scan of user images, either on their devices or on Apple’s iCloud servers. Instead it’s a clever—and complex—new form of image analysis designed to prevent Apple from ever seeing those photos unless they’re already determined to be part of a collection of multiple CSAM images uploaded by a user. The system takes a “hash” of all images a user sends to iCloud, converting the files into strings of characters that are uniquely derived from those images. Then, like older systems of CSAM detection such as PhotoDNA, it compares them with a vast collection of known CSAM image hashes provided by NCMEC to find any matches.
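To make the matching flow concrete, here is a minimal sketch of hash-based lookup against a known-image list. It is not Apple’s implementation: the hash function, the placeholder database values, the MATCH_THRESHOLD constant, and the should_flag_account helper are all invented for illustration, and a real system would use a perceptual hash like PhotoDNA or NeuralHash rather than SHA-256.

```python
import hashlib


def image_hash(image_bytes: bytes) -> str:
    # Hypothetical stand-in: real systems use perceptual hashes that survive
    # re-encoding or cropping; SHA-256 is used here only to show the flow.
    return hashlib.sha256(image_bytes).hexdigest()


# Placeholder for the known-CSAM hash list a clearinghouse such as NCMEC
# would supply (the values here are invented).
KNOWN_HASHES = {"placeholder_hash_1", "placeholder_hash_2"}

# Hypothetical threshold: the account is only flagged once multiple uploads
# match the known list, mirroring the "collection of multiple CSAM images"
# condition described above.
MATCH_THRESHOLD = 3


def should_flag_account(uploaded_images: list[bytes]) -> bool:
    # Count how many uploaded images hash to an entry in the known list.
    matches = sum(1 for img in uploaded_images if image_hash(img) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD
```

The threshold is the key design choice: no single match is reported, which is part of how the system keeps false positives from exposing innocent photos.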

Apple is also using a new form of hashing it calls NeuralHash, which the company says can match images despite alterations like cropping or colorization. Just as crucially, to prevent evasion, its system never actually downloads those NCMEC hashes to a user’s device. Instead, it uses some cryptographic tricks to convert them into a so-called “blind database” that’s downloaded to the user’s phone or PC, containing seemingly meaningless strings of characters derived from those hashes. That blinding prevents any user from obtaining the hashes and using them to skirt the system’s detection.
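The sketch below illustrates only the blinding idea: the server transforms each known hash with a secret it never shares, so the table that ships to devices reveals nothing about the original hashes. Apple’s published design does this with elliptic-curve operations inside a private set intersection protocol; the keyed HMAC here, along with the blind helper and the placeholder hash strings, is an assumption chosen purely to show why the raw list can’t be recovered from what’s on the device.

```python
import hashlib
import hmac
import secrets

# Server-side secret used to blind the hash list before distribution.
# In the real protocol this role is played by elliptic-curve blinding,
# not HMAC; this is a conceptual stand-in.
SERVER_SECRET = secrets.token_bytes(32)


def blind(raw_hash: str) -> str:
    # Keyed transform of a known-image hash; without SERVER_SECRET the
    # output cannot be reversed or reproduced.
    return hmac.new(SERVER_SECRET, raw_hash.encode(), hashlib.sha256).hexdigest()


# What actually gets pushed to the device: blinded entries only
# (the input hashes here are invented placeholders).
blinded_database = {blind(h) for h in ["known_image_hash_1", "known_image_hash_2"]}

# A device holding blinded_database but not SERVER_SECRET cannot recover the
# NCMEC hashes or test arbitrary images against them on its own, which is
# what stops users from probing the list to evade detection.
```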
