Craig Federighi explains why the iPhone’s new photo scanning feature isn’t a backdoor

Apple introduced two distinct child-safety features a few days ago, both meant to fight the spread of Child Sexual Abuse Material (CSAM). One is a private way to scan for known CSAM photos stored in iCloud. The other is an iMessage tool that detects nudity in messages sent to children and can notify parents when it happens. The two are not related, and Apple isn’t scanning all the photos on your phone in search of porn. Apple explained as much in the days that followed the announcement, hoping to address the unsurprising privacy concerns. Some worried that Apple would be opening Pandora’s box by effectively building a nascent backdoor into the iPhone.

The worry was that some governments might pressure Apple into customizing its photo-scanning tool to look for particular material on iPhones. In a new interview, Craig Federighi, SVP of Software Engineering at Apple, tried to dispel the notion that the tool is a backdoor.

Features like the new CSAM scanning tool and the iMessage communication safety tool do not make it into iOS without Federighi’s blessing. He explained the two new features to the Wall Street Journal’s Joanna Stern, addressing the confusion and worries surrounding them.

Federighi admitted that the way Apple announced the new child safety features was problematic, fueling confusion. “It’s really clear a lot of messages got jumbled pretty badly in terms of how things were understood,” Federighi said. “We wish that this would’ve come out a little more clearly for everyone because we feel very positive and strongly about what we’re doing.”

The CSAM photo scanning feature

Federighi made it clear that privacy is at the core of the new features. The CSAM photo scanning feature has several layers of security in place to ensure it works as intended. First of all, the tool doesn’t scan the entire photo library on the iPhone, only the photos being uploaded to iCloud. Nor does it look at what’s inside the images.

A tool on the iPhone compares cryptographic hashes of images against a database of known CSAM image hashes stored on the device. Those hashes come from the National Center for Missing and Exploited Children (NCMEC) and other child-safety organizations. If there’s a match, something called a Safety Voucher flags the iCloud image. A tool in iCloud then counts the Safety Vouchers, and the algorithm notifies Apple once an account accumulates more than 30 of them. Only then does a manual review process begin, and only after that review would Apple contact the authorities.
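
To make that voucher-and-threshold flow concrete, here is a minimal Swift sketch of the logic as described above. It is illustrative only: names like SafetyVoucher, knownCSAMHashes, and processForUpload are assumptions, not Apple code, and Apple’s published design wraps this flow in cryptography (a perceptual NeuralHash plus private set intersection and threshold secret sharing) so that neither the device nor Apple learns anything about matches until the threshold is crossed. The sketch collapses all of that into a plain set lookup.

```swift
import Foundation

// Hypothetical, simplified sketch; not Apple's implementation.
// The real system hides the matching step behind cryptography so
// no single party can see individual results below the threshold.

struct SafetyVoucher {
    let imageID: UUID  // identifies the flagged iCloud photo
}

// Illustrative on-device database of known CSAM hashes (NCMEC et al.).
let knownCSAMHashes: Set<Data> = []

// Threshold Apple cited: review begins only past roughly 30 matches.
let reviewThreshold = 30

// Vouchers accumulate per account; the counting happens in iCloud.
var vouchers: [SafetyVoucher] = []

/// Check one photo slated for iCloud upload against the database and,
/// on a match, attach a Safety Voucher to it.
func processForUpload(imageID: UUID, imageHash: Data) {
    guard knownCSAMHashes.contains(imageHash) else { return }
    vouchers.append(SafetyVoucher(imageID: imageID))

    // Below the threshold, nothing is visible or reportable.
    if vouchers.count > reviewThreshold {
        startManualReview(vouchers)
    }
}

func startManualReview(_ flagged: [SafetyVoucher]) {
    // Human reviewers confirm the matches; only after that
    // would Apple contact the authorities.
}
```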

Thus, Federighi argues, Apple’s ability to look for specific image hashes doesn’t constitute a backdoor.

The iPhone doesn’t have a backdoor

Federighi explained that the CSAM database is the same in all markets. “Imagine someone was scanning images in the cloud,” the exec said, noting that a user would have no way of knowing what a cloud service might scan for. “In our case, the database is shipped on device. People can see, and it’s a single image across all countries.”

“We ship the same software in China, with the same database as we ship in America, as we ship in Europe,” he continued. “If someone were to come to Apple, Apple would say no, but let’s say you aren’t confident. You don’t want to just rely on Apple saying no. You want to be sure that Apple couldn’t get away with it if we said yes.”

“Well, that was the bar we set for ourselves in releasing this kind of system,” Federighi said. “There are multiple levels of auditability, and so we’re making sure that you don’t have to trust any one entity or even any one country, as far as […] what images are part of this process.”

The iMessage communication safety feature

Federighi also explained how the iMessage communication safety feature works. He reiterated that it’s a separate tool, and one that’s optional for parents. If a child receives a sexually explicit image via iMessage, the photo will not be visible immediately. Instead, it’s blurred and a warning appears, giving the child the choice of whether to view it. If a child aged 12 or under chooses to view the image, a notification can be sent to their parents.
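
As a rough illustration of that decision flow, here’s a short, hypothetical Swift sketch. The Child type, the handleIncomingImage function, and the sensitivity flag are all assumptions made for this example; Apple’s actual nudity detection runs on-device and is not a public API.

```swift
// Hypothetical sketch of the iMessage flow described above.

struct Child {
    let age: Int
    let parentalNotificationEnabled: Bool  // opt-in, set by a parent
}

enum ImageAction {
    case showNormally
    case blurWithWarning(notifyParentsIfViewed: Bool)
}

func handleIncomingImage(flaggedAsSensitive: Bool, recipient: Child) -> ImageAction {
    guard flaggedAsSensitive else { return .showNormally }

    // The image is blurred and the child is warned first. A parental
    // notification is only possible for children 12 or under, and only
    // when a parent has enabled the feature.
    let notify = recipient.age <= 12 && recipient.parentalNotificationEnabled
    return .blurWithWarning(notifyParentsIfViewed: notify)
}
```

The point the sketch captures is that the warning and the choice stay with the child; a notification only reaches parents for younger children, and only when a parent has opted in.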

“In hindsight, introducing these two features at the same time was a recipe for this kind of confusion,” Federighi told WSJ. “By releasing them at the same time, people technically connected them and got very scared: What’s happening with my messages? The answer is…nothing is happening with your messages.”

The full report is available on WSJ’s website, complete with a video interview with Federighi.
