Apple initially said that CSAM detection would be implemented in an update to iOS 15 and iPadOS 15 by the end of 2021, but the company ultimately postponed the feature based on “feedback from customers, advocacy groups, researchers and others.”

In September 2021, Apple posted the following update on its child safety page:

We previously announced plans for features to help protect children from predators who use communication tools to recruit and exploit them, and to help curb the spread of child sexual abuse material. Based on feedback from customers, advocacy groups, researchers, and others, we’ve decided to spend additional time over the coming months to gather information and make improvements before releasing these critical child safety features.

In December 2021, Apple removed the above update and all references to its CSAM detection plans from its child safety page, but an Apple representative told The Verge that the company’s plans for the feature had not changed. As far as we know, however, Apple has not publicly commented on the plans since then. We’ve reached out to Apple to ask whether the feature is still planned, and the company did not immediately respond to a request for comment.

Apple did roll out child safety features for Messages and Siri with the release of iOS 15.2 and other software updates in December 2021, and it expanded the Messages feature to Australia, Canada, New Zealand, and the UK with iOS 15.5 and other software releases in May 2022.

Apple said the CSAM detection system was “designed with user privacy in mind.” The system would perform “on-device matching using a database of known CSAM image hashes” from child safety organizations, which Apple would transform into an “unreadable set of hashes stored securely on users’ devices.”

Apple planned to report iCloud accounts with known CSAM image hashes to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that works in partnership with US law enforcement agencies. Apple said there would be a “threshold” ensuring a “less than one in a trillion chance per year” of an account being flagged incorrectly by the system, plus a manual review of flagged accounts by a human. A simplified sketch of this kind of threshold-based matching appears at the end of this article.

Apple’s plans were criticized by a wide range of individuals and organizations, including security researchers, the Electronic Frontier Foundation (EFF), politicians, policy groups, university researchers, and even some Apple employees. Some critics argued that the feature would create a “backdoor” into devices that governments or law enforcement agencies could use to surveil users. Another concern was false positives, including the possibility of someone intentionally adding CSAM images to another person’s iCloud account to get that account flagged.

Note: Due to the political or social nature of the discussion on this topic, the discussion thread is located in the Political News forum. All forum members and site visitors are welcome to read and follow the thread, but posting is limited to forum members with at least 100 posts.
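
For readers curious how a threshold-based matching scheme like the one Apple described could work in principle, here is a minimal Swift sketch. It is purely illustrative: the type names, hash values, and threshold are invented for this example, and it omits the perceptual hashing and cryptographic protections Apple described (such as the unreadable on-device hash database), so it should not be read as Apple’s actual implementation.

```swift
// Illustrative sketch only: a toy threshold-based matcher.
// NOT Apple's implementation. Hash values, names, and the threshold are
// invented, and the cryptographic protections Apple described are omitted.

/// Stand-in for a perceptual image hash; real systems derive this from image content.
typealias ImageHash = UInt64

struct ThresholdMatcher {
    /// Hashes of known CSAM images supplied by child safety organizations.
    /// Apple said these would be stored on-device as an unreadable set of hashes.
    let knownHashes: Set<ImageHash>

    /// Number of matches required before an account is surfaced for human review.
    /// Apple described its real threshold as tuned for a "less than one in a
    /// trillion" yearly chance of incorrectly flagging an account.
    let threshold: Int

    private(set) var matchCount = 0

    init(knownHashes: Set<ImageHash>, threshold: Int) {
        self.knownHashes = knownHashes
        self.threshold = threshold
    }

    /// Processes one uploaded image hash and reports whether the threshold has
    /// been reached — the point at which human review would occur before any
    /// report to NCMEC.
    mutating func process(_ hash: ImageHash) -> Bool {
        if knownHashes.contains(hash) {
            matchCount += 1
        }
        return matchCount >= threshold
    }
}

// Example usage with made-up hash values.
var matcher = ThresholdMatcher(knownHashes: [0x1111, 0x2222, 0x3333], threshold: 2)
let uploadedHashes: [ImageHash] = [0xAAAA, 0x1111, 0xBBBB, 0x2222]

for hash in uploadedHashes {
    if matcher.process(hash) {
        print("Threshold reached: account would be queued for human review")
        break
    }
}
```

The point of the sketch is the design choice itself: no single match triggers a report, and only once a threshold of matches accumulates would a human review step occur, which is how the described system aimed to keep the rate of incorrectly flagged accounts extremely low.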