ALBAWABA - Child safety specialists claim that Apple is not adequately monitoring its platforms or searching for photographs and videos of child sexual abuse, as reported by The Guardian. This is raising concerns about how the company will handle the growing volume of such material linked to artificial intelligence.
The UK's National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of significantly underestimating how often child sexual abuse material (CSAM) appears on its products.
According to law enforcement statistics obtained by the NSPCC, offenders targeting children used Apple's iCloud, iMessage, and FaceTime to store and share CSAM in 337 recorded cases in England and Wales in a single year, more incidents than the company disclosed across all other countries combined.
Richard Collard, the NSPCC's head of child safety online policy, stated that “there is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” as reported by Engadget.
“Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the roll out of the Online Safety Act in the UK,” Collard added.
In 2021, Apple said that it would scan photographs before they were uploaded to iCloud and check them against the NCMEC and other CSAM image databases. After heavy criticism from privacy and digital rights organizations, Apple postponed the release of its CSAM detection tools and ultimately ended the project in 2022.