AFP Says AI Image Database Will Help Catch Predators, But Many Are Skeptical
The Australian Federal Police (AFP) is appealing to members of the public to donate their childhood photos to a project being undertaken jointly with Monash University.
The agency says the photos will be used to train an Artificial Intelligence (AI) system, known as My Pictures Matter, so it can recognise pictures of children on the dark web.
Playing catch-up
Rapid developments in AI mean that local and international policing agencies have some catch-up work to do in order to combat rising crime. One of the most serious crimes flourishing alongside improving AI technology is child sexual abuse, including the generation of child sexual abuse material.
‘Doctored’ images make it much more difficult for authorities to find the actual children who are being exploited and abused.
The AI project now being developed needs at least 10,000 images of children in order to train the system to identify potential images of children on the dark web, as well as on devices seized during criminal investigations.
The images are used to train algorithms developed to detect sexual or violent material.
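For readers curious about what this kind of training involves, the sketch below shows how a labelled image dataset might be used to fine-tune an off-the-shelf classifier. It is a minimal illustration only, assuming a standard PyTorch workflow: the article does not disclose the actual architecture, and the folder layout, labels and hyperparameters here are hypothetical.

```python
# Minimal sketch of supervised image-classifier training (illustrative only;
# not the AFP/Monash system). Assumes PyTorch and torchvision are installed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: donated_images/<class_name>/*.jpg,
# where each subfolder is a label the model learns to predict.
dataset = datasets.ImageFolder("donated_images", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Fine-tune a pretrained network; swap the final layer for our classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, a system of this kind only flags material for human review, and the quality and volume of the training data (hence the 10,000-image target) largely determines how reliable those flags are.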
Another surveillance tool in the making?
The AFP says the My Pictures Matter project will have strict controls, and those who donate their images will be able to withdraw consent at any time.
The agency has also assured the public that the dataset of images will not be held by police, but stored and managed by Monash University. It claims that once the project is complete, the dataset will not be used for any other purpose.
And while it makes sense that police would seek to develop their own intelligence and technology capabilities to combat AI-facilitated criminal activity, it does raise questions about how the technology will be used once it is in police hands.
There is a distinct line between such a resource being used as a tool to solve crimes and catch criminals, and it being used as a general surveillance tool.
Facial recognition technology
Earlier this year, senior AFP officials met with US-based facial recognition company Clearview AI, just months after the Australian Information Commissioner and Privacy Commissioner jointly determined that Clearview AI, Inc. had breached Australians’ privacy by scraping their biometric information from the internet and social media sites and using it in a facial recognition tool.
The investigation found clear breaches of the Privacy Act 1988 (Cth), including:
- collecting Australians’ sensitive information without consent
- collecting personal information by unfair means
- not taking reasonable steps to notify individuals of the collection of personal information
- not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of disclosure
- not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.
Clearview AI was ordered to “cease collecting facial images and biometric templates from individuals in Australia, and to destroy existing images and templates collected from Australia.”
The company, which has been one of the forerunners in the field of AI technology, has sold its facial recognition technology to private and government organisations around the world, including law enforcement agencies. But in recent years it has also been the subject of numerous lawsuits over privacy breaches.
After news broke that the AFP had met with Clearview’s executives, the agency issued a statement saying it does not use Clearview, and “has not made any recommendations to the Commonwealth to allow the use of the technology”.
Laws not keeping pace with technology
However, in Australia, police and security services, along with a number of other organisations, already use facial recognition technology (a version of the technology that unlocks your smartphone), and have done so for a number of years. This is usually done through CCTV, by scanning an individual’s face and matching it to images held in a database.
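To illustrate what “scanning and matching” involves at a technical level, the sketch below uses the open-source face_recognition library to compare a face detected in a CCTV frame against a database image. This is an assumption-laden illustration, not a description of any police system: the file paths, gallery and matching threshold are all hypothetical.

```python
# Illustrative face-matching sketch using the open-source face_recognition
# library (not any police system). All file paths are hypothetical.
import face_recognition

# Encode a known face from a database image (the "gallery").
known_image = face_recognition.load_image_file("database/person_a.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face detected in a CCTV frame (the "probe").
cctv_image = face_recognition.load_image_file("cctv/frame_001.jpg")
cctv_encodings = face_recognition.face_encodings(cctv_image)

# Compare each detected face against the gallery. A lower tolerance means
# stricter matching: fewer false positives, but more missed matches.
for encoding in cctv_encodings:
    is_match = face_recognition.compare_faces(
        [known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match={is_match}, distance={distance:.3f}")
```

The tolerance trade-off is part of why critics worry about misidentification: any threshold produces some false matches, and at population scale even a small error rate implicates real people.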
The New South Wales Police Force uses the technology on a regular basis to “identify potential suspects of crime, unidentified deceased and missing persons”.
NSWPF claims the information is only used for “intelligence purposes” and assures the public it is “committed to the responsible and ethical use of facial recognition technology”.
However, there have long been concerns that the technology cannot be wholly relied upon as an identification tool, meaning suspects can be incorrectly identified; that it could encroach on a person’s right to be presumed innocent until proven guilty; and that dependence on the technology in policing only contributes further to our slide into a “surveillance state”.
Retail giants such as Kmart and Bunnings have suspended their use of the technology amid privacy concerns. But earlier this year it was reported that major stadiums around Australia, including the Sydney Cricket Ground, Allianz Stadium and Qudos Bank Arena in Sydney, are using technology which records a customer’s faceprint without their knowledge or consent.
Human rights concerns
The problem is that existing privacy laws currently contain no adequate protections around the use of facial recognition technology, and there is no dedicated legislation regulating its use.
In 2021, the Australian Human Rights Commission (AHRC) called for a suspension of all use of the technology until a regulatory or oversight body could be set up with the appropriate skills and expertise to develop technical standards, oversee mandatory human rights risk assessments, and provide advice to developers, deployers and affected individuals.
But to date, there has been little progress in the way of protecting individuals’ rights, and yet there have been giant leaps in the sophistication and prevalence of the technology itself.