Police Accused of Lying About Use of ‘Ineffective’ Facial Recognition Software
An online tech news source recently ran a story detailing a data breach at controversial facial recognition company Clearview AI, which exposed its entire client list.
According to the report, the list includes four Australian police organisations, comprising the Queensland Police Service, Victoria Police, South Australia Police and the Australian Federal Police.
The leaked client list suggests that police officers have used the highly inaccurate technology in an attempt to ‘identify’ around 1000 suspects in Australia – a process which has repeatedly been shown to lead to the false identification and arrest of innocent people.
Indeed, a previous trial of facial recognition technology in Queensland was deemed a ‘complete failure’ – with the software misidentifying people in the ‘vast majority’ of cases – and a trial in the United Kingdom in 2016/17 got it wrong in 98% of cases.
Police had previously denied using the Clearview AI software and, despite the leak, have continued to do so – with South Australia Police issuing a statement asserting that its officers have not been using it.
Queensland has been slightly more forthcoming, saying that facial recognition technology is one of ‘many capabilities’ available to its officers.
Victoria Police claims the software has not been used in any ‘official capacity’, which raises the question of why police organisations would spend large amounts of taxpayer money on purchasing and licensing it.
The AFP has remained silent.
Clearview AI’s programme has attracted an enormous amount of controversy worldwide, being variously labelled as ‘ineffective’, ‘wasteful’, a ‘gross breach of privacy’ and a ‘honeypot for hackers’.
The Clearview database contains billions of images amassed from sources such as Facebook, Instagram, LinkedIn and other public websites. The application of the software has the potential to lead to wrongful arrests, whereby innocent persons are matched to suspected offenders.
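To see how such mismatches can happen, it helps to know that systems of this kind typically reduce each face to a numerical ‘embedding’ and report a match whenever two embeddings are sufficiently similar. The sketch below is a minimal illustration in plain Python – the vectors and threshold are invented for the example, and this is not Clearview’s actual method.

```python
# A minimal sketch of embedding-based face matching, and why false
# positives occur. The vectors and threshold are invented for
# illustration; real systems use learned embeddings with hundreds of
# dimensions.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: a probe image of an innocent person, and a
# gallery of scraped images. A lookalike can sit above the match
# threshold even though they are a different person.
probe = [0.62, 0.31, 0.70]
gallery = {
    "suspect_a": [0.60, 0.33, 0.71],   # lookalike: a different person
    "suspect_b": [0.10, 0.95, 0.05],
}
THRESHOLD = 0.98  # invented operating point

for name, embedding in gallery.items():
    score = cosine_similarity(probe, embedding)
    if score >= THRESHOLD:
        print(f"MATCH reported: {name} (score={score:.3f})")  # false positive
    else:
        print(f"no match: {name} (score={score:.3f})")
```

The point of the sketch is that a single similarity threshold governs everything: set it too loose and lookalikes are reported as matches; set it too tight and genuine matches are missed.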
The reports regarding the leaked client list have heightened concerns that ill-intentioned hackers will gain access to a wealth of private information and use it to engage in criminal conduct such as identity theft.
Privacy laws
Under current Australian privacy laws, biometric information – that is, your face, fingerprints, eyes, palm and voice – is considered sensitive information.
The Privacy Act 1988 (Cth) makes clear that any organisation or agency collecting this ‘sensitive’ information must first obtain consent to do so.
However, there are exceptions to this general rule including where the information is “necessary” to prevent a serious threat to the life, health or safety of any individual.
It’s an exception many believe has been exploited by law enforcement agencies, with legal commentators suggesting it is not actually broad enough to cover all of the conduct police have been engaging in.
National surveillance
Red flags were raised last year when the Federal Government announced plans to create a national facial recognition database by collecting photos from driver’s licences and passports.
The government justified the database by saying it would both help to combat identity theft (which is on the rise) and serve as a useful tool for protecting national security, since the database would be made available to law enforcement agencies.
The legislation presently before parliament allows both government agencies and private businesses to access facial IDs held by state and territory traffic authorities, and passport photos held by the foreign affairs department.
The legislation is currently stalled because of concerns about privacy implications and lack of safeguards in the proposed law.
But most state and territory governments have already updated their driver’s licence laws in anticipation of the database, following an agreement at the Council of Australian Governments in October 2017. And anyone applying for, or renewing, a passport is now required to sign a consent form.
Facial recognition AI is unreliable
One of the most significant concerns is that AI technology is still unreliable – the benefits don’t outweigh the massive intrusion into our personal privacy. And there are inherent problems with the current technology: false positives are a major issue.
In 2016 and 2017, London’s Metropolitan Police used automated facial recognition in trials, and it was reported that in more than 98% of cases, innocent members of the public were matched to suspected criminals.
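Part of the reason the error rate looks so extreme is statistical. The back-of-the-envelope calculation below (in Python, with figures invented purely for illustration – not the Met’s actual numbers) shows how scanning a large crowd for a small watchlist produces overwhelmingly false alerts, even when the software is right about almost every individual face.

```python
# Invented, illustrative figures: almost everyone scanned is not on
# the watchlist, so even a small per-face false-positive rate swamps
# the handful of true matches.
crowd_size = 100_000        # faces scanned at an event (assumed)
watchlist_present = 20      # watchlisted people actually in the crowd (assumed)
true_positive_rate = 0.80   # chance a watchlisted face is correctly flagged (assumed)
false_positive_rate = 0.01  # chance an innocent face is wrongly flagged (assumed)

true_alerts = watchlist_present * true_positive_rate
false_alerts = (crowd_size - watchlist_present) * false_positive_rate
share_false = false_alerts / (true_alerts + false_alerts)

print(f"true alerts:  {true_alerts:.0f}")    # 16
print(f"false alerts: {false_alerts:.0f}")   # ~1000
print(f"share of alerts that are wrong: {share_false:.1%}")  # ~98%
```

Because almost everyone scanned is innocent, even a small per-face error rate generates hundreds of false matches for every genuine one – which helps explain how figures like the UK trials’ 98% can arise.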
Despite these concerns, the Home Affairs Department is impatient to implement the technology, saying that human facial recognition experts will work alongside the software to produce more accurate outcomes.
But that’s cold comfort to anyone concerned about their privacy because, as is already the case in China, facial recognition can be used for mass surveillance.
And we’ve already seen many examples of how data breaches can occur even with appropriate legislation in place.
Data breaches in government departments
Last year, information came to light showing that data breaches of the My Health Record database rose from 35 to 42 in the past financial year, despite consistent claims by the federal government that the database is safe and secure, and that the privacy of those who choose not to opt out is protected.
In 2018, the South Australian government was forced to shut down guest access to its online land titles registry, after an unidentified overseas ‘guest user’ was able to download the personal details of more than a million Australian home owners, information that could potentially be used to develop a false identity.
Police forces and other government organisations have repeatedly failed to properly secure confidential information of members of the public, and some rogue police officers have broken the law by releasing sensitive information, putting vulnerable individuals in danger.
Right now, the fact that Australian police forces appear on Clearview AI’s client list, and that they have not been forthcoming about it, should also set alarm bells ringing for all Australians.
The Office of the Australian Information Commissioner (OAIC) has launched an inquiry into whether the software is being employed in Australia, and whether its database contains information about Australians. The OAIC’s final report will no doubt reveal all.