The Brave New and Inaccurate World of Artificial Intelligence Policing in New South Wales
![Inaccurate ai policing](https://www.sydneycriminallawyers.com.au/app/uploads/2025/02/inaccurate-ai-policing-267x150.png)
The New South Wales Police Force has increasingly employed emerging artificial intelligence (AI) technologies, including biometric facial recognition technology (FRT) and machine learning (ML), in its investigative processes since the turn of the century.
These state-of-the-art techniques are currently being applied to identify suspects or objects linked to a crime, to comb through hours of CCTV footage and to sort through terabytes of data, saving police officers thousands of hours of previously manual work.
Yet, these technologies are fallible. FRT has long been identified as producing racially biased results, with UK civil liberties group Big Brother Watch having revealed in 2018 that, on average, 95 percent of the matches flagged by UK police facial recognition systems were misidentifications of innocent people.
As reported in the Saturday Paper on 1 February, the NSW government AI Advisory Committee raised concerns during a November 2021 meeting about the NSW Police Force's use of the Microsoft Insights platform and the potential impacts it may have on traditionally overpoliced populations.
Formed that same year, the committee does not publish its reports, so the documents revealing these concerns were obtained via freedom of information requests. They outline the committee's fears that the technology could misidentify people as suspects merely because they've been found to frequent high-crime areas.
Indeed, the NSW Police Force has a long history of systemic racism, racial profiling and overpolicing of marginalised groups. And it appears that while the latest technologies are speeding up policing processes, biases built into the tech at the development stage mean these systems are also accelerating the misidentification of people of colour, along with any resulting wrongful convictions.
Streamlined and biased
NSW police uses the AI- and ML-infused Microsoft Insights program to turn voice into text, stitch disparate videos together and identify objects within CCTV footage.
But Insights does not utilise facial recognition technology, owing to FRT's identified racial discrepancies, which is the same reason Microsoft decided in 2019 not to provide US law enforcement agencies with programs using FRT.
Insights is utilised alongside the NSW Police Integrated Policing Operating System (IPOS), which replaced the 27-year-old state law enforcement central database. The IPOS integrates triple zero call data, along with data relating to arrests and charges, firearms, criminal investigations and forensics, and it makes it easier for officers to search through all this information at once.
Both these systems were moved onto the Microsoft Azure cloud computing platform in June 2021, which also allows officers on the beat to access IPOS information on their MobiPol mobile devices. Microsoft has underscored that these developments are leading to the wider adoption of AI cognitive capabilities, or the mimicking of human thought processes.
The issues journalist Jeremy Nadel identified in the Saturday Paper article include Insights misidentifying people as suspects due to their appearance in high-crime areas, along with racial bias identified in one of the FRT programs NSW police uses, Cognitec Systems, which has been shown to misidentify West African people almost seven times more often than Caucasian people.
The 2018 UK Big Brother Watch Face Off report outlines that biometric facial recognition “algorithms often disproportionately misidentify minority ethnic groups and women”.
“In the context of law enforcement, biased facial recognition algorithms risk leading to disproportionate interference with the groups concerned – whether through police stops and requests to show proof of identity, or through the police’s storage of ‘matched’ biometric photos,” BBW explains.
The civil liberties group further sets out that these discrepancies arise because the predominantly white male developers of these technologies have trained them on datasets that “contain mostly white and male faces, as well as the fact that cameras are not configured to identify darker skin tones”.
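To make that kind of disparity measurable, auditors typically compare false match rates across demographic groups at a fixed decision threshold. The sketch below is illustrative only: the data is synthetic and the threshold hypothetical, but it shows the basic calculation behind findings like Big Brother Watch’s.

```python
# Illustrative audit of per-group false match rates for a face matcher.
# All data here is synthetic; in practice the scores would come from the
# vendor's matcher run over a labelled evaluation set.
from collections import defaultdict

THRESHOLD = 0.6  # hypothetical similarity score above which a "match" is declared

# (similarity_score, same_person, demographic_group) for each probe/gallery pair
trials = [
    (0.72, False, "group_a"),  # an impostor pair scoring above threshold: a false match
    (0.41, False, "group_a"),
    (0.65, False, "group_b"),
    (0.88, True,  "group_b"),
    # ... thousands more trials in a real evaluation ...
]

false_matches = defaultdict(int)
impostor_trials = defaultdict(int)

for score, same_person, group in trials:
    if not same_person:            # impostor pair: any declared "match" is a false match
        impostor_trials[group] += 1
        if score >= THRESHOLD:
            false_matches[group] += 1

for group in impostor_trials:
    fmr = false_matches[group] / impostor_trials[group]
    print(f"{group}: false match rate = {fmr:.2%}")
```

If the false match rate for one group is several times that of another at the same threshold, the system is biased in exactly the sense the report describes, regardless of how accurate it is overall.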
Utilised, yet untested
During NSW parliament budget estimates over recent years, NSW Greens MLC Abigail Boyd has been keeping an eye on AI developments in the NSW Police Force. In August 2022, she quizzed then NSW police minister Paul Toole about NSW police use of facial recognition technology, and the questions taken on notice received guarded answers a month later.
Toole told Boyd that NSW police has been using FRT since 2004 to “identify potential suspects of crime, unidentified deceased and missing persons”, as well as to collect information “used for intelligence purposes only”. He also explained that a governance body established in 2020 has been overseeing FRT use.
The minister further outlined that additional information was classified, and that the state government was unaware of other jurisdictions having banned the use of such technologies because of their “inbuilt biases”.
When Boyd put further questions to senior police management in February 2024, NSW police commissioner Karen Webb told her that facial recognition technology had generated 411 leads over the 2022/23 financial year and 408 leads over 2023/24.
Boyd put it to NSW deputy commissioner David Hudson last February that the US-based National Institute of Standards and Technology has tested the Cognitec algorithms NSW police currently uses, and found that they falsely identify people of colour 10 to 100 times more often than Caucasian people.
Yet, Hudson explained that NSW police is comfortable with using it as the force doesn’t “use facial recognition or facial matching services as the only evidence” to charge someone, but rather officers use “it as an investigative tool to give… an indication of matching… a CCTV photograph with our offender photographs”.
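What Hudson describes corresponds to a one-to-many gallery search: a probe image from CCTV is compared against a database of offender photographs, and any records scoring above a similarity threshold are returned as leads for human review rather than as identifications. A minimal sketch of that workflow follows, assuming toy embedding vectors and hypothetical record names in place of a real face recognition model.

```python
# Minimal sketch of a one-to-many face search producing investigative leads.
# Embeddings and record names are hypothetical; a real system would compute
# face embeddings with a trained model rather than use toy vectors.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "embeddings" standing in for model output over offender photographs
gallery = {
    "record_1001": [0.90, 0.10, 0.30],
    "record_1002": [0.20, 0.80, 0.50],
    "record_1003": [0.85, 0.15, 0.35],
}

def search_leads(probe: list[float], threshold: float = 0.95) -> list[tuple[str, float]]:
    """Return gallery records scoring above the threshold, best first.
    These are candidate leads for human review, not identifications."""
    scored = [(rid, cosine_similarity(probe, emb)) for rid, emb in gallery.items()]
    return sorted((s for s in scored if s[1] >= threshold), key=lambda s: -s[1])

probe_from_cctv = [0.88, 0.12, 0.32]  # embedding of a face cropped from CCTV footage
for record_id, score in search_leads(probe_from_cctv):
    print(f"candidate lead: {record_id} (similarity {score:.3f})")
```

The threshold is the critical design choice: lowering it produces more leads but also more false matches, and NIST’s findings indicate those false matches do not fall evenly across demographic groups.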
So, the NSW Police Force is satisfied to keep using Cognitec despite its proven biases, Hudson set out, as the agency has not itself observed any discrepancies in its use of the system.
Indeed, when Boyd suggested that this bias must mean the system is misidentifying more people of colour in the early stages of investigations, Hudson said that, based simply on NSW police observations, this wasn’t the case.
NSW police is continuing to utilise this technology despite outside research revealing its faults, and without having attempted to test the system itself.
AI policing mimics traditional prejudices
Despite AI only recently coming to prominence in public discourse, state law enforcement has been increasingly applying AI technologies since the turn of the century.
These systems, which tend to replicate the same racial prejudices that have always been part of NSW policing culture, started being used around the same time that police forces commenced proactive policing in order to prevent future crime.
“Proactive policing came into dominance in the last few decades across most western states, largely for economic reasons,” UNSW Associate Professor Vicki Sentas told Sydney Criminal Lawyers in late 2019. “Proactive policing was contrasted to traditional policing, so-called reactive policing. And proactive policing was seen to be a more efficient and effective technique of crime control.”
“For so-called volume crimes, like theft, reactive policing was seen to be too labour intensive and costly,” the academic continued, “and there was this idea that proactive policing would involve less labour time by relying on technology, like crime mapping.”
Then Sydney Institute of Technology researcher Dr Roman Marchant explained in 2017 that the USYD Centre for Translational Data Science was developing machine learning that analysed domestic violence statistics to predict the levels of DV crime expected in certain suburban areas, in order to reduce crime and allocate resources.
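The Centre’s actual model isn’t public, but the general shape of such systems can be sketched: historical incident counts per area feed a model that outputs an expected count for the next period. The toy forecast below uses synthetic data and a deliberately naive moving average in place of a real model, purely to illustrate the pattern.

```python
# Illustrative only: a naive per-suburb forecast of monthly incident counts.
# This is NOT the Centre's actual model. Note that recorded-crime data partly
# reflects where police already patrol, so any model trained on it inherits
# that bias in its predictions.
from statistics import mean

# Synthetic monthly domestic-violence incident counts per suburb
history = {
    "suburb_a": [12, 15, 11, 14, 13, 16],
    "suburb_b": [3, 2, 4, 3, 5, 4],
}

def forecast_next_month(counts: list[int], window: int = 3) -> float:
    """Predict next month's count as the mean of the last `window` months."""
    return mean(counts[-window:])

for suburb, counts in history.items():
    print(f"{suburb}: expected incidents next month = {forecast_next_month(counts):.1f}")
```

The caveat in the comments is the crux of the concern raised throughout this article: because recorded incidents partly reflect where police already patrol, a model trained on them can feed the same suburbs back as “high crime” and entrench overpolicing.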
A key NSW proactive policing program was the now defunct Suspect Target Management Plan (STMP), which was introduced in early 2000.
The STMP was a secret list of individuals nominated by police officers as persons of concern in terms of future offending, and on that basis they were subjected to enhanced surveillance and random stop-and-searches without the usual requirement of reasonable suspicion. People could be added to the list even if they’d never been convicted of a crime.
The 2017 Policing Young People in NSW report inquired into how the STMP was being applied to youths. It found that almost 50 percent of those subjected to extra policing attention were young people, while two separate sets of figures from police area commands showed that between 44 and 54 percent of the youths on the list were First Nations kids, hinting at the racial bias of police.
The extent of this racial prejudice is further heightened when it’s considered that Aboriginal children aged 10 to 17 account for only 6.2 percent of the entire NSW population within this age bracket.
The Saturday Paper article outlines that NSW police began utilising AI in November 2020 to identify individuals who might be placed on the STMP. The Justice Equity Centre found that by 2022, 71 percent of the kids on the list were First Nations children. The program was dropped in 2023 due to overwhelming criticism of its prejudicial outcomes.
So, rather than making policing more accurate and reducing human error, in line with the outcomes of applying DNA profiling since the 1990s, the increasing use of artificial intelligence in law enforcement may be accelerating the finalisation of investigations, but it’s also reproducing the same policing prejudices of the past, and likely expediting wrongful convictions.