The Problems With Using Artificial Intelligence And Facial Recognition In Policing


Jeff Brantingham, anthropology professor at the University of California Los Angeles, displays a computer generated view of “predictive policing,” zones at the Los Angeles Police Department Unified Command Post (UCP) in Los Angeles. (AP Photo/Damian Dovarganes, File)

Recently, I’ve been reading about the effectiveness of predictive policing, which can be used to prevent crime and terrorism.

It seems only a matter of time until we employ artificial intelligence in this way on a larger scale. Funding cuts in the UK have meant that over 7,000 neighborhood officers have lost their jobs in three years, putting the public at risk. As alternatives, some have considered private security or pooling resources for an organized neighborhood crime watch. Another option – already in use in the United States – is predictive policing software to reduce street crime.

Predictive policing uses data to forecast where crime will happen, by mapping ‘hot spots’. More strikingly, it can also score and flag the people judged most likely to be involved in violence. In an early study, David Robinson and Logan Koepke of Upturn examined ten vendors of predictive policing systems and found that their software drew on social media activity, connections and relationships, social events and school schedules, and commercially available data from data brokers to predict crime. As well as mapping possible criminal hot spots, the software could assign a numerical threat score and a color-coded threat level (red, yellow, or green) to any person a police department searched for.
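To make the score-and-flag idea concrete, here is a minimal sketch of how such a system might reduce a person to a color band. The feature names, weights, and thresholds are illustrative assumptions, not any vendor's actual model:

```python
def threat_score(features, weights):
    """Weighted sum of input features (e.g. prior arrests, flagged associates)."""
    return sum(weights[k] * features.get(k, 0) for k in weights)

def threat_level(score, yellow_at=30, red_at=70):
    """Bucket a numeric score into the green/yellow/red bands described above."""
    if score >= red_at:
        return "red"
    if score >= yellow_at:
        return "yellow"
    return "green"

# Hypothetical feature weights and one hypothetical person's record.
weights = {"prior_arrests": 10, "flagged_associates": 5, "social_media_flags": 2}
person = {"prior_arrests": 4, "flagged_associates": 3, "social_media_flags": 6}

score = threat_score(person, weights)
print(score, threat_level(score))  # 67 yellow
```

Even this toy version makes the core worry visible: the output looks authoritative, but everything hinges on which features are fed in and how they are weighted – choices the vendors do not disclose.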

Neither the way these tools make predictions nor the way police departments actually use them is transparent. The authors found predictive policing in use in 35 locations in the United States. The Chicago police department, for example, began using its Strategic Subject List in 2013; it remains the most prominent person-based policing system known to the public.

Such systems can also be applied by the police to preventing terrorism. Existing human systems are so overburdened that errors can have grave consequences. The UK’s most senior counter-terrorism officer, Neil Basu, recently stated that police forces are no match for the threat of Islamist and extreme far-right terrorism: there are currently 700 live terrorism investigations, and more than 23,000 people on a jihadist watch list. A review by David Anderson QC following the Manchester bombing found that intelligence about the suicide bomber Salman Abedi was misinterpreted before he struck, losing the opportunity to prevent the attack. Artificial intelligence systems may therefore provide much-needed assistance in monitoring terrorist threats.

In the context of white collar crime, companies are already creating software to predict the ‘typical’ face of a white collar financial criminal, applying machine learning techniques to quantify the ‘criminality’ of an individual. Doing the same in the terrorism space to aid arrests, however, would be problematic. Many have voiced concerns that stop-and-search powers are already used unfairly against people who look visibly Muslim. Others argue that artificial intelligence is likely to reduce bias, since police and judges tend to arrest and sentence according to preconceived notions.

The face of a predicted white collar criminal. From Clifton, Lavigne, and Tseng (2017), p. 8: https://arxiv.org/abs/1704.07826

This leads to several important issues. The first is effectiveness: the aforementioned Strategic Subject List used by the Chicago police department, for example, does not appear to have reduced gun violence. The second is bias: predictive policing systems tend to rely on crimes reported by the community or observed on police patrols, which can create feedback loops and drive more enforcement into communities that are already heavily policed. When it comes to either crime or terrorism, then, police resources may be directed at a perceived threat rather than an actual one.
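The feedback-loop problem can be shown with a deliberately simple simulation. In this sketch – an illustrative assumption, not a model of any real deployment – two neighborhoods have the same true crime rate, but one starts with more recorded crime, patrols follow the records, and crime is only recorded where police are present:

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying crime rate; area "A" starts
# with more recorded crime only because it was patrolled more historically.
true_rate = 0.5
recorded = {"A": 10, "B": 5}

for _ in range(50):
    # The system sends patrols to the area with the most recorded crime...
    patrolled = max(recorded, key=recorded.get)
    # ...and crime only enters the records where police are present to see it.
    if random.random() < true_rate:
        recorded[patrolled] += 1

print(recorded)
```

Area A's record grows every round while area B's never changes, even though crime is equally likely in both – the data confirms the initial disparity rather than measuring reality.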

Because data does not always reflect reality, we must push for greater transparency from the vendors and creators of this technology. Crucial to this is making publicly available how data is used. To build the model predicting the typical white collar criminal, for example, researchers downloaded pictures of 7,000 corporate executives whose LinkedIn profiles ‘suggested’ they worked for financial organizations. Using data in this way, without consent, is extremely problematic.

With strained financial resources and rising crime, there will be pressure to use artificial intelligence programs to predict criminality on a wider scale. We must therefore ensure that the algorithms are fair and transparent, while we still can. One way to do this would be to create an international commission on the regulation of AI in crime and policing, so that the software benefits justice systems rather than undermining them. The commission’s findings could be made publicly available, so that people know how their data is used and how their neighborhoods are being watched – and can feed into the development of these systems accordingly.

