Overview of change

Algorithms and data analytics are playing an increasing role in all aspects of society, including in the UK’s policing and security services.1 A number of police forces across the UK have trialled ‘predictive policing’ tools, which use algorithms and historic data to predict where certain types of crime (for example, burglaries and street violence) are likely to occur.2 Similar tools have also been used by a small number of police forces to predict the likelihood of known individuals exhibiting certain behaviours or characteristics in the future. For example, Durham Constabulary’s Harm Assessment Risk Tool uses machine learning to predict how likely an offender is to re-offend in the next 2 years, and supports police officers in deciding whether the individual should be referred for a rehabilitation programme.3 Police have also trialled facial recognition technology to identify people automatically from live video footage (such as CCTV). For example, the Metropolitan Police Service and South Wales Police have trialled this technology in a number of areas, including at large events.4,5
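
The technical details of tools such as HART are not set out here, but the basic pattern (a supervised-learning model trained on historic records that outputs a risk score to support an officer's decision) can be illustrated with a minimal sketch. The example below uses synthetic data; the features, outcomes and model choice are invented for illustration and do not represent HART or any deployed police system.

```python
# Illustrative only: a toy risk-scoring model trained on synthetic data.
# Feature names and data are invented; this is NOT the HART model or any
# deployed police system, just the general supervised-learning pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "historic records": each row is an individual, each column a
# feature (hypothetical examples: age, prior offence count).
n = 1000
X = np.column_stack([
    rng.integers(15, 60, n),   # hypothetical: age
    rng.poisson(2, n),         # hypothetical: number of prior offences
])
# Synthetic outcome: 1 = re-offended within two years, 0 = did not.
y = (rng.random(n) < 0.2 + 0.05 * X[:, 1]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model outputs a probability ("risk score") for each individual, which
# a decision-support tool might present to an officer as low/medium/high.
risk_scores = model.predict_proba(X_test)[:, 1]
print(risk_scores[:5])
```

The fairness and accuracy issues discussed below arise largely from the historic records used to train such models, rather than from the model-fitting step itself.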

Challenges and opportunities

Some of the potential benefits of using algorithmic tools in policing and security services include reduced resourcing pressures, improved public safety and more consistent outcomes. However, the use of this technology also presents several challenges. A major concern is the potential for algorithms to introduce, replicate or exacerbate biases. Several civil liberties groups, academics and others have raised concerns that predictive policing systems may produce racially biased outcomes because they are trained on historic crime data that reflects racial discrimination.6–10 In addition, there is a risk that police algorithms may direct officers to patrol areas that are already disproportionately over-policed, creating a feedback loop that further entrenches certain kinds of discrimination.6,11
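
The over-policing concern is often described as a feedback loop: patrols are sent where recorded crime is highest, and crime is more likely to be recorded where patrols are present. The toy simulation below, loosely in the spirit of the analysis in reference 11, illustrates that mechanism under assumed parameters; the rates and counts are invented and do not model any real force or area.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only).
# Two areas have the SAME true crime rate, but area 0 starts with more
# recorded incidents. If patrols follow recorded counts, and recording
# depends on patrol presence, the initial imbalance is amplified.
import random

random.seed(1)

true_rate = [0.3, 0.3]          # assumed identical underlying crime rates
recorded = [20, 10]             # assumed initial recorded-crime counts
detect_if_patrolled = 0.9       # assumed chance crime is recorded with a patrol
detect_if_not = 0.2             # assumed chance crime is recorded without one

for _ in range(200):
    # Allocate the single patrol to the area with more recorded crime.
    patrolled = 0 if recorded[0] >= recorded[1] else 1
    for area in (0, 1):
        crime_occurred = random.random() < true_rate[area]
        detect = detect_if_patrolled if area == patrolled else detect_if_not
        if crime_occurred and random.random() < detect:
            recorded[area] += 1

# Area 0 ends up with far more recorded crime than area 1, despite
# identical underlying rates.
print(recorded)
```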

There are concerns that facial recognition algorithms may disproportionately misidentify or fail to identify certain groups, with some evidence showing that the accuracy of some facial recognition algorithms varies with the subject’s ethnicity or gender.9 Some campaign groups and academics have called for greater ethical scrutiny of the use of facial recognition, citing concerns about its accuracy and its infringement of rights to privacy and consent.10,12,13 In 2020, the Equality and Human Rights Commission (EHRC) called for the suspension of the use of automated facial recognition and predictive algorithms in policing in England and Wales until their impact has been independently scrutinised.14 The EHRC and others have also said that the law needs to catch up with the technology.14–16 In August 2020, the Court of Appeal ruled that the use of live facial recognition by South Wales Police was unlawful, finding that the force had not done enough to verify that the technology did not exhibit gender or racial biases.17 Some uses of AI and digital technologies also risk heightening public fears about surveillance and privacy; this has been a particular concern during the COVID-19 pandemic (see ‘use of digital technologies to tackle pandemics’).18,19
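
Disparities of this kind are usually measured by evaluating error rates separately for each demographic group, for example the false match rate and false non-match rate broken down by ethnicity or gender. The sketch below shows that style of disaggregated evaluation on made-up data; the group labels, similarity scores and threshold are all assumptions for illustration.

```python
# Illustrative disaggregated evaluation of a face-matching system (toy data).
# For each demographic group, compute the false match rate (FMR) and false
# non-match rate (FNMR) -- the kind of breakdown used to reveal accuracy
# disparities. All values below are invented for illustration.
from collections import defaultdict

# Each record: (group, is_same_person, similarity_score) -- hypothetical outputs.
records = [
    ("group_a", True, 0.91), ("group_a", True, 0.55), ("group_a", False, 0.30),
    ("group_a", False, 0.72), ("group_b", True, 0.88), ("group_b", True, 0.83),
    ("group_b", False, 0.20), ("group_b", False, 0.35),
]
threshold = 0.7  # assumed decision threshold: scores above count as a "match"

counts = defaultdict(lambda: {"fm": 0, "fnm": 0, "impostor": 0, "genuine": 0})
for group, same_person, score in records:
    predicted_match = score >= threshold
    if same_person:
        counts[group]["genuine"] += 1
        if not predicted_match:
            counts[group]["fnm"] += 1   # genuine pair missed
    else:
        counts[group]["impostor"] += 1
        if predicted_match:
            counts[group]["fm"] += 1    # different people wrongly matched

for group, c in counts.items():
    print(f"{group}: FMR={c['fm'] / c['impostor']:.2f}, "
          f"FNMR={c['fnm'] / c['genuine']:.2f}")
```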

Some experts have highlighted that there is currently a lack of transparency around how predictive policing algorithms and facial recognition technologies work, meaning that victims and perpetrators are not able to assess the accuracy and fairness of a system’s output.20–22 There is also widespread debate about how to ensure fairness in decisions that are made or informed by algorithms, and about the risk of ‘automation bias’, whereby police or law enforcement staff may become over-reliant on automated outputs.1,6,23 Some experts have suggested that the predictions made by a police AI system should be assigned a ‘confidence rating’ indicating the level of uncertainty associated with them.24
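
One possible way to implement such a ‘confidence rating’ is to report a calibrated probability alongside each prediction rather than a bare flag. The sketch below illustrates this with scikit-learn’s CalibratedClassifierCV on synthetic data; it is a general illustration of attaching uncertainty to a prediction, not the specific approach proposed in reference 24.

```python
# Illustrative: attach a calibrated probability ("confidence rating") to
# each prediction instead of a bare label. Data and model are synthetic.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate the classifier so its predicted probabilities better reflect
# observed frequencies, then surface that probability with every prediction.
base = RandomForestClassifier(n_estimators=100, random_state=0)
calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=3)
calibrated.fit(X_train, y_train)

probs = calibrated.predict_proba(X_test)[:, 1]
for p in probs[:3]:
    label = "flag" if p >= 0.5 else "no flag"
    print(f"prediction: {label} (confidence rating: {p:.0%})")
```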

Key unknowns

The long-term extent and impact of the use of AI in policing and security are uncertain. Some academics have highlighted that the evidence base on the efficacy of such systems is limited and that it is unclear whether algorithmic decision-making processes carry more or less risk than decisions made by humans.23,24 The future application of live automated facial recognition in public spaces is also uncertain,25,26 and a greater understanding and assessment of its potential social benefit is needed, balanced against the cost to civilian privacy.27 Public attitudes towards these technologies are likely to be a key factor in their uptake. Research by the Ada Lovelace Institute suggests broad public support for the use of facial recognition, but only where there is demonstrable public benefit, appropriate safeguards and informed consent to its use.28

Key questions for Parliament

  • How can governance and regulation of AI in policing and security be improved?
  • What more is needed to ensure the effectiveness and impacts of such technologies are fully evaluated?
  • What safeguards currently exist to prevent bias being introduced into AI systems used in this sector and what further guidance for bias detection and mitigation is needed?
  • How can the scientific validity of AI systems used in the sector be evaluated and monitored on an ongoing basis?
  • Should the UK Government be doing more to ensure greater transparency and accountability around the use of algorithmic systems in public sector decision-making?
  • What type and level of explanation should an individual receive about an AI system that is used to support a decision made about them, and is existing guidance on this sufficient?
  • Should there be a mandatory requirement for AI-based systems used in policing to undergo an audit or certification process prior to being deployed? How should this be implemented and what standards do systems need to meet?
  • How can citizens best be involved in shaping the policies and laws to govern big data and the use of AI in policing and security?

Likelihood and impact

High impact and likelihood in a 5-year timescale.

Research for Parliament 2021

Experts have helped us identify 30 areas of change to help the UK Parliament prepare for the future.

References

  1. Babuta, A. et al. (2019). Data Analytics and Algorithmic Bias in Policing.
  2. The Law Society (2019). Algorithms in the criminal justice system.
  3. Oswald, M. et al. (2018). Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality. Information & Communications Technology Law, Vol 27, 223–250.
  4. What is AFR? – deployments. AFR South Wales Police.
  5. Metropolitan Police and NPL (2020). Metropolitan Police Service Live Facial Recognition Trials.
  6. Couchman, H. (2019). Policing by machine. Liberty.
  7. Lum, K. et al. (2016). To predict and serve? Significance, Vol 13, 14–19.
  8. Richardson, R. et al. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review, Vol 94, 192–233.
  9. Buolamwini, J. et al. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, Vol 81, 1–15.
  10. Leslie, D. (2020). Understanding bias in facial recognition technologies: an explainer. The Alan Turing Institute.
  11. Ensign, D. et al. (2018). Runaway Feedback Loops in Predictive Policing. Proceedings of Machine Learning Research, Vol 81, 1–12.
  12. Nature editorial (2020). Facial-recognition research needs an ethical reckoning. Nature, Vol 587, 330.
  13. Couchman, H. et al. (2019). Liberty’s briefing on facial recognition. Liberty.
  14. Equality and Human Rights Commission (2020). Facial recognition technology and predictive policing algorithms out-pacing the law. Equality and Human Rights Commission.
  15. Biometrics Commissioner (2020). Biometrics Commissioner’s address to the Westminster Forum 5 May 2020. GOV.UK.
  16. Information Commissioner’s Office (ICO) (2019). ICO investigation into how the police use facial recognition technology in public places. ICO.
  17. Court of Appeal (2020). Bridges v CC South Wales Police.
  18. Csernatoni, R. (2020). Coronavirus Tracking Apps: Normalizing Surveillance During States of Emergency. Carnegie Europe.
  19. Checa, M. (2020). The “normalisation of mass surveillance” could pose a threat to social mobilisation, warns digital rights advocate Diego Naranjo. Equal Times.
  20. Bushway, S. D. (2020). ‘Nothing Is More Opaque Than Absolute Transparency’: The Use of Prior History to Guide Sentencing. Harvard Data Science Review, Vol 2.
  21. Završnik, A. (2020). Criminal justice, artificial intelligence systems, and human rights. ERA Forum, Vol 20, 567–583.
  22. Quattrocolo, S. (2020). Equality of Arms and Automatedly Generated Evidence. In Artificial Intelligence, Computational Modelling and Criminal Proceedings: A Framework for A European Legal Discussion (ed. Quattrocolo, S.), 73–98. Springer International Publishing.
  23. Centre for Data Ethics and Innovation (2019). Bias in Algorithmic Decision Making.
  24. Babuta, A. et al. (2020). Data Analytics and Algorithms in Policing in England and Wales. RUSI.
  25. Big Brother Watch. Stop Facial Recognition.
  26. European Digital Rights (EDRi) (2020). Campaign “Reclaim Your Face” calls for a Ban on Biometric Mass Surveillance. EDRi.
  27. Benjamin, G. (2020). Facial recognition is spreading faster than you realise. The Conversation.
  28. Ada Lovelace Institute (2019). Beyond face value: public attitudes to facial recognition technology.
