Use of artificial intelligence in policing and security
Artificial intelligence could change policing. But the efficacy of such technologies is not well established. What are the governance and privacy concerns?
Algorithms and data analytics are playing an increasing role in all aspects of society, including in the UK’s policing and security services.1 A number of police forces across the UK have trialled ‘predictive policing’ tools, which use algorithms and historic data to predict where certain types of crime (for example, burglaries and street violence) are likely to occur.2 Similar tools have also been used by a small number of police forces to predict the likelihood of known individuals exhibiting certain behaviours or characteristics in the future. For example, Durham Constabulary’s Harm Assessment Risk Tool uses machine learning to predict how likely an offender is to re-offend in the next 2 years, and supports police officers in deciding whether the individual should be referred for a rehabilitation programme.3 Police have also trialled facial recognition technology to identify people automatically from live video footage (such as CCTV). For example, the Metropolitan Police Service and South Wales Police have trialled this technology in a number of areas, including at large events.4,5
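To illustrate the kind of risk-scoring approach described above, the sketch below trains a simple classifier on synthetic records and maps its output to a coarse risk band. It is a minimal, hypothetical illustration only: the features, data and thresholds are invented for this example and are not drawn from the Harm Assessment Risk Tool or any other operational police system.

```python
# Illustrative sketch of a simple re-offending risk classifier.
# All features, data and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: columns are [age at offence, number of prior offences]
X_train = rng.normal(loc=[30.0, 2.0], scale=[8.0, 2.0], size=(500, 2))
# Hypothetical labels: 1 = re-offended within 2 years (loosely linked to prior offences)
y_train = (X_train[:, 1] + rng.normal(0.0, 1.0, 500) > 3.0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score a new individual and map the probability to a coarse risk band,
# mirroring the advisory role such tools play in referral decisions.
prob = model.predict_proba([[24.0, 5.0]])[0, 1]
band = "high" if prob > 0.6 else "moderate" if prob > 0.3 else "low"
print(f"Predicted 2-year re-offending probability: {prob:.2f} ({band} risk)")
```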
Potential benefits of using algorithmic tools in policing and security services include reduced resourcing pressures, improved public safety and more consistent outcomes. However, the use of this technology also presents several challenges, one of the major concerns being the potential for algorithms to introduce, replicate or exacerbate biases. Several civil liberties groups, academics and others have raised concerns that predictive policing systems risk producing racially biased outcomes because they are trained on historic crime data that reflects racial discrimination.6–10 In addition, there is a risk that police algorithms may direct officers to patrol areas that are already disproportionately over-policed, which may further entrench certain kinds of discrimination.6,11
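The feedback loop described above can be made concrete with a toy simulation, shown below. The figures are entirely hypothetical and not drawn from any real police data: two areas have identical underlying crime rates, but one starts with more recorded crime because it has historically been patrolled more heavily.

```python
# Toy simulation of the patrol-allocation feedback loop discussed above.
# All numbers are hypothetical; this is not a model of any deployed system.
import numpy as np

true_crime_rate = np.array([0.10, 0.10])  # two areas with identical underlying crime rates
recorded_crime = np.array([50.0, 100.0])  # area B starts with more recorded crime (historically over-policed)

for year in range(1, 6):
    # Patrols allocated in proportion to previously *recorded* crime
    patrol_share = recorded_crime / recorded_crime.sum()
    # More patrol presence -> more crime detected and recorded, even with equal true rates
    recorded_crime = recorded_crime + 1000 * true_crime_rate * patrol_share
    print(f"Year {year}: patrol share = {np.round(patrol_share, 2)}")
```

Because patrols are allocated according to recorded rather than actual crime, the initial imbalance persists indefinitely even though the underlying rates are equal, which is the entrenchment effect described above.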
There are concerns that facial recognition algorithms may disproportionately misidentify or fail to identify certain groups, with some evidence showing that the accuracy of certain facial recognition algorithms differs depending on the subject’s ethnicity or gender.9 Some campaign groups and academics have called for greater ethical scrutiny of the use of facial recognition, citing concerns about its accuracy and its infringement of rights to privacy and consent.10,12,13 In 2020, the Equality and Human Rights Commission (EHRC) called for the suspension of the use of automated facial recognition and predictive algorithms in policing in England and Wales until their impact has been independently scrutinised.14 The EHRC and others have also said that the law needs to catch up with the technology.14–16 In August 2020, the Court of Appeal ruled that the use of live facial recognition by South Wales Police was unlawful, finding that the force had not done enough to check that the technology did not exhibit gender or racial biases.17 Some uses of AI and digital technologies also risk heightening public concerns about surveillance and privacy, which has been a particular issue during the COVID-19 pandemic (see ‘Use of digital technologies to tackle pandemics’).18,19
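One way such demographic differences in accuracy are examined is by disaggregating error rates by group. The sketch below computes per-group false match rates from a handful of hypothetical match outcomes; real evaluations use large labelled datasets, and the group names and results here are invented for illustration.

```python
# Minimal sketch of a disaggregated accuracy check for a face-matching system.
# The match outcomes below are hypothetical.
from collections import defaultdict

# Each record: (demographic group, system said "match", ground-truth match)
results = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_matches = defaultdict(int)
non_matching_pairs = defaultdict(int)
for group, predicted_match, actual_match in results:
    if not actual_match:  # only genuinely non-matching pairs can yield false matches
        non_matching_pairs[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group in sorted(non_matching_pairs):
    rate = false_matches[group] / non_matching_pairs[group]
    print(f"{group}: false match rate = {rate:.2f}")
```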
Some experts have highlighted that there is currently a lack of transparency around how predictive policing algorithms and facial recognition technologies work, meaning that victims and perpetrators are not able to assess the accuracy and fairness of a system’s output.20–22 There is also widespread debate about how to ensure fairness in decisions made or informed by algorithms, and concern about the risk of ‘automation bias’, whereby police or law enforcement staff may become over-reliant on automated outputs.1,6,23 Some experts have suggested that the predictions made by a police AI system should be assigned a ‘confidence rating’, indicating the level of uncertainty associated with them.24
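In its simplest form, a ‘confidence rating’ of the kind suggested could mean reporting the model’s output probability together with an explicit confidence band, and withholding a firm steer when the output is close to chance. The sketch below is hypothetical: the thresholds and wording are illustrative and are not taken from any deployed system.

```python
# Hypothetical sketch of attaching a confidence rating to an algorithmic output.
# Thresholds and wording are illustrative only.
def advisory_output(probability: float) -> str:
    """Pair a model's prediction with an explicit confidence band, and decline
    to give a firm steer when the output is close to chance."""
    distance_from_chance = abs(probability - 0.5)
    if distance_from_chance < 0.1:
        return f"No recommendation: output ({probability:.2f}) is too close to chance."
    confidence = "high confidence" if distance_from_chance > 0.35 else "low confidence"
    label = "elevated risk" if probability > 0.5 else "lower risk"
    return f"{label} ({probability:.2f}, {confidence}) - advisory only, for human review"

for p in (0.52, 0.68, 0.93):
    print(advisory_output(p))
```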
The long-term extent and impact of the use of AI in policing and security are uncertain. Some academics have highlighted that the evidence base on the efficacy of such systems is limited, and that it is unclear whether algorithmic decision-making processes carry more or less risk than decisions made by humans.23,24 The future application of live automated facial recognition in public spaces is also uncertain,25,26 and a greater understanding and assessment of its potential social benefit is needed, balanced against the cost to civilian privacy.27 Public attitudes towards these technologies are likely to be a key factor in their uptake. Research by the Ada Lovelace Institute suggests broad public support for the use of facial recognition; however, this support is conditional on demonstrable public benefit, appropriate safeguards, and informed consent to its use.28
High impact and likelihood in a 5-year timescale.