• A POSTnote on algorithms and accountability will examine how algorithms can be made accountable and transparent.
  • It will look at the design and auditing of algorithms and data, how algorithms can become biased, and possible solutions.
  • In production. To contribute expertise or literature, or to suggest an external reviewer, please email Dr Lorna Christie

Algorithms, which identify patterns in data to inform decision-making, are increasingly being used for a variety of applications, from verifying a person’s identity based on their voice to diagnosing disease. They have the potential to bring many social and economic benefits; one estimate predicts that the automation of complex tasks could increase UK labour productivity by 25% by 2035. However, concerns have been raised about how to ensure fairness and accountability for decisions that are made or informed by algorithms. This is a particular issue for systems involving artificial intelligence (AI), where in some cases (such as deep neural networks) it is not yet possible to fully explain how decisions are reached. Furthermore, AI systems can introduce or perpetuate bias. One study of facial recognition algorithms found that those developed in East Asian countries were less accurate when identifying Caucasian faces, and that those developed in the West were less accurate for East Asian faces. Experts have warned that a lack of clarity on accountability may be a barrier to the adoption of these technologies.

This POSTnote will discuss what it means for algorithms to be accountable and transparent, and the technical barriers to achieving this. It will look at ways of improving accountability and transparency, including an overview of ongoing research into the design and auditing of algorithms and data. It will also examine how biases can be introduced into algorithms, and steps that might be taken to mitigate them.