DOI: https://doi.org/10.58248/PN708

Key points

  • Artificial intelligence (AI) is developing at a rapid pace and can be found throughout society in a growing range of everyday applications and decision-making. There are implications for security, privacy, transparency, liability, labour rights, intellectual property and disinformation. It also presents risks and benefits for democracy more widely.
  • There is no dedicated AI legislation in the UK. Existing legislation restricts how AI can be used in practice, such as in relation to data protection, equality and human rights, and intellectual property.
  • In March 2023, the UK Government announced a ‘pro-innovation’ approach to AI regulation, which largely regulates AI via existing laws enforced by existing regulators. It outlined cross-sectoral principles, such as safety, security, robustness, transparency, fairness, accountability, contestability, and redress, for existing regulators to consider. The approach applies to the whole of the UK, although some policy areas are devolved.
  • The Government has brought forward legislation and regulatory action on automated vehicles, and on data protection and digital information.
  • Some stakeholders have indicated that additional legislation and action may be required, including mandatory impact assessments, bans on certain AI applications, and a right for human intervention to challenge AI decision-making. There are concerns that regulators are not currently equipped with the staffing, expertise or funding to regulate AI.

Contributors

POSTbriefs are based on literature reviews and interviews with a range of stakeholders and are externally peer reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of this briefing, including:

  • Members of the POST Board*
  • Dr Elena Abrusci, Brunel University London*
  • Dr Mhairi Aitken, Alan Turing Institute*
  • Emmanuelle Andrews, Liberty
  • Dr Hayleigh Bosher, Brunel University London*
  • Matt Davies, Ada Lovelace Institute
  • Maximilian Gahntz, Mozilla Foundation
  • Conor Griffin, Google DeepMind
  • Professor Oliver Hauser, University of Exeter
  • Harry Law, Google*
  • Mia Leslie, Public Law Project*
  • Mavis Machirori, Ada Lovelace Institute
  • Professor Gina Neff, University of Cambridge
  • Sam Nutt, London Office of Technology and Innovation
  • Lucy Purdon, Mozilla Foundation*
  • Adam Smith, British Computer Society*
  • Amy Smith, Queen Mary University of London
  • Anna Studman, Ada Lovelace Institute
  • Mary Towers, Trades Union Congress
  • Professor Shannon Vallor, University of Edinburgh*
  • National Centre for AI for Tertiary Education, Jisc

*Denotes contributors who acted as external reviewers of the briefing.

