
DOI: https://doi.org/10.58248/HS51
Overview
Artificial intelligence (AI) is developing at a rapid pace (PB57) and can be found in everyday applications and decision-making. A 2024 POST horizon scanning consultation of academic experts identified the ethics, governance and regulation of AI as an issue of growing importance for Parliament.
While there is currently no general statutory regulation of AI in the UK, various areas of law touch on AI regulation in practice (CBP9817). In particular, the General Data Protection Regulation (GDPR) governs the collection and use of personal data and places some restrictions on automated decision-making with significant effects on people’s lives.
The UK Government’s March 2023 white paper ‘A pro-innovation approach to AI regulation’[1] marks one of the UK’s first steps towards creating a specific framework around responsible AI development and use.
In contrast, the European Union’s AI Act came into force in August 2024, taking a more active regulatory approach based on categorising AI uses into risk tiers with corresponding legal obligations and significant financial penalties for misuse.[2] The United States has taken a more industry-led approach, although the US Federal Government has been directed to follow a set of key principles for the responsible use of AI.[3]
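To illustrate the Act’s tiered structure, a minimal Python sketch follows; the obligation summaries are simplified paraphrases for illustration only, not legal definitions.
```python
# Illustrative sketch of the EU AI Act's four risk tiers. Tier names
# follow the Act; the obligation summaries are simplified paraphrases
# for illustration, not legal text.

RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring by public authorities)",
    "high": "permitted subject to conformity assessment, risk management, "
            "logging and human oversight",
    "limited": "permitted subject to transparency duties, such as disclosing "
               "that users are interacting with AI",
    "minimal": "permitted with no additional obligations beyond existing law",
}

for tier, obligations in RISK_TIERS.items():
    print(f"{tier:>12}: {obligations}")
```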
Representatives of 28 countries and the EU came together at the AI Safety Summit hosted by the UK Government at Bletchley Park in November 2023, pledging to cooperate on the development of safe and responsible AI.[4]
The July 2024 King’s Speech said the government would ‘harness the power of artificial intelligence’ and ‘look to strengthen safety frameworks’.[5]
The Institute of Electrical and Electronics Engineers (IEEE), a body that sets global technical standards for industry, is developing its IEEE P7000 series of standards on the ethical design of AI systems.[6][7]
The Science, Innovation and Technology Select Committee noted in response to the UK’s AI White Paper in 2023 that there was an opportunity for the UK to become a global regulatory leader in AI, while acknowledging the risk that the UK might fall behind and see international standards set primarily by the EU and USA.[8]
Challenges and opportunities
The adoption of AI could benefit society, including in:[9]
- healthcare, where AI could help clinicians to diagnose diseases
- transportation, where AI could support freight management (PN692)
- education, where AI could assist teachers with lesson planning, scheduling and marking (PN712)
- customer services, where AI could help provide cheaper, improved and more sustainable products and services
AI could boost the UK economy and productivity. Research commissioned by Microsoft suggested that AI-powered innovation could create a potential £550 billion in economic value for the UK by 2035.[10] KPMG, meanwhile, forecast that around 2.5% of all tasks could be performed by generative AI, with a potential 1.2% increase in UK productivity.[11]
However, AI developments may disproportionately affect some groups and exacerbate existing inequalities, such as the gender pay gap.[12][13] For example, most clerical work is carried out by women, and some reports state that it could become redundant due to AI (PN708). Regional disparities in the net employment impacts of AI could also widen regional inequalities in economic benefits (PN708).
The high cost of developing the most cutting-edge AI models has raised concerns about the concentration of market power in a few private sector organisations, and about such development being out of reach for small and medium-sized enterprises, academia and non-governmental organisations (PB57). The 2024 King’s Speech said the government would bring forward legislation to ‘place requirements on those working to develop the most powerful artificial intelligence models’.[5]
Experts in the POST consultation felt that, if used responsibly, AI has the potential to promote digital inclusion. For example, in education, AI could provide learners with affordable learning support. Conversely, experts also highlighted the risk that certain groups lack access to the infrastructure needed to use AI tools, such as the internet or computers.[14] Digital exclusion is associated with health, financial, social and employment inequalities and can prevent people from participating fully in society (PN725). Experts also mentioned the need to ensure the UK population has the skills, understanding and access to harness AI and data responsibly (PN697).[15]
AI tools are increasingly used in the workplace (CBP9817). Some unions have raised concerns about possible detrimental impacts on worker dignity and mental health due to the increased use of AI for work allocation, monitoring and disciplinary decisions (PN708).[16]
Bias in AI systems could lead to discriminatory outcomes and increased inequalities.[17] A lack of transparency in how large AI models make decisions also poses difficulties for determining liability and responsibility if a person has been adversely affected by an automated decision (PN708). The Equality and Human Rights Commission has recognised the growth of AI in decision making as a major challenge for regulation.[18]
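One way such bias can be made concrete is by comparing an automated system’s outcomes across demographic groups. The sketch below computes a simple ‘demographic parity’ gap, the difference in favourable-outcome rates between groups, on invented data; the groups, decisions and threshold are hypothetical, and real algorithmic audits draw on a much wider range of fairness metrics.
```python
# Minimal sketch of one fairness check: the demographic parity gap,
# i.e. the difference in favourable-outcome rates between groups.
# All data here is hypothetical and for illustration only.

from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs.
    Returns (gap, rates): the largest difference in approval rate
    between any two groups, and the per-group approval rates."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions: (applicant group, approved?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

gap, rates = demographic_parity_gap(sample)
print(f"Approval rates by group: {rates}")   # A: 0.75, B: 0.25
print(f"Demographic parity gap: {gap:.2f}")  # 0.50, a large disparity
```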
The use of large volumes of data to train AI systems has raised widespread concerns about risks to privacy and the need for data protection.[19][20] For example, in March 2023 Italy’s data regulator became the world’s first to ban OpenAI’s ChatGPT from using its citizens’ personal data in its training data sets.[21]
Copyrighted data can be used to train some generative AI models, which may then produce outputs that resemble the copyrighted material. These outputs raise difficult questions both for copyright law and for the plagiarism policies of educational institutions such as universities.[22][23][24]
The government has identified many cybersecurity risks posed by AI, including generative AI helping cyber attackers to create fake online personas, obtain confidential information, carry out convincing phishing attacks and scam calls, and produce child sexual abuse material.[25]
Many experts have raised concerns about AI aiding the spread of misinformation or disinformation (PN719), with implications for democratic debate and elections globally. Several commentators have suggested that the global elections of 2024, including the US presidential election, are among the first to be seriously affected by the growing use of AI.[26]
Some authors have also identified longer-term, potentially existential, risks to humanity if AI is given control over key decision-making systems without suitable safeguards to ensure that human interests are protected.[27]
Experts’ recommendations for future policy include (PN708):
- a law enshrining a right to human intervention in automated decision-making, or a ban on certain uses of automated decision-making, akin to the EU AI Act (see the illustrative sketch after this list)
- allowing open access to underlying AI code and related documentation, both for transparency about how models work and to improve the accessibility of AI developments
- placing a duty on companies and public bodies to carry out impact assessments of automated decisions
- creating resources of AI expertise that regulatory bodies could consult in order to respond to AI-related matters within their individual remits
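As a purely illustrative sketch of the first recommendation above, the code below gates ‘significant’ automated decisions behind human review before they take effect; the `Decision` structure, significance flag and `request_human_review` hook are all hypothetical.
```python
# Hypothetical sketch of a human-intervention gate for automated
# decisions with significant effects, as the first recommendation
# above envisages. The data structure, threshold for "significant"
# and review hook are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "approve" or "refuse"
    significant: bool     # would this decision significantly affect the person?
    reviewed_by_human: bool = False

def request_human_review(decision: Decision) -> Decision:
    """Placeholder: route the case to a human reviewer (hypothetical hook)."""
    print(f"Case {decision.subject_id}: escalated for human review")
    decision.reviewed_by_human = True
    return decision

def finalise(decision: Decision) -> Decision:
    # Significant automated decisions must not take effect without
    # human intervention; minor ones may proceed automatically.
    if decision.significant and not decision.reviewed_by_human:
        return request_human_review(decision)
    return decision

finalise(Decision("applicant-001", "refuse", significant=True))
finalise(Decision("applicant-002", "approve", significant=False))
```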
Key uncertainties/unknowns
The speed at which AI technologies continue to develop and spread across society poses a significant challenge for regulators attempting to keep pace.[28]
It is also unknown to what degree other countries may set global standards in this area ahead of the UK, and how those standards may affect the UK. Technology companies may adapt their products to regulatory regimes in countries that are faster to impose regulation than the UK, such as the EU AI Act. Without shared standards or mutual recognition between regulators, companies may comply with some regimes and not others.
It is unclear how much control and decision-making power public and private sector organisations will cede to automated systems over the coming years, but the answer will shape the urgency of potential regulation in this area.
Finally, research into the best ways to embed safety and ethical principles into the algorithms underpinning AI is ongoing. It remains unclear whether it is technically possible to make such systems safe and responsible without direct human oversight.[29][30][31]
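To give a flavour of this research, one line of work on certified robustness [29] tries to prove mathematically that a model’s output cannot change under any sufficiently small perturbation of its input. The sketch below applies the simplest version of this idea, interval bound propagation, to a single linear layer; the weights, input and perturbation size are invented, and real systems must handle deep, nonlinear networks.
```python
# Minimal sketch of interval bound propagation, one technique from the
# certified-robustness literature (see [29]): bound a layer's outputs
# under any input perturbation of size eps. Weights and eps are invented.

import numpy as np

def linear_layer_bounds(W, b, x, eps):
    """Bounds on W @ x' + b for every x' with |x' - x| <= eps (elementwise)."""
    centre = W @ x + b
    radius = np.abs(W) @ np.full_like(x, eps)
    return centre - radius, centre + radius

W = np.array([[0.5, -1.0], [2.0, 0.3]])
b = np.array([0.1, -0.2])
x = np.array([1.0, 2.0])

lower, upper = linear_layer_bounds(W, b, x, eps=0.1)
print("output lower bounds:", lower)  # no perturbed input can go below these
print("output upper bounds:", upper)  # ...or above these
```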
Key questions for Parliament
- What kind of regulatory regime should the UK adopt towards AI? Would it involve more, or less, central oversight?
- Should the government develop a system of verification or certification for AI systems?
- What should the government’s role be in any algorithmic audit or algorithmic risk evaluation?[32]
- How should copyright law adapt to generative AI, where the relationship between an author’s work and how it is used in training an AI model and generating related outputs is often opaque?
- How should employment law adapt to the use of automated decision making by management in areas such as recruitment or performance management?[33]
- How do data protection laws need to adapt? What ownership and control should people have over data they generate?[34][35]
- Who should be held legally accountable when AI systems violate these legal principles? What should the balance of liability be between the original developer, the organisation employing or disseminating the tools, and the end user(s)?
- What steps should the government take to reduce the sources, spread and impact of AI-generated misinformation and disinformation? How can potential regulation be balanced against freedom of expression (PN719)?
- Should the UK cooperate with other countries on the development of safe and responsible AI, and if so, how? Should it work with other countries to set global standards, or adopt its own distinct regulatory approach?
Related documents
- Policy Implications of artificial intelligence POSTnote
- Artificial intelligence and employment law HoC briefing
- Disinformation: sources, spread and impact POSTnote
- AI in society research in brief
- How is artificial intelligence affecting society rapid response
- Artificial intelligence: An explainer POSTbrief
References
[1] UK Government (2023). A pro-innovation approach to AI regulation.
[2] Hickman et al. (2024). Long awaited EU AI Act becomes law after publication in the EU’s Official Journal. White & Case LLP.
[3] The White House (2023). White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
[4] Prime Minister’s Office (2023). The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023.
[5] UK Government (2024). King’s Speech 2024: background briefing notes.
[6] IEEE Standards Association (2021). IEEE 7000-2021: IEEE Standard Model Process for Addressing Ethical Concerns during System Design.
[7] Spiekermann, S. (2022). What to Expect From IEEE 7000: The First Standard for Building Ethical Systems. Technology and Society.
[8] House of Commons Science, Innovation and Technology Committee (2023). The governance of artificial intelligence: interim report.
[9] European Parliament (2020). Artificial intelligence: threats and opportunities.
[10] Microsoft and Public First (2024). Unlocking the UK’s AI Potential: Harnessing AI for Economic Growth.
[11] KPMG (2023). Productivity boost from Generative AI could add £31 billion of GDP to the UK economy.
[12] Gomez-Herrera, E. et al. (2022). A gender perspective on artificial intelligence and jobs: The vicious cycle of digital inequality. Bruegel.
[13] Aldasoro, B. et al. (2024). The gen AI gender gap. Bank for International Settlements.
[14] Mehrabi, Z., et al. (2021). The global divide in data-driven farming, Nature Sustainability, Volume 4, pages 154–160.
[15] Knobel, M. (2008). Digital Literacies: Concepts, Policies and Practices. Peter Lang, New York USA.
[16] Brione, P. et al., (2023). Artificial intelligence and employment law. House of Commons Library.
[17] Manyika, J. et al., (2019). What Do We Do About the Biases in AI? Harvard Business Review.
[18] Equality and Human Rights Commission (2024). An update on our approach to regulating artificial intelligence.
[19] Davis, K., et al. (2012). Ethics of Big Data: Balancing Risk and Innovation. O’Reilly Media Inc, USA.
[20] Birch, K., et al. (2021). Data as asset? The measurement, governance, and valuation of digital personal data by Big Tech. Big Data & Society, Volume 8, 1.
[21] Wired (2023). ChatGPT Has a Big Privacy Problem.
[22] Zulhusni, M. (2023). Content creation: Is ChatGPT guilty of plagiarism? Techwire Asia.
[23] Wired (2023). ChatGPT Is Making Universities Rethink Plagiarism.
[24] Cortinhas, C. et al. (2023). Prevention and Detection of Plagiarism in Higher Education: Paper Mills, Online Assessments and AI. University of Exeter.
[25] Department for Science, Innovation and Technology (2024). Cyber security risks to artificial intelligence.
[26] Andrew, J. (2023). The first AI-enhanced presidential election. The Hill.
[27] Center for AI Safety (2023). An Overview of Catastrophic AI Risks.
[28] Marchant, G. E. (2020). Governance of Emerging Technologies as a Wicked Problem, Vanderbilt Law Review, Volume 73, 6, pages 1861–1877.
[29] Li, L., et al. (2020). SoK: Certified Robustness for Deep Neural Networks, IEEE Symposium on Security and Privacy.
[30] Lavaei, A., et al. (2022). Automated verification and synthesis of stochastic hybrid systems: A survey, Automatica, Volume 146.
[31] Seshia, S. A., et al. (2022). Toward verified artificial intelligence, Communications of the ACM, Volume 65, 7, pages 46-55.
[32] Ada Lovelace Institute (2020). Examining the Black Box: Tools for assessing algorithmic systems.
[33] Allen, R., et al. (2021). Technology Managing People – the legal implications (PDF), AI Law Consultancy and Trades Union Congress.
[34] Hummel, P., et al. (2021). Own Data? Ethical Reflections on Data Ownership. Philosophy & Technology, Volume 34, pages 545–572.
[35] Thouvenin, F., et al. (2021). Data ownership and data access rights: Meaningful tools for promoting the European digital single market. In: Big Data and Global Trade Law, eds Burri, M., Cambridge University Press, pages 316-339.
