What are the effects of AI on decision making, workplace rights, transparency, surveillance, civil liberties and intellectual property?
DOI: https://doi.org/10.58248/RR15
Swift advances in artificial intelligence (AI) grabbed public attention in 2023. Human-like communication, computer-generated images and deepfake videos have played into long-held concerns about how we distinguish fact from fiction in the digital age. ChatGPT and similar technologies are fast becoming integrated into all parts of our day-to-day lives.
Policy makers around the world are now hurrying to balance AI’s potential to both help and harm us. In the past year we’ve seen the European Union’s AI Act, the international AI safety summit hosted by the UK at Bletchley Park, and the UK Government’s response to the consultation on its white paper, ‘A pro-innovation approach to AI regulation’. But what are the known effects of AI so far?
AI systems are being used to take decisions. However, there is a risk that such decisions are biased, either because of flaws in the data used to build the system or because the system reproduces the biases of its developers. Biased AI systems have already caused real-world harm.
For example, Amazon reportedly scrapped an experimental system for ranking CVs in its hiring process. It was trained using successful hiring data from the past decade, which was biased towards men. This led the system to conclude that CVs from men were preferable to those from women.
In healthcare, AI is being used successfully to help diagnose diseases, find new drugs, and develop personalised treatments based on a patient’s unique biology. All these applications could lead to better health outcomes for patients.
However, biased AI healthcare systems may also worsen health outcomes. A study from 2019 found that an algorithm widely used in the US healthcare system wrongly prioritised White patients for treatment over Black patients. Rather than basing decisions on clinical need, it had taken decisions based on previous health spending, which is systematically lower for Black people in the US.
AI is increasingly being used in the workplace to make hiring and management more efficient and to determine pay, particularly in gig-economy work such as taxi driving and food delivery. Its use can help free labour up for other tasks. A 2023 report from the management consultancy McKinsey estimated that AI has the potential to add between $2.6 trillion and $4.4 trillion annually to the global economy.
However, AI may have negative impacts on workers. Those who are monitored by AI systems have reported high levels of anxiety. The Trades Union Congress has expressed concerns over the accountability and transparency of such systems, as well as how automated pay decisions might affect collective bargaining.
AI ethicists point to transparency and access to redress as key to ensuring that AI can ethically be used in such decisions.
The police can now use AI to recognise individuals in images or videos containing very large numbers of people. This can make it quicker to identify and locate criminals.
However, there are concerns about how monitoring at this scale might conflict with an individual’s right to privacy and freedom of expression. There was a widespread backlash from civil liberties groups to Government plans to allow police facial recognition systems to search the UK’s passport photo database to identify criminals.
AI can now produce realistic (and entertaining) ‘creative’ works, in the form of text, images, videos and music.
This raises several questions around intellectual property rights. Can AI be considered an author of a creative work, and what rights should it have? When AI systems are trained on the work of specific human authors, should the original authors have rights over the output?
Courts in the UK and US are currently considering cases involving AI and intellectual property. Increased protection for writers’ intellectual property was a part of the deal that ended the Writers Guild of America’s 148-day strike in 2023.
In January 2024, the House of Lords Communications and Digital Select Committee report on ‘large language models and generative AI’ recommended that the Government should set out options, including legislative changes if necessary, to ensure copyright principles remain future proof and provide sufficient protections to rightsholders.
In February 2024, the UK Government announced in its response to the AI white paper consultation that a working group consisting of the Intellectual Property Office, rights holders and AI developers had been unable to agree on a voluntary code of practice for AI and copyright.
Some intellectual property lawyers have called for clarity on the Government’s approach to AI and copyright protections for rightsholders. They said clarity is important to position the UK as an AI leader and to promote the growth of the UK’s creative industries.
The House of Commons Culture, Media and Sport Committee’s April 2024 inquiry report on ‘creator remuneration’ also called for Government action and raised concerns that ‘the status quo simply favours AI developers’.
It’s hard to predict how AI will affect our lives in the future, although its impact so far has already been significant. While AI has immense potential to aid society, such as by improving healthcare outcomes, it could also create social and ethical harms, including greater inequality.
How society benefits from AI will depend on how it’s regulated, what safety measures are in place to safeguard against harms, who has access to it, who owns it, how people use it and wider public attitudes.
For more information on AI, read recent POST reports on the topic.
Look out for the good information toolkit too: a series of training and resources from the House of Commons Library and POST, designed to help you avoid misinformation. Subscribe to get the latest from POST delivered to your inbox, including new research.
POST research is based on literature reviews and interviews with a range of stakeholders, and is externally peer reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of recent POSTnotes and POSTbriefs on AI.
Image credits: Image by Kohji Asakawa from Pixabay