DOI: https://doi.org/10.58248/HS104

Overview

The horizon scan identified a range of digital issues related to national security that may be relevant to Parliament over the next five years. These include:

  • Cyber security: the practice of protecting IT systems, devices, and data from unauthorised access and interference, or cyber attacks.[1]
  • Artificial intelligence (AI): technologies that enable computers to simulate elements of human intelligence including learning, reasoning, and perception (POSTbrief 57).[2]
  • Disinformation: the deliberate creation and spread of false or misleading content; misinformation is the accidental spread of such content (POSTnote 719).

Challenges and opportunities

Expanding cyber risks

There is increasing risk of foreign states targeting the UK for political, military, or financial gain (POSTnote 684). Cyber attackers may try to take control of critical infrastructure, disrupt operations, or gather confidential information.[3] Risks to the UK include the loss or modification of data, or disruption to critical services such as health care records, utilities, and banking (POSTnote 684).[7]

In 2023, the National Cyber Security Centre (NCSC) stated that it had “seen the emergence of state-aligned actors as a new and emerging cyber threat to critical national infrastructure.”[4] A number of states, including China, Russia, Iran and North Korea, have been identified by the UK Government and the NCSC as threats.[5][6][7] However, the boundaries between actors are increasingly blurred, with the growth of hacking organisations and cyber criminal groups that operate as a service or with the implicit backing of states.[8]

The number of devices connected by digital networks, for example in Internet of Things (IoT) networks, is increasing,[9] exposing infrastructure such as energy, food and water supplies to new risks (POSTnote 656).[8]

Growing data collection in the public sector also risks exposing sensitive personal information to hacking (POSTnote 664).

Managing cyber security

The UK is recognised by some as having strengths in cyber security and cyber intelligence, with clear strategic oversight at the political level (POSTnote 684).

The Cyber Security Operations Centre, National Cyber Force, and Cyber Crime Unit manage UK cyber security. The UK Government’s National Cyber Strategy has been in operation since 2022 and will continue until 2030.[10] The strategy focuses on increasing the UK’s ‘cyber power’, which is “the ability of a state to protect and promote its interests in and through cyberspace.”[11]

Challenges include the UK’s:

  • need to upskill cyber workforces (POSTnote 697 and POSTnote 643)
  • need to invest in “complex” and “inconsistent” public digital services
  • limited industrial base to build and export equipment that may influence the future of cyberspace (POSTnote 684)

International alliances, however, may offer ways to offset these limitations (POSTnote 684).

AI: benefits and risks

Using AI for predictive policing or facial recognition systems, for instance, can aid national security by identifying dangerous individuals, but can also create new vulnerabilities, including (POSTnote 681):

  • the risk of bias in AI decision making, for instance from training AI models on biased data sets; AI decision making could also be weaponised by intentionally manipulating learning models or the input data provided to AI (POSTnote 731 and POSTnote 708)
  • AI systems being undermined by tampering with input data; for example, an AI-driven car could be deceived by deceptive markings on a road[12]
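The input-tampering risk described above can be illustrated with a deliberately simplified sketch: a toy linear classifier whose decision is flipped by a small, targeted change to its input, analogous to the deceptive road markings example. All names, weights and numbers below are invented for illustration and do not reflect any real system.

```python
# A minimal sketch of input tampering (an "adversarial example") against
# a toy linear classifier. Hypothetical values only, for illustration.

def classify(features, weights, bias):
    """Return 1 ('safe') if the weighted score is positive, else 0 ('hazard')."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

weights = [0.9, -0.4, 0.2]   # hypothetical learned weights
bias = -0.1

clean_input = [0.3, 0.5, 0.1]

# An attacker nudges each feature slightly in the direction of its weight,
# analogous to adding small road markings a human driver would ignore.
epsilon = 0.2
tampered = [x + epsilon * (1 if w > 0 else -1)
            for x, w in zip(clean_input, weights)]

print(classify(clean_input, weights, bias))  # 0: correctly flagged as 'hazard'
print(classify(tampered, weights, bias))     # 1: small tampering flips the decision
```

The perturbation is small relative to each feature, yet it reverses the classification, which is the core of the vulnerability: the model's decision boundary can be crossed by changes a human observer would consider insignificant.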

AI is also being used to enable a new generation of highly disruptive cyber attacks that are more intelligent, capable of learning and adaptable.[17]

Autonomous military operations are also becoming more frequent, with AI either forming part of weapon systems or, in some cases, being the weapon itself, reducing the risk to UK troops.[13] In 2022, the Defence Artificial Intelligence Centre was established to manage the UK’s use of AI in military operations.[14]

AI models are also complex, with outputs that are difficult to verify. This may create challenges for accountability and transparency.[15]

AI increasing the risk of disinformation

AI can also generate inaccurate text, images, videos and other forms of disinformation (POSTnote 708), and may lead to increasing risk of online extremism (POSTnote 622).[16][17] Foreign state-backed disinformation may aim to provoke confusion, aggravate political polarisation, undermine democracy, or create distrust in societies (POSTnote 719).[18][19][20]

While studies indicate that disinformation can influence beliefs, the evidence that it leads to behavioural change or real-world effects is inconclusive, and such effects are difficult to measure.[21][22]

In January 2024, the World Economic Forum labelled disinformation as the biggest short-term risk globally, due to its perceived potential to undermine democratic elections, promote societal unrest and increase censorship through counter-disinformation initiatives (POSTnote 719).[23]

This has presented risks to political processes. For example, a 2023 deepfake audio clip of the Mayor of London Sadiq Khan could have caused “serious disorder” ahead of Armistice Day.[24]

However, there are also concerns that an atmosphere of distrust could be exploited by authorities to discredit genuine evidence of their actions by claiming AI fabrication.[25] In contrast, AI could strengthen democracy by improving engagement in elections, for instance by explaining party manifestos (POSTnote 708).[26]

Regulating AI to improve security

In May 2024, a government-commissioned report on the safety of AI concluded that AI may advance the public interest and enhance national security systems if properly governed.[27] It also stated that malfunctions in AI, and its malicious use, are creating new risks of harm. According to the 2024 King’s Speech, the upcoming Product Safety and Metrology Bill will aim to “effectively regulate these high-risk products [such as AI] and protect consumers and workers.”

In 2022, the Ministry of Defence (MoD) established the Defence AI Centre (DAIC). The MoD said it intends to “be the world’s most effective, efficient, trusted and influential Defence organisation for our size” in AI, delivering “AI, data analytics, robotics, automation and other cutting-edge capabilities.”[28]

In 2023, the House of Lords AI in Weapon Systems Committee urged caution over the use of AI in weapon systems,[29] and recommended:[14]

  • the need to retain public confidence and democratic endorsement in the development of AI technologies
  • prohibiting the use of AI in nuclear command, control, and communications
  • leading by example in international engagement on regulating automated weapon systems

Other challenges include the fast pace of AI development, which makes it difficult for regulators to keep up, and finding an approach to regulating AI that both protects the public and promotes innovation.[30]

Key uncertainties/unknowns

  • The international approach to AI safety is unclear: in 2023, the 28 countries that participated in the AI Safety Summit, and the EU, signed the Bletchley Declaration on AI safety.[31] It is not known how widely the declaration will be applied globally.
  • Shortfall in skilled cyber workforce: there is a need for the UK to improve skills, security, technologies and offensive capability (POSTnote 684).
  • Cyber attacks are not always well reported, making their impacts difficult to determine.

Key questions for Parliament

  • Are the National Security Act 2023 and the Foreign Interference Offence[32] sufficient to address malicious activity by foreign powers?
  • How will open-source intelligence support or create risks for national defence, and to what extent is it legal?[33][34][35]
  • Will international collaboration help enforce cyber security regulations? Will UK foreign policy need to change?[36]
  • How will the UK engage in international treaty discussions for cyber security and AI safety?
  • How is ‘ethical hacking’, the use of hacking techniques by friendly parties, regulated when used in research and to strengthen defence?[37][38]
  • Should regulation change our response to cyber attacks, such as banning ransom payments, or requiring mandatory reporting?
  • To what extent should vendors and corporations adopt secure-by-default products, which are designed from the outset to be secure?[39]
  • How much human oversight should be integrated into the use of AI, for example in training or warfare, or where decisions about national security involve AI-derived intelligence?[40]

References

[1] Clark, A. (2024). Cybersecurity in the UK. House of Commons Library.

[2] Rough, E. and Sutherland, N. (2023). Debate on Artificial Intelligence. House of Commons Library.

[3] NCSC (2024). NCSC warns of enduring and significant threat to UK’s critical infrastructure.

[4] National Cyber Security Centre (2023). NCSC Annual Review 2023.

[5] UK Government (2023). National Risk Register.

[6] NCSC (2024). NCSC warns of enduring and significant threat to UK’s critical infrastructure.

[7] NCSC (2024). NCSC and partners issue warning over North Korean state-sponsored cyber campaign to steal military and nuclear secrets.

[8] Clark, A. (2024). Cybersecurity in the UK. House of Commons Library.

[9] House of Commons Culture, Media and Sport Committee (2023). Connected tech: smart or sinister?

[10] UK Government (2022). National Cyber Strategy.

[11] UK Government (2022). Government Cyber Security Strategy.

[12] National Institute of Standards and Technology (2024). NIST identifies types of cyberattacks that manipulate behaviour of AI systems

[13] House of Lords AI in Weapon Systems Committee (2023). Proceed with Caution: Artificial Intelligence in Weapon Systems.

[14] UK Government (2022). Defence Artificial Intelligence Centre.

[15] Cheong, B. (2024). Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making. Front. Hum. Dyn. 6:1421273.

[16] Risius, M. et al. (2023). The digital augmentation of extremism: Reviewing and guiding online extremism research from a sociotechnical perspective. Inf. Syst. J., vol 34, 931.

[17] Global Internet Forum to Counter Terrorism (2023). Considerations of the Impacts of Generative AI on Online Terrorism and Extremism.

[18] National Cyber Security Centre (2023). Case study: Defending our democracy in a new digital age – at the ballot box and beyond.

[19] Karlsen, G. H. (2019). Divide and rule: ten lessons about Russian political influence activities in Europe. Palgrave Commun., Vol 5, 1–14. Palgrave.

[20] World Economic Forum (2024). The Global Risks Report 2024.

[21] Watts, D. J. et al. (2021). Measuring the news and its impact on democracy. Proc. Natl. Acad. Sci., Vol 118.

[22] Adams, Z. et al. (2023). (Why) Is Misinformation a Problem? Perspect. Psychol. Sci., Vol 18, 1436–1463. SAGE Publications Inc.

[23] World Economic Forum (2024). Global Risks 2024: Disinformation Tops Global Risks 2024 as Environmental Threats Intensify.

[24] Spring, M. (2024). Sadiq Khan says fake AI audio of him nearly led to serious disorder. BBC News.

[25] Kroetsch, J. (2023). Skepticism in Era of AI Deep Fakes Will Erode Defamation Claims. Bloomberg Law.

[26] Krimmer, R. et al. (2022). Elections in digital times: a guide for electoral practitioners. UNESCO.

[27] Department for Science, Innovation and Technology (2024). International Scientific Report on the Safety of Advanced AI.

[28] Ministry of Defence (2022). Defence AI Strategy.

[29] Ministry of Defence (2022). Ambitious, safe, responsible: our approach to the delivery of AI-enabled capability in Defence.

[30] Tobin, J. (2023). Artificial intelligence: Development, risks and regulation. House of Lords Library.

[31] UK Government (2023). The Bletchley Declaration by Countries Attending the AI Safety Summit.

[32] Home Office (2024). Foreign interference: National Security Bill factsheet.

[33] CWSI (2024). What is Open Source Intelligence?

[34] Centre for Emerging Technology and Security, Alan Turing Institute (2024). Report: The future of open-source intelligence for UK national security.

[35] Tech UK (2023). Unleashing Open Source Intelligence for UK National Security.

[36] King’s College London (2022). The National Cyber Force that Britain Needs?

[37] Lipscombe, S. et al. (2023). Criminal Justice Bill 2023-24.

[38] IBM (2024). What is ethical hacking?

[39] National Cyber Security Centre (2018). Secure by default.

[40] Centre for Emerging Technology and Security, Alan Turing Institute (2024). AI and Strategic Decision-Making.



Horizon Scan 2024

Emerging policy issues for the next five years.