DOI: https://doi.org/10.58248/HS56

Overview

A horizon scan consultation of researchers revealed significant concern about the expanding landscape of cyber crime and its harm, including:

  • entities gaining unauthorised access to digital devices or networks, for example to commit fraud or collect and leverage confidential data to extort money. Entities include individuals, organised criminal groups, states and state-aligned groups[1]
  • online bullying
  • assault
  • cyberstalking
  • coercive control
  • the spread of mis- and disinformation
  • radicalisation

In April 2024, a Cyber Security Breaches Survey by the Home Office and the Department for Science, Innovation and Technology, covering 2,000 businesses and 1,004 charities, found that half of the businesses and a third of the charities had experienced cyber crime in the previous 12 months.[2] This was despite organisations increasingly adopting protections against the most common cyber-attacks.[2]

Motivations for cyber crime can include financial gain, gathering confidential information, or influencing political discourse (PN684, CBP9821). The UK Government has identified China and Russia as the greatest state-based cyber threats, with Iran and North Korea also possessing cyber capabilities (PN684).

Various technologies relate to cyber crime and harm:

  • Cryptocurrencies are a digital means of financial exchange not overseen by a central authority (CBP8780). Cryptocurrencies are increasingly used by criminals for money laundering, investment fraud and the online trade of illicit goods (CBP8780). Cryptocurrency exchanges used by consumers can be hacked, and there are many cryptocurrency scams, such as fraudsters encouraging consumers to invest in non-existent new coins (CBP8780).
  • The metaverse is a range of technologies that allow users to interact with each other in believable virtual worlds (PB61). Cyber security risks include identity fraud, virtual assaults, online child sexual exploitation and abuse, recruitment and training by extremist organisations, and manipulation enabled by the collection of users’ personal and biometric data (PB61).
  • Attackers can use generative AI to produce realistic images and videos, known as ‘deepfakes’, as well as realistic text and rapid responses to victims, in order to manipulate them into providing access to systems or information for online fraud (PN708).
  • Social media is increasingly being used by state actors and other organised groups to gather sensitive political and military information, spread false information online and radicalise users, which presents a profound challenge for democratic institutions.[3]

Experts in the horizon scanning consultation highlighted how technical and legislative solutions could address the challenges raised by criminality in an increasingly online world.

Challenges and opportunities

Unauthorised access to digital devices and networks could breach sensitive datastores, threaten personal privacy and put individuals at risk of physical harm, such as stalking.

In June 2024, 300 million pieces of patient data, including blood test records, were exposed from two NHS Trusts.[4] The attack was attributed to the hacker group Qilin, thought to be located in Russia.[4] It is not known how much money the hackers demanded from NHS provider Synnovis or whether the company entered negotiations.[4]

Analysts at the Internet Watch Foundation are particularly concerned about a rise in AI-generated child sexual abuse material for sale on the dark web.[5] This poses a risk of re-victimisation, where perpetrators use AI to manipulate existing child sexual abuse material into media featuring famous children or those already known to abusers.[5] Cryptocurrencies are also particularly prevalent in the trade of child sexual abuse material (CBP8780).

There have been numerous incidents of deepfake pornographic content of individuals, predominantly women, being shared online, leading to harassment, humiliation and distress (PN708). The sharing of non-consensual pornographic deepfakes has been criminalised by the Online Safety Act 2023, along with various other online harms (PN708).

Further legislation making it an offence to create sexually explicit deepfake images was planned through an amendment to the Criminal Justice Bill, but the Bill was halted at report stage in the House of Commons by parliamentary prorogation in May 2024.[6],[7]

Some reports have found that users, including children, have experienced intense trauma or distress following virtual assaults in the metaverse (PB61).[8] UK laws, such as those on sexual assault, may cover some but not all legal issues arising in the metaverse, and have yet to be tested in UK courts (PB61).

In evidence provided to the Department for Culture, Media and Sport, UK online safety charity Glitch noted how increasing online interactions since the COVID-19 lockdowns have led to new forms of online abuse, such as attackers joining video calls to display violent or pornographic material.[9]

Experts from the horizon scanning consultation also noted the increasing role of online technology in coercive control,[10] emotional abuse, online stalking, and online bullying. Domestic violence charity Refuge reported in 2020 that 72% of service users experienced online abuse.[11]

Researchers found that criminal organisations increased their online efforts to recruit children into drug gangs during COVID-19 lockdowns. These organisations developed recruitment strategies via social media that persisted after the pandemic.[12]

Finally, many researchers have highlighted concerns about the potential impacts of mis- and disinformation on society and democratic institutions:

  • Online misinformation and disinformation could undermine legitimate public health initiatives (COVID-19 misinformation). For example, during the COVID-19 pandemic, research by King’s College London found that belief in false claims that 5G masts spread the virus was associated with lower compliance with social distancing guidance.[13]
  • An increasing proliferation of mis- or disinformation could erode trust in online news, democratic institutions, and election processes and outcomes (PN719, cyber security of elections).[14]
  • Online radicalisation and the spread of extremist ideology present an acute threat to democratic debate and risk enabling domestic terrorism or violent disorder (PN622).[15]

Cyber security experts, including academics and government bodies, identified several opportunities for combatting cyber crime and harm:

  • The National Cyber Security Centre noted that AI has potential applications for cyber security, for example in effectively identifying fraudulent emails (see the illustrative sketch after this list).[16]
  • Designing technologies to be “secure-by-design and default” can embed security considerations at each stage of development, and is a strategic priority in the Government Cyber Security Strategy.[17] Secure-by-design technologies are likely to be required for critical UK infrastructure such as telecommunications, supply chains and the energy grid.[18] Smart devices may require certification processes, particularly in cases where a cyber-attack poses a threat to human life, such as in a self-driving vehicle.[19]
  • Industry and academic experts have highlighted how government regulation of the development and use of AI could help ensure models are developed safely and protected against malicious use outside their intended purposes, such as generating disinformation (PN708).[20]
  • In 2023, the NSPCC recommended that the government “review legislation on a rolling basis to ensure that immersive environments are adequately covered” by protections against online harms (PB61).[21]
  • Policy considerations for countering mis- and disinformation could include limiting its spread once published, preventing people from engaging with it, and producing good information (PN719).
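As an illustration of the NCSC’s point about AI identifying fraudulent emails, the sketch below shows a minimal text classifier of the kind that could flag phishing messages. It is a hypothetical example rather than a description of any deployed system: the training emails and labels are invented, and a real detector would require a large labelled dataset and more robust features.

# Minimal sketch of AI-based fraudulent email detection (hypothetical).
# Assumes scikit-learn is installed; all training data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = fraudulent, 0 = legitimate
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month's subscription is attached",
    "You have won a prize, click this link to claim your reward",
    "Meeting moved to 3pm, see the updated agenda attached",
]
labels = [1, 0, 1, 0]

# TF-IDF converts each email into word-frequency features;
# logistic regression then learns which words signal fraud.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new message: estimated probability that it is fraudulent
incoming = "Click this link urgently to verify your account"
print(model.predict_proba([incoming])[0][1])

In practice, systems of this kind combine many more signals (such as sender reputation, links and attachments) and far larger datasets; the sketch only illustrates the underlying supervised-learning approach.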

Key uncertainties/unknowns

There is very little evidence on the impact of misinformation and disinformation campaigns on people’s beliefs and behaviours (PN719), and some researchers have said that measuring their influence is a “notoriously difficult task”.[22] Some researchers argue that small groups of voters can be swayed enough to influence election results, while others believe the scale of the problem is overstated.[23]

The Internet Watch Foundation noted a potential legal ambiguity around AI-generated child sexual abuse material.[24] Creation and possession of such images may straddle two pieces of legislation, the Protection of Children Act 1978[25] and the Coroners and Justice Act 2009,[26] creating a risk of shortened sentences if an existing victim cannot be identified. There is currently no legislation prohibiting the publishing of guides to the creation of AI-generated child sexual abuse material.[24]

Experts noted that attributing cyber crimes to their perpetrators can be difficult and time-consuming.[1] This can make legal proceedings challenging, especially if entities are operating with the backing of foreign governments.[1] The challenge has been exacerbated by the rise of cyber crime “as-a-service”, which can blur responsibility between the developer and the user of malicious software.[1] Consultation respondents suggested legal frameworks may need to acknowledge shared culpability between users and creators of AI models for cyber crime.

Changing international landscapes can affect cyber crime, harm and security. For example, the war in Ukraine has made cyber attacks and disinformation campaigns a more critical strategic objective for some states (cyber security of elections).

Key questions for Parliament

  • Does critical UK infrastructure have sufficient protection against cyber crime, such as state-sponsored attacks, and how can this be improved?
  • How, if at all, will existing regulation be reviewed to ensure it protects individuals from online harms?
  • How can the government, regulators and social media companies combat online radicalisation, hate crime and recruitment into criminal activity and ensure that online misinformation and disinformation do not undermine democratic processes and institutions?
  • Should social media companies take greater responsibility and action in combatting online harms?
  • Is legislation required to criminalise the creation of sexually explicit deepfakes?
  • What legal reforms are needed to ensure legislation against AI-generated child sexual abuse material is clear and comprehensive?

Related documents

References

[1] POST (2024). Cyber security of elections.

[2] Home Office (2024). Cyber security breaches survey 2024.

[3] Home Office (2024). Foreign interference: National Security Bill factsheet.

[4] Campbell, D. (2024). Records on 300m patient interactions with NHS stolen in Russian hack. The Guardian.

[5] Internet Watch Foundation (2024). How AI is being abused to create child sexual abuse material (CSAM) online (summary).

[6] UK Government (2024). Government cracks down on ‘deepfakes’ creation.

[7] UK Parliament (2024). Criminal Justice Bill [as amended in Public Bill Committee].

[8] BBC (2022). Female avatar sexually assaulted in Meta VR platform, campaigners say.

[9] Glitch (online). Response to DCMS – The impact of COVID-19 on online abuse and harassment: Written evidence submitted by Glitch.

[10] Dragiewicz, M. (2018). Technology facilitated coercive control: domestic violence and the competing roles of digital media platforms. Feminist Media Studies, Volume 18, 4, pages 609-625.

[11] Refuge (2020). 72% of Refuge service users identify experiencing tech abuse.

[12] Brewster, B., et al. (2021). Covid-19 and child criminal exploitation in the UK: implications of the pandemic for county lines. Trends in Organized Crime, Volume 26, pages 156-179.

[13] Allington, D., et al. (2020). The relationship between conspiracy beliefs and compliance with public health guidance with regard to COVID-19. King’s College London.

[14] House of Lords Democracy and Digital Technologies Select Committee (2020). Digital Technology and the Resurrection of Trust, para 23-27, HL 77.

[15] Department for Education (2023). Understanding and identifying radicalisation risk in your education setting.

[16] National Cyber Security Centre (2024). The near-term impact of AI on the cyber threat.

[17] UK Government (2022). Government Cyber Security Strategy: 2022 to 2030.

[18] House of Commons Science, Innovation and Technology Committee (2024). Cyber resilience of the UK’s critical national infrastructure – Oral evidence.

[19] Chattopadhyay, A., et al. (2020). Autonomous Vehicle: Security by Design. IEEE Transactions on Intelligent Transportation Systems, Volume 22, 11, pages 7015-7029.

[20] House of Commons Science, Innovation and Technology Committee (2024). Governance of artificial intelligence.

[21] NSPCC (2023). Child Safeguarding and Immersive Technologies: An outline of the risks.

[22] Robins-Early, N. (2023). Disinformation reimagined: how AI could erode democracy in the 2024 US elections. The Guardian.

[23] Adam, D. (2024). Misinformation might sway elections — but not in the way that you think. Nature.

[24] Internet Watch Foundation (2023). How AI is being abused to create child sexual abuse material (CSAM) online.

[25] Protection of Children Act 1978

[26] Coroners and Justice Act 2009


Photo by: smolaw11 via Adobe Stock

Horizon Scan 2024

Emerging policy issues for the next five years.