Extremism lacks a clear definition, which has made it difficult to regulate. The UK Government characterises extremism as “opposition to fundamental values, including democracy, the rule of law, individual liberty, and respect and tolerance for different faiths and beliefs”. However, this definition lacks the legal precision needed to ground extremism in UK law. Extremism is not the same as terrorism, but exposure to extremist content can encourage an individual to support terrorist tactics. Assessing the scale of exposure to extremist content is difficult: people may access it covertly, encounter it unknowingly, or be unsure whether they should report it.

The internet may facilitate extremism in multiple ways, including recruitment, socialisation, communication, networking and mobilisation. Content can be posted instantly and without verification, so material can be produced rapidly and disseminated widely. Contributors deliberately evade detection by using multiple accounts and avoiding specific terms. Responsibility for regulating the internet is shared across multiple public and private bodies, including government, counter-terrorism police and private internet companies. Existing counter-extremism strategies include content removal (often using automatic detection) and deplatforming (removing a group or individual from an online platform), as well as social interventions. Many stakeholders believe that current counter-extremism responses are too focused on law and technology, and do not address the underlying reasons that people are drawn to extremist content. The UK Government, public sector practitioners and industry stakeholders have called for a coordinated response to the range of social and technological challenges around extremism. In 2019, the UK Government set out proposals to address online extremism in the Online Harms White Paper.
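To make the detection and evasion problem concrete, the hypothetical Python sketch below shows how a simple keyword filter flags content and where it fails. The blocklist, example posts and flag_post function are invented for illustration; production moderation systems use trained classifiers rather than plain blocklists, but they face the same trade-off between false positives and deliberate term avoidance.

```python
import re

# Illustrative sketch only: a naive keyword blocklist stands in for the
# automatic detection described above. All terms and posts are invented.
BLOCKLIST = {"attack", "bomb"}

def flag_post(text: str) -> bool:
    """Flag a post if any blocklisted term appears as a whole word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST)

posts = [
    "plans for the attack are in the group chat",  # correctly flagged
    "the away side pressed hard on the attack",    # false positive: football report
    "the b0mb recipe uses obfuscated spelling",    # missed: deliberate term avoidance
]

for post in posts:
    print(flag_post(post), "|", post)
```

The second post is flagged because “attack” appears in an innocuous sporting context (a false positive), while the third evades the filter through altered spelling, mirroring the term-avoidance tactics described above.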

Key Points 

  • The internet can leave users vulnerable to social challenges, creating opportunities for extremism to spread. Users can be exposed to extremism in multiple ways, including through recruitment and socialisation. 
  • Extremist content may be found on mainstream social media sites and ‘alt-tech’ platforms, which replicate the functions of mainstream social media but have been created or co-opted for the unconventional needs of specific users. 
  • Automatic detection can be used to moderate extremist content at scale. However, it is prone to false positives and may disproportionately affect particular groups, which can fuel mistrust in the state. 
  • Many stakeholders believe that current counter-extremism responses are too focused on law and technology, and do not address the underlying reasons that people are drawn to extremist content. Individual and societal interventions aim to identify underlying socio-economic and cultural contributors and to put protective factors in place, reducing the number of people who develop extremist views. 

Acknowledgements 

POSTnotes are based on literature reviews and interviews with a range of stakeholders and are externally peer-reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of this briefing, including: 

  • Clean Up The Internet 
  • Commission for Countering Extremism 
  • Counter-Terrorism Internet Referral Unit 
  • Home Office* 
  • Moonshot CVE* 
  • Ofcom 
  • Prevent* 
  • Dr Stéphane Baele, University of Exeter 
  • Professor Mark Bellis, Public Health Wales* 
  • Ben Bradley, techUK* 
  • Joseph Briefel, Integrity UK 
  • Dr Chico Camargo, Oxford Internet Institute 
  • Professor Maura Conway, Dublin City University 
  • Professor Paul Gill, University College London* 
  • Dr Scott Hale, Oxford Internet Institute 
  • Dr Katie Hardcastle, Public Health Wales* 
  • Ivan Humble, Me & You Education 
  • Haydn Kemp, College of Policing* 
  • Ashton Kingdon, Southampton University 
  • Alex Krasodomski-Jones, Demos 
  • Dr Benjamin Lee, Lancaster University* 
  • Dr Caroline Logan, Greater Manchester NHS Mental Health Trust 
  • Professor Stuart Macdonald, Swansea University 
  • Dr Raffaello Pantucci, Royal United Services Institute for Defence and Security Studies* 
  • Dr Lorenzo Pasculli, Coventry University* 
  • Dr Bertie Vidgen, Alan Turing Institute and Oxford University* 
  • Charlie Winter, King’s College London 

* denotes people and organisations who acted as external reviewers of the briefing 

