Overview of change
The global spread of social media has prompted debate about how to protect users from harmful content that can be generated anywhere in the world and disseminated internationally. There is also increasing evidence that state actors are using social media to censor and control information. These concerns relate to wider issues around the perceived power of social media firms over society,1 online harassment,2-4 and data collection, surveillance and privacy.5,6
There has been continuing discussion about how to reconcile the opportunities that the digital world presents with the associated risks.7,8 The latter include exposure to illegal or harmful content; the use of content in a harmful way, such as the non-consensual sharing of intimate images; and the creation and facilitation of groups of individuals whose common interest is illegal or harmful in nature, or who spread misinformation.9,10 Evidence suggests that online services appealing to young people have been used for recruitment purposes by both state actors and radical groups.11 For example, while live-streaming services have been used as part of US military recruitment campaigns, YouTube has also been identified as a potential platform for radicalisation.12,13 Suggestions for addressing online harms have included online safety education and increased regulation.14,15 Social media companies are increasingly taking steps to encourage users to think more critically about the content they produce or engage with.16-20 For example, Twitter has trialled prompts encouraging users to rethink their use of ‘harmful’ language.21 Both Twitter and Facebook have also introduced new features to flag and remove dis/misinformation.22,23 Market pressures may also have a role in encouraging a safer online environment, with a growing trend for companies to boycott advertising on certain sites in response to perceived failures to act.24
Concerns have also been raised about the security of social media companies that are deemed to be at risk from state interference. For example, leaked information showed that TikTok had been instructed by the Chinese state to censor videos relating to contentious political issues (including Tibetan independence) and certain political figures (including Vladimir Putin and Kim Jong-un).25 Some apps have also been accused of collecting information that could be misused by state actors, such as recording the last item saved to a user’s clipboard.26 Reports of states disseminating dis/misinformation have also continued. For example, threat intelligence analysts have identified the use of false personas promoting anti-NATO narratives (aligned with Russian security interests) in various countries,27 and so-called ‘Advanced Persistent Threat (APT)’ actors have targeted US think tanks and other organisations focused on international affairs or security policy.28 In response to concerns about state use of social media, the EU has asked tech companies to share more information about how dis/misinformation is generated and targeted (specifically posts’ country of origin and target audiences).29
Challenges and opportunities
- States face continuing challenges in balancing regulation of social media with personal freedom. This is particularly acute for private messaging services, which carry greater privacy considerations and raise issues related to end-to-end encryption.
- Emerging evidence suggests that market pressures may be encouraging tech companies to self-regulate, though stakeholders tend to advocate measures that go beyond self-regulation alone.30-32
- There may be opportunities for international cooperation across the regulatory frameworks of the digital economy on preventing the spread of political dis/misinformation and censorship.
- Attempts to limit content on social media may prevent dissident and marginalised voices from being heard.33-36
Key unknowns
- It is unclear what the short- or long-term effects of exposure to ‘harmful’ content are for young people, or what effect different interventions (such as education) may have on their engagement with such content.
- Tech companies do not regularly publish data on dis/misinformation, censorship, targeted advertising or the specifics of blocked content, making it difficult to assess the extent of these issues.
- State censorship and dissemination of dis/misinformation are, by their nature, secretive activities, and it is not currently possible to estimate their prevalence or effects accurately. However, state-sponsored media outlets provide an indicator of how prevalent and sophisticated these techniques may be.37
Key questions for Parliament
- The Online Harms White Paper outlined proposals for tackling online harms. How will these be implemented and evaluated? How will future online threats be considered?
- What role can the UK play in ensuring global tech companies are not enabling malicious activities or misinformation by state or non-state actors?
Likelihood and impact
High impact and high likelihood in the next 5 years.
References
- Anderson, M. (2020). Most Americans say social media companies have too much power, influence in politics. Pew Research Center
- Vogels, E. (2021). The State of Online Harassment. Pew Research Center
- POST (2018). POSTnote 592: Stalking and Harassment
- Parsons, C. et al. (2019). The Predator in your Pocket: A Multidisciplinary Assessment of the Stalkerware Application Industry. The Citizen Lab
- Auxier, B. et al. (2019). Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information. Pew Research Center
- Robertson, K., Khoo, C., Song, Y. (2020). To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada. The Citizen Lab
- Orben, A. (2020). Outpaced by technology: A provocation paper. The British Academy, Reframing Childhood Past and Present
- Livingstone, S. (2020). Can we realise children’s digital rights in a digital world? A provocation paper. The British Academy, Reframing Childhood Past and Present.
- Greenemeier, L. (2018). Social Media’s Stepped-Up Crackdown on Terrorists Still Falls Short. Scientific American
- Taylor, E. and Hoffmann, S. (2020). Follow the Money: How the Online Advertising Ecosystem Funds COVID-19 Junk News and Disinformation. Oxford Information Labs.
- Fernandez, M., Gonzalez-Pardo, A. and Alani, H. (2019). Radicalisation Influence in Social Media. Journal of Web Science, 6
- Hope Not Hate (2020). State of Hate 2020.
- Katwala, A. (2020). The military is turning to Twitch to fix its recruitment crisis. Wired
- POST (2020). POSTnote 608: Online safety education.
- POST (2020). Online extremism.
- Facebook (2020). Labeling State-Controlled Media On Facebook.
- Bond, S. (2020). Facebook Begins Labeling ‘State-Controlled’ Media. NPR
- Hutchinson, A. (2020). Zuckerberg Says That Facebook Will Start Adding Labels to Rule-Breaking Content From Politicians. SocialMediaToday
- Twitter (2020). Updating our approach to misleading information
- Twitter (2020). New labels for government and state-affiliated media accounts.
- Hatmaker, T. (2020). Twitter runs a test prompting users to revise ‘harmful’ replies. Tech Crunch
- Constine, J. (2020). Facebook will pay Reuters to fact-check Deepfakes and more. Tech Crunch
- Twitter (2020). Updating our approach to misleading information
- Stop Hate for Profit [online]. Participating Businesses. Accessed 26/03/21
- Hern, A. (2019). Revealed: How TikTok censors videos that do not please Beijing. The Guardian
- Doffman, Z. (2020). Warning—Apple Suddenly Catches TikTok Secretly Spying On Millions Of iPhone Users. Forbes.
- Mandiant (2020). ‘Ghostwriter’ Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned with Russian Security Interests.
- United States Cybersecurity & Infrastructure Security Agency (2020). Alert (AA20-336A): Advanced Persistent Threat Actors Targeting U.S. Think Tanks
- Espinoza, J. and Peel, M. (2020). EU demands tech giants hand over data on virus disinformation. Financial Times
- Information Commissioner’s Office [online]. Children’s Code hub. Accessed 26/03/21.
- CMA (2016). UK Competition and Markets Authority response to the European Commission’s consultation on the regulatory environment for platforms, online intermediaries, data and cloud computing and the collaborative economy.
- The Internet Commission (2021). Accountability Report 1.0.
- Crete-Nishihata, M. et al. (2020). Censored Contagion II: A Timeline of Information Control on Chinese Social Media During COVID-19. The Citizen Lab
- Ruan, L. et al. (2017). We (can’t) Chat: “709 Crackdown” Discussions Blocked on Weibo and WeChat. The Citizen Lab
- Crete-Nishihata, M. et al. (2017). Tibetans blocked from Kalachakra at borders and on WeChat. The Citizen Lab
- Dalek, J. et al. (2015). Information Controls during Military Operations: The case of Yemen during the 2015 political and armed conflict. The Citizen Lab
- Bright, J. et al. (2020). Coronavirus Coverage by State-Backed English-Language News Sources (Data Memo 2020.2; COVID-19 Series). Project on Computational Propaganda