DOI: https://doi.org/10.58248/RR02

Overview

  • According to a recent Ofcom survey, 46% of respondents had encountered false or misleading information about COVID-19 within the first week of the ‘stay at home’ measures.
  • Most identified misinformation circulates on social media.
  • Misinformation can lead to public mistrust, endangerment of public health, as well as hate crime and exploitation.
  • Different approaches are being implemented to fight misinformation including content moderation, myth-busting, and a focus on education.
  • This is part of our rapid response content on COVID-19. You can view all our reporting on this topic under COVID-19.

The volume of inaccurate information circulating about the COVID-19 outbreak has been described as a global ‘infodemic’.

Widespread misinformation has included claims about the underlying causes of the virus (such as 5G radio waves), conspiracy theories about the actions of public bodies, and unverified treatments and preventative measures.

An Ofcom survey of over 2,000 people found that, within the first week of the ‘stay at home’ measures, 46% encountered false or misleading information. Within this group, 66% reported that they were seeing COVID-19 misinformation at least once a day and 55% said that they did nothing about it.

So where does this false information come from? What are the consequences? And what can be done to counter harmful misinformation?

Sources of COVID-19 misinformation

Misinformation may be passed on in many ways, including discussions with family and friends, through the media and online. A study by the Reuters Institute for the Study of Journalism (a research institute at the University of Oxford) recently analysed 225 items of COVID-19 misinformation and found that 88% appeared on social media. Social media content can be posted instantly, without verification or editorial judgement, which allows misinformation to be produced rapidly and disseminated widely.

The Reuters Institute found that 56% of misinformation around COVID-19 appears to have been based on true information that has been reconfigured. For example, the NHS recommends washing bed linen at 60 degrees Celsius to help prevent germs outside the body from spreading. This may have prompted false claims that people can protect themselves from COVID-19 by taking hot baths or using hairdryers.

Misinformation can arise from genuine misconceptions around terms and statistics. For example, the term ‘coronavirus’ refers to a family of viruses, some of which can cause the common cold. The term is not limited to the virus which causes COVID-19.

References to coronaviruses date back to the 1960s, which has fuelled conspiracy theories that the current pandemic was expected. A photo of a disinfectant bottle label claiming to ‘kill human coronavirus’ has been shared on Facebook over 2,500 times, leading users to speculate that the product’s manufacturers knew about COVID-19 before the public did. Misinformation may also be spread through parody or satirical content, which some people may interpret as fact.

In other instances, people may publish deliberately misleading information about COVID-19 (disinformation). This could be motivated by financial gain (for example, to sell products), or to promote political interests. People may also publish misleading ‘click-bait’ in order to gain widespread viral attention.

This is potentially further fuelled by the algorithms that underpin online platforms: when a user views online content, the hosting website displays adverts to generate revenue. Recommendation algorithms attempt to direct users towards content they are likely to engage with, in order to increase engagement and hence advertising revenue. Sensationalist content, which may contain misinformation, tends to attract more engagement and is therefore more likely to be recommended by these algorithms.
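
As a purely illustrative sketch, the Python snippet below shows how a ranking rule that optimises only for predicted engagement and advertising revenue could surface sensationalist posts ahead of accurate ones. The post data, scores and function names are hypothetical assumptions for illustration; they do not represent any platform’s actual algorithm.

    # Illustrative toy ranker: all data and weights below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        predicted_click_rate: float   # estimated probability that a user engages
        predicted_ad_revenue: float   # estimated revenue per view (arbitrary units)

    def engagement_score(post: Post) -> float:
        # Rank purely by expected revenue: engagement probability x revenue per view.
        # Nothing in this score checks whether the content is accurate.
        return post.predicted_click_rate * post.predicted_ad_revenue

    feed = [
        Post("Official public health guidance update", 0.02, 1.0),
        Post("SHOCKING 'miracle cure' doctors won't tell you about", 0.09, 1.0),
    ]

    # Sensationalist content with a higher predicted click rate is ranked first,
    # even though it may contain misinformation.
    for post in sorted(feed, key=engagement_score, reverse=True):
        print(f"{engagement_score(post):.3f}  {post.title}")

Because the score rewards predicted engagement alone, nothing in it penalises inaccuracy, which is the dynamic commentators point to when they argue that sensationalist misinformation can be amplified.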

Harmful consequences of misinformation 

The spread of COVID-19 misinformation can be harmful in many ways. It can fuel mistrust in public authorities, enable criminal activity and lead to severe health consequences.

Public mistrust

The Reuters Institute’s analysis of COVID-19 misinformation found that 39% of false claims were about the actions of public authorities (such as governments, the World Health Organization or the United Nations), which was the single largest category of claims within the sample.

Such claims can decrease public trust in the authorities’ actions. For example, false reports of secret mass cremations provoked upset and gave the impression that the severity of the situation was being concealed from the public. This can have severe consequences for public cooperation with government guidelines: a recent study from King’s College London found that people who believe in COVID-19 conspiracy theories (such as 5G radiation being connected to symptoms) are more likely to disregard public health guidance on social distancing.

Health implications

COVID-19 misinformation, unsupported by medical evidence but sometimes masquerading as official health guidance, may contradict official health advice. As a result, individuals may endanger themselves and others. There is evidence that misinformation can lead people to take greater risks, such as sharing food with people who are ill or failing to wash their hands. In some cases, acting on misinformation can pose a risk to life. It has been reported that more than 300 people in Iran have died after drinking methanol (which is highly toxic), following false claims that it can be used to treat COVID-19.

Crime

A UN official has commented that misinformation attempting to blame particular organisations or individuals presents a ‘risk of stigma and fear’. Researchers suggest that misreporting around the origins of the virus has spurred an increase in xenophobic abuse against people of Asian descent since the outbreak of COVID-19.

Criminals have also exploited public uncertainty for fraudulent purposes. People have received phishing-scam texts claiming to be from the UK Government, notifying them that they must pay a £35 fine for breaching social distancing measures. Google has reported that scammers are sending 18 million hoax emails about COVID-19 every day. Action Fraud (the UK’s national reporting centre for fraud and cybercrime) recorded losses of over £1.6 million due to COVID-19 related fraud.

Box 1
In 2019, the Government put forward proposals to address disinformation in its Online Harms White Paper. The White Paper outlined proposals to establish a duty of care for internet companies that would make clear their responsibilities around online harms. So far, attempts to address these issues have largely been industry-led. However, stakeholders including the House of Commons Digital, Culture, Media and Sport Committee have called for further action from government and technology companies.

The Government has committed to work with social media companies to combat false information during the pandemic. In March 2020 it set up a Counter Disinformation Unit (part of the Department for Digital, Culture, Media and Sport), specifically to identify and respond to COVID-19 misinformation and scams. The Government has reported that the Unit is identifying and resolving up to 70 incidents per week.

Preventing and challenging COVID-19 misinformation

Digital platforms have attempted to address the spread of misinformation. Their approaches include moderating content, fact checking and ‘myth busting’ false claims about the virus, and providing education and guidance to help users recognise misinformation.

Content moderation by digital platforms

In March 2020, Facebook, Google, Twitter, YouTube, LinkedIn, Reddit and Microsoft released a joint statement announcing their collaboration in preventing online misinformation and fraud around coronavirus. Some of the approaches taken by these platforms and others include:

Content removal, deprioritising and labelling: In the UK, online platforms are not currently obliged to remove content containing misinformation (see Box 1). However, private companies may choose to remove or deprioritise content. Content can be removed or demoted by human moderators, or it can be detected and removed automatically.

Several of the major social media platforms are taking steps to remove or demote content containing COVID-19 misinformation. Facebook is removing all COVID-19 related content that could cause imminent physical harm to users. Misinformation that does not directly result in physical harm is referred to a fact checking system. If the content is rated as false, it is demoted so that it ranks lower in users’ news feeds, and in some cases it is tagged with a warning label (a simplified sketch of this kind of decision flow follows this list of approaches). In April 2020, YouTube announced it would remove conspiracy theory videos linking coronavirus to 5G.

Prioritising and promotion of official information: Some social media platforms and search engines have implemented measures to direct users to official information, including information from national governments and health services.

NHS England has collaborated with Twitter and other social media platforms to provide users with easy access to NHS guidance on the virus. When users search for COVID-19 or related terms on Twitter, a banner is displayed providing links to the NHS website and the Department of Health and Social Care Twitter account.

Similarly, when users carry out a Google search on COVID-19, an information panel appears linking to UK Government and NHS information.

Some platforms have collaborated with health authorities, including the World Health Organization (WHO), to set up dedicated COVID-19 information hubs. Facebook recently launched a COVID-19 Information Centre, available in several countries. The WHO has launched a chatbot on Facebook Messenger and WhatsApp to provide instant information on COVID-19.

Advertising bans: Some platforms, including Twitter and Google, have placed restrictions on hosting adverts that mention COVID-19, including adverts for products related to the virus (such as hand sanitisers, face masks and testing kits).

Messenger service restrictions: In April 2020, WhatsApp imposed restrictions on message forwarding as a way to prevent the spread of misinformation via the app. The new restrictions mean that messages that have already been forwarded multiple times can only be forwarded on to one chat at a time (rather than five).
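
To illustrate the removal and demotion approach described under ‘Content removal, deprioritising and labelling’ above, the Python sketch below sets out a simplified decision flow. The field names, categories and rules are assumptions made for illustration only; they do not represent any platform’s actual policies or code.

    # Illustrative sketch of a content moderation decision flow; the rules and
    # field names are assumptions for illustration, not any platform's real system.
    from enum import Enum, auto

    class Action(Enum):
        REMOVE = auto()            # taken down entirely
        DEMOTE_AND_LABEL = auto()  # ranked lower in feeds, shown with a warning label
        NO_ACTION = auto()

    def moderate(post: dict) -> Action:
        # 1. Content judged likely to cause imminent physical harm is removed outright.
        if post.get("imminent_physical_harm"):
            return Action.REMOVE
        # 2. Other flagged content is referred to fact checkers; if rated false,
        #    it is demoted in users' news feeds and may carry a warning label.
        if post.get("fact_check_rating") == "false":
            return Action.DEMOTE_AND_LABEL
        return Action.NO_ACTION

    examples = [
        {"id": 1, "imminent_physical_harm": True},
        {"id": 2, "fact_check_rating": "false"},
        {"id": 3, "fact_check_rating": "true"},
    ]
    for post in examples:
        print(post["id"], moderate(post).name)

In practice such decisions are far less clean-cut: both automated classifiers and human moderators make mistakes, which is one reason legitimate content is sometimes flagged incorrectly, as discussed below.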

One of the challenges of using automated tools to identify misinformation is the potential for social media algorithms to incorrectly flag legitimate information as misinformation. For example, in March 2020, a bug in Facebook’s software led to some news articles about COVID-19 being incorrectly labelled as spam.

It has also been suggested that, in some cases, labelling content as misinformation could be counterproductive. This is because it may draw additional attention to it and result in the misinformation being amplified. Some commentators have also raised concerns that removing conspiracy theory content may fuel further conspiracy theories by making users feel they are being censored.

Fact-checking and myth busting

The number of fact checking organisations globally has increased in recent years. The majority are independent or civil society organisations, although many are linked to established news institutions such as Channel 4 News or the BBC.

Some examples of UK-based fact checking organisations include:

  • Full Fact, an independent fact checking charity
  • BBC Reality Check
  • Channel 4 News FactCheck

Fact checking organisations are carrying out an increasing number of checks on information related to COVID-19, with many now directing resources to debunking false claims about the pandemic. One analysis estimated a 900% increase in English-language fact checks between January and March 2020. Some fact checking organisations have reported that tackling COVID-19 misinformation is placing a strain on staff capacity.

Content rated as false by fact checking organisations may be removed from the platform on which it is hosted, or have a warning label attached to make users aware that it might be false or misleading.

The Reuters Institute and the University of Oxford found that social media platforms varied in their response to posts that had been fact checked: on Twitter, 59% of posts rated as false remained on the site with no warning label, compared with 27% on YouTube and 24% on Facebook.

The International Fact-Checking Network aims to bring together fact-checking bodies worldwide. The organisation has created a database of COVID-19-related fact checks, which pools together debunked misinformation published across 70 countries. The WHO has added a ‘myth busters’ section to its online resources about the virus, and UNESCO is promoting the use of hashtags such as #thinkbeforeyouclick.

Education and guidance

Commentators suggest that educating people about misinformation and improving their ability to appraise information critically could reduce some of misinformation’s negative impacts. A number of organisations, including the UK Government, have produced guidance to help prevent the spread of misinformation about COVID-19.

The Centre for Countering Digital Hate (CCDH), a UK-based charity, recently produced guidance called ‘Don’t Spread The Virus’. It encourages social media users not to share or comment on false information they see online, even if they want to point out that it is wrong, so as to prevent the content from appearing in other users’ social media feeds.

Instead, users are encouraged to block people who are sharing misinformation and report the content to the platform. The guidance also suggests that users can help to ‘drown out’ misinformation by posting and sharing information and advice from official sources.

The UK Government has relaunched its ‘Don’t Feed The Beast’ campaign. This is a public information campaign first launched in 2019, which aims to empower users to question information they read online. The campaign includes a five-step checklist to help the public identify whether information may be misleading:

  1. Source: make sure information comes from a trusted source.
  2. Headline: always read beyond the headline.
  3. Analyse: check the facts.
  4. Retouched: does the image or video look as though it has been doctored?
  5. Error: look out for bad grammar and spelling.

You can find more content from POST on COVID-19 here.

You can find more content on COVID-19 from the Commons Library here.
