Documents to download
AI and healthcare (471 KB, PDF)
There is increasing interest in the use of AI in healthcare among academics, industry, healthcare professionals, and policymakers. AI has the potential to improve health outcomes and offer cost savings through reducing the time spent by staff on routine work. While some AI systems are commercially available, few are currently used widely in the NHS. Most AI products for healthcare are still at the research or development stage, with some being trialled or evaluated in NHS settings.
In the 2017 Industrial Strategy the UK Government stated its aim to use data and AI to “transform the prevention, early diagnosis and treatment of chronic diseases by 2030.” In 2018, it invested £50m in five new centres of excellence for using AI to improve diagnostic imaging and pathology, with a further £50m allocated as part of its long-term response to the COVID-19 pandemic.
Improved use of AI and digital healthcare technologies is identified as a priority in the 2019 NHS Long Term Plan. In 2019, the Government established NHSX, a new unit responsible for setting policy and best practice around the use of digital technologies in England. This included the creation of an AI Lab with £250m of funding to support the development and deployment of AI technologies in the NHS and care system.
- The capabilities of AI systems have improved in recent years due to increasing computing power, greater availability of training data, and development of more sophisticated algorithms using techniques like deep learning.
- Automation of administrative and clinical tasks using AI could reduce the costs of healthcare and increase productivity. AI systems have the potential to make diagnoses more accurately and quickly than clinicians. This could allow patients to access earlier treatment, improving health outcomes and reducing treatment costs.
- Despite these potential benefits, some stakeholders have raised concerns that the use of AI risks dehumanising the healthcare system. In addition, real-world operating conditions may differ from those expected in development, leading an AI system to perform worse than expected, or to give dangerous recommendations.
- Few studies have examined the performance of AI systems in real-world clinical settings.
- Healthcare staff may need new skills and technical knowledge to operate and understand AI systems. New, more specialised roles may be created.
- Large, high-quality datasets are needed to develop AI systems. Developers often use patient data, such as medical images, gathered by healthcare providers. Surveys suggest a lack of awareness among the public of how patient data is used, and scepticism towards sharing it.
- The need to share large datasets with external developers during AI development may increase the risks of a data breach. There are additional cyber-security risks which are specific to AI systems.
- Various laws and principles govern the use of patient data, including the EU General Data Protection Regulation, the common law duty of confidentiality, and the Caldicott Principles.
- The quality and organisation of data varies widely between different NHS services, with some parts of secondary care still using paper records. Many IT systems used in the NHS are unable to communicate with other systems, making it difficult to gather data in a consistent way.
- Currently, most AI systems provide recommendations to clinicians, who balance these against their knowledge and experience. If a recommendation produced by an AI system led to a patient being harmed, there could be legal consequences for the clinician, healthcare provider, and AI developer. There is a lack of precedent for how such a case would be resolved.
- AI systems could provide more consistent recommendations of treatments or diagnoses, reducing health inequalities. However, there is a risk of AI systems exhibiting ‘algorithmic bias’, providing recommendations that discriminate against certain demographic groups.
- Various organisations oversee regulations and standards in the development, implementation, and use of AI systems. Some stakeholders view existing regulatory processes as difficult to navigate, and attempts are being made to streamline these processes.
- Future UK regulations relevant to AI systems used in healthcare will be developed under provisions of the Medicines and Medical Devices Bill 2019-21.
POSTnotes are based on literature reviews and interviews with a range of stakeholders and are externally peer reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of this briefing, including:
- Max Prangnell, Academy of Medical Royal Colleges*
- Reema Patel, Ada Lovelace Institute*
- Aidan Peppin, Ada Lovelace Institute*
- Dr Keith Grimes, Babylon Health*
- Dr Ben Panter, Blackford Analysis
- Dr George Harston, Brainomix & Oxford University Hospitals NHSFT*
- Riaz Rahman, Brainomix
- Alexander Ottley, British Medical Association (BMA)*
- David Parkin, British Medical Association (BMA)*
- Rob Turpin, British Standards Institution (BSI)*
- Michael Birtwistle, Centre for Data Ethics and Innovation (CDEI)
- Dr Stewart Whiting, Current Health*
- Andrew Fearn, EMRAD East Midlands Imaging Network*
- Simon Harris, EMRAD East Midlands Imaging Network*
- Jacqueline Moxon, EMRAD East Midlands Imaging Network*
- Penny Storr, EMRAD East Midlands Imaging Network*
- Health Research Authority (HRA)*
- Information Commissioner’s Office (ICO)*
- Prof Bissan Al-Lazikani, Institute for Cancer Research (ICR)
- Phil Booth, medConfidential*
- Sam Smith, medConfidential*
- Software Medical Devices Team, Medicines and Healthcare products Regulatory Agency (MHRA)*
- Dr Junaid Bajwa, Microsoft*
- Dr Indra Joshi, NHS AI Lab*
- Dr Emma Pencheon, NHS AI Lab*
- Jem Rashbass, NHS Digital*
- David Evans, Data and Information Governance Policy Team, NHSX*
- Jeanette Kusel, National Institute for Health and Care Excellence (NICE)*
- Kaysar Miah, Office for Artificial Intelligence*
- Alison Hall, PHG Foundation*
- Colin Mitchell, PHG Foundation*
- Johan Ordish, formerly PHG Foundation
- Members of the POST Board*
- Eleonora Harwich, Reform*
- Ross Scrivener, Royal College of Nursing (RCN)*
- Dr Jim Weatherall, Royal Statistical Society & AstraZeneca
- Dr Caroline Jones, Swansea University*
- Dr Natalie Banner, Understanding Patient Data*
- Dr Vasileios Lampos, University College London (UCL)
- Prof Laurence Lovat, University College London (UCL) & Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS)
- Prof Dario Landa-Silva, University of Nottingham
- Prof Aiden Doherty, University of Oxford*
- Prof Jens Rittscher, University of Oxford*
- Prof Jeremy C. Wyatt, University of Southampton*
- Dr Bilal A. Mateen, Wellcome Trust & Alan Turing Institute*
*denotes people and organisations who acted as external reviewers of the briefing.