What is AI capable of?

  • interpreting, processing and generating realistic human-like speech and text
  • interpreting, processing and generating images, videos and other visuals
  • independently performing tasks in the physical world, for example when paired with machinery such as robots

How can AI be used?

AI technologies can be found in a wide range of everyday applications, including virtual assistants, search engines, navigation software, online banking and financial services, and facial recognition systems.

As a result, they can be applied in a wide range of sectors, such as healthcare, finance, education and commerce, and can assist with tasks such as decision-making and improving productivity.

How does AI work?

Many AI technologies are underpinned by ‘machine learning,’ which works by finding patterns in existing data (known as ‘training data’) and using these patterns to inform the processing of new data to make predictions or generate other outputs.
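The pattern-finding idea described above can be illustrated with a deliberately simple sketch: a model "learns" a straight-line trend from a handful of training examples, then applies that trend to a new input. The data and variable names are illustrative, not drawn from the briefing; real machine learning systems use far larger datasets and more complex models.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# 'Training data': hours of study vs. exam score (made-up numbers).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

# The 'learning' step: find the pattern in the existing data.
slope, intercept = fit_line(hours, scores)

# The learned pattern is then used to process new data:
# predict the score for a student who studied for 6 hours.
prediction = slope * 6 + intercept
print(round(prediction, 1))  # 73.9
```

The same two-phase structure (fit a model to training data, then use it on new inputs) underlies far more capable systems; the difference lies in the scale of the data and the flexibility of the model.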

Some AI technologies, known as generative AI, can generate realistic outputs, such as text, audio, code, pictures, videos and music. Many AI technologies are designed to perform a specific task and cannot be adapted to other tasks.

Foundation Models are a type of machine learning model that can increasingly be adapted to a wide range of tasks, including generating realistic outputs.

Large Language Models (such as ChatGPT) are Foundation Models that carry out a range of language-related tasks, such as processing and generating text.

Factors driving advances in AI

  • greater availability and volume of training data
  • increased computing power
  • increased investment in computing
  • new uses of algorithms

Concerns about AI include

  • who has access to the biggest Large Language Models
  • the source, management and sharing of data that is used to train AI models, and the related privacy, security and discrimination implications
  • impacts on the environment of training and running AI models
  • challenges around the supply of AI hardware
  • the ability of AI models to generate false information, which could lead to disinformation, biased decisions or discriminatory outcomes
  • a lack of understanding about how large AI models make recommendations or decisions
  • implications of AI for the economy and a lack of specialised AI skills to meet the growing demand in the UK workforce
  • employment conditions for outsourced workers involved in developing large AI models

Perceptions of AI

In the past few years, a range of studies has been conducted by academia, industry, NGOs and the public sector to gauge public understanding of AI.

Experts have varying views on whether, how and when future forms of AI are achievable, and what form they will take.


POSTbriefs are based on literature reviews and interviews with a range of stakeholders and are externally peer reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of this briefing, including:

Members of the POST Board* 

Dr David Busse, Government Office for Science* 

Matt Davies, Ada Lovelace Institute* 

Dr Yali Du, King's College London* 

Dr Gordon Fletcher, University of Salford 

Dr Matthew Forshaw, Newcastle University and The Alan Turing Institute* 

Professor Oliver Hauser, University of Exeter*

Elliot Jones, Ada Lovelace Institute*  

Dr Clara Martins-Pereira, Durham University* 

Dr Shweta Singh, University of Warwick and The Alan Turing Institute* 

Adam Leon Smith, British Computer Society, Chair of the Fellows Technical Advisory Group* 

Professor Michael Wooldridge, University of Oxford and The Alan Turing Institute 

*Denotes people and organisations who acted as external reviewers of the briefing.
