Artificial intelligence (AI) governance policy

1 Introduction

In the rapidly evolving field of healthcare, the integration of artificial intelligence (AI) technologies has the potential to revolutionise patient care, streamline administrative processes, and enhance overall operational efficiency. Rotherham Doncaster and South Humber NHS Foundation Trust (RDaSH) recognises the importance of adopting AI technologies while ensuring their ethical and responsible use. This AI policy serves as a guiding framework to ensure the appropriate deployment, management, and oversight of AI systems across the trust.

2 Purpose

The purpose of this policy is to establish clear guidelines for the development, implementation, and monitoring of artificial intelligence (AI) systems to protect personal data, uphold ethical standards, and mitigate potential risks. We recognise that AI systems, including machine learning algorithms and natural language processing, can contribute significantly to research, improving healthcare outcomes and resource allocation. It is imperative, however, to ensure that AI technologies are used in a manner that aligns with legal requirements, respects patients’ rights, and maintains the trust and confidence of our patients, employees, and stakeholders.

This policy outlines key principles and procedures that must be adhered to when utilising AI technologies within the trust. It addresses critical aspects such as data privacy, algorithm transparency, accountability, and ongoing monitoring of AI systems. By implementing these guidelines, we aim to foster a culture of responsible AI use, where the benefits of AI are harnessed while minimising potential risks.

It is important to note that this AI policy is not exhaustive and may need to be adapted and updated periodically as technology advances, regulatory requirements evolve, and best practices in AI governance emerge. Rotherham Doncaster and South Humber NHS Foundation Trust is committed to staying at the forefront of responsible AI implementation to ensure the ethical and effective use of AI technologies.

3 Scope

This policy applies to all those working for the trust in whatever capacity, including the trust’s employees, volunteers, students, temporary workers, contractors, suppliers and third parties. It applies to all departments and services that utilise artificial intelligence (AI) irrespective of their scale or scope. It applies to both internally developed artificial intelligence systems and those procured from external vendors.

Non-compliance with this policy will result in disciplinary action, which may include dismissal.

4 Responsibilities, accountabilities and duties

Persons with lead responsibility for this policy are:

4.1 Chief executive

Accountable for having policies and procedures in place to support best practice, effective management, service delivery and the management of associated risks, and to meet national and local legislation and requirements, including this artificial intelligence policy.

4.2 Data protection officer (DPO) and head of information governance

  • Oversee and ensure compliance with data protection regulations and best practice associated with artificial intelligence (AI).
  • Provide guidance on data privacy related to artificial intelligence systems.
  • Review and approve data protection impact assessments (DPIAs) for artificial intelligence projects.
  • Serve as the point of contact for employees, data subjects and supervisory authorities regarding data protection concerns related to artificial intelligence.

4.3 Caldicott guardian (CG)

  • Ensure data is processed in accordance with the Caldicott Principles.
  • Ensure confidential patient information is processed legally, ethically and appropriately.

4.4 Senior information risk owner (SIRO)

  • Take responsibility for the overall governance and management of information risks associated with artificial intelligence systems.
  • Ensure that appropriate risk management processes, controls, and policies are in place.
  • Collaborate with other stakeholders to address potential risks and mitigate any adverse impacts arising from artificial intelligence implementation.
  • Provide oversight and strategic direction to ensure the responsible use of artificial intelligence technologies.

4.5 Clinical safety officer

  • Assess the safety risks associated with artificial intelligence systems used in clinical settings.
  • Collaborate with relevant stakeholders to establish safety protocols and guidelines for artificial intelligence implementation.
  • Monitor and evaluate the performance and safety of artificial intelligence systems.
  • Investigate and address any incidents or concerns related to the clinical safety of artificial intelligence systems.

4.6 Information technology or technical and business intelligence

  • Assist in the implementation, integration, and maintenance of artificial intelligence systems.
  • Ensure the proper configuration, security, and compatibility of artificial intelligence systems with existing information technology infrastructure.
  • Collaborate with vendors and other stakeholders to address technical issues and provide technical support for artificial intelligence systems as required.

4.7 Finance and procurement teams

  • Finance and procurement teams have an obligation to make the senior information governance manager aware of any requests to implement artificial intelligence software.
  • Requests for artificial intelligence solutions will be assessed and authorised by the information governance and information technology teams.
  • A data protection impact assessment must be completed prior to implementation; this is a legal requirement for artificial intelligence.

4.8 Research and Innovation team

Assist researchers by signposting them to the correct Health Research Authority (HRA) and Medicines and Healthcare products Regulatory Agency (MHRA) guidance and advising them on how to apply for ethical and regulatory approvals.

4.9 Head of people and organisational development

Support the lead manager in any change management programmes relating to artificial intelligence including undertaking a people impact assessment, employee or trade union engagement or consultation.

4.10 Employees and authorised users

  • Utilise artificial intelligence systems in accordance with established guidelines and protocols.
  • Provide feedback and insights on the effectiveness, usability, and impact of artificial intelligence technologies.
  • Report any incidents or concerns related to artificial intelligence system performance or safety.
  • Familiarise themselves with and adhere to the trust’s Information Governance and Security policies, protocols and guidelines.
  • Report any concerns or issues related to the artificial intelligence systems to the Information Governance team at rdash.ig@nhs.net.

It is important to note that these roles and responsibilities may vary; collaboration and clear communication among these roles are essential for the successful and responsible use of artificial intelligence.

5 Procedure

Generative artificial intelligence can be used in many ways to enhance the work of the trust.

It is important that the purpose and use of artificial intelligence is clearly defined and agreed, including why artificial intelligence is being used and what value it will bring to the organisation. You must also determine whether a legal basis for the use of the data is required before any data is processed. Where possible, data should be anonymised, in which case a legal basis is not required.

It is important, however, that data and use cases are carefully assessed to determine whether individuals could be identified from the contents of the information even after common identifiers such as name, address and phone number are removed. The combined details of a local area, a rare disease and a very young age may, for example, be enough to identify a patient. In such cases you must treat the data as personal data and therefore identify a legal basis for the processing, as well as meeting the requirements of the common law duty of confidentiality.
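
The re-identification risk described above can be checked mechanically. The sketch below is a minimal illustration only, assuming a tabular extract loaded with the pandas library and using hypothetical column names; it counts how many records share each combination of quasi-identifiers and flags combinations shared by fewer than k records as potentially identifiable.

  import pandas as pd

  # Minimal k-anonymity style check: flag combinations of quasi-identifiers
  # shared by fewer than k records. Such combinations may allow individuals
  # to be re-identified even though direct identifiers (name, address,
  # phone number) have been removed. Column names here are hypothetical.
  def flag_identifiable_groups(df: pd.DataFrame, quasi_identifiers: list[str], k: int = 5) -> pd.DataFrame:
      """Return quasi-identifier combinations shared by fewer than k records."""
      group_sizes = df.groupby(quasi_identifiers).size().reset_index(name="count")
      return group_sizes[group_sizes["count"] < k]

  # Illustrative extract: one record combines a local area, a rare disease
  # and a very young age, mirroring the example in the text above.
  records = pd.DataFrame({
      "postcode_district": ["DN1", "DN1", "S63", "DN1"],
      "diagnosis": ["asthma", "rare condition X", "asthma", "asthma"],
      "age_band": ["30-39", "0-4", "30-39", "30-39"],
  })

  risky = flag_identifiable_groups(records, ["postcode_district", "diagnosis", "age_band"], k=2)
  print(risky)  # both singleton combinations are flagged as potentially identifiable

A real assessment would of course consider the wider context of the data release, not just group sizes, but small cohorts of this kind are the signal that data must be treated as personal data.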

The above requirements also apply to data used to test and develop artificial intelligence systems, even if there is no outcome or decision for an individual; using data to train artificial intelligence models or algorithms is still processing that data.

In general, artificial intelligence can be used in healthcare in three ways:

  • artificial intelligence specifically for use in healthcare settings
  • artificial intelligence for population or health research
  • freely or commercially available generic artificial intelligence

How these should be used in health and care settings is outlined below.

5.1 Developing artificial intelligence products for healthcare

The NHS Artificial Intelligence and Digital Regulations Service supports people who develop, or plan to use, artificial intelligence or digital technologies in health and social care. It brings together regulations, guidance and resources for digital healthcare technologies. The service comprises four partners:

  • National Institute for Health and Care Excellence (NICE)
  • Medicines and Healthcare products Regulatory Agency (MHRA)
  • Health Research Authority (HRA)
  • Care Quality Commission (CQC)

Examples of generative artificial intelligence capabilities, and their applications in healthcare, include:

  • generating relevant insight from vast bodies of text, for example identifying relevant guidance for a patient
  • finding connections between disparate information, for example using unstructured text to identify rare conditions
  • creating new content in the style of something else, for example creating personalised guides to medical procedures
  • searching a large corpus and extending themes, for example creating executive summaries of board papers
  • creating high definition media, images, audio and video from text, for example creating a personalised avatar clinician to deliver non-urgent medical advice
  • creating “code” to execute an instruction, for example automating back-office activities such as freedom of information (FOI) complaints
  • creating synthetic data sets, for example to train artificial intelligence algorithms and collaborate on research safely
  • searching records or text to find cohorts with particular features, for example analysing records to find clinical variation
  • providing a simple chat interface to complex systems, for example enabling patients to use chat to interact with records and advice
  • helping to elaborate on related concepts in ideation, for example identifying new opportunities to transform care pathways

5.2 Using artificial intelligence for research

Health Research Authority (HRA) approval is required for research studies that take place in the NHS in England. The Artificial Intelligence and Digital Regulations Service can provide guidance for NHS artificial intelligence adopters and digital health innovators.

Review by an NHS Research Ethics Committee (REC) is required, as well as an assessment of regulatory compliance and related matters undertaken by dedicated Health Research Authority staff. Where artificial intelligence is being used for research, approval from the Medicines and Healthcare products Regulatory Agency (MHRA) might also be required.

5.3 Freely available artificial intelligence apps and services

Artificial intelligence is a feature of many applications currently used by colleagues, including apps within Microsoft Teams. It is important to use artificial intelligence appropriately and responsibly to ensure that it does not compromise personal data or business sensitive information, violate policies, or pose a risk to patient safety or our network integrity. The trust recommends caution when using freely available artificial intelligence software such as ChatGPT. Although it can be used in the same way you might use other sources to kickstart a research project or to understand what people are saying about a topic, it should not be used as your primary source of information because it can produce inaccurate, biased or false information.

The UK’s National Cyber Security Centre (NCSC) advises that you should not enter sensitive information (such as personal details or company intellectual property) into chatbots, and that you should not submit queries that could be problematic if made public (for example, sharing your secrets by asking ChatGPT to solve a personal dilemma).
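
To illustrate the principle behind this advice, the sketch below shows one way a draft prompt could be screened before being pasted into a public chatbot. It is an illustrative example only, not a trust-approved tool: it detects only ten-digit sequences that pass the standard NHS number modulus 11 check, and a real screen would also need to handle names, addresses and other identifiers.

  import re

  # Illustrative pre-submission screen for prompts destined for public
  # chatbots. A sketch of the principle, not an approved or exhaustive
  # filter: it flags ten-digit sequences that pass the standard NHS
  # number modulus 11 check, and nothing else.
  def is_valid_nhs_number(digits: str) -> bool:
      """Apply the NHS number modulus 11 check to a 10-digit string."""
      weighted_sum = sum(int(d) * w for d, w in zip(digits[:9], range(10, 1, -1)))
      check = 11 - (weighted_sum % 11)
      if check == 11:
          check = 0
      return check != 10 and check == int(digits[9])

  def safe_to_submit(prompt: str) -> bool:
      """Return False if the prompt appears to contain an NHS number."""
      candidates = re.findall(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b", prompt)
      return not any(is_valid_nhs_number(re.sub(r"[ -]", "", c)) for c in candidates)

  print(safe_to_submit("Summarise current guidance on asthma reviews"))   # True
  print(safe_to_submit("Patient 943 476 5919 missed their appointment"))  # False, the ten-digit sequence passes the check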

If using publicly available artificial intelligence, you must observe the following basic rules:

  • no personal data should be used in these apps or services
  • no business sensitive data should be used in these apps or services
  • these apps must only be used for non-clinical purposes
  • you must inform the Information Governance team where you intend to use these services for routine working
  • you must be aware of any copyright and intellectual property considerations when using generative artificial intelligence
  • users should be aware of any potential ethical considerations when using these products, including the potential to propagate biased, discriminatory, or harmful content
  • be aware that you will need to verify any output of these products to ensure accuracy
  • artificial intelligence software used for work purposes should only be accessed via corporate devices
  • as per the information technology security policy, you must not install any software without explicit permission from information technology (IT); additionally, downloading commercial software is not permitted without a licence (in that case, refer to the guidance on procuring artificial intelligence products below)

When procuring and implementing artificial intelligence products, or systems that include artificial intelligence features, you must:

  • engage with the procurement process set out within the procurement policy
  • engage with information technology (IT) or technical and information governance teams
  • you are legally required to complete a data protection impact assessment (DPIA); the service area and the supplier must engage with this process
  • you must consider the risks, and the practical steps to reduce them, documented in the Information Commissioner’s Office artificial intelligence and data protection risk toolkit
  • if the artificial intelligence is associated with healthcare provision (such as image reading) a digital technology assessment criteria (DTAC) must be completed
  • as part of the data protection impact assessment and digital technology assessment criteria processes any associated biases or ethical concerns must be documented and addressed; potential societal impact and ethical implications of artificial intelligence deployments should be carefully assessed and mitigated
  • if the artificial intelligence is associated with research, you must obtain approval from the Health Research Authority (HRA)
  • the clinical safety officer must be consulted throughout procurement and implementation
  • you must adhere to the conditions set out in Article 22 of the UK General Data Protection Regulation in relation to automated individual decision-making, including profiling; individuals have the right not to be subject to automated decision-making, so artificial intelligence outcomes or outputs must be reviewed by a human. You cannot rely solely on artificial intelligence for decision-making; there must be substantial involvement from an appropriately qualified human (one possible review pattern is sketched after this list)
  • there must be an agreed process to flag any concerns regarding the output of any artificial intelligence products
  • if there are concerns that have led to an incident, this must be reported as per the incident management policy
  • incident response plans should be established to handle security incidents, including data breaches, unauthorised access, and system failures
  • use of artificial intelligence must be transparent to employees and patients, ensuring they understand where it is being used and how it may impact their employment, work or care; the logic behind it must be explainable
  • data must be collected and processed in a lawful and ethical manner, with appropriate consent and anonymisation measures in place
  • data access and sharing must be strictly controlled, and data must be stored securely throughout its lifecycle
  • you should conduct patient and public engagement activities that include determining if individuals support the use of data for your intended purpose, or if they have any concerns on how their data will be used
  • if the use of artificial intelligence involves service change, then prior to the implementation of any artificial intelligence programme, formal consultation must take place with employees and their trade union representatives in accordance with the change management policy and procedure
  • you must be assured that any product mitigates against bias and discrimination
  • artificial intelligence systems should be continuously monitored for suspicious activities, anomalies, and potential security breaches
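
To illustrate the human review requirement above, the sketch below shows one possible human-in-the-loop pattern. All names and structures are hypothetical: an artificial intelligence output is held as a suggestion until an appropriately qualified reviewer accepts or rejects it, and each decision is logged so that concerns can be flagged and investigated.

  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  # One possible human-in-the-loop pattern for Article 22 compliance:
  # an AI output is held as a suggestion until a qualified human reviewer
  # accepts or rejects it, and every decision is recorded for audit.
  # All names here are hypothetical; a real implementation would integrate
  # with the trust's clinical systems and incident reporting process.
  @dataclass
  class AISuggestion:
      patient_ref: str          # pseudonymised reference, never raw identifiers
      suggestion: str           # the AI system's proposed output
      model_version: str        # supports transparency and explainability
      reviewed: bool = False
      accepted: bool = False
      audit_log: list[str] = field(default_factory=list)

      def review(self, reviewer: str, accept: bool, rationale: str) -> None:
          """Record a human decision; no suggestion takes effect without one."""
          self.reviewed = True
          self.accepted = accept
          stamp = datetime.now(timezone.utc).isoformat()
          self.audit_log.append(f"{stamp} {reviewer}: {'accepted' if accept else 'rejected'} ({rationale})")

  def apply_suggestion(s: AISuggestion) -> None:
      """Act on a suggestion only after substantial human involvement."""
      if not s.reviewed:
          raise RuntimeError("Article 22: human review required before any action")
      if s.accepted:
          print(f"Applying reviewed suggestion for {s.patient_ref}")

  s = AISuggestion("P-0001", "Recall for annual review", "triage-model-0.3")
  s.review(reviewer="Dr Example", accept=True, rationale="consistent with clinical record")
  apply_suggestion(s)

The design point is that the system makes acting on an unreviewed output impossible by construction, rather than relying on users to remember to check.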

6 Training implications

Within the trust induction programme, all employees receive data security and awareness training on commencing employment.

It is a requirement of the data security and protection toolkit (DSPT) that this training is refreshed annually.

Employees involved in the implementation of artificial intelligence will require training. This will be addressed as and when required, with the level of training dependent on the level of involvement. Additionally, employees will be reminded of the governance implications of using artificial intelligence via employee briefings, newsletters, et cetera. Records of additional training undertaken should be documented in line with trust processes.

The information governance and data protection department offer training to meet identified needs: this can be requested by contacting a member of the team.

6.1 All colleagues: data security and awareness training

  • How often should this be undertaken: upon commencement of employment and annually thereafter.
  • Length of training: one and a half hours.
  • Delivery method: e-learning.
  • Training delivered by whom: NHS Digital e-learning package.
  • Where are the records of attendance held: electronic staff record (ESR).

7 Monitoring arrangements

7.1 Policy content

  • How: paper for debate.
  • Who by: data protection officer or head of information governance.
  • Reported to: information governance group.
  • Frequency: every three years.

7.2 Finance, Digital and Estates Committee report

  • How: paper.
  • Who by: data protection officer or head of information governance.
  • Reported to: Finance, Digital and Estates Committee.
  • Frequency: as per Finance, Digital and Estates Committee work plan.

8 Equality impact assessment screening

To access the equality impact assessment for this policy, please email rdash.equalityanddiversity@nhs.net to request the document.

8.1 Privacy, dignity and respect

The NHS Constitution states that all patients should feel that their privacy and dignity are respected while they are in hospital. High Quality Care for All (2008), Lord Darzi’s review of the NHS, identifies the need to organise care around the individual, “not just clinically but in terms of dignity and respect”.

As a consequence, the trust is required to articulate its intent to deliver care with privacy and dignity that treats all service users with respect. Therefore, all procedural documents will be considered, if relevant, to reflect the requirement to treat everyone with privacy, dignity and respect (when appropriate, this should also include how same sex accommodation is provided).

8.1.1 How this will be met

No issues have been identified in relation to this policy.

8.2 Mental Capacity Act (2005)

Central to any aspect of care delivered to adults and young people aged 16 years or over will be the consideration of the individual’s capacity to participate in the decision-making process. Consequently, no intervention should be carried out without either the individual’s informed consent, or the powers included in a legal framework, or by order of the court.

Therefore, the trust is required to make sure that all staff working with individuals who use our service are familiar with the provisions within the Mental Capacity Act (2005). For this reason, all procedural documents will be considered, if relevant, to reflect the provisions of the Mental Capacity Act (2005), to ensure that the rights of individuals are protected, that they are supported to make their own decisions where possible, and that any decisions made on their behalf when they lack capacity are made in their best interests and are the least restrictive of their rights and freedoms.

8.2.1 How this will be met

All individuals involved in the implementation of this policy should do so in accordance with the guiding principles of the Mental Capacity Act (2005).

9 Associated documents

  • Information governance policy and management framework
  • Information technology security policy
  • Data protection impact assessment procedure
  • Individuals’ rights policy
  • Information governance handbook
  • Incident management policy

Document control

  • Version: 1.
  • Unique reference number: 1109.
  • Approved by: digital transformation group.
  • Date approved: 12 August 2025.
  • Name of originator or author: data protection officer and head of information governance.
  • Name of responsible individual: director of health informatics.
  • Date issued: 20 August 2025.
  • Review date: 31 August 2028.
  • Target audience: all employees.
