Artificial intelligence (AI) policy

1 Introduction

In the rapidly evolving field of healthcare, the integration of artificial intelligence (AI) technologies has the potential to revolutionise patient care, streamline administrative processes, and enhance overall operational efficiency. Rotherham Doncaster and South Humber NHS Foundation Trust (RDaSH) recognises the importance of adopting artificial intelligence technologies while ensuring their ethical and responsible use. This artificial intelligence governance policy serves to ensure the appropriate deployment, management, and use of any artificial intelligence technology across the trust.

2 Purpose

2.1 Innovation and emerging technologies

The trust is committed to fostering a culture of innovation across all levels of the organisation, encouraging staff to explore new ideas, challenge conventional practices, and embrace creative solutions that enhance patient care, operational efficiency, and workforce wellbeing. As part of this commitment, we actively support the responsible exploration and integration of emerging technologies, including artificial intelligence (AI). Artificial intelligence has the potential to transform healthcare delivery through predictive analytics, personalised care, and intelligent automation. The trust will continue to evaluate and pilot AI-driven initiatives in alignment with ethical standards, data protection principles, and clinical governance, ensuring that innovation remains safe, inclusive, and impactful for our communities.

The purpose of this policy is to encourage and promote the use of artificial intelligence while establishing clear guidelines for the appropriate procurement and use of any artificial intelligence technology. It is recognised that artificial intelligence systems, including machine learning algorithms and natural language processing, can contribute significantly to research, improving healthcare outcomes and resource allocation. While there are many potential positives with these tools, there are also risks which must be considered and managed to maintain patient safety and meet data protection requirements. It is important to ensure that artificial intelligence technologies are used in a manner that respects patients’ rights and maintains the trust and confidence of our patients, employees, and stakeholders.

This policy outlines key principles and procedures that must be adhered to when procuring and using artificial intelligence technologies within the trust. It addresses aspects such as data privacy, transparency, accountability, and appropriate use. By implementing these guidelines, we aim to foster a culture of responsible artificial intelligence use, where the benefits of artificial intelligence are harnessed while minimising potential risks.

It is important to note that this artificial intelligence policy is not exhaustive and may need to be adapted and updated periodically as technology advances, regulatory requirements evolve, and best practices in artificial intelligence governance emerge. The trust is committed to staying at the forefront of responsible artificial intelligence implementation to ensure the ethical and effective use of artificial intelligence technologies.

3 Scope

This policy applies to all those working for the trust in whatever capacity, including the trust’s employees, volunteers, students, temporary workers, contractors, suppliers and third parties. It applies to all departments and services that utilise artificial intelligence (AI) irrespective of their scale or scope.

4 Responsibilities, accountabilities and duties

Persons with lead responsibility for this policy are:

4.1 Chief executive

Accountable for having policies and procedures in place to support best practice, effective management, service delivery and the management of associated risks, and to meet national and local legislation and/or requirements in relation to, and including, the artificial intelligence policy.

4.2 Data protection officer (DPO)

  • Oversee and ensure compliance with data protection regulations and best practice associated with artificial intelligence (AI).
  • Provide guidance on data privacy.
  • Review and approve data protection impact assessments (DPIAs).
  • Serve as the point of contact for employees, data subjects and supervisory authorities regarding data protection concerns.

4.3 Caldicott guardian (CG)

  • Ensure data is processed in accordance with the Caldicott Principles.
  • Ensure confidential patient information is processed legally, ethically and appropriately.

4.4 Senior information risk owner (SIRO)

  • Take responsibility for the overall governance and management of information risks associated with artificial intelligence.
  • Ensure that appropriate risk management processes, controls, and policies are in place.
  • Collaborate with other stakeholders to address potential risks and mitigate any adverse impacts arising from artificial intelligence implementation.
  • Provide oversight and strategic direction.

4.5 Clinical safety officer

  • Assess the safety risks associated with artificial intelligence systems used in clinical settings.
  • Monitor and evaluate the performance and safety of all health information technology (IT) systems.

4.6 Information technology or technical and business intelligence

  • Assist in the implementation, integration, and maintenance of artificial intelligence systems.
  • Ensure the proper configuration, security, and compatibility of artificial intelligence systems with existing information technology infrastructure.
  • Collaborate with vendors and other stakeholders to address technical issues and provide technical support for artificial intelligence systems as required.

4.7 Finance and procurement teams

  • Finance and procurement teams have an obligation to follow the procurement process for systems and technologies, including the digital technology assessment criteria (DTAC).
  • A data protection impact assessment must be completed prior to implementation; this is a legal requirement for artificial intelligence.

4.8 Employees and users

  • Utilise artificial intelligence (AI) systems in accordance with guidelines and protocols.
  • Familiarise themselves with and adhere to the trust’s data protection, information governance and security policies, protocols and guidelines, including the appropriate use of person identifiable data and data security requirements.
  • Report any concerns or issues related to the artificial intelligence, data or data security to the Information Governance team, rdash.ig@nhs.net.

5 Procedure

Generative artificial intelligence can be used in many ways to enhance the work of the trust.

It is important that the purpose and use of artificial intelligence is clearly defined and understood. Where person identifiable data is expected to be used, a legal basis for the use of that data must be identified before any data is processed. Where possible, any data should be anonymised, so that a legal basis is not required. It is important, however, that data is carefully assessed to determine whether individuals can be identified from the contents of the information even if common identifiers such as name, address and phone number are removed. The combined details of a local area, a rare disease and a very young age may enable a patient to be identified. In such cases you would need to treat this as personal data and therefore identify a legal basis for the processing, along with meeting the requirements of the common law duty of confidentiality.

The above requirements also apply to data used to test artificial intelligence systems, even if there is no outcome or decision for an individual; this is because you are processing data by using it to train artificial intelligence models or algorithms.
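To make the identifiability point above concrete, the following is a minimal, hypothetical sketch of how a dataset might be screened for risky combinations of indirect identifiers before it is treated as anonymous. It is an illustration only, not a trust procedure: the use of Python with pandas, the column names and the threshold k are all assumptions for this example, and any real assessment should follow the trust’s information governance process.

```python
# Minimal, hypothetical sketch: removing direct identifiers (name, address,
# phone number) is not enough, because combinations of indirect details can
# still single a patient out. A simple k-anonymity check counts how many
# records share each combination of quasi-identifiers; any group smaller
# than k signals a re-identification risk.
import pandas as pd

# Illustrative records with direct identifiers already removed.
records = pd.DataFrame({
    "local_area": ["DN4", "DN4", "S63", "DN4"],
    "condition":  ["asthma", "rare disease", "asthma", "asthma"],
    "age_band":   ["0-4", "0-4", "30-34", "0-4"],
})

QUASI_IDENTIFIERS = ["local_area", "condition", "age_band"]  # assumed columns
K = 3  # assumed threshold; the acceptable value is a governance decision

group_sizes = records.groupby(QUASI_IDENTIFIERS).size()
risky = group_sizes[group_sizes < K]

if not risky.empty:
    # Combinations shared by fewer than K records: the data should be
    # treated as personal data and a legal basis identified before use,
    # including use for testing or training artificial intelligence.
    print("Potentially identifiable combinations:")
    print(risky)
```

In this sketch the single record combining a local area with a rare disease and a very young age band would be flagged, mirroring the scenario described above.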

In general, artificial intelligence can be used in healthcare in three ways:

  • artificial intelligence specifically for use in healthcare settings
  • artificial intelligence for population or health research
  • freely or commercially available generic artificial intelligence

How these should be used in health and care settings is outlined below.

5.1 Developing artificial intelligence products for healthcare

The NHS Artificial Intelligence and Digital Regulations Service is a regulation service for people who develop or plan to use artificial intelligence or a digital technology in health and social care. It brings together regulations, guidance and resources for digital healthcare technologies. The service comprises four partners:

  • National Institute for Health and Care Excellence (NICE)
  • Medicines and Healthcare products Regulatory Agency (MHRA)
  • Health Research Authority (HRA)
  • Care Quality Commission (CQC)

Generative artificial intelligence capabilities and their applications in healthcare:

  • Generating relevant insight from vast bodies of text: identifying relevant guidance for a patient.
  • Finding connections between disparate information: using unstructured text to identify rare conditions.
  • Creating new content in the style of something else: creating personalised guides to medical procedures.
  • Searching a large corpus and extending themes: creating executive summaries of board papers.
  • Creating high definition media, images, audio and video from text: creating a personalised avatar clinician to deliver non-urgent medical advice.
  • Creating “code” to execute an instruction: automating back-office activities, for example, freedom of information (FOI) complaints.
  • Creating synthetic data sets: training artificial intelligence algorithms and collaborating on research safely.
  • Searching records or text to find cohorts with features: analysing records to find clinical variation.
  • Providing a simple chat interface to complex systems: patients using chat to interact with records and advice.
  • Helping to elaborate on related concepts in ideation: identifying new opportunities to transform care pathways.

5.2 Using artificial intelligence for research

Health Research Authority (HRA) approval is required for research studies that take place in the NHS in England. The Health Research Authority Artificial Intelligence and Digital Regulations Service can provide guidance for NHS artificial intelligence adopters and digital health innovators.

Review by an NHS Research Ethics Committee (REC) is required, as well as an assessment of regulatory compliance and related matters undertaken by dedicated Health Research Authority staff. Where artificial intelligence is being used for research, approval from the Medicines and Healthcare products Regulatory Agency (MHRA) and from the trust Ethics Committee might also be required.

5.3 Use of publicly available artificial intelligence in clinical practice

Artificial intelligence (AI) may be considered for use as a supportive tool within clinical practice; however, it must not be used for:

  • making clinical decisions or determining diagnosis and treatment
  • processing identifiable patient data
  • replacing professional judgment

The clinician retains full responsibility for all aspects of diagnosis, treatment, and patient care.

For guidance and examples, see appendix A.

If trust approved artificial intelligence is used during patient consultations or for recording patient information (for example, Microsoft Teams and Ambient Scribe technologies):

  • patients must be fully informed and must fully understand its use
  • explicit consent must be obtained prior to any processing

5.4 Freely available artificial intelligence apps and technologies

Artificial intelligence is a feature of many applications currently used by colleagues, including apps within Microsoft Teams. It is important to use artificial intelligence appropriately to ensure that it does not compromise personal data or business sensitive information, or pose a risk to patient safety or our network integrity. The trust recommends a considered approach when using freely available artificial intelligence software such as ChatGPT. Although it can be used in the same way you might use other sources to kickstart a research project or better understand what people are saying about a topic, it should not be used as your primary source of information because it can produce inaccurate, biased or false information.

5.5 Use of ChatGPT (and other tools as per section 5.4) on trust devices

Staff can consider using ChatGPT via trust-issued devices to support tasks such as drafting documents, generating ideas, summarising information, or exploring non-clinical queries, provided usage aligns with trust policies on data protection, confidentiality, and acceptable use.

ChatGPT (and other tools as per section 5.4) must not be used to process patient-identifiable information or confidential or business sensitive data, or to make clinical decisions. Users are responsible for verifying outputs and ensuring that any content generated is appropriate, accurate, and compliant with relevant governance standards.

However, staff should note that trust approved applications such as Microsoft Teams Recording and Transcribing, Intelligent Recap (licensed per user) and Copilot Chat are available to use within our secure network and perform similar functions. Support is available from the trust IT service desk, Microsoft Support or NHS England digital workforce.

The UK’s National Cyber Security Centre (NCSC) states that you should not enter sensitive information (such as personal details or company intellectual property) into chatbots, and that you should not perform queries that could be problematic if made public (for example, sharing your secrets by asking ChatGPT to solve a personal dilemma).

If using publicly available artificial intelligence, follow these basic rules:

  • do not use person identifiable data
  • do not use business sensitive data (see definition below)
  • do not use publicly available artificial intelligence for generating entries into clinical records
  • be aware of any copyright and intellectual property considerations
  • be aware of ethical considerations when using these products, including the potential to produce biased, discriminatory, or harmful content
  • be aware that you will need to verify any output of these products to ensure accuracy
  • as per the information technology (IT) security policy, you must not install any software without explicit permission from information technology

Business sensitive data refers to any information that, if disclosed without proper authorisation, could negatively impact the trust’s operations, reputation, financial standing, or strategic interests. This includes, but is not limited to, unpublished financial reports, procurement details, strategic plans, internal communications, contractual agreements, and any data that supports decision-making or competitive positioning. Such data must be handled in accordance with organisational policies and relevant legal and regulatory frameworks to ensure confidentiality, integrity, and appropriate access controls.

When procuring and implementing artificial intelligence products or systems that include artificial intelligence features:

  • engage with the procurement process set out within the procurement policy
  • engage with Information Technology Technical and Information Governance teams
  • complete a data protection impact assessment (DPIA), where person identifiable data is to be used, or a new system is being procured, the service area and the supplier must engage with this process
  • if the artificial intelligence is associated with research, you must obtain approval from the Health Research Authority (HRA)
  • artificial intelligence outcomes or outputs must be reviewed by a human; you cannot rely solely on the use of artificial intelligence for decision-making, there must be substantial involvement from an appropriately qualified human
  • use of artificial intelligence must be transparent to employees and patients ensuring they understand where it is being used and how it may impact their employment, work or care; the logic behind it must be explainable
  • data must be processed in a lawful and ethical manner, with appropriate consent and anonymisation measures in place

6 Training implications

Within the trust induction programme, all employees commencing employment complete data security and awareness training.

It is a requirement of the data security and protection toolkit (DSPT) that this training is refreshed annually.

6.1 All colleagues: data security and awareness training

  • How often should this be undertaken: upon commencement of employment and annually thereafter.
  • Length of training: one and a half hours.
  • Delivery method: e-learning.
  • Training delivered by whom: NHS Digital e-learning package.
  • Where are the records of attendance held: electronic staff record (ESR).

7 Monitoring arrangements

7.1 Policy content

  • How: paper.
  • Who by: director of health informatics.
  • Reported to: Digital Transformation Group.
  • Frequency: annually or as required.

7.2 Finance, Digital and Estates Committee report

  • How: paper.
  • Who by: via chief executive report.
  • Reported to: board of directors.
  • Frequency: as per board work plan.

8 Equality impact assessment screening

To access the equality impact assessment for this policy, please email rdash.equalityanddiversity@nhs.net to request the document.

8.1 Privacy, dignity and respect

The NHS Constitution states that all patients should feel that their privacy and dignity are respected while they are in hospital. High Quality Care for All (2008), Lord Darzi’s review of the NHS, identifies the need to organise care around the individual, “not just clinically but in terms of dignity and respect”.

As a consequence, the trust is required to articulate its intent to deliver care with privacy and dignity that treats all service users with respect. Therefore, all procedural documents will be considered, if relevant, to reflect the requirement to treat everyone with privacy, dignity and respect, (when appropriate this should also include how same sex accommodation is provided).

8.1.1 How this will be met

No issues have been identified in relation to this policy.

8.2 Mental Capacity Act (2005)

Central to any aspect of care delivered to adults and young people aged 16 years or over will be the consideration of the individuals’ capacity to participate in the decision-making process. Consequently, no intervention should be carried out without either the individual’s informed consent, or the powers included in a legal framework, or by order of the court.

Therefore, the trust is required to make sure that all staff working with individuals who use our service are familiar with the provisions within the Mental Capacity Act (2005). For this reason, all procedural documents will be considered, if relevant, to reflect the provisions of the Mental Capacity Act (2005) to ensure that the rights of individuals are protected, that they are supported to make their own decisions where possible, and that any decisions made on their behalf when they lack capacity are made in their best interests and are the least restrictive of their rights and freedoms.

8.2.1 How this will be met

All individuals involved in the implementation of this policy should do so in accordance with the guiding principles of the Mental Capacity Act (2005).

11 Appendices

11.1 Appendix A use of generative artificial intelligence in clinical practice

Understanding the governance and clinical safety aspects of artificial intelligence (AI) can be complex, and guidance may sometimes feel unclear. While the General Medical Council (GMC) primarily regulates doctors and physician associates, this framework is relevant to all staff. It should be viewed as a set of principles to guide safe and responsible use of AI tools across roles and functions.

11.1.1 NHS England guidance on Copilot (relevant for all generative artificial intelligence models)

  • Personal or sensitive patient information may be entered into Copilot but must not be entered into other generative or publicly available artificial intelligence tools.
  • Appropriate uses include drafting non-clinical documents (for example, policies, reports), summarising publicly available information, and supporting administrative tasks.
  • Copilot must not be used for making clinical decisions or replacing professional judgment.
  • All outputs should be reviewed and validated before use to ensure accuracy and compliance.

For acceptable use of Copilot, refer to appendix B: Microsoft 365 Copilot acceptable use policy.

11.1.2 Key principles

  • Artificial intelligence can support clinical decision-making but must not replace professional judgment.
  • Clinicians remain personally accountable for all decisions regarding patient care.
  • General Medical Council guidance requires understanding the limitations of any technology used.

11.1.3 General Medical Council accountability requirements

  • Final responsibility for diagnosis, treatment, and patient care rests with the clinician.
  • Over-reliance on artificial intelligence without oversight may compromise patient safety and breach professional obligations.

11.1.4 Risks of over-reliance on artificial intelligence

  • Potential for incorrect or biased outputs from artificial intelligence systems.
  • Reduced clinician engagement and critical thinking.
  • Increased risk of regulatory non-compliance.

11.1.5 Practical guidance for clinicians

  • Use artificial intelligence as a supportive tool, not as a substitute for clinical judgment.
  • Validate artificial intelligence outputs against clinical evidence and patient context.
  • Document decisions and rationale clearly in patient records.
  • Report any safety concerns or adverse incidents involving artificial intelligence tools.
  • Microsoft 365 Copilot Chat and Microsoft 365 Copilot (web or licensed) are provided for use across NHS.net, following the same rules currently set for the platform regarding sensitive information, and can be used for all administrative and business support purposes, including clinical administration activities involving sensitive information. Copilot must not be used for any clinical decision-making or direct patient care, or any activity requiring clinical judgement.
  • Always label confidential and sensitive information as “Official Sensitive”: Microsoft 365 Copilot (web or licensed) can access all information you have permission to view within your files and conversations, except that which is labelled as “Official Sensitive”. The content of Copilot’s responses is based on this accessible information; it is therefore important to ensure that you only have access to data and files necessary for your role.

11.1.6 Further guidance

Refer to the General Medical Council’s official resource: Artificial Intelligence and Innovative Technologies.

Refer to the Microsoft 365 Copilot acceptable use policy.

Examples of appropriate and inappropriate use of Copilot:

  • Appropriate (okay): using Copilot chat to explore the most effective exercises for weight loss in patients prescribed olanzapine (ensuring that you check the sources of the information it retrieves and assess the quality of those sources). Inappropriate (not okay): using Copilot chat to create a care plan for John Smith, NHS number 123 456 7890, to help him lose weight.
  • Appropriate (okay): using Copilot chat to summarise National Institute for Health and Care Excellence (NICE) guidelines on diabetes management for clinician education. Inappropriate (not okay): generating a diagnosis for a specific patient.
  • Appropriate (okay): drafting a generic patient leaflet on healthy eating (validated before use). Inappropriate (not okay): writing discharge instructions for an identified patient.

11.2 Appendix B Microsoft 365 Copilot acceptable use policy

Refer to appendix B: Microsoft 365 Copilot acceptable use policy.


Document control

  • Version: 1.1.
  • Unique reference number: 1109.
  • Approved by: clinical leadership executive.
  • Date approved: 18 November 2025.
  • Name of originator or author: data protection officer and head of information governance.
  • Name of responsible individual: director of health informatics.
  • Date issued: 5 February 2026.
  • Review date: 31 August 2028.
  • Target audience: all employees.

