Data Protection and AI

This page is subject to change and updates. Please revisit this page when seeking advice, and do not make local copies, which may become out of date.

Artificial Intelligence (AI) and Data Protection

Use of Artificial Intelligence (AI) tools may benefit the University by enabling new ways of working. We should always ask: ‘what are we using it for?’, ‘does it do the job well?’ and ‘what problem does it solve?’.

The University must also ensure it meets its legal obligations under data protection laws.

Restrictions on use of AI

Currently, due to risks of non-compliance with data protection laws, any information classified as restricted or highly restricted must not be submitted to generative AI tools (such as ChatGPT). Detail on what information falls into these classifications can be found in the University Classification Policy.

This includes the personal data of our staff, students, visitors, research participants and others.

This also includes sensitive non-personal information, such as commercially sensitive data.

What are the risks of AI when it comes to personal data?

Data protection laws require all organisations to comply with a set of principles. The most pertinent to uses of AI are summarised below, along with the issues they present.

Personal data must be:

  • processed lawfully, fairly and in a transparent manner.

People must be made aware of how their data is being used, for what purposes, and to whom it is being disclosed. It can be difficult (and in some cases, impossible) to know or communicate how personal data will be processed within AI tools and who may have access to it. A disclosure of information to a third party will likely be made when submitting personal data to the providers of these tools, even if the provider advises it will be deleted after the query.

Data protection laws also give individuals rights, including the right to know who is controlling their data, the right to access it, and, in some cases, the right to seek erasure and to object to its use. Where personal data is processed within AI tools, these rights can become very difficult to exercise, particularly if a person has no way of knowing where or how their personal data is being used. This is of particular concern where an individual is subject to bias or unfair outcomes as a result of the use of their data in AI tools and has no means of challenging these outcomes or exercising their rights.

  • collected for specified, explicit and legitimate purposes and not used for incompatible purposes.

Those who trust the University with their personal data expect it to be used for the purposes for which it was provided or collected, and in many cases we are required to limit use to those purposes unless limited exemptions apply. Use of AI tools may involve a different purpose, and in many cases the information submitted will be used to further train the AI, potentially without the individual's knowledge or agreement. What further purposes the data may be used for beyond our own may not be known, even to the user of the tool.

  • accurate and kept up to date.

AI can pose considerable issues when it comes to the accuracy of data. Where this creates a risk that information concerning individuals is inaccurate, this principle may not be met. AI also carries risks of machine-learned bias that could have detrimental effects on individuals. This is a particular concern where AI is used to generate information that could inform decisions about individuals, or to provide them with advice where accuracy is important.

Accountability is also a key requirement of data protection laws. Senior responsible owners, accountable for ensuring the security and governance of its use, should be in place for any use of AI.

Limited and qualified exemptions from some principles exist within the law, such as those connected to research in the public interest.

How can these risks be addressed? What if I want to use AI?

The following must be considered:

  • Does the use of AI have a clear purpose? Do you need to use it? What do you expect it to achieve? Can the objectives be met without it?
  • Are you aware of the restrictions on use for sensitive and personal data? How will you ensure these are met?
  • How will you address risks of inaccuracy? Is it important that the outputs are accurate? What impacts could inaccurate data have?
  • Could AI outputs lead to detrimental effects on individuals? How will you monitor, prevent and address this?
  • How will you be alert to unexpected outcomes?
  • Are there ethical considerations? Does it matter whether the AI tool you wish to use was trained on data sourced ethically and in compliance with data protection laws?
  • How will you ensure you are alert to inherent bias? How will you identify and address it? 
  • How will you know where the data you input will end up? Will it be exposed to others? Does it matter?

What else do I need to know and what actions should I take?

If you are considering using AI for personal and sensitive data, you must seek prior advice from IMPS or DTS.

There must be a senior accountable owner for AI projects the University undertakes.

You may be required to undertake a Data Protection Impact Assessment. These can take time, so ensure you factor this into the time and resources you need.

Risks of non-compliance may be referred to a senior risk owner, who must be identified and accountable for the data and those risks.

If using a generative AI account for personal use not connected to your role at the University, do not use your University email as your username.

If using a generative AI account for work connected to your role, do not use your University password. Choose strong, unique passwords.

Where available, use the tool's settings to disable or opt out of the data you input being used for training or machine learning. Such settings are sometimes worded as 'help us make this better for others' or similar. Understand how the tool works and read the user guidance.

University training on the use of generative AI tools should not be taken as authorisation to use these tools with any data that is restricted.

Be aware that some of these tools have terms and conditions that restrict what they can be used for. In many cases, these terms will also require that you have the University's authorisation to accept them (that is, to bind the organisation to a legal contract). Read the terms and conditions and the small print so you fully understand what you are agreeing to.

Should the University make enterprise or organisationally licensed AI tools available, the expectation is that these will be used rather than individual accounts.

Where can I go for advice?

For questions relating to personal data and AI you can contact imps@reading.ac.uk or call us on 0118 378 8981.

DTS can be contacted at dts@reading.ac.uk or by calling 0118 378 6262.

More information is available from CQSD.

Additional Resources

The UK regulatory authority for the protection of personal data (the ICO) has guidance available.

More information on Data Protection Impact Assessments can be found at: Data Protection by Design.

The Government have produced guidance for the Civil Service, which may also be helpful.

Whilst the UK currently has no specific AI laws (or clear plans for such), there are plans for legislation within the European Union. Although the UK is no longer a member of the EU, the territorial reach of any EU Act passed may extend to the UK where the data of EU residents is involved. See more information on the progress of the EU Artificial Intelligence Act.