Stuart Gentle, Publisher at Onrec

Expert’s Guide To Updating Your Privacy Notices for AI

As AI becomes embedded in core business processes, the way organisations collect, analyse, and act on personal data is changing.

Automated decision-making, profiling, and model training are now part of everyday operations. In many cases, existing Privacy Notices were drafted before these technologies were introduced, meaning they no longer fully reflect how personal data is being used in practice.

At the same time, organisations are reassessing whether their current privacy oversight model is equipped to manage AI-driven complexity. For some, this means strengthening in-house expertise. For others, it involves supplementing internal capability through independent support such as DPO as a Service, ensuring that transparency obligations keep pace with innovation.

Updating a Privacy Notice is not a routine compliance exercise. It is a necessary step toward broader AI governance maturity.

With insight from industry experts, this article explores the key considerations for organisations reviewing their Privacy Notice to ensure it accurately reflects AI-driven personal data processing.

1. Where and why you use AI

A central expectation of AI-related transparency is clarity around where and why AI is used. Users should be able to understand when AI systems are involved in processing personal data or generating information about identifiable people, particularly where this influences decisions or outcomes that affect them.

Vague references to “automated tools” are unlikely to be sufficient. Instead, organisations are increasingly expected to explain:

  • What AI systems are used, including third-party or embedded tools 
  • Why AI is used, such as decision support, monitoring, or content generation
  • How AI is integrated into processes like recruitment or customer support 
  • Where individuals may interact with AI systems or receive AI-generated outputs 
  • How those outputs are used in practice, including whether decisions are taken automatically or subject to human review

In some cases, meaningful human review is not only good practice but a legal requirement, particularly where AI supports or enables automated decision-making with significant effects. For instance, if AI is used to assess a loan or finance application and the outcome is a refusal, individuals should be able to request a human review of that decision. 

2. What personal data AI uses and generates 

AI systems often analyse data in ways that differ from traditional processing and may generate new information about individuals. Transparency therefore requires more than stating that personal data is processed by AI systems; it also involves explaining how that data is used.

This includes defining:

  • What categories of personal data are used by AI systems 
  • What data is used for training AI systems 
  • Whether data is used in real-time decision-making or analysed to identify patterns, trends, or predictions 
  • Whether special category data is involved directly or indirectly, including through inference 

Where AI systems generate new data about individuals (scores, predictions, risk profiles, behavioural inferences, and so on), this should be made clear. These outputs can still constitute personal data and may have significant impacts on individuals. 

If an AI system introduces new methods of processing existing data, or changes the scale or nature of that processing, existing Privacy Notices must be reviewed and updated to reflect how personal data is actually used.  

3. How personal data is shared for AI processing 

AI-related data sharing should be clearly distinguished from generic third-party disclosures. This helps users understand when their data moves outside an organisation’s direct control and into model-driven environments. 

A Privacy Notice should answer the following questions: 

  • Is personal data shared with external AI vendors or cloud-based AI platforms?

If this is the case, you’ll need to explain the type of provider involved, why personal data is shared, and whether the provider acts only on the organisation’s behalf.

  • Are foundation models or large language models (LLMs) involved?

If they are, it’s important to state whether personal data is submitted to a general-purpose model, whether it is hosted internally or by a third party, and whether data is used solely to generate outputs or also to improve the model.

  • Does personal data leave the organisation’s environment?

If it does, you must also explain whether data is processed or stored outside the UK or EU, the regions involved, and any safeguards in place.

4. How personal data is used to train or improve AI

The use of personal data for AI training is one of the most sensitive and frequently misunderstood areas of AI transparency. People often aren’t aware that their data may be used to improve or refine AI systems beyond the immediate service they receive. 

Because of this, a Privacy Notice should clearly explain where personal data is used to train or improve AI systems. This includes outlining whether training data is anonymised, pseudonymised or identifiable, and whether training takes place as a one-off activity or on an ongoing basis. 

5. AI-supported and automated decision-making

Where AI systems make decisions that significantly affect individuals, specific transparency and rights obligations apply under GDPR Article 22. 

In these cases, Privacy Notices should explain whether decisions are fully automated or supported by AI, whether they are likely to have a meaningful impact on individuals, and what level of human oversight or review is in place.

Where Article 22 applies, individuals must be informed how they can: 

  • Object to AI-driven processing 
  • Request human review 
  • Express their views and contest decisions 
  • Opt out of certain forms of automated decision-making, where applicable

6. How data protection user rights apply when AI is used

AI does not remove data protection rights, but it can change how those rights work in practice. Unlike traditional systems, AI can generate inferences or probabilistic outputs that are not always fixed, directly editable, or easily isolated from wider models.

For organisations, this creates a challenge in being transparent about practical limits without undermining individuals’ rights. For individuals, it can be difficult to understand what exercising a right will realistically achieve.

A well-written Privacy Notice should explain how rights apply in an AI context, including: 

  • What access means where AI is involved 
  • The limits of rectification where data is inferred or probabilistic 
  • What erasure means once data has influenced a model 
  • How objections to AI-driven processing are handled 

Being open about these realities can help manage expectations, reduce complaints, and support more constructive engagement when individuals exercise their rights.

7. AI risks, limitations, and safeguards

At this stage of development, AI systems can still produce inaccurate results, reflect bias in training data, or change over time in ways that are not immediately obvious. Being transparent about these limits helps individuals understand how AI affects them, and helps organisations demonstrate responsible use. 

Whilst not always legally required, it’s becoming a more frequent expectation that a Privacy Notice will: 

  • Acknowledge that AI outputs may be inaccurate or biased 
  • Explain the steps taken to monitor accuracy, fairness, and potential misuse 
  • Clarify how AI systems are reviewed and updated as they evolve 
  • Explain how to contest an AI-based decision 

8. Transparency requirements under the EU AI Act

For organisations directly in scope of the EU AI Act, there are additional, legally binding transparency obligations that sit alongside the broader principles outlined above.  

For AI systems that directly interact with individuals, organisations must clearly inform individuals when they are interacting with an AI system, rather than a human, so the nature of the interaction is not misleading or obscured. Equally, any AI-generated or manipulated content should be clearly identified as artificial in a way that is both understandable to people and machine-readable. This includes any synthetic text, audio, or video content.
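The Act does not prescribe a single labelling format, but the idea of machine-readable marking can be illustrated with a minimal sketch. The field names below are illustrative only, not drawn from the Act or any published standard; real deployments would follow an agreed content-provenance scheme rather than ad-hoc keys:

```python
import json
from datetime import datetime, timezone

def label_ai_content(text: str, system_name: str) -> dict:
    """Wrap AI-generated text in a simple, machine-readable provenance record.

    Illustrative sketch only: the keys here are hypothetical, chosen to show
    the principle of marking content as artificial in a form both people and
    software can check.
    """
    return {
        "content": text,
        "ai_generated": True,          # explicit machine-readable flag
        "generator": system_name,      # which AI system produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_ai_content("Thanks for your application!", "support-assistant")
print(json.dumps(record, indent=2))
```

A record like this allows downstream systems (and auditors) to detect AI-generated content automatically, while the human-facing notice explains the same fact in plain language.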

As you might expect, high-risk AI systems require a greater level of care, and organisations making use of these should consider seeking specialist advice to ensure full alignment with the AI Act. 

Making AI information easy to understand

AI systems can be difficult to understand, particularly when there’s a wealth of technical terminology involved. A well-written Privacy Notice should make it easy to understand when AI is used, what it does, and how it may influence outcomes. 

In practice, this means using plain language to explain AI use in clear, straightforward terms. Avoid technical jargon, internal system names, or model labels that have little meaning outside the organisation. Where necessary, use simple, relevant examples to illustrate how AI processes personal data or supports decisions, helping readers understand what this means in real-world situations.

Structure is important too, and the Privacy Notice should be laid out in such a way that AI-related disclosures are clearly signposted and not buried within generic processing descriptions. 

Keep your Privacy Notice updated

AI systems are evolving fast, and models are being retrained, updated and repurposed at pace. New use cases are being discovered as organisations scale and adopt new tools, and when Privacy Notices are not kept up to date with these changes, transparency information may quickly fall out of sync with what’s actually happening. Regularly reviewing AI-related disclosures helps ensure Privacy Notices accurately reflect how AI is used in practice, particularly where changes affect individuals’ rights or expectations. If you haven’t already scheduled a review of your documentation, now may be as good a time as any.