Stuart Gentle, Publisher at Onrec

Ensuring fairness and transparency in AI-based recruitment

Taking a responsible approach to AI in recruitment

AI is having a huge impact on recruitment, giving employers powerful tools to support them with hiring, managing applications and reducing administrative burden. However, while AI can improve efficiency, it also raises significant legal, ethical and practical concerns. Without proper safeguards, the use of AI can undermine fairness and entrench bias, exposing employers to legal risk. Here Fiona Morgan, head of employment at Arbor Law, explains why ensuring transparency and maintaining human oversight are therefore essential.

One of the biggest risks associated with AI-based recruitment is algorithmic bias. AI systems are only as objective as the data they’re trained on. Where algorithms are designed to identify successful candidates based on historical hiring data, they may replicate existing inequalities. For example, if an organisation’s past workforce is dominated by white males, an AI system may learn to favour those characteristics, systematically disadvantaging candidates of another gender or from different backgrounds. This can result in unlawful discrimination, and an employer’s ignorance of algorithmic bias will not necessarily relieve it of liability for any discrimination that results.

Using AI to assess video interviews presents particular concerns. Some tools claim to analyse facial expressions, tone of voice or body language to predict how suitable a candidate is for the job. There are questions about the scientific reliability of the technology, but on top of that, these systems risk discriminating against candidates who do not conform to what the AI system considers “typical” behavioural norms. This includes neurodivergent individuals or people from different cultural backgrounds.

In the UK and across Europe, the use of AI in recruitment must also be considered through the lens of data protection law. Feeding CVs, application forms or video recordings into AI systems constitutes the processing of personal data and, in some cases, special category data (sensitive personal data such as racial or ethnic origin, political or religious beliefs, health, sexual orientation, etc). Employers remain responsible for compliance with UK GDPR, even when third-party AI providers are used. Transparency is paramount here. Candidates must be informed that AI is being used, what data is collected, how it will be processed and for what purpose.

UK GDPR limits the circumstances in which employers can use automated decision making. Individuals also have the right to challenge decisions based solely on automated processing. If AI is used as a pre-screening tool that automatically rejects candidates without meaningful human involvement, employers must ensure that applicants have the opportunity to seek human review of those decisions. Employers are also required under UK GDPR to carry out a Data Protection Impact Assessment before introducing any AI-based recruitment system to identify and mitigate potential risks before they become a problem.

It's also important to carry out due diligence on AI providers. Employers should ask suppliers how they test for and mitigate bias, what safeguards are in place to protect personal data and whether they can provide evidence of compliance with equality and data protection laws. These obligations should be reflected in the contract between the business and the AI provider, with responsibilities placed on providers to cooperate with audits, information requests and regulatory investigations.

We are also seeing generative AI being used informally during the recruitment process. Recruiters may be tempted to search for candidates using tools such as ChatGPT to gain additional background information. This practice is extremely risky. Generative AI can produce inaccurate or fabricated information, and relying on such material could lead to unlawful decisions, particularly if it reveals or invents information about protected characteristics, trade union activity or political views, for example.

The most effective safeguard against these risks is consistent human oversight. AI should support, not replace, human decision-making. Employers should regularly review recruitment outcomes to identify patterns that may indicate bias and conduct equality monitoring where possible. Any recommendations produced by AI systems should be double-checked by trained staff who understand both the technology and the legal framework within which it operates.

AI can have huge benefits in recruitment, but fairness and transparency cannot be automated. By combining clear communication with candidates and meaningful human involvement at every stage, employers can benefit from AI while meeting their legal obligations and promoting genuinely inclusive hiring practices.

As the use of AI in recruitment continues to grow, employers should prioritise reviewing their hiring processes. Taking early legal advice and ensuring appropriate safeguards are in place will help businesses benefit from AI while minimising legal and reputational risk.

If you are considering introducing AI into your recruitment process, or already rely on automated tools, specialist employment law guidance can help ensure your approach remains fair, transparent and compliant.