Stuart Gentle, Publisher at Onrec

Lawyer warns of AI recruitment risks

Businesses and HR professionals should be wary of unintentional discrimination as a result of using artificial intelligence in the recruitment process, an employment law expert has warned.

Katherine Cooke, Senior Associate in Employment Law at Midlands law firm Higgs LLP, said businesses would be liable for any discrimination that occurred as a result of using AI.

Several businesses, including retail giant Amazon, have already run into problems when using AI to screen job applicants.

“There are going to be many ways that AI can be used to streamline processes and improve productivity,” said Katherine.

“One of the ways businesses are already using AI is in screening CVs for vacancies where there are many applications. The AI can scan the applications and rank them on pre-determined criteria, for example academic results or keywords used to describe skillsets.

“While this may seem an economical way to produce a manageable shortlist, I would urge caution. There are already examples where businesses have used AI in this way and unintentional gender and race discrimination has occurred, which is clearly unacceptable and illegal.

“Even where the discrimination is unintentional and has been committed by an AI algorithm, the business could still be held liable. AI software must be carefully monitored to identify possible discriminatory outcomes.”
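As a concrete illustration of what monitoring for discriminatory outcomes can look like in practice, here is a minimal sketch applying the four-fifths (80%) rule, a first-pass adverse-impact check drawn from US EEOC guidance, to shortlisting results broken down by group. The `adverse_impact` function, group labels and figures are hypothetical and purely illustrative, not part of any tool described in this article.

```python
# Minimal adverse-impact check for an AI shortlisting tool (hypothetical data).
# Computes each group's selection rate and flags any group whose rate falls
# below four-fifths (80%) of the highest group's rate, a first-pass
# adverse-impact screen drawn from US EEOC guidance.

def adverse_impact(outcomes: dict[str, tuple[int, int]]) -> dict[str, dict]:
    """outcomes maps group label -> (number shortlisted, number of applicants)."""
    rates = {g: s / n for g, (s, n) in outcomes.items()}
    best = max(rates.values())
    return {
        g: {
            "selection_rate": round(rate, 3),
            "ratio_to_best": round(rate / best, 3),
            "flagged": rate / best < 0.8,  # below four-fifths of the best rate
        }
        for g, rate in rates.items()
    }

# Hypothetical results from a single screening round.
for group, stats in adverse_impact({"male": (120, 400), "female": (60, 350)}).items():
    print(group, stats)
# female is flagged: selection rate 0.171 vs 0.300, a ratio of roughly 0.571
```

UK equality law sets no fixed numerical threshold for indirect discrimination, so a check like this is a monitoring aid rather than a legal safe harbour.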

Amazon implemented an automated recruitment tool that unwittingly produced a discriminatory bias against female applicants. Although the algorithm was created on a neutral basis, the CV screening system modified itself to prefer male candidates because of the data it was trained on, collected from Amazon's employees over the preceding 10 years. As a result, CVs containing the word ‘women’s’ were downgraded by the algorithm.
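The mechanism behind the Amazon example, a model inheriting bias from its training data rather than from any explicit rule, can be reproduced in miniature. The sketch below fits a naive log-odds keyword scorer to a small, deliberately skewed set of invented past hiring decisions; every CV, token and weight here is hypothetical.

```python
import math
from collections import Counter

# Toy illustration of how a CV scorer can inherit bias from skewed history.
# Hypothetical training data in which past hires rarely mention "women's"
# (e.g. "women's chess club"), because most past hires were men. No rule
# mentions gender, yet the learned weights end up penalising it.

hired = [
    "python engineer chess club captain",
    "java engineer rowing team",
    "python developer chess society",
]
rejected = [
    "python engineer women's chess club captain",
    "java developer women's rowing team",
]

def token_weights(hired_docs, rejected_docs, smoothing=1.0):
    """Smoothed log-odds of each token appearing in hired vs rejected CVs."""
    h, r = Counter(), Counter()
    for doc in hired_docs:
        h.update(doc.split())
    for doc in rejected_docs:
        r.update(doc.split())
    vocab = set(h) | set(r)
    return {t: math.log((h[t] + smoothing) / (r[t] + smoothing)) for t in vocab}

weights = token_weights(hired, rejected)
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
# "women's" gets the most negative weight purely from the skewed history:
# the scorer has learned a proxy for gender, not a measure of ability.
```

The point generalises: a model fitted to past decisions reproduces whatever patterns those decisions contain, which is why auditing outcomes matters as much as inspecting the rules.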

Uber, meanwhile, introduced facial recognition software to verify that only its registered drivers were taking bookings through the app. The authentication software had difficulty recognising dark-skinned faces, leaving those users unable to access the app and find work. The disparity was greater for darker-skinned women, with a failure rate of 20.8% compared with 6% for men.

Katherine said another potential risk of using AI is data protection, as AI systems may process employee or customer data elsewhere, where it can be accessed by third parties.

She said: “Where AI is used, data will be transferred outside the organisation. Employers will, therefore, need to be alert to where this data is going and how it will be stored and subsequently used.

“ChatGPT stores every logged conversation, including any personal data entered. This carries the risk of a breach of confidentiality and of data being processed outside its lawful purpose.

“Data processors must act to ensure that staff are trained to recognise the data protection risk of using AI software and take security measures to protect personal data.”
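One simple technical measure of the kind Katherine describes is redacting obvious personal data before any text leaves the organisation, for example before a prompt is sent to an external AI service. The sketch below is a deliberately minimal, regex-based illustration; the patterns are simplified, and a real deployment would need far more robust detection of names, addresses and other personal data in free text.

```python
import re

# Minimal illustration of one "security measure": redacting obvious personal
# data (emails, UK-style phone numbers, NI numbers) before text is sent to an
# external AI service. These regexes are simplified and only catch highly
# regular patterns; production systems need proper PII detection.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Z]\b"),
}

def redact(text: str) -> str:
    """Replace matched personal data with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jo on 07700 900123 or jo.bloggs@example.com, NI QQ 12 34 56 C."))
# -> Contact Jo on [PHONE] or [EMAIL], NI [NI_NUMBER].
```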

Higgs’ experienced employment lawyers can advise businesses on all potential pitfalls of AI in the workplace and help to draft effective policies for its use.