
AI: the only defence against rising cyberattacks in the recruitment sector?

Scott Brooks, Technical Strategist at IT solutions provider Cheeky Munkey, provides his expertise.

The recruitment sector stores large amounts of data, making it a vulnerable target for cyberattacks. Recruitment companies hold data on clients, candidates and the company itself, which can include bank details, passport copies, visa information and contact details. This wealth of information is appealing to cybercriminals, and the impact of an attack can be huge - damaging an organisation's reputation and the trust between clients and candidates [1].

With the number of cyberattacks in the recruitment sector increasing, and big firms such as Whitbread [2] and Career Group [3] being affected, the need to consider how AI can be used to protect the industry against cyberattacks is more pressing than ever.

Big businesses such as Google, Tesla and PayPal [4] are using AI systems to improve their cybersecurity solutions. At the same time, cybercriminals can use AI technology to create new cyberattack methods that are harder to defend against.

With this in mind, recruitment firms must invest in understanding both the new kinds of cyber threats they may face and the AI cybersecurity systems that can defend against them. This article provides an overview of the new threats AI poses to recruitment companies, as well as the reasons to consider using AI as a defensive system.

New AI threats to cybersecurity

Hackers using AI

It’s been found that AI is making cybercrime more accessible, with less skilled hackers using it to write scripts that enable them to steal files [5]. It’s easy to see how AI could increase the number of hackers by removing the need for sophisticated cyber skills.

Hackers can also use machine learning to test the malware they develop. Once a piece of malware is written, attackers can model their methods against typical defences to see what gets detected, then adapt the malware to make it more effective - making it much harder for IT staff to catch and respond to threats.

False data can also be used to confuse AI systems. When companies use AI for cybersecurity, the models learn from historical data to recognise and stop attacks. Cybercriminals can feed in false positives, teaching these models that certain malicious patterns and files are ‘safe’. Hackers can then exploit this blind spot to infiltrate companies’ systems.
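As an illustration of the idea, the minimal sketch below (written with scikit-learn and entirely synthetic numbers) shows how injecting attack-like samples labelled ‘safe’ into training data can drag down a simple classifier’s detection rate. It is a toy example under assumed features and figures, not a description of any real attack.

```python
# Toy illustration of training-data poisoning. All features, figures and
# labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: benign samples cluster low, malicious cluster high.
benign = rng.normal(0.2, 0.1, size=(200, 2))
malicious = rng.normal(0.8, 0.1, size=(200, 2))
X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 200 + [1] * 200)  # 0 = safe, 1 = threat

# Poisoned data: the attacker injects malicious-looking samples labelled "safe".
poison = rng.normal(0.8, 0.1, size=(300, 2))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(300, dtype=int)])

# Train one model per dataset and compare how often simulated attacks are flagged.
test_attacks = rng.normal(0.8, 0.1, size=(100, 2))
for name, X, y in [("clean", X_clean, y_clean), ("poisoned", X_poisoned, y_poisoned)]:
    model = LogisticRegression().fit(X, y)
    detected = model.predict(test_attacks).mean()
    print(f"{name} model flags {detected:.0%} of simulated attacks")
```

Run as-is, the model trained on poisoned data flags far fewer of the simulated attacks than the one trained on clean data, which is the blind spot described above.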

Imitation game

Cyber threats that would once have been categorised as ‘easy’ to repel are getting harder to defend against as AI tools become better at imitating humans. A key example is phishing emails. Bad grammar and spelling are usually telltale signs warning recipients not to click a link in an email. Attackers are now using chatbots to make sure their spelling and grammar are spot on, making it trickier for recruiters to spot the red flags.

Cybersecurity skills gap

Currently, there’s a skills gap within the cybersecurity industry. It’s argued that not enough people have the skill level and knowledge required to develop and implement AI cybersecurity systems. This is because AI is developing at such a rapid pace that it’s hard for professionals to keep up [6].

Hiring people with the specialised skills needed, as well as procuring the software and hardware required for AI security systems, can also be costly – especially for recruitment companies with already stretched budgets. This means that firms are likely playing catch-up with hackers. 

How can AI help improve cybersecurity?

Although AI can be used for ever-more sophisticated attacks, it can also be a powerful tool for improving cybersecurity.

Analysis

AI offers an improved level of cybersecurity, helping to reduce the likelihood of a successful attack on recruitment companies. By analysing existing security systems and identifying weak points, AI allows IT staff to make the necessary changes before attackers can exploit them.

Artificial intelligence systems use algorithms to assess network traffic and learn which patterns are normal for a given network. They can quickly spot unusual traffic and immediately alert security teams to potential threats, allowing for rapid action.
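As a simplified illustration of this kind of anomaly detection, the sketch below uses scikit-learn’s IsolationForest on made-up per-connection features (bytes sent and session duration). The features, figures and thresholds are assumptions for the example rather than details of any particular product.

```python
# Toy anomaly-based traffic monitoring. Features and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# "Normal" traffic the model learns from: modest transfers, short sessions.
normal_traffic = np.column_stack([
    rng.normal(500, 100, 1000),   # bytes sent per connection
    rng.normal(2.0, 0.5, 1000),   # duration in seconds
])

detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal_traffic)

# New observations: two ordinary connections and one large, long-lived
# transfer that might indicate data being exfiltrated.
new_connections = np.array([
    [520, 2.1],
    [480, 1.8],
    [50000, 300.0],
])
labels = detector.predict(new_connections)  # 1 = normal, -1 = anomalous
for conn, label in zip(new_connections, labels):
    status = "ALERT" if label == -1 else "ok"
    print(f"bytes={conn[0]:>8.0f} duration={conn[1]:>6.1f}s -> {status}")
```

In a real deployment the model would be trained on a company’s own traffic and the alerts routed to the security team, but the principle of learning “normal” and flagging deviations is the same.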

In addition to preventing network attacks, AI can also be used to improve endpoint security. Devices such as laptops and smartphones are commonly targeted by hackers. To combat this threat, AI security solutions scan for malware within files – quarantining anything suspicious. 

Advanced data processing

AI-based security solutions are continuously learning and can process huge volumes of data, so they can detect new threats and defend against them in real time. By picking up on subtle patterns, these systems can spot threats that humans would likely miss. This also allows AI to keep pace with ever-changing attacks better than traditional antivirus software, which relies on a database of known malware behaviours and cannot identify threats that fall outside it.

The ability of AI systems to handle so much data also makes them highly scalable: they can cope with growing volumes of data across cloud environments and Internet of Things devices and networks.

Working with humans

Since AI systems can automatically identify threats and communicate the severity and impact of an attack, they help cybersecurity teams to prioritise their work. This saves workers time and energy, allowing them to respond to more urgent security threats. 

Task automation is another key benefit of AI for firms. AI systems can automate tasks such as routine assessments of system vulnerabilities and patch management. This reduces the workload of external cybersecurity teams and allows for more efficient working, reducing costs for recruitment companies. By automating these tasks, AI can alleviate the shortage of skilled workers, addressing the cyber skills gap [7].
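To make the idea of automating routine checks concrete, here is a minimal sketch that compares a hypothetical inventory of installed software against a hand-written list of minimum safe versions. In practice such a script would pull advisories from a live vulnerability feed rather than a hard-coded dictionary, and every name and version below is illustrative.

```python
# Toy patch-status check using the "packaging" library for version comparison.
# The inventory and minimum safe versions are made up for illustration.
from packaging.version import Version

installed = {
    "openssl": "1.1.1",
    "nginx": "1.18.0",
    "postgresql": "13.2",
}

# Versions strictly below these are treated as needing a patch (illustrative).
minimum_safe = {
    "openssl": "3.0.0",
    "nginx": "1.20.0",
    "postgresql": "13.10",
}

for package, current in installed.items():
    required = minimum_safe.get(package)
    if required and Version(current) < Version(required):
        print(f"PATCH NEEDED: {package} {current} (upgrade to >= {required})")
    else:
        print(f"ok: {package} {current}")
```

A scheduled job running this kind of check and feeding the results into a ticketing or alerting system is the sort of routine work that AI-assisted tooling can take off a security team’s plate.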

The rise of AI is understandably a cause for concern for recruitment companies and staff alike. Improved cyber threat capabilities mean that organisations need to be prepared for changing attacks. However, it’s clear that adopting AI systems is the best way for the recruitment sector to improve its own cybersecurity. By combining adept cybersecurity staff with artificial intelligence cybersecurity systems, the recruitment sector can stay ahead of new threats and improve the efficiency of its operations.

About Scott Brooks

Scott Brooks is a Technical Strategist at IT support company Cheeky Munkey.

Scott has worked in technology for over 16 years, including as a specialist on the organising committee for the London 2012 Olympic Games. He is also well-versed in artificial intelligence.

About Cheeky Munkey

Cheeky Munkey provide a full suite of managed IT services and support to hundreds of SME customers across a diverse range of sectors, including education, finance, legal, manufacturing, logistics, not-for-profit, recruitment and architecture. They have the years of experience and expertise needed to help your business succeed.


[1] https://www.linkedin.com/pulse/impact-cyberattacks-recruitment-industry-hirewing/

[2] https://www.apsco.org/resource/biggest-cyber-security-threats-to-the-recruitment-sector.html

[3] https://www.rec.uk.com/our-view/insights/business-advice/growing-threat-ransomware-recruitment

[4] https://jaydevs.com/machine-learning-and-its-use-in-cybersecurity/

[5] https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/

[6] https://www.harnham.com/addressing-the-ai-and-digital-skills-gap/

[7] https://www.forbes.com/sites/forbestechcouncil/2023/04/06/can-ai-help-solve-the-workforce-skills-gap/?sh=ad60af6134f7