placeholder
Stuart Gentle Publisher at Onrec

Testing for AI Overreliance in Remote Employees

The impact of AI on the remote workforce is undeniable. New jobs are being created, old jobs are being made obsolete, and existing careers are being reshaped with the introduction of AI tech.

While the implementation of AI can sometimes prove positive, more often it's a mixed bag.

In remote work, overreliance on AI can come from several sources. It might be that an employee has taken on too much work, and they're using it to plug gaps when they come up short. It could also be that the employee has little or no actual experience or skill in a position, and they simply assume AI can do the job just as well as a real person. In the worst case, it could come from bad actors who know they're scamming and who know they'll be caught; they're just counting on a payday beforehand.

Whatever the case, detecting and correcting or removing these people from your workforce is a skill every modern employer needs in their arsenal. To save on wasted time and money, learning this approach is becoming more necessary with time, especially if you're looking to expand.

Screening New Employees

The best place to catch people who rely on AI instead of actual knowledge starts at the job interview stage. One reliable approach is to have a couple of AI large language models open while you’re interviewing, such as Grok and ChatGPT. Any questions that you ask through text or voice chat can then be entered into these LLMs, so you can match the interviewee's responses against those of the AI. If they're near identical, especially after delays in response, you might have an indicator.

The eagerness of some AI users to please could also be leveraged as a useful tool. An AI user who has no problem reading off an AI response can easily be tested in a non-traditional job-related area where you have knowledge. For example, if you're a fan of progressive jackpot slots, you could bring up a game like Red Wizard or the Age of the Gods series. If the person on the other side is eager to answer, but they get basic facts like game features wrong, it could signal an area worth exploring further.

Managing Existing Questions

If you already have employees who you suspect of relying on AI to the detriment of their work, output can also be tested by inserting hidden text into work requests. At the end of a request, for example, have a line asking for specific output, but make that text invisible and the smallest text size possible. This way, if the text is copied directly into an AI, the output will reflect this fact.

Free Man Computer photo and picture

Source: Pixabay

Most importantly, before seriously considering or implementing any of the above ideas, it's important to understand the significance of being honest with our staff. You'll need to clearly lay out why AI use isn’t accepted, and that suspected work might be tested. New employees need to be informed that you don’t accept AI, to ensure the entire process is above board and everyone is respected.

There might be a day when AI like LLMs can reliably be used in business applications, but today is not that day. Inaccuracies, mistakes, and outright hallucinations are all still issues for modern AI, and until this is fixed, the systems shouldn't be relied upon without considerable oversight. It's still up to us, and the value of employees who know the importance of human-led work can't be understated.