We need to talk about cheating. As the provider of a platform that helps companies assess the capabilities of technical job candidates, we’re hearing about it a lot, especially now that ChatGPT and generative AI have exploded onto the scene. Let me explain.
Most of us remember a time in school when we weren’t allowed to use a calculator on a test or a homework assignment. To do so was considered cheating. The reason, of course, was that we needed to learn the fundamentals of mathematics.
Now we have jobs, many of which involve at least occasional math. And of course, none of us is doing long division with pencil and paper. We have better tools for that, whether it’s a basic calculator app, a spreadsheet, or something more advanced. Our employers wouldn’t accuse us of cheating for using these tools, because they help us do our jobs better.
To build technology, developers use tools that are far more complex, because the work they do is far more complex. Those tools are constantly evolving and improving, too. They help developers become more efficient and better at their jobs as the problems they are expected to tackle keep growing in complexity. These are way beyond calculators.
These tools — including code samples, plugins, and libraries that developers can plug into the solutions they are building — are often made by someone else and found on the open web. You might call some of these tools shortcuts, but using them is not cheating.
Except when it comes to a code test. Unlike many professionals, developers interviewing for new jobs are usually asked to complete a series of tests designed to assess their technical ability. On these often mundane, commonly reused code tests, the hiring company typically discourages searching for and finding code samples online. It might even call that cheating.
Which raises the question: If a developer approaches a code test the same way, and with the same tools, that they would approach a job assignment, then why is that considered cheating?
Recently, we wrote about how developers can — and should — be using generative AI tools like ChatGPT. Our own CTO Oliver Weng is using it. It helps him, among other benefits, save time over the course of a project.
Any developer with a little know-how (a.k.a. “prompt engineering”) can use ChatGPT to generate code instantly. So those simple take-home code tests, a staple of many companies’ technical interviewing processes, will be easier than ever for candidates to cheat on. That is, if you consider using tools to research, find, or generate code samples cheating.
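To make that concrete, here’s a minimal sketch of the kind of answer a single prompt (say, “write a function that checks whether a string of brackets is balanced”) can return in seconds. The question and code below are purely illustrative; they aren’t drawn from any particular company’s test.

```python
# Illustrative only: the sort of short, searchable solution a generative AI
# tool (or a quick web search) can produce for a typical take-home question.
def is_balanced(s: str) -> bool:
    """Return True if every (, [, { in s is closed in the correct order."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack

print(is_balanced("([]{})"))  # True
print(is_balanced("([)]"))    # False
```

If a question can be answered this easily by a tool, the test is mostly measuring recall, not the problem-solving ability the job actually requires.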
We know that coding tests are a popular way for talent acquisition teams to efficiently vet technical talent at various stages of the hiring process. We also know that many developers don’t like these simplistic tests. As we found in a recent survey, most job candidates don’t believe coding tests are a good measure of whether they could succeed in the position they applied for.
Not surprisingly, most of our survey respondents have either cheated on these code tests themselves or know someone who has. Of course, “cheating” may be as simple as googling for a code snippet that solves a given question, since many of these technical challenges are both reused by companies and easy to find on the open web.
To discourage candidates from copying code, some companies use plagiarism detectors. As one company wrote, “The key is to avoid questions that have answers so short that a plagiarism detection system can’t detect when a candidate has used a tool like ChatGPT.”
However, maybe we should all reconsider what we mean by “cheating.” If we believe that generative AI tools like ChatGPT have a place in our work, then why would we discourage a candidate from using those tools during the interviewing process?
At Filtered, we help talent acquisition and hiring teams administer technical assessments that simulate the job through a virtual development environment.
Instead of simplistic code tests, candidates are given a series of exercises that mimic real-world development challenges. And they can use a variety of tools that are part of the development stack for the company they are interviewing with.
By simulating the job and tools, hiring teams can understand how candidates approach the problems they are given, and how they solve them. That includes how they use different resources — and could include emerging generative AI tools like ChatGPT.
If you accept that generative AI tools will be standard in the developer’s toolkit, then it’s time to fully embrace them in the technical hiring process.
ChatGPT isn’t a one-off revolutionary breakthrough.
OpenAI’s primary innovation is ChatGPT’s simple interface: anyone can log in, ask a question, and get what appears, at least on the surface, to be a compelling response. But the under-the-hood AI technology, a neural network trained on massive amounts of text data, is the result of an evolution that has been happening for years.
And while the buzz around ChatGPT will surely fade, applications that leverage generative AI are bound to explode, especially as the underlying AI technology continues to improve.
Hiring practices need to evolve with it. If companies want to attract top technical talent, their technical hiring process needs to be consistent with how developers work. If it isn’t, as we found in our recent survey, that top talent is likely to withdraw from the process.
That means we need to stop worrying about “cheating” and let our technical job candidates show how they use the tools of the trade. After all, we wouldn’t ask a finance job candidate to do long division, would we? We wouldn’t take away their spreadsheet tools and calculator apps and make them prove they can do basic math.
It’s time to ditch the code test and use technical assessments that better simulate how developers actually work instead of just asking them to regurgitate basic code. And we need to embrace the tools — including generative AI — that will help our talent solve the problems they are hired to solve.