
Tech Interviews Are Broken: Here’s How to Fix Them in the Age of AI

If you’ve recently interviewed candidates over video calls, you’ve likely experienced the frustration of determining whether a candidate is using AI to generate code or writing it themselves.

The Tell-Tale Signs

There are a few unmistakable patterns.

The first is the “silent pause followed by perfection” pattern. You ask a coding question, and the candidate goes quiet for a minute or so. Then, suddenly, they start typing syntactically and semantically correct code—line by line, in perfect order. Some of the smart candidates even narrate what each line does as they type. Before Generative AI, this rarely happened. Candidates who didn’t know the answer would ask clarifying questions, explore different approaches, and iteratively work toward a solution. Only those who had memorized a specific problem could produce code this smoothly without going back and forth while writing each line.

You get the second sign when you ask follow-up questions. Perhaps you tweak the original problem or ask the candidate to explain their choice of data structure or algorithm. Here, two scenarios typically unfold:

In the first scenario, technically competent candidates can explain the AI-generated solution and adapt it to your follow-up questions. These candidates clearly understand what they’ve produced, even if they didn’t write it from scratch. But this puts you in a difficult position: Do you give them an “inclined” vote since they answered everything correctly? Or do you reject them because the recruiter’s instructions explicitly prohibited AI use during interviews?

In the second scenario, the candidate struggles to explain their initial answer or modify it. This makes your decision straightforward—you can confidently provide a “not inclined” vote.

What Should We Actually Be Evaluating?

However, the question remains: what exactly are you looking for in a coding interview? With advanced coding agents available, companies expect their software development engineers (SDEs) to use these agents to produce code at a rate never seen before. Coding agents are already good enough to fix simple bugs, and they are expected to get better at diagnosing and fixing more complex ones. So, what should you look for in a candidate?

System design thinking: Can they spot anti-patterns or wrong choices the AI is making while building a complex system?

Advanced debugging skills: Even if AI can fix most bugs, can the candidate handle the genuinely difficult ones, where AI gives up and produces incorrect or inefficient fixes? There are still areas in existing products and services where AI simply lacks a comprehensive understanding of the underlying systems, from low-level internals to high-level architecture. Can the candidate navigate such complexity? (See the sketch below.)
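
As a purely hypothetical illustration of the kind of subtle defect worth probing, consider a lower-bound binary search. A plausible-looking “fix” that an AI assistant might suggest (hi = mid - 1 in the else branch) silently breaks correctness; the function name and inputs below are invented for this sketch.

```python
# Hypothetical interview probe; all names and inputs are invented.
# Task: return the index of the first element >= target (lower bound).
# A tempting "fix" is hi = mid - 1 in the else branch, but that can
# discard the answer itself; the half-open invariant below is correct.

def first_at_least(nums: list[int], target: int) -> int:
    lo, hi = 0, len(nums)      # search the half-open range [lo, hi)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1       # mid is too small; answer lies to the right
        else:
            hi = mid           # mid may be the answer; keep it in range
    return lo                  # equals len(nums) when no element qualifies

assert first_at_least([1, 3, 3, 5, 8], 3) == 1
assert first_at_least([1, 3, 3, 5, 8], 9) == 5  # "not found" sentinel
```

A candidate who can state the loop invariant, rather than pattern-match on the code’s shape, is demonstrating exactly the judgment that AI-assisted workflows still depend on.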

In short, the bar for software engineers has become even higher, and simple coding interviews with questions sourced from Leetcode and similar platforms no longer give the right signal for hiring a candidate. Since software engineering is not dead yet, you still need to know whether the candidate can code. There are a few ways to get that signal.

Practical Solutions

First, insist on in-person coding interviews, which take the whole AI factor out of the picture.

Second, if your company doesn’t have the budget to fly candidates out for in-person interviews, create a sandboxed system and ask the candidate to solve problems inside it. With Gen AI, creating and deploying a sandbox good enough for an interview should be neither difficult nor time-consuming. You can also create multiple such sandboxes so the problems never surface on online platforms, ensuring each candidate faces a fresh challenge. A minimal sketch of what such a task might look like follows.
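
As a sketch only, with an invented scenario and a planted bug, here is the shape such a sandbox task might take: a small billing module whose discounted totals drift, handed to the candidate along with a failing expectation to explain and fix.

```python
# Invented sandbox task: a small billing module with a planted bug.
# The candidate is told that discounted carts total incorrectly and
# must diagnose why: per-line rounding accumulates error, and the fix
# is to apply the discount once to the subtotal.

from dataclasses import dataclass

@dataclass
class LineItem:
    name: str
    unit_price_cents: int
    quantity: int

def cart_total_cents(items: list[LineItem], discount_pct: float = 0.0) -> int:
    total = 0
    for item in items:
        line = item.unit_price_cents * item.quantity
        total += round(line * (1 - discount_pct / 100))  # bug: rounds per line
    return total

# Three 35-cent items at a 10% discount should cost round(105 * 0.9) == 94
# cents, but per-line rounding yields 3 * round(31.5) == 3 * 32 == 96.
cart = [LineItem("sticker", 35, 1)] * 3
assert cart_total_cents(cart, discount_pct=10.0) == 96  # exhibits the bug
```

Because the sandbox is generated rather than pulled from a question bank, variants are cheap to produce: change the domain, the module, or the planted bug, and each candidate still faces a problem they cannot simply look up.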

The goal isn’t to eliminate AI from the equation entirely—after all, the engineers will use it daily. The goal is to find candidates who can think critically, design systems effectively, and solve problems that AI cannot.