We're Hiring Engineers All Wrong. Here's How HuffPost Evolved
But how do we know if they can code?
This is the key anxiety a software organization faces when evaluating a potential hire. At HuffPost Engineering, we've tried to turn this question on its head -- and, for the most part, eliminate it. As it turns out, there's a great deal of research out there on best practices for interviewing. But for whatever reason, the software industry has gravitated toward a set of practices that don't really align with the goal of hiring the right people. We at HuffPost have fallen into some of these pitfalls in the past. What follows is an account of how we have rethought our engineering hiring practices.

Engineering competencies

To begin with, the question above doesn't really reflect what we're trying to discern from our interview process. The TechCrunch article "On Secretly Terrible Engineers" does a great job of skewering this anxiety. To summarize: it's safe to assume that any applicant who has spent at least a year at an engineering job "knows how to code" well enough not to get fired. A simple pre-screening process filters for this.

What we're really looking for is more nuanced. Instead of asking whether a candidate can code, why not ask whether the applicant has the concrete skills necessary to succeed in the role? That's a little better, and it leads directly to the more fundamental question of what those concrete skills are. Ah, now we're getting somewhere. Many job descriptions make the mistake of focusing on what a good applicant looks like (e.g., five years of Java experience, experience coding single-page MVC web apps), not what a successful engineer actually does -- in other words, identifying the competencies of the role. The former filters out people who have the aptitude to do the job at hand and artificially selects for people with relevant knowledge, when what we really need is relevant ability. For the typical developer role at HuffPost, competencies might include things like:
Even when presented with good lists of competencies, it can be tough to untrain oneself from retreating back to "knows how to code." We found it productive to think about which attributes are exemplified by our current team members. Since all of our existing engineers do, in fact, "know how to code," it tends to be the so-called soft skills and work habits that stand out. This exercise is essential to a predictive screening process. Yet I know that in the past I have performed interviews, and been interviewed, in unstructured processes that don't reflect any preselected competencies. Implicitly, such processes tend to select for interviewees who are attractive, articulate, assertive, or affable. While these may be very pleasant attributes, it is unlikely that they correlate with on-the-job success.

Competency filters

Once the competencies for success in the role are selected, the next task is to figure out how to evaluate the applicant with respect to them. The number one goal of the interview process is to reliably select applicants who will be successful in the role. This is accomplished by what you might call filters: the tasks the company performs to evaluate an applicant. In much the same way as we aim to evaluate our applicants objectively in terms of competencies, we also choose the filters used to measure applicants in terms of specific criteria, including:

- Predictiveness: how well performance on the filter predicts on-the-job success
- Specificity: how closely the filter resembles the actual work of the role
- Objectivity: how consistently the filter measures different applicants
- Prep cost: the effort required to design the filter
- Opportunity cost: the interview time the filter consumes
- Applicant load: the burden the filter places on the applicant
"So, tell me about your past experience"

This is how a typical interview session starts, and it's problematic for a number of reasons. First, it has an indeterminate opportunity cost: the question provides no parameters, so the applicant is free to go on at length and chew through your precious interview time. It does not directly measure any well-defined competency -- unless, of course, the ability to talk extemporaneously about oneself is actually an important skill for the role. Worse, it is often asked by every single interviewer, so any benefit the question does have drops to nearly nil after the first application of the filter. This filter falls under the broader category of open-ended discussion, which generally suffers from poor specificity.

Whiteboard coding

The hallmark of the engineering interview. Completely divorced from the normal coding environment and resources, and under artificial time pressure, we watch the applicant try to solve some problem. Being generous, let's suppose the problem selected is actually representative of the work done on the job, and therefore has good specificity (rarely the case, in my experience). This filter still suffers dearly from high prep cost, as it is difficult to come up with the perfect whiteboard exercise. It suffers from high opportunity cost as well, since it often eats a solid chunk of time per problem. Objectivity can be poor, too, given that a particular problem may happen to fall into one applicant's wheelhouse but not another's.

"What would you do if..."

Questions of this sort fall under the category of hypotheticals. These often fail the core goal of predictiveness. A crafty applicant will tell you what you want to hear, but you'll be left with little indication of whether they'll actually do what they say.

Take-home work

Homework has high applicant load and poor objectivity.
There is also a high prep cost, because it takes a lot of effort to design a good homework assignment. Like whiteboard coding, assignments also tend to suffer in practice from poor specificity. Some applicants love doing a sample project and really dive in, but others may find themselves strapped for time because they have a current job or are interviewing at many places.

Pair programming tryout

The nice thing about this one is that you get to see how the applicant performs on real programming tasks faced on a real day. It doesn't get much more predictive or specific than that. The problem is high opportunity cost and poor objectivity. It fails the objectivity test because the task the applicant gets depends on what happens to be available that day.

Weirding the applicant out

I have never experienced this myself, but I've heard of it, and it is an example of the stress interview technique. It probably goes without saying that this intentionally scores poorly on applicant load.

The gold standard of filters

Fortunately, there is a technique that performs quite well under the above criteria. It's known as the behavioral question, and it has a long history in the business world. The idea rests on the premise that past behavior predicts future performance. The approach centers on asking specific, competency-aligned, open-ended questions about the applicant's past experience. A critical difference between a behavioral question and a hypothetical one is that the latter asks what the applicant would do rather than what they have done. A variety of techniques should be used to get the most out of the behavioral approach.
One key advantage of the behavioral approach over many others is its objectivity. Because it allows the candidate to choose the venue, it lets people of diverse backgrounds shine. Suppose you have a junior role open. One applicant has a bit of relevant experience but little concrete individual achievement. Another has demonstrably less directly relevant experience but has clearly shown exceptional application of the soft skills of the job, as well as adaptability, grit, and the ability to quickly scale a learning curve. A well-calibrated behavioral approach allows these two applicants, with different backgrounds, to be compared. To be clear, sometimes directly applicable experience is more important than adaptability, but that trade-off should be considered explicitly in the pre-planning process.

Our findings

At HuffPost, we've experimented with many of the above interview techniques, and we likely will continue to do so. There's nothing inherently wrong with any interview technique, as long as it strikes an acceptable balance between predictiveness and the other attributes of a filter. That balance varies by role. The important part is to make the consideration of filters intentional with regard to the role's competencies and the filter's characteristics. We have simply observed that the behavioral approach tends to land at a very attractive point when stacked up against other filtering techniques. For pre-planning our interviews, we used a competency guide developed by our parent company, Aol. The competencies in the Aol handbook are things like communication, learning agility, and coachability -- generic enough to apply to roles in pretty much any department. For other organizations, similar guides of competencies and aligned prompts/questions are available online. A quick search turned up a Complete List of Behavioral Interview Questions on Henderson State University's website, for instance.
Redundancy between interviewers was eliminated (unless intentionally retained). As we prepared to apply the behavioral approach to real interviews, we had some initial concern that our generic competencies wouldn't filter for engineering ability specifically. That concern turned out to be unfounded: engineering candidates are eager to answer behavioral questions with recollections that reveal their technical experience. We do also augment the soft-skill and work-habit competencies with some tech-specific competencies.

There were other benefits to our restructured interview process. Having a specific screening framework put our interviewers at ease. Although we don't track data on this, I'm certain that the discipline it imposed on the process left a better impression on our applicants, too. I have seen applicants loosen up when told in the interview preamble that there wouldn't be any whiteboarding. A more comfortable interview experience leads to better conversion rates for candidates we make offers to, but we see upside in leaving a good impression with candidates we don't make offers to as well.

Don't get me wrong -- we haven't solved every problem. Optimal application of any screening technique requires training and practice on the part of the interviewers. But the upside of a behavioral-heavy approach is that any such honing of skills pays dividends for the entire interview process. The end result of the overhaul has been a process that yields comprehensive pictures of applicants' soft skills and work habits within technical roles. I would argue that these, not raw coding ability, are the heart of what separates top performers from people who struggle to contribute.

Want to give our interview process a spin? We're hiring.