Hiring Reimagined: Why a Little Human Touch Makes Algorithms Way Better!
Alright, let’s dive into something that’s on a lot of our minds these days: how we make big decisions, especially when it comes to hiring. Imagine you’re a psychologist, or any hiring manager really, trying to pick the best person for a job. You’ve got test scores, interview notes, gut feelings – a whole mix of information. So, how do you put it all together?
The Old Ways vs. The New Tech
Traditionally, we’ve relied on what’s called holistic prediction. That’s just a fancy way of saying we mull it over in our own minds, using our experience and intuition. Think of it as the “human touch” in its purest form. Then there’s mechanical prediction – using an algorithm, a statistical model, to combine that information. Now, study after study, across all sorts of fields, tells us something pretty clear: algorithms are generally better at making accurate predictions. Yep, even in hiring.
Why is that? Well, we humans, bless our hearts, are a bit… noisy. As Daniel Kahneman and his colleagues pointed out, our judgments can be inconsistent. The same hiring manager might weigh a candidate’s interview score differently on a Monday morning versus a Friday afternoon. Different managers? Even more variation! Algorithms, on the other hand, are ruthlessly consistent. They apply the same rules, the same weights, every single time, cutting out all that human “noise.”
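To make that “ruthless consistency” concrete, here’s a tiny Python sketch of what a mechanical prediction boils down to: a fixed, weighted combination of predictor scores. The weights and scores below are invented for illustration; they’re not the study’s actual model.

```python
# A toy mechanical prediction: the same fixed weights, applied the same way,
# for every candidate, every time. (Illustrative weights only.)
WEIGHTS = {"cognitive_ability": 0.5, "conscientiousness": 0.3, "interview": 0.2}

def mechanical_prediction(candidate: dict) -> float:
    """Weighted sum of standardized predictor scores.

    Given the same candidate, this returns the same number on a Monday
    morning and on a Friday afternoon: no noise, no mood, no fatigue.
    """
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

candidate = {"cognitive_ability": 1.2, "conscientiousness": 0.4, "interview": -0.3}
print(mechanical_prediction(candidate))  # identical output on every run
```

A human judge, by contrast, is effectively applying slightly different weights from one case (and one day) to the next, and that inconsistency is exactly the noise the algorithm avoids.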
So, if algorithms are so great, why aren’t we all using them for every hiring decision? Ah, there’s the rub. Many folks feel like they’re being turned into “robots,” or robbed of their expertise, when an algorithm takes over. We like our autonomy, our sense of control. And fair enough! But here’s the kicker: giving decision-makers more autonomy usually means predictive accuracy takes a nosedive compared to just letting the algorithm do its thing. This, my friends, is what we call the “autonomy-validity dilemma.” We want the freedom to decide, but that freedom can lead to less accurate outcomes. It’s a real head-scratcher.
Can We Have Our Cake and Eat It Too?
The big question we’ve been wrestling with is: can we find a middle ground? Can we introduce just enough human touch, just enough autonomy, to make algorithmic approaches more palatable, without completely wrecking their predictive power? We’re looking for that sweet spot.
This led us to explore a couple of hybrid approaches, often called autonomy-affording algorithms (AAAs). These aren’t about throwing the algorithm out, but about letting humans interact with it in specific ways. The two main contenders we looked at are:
- Clinical Synthesis: Here, the algorithm makes a prediction, and then the human decision-maker can adjust that prediction based on their own judgment. Maybe they spot an exception to the rule (what Meehl famously called a “broken leg” – like a star candidate who bombed an interview due to a sudden illness) or want to consider factors in a more complex, non-linear way. Technically, this still counts as holistic prediction, because the final adjustment happens in the human mind and is therefore applied inconsistently.
- Mechanical Synthesis: This one’s a bit different. The decision-maker first makes their own holistic prediction. Then, this human prediction is treated as another piece of data, given a specific weight, and fed into an algorithm along with all the other information (like test scores and interview ratings). The algorithm then combines everything consistently to produce the final decision. The human’s intuitive judgment becomes a structured part of the mechanical process. This can be self-designed (where the human helps decide the weight of their own prediction) or prescribed (where the organization sets the weight). A quick code sketch contrasting the two approaches follows right after this list.
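Here’s that sketch: a minimal Python illustration of where the human judgment enters in each hybrid. The predictor weights, the reading of the “specific weight” as a blend with the algorithmic prediction, and the function names are all my assumptions for illustration, not the study’s actual models.

```python
# Illustrative only: the weights and the blend interpretation of the
# prescribed weight are assumptions, not parameters from the study.
PREDICTOR_WEIGHTS = {"cognitive_ability": 0.5, "conscientiousness": 0.3, "interview": 0.2}

def algorithmic_prediction(scores: dict) -> float:
    """Plain mechanical prediction from the test and interview scores alone."""
    return sum(PREDICTOR_WEIGHTS[k] * scores[k] for k in PREDICTOR_WEIGHTS)

def clinical_synthesis(scores: dict, human_adjustment: float) -> float:
    """Clinical synthesis: the algorithm predicts first, then the human
    freely adjusts the output (e.g., for a "broken leg" case). The size of
    the adjustment can differ from candidate to candidate and day to day.
    """
    return algorithmic_prediction(scores) + human_adjustment

def mechanical_synthesis(scores: dict, holistic_prediction: float,
                         holistic_weight: float = 0.5) -> float:
    """Mechanical synthesis: the human's holistic prediction becomes one
    more input, blended with the algorithmic prediction at a fixed weight
    (self-designed or prescribed) that is applied identically to everyone.
    """
    return (holistic_weight * holistic_prediction
            + (1 - holistic_weight) * algorithmic_prediction(scores))
```

The structural difference is the whole point: in clinical synthesis the human touches the output last, and can do so differently every time; in mechanical synthesis the human’s judgment goes in as an input and is then weighted the same way for every single candidate.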
Our hope was that by designing selection procedures with this “human touch”—either through final discretion in clinical synthesis or the structured integration of holistic judgment in mechanical synthesis—we could hit that balance between accuracy and user acceptance. We want HR folks to actually want to use these tools!

Putting It to the Test: Study 1
So, we ran a study (Study 1, with 261 participants) to see how these approaches stacked up. Participants had to predict job performance for 40 real airline ticket agent applicants, using cognitive ability scores, conscientiousness scores, and unstructured interview ratings. They were randomly assigned to one of five conditions: pure holistic prediction, clinical synthesis, self-designed mechanical synthesis, prescribed mechanical synthesis, or a strictly prescribed algorithm (where they just saw the algorithm’s output).
What did we find?
First off, predictive validity (how accurate the predictions were):
- Both clinical synthesis and mechanical synthesis were better than good old holistic prediction. That’s a win!
- Even better, mechanical synthesis outperformed clinical synthesis. It seems consistently incorporating that human judgment as a weighted predictor is more effective than letting humans inconsistently tweak an algorithm’s output.
- No surprise, the strictly prescribed algorithm was still the king of accuracy.
Interestingly, whether participants chose the weight for their own holistic prediction in mechanical synthesis (self-designed) or had it set for them (prescribed at 0.5) didn’t make much difference to the accuracy. And on average, people chose a weight pretty close to 0.5 anyway!
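In case “predictive validity” sounds abstract: in studies like this it’s typically the correlation between a procedure’s predictions and the applicants’ actual job performance. Here’s a hedged sketch with invented numbers, plus my own 50/50 reading of the prescribed 0.5 weight, just to show the mechanics.

```python
import numpy as np

# Invented data for five applicants, purely for illustration.
holistic = np.array([3.5, 3.8, 3.0, 4.0, 4.2])     # the human's predictions
algorithmic = np.array([3.1, 4.2, 2.7, 3.7, 4.6])  # the algorithm's predictions
actual = np.array([3.0, 4.3, 2.5, 3.6, 4.8])       # later job performance

def validity(predictions: np.ndarray, criterion: np.ndarray) -> float:
    """Predictive validity as the Pearson correlation with actual performance."""
    return float(np.corrcoef(predictions, criterion)[0, 1])

# Prescribed mechanical synthesis fixed the holistic prediction's weight at 0.5,
# read here (an assumption) as a 50/50 blend with the algorithmic prediction.
blended = 0.5 * holistic + 0.5 * algorithmic

for name, preds in [("holistic", holistic), ("algorithm", algorithmic), ("blend", blended)]:
    print(f"{name:>9}: r = {validity(preds, actual):.2f}")
```

In the study itself these correlations were computed over the 40 real applicants each participant rated; the point here is only how the number is obtained.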
Now, for user reactions (how people felt about the procedures – things like perceived autonomy, competence, and whether they’d use it again):
This was a bit more of a mixed bag in Study 1.
- People felt they had more autonomy with holistic prediction (obviously) and also with mechanical synthesis compared to the strict algorithm. Clinical synthesis also offered a bit more autonomy than the strict algorithm.
- Only mechanical synthesis made people feel more competent than using the strict algorithm.
- Surprisingly, there weren’t huge differences in how likely people said they’d be to use these methods in the future, though mechanical synthesis did fare better than clinical synthesis on this front, and also on perceived fairness.
One thought was that maybe our participants in Study 1, many of whom had limited hiring experience, didn’t have super strong feelings about these different procedures. Also, we gave optimal predictor weight information in most conditions, which might have influenced things.
Round Two: Study 2 – Bigger and More Experienced
To dig deeper, we launched Study 2. This time, we had a much larger group (610 participants) who were all employed and had experience making hiring decisions. We also tweaked the design a bit: the prescribed algorithm was a within-subjects condition (everyone experienced it after their main condition), and we explicitly manipulated whether participants received information on optimal predictor weights.
The predictive validity story was pretty consistent with Study 1:
- Clinical synthesis beat holistic prediction.
- Mechanical synthesis beat both holistic prediction and clinical synthesis. Go, mechanical synthesis!
- Providing information on optimal predictor weights? It changed how people said they’d weigh predictors if they could, but it didn’t actually make their holistic or clinical synthesis predictions more accurate or consistent. This really highlights that consistency is key, something mechanical synthesis enforces.
Judgment consistency (how consistently people applied their own weighting strategy) was much higher in clinical synthesis than in pure holistic prediction, likely because the algorithm’s initial prediction acts as an anchor.
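The summary doesn’t say exactly how judgment consistency was indexed, but a common approach in judgment research is to fit a linear “policy” to a judge’s own predictions and check how much variance it explains: the more consistently the judge applies their weights, the closer that R-squared gets to 1. Here’s a rough simulated sketch of that idea, offered as an assumption about the general approach rather than the paper’s exact measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated judge: 40 applicants, three standardized cues
# (say ability, conscientiousness, interview), an intended weighting policy,
# and some noise in how that policy is applied from case to case.
n = 40
cues = rng.normal(size=(n, 3))
intended_weights = np.array([0.5, 0.3, 0.2])     # invented policy
inconsistency = rng.normal(scale=0.8, size=n)    # shrink this to model anchoring

judgments = cues @ intended_weights + inconsistency

# Fit the judge's implicit linear policy and compute R^2: higher = more consistent.
X = np.column_stack([cues, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, judgments, rcond=None)
residuals = judgments - X @ beta
r_squared = 1 - residuals.var() / judgments.var()
print(f"judgment consistency (policy R^2) ~= {r_squared:.2f}")
```

On this reading, anchoring on the algorithm’s initial prediction would show up as a smaller inconsistency term, and therefore a higher R-squared, in clinical synthesis than in pure holistic prediction.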

What about user reactions in Study 2? This is where things got even more interesting, especially with our experienced decision-makers:
- Autonomy: Big surprise – holistic prediction, clinical synthesis, AND mechanical synthesis all made people feel much more autonomous than the strict prescribed algorithm.
- Competence: Same story! All three (holistic, clinical, mechanical) led to higher feelings of competence compared to the strict algorithm.
- Use Intentions: Yep, people were moderately more inclined to use holistic prediction, clinical synthesis, or mechanical synthesis in the future compared to the strict algorithm.
- Fairness: All three were also seen as fairer procedures than just using a prescribed algorithm.
In Study 2, mechanical synthesis and clinical synthesis had pretty similar user reactions across the board. This is great news because if users feel similarly positive about both, but mechanical synthesis is more accurate, then it’s looking like a really strong contender. The fact that participants in Study 2 directly compared their assigned method to the prescribed algorithm (because of the within-subjects design for the latter) likely made these differences in user reactions pop more than in Study 1.
So, What’s the Takeaway? The Human Touch, Algorithmically Enhanced!
Across both studies, a clear pattern emerged: mechanical synthesis seems to be a fantastic solution to the autonomy-validity dilemma. It consistently resulted in higher predictive accuracy than allowing decision-makers to just adjust an algorithm’s output (clinical synthesis), and it was miles better than purely relying on human intuition (holistic prediction). And crucially, it did this while still preserving a sense of autonomy and eliciting positive reactions from users, especially experienced ones.
It looks like consistently weighting and incorporating one’s own holistic prediction into an algorithm is a more robust approach than inconsistently modifying an algorithm’s prediction. This really backs up the idea that a major reason algorithms win is their consistency.
What’s also fascinating is that people seemed to appreciate having some autonomy, but they weren’t necessarily demanding full autonomy, nor were they overly sensitive to exactly how that autonomy was given, as long as their input was valued. For instance, there wasn’t a huge difference in reactions between self-designed mechanical synthesis (where they chose the weight for their holistic input) and prescribed mechanical synthesis (where the weight was fixed). On average, the weight they chose for themselves was pretty similar to the fixed weight anyway! This suggests that just knowing their judgment is being formally included is a big deal.

It’s pretty cool that decision-makers didn’t massively overweight their own judgment when they had the chance in the self-designed mechanical synthesis. Often, we hear about people discounting advice from others (or algorithms) and overvaluing their own gut. But here, when working with the algorithm in this structured way, they seemed quite reasonable. Maybe being explicitly told that algorithms are generally more accurate helped!
Where Do We Go From Here?
Of course, no research is perfect. In Study 1, our sample was mostly students. While Study 2 used experienced hiring managers, both were still lab-based scenarios. Real-world hiring can be messier, with more information, group decisions, and multiple rounds. People might also feel differently if they’d conducted the interviews themselves, as we tend to love our own narrative information.
Future research could explore these real-world complexities. What happens with more predictors? What if people can revise their inputs? How does group decision-making change things? It would also be interesting to see how mechanical synthesis stacks up against even simpler models, like those that just weight all predictors equally. Is it the simplicity or the active involvement that users appreciate most?
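For reference, the “weight all predictors equally” idea (often called a unit-weighted model) is about as simple as prediction models get. A quick sketch, not something the study tested:

```python
import numpy as np

def unit_weighted_prediction(standardized_scores: np.ndarray) -> np.ndarray:
    """Equal-weights model: standardize each predictor, then just average them.
    No weights are estimated from data at all."""
    return standardized_scores.mean(axis=1)

# Invented standardized scores for three applicants on three predictors.
scores = np.array([[ 1.2,  0.4, -0.3],
                   [-0.5,  0.9,  1.1],
                   [ 0.0, -1.2,  0.6]])
print(unit_weighted_prediction(scores))
```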
But for now, the message is pretty exciting. If you’re looking to boost hiring accuracy but are facing resistance to purely algorithmic approaches, mechanical synthesis offers a really promising path. It’s about putting that human touch into the equation, but in a smart, structured, and consistent way. It allows us to harness the power of algorithms while respecting the human need for involvement, leading to better decisions and happier decision-makers. And in the world of hiring, that’s a win-win!
Selecting the right people is so crucial for any organization. We’ve got amazing tech and science to help us do it better, but we can’t ignore the human element. Mechanical synthesis seems to be one of those clever interventions that doesn’t just make people feel better about using algorithms, but actually leads to more accurate decisions than the old ways of just tweaking the machine’s advice. It’s a step towards truly effective human-algorithm collaboration.
Source: Springer
