Recruiting using AI: unbiased or unfair?

Can we afford to leave important decisions to algorithms...
28 November 2022

Interview with 

Tomas Chamorro Premuzic, UCL


As uneasy as this might make some people feel, the proponents of introducing machine learning into the hiring process make some compelling points about the bias prevalent in the traditional alternatives (like interviews). I spoke with Tomas Chamorro Premuzic, Professor of Business Psychology at UCL, who also works for staffing and human resources firm Manpower. At Manpower, they claim to help their clients build their workforce using science, and run studies on a whole range of recruitment technologies. Alongside personality tests, another technique gaining traction is the video call interview, where the interviewee responds to questions without a human being on the other end to receive them. Instead, AI analyses the tone and language used by the candidate to judge their performance. I asked Tomas whether these types of technology improve efficiency without improving fairness…

Tomas - We need to have the maturity and the rationality to distrust our instincts and to understand that when people say, "well, in my experience, this is biased" or "this doesn't work," or "this isn't very helpful," their experience is always based on an N of one and conflated with their preferences, etc. I mean, the point of scientific research is to provide evidence that comes from thousands if not millions of people. I think we should take those studies into account and also understand that this isn't rocket science. You are never going to be able to predict somebody's future job performance or somebody's fit to a team, group or organisation with 100% accuracy. The point is to do it as well as we can and as reliably as we can. It's possible to do this with, let's say, 70 or 75% accuracy. And of course you can tell me, "but my cousin, she was really, really brilliant and she was unfairly rejected for this job by these recruiters." And perhaps you're right. But the point is that we want to minimise the number, or the incidence, of false positives and false negatives. And if you do that, more often than not you become a more meritocratic organisation and you become a more talent-centric organisation. It's interesting to me that some of the same organisations that are championing diversity and inclusion are still looking for talent, or trying to assess potential, in the same old ways: looking at people's resumes and their qualifications and their educational credentials. And while it is absolutely possible for somebody who doesn't come from a high social class background to go to Cambridge, Oxford or Harvard and do really well, the vast majority of people that have these degrees are rich and come from very affluent areas of society. Whereas if you look at people's personality and you try to understand what they're like and how they differ from others, you can truly focus on diversity, because we're all different.
And if you don't try to understand what makes us unique and how we differ from others, then you truly don't care about diversity. You can also look at qualities that are not conflated with social class, with socioeconomic status, with privilege. You can be more or less curious, more or less creative, more or less extroverted, more or less conscientious, more or less ambitious, more or less likable, irrespective of your class.
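The trade-off Tomas describes, accepting roughly 70-75% accuracy while trying to minimise false positives and false negatives, can be illustrated with a toy confusion matrix. The candidate counts below are entirely hypothetical and only serve to show how the three quantities relate:

```python
# Toy sketch (hypothetical numbers): evaluating a hiring predictor
# against later job performance, in the spirit of the ~70-75% accuracy
# figure mentioned in the interview.

def confusion_stats(tp, fp, fn, tn):
    """Return accuracy, false-positive rate and false-negative rate."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    fpr = fp / (fp + tn)   # unsuitable candidates the tool wrongly endorsed
    fnr = fn / (fn + tp)   # suitable candidates the tool wrongly rejected
    return accuracy, fpr, fnr

# Hypothetical cohort of 1000 candidates scored by the tool.
acc, fpr, fnr = confusion_stats(tp=400, fp=120, fn=130, tn=350)
print(f"accuracy={acc:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
# → accuracy=0.75, FPR=0.26, FNR=0.25
```

The point of the sketch is that overall accuracy alone hides the two distinct error types: a tool can be tuned to trade brilliant candidates slipping through the net (false negatives) against poor hires (false positives), which is exactly the balance Tomas argues organisations should manage deliberately.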

James - There's a difficulty that comes with accountability when we become more reliant on data and algorithms. It's perhaps easy to blame a recruiter who demonstrated some bias, but who do we hold to account when people slip through the net and find it difficult to overcome the low score that the computer gives them?

Tomas - Here I disagree. I have to say there are really two issues at stake. We rightly worry, and have been raising awareness, about the potential consequences and drawbacks of so-called black box AI models: algorithms or systems scoring you high or low, or rejecting you for jobs, without any explanation. But the only truly black box algorithm is the human brain. The only decision that is impossible to unpack, decode, and reverse engineer is what humans do. If I'm interviewing you and I reject you, I can come up with the best possible explanation of why you weren't a good fit for that job. I can say you didn't seem confident, or you don't have expertise, or you were rude, or you didn't make eye contact. And sometimes I truly believe that; it's not like I'm deliberately trying to deceive others and look for excuses because I have a nepotistic candidate that I prefer. However, with AI, you can always reverse engineer the decision making that underpins the algorithm. Algorithms are basically like recipes. And the only thing that is novel about AI is that it's a self-generating recipe. You give it data, and it can find out what the key ingredients are, identify patterns, and then influence or make decisions on the basis of those patterns. AI that is ethical by design has competent humans overseeing these algorithms, testing them for bias and adverse impact, and ideally still being involved in the decision making process. So I think it's very unlikely today that anybody is hired purely as a function of what a fully autonomous AI or algorithmic system does. Which is also quite interesting, because sometimes adding a human in the loop actually increases the bias rather than decreasing it. I'll give you an example.
Some of the video interviewing software technologies developed in the last 5 or 10 years can actually give us a sense of whether, for example, you are more confident, whether you're more narcissistic, whether you have a higher or a lower integrity score. And when these scores are checked by humans who come into the loop and look at the same videos of people, the assessments don't actually become more accurate. They often become less accurate, because the person is driven by a lot of signals that actually have to do with things like race or class or attractiveness. Humans are very good at learning, but very bad at unlearning. No matter how much unconscious or conscious bias training you undergo, you cannot suddenly forget that the person sitting in front of you is male or female, white or black, old or young, attractive, you know? And in fact, the more you try to suppress that information, the more prominent it becomes in your mind. In the near future, we're probably going to see humans enhanced by AI: assessments of people's personalities scored with machine learning and artificial intelligence, enhanced by human expertise. And the combination of both will be better than either one on its own.
