Emma Leech

Danson Foundation intern at the New Statesman

The perils of AI recruitment

For many people, the interview stage of a recruitment process is the first step away from buzzword-driven data and assessment analytics. For others, the human interaction involved raises its own fears of bias. Companies such as HireVue purport to fix this, using artificial intelligence to help recruiters choose the candidates most likely to succeed in the job.

AI interviewing scans the face of the person speaking, picking up on cues that might be missed by a human recruiter. These cues include facial expressions and eye contact, which are used to gauge the candidate's engagement with the task.

HireVue claims this algorithm-based element reduces the risk of a number of biases. An “AI-driven approach mitigates bias by eliminating unreliable and inconsistent variables like selection based on resume and phone screens, allowing you to focus on more job-relevant criteria,” the company says.

Another pressing issue facing recruiters is how long hiring takes. Lengthy recruitment processes can disrupt business flow and be costly. HireVue claims a 90 per cent reduction in time to hire.

However, artificial intelligence has regularly been in the news not for solving bias but for maintaining it. Several studies have found that AI perpetuates and bolsters bias, especially when it comes to facial recognition.

Dr Lauren Rhue, assistant professor of information systems and analytics at Wake Forest University, tested two AI facial recognition systems: Face++, developed by a Chinese tech company, and a system created by Microsoft. Her study found that Face++ scored black faces as twice as “angry” as white faces, while Microsoft’s system scored black faces as three times more contemptuous. “If professionals of colour are systematically viewed as having more negative emotions, then they could be eliminated from the interview pool prematurely. AIs could lead to disproportionate impact for candidates of colour,” the report concludes.

The data-based nature of algorithms makes them easy to trust compared with humans, who are prone to cultural bias for a wide range of reasons. However, the information fed to AI systems in recruitment is often the data of previously successful candidates, which means the AI can learn and perpetuate the biases that have historically shaped the institution.

In 2018, Amazon had to drop its AI assessment of CVs after it was found that the system was less likely to accept CVs featuring the term “women’s”, as in “women’s captain”. This was a result of patterns the model had recognised in data about previously successful candidates.
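To make the mechanism concrete, here is a minimal sketch of how a screening model can absorb such a pattern. It is purely illustrative: the CV snippets and outcomes are invented, and the model (a scikit-learn logistic regression over word counts) is a stand-in, not Amazon’s or HireVue’s actual system.

    # Purely illustrative: a toy screening model trained on skewed
    # historical outcomes learns to penalise the word "women's".
    # All CV snippets and hiring outcomes below are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    cvs = [
        "chess club captain, maths society",     # historically hired
        "debating society president",            # historically hired
        "football team captain",                 # historically hired
        "women's chess club captain",            # historically rejected
        "women's football team, maths society",  # historically rejected
        "women's debating society president",    # historically rejected
    ]
    hired = [1, 1, 1, 0, 0, 0]

    vectoriser = CountVectorizer()  # tokenises "women's" to "women"
    X = vectoriser.fit_transform(cvs)
    model = LogisticRegression().fit(X, hired)

    # Inspect the learned weights: "women" receives a negative
    # coefficient, reflecting the skewed history rather than
    # anything job-relevant.
    for word, idx in sorted(vectoriser.vocabulary_.items()):
        print(f"{word:10} {model.coef_[0][idx]:+.2f}")

Run on this toy data, the weight attached to “women” comes out negative, not because the word says anything about ability, but because the historical outcomes the model was trained on were skewed against it.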

More broadly, studies have found that AI analysis of emotions is unreliable, because the same emotion can be displayed in a variety of ways. If pre-recorded video interviews are dismissed without human review, machine learning could be erroneously rejecting applicants on the basis of inaccurate data.

Concerns have also been raised about the use of eye contact and facial expressions as factors in a person’s potential success in a company, as this overlooks the implications for neurodivergent candidates. Making eye contact can be anxiety-inducing for people on the autism spectrum, so if the format were used indiscriminately they could be significantly disadvantaged.

Beyond its current intended uses, the employment of AI in recruiting opens worrying doors into other facets of machine learning. Some studies have claimed that, after adjusting for race, gender and age, AI can accurately detect the risk of criminality from facial information. If this is true, or even just believed to be so, what is to stop recruiters extending their parameters beyond facial expression and assessing candidates on the apparent likelihood that they will act unlawfully?

Even if such screening appeared reasonable, AI is not faultless, and people could lose out on career opportunities if they are wrongly deemed a liability. What’s more, judging a person’s character from their facial features rings historical alarm bells: it is reminiscent of racial pseudoscience and eugenics.

HireVue counters the “misconception” that “assessments use facial recognition technology in scoring candidates” by detailing the use of other factors, such as language, in the system’s analysis. But even setting aside issues like those faced by Amazon, the question of language is a problematic one. It is unlikely that AI can take account of dialect or slang, especially if it is basing its decisions on listed keywords or phrases.

While AI is being improved to better recognise regional accents, treating particular language features as automatic deciding factors undermines work done to level the playing field for people of certain socio-economic or educational backgrounds. A human recruiter can recognise nuance of meaning, whatever the phrasing, in a way that AI cannot.

Although use of this technology currently seems to be concentrated in the US, it has been adopted by big-name corporations such as IKEA and T-Mobile, which have offices and shops across the globe.

The speed at which machine learning is developing creates ethical problems. The laws and cultural infrastructure around new uses cannot be established as quickly as the technology itself, so such practices can spread without considered and extensive research into their benefits and risks.

HireVue counters this argument by posing a question: “Which should we prefer? The world where hiring is influenced by a human with an unclear definition of job success asking inconsistent questions […] evaluating on unknown criteria, OR a data-driven method that’s fairer, consistent, auditable, improvable, and inclusive?”

However, while AI might counter some unconscious bias, its human creation means that it has been fed that same partiality. Posing such a question also suggests that the traditional method of recruitment is immutable and that the choice is binary.

Recruiters in big corporations are actively diversifying their methods in order to move beyond the traditional two-person panel interview, in which unconscious bias can easily arise. Assessment centres and strengths-based interviewing allow for diversity of background and increased equality.

Both face-to-face interviewing, with its unconscious bias, and AI technology could and should be improved, but until machine learning has proven itself the far better option, the transition should be made tentatively.

Emma Leech is a Danson Foundation intern at the New Statesman