AI-based job recruitment tools may not boost workplace diversity, study argues

Such tools are 'automated pseudoscience' according to researchers

A new study by researchers at the University of Cambridge warns that artificial intelligence (AI)-based job recruitment tools may not increase diversity in hiring and could actually worsen biases that already exist in workplaces.

AI-powered recruitment tools trained to analyse body language and predict applicants' emotional intelligence are increasingly used in today's job market, with the stated aim of eliminating prejudice in hiring and encouraging diversity.

Supporters of machine learning algorithms argue that they offer a more objective method of evaluating candidates, since they do not take factors such as gender and race into account.

According to their proponents, the goal of such tools is to strip racism, sexism and other forms of gender and ethnicity bias out of the hiring process.

However, the Cambridge study, published in the journal Philosophy &amp; Technology, asserts that AI recruiting software is superficial and amounts to nothing more than "automated pseudoscience."

For their study, researchers from the University's Centre for Gender Studies reproduced a commercial model used in the industry and examined how this AI recruiting software predicts people's personalities from images of their faces.

Called the "Personality Machine", the system examines "big five" personality traits: openness, extroversion, agreeableness, conscientiousness, and neuroticism. The researchers discovered that changes in people's facial expressions, lighting, backdrops, and their clothing had an impact on the software's predictions.
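That sensitivity can be sketched with a toy model. The commercial systems are closed-source, so the `predict_big_five` function below is an invented stand-in, not the actual software; it simply shows how a predictor keyed to superficial pixel statistics will change its "big five" scores when only the lighting changes, not the person.

```python
def predict_big_five(image):
    """Hypothetical 'personality predictor': maps mean pixel brightness
    (0-255 greyscale) to scores for the five traits. Purely illustrative -
    the weights are arbitrary, as in any model keyed to surface features."""
    brightness = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    b = brightness / 255.0
    return {
        "openness": round(0.4 + 0.2 * b, 3),
        "extroversion": round(0.3 + 0.5 * b, 3),
        "agreeableness": round(0.5 + 0.1 * b, 3),
        "conscientiousness": round(0.7 - 0.3 * b, 3),
        "neuroticism": round(0.2 + 0.4 * b, 3),
    }

# The same "face", photographed under two lighting conditions.
dim_photo = [[60] * 4 for _ in range(4)]      # darker room
bright_photo = [[200] * 4 for _ in range(4)]  # brighter room

scores_dim = predict_big_five(dim_photo)
scores_bright = predict_big_five(bright_photo)

# The candidate has not changed, but every trait score has.
print(scores_dim)
print(scores_bright)
```

A real system would of course use a trained neural network rather than a brightness heuristic, but the study's point holds either way: if the model's inputs are dominated by lighting, background and clothing, its outputs track those artefacts rather than anything about the candidate.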

According to the researchers, the use of AI for recruiting purposes is flawed because these superficial characteristics have nothing to do with a candidate's actual capabilities.

The findings are supported by earlier research that has shown how a candidate's scores for conscientiousness and neuroticism may be lowered when they are wearing glasses and a headscarf in a video interview.

The Cambridge researchers assert that since AI is trained to look for the employer's "ideal candidate," it may eventually encourage uniformity rather than diversity in the workforce when used to narrow candidate pools.

Individuals with the right training and background may "win over the algorithms" by imitating the behaviours the AI is programmed to recognise, and then carry those attitudes into the workplace.

"We are concerned that some vendors are wrapping 'snake oil' products in a shiny package and selling them to unsuspecting customers," said co-author Dr Eleanor Drage.

"By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world."

The researchers also draw attention to the fact that these AI recruiting tools are often proprietary and closed-source, making it unclear how they operate.

Drage said that while companies may not be acting in bad faith, there is little oversight of the development and testing of these products.

In its new draft legal framework on AI, the European Union has categorised AI-powered recruiting software and performance assessment tools as "high risk". This classification means that such tools will be subject to more scrutiny and must adhere to particular compliance requirements.