Artificial intelligence systems are reinforcing gender and racial biases

Human Resources consultant Cate Oliver follows up her last blog on robots in the workplace with a discussion of bias in artificial intelligence

Last week I blogged about robots and intelligent algorithms in the workplace, the potential for them to replace certain jobs, and the impact this might have. What I hadn’t considered was the risk that artificial intelligence (AI) could exhibit gender and racial biases and therefore, if it is involved in decision-making in the workplace, could reinforce, rather than counteract, the inequalities and prejudices that already exist in society.

Previous studies have shown that, when recruiters review identical CVs, a candidate is more likely to be invited to interview if the name is European American than if it is African American. A recent BBC study similarly showed that a job applicant with an English-sounding name was offered three times as many interviews as an applicant with identical skills and experience but a Muslim-sounding name.

AI is already in use in recruitment, where intelligent systems handle automatic screening and selection assessments, and it is arguably more objective than human screening. In many respects this makes sense – manually screening CVs and application forms is very time consuming, particularly where a high volume of applications is received, for example for roles that require no formal qualifications. AI can automate this process, speeding up the time-to-hire. It could also be argued that AI may improve the quality of recruitment – it uses data to standardise the matching between candidates’ knowledge, skills, qualifications and experience and the job’s requirements.
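To give a flavour of the kind of matching these systems perform, here is a minimal sketch in Python. The skill lists, scoring and shortlist threshold are all invented for illustration – no real vendor’s method is implied:

```python
# Minimal sketch of automated CV screening by skills overlap.
# All skill lists and the threshold below are hypothetical.

job_requirements = {"python", "sql", "data analysis", "communication"}

candidates = {
    "Candidate A": {"python", "sql", "communication", "excel"},
    "Candidate B": {"java", "communication"},
}

def match_score(candidate_skills, required):
    """Fraction of the required skills that the candidate lists."""
    return len(candidate_skills & required) / len(required)

SHORTLIST_THRESHOLD = 0.5  # arbitrary cut-off for this sketch

for name, skills in candidates.items():
    score = match_score(skills, job_requirements)
    decision = "shortlist" if score >= SHORTLIST_THRESHOLD else "reject"
    print(f"{name}: score {score:.2f} -> {decision}")
```

Even this toy version shows why the approach appeals to busy HR teams: the same rule is applied to every applicant, instantly. The catch, as the research below illustrates, is that real systems learn their matching rules from data rather than from a hand-written checklist.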

However, recent research has shown that as machines develop more ‘human-like language abilities’, they also acquire the biases hidden within our language. The research looked at ‘word embeddings’ – the numerical representation of the meaning of a word, based on the words it most frequently appears with – and found that biases that exist in society are also being learnt by algorithms. For example, the words ‘female’ and ‘woman’ were more closely associated with the home and with arts and humanities occupations, while the words ‘male’ and ‘man’ were more closely aligned with maths and engineering. In addition, European American names were more likely to be associated with pleasant words such as ‘gift’ or ‘happy’, while African American names were more likely to be associated with unpleasant words. This research suggests that AI, unless explicitly programmed to counteract this, will continue to reinforce the same prejudices that exist in our society today.
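To make ‘word embeddings’ concrete, here is a minimal sketch. The tiny hand-made vectors are invented purely for illustration; real studies use vectors with hundreds of dimensions, trained on large text corpora (for example GloVe or word2vec):

```python
import numpy as np

# Toy 3-dimensional "word embeddings", invented for illustration only.
vectors = {
    "woman":       np.array([0.9, 0.1, 0.3]),
    "man":         np.array([0.1, 0.9, 0.3]),
    "home":        np.array([0.8, 0.2, 0.4]),
    "engineering": np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a, b):
    """Similarity of two word vectors: close to 1.0 = strongly associated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# An embedding learnt from biased text will place "woman" closer to
# "home" than to "engineering" -- the pattern the research describes.
for target in ("woman", "man"):
    for attribute in ("home", "engineering"):
        sim = cosine_similarity(vectors[target], vectors[attribute])
        print(f"{target} ~ {attribute}: {sim:.2f}")
```

The point of the sketch is that nobody programmed the association in: in a real system it emerges from the statistics of the text the model was trained on, which is exactly why societal bias ends up inside the numbers.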

This shows the scary reality that any human bias that may already exist in the recruiting process – even unconscious bias – can potentially be learned and reinforced by AI. The Equality Act 2010 legally protects people from discrimination in the workplace and in wider society. The legislation requires that employers must not unlawfully discriminate against any individual in the recruitment process, which includes making assumptions about people based on the information they provide and any protected characteristics. Whilst I am not aware that the Act specifically refers to discrimination by AI, I think it would apply to any system making decisions on behalf of an organisation, just as it would to a person making those decisions.

To avoid replicating human bias that may already exist, it is critical to ensure that any recruiting software you use has been developed to identify and remove clear patterns of potential bias. With the rise of AI in the workplace, and more specifically in the recruitment process, it is necessary that the Equality Act’s scope be widened to prevent discrimination by intelligent systems.
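One simple check of this kind – a sketch only, with invented numbers – is to compare shortlist rates across groups of applicants and flag large gaps, along the lines of the ‘four-fifths rule’ used in US adverse-impact analysis (a common rule of thumb, not a test defined by the Equality Act):

```python
# Minimal sketch of an adverse-impact check on screening outcomes.
# Group labels and counts are hypothetical, for illustration only.

outcomes = {
    # group: (shortlisted, total applicants)
    "group_a": (40, 100),
    "group_b": (22, 100),
}

rates = {g: shortlisted / total for g, (shortlisted, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # selection rate relative to the best-treated group
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```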

Source – The Guardian