
Will AI get any less biased in 2023?

AI is creating bad outcomes due to underlying bias


AI may be growing smarter, but it isn't becoming any less sexist and racist.

The AI and data science workforce is broadly as gender diverse as the technology workforce as a whole - which is to say, not very. The usual analysts have yet to look at this field in detail (which is interesting in itself), but according to the World Economic Forum around 22% of AI employees globally are female. Innovation foundation Nesta puts the number even lower, at around 14%.

Racial and intersectional data is harder to come by. What Google does in terms of AI and machine learning has an impact on everybody so it's a useful reference point when trying to assess the extent of diversity of input into AI. Although Google doesn't break its DEI data down by division such as Google AI, it does publish disaggregated data for different ethnic groups in different geographies.

In EMEA, for example, 12.1% of the overall workforce is Asian, 3.2% Black or African and 3.9% Hispanic or Latinx. 7.8% identify as Middle Eastern or North African. 78.1% are white or European. In the US, Asians account for a much higher proportion of the workforce at 43.2%; however, Asian women make up only 15.3%. Black people are seriously underrepresented: 5.3% of Google's US workforce is Black, and only 2.3% are Black women. Despite the lack of AI specific data, it seems reasonable to extrapolate that Black people, and Black women in particular, are dangerously underrepresented in the building of the biggest datasets and most heavily leveraged AI - and this has some serious consequences for them.

Some of these consequences have been highlighted by Joy Buolamwini, a computer scientist and digital activist based at MIT Media Lab and founder of the Algorithmic Justice League. Buolamwini founded the AJL after her research uncovered serious racial and gender biases in AI services from Google and fellow giants Amazon, Microsoft and IBM. During a project to build a device which could project images onto faces, Buolamwini was experimenting with computer vision software which used AI enabled facial recognition. The software seemed to have trouble following her face, so she experimented with a white mask and with lighter skinned colleagues. The software worked much more consistently.

Bad AI Takes of 2022

Last year saw some seriously bad takes from AI driven tech. One example was the AI generated art app Lensa, which generates "magic avatars" out of selfies provided by the user (at least, its makers hope the selfies are the user's own, because the potential for this type of technology to be used without the consent of the subject is significant). These avatars perpetuate racist and misogynist stereotypes. Users have reported hypersexualised images, and that the app lightens skin tones and anglicises features. In a digital update of the "art mirrors society" discussion, Prisma Labs, which created Lensa, said that its AI is trained on unfiltered internet data and therefore simply reflects existing biases.

Another example of a bad AI driven take came to light earlier in 2022, although the incident itself dated back to 2020. AI is being increasingly used in recruitment - a trend that picked up pace during the pandemic, when video interviews became necessary. Companies such as HireVue provide AI powered platforms for video interviews, and according to HireVue's customer case studies page, some big-name companies are using this platform, including Amazon. Cosmetics giant Estée Lauder also used the platform when it asked make-up artists working for its MAC subsidiary to reapply for their jobs in summer 2020, in preparation for a round of redundancies. Several women were made redundant following these interviews, but when they appealed the decision, they said that nobody could explain to them how they had been scored in the interviews.



The women took legal action against Estée Lauder and the company settled out of court in March 2022. In a statement made at the time, HireVue stated that the visual analysis component of the HireVue algorithm was "voluntarily discontinued nearly two years ago." The word "nearly" could be doing a lot of heavy lifting in that sentence and it isn't clear whether or not that visual analysis was still in place during this particular round of video interviews. The women concerned cannot discuss details, but the decision of Estée Lauder to settle out of court could be interpreted as a reluctance to allow further scrutiny of the technology involved.

In October 2022, concerns about the transparency of some types of AI and their underlying data led the ICO to issue a warning to organisations using emotional analysis technology - which processes data such as facial movements and expressions, gaze tracking and gait analysis - that they needed to properly assess the risks of doing so. The ICO statement said:

"The inability of algorithms which are not sufficiently developed to detect emotional cues, means there's a risk of systemic bias, inaccuracy and even discrimination."

AI with wisdom and integrity

AI consultancy and education provider AI Governance recently released its 2022 report, for which it surveyed over 700 leaders, including members of top business organisations like the Institute of Directors, to find out how ready they are to control the use of AI in their organisations and to ensure these new tools and technologies are beneficial. It makes for worrying reading.

Despite the growing prevalence of AI-powered tools and services, most of the leaders surveyed were unaware of how AI works and how they could harness its power. This means that their understanding of the scope for algorithmic bias is likely to be limited. Because they are unable to assess and control the risks AI use can bring, these directors are vulnerable to making mistakes with AI that harm their organisations and may damage wider society.

Image: Sue Turner

Sue Turner set up AI Governance in 2020, after completing a government backed MSc in AI & Data Science, with the aim of inspiring as many organisations as possible to use AI with wisdom and integrity - although, as Turner herself emphasises, "wisdom and integrity is the really tricky bit." Turner considers the lack of diverse input into AI at the developmental stage to be a significant long term challenge, and expects the volume of unintended consequences of AI to increase as a result. How does she think AI can improve to avoid these biases?

"It isn't reasonable to expect AI engineers to be philosophers, psychologists or ethicists therefore you need to get as many of those different types of people, types of skills and types of lived experience in the room in order to consider what the impacts of the technology are."

What Turner would like to see is a paid mechanism to bring this about.

"The dream scenario is that we can get to a stage where there's a pool of people across the globe who all have very diverse lived experience and they get paid to be consulted by companies that are wanting to bring some new technology to the fore.

"The lived experience that people have is what these companies need to know about. They need to value that, and they need to pay people or find some way to recompense people for the skills that they are exhibiting and the knowledge that they have."

It's an incredibly positive sounding scenario, and one which Turner is well aware is unlikely to happen. Until there is greater diversity of input into the workforces that create AI, and greater transparency around datasets, it's difficult to see how the extent of algorithmic bias can be reduced in 2023. If anything, we can expect to see more AI driven bad takes, with women, particularly Black women, being disproportionately affected.

Individual companies and investors should tread carefully.
