International Women’s Day: Addressing gender bias in AI

Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

One of the key missions of International Women’s Day 2022 is to celebrate digital advancement and champion the women spearheading innovation. With every development, however, it is important to ensure gender equity is baked into the system. While bias may be a fact of life, it doesn’t need to become a feature of our technologies too. #BreakTheBias

There’s no doubt about the far-reaching utility of artificial intelligence (AI) and machine learning within financial services. From loan applications to fraud prevention, these technologies have a big influence on our decisions – as well as the ones made on our behalf. Ensuring that the models and datasets are fair, accurate, and representative is therefore vital.

Finextra spoke with a number of female thought leaders in the data science and AI space to learn more on International Women’s Day.

Where does the bias come from?

Unfortunately, bias seems to already be infiltrating AI. In 2019, Apple was placed under investigation by US regulators, following customer complaints that its card’s algorithms were setting higher credit limits for men. More recently, during the Covid-19 pandemic, Google ran into issues when its Vision AI tool – trained to recognise handheld thermometers – started delivering different results depending on the skin tone of the person being photographed. These are just two of many examples of how technology can produce unintended discriminatory results.

The Berkeley Haas Centre for Equity, Gender and Leadership tracks publicly available instances of bias in AI systems. In an analysis of 133 biased systems – across industries from 1988 to the present day – it found that 44% demonstrate gender bias.

“It's important to remember that AI is a series of technologies designed and developed by humans, who all have biases. It can creep in at any point in the design process,” said Sara El-Hanfy, head of AI & ML, Innovate UK, in an interview with Finextra.

Heather Wilson, CEO, CLARA Analytics, agreed: “Artificial intelligence brings up images of robots and self-driving cars, but there is always a human element where bias can sneak in. We develop the underlying code and decide on the data sources that power these amazing innovations. The initial programming logic can include algorithmic bias as it is based on the experience and thought processes of the engineers and the people advising them. The data sets can be too limited and under-represent different groups.”

Data can even surface historical biases. “Imagine we use AI to determine the most successful CEOs of leading banks over the last 100 years – what would the model’s view be of women CEOs, given that in banking we’ve only seen female CEOs in the last decade or so?” asked Sarah Carver, head of digital at Delta Capita.

There are countless real-world examples of this manifesting – and not just from companies trying out AI for the first time, but from big tech companies too. Clearly, there are still a lot of improvements to be made in the space.

Why is eliminating bias in technology important?

One could argue that bias in technology isn’t so damaging: as long as we can recognise it when it surfaces, we can mitigate it. Unfortunately, it isn’t so simple. Bias in AI can amplify existing prejudices at a far greater scale than any individual human could – leading to lower quality of service for consumers, unfair allocation of resources, and reinforcement of existing inequalities.

Deployed tactfully, however, AI can actually be a powerful tool to “eliminate discrimination as it focuses on facts and can avoid the unconscious bias of humans,” argued Wilson. “This is why CLARA Analytics believes in supporting diversity, equity, and inclusion by building AI solutions that create better consistency for insurance claims. The solutions reduce the bias of individual insurance adjusters. This results in equitable outcomes for consumers and insurers. We develop AI solutions that take away bias – not introduce it.”

But eliminating bias in AI is not just a moral pursuit to fix models that skew an accurate view of the world. “Even if there is not an ESG focus for the organisation involved, it fundamentally makes good business sense,” noted Carver. “Organisations use AI to learn, to experiment and ultimately to evolve their business internally or their customer offerings. Making any decisions off a biased or incomplete model is just bad business.”

So, biased AI tools are bad for business and bad for society – but they’re also bad for AI itself. As things stand, the technology enjoys a reasonably good public reputation, but it will not stay that way if issues such as those seen at Apple and Google in recent years continue. If they do, people will begin to lose faith in AI tools, and we risk hitting what El-Hanfy describes as an “AI winter”. This would only stymie innovation.

How do we get more women to contribute to the datasets?

As we have established, bias in AI occurs when developers build AI solutions based on the narrow experiences of the engineers and the people advising them – or on incomplete data sets that under-represent women and the LGBTQ+ community.

This data gap is a tricky challenge to overcome. Because some 300 million fewer women than men have access to the internet via a mobile phone, much of the information that is surfaced paints an incomplete picture of the real world. Even if representative data is sourced, it can have prejudices baked in. In the consumer credit industry, for instance, marital status was once used to ascertain creditworthiness. These practices have now largely been stopped, but many women still suffer from a sub-par financial record.
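
To see how a retired practice can live on in the data, consider a minimal sketch in Python – all fields and figures below are hypothetical, purely for illustration. Even after a feature like marital status is dropped, correlated “proxy” columns can carry its signal into any model trained on the legacy scores:

```python
# Hypothetical sketch: dropping a biased field does not remove its signal
# if correlated proxy columns remain in the historical data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Illustrative historical records: marital status once fed the credit score.
marital_status = rng.integers(0, 2, n)                          # 0 or 1
# A seemingly neutral feature that happens to correlate with marital status.
joint_account_years = 5 * marital_status + rng.normal(0, 1, n)
# The legacy score was (unfairly) built on marital status itself.
legacy_score = 600 + 40 * marital_status + rng.normal(0, 10, n)

df = pd.DataFrame({
    "joint_account_years": joint_account_years,
    "legacy_score": legacy_score,
})

# Even with marital_status removed, the proxy still predicts the old score:
print(df.corr().loc["joint_account_years", "legacy_score"])  # ~0.8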

"AI is still in its early stage of development and innovation is coming from start-ups like CLARA Analytics,” said Wilson. “We need to elevate underrepresented voices in the AI community through additional investment.”

Indeed, while the role of women in technology is growing, less than 3% of venture capital investment goes to start-ups led by women.

“We need more women and LGBTQ+ led start-ups funded by the venture community,” Wilson continued. “I am proud to be one of the few women leaders in the AI space, and I am here to support other women and LGBTQ+ to succeed in technology. It seems counterintuitive, but the best way to fight against bias is for LGBTQ+ people to identify themselves as such, so the data they end up contributing is properly labelled. Individuals should also ‘opt in’ to sharing their data whenever possible.”

Carver agreed that data contribution is imperative, and proposed three key steps to ensuring AI becomes more representative (a minimal code sketch follows the list):

  1. Look at data sources critically: is there a starting bias in the source(s) of that data? Are you getting the data from sources that would tend to favour a certain profile? If so, look at alternative sources.
  2. Ensure the dataset is of a sufficient size, and that the training process and feature selection are understood.
  3. Actively ensure that both the initial data and the group interrogating it have sufficient representation.
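
As an illustration of step three, a simple representation report can flag under-represented groups before training begins. This is a minimal sketch – the column name, data, and 10% floor are all assumptions for illustration:

```python
# Minimal sketch: flag groups falling below a chosen representation floor
# before training starts. Column name and threshold are illustrative.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str,
                          floor: float = 0.10) -> pd.DataFrame:
    """Share of each group in `column`, flagging those below `floor`."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["under_represented"] = shares["share"] < floor
    return shares

# Hypothetical training set:
train = pd.DataFrame({
    "gender": ["woman"] * 120 + ["man"] * 850 + ["non-binary"] * 30
})
print(representation_report(train, "gender"))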

Indeed, AI is not an exact science; it will not provide definitive answers, but will instead identify patterns. If one homogeneous group is interpreting those patterns, its members will tend to reach the same conclusions about the dataset, based on their shared framework and view of the world.

FICO chief analytics officer Scott Zoldi agreed: “All data is biased. It’s up to the data scientists to correct this, and that is why it is so important to achieve more diverse teams building AI.”  
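
One common correction data scientists reach for – offered here as a generic sketch, not as FICO’s own method – is reweighting: giving records from under-represented groups proportionally more weight during training:

```python
# Minimal sketch: inverse-frequency sample weights so an under-represented
# group is not drowned out during training. Data is illustrative.
import numpy as np
import pandas as pd

def inverse_frequency_weights(groups: pd.Series) -> np.ndarray:
    freq = groups.map(groups.value_counts(normalize=True))  # each row's group share
    return (1.0 / freq).to_numpy()

groups = pd.Series(["woman"] * 100 + ["man"] * 900)
weights = inverse_frequency_weights(groups)
print(weights[:3], weights[-3:])  # women weighted 10x, men ~1.1x
# Most scikit-learn estimators accept these via fit(..., sample_weight=weights).
```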

“Recognising that we need diversity in innovation and teams is the first step,” said Louise Lunn, vice president, global analytics delivery, FICO. “We can mitigate biases by including people across race, gender, sexual orientation, age, and economic conditions to challenge our own thinking and views. By bringing in people with different thoughts and approaches to our own, analytics teams will see a quick improvement in the code.”

Internationally, the mission of inclusion is being driven by non-profit organisations like Women in AI, which uses education, research, and events to ensure that women’s voices are heard at all levels in data and AI. In the UK, the government launched a National AI Strategy in 2021, which acknowledges that in order to establish the country as a trusted centre for AI in the world, the technology must be built by a diverse talent pool and benefit all levels of society.

“The remedy shouldn’t just be about women contributing to datasets,” contested El-Hanfy. “Participation has got to be across the board. If there isn’t diversity in the development process – from initial ideation of the problem through to testing – we risk creating products that serve only those who design them.”

How can we ensure our technologies represent the entire gender spectrum?

But it isn’t just women we need to consider when developing AI tools. Gender is a spectrum, so it is vital that the developer and data scientist talent pool is representative of the whole LGBTQ+ community.

Because the departure from the idea that gender is a binary phenomenon is – for the most part – relatively recent, the sentiment is “unlikely to be represented in our historical data,” warned El-Hanfy. “If we want to stop technology from seeing gender as a binary, we need to start collecting and curating data which informs the outcomes we want to see. Developing new technologies, that have new or improved capabilities, and which do not heavily rely on historical data, is absolutely key.”
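
What collecting such data might look like can be sketched as a small schema – all field names here are hypothetical – that offers a non-binary option set, a free-text self-description, and an explicit opt-out rather than forcing a binary choice:

```python
# Minimal, hypothetical schema: a non-binary option set, free-text
# self-description, and an explicit opt-out. Field names are illustrative.
from dataclasses import dataclass
from typing import Optional

OPTIONS = {"woman", "man", "non-binary", "self-described", "prefer-not-to-say"}

@dataclass
class GenderRecord:
    selected: str                            # one of OPTIONS
    self_description: Optional[str] = None   # free text if "self-described"

    def __post_init__(self) -> None:
        if self.selected not in OPTIONS:
            raise ValueError(f"unknown option: {self.selected}")

record = GenderRecord("self-described", "genderfluid")
print(record)
```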

1. Governance and due diligence

There are a few ways to solve this issue. The first is through concerted governance practices.

“We can overcome bias in AI by building foundational data and algorithm governance,” argued Wilson. “Technology leaders in data science, engineering, and compliance need to work together to build strong corporate governance to review their underlying data sets and algorithms for bias. Ultimately, it is about consumers and corporations both taking a hard stance on eliminating bias. Corporations need to look internally at their practices and audit themselves for algorithmic bias. Consumers need to vote with their dollars and demand better. Raise your voice and ask for change.”

It is true: the only way to make models truly fair is to understand their behaviour, and to interrogate unexpected relationships and uncomfortable unconscious biases, in order to continually learn.

“Not realising your underlying models are biased is no longer an acceptable response, given the awareness now around this topic,” said Carver. “Any firm utilising AI has a responsibility to do proper due diligence.”
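
What might such due diligence actually measure? One widely used check – presented here as an illustrative sketch, not any firm’s actual process – is the disparate impact ratio: each group’s favourable-outcome rate relative to the best-served group, with values below roughly 0.8 commonly treated as a red flag (the “four-fifths rule”):

```python
# Minimal sketch of one audit metric: the disparate impact ratio per group.
# Data and threshold are illustrative.
import pandas as pd

def disparate_impact(outcomes: pd.Series, groups: pd.Series) -> pd.Series:
    rates = outcomes.groupby(groups).mean()  # favourable-outcome rate per group
    return rates / rates.max()               # 1.0 = best-served group

# Hypothetical loan approvals (1 = approved):
approved = pd.Series([1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
gender   = pd.Series(["m", "m", "f", "m", "f", "f", "m", "f", "m", "f"])
ratios = disparate_impact(approved, gender)
print(ratios)                 # f: 0.25, m: 1.00
print(ratios[ratios < 0.8])   # groups below the four-fifths rule
```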

2. Bias bounties

Fortunately, there are mechanisms, known as bias bounties, available which firms can leverage to ensure their AI systems do not have inherent flaws. According to Christophe Bourguignat, CEO and co-founder of insurance tech provider Zelros, “bias bounties catch bad data, avoiding further deviation of the analytics.”

Bias bounties can be implemented to reward users for identifying bias in AI systems, helping protect against reputational and revenue losses, bad predictions, ethical misconduct, and attacks or information leaks.

Already, several major companies are using bias bounties as a means to ensure their systems are responsible and robust.
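
The mechanics behind such a programme can be sketched in a few lines – a submitted report, a severity triage, and a payout rule. Every name and amount below is invented for illustration, not any vendor’s actual scheme:

```python
# Minimal, hypothetical sketch of bias-bounty record-keeping: a submitted
# report, a severity triage, and a payout rule. Names/amounts are invented.
from dataclasses import dataclass

@dataclass
class BiasReport:
    system: str
    description: str
    affected_group: str
    severity: int  # 1 (cosmetic) .. 5 (harmful decisions at scale)

def bounty_for(report: BiasReport, base: int = 200) -> int:
    """Scale a base reward by triaged severity."""
    return base * report.severity

report = BiasReport(
    system="credit-limit-model",
    description="Lower limits for identical profiles differing only in title",
    affected_group="women",
    severity=5,
)
print(bounty_for(report))  # 1000
```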

3. Role models

Another, more proactive, means of ensuring AI systems represent the entire gender spectrum is to provide students with diverse role models.

“The first time I experienced bias in my field of work was at university. I was one of six women in a year group of 200–300 students. Only three of us were taking computer science as a main degree focus,” highlighted El-Hanfy.

This imbalance continues into the professional sphere. According to the World Economic Forum, just 22% of professionals in AI and data science fields are women.

“Role models are vital,” El-Hanfy continued. “We need to overhaul our legacy perception of what a programmer looks like. Having a more diverse set of role models, in senior positions, is key to inspiring the next generation and busting the myth that women or minority groups do not belong in AI.”

Evidently, the mission needs to start early. We must ask ourselves how we help children engage in, and be excited by, AI from a young age. Working in this area are foundations such as Raspberry Pi – a British charity founded in 2009 to promote the study of basic computer science in schools. Its aim is to “put the power of computing and digital making into the hands of people all over the world.”

Fortunately, diversity in tech seems to be on the up. In 2021, the Tech Talent Charter (TTC) – the UK’s leading non-profit driving diversity in tech – published a report which revealed that the proportion of tech roles held by women increased from 25% in 2020 to 27% in 2021. It also highlighted the important role of SMEs in positively shaping the future tech talent pipeline, since these companies can implement new diversity and inclusion practices more easily than larger, more complex multinationals.

“Inclusion must be baked in now, or the tech sector risks cementing inequalities that have been exacerbated by the pandemic,” warned Debbie Forster MBE, CEO and co-founder, Tech Talent Charter.

The real work begins today

While the situation is improving, progress is slow. Getting the full gender spectrum participating in, and represented by, AI could take decades. Either way, it’s key that we get there – for the sake of inclusivity, consumer wellbeing, the bottom line, and even the continuation of AI itself.

“I am hopeful for the future,” said El-Hanfy. “There's lots of positive energy and attention going into this.”
