Bethan Turner
asks whether algorithms can ever be completely objective and explores the issue of algorithmic bias.

In case you haven’t heard yet, Honeycomb were delighted to be invited to speak at this year’s IIeX North America conference. As part of this, we were also asked to contribute some thinking to Greenbook’s “Big Ideas” series, which you can read here, or read on below for the key themes.

We may not be used to hearing the words “algorithm” and “bias” together yet, but we can be certain that we soon will. It’s important that we start to understand what the pairing means, and what the implications are for our lives, our communities and our societies.

You’d probably be surprised by how much algorithmic bias is already at work in society. It can be found in machine learning tools that predict whether offenders will re-offend (and which rate black offenders as far more likely to do so). It is present in English proficiency tests used for visa applications (where voice recognition software is more likely to fail applicants with unfamiliar accents). And, perhaps most visible to the ‘average Joe’, it is used extensively in insurance markets, where algorithms assess the risk of a claim and set premiums accordingly.

The idea that algorithms are fundamentally flawed by human biases was brought home to me whilst reading Safiya Noble’s book, “Algorithms of Oppression”. In it, she argues:

“When we think of terms such as ‘big data’ and ‘algorithms’ as being benign, neutral, or objective, they are anything but”.

Another example of algorithmic bias at play comes from a study that created fake profiles on a job advertising site. The researchers found that the male profiles were significantly more likely to be shown adverts for higher-paid jobs than their female counterparts.

So we’ve seen that algorithms are being used to assess risk and to decide which job adverts people see. But surely the big tech giants such as Google are one step ahead of this? Not according to Noble. In her book, she discusses at length the bias in search engines: at the beginning of her research in this field, over 80% of top search results for “black girls” related to pornography, whilst pornographic results for “white girls” were rare. (Some of the results she mentions in her book have since been manually changed by Google.)

So what does all this tell us? Stuart Geiger’s words ring true here:

“Data are made by people. Data are people.”

Without people, there is no-one to programme the machine so that it can learn. Without an initial data set (collected and cleaned, yes, by people), the machine cannot learn either. Ultimately, machines need to be told explicitly what success looks like, and it is humans who decide what counts as a success and what counts as a failure. On that basis, if we know that people are biased, should we be quite so surprised that algorithms are biased?
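To make that concrete, here is a minimal, hypothetical sketch in Python (using pandas and scikit-learn; the data, the column names and the “hired” label are all invented for illustration). Notice that every consequential choice happens before the machine “learns” anything: which records to use, which column counts as success, and which metric defines a good model.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical historical records, collected and cleaned by people.
applicants = pd.DataFrame({
    "years_experience": [1, 7, 3, 10, 2, 8, 5, 12],
    "postcode_area":    [4, 1, 4, 2, 4, 1, 3, 2],
    # A human decided that "was hired last year" is the definition of success.
    "hired":            [0, 1, 0, 1, 0, 1, 0, 1],
})

# Humans choose which inputs the model is allowed to see...
X = applicants[["years_experience", "postcode_area"]]
# ...and which outcome it is told to reproduce.
y = applicants["hired"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# A human also chooses the yardstick: here plain accuracy, which says nothing
# about whether the model's errors fall more heavily on one group than another.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The model simply reproduces the patterns in the history it was given, judged against a definition of success that a person chose.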

Having established that algorithmic bias exists, we need to focus on what we can do about it. It would be easy to give up and accept that it’s just the way of the world, but shouldn’t we instead spend time recognising bias, assessing it, and questioning it? Let’s use our hypothesis testing skills to challenge our notions of success, failure, and fairness for each specific data set and each specific model, as in the sketch below.
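One simple way to start, sketched here with invented counts (a real audit would go much further than a single test), is a chi-squared test of independence asking whether a tool’s positive decisions are spread evenly across groups.

```python
from scipy.stats import chi2_contingency

# Hypothetical outcomes of an automated screening tool, broken down by group.
# Rows: group A, group B. Columns: shortlisted, rejected.
outcomes = [
    [480, 520],   # group A: 48% shortlisted
    [390, 610],   # group B: 39% shortlisted
]

chi2, p_value, dof, expected = chi2_contingency(outcomes)

print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Shortlisting rates differ between groups more than chance alone explains.")
else:
    print("No statistically significant difference detected in this sample.")
```

A significant result does not prove discrimination on its own, but it tells us exactly where to start asking questions about the data the tool was trained on and the way its “success” was defined.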

Of course, we know that discrimination continues to exist, and in the case of algorithmic bias there are not yet legal frameworks or policies to deal with it. But if we keep asking these questions, raising these issues, and researching solutions, we will be able to look towards a future where artificial intelligence and the broader cultural, social and economic landscape can co-exist harmoniously.