Bias in AI systems

AI – artificial intelligence. It sounds quite exciting, conjuring up images of towering databanks, secret underground facilities, writhing masses of tentacle-like cables connected to colossal mechanical brains of staggering intellect. In reality, AI is far more boring, and the so-called “AI” that is used in so many parts of our lives has a long way to go before it even remotely approaches the intelligence of your regular human. We’ve got another name for this kind of AI, one that you’ll probably be familiar with if you use any kind of social network – “algorithms”. 

Algorithms are a limited kind of machine learning, highly specialised for a single purpose: you might have heard of algorithms selecting which videos to recommend to you on YouTube, or which offers to show you on a shopping website. But while those examples seem relatively harmless (though not without their own share of issues), algorithms are also employed for some rather serious functions. AI is often used to screen applicants for jobs, perform facial recognition in security software or even assess prisoners to determine how likely they are to reoffend.

These functions, which previously would be performed by experienced professionals, now have an automated element. For the most part, there is still a human in the loop: an applicant-screening algorithm acts as a filter, and the candidates it selects are still interviewed by hiring managers. But it’s not too hard to imagine a world where the application process is automated from start to finish – after all, if we trust these algorithms to filter out the least suitable candidates, why can’t we trust them to simply pick the best applicant for the job?

Almost everyone experiences some sort of bias

So here’s where the topic of bias comes in. Bias is often seen as a fundamental human flaw – almost everyone holds some sort of bias for or against something. Indeed, the world of job interviews is no stranger to it, and many a quizzical eyebrow (and discrimination lawsuit) has been raised at companies that appear to hire an unusually low number of people of a certain gender, race or culture. Some would argue that bias is unavoidable: no matter how egalitarian and saintly someone may appear, deep down there is still an unconscious bias affecting their decisions. So who do we turn to in order to avoid bias? Unfeeling, emotionless machines.

As well as being a time-saver, AI-based candidate screening is touted by some companies as proof of non-discrimination – an AI will only consider the cold, hard facts, and will select the best candidate for the job purely on merit. Right? Well, umm, not quite. There are actually many different types of bias, but let’s look at a common one that affects AI hiring managers: historical bias.

Let’s say a company that has recently come under scrutiny for its low number of successful female applicants decides to employ an AI algorithm to select candidates, in order to appear fairer. How would the algorithm’s designers choose the criteria for a successful applicant? Probably by giving it the data on all past successful applicants as a reference. But most of those previous successful applicants were men, precisely because of the earlier bias. So the algorithm will most likely learn and retain that bias, and favour male candidates.
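To make this concrete, here is a rough sketch in Python (the data is entirely synthetic and the feature names are made up – this isn’t any real company’s system) of how a model trained on historically skewed hiring decisions simply learns that skew:

```python
# A minimal sketch, assuming synthetic data and hypothetical features,
# of how historical bias in past hiring decisions ends up in the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Features: gender (1 = male, 0 = female) and a genuinely job-relevant skill score.
gender = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historical "hired" labels: past decisions favoured men, independent of skill.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n) > 1.5).astype(int)

# Train a simple classifier on those past decisions.
model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# The learned weight on gender is large and positive: the old bias is baked in.
print("weight on gender:", model.coef_[0][0])
print("weight on skill: ", model.coef_[0][1])
```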

The designers could then remove the gender field from the data and have the algorithm ignore it. This might yield better results, but the skewed data can leak in other, more subtle ways. If applicants list their hobbies on their CVs, the algorithm might notice that certain male-dominated sports or hobbies seem to be an indicator of a successful applicant. Of course, they aren’t, but according to the data it sure looks that way. In fact, even small skews in a dataset are often magnified by AI, producing an even greater bias in its own decisions.
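Here is the same toy setup extended with a hypothetical hobby field, showing how a proxy feature can smuggle the bias back in even after the gender column is dropped:

```python
# A minimal sketch (synthetic data, hypothetical CV fields) of a proxy feature:
# gender is dropped from the inputs, but a correlated hobby flag leaks it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

gender = rng.integers(0, 2, n)                      # 1 = male, 0 = female
skill = rng.normal(0, 1, n)
# A made-up male-dominated hobby, listed far more often by male applicants.
hobby = (rng.random(n) < np.where(gender == 1, 0.7, 0.1)).astype(int)

# Historical labels still carry the old gender bias.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n) > 1.5).astype(int)

# Train WITHOUT the gender column - only skill and the hobby flag.
model = LogisticRegression().fit(np.column_stack([skill, hobby]), hired)

# The hobby gets a large positive weight purely because it proxies for gender.
print("weight on skill:", model.coef_[0][0])
print("weight on hobby:", model.coef_[0][1])
```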

Even attempting to remove ‘obvious’ sources of bias may not be enough to eliminate skewed preferences

How, then, do we fix this? For now, the main culprit is data. Underlying datasets are often the source of bias, yet large amounts of data are needed to train an AI to perform its function. No data source will be perfect, and we can’t just make one up. The prevailing solution today is pre-processing: attempting to remove any elements of the data that contain or imply sensitive information, while preserving its accuracy. Algorithms also need constant vigilance, with their results regularly analysed for signs of underlying bias.
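As a very rough illustration of what pre-processing and ongoing auditing can look like – again with synthetic data, and a deliberately simple decorrelation step rather than any production-grade fairness technique:

```python
# A minimal sketch: before training, strip out the component of each feature
# that is predictable from the sensitive attribute (here, gender), then audit
# the trained model's average scores by group. Data and setup are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 2000
gender = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hobby = (rng.random(n) < np.where(gender == 1, 0.7, 0.1)).astype(float)
hired = (skill + 1.5 * gender + rng.normal(0, 1, n) > 1.5).astype(int)

X = np.column_stack([skill, hobby])

# Pre-processing: keep only the residual of each feature after regressing it on gender.
g = gender.reshape(-1, 1)
X_fair = X - LinearRegression().fit(g, X).predict(g)

model = LogisticRegression().fit(X_fair, hired)
scores = model.predict_proba(X_fair)[:, 1]

# Audit step: compare the average predicted "hire" score for each group.
print("mean score, men:  ", scores[gender == 1].mean())
print("mean score, women:", scores[gender == 0].mean())
```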

However, a better solution might be to lessen our reliance on algorithms for certain societal functions. AIs are currently used to decide whether people are worthy of a job, or how likely they are to commit another crime, and these decisions can have enormous impacts on people’s lives. In the latter case, it was found that the AI responsible for evaluating the chances of reoffending was no more accurate than a human. Even with a perfect, completely fair AI, is it appropriate to let a machine pass judgement on people in this way? Probably not.

Image: Mike MacKenzie via Flickr
