Durham University presents its research to reduce racial bias in facial recognition technology

Durham University’s Computer Science Department presented its findings on reducing racial bias in facial recognition technology at the Computer Vision and Pattern Recognition (CVPR) conference, held virtually on June 15th.

Facial recognition is used across a variety of industries to help identify or verify individuals, from security and healthcare to virtual assistant technology, education and gyms.

These algorithms analyse particular facial features, such as the relative position, size and shape of the eyes, nose, cheekbones and jaw. The extracted features are then compared against those from other images to identify a match.
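
As a rough illustration of that matching step, the Python sketch below compares two face "embeddings" (numeric feature vectors standing in for the measurements described above) using cosine similarity. The 128-dimensional vectors, the 0.6 threshold and the random data are illustrative assumptions, not any particular system's pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face embeddings (feature vectors).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(embedding_a: np.ndarray, embedding_b: np.ndarray,
             threshold: float = 0.6) -> bool:
    # Declare a match when the similarity clears a tuned threshold.
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy data: two 128-dimensional vectors standing in for the feature
# measurements a real pipeline would extract from face images.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
gallery = probe + rng.normal(scale=0.1, size=128)  # slightly perturbed copy
print(is_match(probe, gallery))  # True: the two vectors are nearly identical
```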

However, most facial recognition algorithms have a racial bias.

According to a US federal study of facial recognition technology, which examined 189 software algorithms from 99 developers, images of Asian and African American faces are between 10 and 100 times more likely to be incorrectly recognised than images of Caucasian faces.
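
The kind of disparity reported in that study can be made concrete with a per-group error breakdown. The Python sketch below computes a false non-match rate (the fraction of same-person comparisons a system fails to match) separately for two placeholder groups; the data, group labels and numbers are invented for illustration and do not reproduce the federal study's methodology.

```python
import numpy as np

def false_non_match_rate(same_person: np.ndarray, predicted_match: np.ndarray) -> float:
    # Fraction of genuine (same-person) pairs the system fails to match.
    genuine = same_person == 1
    return float(np.mean(predicted_match[genuine] == 0))

# Hypothetical evaluation records: one comparison per entry, tagged with a
# placeholder demographic group label so errors can be broken down by group.
groups = np.array(["group_a"] * 4 + ["group_b"] * 4)
same_person = np.array([1, 1, 1, 1, 1, 1, 1, 1])       # all genuine pairs
predicted_match = np.array([1, 1, 1, 1, 1, 1, 0, 0])   # group_b misses two

for g in np.unique(groups):
    mask = groups == g
    print(g, false_non_match_rate(same_person[mask], predicted_match[mask]))
# group_a 0.0
# group_b 0.5  -> the system fails far more often for this group
```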

Conducted by PhD students Seyma Yucer-Tektas and Samet Akçay, alongside staff members Dr Noura Al Moubayed and Professor Toby Breckon, the research reduced racial bias by around 1% while also increasing the accuracy of facial recognition across all ethnicities.

Their method uses a synthesised dataset covering different facial characteristics and racial features, focusing less on skin colour and more on identifying features that are independent of it.
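
As a loose illustration of the dataset-balancing idea behind synthesised training data (and not the Durham team's actual synthesis pipeline), the sketch below tops up under-represented groups with synthesised samples until every group is equally represented. The function names, group labels and the stand-in `synthesise` callable are assumptions made for the example.

```python
from collections import Counter
import random

def balance_with_synthetic(samples, synthesise):
    # Top up under-represented groups with synthesised samples so every
    # group reaches the size of the largest one. `synthesise(group)` stands
    # in for whatever image-synthesis model would generate new face images.
    counts = Counter(group for _, group in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        balanced.extend((synthesise(group), group) for _ in range(target - count))
    return balanced

# Toy usage: the "images" here are just string placeholders.
dataset = [("img_a1", "group_a"), ("img_a2", "group_a"), ("img_b1", "group_b")]
synthesiser = lambda group: f"synthetic_{group}_{random.randint(0, 999)}"
balanced = balance_with_synthetic(dataset, synthesiser)
print(Counter(group for _, group in balanced))  # both groups now the same size
```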
