Ethics of Facial Recognition Technology


In recent years, face recognition technology has increasingly become part of our lives: we use it to unlock mobile phones, we are automatically tagged in photographs on social media, our passage through airports is sped up by automated passport checks, and we are silently checked against watchlists in crowded settings such as sports events, festivals or demonstrations.

Many of these applications are controversial. Take, for example, the use of facial recognition technology to identify individuals in a crowd. While some people emphasise the speed and efficiency of such searches and the benefits they can bring over traditional surveillance and security procedures, others raise concerns about privacy and human rights violations, the growing role of state surveillance in everyday life, and a gradual erosion of social cohesion and individual self-expression.

One often-discussed concern is the issue of bias and discrimination in the development and application of face recognition technology. Ethically, this is a complex space, for both cultural and technological reasons. Culturally, we are dealing partly with the legacy of outdated norms and assumptions in society (which might inform decisions about which features and characteristics to ‘target’ in database searches, for example), set against a backdrop of instability and fluidity in the concepts and definitions of “race”, “ethnicity” and “gender”.

Technical Bias

There are also technical and design issues which mean that face recognition technologies are likely to perform differently for different groups in society. We need to be aware of these implications and make responsible decisions about how to deploy the technologies fairly, ensuring that we are mindful of their inherent biases and take steps to mitigate them.

Some of the technical biases are a carry-over from the history of photography itself. From its earliest days, photographic techniques and materials were developed to privilege whiteness: in the early twentieth century, for example, film emulsions were biased towards “Caucasian” skin tones – colour film often could not accurately represent a wide range of non-white skin tones and failed to pick up important facial features. Although digital cameras allowed considerable post-processing after an image had been taken, they still privileged “whiteness” in visual reproduction. In 2014, African-American photographer Syreeta McFadden wrote:

“Even today, in low light, the sensors search for something that is lightly coloured…before the shutter is released. Focus it on a dark spot, and the camera is inactive. It only knows how to calibrate itself against lightness to define the image.”

Algorithmic Bias

The implications of this, in terms of accurate representation of all people, are clear. In the late 2000s, this difficulty in “seeing” contrast caused several well-publicised incidents in which webcams failed to detect the faces and movements of people with non-white skin. Manufacturers were accused of “algorithmic bias”.

As we have seen, deep learning approaches are highly dependent on the datasets used to train them. Any biases in the data that is sampled, and in the way it is collected and labelled, will be reflected in what the AI system learns and in the outputs it produces. Facial recognition algorithms, again, give us an interesting and disturbing insight into this. In 2018, Buolamwini and Gebru audited facial classification systems developed by Microsoft, IBM and others and showed that, depending on the context, dark-skinned women were up to 35 times more likely to be misclassified than white men were. Their study revealed that the large datasets used to train these systems under-represented people of colour and women.
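To make this kind of disaggregated audit concrete, here is a minimal sketch of how a classifier’s error rates can be compared across demographic subgroups rather than reported as a single overall figure. It is only an illustration of the idea, not the procedure used in the study: the column names and the handful of example records are invented for the purpose.

```python
import pandas as pd

# Hypothetical audit records: each row is one face image, with the
# classifier's prediction, the ground-truth label, and a (simplified)
# skin-type group. These values are invented for illustration only.
results = pd.DataFrame([
    {"true_gender": "female", "pred_gender": "male",   "skin_type": "darker"},
    {"true_gender": "female", "pred_gender": "female", "skin_type": "darker"},
    {"true_gender": "male",   "pred_gender": "male",   "skin_type": "darker"},
    {"true_gender": "female", "pred_gender": "female", "skin_type": "lighter"},
    {"true_gender": "male",   "pred_gender": "male",   "skin_type": "lighter"},
    {"true_gender": "male",   "pred_gender": "male",   "skin_type": "lighter"},
])

# Flag misclassifications, then compute the error rate separately for each
# intersection of skin type and (true) gender.
results["error"] = results["true_gender"] != results["pred_gender"]
error_rates = results.groupby(["skin_type", "true_gender"])["error"].mean()

print(error_rates)
# A single aggregate accuracy figure would hide the fact that errors are
# concentrated in particular subgroups (here, darker-skinned women).
```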

A recent IBM study of facial diversity in datasets found that, of the eight most significant publicly available face image datasets, six contain more male images than female ones, and six are made up of over 80% light-skinned faces. It is also worth considering how the data has been collected: until very recently, the approach has largely been one of large-scale “internet scraping” rather than carefully controlled, bias-aware data collection.
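Where demographic labels are available at all, the kind of imbalance the IBM study describes can be surfaced by a very simple composition audit of the dataset itself. The sketch below uses invented metadata purely to illustrate the idea; the labels and proportions are not taken from any real dataset.

```python
import pandas as pd

# Hypothetical per-image metadata for a face dataset. Real datasets rarely
# ship with skin-tone labels at all, which is itself part of the problem.
metadata = pd.DataFrame({
    "gender":    ["male", "male", "male", "female", "male", "female"],
    "skin_tone": ["light", "light", "light", "light", "dark", "light"],
})

# Share of images in each gender and skin-tone group.
print(metadata["gender"].value_counts(normalize=True))
print(metadata["skin_tone"].value_counts(normalize=True))
# If one group dominates the training data, a model trained on it sees far
# fewer examples of the others and is likely to perform worse on them.
```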

Another problem is that designers have not generally taken steps to correct these imbalances in their datasets. For example, as of 2019, none of the ten largest face image datasets included any labelling for skin colour or type. This means that differences in the algorithm’s recognition performance across racial groups cannot even be detected. Even where data annotation is carried out, biases are often observed. For example, the UTK Face dataset, published in 2017, recognises only five rather crude categories of race – White, Black, Asian, Indian and “Other” – and only two categories of gender: male and female.
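To see how coarse such annotation schemes are in practice, consider a short sketch that decodes the demographic codes embedded in a UTK Face-style file name (the dataset records age, gender and race in the file name itself). The exact file-name convention and the code-to-label mapping shown here are assumptions based on the dataset’s published description, but the underlying point stands: every face must be forced into one of five “race” codes and one of two gender codes.

```python
# Assumed UTKFace-style naming convention: "[age]_[gender]_[race]_....jpg".
# The code-to-label mappings below follow the dataset's published description.
GENDERS = {0: "male", 1: "female"}  # only two categories
RACES = {0: "White", 1: "Black", 2: "Asian", 3: "Indian", 4: "Other"}  # five crude bins

def parse_utkface_name(filename: str) -> dict:
    """Extract age, gender and 'race' labels from a UTKFace-style file name."""
    age, gender, race = filename.split("_")[:3]
    return {
        "age": int(age),
        "gender": GENDERS[int(gender)],
        "race": RACES[int(race)],
    }

# Hypothetical file name, purely for illustration.
print(parse_utkface_name("25_1_3_20170116174525125.jpg"))
# {'age': 25, 'gender': 'female', 'race': 'Indian'}
# Anyone who does not fit these bins is either mislabelled or lumped into "Other".
```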

Design Without Prejudice

The collection and pre-processing of input data for AI systems is challenging in general, and facial recognition data is a good illustration of the difficulties. As researchers, we need to take care that the data we collect is fully representative, and that our categorisation, labelling and annotation processes reflect, and include, all of the groups likely to be affected by the output of our algorithms. We need to be alert to historical prejudice, aware of its practical implications, and ready to counter it in the design of our systems. Failing to do so is not only wrong: in an age where facial recognition technologies play a role in so many parts of our lives, the consequences of false positives and false negatives, for individuals and for society, are severe.

This article is very heavily based on the report ‘Understanding bias in facial recognition technologies: an explainer’ by Dr David Leslie of the Public Policy Programme of the Alan Turing Institute: Leslie, D. (2020). Understanding bias in facial recognition technologies: an explainer. The Alan Turing Institute. https://doi.org/10.5281/zenodo.4050457

© University of York
This article is from the free online course Intelligent Systems: An Introduction to Deep Learning and Autonomous Systems.
