
April 29, 2024

Navigator News – Daily Business Article

This article appeared on Daily Business on 26 April 2024. 


How AI is facing discrimination concerns

One of the first employment tribunal claims involving the use of AI has settled out of court. It was brought by an Uber Eats delivery driver who was dismissed after the Facial Recognition Technology (FRT) used by the company’s app, through which drivers’ work is allocated, failed to identify him. The case has prompted questions around identification, particularly of non-white individuals, who could be unfairly judged.

The driver, Mr Manjang, claimed that his employer had not explained the process leading to his suspension from the app, and that there was no mechanism by which the situation could be rectified.

Additionally, he claimed that the FRT is less accurate when used by non-white individuals such as himself, thereby placing certain workers at a disadvantage and leaving them more vulnerable to losing their jobs. Certainly, a key area of concern around the use of FRT has been the accuracy of these systems, and the risk of algorithmic racial or sex discrimination or bias.

A report to the UN by the Equality and Human Rights Commission (EHRC) on civil and political rights in the UK highlighted evidence indicating that many automatic FRT algorithms disproportionately misidentify black people and women, and therefore operate in a potentially discriminatory manner.

As an example, one review of facial recognition technology found that the software misidentified less than 1% of lighter-skinned males, compared with almost 35% of darker-skinned women, with comparable statistics reflected across other FRT products.

Mr Manjang brought claims of indirect race discrimination, harassment, and victimisation in the employment tribunal. He was assisted in this claim by the EHRC and the App Drivers and Couriers Union (ADCU), both of which had previously expressed concern over automated processes preventing workers from being able to do their jobs.

Following news of the settlement, EHRC chairwoman Baroness Falkner commented that a major concern was the “opaque” nature of the processes applied when Mr Manjang was suspended from the app, and the fact that these were not explained to him. While his access to the app was later restored and he continues to work for Uber Eats, the process involved in restoring his access was not explained to him either.

Because the case was settled out of court, no judgment is available, but the key lesson for employers is that they must be open and transparent regarding both how and when AI processes are used. An employer making decisions solely or mainly on the basis of FRT will be vulnerable to a discrimination claim, so employers must take care to avoid unintended consequences, which could prove costly.

AI tools are also increasingly being used as an aid to recruitment, and that brings challenges too, something which has been a recent focus of HM Government. The Department for Science, Innovation and Technology has emphasised that while AI can streamline and simplify HR processes, it carries risks of discrimination and bias, and employers are urged to consider carefully how to ensure that people with disabilities are not disadvantaged by the use of AI in recruitment.

The risks here are also demonstrated by a recent claim in the US against the HR software company Workday, brought by Derek Mobley, a black man who claims he was unsuccessful in over 100 job applications made through Workday’s screening software. He argues that the algorithm discriminated against him on the basis of his race. This is a salutary lesson for any employer, whether in the US or here, that hands decision-making power over hiring to AI tools.

AI is currently a hot topic and employers are understandably keen to use it wherever possible, but from an employment law perspective they should proceed with caution. Knowledge and transparency are key. Any new software brought in should be fully understood, both in terms of how it works and what data was used to train the AI tool, so as to minimise the risk of prejudices and biases hidden in the software leading to discriminatory outcomes and legal claims.

If you have any questions on any of the issues raised in the above article, please contact Natalia Milne.

Not Sure Where To Start?

Find Out More

Are you taking on your first member of staff, wondering if you’re compliant with GDPR, or unsure whether your HR processes are rigorous enough? Get in touch with Navigator today and see how we can help your organisation.

Call Us Now on: 0333 2400 308

or

Newsletter Subscription

Sign Up to the Navigator Newsletter

Stay informed with the latest changes in employment law, health & safety, HR and data protection, including noteworthy cases, upcoming events and other useful articles.

We only use your details to send you our monthly newsletter along with event invitations and other useful articles. You can unsubscribe any time.

Contact Us

Get in Touch

0333 2400 308

enquiries@navigatorlaw.co.uk

Floor 3
1-4 Atholl Crescent
Edinburgh
EH3 8HA
