By Nimmi Patel | Head of Skills, Talent and Diversity | techUK 

WHAT IF AN AI TOOL DECIDED NOT TO HIRE YOU BECAUSE OF YOUR BLUE JUMPER? 

It is no secret that AI is changing the way we work. Admin-heavy tasks that used to absorb our time can now be automated, and the speed of AI data processing has made it easier and faster to make real-time decisions.

Both are fundamentally reshaping how employers operate and how people find jobs and develop within them.

In HR, that transformation is particularly acute. Automated Decision-Making (ADM) tools are now commonplace in recruitment. As Ian Lithgow, Managing Director for Health and Public Service at Accenture, described at a techUK panel on the topic, up to 70% of recruitment tasks could be automated – from job descriptions to interview scheduling.

On the surface, that sounds like a win for efficiency, but beneath the promise lies a risk: what happens when these systems, trained on imperfect data and deployed without proper oversight, start making decisions that reinforce bias rather than eliminate it?

We have seen what can go wrong. Headlines have detailed how AI recruiting tools downgraded applicants based on gendered language in CVs, penalised candidates from certain postcodes, or preferred certain facial expressions. Recent guidance from the Information Commissioner’s Office (ICO) warns that ADM in recruitment is a “high-impact case” prone to reinforcing demographic biases and performing worse than human judgment in complex, individual assessments.

Even though it can sound as if these machines have a mind of their own, AI reflects our decisions and behaviour, because the data we use to train it represents our collective experiences, behaviours, and decisions as humans. For that reason, bias can sneak in at various stages of development, embedding itself undetected in what we assume are neutral systems until it is too late. If a blue jumper carries negative connotations in historical hiring data, a tool can learn to avoid selecting candidates based on that visual cue. Users and institutions, who hold their own biases, can also influence the AI through further training and interactions, causing it to acquire and reflect those biased views.
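To make that concrete, here is a hypothetical sketch (in Python, on synthetic data with made-up numbers) of how a screening model trained on prejudiced historical decisions can learn to penalise an irrelevant cue:

```python
# Hypothetical sketch: a spurious cue leaking into a screening model
# (pip install numpy scikit-learn). All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 1_000

# One genuine signal (a skill score) and one irrelevant cue (a blue jumper).
skill = rng.normal(size=n)
blue_jumper = rng.integers(0, 2, size=n).astype(float)

# Suppose past interviewers rewarded skill but also, unfairly, marked down
# blue-jumper wearers. The historical "hired" labels encode that prejudice.
noise = rng.normal(scale=0.5, size=n)
hired = (skill - 0.8 * blue_jumper + noise > 0).astype(int)

# A model trained on those labels reproduces the prejudice faithfully:
X = np.column_stack([skill, blue_jumper])
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "blue_jumper"], model.coef_[0].round(2))))
# Prints a positive weight on skill and a clearly negative one on
# blue_jumper, even though the jumper says nothing about ability.
```

No one programmed the model to dislike blue jumpers; it simply learned the pattern the data handed it.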

So how do we make AI in recruitment work for everyone – employers, practitioners, candidates, and even wearers of blue jumpers? 

First, employers must treat AI adoption in recruitment not just as a tech upgrade but as a leadership and governance challenge. This is where large employers can lead. As techUK highlights in its Making AI work for Britain report, they have the scale and resources to trial new tools, pilot responsible AI practices, and share findings across industries. Transparency is key: candidates should know when AI is used, how it affects their application, and what oversight is in place.

Organisations must also consider whether an automated tool falls within the scope of Article 22 of the UK GDPR. Article 22 protects individuals from the ramifications of solely automated decision-making, especially in sensitive and legal contexts, and determines whether an organisation needs to complete a data protection impact assessment. That assessment ensures the AI system is legally compliant and demonstrates that any high risks have been mitigated. Employers using these tools at scale must be mindful of what the systems are learning.

As a result, employers should be held legally and morally responsible for the outcomes of their tools, even when the decisions are made by an algorithm. Importantly, employers should work with employability partners to ensure that their recruitment processes do not inadvertently exclude those furthest from work. That means stress-testing tools for bias, embedding human review, and offering clear feedback mechanisms. More comprehensive regulation is on the horizon: the ICO plans to publish a statutory code of practice on AI and ADM and is scrutinising recruitment practices for transparency and fairness.

This is where the Equality Act 2010 comes in. The Act prohibits discrimination whether it is caused directly or indirectly, by human or machine, and it requires due regard to eliminating bias before deployment or procurement – essentially from the outset. Although the Act does not specifically mention AI or automated decision-making, the Equality and Human Rights Commission (which has a statutory mandate to advise government and Parliament) highlights that if an AI tool leads to discriminatory outcomes, employers can still be held liable.

Yet legal clarity on ADM alone isn’t enough. We often say that regulating the technology without addressing the intent and context behind its use could stifle innovation while doing little to address poor decision-making. As Ivana Bartoletti, Wipro’s Global Chief Privacy and AI Governance Officer, said: “The problem is not bias; the problem is when that bias becomes discrimination.”

Detecting AI bias involves regularly auditing training data, monitoring model outputs, and applying fairness metrics. IBM’s AI Fairness 360 toolkit lets developers share and receive state-of-the-art code and datasets for AI bias detection and mitigation.

The toolkit also allows the developer community to collaborate and discuss different notions of bias, so they can collectively establish best practices for detecting and mitigating it. Wipro’s AI development programmes include a fairness analyser, which helps teams understand where bias and discrimination may originate.
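As an illustration of the kind of audit such toolkits support, here is a minimal sketch using AI Fairness 360’s dataset-level metrics. The hiring data, column names, and group encodings below are entirely illustrative:

```python
# Minimal bias-audit sketch using IBM's AI Fairness 360
# (pip install aif360 pandas). All data below is made up.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical screening outcomes: 1 = shortlisted, 0 = rejected.
df = pd.DataFrame({
    "gender":      [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = privileged, 0 = unprivileged
    "shortlisted": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["shortlisted"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favourable-outcome rates (unprivileged / privileged).
# A common rule of thumb (the "four-fifths rule") flags values below 0.8.
print(f"Disparate impact: {metric.disparate_impact():.2f}")
print(f"Statistical parity difference: {metric.statistical_parity_difference():.2f}")
if metric.disparate_impact() < 0.8:
    print("Warning: shortlisting rates diverge enough to warrant human review.")
```

Metrics like these do not decide whether a gap is justified; they flag where a human needs to look.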

techUK members such as Workday and MeVitae are spearheading efforts to integrate AI into HR software, offering the promise of objective and efficient hiring processes. 

Workday has shown that generative AI can reduce hours of manual effort by drafting job descriptions and growth plans, potentially freeing HR professionals to focus on more strategic, people-centred work. MeVitae provides solutions that detect and address imbalances in hiring pipelines, focusing on factors such as gender, ethnicity, disability, sexuality, and age. Using its AI-enabled technology, MeVitae has demonstrated that anonymised recruiting increases diversity and attracts twice as many applicants as traditional recruitment methods.

What’s clear is that ADM is also part of the solution. By removing the human snap judgments that can affect an application, ADM could offer employers another tool in the arsenal for ensuring candidates are judged on their merits.

But ADM is only capable of this when a human remains in the loop and audits its objectivity. Used properly, it can help mitigate bias from employers who are themselves prejudiced, whether knowingly or unknowingly.
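In practice, that loop can be as simple as a routing rule: automate only the clear-cut cases and escalate everything else to a person. A minimal sketch, in which the thresholds, score source, and categories are illustrative assumptions:

```python
# Minimal human-in-the-loop gate for an ADM screening step.
# Thresholds and the score itself are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    model_score: float  # e.g. a match probability from an ADM tool

def route(c: Candidate, auto_accept: float = 0.85, review_floor: float = 0.15) -> str:
    """Automate only clear-cut shortlisting; the ambiguous middle band,
    where biased signals do the most damage, goes to a human reviewer,
    and no one is rejected without human sign-off."""
    if c.model_score >= auto_accept:
        return "shortlist (logged for periodic human audit)"
    if c.model_score <= review_floor:
        return "human review before rejection"
    return "human review"

for c in [Candidate("A", 0.92), Candidate("B", 0.50), Candidate("C", 0.08)]:
    print(c.name, "->", route(c))
```

The exact thresholds matter less than the principle: the system never quietly closes a door that a person has not checked.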

AI has always had, and will always need, a human counterpart. Poor recruitment outcomes for wearers of blue jumpers are the fault not of machines but of flawed logic, bad data, and insufficient oversight.

Integrating ADM in recruitment is worth doing, and it is worth doing well. 


ABOUT THE AUTHOR

NIMMI PATEL | Head of Skills, Talent and Diversity | techUK 

Nimmi Patel is the Head of Skills, Talent and Diversity at techUK. She works on all things skills, education, and future-of-work policy, focusing on upskilling and retraining. Nimmi is also an Advisory Board member of the Digital Futures at Work Research Centre (digit). The Centre’s research aims to increase understanding of how digital technologies are changing work and the implications for employers, workers, job seekers, and governments.

Prior to joining the techUK team, she worked for the UK Labour Party and the New Zealand Labour Party. She holds an MA in Strategic Communications from King’s College London and a BA in Politics, Philosophy and Economics from the University of Manchester. She also took part in the 2024-25 University of Bath Institute for Policy Research Policy Fellowship Programme and is the Education and Skills Policy Co-lead for Labour in Communications.
