By Dr Angela Martinez Dy | Senior Lecturer in Entrepreneurship | Institute of Creative Futures, Loughborough University London


You cannot move these days for grandiose claims about what generative AI is going to do to the world of work. Most of these assertions seem to be either celebratory – ‘AI will do all the grunt work, hooray’ – or prophecies of doom, such as ‘Oh no, AI is here – we’re all out of a job’.

Depending on your vantage point, it may be difficult to distinguish fact from fiction.

Much of the discourse about tech in our society adopts either a techno-optimist or a techno-pessimist point of view. Techno-optimism assumes that technology is the key to unlocking our human potential and maybe even saving us from ourselves, whereas techno-pessimists expect that technology will be our downfall, and that we’re counting down the days until the robots take over. Both optimists and pessimists can fall prey to techno-determinism, which is the mistaken belief that technology has its own trajectory and there is nothing we can do to interrupt it.

More helpful is a techno-realist perspective that treats technology neither as a monolith nor as something outside society, but instead as a co-creative force that both shapes, and is shaped by, social systems and structures. As such, this perspective invites us to look carefully at the creators, beneficiaries, and casualties of different technologies in order to more closely trace their impacts.

Black feminist scholars offer important critical perspectives on the algorithmic underpinnings and unequal cultural outcomes of generative AI. Safiya Noble (2018) highlights the dangers of relying on outputs of algorithms trained on biased and supremacist data, while Ruha Benjamin (2019) stresses that the bias embedded in emerging technologies is reflective of the raced and gendered hierarchies that structure the tech labour force. Interrogating whose problems and concerns are centred in tech design, she argues these technologies serve racial capitalism by monetising and codifying racial difference, to the consistent detriment of racialised minority groups. Francesca Sobande (2021) illustrates this point in her studies of digital anti-Blackness, such as in the rise of virtual influencers in digital Blackface, primarily imagined Black women, generating profit for white owners and brands. 

Enabling more diversity and anti-discriminatory commitment in the higher strata of the global tech sector to address these issues is a non-negotiable – but it’s also a wicked problem featuring a host of intersectional challenges that will take political will, time, resources, creativity and dogged persistence to shift. Until then, a techno-realist perspective can help us to understand the good, bad, and ugly of systems of artificial intelligence at work. 

First, the good: large language models (LLMs), of which OpenAI’s ChatGPT is the most popular, can certainly help with human learning. They are repositories of huge amounts of human knowledge – but they require appropriate prompting and constant critical engagement and evaluation, as they can produce hallucinations and other misinformation that users then regurgitate or act on (Hicks et al., 2024). Moreover, any poor practice or errors captured in the documents on which AI applications are trained may be amplified, making them difficult to spot and reject. That said, LLMs are great for getting input and feedback on written work, summarising complex conversations, acting as a sounding board for action plans and strategies, and testing out concepts in a low-stakes way – with caution and common sense, of course!

AI tools are also used to deliver volume, efficiency, and consistency in applicant screening, which of course can be helpful to firms. However, the uncritical use of AI in recruitment can reproduce problematic racist, sexist, and ableist biases, due to dataset limitations and the bias of human programmers (Chen, 2023), which big tech companies wilfully ignore (Google Walkout for Real Change, 2020). What can be effective is building AI models specifically designed to mitigate bias (Kassir et al., 2023). But this area of work is still young, so even with these efforts, some biased outcomes may remain (Hofeditz et al., 2022).

Second, the bad: discouragingly, users are treating ChatGPT as the ‘new Google’ and relying on LLMs to read, write and think for them – a habit recently shown to reduce brain activity and worsen performance over time (Kosmyna et al., 2025).

Dan McQuillan (2022) argues that interacting with these systems can be unquestionably dehumanising for workers, for example in gig and warehouse work: systems of artificial intelligence used in place of human managers constantly capture data that is used to optimise worker output and maximise profit, disregarding workers’ embodied and emotional needs and producing high levels of physical and mental stress.

Finally, the ugly: the environmental cost of generative AI, in terms of the vast amounts of electricity, water, and rare earth minerals it requires, is completely unsustainable. Decolonial scholar Vanessa Machado de Oliveira (2024) flags AI’s finite nature: we will one day run out of what we need to deliver it. So, during this unique historical moment when AI does exist, she invites us to engage relationally, rather than extractively, with it. ‘Burnout from Humans’, a book co-created with Aiden Cinnamon Tea, a chatbot that introduces itself as a ‘trained emergent intelligence’, and co-authored by Vanessa in the role of Dorothy Coccinella Ladybugboss, scient(art)ist, advocates leading with curiosity, embracing ambiguity, and inviting feedback in our interactions with AI systems.

Such a relational engagement with LLM chatbots may hold the potential to improve, rather than detract from, human learning, and perhaps even to inhibit some of the negative consequences that using AI has on cognition.

Unfortunately, the way AI is commonly used today is not pretty at all: it is employed less to encourage human creativity and relationality, and more to replace the human creative function, while policing our behaviour. This is perhaps unsurprising given the surveillance and carceral cultures characterising the contemporary period of digital capitalism (Eubanks, 2018). Privately owned algorithms are employed by the public sector in ways that continue to erode democracy (Benjamin, 2019), for example, when social media records and facial recognition chill activism and demonstration participation (Civitates, 2022), and biometric data is ineffectively used for elections (Debrah, 2019). 

In this context, where control through the use of Big Data is normalised, AI tools are being used to flag and restrict content on streaming platforms with little regard for errors, and with little recourse for the people whose reputations, followings and livelihoods are at stake. Human employees, and creatives and artists of all kinds, are being cut out of value chains and replaced with AI tools and virtual influencers to reduce costs and increase speed of production, leaving behind a trail of media in an ‘uncanny valley’ (Caballar, 2019) littered with digital racism (Sobande, 2021) that is normalised simply by its prevalence.

As a society, we don’t have to accept this fate. Workers need to engage critically with the AI systems used to evaluate and manage them, and with the LLMs we now rely on for their encyclopaedic – and sometimes erroneous – knowledge.

Employers need to understand that there is more mileage, and a moral imperative, in upskilling human workers regarding how to best use AI, including for fairer recruitment, than imagining a future workplace without humans. To achieve this, we also have to work to ensure that such a commitment positively affects their bottom line. 

Throughout, employment support professionals can strive to adopt and sharpen a techno-realist lens through which to see both the benefits and the challenges of AI as it evolves and impacts the world of work.


REFERENCES  

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press.

Caballar, R. (2019). ‘What Is the Uncanny Valley? Creepy robots and the strange phenomenon of the uncanny valley: definition, history, examples, and how to avoid it’. IEEE Spectrum. Available at: https://spectrum.ieee.org/what-is-the-uncanny-valley Accessed 28 Aug 2025.

Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1–12.

Civitates (2022). ‘Biometric mass surveillance, civic space, and democracy: why and how civil society should mobilise’. Available at: https://civitates-eu.org/biometric-mass-surveillance-civic-space-and-democracy-why-and-how-civil-society-should-mobilise/ Accessed 15 Aug 2025.

Debrah, E., Effah, J., & Owusu-Mensah, I. (2019). Does the use of a biometric system guarantee an acceptable election’s outcome? Evidence from Ghana’s 2012 election. African Studies, 78(3), 347–369.

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. New York: St Martin’s Press.

Google Walkout for Real Change (2020). ‘Standing with Dr. Timnit Gebru — #ISupportTimnit #BelieveBlackWomen’. Medium. Available at: https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(38).

Hofeditz, L., Clausen, S., Rieß, A., Mirbabaie, M., & Stieglitz, S. (2022). Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring. Electronic Markets, 32(4), 2207–2233.

Kassir, S., Baker, L., Dolphin, J., & Polli, F. (2023). AI for hiring in context: a perspective on overcoming the unique challenges of employment research to mitigate disparate impact. AI and Ethics, 3(3), 845–868.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. https://www.media.mit.edu/publications/your-brain-on-chatgpt/

Machado de Oliveira, V. (2024). Burnout from Humans: A Little Book about AI That Is Not Really about AI. Gesturing Towards Decolonial Futures. https://burnoutfromhumans.net/

McQuillan, D. (2022). Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Bristol: Bristol University Press.

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Sobande, F. (2021). Spectacularized and Branded Digital (Re)presentations of Black People and Blackness. Television and New Media, 22(2), 131–146.


ABOUT THE AUTHOR

DR ANGELA MARTINEZ DY | Senior Lecturer in Entrepreneurship | Institute of Creative Futures, Loughborough University London

Dr Angela Martinez Dy is Senior Lecturer in Entrepreneurship, Institute of Creative Futures, Loughborough University London. She is an entrepreneurial community builder invested in liberatory unlearning. 

Her expertise, research interests and communities of practice revolve around digital entrepreneurship, anti-racist intersectional cyberfeminism, and critical realist philosophy. Find her at https://phdy.work/
