By David Imber FIEP | Head of the Centre for Employability Excellence
1965, H. A. Simon: “Machines will be capable, within twenty years, of doing any work a man can do.” Simon HA (1965), The Shape of Automation for Men and Management, New York: Harper & Row
1970, Marvin Minsky: “In from three to eight years we will have a machine with the general intelligence of an average human being.” Darrach B (20 November 1970), “Meet Shaky, the First Electronic Person”, Life Magazine, pp. 58–68
INTRODUCTION
The excellent and generous contributors to this edition of the IEP Journal have as always given us much to think about, to include in our work and to learn from. They speak from and to the front line with practical, ethical professionalism. I hope you agree that this Journal, so ably curated by our Guest Editor Claudine Adeyemi-Adams with inestimable support from Ryan McGee, gives us a balanced and measured view of AI, and tempers enthusiasm with concern and realism.
In my Afterword, I want to mention moral agency, its application in employment advice, and why we should ask hard questions while we plan to use AI to support our work.
QUESTIONS
Is AI intelligent? Do we even know what intelligence is? There is no agreed definition of intelligence. We can agree, though, that finding statistical patterns in large datasets is well within AI’s grasp and hard for people to do. There are other things people do well that AI cannot, and I believe these are essential to good practice.
Does AI have a critical faculty? It produces statistically probable results from given data, but that isn’t critical thinking in any human sense1.
Can AI do formal reasoning? Within limited fields it is powerful and useful. But that is not what the term ‘intelligence’ implies.
Does it have the emotions, conscious or otherwise, that are needed for human reasoning?
Does it have a sense of truth or truth values? It does not2.
Given these shortcomings, we should be wary of AI’s potential impact3.
PROBLEMS IN AI?
Ania Mendrek, Matt Kingswood, Oliver Large and Nimmi Patel have all mentioned AI’s inclination to falsehoods, ‘hallucinations’, and bias. Those are not qualities one wants in a person-centred profession. My view is that since AI has no self and no identity in a meaningful human sense, it cannot be said to have either a moral position or moral agency4.
Using chatbots to offer advice and instruction (be that career choices, action plans or other support) without a human professional in control can be dangerous5. Such applications already exist and can be found online6. As a consequence, their use in employability needs the most careful planning and monitoring.
Applying AI in support of human professional advisers is another matter. Access to employability data is a challenge that AI may ease considerably, and support with the usual administrative problems must be welcome, as exemplified by Belina Grow’s use of online translation tools in Ania Mendrek’s article. These are not (or should not be) of ethical concern in a ‘human directing the service’ model, as described by Dominic Atkinson. But if AI is used to prescribe action by clients, it might introduce new problematic heuristics, or conceal existing ones7.
Moral standing aside, case management is neither a linear algebra problem nor a statistical clustering problem. Effective case management relies, as Claudine Adeyemi-Adams says in her editorial, on human input. So we should ask ourselves whether AI addresses the right issues. For example:
- if AI can be made less biased than a human adviser, isn’t there something to be done to help the adviser?
- would you trust a colleague or employee who lies, hallucinates, and has no moral standing?
- if AI is helpful in overcoming bureaucratic hurdles, who put up the hurdles in the first place?
- do we need AI to tell us, as we have been saying for years, that inter-agency co-ordination is needed? We can surely use statistical insights but, as I see it, the issue lies not in the statistics so much as in the agencies’ constitutions and organisations. It takes human and political intervention to change them.
CHOICES, ALGORITHMIC HEURISTICS
When clients make personal choices in their local labour market, they are not probabilistic statistical recurrences in a database. Their choices involve emotional responses, attitudes to risk and reward, personal capabilities and more. These are the working raw materials of good advice. When clients choose, they do not optimise; they react and express their inclinations8.
Three examples illustrate the dangers of applying procedural algorithms, and even plain old mumbo-jumbo, usually in the service of efficiency and effectiveness (great goals in themselves) but also shortcutting human input. First, tools such as Myers Briggs9, EMDR10, NLP11 and left-right brain theories12 may now be fading into disuse. Bad enough in themselves, they remain in the literature for AI large language models to digest and regurgitate. Second, other algorithms have a better pedigree but are not universally accepted: Holland codes13 and the Big Five (CANOE) character traits carry their own risks14, and they form the basis of, but are hidden from view within, some computerised career advice systems. To apply them uncritically is to allow the machine to prescribe, not to empower the client.
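To make the ‘hidden from view’ point concrete, here is a minimal, purely hypothetical sketch of how a Holland-code heuristic might sit invisibly inside a career-matching tool. The occupations, codes and weights are invented for illustration and describe no real system:

```python
# Hypothetical sketch only: a toy career-matching function with a
# Holland-code (RIASEC) heuristic buried inside it. All profiles and
# weights below are invented for illustration.

# Hidden assumption: each occupation is reducible to a Holland profile.
OCCUPATION_PROFILES = {
    "care worker":  {"S": 0.8, "R": 0.3, "C": 0.2},
    "data analyst": {"I": 0.7, "C": 0.6, "A": 0.1},
    "electrician":  {"R": 0.8, "I": 0.3, "C": 0.3},
}

def recommend(client_profile: dict[str, float]) -> str:
    """Return the 'best' occupation for a client's RIASEC scores.

    The client sees only the answer, never the profiles, the scoring
    rule or the theory behind them; that is exactly the problem.
    """
    def score(occupation: str) -> float:
        profile = OCCUPATION_PROFILES[occupation]
        # Simple weighted sum of the client's scores and the
        # occupation's hidden Holland weights.
        return sum(client_profile.get(code, 0.0) * weight
                   for code, weight in profile.items())

    return max(OCCUPATION_PROFILES, key=score)

# A client scoring high on Social traits is silently steered one way:
print(recommend({"S": 0.9, "I": 0.4}))  # -> "care worker"
```

However well or badly the weights are chosen, nothing in the output invites the client, or the adviser, to question the theory doing the steering.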
Third, we might wonder about the (UK) tradition of SMART Action Planning, for which I have not seen any good independent evidence15. Perhaps it is a placebo, and if it works, that’s fine. But hide it from view inside a machine, and the capacity to question and critique is submerged.
THE OTHER AI: ACTIVE INGREDIENTS IN EMPLOYMENT ADVICE
I am arguing for the primacy of moral and ethical choices, and for an understanding of emotional and social interactions, as the drivers of excellent advice services. These should be evident in our respect, trust, adaptability and transparency with clients and employers. They do not reside in datasets. The AI (Active Ingredients) of trust, empathy and honesty rely on us having and valuing truth values. Alongside that, immediate, concrete, non-statistical knowledge (of the labour market, community and environment) underpins the capacity to guide, advise, counsel and support16. Current Artificial Intelligence deploys a technology that has no morality, has no emotions, and neither recognises nor uses truth-values, even when it presents truths.
But it can help us navigate large datasets, record facts (though I’m sceptical about its ability to interpret their meaning), provide ready translations, analyse labour markets and offer many more useful services still to be discovered. Several authors suggest gains in reducing admin and freeing advisers’ time, in reviewing personal action plans (Rebecca Miles, Boris Bambo), and in identifying clusters of risk factors and institutional relationships for co-operative development. Other research suggests that it has a role in facilitating communication between clients and advisers17, which anyone might welcome. That study notes that ‘The integration of AI… necessitates careful consideration of ethical issues such as data privacy, algorithmic bias, and the potential for reduced human oversight’, and our authors have emphasised that, if AI is implemented well, with a human-in-the-loop approach in which it supports the adviser, it is the role of that adviser to convey the level of empathy each participant requires. Our authors have illustrated just how AI can support the human relationship between adviser and client. In correctly placing ethical and personal choices in the human domain, they show that there is a sweet spot to be found, somewhere between human complexity and programmatic convenience. This edition of the IEP Journal is part of the search.
RESPONSIBILITY: AGENCY AND MORAL CHOICES
When I get off the bus for a night out in Exeter, I thank the driver, not the bus. When someone is injured in a road accident, we blame the driver, not the car.
Responsibility for the consequences of applying AI rests with the people who create, train, advertise, deploy and use it. So even if AI could be made perfect (whatever that might mean), it is not a moral agent18, and is worthy of neither praise nor blame. It should not replace our responsibility for our own actions.
Let us remember that each of us carries in her or his own head computing power – and moral capacity – that far exceeds anything produced by technology. It is so automatic, so powerful, that it is sometimes easy to forget that it is the engine driving client and employer relationships. It can be usefully lubricated by carefully chosen AI facilities.
THANK YOU
Thanks are due to our Guest Editor, Claudine Adeyemi-Adams FIEP, CEO & Founder at Earlybird, and to Ryan McGee MIEP, Senior Founder’s Associate at Earlybird, for their help and guidance in producing this issue of the IEP Journal; and to the IEP Team:
Heather Ette FIEP, Head of Marketing, Communications & Brand, and Taylor Cunningham MIEP, Marketing & Communications Consultant, and also to Jason Measures, Graphic Designer at Deadline Creative, for their work on production and distribution. The Journal would not be created without their professional contributions.
Sincerely,
DAVID IMBER FIEP | Head of the Centre for Employability Excellence
David is a Fellow of the Institute of Employability Professionals, with over 40 years’ experience leading employment programmes in the UK.
He is Head of the IEP’s Centre for Employability Excellence, co-author of the IEP’s Quality Improvement Framework, and Editor-in-Chief of the IEP Journal. His work on training and curricula has contributed to the IEP Level 3 Certificate in Employability Practice. He provided the review of evidence for quality in advice and guidance services as part of the UK matrix Standard.