
By Dr Tom Staunton, Senior Lecturer in Career Development at the International Centre for Guidance Studies, University of Derby
It’s always essential to clearly outline your terms. How will we approach ethics in this context? Well, we could think about ethics in terms of duties: the behaviours that delivery involves, in line with codes of conduct.
Or in terms of consequences: do we think the outcomes of these systems are ethical? These frames would make us ask what ‘duties’ AI systems carry out, or what the consequences of those duties are.
What impact would AI completing action plans, or acting as a triage mechanism to ‘screen’ clients before they see an advisor, have on practice? How would we feel about AI sanctioning benefit claimants for not attending an appointment? Can a system which has never attended a job interview prepare a client at risk of depression for attending one? Though AI may have merits, I think the humanistic traditions in counselling, education and public services more generally point towards a more relational ethics.
Relational ethics makes central the human orientation towards compassion and care, which is viewed positively across histories and cultures. It treats ethics as something which occurs between people when the right qualities of a relationship exist. Carol Gilligan (1982) discusses a care-based approach to ethics which centres on relationships and responsibilities, and in particular on responding to each other’s needs as the heart of ethical practice.
Nel Noddings (2012) makes similar points, arguing for a relational approach to care which involves ‘engrossment’ (giving attention to the other) and ‘motivational displacement’ (the carer’s motives moving toward the other’s needs). These theories ground ethics as emerging from the relationships that develop in professional contexts. They align closely with the humanistic traditions developed by Rogers (1979) and, in the careers sector, by writers such as Savickas (2013) and Cochran (1997). Rather than simply picking an ethical tradition that I prefer, I believe a focus on a relational ethics of care fits best within the career guidance tradition and aligns with our existing ethical frameworks.
For the rest of this piece, I am going to use the ideas found in an ethics of care (as described above) to ask if AI can deliver services in line with the Career Development Institute’s (CDI’s) ethical standards (https://www.thecdi.net/about-us/cdi-code-of-ethics). Using this approach focuses on whether AI is capable of developing the sorts of relationships contained in the CDI code of ethics. As well as referring to ethical frameworks commonly used in the sector this also looks at how ethical issues are currently regulated and what issues are commonly understood from the wider career guidance community (which the CDI represents).
Transparency and Building Trust
Central to an ethics of care is being open and building trust; relationships are built on transparency. Rogers, for example, talked about being congruent in one-to-one work, avoiding having hidden agendas or hiding your ways of work. In contrast, many AI systems operate as ‘black boxes’ where even developers can’t fully explain why a particular recommendation was made. The CDI standard requires gaining trust through openness about approach and methodology – but if the AI itself can’t explain its reasoning in human terms, this becomes problematic.
Competence and Professional Boundaries
AI doesn’t have training, qualifications, or expertise in the traditional sense. It can’t engage in reflective practice or recognise when it’s operating outside its competence. This is similarly central to an open and honest relationship. The system also can’t make judgments about when a client needs to be referred to another professional (such as a mental health counsellor) in the nuanced way a human practitioner can, especially when that judgment is driven by a desire to seek the best for someone, which can stop you from giving advice or support you are not confident in. AI, in actuality, is never confident or unconfident in its views; it is ambivalent towards them, just as it is ultimately ambivalent towards the client it is seeking to support. This means it will not approach complex situations with the same relational commitments as a trained professional.
Duty of Care and Client-Centred Approach
True client-centred work requires empathy, emotional intelligence, and the ability to adapt to individual circumstances in real time. It is this nuance that underpins relational care: working through the specifics of the relationship you have developed with your client. A recent investigation by the BBC has shown that AI will happily share health misinformation or even advise clients how to commit suicide (https://www.bbc.co.uk/news/articles/cp3x71pv1qno). This does not mean that AI can have no place, but we need to hold onto the ethical imperative of placing care at the centre of career work.
Now, AI models may well be trained not to do this in the future, but this partly reveals that an ambivalent system like AI cannot be ‘trained’ to care; it can only be designed to mimic caring. This means it will always be limited in how it relates to humans, and it will not have the same approach to duty of care as a professional. AI might miss subtle cues about a client’s emotional state, mental health concerns, or unstated needs. Even if it can learn to recognise patterns, it cannot draw on its own lived emotional experience to empathise with humans. It also can’t take moral responsibility for advice given – there’s an accountability gap when things go wrong, which we will discuss below.
Accountability
Who is accountable when AI gives poor or harmful advice? The developer? The organisation deploying it? The AI itself (which has no legal personhood)? The CDI’s standard requires practitioners to ‘submit themselves to appropriate scrutiny’ – but AI systems can’t defend their decisions or learn from complaints in the way humans can. This again points towards AI’s relational ambivalence: to care must also mean to be accountable. It means recognising that you have moral responsibilities, and that you are part of a social contract in which you might have to stand by your actions. AI is ambivalent, and so, unsurprisingly, is not personally accountable in the way that a person is.
Conclusion
I have argued in this piece that an ethics of care, and by extension the CDI’s code of ethics, requires the preservation of person-to-person career guidance as the core of our delivery. This is ultimately about the sorts of relationships people want and deserve to have with services, and what it means to be professional in evolving situations.
References
Cochran, L. (1997). Career Counseling: A narrative approach. Sage Publications.
Gilligan, C. (1982). In a Different Voice: Psychological theory and women’s development. Harvard University Press.
Noddings, N. (2012). The language of care ethics. Knowledge Quest, 40(5), 52.
Rogers, C. R. (1979). The foundations of the person-centered approach. Education, 100(2), 98-107.
Savickas, M. L. (2013). Career construction theory and practice. Career development and counseling: Putting theory and research to work, 2(1), 144-180.