
5/12/2025
In collaboration with Altitude AI Consulting
If you’ve worked in the employability sector for more than five minutes, you already know: nothing moves fast when vulnerable people and sensitive information are involved.
This is a profession built on trust, stability, and doing things properly. So when someone suggests bringing “AI” into the equation, most organisations instinctively tighten their safeguarding lanyards and lean back in their chairs.
Contrary to popular opinion, that reaction isn’t outdated. It’s not fear of progress. It’s not resistance to innovation.
It’s professional instinct. Hard-earned. Well-practised. Usually correct.
And the public evidence backs this up.
The Public-Domain Warnings Are Already There
Long before AI entered the boardroom agenda, organisations were grappling with the simplest question of all: “Where exactly is our data going?”
Even without AI, the ICO has had no shortage of cases involving councils, schools, and charities accidentally leaking information about people who should never have been identifiable in that context.
Add AI to the mix, and the potential for accidental exposure grows legs.
Take a few examples already in the public domain:
Samsung’s “Oops, We Just Uploaded the Source Code” Moment
When engineers pasted proprietary code into a public AI tool, the company realised the hard way that “convenient” doesn’t always mean “controlled.” A small mistake, massive implications.
Schools Using AI for Safeguarding Notes
Several schools — desperate to reduce admin — began using AI tools to help summarise pupil reports. Some of those reports contained safeguarding details. No malicious intent… just a lack of understanding of where the text actually goes once it leaves the screen.
Charities Experimenting With Case Note Summaries
Frontline workers typed (or pasted) case notes into free AI tools, looking for efficiency. But case notes, even anonymised, have fingerprints: housing issues, immigration details, mental health indicators. It doesn’t take much to piece together who’s who.
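To make that “fingerprints” point concrete, here is a deliberately toy Python sketch. The records, field names, and values are all invented for illustration; nothing here comes from real case notes. It simply shows how quickly a few innocuous traits shrink the crowd someone can hide in:

```python
# Toy illustration with invented records: no real people, no real data.
records = [
    {"area": "NE4", "housing": "temporary", "immigration": "pending", "age_band": "30-39"},
    {"area": "NE4", "housing": "settled",   "immigration": "citizen", "age_band": "30-39"},
    {"area": "NE6", "housing": "temporary", "immigration": "pending", "age_band": "50-59"},
    {"area": "NE4", "housing": "temporary", "immigration": "citizen", "age_band": "20-29"},
]

def count_matching(people, **traits):
    """How many records share this exact combination of traits?"""
    return sum(all(p[k] == v for k, v in traits.items()) for p in people)

# Each extra "harmless" detail narrows the field.
print(count_matching(records, area="NE4"))                       # 3 people
print(count_matching(records, area="NE4", housing="temporary"))  # 2 people
print(count_matching(records, area="NE4", housing="temporary",
                     immigration="pending"))                     # 1 person: re-identified
```

Three ordinary details, one individual. Scale that up to real case notes and the risk speaks for itself.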
In every example, the mistake wasn’t the AI tool. It was the assumption that it behaved like a private notebook when, in reality, it behaved more like a public warehouse.
A Sector-Specific Truth: AI is the World’s Most Capable Rookie
Think of it like this:
AI is the most brilliant new colleague you’ve ever onboarded. It learns at lightning speed. It never gets tired. It can summarise three years’ worth of case notes in the time it takes you to microwave a jacket potato.
But it also has:
- No common sense
- No understanding of confidentiality
- Zero awareness of safeguarding
- Absolutely no idea what is “too sensitive”
- No context for multi-agency boundaries
- No built-in GDPR whisper telling it to stop
If you let it loose on your organisation without structure, it’s not malicious. It’s just eager, fast, and thoroughly untrained.
Giving it free rein is the digital equivalent of handing your entire client caseload to a very enthusiastic intern who hasn’t yet completed their induction.
The Other Analogy: AI is a Taxi Whose Route You Can’t See
Using a public AI model for sensitive information is a bit like handing a sealed envelope to a taxi driver.
Sure, the driver gets it from A to B brilliantly. But along the way:
- You don’t know the route
- You don’t know who else gets in the car
- You don’t know where the ride data is stored
- You don’t know whether the GPS logs are kept
- You don’t know what happens to the envelope if it’s scanned or photographed
It’s efficient, yes. It’s also wildly unpredictable if you’re handling information that must remain within strict boundaries.
Employability providers can’t gamble with that.
The real issue isn’t AI capability — it’s architectural ambiguity
Behind every hesitation about AI lies a set of questions that are as old as digital transformation itself:
- What system receives the data once it’s entered?
- Does it stay there?
- Can the model learn from it?
- Where is the server physically located?
- Is the data encrypted?
- Does it get used to train anything else?
- How do we audit an AI tool the same way we audit a human decision-maker?
- How do we stop staff using consumer tools out of convenience?
These are not unreasonable questions. They’re sensible, adult, governance-focused questions — the kind of questions that prevent headlines.
And until organisations get clear answers, caution will remain the most responsible posture.
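None of these questions needs an exotic answer, and it helps to see how small a first step can be. As one hedged illustration of the audit question, here is a minimal Python sketch of a gateway that records who sent what, where, and when, before anything leaves the organisation. The log file and the send_to_model() forwarding step are hypothetical placeholders, not any specific product’s API:

```python
# Minimal sketch of an auditable AI gateway. The audit file and the
# send_to_model() call are hypothetical placeholders, not a real API.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def audited_prompt(user_id: str, prompt: str, destination: str) -> None:
    """Record who sent what, where, and when, before anything leaves."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "who": user_id,
        "where": destination,
        # Log a fingerprint of the text, not the text itself, so the
        # audit trail never becomes a second copy of sensitive notes.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }
    logging.info(json.dumps(entry))
    # send_to_model(prompt, destination)  # hypothetical forwarding step

audited_prompt("adviser-042", "Summarise this week's caseload themes", "internal-model")
```

The point is not the code; it is that auditability can be designed in from day one rather than bolted on after the first incident.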
A sector built on care deserves clarity, not accidental complexity
The employability sector is unique.
You’re not just processing data. You’re processing the stories of people whose lives can change if information is used incorrectly.
People whose trust in the system — or lack of it — can affect their wellbeing, employment outcomes, and long-term progress.
AI can absolutely support this work. It can reduce admin. It can surface insights. It can modernise services.
But only when introduced with:
- Clear boundaries
- Transparent data flows
- Private, secure environments
- Sensible governance
- Training for staff
- And an understanding of what should never be placed into an AI tool
Practitioners already know how careful they must be in person. AI demands exactly the same caution, written into the systems themselves.
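As a hedged illustration of what that coded caution might look like, here is a minimal Python sketch of a pre-submission screen: it redacts anything resembling an obvious UK identifier before text can travel any further. The patterns are illustrative only and would need proper review, testing, and policy behind them in any real deployment:

```python
# Minimal sketch of "coded caution": screen text for obvious UK
# identifiers before it can be sent to any AI tool. Patterns are
# illustrative only, not a complete or production-ready filter.
import re

PATTERNS = {
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ni_number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    "uk_phone":  re.compile(r"(?:\+44\s?|\b0)\d{4}\s?\d{6}\b"),
}

def screen(text: str) -> str:
    """Redact anything that looks like a direct identifier."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(screen("Call Jo on 07700 900123 or jo@example.org, NI AB 12 34 56 C"))
# Call Jo on [REDACTED uk_phone] or [REDACTED email], NI [REDACTED ni_number]
```

A filter like this is not a substitute for judgement; it is a seatbelt, there for the day someone pastes faster than they think.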
Caution isn’t a barrier to AI. It’s the foundation.
The organisations that succeed with AI won’t be the ones that adopt it first. They’ll be the ones that apply the same diligence they use for safeguarding, case management, or data sharing across partnerships.
Being careful isn’t outdated. It’s leadership.
And in a sector where confidential information can impact real lives, careful is exactly where responsible AI adoption begins.
Next: Episode 2
Where does your data actually go when you enter it into an AI system?
The answer is surprisingly simple… and not always what organisations assume.