by Boris Bambo | Co-Founder & CTO | Earlybird 

Every few years, new technologies arrive that promise to change the world. Some live up to that promise. For example, the internet reshaped how we communicate and do business. Smartphones put computing power into the hands of everyone. Cloud services made scale and flexibility possible for even small organisations. 

Others, however, never quite reached their potential. Virtual reality, 3D printing, and blockchain were all predicted to transform daily life, but most have stayed limited to niche uses. 

Artificial Intelligence (AI) is the latest of these. Unlike some earlier trendy technologies, its momentum looks unstoppable, and it is already showing real, practical benefits across sectors, even in employability, which is often slower to adopt new technology. 

Many are calling 2025 “the year of the AI agent,” and this could represent the biggest change for employment support since the arrival of Customer Relationship Management (CRM) systems (IBM, 2025). 

WHAT IS AN AI AGENT? 

To understand AI agents, it helps to start with Large Language Models (LLMs). An LLM is a system trained on huge amounts of text, image, or even video data. It can generate responses, answer questions, and summarise information. Think of it as a machine that has absorbed patterns from data, so it can recognise patterns and produce human-like output. 

But LLMs on their own are limited. They can only respond based on what they have seen in their training data. 

An AI agent is an evolution of the LLM. Instead of just generating answers, it can also act. 

Agents are LLMs that have been given tools that allow them to go beyond their training data. For example, they might search live job boards, fill in online forms, or update records in other systems. 

Agents do this by combining reasoning loops with tool use. A single agent can break down a task into steps, call the right tools or systems (such as job board APIs (Application Programming Interfaces) or CV templates), and then stitch the results together into something useful. This orchestration is what makes them more powerful than static chatbots and vanilla LLMs. 
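To make this concrete, here is a toy sketch of that orchestration in Python. The tool names (`search_jobs`, `draft_cv`) and the two-step plan are hypothetical stand-ins for real integrations such as a job-board API or a CV template service, and the "plan" is hard-coded where a real agent would let the LLM decide each step.

```python
# Toy sketch of an agent loop: each step calls a tool, and the results
# are stitched together into something useful.

def search_jobs(query):
    # Stand-in for a real job-board API call.
    return [f"Vacancy for '{query}' #1", f"Vacancy for '{query}' #2"]

def draft_cv(name, vacancies):
    # Stand-in for filling a CV template.
    return f"CV for {name}, tailored to {len(vacancies)} vacancies"

# The agent's tool registry: the set of actions it can choose from.
TOOLS = {"search_jobs": search_jobs, "draft_cv": draft_cv}

def run_agent(participant, goal):
    """Break the task into steps, call the right tools, combine results.
    In a real agent an LLM would decide these steps; here they are fixed."""
    vacancies = TOOLS["search_jobs"](goal)          # step 1: find vacancies
    cv = TOOLS["draft_cv"](participant, vacancies)  # step 2: tailor a CV
    return {"vacancies": vacancies, "cv": cv}

result = run_agent("Sam", "warehouse roles")
print(result["cv"])
```

The key design point is the tool registry: the agent can only act through tools it has been explicitly given, which is also where the guardrails discussed later in this article attach.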

Increasingly, the word “AI” will become synonymous with “AI agents.” The distinction is still important, though, because models and agents are useful in different ways. Models give us information. Agents go further and can act. 

A good example is ChatGPT itself. It began as a model that only responded with text based on its training data. But today, it can browse the web, generate images, and plug into other services. In this sense, it has moved from being an AI model to being an AI agent. 

WHY THIS MATTERS NOW 

Even tools like CRMs took years to become standard in the employability sector. But the pressures on providers are growing. Advisors carry heavy caseloads. Budgets are tight. Participants expect more flexible and personal support. 

At the same time, this new AI technology is developing quickly, its costs are falling, and other industries are already seeing clear gains in productivity and service quality. 

Studies show productivity lifts of 20–70% in areas like customer service, software development, and document processing when AI tools are introduced. In some cases, tasks that once took hours are being done in minutes (Nielsen, J., Nielsen Norman Group, 2023). 

All of these gains map directly onto key challenges in the sector: time, scale, and resource constraints. 

For employability, the opportunity is clear. AI agents align directly with the sector’s biggest needs: more time for advisors, the ability to scale across programmes and geographies, and better use of limited resources. 

WHERE AI AGENTS CAN HELP 

I believe AI agents can make a significant impact on three fronts. 

Cutting Across Systems 

One of the biggest challenges for providers is the number of systems they have to work with. A single contract may require staff to use a commissioner’s portal alongside their own internal CRM. Add multiple contracts from different commissioners, and the picture gets even messier. The result is duplication, manual data entry, and wasted time that could otherwise be spent with participants. 

AI agents can reduce or eliminate this friction via API-less integrations. Instead of waiting for formal integrations to be built (which are costly, slow, and often never happen), an agent can act as the “glue” between systems. It can log into an internal CRM, extract the required data, and update the commissioner’s portal without the advisor needing to copy and paste the same information multiple times. 
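The shape of that "glue" role can be sketched in a few lines. In this toy illustration the internal CRM and the commissioner's portal are simulated as dictionaries, and the field names and participant ID are invented; a real agent would instead drive each system's web UI or API, but the core job is the same: read once, write everywhere it is required.

```python
# Minimal sketch of an agent acting as glue between two systems that
# have no formal integration. Both systems are simulated as dicts here;
# a real agent would log into each system's UI instead.

internal_crm = {
    "P-1001": {"name": "Sam", "status": "Started placement", "hours": 16},
}

commissioner_portal = {}  # no record of P-1001 yet

def sync_record(participant_id):
    """Read a record from the internal CRM and mirror only the fields
    the commissioner's portal requires, avoiding manual re-entry."""
    record = internal_crm[participant_id]
    required_fields = ("name", "status")  # the portal only needs these
    commissioner_portal[participant_id] = {
        field: record[field] for field in required_fields
    }
    return commissioner_portal[participant_id]

sync_record("P-1001")
print(commissioner_portal)
```

Note that the agent copies only the fields the portal actually requires, which matters for the data-minimisation and compliance concerns discussed later.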

For a sector that lives with multiple contracts and multiple reporting demands, this capability is huge. It means compliance can be met without draining staff time. 

Delivery 

Another area is delivery itself, where advisors spend much of their time on routine but necessary tasks. These include writing CVs, searching for vacancies, and running diagnostic assessments. AI agents can take on parts of this workload, for example by drafting a first-pass CV, shortlisting relevant vacancies, or carrying out an initial assessment. 

On the Participant Side 

AI agents also have the ability to work directly with participants to reduce barriers and provide continuous support, for example by answering questions at any hour, helping to draft emails and applications, or rehearsing interview answers. 

The bigger advantage is what happens after participants re-enter the workforce.  

By using AI agents throughout their journey, they build “AI fluency”, which is the ability to apply these tools for themselves and extract value from them. Someone who has used AI to draft emails, summarise documents, and prepare interview answers during their programme will carry those skills into the workplace.  

I have no doubt that participants who return to work with these experiences will be far more productive and confident from day one. 

What We Are Already Seeing 

At Earlybird, we are already using AI agents to engage participants at various stages of their programme, from onboarding to collecting feedback. The results have been eye-opening. 

Up to 40% of these interactions happen outside regular office hours, between 6 pm and 8 am and on weekends. 

This shows two important things: 

  1. Participants want and need support outside the 9–5 window. 
  2. AI can step in to fill those gaps without adding pressure on case workers. 

The key point to remember is that agents strengthen both sides of the equation. They give providers more productivity and scale, while also making participants more capable and confident as they re-enter the workforce.  

In other words, they improve not just the pathway into work, but also the performance once in work. 

SECURITY AND COMPLIANCE RISKS 

The opportunities are clear, but there are also risks we cannot ignore. Providers in this sector work with sensitive participant data, strict commissioner rules, and multiple reporting systems. 

AI agents add new value, but they also introduce new responsibilities. 

When an agent acts on behalf of a user, for example, updating records in a CRM, it needs access rights. Without the right guardrails, this creates obvious risks. An agent could access the wrong data or take actions that were not intended. The way to manage this is through clear governance: role-based permissions to restrict what an agent can do, ethical reviews, randomised controlled trials, and audit trails so every action is logged and reversible. 
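Two of those guardrails, role-based permissions and an audit trail, are simple to express in code. The sketch below is illustrative only: the role names, actions, and record ID are invented, and a production system would use a proper identity provider and tamper-evident logging rather than an in-memory list.

```python
import datetime

# Sketch of two guardrails: role-based permissions restrict what an
# agent may do, and every attempt (allowed or not) lands in an audit
# trail so actions can be reviewed and, where needed, reversed.

ROLE_PERMISSIONS = {
    "reporting_agent": {"read_record", "update_portal"},
    "cv_agent": {"read_record", "draft_cv"},
}

audit_trail = []  # in production: append-only, tamper-evident storage

def perform(agent_role, action, target):
    """Allow an action only if the agent's role permits it; log either way."""
    allowed = action in ROLE_PERMISSIONS.get(agent_role, set())
    audit_trail.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "role": agent_role,
        "action": action,
        "target": target,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent_role} may not perform {action}")
    return f"{action} on {target} done"

perform("reporting_agent", "update_portal", "P-1001")   # permitted
# perform("cv_agent", "update_portal", "P-1001")        # would raise
```

Because denied attempts are logged as well as permitted ones, the audit trail doubles as an early-warning signal that an agent is being asked to do something outside its remit.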

There is also the question of compliance and data sovereignty. If AI agents start moving data between systems, providers must be confident that the right standards are being followed. The challenge grows when we consider the infrastructure behind AI. Most models today run on global cloud platforms. For commissioners, this is a serious concern. Providers need assurance that participant data stays within approved jurisdictions. Local deployment options and sovereign cloud setups could be essential if AI agents are to be adopted at scale. 

BALANCING INNOVATION AND TRUST 

Even if the security and compliance issues are addressed, there are still challenges in how AI agents are adopted. One risk is over-regulation. If commissioners and providers apply rules that are too restrictive, the sector may block innovation before it has a chance to prove its value. On the other hand, too little oversight could expose participants and providers to real harm. The balance has to be proportionate: guardrails that protect privacy and compliance, but enough freedom to experiment and learn. 

Trust is another critical factor. For many, AI agents can feel like “black boxes.” If advisors do not understand why an agent has made a recommendation, or if participants feel the system is acting without their knowledge, confidence will drop. Transparency is the key. Agents should make their actions visible and explainable. Simple design choices, such as showing the steps an agent has taken, providing logs of its activity, or offering an “explain this recommendation” feature, help build trust for both advisors and participants. 

Adoption will also depend on culture and readiness. Frontline staff need to feel that agents are supporting them, not monitoring or replacing them. Participants need to see agents as tools they can use, not as gatekeepers. This requires careful communication and training alongside the technology itself. 

With the right balance of governance, transparency, and support, AI agents can be introduced in a way that builds trust and accelerates adoption across the sector. 

CONCLUSION 

All of this is to say: AI agents are not a distant idea. They are already here, showing results in employability. The potential is clear: agents can save advisors time, reduce barriers for participants, and build new skills that make people more confident in the workplace. 

But potential alone is not enough. The sector needs to act. That means testing AI and agents in real delivery settings, building the governance to keep them secure and compliant, and giving staff and participants the training to use them well. 

The employability sector has always been about helping people move forward. AI agents give us new tools to do that at scale and with greater impact. The challenge now is to adopt them responsibly and quickly, so that providers, advisors, and participants can all share in the benefits. 


ABOUT THE AUTHOR

BORIS BAMBO  | Co-Founder & CTO | Earlybird 

Boris Bambo, CTO and Co-Founder of Earlybird, is a seasoned technology leader dedicating his career to using innovation to create a positive societal impact. 

Specialising in AI, machine learning and data science, he is motivated by the idea that work should be a source of happiness and self-expression, not just a means of survival. 

Boris co-founded Earlybird to build AI-driven solutions that streamline support services and personalise learning experiences, striving to give every individual the tools they need to achieve their career potential. 
