By Matt Kingswood FIEP | CEO | The Digital College 

A salesperson at a car garage near me tells an entertaining story about the pitfalls of letting customers test drive cars. They nervously get behind the wheel of a prospective purchase, accidentally set the seat too low and then gently guide it out of the dealership and onto the road, towards a busy crossroads. 

The salesperson used to say “Go straight across the lights”, which customers did without fail – often when they were red. She doesn’t say that anymore. 

Clearly this is a cautionary tale about how assumed intelligence – real or artificial – can get you into a lot of trouble. More relevant to us budding AI users, the tale highlights the risks of being present without truly being in control of the situation. Like many businesses, The Digital College has ambitious plans for AI over the next twelve months, incorporating it across all parts of the organisation. However, early pilot projects have unearthed a lot of troubling questions, and like our salesperson, we realised that we weren’t truly in control. Buckle up, read on and hopefully you can avoid a painful trip through a red light. 

This article will walk you through two ways in which our pilot caused us to question our readiness to let AI take the wheel. The first concern was about the output of the AI – in our case, content generated for courses – and in particular the risk of misinformation and bias. 

AI GENERATED CONTENT: IS THAT YOUR FINAL ANSWER? 

Our pilot team went in well aware of the risks of misinformation. We are all familiar with hallucinations: confidently delivered but incorrect results. 

AI produces the results that are most likely to be correct based on its training, and will authoritatively give its view on anything, no matter how little it knows. For anyone using AI to generate educational content or provide advice, this is troubling. 

The reality is that hallucinations are very much a fact of life given where we are in the development cycle of AI. Both Google and Microsoft have fallen foul of hallucinations in public demonstrations.1,2 

Respected newspapers have published must-read lists of plausible yet non-existent books.3 More concerning for those generating educational content, Amazon’s Kindle Direct Publishing is suspected to have sold AI-generated guides to foraging for edible mushrooms that went against accepted best practice.4 

But our pilot showed that we have to be prepared to tackle a potentially bigger problem. AI models have been trained on large datasets which range from peer-reviewed academic research through to user-generated internet content such as Reddit. And given AI’s somewhat probabilistic approach to inferring answers, there is a risk that the popularity of something in the training data can cloud accuracy. 

IBM recently cited an example in which its researchers asked various AI models to pick a number between 1 and 50. The researchers found that AI will usually choose 27 (although we found Microsoft’s Copilot offered 37). They suggested this is because AI models are trained on a wide variety of historical human-generated sources – and it turns out we aren’t very good at generating random numbers. In fact, we have a bias towards prime numbers and numbers ending in – yes, you guessed it – 7. 
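The experiment above is easy to replicate in miniature: ask a model the same question many times and tally the answers. The sketch below shows the tallying step only; the response values are illustrative stand-ins (chosen to echo the 27/37 anecdote), not real experimental data.

```python
from collections import Counter

# Illustrative stand-in data: what repeated answers to "pick a number
# between 1 and 50" might look like. These are NOT real model outputs.
responses = [27, 27, 37, 27, 17, 27, 37, 27, 7, 27]

counts = Counter(responses)
most_common, freq = counts.most_common(1)[0]

# With a truly uniform choice over 1-50, any single value should appear
# about 2% of the time; a heavy mode like this suggests bias.
share = freq / len(responses)
print(most_common, f"{share:.0%}")  # 27 60%
```

A genuinely random model would spread its answers thinly; a heavily repeated favourite is the signature of the human bias baked into the training data.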

Bias in the datasets used to train AI models is visible in many other forms, and it can conceivably make its way into generated learning content, reducing real-world diversity. For example, image libraries such as Adobe Stock are dominated by people in Western-style dress. More challenging still: how do we correct for centuries of gender and racial bias in the literature these models have been trained on? 

Our first foray into AI and automatically generated content left us with some firm conclusions. 

HEY DUDE, WHERE’S MY DATA? 

Our second area of focus concerned an area AI could well revolutionise in a very short timescale: customer service. In fact, this revolution is well under way. According to HubSpot research,5 77% of customer experience teams already use AI, with positive results: 64% of leaders say AI speeds up the time operators take to resolve tickets. Let the robots answer queries and deal with support calls instantly and at scale – banishing call queues forever. We can then get on with our lives and spend our time sorting out the bigger issues, while on the phone to someone else’s robots. In our pilot it was very easy to unleash the power of AI on our customers (we didn’t) – just sign the form, tick the box and customer service nirvana was at our fingertips. Our customer service software would automatically send responses, by email or chat, and recommend answers for our human agents to use. 

However, we quickly realised that we weren’t in a position to guarantee to a commissioner or a client how their data would be used. We all like to keep our commissioners happy. Personally, I’m very keen to work more with the Ministry of Justice – just not as a customer. 

Firstly, we needed to know what data is being processed. AI improves its answers through the use of contextual information; the more related information you can pass, the better. This becomes especially pertinent in third-party software with access to historical (although time-limited) customer service conversations or additional data sources (perhaps you have linked in customer information). Is any specific information about the learner requiring support shared – perhaps the AI model feels their gender and age are helpful in formulating a response? What additional information is sent with the query to help with context? 

Secondly, we needed to know where the data is being processed. We know our customer service application operates in the UK, but what about the processing of AI queries? This is somewhat more straightforward, as this information is usually available from your software provider as part of their GDPR disclosures. Microsoft, Google and Amazon all offer UK-based AI processing, although you need to check that your provider is flexible enough to support this. (Ours did!) 
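One practical safeguard is to inspect – and strip – personal attributes before a support query ever leaves your systems. The sketch below is hypothetical: the payload shape and field names are invented for illustration and do not reflect any vendor’s actual API.

```python
# Hypothetical support-query payload: the kind of context a customer
# service tool might bundle with a question. Field names are invented.
query_context = {
    "question": "Why has my certificate not arrived?",
    "recent_messages": ["...", "..."],  # prior conversation, for context
    "learner": {"name": "A. Person", "age": 34, "gender": "F"},
}

# Attributes we would not want forwarded to a third-party AI service.
PERSONAL_FIELDS = {"name", "age", "gender"}

def redact(context: dict) -> dict:
    """Return a copy of the payload with personal attributes removed."""
    cleaned = dict(context)
    cleaned["learner"] = {
        k: v
        for k, v in context.get("learner", {}).items()
        if k not in PERSONAL_FIELDS
    }
    return cleaned

safe = redact(query_context)
print(safe["learner"])  # {}
```

The question and conversation history still travel (they are the useful context); what changes is that nothing about the individual learner does unless you have decided it should.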

Thirdly, it is easy to overlook the problem of data retention after processing. It is common for the underlying AI service providers (e.g. Microsoft Azure AI) to retain your prompts (questions) and their responses for up to 30 days to detect and prevent abuse (mainly overuse). This should fall within your data retention policy, but be aware that you will have some data hanging around. 

Finally, we felt the need to ask: is my data used for training? This exposed an extra aspect – who are we asking? Microsoft claims not to use your interactions for training; however, we discovered that our application provider did, although there was the option of an opt-out. But putting the retention of data aside, is this a problem? After all, the data only remains in the model combined with millions of other bits of information. It is a bit like pouring a glass of Coca-Cola into the sea and worrying whether someone can work out the secret recipe. 

Implementing AI responsibly is likely to be a tricky path for many of us. A deep understanding of your suppliers, and their suppliers, is likely to be key, even if it feels like a luxury. The risk may not lie with the suppliers you know best – it may be with the ones you trust most. As we know, things do not stay still in life. The traffic lights by the car garage have been replaced by a roundabout. I’m just waiting for someone to be told to go straight across… 


REFERENCES

  1. https://edition.cnn.com/2023/02/08/tech/google-ai-bard-demo-error 
  2. http://edition.cnn.com/2023/02/14/tech/microsoft-bing-ai-errors 
  3. https://www.npr.org/2025/05/20/nx-s1-5405022/fake-summer-reading-list-ai 
  4. https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai 
  5. https://blog.hubspot.com/service/ai-in-customer-service 

  6. https://privacy.anthropic.com/en/articles/7996866-how-long-do-you-store-my-organization-s-data 


ABOUT THE AUTHOR

MATT KINGSWOOD FIEP | CEO | The Digital College 

Matt has been CEO of The Digital College, an online vocational training specialist, for approaching ten years. During that time the organisation has helped provide training to enable tens of thousands of people to enter new careers or progress in their roles. 

Matt is a passionate promoter of e-learning as a means to help people achieve their potential in the workplace, with a special focus on making vocation-based learning available to those with barriers that impede traditional learning. As a result, The Digital College has a mission to provide learning that can reach people with low confidence in their learning capabilities, limited access to the internet and computers, and other barriers such as English language ability. 

Matt also helps co-ordinate two entrepreneur networks to support people starting up and growing their own businesses. He has an MA from Cambridge University and an MBA from The Wharton School, University of Pennsylvania.
