The Big Debate: Does artificial intelligence pose a threat to children’s services?

Derren Hayes
Wednesday, May 1, 2024

Software providers and councils are incorporating AI technology into children's services systems to streamline functions. Amid rising concerns about AI ethics, experts assess whether the benefits outweigh the risks.

AI systems can find patterns in complex data quicker than traditional techniques. Jintana/Adobe Stock

Panellists

Kevin Yong, head of consultancy, Coram

Kevin has been head of consultancy at Coram for nearly a decade, following a career in project and programme management in the telecoms and public sectors.

His role at Coram focuses on providing improvement support to children’s services and heading up the Coram Innovation Incubator.

Sean Manzi, senior data specialist, Dartington Service Design Lab

Sean is a specialist in advanced data analytics, data science and data visualisation, and joined Dartington last year.

He has nine years' experience in applied health service modelling and data science, and is a fellow of The Healthcare Improvement Studies Institute (THIS Institute) at the University of Cambridge.

Andy Coulter, Teams technical specialist, Microsoft

Andy has worked for Microsoft since 2021 and specialises in the use of generative AI through the Microsoft 365 Copilot system.

He has worked with several councils to develop Copilot systems, including some children’s social care departments through the Coram Innovation Incubator programme.

What aspects of children’s services could be improved through the application of digital systems?

Kevin Yong: “The sector is, rightly, a bit nervous about this. In the first instance, I think we should be looking at how we can use this technology to make life easier for social workers who are supporting children and families. Transcription, summarisation and redaction, for example, are applications of this technology that already exist and could be applied in the sector at relatively low risk.

When a social worker goes on a visit, they’re taking notes, which means they’re not always able to pay full attention to the family and the child they’re dealing with. If the session is transcribed automatically for them, it’s much easier to retain all the key information and give their full attention to the family. The transcript can then be summarised very quickly, which helps with putting together a good-quality case note.

Redaction is a big thing. You’d need to train the system to recognise what confidential information looks like – for example, names, dates and places. Done manually, redaction is a very resource-intensive activity. If technology can automate that process, it could free up social workers to spend time on the important work with children and families.”
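The rule-based end of the redaction idea Yong describes can be sketched very simply. This is purely illustrative: the patterns, names and places below are invented, and a production system would rely on a trained named-entity recognition model rather than hand-written rules like these.

```python
import re

# Hypothetical patterns for illustration only. A real redaction tool
# would use a trained named-entity model, not a fixed word list.
PATTERNS = {
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[NAME]": re.compile(r"\b(?:Aisha Khan|Tom Briggs)\b"),  # names from a known case list
    "[PLACE]": re.compile(r"\b(?:Leeds|York)\b"),            # known place names
}

def redact(text: str) -> str:
    """Replace confidential tokens (names, dates, places) with labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text
```

With these made-up patterns, `redact("Visited Aisha Khan in Leeds on 12/03/2024.")` returns `"Visited [NAME] in [PLACE] on [DATE]."` – the structure of the note survives while the identifying details are masked.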

Do you think it will be limited to administrative tasks or could it be used to enhance relationships between practitioners and children?

KY: “A step up from that is some of the stuff that we’ve seen with North Yorkshire Council. They’ve created the technology to draw a relationship map for the child. That’s normally a manual task that could take hours for someone to wade through all the case notes.

The technology does it almost instantaneously. It’ll draw that relationship map showing all the individuals around the child. That’s important because it tells us who could be helping to keep the child safe, have their wellbeing in mind and support the work of the social workers. At the moment, it takes time and effort to create that diagram, so it very rarely gets updated. It will get done when they’re exploring the family network, but after that, if the child is taken into care, it’s not necessarily updated on a regular basis. If we can have technology that produces this diagram and keeps it always up to date, that would be useful.”
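At its core, a relationship map of the kind North Yorkshire Council generates is a graph built from case notes. The sketch below is a minimal, hypothetical version of that idea – linking any two known people who appear in the same note – and is not a description of the council's actual system.

```python
from collections import defaultdict
from itertools import combinations

def build_relationship_map(case_notes, known_people):
    """Link any two known people mentioned in the same case note.

    Returns an adjacency map: person -> set of connected people.
    Names and notes here are illustrative, not real data.
    """
    graph = defaultdict(set)
    for note in case_notes:
        present = [person for person in known_people if person in note]
        for a, b in combinations(present, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph
```

Because the map is rebuilt from the notes themselves, it stays current as new notes are added – which is exactly the advantage Yong highlights over a hand-drawn diagram that is rarely updated.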

What opportunities are there from AI?

Sean Manzi: “AI is amazing at handling complexity and pattern recognition. It can find patterns in data far more complex than humans can manage using traditional statistical techniques, looking across all the different factors and parameters at once. We can use it to see which things most influence service provision and how to change those to provide better outcomes.

Once you understand which factors are influencing people’s outcomes, you can start building models to predict what those outcomes are going to be. Once you’ve got that model, you start looking at the subtleties within it – gathering more data, retraining the models, refining their accuracy, and looking at how we can better serve the needs of different groups of people.

AI is very good at splitting out socio-demographic variables, looking at the intersections between them, and identifying the underserved populations. This is where we can use AI to better serve those whom services might not be reaching now and ensure really equitable access to services.

Sometimes services are provided to easy-to-reach populations, which biases our data. AI can only see what it has been trained on. We need to look specifically for those gaps and see whether particular populations are being underserved. That requires recognition of those biases and [the] limitations of our data.”

That’s the upside, but what about the risks AI poses?

SM: “If people can’t understand what an AI is doing, and how it’s doing it, they are less likely to trust it. We need a way of showing people what the AI is doing, which has brought about the field of ‘explainable AI’. This is where local authorities need to be informed consumers of AI.

There are lots of different techniques for explainable AI, from text labels that say what different parts of the model are doing, to methods that show the relative influence of each part of the model. There’s quite a lot of work behind making these models explainable, so ensure that a project has the capacity and budget to do it – to make the model understandable for those commissioning it, so it can then be communicated to the public, who are the ultimate beneficiaries.

Innovation in this space happens through openness and sharing – by working together, sharing our approaches and, through open science approaches to replicability, sharing the code itself. If you’re working with companies that won’t do that, then in some ways you need to ask: do they hold the same principles and values as you, are they people you should be working with, or are they just after your money? Transparency helps demonstrate that an organisation building these AI models holds the same principles as those providing social care services, and supports improving the wellbeing and lives of young people.”
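The “relative influence” techniques Manzi mentions can be illustrated with the simplest possible case: in a linear scoring model, each factor’s contribution is just its weight times its value, so a prediction can be decomposed and shown factor by factor. The weights and feature names below are entirely invented for illustration – real explainability work on non-linear models uses far more sophisticated attribution methods.

```python
# Hypothetical weights for a toy linear risk score, invented for
# illustration. Real models and features would differ entirely.
WEIGHTS = {
    "missed_appointments": 0.4,
    "previous_referrals": 0.3,
    "school_absence_rate": 0.3,
}

def explain(features):
    """Break a linear score into per-feature contributions, largest first.

    Each contribution is weight * value, so the factors driving the
    score can be shown to practitioners and commissioners.
    """
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda item: item[1], reverse=True)
```

For one hypothetical case, `explain({"missed_appointments": 2, "previous_referrals": 1, "school_absence_rate": 0.5})` ranks `missed_appointments` as the dominant factor – the kind of readable account of a model’s behaviour that explainable AI aims to provide.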

Are measures needed to ensure there are consistent standards of AI software?

SM: “Currently, there are no standards or regulations around the use of AI. That shows how early we are in the development of this sector. Healthcare and medical science are full of quality standards; they are heavily regulated to make sure no harm comes to anybody. Poor-quality applications of AI have the potential to do harm, because they produce poor-quality information, resulting in bad or wrong decisions.

When working in areas like children’s services, we need to make sure we’re adhering to the highest possible standards and that there is accountability for what’s being produced. At the moment, any company could come along and say, ‘my AI model will help you make a better decision’. But nobody can see inside it if it’s not transparent, if it’s not explainable. What’s the comeback when it’s used to make a decision that leads to providing the wrong type of care?

Organisations are starting to develop their own AI policies, which include value statements and ethical principles that lay out how AI is used, how quality is assured and how standards are maintained, through a transparent process that holds them accountable for what they’re doing.

This is really where organisations can get started – become an informed consumer by understanding what’s needed to develop a really good AI use case. Look at examples from the Turing Institute, the NHS and Gov.uk. Aligning with these is a great starting point for any organisation.”

How big a role can AI play in the future?

Andy Coulter: “I think generative AI is genuinely a once-in-a-generation opportunity, in terms of how we think about the way we work and how we want to leverage technology to make our lives easier.

People don’t join a council to do admin, to work after hours or to take on unreasonably complex tasks. They join to help their community. Generative AI has a fantastic potential to help them in that respect.

For example, when a social worker is meeting a person, we should remove as many barriers as possible so they can have an authentic conversation and ask: what’s going on in your life, and how can I, as a representative of the council, help you?

That can mean using generative AI to transcribe the interaction and make it as easy as possible for the social worker to upload those case notes into the system.

It can also be used to help keep that social worker up to date with the various policies that are continuously improving.

So how do we make sure their knowledge of the help they can provide is as current as possible, without sending them loads of information to read manually? You can use generative AI to get that summary and those insights, and ensure their knowledge is kept as up to date as possible.

Similarly, AI can be used for sorting through your inbox, helping you reply to emails and generating new documents. It’s a huge opportunity.”

Read CYP Now's Special Report on Digital Solutions in Children's Services here.
