Features

AI in children's services: key policy developments

7 min read | Children's Services
The government and local authorities are forging ahead with using artificial intelligence in children's services.
Quality of data is a key risk to AI producing reliable outcomes. Image: Vectormine/AdobeStock

WHITEHALL AI ACTION PLAN

Barely a week goes by without a government minister taking to the airwaves to talk up the potential of artificial intelligence (AI) to make the public sector more efficient and effective. The Prime Minister recently cited potential savings of £40bn from AI taking on many of the administrative tasks in the civil service. In truth, the role of AI has been gradually expanding across Whitehall for a decade from automated call handling of government helplines through to online calculations of tax and benefits.

The AI Opportunities Action Plan: Ramping up AI adoption across the UK, an independent report for the Department for Science, Innovation and Technology published in January, sets out how AI sits at the heart of ministers' plans to boost economic prosperity and the role the government and public sector can play in that.

The action plan makes clear that high-quality data is crucial to AI development and puts forward measures to improve access to public datasets, which it says could be highly valuable and will be helped by the creation of the National Data Library. It recommends that the government build public sector data collection infrastructure and finance the creation of new high-value datasets that meet the needs of the public sector, academia and startups. In addition, government should identify how public data will be collected and its quality enhanced, including using AI-driven data cleansing tools to curate datasets stored across government, making them suitable for AI developers and researchers.

The action plan calls for the government to “embrace” AI to deliver its economic plans.

“While there are instances of AI being used well across the public sector, often they are at small scale and in silos,” it states. “Scaling these successes is essential, but will require us to think differently about procurement, especially if this activity is to support the domestic startup and innovation ecosystem.”

It adds that Whitehall must “support public sector partners where needed to move fast and learn things” and recommends employing a flexible “scan, pilot, scale” approach to AI development that incorporates:

  • Scanning the horizon for AI opportunities for every project
  • Rapidly developing prototypes to pilot in high-impact areas
  • Identifying successful pilots and rolling them out across boundaries.

The action plan says the government should publish best-practice guidance, results, case studies and open-source solutions through an “AI Knowledge Hub” – a single place to access frameworks and insights.

PUBLIC SECTOR ADOPTION

A 2024 review by the National Audit Office (NAO) assessed the progress of government departments and public bodies in adopting AI into working practices since the launch of the 2021 AI National Strategy.

The NAO found that 37% of government bodies responding to its survey had deployed AI, while a further 37% were piloting or planning its use. The most common purposes of deployed AI were to support operational decision-making or improve internal processes.

A total of 21% of respondents had a strategy for AI in their organisation, while a further 61% had plans to develop one.

It concluded that development and deployment of AI in government bodies is at an early stage, an unsurprising situation considering the 2021 national strategy “lacked a coherent plan to support adoption of AI in the public sector”.

The NAO also found that oversight and governance arrangements are at an early stage of development. While most of the bodies with deployed AI that responded to the survey always or usually had a named accountable owner for their AI use cases, fewer than half of bodies with deployed AI said that AI use cases were always or usually identified at an organisational level before deployment.

Reflecting the early stage that government bodies are at in adopting AI, only 30% of all survey respondents reported that they had risk and quality assurance processes that explicitly incorporated AI risks, although a further 46% had plans to put these in place.

The Department for Science, Innovation and Technology is developing tools to embed AI assurance into public procurement frameworks.

Departments identified a lack of AI skills as a key barrier to adoption of AI in government. The NAO survey found that difficulty recruiting or retaining staff with AI skills was one of the most common barriers to AI adoption, identified by 70% of respondents.

The NAO concluded that while central government has identified the potential for large-scale productivity gains from AI use in the public sector it has yet to assess the feasibility or cost of delivering these improvements.

Updating legacy systems and improving data quality and access is fundamental to exploiting AI opportunities but will take time to implement, it added. In addition, government standards and guidance to support responsible and safe adoption of AI are still under development.

COUNCIL EARLY ADOPTERS

Last year, the Local Government Association surveyed English councils about their AI readiness, including consideration of governance arrangements, policies in place and other approaches to ensure responsible deployment, as well as AI adoption, benefits and opportunities, barriers and risks, and support requirements.

Most respondents (85%) reported that they were using or exploring AI: half were at the beginning of their AI journey, 16% were developing their AI capacity and capabilities, 14% were making some use of AI, while 4% were innovating and considered leaders among councils in AI use.

Among respondents who were using or exploring AI, the most commonly adopted type was generative AI (systems that generate text or images), used by 70%. This was followed by perceptive AI (systems that recognise faces or analyse images, audio or video), adopted by 29%, and predictive AI (systems that try to predict an outcome), used by 22%.

In terms of functions using AI, children's social care was the fourth most common, cited by 31% of councils, with a wide range of uses; adult social care (35%) and administration and finance were the functions most widely using AI. Children's social care was also cited by 31% of respondents as the area with the greatest potential for using AI (see graphics).

The areas where most respondents had realised benefits from using AI were staff productivity (35%), service efficiencies (32%) and cost savings (22%).

The five biggest barriers to deploying AI identified by respondents were shortages of funding (64%), staff capabilities (53%) and staff capacity (50%), and a lack of sufficient governance (including AI policy) and of clear case studies (both 41%).

The issues most commonly considered to represent an AI risk were cyber security (81%), organisational reputation and resident trust (75%) and deep fake disinformation (69%). Meanwhile, two-thirds of respondents were using their existing policies to manage AI risk.

POTENTIAL PITFALLS

The NAO's 2024 report noted a government estimate that 80% of the time spent on an AI project goes on cleaning, standardising and making data fit for purpose.

Poor data quality is a key risk to AI delivering reliable and useable outcomes – particularly so for generative AI, which relies on existing sources on the internet to generate answers to people's questions. Experts highlight how using current datasets could risk AI baking in existing biases – for example, in demographic data – and excluding under-represented groups who may engage less with services.

While advances in AI are full of potential, they also come with challenges, explains Sean Manzi, senior researcher and data specialist at Dartington Service Design Lab. “We need an AI-informed and capable workforce who are confident in the use and opportunities of AI, alongside its outputs,” he writes. “The impact of AI in your organisation will be driven by the values of those building and those using the AI, to ensure accountability and responsibility when applying it to practice.”

This should be underpinned by “strong guiding ethical principles” to ensure use of AI promotes equality, wellbeing and justice, Manzi adds. Dartington has developed eight ethical principles that children's services organisations should adopt when developing an AI policy or using it in work. They are:

  • Fairness – ensuring AI is not biased
  • Improvement-focused – AI is used to produce positive change
  • Sustainability – AI can be continuously developed and re-used
  • Supportiveness – AI supports human decision-making rather than replacing it
  • Empowerment – AI is informed by and understood by those it seeks to help
  • Transparency – how the AI model was constructed and its output derived can be understood by all
  • Responsibility – AI is used for the public good
  • Accountability – AI and its use is replicable and can be interrogated.

Manzi says that using open and collaborative practices “can help ensure that AI is used to enhance our society in an ethical and just way”. Adopting such an approach will be crucial in ensuring AI is beneficial to children's services.

--

EXPERT VIEW: ENSURE ETHICS AND CHILD-CENTRED PRACTICE ARE CENTRAL TO OUR AI FUTURE

Ellie Haworth, head of children's services transformation and improvement, The Social Care Institute for Excellence

We are on the cusp of a brave new world.

A lot of ink is being spilt at the moment on the promise of artificial intelligence (AI), large language models and digital tools. I am so pleased that this discourse includes both children's and adults' social care. It is too easy to assume that we are people services and therefore the technological improvements don't apply to us.

But how do we think AI will play out in the day-to-day world of children's social care? I think that we can see several prospects:

  • Early help and prevention
  • Children's support
  • Time-saving and resource efficiencies
  • Communication tools
  • Planning and predictive power.

Each of these has value in its own right and is worthy of consideration. Some are easier than others for us to get our professional heads around. We can see a straightforward value in the tools that allow us to write up our notes swiftly and save time that we could spend directly with children and their families. But others are thornier – the predictive power of AI is potentially its greatest strength, but we are going to need the ethics to be rock solid on this.

We will need the digital tools and the safeguards to match, specifically:

  • Prevention materials must be trustworthy, based on credible evidence, and get the right help to people.
  • Time saving tools must not encourage us to take short cuts and human oversight must be continuous and rigorous.
  • Communication tools will have to be authenticated so that they are secure and not open to abuse.
  • Predictive powers must allow us to avoid problems without stigmatising people.

This means we are going to have to take our strengths into the future: our ethics, our child-centred practice and our care. The AI future will also be the stronger practice future. All of us who work in social care know that time and tide wait for no man; we know that you cannot fight the inevitable. But we also know there will be continuities from the past that retain their value. There does not need to be competition here and, if we are reflexive as we adapt to the new technologies, we will find ways that our practice sensibilities can be preserved whilst embracing the new.

FURTHER READING

AI Opportunities Action Plan, HM Government, January 2025

State of the sector – AI, Local Government Association, June 2024

The AI revolution in children's services, Sean Manzi, Dartington Service Design Lab, May 2024

Use of AI in government, National Audit Office, March 2024
