As the death toll from Covid-19 has continued to rise, I have become increasingly despondent about the government's use of information, and about the media's reluctance to challenge it on behalf of the population on the receiving end of flawed decision-making. Sadly, none of this is new. We have become used to government agencies such as Ofsted presenting professional opinion as if it were immutable fact; we have endured the reduction of the most complex and far-reaching policy issues to three-word slogans rather than the nuanced debate they deserve. Most recently, we have sat and watched daily briefings in which incomplete and inaccurate data are cynically sold to us to underpin failing policy – with disastrous consequences.
The response of children’s services over the past couple of months has been phenomenal. While the impact of Covid-19 on the families and children supported by the sector has been far less severe than that experienced by our colleagues in adult social care, staff and carers in settings throughout the country have put their health at risk to keep core services functioning. As we start to plan for a period in which the Covid-19 risk can be better managed, ideas are beginning to emerge about how things could be done differently.
Local government is talking about the need to gain better access to national datasets, drive up the use of local datasets and find new approaches to improving outcomes. The use of predictive analytics to help focus services on those in greatest need is growing, as is algorithm-based decision-making to help identify which families to support, and with what interventions. This should not be dismissed, but, as some experts have already highlighted, the adoption of machine learning in the complex world of children’s services needs to be subject to full and open debate, and must acknowledge the quality and limits of the information being used.
Modelling that is based largely on children’s social care information presents at least two big risks: first, we know the data are of poor quality, inconsistent, and were collected for a different purpose; and second, the system is inherently biased – research has shown that the institutional context and organisational structure of children’s social care are actively contributing to systematic inequalities in provision. So, if we develop future models based on the data we have gathered in a biased system, we run the risk of perpetuating the bias.
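To make the feedback loop concrete: a minimal, entirely hypothetical sketch in Python. The figures and the "scrutinised area" split are invented for illustration; the point is only that when historical labels reflect how closely families were monitored rather than their underlying need, anything fitted to those labels will reproduce the monitoring pattern, not the need.

```python
import random

random.seed(0)

# Hypothetical toy data: each family has a true level of need (which the
# historical record never observes directly) and lives in either a
# heavily-scrutinised or a lightly-scrutinised area.
families = []
for _ in range(1000):
    need = random.random()                # true need, drawn uniformly 0..1
    scrutinised = random.random() < 0.5   # heavily-monitored area?
    # Biased recording: the same level of need is far more likely to be
    # logged as a "case" where scrutiny is high (illustrative thresholds).
    recorded_case = need > (0.4 if scrutinised else 0.8)
    families.append((need, scrutinised, recorded_case))

def recorded_rate(group):
    """Share of a group recorded as cases in the historical data."""
    return sum(1 for f in group if f[2]) / len(group)

scrut = [f for f in families if f[1]]
other = [f for f in families if not f[1]]

print(f"recorded case rate, scrutinised areas: {recorded_rate(scrut):.2f}")
print(f"recorded case rate, other areas:       {recorded_rate(other):.2f}")
# True need is distributed identically across both groups, yet any model
# trained on these labels will learn to direct intervention at the
# scrutinised areas - perpetuating the original bias.
```

In this sketch the "model" need do nothing cleverer than fit the labels: the labels themselves already encode the unequal scrutiny, which is exactly the risk the research into systematic inequalities in provision describes.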
Faced with strong evidence that the government is not interested in data quality if it gets in the way of policy, and given the potential unintended consequences of poorly managed initiatives based on even the best-quality data, we need to stop and think before running headlong into new ways of doing things based on machine learning. If we don’t, the risk is that children’s services’ future role will simply be confined to policing the poor, regardless of the aspirational three-word slogan under which it is sold.