
HOW IS SOCIAL WORK USING AI?
Dr Caroline Webb, assistant professor in social work, University of Birmingham
A recent UK survey by Bright et al. and the Alan Turing Institute looked at AI use in five public sector areas – the NHS, emergency services, social work, schools and universities – drawing on responses from 938 participants. They found that generative AI (GenAI) use is already widespread in the public sector. Particularly interesting is that, although GenAI is relatively new, it was found to be more widespread than other forms of AI such as decision-making tools.
They highlight the bottom-up impact of AI: anybody in the workforce can access GenAI tools, as they aren't reliant on their organisations investing in specific AI technologies. They also noted that a key driver of GenAI use was productivity; more than 80% of social care respondents felt GenAI could enhance productivity and help reduce bureaucracy, and NHS respondents felt AI had the potential to save them one day of their working week. Many thought AI could help to improve public services but felt the UK was missing out on opportunities [to deliver this] and noted a lack of employer guidance. Despite the high usage of AI tools, half of respondents felt there was a lack of clear guidance from employers on how to use GenAI systems, and nearly half were unsure who was accountable for any GenAI outputs that they used in their work.
When we set the prevalence of AI on the one hand against the lack of clear employer guidance on the other, it underscores the importance of thinking about this particular issue.
Productivity and efficiency
In terms of social work specifically, AI tools can significantly enhance workers' efficiency and reduce administrative burden. That is particularly important for social work because it is known as an under-resourced and overburdened sector. GenAI tools can streamline various administrative tasks, such as recording meeting notes or drafting communications and written plans, freeing up workers to spend more time in contact with clients.
It is not surprising that many employers are investing in AI tools to save time and money. One of these, the Magic Notes tool, has been designed by care experts and developed with frontline staff. Its makers claim it can save workers one day a week by reducing administrative tasks. They also believe it can produce higher-quality notes because it captures every detail, and that it can increase connection with clients, as workers aren't busy taking notes and can focus on the conversation. Magic Notes was piloted in 28 authorities and is now being rolled out more widely.
Standardising decision making
Social work organisations are also starting to recognise AI's potential to support sophisticated data analysis that identifies otherwise hidden patterns in the data. They are using these insights to develop predictive tools that support traditional decision-making processes, with the aim of improving service user outcomes: such tools allow workers to prioritise cases, intervene earlier and take targeted preventative approaches. For example, Maidstone Borough Council has developed a tool called OneView that aims to recognise people at risk of homelessness by gathering data from different services. The OneView tool generates risk alerts based on indicators such as missed utility payments or requests for housing assistance, notifying frontline officers so they can intervene earlier and provide support.
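To make the idea concrete, here is a minimal sketch of how a rule-based risk-alert generator of this kind might combine signals from different services. This is not Maidstone's actual implementation; the field names, thresholds and alert wording are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical record combining signals from different council services.
# Field names and thresholds are illustrative, not OneView's real schema.
@dataclass
class HouseholdRecord:
    household_id: str
    missed_utility_payments: int = 0
    housing_assistance_requests: int = 0
    rent_arrears_months: int = 0

def risk_alerts(record: HouseholdRecord) -> list[str]:
    """Return human-readable alerts for a frontline officer to review."""
    alerts = []
    if record.missed_utility_payments >= 2:
        alerts.append("Repeated missed utility payments")
    if record.housing_assistance_requests >= 1:
        alerts.append("Recent request for housing assistance")
    if record.rent_arrears_months >= 3:
        alerts.append("Three or more months of rent arrears")
    return alerts

# A household with several warning signs triggers alerts for early intervention.
record = HouseholdRecord("H-0042", missed_utility_payments=3, rent_arrears_months=4)
for alert in risk_alerts(record):
    print(f"{record.household_id}: {alert}")
```

The design point is that the tool flags people for a human officer to follow up, rather than making the decision itself.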
We are also seeing AI being used to provide support and intervention for specific issues and conditions. For example, for autism spectrum disorders, AI is being used to deliver personalised therapeutic support through role-play interventions that develop social skills in a controlled and supportive environment. In managing post-traumatic stress disorder symptoms, AI is being used to provide self-assessment tools and exercises that develop relaxation and coping skills. It is also being used with care users to help manage medications and [provide] remote monitoring.
WHAT ARE THE ETHICAL CHALLENGES?
Dr Tarsem Singh Cooner, associate professor in social work, University of Birmingham
There are six key ethical issues around AI and social work.
Privacy and data usage This is a key issue for social work and organisations using AI. There are several concerns. In the training stage, GenAI models are trained on vast amounts of personal data scraped from the internet, normally without the consent of the creators, which has led to concerns that GenAI could inadvertently share personal information in its outputs. There are also concerns over data input, around security breaches and users' knowledge of privacy regulations. For example, many people have signed up to use things like ChatGPT or Copilot, but how many have read the terms and conditions before actually using these tools? When we sign up to GenAI tools, we're consenting for the companies to gather vast amounts of personal data from us. We should be concerned about where that data is stored and whether there are risks of security breaches. When we sign up to use these services, the default position is usually to give the AI tools permission to remember all of our conversations and use them for future training of the models. To prevent this we generally have to actively opt out, but how many of us actually do that? When it comes to GenAI outputs, there are issues around copyright and intellectual property. From our understanding, in the US GenAI outputs cannot be copyrighted because AI is classed as a machine rather than a human author, but in the UK AI outputs may be open to copyright. This needs to be clarified. For social workers, [we need to ask] what we must do to keep up to date with the terms and conditions as new releases of GenAI tools take place, so that we can make informed choices about these issues.
Consent We've seen clear examples of AI misuse, for example viral deepfakes, that raise consent issues. For social work, [deepfakes] should make us question the issue of consent, because consent isn't always clear cut. If social workers are using AI to write letters or summarise meeting notes, should we obtain consent from individuals before inputting their personal data into AI systems to generate these outputs? These are ethical issues that need to be explored.
Transparency In the AI development world there is what's known as the 'black box' problem: users know the inputs and final outputs but have no idea of the process the AI uses to reach a decision. The black-box nature of many AI systems has been a sticking point in their wider adoption across industries such as finance and law enforcement. For us to fully trust AI systems, we need to be able to trace an AI's reasoning from input to output, because its decisions can have real-world consequences. To gain higher trust we need to see how an algorithm reaches its conclusions, and to mitigate risks in AI we need to identify any potential biases or errors; this becomes easier when you understand the logic of the decision-making process. The explainable AI movement aims to foster understanding of how AI systems make decisions by making their inner workings transparent, as illustrated in the sketch below.
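As a toy illustration of the difference (the factors and weights here are invented for this example and do not come from any real decision tool), a transparent scorer can report which factors drove its conclusion alongside the conclusion itself:

```python
# Invented weights for a toy risk score; not taken from any deployed system.
WEIGHTS = {
    "missed_payments": 0.5,
    "eviction_notice": 0.4,
    "support_network": -0.3,  # protective factor lowers the score
}

def explainable_score(factors: dict[str, float]) -> tuple[float, list[str]]:
    """Return a risk score plus a per-factor account of how it was reached."""
    score = 0.0
    explanation = []
    for name, value in factors.items():
        contribution = WEIGHTS.get(name, 0.0) * value
        score += contribution
        explanation.append(f"{name}: {contribution:+.2f}")
    return score, explanation

score, why = explainable_score(
    {"missed_payments": 2, "eviction_notice": 1, "support_network": 1}
)
print(f"score={score:.2f}")  # a black-box system would stop here
print("; ".join(why))        # an explainable system also surfaces the reasoning
```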
Algorithmic bias Because AI has been trained on existing, real-world data, it can reinforce stereotypes. How can we use AI whilst mitigating and avoiding systemic biases in areas such as health, housing and welfare? One common starting point is to audit outcomes across groups, as sketched below.
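The sketch below shows one simple bias check, a demographic parity comparison, which contrasts the rate of positive outcomes across groups. The groups and decisions are invented for illustration; real audits would use far richer methods.

```python
# Demographic parity check: compare positive-outcome rates across groups.
def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, received_service) pairs; returns rate per group."""
    totals: dict[str, list[int]] = {}
    for group, selected in decisions:
        counts = totals.setdefault(group, [0, 0])
        counts[0] += int(selected)
        counts[1] += 1
    return {group: positives / n for group, (positives, n) in totals.items()}

rates = selection_rates([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates)  # a large gap between groups is a prompt to audit the model and data
```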
Accountability If we use AI, who is ultimately responsible for its negative outcomes? The supermarket chain Pak'nSave created an AI recipe app that asked users to input leftover ingredients and auto-generated meal plans and recipes. It drew attention for unappealing recipes [like] poison bread sandwiches and mosquito-repellent roast potatoes. If we use AI to make decisions and there's a mistake, are we responsible for that outcome?
Regulation It is difficult to use AI responsibly at the moment, [as] different countries are taking different approaches [to regulation]. The EU has one approach and California another.