Good evaluation is critical for both children's services providers and commissioners since work that can demonstrate results is more likely to gain funding. Joe Lepper covers all the essentials in this A to Z guide.

With austerity measures continuing to shackle public spending, children’s services providers are facing increasingly tough competition for sources of funding. Cash-strapped local and central government commissioners have made it clear they will focus children’s sector funding on providers who come armed with robust evaluation to show their work improves young lives.

Family Nurse Partnerships, a scheme that has been evaluated extensively for decades in the US and more recently in the UK, is among a number of recent evaluation success stories. As a result, the government in Westminster ploughed £17.5m into the scheme in 2013.

The financial impetus to provide strong evidence of success is clear, but Children England chief executive Kathy Evans warns evaluation should not only be to attract commissioners and donors.

“Charities are set up to make things better, which means evaluation should be at the heart of what they should be doing,” she says. “The primary motivation of providers to carry out evaluation should be to make sure they really are making a difference.”

A is for Aims

The starting block for evaluation is to ensure it is linked to an organisation’s aims. According to Michael Little, co-director at the Social Research Unit (SRU) at Dartington, if an organisation’s aims are “not clear, it’s not worth doing an evaluation”. Emma Wallace, acting director at the National Children’s Bureau (NCB) Research Centre, recommends creating an evaluation framework or strategy at the start of a project that clearly sets out how evaluation will support an organisation’s aims.

B is for Back of a napkin

“Even if evaluation is written on the back of a napkin, that is fine as long as it brings about real change,” says Dez Holmes, director of Research in Practice (RIP). While she concedes this is a slight exaggeration, her point is that staff, donors and commissioners are more convinced by clear content showing how a provider is improving children’s lives than by a glossy presentation. Children England’s Kathy Evans agrees, saying that if presentation is too slick, it can actually put off a commissioner: “There’s nothing less likely to say this organisation could do with money than deluging a funder with expensive documents.”

C is for Collaboration

Much of public sector commissioning is based on competition, with providers jostling for attention with their own evaluations. But what if providers collaborated with each other to create a more comprehensive, joint evaluation of the outcomes for children in an area? This was the question posed by officers at the London Borough of Sutton, who, with help from Children England, have teamed up with the borough’s children in care providers to create a joint outcomes-based evaluation. Through this, data and IT systems are shared to create a borough-wide picture of the success of their work.

D is for Data collection

Collecting data is something most providers will already be doing every day, whether by talking to service users or by monitoring progress, such as how many families visit a support centre. Association of Directors of Children’s Services (ADCS) president Andrew Webb recommends providers carry out a data audit to find out what information an organisation is already capturing and how that can be better presented to commissioners and other stakeholders. The NCB says such an audit can cut time and costs by linking existing meetings with evaluation sessions.

E is for Embedding

The SRU’s Little urges providers to ensure they embed evaluation into their everyday work and “to build a culture” where staff are looking at everything they do and asking questions such as, “is this improving children’s lives?” The government’s Children and Young People’s Improving Access to Psychological Therapies programme is a good example of a scheme that has successfully embedded evaluation. After each session, young people give their views on the treatment. This evaluation is used to help improve their care as well as shape services for the long term.

F is for Forward planning

One of the biggest pitfalls for providers is treating evaluation as an afterthought. Planning an evaluation strategy before work begins enables providers to establish a baseline against which to map their progress. Without this forward planning, warns RIP’s Holmes, “providers won’t have any information to go back to so that they can benchmark their progress”.

G is for Graduates

A cost-effective way of getting independent evaluation is to contact local universities and find out if there are any graduates or postgraduates looking to carry out research in the same area. “What you will find is some keen, bright people with a legitimate interest in producing solid, independent evaluation,” says Evans. Some organisations offer a brokering service to link up providers and students to improve evaluation of services, such as Project Oracle, funded by the Mayor of London: www.project-oracle.com

H is for Help

There is a raft of other organisations looking to help providers and commissioners improve evaluation. Free support from RIP can be downloaded at www.reason-network.org.uk, including a guide to research governance and a governance checklist. The SRU also has a number of free guides on effective evaluation available from www.dartington.org.uk/category/publications

I is for Independent evaluation

Little says the SRU will “discount the impact by half” when it comes to evaluation carried out in-house rather than by an independent, external evaluator. Evans agrees independent evaluation carries more weight, saying that self-analysis “is one of the least reliable” ways of evaluating a service. Commissioners are much more impressed by “something that is academic and independent”, she adds. But self-evaluation can help add to the broader picture of how a service is supporting children and families.

J is for Journey

The focus on outcomes in evaluation is widely welcomed, but the ADCS warns it should not come at the expense of evaluating the journey taken to arrive at an outcome. Many improvements to a child’s outcomes may be long-term, so “the work in progress measurements are vital”, says Webb. Evaluating the journey is “what really matters to service users”, adds Evans, as it covers the quality of service and support they are receiving.

K is for Keeping it in proportion

Evaluating a service should never become more important than the service itself. Evans warns there is a danger of not keeping evaluation in proportion with day-to-day support of children: “Evaluation should never take a professional away from supporting a child.”

L is for Legacy

Evaluation needs to bring genuine change and leave a lasting legacy for the organisation. Independent evaluators also have a duty to leave a legacy, says Holmes, by giving technical support to help staff carry on evaluating services long after they have left. This helps providers take responsibility for their own evaluation in the long term.

M is for Measurable

Do not promise commissioners and other stakeholders evaluation that, through time and financial constraints, will be impossible to complete. Key questions smaller providers in particular need to ask include: Do you have the staff time available to measure and collect data? Is your IT system able to store evaluation data? And can that be shared easily with an independent evaluator?

N is for NIFTY

Among evaluation tools available to children’s social care teams is the NCB and RIP’s NIFTY project, which stands for Neat, Informative, Feasible, Timely and Yours. This has been developed with support from councils. Through courses and a handbook, it aims to help children’s professionals improve their research and evaluation skills. For more information, go to www.rip.org.uk

O is for Ofsted’s focus on outcomes

Ofsted announced in November that children’s services inspections would have a greater focus on how services are improving outcomes. Webb says councils still have “a long way to go” to successfully use evaluation to ensure they are commissioning services that are improving children’s lives. He says residential children’s care commissioners in particular need to place a greater focus on evaluation that shows improved outcomes.

P is for Preferred providers

Webb urges council commissioners to develop rosters of preferred providers who are all committed to sharing and jointly improving evaluation. Such joint work is difficult when providers are hired piecemeal through a competitive procurement process. The National Foundation for Educational Research (NFER) recommends commissioners develop “learning networks” with a focus on evaluation and sharing good practice.

Q is for Questioning

How you question a child, member of staff or stakeholder about the quality of a service can be just as important as the question itself. The NCB’s Research Centre recommends finding out how your subject would like to be surveyed first. Wallace outlines its recent evaluation of the role of independent reviewing officers in care planning. This used an online survey to gather the views of busy and IT-savvy directors of children’s services and ensured interviews with families were carried out face to face. Find out more about this research at www.ncb.org.uk

R is for Randomised controlled trials

Randomised controlled trials can offer compelling evidence of the success of children’s services by comparing the experiences and outcomes of those who have been supported with those who have not. Education Secretary Michael Gove is among the advocates of their use in evaluating children’s services. In announcing two government-funded trials last year, covering child protection and school attainment in maths, he said they “offer us the opportunity to establish which policies genuinely help children”.

S is for Signposting

Commissioners need to do more to signpost the support available to help providers carry out evaluation, says Kerry Martin, NFER senior research manager. This should include the national support available from organisations such as RIP and the SRU, as well as local training courses in research and evaluation skills. Council commissioners should consider providing a brokering service between providers and local university researchers, as well as compiling a list of case studies of good practice.

T is for Training

With providers increasingly asked to evaluate their work, the divide between evaluators and children’s professionals is closing. There is a lot each can learn from the other, and joint training can help this sharing of skills, says Holmes. Researchers can benefit from children’s professionals’ experience of dealing with marginalised groups and asking potentially difficult, awkward questions. Children’s professionals in turn can benefit from evaluators’ skills in compiling surveys.

U is for US research

Commissioners are impressed by strong evaluation showing improvement in outcomes even if the research was not carried out in the UK. Evaluation from the US has been successful in gaining funding from UK commissioners in recent years, with the Incredible Years parenting programme joining Family Nurse Partnerships among the notable US-evidenced schemes subsequently rolled out in the UK. “What matters is not where the evaluation is done, but how well it is done,” says Little.

V is for Value

A good way for commissioners to show the high value of evaluation, according to Martin, is to allocate a ringfenced budget for it within the overall costs. “This will prompt organisations to consider their approaches to evaluation from the outset and integrate them into their overall plans and timelines,” she says.

W is for Who, what, where, when

The key to a successful evaluation strategy or framework at the start of a project is to firmly establish a checklist of who will be carrying it out, what questions it aims to answer, where it will take place, and over what time period. RIP says this can help identify whether extra funding or staff will be needed in good time.

X is for X marks the spot

A well-presented piece of evaluation needs to get to the point quickly or the reader will stop reading. Webb says he looks for two key questions to be easily answered before he considers funding a project: does it work, and how much does it cost? “I have seen some scrappy evaluation, where the data is either incoherent or the costs do not add up,” he says.

Y is for Young people

When interviewing young people, remember to act ethically, treating them with respect and avoiding questions that may cause them stress. A good way to do this is to involve young service users in the evaluation process from the start, helping researchers draft questions and carrying out peer-to-peer research. The NFER has produced an online guide, Developing Young Researchers.

Z is for Zest

For evaluation to succeed, providers and commissioners need to approach it with zest – genuine enthusiasm for compiling evidence and the vital role this can play in creating a better service. Holmes says too often children’s professionals see evaluation as something that is done to them, not for them. “A lot of organisations can feel like rabbits caught in the headlights, scared stiff by evaluation,” she says.
