
For me, the recent Censuswide survey commissioned by BBC Teach paints a concerning picture of online safety in primary education. With 80% of teachers reporting at least one safeguarding incident linked to online safety in the past year and one-third seeing an increase in these incidents, the reality is clear – children are navigating a digital world filled with risks. Online scams, AI-generated misinformation, and the blurred lines of social media access are no longer just concerns for teenagers; they are now primary school issues.
Yet, while the challenges are real, the response must be more than just fear. We must prepare children not only to be safe online, but also to become responsible and informed digital citizens. The solution is not simply banning technology but embedding digital literacy into every aspect of education.
The growing digital divide in schools
Across England, schools are taking drastically different approaches to technology. Some embrace 1:1 device access, weekly online safety lessons, and AI-supported learning, while others ban smartphones and restrict technology use in the classroom altogether. This growing digital divide raises a fundamental question: are we preparing children equally for the digital world they will enter?
At The Stour Academy Trust, we prioritise integrating online safety across the curriculum, using tools such as the BBC Teach resources. These allow us to educate students about safe and responsible digital behaviour as part of their daily learning, not just in isolated lessons. Every computing session, PSHE discussion, and even RE class presents opportunities to discuss digital ethics, misinformation, and AI-generated content.
Schools that heavily restrict technology use may be safeguarding children in the short term. But what happens when they turn 16 and suddenly gain unrestricted access to social media, AI tools, and unfiltered online content? If we do not prepare them, who will?
Raising age limits: a quick fix or a long-term solution?
The survey reveals that over 50% of primary school teachers believe the minimum age for social media and video platforms should be raised. Many would prefer WhatsApp, which recently lowered its age limit from 16 to 13, to return to its previous restriction.
While raising age limits may help delay exposure to online risks, it does not solve the problem. We cannot just say ‘no social media until 16’ and then leave young people to navigate it alone. A sudden introduction to unrestricted digital access, without prior training, is a recipe for disaster. Instead, should we consider joint parent-child accounts until 16 or even 18? Could AI-supported training tools help children and parents navigate social media together?
AI: a force for good or a new risk?
The rise of AI adds another dimension to online safety. The survey highlights that 35% of teachers feel AI could make children more vulnerable to scams, and almost half want better teaching resources on AI-related risks. We are already seeing AI-generated fake videos, images, and stories that look more real than ever.
While we have long been taught how to spot fake news articles, do we – or our children – know how to identify AI-generated videos, photos, and content? Should all AI-generated content be flagged by law? The risk is not just fake news; it’s a complete reconstruction of reality, and young people must be taught critical thinking skills to question what they see online.
AI, however, is not just a risk – it is also a powerful tool for education. It provides immersive learning experiences that transport children to the moon, the deep ocean, or ancient civilisations. It levels the playing field for SEND students, providing adaptive learning tools and accessibility features that make education more inclusive than ever before.
The role of parents: support, not fear
One of the most striking aspects of the survey findings is that children often remain silent about negative online experiences. The reasons include fear of friends finding out, uncertainty about whom to talk to, and the belief that reporting will not change anything.
If we want to protect children, we must empower parents. Yet, many parents today never received online safety education themselves. At The Stour Academy Trust, we run termly parent training sessions, equipping families with the knowledge to support their children’s digital lives. But is this enough? Should all schools be required to provide digital literacy training for parents?
We cannot expect children to self-regulate in an environment that even adults struggle to navigate. The conversation must extend beyond schools and involve parents, policymakers, and tech companies.
A call for change: making online safety a core priority
At The Stour Academy Trust, we have identified a number of key priorities:
Online safety should be a core, ongoing part of the curriculum, embedded across subjects rather than limited to standalone lessons. Schools should teach children not just how to be safe online, but how to critically evaluate digital content, particularly AI-generated material.
AI and technology should be embraced responsibly, not feared. While AI presents risks, it also provides incredible learning opportunities, particularly for SEND students, and should be integrated thoughtfully into education.
Schools must equip children with digital safety and critical thinking skills so they can navigate social media, online scams, and AI-driven misinformation responsibly.
Parents need support and training to guide their children safely through an increasingly complex digital world. Schools should offer regular parent workshops on online safety, social media, and AI.
Teachers and staff require ongoing CPD and support to stay ahead of emerging risks. With the pace of technological change accelerating, we cannot expect educators to teach online safety confidently without access to regular training and up-to-date resources. Schools and MATs should invest in continuous professional development in this area.
Tech companies, schools, and policymakers must work together to create a safer online environment for children. This means stronger regulations on AI-generated content, clear minimum age guidelines, and better safety controls on social media platforms.
Technology is not going away, so is banning it the right – or even ethical – answer? Instead, we must train, educate, and empower, creating a generation of digitally resilient children who can navigate the online world safely, responsibly, and with confidence.
More from the BBC Teach collection of online safety resources: www.bbc.co.uk/teach.