Education

What makes an AI tool safe for students?

March 30, 2026
Humna Ikram
What makes an AI tool safe for students?

Generative artificial intelligence (AI) is a type of technology trained using large volumes of data, which can then be used to create new content. This technology is quickly becoming part of the norm in educational settings, from lesson planning, automated feedback on questions and personalised tutoring. At BETT, we saw first-hand the rise of AI in EdTech, as well as the growing focus on standards for trusted EdTech.

In the UK, the Department for Education has listed its own set of expectations that outline the capabilities and features AI products must meet to be considered safe. While these are primarily aimed at EdTech developers and suppliers, they also help schools and colleges assess whether a product is appropriate for their learners.

In this article, we’ve broken these expectations down into three main categories and will explain what they mean.

Monitoring and safeguarding

To start with, every AI product used in education must clearly state its intended purpose, target users, and learning focus or subject area. If features change, the intended purpose must be reviewed and updated. Suppliers must also avoid exaggerated claims or making statements about impact or effectiveness that cannot be backed by robust evidence.

To keep students safe, many safeguarding procedures should be in place, including filtering. Filtering is a technique used by AI algorithms to sift through and analyse datasets to find specific information. For learner-facing tools, this is non-negotiable as it prevents access to harmful or inappropriate content. Filtering should also be adapted to suit different ages and SEND needs.

AI products must also maintain robust logging and reporting procedures, including recording prompts and outputs. If students try to access content that is blocked, they should receive an age-appropriate notification. Any attempts to access harmful content should also be flagged, and the institution’s designated safeguarding lead should be alerted to take appropriate measures.

AI systems should also detect signs of distress, including crisis-related behaviour patterns, night-time usage spikes, and isolation phrases. When distress is detected, the system must provide safe, non-pathologising responses, direct learners to human help, and alert safeguarding leads when necessary.

Crucially, AI products must be secure and resistant to misuse. This includes protection against “jailbreaking,” which is a method of gaining information from an AI model that would usually be restricted.

Privacy and data protection

To ensure data is not misused, AI products must have a lawful basis for processing data. Suppliers must provide clear, age-appropriate privacy notices and conduct data protection impact assessments. These assessments are used to identify and minimise privacy risks associated with data processing. Most importantly, a user’s intellectual property cannot be collected, stored, or used for commercial purposes (including model training) without explicit permission

Design and testing

Like all EdTech products, AI tools must be rigorously evaluated with diverse users, including children. Products must be designed with child safety at their core and supported by formal complaints mechanisms, as well as transparent decision-making processes.

One of the most distinctive elements of these expectations is the focus on cognitive development. AI tools should not default to providing full answers. Instead, they should use progressive disclosure (hints before solutions), prompt learners to attempt the task first, and require genuine effort before revealing complete answers. The aim is to prevent deskilling and ensure AI supports learning rather than replaces thinking.

Another important expectation is that AI products must not anthropomorphise themselves, imply emotions, or encourage emotional dependence. Instead, they should remind learners that AI cannot replace human relationships and should encourage breaks and healthy usage.

Finally, AI products must not use flattery or blend pedagogy with advertising as a way to exploit users to increase engagement or revenue.

Generative AI is a rapidly growing technology, especially in the education sector. With these guidelines, it becomes easier for suppliers to create safe and effective AI products, and for educational institutions to make informed decisions for their learners.

Thank you for reading this article! If you enjoyed it, subscribe to our blog so you don’t miss the next post.

Subscribe to updates

Like what you read? Sign up for news and updates like this.

No strings attached. Unsubscribe anytime. For more details, read our Privacy Policy.

Thanks for signing up to updates from Orso!
Oops! Something went wrong while submitting the form.