Artificial Intelligence (AI) has brought many benefits: it has improved productivity, triggered breakthroughs in healthcare, enabled instant translation between languages, and transformed our daily lives.
However, there is a dark side to this technology: the exploitation of AI for malicious purposes, particularly the creation and transmission of Child Sexual Abuse Material (CSAM).
Alarming evidence from Qoria
Qoria's software safeguards 24 million children globally. Its new report discloses the extent to which AI is affecting the safety of children in UK schools, highlighting the urgent need for a coordinated response from government, the digital industry, schools and families.
The survey gathered data from 447 school communities across the UK, out of 603 schools globally. Responses came from education staff with responsibility for safeguarding children, including school leaders, IT directors, designated safeguarding leads (DSLs), governors and pastoral staff.
The report presents evidence that children as young as eight are involved in these incidents, with 11- to 13-year-olds the most affected age group. Snapchat is the primary platform for sharing this content (49%), followed by text messages (16%).