News


Children with Complex Needs

The Children’s Commissioner for England has published a report detailing the experiences of children with complex needs who have been deprived of their liberty. Interviews were carried out with 15 children with experience of living under a Deprivation of Liberty (DoL) order. Key themes include: all children had experienced significant instability and struggled to get appropriate support before a DoL order was put in place; most children felt there were limited opportunities to have their voices heard and be involved in decisions while living under the order; and most children were in solo placements and were socially isolated. Recommendations include that all children deprived of their liberty should benefit from a statutory framework guaranteeing their rights and setting out the responsibilities of others to promote their welfare. 

Read the report: Children with complex needs who are deprived of liberty: interviews with children to understand their experiences of being deprived of their liberty 


News

Public exposure to ‘chilling’ AI child sexual abuse images and videos increases

AI generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, exposing even more people to the harmful and horrific imagery, says the Internet Watch Foundation (IWF).  

Many of the images and videos of children being hurt and abused are so realistic that they can be very difficult to tell apart from imagery of real children, and they are regarded as criminal content in the eyes of UK law, in much the same way as ‘traditional’ child sexual abuse material would be.

In the past six months alone, analysts at the IWF have seen a 6% increase in confirmed reports containing AI generated child sexual abuse material, compared with the preceding 12 months.

The IWF, Europe’s largest hotline dedicated to finding and removing child sexual abuse imagery from the internet, is warning that almost all the content (99%) was found on publicly available areas of the internet and was not hidden on the dark web. 

Most of the reports have come from members of the public (78%) who have stumbled across the criminal imagery on sites such as forums or AI galleries. The remainder were actioned by IWF analysts through proactive searching. 

Analysts say that viewing AI generated content of children being sexually abused can be as distressing as seeing real children in abuse imagery if a person is not prepared or trained to cope with seeing such material. 

Some AI child sexual abuse material is classed as non-photographic imagery, such as cartoons, and is also regarded as harmful to view and accordingly assessed by IWF analysts.

The IWF traces where child sexual abuse content is hosted so that analysts can act to get it swiftly removed. 

More than half of the AI generated content found in the past six months was hosted on servers in two countries, the Russian Federation (36%) and the United States (22%), with Japan and the Netherlands following at 11% and 8% respectively.

Addresses of webpages containing AI generated child sexual abuse images are uploaded on to the IWF’s URL list which is shared with the tech industry to block the sites and prevent people from being able to access or see them. 

The AI images are also hashed – given a special unique code like a digital fingerprint – and tagged as AI on a Hash List of more than two million images which can be used by law enforcement in their investigations. 
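As a rough illustration of how a hash can act as a ‘digital fingerprint’, the sketch below computes a cryptographic (SHA-256) hash of a file in Python and checks it against a small in-memory set of known hashes. It is an analogy only, under assumed details: the file name, the choice of SHA-256 and the tiny known_hashes set are illustrative, and it does not reproduce the perceptual hashing that hotlines and industry also use to match altered copies of known imagery.

import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    # Return a SHA-256 hex digest acting as a simple 'digital fingerprint' of a file.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical set of known hashes; a real hash list holds millions of entries.
# The example entry is the SHA-256 of an empty file, used purely as a placeholder.
known_hashes = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

if __name__ == "__main__":
    sample = Path("example.jpg")  # placeholder file name
    if sample.exists():
        print("match" if fingerprint(str(sample)) in known_hashes else "no match")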

To continue reading, please see the full article on www.iwf.org.uk.


News

New report estimates half a million UK teenagers have encountered AI-generated nude deepfakes

A new report by Internet Matters urges the new Government to crack down on the AI-generated sexual imagery ‘epidemic’ after a survey revealed that 13% of teenagers have had an experience with nude deepfakes.

Summary

  • The possibility of uncontrolled nude deepfake abuse has sown fear into many children’s lives; over half of teenagers (55%) believe that it would be worse to have a deepfake nude created and shared of them than a real image.
  • Internet Matters calls for a ban on ‘nudifying’ tools, as the report warns it is simply too easy to create nude deepfakes online.
  • The report recommends reforms to the school curriculum, including teaching children how to identify deepfakes and how to use AI technology responsibly.

The full article is available on the Internet Matters website.


News

Children’s social care dashboard

Source: DfE 

Date published: 11 October 2024


The Department for Education (DfE) has published a children’s social care dashboard to support local authorities in England and partners who work in and with children’s social care. The dashboard is designed to support implementation of the Children’s Social Care National Framework by displaying data indicators that help both local and central government understand progress towards the outcomes and enablers set out in the framework. 

Find out more: Children’s social care dashboard


News

Safeguarding and the voluntary sector

Source: NSPCC Learning

Date published: 16 October 2024


NSPCC Learning has published a learning from case reviews briefing on safeguarding in the voluntary sector. The briefing explores learning from a sample of case reviews published between 2015 and 2024. Issues identified include a lack of clarity around voluntary agencies’ safeguarding roles and responsibilities. The learning from these case reviews highlights the need for organisations in the voluntary sector to create and share clear child protection policies and procedures and build good working relationships with other agencies.

Read the briefing: Voluntary agencies: learning from case reviews