YouTube's child protection mechanism is breaking down, according to some of the company's volunteer watchdogs.
There's a constant anxiety that those seeking to abuse or groom young children will use social media to reach them - and YouTube is aware of the problem. The video-sharing site has a special network of volunteers, called Trusted Flaggers, who help identify worrisome posts and comments on the network.
But now members of YouTube's Trusted Flagger programme have told BBC Trending that the company has a huge backlog of reports, some months old, and that the company responds to only a small fraction of complaints from the public about child endangerment and suspected child grooming.
One volunteer says that he made more than 9,000 reports in December 2016 - and that none have been processed by the company.
A small group of Trusted Flaggers also grew suspicious about the effectiveness of YouTube's public abuse-reporting page. Over a recent 60-day period, they used it to flag hundreds of accounts which potentially violated the site's guidelines.
However, out of a total of 526 reports, they received only 15 responses, and the volunteers say the exercise is emblematic of a wider failure of child protection on the site.
The reports were made against accounts leaving potentially objectionable comments, mostly on videos made by young teenagers and children.
YouTube declined to give an interview. In a statement, a company spokesperson told Trending: "YouTube strictly prohibits content that sexually exploits minors... We encourage all users, including Trusted Flaggers, to continue flagging videos or comments so we can take action."
For anyone concerned, the NSPCC has advice for parents and young people on how they can protect themselves online.