
YouTube to enlist 10,000 workers to moderate content

As YouTube continues to crack down on inappropriate content aimed at kids, the platform has announced a new set of measures that includes growing its trust and safety teams significantly in 2018.
December 5, 2017

As media coverage surrounding inappropriate YouTube content continues to rise, especially in tandem with US kids' mobile usage, the Google-owned platform has been vocal about taking initial steps to ensure its youngest viewers aren't exposed to nefarious material. But the company's newest measures intend to take things one step further.

In an open letter on googleblog.com, YouTube CEO Susan Wojcicki said that growing its content moderation staff is now necessary, given that human judgement is critical to making contextualized decisions on content. YouTube believes that human reviewers remain essential to both removing content and training the company’s new machine-learning technology, and the company’s goal is to bring the total number of people across Google working to address inappropriate content to more than 10,000 in 2018.

The move comes a month after YouTube closed 50 channels that target young viewers with inappropriate content, including Toy Freaks, and deleted thousands of videos that combined received tens of billions of views. Its YouTube Kids app also launched a global update allowing kids to customize their experience with password-protected profiles.

Since June, according to Wojcicki, YouTube's content moderation staff has manually reviewed nearly two million videos for violent extremist content and, as a result, has removed a total of 150,000 videos. Its machine-learning systems, which flag 98% of videos removed for violent extremism, currently help human reviewers remove nearly five times as many videos as they did previously.

As for speed of removal, the new technology will allow YouTube to take down nearly 70% of violent extremist content within eight hours of upload, and nearly half of it within two hours.

YouTube is now training its machine-learning technology to tackle other areas including child safety and hate speech. YouTube is also growing its network of academics, industry groups and subject matter experts to help the company better understand emerging online safety issues.

In terms of improved transparency, YouTube will create a regular report in 2018 offering additional data about the flags it receives and the actions taken to remove videos and comments that violate the company’s content policies. For more aggressive action on comments, new comment moderation tools are in the works, and in some cases, comments will be shut down entirely.

Rounding out the changes, YouTube will increase how it protects advertisers and creators from inappropriate content by applying stricter criteria, conducting more manual curation, and significantly ramping up its team of ad reviewers to ensure ads only run where they should.

The changes are expected to improve revenue stability for creators, and YouTube will discuss the strategy with its advertisers and creators over the next few weeks.

YouTube’s newly announced actions arrive a day after Facebook rolled out its first standalone app specifically designed for kids.

About The Author
Jeremy is the Features Editor of Kidscreen specializing in the content production, broadcasting and distribution aspects of the global children's entertainment industry. Contact Jeremy at jdickson@brunico.com.
