The report also revealed that 6.7 million of the 8.3 million videos were first flagged for review by machines and were never even viewed. YouTube explained that its use of machine learning to police its content isn’t a bad thing (despite reports saying that its AI is far from perfect) and leads to “more people reviewing content, not fewer.” While its algorithms can delete some content, such as spam videos, on their own, they mostly forward anything suspected of violating YouTube’s guidelines to human reviewers. Those reviewers are the ones in charge of deciding whether to pull a video or put it behind an age gate, restricting it to logged-in users who are at least 18 years old.

Back in December, the platform said it was recruiting 10,000 people across Google to review flagged videos. Its algorithms flagging entries for human review won’t make a difference if there’s nobody to look at them, after all. YouTube said it has already “staffed the majority of additional roles needed to reach [its] contribution to meeting that goal,” though it’ll likely take some time before it can say it has 10,000 reviewers at its disposal.

[SOURCE]