YouTube hard at work on hate speech filters
Philip Ellison 25 April, 2017 at 12:04
Video-sharing platform YouTube has taken something of a reputational battering lately. But the company is learning from its mistakes and may well come back stronger, no doubt motivated by the fact that 90 per cent of its revenue comes from advertisers who will gladly distance themselves from the slightest whiff of scandal.
In a bid to make YouTube safer for both users and advertisers, developers are hard at work on an artificially intelligent content filter that will help flag videos containing violence or offensive language. At present, the process centres on a team of human moderators who watch hundreds of hours of content; their review decisions will, one day soon, be used to train AIs to take over the job.
“We have always relied on a combination of technology and human reviews to analyse content that has been flagged to us because understanding context in video can be subjective,” says Google spokesperson Chi Hea Cho. “Recently we added more people to accelerate the reviews. These reviews help train our algorithms so they keep improving over time.”
With 600,000 hours of new content uploaded every day, AIs are an essential component of YouTube’s growth. And weeding out hate speech on the platform has become mission-critical, following a series of programmatic advertising gaffes earlier this year that resulted in ads for global brands running alongside extremist content. Parent company Google maintains that only “a very, very small” number of videos were affected, but all eyes will now be on YouTube as it puts safeguards in place to prevent this from happening again.
YouTube also announced that, following a full investigation into alleged anti-LGBT bias in the algorithms that determine what content is accessible under Restricted Mode, the error has been resolved. Creators will also be given greater transparency about how Restricted Mode works.