Hate Speech: What Are You Going to Do About It?
For as long as the internet has existed as a platform where anyone, anywhere can speak their mind, there has been conflict over where the line between “free speech” and “hate speech” lies. With the rise of the alt-right, Reddit, and racial tensions fueling ever more hateful content online, this problem has come to the fore more than ever.
How can terrorist videos be hosted online, whilst harmless personal images are deemed inappropriate and removed from social media channels? As TechCrunch recently wrote of the issue, “whatever approaches are being used to address the problem, something, somewhere is clearly not working. YouTube videos by extremists and terrorists abound. Targeted harassment and trolling on Twitter remains to this day, years and years after you might reasonably have thought they might get it right. Facebook almost automatically removes women who breastfeed their children and yet seems to shrug its shoulders over fake news, which might move an entire election, spreading on its network.”
The problem is clear; the harder question is whose job it is to fix it. Two major news stories playing out recently have brought into focus the question of just who is responsible for policing hate speech online, and the answer has been rather decisive.
The first is a bill from the German government that would impose fines on social networks that fail to remove hate speech and criminal content, placing the onus firmly on the powerful companies that claim they don’t want to censor. TechCrunch quoted German justice minister Heiko Maas on the necessity of the bill: “Too little criminal content is being deleted, and it’s not being deleted sufficiently quickly. The biggest problem is and remains that the networks don’t take the complaints of their own users seriously enough.”
The second controversy is at Google, where numerous major advertisers, including Tesco, Volvo, and Heinz, have pulled their ads from the internet giant after learning those ads were being displayed next to Nazi and terrorist-related content. The immediate issue is that Google’s programmatic advertising algorithm cannot reliably prevent this juxtaposition of mega brands with hate speech, but the furor is over Google’s failure to take responsibility for the problem in a transparent way.
In response to the rampant criticism and widespread media coverage, Google’s chief business officer Philipp Schindler wrote in a statement: “Recently, we had a number of cases where brands’ ads appeared on content that was not aligned with their values. For this, we deeply apologize. We know that this is unacceptable to the advertisers and agencies who put their trust in us. That’s why we’ve been conducting an extensive review of our advertising policies and tools, and why we made a public commitment last week to put in place changes that would give brands more control over where their ads appear.”
There has been notable hypocrisy in recent years: these companies claim they do not want to censor free speech, so they can’t clamp down on, say, terrorist videos, yet they regularly remove far more benign material like breastfeeding pictures. That alone shows these platforms aren’t just “dumb” conduits for content; they do in fact have the ability to control what gets displayed. Both of these recent news events point to the same conclusion: mega internet companies like Facebook and Google are fully responsible for policing what appears on their own networks, and they can’t in good faith claim a “hands off” approach any longer. If policies and fines like the German legislation spread, and if networks continue to hemorrhage advertisers at the rate Google is, we could see this problem begin to shift once and for all. Let’s hope we do.