An influential group of politicians savaged Facebook, Twitter, and Google on Tuesday, accusing the tech firms of radicalising prospective terrorists and grooming children through their algorithms.
The European policy heads of the three firms were hauled before the Home Affairs Select Committee to explain why hate speech, extremist content, and content inappropriate for children were still appearing on their platforms.
Committee chair and Labour MP Yvette Cooper tore into all three firms, arguing that once a user had looked at one piece of extremist or hateful content, each platform’s algorithms would lead them down a rabbit hole by suggesting similar material.
This is what she said, emphasis ours:
“The problem you have is that all three of your organisations use your algorithms to encourage people who are interested in one particular thing to then follow something else. Now the police have said very clearly they are extremely worried about online radicalisation and grooming. Isn’t the real truth that your algorithms and the way you want to attract people to look at other linked and connected things is that your algorithms are doing that grooming and radicalising. You are linking people, once they go on one slightly dodgy thing, you are linking them to other similar things, whether that be racist extremism, or Islamist extremism, your technology is doing that job and you’re not stopping it from doing so.”
Simon Milner, Facebook’s European director of public policy, said he disagreed that algorithms pushed users further towards radicalisation or grooming. “I do recognise we have a problem, which is a shared problem with the police, yourselves, civil society organisations, which is how do we address that person who is going down a channel that leads to them being radicalised,” he said.
Milner also cited Facebook's Online Civil Courage initiative, an educational programme that helps charities and government organisations spot extremist content online.
Google’s European vice president for public policy, Niklas Lundblad, cited the company’s newly launched anti-radicalisation tools. He said these help people break out of a vulnerable pattern, partly by showing them information that debunks “caliphate” ISIS myths.
Lundblad subsequently had to apologise for using the word "caliphate," a term ISIS has used to gain legitimacy.
Several MPs on the committee held up examples of radical and abusive content across the three platforms.
Labour MP Stephen Doughty asked YouTube why he was able to find content from “dissident organisations” in Northern Ireland, such as one playlist titled “KILL ALL TAIGS.”
Taig is a derogatory term for a Catholic or Irish national. Google's Lundblad said the firm simply hadn't caught the content yet and would remove it.
Doughty said he had also found pro-IRA content on Twitter, apparently citing the @uptheira account. When Business Insider examined @uptheira, it did not appear to show any pro-IRA content, and the account had not been active for several years.
Twitter’s vice president of public policy and communications, Sinead McSweeney, explained that much of the firm’s anti-terror effort went into tackling ISIS.
Facebook told the committee it now had more than 7,500 people tackling extremist and other hateful content on its platform, while Google said it was nearing 10,000. Twitter did not give a figure, but Business Insider understands its moderation team is growing.