Facebook turns off more than 1 million accounts a day as it struggles to keep spam, fraud and hate speech off its platform, its chief security officer says.
Still, the sheer number of interactions among its 2 billion global users means it can’t catch all “threat actors,” and it sometimes removes text posts and videos that it later finds didn’t break Facebook rules, says Alex Stamos.
“When you’re dealing with millions and millions of interactions, you can’t create these rules and enforce them without (getting some) false positives,” Stamos said during an onstage discussion at an event in San Francisco on Wednesday evening.
Stamos blames the sheer technical challenge of enforcing the company’s rules — rather than the rules themselves — for the threatening and unsafe behavior that sometimes finds its way onto the site.
Facebook has faced critics who say its rules for removing content are too arbitrary and make it difficult to know what types of activity it will and won’t allow.
Political leaders in Europe this year have accused it of being too lax in allowing terrorists to use Facebook to recruit and plan attacks, while a U.S. Senate committee last year demanded to know its policies for removing fake news stories, after accusations that it was arbitrarily removing posts by political conservatives.
Free speech advocates have also criticized its work.
“The work of (Facebook) take-down teams is not transparent,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, which advocates for free speech online.
“The rules are not enforced across the board. They reflect biases,” said Galperin, who shared the stage with Stamos at a public event that was part of Enigma Interviews, a series of cybersecurity discussions sponsored by the Advanced Computing Systems Association, better known as USENIX.
Stamos pushed back during the discussion, saying “it’s not just a bunch of white guys” who make decisions about what posts to remove.
“When you turn up the volume on hate speech, you’ll get more false positives, (and) catch people who are just talking about it,” rather than promoting it, Stamos said.
The company also must operate within the laws of more than 100 countries, some of which use speech laws to suppress political dissent, he said.
“The definition of hate speech in some countries is problematic,” Stamos said.
Facebook CEO Mark Zuckerberg has said the company will hire 3,000 extra workers to monitor and remove offensive content.
That effort continues apace, according to Stamos, who said the company is “massively expanding our team to track threat actors.”
Still, “you can’t do all that with humans,” he said, which is why Facebook also relies on artificial intelligence software to judge whether someone trying to log in is a legitimate user.