
Facebook recruits 3,000 additional moderators to review flagged content


Why it matters to you

Facebook wants to be a safer place to visit, and it knows it needs to get better at removing objectionable pictures and videos.

If Facebook has one big, overriding problem, it’s objectionable content. Every day, the social network’s nearly 2 billion users report videos and pictures that violate the site’s terms of service, and it is up to Facebook’s moderation team to review those complaints. Lately, that team has struggled to keep up.

On Wednesday, Facebook said it would recruit as many as 3,000 additional moderators to help review the network’s content for hate speech, child exploitation, animal abuse, teenage suicide, and self-harm. The new hires will join the existing 4,500-member review team.

Facebook’s moderation problem is an open secret. In 2016, a BBC investigation found that sexual predators were using private Facebook groups to trade images of exploited children. Despite a promise from Facebook’s head of public policy to “[remove] content that shouldn’t be there,” a follow-up investigation found that Facebook failed to remove the vast majority of the images, taking down only 18 of the 100 that the BBC flagged using Facebook’s own reporting systems.

In response, the chairman of the U.K. House of Commons’ media committee, Damian Collins, told the BBC he had “grave doubts” about the effectiveness of Facebook’s moderation. “I think it raises the question of how users can make effective complaints to Facebook about content that is disturbing, shouldn’t be on the site, and have confidence that it will be acted upon,” he said.

The chairman’s comments came on the heels of more serious failures. Earlier in 2017, three men live-streamed the gang rape of a woman in Uppsala, Sweden, about 50 miles north of Stockholm. Last month, a man in Thailand broadcast the killing of his young daughter before taking his own life. And two days before Facebook’s annual F8 developer conference, a Cleveland man filmed himself shooting and killing a 74-year-old man and posted the video to the site.

“We still have a lot of work to do, and we will keep doing all that we can to prevent tragedies like this from happening,” CEO Mark Zuckerberg said during F8’s keynote address. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards, and easier for them to contact law enforcement if someone needs help.”

Facebook is also improving its automated moderation tools. It is developing new algorithms that will automatically identify and take down objectionable content, along with tools that will make it easier for users to report problems and for reviewers to contact law enforcement.

But Zuckerberg said that these measures won’t be an instant fix. “Artificial intelligence can help provide a better approach,” Zuckerberg said in an open letter. “[But it will take] many years to fully develop.”

He praised the company’s human moderators, who ensure flagged Facebook content abides by the network’s Community Standards.

“Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself,” Zuckerberg said.



