Facebook Has Removed Half A Billion Fake Accounts So Far This Year

Guy Rosen, Facebook's vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content, part of a push to be "accountable to the community".

Facebook has faced a storm of criticism for what critics have called a failure to stop the spread of misleading or inflammatory information on its platform ahead of the 2016 US presidential election and the Brexit vote to leave the European Union. "However, we don't have a sense of how many incorrect takedowns happen - how many appeals result in content being restored".

He said technology such as artificial intelligence is still years away from effectively detecting most bad content because context is so important.

This led to old as well as new content of this type being taken down.

During the press call, Schultz noted that the moderation team will be a mix of full-timers and contractors spread across 16 locations around the world. This is a problem perhaps most salient in non-English-speaking countries.

Improved artificial intelligence technology had helped it act on 3.4 million posts containing graphic violence, nearly three times as many as in the last quarter of 2017.

It attributed the increase to the enhanced use of photo detection technology.

The bulk of the posts were found and flagged by the firm before users reported them to Facebook, driven by improvements in artificial intelligence technology.

Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4.

Facebook said it released the report to start a dialogue about harmful content on the platform and how it enforces its community standards to combat it. The report provided detailed data on just how much objectionable content CEO Mark Zuckerberg's famed social network had to moderate in recent months. The company also reported that it took down 21 million pieces of adult nudity and 3.5 million pieces of violent content.

Facebook is struggling to block hate speech posts, conceding its detection technology "still doesn't work that well" and needs to be backed up by human moderators.

The company says the increase was likely the result of higher volumes of graphic violence content being shared on Facebook, possibly due to an increase in violence in Syria in the period.

"These kinds of metrics can help our teams understand what's actually happening to 2-plus billion people", he said. According to the report, this step is taken in instances where graphic content is being used to spread awareness or condemn violence, and doesn't go against the company's policies.

The social network's global scale - and the extensive efforts it undertakes to keep the platform from descending into chaos - were outlined Tuesday in its first-ever transparency report. Overall, the social giant estimated that around 3% to 4% of active Facebook accounts during Q1 were still fake.

During Q1, Facebook found and flagged 98.5% of the fake accounts it ultimately took action against before any user reported them.