Facebook’s new fraud detection system helped the company take down more than 6.5 BILLION fake accounts in 2019

  • A new report from Facebook reveals how the company is fighting fake accounts
  • The company has developed a machine-learning tool to analyze user accounts
  • It also analyzes the activity of an account's friends to determine authenticity

Facebook has deleted more than 6.5 billion fake accounts thanks to a new machine-learning program designed to sniff out frauds.

The social media giant currently has over 2.89 billion monthly users, of which it estimates five percent are fakes.

To help identify deceitful accounts, the company has developed a system it calls ‘Deep Entity Classification’ (DEC), a computer program equipped with machine-learning capabilities that analyzes user accounts for potential fakes.

A new report from Facebook revealed the company removed over 6.5 billion fake accounts in 2019, more than twice the number of real users it has

Facebook claims to have caught around 99.5 percent of the fake accounts on the site, often before other users have flagged the accounts in question.

‘This is the place where we see machine learning and human review working in concert forever,’ Brad Shuttleworth, Facebook’s Product Manager for Community Integrity, told The Next Web.

The program works by analyzing the number of other accounts a potentially fraudulent user is connected to, as well as whatever groups and pages they may have liked or followed.

The program then maps out similar data for each person in the account’s friends list, in addition to evaluating their daily activity for potential evidence of fraud, such as sending the same link to a promotional sale on sunglasses to hundreds or thousands of people.

The overall idea is to not just evaluate individual accounts, but to look at how they interact with their presumptive friends, and in turn consider how those friends interact with their own groups.
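The approach described above can be sketched in code. This is an illustrative toy, not Facebook's actual DEC implementation: all names (`Account`, `direct_features`, `deep_features`, `looks_fake`) and the spam heuristic are assumptions invented for the example. The key idea it demonstrates is computing 'deep' features by aggregating the direct features of an account's friends, rather than scoring each account in isolation.

```python
# Illustrative sketch of friend-graph feature aggregation (NOT Facebook's
# actual DEC code; all names and thresholds here are hypothetical).
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Account:
    friends: list = field(default_factory=list)   # other Account objects
    groups_joined: int = 0
    messages_sent: int = 0
    distinct_links_shared: int = 0

def direct_features(acct):
    """Features computed from the account itself."""
    return {
        "friend_count": len(acct.friends),
        "groups_joined": acct.groups_joined,
        # Spam-like signal: many messages but few distinct links,
        # e.g. blasting the same sunglasses-sale link to thousands of people.
        "link_repetition": acct.messages_sent / max(acct.distinct_links_shared, 1),
    }

def deep_features(acct):
    """Augment direct features with aggregates over the friends list."""
    feats = direct_features(acct)
    friend_feats = [direct_features(f) for f in acct.friends]
    if friend_feats:
        feats["friends_mean_friend_count"] = mean(f["friend_count"] for f in friend_feats)
        feats["friends_mean_link_repetition"] = mean(f["link_repetition"] for f in friend_feats)
    else:
        feats["friends_mean_friend_count"] = 0.0
        feats["friends_mean_link_repetition"] = 0.0
    return feats

def looks_fake(acct, repetition_threshold=100.0):
    """Toy rule standing in for the real ML classifier: flag accounts that
    blast the same link many times, or whose friends mostly do."""
    f = deep_features(acct)
    return (f["link_repetition"] > repetition_threshold
            or f["friends_mean_link_repetition"] > repetition_threshold)
```

In the real system, the aggregated feature vectors would feed a trained machine-learning model rather than the fixed threshold used here; the sketch only shows why friend-level aggregates help separate spam rings from ordinary users with unusual habits.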

Facebook relies on a fraud detection system called ‘Deep Entity Classification,’ which analyzes both individual accounts and their friends lists to better distinguish merely unusual behavior from actual fraud 

One of the factors that can make it hard to identify fake accounts is the sometimes radically different social standards that users in different regions operate under.

‘When people say fake, they often mean suspicious,’ Bochra Gharbaoui, of Facebook’s Community Integrity unit, said.

‘They’re not sure of what the intent of the account is, and it may also be that they’re seeing behavior on Facebook which doesn’t align with how they expect people to behave on the platform.’ 


Facebook says their priority in fraud detection is behavior that could put other users at risk through potentially malicious links or other harmful activity

In one culture, sending out mass friend requests to people you’ve never met might be taboo, while in another culture networking with strangers might be normal.

In other instances, a user might have wanted to create a joke page for their pet cat or create an online hub for an in-joke they might have shared with friends.

These kinds of uses would be allowable as Facebook pages, but not as Facebook accounts, a distinction that likely wouldn’t be clear to every user. 

Facebook says its main priorities aren’t honest mistakes or benign attempts at humor, but forms of deception that could cause material harm to others.

‘We prioritize enforcement against users and accounts that seek to cause harm and find many of these fake accounts are used in spam campaigns and are financially motivated,’ the company said in its Community Standards Enforcement Report.


Gharbaoui said Facebook’s fact-checkers and algorithms are searching for three types of fake news commonly spread through images and video.

1) Manipulated or fabricated: Content that has been edited or doctored to spread fake news.

Facebook gives an example in which the face of Mexican politician Ricardo Anaya was photoshopped onto a US Green Card ahead of a key election.

The photo was created to make people believe he was from Atlanta, Georgia, despite running for election in Mexico.

2) Out of context: Facebook posts that take images out of their original context to spread misinformation.

An example given by Facebook shows a user claiming a Syrian girl seen in several photos is an ‘actor’ used as part of a western propaganda campaign.

The post appears to suggest the injured child was spotted in photos of three ‘attacks’ carried out by the forces of Putin-backed Bashar Hafez al-Assad. 

Facebook’s fake-news system was able to confirm that the photos posted were from the same attack on the Syrian city of Aleppo.

3) Text or audio claim: Facebook photo or video that is layered with text or audio that contains fake news.

A photo posted with a hoax caption picked out by Facebook claimed that Indian Prime Minister Narendra Modi was rated by BBC ‘researchers’ as 2018’s seventh ‘most corrupt prime minister in the world’.
