Internal Facebook documents detail how misinformation spreads to users

Ahead of the 2020 election, Facebook implemented safeguards against the spread of misinformation, prioritizing safety over growth and engagement. According to a whistleblower, the company removed those safeguards after the election, once again allowing misinformation to spread.

Frances Haugen, a former Facebook employee, filed at least eight separate complaints with the Securities and Exchange Commission, alleging that the social network “misled investors and the public about its role perpetuating misinformation and violent extremism relating to the 2020 election and January 6th insurrection,” including removing “safety systems” put in place ahead of the 2020 election.

“And as soon as the election was over, they turned them back off or they changed the settings back to what they were before, to prioritize growth over safety,” Haugen said in an interview with “60 Minutes” correspondent Scott Pelley.

Facebook disputes that and says it maintained necessary safeguards, adding in a statement that it has “expressly disclosed to investors” that the risk of misinformation and extremism on the platform remains.

In 2019, a year after Facebook changed its algorithm to encourage engagement, its own researchers identified a problem, according to internal company documents obtained from the source.

As a test, the company set up a fake Facebook account under the name “Carol” and had it follow then-President Trump, first lady Melania Trump and Fox News. The algorithm suggested polarizing content within a day and began recommending conspiracy theories within a matter of days, an internal document stated.

By the second week, the fake account’s News Feed consisted “by and large” of misleading or false content. By the third week, according to internal documents, the feed was “intensifyingly mixed with misinformation, misleading, recycled content, polarizing memes and conspiracy content interspersed by occasional engagement bait.”

Facebook says it used research like the test account to inform safety improvements and its decision to ban QAnon. According to the company, hate speech on the platform has decreased over the past five quarters.

While speaking to “60 Minutes,” Haugen explained how the polarizing content reaches users.

“There were a lotta people who were angry, fearful. They spread those groups to many more people. And when asked to pick the most engaging content to post to people’s News Feeds, the systems chose hateful, angry content. Imagine that you see in your News Feed each day that the election was stolen, the election was stolen. At what point do you want to storm the Capitol?” Haugen said.

“And you can say, ‘How did that happen?’ You know, like, ‘Why aren’t we discussing these outlandish topics?’ QAnon, right, crazy conspiracies. These are the kinds of things Facebook chooses to display.”

Haugen is set to testify before the Senate Commerce Committee on Tuesday. She has said Facebook has repeatedly shown it prefers profit over safety.


Facebook statement on the suggestion it has misled the public and investors:

“As is evident from the news and our numerous public statements over the past several years, Facebook has confronted issues of misinformation, hate speech, and extremism and continues to aggressively combat them. These risks are known to our investors and will continue to be so.”

Claim that removing safety systems after the 2020 election allowed divisive content to spread:

“In phasing in and then adjusting additional measures before, during and after the election, we took into account specific on-platform signals and information from our ongoing, regular engagement with law enforcement. When those signals changed, so did the measures. These steps were not the cause of January 6th. The measures we needed remained in place through February, and some, such as the decision to stop recommending civic or political groups, remain in effect to this day. They were part of an even larger and more complex strategy to secure the election on our platform, and we are proud of that work.”

Carol’s (the fake Facebook account) journey:

“While this was a study of one hypothetical user, it is a perfect example of research the company does to improve our systems and helped inform our decision to remove QAnon from the platform.”

Role in January 6th:

“The notion that the January 6 insurrection would not have happened but for Facebook is absurd. The former president of the United States pushed the narrative that the election had been stolen, including in person near the Capitol that day. Responsibility for the January 6 violence lies with the perpetrators and those who encouraged them. We have a long history of effective cooperation with law enforcement, including the agencies responsible for addressing threats of domestic terrorism.” – Facebook spokesperson

Internal FB research finding that only 3-5% of hate speech and less than 1% of violence and incitement content prompts action from the platform:

“When combating hate speech on Facebook, the goal is to bring down its prevalence. Facebook’s hate speech prevalence is 0.05 percent of content viewed and is down by almost 50 percent in the last three quarters, facts that are regrettably being glossed over. These figures are reported publicly at least four times per year, and we even open our books to an outside auditor for verification. It is the largest, most sophisticated, and most transparent effort to eliminate hate speech by any large consumer technology company.”

Kris Van Cleave

Kris Van Cleave is a congressional correspondent for CBS News based in Washington, D.C.
