20 Years of Facebook: A Danger to Society


Today Facebook has become a dangerous digital space that fuels hate & violence. Read on to learn more about the present state of Facebook.


CURRENT DAY: FACEBOOK, NOW META, BECOMES PROBLEMATIC

NOW – FACEBOOK BOUGHT UP COMPETITORS IT VIEWED AS A THREAT TO ITS LONG-TERM GROWTH

  • Facebook maintained its monopoly by buying, copying or killing its competitors, according to a U.S. House Antitrust Subcommittee report. Between 2004 and 2020, Facebook acquired at least 63 companies. The subcommittee wrote that Facebook’s “serial acquisitions reflect[ed] the company’s interest in purchasing firms that had the potential to develop into rivals before they could fully mature.” Zuckerberg described buying companies as a “land grab” to “shore up our position.” Zuckerberg said he wasn’t concerned about competition because Facebook could “likely always just buy any competitive startups.” Politico wrote that Facebook’s purchases of WhatsApp and Instagram exemplified its “buy or bury” strategy against competitors.
  • Zuckerberg saw Instagram as a major competitor to Facebook and pushed to acquire it – including issuing threats to Instagram’s founders warning of consequences if they didn’t sell. In 2012, Facebook bought Instagram for $1 billion. CNN wrote that “as young social network users gravitated toward photo-sharing, Facebook wanted to scoop up what could have eventually become a big rival.” Zuckerberg said that Instagram and other social networks “could be very disruptive to us.” Zuckerberg identified that Instagram had a mobile advantage and could hurt Facebook. The U.S. House Antitrust Subcommittee reported that Zuckerberg had issued veiled threats to Instagram’s founder, telling him that “refusing to enter into a partnership with Facebook, including an acquisition, would have consequences for Instagram.” Instagram’s founder was reportedly concerned that his company would be targeted for retribution if he refused to sell to Facebook. Zuckerberg wrote to the Instagram founder: “How we engage now will determine how much we’re partners vs. competitors down the line,” noting that Facebook was “developing our own photo strategy.” Facebook’s purchase of Instagram eventually gave it near-total control of the social media space, with Facebook and its subsidiaries like Instagram accounting for 75% of all time spent on social media. When purchasing Instagram, Zuckerberg promised Facebook didn’t “plan on doing many more of these, if any at all.”
  • Zuckerberg bought WhatsApp in 2014 for $19 billion – the largest deal Facebook had ever made. WhatsApp was the most popular messaging app for smartphones when Facebook bought it. Zuckerberg and Facebook executives considered WhatsApp a threat to Facebook Messenger and to Facebook’s network. Facebook believed buying WhatsApp was an opportunity to further entrench its dominance.
  • In 2014, Facebook bought Oculus VR, believing VR could be the next big thing. Zuckerberg said Facebook’s purchase of Oculus reflected his belief that virtual reality could be the next big computing platform after mobile. The New York Times wrote that Facebook’s purchase of Oculus was “one of several bets” Facebook was making “in its efforts to anticipate the future and secure its dominance of social communication.”
  • Facebook’s acquisitions cemented its power over social networking. A Facebook presentation said the site controlled “95% of all social media” in the U.S. in terms of monthly minutes of use. Regulators in the UK, Germany, and Australia found Facebook dominated the social network market. A U.S. House Antitrust Subcommittee found Facebook to be a monopoly and recommended it be broken up, saying Facebook’s “monopoly power [was] firmly entrenched and unlikely to be eroded by competitive pressures from new entrants or existing firms,” as it owned three of the seven most popular mobile apps in the U.S.

NOW – ZUCKERBERG SAT AS A DICTATOR OVER THE WORLD’S LARGEST SOCIAL NETWORK

  • Zuckerberg consolidated power at Facebook, giving him a firm hand over all aspects of his company. A 2018 Vox article was headlined “Mark Zuckerberg is essentially untouchable at Facebook.” A 2020 Wall Street Journal article was headlined “Mark Zuckerberg asserts control of Facebook, pushing aside dissenters.” Zuckerberg called Facebook a “founder-led company.” Zuckerberg and his allies controlled almost 70% of all voting shares in Facebook. Proving Zuckerberg’s power at Facebook, the board was notified about the Instagram acquisition only a few days before it was announced. The Wall Street Journal remarked that “as both chairman and CEO and with a lock on the majority of Facebook’s super voting shares, Mr. Zuckerberg ha[d] few checks on his power.” Public Citizen said “with a mega-company such as Facebook, there [was] no justification or support for a dual-class stock system” and that “as a matter of public policy, it [was] dangerous to strip away one of the key tools of discipline for a mega-company.”
  • Zuckerberg acted as an authoritarian leader, forcing out those who disagreed with him and rewarding allies. In 2018, the Wall Street Journal reported that Zuckerberg had taken on “the role of a wartime leader” at Facebook, one “who needed to act quickly, and, sometimes, unilaterally.” That year, Zuckerberg gave himself power over Instagram and WhatsApp, units he had promised to leave independent. Zuckerberg was “not a man much given to quiet reflection,” remarked a TIME reporter, who described Zuckerberg as “supremely confident, almost to the point of being aggressive.”
  • Zuckerberg refused to take advice from his more seasoned, experienced and knowledgeable board members. After Erskine Bowles, a former investment banker and Clinton administration official, left the Facebook board, he criticized Facebook’s leader for failing to take his advice on politics – his area of expertise. In 2020, the Wall Street Journal reported that Zuckerberg had fired two board directors and replaced one of them with a longtime friend, which the paper called “the culmination of the chief executive’s campaign […] to consolidate decision-making at Facebook.” Facebook’s lead independent board director, Susan Desmond-Hellmann, left in October 2017 in part because management wasn’t considering board feedback. Kenneth Chenault, former American Express CEO and a close confidant of Zuckerberg, left the board after growing disillusioned. Zuckerberg originally treated Chenault as a “kind uncle” who understood running a big institution. Chenault had proposed an outside advisory group that would study Facebook’s problems and deliver reports to the board directly. The idea sank. In 2018, about a dozen senior or highly visible executives disclosed their resignations or left Facebook.

NOW – FACEBOOK BECAME A DATA VACUUM THAT SUCKED UP INFORMATION ON A QUARTER OF THE WORLD’S POPULATION

  • Facebook held the personal data of more than a quarter of the world’s population – 2.8 billion of 7.9 billion people. NBC News wrote that Zuckerberg “oversaw plans to consolidate [Facebook’s] power and control competitors by treating its users’ data as a bargaining chip.” A U.S. House Antitrust Subcommittee wrote that Facebook’s data advantage “compounded over time, cementing Facebook’s market position.” WIRED wrote that in the digital era, power came “from controlling data, making sense of it all, and using it to influence how people behave.” TechCrunch wrote that “data is to the 21st century what oil was to the 20th.”
  • Facebook boasted to advertisers about its platform’s access to users, promoting its ability to help advertisers target and sway them. Facebook often emphasized to advertisers its ability to sway users, portraying itself as an effective mechanism to help promote their products. When someone logged into Facebook, there were typically about 1,500 items the company could display in that person’s news feed, but it showed only about 300 of them. The New Yorker wrote that as private companies amassed more data about us and became the main civic forum for business and life, “their weaknesses could become more consequential.”
  • Facebook allowed third-party developers to access personal data from a user’s friends without those friends’ knowledge or consent. In 2018, Facebook’s deputy general counsel, Paul Grewal, claimed “protecting people’s information [was] at the heart of everything we [did].” But that same year, it was reported that Facebook had allowed developers to access the personal data of friends of the people who used their apps on the platform, without the knowledge or express consent of those friends. In 2018, a platform operations manager at Facebook, Sandy Parakilas, said tens or even hundreds of thousands of developers may have had data obtained through friend permissions.

NOW – FAILED TO IMPLEMENT SAFETY PROTOCOLS FOR THIRD-PARTY ACCESS TO USER DATA

  • Facebook had no control over user data once it reached third-party developers. Facebook’s platform operations manager, Parakilas, said that when it came to the control Facebook had over the data given to outside developers, Facebook had “Zero. Absolutely none.” Parakilas said that when he encouraged executives to proactively audit developers, he was discouraged from the approach, with one executive asking him, “do you really want to know what you’ll find?” Parakilas estimated that “a majority of Facebook users” could have had their data harvested by app developers.
  • Facebook knew that third-party developers had misused users’ data in the past. In 2010, the Wall Street Journal reported that many of the most popular apps on Facebook had been “transmitting identifying information […] to dozens of advertising and internet tracking companies.” The issue affected users who had set their profiles to Facebook’s strictest privacy settings. The Wall Street Journal wrote that “the practice [broke] Facebook’s rules and renew[ed] questions about its ability to keep identifiable information about its users’ activities secure.” Later, in 2019, Facebook suspended tens of thousands of apps for improperly sucking up users’ personal information. The New York Times wrote that the suspension of the apps was “a tacit admission that the scale of its data privacy issues was far larger than it had previously acknowledged.”

NOW – FREQUENTLY HARVESTED USER DATA WITHOUT ANYONE’S KNOWLEDGE AND HANDED IT TO THIRD PARTIES

  • Facebook frequently abused its ability to harvest user data without anyone’s knowledge. In 2018, the New York Times reported that Facebook overrode users who denied Facebook permission to share information with third parties, continuing to provide their data to device makers. Facebook’s sharing of information with third parties violated its 2011 consent decree with the FTC, which barred Facebook from overriding users’ privacy settings without first getting explicit consent. In 2019, the Department of Justice and FTC accused Facebook of violating an administrative order issued by the FTC in 2012 by misleading users about the extent to which third-party apps could access users’ personal information. The DOJ and FTC complaint accused Facebook of violating the Federal Trade Commission Act by deceiving users about its use of their data. In 2020, Australian regulators said Facebook’s Onavo Protect mobile app had been used by Facebook for research and for identifying future acquisition targets, despite telling customers it would keep their data private. In 2021, WhatsApp was fined $270 million by Irish authorities for not being transparent about how it used data collected from users. Irish regulators said WhatsApp was not clear that user data was being shared with Facebook. Facebook also admitted that it used phone numbers provided for two-factor authentication to target users with ads.
  • Facebook continued to share user data with 52 hardware and software companies – some of them based in China – years after it promised to stop doing so. The reports about data-sharing agreements with device makers caused renewed controversy because the practice continued years after Facebook began restricting the user information available to app makers, with the Washington Post noting Facebook had portrayed that restriction “as a sign that it had grown more careful in guarding user privacy.” Defending itself, Facebook said the sharing of user data was part of agreements designed to make its social media platform work more effectively on smartphones and other devices.

NOW – BECAME A MAGNET FOR MASSIVE DATA BREACHES BUT WORKED TO NORMALIZE THE PROBLEM

  • Facebook was no stranger to data breaches, but sought to normalize them rather than defend against them. In 2018, Facebook software bugs allowed the exposure of personal information of nearly 50 million users. In April 2021, Facebook suffered a data breach that leaked the data of 533 million people in 106 countries onto a hacking forum. Facebook brushed off the reports, saying the data was old and from a previously reported leak. Facebook denied any wrongdoing by saying the data was scraped from publicly available information on the site, yet Facebook refused to notify the more than 530 million users whose personal data was stolen in the breach. A leaked internal Facebook memo said the company’s “long-term strategy” for dealing with data breaches was to “both frame this as a broad industry issue and normalize the fact that this activity happens regularly.” Between 2016 and 2021, Facebook spent $13 billion on “safety and security,” which represented 4% of revenue. In 2019, Facebook spent $3.7 billion on safety and security on its platform. Meanwhile, in October 2021, Facebook announced that it planned to spend $10 billion on its Facebook Reality Labs project for the development of AR and VR products.

NOW – SECRETLY RECORDED FACEBOOK MESSENGER USERS AND SENT THE AUDIO TO THIRD PARTIES

  • Facebook secretly harvested audio from users, then provided it to third-party contractors for transcription. Facebook long denied that it collected audio from users to inform ads or help determine what people saw in their news feeds. Zuckerberg once dismissed as a “conspiracy theory” the idea that Facebook listened “to what’s going on your microphone and use[d] that for ads,” adding, “We don’t do that.” Further, Facebook’s data-use policy made no mention of audio, nor did it disclose to users that Facebook might use third parties to review their audio. But in fact, Facebook paid hundreds of outside contractors to transcribe clips from users of its service. The contractors paid by Facebook said they were hearing Facebook users’ conversations but did not know why Facebook needed them transcribed. Facebook responded to the reports by saying users who had their conversations transcribed had chosen the option in the Messenger app to have their voice chats transcribed.

NOW – ALLOWED POLITICAL CONSULTANTS TO EXPLOIT USERS’ PSYCHOLOGY TO SNATCH MORE VOTES

  • Facebook’s third-party data collection permissions allowed Cambridge Analytica to build psychological profiles of millions of Americans. In 2014, contractors and employees of Cambridge Analytica acquired the private Facebook data of tens of millions of users, intending to sell psychological profiles of American voters to political campaigns. Cambridge Analytica had purchased the user data from an outside researcher who claimed to be collecting it for academic purposes. Cambridge Analytica used the data of Facebook users to help target voters, drawing on private information from 50 million Facebook users taken without their permission, making it one of the largest data leaks in Facebook’s history. The data Cambridge Analytica took included users’ identities, friend networks and their likes on the platform. Only a fraction of the users Cambridge Analytica harvested data from had agreed to release their information to a third party. The head of Cambridge Analytica, Alexander Nix, boasted of having “a massive database of 4-5,000 data points on every adult in America.” The researchers who sold Cambridge Analytica user data had developed a technique to map personality traits based on what people had liked on Facebook (see the illustrative sketch after this list). The researchers paid users small sums to take a personality quiz and download an app that would scrape some private information from their profiles and those of their friends – activity that Facebook permitted at the time.
  • Cambridge Analytica served as a consultant for Trump’s 2016 campaign and caused one of Facebook’s largest scandals ever. Cambridge Analytica was backed by the conservative power-family the Mercers, and Steve Bannon served on Cambridge Analytica’s board – even choosing the company’s name. Cambridge Analytica worked with Trump’s 2016 campaign on activities like designing target audiences for digital and fund-raising appeals, modeling voter turnout, buying $5 million in TV ads and determining where Trump should travel to drum up support. Facebook had learned about the Cambridge Analytica data leak back in September 2015, with three Facebook employees requesting an investigation into the Cambridge Analytica data scraping – three months before public reporting on it. But in October 2015, a Facebook employee wrote “it’s very likely these companies [were] not in violation of any of our terms.” However, Zuckerberg testified that the company only learned about Cambridge Analytica from The Guardian’s reporting. The Guardian later wrote that the Cambridge Analytica scandal “plunged Facebook into the greatest crisis in its then 14 year history.” After reports of the scandal, Facebook users’ confidence in the company plunged by 66%, forcing Zuckerberg to go on an apology tour. Further, the Justice Department and the SEC opened investigations related to Cambridge Analytica. The FTC fined Facebook $5 billion over Cambridge Analytica – its largest settlement ever. Later, Meta agreed to pay $725 million to settle a lawsuit over sharing users’ personal information with Cambridge Analytica.
  • Facebook was repeatedly attacked and fined over its gross privacy violations and wanton disregard for keeping user data safe. In 2019, the New York Times wrote that Facebook had shown “a willingness to fight charges of privacy violations.” In 2020, Canada levied a CAD $9 million penalty for making “false or misleading claims about the privacy of Canadians’ personal information.” Facebook agreed to pay $90 million to settle a decade-old class-action lawsuit over a practice that allowed the site to track users’ activity across the internet even if they had logged out of the platform. In 2019, Brazil fined Facebook the equivalent of $1.6 million for improperly sharing user data. The Canadian Competition Bureau found that Facebook falsely represented how much information a user could control. The bureau found that third-party developers were able to access some user data in ways that were inconsistent with Facebook’s policies.
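The like-based profiling technique described above can be made concrete with a short sketch. This is not Cambridge Analytica’s actual code or data – it is a minimal, hypothetical illustration of the reported approach: train a simple model that maps which pages users liked to a personality-trait score obtained from a quiz, then apply that model to users who never took the quiz.

```python
# Hypothetical sketch of like-based trait prediction (illustrative only;
# all data here is synthetic, not Cambridge Analytica's).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Rows = users, columns = pages; 1 means the user liked that page.
likes = rng.integers(0, 2, size=(1000, 50))

# Trait scores (e.g., from a personality quiz) exist only for users
# who took the quiz; we simulate them here with synthetic data.
hidden_weights = rng.normal(size=50)
trait_scores = likes @ hidden_weights + rng.normal(scale=0.5, size=1000)

# Fit a simple linear model mapping likes -> trait score on quiz-takers.
model = Ridge(alpha=1.0).fit(likes[:800], trait_scores[:800])

# The fitted model can then score users who never took the quiz --
# the step that made mass harvesting of friends' data so valuable.
predicted = model.predict(likes[800:])
print(f"Correlation with held-out scores: "
      f"{np.corrcoef(predicted, trait_scores[800:])[0, 1]:.2f}")
```

Once trained on a base of consenting quiz-takers, a model like this could be applied to every scraped profile, which is why a data set in which only a fraction of users had consented was still commercially useful.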

NOW – REWARDED OUTRAGE AND SENSATIONALISM TO INCREASE USER ENGAGEMENT

  • In 2018, Facebook changed its news feed algorithm, purportedly to help users – but really to increase user engagement. In 2018, Facebook altered its news feed to prioritize what people’s friends and family shared and commented on, while de-emphasizing content from publishers and brands. The news feed would highlight posts that friends interacted with rather than viral videos and news articles shared by media companies. Zuckerberg called it a sacrifice to Facebook’s user engagement metrics that would be good for the community in the long term. In 2017, Zuckerberg had written that one of his goals for 2018 was “making sure that time spent on Facebook [was] time well spent.” Zuckerberg said the news feed changes were intended to maximize the amount of content with “meaningful interaction.” Zuckerberg had said that the “no. 1 value” at Facebook was “Focus on Impact.”
  • Zuckerberg said the changes were driven by an effort to strengthen bonds between users and improve their well-being. In fact, Facebook made the changes to its news feed algorithm partly because user engagement was declining: social interactions on Facebook were giving way to passive media consumption. Facebook “never really figured out why metrics declined,” according to a 2020 internal memo. And according to the Wall Street Journal, even as Zuckerberg claimed the algorithm change would strengthen user well-being, Facebook researchers were warning that the change was making Facebook an “angrier” place.

NOW – REFUSED TO MAKE ALGORITHM CHANGES DESPITE BEING AWARE OF ITS HARM

  • Facebook employees quickly found that the algorithm change was backfiring and negatively impacting user well-being. A 2021 Wall Street Journal article was headlined “Facebook tried to make its platform a healthier place. It got angrier instead.” Facebook employees warned internally that the algorithm change was having a negative effect on user well-being and mental health. Facebook researchers found that the algorithm changes “had unhealthy side effects on important slices of public content” like news and politics. Facebook researchers found the algorithm’s heavy weighting of reshared material in the news feed made the angry voices on the platform louder. According to internal Facebook research: “misinformation, toxicity, and violent content are inordinately prevalent among reshares.” In the summer of 2018, Facebook data scientists surveyed users and found that many felt the quality of their feed had decreased.

NOW – DROVE POLITICAL PARTIES TO INCREASE NEGATIVE MESSAGING

  • Facebook’s algorithm change incentivized publishers and politicians to post sensationalist and negative content because it was successful. Facebook researchers found that after Facebook changed its news feed algorithm, publishers and political parties reoriented their posts towards outrage and sensationalism. The Wall Street Journal reported the tactic “produced high levels of comments and reactions that translated into success on Facebook.” In April 2019, Facebook researchers found that in Spain, political parties had “learnt that harsh attacks on their opponents net the highest engagement” due to the algorithm change. Facebook whistleblower Frances Haugen explained that “anger and hate [was] the easiest way to grow on Facebook.”
  • Both publishers and political parties warned Facebook that the algorithm changes were forcing them to shift towards sensationalist content. In the fall of 2018, BuzzFeed CEO Jonah Peretti raised concerns to Facebook about how its news feed algorithm change incentivized divisive content. Peretti wrote that the algorithm change was rewarding divisiveness rather than “content that drives meaningful social interactions.” Peretti told Facebook it wasn’t just divisive content that saw success on Facebook, but also “fad/junky science” along with “extremely disturbing news” and “gross images.” Political parties in Europe told Facebook the 2018 algorithm change had made them shift their policy positions so they would resonate more on the platform. Political parties in Europe felt Facebook’s algorithm change made it more difficult to directly communicate with their supporters, incentivizing them to create posts feeding on people’s anger to increase visibility. The political parties noted that the incentive to post more negative and sensationalist content raised concerns about its long-term effect on democracy. A political party in Poland shifted the proportion of its posts from 50/50 positive/negative to 80% negative, explicitly because of the algorithm change. In 2018, Facebook acknowledged that social media could have negative effects on democracy.

NOW – ALLOWED MISINFORMATION TO PROLIFERATE AND DECLINED TO ADDRESS THE ISSUE

  • Misinformation was the most engaged-with content on the platform, and Zuckerberg and company were well aware of the problem. Fake news and false rumors reached more people, penetrated deeper into social networks and spread much faster than accurate stories. A false story on social media reached 1,500 people six times quicker than a true story did. Researchers from NYU found that Facebook users engaged with misinformation more than other kinds of information on the platform. Brookings wrote that misinformation was “the logical result of a revenue model that reward[ed] the volume of information over its veracity.” Brookings: “When lies pay as well as the truth, there is little incentive to only tell the truth.”
  • Facebook knew it was exposing users to misinformation, but chose not to do anything about it. According to a 2019 internal Facebook memo, Facebook was “knowingly exposing user[s] to misinformation that we ha[d] the processes and resource[s] to mitigate.” Internal Facebook documents showed that the platform’s own researchers had identified the platform’s ill effects in areas like political discourse. Auditors found that Facebook’s algorithms continued to push people toward self-reinforcing echo chambers, which potentially deepened polarization. Internal Facebook documents found that the platform aggravated polarization and tribal behavior.
  • Facebook understood that its news feed and recommendation changes fostered rage, misinformation and disinformation. The Washington Post ran an article headlined “Five points for anger, one for a ‘like’: how Facebook’s formula fostered rage and misinformation.” Facebook weighted “angry” emoji reactions five times more heavily than likes, leading to a spread of misinformation, toxicity and low-quality news (an illustrative scoring sketch follows this list). Facebook was also the No. 1 social network for disinformation. Internal Facebook research repeatedly found that recommendation tools pushed users into extremist groups. According to a 2016 presentation, Facebook researchers found “64% of all extremist group joins [were] due to our recommendation tools.” An internal Facebook presentation delivered to executives in 2018 showed the company was well aware its products, specifically its recommendation engine, stoked divisiveness and polarization. Internal researchers at Facebook found that Facebook’s “core product mechanics” let disinformation and hate speech flourish on the site. In a 2018 presentation, a Facebook team wrote that their algorithm “exploit[ed] the human brain’s attraction to divisiveness.” In a 2018 article, WIRED wrote that “social media platforms ha[d] come to seem like a prime culprit for liberal democracies’ decline,” saying that social media and “an automated media landscape reward[ed] demagoguery with clicks.”
  • Zuckerberg resisted attempts to fix the algorithm causing division and tribal behavior, worrying about the impact on profits. According to NBC, Facebook had “long known its algorithm and recommendation systems push[ed] users to extremes.” Facebook whistleblower Frances Haugen said that at Facebook, she saw conflicts of interest between what was good for the public and what was good for Facebook. Haugen remarked: “and Facebook, over and over again, chose to optimize for its own interests.” The Wall Street Journal wrote that Zuckerberg and Facebook executives “largely shelved” research showing Facebook was causing divisiveness and polarization. A Facebook team said building features to keep Facebook’s algorithms from recommending extremist content would come at the cost of user engagement. The research team said the changes would require Facebook to “take a moral stance.” The Wall Street Journal reported that “fixing the polarization problem on Facebook” would require it “to rethink some of its core products.” But Zuckerberg rejected proposed fixes to the algorithm because he worried they would hurt Facebook users’ engagement. According to the New York Times, “any action taken to reduce popular content, even if it’s fake news, could hurt [Facebook’s] priority of keeping its users engaged on the platform.” The Washington Post ran an article titled “The case against Mark Zuckerberg: Insiders say Facebook CEO chose growth over safety.” Zuckerberg would not approve restricting Facebook’s algorithm from boosting content most likely to be shared by a lot of users if there was a “material trade off” with “meaningful social interaction.” Zuckerberg in fact rejected proposed changes to increase the algorithm’s safety specifically because they would impact meaningful social interactions. And according to the Wall Street Journal, “Facebook executives shut down efforts to make the site less divisive.”
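The reaction weighting reported above – five points for an “angry” reaction versus one for a like – can be made concrete with a small sketch. Only the 5:1 ratio comes from the reporting; the post data, the data structure and the scoring function here are hypothetical, meant purely to illustrate how such a weighting favors outrage.

```python
# Illustrative, hypothetical sketch of reaction-weighted feed ranking.
# Only the 5:1 angry-to-like ratio is from public reporting.
from dataclasses import dataclass

REACTION_WEIGHTS = {"like": 1, "angry": 5}  # the reported 5:1 ratio

@dataclass
class Post:
    text: str
    likes: int
    angry: int

def engagement_score(post: Post) -> int:
    """Score a post by weighted reactions; higher scores rank higher."""
    return (REACTION_WEIGHTS["like"] * post.likes
            + REACTION_WEIGHTS["angry"] * post.angry)

feed = [
    Post("Calm local news update", likes=200, angry=2),
    Post("Outrage-bait political post", likes=40, angry=50),
]

# Under this weighting, the outrage-bait post (40 + 5*50 = 290) outranks
# the calm post (200 + 5*2 = 210) despite receiving far fewer likes.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(post), post.text)
```

Because angrier posts rank higher and therefore get seen, reshared and reacted to more, a weighting like this compounds on itself – consistent with the internal finding that misinformation and toxicity were “inordinately prevalent among reshares.”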

NOW – ALLOWED FOREIGN ACTORS TO THREATEN NATIONAL SECURITY AND RUN DISINFORMATION CAMPAIGNS IN THE U.S.

  • Facebook became a hub for political disinformation campaigns in the U.S., run by foreign actors, terrorists and extremists. Facebook acknowledged that the U.S. was the most frequent target of disinformation campaigns. Russia and Iran were the leading purveyors of disinformation on Facebook between 2018 and 2021. The New Yorker wrote that online disinformation was “an ongoing threat to our country” that was “already damaging our political system and undermining public health.” National security leaders sounded the alarm on the threat disinformation posed, with former NSA general counsel Glenn Gerstell explaining that disinformation was a national security threat because it “either sows discord in our society or undermines confidence in democratic institutions.” The American Security Project wrote that disinformation could “degrade the fundamentals of democratic societies: trust in institutions, a free media, civil society” and “trust in free and fair elections.” The American Security Project further explained that the propagation of disinformation “could work towards increasing Russian and Chinese spheres of influence” and risked “negatively impacting the U.S.’s standing in the world as a global leader and cooperative partner.” And yet, in April 2020, TIME reported that Facebook was “reluctant to crack down on political disinformation.”
  • Facebook consistently understaffed counterespionage and counterterrorism operations – and once exposed the personal details of its content moderators to suspected terrorists. According to whistleblower Frances Haugen, Facebook had a “consistent understaffing of the counter-espionage information operations and counterterrorism teams,” and she told lawmakers that she believed Facebook had become a “national security issue.” Stratfor wrote that Big Tech was “no more immune to potential espionage and foreign influence” than any business with vast international ties. Worse yet, in 2017, Facebook was found to have inadvertently exposed the personal details of its content moderators to suspected terrorists. The security lapse affected more than 1,000 workers across 22 departments at Facebook. Moderators had their personal profiles viewed by accounts with ties to ISIS, Hezbollah and the Kurdistan Workers Party, and were automatically appearing in the activity logs of the groups they were shutting down. The moderators reported receiving friend requests from people affiliated with the terrorist organizations they were scrutinizing. The computer glitch that exposed moderators’ profiles to terrorists was not fixed for a month and had retroactively exposed the personal profiles of moderators who had censored accounts as far back as a year prior.

NOW – BECAME RUSSIA AND IRAN’S PLATFORM OF CHOICE FOR PROPAGANDA

  • Russia expertly harnessed Facebook to spread propaganda and sow division in the U.S. In 2014, Russia began to promote propaganda and target American voters with polarizing messaging on Facebook. Russia’s troll farm, the Internet Research Agency, used the same internet marketing tools and techniques that common digital advertising campaigns did. Russia bought ad space on Facebook to target Americans with politically charged advertising. By 2016, Russia had started more than 20 disinformation campaigns in 13 countries, 46% of which were on Facebook. But the New York Times reported that it was “difficult to quantify the amount of disinformation” being produced at any time by Russians or other adversarial powers. Facebook failed to discover the Russia-based Internet Research Agency’s campaign to spread hyper-partisan content and disinformation during the 2016 election. Facebook admitted that Russia-based operatives had published about 80,000 posts on the platform over a two-year period in an effort to sway U.S. politics. Facebook further acknowledged that up to 126 million Americans may have seen the posts during that time. Most of the posts focused on divisive social and political messages like race relations. Russian propagandists on Facebook even tried to organize more than a dozen pro-Trump rallies in Florida during the 2016 election, which brought dozens of Trump supporters together in real life.
  • Despite becoming aware that Russian agents were harnessing Facebook, the platform did little to blunt their efforts. Despite banning ads from Russian state media and restricting recommendations for such outlets, Facebook hadn’t stopped pro-Russia countries from using their state channels to buy ads pushing pro-Russian propaganda. Researchers from NYU performing a security analysis of Facebook’s foreign ad policies said that the then-current policies and implementation of Facebook’s ad library were not “designed to provide strong security against adversarial advertisers.” In August 2021, it was reported that Instagram had removed hundreds of accounts linked to Russia that were engaged in a misinformation campaign on the platform. In March 2022, Politico reported that Facebook was not making enough effort to stop Russian propaganda and misinformation in majority Spanish-speaking countries, and thus “it continue[d] to spread.”
  • Iran used Facebook to spy, spread COVID vaccine misinformation and run pro-Trump ads, while China ran disinformation campaigns similar to Russia’s. Iran had spread COVID disinformation through videos, cartoons, and news stories from state media outlets on social media platforms to appeal to U.S. and western audiences. The Iranian government had used Facebook to conduct espionage on other state actors. In 2020, the Iranian government sent emails and videos to voters in Arizona, Florida and Alaska, purporting to be from the Proud Boys, saying “vote for Trump or we will come after you.” Chinese agents created fake social media accounts akin to Russian-backed trolls that pushed out false messages designed to create chaos in the U.S.

NOW – LET ADVERSARIAL NATIONS BUY ADS IN AMERICA TO PUSH THEIR MESSAGE

  • Foreign actors hoping to spread dysfunction in America bought ads on Facebook to push their message. Facebook found 470 accounts linked to Russian propaganda pushing about 3,000 paid ads. Facebook disclosed that it had identified more than $100,000 worth of divisive ads on hot-button issues purchased by a shadowy Russian company linked to the Kremlin. In 2020, it was reported that Facebook sold more than $5 billion a year worth of ad space to Chinese businesses and government agencies looking to promote their messages abroad. China was Facebook’s biggest country for revenue after the U.S. A 2022 Harvard study found that “Facebook advertisements from Chinese state media [were] linked to changes in the tone and content of news reporting on China.”
  • Confronted with the fact that disinformation was being spread on his platform, Zuckerberg ignored warnings and worked to suppress the evidence. The New York Times wrote that, “bent on growth,” Zuckerberg and Sandberg ignored warning signs that Facebook could be used to disrupt elections, spread propaganda and inspire violence, “then sought to conceal them from public view.” Zuckerberg said he was “on the side of giving people a voice and pushing back on censorship.” Facebook’s legal and policy team was at odds with Facebook’s security team on the issue, because the security team generally pushed for more disclosure on how nation-states misused the platform. Whistleblower Frances Haugen said Facebook was “very aware” that its platform was being used by American adversaries to push and promote their interests at the expense of Americans.

NOW – UNDERINVESTED IN ENFORCEMENT MECHANISMS TO STEM FOREIGN INFLUENCE

  • In 2019, Facebook began labeling posts from state-owned media outlets, but researchers questioned the effectiveness and enforcement of those efforts. In October 2019, Facebook said it would “begin labelling media outlets that [were] wholly or partially under the editorial control of their government as state-controlled media.” Facebook said applying labels to state-controlled media outlets would offer “greater transparency” to readers. Facebook noted that it had developed its own “definition and standards for state-controlled media organizations” using input from “40 experts around the world specializing in media, governance, human rights and development.” In June 2020, Facebook said it would block any ads from state-controlled media outlets that targeted U.S. users. But in March 2020, NYU announced that a study by data scientists at its Tandon School of Engineering had found “systemic flaws in Facebook’s political ad monitoring and enforcement processes.” NYU said its researchers “found no instance of meaningful long-term enforcement,” despite Facebook’s policy banning political advertising by foreign entities. The NYU researchers noted that “to a large extent,” Facebook relied on ad sponsors cooperating and proactively complying with Facebook’s sponsor disclosure policy. The researchers found $37 million worth of political advertising that failed to identify its funding source, and noted that the pattern of “frequent non-disclosure occurred often without any visible enforcement level,” even when the sponsors were foreign companies or governments. In February 2022, the Center for Countering Digital Hate released a study that found 91% of Facebook posts containing propaganda from Kremlin-funded media did not carry any warning label about the content coming from state-run media.

NOW – FACEBOOK’S LACK OF EFFORT TO STEM DISINFORMATION, FALSE POLITICAL ADS AND EXTREMISM SWAYED ELECTIONS AND SOWED DIVISIONS

  • Misinformation about elections was some of the most popular content on Facebook. A 2016 CNBC article was headlined “Facebook users engaged with top fake election news [more] than most popular real reporting, report says.” It was found that fake news generated more engagement on Facebook than real, mainstream news among top election-related articles. In the final three months of the 2016 presidential election, the 20 top-performing false election stories from hoax sites and hyper-partisan blogs generated 8,711,000 shares, reactions and comments on Facebook. Within the same period, the 20 best-performing election stories from 19 major news sites generated a total of 7,367,000 shares, reactions and comments. A Facebook spokesman responded to the reports by saying the top stories didn’t reflect overall engagement on the platform.
  • Zuckerberg said he would allow politicians to run ads on the platform that contained misinformation. Zuckerberg said political speech was “one of the most sensitive parts in a democracy, and people should be able to see what politicians [said].” In January 2020, Facebook reaffirmed that it wouldn’t ban, fact-check or limit how political ads could be targeted to specific groups of people. Facebook said it would instead offer users slightly more control over how many political ads they saw, as well as make its online library of political ads easier to browse. In a blog post, Facebook’s director of product management for ads, Rob Leathern, said the company was “not deaf” to criticism about its rules around political ads. Defending the policy that allowed politicians to peddle ads containing misrepresentations and lies, Zuckerberg said: “I don’t think people want to live in a world where you can only say the things that tech companies decide are 100 percent true.” Yet Zuckerberg claimed he “care[d] deeply about the democratic process and protecting its integrity.” And in 2019, Politico reported that Facebook had removed several ads placed by Elizabeth Warren’s presidential campaign that called for the breakup of Facebook and other Big Tech giants. Facebook only reposted Warren’s ads after Politico reported on the takedown. Warren’s ads had directed users to a petition on her campaign website urging them to “support our plan to break up these big tech companies,” and were limited in size and reach, with each costing under $100.
  • Facebook had a secret policy that allowed high-profile users to thwart content policies and made them immune from enforcement action, even though they posed greater risks than regular users. Facebook’s “XCheck” program whitelisted some of its high-profile users, allowing them to post inflammatory claims even when those claims had been deemed false by Facebook’s fact-checkers – even though internal researchers had raised concerns that high-profile accounts posed greater risks than regular ones and were the least policed. Some of the posts from users in the XCheck program said vaccines were deadly, that Hillary Clinton had covered up pedophile rings and that Trump had called all refugees seeking asylum “animals.” Posts by whitelisted users that contained misinformation had been viewed at least 16.4 billion times before being removed. The lists of those enrolled in the XCheck program were “scattered throughout the company, without clear governance or ownership,” according to Facebook’s internal documents. In fact, most Facebook employees were able to add users to the XCheck system. The XCheck program had at least 5.8 million users in 2020. An internal review of Facebook’s whitelisting said “we are not actually doing what we say we do publicly.” Facebook had lied to its own oversight board about XCheck, saying the system was used in “a small number of decisions.”
  • Zuckerberg rejected claims that Facebook swayed the 2016 election, calling it a “pretty crazy idea.” Zuckerberg said it was “extremely unlikely” that fake news shared on Facebook could have swayed the 2016 election. Denying that Facebook had influenced the 2016 election, Zuckerberg said “there’s a profound lack of empathy in asserting that the only reason why someone could’ve voted the way that they did [was] because they saw some fake news.” Later, Zuckerberg said he regretted dismissing concerns about Facebook’s role in influencing the 2016 election.

NOW – GAVE INSURRECTIONISTS A PLATFORM TO PUSH THEIR ELECTION DENIAL MESSAGES

  • Facebook allowed election denial content to run rampant without pushback, paving the way for the January 6th insurrection. Facebook had reportedly established a task force to police violent and hateful election disinformation ahead of the 2020 election, but Facebook disbanded the task force and rolled back enforcement actions after the election. The Washington Post found that during the 2022 midterm election cycle, at least 26 candidates posted inaccurate election claims for months and the platform did “virtually nothing” to refute them. The Post also found that Facebook failed to challenge or take enforcement action against 17 candidates who claimed the 2022 election or its voting systems would be rigged. A civil rights audit found Facebook exempted politicians from third-party fact-checking and was “far too reluctant to adopt strong rules to limit voting misinformation and voter suppression.” The Washington Post and ProPublica found Facebook groups had hosted at least 650,000 posts attacking the legitimacy of Joe Biden’s election as president between Election Day and the January 6th insurrection. The Washington Post and ProPublica reported that their investigation provided “the clearest evidence yet that Facebook played a central role in the spread of false narratives that fomented the violence of January 6th.” Trump used Facebook as a “key platform” for his lies about the 2020 election right up until he was banned on January 6th.

NOW – LET EXTREMISTS, MILITIAS AND WHITE SUPREMACISTS GROW FOLLOWINGS AND RECRUIT MEMBERS

  • Facebook allowed extremists to cultivate followings on its platform and push outlandish content that sowed division. A 2016 internal Facebook presentation found extremist content was thriving in more than one-third of large German political groups on the platform. Facebook knew its algorithms were responsible for the growth of extremist content on its platforms, saying in an internal presentation that “64% of all extremist group joins [were] due to our recommendation tools.” Facebook also profited off of white supremacists on its platform. Politico reported on a Tech Transparency Project (TTP) study finding that “Facebook continued to serve ads against searches for white-supremacist content, such as the phrase Ku Klux Klan and American Defense Skinheads.” TTP said white supremacists “continue[d] to have a home” on Facebook’s platforms. TTP found that more than 80 white supremacist groups had a presence on Facebook, including some the platform had labeled as “dangerous organizations.” TTP found that Facebook searches for some groups with Ku Klux Klan in their name generated ads for Black churches, which it called “chilling” in light of the Buffalo mass shooting. TTP found that more than a third of the 225 white supremacist groups deemed hate groups by the Southern Poverty Law Center and the Anti-Defamation League had a presence on Facebook.
  • White supremacists and militia groups were continuing to build followings on Facebook, and the platform was automatically creating pages for them. TTP found that Facebook automatically created 24 pages for white supremacists after some listed a supremacist group as an interest or their employer. TTP also found that the Boogaloo Bois had returned to Facebook and were using the platform to funnel new recruits into smaller subgroups to coordinate offline meet-ups and training. TTP found that the Boogaloo Bois were posting propaganda videos and guides to sniper training and guerrilla warfare tactics, along with how-tos for assembling untraceable guns. Over merely a few weeks, the group gained over 2,000 followers.

NOW – BECAME A SOCIAL UTILITY FOR TERRORISTS TO ENGAGE WITH MAINSTREAM MUSLIMS

  • Terrorists harnessed Facebook to recruit mainstream Muslims, recognizing it was a place young Muslims thought was cool. The United Nations wrote that the internet and social media had become “powerful tools for terrorist groups to radicalize, inspire and incite violence.” The DHS once found that Muslim extremists were urging terrorists to open Facebook accounts so they could reach, interact with and encourage mainstream Muslims to become extremists. The DHS found that Al-Qaeda used Facebook to transmit its message through an outlet kids thought was cool.

NOW – WAS A HUB FOR FALSE INFORMATION ABOUT COVID AND VACCINES

  • During the COVID pandemic, Facebook did little to block false information and anti-vaccine content from spreading on the platform. In April 2020 alone, Facebook had to put misinformation warning labels on nearly 50 million pieces of COVID-related content. An internal Facebook researcher said the platform’s “internal systems [were] not yet identifying, demoting and/or removing anti-vaccine comments fast enough.” The Department of Homeland Security believed China was waging a disinformation war during COVID to shift responsibility for the pandemic onto other countries, including the United States.
  • Anti-vaccine content was the most engaged-with and most popular content on Facebook’s platforms. In 2021, NPR found that articles connecting vaccines and death had been among the most highly engaged-with content online that year. The Huffington Post reported in June 2021 that, for more than a week, the top featured results for the hashtag #vaccine returned anti-vax posts, including one that said “the only thing vaccines eradicated were healthy children.” As of late March 2021, 8 of the first 10 results returned in an Instagram search for “vaccine” were anti-vaccine or vaccine-conspiracy accounts. In July 2021, Media Matters for America found 284 public and private anti-vaccine Facebook groups with 520,000 followers combined. Accountable Tech found that during one week in July 2021, 11 of the top 15 vaccine-related posts on Facebook contained disinformation or were anti-vaccine. Center for Countering Digital Hate research revealed that anti-vax social media accounts gained nearly 1 million more followers in the last six months of 2020 alone. In 2020, the anti-vax movement was most popular on Facebook, where it had 31 million followers.
  • Researchers found that even when Facebook worked to tamp down anti-vaccine posts, its algorithm still pushed users to anti-vaccine content through its “related pages” feature. When a researcher from AVAAZ created two new Facebook accounts to conduct an experiment about vaccine disinformation, in just two days the accounts were recommended 109 pages containing anti-vaccine information. The researcher found that when his accounts started searching “vaccine” or liked an anti-vaccine page, more anti-vaccine pages showed up in his results. The researcher found “opening and liking several of these pages, in turn, led our account further into a network of harmful pages.” The researcher said the pages were “seemingly linked together and boosted by Facebook’s recommendation algorithm.” Instagram’s algorithms pushed followers of wellness influencers linked to the anti-vax movement towards “verified” Instagram anti-vax accounts. A news story suggesting the COVID vaccine could have been involved in a doctor’s death was the most viewed link on Facebook in the U.S. in the first three months of 2021.
  • Facebook users were among the most likely to believe false claims about COVID vaccines, the Washington Post found. The Post tested whether demographic or other differences between Facebook users explained their lower vaccination rates, but found no such difference. People who got their news about COVID on Facebook were less likely to be vaccinated, and more strident in their opposition to it, than even those who got their news from Fox News.
  • Facebook’s permissive approach to lies about vaccines was directly linked to lower vaccination rates. Research revealed that social media played a major role in vaccine hesitancy. A study by Harvard, Northwestern, Northeastern and Rutgers found that those most reliant on Facebook for information had substantially lower vaccination rates than those who relied on other sources. The World Health Organization ranked vaccine hesitancy as one of the top 10 threats to global health. The Associated Press reported that COVID cases “nearly tripled in the U.S. over two weeks amid an onslaught of vaccine misinformation.” In July 2021, in Mississippi, the state with the lowest vaccination rate, the state’s department of health had to shut down its Facebook comments because they had become dominated by misinformation.
  • Zuckerberg refused to work to stem the spread of vaccine misinformation on his platform in the name of defending “freedom of expression.” Zuckerberg admitted in a congressional hearing that Facebook wouldn’t “stop its users from posting information that’s wrong” about vaccines. Zuckerberg said Facebook cared about “freedom of expression” and supported users’ “fair and open discussions.” Facebook’s head of health, Kang-Xing Jin, said vaccine conversations were “nuanced” and content couldn’t “always be clearly divided into helpful and harmful.”
  • Facebook and Biden had “combative” meetings over the spread of anti-vaccine content on the platform. The White House reportedly grew so frustrated with Facebook’s answers during their meetings that at one point, the Biden administration demanded to hear from the company’s data scientists instead of its lobbyists. White House officials felt that Facebook was making it difficult for the administration to understand its data sets and how vaccine misinformation proliferated on the site. Despite meeting repeatedly with the Biden administration, Facebook did not come up with concrete solutions for curbing vaccine misinformation on its site.
  • Facebook stonewalled independent researchers attempting to study the spread of COVID misinformation on the platform. Facebook refused to give researchers the real-time data they needed to figure out exactly how much COVID misinformation was on the platform. More than a dozen independent researchers who studied Facebook – six of whom were studying the spread of information about COVID – said Facebook made it difficult for them to access vital information. The information researchers sought included how many times people viewed COVID-related articles, what health information Facebook took down and what was being shared on private pages and groups. Academics said a lack of access to Facebook data was limiting their ability to understand how many people were seeing COVID misinformation that could be causing vaccine hesitancy. Facebook’s own internal data scientists reported difficulty studying COVID misinformation on the platform.
  • Facebook made millions of dollars off of COVID misinformation and anti-vaccine content. Facebook earned money from advertisements placed by anti-vaxxers. The Center for Countering Digital Hate found that the anti-vaxx movement’s following of over 58 million users could be worth up to $1 billion in annual revenue for Facebook through ad placements. CCDH predicted Facebook could earn up to $23.2 million in revenue from ads directed at existing anti-vaxxer audiences.

NOW – LET MISINFORMATION ABOUT CLIMATE CHANGE HEAT UP ITS PLATFORM

  • Misinformation about climate change was “increasing substantially” on Facebook and the scale was “staggering.” An article in The Guardian was titled “Climate misinformation on Facebook ‘increasing substantially’, study says.” A study found that from 2020 to 2021, climate misinformation on Facebook had grown by 76.7%. The Guardian wrote that “the scale of misinformation on Facebook” was “staggering” and “increasing quite substantially,” according to an analysis of thousands of posts. In January 2021, Facebook displayed climate disinformation when users searched for climate change information. The Washington Post reported that Breitbart had “outsize influence over climate change denial” on the platform. A Facebook whistleblower alleged that Facebook executive Joel Kaplan proposed exempting Breitbart from misinformation rules. Further, Facebook reportedly suppressed information from a climate scientist aiming to correct misinformation. Facebook also reportedly allowed staff to make climate misinformation ineligible for fact-checking by deeming it the “opinion” of the poster or publisher.

NOW – EXPLOITED CHILDREN AND DIRECTED PREDATORS THEIR WAY

  • Facebook was hungry for young users, with Zuckerberg calling them the platform’s “north star.” In 2021, Zuckerberg said he was redirecting teams within his company to “make serving young adults their north star.” One of the more immediate shifts Facebook/Meta planned was “significant changes” to Instagram, like a focus on video. An October 2021 headline from The Verge read “Facebook says it’s refocusing company on ‘serving young adults.’”
  • Facebook had spent years secretly plotting ways to attract preteen and tween users to its platforms. An internal Facebook document called “tweens” a “valuable but untapped audience.” In 2021, the Wall Street Journal reported that Facebook teams had “for years been laying plans to attract preteens that go beyond what is publicly known.” As far back as June 2012, Facebook had explored allowing children younger than 13 to use its platform. Facebook formed a team to study preteens, set a three-year goal to create more products for them and commissioned strategy papers about the long-term business opportunities young users presented. In December 2017, Facebook introduced Messenger Kids, an app for children under 13, so kids could message, add filters and doodle on photos they sent one another. Facebook said the point of Messenger Kids was to provide a more controlled environment for the types of activity that were already occurring across smartphones and tablets among family members.
  • Young users were already on Facebook in droves, and had been for a long time, despite a policy and laws against it. In 2016, the Atlantic reported that Facebook and Instagram’s policy of only allowing people 13 and over did not “appear to be strictly enforced.” In May 2011, ABC News reported there were about 7.5 million children under 13 on Facebook, with about 5 million under the age of 10. Facebook responded to reports of millions of children using its platform by saying it was not easy for an online company to enforce an age limit, and that it had a policy barring children under 13. Facebook noted that the reports of minors on its platform “highlighted just how difficult it [was] to implement age restrictions on the internet.” Facebook claimed there was “no single solution to ensuring younger children don’t circumvent a system or lie about their age.” A 2014 study of children between the ages of 8 and 12 found that one-quarter of them reported using Facebook even though they were underage. In 2021, 45% of children aged 9-12 said they used Facebook daily. Big Tech was reportedly “fiercely opposed” to limiting what data could be collected on users under 13, with the industry group the Internet Association saying Big Tech was concerned that the rules would “not be workable because they fail[ed] to account for the technical realities of the internet.”
  • Facebook defrauded families by encouraging game developers to let children spend money without their parents’ permission. Oftentimes, underage users did not realize they were spending money on Facebook. The average age of children playing and spending money on the game Angry Birds on Facebook was 5 years old. Only 50% of Facebook customers received receipts for their transactions. Facebook ignored warnings from its employees that it was defrauding children, passing over a proposal to fix the problem in favor of maximizing revenue, with a Facebook employee writing that ending the “friendly fraud” on children would result in lower revenue.
  • Sexual predators were sharing massive amounts of child pornography and connecting with real kids on Facebook’s platforms. Zuckerberg asserted Facebook was “really focused on safety, especially children’s safety.” Zuckerberg: “we really try to build a safe environment.” An internal Facebook presentation from 2020 titled “Child Safety: State of Play” acknowledged that Instagram employed “minimal safety protections” for children. An internal Meta document noted that its “People You May Know” algorithm was known among employees to connect child users with potential predators.
  • Facebook reported tens of millions of child sexual abuse images on its platforms every year. In 2020, Meta reported 20 million child sexual abuse images between Facebook and Instagram – 35 times more reports than the next-highest reporter, Google. In February 2021, the National Center for Missing and Exploited Children identified over 20.3 million reported incidents of child pornography or trafficking on Facebook, compared to 546,704 incidents on Google. A whistleblower told the SEC that Facebook didn’t know the full scale of the problem of child abuse material because it didn’t track it. At Facebook, senior managers would ask “what’s the return on investment” when it came to exploring the full scale of child abuse material on the platform.
  • Facebook did little to address the issue of child porn on its platforms and rarely took down flagged content, including reporters’ flags. A whistleblower said Meta broke up a team it had set up to develop software for detecting indecent videos of children because the work was seen as “too complex.” A whistleblower said Meta’s efforts to remove child abuse material were “inadequate” and “under-resourced.” Instagram failed to remove accounts that posted pictures of children in swimwear or partial clothing even after the accounts were flagged to Instagram through an in-app reporting tool. An account posting photos of children in sexualized poses was reported using the in-app reporting tool, but Instagram responded the same day, saying “due to high volume” it was unable to view the report. Instagram said its “technology ha[d] found that this account probably doesn’t go against our community guidelines.” The account remained live days later with more than 33,000 followers. In April 2017, The Times UK found that Facebook was publishing child pornography after one of its reporters created a fake profile and was quickly able to find offensive and potentially illegal content. The Times UK reported the content to Facebook, but in most instances was told the imagery and videos did not violate the site’s community standards. When the BBC approached Facebook about sexualized photos of children – like a 10-year-old in a vest accompanied by the words “yum yum” – Facebook said the image did not breach community standards and it stayed up. The BBC reported a whole group, called “We Love Schoolgirlz,” that featured obscene content of children, and it did not get taken down.
  • Facebook recommended child sexualization groups after a reporter began flagging inappropriate profiles. When a WIRED reporter attempted to report child exploitation profiles to Facebook, an automated message came back a few days later saying the reported content had been reviewed and did not violate any “specific community standards.” The reporter was then recommended more child sexualization groups after he reported the profiles. According to the WIRED reporter: “as reply after reply hit my inbox denying grounds for action, new child sexualization groups began getting recommended to me.” In 2016, the BBC reported that pedophiles were using secret groups on Facebook to post and swap obscene images of children. The pedophile groups on Facebook had names that gave a clear indication of their content. The BBC found a number of secret groups, created by and run for pedophiles – including one administered by a convicted pedophile who was still on the sex offenders register. Further, a man arrested for sexual exploitation of children online was able to continue using two Instagram accounts to share images of minors for months after his arrest. The predator continued to have an active account with nearly 90,000 followers, on which he regularly posted images of teenagers and younger children in swimming attire.
  • Facebook’s platforms easily connected children with predators, resulting in unwanted sexual interactions. An internal 2021 Meta presentation estimated that 100,000 minors each day received photos of adult genitalia or other sexually abusive content. 22% of minors who used Instagram reported experiencing a sexually explicit interaction on the platform. In 2020, employees reported that the prevalence of “sex talk” directed at minors was 38 times greater on Instagram than on Facebook Messenger in the U.S. When a WIRED reporter searched the numbers “11, 12, 13” on Facebook, “23 of the first 30 results were groups targeting children of those ages” for sexual interactions or pictures.
  • A bug in Facebook’s Messenger Kids app allowed minors to chat with unapproved adults. A loophole in the app allowed users to invite kids to group chats even if unauthorized users were there too. The Verge wrote that, due to the bug, “thousands of children were left in chats with unauthorized users, a violation of the core promise of Messenger Kids.” A group of 100 experts, advocates and parenting organizations criticized Facebook’s Messenger Kids app, claiming that Facebook was “creating” the need in the market to target younger and younger children. Facebook failed to reach out to child safety advocates before launching the Messenger Kids app. The 2020 Federal Human Trafficking Report found that 65% of child sex trafficking victims recruited on social media were recruited on Facebook, with 14% recruited on Instagram. The same report found Facebook alone was used to facilitate over 366 cases of child exploitation between January 2013 and December 2019.

NOW – TEENAGERS WHO USED FACEBOOK’S PLATFORMS WERE REPORTING MAJOR DECLINES IN THEIR MENTAL HEALTH, SELF-IMAGE AND SELF-CONTROL

  • Teens reported compulsively using Facebook’s numerous platforms every day, and some reported being unable to control their use. 22 million teens logged onto Instagram every day. Roughly half of Facebook users between the ages of 18 and 24 checked Facebook upon waking up. Instagram was seen as an addictive product that could send teens spiraling toward eating disorders, an unhealthy sense of their own bodies, and depression. Accountable Tech found that 74% of teens found themselves “scrolling for too long,” while 50% said they lost sleep because they felt “stuck” on social media.
  • Teenagers were blaming Instagram for increased rates of anxiety, depression, and negative self-image. The Wall Street Journal wrote “the features that were core to Instagram were the most harmful to teens.” An internal Meta research slide said teens were blaming Instagram for “increases in the rate of anxiety and depression.” 13% of British teens and 6% of American teens who reported suicidal thoughts traced the desire to kill themselves to Instagram.
  • Facebook ruined teenage girls’ body image and drove them toward eating disorders. An internal Facebook research deck said Instagram made “body image worse for one in three teen girls.” Meta’s internal research found Instagram risked pushing teens toward eating disorders, depression and an unhealthy sense of their own bodies. Meta researchers concluded that some of the problems Instagram created with teen mental health were specific to Instagram and not found in social media broadly. Facebook found that more than 40% of teen Instagram users reported feeling “unattractive,” saying the feeling began on the app. 32% of teenage girls said that when they felt bad about their bodies, Instagram made them feel worse. 14% of teen boys in the U.S. said Instagram made them feel worse about themselves, with one teenager saying, “every time I feel good about myself, I go over to Instagram, and then it all goes away.” One teenager said looking at her peers was a “kick in the gut.” Frequent use of image-based social media like Instagram was linked to greater self-objectification. The Wall Street Journal remarked that “the tendency to share only the best moment” and “a pressure to look perfect” were at the core of Instagram’s platform and at the core of the mental health issue.
  • Pressure to look perfect caused teenagers to seek out eating disorder content, which Facebook’s platforms promoted. An internal memo revealed that Meta knew Instagram was pushing girls toward dangerous content like posts about eating disorders. In 2022, a report by Fairplay found that Instagram’s algorithm promoted an extensive network of pro-eating-disorder content. The report said there were over 90,000 unique accounts promoting eating-disorder content, which could collectively reach nearly 20 million users around the world. The Tech Transparency Project (TTP) said Instagram made it “exceedingly easy to search for hashtags and terms associated with eating disorders on the platform.” TTP wrote that “Instagram not only fails to enforce its own policies, but it also proactively recommends toxic body image content to its adult and teen users.”
  • Facebook’s platforms facilitated the bullying of teens, with thousands of users reporting being victims of bullying. In a McAfee study of 11,687 parents and children in 10 countries, nearly 80% of respondents reported cyberbullying on Instagram, compared to 50% on TikTok and Snapchat. According to the McAfee study, cyberbullying complaints were highest on Facebook, WhatsApp and Instagram compared to other social media apps. Cyberbullying occurred at twice the rate on Facebook as on Twitter, and at four times the rate on WhatsApp as on Discord. Instagram provided “a uniquely powerful set of tools” for bullying according to The Atlantic, including anonymous profiles, lack of adult oversight, and potential for viral posts. Teenagers described how Instagram users exploited the ease of making anonymous profiles to create “hate pages” for bullying victims.
  • Teens could easily find drugs on Facebook’s platforms. Vice News reported that one in four kids had been advertised drugs on social media. Digital Trends wrote that American Addiction Centers found a “booming business” of codeine, MDMA, weed, painkiller and coke sales on Instagram. When one of TTP’s fake teen accounts started typing the phrase “buyxanax” into Instagram’s search bar, the platform began auto-filling results for buying Xanax before the user had finished typing. TTP wrote that “the entire process took seconds and involved two clicks.” TTP said Instagram’s algorithm had automatic features that “even sped up the process” for its teen accounts to buy drugs. TTP submitted 50 posts to Instagram that appeared to violate the platform’s policies against selling drugs, but after a review, Instagram responded that 72% of the flagged posts did not violate its guidelines – despite the posts offering drugs for sale.

NOW – FACILITATED HUMAN TRAFFICKING AND ALLOWED DRUG CARTELS TO USE THEIR PLATFORM

  • Facebook knew its platforms were being used to facilitate human trafficking but failed to take action. Facebook knew people were using its platform for human trafficking but neglected to take widespread action until Apple threatened to remove its apps from the App Store following reports on the trafficking. The Wall Street Journal reported that a Facebook researcher had asked, “was this issue known to Facebook before BBC inquiry and Apple escalation?” According to the Journal, the response began with “yes.” A Polish trafficking expert wrote that, 18 months after it first identified human trafficking on Facebook, the company had still not implemented systems to find and remove trafficking posts. Facebook began forbidding any content that provided or facilitated human smuggling, or that asked for human smuggling services, only after TTP found a surge in Facebook groups devoted to human smuggling. In 2020, Facebook deactivated a system that detected human trafficking networks on the platform.
  • Facebook continued to allow a drug cartel to use its platform even after security experts alerted it to the cartel’s presence. Facebook chose not to fully remove accounts linked to the Cartel Jalisco Nueva Generacion (CJNG) even after an employee untangled the cartel’s activities throughout the platform. The employee and his team traced CJNG’s online network by examining posts on Facebook and Instagram, as well as private messages on those platforms. Facebook designated the CJNG cartel under its “dangerous individuals and organizations” policy, which should’ve led to its posts being automatically removed – but they weren’t. An investigation team at Facebook asked another team to make sure a ban on the cartel was enforced, but that team didn’t follow up on the job.

NOW – FACEBOOK ALLOWED HATE SPEECH TO FLOURISH ON THEIR PLATFORMS WITHOUT MECHANISMS TO BLOCK OR REDUCE ITS SPREAD

  • Facebook refused to disclose the amount of hate speech it removed from its platforms. The Wall Street Journal reported that Facebook didn’t “publicly report what percentage of hate-speech it remove[d].” A Facebook civil rights audit found that the company put free speech ahead of other values, which undermined its efforts to curb hate speech and voter suppression. The Anti-Defamation League pointed to whistleblower documents showing Facebook failed to take down hate speech even though the posts violated its rules. The New York Times wrote that Facebook had been “roundly criticized over the way its platform ha[d] been used to spread hate speech and false information that prompted violence.”
  • Zuckerberg said being open to all viewpoints was at the “core of everything Facebook is and everything I want it to be.” Zuckerberg understood that Facebook was “more than just a distributor of news,” but also “a new kind of platform for public discourse.” A Facebook spokesperson insisted the company had “built a robust integrity team, strengthened our policies and practices to limit harmful content, and used research to understand our platform’s impact on society so we continue to improve.” Zuckerberg promised to stand up “against those who [said] the new types of communities forming on social media [were] dividing us.”
  • Facebook cut the amount of time human reviewers spent on hate speech. Facebook pledged to add 3,000 more content reviewers and invest in tools to help remove objectionable content after a string of shootings, murders, rapes and assaults had been streamed on Facebook; the live broadcasts were viewable as recorded videos, often for days before being taken down. Facebook cut the time human reviewers focused on hate-speech complaints from users, making the company more dependent on AI. NPR wrote that subcontractors who reviewed flagged posts on Facebook were “told to go fast – very fast,” and were evaluated on speed, meaning workers made a decision about flagged content once every 10 seconds. When NPR tested Facebook’s flagging system in 2016, it found that Facebook reviewers “were not consistent and made numerous mistakes, including in instances where a user called for violence.” In 2016, Facebook received more than one million reports of violations from users every day, according to Facebook’s head of policy management, Monika Bickert.
  • Facebook relied on a faulty AI system to detect hate speech, but it was nowhere close to being effective. Zuckerberg said he expected Facebook’s automated systems would remove “the vast majority of problematic content” by the end of 2019. Facebook was reliant on AI enforcement for content moderation, but its AI was unable to distinguish between cockfighting and car crashes. Facebook’s AI often fell short in flagging sensitive or controversial material, and the company was criticized for its slowness in removing objectionable content. Internal Facebook documents showed that employees estimated Facebook’s AI removed only a sliver of the posts that violated the platform’s rules. Employees responsible for keeping Meta’s platforms free of offensive and dangerous content acknowledged that the company was nowhere close to being able to reliably screen it. A Facebook engineer estimated that Facebook’s automated systems removed just two percent of the views of hate speech on the platform. Facebook engineer: “We do not and possibly never will have a model that captures even a majority of integrity harms.”

NOW – NEGATIVELY IMPACTED USERS’ WELL-BEING ON A FREQUENT AND SEVERE BASIS

  • Facebook researchers found that 1 in 8 of its users reported engaging in compulsive use of social media that impacted their sleep, work, parenting and relationships. Internal researchers reported that users lacked control over the time they spent on Facebook and had problems in their lives as a result. Facebook’s researchers estimated compulsive use of its platforms affected about 12.6% of Facebook users – more than 360 million people. According to the American Psychological Association, Facebook and Instagram were built to capitalize on users’ biological drive for social belonging and nudged them to keep scrolling. The APA said Instagram was problematic because of its “addictive nature” and lack of “stopping cues.”
  • A large body of literature linked Facebook use with detrimental outcomes such as decreased mental well-being. A meta-analysis of scientific papers on social media’s influence on mental health found social media use was linked to increased levels of psychological distress, thoughts of self-harm and suicide, and poor sleep. One in eight Facebook users reported that their use of the platform harmed their sleep, work, relationships and parenting.
  • Passive use of Facebook – browsing but not engaging on the platform – led to worse well-being outcomes. People who spent a lot of time passively using Facebook reported feeling worse afterwards. Selective exposure to others’ successes on Facebook could trigger repetitive negative thinking about one’s own imperfections.
  • The amount someone used Facebook was the no. 1 variable predicting depression among one study’s participants, and those with lower well-being used Facebook more. Problematic use of Facebook was associated with lower well-being. Making matters worse, those with low subjective happiness were more susceptible to overusing Facebook. Facebook users with some level of mental vulnerability were more at risk of problematic outcomes from their use of the platform.
  • Using Facebook for reasons other than social engagement decreased well-being. People who browsed Facebook for 10 minutes a day were in a worse mood than those who just posted or talked to friends. People who reported higher levels of Facebook use also reported stronger emotional needs to stay connected.
  • Overuse of Facebook skewed users’ perceptions of themselves, the world around them and their social bonds. Those who overused Facebook felt that other people were happier than they were, experienced high levels of loneliness and withdrew socially. Facebook addiction was found to negatively affect life satisfaction. Students using Facebook for long durations reported heightened loneliness, and also reported agreeing less with the idea that life was fair. The problematic use of Facebook led to an avoidance of real social relations.
  • Users who deactivated their Facebook and social media accounts felt greater life satisfaction and more positive emotions than continued users. People’s life satisfaction increased significantly when they quit Facebook, and quitters reported more positive emotions than continuing users. The increase in subjective well-being from social media deactivation was approximately 24-50% as large as that produced by standard psychological interventions. Deactivation of social media also led to a statistically significant decrease in depression and loneliness. A study of inpatients at a mental health center found that patients using Facebook during their treatment reported worse mental health and recovered more slowly than non-users.

NOW – HAD AN AD SYSTEM THAT DISCRIMINATED AGAINST USERS

  • Facebook’s ad targeting system was found to let advertisers exclude gender and racial groups from seeing their ads. Facebook allowed advertisers to exclude certain groups on the basis of race, gender and other sensitive factors in housing and employment ads – exclusions prohibited by federal law. The Department of Housing and Urban Development (HUD) sued Facebook for violating the Fair Housing Act by allowing advertisers to limit housing ads based on race, gender and other characteristics. HUD said Facebook’s ad system discriminated against users even when advertisers did not choose to do so. In March 2018, the National Fair Housing Alliance sued Facebook, alleging it allowed advertisers to discriminate against legally protected groups. In October 2019, Facebook was sued in a class action that accused the platform of discriminating against older and female users by withholding advertising for financial services like bank accounts, insurance, investments and loans. The complaint was filed seven months after Facebook agreed to overhaul its targeted ad systems to settle lawsuits alleging it let advertisers discriminate by age, gender and zip code for housing and credit ads.
  • Facebook’s handpicked auditors faulted the platform for infringing on users’ civil rights – even after it had promised to stop. In November 2021, Meta said it would look into whether its platforms treated users differently based on race after years of criticism from black users about racial bias. In 2017, ProPublica reported that Facebook enabled advertisers to direct ads to the news feeds of people who had expressed interest in the topics of “jew hater,” “how to burn jews,” or “history of ‘why jews ruin the world.’” In 2020, auditors handpicked by Facebook to examine its policies said the company had not done enough to protect people on the platform from discriminatory posts and ads. The audit repeatedly faulted Facebook for prioritizing free expression over discrimination concerns and for lacking a robust civil rights infrastructure. According to a ProPublica headline, “Facebook’s secret censorship rules protect white men from hate speech but not black children.” Facebook’s content rules only protected broad groups of people, like “white men,” and would not flag hate speech aimed at a protected group narrowed by an unprotected characteristic, like “female drivers” or “black children.”

NOW – DOMINATED THE ONLINE ADVERTISING BUSINESS AND LIED TO ADVERTISERS

  • Facebook held half of the total digital display ad supply and captured a significant portion of the market’s growth. The U.S. House Antitrust Subcommittee wrote that Google and Facebook captured “nearly all of [digital ad] growth in recent years.” Facebook derived nearly all of its revenue from personalized advertisements shown on the site. In 2020, Facebook made $86 billion in revenue, nearly all of which came from selling ads placed in users’ news feeds. In 2020, Facebook said it had 8 million advertisers. The highest-spending brands accounted for $4.2 billion in Facebook advertising in 2020, only 6% of the platform’s ad revenue.
  • Facebook knowingly and repeatedly inflated its ad metrics across multiple types of advertising. In 2021, court documents revealed that Facebook had inflated estimates of the total time spent watching a video and the total number of viewers by 150% – 900%. Due to the miscalculated data, marketers may have misjudged the performance of video advertising purchased from Facebook, affecting how much they spent on Facebook video versus other sellers. Facebook knew of problems in how it measured viewership of video ads on its platform for more than a year before disclosing them in 2016. Facebook admitted that its metric for the average time users spent watching videos was artificially inflated because it only factored in video views of more than three seconds (a worked example of how that skews the average follows this list). Facebook told the ad-buying agency Publicis that the earlier counting method likely overestimated average time spent watching videos by 60% – 80%. The Wall Street Journal said the news was “an embarrassment for Facebook,” which had been “touting the rapid growth of video consumption across its platform.” Facebook admitted that it had miscalculated the total organic reach for business pages and the amount of time spent with Instant Articles. CNN: “In some cases, the metrics were significantly overstated.” Facebook acknowledged the average time spent on Instant Articles was “over-reported” by 7% – 8%. Facebook admitted it had double-counted the number of people businesses reached with unpaid posts on their Facebook pages.
  • Facebook employees expressed concerns that they were promoting “deeply wrong” data to advertisers. Some at Facebook believed they were promoting “deeply wrong” figures for how many users advertisers could reach. The Verge reported that when a product manager at Facebook proposed a fix for its ad-reach metric, the company allegedly refused to make the change, arguing it would have a “significant” impact on revenue. In a leaked email, a Facebook employee wrote “the status quo in ad reach estimation and reporting is deeply wrong.”
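To make the three-second threshold concrete, here is a minimal Python sketch of the averaging arithmetic described above. The view durations are hypothetical numbers chosen purely for illustration, not Facebook data; the point is only that dropping short views from the denominator pushes the reported average up.

    # Illustrative sketch: how excluding short views inflates "average watch time".
    # The durations below are hypothetical examples, not Facebook data.
    view_durations = [1, 2, 2, 40, 60, 90]  # seconds per view

    # Average over every view of the video.
    true_average = sum(view_durations) / len(view_durations)

    # The old counting method described above: only views longer than three
    # seconds enter the calculation, shrinking the denominator.
    long_views = [v for v in view_durations if v > 3]
    reported_average = sum(long_views) / len(long_views)

    print(f"average over all views:  {true_average:.1f}s")      # 32.5s
    print(f"average over views > 3s: {reported_average:.1f}s")  # 63.3s
    print(f"overstatement: {reported_average / true_average - 1:.0%}")  # 95%

The size of the overstatement depends entirely on how many sub-three-second views a video attracts, which may help explain why Facebook’s own estimate of the error was a range (60% – 80%) rather than a single figure.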

NOW – KILLING THE NEWS INDUSTRY BY STEALING ITS PROFITS AND READERS WITHOUT COMPENSATION

  • Online market power like Facebook’s had a significant impact on the monetization of news and led to numerous newsroom closures. A U.S. House Antitrust Subcommittee wrote “the rise of market power online has severely affected the monetization of news, diminishing the ability of publishers to deliver valuable reporting.” Columbia Journalism Review wrote that “many rightly [saw] the rise of Big Tech […] as the root of journalism’s problems.” The Open Markets Institute claimed “the largest single reason” for the decline in local news was “the loss of advertising revenue to the online advertising duopoly of Google and Facebook.” Columbia Journalism Review noted that media companies were “addicted to Facebook’s algorithm-directed traffic.”
  • Facebook had immense power in shaping how news was distributed and consumed. The Australian Competition & Consumer Commission (ACCC) said Facebook was a “vital distribution channel for a number of media businesses.” The University of Chicago’s Stigler Center said Facebook and Google had “unprecedented influence on news production, distribution and consumption.” The ACCC said Big Tech “increasingly perform[ed] similar functions to media businesses, such as selecting and curating content, evaluating content, and ranking and arranging content online.” The ACCC found that Facebook and Google had “significant and durable market power over the distribution of news online,” noting that news publishers were reliant on Google and Facebook for reaching people online. WIRED’s editorial staff explained that “if Facebook wanted to, it could quietly turn any number of dials that would harm publishers – by manipulating its traffic, ad network and readers.”
  • Many users treated Facebook as a news source, and because that was where they got and read their news, it effectively was one. In 2015, 63% of Facebook users considered the service a news source. The New York Times wrote that Facebook was “the world’s most influential source of news.” After Facebook changed its algorithm in 2018 to show users more items shared by friends and family and fewer from professional publishers, publishers saw Facebook referrals drop dramatically. The ACCC claimed Facebook benefited from news and news extracts appearing in users’ feeds because they allowed the platform to “retain the user’s attention, enabling more advertisements to be displayed.” TechCrunch reported that “again and again, Facebook ha[d] centralized attention typically spread across the web.” The News Media Alliance wrote that, with the vast majority of Americans consuming their news online, readers often skimmed through headlines and read only the snippets found on search engines or social media sites. Many Facebook users who viewed news on the platform didn’t click through to the original article, instead getting the gist from just the headline and preview blurb. Most local newspapers relied on digital display advertising for online ad revenue.
  • Facebook and Zuckerberg refused to compensate news outlets for their content, even as the platform sapped a majority of outlets’ revenue. Google and Facebook were able to carry content created by news organizations without directly paying the organizations for creating it. The News Media Alliance wrote that Google and Facebook had “leveraged their market dominance to force local news to accept little to no compensation for their intellectual property.” TechCrunch further reported that publishers had “few major sources of traffic outside of Facebook and Google search.” A Star Tribune editorial remarked that Big Tech had “taken the same content generated by newspapers, TV, radio and others and used it to reap massive profits while refusing to provide any compensation.” Google and Facebook did not offer competitive terms to publishers, refusing to pay for content, traffic or data.
  • Zuckerberg said he had no intention of paying for news and held hostage those who tried to force compensation out of him. In 2018, Zuckerberg said he had no interest in paying publishers for the right to show their stories. The Wall Street Journal wrote that Zuckerberg was “disappointed by regulatory efforts around the world looking to force platforms like Facebook […] to pay publishers for any news content available on their platforms.” The Journal wrote that regulatory efforts had “dampened Mr. Zuckerberg’s enthusiasm for making news a bigger part of Facebook’s offerings.” News Media Canada wrote that Google and Facebook exercised “monopoly power” which created “a market where news publishers [were] coerced to accept anticompetitive and unfair terms” on usage of their content. If local papers refused to provide content rights to Google and Facebook, they lost the opportunity to be featured by those platforms and seen by their users.
  • Zuckerberg and Facebook’s refusal to pay for news led to the closure of 1 in 4 local papers between 2004 – 2019, accelerating political polarization. Between 2004 – 2019, one in every four U.S. newspapers shut down, which contributed to widening political polarization, according to Harvard. Brookings reported that voters in communities that had experienced a newspaper closure were less likely to split their vote. Yale researchers wrote that as local news declined, “local politics [became] increasingly nationalized,” which contributed to polarization.
  • When Australia proposed a law that would require Facebook to pay publishers, Zuckerberg blocked the country’s emergency services from the platform. After Australia released the final bill requiring Facebook and Google to pay publishers for news content, Zuckerberg pushed to tweak Facebook’s algorithm to restrict news content for Australians. Documents showed that Facebook had deliberately created an overly broad and sloppy process to take down pages, resulting in swaths of the Australian government and health services being caught in its web just as the country was launching COVID vaccinations. After being alerted to the fact that it had blocked pages for medical, health and emergency services in Australia, Facebook expanded the use of the algorithm from 50% to 100%. Facebook also blocked pages for Australian health services such as the Children’s Cancer Institute and Doctors Without Borders, as well as medical and domestic violence services and women’s shelters. Facebook executives knew its process for classifying news for the removal of pages was so broad that it would likely hit government pages and other social services. The Wall Street Journal reported that Facebook’s goal in taking down the Australian government, health services and charity pages was to “exert maximum negotiating leverage over the Australian parliament.” Following the page shutdowns, Australia’s parliament amended the proposed journalism law to the degree that, a year after its passage, its most onerous provisions hadn’t been applied to Facebook or Meta. Facebook’s head of partnerships, Campbell Brown, wrote “we landed exactly where we wanted to” in a congratulatory email sent minutes after the Australian senate voted to approve the watered-down bill. WSJ: “Facebook Chief Executive Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg chimed in with congratulations as well, with Ms. Sandberg praising the ‘thoughtfulness of the strategy’ and ‘precision of execution.’”