
State of Mo. v. Biden

United States District Court, Western District of Louisiana
Jul 4, 2023
3:22-CV-01213 (W.D. La. Jul. 4, 2023)

Opinion


STATE OF MISSOURI, ET AL. v. JOSEPH R. BIDEN JR., ET AL.


KAYLA D. MCCLUSKY MAG. JUDGE

MEMORANDUM RULING ON REQUEST FOR PRELIMINARY INJUNCTION

TERRY A. DOUGHTY UNITED STATES DISTRICT JUDGE

At issue before the Court is a Motion for Preliminary Injunction [Doc. No. 10] filed by Plaintiffs. The Defendants oppose the Motion [Doc. No. 266]. Plaintiffs have filed a reply to the opposition [Doc. No. 276]. The Court heard oral arguments on this Motion on May 26, 2023 [Doc. No. 288]. Amicus Curiae briefs have been filed in this proceeding on behalf of Alliance Defending Freedom, the Buckeye Institute, and Children's Health Defense.

Plaintiffs consist of the State of Missouri, the State of Louisiana, Dr. Aaron Kheriaty (“Kheriaty”), Dr. Martin Kulldorff (“Kulldorff”), Jim Hoft (“Hoft”), Dr. Jayanta Bhattacharya (“Bhattacharya”), and Jill Hines (“Hines”).

Defendants consist of President Joseph R. Biden, Jr. (“President Biden”), Karine Jean-Pierre (“Jean-Pierre”), Vivek H. Murthy (“Murthy”), Xavier Becerra (“Becerra”), Dept of Health & Human Services (“HHS”), Dr. Hugh Auchincloss (“Auchincloss”), National Institute of Allergy & Infectious Diseases (“NIAID”), Centers for Disease Control & Prevention (“CDC”), Alejandro Mayorkas (“Mayorkas”), Dept of Homeland Security (“DHS”), Jen Easterly (“Easterly”), Cybersecurity & Infrastructure Security Agency (“CISA”), Carol Crawford (“Crawford”), United States Census Bureau (“Census Bureau”), U.S. Dept of Commerce (“Commerce”), Robert Silvers (“Silvers”), Samantha Vinograd (“Vinograd”), Ali Zaidi (“Zaidi”), Rob Flaherty (“Flaherty”), Dori Salcido (“Salcido”), Stuart F. Delery (“Delery”), Aisha Shah (“Shah”), Sarah Beran (“Beran”), Mina Hsiang (“Hsiang”), U.S. Dept of Justice (“DOJ”), Federal Bureau of Investigation (“FBI”), Laura Dehmlow (“Dehmlow”), Elvis M. Chan (“Chan”), Jay Dempsey (“Dempsey”), Kate Galatas (“Galatas”), Katharine Dealy (“Dealy”), Yolanda Byrd (“Byrd”), Christy Choi (“Choi”), Ashley Morse (“Morse”), Joshua Peck (“Peck”), Kym Wyman (“Wyman”), Lauren Protentis (“Protentis”), Geoffrey Hale (“Hale”), Allison Snell (“Snell”), Brian Scully (“Scully”), Jennifer Shopkorn (“Shopkorn”), U.S. Food & Drug Administration (“FDA”), Erica Jefferson (“Jefferson”), Michael Murray (“Murray”), Brad Kimberly (“Kimberly”), U.S. Dept of State (“State”), Leah Bray (“Bray”), Alexis Frisbie (“Frisbie”), Daniel Kimmage (“Kimmage”), U.S. Dept of Treasury (“Treasury”), Wally Adeyemo (“Adeyemo”), U.S. Election Assistance Commission (“EAC”), Steven Frid (“Frid”), and Kristen Muthig (“Muthig”).

[Doc. No. 252]

[Doc. No. 256]

[Doc. No. 262]

I. INTRODUCTION

I may disapprove of what you say, but I would defend to the death your right to say it.

Evelyn Beatrice Hall, The Friends of Voltaire (1906)

This case is about the Free Speech Clause in the First Amendment to the United States Constitution. The explosion of social-media platforms has resulted in unique free speech issues; this is especially true in light of the COVID-19 pandemic. If the allegations made by Plaintiffs are true, the present case arguably involves the most massive attack against free speech in the history of the United States. In their attempts to suppress alleged disinformation, the Federal Government, and particularly the Defendants named here, are alleged to have blatantly ignored the First Amendment's right to free speech.

Although the censorship alleged in this case almost exclusively targeted conservative speech, the issues raised herein go beyond party lines. The right to free speech is not a member of any political party and does not hold any political ideology. It is the purpose of the Free Speech Clause of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail, rather than to countenance monopolization of the market, whether it be by the government itself or a private licensee. Red Lion Broadcasting Co. v. F.C.C., 89 S.Ct. 1794, 1806 (1969).

Plaintiffs allege that Defendants, through public pressure campaigns, private meetings, and other forms of direct communication, regarding what Defendants described as “disinformation,” “misinformation,” and “malinformation,” have colluded with and/or coerced social-media platforms to suppress disfavored speakers, viewpoints, and content on social-media platforms. Plaintiffs also allege that the suppression constitutes government action, and that it is a violation of Plaintiffs' freedom of speech under the First Amendment to the United States Constitution. The First Amendment states:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances. (emphasis added).
U.S. Const. amend. I.

The principal function of free speech under the United States' system of government is to invite dispute; it may indeed best serve its high purpose when it induces a condition of unrest, creates dissatisfaction with conditions as they are, or even stirs people to anger. Texas v. Johnson, 109 S.Ct. 2533, 2542-43 (1989). Freedom of speech and press is the indispensable condition of nearly every other form of freedom. Curtis Pub. Co. v. Butts, 87 S.Ct. 1975, 1986 (1967).

The following quotes reveal the Founding Fathers' thoughts on freedom of speech:

For if men are to be precluded from offering their sentiments on a matter, which may involve the most serious and alarming consequences, that can invite the consideration of mankind, reason is of no use to us; the freedom of speech may be taken away, and dumb and silent we may be led, like sheep, to the slaughter.
George Washington, March 15, 1783.
Whoever would overthrow the liberty of a nation must begin by subduing the freeness of speech.
Benjamin Franklin, Letters of Silence Dogood.
Reason and free inquiry are the only effectual agents against error.
Thomas Jefferson.

The question does not concern whether speech is conservative, moderate, liberal, progressive, or somewhere in between. What matters is that Americans, regardless of their views, will not be censored or suppressed by the Government. Other than well-known exceptions to the Free Speech Clause, all political views and content are protected free speech.

The issues presented to this Court are important and deeply intertwined in the daily lives of the citizens of this country.

II. FACTUAL BACKGROUND

In this case, Plaintiffs allege that Defendants suppressed conservative-leaning free speech, such as: (1) suppressing the Hunter Biden laptop story prior to the 2020 Presidential election; (2) suppressing speech about the lab-leak theory of COVID-19's origin; (3) suppressing speech about the efficacy of masks and COVID-19 lockdowns; (4) suppressing speech about the efficacy of COVID-19 vaccines; (5) suppressing speech about election integrity in the 2020 presidential election; (6) suppressing speech about the security of voting by mail; (7) suppressing parody content about Defendants; (8) suppressing negative posts about the economy; and (9) suppressing negative posts about President Biden.

Plaintiffs Bhattacharya and Kulldorff are infectious disease epidemiologists and co-authors of The Great Barrington Declaration (“GBD”). The GBD was published on October 4, 2020. The GBD criticized lockdown policies and expressed concern about the damaging physical and mental health impacts of lockdowns. They allege that, shortly after its publication, the GBD was censored on social media by Google, Facebook, Twitter, and others. Bhattacharya and Kulldorff further allege that on October 8, 2020 (four days after publishing the GBD), Dr. Francis Collins, Dr. Fauci, and Cliff Lane together proposed a “take down” of the GBD and followed up with an organized campaign to discredit it.

[Doc. No. 10-3 and 10-4]

Dr. Kulldorff additionally alleges he was censored by Twitter on several occasions because of his tweets with content such as “thinking everyone must be vaccinated is scientifically flawed,” that masks would not protect people from COVID-19, and other “anti-mask” tweets. Dr. Kulldorff (and Dr. Bhattacharya) further alleges that YouTube removed a March 18, 2021 roundtable discussion in Florida where he and others questioned the appropriateness of requiring young children to wear facemasks. Dr. Kulldorff also alleges that LinkedIn censored him when he reposted a post of a colleague from Iceland on vaccines, for stating that vaccine mandates were dangerous, for posting that natural immunity is stronger than vaccine immunity, and for posting that health care facilities should hire, not fire, nurses.

[Doc. No. 10-4]

[Doc. No. 10-3]

[Id.]

[Id.]

Plaintiff Jill Hines is Co-Director of Health Freedom Louisiana, a consumer and human rights advocacy organization. Hines alleges she was censored by Defendants because she advocated against the use of mask mandates on young children. She launched an effort called “Reopen Louisiana” on April 16, 2020, to expand Health Freedom Louisiana's reach on social media. Hines alleges Health Freedom Louisiana's social-media page began receiving warnings from Facebook. Hines was suspended on Facebook in January 2022 for sharing a display board that contained Pfizer's preclinical trial data. Additionally, posts about the safety of masking and adverse events from vaccinations, including VAERS data and posts encouraging people to contact their legislature to end the Government's mask mandate, were censored on Facebook and other social-media platforms. Hines alleges that because of the censorship, the reach of Health Freedom Louisiana was reduced from 1.4 million engagements per month to approximately 98,000. Hines also alleges that her personal Facebook page has been censored and restricted for posting content that is protected free speech. Additionally, Hines alleges that two of Health Freedom Louisiana's Facebook groups, HFL Group and North Shore HFL, were de-platformed for posting content protected as free speech.

[Doc. No. 10-12]

[Id.]

Plaintiff Dr. Kheriaty is a psychiatrist who has taught at several universities and written numerous articles. He had approximately 158,000 Twitter followers in December 2021 and approximately 1,333 LinkedIn connections. Dr. Kheriaty alleges he began experiencing censorship on Twitter and LinkedIn after posting content opposing COVID-19 lockdowns and vaccine mandates. Dr. Kheriaty also alleges that his posts were “shadow banned,” meaning that his tweets did not appear in his followers' Twitter feeds. Additionally, a video of an interview of Dr. Kheriaty on the ethics of vaccine mandates was removed from YouTube.

[Doc. No. 10-7]

Plaintiff Jim Hoft is the owner and operator of The Gateway Pundit (“GP”), a news website located in St. Louis, Missouri. In connection with the GP, Hoft operates the GP's social-media accounts with Twitter, Facebook, YouTube, and Instagram. The GP's Twitter account previously had over 400,000 followers, the Facebook account had over 650,000 followers, the Instagram account had over 200,000 followers, and the YouTube account had over 98,000 followers.

The GP's Twitter account was suspended on January 2, 2021, again on January 29, 2021, and permanently suspended from Twitter on February 6, 2021. The first suspension was in response to a negative post Hoft made about Dr. Fauci's statement that the COVID-19 vaccine will only block symptoms and not block the infection. The second suspension was because of a post Hoft made about changes to election law in Virginia that allowed late mail-in ballots without postmarks to be counted. Finally, Twitter issued the permanent ban after the GP Twitter account posted video footage from security cameras in Detroit, Michigan from election night 2020, showing two delivery vans arriving at a building at 3:30 a.m. with boxes alleged to contain election ballots. Hoft also alleges repeated instances of censorship by Facebook, including warning labels and other restrictions for posts involving COVID-19 and/or election integrity issues during 2020 and 2021.

Hoft further alleges that YouTube censored the GP's videos. YouTube removed a May 14, 2022 video that discussed voter integrity issues in the 2020 election. Hoft has attached as exhibits copies of numerous GP posts censored and/or fact checked. All of the attached examples involve posts relating to COVID-19 or the 2020 election.

In addition to the allegations of the Individual Plaintiffs, the States of Missouri and Louisiana allege extensive censorship by Defendants. The States allege that they have a sovereign and proprietary interest in receiving the free flow of information in public discourse on social-media platforms and in using social media to inform their citizens of public policy decisions. The States also claim that they have a sovereign interest in protecting their own constitutions, ensuring their citizens' fundamental rights are not subverted by the federal government, and that they have a quasi-sovereign interest in protecting the free-speech rights of their citizens. The States allege that the Defendants have caused harm to the states of Missouri and Louisiana by suppressing and/or censoring the free speech of Missouri, Louisiana, and their citizens.

The Complaint, Amended Complaint, Second Amended Complaint, and Third Amended Complaint allege a total of five counts. They are:

[Doc. No. 1]

[Doc. No. 45]

[Doc. No. 84]

[Doc. No. 268]

Count One - Violation of the First Amendment against all Defendants.
Count Two - Action in Excess of Statutory Authority against all Defendants.
Count Three - Violation of the Administrative Procedure Act against HHS, NIAID, CDC, FDA, Peck, Becerra, Murthy, Crawford, Fauci, Galatas, Waldo, Byrd, Choi, Lambert, Dempsey, Muhammed, Jefferson, Murray, and Kimberly.
Count Four - Violation of the Administrative Procedure Act against DHS, CISA, Mayorkas, Easterly, Silvers, Vinograd, Jankowicz, Masterson, Protentis, Hale, Snell, Wyman, and Scully.
Count Five - Violation of the Administrative Procedure Act against the Department of Commerce, Census Bureau, Shopkorn, Schwartz, Molina-Irizarry, and Galemore.
Plaintiffs also ask for this case to be certified as a class action pursuant to Federal Rules of Civil Procedure 23(a) and 23(b)(2). For the reasons discussed herein, it is only necessary to address Count One and the Plaintiffs' request for class action certification in this ruling.

The following facts are pertinent to the analysis of whether Plaintiffs are entitled to a preliminary injunction.

The Factual Background is this Court's interpretation of the evidence. The Defendants filed a 723-page Response to Findings of Fact [Doc. No. 266-8] which contested the Plaintiffs' interpretation or characterizations of the evidence. At oral argument, the Defendants conceded that they did not dispute the validity or authenticity of the evidence presented.

Plaintiffs assert that since 2018, federal officials, including Defendants, have made public statements and demands to social-media platforms in an effort to induce them to censor disfavored speech and speakers. Beyond that, Plaintiffs argue that Defendants have threatened adverse consequences to social-media companies, such as reform of Section 230 immunity under the Communications Decency Act, antitrust scrutiny/enforcement, increased regulations, and other measures, if those companies refuse to increase censorship. Section 230 of the Communications Decency Act shields social-media companies from liability for actions taken on their websites, and Plaintiffs argue that the threat of repealing Section 230 motivates the social-media companies to comply with Defendants' censorship requests. Plaintiffs also note that Mark Zuckerberg (“Zuckerberg”), the owner of Facebook, has publicly stated that the threat of antitrust enforcement is “an existential threat” to his platform.

[Doc. No. 212-3, citing Doc. No. 10-1, at 202]

A. White House Defendants

White House Defendants consist of President Joseph R. Biden (“President Biden”), White House Press Secretary Karine Jean-Pierre (“Jean-Pierre”), Ashley Morse (“Morse”), Deputy Assistant to the President and Director of Digital Strategy Rob Flaherty (“Flaherty”), Dori Salcido (“Salcido”), Aisha Shah (“Shah”), Sarah Beran (“Beran”), Stuart F. Delery (“Delery”), Mina Hsiang (“Hsiang”), and Dr. Hugh Auchincloss (“Dr. Auchincloss”).

Plaintiffs assert that by using emails, public and private messages, public and private meetings, and other means, the White House Defendants have “significantly encouraged” and “coerced” social-media platforms to suppress protected free speech posted on social-media platforms.

(1) On January 23, 2021, three days after President Biden took office, Clarke Humphrey (“Humphrey”), who at the time was the Digital Director for the COVID-19 Response Team, emailed Twitter and requested the removal of an anti-COVID-19 vaccine tweet by Robert F. Kennedy, Jr. Humphrey copied Rob Flaherty (“Flaherty”), former Deputy Assistant to the President and Director of Digital Strategy, on the email and asked if “we can keep an eye out for tweets that fall in this same genre.” The email read, “Hey folks-Wanted to flag the below tweet and am wondering if we can get moving on the process of having it removed ASAP.”

[Doc. No. 174-1, Exh. A. at 1]

[Id. at 2]

(2) On February 6, 2021, Flaherty requested Twitter to remove a parody account linked to Finnegan Biden, Hunter Biden's daughter and President Biden's granddaughter. The request stated, “Cannot stress the degree to which this needs to be resolved immediately,” and “Please remove this account immediately.” Twitter suspended the parody account within forty-five minutes of Flaherty's request.

[Doc. No. 174-1, Exh. A. at 4]

(3) On February 7, 2021, Twitter offered Flaherty access to “Twitter's Partner Support Portal” for expedited review of content flagged for censorship. Twitter recommended that Flaherty designate a list of authorized White House staff to enroll in Twitter's Partner Support Portal and explained that when authorized reporters submit a “ticket” using the portal, the requests are “prioritized” automatically. Twitter also stated that it had been “recently bombarded” with censorship requests from the White House and would prefer to have a streamlined process. Twitter noted that “[i]n a given day last week for example, we had more than four different people within the White House reaching out for issues.”

[Doc. No. 174-1 at 3]

(4) On February 8, 2021, Facebook emailed Flaherty and Humphrey to explain how it had recently expanded its COVID-19 censorship policy to promote authoritative COVID-19 vaccine information and expanded its efforts to remove false claims on Facebook and Instagram about COVID-19, COVID-19 vaccines, and vaccines in general. Flaherty responded within nineteen minutes questioning how many times someone can share false COVID-19 claims before being removed, how many accounts are being flagged versus removed, and how Facebook handles “dubious,” but not “provably false,” claims. Flaherty demanded more information from Facebook on the new policy that allows Facebook to remove posts that repeatedly share these debunked claims.

[Id. at 5-8]

(5) On February 9, 2021, Flaherty followed up with Facebook in regard to its COVID-19 policy, accusing Facebook of causing “political violence” spurred by Facebook groups by failing to censor false COVID-19 claims, and suggested a meeting to discuss their policies. Facebook responded the same day and stated that “vaccine-skeptical” content does not violate Facebook's policies. However, Facebook stated that it will have the content's “distribution reduced” and strong warning labels added, “so fewer people will see the post.” In other words, even though “vaccine-skeptical” content did not violate Facebook's policy, the content's distribution was still being reduced by Facebook.

[Id. at 6-8]

[Id.]

[Id.]

Facebook also informed Flaherty that it was working to censor content that does not violate Facebook's policy in other ways by “preventing posts discouraging vaccines from going viral on our platform” and by using information labels and preventing recommendations for Groups, Pages, and Instagram accounts pushing content discouraging vaccines. Facebook also informed Flaherty that it was relying on the advice of “public health authorities” to determine its COVID-19 censorship policies. Claims that have been “debunked” by public health authorities would be removed from Facebook. Facebook further promised Flaherty it would aggressively enforce the new censorship policies and requested a meeting with Flaherty to speak to Facebook's misinformation team representatives about the latest censorship policies. Facebook also referenced “previous meetings” between the White House and Facebook representatives during the “transition period” (likely referencing the Biden Administration transition).

[Id.]

[Id. at 6]

[Id. at 5]

(6) On February 24, 2021, Facebook emailed Flaherty about “Misinfo Themes” to follow up on his request for COVID and vaccine misinformation themes on Facebook. Some of the misinformation themes Facebook reported seeing were claims of vaccine toxicity, claims about the side effects of vaccines, claims comparing the COVID vaccine to the flu vaccine, and claims downplaying the severity of COVID-19. Flaherty responded by asking for details about Facebook's actual enforcement practices and for a report on misinformation that was not censored. Specifically, his email read, “Can you give us a sense of volume on these, and some metrics around the scale of removal for each? Can you also give us a sense of misinformation that might be falling outside your removal policies?” Facebook responded that at their upcoming meeting, they “can definitely go into detail on content that doesn't violate like below, but could ‘contribute to vaccine hesitancy.'”

[Doc. No. 214-9 at 2-3]

[Id.]

(7) On March 1, 2021, Flaherty and Humphrey (along with Joshua Peck (“Peck”), the Health and Human Services' (“HHS”) Deputy Assistant Secretary) participated in a meeting with Twitter about misinformation. After the meeting, Twitter emailed those officials to assure the White House that Twitter would increase censorship of “misleading information” on Twitter, stating “[t]hanks again for meeting with us today. As we discussed, we are building on ‘our' continued efforts to remove the most harmful COVID-19 ‘misleading information' from the service.”

[Doc. No. 214-10 at 2, Jones Declaration, #10, Exh. H] SEALED DOCUMENT

(8) From May 28, 2021, to July 10, 2021, a senior Meta executive reportedly copied Andrew Slavitt (“Slavitt”), former White House Senior COVID-19 Advisor, on his emails to Surgeon General Murthy (“Murthy”), alerting them that Meta was engaging in censorship of COVID-19 misinformation according to the White House's “requests” and indicating “expanded penalties” for individual Facebook accounts that share misinformation. Meta also stated, “We think there is considerably more we can do in ‘partnership' with you and your team to drive behavior.”

[Doc. No. 71-4 at 6-11]

[Id. at 10] (emphasis added)

(9) On March 12, 2021, Facebook emailed Flaherty stating, “Hopefully, this format works for the various teams and audiences within the White House/HHS that may find this data valuable.” This email also provided a detailed report and summary regarding survey data on vaccine uptake from January 10 to February 27, 2021.

[Doc. No. 174-1 at 9]

[Id.]

(10) On March 15, 2021, Flaherty acknowledged receiving Facebook's detailed report and demanded a report from Facebook on a recent Washington Post article that accused Facebook of allowing the spread of information leading to vaccine hesitancy. Flaherty emailed the Washington Post article to Facebook the day before, with the subject line: “You are hiding the ball,” and stated “I've been asking you guys pretty directly, over a series of conversations, for a clear accounting of the biggest issues you are seeing on your platform when it comes to vaccine hesitancy and the degree to which borderline content as you define it - is playing a role.”

[Id. at 11]

After Facebook denied “hiding the ball,” Flaherty followed up by making clear that the White House was seeking more aggressive action on “borderline content.” Flaherty referred to a series of meetings with Facebook that were held in response to concerns over “borderline content” and accused Facebook of deceiving the White House about Facebook's “borderline policies.” Flaherty also accused Facebook of being the “top driver of vaccine hesitancy.” Specifically, his email stated:

[Id. at 11-12]

[Id.]

[Id. at 11]

I am not trying to play ‘gotcha' with you. We are gravely concerned that your service is one of the top drivers of vaccine hesitancy - period. I will also be the first to acknowledge that borderline content offers no easy solutions. But we want to know that you're trying, we want to know how we can help, and we want to know that you're not playing a shell game with us when we ask you what is going on.
This would all be a lot easier if you would just be straight with us.
In response to Flaherty's email, Facebook responded, stating: “We obviously have work to do to gain your trust... We are also working to get you useful information that's on the level. That's my job and I take it seriously - I'll continue to do it to the best of my ability, and I'll expect you to hold me accountable.”

[Id. at 11]

[Id. at 11]

Slavitt, who was copied on Facebook's email, responded, accusing Facebook of not being straightforward, and added more pressure by stating, “internally, we have been considering our options on what to do about it.”

[Id. at 10]

(11) On March 19, 2021, Facebook had an in-person meeting with White House officials, including Flaherty and Slavitt. Facebook followed up on Sunday, March 21, 2021, noting that the White House had demanded a consistent point of contact with Facebook, additional data from Facebook, “Levers for Tackling Vaccine Hesitancy Content,” and censorship policies for Meta's platform WhatsApp. Facebook noted that in response to White House demands, it was censoring, removing, and reducing the virality of content discouraging vaccines “that does not contain actionable misinformation.” Facebook also provided a report for the White House on the requested information on WhatsApp policies:

[Id. at 15]

[Id.]

[Id. at 15]

You asked us about our levers for reducing virality of vaccine hesitancy content. In addition to policies previously discussed, these include the additional changes that were approved last week and that we will be implementing over the coming weeks. As you know, in addition to removing vaccine misinformation, we have been focused on reducing the virality of content discouraging vaccines that do not contain actionable misinformation.
On March 22, 2021, Flaherty responded to this email, demanding more detailed information and a plan from Facebook to censor the spread of “vaccine hesitancy” on Facebook. Flaherty also requested more information about and demanded greater censorship by Facebook of “sensational,” “vaccine skeptical” content. He also requested more information about WhatsApp regarding vaccine hesitancy. Further, Flaherty seemingly spoke on behalf of the White House and stated that the White House was hoping they (presumably the White House and Facebook) could be “partners here, even if it hasn't worked so far.” A meeting was scheduled the following Wednesday between Facebook and White House officials to discuss these issues.

[Id. at 15]

[Id.]

[Id.]

[Id.]

[Id. at 14]

On April 9, 2021, Facebook responded to a long series of detailed questions from Flaherty about how WhatsApp was censoring COVID-19 misinformation. Facebook stated it was “reducing viral activity on our platform” through message-forward limits and other speech-blocking techniques. Facebook also noted it bans accounts that seek to exploit COVID-19 misinformation.

[Id. at 17]

[Id. at 17]

Flaherty responded, “I care mostly about what actions and changes you are making to ensure you're not making our country's vaccine hesitancy problem worse,” accusing Facebook of being responsible for the Capitol riot on January 6, 2021, and indicating that Facebook would be similarly responsible for COVID-related deaths if it did not censor more information. “You only did this, however, after an election that you helped increase skepticism in, and an insurrection which was plotted, in large part, on your platform.”

[Id. at 17-21]

[Id. at 17]

(12) On April 14, 2021, Flaherty demanded the censorship of Fox News hosts Tucker Carlson and Tomi Lahren because the top post about vaccines that day was “Tucker Carlson saying vaccines don't work and Tomi Lahren stating she won't take a vaccine.” Flaherty stated, “This is exactly why I want to know what ‘Reduction' actually looks like - if ‘reduction' means ‘pumping our most vaccine hesitant audience with Tucker Carlson saying it does not work'... then... I'm not sure it's reduction!”

[Id. at 22]

[Id. at 22]

Facebook promised the White House a report by the end of the week.

[Id. at 23]

(13) On April 13, 2021, after the temporary halt of the Johnson & Johnson vaccine, the White House was seemingly concerned about the effect this would have on vaccine hesitancy. Flaherty sent Facebook a series of detailed requests about how Facebook could “amplify” various messages that would help reduce any effects this may have on vaccine hesitancy.

[Id. at 30-31]

Flaherty also requested that Facebook monitor “misinformation” relating to the Johnson & Johnson pause and demanded from Facebook a detailed report within twenty-four hours. Facebook provided the detailed report the same day. Facebook responded, “Re the J & J news, we're keen to amplify any messaging you want us to project about what this means for people.”

[Id. at 31]

[Id. at 31-32]

(14) Facebook responded to a telephone call from Rowe about how it was censoring information with a six-page report on censorship with explanations and screenshots of sample posts of content that it does and does not censor. The report noted that vaccine hesitancy content does not violate Facebook's content-moderation policies, but indicated that Facebook still censors this content by suppressing it in news feeds and algorithms. Other content that Facebook admitted did not violate its policy but may contribute to vaccine hesitancy included: a) sensational or alarmist vaccine misrepresentation; b) disparaging others based on the choice to or not to vaccinate; c) true but shocking claims or personal anecdotes; d) discussing the choice to vaccinate in terms of personal or civil liberties; and e) concerns related to mistrust in institutions or individuals. Facebook noted it censors such content through a “spectrum of levers” that includes concealing the content from other users, “de-boosting” the content, and preventing sharing through “friction.” Facebook also mentioned looking forward to tomorrow's meeting “and how we can hopefully partner together.”

[Id. at 24-25]

[Id.]

[Id. at 24-25]

[Id. at 24]

Other examples of posts that did not violate Facebook's policies but would nonetheless be suppressed included content that originated from the Children's Health Defense, a nonprofit activist group headed by Robert F. Kennedy, Jr. (labeled by Defendants as one of the “Disinformation Dozen”).

[Id. at 25-27]

(15) On April 14, 2021, Slavitt emailed Facebook executive Nick Clegg (“Clegg”) with a message expressing displeasure with Facebook's failure to censor Tucker Carlson. Slavitt stated, “Not for nothing but the last time we did this dance, it ended in an insurrection.” The subject line was “Tucker Carlson anti-vax message.” Clegg responded the same day with a detailed report about the Tucker Carlson post, stating that the post did not qualify for removal under Facebook policy but that the video was being labeled with a pointer to authoritative COVID-19 information, not being recommended to people, and that the video was being “demoted.”

[Id. at 34]

[Id. at 33]

[Id. at 36]

After Brian Rice (“Rice”) of Facebook forwarded the same report on the Tucker Carlson post to Flaherty on April 14, 2021, Flaherty responded to Rice wanting a more detailed explanation of why Facebook had not removed the Tucker Carlson video and questioning how the video had been “demoted” since there were 40,000 shares. Flaherty followed up six minutes later alleging Facebook provided incorrect information through CrowdTangle.

[Id. at 33-34]

[Id.]

Two days later, on April 16, 2021, Flaherty demanded immediate answers from Facebook regarding the Tucker Carlson video. Facebook promised to get something to him that night. Facebook followed up on April 21, 2021, with an additional response in regard to an apparent call from Flaherty (“thanks for catching up earlier”). Facebook reported the Tucker Carlson content had not violated Facebook's policy, but Facebook gave the video a 50% demotion for seven days and stated that it would continue to demote the video.

[Id. at 33]

[Id.]

[Id. at 33, 36]

(16) On April 21, 2021, Flaherty, Slavitt, and other HHS officials met with Twitter officials about a “Twitter Vaccine Misinfo Briefing.” The invite stated the White House would be briefed by Twitter on vaccine information, trends seen generally about vaccine information, the tangible effects seen from recent policy changes, what interventions were being implemented, previous policy changes, and ways the White House could “partner” in product work.

[Doc. No. 71-7 at 86].

Twitter's discovery responses indicated that during the meeting, White House officials wanted to know why Alex Berenson (“Berenson”) had not been “kicked off” Twitter. Slavitt suggested Berenson was “the epicenter of disinfo that radiated outwards to the persuadable public.” Berenson was suspended on July 16, 2021, and was permanently de-platformed on August 28, 2021.

[Doc. No. 212-14 at 2-5]

[Id.]

[Doc. No. 212-14, Exh. J, at 2-5]

(17) Also on April 21, 2021, Flaherty, Slavitt, and Fitzpatrick had a meeting with several YouTube officials. The invitation stated the purpose of this meeting was for the White House to be briefed by YouTube on general trends seen around vaccine misinformation, the effects of YouTube's efforts to combat misinformation, interventions YouTube was trying, and ways the White House can “partner” in product work.

[Doc. No. 212-15, Exh. K, at 1-4] SEALED DOCUMENT

In an April 22, 2021, email, Flaherty provided a recap of the meeting and stated his concern that misinformation on YouTube was “shared at the highest (and I mean the highest) levels of the White House.” Flaherty indicated that the White House remains concerned that YouTube is “funneling people into hesitancy and intensifying people's vaccine hesitancy.” Flaherty further shared that “we” want to make sure YouTube has a handle on vaccine hesitancy and is working toward making the problem better. Flaherty again noted vaccine hesitancy was a concern that is shared by the highest (“and I mean the highest”) levels of the White House.

[Doc. No. 174-1 at 39-40]

[Id.]

[Id.]

[Doc. No. 174-1 at 39-40]

Flaherty further indicated that the White House was coordinating with the Stanford Internet Observatory (which was operating the Virality Project): “Stanford has mentioned that it's recently Vaccine Passports and J&J pause-related stuff, but I'm not sure if that reflects what you're seeing.” Flaherty praised YouTube for reducing distribution of content: “I believe you said you reduced watch time by 70% on borderline content, which is impressive.” However, Flaherty followed up with additional demands for more information from YouTube. Flaherty emphasized that the White House wanted to make sure YouTube's work extends to the broader problem of people viewing “vaccine-hesitant content.” Flaherty also suggested regular meetings with YouTube (“Perhaps bi-weekly”) as they have done with other “platform partners.”

[Id. at 39]

[Id.]

[Doc. No. 214-1 at 39-40]

[Id. at 39-40]

(18) On April 23, 2021, Flaherty sent Facebook an email including a document entitled “Facebook COVID-19 Vaccine Misinformation Brief” (“the Brief”), which indicated that Facebook plays a major role in the spread of COVID vaccine misinformation and found that Facebook's policy and enforcement gaps enable misinformation to spread. The Brief recommended much more aggressive enforcement of Facebook's censorship policies and called for progressively severe penalties. The Brief further recommended Facebook stop distributing anti-vaccine content in News Feed or in group recommendations. The Brief also called for “warning screens” before linking to domains known to promote vaccine misinformation. Flaherty noted sending this Brief was not a White House endorsement of it, but “this is circulating around the building and informing thinking.”

[Doc. No. 214-14 at 2-3]

[Id.]

[Doc. No. 214-14 at 2-3, Jones Declaration]

On May 1, 2021, Facebook's Clegg sent an email to Slavitt indicating Facebook and the White House met recently to “share research work.” Clegg apologized for not catching and censoring three pieces of vaccine content that went viral and promised to censor such content more aggressively in the future:

[Doc. No. 214-1]

I wanted to send you a quick note on the three pieces of vaccine content that were seen by a high number of people before we demoted them. Although they don't violate our community standards, we should have demoted them before they went viral, and this has exposed gaps in our operational and technical process.
Notably, these three pieces of information did not violate Facebook's policies. Clegg told Slavitt that Facebook teams had spent the past twenty-four hours analyzing gaps in Facebook and were making several changes next week.

[Doc. No. 214-1 at ¶ 116].

Clegg listed, in bold, the demands that the White House had made in a recent meeting and provided a response to each. The demands were: a) address Non-English mis/disinformation circulating without moderation; b) do not distribute or amplify vaccine hesitancy, and Facebook should end group recommendations for groups with a history of COVID-19 or vaccine misinformation; c) monitor events that host anti-vaccine and COVID disinformation; and d) address twelve accounts that were responsible for 73% of vaccine misinformation. Facebook noted that it was scrutinizing these accounts and censoring them whenever it could, but that most of the content did not violate Facebook's policies. Facebook referred to its new policy as its “Dedicated Vaccine Discouraging Entities.” Facebook even suggested that too much censorship might be counterproductive and drive vaccine hesitancy: “Among experts we have consulted, there is a general sense that deleting more expressions of vaccine hesitancy might be more counterproductive to the goal of vaccine uptake because it could prevent hesitant people from talking through their concerns and potentially reinforce the notion that there's a ‘cover-up.'”

[Doc. No. 174-1 at 41-42]

[Doc. No. 174-1 at 41-42].

[Id.]

[Id. at 42]

(19) On May 5, 2021, then-White House Press Secretary Jen Psaki (“Psaki”) publicly began pushing Facebook and other social-media platforms to censor COVID-19 misinformation. At a White House Press Conference, Psaki publicly reminded Facebook and other social-media platforms of the threat of “legal consequences” if they do not censor misinformation more aggressively. Psaki further stated: “The President's view is that the major platforms have a responsibility related to the health and safety of all Americans to stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19 vaccinations and elections.” Psaki linked the threat of a “robust anti-trust program” with the White House's censorship demand: “He also supports better privacy protections and a robust anti-trust program. So, his view is that there's more that needs to be done to ensure that this type of misinformation; disinformation; damaging, sometimes life-threatening information, is not going out to the American public.”

[Doc. No. 266-6 at 374]

[Id.]

The next day, Flaherty followed up with another email to Facebook and chastised Facebook for not catching various COVID-19 misinformation. Flaherty demanded more information about Facebook's efforts to demote borderline content, stating, “Not to sound like a broken record, but how much content is being demoted, and how effective are you at mitigating reach, and how quickly?” Flaherty also criticized Facebook's efforts to censor the “Disinformation Dozen”: “Seems like your ‘dedicated vaccine hesitancy' policy isn't stopping the disinfo-dozen - they're being deemed as not dedicated - so it feels like that problem likely coming over to groups.”

[Doc. No. 174-1 at 41]

[Id.]

Things apparently became tense between the White House and Facebook after that, culminating in Flaherty's July 15, 2021 email to Facebook, in which Flaherty stated: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”

[Id. at 55]

(20) On July 15, 2021, things became even more tense between the White House, Facebook, and other social-media platforms. At a joint press conference between Psaki and Surgeon General Murthy to announce the Surgeon General's “Health Advisory on Misinformation,” Psaki announced that Surgeon General Murthy had published an advisory on health misinformation as an urgent public health crisis. Murthy announced: “Fourth, we're saying we expect more from our technology companies. We're asking them to operate with greater transparency and accountability. We're asking them to monitor misinformation more closely. We're asking them to consistently take action against misinformation super-spreaders on their platforms.” Psaki further stated, “We are in regular touch with these social-media platforms, and those engagements typically happen through members of our senior staff, but also members of our COVID-19 team,” and “We're flagging problematic posts for Facebook that spread disinformation.”

[Doc. No. 210-1 at 16 (Waldo Depo, Exh. 10)]

[Id. at 162]

[Doc. No. 10-1 at 370]

[Id. at 376-77]

Psaki followed up by stating that the White House's “asks” include four key steps by which social-media companies should: 1) measure and publicly share the impact of misinformation on their platforms; 2) create a robust enforcement strategy; 3) take faster action against harmful posts; and 4) promote quality information sources in their feed algorithms.

[Id. at 377-78]

The next day, on July 16, 2021, President Biden, after being asked what his message was to social-media platforms when it came to COVID-19, stated, “[T]hey're killing people.” Specifically, he stated, “Look, the only pandemic we have is among the unvaccinated, and that they're killing people.” Psaki stated the actions of censorship Facebook had already conducted were “clearly not sufficient.”

[Doc. No. 10-1 at 370]

[Id. at 436-37]

[Doc. No. 10-1 at 446]

Four days later, on July 20, 2021, at a White House Press Conference, White House Communications Director Kate Bedingfield (“Bedingfield”) stated that the White House would be announcing whether social-media platforms are legally liable for misinformation spread on their platforms and examining how misinformation fits into the liability protection granted by Section 230 of the Communications Decency Act (which shields social-media platforms from being responsible for posts by third parties on their sites). Bedingfield further stated the administration was reviewing policies that could include amending the Communications Decency Act and that the social-media platforms “should be held accountable.”

[Doc. No. 10-1 at 477-78]

[Id.]

(21) The public and private pressure from the White House apparently had its intended effect. All twelve members of the “Disinformation Dozen” were censored, and pages, groups, and accounts linked to the Disinformation Dozen were removed.

[Doc. No. 10-1 at 483-85]

Twitter suspended Berenson's account within a few hours of President Biden's July 16, 2021 comments. On July 17, 2021, a Facebook official sent an email to Anita B. Dunn (“Dunn”), Senior Advisor to the President, asking for ways to “get back into the White House's good graces” and stated Facebook and the White House were “100% on the same team here in fighting this.”

[Doc. No. 214-12 at 2-5]

[Doc. No. 174-1 at 49]

(22) On November 30, 2021, the White House's Christian Tom (“Tom”) emailed Twitter requesting that Twitter watch a video of First Lady Jill Biden that had been edited to make it sound as if the First Lady were profanely heckling children while reading to them. Twitter responded within six minutes, agreeing to “escalate with the team for further review.” Twitter advised users that the video had been edited for comedic effect. Tom then requested Twitter apply a “Manipulated Media” disclaimer to the video. After Twitter told Tom the video was not subject to labeling under its policy, Tom disputed Twitter's interpretation of its own policy and added Michael LaRosa (“LaRosa”), the First Lady's Press Secretary, into the conversation. Further efforts by Tom and LaRosa to censor the video on December 9, 13, and 17 finally resulted in the video's removal in December 2021.

[Doc. No. 174-1 at 59-67]

[Id.]

[Id.]

[Id.]

[Id. at 59-67]

(23) In January 2022, Facebook reported to Rowe, Murthy, Flaherty, and Slavitt that it had “labeled and demoted” vaccine humor posts whose content could discourage vaccination. Facebook also reported to the White House that it “labeled and ‘demoted' posts suggesting natural immunity to a COVID-19 infection is superior to vaccine immunity.” In January 2022, Jesse Lee (“Lee”) of the White House sent an email accusing Twitter of calling the President a liar in regard to a Presidential tweet.

[Doc. No. 71-3 at 10-11]

[Doc. No. 71-3 at 10-11]

[Doc. No. 174-1 at 69]

At a February 1, 2022, White House press conference, Psaki stated that the White House wanted every social-media platform to do more to call out misinformation and disinformation, and to uplift accurate information.

[Doc. No. 10-1 at 501-2]

At an April 25, 2022, White House press conference, after being asked to respond to news that Elon Musk may buy Twitter, Psaki again raised the threat of amending Section 230 of the Communications Decency Act, linking that threat to social-media platforms' failure to censor misinformation and disinformation.

[Id. at 62-63, ¶¶ 193-197]

On June 13, 2022, Flaherty demanded Meta continue to produce periodic COVID-19 insight reports to track COVID-19 misinformation, and he expressed a concern about misinformation regarding the upcoming authorization of COVID-19 vaccines for children under five years of age. Meta agreed to do so on June 22, 2022.

[Doc. No. 71-3 at 5-6]

(24) In addition to misinformation regarding COVID-19, the White House also asked social-media companies to censor misinformation regarding climate change, gender discussions, abortion, and economic policy. At an Axios event entitled “A Conversation on Battling Misinformation,” held on June 14, 2022, the White House National Climate Advisor Gina McCarthy (“McCarthy”) blamed social-media companies for allowing misinformation and disinformation about climate change to spread and explicitly tied these censorship demands with threats of adverse legislation regarding the Communications Decency Act.

[Doc. No. 214-15]

On June 16, 2022, the White House announced a new task force to target “general misinformation” and disinformation campaigns targeted at women and LGBTQI individuals who are public and political figures, government and civic leaders, activists, and journalists. The June 16, 2022, Memorandum discussed the creation of a task force to reel in “online harassment and abuse” and to develop programs targeting such disinformation campaigns. The Memorandum also called for the Task Force to confer with technology experts and again threatened social-media platforms with adverse legal consequences if the platforms did not censor aggressively enough.

[Doc. No. 214-15]

[Id.]

[Doc. No. 214-16]

On July 8, 2022, President Biden signed an Executive Order on protecting access to abortion. Section 4(b)(iv) of the order required the Attorney General, the Secretary of HHS, and the Chair of the Federal Trade Commission to address deceptive or fraudulent practices relating to reproductive healthcare services, including those online, and to protect access to accurate information.

[Doc. No. 214-18]

On August 11, 2022, Flaherty emailed Twitter to dispute a note added by Twitter to one of President Biden's tweets about gas prices.

[Doc. No. 174-1 at 68]

(25) On August 23, 2021, Flaherty emailed Facebook requesting a report on how Facebook intended to promote the FDA approval of the Pfizer vaccine. He also stated that the White House would appreciate a “push” and provided suggested language.

[Id.]

B. Surgeon General Defendants

Surgeon General Defendants consist of Dr. Vivek H. Murthy (“Murthy”) and Katharine Dealy (“Dealy”).

Surgeon General Murthy is the Surgeon General of the United States. Eric Waldo (“Waldo”) is the Senior Advisor to the Surgeon General and was formerly Chief Engagement Officer for the Surgeon General's office. Waldo's Deposition was taken as part of the allowed Preliminary Injunction-related discovery in this matter.

[Doc. No. 210]

(1) Waldo was responsible for maintaining the contacts and relationships with representatives of social-media platforms. Waldo did pre-rollout calls with Twitter, Facebook, and Google/YouTube before the Surgeon General's health advisory on misinformation was published on July 15, 2021. Waldo admitted that Murthy used his office to directly advocate for social-media platforms to take stronger actions against health “misinformation” and that those actions involved putting pressure on social-media platforms to reduce the dissemination of health misinformation. Surgeon General Murthy's message was given to social-media platforms both publicly and privately.

[Doc. No. 210 at 11, 20]

[Id. at 25, 28]

[Id. at 11, 20, 25, 28]

(2) At a July 15, 2021 joint press conference between Psaki and Murthy, the two made the comments mentioned previously in II A(19), which publicly called for social-media platforms “to do more” to take action against misinformation super-spreaders. Murthy was directly involved in editing and approving the final work product for the July 15, 2021 health advisory on misinformation. Waldo also admitted that Murthy used his “bully pulpit” to talk about health misinformation and to put public pressure on social-media platforms.

[Id. at 33-35]

[Waldo depo at 14-17]

[Id. at 29]

(3) Waldo's initial rollout with Facebook was negatively affected because of the public attacks by the White House and Office of the Surgeon General towards Facebook for allowing misinformation to spread. Clegg of Facebook reached out to request “de-escalation” and “working together” instead of the public pressure. In the call between Clegg and Murthy, Murthy told Clegg he wanted Facebook to do more to censor misinformation on its platforms. Murthy also requested Facebook share data about the scope and reach of misinformation on Facebook's platforms with external researchers so that those researchers could validate the spread of misinformation. “Data about misinformation” was the topic of conversation in this call; DJ Patil, chief data scientist in the Obama Administration, Murthy, Waldo, and Clegg all participated on the call. The purpose of the call was to demand more information from Facebook about monitoring the spread of misinformation.

[Id. at 91-94]

[Doc. No. 210 at 95-98]

[Id.]

[Doc. No. 210 at 95-98]

(4) One of the “external researchers” that the Office of the Surgeon General likely had in mind was Renee DiResta (“DiResta”) from the Stanford Internet Observatory, a leading organization of the Virality Project. The Virality Project hosted a “rollout event” for Murthy's July 15, 2021 press conference.

The Virality Project will be discussed later in greater detail.

[Id. at 36-38]

There was coordination between the Office of the Surgeon General and the Virality Project on the launch of Murthy's health advisory. Kyla Fullenwider (“Fullenwider”) is the Office of the Surgeon General's key subject-matter expert who worked on the health advisory on misinformation. Fullenwider works for a non-profit contractor, United States Digital Response. Waldo, Fullenwider, and DiResta were involved in a conference call after the July 15, 2021 press conference where they discussed misinformation. The Office of the Surgeon General anticipated that social-media platforms would feel pressured by the Surgeon General's health advisory.

[Id. at 38]

[Id. at 39, 59, 85]

[Id.]

[Id. at 39, 59, 85]

(5) Waldo and the Office of the Surgeon General received a briefing from the Center for Countering Digital Hate (“CCDH”) about the “Disinformation Dozen.” CCDH gave a presentation about the Disinformation Dozen and how CCDH measured and determined that the Disinformation Dozen were primarily responsible for a significant amount of online misinformation.

[Id. at 43, 47]

(6) In his deposition, Waldo discussed various phone calls and communications between Defendants and Facebook. In August of 2021, Waldo joined a call with Flaherty and Brian Rice of Facebook. The call was an update by Facebook about the internal action it was taking regarding censorship. Waldo was aware of at least one call between Murthy and Facebook in the period between President Biden's election and assuming office, and he testified that the call was about misinformation. Waldo was also aware of other emails and at least one phone call where Flaherty communicated with Facebook.

[Id. at 66, 124-25]

[Id. at 66, 124-25]

[Id. at 55-56]

[Id. at 64-65]

(7) The first meeting between the Office of the Surgeon General and social-media platforms occurred on May 25, 2021, between Clegg, Murthy, and Slavitt. The purpose of this call was to introduce Murthy to Clegg. Clegg emailed Murthy with a report of misinformation on Facebook on May 28, 2021.

[Doc. No. 210-4]

Policy updates about increasing censorship were announced by Facebook on May 27, 2021. The Office of the Surgeon General had a pre-rollout (i.e., before the rollout of the Surgeon General's health advisory on misinformation) call with Twitter and YouTube on July 12 and July 14, 2021. The Office of the Surgeon General had a rollout call with Facebook on July 16, 2021. The July 16 call with Facebook was right after President Biden had made his “[T]hey're killing people” comment (II A (19), above), and it was an “awkward call” according to Waldo.

[Id. at 78, Exh. 3]

[Id. at 85]

[Id.]

Another call took place on July 23, 2021, between Murthy, Waldo, DJ Patil, Clegg, and Rice. Clegg shared more about the spread of information and disinformation on Facebook after the meeting. At the meeting, Murthy raised the issue of wanting to have a better understanding of the reach of misinformation and disinformation as it relates to health on Facebook; Murthy often referred to health misinformation in these meetings as “poison.” The Surgeon General's health advisory explicitly called for social-media platforms to do more to control the reach of misinformation.

[Id. at 95-98, 101, 105]

[Id. at 107-08]

On July 30, 2021, Waldo had a meeting with Google and YouTube representatives. At the meeting, Google and YouTube reported to the Office of the Surgeon General what actions they were taking following the Surgeon General's health advisory on misinformation.

[Doc. No. 210-4 at 33]

On August 10, 2021, Waldo and Flaherty had a call with Rice calling for Facebook to report to federal officials as to Facebook's actions to remove “disinformation” and to provide details regarding a vaccine misinformation operation Facebook had uncovered.

[Id.]

Another meeting took place between Google/YouTube, Waldo, and Flaherty on September 14, 2021, to discuss a new policy YouTube was working on and to provide the federal officials with an update on YouTube's efforts to combat harmful COVID-19 misinformation on its platform.

[Id. at 129]

(8) After the meetings with social-media platforms, the platforms seemingly fell in line with the Office of the Surgeon General's and White House's requests. Facebook announced policy updates about censoring misinformation on May 27, 2021, two days after the meeting. As promised, Clegg provided an update on misinformation to the Office of the Surgeon General on May 28, 2021, three days after the meeting, and began sending bi-weekly COVID content reports on June 14, 2021.

[Doc. No. 210-1 at 138]

[Doc. No. 210-5 at 1-2]

[Doc. No. 210-6]

On July 6, 2021, Waldo emailed Twitter to set up the rollout call for the Office of the Surgeon General's health advisory on misinformation and told Twitter that Murthy had been thinking about how to stop the spread of health misinformation; that he knew Twitter's teams were working hard and thinking deeply about the issue; and that he would like to chat over Zoom to discuss. Twitter ultimately publicly endorsed the Office of the Surgeon General's call for greater censorship of health misinformation.

[Doc. No. 210-7 at 145-46]

[Id.]

Waldo sent an email to YouTube on July 6, 2021, to set up the rollout call and to state that the Office of the Surgeon General's purpose was to stop the spread of misinformation on social-media platforms. YouTube eventually adopted a new policy on combatting COVID-19 misinformation and began providing federal officials with updates on YouTube's efforts to combat the misinformation.

[Doc. No. 210-8]

[Doc. No. 210-8]

(9) At the July 15, 2021 press conference, Murthy described health misinformation as one of the biggest obstacles to ending the pandemic; insisted that his advisory addressed an urgent public health threat; and stated that misinformation poses an imminent threat to the nation's health and takes away the freedom to make informed decisions. Murthy further described health misinformation as information that is false, inaccurate, or misleading based upon the best evidence at the time.

[Doc. No. 210-11]

[Doc. No. 210-11]

Murthy also stated that people who question mask mandates and decline vaccinations are following misinformation, which results in illnesses and death. Murthy placed specific blame on social-media platforms for allowing “poison” to spread and further called for an “all-of-society approach” to fight health misinformation. Murthy called upon social-media platforms to operate with greater transparency and accountability, to monitor misinformation more closely, and to “consistently take action against misinformation super-spreaders on their platforms.” Notably, Waldo agreed in his deposition that the word “accountable” carries with it the threat of consequences. Murthy further demanded that social-media platforms do “much, much more” and take “aggressive action” against misinformation because the failure to do so is “costing people their lives.”

[Doc. No. 210-11]

[Id.]

[Id.]

[Id.]

[Id.]

(10) Murthy's July 15, 2021 health advisory on misinformation blamed social-media platforms for the spread of misinformation at an unprecedented speed, and it blamed social-media features and algorithms for furthering the spread. The health advisory further called for social-media platforms to enact policy changes to reduce the spread of misinformation, including appropriate legal and regulatory measures.

[Doc. No. 210-11]

[Id.]

Under a heading entitled “What Technology Platforms Can Do,” the health advisory called for platforms to take a series of steps to increase and enable greater social-media censorship of misinformation, including product changes, changing algorithms to avoid amplifying misinformation, building in “frictions” to reduce the sharing of misinformation, and practicing the early detection of misinformation super-spreaders, along with other measures. The consequences for misinformation would include flagging problematic posts, suppressing the spread of the information, suspension, and permanent de-platforming.

[Id.]

[Id.]

(11) The Office of the Surgeon General collaborated and partnered with the Stanford University Internet Observatory and the Virality Project. Murthy participated in a January 15, 2021 launch of the Virality Project. In his comments, Murthy told the group, “We're asking technology companies to operate with great transparency and accountability so that misinformation does not continue to poison our sharing platforms and we knew the government can play an important role, too.”

[Doc. No. 210-13, Doc. No. 210, at 206-07].

Murthy expressly mentioned his coordination with DiResta at the Virality Project and expressed his intention to maintain that collaboration. He claimed that he had learned a lot from the Virality Project's work and thanked the Virality Project for being such a great “partner.” Murthy also stated that the Office of the Surgeon General had been “partnered with” the Stanford Internet Observatory for many months.

[Doc. No. 210-1 at 213]

[Doc. No. 210-1 at 213]

(12) After President Biden's “[T]hey're killing people” comment on July 16, 2021, Facebook representatives had “sad faces” according to Waldo. On July 21, 2021, Facebook emailed Waldo and Fullenwider with CrowdTangle data and with “interventions” that created “frictions” with regard to COVID misinformation. The interventions also included limiting forwarding of WhatsApp messages, placing warning labels on fact-checked content, and creating “friction” when someone tries to share these posts on Facebook. Facebook also reported other censorship policy and actions, including censoring content that contributes to the risk of imminent physical harm, permanently banning pages, groups, and accounts that repeatedly broke Facebook's COVID-19 misinformation rules, and reducing the reach of posts, pages, groups, and accounts that share other false claims “that do not violate our policies but may present misleading or sensationalized information about COVID-19 and vaccines.”

[Doc. No. 210-15]

On July 16, 2021, Clegg emailed Murthy and stated, “I know our teams met today to better understand the scope of what the White House expects of us on misinformation going forward.” On July 18, 2021, Clegg messaged Murthy stating “I imagine you and your team are feeling a little aggrieved-as is the [Facebook] team, it's not great to be accused of killing people-but as I said by email, I'm keen to find a way to deescalate and work together collaboratively. I am available to meet/speak whenever suits.” As a result of this communication, a meeting was scheduled for July 23, 2021.

[Doc. No. 210-16]

[Doc. No. 210-17]

[Doc. No. 210-18]

At the July 23, 2021 meeting, officials from the Office of the Surgeon General were concerned with understanding the reach of Facebook's data. Clegg even sent a follow-up email after the meeting to make sure Murthy saw the steps Facebook had been taking to adjust policies with respect to misinformation and to further address the “disinfo-dozen.” Clegg also reported that Facebook had “expanded the group of false claims that we remove, to keep up with recent trends of misinformation that we are seeing.” Further, Facebook also agreed to “do more” to censor COVID misinformation, to make its internal data on misinformation available to federal officials, to report back to the Office of the Surgeon General, and to “strive to do all we can to meet our ‘shared' goals.”

[Id.]

[Id. at 4-5]

[Id.]

[Id.]

Evidently, the promised information had not been sent to the Office of the Surgeon General by August 6, 2021, so the Office requested the information in a report “within two weeks.” The information, entitled “How We're Taking Action Against Vaccine Misinformation Superspreaders,” was later sent to the Office of the Surgeon General. It detailed a list of censorship actions taken against the “Disinformation Dozen.” Clegg followed up with an August 20, 2021 email with a section entitled “Limiting Potentially Harmful Misinformation,” which detailed more efforts to censor COVID-19 misinformation. Facebook continued to report back to Waldo and Flaherty with updates on September 19 and 29 of 2021.

[Doc. No. 210-22 at 1-3]

[Doc. No. 210-21]

[Doc. No. 210-22 at 2]

[Doc. No. 210, Waldo depo. Exh. 30, 31]

(13) Waldo asked for similar updates from Twitter, Instagram, and Google/YouTube.

[Doc. No. 210, Waldo depo. at 257-58]

(14) The Office of the Surgeon General also collaborated with the Democratic National Committee. Flaherty emailed Murthy on July 19, 2021, to put Murthy in touch with Jiore Craig (“Craig”) from the Democratic National Committee who worked on misinformation and disinformation issues. Craig and Murthy set up a Zoom meeting for July 22, 2021.

[Doc. No. 210, Exh. 22]

(15) After an October 28, 2021 Washington Post article stated that Facebook researchers had deep knowledge about how COVID-19 and vaccine misinformation ran through Facebook's apps, Murthy issued a series of tweets from his official Twitter account indicating he was “deeply disappointed” to read this story, that health misinformation had harmed people's health and cost lives, and that “we must demand Facebook and the rest of the social-media ecosystems take responsibility for stopping health misinformation on their platforms.” Murthy further tweeted that “we need transparency and accountability now.”

[Doc. No. 210, Exh. 31]

[Id.]

(16) On October 29, 2021, Facebook asked federal officials to provide a “federal health contact” to dictate “what content would be censored on Facebook's platforms.” Federal officials informed Facebook that the federal health authority that could dictate what content could be censored as misinformation was the CDC.

[Doc. No. 210, Exh. 33]

[Id.]

(17) Murthy continued to publicly chastise social-media platforms for allowing health misinformation to be spread on their platforms. Murthy made statements in the following forums: a December 21, 2021 podcast, threatening to hold social-media platforms accountable for not censoring misinformation; a January 3, 2022 podcast with Alyssa Milano, stating that “platformers need to step up to be accountable for making their spaces safer”; and a February 14, 2022 panel discussion hosted by the Rockefeller Foundation, wherein the participants discussed how technology platforms enabled the speed, scale, and sophistication with which misinformation was spreading.

[Doc. No. 210, Exh. 38, Audio Transcript, at 7]

[Doc. No. 210-33]

[Doc. No. 210-34]

On March 3, 2022, the Office of the Surgeon General issued a formal Request for Information (“RFI”), published in the Federal Register, seeking information from social-media platforms and others about the spread of misinformation. The RFI indicated that the Office of the Surgeon General was expanding attempts to control the spread of misinformation on social media and other technology platforms. The RFI also sought information about censorship policies, how they were enforced, and information about disfavored speakers. The RFI was sent to Facebook, Google/YouTube, LinkedIn, Twitter, and Microsoft by Max Lesko (“Lesko”), Murthy's Chief of Staff, requesting responses from these social-media platforms. Murthy again restated social-media platforms' responsibility to reduce the spread of misinformation in an interview with GQ Magazine. Murthy also specifically called upon Spotify to censor health misinformation.

[Doc. No. 32. Ex. 42, 87 Fed.Reg. 12712]

[Id.]

[Id.]

[Id. Exh. 46, 47, 48, 49, 50, 51]

[Id.]

[Id. Exh. 51]

[Exh. 52]

C. CDC Defendants

The CDC Defendants consist of the Centers for Disease Control & Prevention, Carol Crawford (“Crawford”), Jay Dempsey (“Dempsey”), Kate Galatas (“Galatas”), United States Census Bureau (“Census Bureau”), Jennifer Shopkorn (“Shopkorn”), the Department of Health and Human Services (“HHS”), Xavier Becerra (“Becerra”), Yolanda Byrd (“Byrd”), Christy Choi (“Choi”), Ashley Morse (“Morse”), and Joshua Peck (“Peck”).

(1) Crawford is the Director for the Division of Digital Media within the CDC Office of the Associate Director for Communications. Her deposition was taken pursuant to preliminary-injunction-related discovery here. The CDC is a component of the Department of Health and Human Services (“HHS”); Xavier Becerra (“Becerra”) is the Secretary of HHS. Crawford's division provides leadership for the CDC's web presence, and Crawford, as director, determines strategy and objectives and oversees its general work. Crawford was the main point of contact for communications between the CDC and social-media platforms.

[Doc. No. 205-1]

[Doc. No. 266-5 at 57-61]

[Doc. No. 205-1 at 11]

[Id. at 249]

Prior to the COVID-19 pandemic, Crawford had only limited contact with social-media platforms, but she began having regular contact once the pandemic began, in February and March of 2020. Crawford communicated with these platforms via email, phone, and meetings.

[Id. at 16-18]

[Id. at 20]

(2) Facebook emailed State Department officials on February 6, 2020, stating that it had taken proactive and reactive steps to control information and misinformation related to COVID-19. The email was forwarded to Crawford, who forwarded it to her contacts at Facebook. Facebook proposed to Crawford that it would create a Coronavirus page that would give information from trusted sources, including the CDC. Crawford accepted Facebook's proposal on February 7, 2020, and suggested the CDC may want to address “widespread myths” on the platform.

[Doc. 205-3 at 3]

[Id. at 1-2]

Facebook began sending Crawford CrowdTangle reports on January 25, 2021. CrowdTangle is a social-media listening tool owned by Meta, which shows themes of discussion on social-media channels. These reports covered the “top engaged COVID and vaccine-related content overall across Pages and Groups.” This CrowdTangle report was sent by Facebook to Crawford in response to a prior conversation between the two. The CDC had enjoyed privileged access to CrowdTangle since early 2020.

[Id. at 49-52]

[Doc. No. 205-1, Exh. 6 at 2]

[Id. at 49-52, 146-47]

Facebook emailed Crawford on March 3, 2020, stating that it intended to support the Government in its response to the Coronavirus, including a goal to remove certain information. Crawford and Facebook began having discussions about misinformation in the Fall of 2020, including discussions of how to combat it.

[Doc. No. 205-4 at 1-2]

[Doc. No. 205-7 at 1-2]

The CDC used CrowdTangle, along with Meltwater reports (used for all platforms), to monitor social media's themes of discussion across platforms. Crawford recalls generally discussing misinformation with Facebook. Crawford added Census Bureau officials to the distribution list for CrowdTangle reports because the Census Bureau was going to begin working with the CDC on misinformation issues.

[Doc. No. 205-1 at 154-55]

[Id. at 58]

[Id.]

(3) On January 27, 2021, Facebook sent Crawford a recurring invite to a “Facebook weekly sync with CDC.” A number of Facebook and CDC officials were included in the invite, and the CDC could invite other agencies as needed. The CDC had weekly meetings with Facebook.

[Doc. No. 205-1 at 226]

[Doc. No. 205-36]

[Doc. No. 205-1 at 226]

(4) On March 10, 2021, Crawford sent Facebook an email seeking information about “Themes that have been removed for misinfo.” The CDC asked whether Facebook had information on the types of posts that were removed. Crawford was aware that the White House and the HHS were also receiving similar information from Facebook. The HHS was present at meetings with social-media companies on March 1, 2021, and on April 21, 2021.

[Doc. No. 205-44 at 2-3]

[Doc. No. 205-1 at 258-61]

Twitter with White House

Twitter with White House

(5) On March 25, 2021, Crawford and other CDC officials met with Facebook. In an email by Facebook prior to that meeting, Facebook stated it would present on COVID-19 misinformation and have various persons present, including a Misinformation Manager and a Content-Manager official (Liz Lagone). Crawford responded, attaching a PowerPoint slide deck, stating “This is a deck Census would like to discuss and we'd also like to fit in a discussion of topic types removed from Facebook.” Crawford also indicated two Census Bureau officials, Schwartz and Shopkorn, would be present, as well as two Census Bureau contractors, Sam Huxley and Christopher Lewitzke.

[Doc. No. 205-1 at 103]

[Id.]

[Doc. No. 205-34 at 3]

The “deck” the Census Bureau wanted to discuss contained an overview of “Misinformation Topics” and included “concerns about infertility, misinformation about side effects, and claims about vaccines leading to deaths.” For each topic, the deck included sample slides and a statement from the CDC debunking the allegedly erroneous claim.

[Id. at 4]

[Id. at 6-14]

(6) Crawford admits she began engaging in weekly meetings with Facebook, and emails verify that the CDC and Facebook were repeatedly discussing misinformation back and forth. The weekly meetings involved Facebook's content-moderation teams. In these meetings, Crawford mainly inquired about how Facebook was censoring COVID-19 misinformation.

[Doc. No. 205-1 at 68-69]

[Doc. No. 205-9 at 1-4]

[Doc. No. 205-1 at 68-69]

(7) The CDC entered into an Interagency Agreement (“IAA”) with the Census Bureau to help advise on misinformation. The IAA required that the Census Bureau provide reports to the CDC on misinformation that the Census Bureau tracked on social media. To aid in this endeavor, Crawford asked Facebook to allow the Census Bureau to be added to CrowdTangle.

[Doc. No. 205-1 at 71-72, 110]

[Doc. No. 205-9 at 1]

(8) After the March 2021 weekly meetings between Facebook, the CDC, and the Census Bureau began, Crawford began to press Facebook on removing and/or suppressing misinformation. In particular, she stated, “The CDC would like to have more info... about what is being done on the amplification-side,” and the CDC “is still interested in more info on how you view or analyze the data on removals, etc.” Further, Crawford noted, “It looks like the posts from last week's deck about infertility and side effects have all been removed. Were these evaluated by the moderation team or taken down for another reason?” Crawford also questioned Facebook about a CrowdTangle report showing local news coverage of deaths following vaccination and asked what Facebook's approach was for “adding labels” to those stories.

[Doc. No. 205-9 at 2]

[Id.]

[Doc. No. 205-9 at 1]

On April 13, 2021, Facebook emailed Crawford to propose enrolling CDC and Census Bureau officials in a special misinformation reporting channel; this would include five CDC officials and four Census Bureau officials. The portal was only provided to federal officials.

[Doc. No. 205-11 at 2]

On April 23, 2021, and again on April 28, 2021, Crawford emailed Facebook about a Wyoming Department of Health report noting that the algorithms Facebook and other social-media networks were using to “screen out postings of sources of vaccine misinformation” were also screening out valid public health messages.

[Doc. No. 205-38 at 2]

On May 6, 2021, Crawford emailed Facebook a table containing a list of sixteen specific postings on Facebook and Instagram that contained misinformation. Crawford stated in her deposition that she knew that when she “flagged” content for Facebook, it would evaluate and possibly censor the content. Crawford stated the CDC's goal in flagging information for Facebook was “to be sure that people have credible health information so that they can make the correct health decisions.” Crawford continued to “flag” and send misinformation posts to Facebook, and on May 19, 2021, Crawford provided Facebook with twelve specific claims.

[Doc. No. 205-10 at 1-3]

[Doc. No. 205-1 at 88]

[Id.]

[Doc. No. 205-12 at 1]

(9) Facebook began to rely on Crawford and the CDC to determine whether claims were true or false. Crawford began providing Facebook with the CDC's “scientific information” to use in determining whether to “remove or reduce and inform.” Facebook was relying on the CDC's “scientific information” to determine whether statements made on its platform were true or false. The CDC would respond to “debunk” claims if it had an answer. These included issues like whether COVID-19 had a 99.96% survival rate, whether COVID-19 vaccines cause Bell's palsy, and whether people who are receiving COVID-19 vaccines are subject to medical experiments.

[Id. at 2]

[Doc. No. 205-1 at 106]

[Id.]

[Doc. No. 205-12 at 1-2]

Facebook content-moderation officials would contact Crawford to determine whether statements made on Facebook were true or false. Because Facebook's content-moderation policy called for Facebook to remove claims that are false and can lead to harm, Facebook would remove and/or censor claims the CDC itself said were false. Questions by Facebook to the CDC related to this content-moderation included whether spike proteins in COVID-19 vaccines are dangerous and whether Guillain-Barre Syndrome or heart inflammation is a possible side effect of the COVID-19 vaccine. Crawford normally referred Facebook to CDC subject-matter experts or responded with the CDC's view on these scientific questions.

[Doc. No. 205-12 at 2]

[Doc. No. 205-26 at 1-4]

[Doc. No. 205-18]

[Doc. No. 205-1 at 140]

(10) Facebook continued to send the CDC biweekly CrowdTangle content insight reports, which included trending topics such as Door-to-Door Vaccines, Vaccine Side Effects, Vaccine Refusal, Vaccination Lawsuits, Proof of Vaccination Requirement, COVID-19 and Unvaccinated Individuals, COVID-19 Mandates, Vaccinating Children, and Allowing People to Return to Religious Services.

[Doc. No. 205-20]

(11) On August 19, 2021, Facebook asked Crawford for a Vaccine Adverse Event Reporting System (“VAERS”) meeting for the CDC to give Facebook guidance on how to address VAERS-related “misinformation.” The CDC was concerned about VAERS-related misinformation because users were citing VAERS data and reports to raise concerns about the safety of vaccines in ways the CDC found to be “misleading.” Crawford and the CDC followed up by providing written materials for Facebook to use. The CDC eventually had a meeting with Facebook about VAERS-related misinformation and provided two experts for this issue.

[Doc. No. 205-21]

[Doc. No. 205-22]

[Doc. No. 205-21]

[Doc. No. 205-1 at 151-52]

(12) On November 2, 2021, a Facebook content-moderation official reached out to the CDC to obtain clarity on whether the COVID-19 vaccine was harmful to children. This was following the FDA's emergency use authorization (“EUA”) related to the COVID-19 vaccine. In addition to the EUA issue for children, Facebook identified other claims it sought clarity on regarding childhood vaccines and vaccine refusals.

[Doc. 205-23 at 1-2]

[Id.]

The following Monday, November 8, 2021, Crawford followed up with a response from the CDC, which addressed seven of the ten claims Facebook had asked the CDC to evaluate. The CDC rated six of the claims “False” and stated that any of these false claims could cause vaccine refusal.

[Doc. No. 205-24]

The claims the CDC rated as “false” were:

1) COVID-19 vaccines weaken the immune system;
2) COVID-19 vaccines cause auto-immune diseases;
3) Antibody-dependent enhancement (“ADE”) is a side effect of COVID-19 vaccines;
4) COVID-19 vaccines cause acquired immunodeficiency syndrome (AIDS);
5) Breast milk from a vaccinated parent is harmful to babies/children; and
6) COVID-19 vaccines cause multi-system inflammatory syndrome in children (MIS-C).

(13) On February 3, 2022, Facebook again asked the CDC for clarification on whether a list of claims was “false” and whether the claims, if believed, could contribute to vaccine refusals. The list included whether COVID-19 vaccines cause ulcers or neurodegenerative diseases such as Huntington's and Parkinson's disease; the FDA's possible future issuance of an EUA for children six months to four years of age; and whether the COVID-19 vaccine causes death, heart attacks, autism, and birth defects, among many other claims.

[Doc. No. 205-26 at 1]

[Id. at 1-4]

(14) In addition to their communications with Facebook, the CDC and Census Bureau also had involvement with Google/YouTube. On March 18, 2021, Crawford emailed Google with the subject line “COVID Misinfo Project.” Crawford informed Google that the CDC was now working with the Census Bureau (which had been meeting with Google regularly) and wanted to set up a time to talk and discuss the “COVID Misinfo Project.” According to Crawford, the previous Census project referred to the Census Bureau's work on combatting 2020 Census misinformation.

[Doc. No. 205-28]

[Doc. No. 205-1 at 175]

On March 23, 2021, Crawford sent a calendar invite for a March 24, 2021 meeting, which included Crawford and five other CDC employees, four Census Bureau employees, and six Google/YouTube officials. At the March 24, 2021 meeting, Crawford presented a slide deck similar to the one prepared for the Facebook meeting. The slide deck was entitled “COVID Vaccine Misinformation: Issue Overview” and included issues like infertility, side effects, and deaths. The CDC and the Census Bureau denied that COVID-19 vaccines resulted in infertility, caused serious side effects, or resulted in deaths.

[Doc. No. 214-22 Jones Dec. Exh. T] SEALED DOCUMENT

[Id.] SEALED DOCUMENT

(15) On March 29, 2021, Crawford followed up with Google about using their “regular 4 p.m. meetings” to go over things with the Census Bureau. Crawford recalled that the Census Bureau was asking for regular meetings with platforms, specifically focused on misinformation. Crawford also noted that the reference to the “4 p.m. meeting” referred to regular biweekly meetings with Google, which “continues to the present day.” Crawford also testified she had similar regular meetings with Meta and Twitter, and previously had regular meetings with Pinterest. Crawford stated these meetings were mostly about things other than misinformation, but misinformation was discussed at the meetings.

[Doc. No. 205-1 at 179-82]

[Doc. No. 205-1 at 184-85]

[Doc. No. 205-1 at 180]

[Id. at 181]

(16) On May 10, 2021, Crawford emailed Facebook to establish “COVID BOLO” (“Be on the Lookout”) meetings. Google and YouTube were included. Crawford ran the BOLO meetings, and a Census Bureau official arranged the meetings and prepared the slide deck for each meeting.

[Doc. No. 205-40]

[Doc. No. 205-1 at 246, 265-66]

The first BOLO meeting was held on May 14, 2021; the slide deck for the meeting was entitled “COVID Vaccine Misinformation: Hot Topics” and included five “hot topics” with a BOLO note for each topic. The five topics were: that the vaccines caused “shedding”; a report made on VAERS that a two-year-old child died from the vaccine; other alleged misleading information in VAERS reports; statements that vaccines were bioweapons, part of a depopulation scheme, or contain microchips; and misinformation about the eligibility of twelve- to fifteen-year-old children for the vaccine. All were labeled as “false” by the CDC, and the potential impact on the public was a reduction of vaccine acceptance.

[Doc. No. 214-23 at 4-5] SEALED DOCUMENT

The second BOLO meeting was held on May 28, 2021. The second meeting also contained a slide deck with a list of three “hot topics” to BOLO: that the Moderna vaccine was unsafe; that vaccine ingredients can cause people to become magnetic; and that the vaccines cause infertility or fertility-related issues in men. All were labeled as false by the CDC, and the potential impact was reduced vaccine acceptance.

[Doc. No. 214-24 at 3-7] SEALED DOCUMENT

A third BOLO meeting scheduled for June 18, 2021, was cancelled due to the new Juneteenth holiday. However, Crawford sent the slide deck for the meeting. The hot topics for this meeting were: that vaccine particles accumulate in ovaries, causing infertility; that vaccines contain microchips; and that, because of the risk of blood clots to vaccinated persons, airlines were discussing a ban. All were labeled as false.

[Doc. No. 214-25 at 2-7] SEALED DOCUMENT

The goal of the BOLO meetings was to be sure credible information was out there and to flag information the CDC thought was not credible for potential removal.

[Doc. No. 205-1 at 266]

On September 2, 2021, Crawford emailed Facebook and informed them of a BOLO for a small but growing area of misinformation: one of the CDC's lab alerts had been misinterpreted and shared via social media.

[Doc. No. 205-22]

(17) The CDC Defendants also had meetings and/or communications with Twitter. On April 8, 2021, Crawford sent an email stating she was “looking forward to setting up regular chats” and asked for examples of misinformation. Twitter responded.

[Doc. No. 205-1 at 197, 205-33]

On April 14, 2021, Crawford sent an email to Twitter giving examples of misinformation topics, including claims that vaccines were not FDA approved, fraudulent cures, VAERS data taken out of context, and infertility. The list was put together by the Census Bureau team.

[Doc. No. 205-33]

On May 10, 2021, Crawford emailed Twitter to point out two areas of misinformation, which included copies of twelve tweets. Crawford informed Twitter about the May 14, 2021 BOLO meeting and invited Twitter to participate. The examples of misinformation given at the meeting included: vaccine shedding; that vaccines would reduce the population; abnormal bleeding; miscarriages for women; and that the Government was lying about vaccines. In a response, Twitter stated that at least some of the examples had been “reviewed and actioned.” Crawford understood that she was flagging posts for Twitter for possible censorship.

[Doc. No. 205-34]

[Id.]

[Doc. No. 205-1 at 211]

Twitter additionally offered to enroll CDC officials in its “Partner Support Portal” to provide expedited review of content flagged for censorship. Crawford asked for instructions on how to enroll in the Partner Support Portal and provided her personal Twitter account to enroll. Crawford was fully enrolled on May 27, 2021. Census Bureau contractor Christopher Lewitzke (“Lewitzke”) also requested to enroll in the Partner Support Portal.

[Id. at 211-12]

[Id. at 211-18]

[Doc. No. 201-34 at 2]

Crawford also sent Twitter a BOLO for the alleged misinterpretation of a CDC lab report.

[Doc. No. 201-35]

(18) Crawford testified in her deposition that the CDC has a strong interest in tracking what its constituents are saying on social media. Crawford also expressed concern that if content were censored and removed from social-media platforms, government communicators would not know what citizens' “true concerns” were.

[Doc. No. 201-1 at 57-58]

[Id. at 75]

D. NIAID Defendants

The NIAID Defendants consist of the National Institute of Allergy and Infectious Diseases and Dr. Hugh Auchincloss (“Dr. Auchincloss”).

The NIAID is a federal agency under HHS. Dr. Fauci was previously the Director of NIAID. Dr. Fauci's deposition was taken as a part of the limited preliminary injunction discovery in this matter.

[Doc. No. 206]

(1) Dr. Fauci had been the director of the NIAID for over thirty-eight years and became Chief Medical Advisor to the President in early 2021. Dr. Fauci retired on December 31, 2022.

[Doc. No. 206-1 at 10 (Deposition of Dr. Anthony S. Fauci)]

1. Lab-Leak Theory

Plaintiffs set forth arguments that because NIAID had funded “gain-of-function” research at Dr. Fauci's direction at the Wuhan Institute of Virology (“Wuhan lab”) in Wuhan, China, Dr. Fauci sought to suppress theories that the SARS-CoV2 virus leaked from the Wuhan lab.

“Gain-of-function” research involves creating a potentially dangerous virus in a laboratory.

[Doc. No. 212-3 at 151-85]

(1) Plaintiffs allege that Dr. Fauci's motive for suppressing the lab-leak theory was a fear that Dr. Fauci and NIAID could be blamed for funding gain-of-function research that created the COVID-19 pandemic. Plaintiffs allege Dr. Fauci participated in a secret call with other scientists on February 1, 2020, and convinced the scientists (who were proponents of the lab-leak theory) to change their minds and advocate for the theory that the COVID-19 virus originated naturally. A few days after the February 1, 2020 call, a paper entitled “The Proximal Origin of SARS-CoV2” was drafted; it was published by Nature Medicine on March 17, 2020. The article concludes that SARS-CoV2 was not created in a lab but rather was naturally occurring.

[Id. at 165]

On February 2, 2020, Dr. Fauci told the other scientists that “given the concerns of so many people and the threat of further distortions on social media it is essential that we move quickly. Hopefully, we can get the WHO to convene.” Dr. Fauci emailed Dr. Tedros of the WHO and two senior WHO officials, urging WHO to quickly establish a working group to address the lab-leak theory. Dr. Fauci stated they should “appreciate the urgency and importance of this issue given the gathering interest evident in the science literature and in mainstream and social media to the question of the origin of this virus.” Dr. Fauci also stated WHO needed to “get ahead of ... the narrative of this and not reacting to reports which could be very damaging.” Numerous drafts of “The Proximal Origin of SARS-CoV2” were sent to Dr. Fauci to review prior to the article being published in Nature Medicine.

[Doc. No. 206-9 at 2]

[Doc. No. 206-9 at 1]

[Doc. No. 206-13 at 1, 7-8; 206-11 at 2-3; and 206-20]

(2) On February 9, 2020, in a joint podcast, both Dr. Fauci and Dr. Peter Daszak of the EcoHealth Alliance discredited the lab-leak theory, calling it a “conspiracy theory.”

[Doc. No. 206-16 at 1]

[Doc. No. 206-16 at 1; 206-17 at 1]

(3) Three authors of “The Proximal Origin of SARS-CoV2,” Robert Garry, Kristian Andersen, and Ian Lipkin, received grants from the NIH in recent years.

[Doc. No. 214-30]

(4) After “The Proximal Origin of SARS-CoV2” was completed and published in Nature Medicine, Dr. Fauci began discrediting the lab-leak theory, stating, “This study leaves little room to refute a natural origin for COVID-19,” and “It's a shining object (lab-leak theory) that will go away in times.”

[Doc. No. 206-27 at 3-4]

At an April 17, 2020 press conference, when asked about the possibility of a lab-leak, Dr. Fauci stated, “There was a study recently that we can make available to you, where a group of highly qualified evolutionary virologists looked at the sequences there and the sequences in bats as they evolve. And the mutations that it took to get to the point where it is now is totally consistent with jump of a species from animal to a human.” “The Proximal Origin of SARS-CoV2” has since become one of the most widely read papers in the history of science.

(Video of Task Force Briefing, at https://www.youtube.com/watch?v=brbArPX8=6I)

[Doc. No. 214-30]

(5) Twitter and Facebook censored the lab-leak theory of COVID-19. However, Dr. Fauci claims he is not aware of any suppression of speech about the lab-leak theory on social media, and he claims he does not have a Twitter or Facebook account.

[Doc. No. 206-32 at 1-2; Doc. No. 206-33 at 3]

[Doc. No. 206-1 at 210]

(6) On March 15, 2020, Zuckerberg sent Dr. Fauci an email asking for coordination between Dr. Fauci and Facebook on COVID-19 messaging. Zuckerberg asked Dr. Fauci to create a video to be used on Facebook's Coronavirus Information Hub, with Dr. Fauci answering COVID-19 health questions, and for Dr. Fauci to recommend a “point person” for the United States Government “to get its message out over the platform.”

[Doc. No. 206-24 at 3]

Dr. Fauci responded the next day to Zuckerberg saying, “Mark your idea and proposal sounds terrific,” “would be happy to do a video for your hub,” and “your idea about PSAs is very exciting.” Dr. Fauci did three live stream Facebook Q&A's about COVID-19 with Zuckerberg.

[Doc. No. 201-1 at 177]

2. Hydroxychloroquine

Plaintiffs further allege the NIAID and HHS Defendants suppressed speech on hydroxychloroquine. On May 22, 2020, The Lancet published an online article entitled “Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multi-national registry analysis.” The article purported to analyze 96,032 patients to compare persons who did and did not receive this treatment. The study concluded that hydroxychloroquine and chloroquine were associated with decreased in-hospital survival and an increased frequency of ventricular arrhythmias when used for treatment of COVID-19.

[Doc. No. 206-36 at 1]

[Id.]

Dr. Fauci publicly cited this study to claim that “hydroxychloroquine is not effective against coronavirus.” He then publicly began to discredit COVID-19 treatment with hydroxychloroquine and stated that whether treatment of COVID-19 with hydroxychloroquine was effective could only be judged by rigorous, randomized, double-blind, placebo-based studies. He testified the same on July 31, 2020, before the House Select Subcommittee on the Coronavirus Crisis.

[Doc. No. 206-35 at 1]

https://www.youtube.com/watch?v=RUNCSOQD2UE

(2) When America's Frontline Doctors held a press conference criticizing the Government's response to the COVID-19 pandemic and touting the benefits of hydroxychloroquine in treating the coronavirus, Dr. Fauci made statements on Good Morning America and on Andrea Mitchell Reports that hydroxychloroquine is not effective in treating the coronavirus. Social-media platforms censored the America's Frontline Doctors video; Facebook, Twitter, and YouTube removed it. Dr. Fauci does not deny that he or his staff at NIAID may have communicated with social-media platforms, but he does not specifically recall it.

[Doc. No. 207-2 at 5]

[Doc. No. 207-1 at 2]

[Doc. No. 207-1 at 2-3]

[Doc. No. 207-2 at 6]

[Doc. No. 206-1 at 238]

3. The Great Barrington Declaration

(1) The GBD was published online on October 4, 2020, by Plaintiffs Dr. Bhattacharya of Stanford and Dr. Kulldorff of Harvard, along with Dr. Gupta of Oxford. The GBD is a one-page treatise opposing reliance on lockdowns and advocating for an approach to COVID-19 called “focused protection.” It criticized the social-distancing and lockdown approaches endorsed by government experts. The authors expressed grave concerns about physical and mental health impacts of current government COVID-19 lockdown policies and called for an end to lockdowns.

[Doc. No. 207-5 at 3]

[Id.]

(2) On October 8, 2020, Dr. Francis Collins emailed Dr. Fauci (and Cliff Lane) stating:

Hi Tony and Cliff, See https://gbdeclaration.org/. This proposal from the three fringe epidemiologists who met with the Secretary seems to be getting a lot of attention - and even a co-signature from Nobel Prize winner Mike Leavitt at Stanford. There needs to be a quick and devastating published take down of its premises. I don't see anything like that online yet- is it underway? Francis.
The same day, Dr. Fauci wrote back to Dr. Collins stating, “Francis: I am pasting in below a piece from Wired that debunks this theory. Best, Tony.”

[Doc. No. 207-6]

[Doc. No. 207-7]

Dr. Fauci and Dr. Collins followed up with a series of public media statements attacking the GBD. In a Washington Post story run on October 14, 2020, Dr. Collins described the GBD and its authors as “fringe” and “dangerous.” Dr. Fauci consulted with Dr. Collins before he talked to the Washington Post. Dr. Fauci also endorsed these comments in an email to Dr. Collins, stating “what you said was entirely correct.”

[Doc. No. 207-9]

[Doc. No. 206-1 at 272]

[Doc. No. 207-10]

On October 15, 2020, Dr. Fauci called the GBD “nonsense” and “dangerous.” Dr. Fauci specifically stated, “Quite frankly that is nonsense, and anybody who knows anything about epidemiology will tell you that is nonsense and very dangerous.” Dr. Fauci testified “it's possible that” he coordinated with Dr. Collins on his public statements attacking the GBD.

[Doc. No. 207-11 at 1]

[Id. at 3]

[Doc. No. 206-1 at 279]

(3) Social-media platforms began censoring the GBD shortly thereafter. In October 2020, Google de-boosted the search results for the GBD so that when Google users googled “Great Barrington Declaration,” they would be diverted to articles critical of the GBD, and not to the GBD itself. Reddit removed links to the GBD. YouTube updated its terms of service regarding medical “misinformation,” to prohibit content about vaccines that contradicted consensus from health authorities. Because the GBD went against a consensus from health authorities, its content was removed from YouTube. Facebook adopted the same policies on misinformation based upon public health authority recommendations. Dr. Fauci testified that he could not recall anything about his involvement in seeking to squelch the GBD.

[Doc. No. 207-12 at 4]

[Id. at 4-5]

[Doc. No. 207-13 at 4-5]

[Doc. No. 207-15]

[Doc. No. 206-1 at 251-52, 255-58]

(4) NIAID and NIH staff sent several messages to social-media platforms asking them to remove content lampooning or criticizing Dr. Fauci. When a Twitter employee reached out to CDC officials asking if a particular account associated with Dr. Fauci was “real or not,” Scott Prince of NIH responded, “Fake/Imposter handle. PLEASE REMOVE!!!” An HHS official then asked Twitter if it could “block” similar parody accounts: “Is there anything else you can also do to block other variations of his (Dr. Fauci's) name from impersonation so we don't have this occur again?” Twitter replied, “We'll freeze this @handle and some other variations so no one can hop on them.”

[Doc. No. 207-17 at 2]

[Id.]

[Id. at 1]

[Id.]

On April 21, 2020, Judith Lavelle of NIAID emailed Facebook, copying Scott Prince of NIH and Jennifer Routh (“Routh”), and stated, “We wanted to flag a few more fake Dr. Fauci accounts on FB and IG for you I have reported them from NIAID and my personal FB account.” Both Lavelle and Routh are members of Dr. Fauci's communications staff. Six of the eight accounts listed were removed by Facebook on the same day.

[Doc. No. 207-19 at 3]

[Doc. No. 206-1 at 308]

[Doc. No. 207-19 at 3]

(5) On October 30, 2020, a NIAID staffer wrote an email connecting Google/YouTube with Routh “so that NIAID and the ‘Google team' could connect on vaccine communications-specifically misinformation.” Courtney Billet (“Billet”), director of the Office of Communications and Government Relations of NIAID, was added by Routh, along with two other NIAID officials, to a communications chain with YouTube. Twitter disclosed that Dina Perry (“Perry”), a Public Affairs Specialist for NIAID, communicates or has communicated with Twitter about misinformation and censorship.

[Doc. No. 207-20 at 1]

[Id.]

[Doc. No. 214-8 at 1]

(6) Dr. Fauci testified that he has never contacted a social-media company and asked it to remove misinformation from its platform.

[Doc. No. 206-1 at 151]

4. Ivermectin

(8) On September 13, 2021, Facebook emailed Carol Crawford of the CDC to ask whether the claim that “Ivermectin is effective in treating COVID” is false and, if believed, could contribute to people refusing the vaccine or self-medicating. The CDC responded the next day and advised Facebook that the claim that Ivermectin is effective in treating COVID is “NOT ACCURATE.” The CDC cited the NIH's treatment guidelines as authority that the claims were not accurate.

[Doc. No. 207-22 at 2]

[Id. at 1]

[Id.]

5. Mask Mandates

(9) Plaintiffs maintain that Dr. Fauci initially did not believe masks worked, but he changed his stance. In a February 4, 2020 email responding to Sylvia Burwell, Dr. Fauci stated, “the typical mask you buy in a drugstore is not really effective in keeping out the virus, which is small enough to pass through the material.” Dr. Fauci stated that, at that time, there were “no studies” on the efficacy of masking to stop the spread. On March 31, 2020, Dr. Fauci forwarded studies showing that masking is ineffective.

[Doc. No. 206-1 at 314]

[Id. at 316]

[Id. at 318]

Plaintiffs allege that Dr. Fauci's position on masking changed dramatically on April 3, 2020, when he became an advocate for universal mask mandates. Dr. Fauci testified his position changed in part because “evidence began accumulating that masks actually work in preventing acquisition and transmission,” although Dr. Fauci could not identify those studies.

[Id. at 317]

[Id.]

[Id. at 318].

6. Alex Berenson

Alex Berenson (“Berenson”) is a former New York Times science reporter and a critic of government messaging about COVID-19 vaccines. He was de-platformed from Twitter on August 28, 2021.

[Doc. No. 207-23 at 4]

Dr. Fauci had previously sought to discredit Berenson publicly during an interview with CNN. Dr. Fauci does not deny that he may have discussed Berenson with White House or federal officials, but does not recall specifically whether he did so.

[Doc. No. 207-24 at 1-2]

[Doc. No. 206-1 at 341-43]

E. FBI Defendants

FBI Defendants include Elvis Chan (“Chan”), the Federal Bureau of Investigation (“FBI”), Laura Dehmlow (“Dehmlow”), and the U.S. Department of Justice (“DOJ”).

(1) The deposition of Elvis Chan (“Chan”) was taken on November 29, 2022. Chan is the Assistant Special Agent in Charge of the Cyber Branch for the San Francisco Division of the FBI. In this role, Chan was one of the primary people communicating with social-media platforms about disinformation on behalf of the FBI. There are also other agents on different cyber squads, along with the FBI's private-sector engagement squad, who relay information to social-media platforms.

[Doc. No. 204-1]

[Id. at 8]

[Id. at 105]

Chan graduated from the Naval Postgraduate School in 2020 with an M.A. in Homeland Security Studies. His thesis was entitled “Fighting Bears and Trolls: An Analysis of Social Media Companies and U.S. Government Efforts to Combat Russian Influence Campaigns During the 2020 U.S. Elections.” His thesis focuses on information sharing between the FBI, Facebook, Google, and Twitter. Chan relied on research performed by persons and entities comprising the Election Integrity Partnership, including Graphika and DiResta of the Stanford Internet Observatory. Chan communicated directly with DiResta about Russian disinformation.

[Id. at 10]

[Doc. No. 204-2 at 1]

[Id. at 18]

[Doc. No. 204-1 at 145]

[Doc. No. 204-1 at 51-52, 85]

Chan also knows Alex Stamos (“Stamos”), the head of the Stanford Internet Observatory, from when Stamos worked for Facebook. Chan and Stamos worked together on “malign-foreign-influence activities” on Facebook.

[Id. at 54]

[Id. at 55]

(2) Chan stated that the FBI engages in “information sharing” with social-media companies about content posted on their platforms, which includes both “strategic-level information” and “tactical information.”

[Id. at 16-19]

(3) The FBI, along with Facebook, Twitter, Google/YouTube, Microsoft, Yahoo!, Wikimedia Foundation, and Reddit, participates in a Cybersecurity and Infrastructure Security Agency (“CISA”) “industry working group.” Representatives of CISA, the Department of Homeland Security's Intelligence & Analysis Division (“I&A”), the Office of the Director of National Intelligence (“ODNI”), the FBI's Foreign Influence Task Force (“FITF”), the Dept. of Justice National Security Division, and Chan participate in these industry working groups.

[Id. at 18, 23-24]

[Id. at 24, 171]

Chan participates in the meetings because most social-media platforms are headquartered in San Francisco, and the FBI field offices are responsible for maintaining day-to-day relationships with the companies headquartered in its area of responsibility.

[Id. at 24]

Matt Masterson (“Masterson”) was the primary facilitator in the meetings for the 2020 election cycle, and Brian Scully (“Scully”) was the primary facilitator ahead of the 2022 election. At the USG-Industry (“the Industry”) meetings, social-media companies shared disinformation content, providing a strategic overview of the type of disinformation they were seeing. The FBI would then provide strategic, unclassified overviews of things it was seeing from Russian actors.

[Id. at 25-26]

[Id. at 156-57]

The Industry meetings were “continuing” at the time Chan's deposition was taken on November 29, 2022, and Chan assumes the meetings will continue through the 2024 election cycle.

[Id. at 285-86]

(4) Chan also hosted bilateral meetings between the FBI and Facebook, Twitter, Google/YouTube, Yahoo!/Verizon, Microsoft/LinkedIn, Wikimedia Foundation, and Reddit, and the Foreign Influence Task Force. In the Industry meetings, the FBI raised concerns about the possibility of “hack and dump” operations during the 2020 election cycle. The bilateral meetings are continuing, occurring quarterly, but will increase to monthly and weekly nearer the elections.

[Id. at 23-24]

[Id. at 39]

[Id.]

[Id. at 40]

In the Industry meetings, FBI officials meet with senior social-media platform personnel in the “trust and safety or site integrity role.” These are the persons in charge of enforcing terms of service and content-moderation policies. These meetings began as early as 2017. At the Industry meetings, in addition to Chan and Laura Dehmlow (“Dehmlow”), head of the FITF, between three and ten FITF officials and as many as a dozen FBI agents are present.

[Id. at 43-44]

[Id. at 87-89]

[Id. at 109-10]

(5) On September 4, 2019, Facebook, Google, Microsoft, and Twitter, along with the FITF, ODNI, and CISA, held a meeting to discuss election issues. Chan attended, along with Director Krebs, Masterson, and Scully. The social-media companies' trust-and-safety and content-moderation teams were also present. The focus of the meeting was to discuss with the social-media companies the spread of “disinformation.”

[Id. at 151]

(6) Discovery obtained from LinkedIn contained 121 pages of emails between Chan, other FBI officials, and LinkedIn officials. Chan testified he has a similar set of communications with other social-media platforms.

[Doc. No. 204-3]

[Doc. No. 204-1 at 288]

(7) The FBI communicated with social-media platforms using two alternative, encrypted channels: Signal and Teleporter.

[Id. at 295-296]

(8) For each election cycle, during the days immediately preceding and through election days, the FBI maintains a command center around the clock to receive and forward reports of “disinformation” and “misinformation.” The FBI requests that social-media platforms have people available to receive and process the reports at all times.

[Id. at 301]

(9) Before the Hunter Biden laptop story broke on October 14, 2020, shortly prior to the 2020 election, the FBI and other federal officials repeatedly warned industry participants to be alert for “hack and dump” or “hack and leak” operations.

[Doc. No. 204-1 at 172, 232-34]

Dehmlow also mentioned the possibility of “hack and dump” operations. Additionally, the prospect of “hack and dump” operations was repeatedly raised at the FBI-led meetings with FITF and the social-media companies, in addition to the Industry meetings.

[Id. at 175]

[Id. at 177-78]

Social-media platforms updated their policies in 2020 to provide that posting “hacked materials” would violate their policies. According to Chan, the impetus for these changes was the repeated concern about a 2016-style “hack-and-leak” operation. Although Chan denies that the FBI urged the social-media platforms to change their policies on hacked material, Chan did admit that the FBI repeatedly asked the social-media companies whether they had changed their policies with regard to hacked materials because the FBI wanted to know what the companies would do if they received such materials.

[Id. at 205]

[Id. at 206]

[Id. at 249]

(10) Yoel Roth (“Roth”), the then-Head of Site Integrity at Twitter, provided a formal declaration on December 17, 2020, to the Federal Election Commission containing a contemporaneous account of the “hack-and-leak” operations discussed at the meetings between the FBI, other national-security agencies, and social-media platforms. Roth's declaration stated:

[Doc. No. 204-5, ¶¶ 10-11, at 2-3]

Since 2018, I have had regular meetings with the Office of the Director of National Intelligence, the Department of Homeland Security, the FBI, and industry peers regarding election security. During these weekly meetings, the federal law enforcement agencies communicated that they expected “hack-and-leak” operations by state actors might occur during the period shortly before the 2020 presidential election, likely in October. I was told in these meetings that the intelligence community expected that individuals associated with political campaigns would be subject to hacking attacks and that material obtained through those hacking attacks would likely be disseminated over social-media platforms, including Twitter. These expectations of hack-and-leak operations were discussed through 2020. I also learned in these meetings that there were rumors that a hack-and-leak operation would involve Hunter Biden.

(emphasis added)

Chan testified that, in his recollection, Hunter Biden was not referred to in any of the CISA Industry meetings. The mention of “hack-and-leak” operations involving Hunter Biden is significant because the FBI had received Hunter Biden's laptop on December 9, 2019, and knew that the later-released story about Hunter Biden's laptop was not Russian disinformation.

[Doc. No. 204-1 at 213, 227-28].

[Doc. No. 106-3 at 5-11]

In Scully's deposition, he did not dispute Roth's version of events.

[Doc. No. 209]

[Id. at 247]

Zuckerberg testified before Congress on October 28, 2020, stating that the FBI conveyed a strong risk or expectation of a foreign “hack-and-leak” operation shortly before the 2020 election and that the social-media companies should be on high alert. The FBI also indicated that if a trove of documents appeared, they should be viewed with suspicion.

[Doc. 204-6 at 56]

(11) After the Hunter Biden laptop story broke on October 14, 2020, Dehmlow refused to comment on the status of the Hunter Biden laptop in response to a direct inquiry from Facebook, although the FBI had the laptop in its possession since December 2019.

[Doc. No. 204-1 at 215]

The Hunter Biden laptop story was censored on social media, including Facebook and Twitter. Twitter blocked users from sharing links to the New York Post story and prevented users who had previously sent tweets sharing the story from sending new tweets until they deleted the previous tweet. Further, Facebook began reducing the story's distribution on the platform pending a third-party fact-check.

[Doc. No. 204-5 at ¶ 17]

[Id.]

[Doc. No. 204-6 at 2]

(12) Chan further testified that during the 2020 election cycle, the United States Government and social-media companies effectively limited foreign influence campaigns through information sharing and account takedowns. Chan's thesis also recommended standardized information sharing and the establishment of a national coordination center.

[Doc. No. 204-2 at 3]

According to Chan, the FBI shares this information with social-media platforms when it relates to content the FBI believes should be censored. Chan testified that the purpose and predictable effect of the tactical information sharing was that social-media platforms would take action against the content in accordance with their policies. Additionally, Chan admits that during the 2020 election cycle, the United States Government engaged in information sharing with social-media companies. The FBI also shared “indicators” with state and local government officials.

[Id.]

[Id. at 32-33]

[Id. at 19]

[Id. at 50]

Chan's thesis includes examples of alleged Russian disinformation that drew a number of reactions and comments from Facebook users, including an anti-Hillary Clinton post, a secure-border post, a Black Lives Matter post, and a pro-Second Amendment post.

[Id.]

Chan also identified Russian-aligned websites on which articles were written by freelance journalists. A website called NADB, alleged to be Russian-generated, was also identified by the FBI and suppressed by social-media platforms, despite such content being drafted and written by American users on that site. The FBI identified this site to the social-media companies, which took action to suppress it.

[Id. at 144-46]

[Doc. No. 204-1 at 141-43]

(13) “Domestic disinformation” was also flagged by the FBI for social-media platforms. Just before the 2020 election, information would be passed from other field offices to the FBI's 2020 election command post in San Francisco. The information sent would then be relayed to the social-media platforms where the accounts were detected. The FBI made no attempt to distinguish whether those reports of election disinformation were American or foreign.

[Id. at 162]

[Id. at 163]

Chan testified the FBI had about a 50% success rate in having alleged election disinformation taken down or censored by social-media platforms. Chan further testified that although the FBI did not tell the social-media companies to modify their terms of service, the FBI would “probe” the platforms to ask for details about the algorithms they were using and what their terms of service were.

[Id. at 167]

[Id. at 88]

[Id. at 92]

(14) Chan further testified the FBI identifies specific social-media accounts and URLs to be evaluated “one to five times a month” and at quarterly meetings. The FBI would notify the social-media platforms by sending an email through a secure transfer application within the FBI called a “Teleporter.” The Teleporter email contains a link allowing the platforms to securely download the files from the FBI. The emails would contain “different types of indicators,” including specific social-media accounts, websites, URLs, email accounts, and the like, that the FBI wanted the platforms to evaluate under their content-moderation policies.

[Id. at 96]

[Id. at 98]

[Id.]

[Id. at 99]

Most of the time, the emails flagging the misinformation would go to seven social-media platforms. During 2020, Chan estimated he sent out these emails from one to six times per month and in 2022, one to four times per month. Each email would flag a number that ranged from one to dozens of indicators. When the FBI sent these emails, it would request that the social-media platforms report back on the specific actions taken as to these indicators and would also follow up at the quarterly meetings.

[Id. at 100-01]

[Id. at 102-03]

(15) At least eight FBI agents at the San Francisco office, including Chan, are involved in reporting disinformation to social-media platforms. In addition to FBI agents, a significant number of FBI officials from the FBI's Foreign Influence Task Force also participate in regular meetings with social-media platforms about disinformation.

[Id. at 105-08]

[Id. at 108]

Chan testified that the FBI uses its criminal-investigation authority, national-security authority, the Foreign Intelligence Surveillance Act, the PATRIOT Act, and Executive Order 12333 to gather national security intelligence to investigate content on social media.

[Id. at 111-12]

Chan believes with a high degree of confidence that the FBI's identification of “tactical information” was accurate and did not misidentify accounts operated by American citizens. However, Plaintiffs identified tweets and trends on Twitter, such as #ReleasetheMemo in 2019, and indicated that 929,000 tweets were political speech by American citizens.

[Id. at 112]

[Doc. No. 204-2 at 71]

(16) Chan testified that he believed social-media platforms were far more aggressive in taking down disfavored accounts and content in the 2018 and 2020 election cycles. Chan further thinks that pressure from Congress, specifically the House Permanent Select Committee on Intelligence and the Senate Select Committee on Intelligence, resulted in more aggressive censorship policies. Chan also stated that congressional hearings placed pressure on the social-media platforms.

[Id. at 115-16]

[Id. at 116]

[Id. at 117-18]

Chan further testified that Congressional staffers have had meetings with Facebook, Google/YouTube, and Twitter and have discussed potential legislation. Chan spoke directly with Roth of Twitter, Steven Slagle of Facebook, and Richard Salgado of Google, all of whom participated in such meetings.

[Id. at 118]

[Id. at 123-26]

(17) Chan testified that 3,613 Twitter accounts and 825 Facebook accounts were taken down in 2018. Chan testified Twitter took down 422 accounts involving 929,000 tweets in 2019.

[Id. at 133-34, 149-50]

(18) Chan testified that the FBI is continuing its efforts to report disinformation to social-media companies to evaluate for suppression and/or censorship. “Post-2020, we've never stopped ... as soon as November 3 happened in 2020, we just pretty much rolled into preparing for 2022.”

[Doc. No. 204-8 at 2-3]

[Doc. No. 204-8 at 2]

E. CISA Defendants

CISA Defendants consist of the Cybersecurity and Infrastructure Security Agency (“CISA”), Jen Easterly (“Easterly”), Kim Wyman (“Wyman”), Lauren Protentis (“Protentis”), Geoffrey Hale (“Hale”), Allison Snell (“Snell”), Brian Scully (“Scully”), the Department of Homeland Security (“DHS”), Alejandro Mayorkas (“Mayorkas”), Robert Silvers (“Silvers”), and Samantha Vinograd (“Vinograd”).

The deposition of Brian Scully was taken on January 12, 2023, as part of the injunction-related discovery in this matter.

(1) CISA regularly meets with social-media platforms in several types of standing meetings. Scully is the chief of CISA's Mis-, Dis-, and Malinformation Team (“MDM Team”). Prior to President Biden taking office, the MDM Team was known as the “Countering Foreign Influence Task Force” (“CFITF”). Protentis is the “Engagements Lead” for the MDM Team, and she is in charge of outreach and engagement to key stakeholders, interagency partners, and private-sector partners, which include social-media platforms. Scully performed Protentis's duties while she was on maternity leave. Both Scully and Protentis have served extended details at the National Security Council, where they worked on misinformation and disinformation issues.

[Doc. No. 209-1 at 12]

[Id. at 18-20]

[Id. at 19]

(2) Scully testified that during 2020, the MDM Team did “switchboard work” on behalf of election officials. “Switchboarding” is a disinformation-reporting system provided by CISA that allows state and local election officials to identify something on social media they deem to be disinformation aimed at their jurisdiction. The officials would then forward the information to CISA, which would in turn share the information with the social-media companies.

[Id. at 16-17]

The main idea, according to Scully, is that the information would be forwarded to social-media platforms, which would make decisions on the content based on their policies. Scully further testified he decided in late April or early May 2022 not to perform switchboarding in 2022. However, the CISA website states the MDM Team serves as a “switchboard for routing disinformation concerns to social-media platforms.” The switchboarding activities began in 2018.

[Id. at 17]

[Doc. No. 209-19 at 3]

[Id.]

(3) The MDM Team continues to communicate regularly with social-media platforms in two different ways. The first way is called “Industry” meetings. The Industry meetings are regular sync meetings between government and industry, including social-media platforms. The second type of communication involves the MDM Team reviewing regular reports from socialmedia platforms about changes to their censorship policies or to their enforcement actions on censorship.

[Doc. No. 209-1 at 21]

[Id.]

(4) The Industry meetings began in 2018 and continue to this day. These meetings increase in frequency as each election nears. In 2022, the Industry meetings were monthly but increased to biweekly in October 2022.

[Id. at 24]

Government participants in the USG-Industry meetings are CISA, the Department of Justice (“DOJ”), ODNI, and the Department of Homeland Security (“DHS”). CISA is typically represented by Scully and Hale. Scully's role is to oversee and facilitate the meetings. Wyman, Snell, and Protentis also participate in the meetings on behalf of CISA. On behalf of the FBI, FITF Chief Dehmlow, Chan, and others from different parts of the FBI participate.

[Id. at 25]

[Id. at 28]

[Id. at 29]

In addition to the Industry meetings, CISA hosts at least two “planning meetings”: one between CISA and Facebook and an interagency meeting between CISA and other participating federal agencies. The social-media platforms attending the Industry meetings include Facebook, Twitter, Microsoft, Google/YouTube, Reddit, LinkedIn, and sometimes the Wikimedia Foundation. At the Industry meetings, participants discuss concerns about misinformation and disinformation. The federal officials report their concerns over the spread of disinformation. The social-media platforms in turn report to federal officials about disinformation trends, share high-level trend information, and report the actions they are taking. Scully testified that the specific discussion of foreign-originating information is ultimately aimed at preventing domestic actors from engaging with this information.

[Id. at 36-37]

[Id. at 39]

[Id. at 39-41]

[Id. at 41]

(5) CISA has established relationships with researchers at Stanford University, the University of Washington, and Graphika. All three are involved in the Election Integrity Partnership (“EIP”).

[Id. at 46, 48]

[Id. at 48]

When the EIP was starting up, CISA interns came up with the idea of having some communications with the EIP. CISA began having communications with the EIP, and CISA connected the EIP with the Center for Internet Security (“CIS”). The CIS is a CISA-funded nonprofit that channels reports of disinformation from state and local government officials to social-media platforms. The CISA interns who originated the idea of working with the EIP also worked for the Stanford Internet Observatory, another part of the EIP. CISA had meetings with Stanford Internet Observatory officials, and eventually both sides decided to work together. The “gap” that the EIP was designed to fill concerned state and local officials' lack of resources to monitor and report on disinformation that affects their jurisdictions.

[Id. at 49-52]

[Id. at 57]

(6) The EIP continued to operate during the 2022 election cycle. At the beginning of the election cycle, the EIP gave Scully and Hale, on behalf of CISA, a briefing in May or June of 2022. In the briefing, DiResta walked through what the plans were for 2022 and some lessons learned from 2020. The EIP was going to support state and local election officials in 2022.

[Id. at 53-54]

(7) The CIS is a non-profit that oversees the Multi-State Information Sharing and Analysis Center (“MS-ISAC”) and the Election Infrastructure Information Sharing and Analysis Center (“EI-ISAC”). Both MS-ISAC and EI-ISAC are organizations of state and/or local government officials created for the purpose of information sharing.

[Id. at 59-61]

CISA funds the CIS through a series of grants. CISA also directs state and local officials to the CIS as an alternative route to “switchboarding.” CISA connected the CIS with the EIP because the EIP was working on the same mission, and CISA wanted to make sure they were all connected. Therefore, CISA originated and set up collaborations between local government officials and the CIS and between the EIP and the CIS.

[Id. at 61-62]

[Id. at 62-63]

(8) CIS worked closely with CISA in reporting misinformation to social-media platforms. CIS would receive the reports directly from election officials and would forward this information to CISA. CISA would then forward the information to the applicable social-media platforms. CIS later began to report the misinformation directly to social-media platforms.

[Id. at 63-64]

The EIP also reported misinformation to social-media platforms. CISA served in a mediating role between the CIS and the EIP to coordinate their efforts in reporting misinformation to the platforms. There were also direct email communications between the EIP and CISA about reporting misinformation. When CISA reported misinformation to social-media platforms, CISA would generally copy the CIS, which, as stated above, was coordinating with the EIP.

[Id. at 63-66]

[Id. at 67-68]

(9) Stamos and DiResta of the Stanford Internet Observatory briefed Scully about the EIP report, “The Long Fuse,” in late Spring or early Summer of 2021. Scully also reviewed copies of that report. Stamos and DiResta also have roles in CISA: DiResta serves as “Subject Matter Expert” for CISA's Cybersecurity Advisory Committee, MDM Subcommittee, and Stamos serves on the CISA Cybersecurity Advisory Committee, as does Kate Starbird (“Starbird”) of the University of Washington. Stamos identified the EIP's “partners in government” as CISA, DHS, and state and local officials. Also, according to Stamos, the EIP targeted “large following political partisans who were spreading misinformation intentionally.”

[Doc. No. 209-2]

[Doc. No. 209-1, at 72, 361; Doc. No. 212-36 at 4 (Jones Deposition-SEALED DOCUMENT)]

[Doc. No. 209-4 at 4]

[Scully depo. Exh. at ¶ 7]

(10) CISA's Masterson was also involved in communicating with the EIP. Masterson and Scully questioned the EIP about its statements on election-related information. Masterson left CISA in January 2021, became a fellow at the Stanford Internet Observatory, and began working for Microsoft in early 2022.

[Doc. No. 209-1 at 76]

[Id. at 88-89]

(11) CISA received reports of misinformation principally from two sources: the CIS, which collected reports directly from state and local election officials; and information sent directly to a CISA employee. CISA shared this information with the EIP and the CIS.

[Id. at 119-20]

[Id. at 120-21]

(12) CISA did not do an analysis to determine what percentage of misinformation was “foreign derived.” Therefore, CISA forwarded reports of information to social-media platforms without determining whether the reports originated from foreign or domestic sources.

[Id. at 122-23]

(13) The Virality Project was created by the Stanford Internet Observatory to mimic the EIP for COVID. As previously stated, Stamos and DiResta of the Stanford Internet Observatory were involved in the Virality Project. Stamos gave Scully an overview of what they planned to do with the Virality Project, similar to what they did with the EIP. Scully also had conversations with DiResta about the Virality Project. DiResta noted the Virality Project was established on the heels of the EIP, following its success in order to support government health officials' efforts to combat misinformation targeting COVID-19 vaccines.

[Id. at 134]

[Id. at 134-36]

[Id. at 139]

[Doc. No. 209-5 at 7]

(14) According to DiResta, the EIP was designed to “get around unclear legal authorities, including very real First Amendment questions” that would arise if CISA or other government agencies were to monitor and flag information for censorship on social media.

[Id. at 4]

(15) The CIS coordinated with the EIP regarding online misinformation and reported it to CISA. The EIP was using a “ticketing system” to track misinformation. Scully asked the social-media platforms to report back on how they were handling reports of misinformation and disinformation received from CISA. CISA maintained a “tracking spreadsheet” of its misinformation reports to social-media platforms during the 2020 election cycle.

[Doc. No. 209-1 at 159]

[Doc. No. 209-6 at 11]

[Doc. No. 209-1 at 165-66]

(16) At least six members of the MDM team, including Scully, “took shifts” in the “switchboarding” operation reporting disinformation to social-media platforms; the others were Chad Josiah (“Josiah”), Rob Schaul (“Schaul”), Alex Zaheer (“Zaheer”), John Stafford (“Stafford”), and Pierce Lowary (“Lowary”). Lowary and Zaheer were simultaneously serving as interns for CISA and working for the Stanford Internet Observatory, which was operating the EIP. Therefore, Zaheer and Lowary were simultaneously engaged in reporting misinformation to social-media platforms on behalf of both CISA and the EIP. Zaheer and Lowary were also two of the four Stanford interns who came up with the idea for the EIP.

[Id. at 166-68, 183]

[Id.]

[Id. at 171, 184-85]

(17) The CISA switchboarding operation ramped up as the election drew near. Those working on the switchboarding operation worked tirelessly on election night. They would also “monitor their phones” for disinformation reports even during off hours so that they could forward disinformation to the social-media platforms.

[Id. at 174-75]

[Id. at 75]

(18) As an example, Zaheer, when switchboarding for CISA, forwarded supposed misinformation to CISA's reporting system because the user had claimed “mail-in voting is insecure” and that “conspiracy theories about election fraud are hard to discount.”

[Doc. No. 209-6 at 61-62]

CISA's tracking spreadsheet contains at least eleven entries of switchboarding reports of misinformation that CISA received “directly from EIP” and forwarded to social-media platforms to review under their policies. One of these reports was reported to Twitter for censorship because EIP “saw an article on the Gateway Pundit” run by Plaintiff Jim Hoft.

[Doc. No. 214-35 at 5-6, Column C]

[Id. at 4-5, Column F, Line 94]

(19) Scully admitted that CISA engaged in “informal fact checking” to determine whether a claim was true or not. CISA would do its own research and relay statements from public officials to help debunk postings for social-media platforms. In debunking information, CISA apparently always assumed the government official was a reliable source; CISA would not do further research to determine whether the private citizen posting the information was correct or not.

CISA also became the “ministry of truth.”

[Doc. No. 209-1 at 220-22]

(20) CISA's switchboarding activities reported private and public postings. Social-media platforms responded swiftly to CISA's reports of misinformation.

[Doc. No. 209-7 at 45-46]

[Doc. No. 209-1 at 291-94; 209-49]

(21) CISA, in its interrogatory responses, disclosed five sets of recurring meetings with social-media platforms that involved discussions of misinformation, disinformation, and/or censorship of speech on social media. CISA also had bilateral meetings between CISA and the social-media companies.

[Doc. No. 209-9 at 38-40]

[Doc. No. 209-1 at 241]

(22) Scully does not recall whether “hack and leak” or “hack and dump” operations were raised at the Industry meetings, but he does not deny it either. However, several emails confirm that “hack and leak” operations were on the agenda for the Industry meetings on July 15, 2020, and September 15, 2020.

[Id. at 236-37]

[Doc. No. 209-13 at 1]

[Doc. No. 209-14 at 16]

(23) In the spring and summer of 2022, CISA's Protentis requested that social-media platforms prepare a “one-page” document setting forth their content-moderation rules that could then be shared with election officials, and which also included “steps for flagging or escalating MDM content” and instructions on how to report misinformation. Protentis referred to the working group (which included Facebook and CISA's Hale) as “Team CISA.”

[Doc. No. 209-14]

[Doc. No. 209-15 at 41, 44-45]

[Doc. No. 209-15 at 39]

(24) The Center for Internet Security continued to report misinformation to social-media platforms during the 2022 election cycle.

[Doc. No. 209-1 at 266]

(25) CISA has teamed up directly with the State Department's Global Engagement Center (“GEC”) to seek review of social-media content. CISA also flagged parody and joke accounts for review. Social-media platforms report to CISA when they update their content-moderation policies to make them more restrictive. CISA publicly stated that it is expanding its efforts to fight disinformation heading into the 2024 election cycle.

[Doc. No. 209-15 at 1-2]

[Id. at 11-12]

[Id. at 9]

[Doc. No. 209-20 at 1-2]

(26) A draft copy of the DHS's “Quadrennial Homeland Security Review,” which outlines the department's strategy and priorities in upcoming years, states that the department plans to target “inaccurate information” on a wide range of topics, including the origins of the COVID-19 pandemic, the efficacy of COVID-19 vaccines, racial justice, the United States' withdrawal from Afghanistan, and the nature of the United States' support of Ukraine.

[Doc. No. 209-23 at 1-4]

(27) Scully also testified that CISA engages with the CDC and DHS to help them in their efforts to stop the spread of disinformation. The examples given were about the origins of the COVID-19 pandemic and Russia's invasion of Ukraine.

[Doc. No. 209-1 at 323-25]

(28) On November 21, 2021, CISA Director Easterly reported that CISA is “beefing up its misinformation and disinformation team in the wake of a divisive presidential election and a proliferation of misleading information online.” Easterly stated she was going to “grow and strengthen” CISA's misinformation and disinformation team. She further stated, “We live in a world where people talk about alternative facts, post-truth, which I think is really, really dangerous if people get to pick their own facts.”

[Doc. No. 209-1 at 335-36]

[Doc. No. 209-18 at 1-2]

Easterly also views the word “infrastructure” very expansively, stating, “[W]e're in the business of protecting critical infrastructure, and the most critical is our ‘cognitive infrastructure.'” Scully agrees with the assessment that CISA has an expansive mandate to address all kinds of misinformation that may affect control and that could indirectly cause national security concerns.

[Id.]

[Doc. No. 209-1 at 341]

On June 22, 2022, CISA's Cybersecurity Advisory Committee issued a Draft Report to the Director, which broadened “infrastructure” to include “the spread of false and misleading information because it poses a significant risk to critical function, like elections, public health, financial services and emergency responses.”

[Doc. No. 209-25 at 1]

(29) In September 2022, the CIS was working on a “portal” for government officials to report election-related misinformation to social-media platforms. That work continues today.

[Doc. No. 210-22]

[Id.]

F. State Department Defendants

The State Department Defendants consist of the United States Department of State, Leah Bray (“Bray”), Daniel Kimmage (“Kimmage”), and Alexis Frisbie (“Frisbie”).

1. The GEC

(1) Daniel Kimmage is the Principal Deputy Coordinator of the State Department's Global Engagement Center (“GEC”). The GEC's front office and senior leadership meet with social-media platforms every few months, sometimes quarterly. The meetings focus on the “tools and techniques” of stopping the spread of disinformation on social media, but they rarely discuss specific content that is posted. Additionally, the GEC has a “Technology Engagement Team” (“TET”) that also meets with social-media companies. The TET meets more frequently than the GEC.

Kimmage's deposition was taken and filed as [Doc. No. 208-1].

[Doc. No. 208-1 at 29, 32]

[Id. at 30]

[Id. at 37]

(2) Kimmage recalls two meetings with Twitter. At these meetings, the GEC would bring between five and ten people, including Kimmage, one or more deputy coordinators, team chiefs from the GEC, and working-level staff with relevant subject-matter expertise. The GEC staff would meet with Twitter's content-moderation teams, and the GEC would provide an overview of what it was seeing in terms of foreign propaganda and information. Twitter would then discuss similar topics.

[Id. at 130-31]

[Id. at 133-36]

(3) The GEC's senior leadership also had similar meetings with Facebook and Google. Similar numbers of people were brought to these meetings by GEC, and similar topics were discussed. Facebook and Google also brought their content-moderator teams.

[Id. at 141-43]

(4) Samaruddin Stewart (“Stewart”) was the GEC's Senior Advisor who was a permanent liaison in Silicon Valley for the purpose of meeting with social-media platforms about disinformation. Stewart set up a series of meetings with LinkedIn to discuss “countering disinformation” and to explore shared interests and alignment of mutual goals regarding the challenge.

[Id. at 159-60]

(5) The GEC also coordinated with CISA and the EIP. Kimmage testified that the GEC had a “general engagement” with the EIP.

[Id. at 214-215]. The details surrounding the EIP are described in II 6(5)(6)(7)(8)(9)(10)(15) and (16). Scully Ex. 1 details the EIP's work carried out during the 2020 election.

(6) On October 17, 2022, at an event at Stanford University, Secretary of State Antony Blinken mentioned the GEC and stated that the State Department was “engaging in collaboration and building partnerships” with institutions like Stanford to combat the spread of propaganda. Specifically, he stated, “We have something called the Global Engagement Center that's working on this every single day.”

[Doc. No. 208-17 at 5]

[Id.]

(7) Like CISA, the GEC works through the CISA-funded EI-ISAC and works closely with the Stanford Internet Observatory and the Virality Project.

2. The EIP

(8) The EIP is partially funded by the United States National Science Foundation through grants. As noted above in connection with CISA, the EIP, according to DiResta, was designed to “get around unclear legal authorities, including very real First Amendment questions” that would arise if CISA or other government agencies were to monitor and flag information for censorship on social media.

[Id. at 17]

[Doc. No. 209-5 at 4]

The EIP's focus was on understanding misinformation and disinformation in the social-media landscape, and it successfully pushed social-media platforms to adopt more restrictive policies about election-related speech in 2020.

[Doc. No. 209-5, Exh. 1; Ex. 4 at 7, Audio Tr. 4]

The government agencies that work with and submit alleged disinformation to the EIP are CISA, the State Department Global Engagement Center, and the Elections Infrastructure Information Sharing and Analysis Center.

[Doc. No. 209-2 at 30]

(9) The EIP report further states that the EIP used a tiered model based on “tickets” collected internally and from stakeholders. The tickets also related to domestic speech by American citizens, including accounts belonging to media outlets, social-media influencers, and political figures. The EIP further emphasized that it wanted greater access to social-media platforms' internal data and recommended that the platforms increase their enforcement of censorship policies.

[Id. at 11]

[Id. at 12]

[Id. at 14]

The EIP was formed on July 26, 2020, 100 days before the November 2020 election. On July 9, 2020, the Stanford Internet Observatory presented the EIP concept to CISA. The EIP team was led by Research Manager DiResta, Director Stamos and the University of Washington's Starbird.

[Id. at 20]

[Id.]

(10) The EIP's managers both reported misinformation to platforms and communicated with government partners about their misinformation reports. EIP team members were divided into tiers of on-call shifts. Each shift was four hours long and was led by one on-call manager. Shifts were staffed by five to twenty people. Normal scheduled shifts ran from 8:00 a.m. to 8:00 p.m., ramping up to sixteen to twenty hours a day during the week of the election.

[Id. at 27-28]

[Id. at 28]

(11) Social-media platforms that participated in the EIP were Facebook, Instagram, Google/YouTube, Twitter, TikTok, Reddit, Nextdoor, Discord, and Pinterest.

[Id. at 35]

(12) In the 2020 election cycle, the EIP processed 639 “tickets,” 72% of which were related to delegitimizing the election results. Overall, social-media platforms took action on 35% of the URLs reported to them. One “ticket” could include an entire idea or narrative and was not always just one post. Less than 1% of the tickets related to “foreign interference.”

[Id. at 45]

[Id. at 58]

[Id. at 27]

[Id. at 53]

(13) The EIP found that The Gateway Pundit was one of the top misinformation websites, allegedly involving the “exaggeration” of the impact of an issue in the election process. The EIP did not say that the information was false. The EIP Report cites The Gateway Pundit forty-seven times.

[Id. at 51]

[Id. at 51, 74, 76, 101, 103, 110, 112, 145, 150-51, 153, 155-56, 172, 175, 183, 194-95, 206-09, 211-12, 214-16, and 226]

(14) The GEC was engaging with the EIP and submitted “tickets.”

[Id. at 60]

(15) The tickets and URLs encompassed millions of social-media posts, with almost twenty-two million posts on Twitter alone. The EIP sometimes treats as “misinformation” truthful reports that the EIP believes “lack broader context.”

[Id. at 201]

[Id. at 202]

(16) The EIP stated “influential accounts on the political right ... were responsible for the most widely spread of false or misleading information in our data set.” Further, the EIP stated the twenty-one most prominent repeat spreaders on Twitter include political figures and organizations, partisan media outlets, and social-media stars. Specifically, the EIP stated, “All 21 of the repeat spreaders were associated with conservative or right-wing political views and support of President Trump.” The Gateway Pundit was listed as the second-ranked “Repeat Spreader of Election Misinformation” on Twitter. During the 2020 election cycle, the EIP flagged The Gateway Pundit in twenty-five incidents with over 200,000 retweets. The Gateway Pundit ranked above Donald Trump, Eric Trump, Breitbart News, and Sean Hannity.

[Id. at 204-05]

[Id. at 204-05]

[Id.]

[Id. at 246]

The Gateway Pundit's website was listed as the domain cited in the most “incidents”; its website content was tweeted by others in 29,209 original tweets and 840,740 retweets. The Gateway Pundit ranked above Fox News, the New York Post, the New York Times, and the Washington Post. The EIP report also notes that Twitter suspended The Gateway Pundit's account on February 6, 2021, and it was later de-platformed entirely.

[Id. at 207]

[Id.]

[Id. at 224]

(17) The EIP notes that “during the 2020 election, all of the major platforms made significant changes to election integrity policies-policies that attempted to slow the spread of specific narratives and tactics that could ‘potentially mislead or deceive the public.'” The EIP was not targeting foreign disinformation, but rather “domestic speakers.” The EIP also indicated it would continue its work in future elections.

[Id. at 229]

[Id. at 243-44]

[Id. at 243-44]

(18) The EIP also called for expansive censorship of social-media speech into other areas such as “public health.”

[Id. at 251]

(19) The EIP stated that it “united government, academic, civil society, and industry, analyzing across platforms to address misinformation in real time.”

[Id. at 259]

(20) When asked whether the targeted information was domestic, Stamos answered, “It is all domestic, and the second point on the domestic, a huge part of the problem is well-known influencers. You ... have a relatively small number of people with very large followings who have the ability to go and find a narrative somewhere, pick it out of obscurity and ... harden it into these narratives.”

[Doc. No. 276-1 at 12]

Stamos further stated:

We have set up this thing called the Election Integrity Partnership, so we went and hired a bunch of students. We're working with the University of Washington, Graphika, and DFR Lab, and the vast, vast majority we see we believe is domestic. And so, I think a much bigger issue for the platforms is elite disinformation. The stuff that is being driven by people who are verified, that are Americans, who are using their real identities.

[Id.]

(21) Starbird of the University of Washington, who serves on a CISA subcommittee and was an EIP participant, also verified that the EIP was targeting domestic speakers, stating:

Now fast forward to 2020, we saw a very different story around disinformation in the U.S. election. It was largely domestic, coming from inside the United States ... Most of the accounts perpetrating this ... they're authentic accounts. They were often blue check and verified accounts. They were pundits on cable television shows that were who they said they were ... a lot of major spreaders were blue check accounts, and it wasn't entirely coordinated, but instead, it was largely sort of cultivated and even organic in places with everyday people creating and spreading disinformation about the election.

[Doc. No. 276-1 at 42]

3. The Virality Project

(22) The Virality Project targeted domestic speakers' alleged disinformation relating to the COVID-19 vaccines. The Virality Project's final report, dated April 26, 2022, lists DiResta as principal Executive Director and lists Starbird and Masterson as contributors.

[Doc. No. 209-3]; Memes, Magnets and Microchips: Narrative Dynamics Around COVID-19 Vaccines.

[Doc. No. 209-3 at 4]

According to the Virality Project, “vaccine mis- and disinformation was largely driven by a cast of recuring [sic] actors including long-standing anti-vaccine influencers and activists, wellness and lifestyle influencers, pseudo-medical influencers, conspiracy theory influencers, right-leaning political influencers, and medical freedom influencers.”

[Id. at 9]

The Virality Project admits the speech it targets is primarily domestic, stating “Foreign ... actors' reach appeared to be far less than that of domestic actors.” The Virality Project also calls for more aggressive censorship of COVID-19 misinformation, calls for more federal agencies to be involved through “cross-agency collaboration,” and calls for a “whole-of-society response.” Just like the EIP, the Virality Project states that it is a “multistakeholder collaboration” that includes “government entities” among its key stakeholders. The Virality Project targets tactics that are not necessarily false, including hard-to-verify content, alleged authorization sources, organized outrage, and sensationalized/misleading headlines.

[Id.]

[Id. at 12]

[Id.]

[Id. at 17]

[Id. at 19]

(23) Plaintiff Hines of Health Freedom Louisiana was flagged by the Virality Project as a “medical freedom influencer” who engages in the “tactic” of “organized outrage” because she created events or in-person gatherings to oppose mask and vaccine mandates in Louisiana.

[Id. at 9, 19]

(24) The Virality Project also acknowledges that government “stakeholders,” such as “federal health agencies” and “state and local public health officials,” were among those that “provided tips” and “requests to access specific incidents and narratives.”

[Id. at 24]

(25) The Virality Project also targeted alleged COVID-19 misinformation for censorship before it could go viral. “Tickets also enabled analysts to quickly tag platform or health sector partners to ensure their situational awareness of high-engagement material that appeared to be going viral, so that those partners could determine whether something might merit a rapid public or on-platform response.”

[Id. at 37]

(26) The Virality Project flagged the following persons and/or organizations as spreaders of misinformation:

i. Jill Hines and Health Freedom Louisiana;
ii. One America News;
iii. Breitbart News;
iv. Alex Berenson;
v. Tucker Carlson;
vi. Fox News;
vii. Candace Owens;
viii. The Daily Wire;
ix. Robert F. Kennedy, Jr.;
x. Dr. Simone Gold and America's Frontline Doctors; and
xi. Dr. Joseph Mercola.

[Id. at 59]

[Id. at 60]

[Id.]

[Id. at 54, 57, 49, 50]

[Id. at 57]

[Id. at 91]

[Id. at 86, 92]

[Id.]

[Id.]

[Id. at 87-88]

[Id. at 87]

(27) The Virality Project recommends that the federal government implement a Misinformation and Disinformation Center of Excellence, housed at CISA, which would centralize federal expertise on mis- and disinformation.

[Id. at 150]

III. LAW AND ANALYSIS

A. Preliminary Injunction Standard

An injunction is an extraordinary remedy never awarded as of right. Benisek v. Lamone, 138 S.Ct. 1942, 1943 (2018). In each case, the courts must balance the competing claims of injury and must consider the effect on each party of the granting or withholding of the requested relief. Winter v. Nat. Res. Def. Council, Inc., 555 U.S. 7, 24, 129 S.Ct. 365 (2008).

The standard for an injunction requires a movant to show: (1) a substantial likelihood of success on the merits; (2) that he is likely to suffer irreparable harm in the absence of an injunction; (3) that the balance of equities tips in his favor; and (4) that an injunction is in the public interest. Benisek, 138 S.Ct. at 1944. The party seeking relief must satisfy a cumulative burden of proving each of the four elements enumerated before an injunction can be granted. Clark v. Prichard, 812 F.2d 991, 993 (5th Cir. 1987). None of the four prerequisites has a quantitative value. State of Tex. v. Seatrain Int'l, S.A., 518 F.2d 175, 180 (5th Cir. 1975).

B. Analysis

As noted above, Plaintiffs move for a preliminary injunction against Defendants' alleged violations of the Free Speech Clause of the First Amendment. Plaintiffs assert that they are likely to succeed on the merits of their First Amendment claims because Defendants have significantly encouraged and/or coerced social-media companies into removing protected speech from social-media platforms. Plaintiffs also argue that failure to grant a preliminary injunction will result in irreparable harm because the alleged First Amendment violations are continuing and/or there is a substantial risk that future harm is likely to occur. Further, Plaintiffs maintain that the equitable factors and public interest weigh in favor of protecting their First Amendment rights to freedom of speech. Finally, Plaintiffs move for class certification under Federal Rule of Civil Procedure 23.

In response, Defendants maintain that Plaintiffs are unlikely to succeed on the merits for a myriad of reasons. Defendants also maintain that Plaintiffs lack Article III standing to bring the claims levied herein, that Plaintiffs have failed to show irreparable harm because the risk of future injury is low, and that the equitable factors and public interests weigh in favor of allowing Defendants to continue enjoying permissible government speech.

Each argument will be addressed in turn below.

1. Plaintiffs' Likelihood of Success on the Merits

For the reasons explained herein, the Plaintiffs are likely to succeed on the merits of their First Amendment claim against the White House Defendants, Surgeon General Defendants, CDC Defendants, FBI Defendants, NIAID Defendants, CISA Defendants, and State Department Defendants. In ruling on a motion for preliminary injunction, it is not necessary that the applicant demonstrate an absolute right to relief; the applicant need only establish a probable right. West Virginia Highlands Conservancy v. Island Creek Coal Co., 441 F.2d 232 (4th Cir. 1971). The Court finds that Plaintiffs here have done so.

a. Plaintiffs' First Amendment Claims

The Free Speech Clause prohibits only governmental abridgment of speech. It does not prohibit private abridgment of speech. Manhattan Community Access Corporation v. Halleck, 139 S.Ct. 1921, 1928 (2019). The First Amendment, subject only to narrow and well-understood exceptions, does not countenance governmental control over the content of messages expressed by private individuals. Turner Broadcasting System, Inc. v. F.C.C., 512 U.S. 622, 641 (1994). At the heart of the First Amendment lies the principle that each person should decide for himself or herself the ideas and beliefs deserving of expression, consideration, and adherence. Id. Government action aimed at the suppression of particular views on a subject discriminates on the basis of viewpoint and is presumptively unconstitutional. The First Amendment also guards against government action “targeted at specific subject matter,” a form of speech suppression known as “content-based discrimination.” National Rifle Association of America v. Cuomo, 350 F.Supp.3d 94, 112 (N.D.N.Y. 2018). The private-party social-media platforms are not defendants in the instant suit, so the issue here is not whether the social-media platforms are government actors, but whether the government can be held responsible for the private platforms' decisions.

This is a standard that requires the private action to be “fairly attributable to the state.” Lugar v. Edmondson Oil Co., 457 U.S. 922 (1982).

Viewpoint discrimination is an especially egregious form of content discrimination. The government must abstain from regulating speech when the specific motivating ideology or the perspective of the speaker is the rationale for the restriction. Rosenberger v. Rector and Visitors of University of Virginia, 515 U.S. 819, 829 (1995). Strict scrutiny is applied to viewpoint discrimination. Simon & Schuster, Inc. v. Members of the New York State Crime Victims Board, 502 U.S. 105 (1991). The government may not grant the use of a forum to people whose views it finds acceptable, but deny use to those wishing to express less favored or more controversial views. Police Department of Chicago v. Mosley, 408 U.S. 92, 96 (1972).

If there is a bedrock principle underlying the First Amendment, it is that the government may not prohibit the expression of an idea simply because society finds the idea itself offensive or disagreeable. Matal v. Tam, 137 S.Ct. 1744, 1763 (2017); see also R.A.V. v. City of St. Paul, 505 U.S. 377 (1992). The benefit of any doubt must go to protecting rather than stifling speech. Citizens United v. Federal Election Commission, 130 S.Ct. 876, 891 (2010).

i. Significant Encouragement and Coercion

To determine whether Plaintiffs are substantially likely to succeed on the merits of their First Amendment free speech claim, Plaintiffs must prove that the Federal Defendants either exercised coercive power or exercised such significant encouragement that the private parties' choice must be deemed to be that of the government. Additionally, Plaintiffs must prove the speech suppressed was “protected speech.” The Court, after examining the facts, has determined that some of the Defendants either exercised coercive power or provided significant encouragement, which resulted in the possible suppression of Plaintiffs' speech.

The State (i.e., the Government) can be held responsible for a private decision only when it has exercised coercive power or has provided such “significant encouragement,” either overt or covert, that the choice must be deemed to be that of the State. Mere approval or acquiescence in the actions of a private party is not sufficient to hold the state responsible for those actions. Blum v. Yaretsky, 457 U.S. 991, 1004 (1982); Rendell-Baker v. Kohn, 457 U.S. 830 (1982); National Broadcasting Co., Inc. v. Communications Workers of America, AFL-CIO, 860 F.2d 1022 (11th Cir. 1988); Focus on the Family v. Pinellas Suncoast Transit Authority, 344 F.3d 1213 (11th Cir. 2003); Brown v. Millard County, 47 Fed.Appx. 882 (10th Cir. 2002).

In evaluating “significant encouragement,” a state may not induce, encourage, or promote private persons to accomplish what it is constitutionally forbidden to accomplish. Norwood v. Harrison, 413 U.S. 455, 465 (1973). Additionally, when the government has so involved itself in the private party's conduct, it cannot claim the conduct occurred as a result of private choice, even if the private party would have acted independently. Peterson v. City of Greenville, 373 U.S. at 247-48. Further, oral or written statements made by public officials could give rise to a valid First Amendment claim where the comments of a governmental official can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official's request. National Rifle Association of America, 350 F.Supp.3d at 114. Additionally, a public official's threat to stifle protected speech is actionable under the First Amendment and can be enjoined, even if the threat turns out to be empty. Backpage.com, LLC v. Dart, 807 F.3d at 230-31.

The Defendants argue that the “significant encouragement” test for government action has been interpreted to require a higher standard since the Supreme Court's ruling in Blum v. Yaretsky, 457 U.S. 991 (1982). Defendants also argue that Plaintiffs are unable to meet the test to show Defendants “significantly encouraged” social-media platforms to suppress free speech. Defendants further maintain Plaintiffs have failed to show “coercion” by Defendants to force social-media companies to suppress protected free speech. Defendants also argue they made no threats but rather sought to “persuade” the social-media companies. Finally, Defendants maintain the private social-media companies made independent decisions to suppress certain postings.

In Blum, the Supreme Court held the Government “can be held responsible for a private decision only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice in law must be deemed to be that of the state.” Blum, 457 U.S. at 1004. Defendants argue that the bar for “significant encouragement” to convert private conduct into state action is high. Defendants maintain that Blum's language does not mean that the Government is responsible for private conduct whenever the Government does more than adopt a passive position toward it. Skinner v. Ry. Labor Execs. Ass'n., 489 U.S. 602, 615 (1989).

Defendants point out this is a question of degree: whether a private party should be deemed an agent or instrument of the Government necessarily turns on the “degree” of the Government's participation in the private party's activities. 489 U.S. at 614. The dispositive question is “whether the State has exercised coercive power or has provided such significant encouragement that the choice must in law be deemed to be that of the State.” VDARE Found. v. City of Colo. Springs, 11 F.4th 1151, 1161 (10th Cir. 2021).

The Supreme Court found there was not enough “significant encouragement” by the Government in American Manufacturers Mutual Ins. Co. v. Sullivan, 526 U.S. 40 (1999). This case involved the constitutionality of a Pennsylvania workers' compensation statute that authorized, but did not require, insurers to withhold payments for the treatment of work-related injuries pending a “utilization” review of whether the treatment was reasonable and necessary. The plaintiffs' argument was that by amending the statute to grant the utilization review (an option insurers previously did not have), the State purposely encouraged insurers to withhold payments for disputed medical treatment. The Supreme Court found this type of encouragement was not enough for state action.

The United States Court of Appeals for the Fifth Circuit has also addressed the issue of government coercion or encouragement. For example, in La. Div. Sons of Confederate Veterans v. City of Natchitoches, 821 Fed.Appx. 317 (5th Cir. 2020), the Sons of Confederate Veterans applied to march in a city parade that was coordinated by a private business association. The Mayor sent a letter asking the private business association to prohibit the display of the Confederate battle flag. After the plaintiff's request to march in the parade was denied, the plaintiff filed suit and argued the Mayor's letter constituted “significant encouragement” sufficient to warrant a finding of state action. The Fifth Circuit found the letter was not “significant encouragement.”

In determining whether the Government's words or actions could reasonably be interpreted as an implied threat, courts examine a number of factors, including: (1) the Defendant's regulatory or other decision-making authority over the targeted entities; (2) whether the government actors actually exercised regulatory authority over the targeted entities; (3) whether the language of the allegedly threatening statements could reasonably be perceived as a threat; and (4) whether any of the targeted entities reacted in a manner evincing the perception of an implicit threat. Id. at 114. As noted above, a public official's threat to stifle protected speech is actionable under the First Amendment and can be enjoined, even if the threat turns out to be empty. Backpage.com, LLC v. Dart, 807 F.3d 229, 230-31 (7th Cir. 2015); Okwedy v. Molinari, 333 F.3d 339, 340-41 (2d Cir. 2003).

The closest factual case to the present situation is O'Handley v. Weber, 62 F.4th 1145 (9th Cir. 2023). In O'Handley, the plaintiff maintained that a California agency was responsible for the moderation of his posted content. The plaintiff pointed to the agency's mission to prioritize working closely with social-media companies to be “proactive” about misinformation and the flagging of one of his Twitter posts as “disinformation.” The Ninth Circuit rejected the argument that the agency had provided “significant encouragement” to Twitter to suppress speech. In rejecting this argument, the Ninth Circuit stated the “critical question” in evaluating the “significant encouragement” theory is “whether the government's encouragement is so significant that we should attribute the private party's choice to the State ....” Id. at 1158.

Defendants cited many cases in support of their argument that Plaintiffs have not shown significant coercion or encouragement. See VDARE Found. v. City of Colo. Springs, 11 F.4th 1151 (10th Cir. 2021), cert. denied, 142 S.Ct. 1208 (2022) (city's decision not to provide “support or resources” to plaintiff's event was not “such significant encouragement” as to transform a private venue's decision to cancel the event into state action); S.H.A.R.K. v. Metro Parks Serving Summit Cnty., 499 F.3d 553 (6th Cir. 2007) (government officials' requests were “not the type of significant encouragement” that would render agreeing to those requests to be state action); Campbell v. PMI Food Equip. Grp., Inc., 509 F.3d 776 (6th Cir. 2007) (no state action where government entities did nothing more than authorize and approve a contract that provided tax benefits or incentives conditioned on the company opening a local plant); Gallagher v. Neil Young Freedom Concert, 49 F.3d 1442 (10th Cir. 1995) (payments under government contracts and the receipt of government grants and tax benefits are insufficient to establish a symbiotic relationship between the government and a private entity). Ultimately, Defendants contend that Plaintiffs have not shown that the choice to suppress free speech must in law be deemed to be that of the Government. This Court disagrees.

The Plaintiffs are likely to succeed on the merits on their claim that the United States Government, through the White House and numerous federal agencies, pressured and encouraged social-media companies to suppress free speech. Defendants used meetings and communications with social-media companies to pressure those companies to take down, reduce, and suppress the free speech of American citizens. They flagged posts and provided information on the type of posts they wanted suppressed. They also followed up with directives to the social-media companies to provide them with information as to action the company had taken with regard to the flagged post. This seemingly unrelenting pressure by Defendants had the intended result of suppressing millions of protected free speech postings by American citizens. In response to Defendants' arguments, the Court points out that this case has much more government involvement than any of the cases cited by Defendants, as clearly indicated by the extensive facts detailed above. If there were ever a case where the “significant encouragement” theory should apply, this is it.

What is really telling is that virtually all of the free speech suppressed was “conservative” free speech. Using the 2016 election and the COVID-19 pandemic, the Government apparently engaged in a massive effort to suppress disfavored conservative speech. The targeting of conservative speech indicates that Defendants may have engaged in “viewpoint discrimination,” to which strict scrutiny applies. See Simon & Schuster, Inc., 502 U.S. 105 (1991).

In addition to the “significant encouragement” theory, the Government may also be held responsible for private conduct if the Government exercises coercive power over the private party in question. Blum, 457 U.S. at 1004. Here, Defendants argue that not only must there be coercion, but the coercion must be targeted at specific actions that harmed Plaintiffs. Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963) (where a state agency threatened prosecution if a distributor did not remove certain designated books or magazines it distributed that the state agency had declared objectionable); see also Backpage.com, LLC v. Dart, 807 F.3d 229 (7th Cir. 2015) (where a sheriff's letter demanded that two credit card issuers prohibit the use of their credit cards to purchase any ads on a particular website containing advertisements for adult services); Okwedy v. Molinari, 333 F.3d 339 (2d Cir. 2003) (per curiam) (where a municipal official allegedly pressured a billboard company to take down a particular series of signs he found offensive).

The Defendants further argue they only made requests to the social-media companies, and that the decision to modify or suppress content was each social-media company's independent decision. However, when a state has so involved itself in the private party's conduct, it cannot claim the conduct occurred as a result of private choice, even if the private party would have acted independently. Peterson v. City of Greenville, 373 U.S. 244, 247-248 (1963).

Therefore, the question is not what decision the social-media company would have made, but whether the Government “so involved itself in the private party's conduct” that the decision is essentially that of the Government. As exhaustively detailed above, Defendants “significantly encouraged” the social-media companies to such an extent that the decisions should be deemed to be those of the Government. The White House Defendants and the Surgeon General Defendants additionally engaged in coercion of social-media companies to such an extent that the decisions of the social-media companies should be deemed those of the Government. It simply makes no difference what decision the social-media companies would have made independently of government involvement, where the evidence demonstrates the wide-scale involvement seen here.

(1) White House Defendants

The Plaintiffs allege that by use of emails, public and private messages, public and private meetings, and other means, White House Defendants have “significantly encouraged” and “coerced” social-media platforms to suppress protected free speech on their platforms.

The White House Defendants acknowledged at oral arguments that they did not dispute the authenticity or the content of the emails Plaintiffs submitted in support of their claims. However, they allege that the emails do not show that the White House Defendants either coerced or significantly encouraged social-media platforms to suppress content of social-media postings. White House Defendants argue instead that they were speaking with social-media companies about promoting more accurate COVID-19 information and to better understand what action the companies were taking to curb the spread of COVID-19 misinformation.

[Doc. No. 288 at 164-65]

White House Defendants further argue they never demanded that the social-media companies suppress postings or change policies, and that the changes were due to the social-media companies' own independent decisions. They assert that they did not make specific demands via the White House's public statements and four “asks” of social-media companies. Defendants contend the four “asks” were “recommendations,” not demands. Additionally, Defendants argue President Biden's July 16, 2021 “they're killing people” comment was clarified on July 19, 2021, to reflect that President Biden was talking about the “Disinformation Dozen,” not the social-media companies.

The White House four “asks” are: (1) measure and publicly share the impact of misinformation on their platform; (2) create a robust enforcement strategy; (3) take faster action against harmful posts; and (4) promote quality information sources in their feed algorithm.

[Doc. No. 10-1 at 377-78]

Although admitting White House employee Flaherty expressed frustration at times with social-media companies, White House Defendants contend Flaherty sought to better understand the companies' policies with respect to addressing the spread of misinformation and hoped to find out what the Government could do to help. Defendants contend Flaherty felt such frustration because some of the things the social-media companies told him were inconsistent with what others told him, compounded by the urgency of the COVID-19 pandemic.

Explicit threats are an obvious form of coercion, but not all coercion need be explicit. The following illustrative specific actions by Defendants are examples of coercion exercised by the White House Defendants:

(a) “Cannot stress the degree to which this needs to be resolved immediately. Please remove this account immediately.”
(b) Accused Facebook of causing “political violence” by failing to censor false COVID-19 claims.
(c) “You are hiding the ball.”
(d) “Internally we have been considering our options on what to do about it.”
(e) “I care mostly about what actions and changes you are making to ensure you're not making our country's vaccine hesitancy problem worse.”
(f) “This is exactly why I want to know what ‘Reduction' actually looks like - if ‘reduction' means pumping our most vaccine hesitant audience with Tucker Carlson saying it does not work, then I'm not sure it's reduction.”
(g) Questioning how the Tucker Carlson video had been “demoted” since there were 40,000 shares.
(h) Wanting to know why Alex Berenson had not been kicked off Twitter because Berenson was the epicenter of disinformation that radiated outward to the persuadable public. “We want to make sure YouTube has a handle on vaccine hesitancy and is working toward making the problem better. Noted that vaccine hesitancy was a concern that is shared by the highest (‘and I mean the highest') levels of the White House.”
(i) After sending Facebook a document entitled “Facebook COVID-19 Vaccine Misinformation Brief,” which recommended much more aggressive censorship by Facebook, Flaherty told Facebook that sending the Brief was not a White House endorsement of it, but that “this is circulating around the building and informing thinking.”
(j) Flaherty stated: “Not to sound like a broken record, but how much content is being demoted, and how effective are you at mitigating reach and how quickly?”
(k) Flaherty told Facebook: “Are you guys fucking serious? I want an answer on what happened here and I want it today.”
(l) Surgeon General Murthy stated: “We expect more from our technology companies. We're asking them to operate with greater transparency and accountability. We're asking them to monitor information more closely. We're asking them to consistently take action against misinformation super-spreaders on their platforms.”
(m) White House Press Secretary Psaki stated: “we are in regular touch with these social-media platforms, and those engagements typically happen through members of our senior staff, but also members of our COVID-19 team. We're flagging problematic posts for Facebook that spread disinformation.” Psaki also stated one of the White House's “asks” of social-media companies was to “create a robust enforcement strategy.”
(n) When asked about what his message was to social-media platforms when it came to COVID-19, President Biden stated: “they're killing people. Look, the only pandemic we have is among the unvaccinated and that - they're killing people.”
(o) Psaki stated at the February 1, 2022, White House Press Conference that the White House wanted every social-media platform to do more to call out misinformation and disinformation and to uplift accurate information.
(p) “Hey folks, wanted to flag the below tweet and am wondering if we can get moving on the process of having it removed. ASAP”
(q) “How many times can someone show false COVID-19 claims before being removed?”
(r) “I've been asking you guys pretty directly over a series of conversations if the biggest issues you are seeing on your platform when it comes to vaccine hesitancy and the degree to which borderline content- as you define it, is playing a role.”
(s) “I am not trying to play ‘gotcha' with you. We are gravely concerned that your service is one of the top drivers of vaccine hesitancy-period.”
(t) “You only did this, however after an election that you helped increase skepticism in and an insurrection which was plotted, in large part, on your platform.”
(u) “Seems like your ‘dedicated vaccine hesitancy' policy isn't stopping the disinfo dozen.”
(v) White House Communications Director Kate Bedingfield's announcement that “the White House is assessing whether social-media platforms are legally liable for misinformation spread on their platforms, and examining how misinformation fits into the liability protection provided by Section 230 of the Communication Decency Act.”

[II. A.]

[Id. A. (5)]

[Id. A. (10)]

[Id.]

[Id. A. (11)]

[Id. A. (12)]

[Id. A. (15)]

[Id. A. (16)]

[Id. A. (17)]

[Id.]

[Id. at A. (19)]

[Id.]

[Id.]

[Id.]

[Id.]

[Id. at A. (24)]

[Doc. No. 174-1 at 1]

[Id. at 11]

[Id.]

[Doc. No. 174-1 at 17-20]

[Id. at 41]

[Doc. No. 10-1 at 477-78]

These actions are just a few examples of the unrelenting pressure the Defendants exerted against social-media companies. This Court finds the above examples demonstrate that Plaintiffs can likely prove that White House Defendants engaged in coercion to induce social-media companies to suppress free speech.

With respect to 47 U.S.C. § 230, Defendants argue that there can be no coercion in threatening to revoke and/or amend Section 230 because the call to amend it has been bipartisan. However, Defendants combined their threats to amend Section 230 with the power to do so, holding a majority in both the House of Representatives and the Senate, as well as the Presidency. They also combined their threats to amend Section 230 with emails, meetings, press conferences, and intense pressure by the White House Defendants, as well as the Surgeon General Defendants. Regardless, the fact that the threats to amend Section 230 were bipartisan makes it even more likely that Defendants had the power to amend Section 230. All that is required is that the government's words or actions “could reasonably be interpreted as an implied threat.” Cuomo, 350 F.Supp.3d at 114. With the Supreme Court recently making clear that Section 230 shields social-media platforms from legal responsibility for what their users post, Gonzalez v. Google, 143 S.Ct. 1191 (2023), Section 230 is even more valuable to these social-media platforms. These actions could reasonably be interpreted as an implied threat by the Defendants, amounting to coercion.

Specifically, the White House Defendants also allegedly exercised significant encouragement such that the actions of the social-media companies should be deemed to be those of the government. The White House Defendants used emails, private portals, meetings, and other means to involve themselves as “partners” with social-media platforms. In many emails, the White House and the social-media companies referred to each other as “partners.” Twitter even sent the White House a “Partner Support Portal” for expedited review of the White House's requests. Both the White House and the social-media companies referred to themselves as “partners” and “on the same team” in their efforts to censor disinformation, including their efforts to censor the spread of “vaccine hesitancy.” The White House and the social-media companies also demonstrated that they were “partners” by suppressing information that did not even violate the social-media companies' own policies.

Further, White House Defendants constantly “flagged” for Facebook and other social-media platforms posts the White House Defendants considered misinformation. The White House demanded updates and reports on the results of the companies' efforts to suppress alleged disinformation, and the social-media companies complied with these demands. The White House scheduled numerous Zoom and in-person meetings with social-media officials to keep each other informed about the companies' efforts to suppress disinformation.

The White House Defendants made it very clear to social-media companies what they wanted suppressed and what they wanted amplified. Faced with unrelenting pressure from the most powerful office in the world, the social-media companies apparently complied. The Court finds that this amounts to coercion or encouragement sufficient to attribute the social-media companies' actions to the White House Defendants, such that Plaintiffs are likely to succeed on the merits against the White House Defendants.

(2) Surgeon General Defendants

Plaintiffs allege that Surgeon General Murthy and his office engaged in a pressure campaign parallel to, and often overlapping with, the White House Defendants' campaign directed at social-media platforms. Plaintiffs further allege the Surgeon General Defendants engaged in numerous meetings and communications with social-media companies to have those companies suppress alleged disinformation and misinformation posted on their platforms.

The Surgeon General Defendants argue that the Surgeon General's role is primarily to draw attention to public health matters affecting the nation. The Surgeon General took two official actions, one in 2021 and one in 2022. In July 2021, the Surgeon General issued a “Surgeon General's Advisory.” In March 2022, the Surgeon General issued a Request For Information (“RFI”). Surgeon General Defendants argue that the Surgeon General's Advisory did not require social-media companies to censor information or make changes in their policies. Surgeon General Defendants further assert that the RFI was voluntary and did not require the social-media companies to answer.

Additionally, the Surgeon General Defendants contend they only held courtesy meetings with social-media companies, did not flag posts for censorship, and never worked with social-media companies to moderate their policies. Surgeon General Defendants also deny that they were involved with the Virality Project.

As with the White House Defendants, this Court finds that Plaintiffs are likely to succeed on the merits of their First Amendment free speech claim against the Surgeon General Defendants. Through public statements, internal emails, and meetings, the Surgeon General Defendants exercised coercion and significant encouragement such that the decisions of the social-media platforms and their actions suppressing health disinformation should be deemed to be the decisions of the government. Importantly, the suppression of this information was also likely prohibited content and/or viewpoint discrimination, entitling Plaintiffs to strict scrutiny.

The Surgeon General Defendants did pre-rollout calls with numerous social-media companies prior to publication of the Health Advisory on Misinformation. The Advisory publicly called on social-media companies “to do more” against COVID misinformation superspreaders. Numerous calls and meetings took place between Surgeon General Defendants and private social-media companies. The “misinformation” to be suppressed was whatever the government deemed misinformation.

The problem with labeling certain discussions about COVID-19 treatment as “health misinformation” was that the Surgeon General Defendants suppressed alternative views to those promoted by the government. One of the purposes of free speech is to allow discussion about various topics so the public may make informed decisions. Health information was suppressed, and the government's view of the proper treatment for COVID-19 became labeled as “the truth.” Differing views about whether COVID-19 vaccines worked, whether taking the COVID-19 vaccine was safe, whether mask mandates were necessary, whether schools and businesses should have been closed, whether vaccine mandates were necessary, and a host of other topics were suppressed. Without a free debate about these issues, each person is unable to make an informed decision regarding his or her own health. Each United States citizen has the right to decide for himself or herself what is true and what is false. The Government and/or the Office of the Surgeon General does not have the right to determine the truth.

The Surgeon General Defendants also joined the White House Defendants in a campaign to pressure social-media companies to suppress health information contrary to the Surgeon General Defendants' views. After the Surgeon General's press conference on July 15, 2021, the Surgeon General Defendants kept the pressure on social-media platforms via emails, private meetings, and by requiring social-media platforms to report on actions taken against health disinformation.

The RFI by the Surgeon General Defendants also put additional pressure on social-media companies to comply with the requests to suppress free speech. The RFI sought information from private social-media companies about the spread of misinformation. The RFI stated that the office of the Surgeon General was expanding attempts to control the spread of misinformation on social-media platforms. The RFI also sought information about social-media censorship policies, how they were enforced, and information about disfavored speakers.

Taking all of this evidence together, this Court finds the Surgeon General Defendants likely engaged in both coercion and significant encouragement to such an extent that the decisions of private social-media companies should be deemed those of the Surgeon General Defendants. The Surgeon General Defendants did much more than engage in Government speech: they kept pressure on social-media companies with pre-rollout meetings, follow-up meetings, and the RFI. Thus, Plaintiffs are likely to succeed on the merits of their First Amendment claim against these Defendants.

(3) CDC Defendants

Plaintiffs allege that the CDC Defendants have engaged in a censorship campaign, together with the White House and other federal agencies, to have free speech suppressed on social-media platforms. Plaintiffs allege that working closely with the Census Bureau, the CDC flagged supposed “misinformation” for censorship on the platforms. Plaintiffs further allege that by using the acronym “BOLO,” the CDC Defendants told social-media platforms what health claims should be censored as misinformation.

In opposition, Defendants assert that the CDC's mission is to protect the public's health. Although the CDC Defendants admit to meeting with and sending emails to social-media companies, the CDC Defendants argue they were responding to requests by the companies for science-based public health information, proactively alerting the social-media companies about disinformation, or advising the companies where to find accurate information. The Census Bureau argues the Interagency Agreement, entered into with the CDC in regard to COVID-19 misinformation, has expired, and that it is no longer participating with the CDC on COVID-19 misinformation issues. The CDC Defendants further deny that they directed any social-media companies to remove posts or to change their policies.

As with the White House Defendants and Surgeon General Defendants, Plaintiffs are likely to succeed on the merits of their First Amendment free speech claim against the CDC Defendants. The CDC Defendants, through emails, meetings, and other communications, seemingly exercised pressure and gave significant encouragement such that the decisions of the social-media platforms to suppress information should be deemed to be the decisions of the Government. The CDC Defendants coordinated meetings with social-media companies, provided examples of alleged disinformation to be suppressed, questioned the social-media companies about how they were censoring misinformation, required reports from social-media companies about disinformation, told the social-media companies whether content was true or false, provided BOLO information, and used a Partner Support Portal to report disinformation. Much like the other Defendants described above, the CDC Defendants became “partners” with social-media platforms, flagging and reporting statements on social media that the CDC Defendants deemed false. Although the CDC Defendants did not exercise coercion to the same extent as the White House and Surgeon General Defendants, their actions still likely resulted in “significant encouragement” by the government to suppress free speech about COVID-19 vaccines and other related issues.

Various social-media platforms changed their content-moderation policies to require suppression of content that the CDC deemed false and that led to vaccine hesitancy. The CDC became the “determiner of truth” for social-media platforms, deciding whether COVID-19 statements made on social media were true or false. And the CDC was aware it had become the “determiner of truth” for social-media platforms. If the CDC said a statement on social media was false, it was suppressed, in spite of alternative views. By telling social-media companies that posted content was false, the CDC Defendants knew the social-media companies were going to suppress the posted content. The CDC Defendants thus likely “significantly encouraged” social-media companies to suppress free speech.

Based on the foregoing examples of significant encouragement and coercion by the CDC Defendants, the Court finds that Plaintiffs are likely to succeed on the merits of their First Amendment claim against the CDC Defendants.

(4) NIAID Defendants

Plaintiffs allege that NIAID Defendants engaged in a series of campaigns to discredit and procure the censorship of disfavored viewpoints on social media. Plaintiffs allege that Dr. Fauci engaged in a series of campaigns to suppress speech regarding the Lab-Leak theory of COVID-19's origin, treatment using hydroxychloroquine, the GBD, the treatment of COVID-19 with Ivermectin, the effectiveness of mask mandates, and the speech of Alex Berenson.

In opposition, Defendants assert that the NIAID Defendants simply support research to better understand, treat, and prevent infectious, immunologic, and allergic diseases and are responsible for responding to emergency public health threats. The NIAID Defendants argue that they had limited involvement with social-media platforms and did not meet with or contact the platforms to change their content or policies. The NIAID Defendants further argue that the videos, press conferences, and public statements by Dr. Fauci and other employees of NIAID were government speech.

This Court agrees that much of what the NIAID Defendants did was government speech. However, various emails show Plaintiffs are likely to succeed on the merits through evidence that the motivation of the NIAID Defendants was a “take down” of protected free speech. Dr. Francis Collins, in an email to Dr. Fauci, stated there needed to be a “quick and devastating take down” of the GBD, and the result was exactly that. Other email discussions show that the motivation of the NIAID Defendants was to have social-media companies suppress these alternative medical theories. Taken together, the evidence shows that Plaintiffs are likely to succeed on the merits against the NIAID Defendants as well.

[Doc. No. 207-6]

(5) FBI Defendants

Plaintiffs allege that the FBI Defendants also suppressed free speech on social-media platforms, with the FBI and the FBI's FITF playing a key role in these censorship efforts.

In opposition, Defendants assert that the FBI Defendants' specific job duties relate to foreign influence operations, including attempts by foreign governments to influence U.S. elections. Based on the alleged foreign interference in the 2016 U.S. Presidential election, the FBI Defendants argue that, through their meetings and emails with social-media companies, they were attempting to prevent foreign influence in the 2020 Presidential election. The FBI Defendants deny any attempt to suppress and/or change the social-media companies' policies with regard to domestic speech. They further deny that they mentioned Hunter Biden or a “hack and leak” foreign operation involving Hunter Biden.

According to the Plaintiffs' allegations detailed above, the FBI had a 50% success rate regarding social media's suppression of alleged misinformation, and it did no investigation to determine whether the alleged disinformation was foreign or posted by U.S. citizens. The FBI's failure to alert social-media companies that the Hunter Biden laptop story was real, and not mere Russian disinformation, is particularly troubling. The FBI had the laptop in its possession since December 2019 and had warned social-media companies to look out for a “hack and dump” operation by the Russians prior to the 2020 election. Even after Facebook specifically asked whether the Hunter Biden laptop story was Russian disinformation, Dehmlow of the FBI refused to comment, resulting in the social-media companies' suppression of the story. As a result, millions of U.S. citizens did not hear the story prior to the November 3, 2020 election. Additionally, the FBI was included in Industry meetings and bilateral meetings, received and forwarded alleged misinformation to social-media companies, and actually misled social-media companies in regard to the Hunter Biden laptop story. The Court finds this evidence demonstrative of significant encouragement by the FBI Defendants.

Defendants also argue that Plaintiffs are attempting to create a “deception” theory of government involvement with regard to the FBI Defendants. Plaintiffs allege the FBI told the social-media companies to watch out for Russian disinformation prior to the 2020 Presidential election and then failed to tell the companies that the Hunter Biden laptop was not Russian disinformation. The Plaintiffs further allege Dr. Fauci colluded with others to cover up the Government's involvement in “gain of function” research at the Wuhan lab in China, which may have resulted in the creation of the COVID-19 pandemic.

Although this Court agrees there is no specified “deception” test for government action, a state may not induce private persons to accomplish what it is constitutionally forbidden to accomplish. Norwood, 413 U.S. at 455. It follows, then, that the government may not deceive a private party either; deception is just another form of coercion. The Court has evaluated Defendants' conduct under the “coercion” and/or “significant encouragement” theories of government action, and finds that the FBI Defendants likely exercised “significant encouragement” over social-media companies.

Through meetings, emails, and in-person contacts, the FBI intrinsically involved itself in requesting social-media companies to take action regarding content the FBI considered to be misinformation. The FBI additionally likely misled social-media companies into believing the Hunter Biden laptop story was Russian disinformation, which resulted in suppression of the story a few weeks prior to the 2020 Presidential election. Thus, Plaintiffs are likely to succeed in their claims that the FBI exercised “significant encouragement” over social-media platforms such that the choices of the companies must be deemed to be that of the Government.

(6) CISA Defendants

Plaintiffs allege the CISA Defendants served as a “nerve center” for federal censorship efforts by meeting routinely with social-media platforms to increase censorship of speech disfavored by federal officials, and by acting as a “switchboard” to route disinformation concerns to social-media platforms.

In response, the CISA Defendants maintain that CISA has a mandate to coordinate with federal and non-federal entities to carry out cybersecurity and critical infrastructure activities. CISA previously designated election infrastructure as a critical infrastructure subsector. CISA also collaborates with state and local election officials; as part of its duties, CISA coordinates with the EIS-GCC, which is comprised of state, local, and federal governmental departments and agencies. The EI-SCC is comprised of owners or operators with significant business or operations in U.S. election infrastructure systems or services. After the 2020 election, the EI-SCC and EIS-GCC launched a Joint Managing Mis/Disinformation Group to coordinate election infrastructure security efforts. The CISA Defendants argue CISA supports the Joint Managing Mis/Disinformation Group but does not coordinate with the EIP or the CIS. Despite DHS providing financial assistance to the CIS through a series of cooperative agreement awards managed by CISA, the CISA Defendants assert that the work scope funded by DHS has not involved the CIS performing disinformation-related tasks.

Although the CISA Defendants admit to being involved in “switchboarding” work during the 2020 election cycle, CISA maintains it simply referred the alleged disinformation to the social-media companies, who made their own decisions to suppress content. CISA maintains it included a notice with each referral to the companies, which stated that CISA was not demanding censorship. CISA further maintains it discontinued its switchboarding work after the 2020 election cycle and has no intention to engage in switchboarding for the next election. CISA further argues that even though it was involved with USG-Industry meetings with other federal agencies and social-media companies, it did not attempt to “push” social-media companies to suppress content or to change policies.

However, at oral argument, CISA attorneys were unable to verify whether or not CISA would be involved in switchboarding during the 2024 election. [Doc. No. 288 at 122]

The Court finds that Plaintiffs are likely to succeed on the merits of their First Amendment claim against the CISA Defendants. The CISA Defendants have likely exercised “significant encouragement” with social-media platforms such that the choices of the social-media companies must be deemed to be that of the government. Like many of the other Defendants, the evidence shows that the CISA Defendants met with social-media companies to both inform and pressure them to censor content protected by the First Amendment. They also apparently encouraged and pressured social-media companies to change their content-moderation policies and flag disfavored content.

But the CISA Defendants went even further. CISA expanded the word “infrastructure” in its terminology to include “cognitive” infrastructure, so as to create authority to monitor and suppress protected free speech posted on social media. The word “cognitive” is an adjective that means “relating to cognition.” “Cognition” means the mental action or process of acquiring knowledge and understanding through thought, experiences, and the senses. The Plaintiffs are likely to succeed on the merits of their claim that the CISA Defendants believed they had a mandate to control the process of acquiring knowledge. The CISA Defendants engaged with Stanford University and the University of Washington to form the EIP, whose purpose was to allow state and local officials to report alleged election misinformation so it could be forwarded to the social-media platforms to review. CISA used a CISA-funded non-profit organization, the CIS, to perform the same actions. CISA used interns who worked for the Stanford Internet Observatory, which is part of the EIP, to address alleged election disinformation. All of these worked together to forward alleged election misinformation to social-media companies to review for censorship. They also worked together to ensure the social-media platforms reported back to them on what actions the platforms had taken. And in this process, no investigation was made to determine whether the censored information was foreign or produced by U.S. citizens.

Google English Dictionary.

According to DiResta, head of the EIP, the EIP was designed “to get around unclear legal authorities, including very real First Amendment questions that would arise if CISA or the other government agencies were to monitor and flag information for censorship on social media.” Therefore, the CISA Defendants aligned themselves and partnered with an organization that was designed to avoid the First Amendment questions raised by direct Government involvement in monitoring and flagging content for censorship on social-media platforms.

[Doc. No. 209-5 at 4]

At oral arguments on May 26, 2023, Defendants argued that the EIP operated independently of any government agency. The evidence shows otherwise: the EIP was started when CISA interns came up with the idea; CISA connected the EIP with the CIS, which is a CISA-funded non-profit that channeled reports of misinformation from state and local government officials to social-media companies; CISA had meetings with Stanford Internet Observatory officials (a part of the EIP), and both agreed to “work together”; the EIP gave briefings to CISA; and the CIS (which CISA funds) oversaw the Multi-State Information Sharing and Analysis Center (“MS-ISAC”) and the Election Infrastructure Information Sharing and Analysis Center (“EI-ISAC”), both of which are organizations of state and local governments that report alleged election misinformation.

CISA directed state and local officials to the CIS and connected the CIS with the EIP because they were working on the same mission and wanted to be sure they were all connected. CISA served in a mediating role between the CIS and the EIP to coordinate their efforts in reporting misinformation to social-media platforms, and there were direct email communications about reporting misinformation between the EIP and CISA. Stamos and DiResta of the EIP also serve on CISA advisory committees. The EIP identifies CISA as a “partner in government.” The CIS coordinated with the EIP regarding online misinformation. The EIP publication, “The Long Fuse,” states the EIP has a focus on election misinformation originating from “domestic” sources across the United States. The EIP further stated that the primary repeat spreaders of false and misleading narratives were “verified blue-checked accounts belonging to partisan media outlets, social-media influencers, and political figures, including President Trump and his family.” The EIP further disclosed it held its first meeting with CISA to present the EIP concept on July 9, 2020, and the EIP was officially formed on July 26, 2020, “in consultation with CISA.” The Government was listed as one of the EIP's Four Major Stakeholder Groups, which included CISA, the GEC, and the ISAC.

[Doc. No. 209-2]

[Id. at 9]

[Id. at 12]

[Id. at 20-21]

[Id. at 30]

As explained, the CISA Defendants set up a “switchboarding” operation, primarily consisting of college students, to allow immediate reporting to social-media platforms of alleged election disinformation. The “partners” were so successful in suppressing election disinformation that they later formed the Virality Project to do for COVID-19 misinformation what the EIP was doing for election disinformation. CISA and the EIP were completely intertwined. Several emails from the switchboarding operation sent by intern Pierce Lowary show Lowary directly flagging posted content and sending it to social-media companies. Lowary identified himself as “working for CISA” on the emails.

[Doc. No. 227-2 at 15, 23, 42, 65 & 78]

On November 21, 2021, CISA Director Easterly stated: “We live in a world where people talk about alternative facts, post-truth, which I think is really, really dangerous if people get to pick their own facts.” The Free Speech Clause was enacted to prohibit just what Director Easterly wants to do: have the government pick what is true and what is false. The Plaintiffs are likely to succeed on the merits of their First Amendment claim against the CISA Defendants for “significantly encouraging” social-media companies to suppress protected free speech.

(7) State Department Defendants

Plaintiffs allege the State Department Defendants, through the State Department's GEC, were also involved in suppressing protected speech on social-media platforms.

In response, the State Department Defendants argue that they, along with the GEC, play a critical role in coordinating U.S. government efforts to respond to foreign influence. The State Department Defendants argue that they did not flag specific content for social-media companies and did not give the companies any directives. The State Department Defendants also argue that they do not coordinate with or work with the EIP or the CIS.

The Court finds that Plaintiffs are also likely to succeed on the merits of their First Amendment Free Speech Clause claim against the State Department Defendants. For many of the same reasons the Court reached its conclusion as to the CISA Defendants, the State Department Defendants have exercised “significant encouragement” with social-media platforms, such that the choices of the social-media companies should be deemed to be those of the government. As discussed previously, both CISA and the GEC were intertwined with the VP, the EIP, and the Stanford Internet Observatory.

The VP, the EIP, and the Stanford Internet Observatory are not defendants in this proceeding. However, their actions are relevant because government agencies have chosen to associate, collaborate, and partner with these organizations, whose goals are to suppress the protected free speech of American citizens. The State Department Defendants and CISA Defendants both partnered with organizations whose goals were to “get around” First Amendment issues. In partnership with these non-governmental organizations, the State Department Defendants flagged and reported postings of protected free speech to the social-media companies for suppression. The flagged content was almost entirely from political figures, political organizations, alleged partisan media outlets, and social-media all-stars associated with right-wing or conservative political views, demonstrating likely “viewpoint discrimination.” Because only conservative viewpoints were allegedly suppressed, this leads naturally to the conclusion that Defendants intended to suppress only the political views with which they disagreed. Based on this evidentiary showing, Plaintiffs are likely to succeed on the merits of their First Amendment claims against the State Department Defendants.

[Doc. No. 209-5 at 4]

(8) Other Defendants

Other Defendants in this proceeding are the U.S. Food and Drug Administration, U.S. Department of Treasury, U.S. Election Assistance Commission, U.S. Department of Commerce, and employees Erica Jefferson, Michael Murray, Wally Adeyemo, Steven Frid, Brad Kimberly, and Kristen Muthig. Plaintiffs confirmed at oral argument that they are not seeking a preliminary injunction against these Defendants. Additionally, Plaintiffs assert claims against the Disinformation Governance Board (“DGB”) and its Director Nina Jankowicz. Defendants have provided evidence that the DGB has been disbanded, so any claims against these Defendants are moot. Thus, this Court will not address the issuance of an injunction against any of these Defendants.

ii. Joint Participation

The Plaintiffs contend that the Defendants are not only accountable for private conduct that they coerced or significantly encouraged, but also for private conduct in which they actively participated as “joint participants.” Burton v. Wilmington Parking Authority, 365 U.S. 715, 725 (1961). Although “joint participation” most often occurs through a conspiracy or collusive behavior, Hobbs v. Hawkins, 968 F.2d 471, 480 (5th Cir. 1992), even without a conspiracy, the government is responsible for private action arising out of “pervasive entwinement of public institutions and public officials in the private entity's composition and workings.” Brentwood Academy v. Tennessee Secondary Sch. Athletic Ass'n, 531 U.S. 288, 298 (2001).

Under the “joint action” test, the Government must have played an indispensable role in the mechanism leading to the disputed action. Frazier v. Bd. of Trs. of N.W. Miss. Reg'l Med. Ctr., 765 F.2d 1278, 1287-88 (5th Cir.), amended, 777 F.2d 329 (5th Cir. 1985). When a plaintiff establishes “the existence of a conspiracy involving state action,” the government becomes responsible for all constitutional violations committed in furtherance of the conspiracy by a party to the conspiracy. Armstrong v. Ashley, 60 F.4th 262 (5th Cir. 2023). Conspiracy can be charged as the legal mechanism through which to impose liability on each and all of the defendants without regard to the person doing the particular act that deprives the plaintiff of federal rights. Pfannstiel v. City of Marion, 918 F.2d 1178, 1187 (5th Cir. 1990).

Much like conspiracy and collusion, joint activity occurs whenever the government has “so far insinuated itself” into private affairs as to blur the line between public and private action. Jackson v. Metro. Edison Co., 419 U.S. 345, 357 (1974). To become “pervasively entwined” in a private entity's workings, the government need only “significantly involve itself in the private entity's actions and decision-making”; it is not necessary to establish that “state actors literally ‘overrode' the private entity's independent judgment.” Rawson v. Recovery Innovations, Inc., 975 F.3d 742, 751, 753 (9th Cir. 2020). “Pervasive intertwinement” exists even if the private party is exercising independent judgment. West v. Atkins, 487 U.S. 42, 52, n.10 (1988); Gallagher v. Neil Young Freedom Concert, 49 F.3d 1442, 1454 (10th Cir. 1995) (holding that a “substantial degree of cooperative action” can constitute joint action).

For the same reasons as this Court has found Plaintiffs met their burden to show “significant encouragement” by the White House Defendants, the Surgeon General Defendants, the CDC Defendants, the FBI Defendants, the NIAID Defendants, the CISA Defendants, and the State Department Defendants, this Court finds the Plaintiffs are likely to succeed on the merits that these Defendants “jointly participated” in the actions of the private social-media companies as well, by insinuating themselves into the social-media companies' private affairs and blurring the line between public and private action.

It is not necessary to repeat the details discussed in the “significant encouragement” analysis in order to find Plaintiffs have met their initial burden.

However, this Court finds Plaintiffs are not likely to succeed on the merits that the “joint participation” occurred as a result of a conspiracy with the social-media companies. The evidence thus far shows that the social-media companies cooperated due to coercion, not because of a conspiracy.

This Court finds the White House Defendants, the Surgeon General Defendants, the CDC Defendants, the NIAID Defendants, the FBI Defendants, the CISA Defendants, and the State Department Defendants likely “jointly participated” with the social-media companies to such an extent that said Defendants have become “pervasively entwined” in the private companies' workings, blurring the line between public and private action. Therefore, Plaintiffs are likely to succeed on the merits of their claim that the government Defendants are responsible for the private social-media companies' decisions to censor protected content on social-media platforms.

iii. Other Arguments

While not admitting any fault in the suppression of free speech, Defendants blame the Russians, COVID-19, and capitalism for any suppression of free speech by social-media companies. Defendants argue the Russian social-media postings prior to the 2016 Presidential election caused social-media companies to change their rules with regard to alleged misinformation. The Defendants argue the Federal Government promoted necessary and responsible actions to protect public health, safety, and security when confronted by a deadly pandemic and hostile foreign assaults on critical election infrastructure. They further contend that the COVID-19 pandemic resulted in social-media companies changing their rules in order to fight related disinformation. Finally, Defendants argue the social-media companies' desire to make money from advertisers resulted in changes to their efforts to combat disinformation. In other words, Defendants maintain they had nothing to do with Plaintiffs' censored speech and blame any suppression of free speech on the Russians, COVID-19, and the companies' desire to make money. The social-media platforms and the Russians are of course not defendants in this proceeding, and neither are they bound by the First Amendment. The only focus here is on the actions of the Defendants themselves.

Although the COVID-19 pandemic was a terrible tragedy, Plaintiffs assert that it is still not a reason to lessen civil liberties guaranteed by our Constitution. “If human nature and history teaches anything, it is that civil liberties face grave risks when governments proclaim indefinite states of emergency.” Does 1-3 v. Mills, 142 S.Ct. 17, 20-21 (2021) (Gorsuch, J., dissenting). The “grave risk” here is arguably the most massive attack against free speech in United States history.

Another argument of Defendants is that the previous Administration took the same actions as Defendants. Although the “switchboarding” by CISA started in 2018, there is no indication or evidence yet produced in this litigation that the Trump Administration had anything to do with it. Additionally, whether the previous Administration suppressed free speech on social media is not an issue before this Court and would not be a defense to Defendants even if it were true.

Defendants also argue that a preliminary injunction would restrict the Defendants' right to government speech and would transform government speech into government action whenever the Government comments on public policy matters. The Court finds, however, that a preliminary injunction here would not prohibit government speech. The traditional test used to differentiate government speech from private speech considers three relevant factors: (1) whether the medium at issue has historically been used to communicate messages from the government; (2) whether the public reasonably interprets the government to be the speaker; and (3) whether the government maintains editorial control over the speech. Pleasant Grove City, Utah v. Summum, 555 U.S. 460, 465-80 (2009). A government entity has the right to speak for itself and is entitled to say what it wishes and express the views it wishes to express. The Free Speech Clause restricts government regulation of private speech; it does not regulate government speech. Pleasant Grove City, Utah, 555 U.S. at 468.

The Defendants argue that, in making public statements, they engaged in nothing but government speech. However, it was not the public statements that were the problem. It was the alleged use of government agencies and employees to coerce and/or significantly encourage social-media platforms to suppress free speech on those platforms. Plaintiffs point specifically to the various meetings, emails, follow-up contacts, and the threat of amending Section 230 of the Communication Decency Act. Plaintiffs have produced evidence that Defendants did not just use public statements to coerce and/or encourage social-media platforms to suppress free speech, but rather used meetings, emails, phone calls, follow-up meetings, and the power of the government to pressure social-media platforms to change their policies and to suppress free speech. Content was seemingly suppressed even if it did not violate social-media policies. It is the alleged coercion and/or significant encouragement that likely violates the Free Speech Clause, not government speech, and thus, the Court is not persuaded by Defendants' arguments here.

b. Standing

The United States Constitution, via Article III, limits federal courts' jurisdiction to “cases” and “controversies.” Sample v. Morrison, 406 F.3d 310, 312 (5th Cir. 2005) (citing U.S. Const. art. III, § 2). The “law of Article III standing, which is built on separation-of-powers principles, serves to prevent the judicial process from being used to usurp the powers of the political branches.” Town of Chester, N.Y. v. Laroe Ests., Inc., 581 U.S. 433, 435 (2017) (citation omitted). Thus, “the standing question is whether the plaintiff has alleged such a personal stake in the outcome of the controversy as to warrant [its] invocation of federal-court jurisdiction and to justify exercise of the court's remedial powers on his behalf.” Warth v. Seldin, 422 U.S. 490, 498 (1975) (citation and internal quotation marks omitted). The Article III standing requirements apply to claims for injunctive and declaratory relief. See Seals v. McBee, 898 F.3d 587, 591 (5th Cir. 2018), as revised (Aug. 9, 2018); Lawson v. Callahan, 111 F.3d 403, 405 (5th Cir. 1997).

Article III standing is comprised of three essential elements. Spokeo, Inc. v. Robins, 578 U.S. 330, 338 (2016), as revised (May 24, 2016) (citation omitted). “The plaintiff must have (1) suffered an injury-in-fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision. The plaintiff, as the party invoking federal jurisdiction, bears the burden of establishing these elements.” Id. (internal citations omitted). Furthermore, “[a] plaintiff must demonstrate standing for each claim he seeks to press and for each form of relief that is sought.” Town of Chester, N.Y., 581 U.S. at 439 (citations omitted). However, the presence of one party with standing “is sufficient to satisfy Article III's case-or-controversy requirement.” Texas, 809 F.3d 134 (citing Rumsfeld v. F. for Acad. & Institutional Rts., Inc., 547 U.S. 47, 52 n.2 (2006)).

In the context of a preliminary injunction, it has been established that “the ‘merits' required for the plaintiff to demonstrate a likelihood of success include not only substantive theories but also the establishment of jurisdiction.” Food & Water Watch, Inc. v. Vilsack, 808 F.3d 905, 913 (D.C. Cir. 2015). In order to establish standing, the plaintiff must demonstrate that it has suffered an injury attributable to the defendant's challenged conduct and that such injury is likely to be redressed by a favorable decision. Lujan v. Defs. of Wildlife, 504 U.S. 555, 560-61 (1992). Further, during the preliminary injunction stage, the movant is only required to demonstrate a likelihood of proving standing. Speech First, Inc. v. Fenves, 979 F.3d 319, 330 (5th Cir. 2020). Defendants raise challenges to each essential element of standing for both the Private Plaintiffs and the States. Each argument will be addressed in turn below. For the reasons stated herein, the Court finds that the Plaintiffs have demonstrated a likelihood of satisfying Article III's standing requirements.

i. Injury-in-fact

Plaintiffs seeking to establish injury-in-fact must show that they suffered “an invasion of a legally protected interest” that is “concrete and particularized” and “actual or imminent, not conjectural or hypothetical.” Spokeo, 578 U.S. at 339 (citations and internal quotation marks omitted). For an injury to be “particularized,” it must “affect the plaintiff in a personal and individual way.” Id. (citations and internal quotation marks omitted).

Plaintiffs argue that they have asserted violations of their First Amendment right to speak and listen freely without government interference. In response, Defendants contend that Plaintiffs' allegations rest on dated declarations that focus on long-past conduct, making Plaintiffs' fears of imminent injury entirely speculative. The Court will first address whether the Plaintiff States are likely to prove an injury-in-fact. Then the Court will examine whether the Individual Plaintiffs are likely to prove an injury-in-fact. For the reasons explained below, both the Plaintiff States and Individual Plaintiffs are likely to prove an injury-in-fact.

See [Doc. No. 214, at 66]

See [Doc. No. 266, at 151]

(1) Plaintiff States

In denying Defendants' Motion to Dismiss, this Court previously found that the Plaintiff States had sufficiently alleged injury-in-fact to satisfy Article III standing under either a direct injury or parens patriae theory of standing and that the States were entitled to special solicitude in the standing analysis. At the preliminary injunction stage, the issue becomes whether the Plaintiffs are likely to prove standing. See Speech First, Inc., 979 F.3d at 330. The evidence produced thus far through discovery shows that the Plaintiff States are likely to establish an injury-in-fact through either a parens patriae or direct injury theory of standing.

[Doc. No. 128]

[Doc. No. 224, at 20-33]

Parens patriae, which translates to “parent of the country,” traditionally refers to the state's role as a sovereign and guardian for individuals with legal disabilities. Alfred L. Snapp & Son, Inc. v. Puerto Rico, ex rel., Barez, 458 U.S. 592, 600 n.8 (1982) (quoting Black's Law Dictionary 1003 (5th ed. 1979)). The term “parens patriae lawsuit” has two meanings: it can denote a lawsuit brought by the state on behalf of individuals unable to represent themselves, or a lawsuit initiated by the state to protect its “quasi-sovereign” interests. Id. at 600; see also Kentucky v. Biden, 23 F.4th 585, 596-98 (6th Cir. 2022); Chapman v. Tristar Prod., Inc., 940 F.3d 299, 305 (6th Cir. 2019). A lawsuit based on the former meaning is known as a “third-party” parens patriae lawsuit, and it is clearly established law that states cannot bring such lawsuits against the federal government. Kentucky, 23 F.4th at 596. Thus, to have parens patriae standing, the Plaintiff States must show a likelihood of establishing an injury to one or more of their quasi-sovereign interests.

In Snapp, the United States Supreme Court determined that Puerto Rico had parens patriae standing to sue the federal government to safeguard its quasi-sovereign interests. Snapp, 458 U.S. at 608. The Court identified two types of injuries to a state's quasi-sovereign interests: one is an injury to a significant portion of the state's population, and the other is the exclusion of the state and its residents from benefiting from participation in the federal system. Id. at 607-608. The Court did not establish definitive limits on the proportion of the population that must be affected but suggested that an indication could be whether the injury is something the state would address through its sovereign lawmaking powers. Id. at 607. Based on the injuries alleged by Puerto Rico, the Court found that the state had sufficiently demonstrated harm to its quasi-sovereign interests and had parens patriae standing to sue the federal government. Id. at 609-10.

In Massachusetts v. E.P.A., 549 U.S. 497 (2007), the United States Supreme Court further clarified the distinction between third-party and quasi-sovereign parens patriae lawsuits. There, the Court concluded that Massachusetts had standing to sue the EPA to protect its quasi-sovereign interests. The Court emphasized the distinction between allowing a state to protect its citizens from federal statutes (which is prohibited) and permitting a state to assert its rights under federal law (which it has standing to do). Massachusetts, 549 U.S. at 520 n.17. Because Massachusetts sought to assert its rights under a federal statute rather than challenge its application to its citizens, the Court determined that the state had parens patriae standing to sue the EPA.

Here, the Plaintiff States alleged and have provided ample evidence to support injury to two quasi-sovereign interests: the interest in safeguarding the free-speech rights of a significant portion of their respective populations and the interest in ensuring that they receive the benefits from participating in the federal system. Defendants argue that this theory of injury is too attenuated and that Plaintiffs are unlikely to prove any direct harm to the States' sovereign or quasi-sovereign interests, but the Court does not find this argument persuasive.

Plaintiffs have put forth ample evidence regarding extensive federal censorship that restricts the free flow of information on social-media platforms used by millions of Missourians and Louisianians, and very substantial segments of the populations of Missouri, Louisiana, and every other State. The Complaint provides detailed accounts of how this alleged censorship harms “enormous segments of [the States'] populations.” Additionally, the fact that such extensive examples of suppression have been uncovered through limited discovery suggests that the censorship explained above could merely be a representative sample of more extensive suppressions inflicted by Defendants on countless similarly situated speakers and audiences, including audiences in Missouri and Louisiana. The examples of censorship produced thus far cut against Defendants' characterization of Plaintiffs' fear of imminent future harm as “entirely speculative” and their description of the Plaintiff States' injuries as “overly broad and generalized grievance[s].” The Plaintiffs have outlined a federal regime of mass censorship, presented specific examples of how such censorship has harmed the States' quasi-sovereign interests in protecting their residents' freedom of expression, and demonstrated numerous injuries to significant segments of the Plaintiff States' populations.

See supra, pp. 8-94 (detailing the extent and magnitude of Defendants' pressure and coercion tactics with social-media companies); see also [Doc. No. 214-1, at ¶¶ 1348 (noting that Berenson had nationwide audiences and over 200,000 followers when he was de-platformed on Twitter), 1387 (noting that the Gateway Pundit had more than 1.3 million followers across its social-media accounts before it was suspended), 1397-1409 (noting that Hines has approximately 13,000 followers each on her Health Freedom Louisiana and Reopen Louisiana Facebook pages, approximately 2,000 followers on two other Health Freedom Group Louisiana pages, and that the former Facebook pages have faced increasing censorship penalties and that the latter pages were de-platformed completely), etc.]

[Doc. No. 266, at 151]

Moreover, the materials produced thus far suggest that the Plaintiff States, along with a substantial segment of their populations, are likely to show that they are being excluded from the benefits intended to arise from participation in the federal system. The U.S. Constitution, like the Missouri and Louisiana Constitutions, guarantees the right of freedom of expression, encompassing both the right to speak and the right to listen. U.S. Const. amend. I; Virginia State Bd. of Pharmacy v. Virginia Citizens Consumer Council, Inc., 425 U.S. 748, 756-57 (1976). The United States Supreme Court has acknowledged the freedom of expression as one of the most significant benefits conferred by the federal Constitution. W. Virginia State Bd. of Educ. v. Barnette, 319 U.S. 624, 642 (1943) (“If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.”). Plaintiffs have demonstrated that they are likely to prove that federal agencies, actors, and officials in their official capacity are excluding the Plaintiff States and their residents from this crucial benefit that is meant to flow from participation in the federal system. See Snapp, 458 U.S. at 608.

Accordingly, the Court finds that the States have alleged injuries under a parens patriae theory of standing because they are likely to prove injuries to the States' quasi-sovereign interests in protecting the constitutionally bestowed rights of their citizens.

Further, Plaintiffs have demonstrated direct censorship injuries that satisfy the requirements of Article III as injuries in fact. Specifically, the Plaintiffs contend that Louisiana's Department of Justice, which encompasses the office of its Attorney General, faced direct censorship on YouTube for sharing video footage wherein Louisianans criticized mask mandates and COVID-19 lockdown measures on August 18, 2021, immediately following the federal Defendants' strong advocacy for COVID-related “misinformation” censorship. Moreover, a Louisiana state legislator experienced censorship on Facebook when he posted content addressing the vaccination of children against COVID-19. Similarly, during public meetings concerning proposed county-wide mask mandates held by St. Louis County, a political subdivision of Missouri, certain citizens openly expressed their opposition to mask mandates. However, YouTube censored the entire videos of four public meetings, removing the content because some citizens expressed the view that masks are ineffective. Therefore, this Court finds that the Plaintiff States have also demonstrated a likelihood of establishing an injury-in-fact under a theory of direct injury sufficient to satisfy Article III.

[Doc. No. 214-1, at ¶¶1428-1430]

[Id. at ¶1428]

[Id. at ¶1429]

[Id. at ¶ 1430]

Accordingly, for the reasons stated above and explained in this Court's ruling on the Motion to Dismiss, the Plaintiff States are likely to succeed on establishing an injury-in-fact under Article III.

[Doc. No. 214, at 20-33]

(2) Individual Plaintiffs

In Susan B. Anthony List v. Driehaus, 573 U.S. 149, 158 (2014) (“SBA List”), the Supreme Court held that an allegation of future injury may satisfy the Article III injury-in-fact requirement if there is a “substantial risk” of harm occurring (quoting Clapper v. Amnesty Int'l USA, 568 U.S. 398, 408 (2013)). In SBA List, the petitioner challenged a statute that prohibited making false statements during political campaigns. Id. at 151-52. The Court considered the justiciability of the pre-enforcement challenge and whether it alleged a sufficiently imminent injury under Article III. It noted that pre-enforcement review is warranted when the threatened enforcement is “sufficiently imminent.” Id. at 159. The Court further emphasized that past enforcement is indicative that the threat of enforcement is not “chimerical.” Id. at 164 (quoting Steffel v. Thompson, 415 U.S. 452, 459 (1974)).

Likewise, in Babbitt v. United Farm Workers Nat. Union, 442 U.S. 289, 302 (1979), the Supreme Court found that the plaintiffs satisfied Article III's injury-in-fact requirement because the fear of future injury was not “imaginary or wholly speculative.” There, the Court considered a pre-enforcement challenge to a statute that deemed it an unfair labor practice to encourage consumer boycotts through deceptive publicity. Id. at 301. Because the plaintiffs had engaged in past consumer publicity campaigns and intended to continue those campaigns in the future, the Court found their challenge to the consumer publicity provision satisfied Article III. Id. at 302. Similar pre-enforcement review was recognized in Virginia v. Am. Booksellers Ass'n, Inc., 484 U.S. 383, 386 (1988), where the Supreme Court held that booksellers could seek review of a law criminalizing the knowing display of “harmful to juveniles” material for commercial purposes, as defined by the statute. Virginia, 484 U.S. at 386 (certified question answered sub nom. Commonwealth v. Am. Booksellers Ass'n, Inc., 236 Va. 168 (1988)).

Here, each of the Individual Plaintiffs is likely to demonstrate an injury-in-fact through a combination of past and ongoing censorship. Bhattacharya, for instance, is the apparent victim of an ongoing “campaign” of social-media censorship, which indicates that he is likely to experience future acts of censorship. Similarly, Kulldorff attests to a coordinated federal censorship campaign against the Great Barrington Declaration, which implies future censorship. Kulldorff's ongoing censorship experiences on his personal social-media accounts provide evidence of ongoing harm and support the expectation of imminent future harm. Kheriaty also affirms ongoing and anticipated future injuries, noting that the issue of “shadow banning” his social-media posts has intensified since 2022.

See [Doc. No. 214-1, ¶787 (an email from Dr. Francis Collins to Dr. Fauci and Cliff Lane which read: “Hi [Dr. Fauci] and Cliff, See https://gbdeclaration.org. This proposal from the three fringe epidemiologists who met with the Secretary seems to be getting a lot of attention - and even a co-signature from Nobel Prize winner Mike Leavitt at Stanford. There needs to be a quick and devastating published take down of its premises. I don't see anything like that online yet - is it underway?”), ¶¶1368-1372 (describing the covert and ongoing censorship campaign against him)]

See [Id. at ¶¶1373-1380 (where Kulldorff explains an ongoing campaign of censorship against his personal social-media accounts, including censored tweets, censored posts criticizing mask mandates, removal of LinkedIn posts, and the ongoing permanent suspension of his LinkedIn account)]

[Id.]

[Id. at ¶¶1383-1386]

Hoft and Hines present similar accounts of past, ongoing, and anticipated future censorship injuries. Defendants even appear to be currently involved in an ongoing project that encourages and engages in censorship activities specifically targeting Hoft's website. Hines, too, recounts past and ongoing censorship injuries, stating that her personal Facebook page, as well as the pages of Health Freedom Louisiana and Reopen Louisiana, are constantly at risk of being completely de-platformed. At the time of her declaration, Hines' personal Facebook account was under an ongoing ninety-day restriction. She further asserts, and the evidence supplied in support of the preliminary injunction strongly implies, that these restrictions can be directly traced back to federal officials.

See [Id. at ¶¶1387-1396 (describing the past and ongoing campaign against his website, the Gateway Pundit, which resulted in censorship on Facebook, Twitter, Instagram, and YouTube)]

See [Id. at ¶¶1397-1411]

Each of the Private Plaintiffs alleges a combination of past, ongoing, and anticipated future censorship injuries. Their allegations go beyond mere complaints about past grievances. Moreover, they easily satisfy the substantial risk standard. The threat of future censorship is significant, and the history of past censorship provides strong evidence that the threat of further censorship is not illusory or speculative. Plaintiffs' request for an injunction is not solely aimed at addressing the initial imposition of the censorship penalties but rather at preventing any continued maintenance and enforcement of such penalties. Therefore, the Court concludes that the Private Plaintiffs have fulfilled the injury-in-fact requirement of Article III.

Based on the reasons outlined above, the Court determines that both the States and Private Plaintiffs have satisfied the injury-in-fact requirement of Article III.

ii. Traceability

To establish traceability, or “causation” in this context, a plaintiff must demonstrate a “direct relation between the injury asserted and the injurious conduct alleged.” Holmes v. Sec. Inv. Prot. Corp., 503 U.S. 258, 268 (1992). Therefore, courts examining this element of standing must assess the remoteness, if any, between the plaintiff's injury and the defendant's actions. As explained in Ass'n of Am. Physicians & Surgeons v. Schiff, the plaintiff must establish that it is “‘substantially probable that the challenged acts of the defendant, not of some absent third party' caused or will cause the injury alleged.” 518 F.Supp.3d 505, 513 (D.D.C. 2021), aff'd sub nom. Ass'n of Am. Physicians & Surgeons, Inc. v. Schiff, 23 F.4th 1028 (D.C. Cir. 2022) ("AAPS II") (quoting Fla. Audubon Soc. v. Bentsen, 94 F.3d 658, 663 (D.C. Cir. 1996)).

Plaintiffs argue that they are likely to prove that their injuries are fairly traceable to Defendants' actions of inducing and jointly participating in the social-media companies' viewpoint-based censorship under a theory of “but-for” causation, conspiracy, or aiding and abetting. In support, they cite the above-mentioned examples of switchboarding and other pressure tactics employed by Defendants. In response, Defendants assert that there is no basis upon which this Court can conclude that the social-media platforms made the disputed content-moderation decisions because of government pressure. For the reasons explained below, the Court finds that Plaintiffs are likely to prove that their injuries are fairly traceable to the conduct of the Defendants.

[Doc. No. 204, at 67-68]

[Id. at 69-71 (citing Doc. No. 214-1, ¶¶ 57, 64 (promising the White House that Facebook would censor “often-true” but “sensationalized” content); ¶ 73 (imposing forward limits on non-violative speech on WhatsApp); ¶¶ 89-92 (assuring the White House that Facebook will use a “spectrum of levers” to censor content that “do[es] not violate our Misinformation and Harm policy,” including “true but shocking claims or personal anecdotes, or discussing the choice to vaccinate in terms of personal and civil liberties”); ¶¶ 93-100 (agreeing to censor Tucker Carlson's content at the White House's behest, even though it did not violate platform policies); ¶¶ 103-104 (Twitter deplatforming Alex Berenson at White House pressure); ¶ 171 (Facebook deplatforming the Disinformation Dozen immediately after these comments); and ¶¶ 172, 224 (Facebook officials scrambling to get back into the White House's good graces and pleading for “de-escalation” and “working together”))]

[Doc. No. 266, at 131-136]

In Duke Power Co. v. Carolina Env't Study Grp., the United States Supreme Court found that a plaintiff's injury was fairly traceable to a statute under a theory of “but-for” causation. 438 U.S. 59 (1978). The plaintiffs, who included individuals living near the proposed sites for nuclear plants, challenged a statute that limited the aggregate liability for a single nuclear accident under the theory that, but for the passing of the statute, the nuclear plants would not have been constructed. Id. at 64-65. The Supreme Court agreed with the district court's finding that there was a “substantial likelihood” that the nuclear plants would have been neither completed nor operated absent the passage of the nuclear-friendly statute. Id. at 75.

In Duke Power Co., the defendants essentially argued that the statute was not the “but-for” cause of the injuries claimed by the plaintiffs because if Congress had not passed the statute, the Government would have developed nuclear power independently, and the plaintiffs would have likely suffered the same injuries from government-operated plants as they would have from privately operated ones. Id. In rejecting that argument, the Supreme Court stated:

Whatever the ultimate accuracy of this speculation, it is not responsive to the simple proposition that private power companies now do in fact operate the nuclear-powered generating plants injuring [the plaintiffs], and that their participation would not have occurred but for the enactment and implementation of the Price-Anderson Act. Nothing in our prior cases requires a party seeking to invoke federal jurisdiction to negate the kind of speculative and hypothetical possibilities suggested in order to demonstrate the likely effectiveness of judicial relief.
Id. at 77-78. The Supreme Court's reluctance to follow the defendants down a rabbit-hole of speculation and “what-ifs” is highly instructive.

Here, Defendants heavily rely upon the premise that social-media companies would have censored Plaintiffs and/or modified their content-moderation policies even without any alleged encouragement and coercion from Defendants or other Government officials. This argument is wholly unpersuasive. Unlike previous cases that left ample room to question whether public officials' calls for censorship were fairly traceable to the Government, the instant case paints a full picture. A drastic increase in censorship, deboosting, shadow-banning, and account suspensions directly coincided with Defendants' public calls for censorship and private demands for censorship. Specific instances of censorship substantially likely to be the direct result of Government involvement are too numerous to fully detail, but a bird's-eye view shows a clear connection between Defendants' actions and Plaintiffs' injuries.

See [Doc. No. 204, at 41-44 (where this Court distinguished this case from cases that “left gaps” in the pleadings)]

See, e.g., [Doc. No. 241-1, ¶¶1, 7, 17, 164 (examples of Government officials threatening adverse legislation against social-media companies if they do not increase censorship efforts); ¶¶ 51, 119, 133, 366, 424, 519 (examples of social-media companies, typically following up after an in-person meeting or phone call, assuring Defendants that they would increase censorship efforts)]

The Plaintiffs' theory of but-for causation is easy to follow and demonstrates a high likelihood of success as to establishing Article III traceability. Government officials began publicly threatening social-media companies with adverse legislation as early as 2018. In the wake of COVID-19 and the 2020 election, the threats intensified and became more direct. Around this same time, Defendants began having extensive contact with social-media companies via emails, phone calls, and in-person meetings. This contact, paired with the public threats and tense relations between the Biden administration and social-media companies, seemingly resulted in an efficient report-and-censor relationship between Defendants and social-media companies. Against this backdrop, it is disingenuous to describe the likelihood of proving a causal connection between Defendants' actions and Plaintiffs' injuries as too attenuated or purely hypothetical.

[Doc. No. 214-1, ¶1]

See, e.g., [Id. at ¶ 156 (Psaki reinforcing President Biden's “They're killing people” comment); ¶166 (media outlets reporting tense relations between the Biden administration and social-media companies)]

See, e.g., [Doc. No. 174-1, at 3 (Twitter employees setting up a more streamlined process for censorship requests because the company had been “recently bombarded” with censorship requests from the White House)]

See, e.g., [Doc. Nos. 174-1, at 3 (Twitter employees setting up a more streamlined process for censorship requests because the company had been “recently bombarded” with censorship requests from the White House); at 4 (Twitter suspending a Jill Biden parody account within 45 minutes of a White House official requesting Twitter to “remove this account immediately”); 214-1, at ¶799 (Drs. Bhattacharya and Kulldorff began experiencing extensive censorship on social media shortly after Dr. Collins emailed Dr. Fauci seeking a “quick and devastating take down” of the GBD.); ¶1081 (Twitter removing tweets within two minutes of Scully reporting them for censorship.); ¶¶1266-1365 (explaining how the Virality Project targeted Hines and health-freedom groups.); 214-9, at 2-3 (Twitter assuring the White House that it would increase censorship of “misleading information” following a meeting between White House officials and Twitter employees.); etc.]

The evidence presented thus goes far beyond mere generalizations or conjecture: Plaintiffs have demonstrated that they are likely to prevail and establish a causal and temporal link between Defendants' actions and the social-media companies' censorship decisions. Accordingly, this Court finds that there is a substantial likelihood that Plaintiffs would not have been the victims of viewpoint discrimination but for the coercion and significant encouragement of Defendants towards social-media companies to increase their online censorship efforts.

Because this Court finds that Plaintiffs have successfully shown a likelihood of success under a “but-for” theory of causation, it will not address Plaintiffs' arguments as to other theories of causation. However, the Court does note that caselaw from outside of the Fifth Circuit supports a more lenient theory of causation for purposes of establishing traceability. See, e.g., Tweed-New Haven Airport Auth. v. Tong, 930 F.3d 65, 71 (2d Cir. 2019); Parsons v. U.S. Dep't of Justice, 801 F.3d 701, 714 (6th Cir. 2015).

For the reasons stated above, as well as those set forth in this Court's previous ruling on the Motion to Dismiss, the Court finds that Plaintiffs are likely to succeed in establishing the traceability element of Article III standing.

[Doc. No. 204, at 67-71]

iii. Redressability

The redressability element of the standing analysis requires that the alleged injury is “likely to be redressed by a favorable decision.” Lujan, 504 U.S. at 560-61. “To determine whether an injury is redressable, a court will consider the relationship between ‘the judicial relief requested' and the ‘injury' suffered.” California v. Texas, 141 S.Ct. 2104, 2115, 210 L.Ed.2d 230 (2021) (quoting Allen v. Wright, 468 U.S. 737, 753 n.19 (1984), abrogated by Lexmark Int'l, Inc. v. Static Control Components, Inc., 572 U.S. 118 (2014)). Additionally, courts typically find that where an injury is traceable to a defendant's conduct, it is usually redressable as well. See, e.g., Scenic Am., Inc. v. United States Dep't of Transportation, 836 F.3d 42, 54 (D.C. Cir. 2016) (“[C]ausation and redressability are closely related, and can be viewed as two facets of a single requirement.”); Toll Bros. v. Twp. of Readington, 555 F.3d 131, 142 (3d Cir. 2009) (“Redressability . . . is closely related to traceability, and the two prongs often overlap.”); El Paso Cnty. v. Trump, 408 F.Supp.3d 840, 852 (W.D. Tex. 2019).

Plaintiffs argue that they are likely to prove that a favorable decision would redress their injuries because they have provided ample evidence that their injuries are imminent and ongoing. In response, Defendants contend that any threat of future injury is merely speculative because Plaintiffs rely on dated declarations and focus on long-past conduct of Defendants and social-media companies. For the reasons explained below, the Court finds that Plaintiffs are likely to prove that their injuries would be redressed by a favorable decision.

[Doc. No. 214, at 71-74]

[Doc. No. 266, at 152-157]

As this Court previously noted, a plaintiff's standing is evaluated at the time of filing of the initial complaint in which they joined. Lynch v. Leis, 382 F.3d 642, 647 (6th Cir. 2004); Davis v. F.E.C., 554 U.S. 724, 734 (2008); S. Utah Wilderness All. v. Palma, 707 F.3d 1143, 1153 (10th Cir. 2013). The State Plaintiffs filed suit on May 5, 2022, and the individual Plaintiffs joined on August 2, 2022. Both groups are likely to prove that the threat of future injury is more than merely speculative.

[Doc. No. 204, at 62-65]

[Doc. No. 1]

[Doc. No. 45]

Plaintiff States have produced sufficient evidence to demonstrate a likelihood of proving ongoing injuries as of the time the Complaint was filed. For instance, on June 13, 2022, Flaherty still wanted to “get a sense of what [Facebook was] planning” and denied the company's request for permission to stop submitting its biweekly “Covid Insights Report” to the White House. Specifically, Flaherty wanted to monitor Facebook's suppression of COVID-19 misinformation “as we start to ramp up [vaccines for children under the age of five].” The CDC also remained in collaboration with Facebook in June of 2022, and Facebook even delayed implementing policy changes “until [it got] the final word from [the CDC].” After coordinating with the CDC and White House, Facebook informed the White House of its new and government-approved policy, stating: “As of today, [June 22, 2022], all COVID-19 vaccine related misinformation and harm policies on Facebook and Instagram apply to people 6 months or older.”

[Doc. No. 214-1, at ¶425]

[Id.]

[Doc. Nos. 71-7, at 6; 214-1, ¶424]

[Doc. Nos. 71-7, at 6; 71-3, at 5; 214-1, ¶¶424-425]

Likewise, the individual Plaintiffs are likely to demonstrate that their injuries were imminent and ongoing as of August 2, 2022. Evidence obtained thus far indicates that Defendants have plans to continue the alleged censorship activities. For example, preliminary discovery revealed CISA's expanding efforts in combating misinformation, with a focus on the 2022 elections. As of August 12, 2022, Easterly was directing the “mission of Rumor Control” for the 2022 midterm elections, and CISA candidly reported that it was “bee[fing] up [its] efforts to fight falsehoods” in preparation for the 2024 election cycle. Chan of the FBI also testified at his deposition that online disinformation continues to be discussed among the federal agencies and social-media companies at the USG Industry meetings, and Chan assumes that this will continue through the 2024 election cycle. All of this suggests that Plaintiffs are likely to prove that the risk of future censorship injuries is more than merely speculative. Additionally, past decisions to suppress speech result in ongoing injury as long as the speech remains suppressed, and the past censorship experienced by individual Plaintiffs continues to inhibit their speech in the present. These injuries also affect the rights of the Plaintiffs' audience members, including those in Plaintiff States, who have the First Amendment right to receive information free from Government interference.

[Doc. No. 71-8, at 2; Doc. 86-7, at 14]

[Doc. No. 86-7, at 14]

[Doc. No. 214-1, at ¶1106]; see also [Doc. No. 71-8, at 2 (CISA “wants to ensure that it is set up to extract lessons learned from 2022 and apply them to the agency's work in 2024.”)]

[Id. at ¶ 866]

Accordingly, and for the reasons stated above, the Court finds that Plaintiffs are likely to prove that a favorable decision would redress their injuries because those injuries are ongoing and substantially likely to reoccur.

iv. Recent United States Supreme Court cases of Texas and Haaland

Defendants cite to two recent cases from the Supreme Court of the United States which they claim undermine this Court's previous ruling about the Plaintiff States' likelihood of proving Article III standing.

First, Defendants argue that United States v. Texas, No. 22-58, 2023 WL 4139000 (U.S. June 23, 2023), undermines the States' Article III standing. In Texas, Texas and Louisiana sued the Department of Homeland Security (the “Department”), as well as other federal agencies, claiming that the recently promulgated “Guidelines for the Enforcement of Civil Immigration Law” contravened two federal statutes. Id. at *2. The Supreme Court held that the states lacked Article III standing because “a citizen lacks standing to contest the policies of the prosecuting authority when he himself is neither prosecuted nor threatened with prosecution.” The Court further noted that the case was “categorically different” from other standing decisions “because it implicates only one discrete aspect of the executive power-namely, the Executive Branch's traditional discretion over whether to take enforcement actions against violators of federal law.” Id. at *2, *8 (citations omitted).

Here, the Plaintiff States are not asserting a theory that the Defendants failed to act in conformity with the Constitution. To the contrary, the Plaintiff States assert that Defendants have affirmatively violated their First Amendment right to free speech. The Plaintiff States allege and (as extensively detailed above) are likely to prove that the Defendants caused direct injury to the Plaintiff States by significantly encouraging and/or coercing social-media companies to censor posts made on social-media. Further, as noted in this Court's previous ruling, the Plaintiff States are likely to have Article III standing because a significant portion of the Plaintiff States' population has been prevented from engaging with the posts censored by the Defendants. The Supreme Court noted that “when the Executive Branch elects not to arrest or prosecute, it does not exercise coercive power over an individual's liberty or property, and thus does not infringe upon interests that courts are often called upon to protect.” Id. at *5. Here, federal officials allegedly did exercise coercive power, and the Plaintiffs are likely to prevail on their claim that the Defendants violated the First Amendment rights of the Plaintiff States, their citizens, and the Individual Plaintiffs.

Defendants contend that the Supreme Court in Texas narrowed the application of special solicitude afforded to states because the Supreme Court noted that the standing analysis in Massachusetts “d[id] not control” because “[t]he issue there involved a challenge to the denial of a statutorily authorized petition for rulemaking,” rather than the exercise of enforcement discretion. Id. at *8 n.6. This Court disagrees with Defendants on that point. As noted by Plaintiffs, the majority opinion in Texas does not mention special solicitude. Further, this Court noted in its previous analysis of standing that the Plaintiff States could satisfy Article III's standing requirements without special solicitude. Therefore, even to the extent this Court “leaves that idea on the shelf,” as suggested in Justice Gorsuch's concurrence, the Court nonetheless finds that the Plaintiff States are likely to prove Article III standing.

Defendants also argue that the Supreme Court's recent ruling in Haaland v. Brackeen, No. 21-376, 2023 WL 4002951 (U.S. June 15, 2023), undermines the Plaintiff States' Article III standing. In Haaland, the Supreme Court ruled that Texas did not possess standing to challenge the placement provisions of the Indian Child Welfare Act, which prioritizes Indian families in custody disputes involving Indian children. Id. at *19. The Supreme Court reasoned that Texas could not “assert equal protection claims on behalf of its citizens because ‘[a] State does not have standing as parens patriae to bring an action against the Federal Government.'” Id. (quoting Snapp, 458 U.S. at 610 n.16). The Defendants argue that this statement precludes parens patriae standing in the present case. However, in its brief discussion regarding parens patriae standing, the Haaland Court quoted footnote 16 from Snapp, which, in turn, reiterated the “Mellon bar.” Haaland, 2023 WL 4002951, at *19; Snapp, 458 U.S. at 610 n.16 (quoting Massachusetts v. Mellon, 262 U.S. 447, 485-86 (1923)).

[Doc. 289, at 2].

Plaintiffs correctly note that, although both cases employ broad language, neither Haaland nor Snapp elaborates on the extent of the “Mellon bar.” Moreover, the Supreme Court has clarified in other instances that parens patriae suits are permitted against the federal government outside the scope of the Mellon bar. See Massachusetts v. EPA, 549 U.S. at 520 n.17 (explaining the “critical difference” between parens patriae suits barred by Mellon and parens patriae suits allowed against the federal government).

Consistent with Massachusetts v. EPA, this Court has previously determined that the Mellon bar applies to “third-party parens patriae suits,” but not to “quasi-sovereign-interest suits.” In Haaland, Texas presented a “third-party parens patriae suit,” as opposed to a “quasi-sovereign-interest suit,” as it asserted the equal protection rights of only a small minority of its population (i.e., non-Indian foster or adoptive parents seeking to foster or adopt Indian children against the objections of relevant Indian tribes), which clearly did not qualify as a quasi-sovereign interest. See Haaland, 2023 WL 4002951, at *19 & n.11. Here, however, Louisiana and Missouri advocate for the rights of a significant portion of their populations, specifically the hundreds of thousands or millions of citizens who are potential audience members affected by federal social-media speech suppression.

[Doc. 224, at 215-26], quoting Kentucky v. Biden, 23 F.4th 585, 598 (6th Cir. 2022).

Furthermore, when the Haaland Court determined that Texas lacked third-party standing, it stressed that Texas did not have either a “‘concrete injury' to the State” or any hindrance to the third party's ability to protect its own interests. Id. at *19 n.11 (quoting Georgia v. McCollum, 505 U.S. 42, 55-56 (1992)). Here, by contrast, the Plaintiff States have demonstrated a likelihood of succeeding on their claims that they have suffered, and likely will continue to suffer, numerous concrete injuries resulting from federal social-media censorship. Additionally, the ability of the third parties in this case to protect their own interests is hindered because the diffuse First Amendment injury experienced by each individual audience member in Louisiana and Missouri lacks sufficient economic impact to encourage litigation through numerous individual lawsuits. See Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 64 n.6 (1963).

See, e.g., [Doc. 214-1, ¶¶ 1427-1442]

Defendants further contend that Haaland rejected Texas's argument that the ICWA's placement provisions required Texas to compromise its commitment to impartiality in child-custody proceedings. However, the Supreme Court rejected this argument for a specific reason: “Were it otherwise, a State would always have standing to bring constitutional challenges when it is complicit in enforcing federal law.” Haaland, 2023 WL 4002951, at *19. By contrast, Missouri and Louisiana do not assert that the federal government mandates their complicity in enforcing federal social-media-censorship regimes. The Plaintiff States instead assert that they, along with a substantial portion of their populations, have been injured by Defendants' actions.

[Doc. 289, at 3] quoting Haaland, 2023 WL 4002951, at *19.

Neither Texas nor Haaland undermines this Court's previous ruling that the Plaintiff States have Article III standing to sue Defendants in the instant case. Further, the evidence produced thus far through limited discovery demonstrates that Plaintiffs are likely to succeed on their First Amendment claims. Accordingly, the Court finds that Plaintiffs are likely to prove all elements of Article III standing, and therefore, are likely to establish that this Court has jurisdiction.

2. Irreparable Harm

The second requirement for a Preliminary Injunction is a showing of irreparable injury: plaintiffs must demonstrate “a substantial threat of irreparable injury” if the injunction is not issued. Texas, 809 F.3d at 150. For injury to be “irreparable,” plaintiffs need only show it cannot be undone through monetary remedies. Burgess v. Fed. Deposit Ins. Corp., 871 F.3d 297, 304 (5th Cir. 2017). Deprivation of a procedural right to protect a party's concrete interests is irreparable injury. Texas, 933 F.3d at 447. Additionally, violation of a First Amendment constitutional right, even for a short period of time, is always irreparable injury. Elrod, 427 U.S. at 373.

Plaintiffs argue in their memorandum that the First Amendment violations are continuing and/or that there is a substantial risk that future harm is likely to occur. In contrast, Defendants argue that Plaintiffs are unable to show imminent irreparable harm because the alleged conduct occurred in the past, is not presently occurring, and is unlikely to occur in the future. Defendants argue that Plaintiffs rely upon actions that occurred approximately one year ago and that such conduct cannot be remedied by any prospective injunctive relief. Further, Defendants argue that there is no “imminent harm” because the COVID-19 pandemic is over and because the elections where the alleged conduct occurred are also over.

The Court finds that Plaintiffs have demonstrated a “significant threat of injury from the impending action, that the injury is imminent, and that money damages would not fully repair the harm.” Humana, Inc. v. Jacobson, 804 F.2d 1390, 1394 (5th Cir. 1986). To demonstrate irreparable harm at the preliminary injunction stage, Plaintiffs must adduce evidence showing that the irreparable injury is likely to occur during the pendency of the litigation. Justin Indus., Inc. v. Choctaw Secs., L.P., 920 F.2d 262, 268 n.7 (5th Cir. 1990). This Plaintiffs have done.

Defendants argue that the alleged suppression of social-media content occurred in response to the COVID-19 pandemic and attacks on election infrastructure, and therefore, the alleged conduct is no longer occurring. Defendants point out that the alleged conduct occurred between one and three years ago. However, the information submitted by Plaintiffs was at least partially based on preliminary injunction-related discovery and third-party subpoena requests that were submitted to five social-media platforms on or about July 19, 2022. The original Complaint was filed on May 5, 2022, and most of the responses to preliminary injunction-related discovery provided answers to discovery requests that occurred before the Complaint was filed. Since completion of preliminary injunction-related discovery took over six months, most, if not all, of the information obtained would be at least one year old.

[Doc. No. 34]

[Doc. No. 37]

[Doc. No. 1]

Further, the Defendants' decision to stop some of the alleged conduct does not make it any less relevant. A defendant claiming that its voluntary compliance moots a case bears the formidable burden of showing that it is absolutely clear the alleged wrongful behavior could not reasonably be expected to recur. Already, LLC v. Nike, Inc., 568 U.S. 85, 91 (2013). Defendants have not yet met this burden here.

Defendants also argue that, due to the delay in seeking relief, the Plaintiffs have not shown “due diligence.” However, this Court finds that Plaintiffs have exercised due diligence. This is a complicated case that required a great deal of discovery to obtain the necessary evidence. Although it has taken several months to obtain this evidence, the delay was not the fault of the Plaintiffs. Most of the information Plaintiffs needed was unobtainable except through discovery.

Plaintiffs allege actions occurring as far back as 2020.

Defendants further argue the risk that Plaintiffs will sustain injuries in the future is speculative and depends upon the action of the social-media platforms. Defendants allege the Plaintiffs have therefore not shown imminent harm by any of the Defendants.

In Susan B. Anthony List v. Driehaus, 573 U.S. 149, 158 (2014) (“SBA List”), the Supreme Court held that, for purposes of an Article III injury-in-fact, an allegation of future injury may suffice if there is “a ‘substantial risk' that the harm will occur” (quoting Clapper v. Amnesty Int'l USA, 568 U.S. 398, 408 (2013)). In SBA List, a petitioner challenged a statute that prohibited making certain false statements during the course of a political campaign. Id. at 151-52. In deciding whether the pre-enforcement challenge was justiciable, and in particular whether it alleged a sufficiently imminent injury for purposes of Article III, the Court noted that pre-enforcement review is warranted under circumstances that render the threatened enforcement “sufficiently imminent.” Id. at 159. Specifically, the Court noted that past enforcement is “good evidence that the threat of enforcement is not ‘chimerical.'” Id. at 164 (quoting Steffel v. Thompson, 415 U.S. 452, 459 (1974)).

Similarly, in Babbitt v. United Farm Workers Nat. Union, 442 U.S. 289, 302 (1979), the Supreme Court held that a complaint alleges an Article III injury-in-fact where fear of future injury is not “imaginary or wholly speculative.” In Babbitt, the Supreme Court considered a pre-enforcement challenge to a statute that made it an unfair labor practice to encourage consumers to boycott using “dishonest, untruthful, and deceptive publicity.” Id. at 301. Because the plaintiffs had engaged in consumer publicity campaigns in the past and alleged an intention to continue those campaigns in the future, the Court held that their challenge to the consumer publicity provision presented an Article III case or controversy. Id. at 302; see also Virginia v. Am. Booksellers Ass'n, Inc., 484 U.S. 383, 386 (1988) (where the Supreme Court held that booksellers could seek pre-enforcement review of a law making it a crime to “knowingly display for commercial purpose” material that is “harmful to juveniles,” as defined by the statute).

Therefore, the question is whether Plaintiffs have alleged a “substantial risk” that harm may occur, which is not “imaginary or wholly speculative.” This Court finds that the alleged past actions of Defendants show a substantial risk of harm that is not imaginary or speculative. SBA List, 573 U.S. at 164. Defendants apparently continue to have meetings with social-media companies and other contacts.

[Doc. No. 204-1 at 40]

Although the COVID-19 pandemic is no longer an emergency, it is not imaginary or speculative to believe that, in the event of any other real or perceived emergency, the Defendants would once again use their power over social-media companies to suppress alternative views. And it is certainly not imaginary or speculative to predict that Defendants could use their power over millions of people to suppress alternative views or moderate content they do not agree with in the upcoming 2024 national election. At oral argument, Defendants were not able to state that the “switchboarding” and other election activities of the CISA Defendants and the State Department Defendants would not resume prior to the upcoming 2024 election; in fact, Chan testified that, post-2020, “we've never stopped.” Notably, a draft copy of the DHS's “Quadrennial Homeland Security Review,” which outlines the department's strategy and priorities in upcoming years, states that the department plans to target “inaccurate information” on a wide range of topics, including the origins of the COVID-19 pandemic, the efficacy of COVID-19 vaccines, racial justice, the U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.

[Doc. No. 208 at 122]

[Chan depo. at 8-9]

[Doc. No. 209-23 at 4]

The Plaintiffs are likely to succeed on the merits of their claim that there is a substantial risk of harm that is not imaginary or speculative. Plaintiffs have shown that not only have the Defendants shown a willingness to coerce and/or to give significant encouragement to social-media platforms to suppress free speech with regard to the COVID-19 pandemic and national elections, but they have also shown a willingness to do so with regard to other issues, such as gas prices, parody speech, calling the President a liar, climate change, gender, and abortion. On June 14, 2022, White House National Climate Advisor Gina McCarthy, at an Axios event entitled “A Conversation on Battling Disinformation,” was quoted as saying, “We have to get together; we have to get better at communicating, and frankly, the tech companies have to stop allowing specific individuals over and over to spread disinformation.”

[Doc. No. 212-3 at 65-66, ¶ 211]

[Id. at 58-60, ¶¶ 180-188]

[Id. at 61, ¶ 190]

[Id. at 63-64, ¶¶ 200-203]

[Id. at 64-65, ¶¶ 204-208]

[Id. at 65, ¶¶ 209-210]

[Doc. No. 214-15]

The Complaint (and its amendments) shows numerous allegations of apparent future harm. Plaintiff Bhattacharya alleges ongoing social-media censorship. Plaintiff Kulldorff alleges an ongoing campaign of censorship against the GBD and his personal social-media accounts. Plaintiff Kheriaty also alleges ongoing and expected future censorship, noting that the “shadow-banning” of his social-media accounts has intensified since 2022. Plaintiffs Hoft and Hines also allege ongoing and expected future censorship injuries. It is not imaginary or speculative that the Defendants will continue to use this power. It is likely.

[Doc. No. 45-3, ¶¶ 15-33]

[Doc. No. 45-4, ¶¶ 14-16]

[Doc. No. 45-7, ¶¶ 12-18]

[Id. at ¶ 15]

[Doc. No. 45-7 at ¶¶ 12-18]; [Doc. No. 84 at ¶¶ 401-420]; [Doc. No. 45-12 at ¶ 4, 12]

The Court finds that Plaintiffs are likely to succeed on their claim that they have shown irreparable injury sufficient to satisfy the standard for the issuance of a preliminary injunction.

3. Equitable Factors and Public Interest

Thus far, Plaintiffs have satisfied the first two elements to obtain a preliminary injunction. The final two elements they must satisfy are that the threatened harm outweighs any harm that may result to the Federal Defendants and that the injunction will not undermine the public interest. Valley v. Rapides Par. Sch. Bd., 118 F.3d 1047, 1051 (5th Cir. 1997). These two factors overlap considerably. Texas, 809 F.3d at 187. In weighing equities, a court must balance the competing claims of injury and must consider the effect on each party of the granting or withholding of the requested relief. Winter v. Nat. Res. Def. Council, Inc., 555 U.S. 7, 24 (2008). The public interest factor requires the court to consider what public interests may be served by granting or denying a preliminary injunction. Sierra Club v. U.S. Army Corps of Engineers, 645 F.3d 978, 997-98 (8th Cir. 2011).

Defendants maintain that their interest in being able to report misinformation and warn social-media companies of foreign actors' misinformation campaigns outweighs the Plaintiffs' interest in the right of free speech. This Court disagrees and finds that the balance of equities and the public interest strongly favor the issuance of a preliminary injunction. The public interest is served by maintaining the constitutional structure and the First Amendment free speech rights of the Plaintiffs. The right of free speech is a fundamental constitutional right that is vital to the freedom of our nation, and Plaintiffs have produced evidence of a massive effort by Defendants, from the White House to federal agencies, to suppress speech based on its content. Defendants' alleged suppression has potentially resulted in millions of free speech violations. Plaintiffs' free speech rights thus far outweigh the rights of Defendants, and Plaintiffs therefore satisfy the final elements needed to show entitlement to a preliminary injunction.

4. Injunction Specificity

Lastly, Defendants argue that Plaintiffs' proposed preliminary injunction lacks the specificity required by Federal Rule of Civil Procedure 65 and is impermissibly overbroad. Rule 65(d)(1) requires an injunction to “state its terms specifically” and to “describe in reasonable detail the act or acts restrained or required.” The specificity provisions of Rule 65(d) are designed to prevent uncertainty and confusion on the part of those faced with injunction orders and to avoid possible contempt based upon a decree too vague to be understood. Atiyeh v. Capps, 449 U.S. 1312, 1316-17 (1981). An injunction must be narrowly tailored to remedy the specific action that gives rise to the injunction. Scott v. Schedler, 826 F.3d 207, 211 (5th Cir. 2016).

This Court believes that an injunction can be narrowly tailored to affect only prohibited activities, while not prohibiting government speech or agency functions. That the injunction may be difficult to tailor is not an excuse to allow potential First Amendment violations to continue. Thus, the Court is not persuaded by Defendants' arguments here.

Because Plaintiffs have met all the elements necessary to show entitlement to a preliminary injunction, this Court shall issue such injunction against the Defendants described above.

IV. CLASS CERTIFICATION

In their Third Amended Complaint, the Individual Plaintiffs purport to bring a class action “on behalf of themselves and two classes of other persons similarly situated to them.” Plaintiffs go on to describe the two proposed classes and to state generally that each requirement for class certification is met. Defendants opposed Plaintiffs' request for class certification in their Response to Plaintiffs' Motion for Class Certification and for Leave to File Third Amended Complaint.

[Doc. No. 268 at ¶489].

[Id. at ¶¶490-501].

[Doc. No. 244].

The Court is obligated to analyze whether this litigation should proceed as a class action. See Castano v. Am. Tobacco Co., 84 F.3d 734, 740 (5th Cir. 1996) (“A district court must conduct a rigorous analysis of the rule 23 prerequisites before certifying a class.”). Pursuant to this obligation, the Court questioned counsel at the hearing on the preliminary injunction as to the basis for class certification. As explained in further detail below, the Court finds that Plaintiffs failed to meet their burden of proof, and class certification is improper here.

A. Class Certification Standard under FRCP 23

“The decision to certify is within the broad discretion of the court, but that discretion must be exercised within the framework of rule 23.” Id. at 740. “The party seeking certification bears the burden of proof.” Id.

Federal Rule of Civil Procedure 23(a) lays out the four key prerequisites for a class action. It states:

One or more members of a class may sue or be sued as representative parties on behalf of all members only if:
(1) the class is so numerous that joinder of all members is impracticable;
(2) there are questions of law or fact common to the class;
(3) the claims or defenses of the representative parties are typical of the claims or defenses of the class; and
(4) the representative parties will fairly and adequately protect the interests of the class.
Fed. R. Civ. P. 23(a).

In addition to the enumerated requirements above, Plaintiffs must propose a class that has an objective and precise definition. “The existence of an ascertainable class of persons to be represented by the proposed class representative is an implied prerequisite of Federal Rule of Civil Procedure 23.” John v. Nat'l Sec. Fire & Cas. Co., 501 F.3d 443, 445 (5th Cir. 2007).

“In addition to satisfying Rule 23(a)'s prerequisites, parties seeking class certification must show that the action is maintainable under Rule 23(b)(1), (2), or (3).” Amchem Prod., Inc. v. Windsor, 521 U.S. 591, 614 (1997). Here, Plaintiffs specifically bring this class action under Rule 23(b)(2), which allows for maintenance of a class action where “the party opposing the class has acted or refused to act on grounds that apply generally to the class, so that final injunctive relief or corresponding declaratory relief is appropriate respecting the class as a whole.” Fed.R.Civ.P. 23(b)(2). “Civil rights cases against parties charged with unlawful, class-based discrimination are prime examples” of Rule 23(b)(2) class actions. Amchem Prod., Inc., 521 U.S. at 614.

Notably, the Fifth Circuit recently held that a standing analysis is necessary before engaging in the class certification analysis. Angell v. GEICO Advantage Ins. Co., 67 F.4th 727, 733 (5th Cir. 2023). However, because this Court has already completed multiple standing analyses in this matter, and because the Court ultimately finds that the class should not be certified, the Court will not address which standing test should be applied to this specific issue.

B. Analysis

In order to certify this matter as a class action, the Court must find that Plaintiffs have established each element of Rule 23(a). See In re Monumental Life Ins. Co., 365 F.3d 408, 414-15 (5th Cir. 2004) (“All classes must satisfy the four baseline requirements of rule 23(a): numerosity, commonality, typicality, and adequacy of representation.”). The Court finds that Plaintiffs failed to meet their burden, and therefore, the Court will not certify the class action.

1. Class Definition

Plaintiffs propose two classes to proceed with their litigation as a class action. First, Plaintiffs define Class 1 as follows:

The class of social-media users who have engaged or will engage in, or who follow, subscribe to, are friends with, or are otherwise connected to the accounts of users who have engaged or will engage in, speech on any social-media company's platform(s) that has been or will be removed; labelled; used as a basis for suspending, deplatforming, issuing strike(s) against, demonetizing, or taking other adverse action against the speaker; downranked; deboosted; concealed; or otherwise suppressed by the platform after Defendants and/or those acting in concert with them flag or flagged the speech to the platform(s) for suppression.
Next, Plaintiffs define Class 2 as follows:
The class of social-media users who have engaged in or will engage in, or who follow, subscribe to, are friends with, or are otherwise connected to the accounts of users who have engaged in or will engage in, speech on any social-media company's platform(s) that has been or will be removed; labelled; used as a basis for suspending, deplatforming, issuing strike(s) against, demonetizing, or taking other adverse action against the speaker; downranked; deboosted; concealed; or otherwise suppressed by the company pursuant to any change to the company's policies or enforcement practices that Defendants and/or those acting in concert with them have induced or will induce the company to make.

[Doc. No. 268 at ¶490]

[Id. at ¶491]

“It is elementary that in order to maintain a class action, the class sought to be represented must be adequately defined and clearly ascertainable.” DeBremaecker v. Short, 433 F.2d 733, 734 (5th Cir. 1970). The Court finds that the class definitions provided by Plaintiffs are neither “adequately defined” nor “clearly ascertainable.” Simply put, there is no way to tell just how many people or what type of person would fit into these proposed classes. The proposed class definitions are so broad that almost every person in America, and perhaps in many other countries as well, could fit into the classes. The Court agrees with Defendants that the language used is simply too vague to maintain a class action using these definitions. Where a class definition is, as here, “too broad and ill-defined” to be practicable, the class should not be certified. See Braidwood Mgmt., Inc. v. Equal Emp. Opportunity Comm'n, No. 22-10145, 2023 WL 4073826, at *14 (5th Cir. June 20, 2023).

[Doc. No. 244 at 7]

Further, no evidence was produced at the hearing on the motion for preliminary injunction that “would have assisted the district court in more accurately delineating membership in a workable class.” DeBremaecker, 433 F.2d at 734. The Court questioned Plaintiffs' counsel about the issues with the proposed class definitions, but counsel was unable to provide a solution that would make class certification feasible here. Counsel for Plaintiffs stated that “the class definition is sufficiently precise,” but the Court fails to see how that is so, and counsel did not explain any further. Counsel for Plaintiffs focused on the fact that the proposed class action falls under Rule 23(b)(2), providing for broad injunctive relief, and therefore, counsel argued that the Court would not need to “figure out every human being in the United States of American [sic] who was actually adversely affected.” Even if the Court does not need to identify every potential class member individually, the Court still needs to be able to state the practical bounds of the class definition, something it cannot do with the loose wording given by Plaintiffs.

Hearing Transcript at 181, line 15.

[Id. at lines 16-18]

Without a feasible class definition, the Court cannot certify Plaintiffs' proposed class action. Out of an abundance of caution, however, the Court will address the other enumerated prerequisites of Rule 23(a) below.

2. Numerosity

The numerosity requirement mandates that a class be “so numerous that joinder of all members is impracticable.” Fed.R.Civ.P. 23(a)(1). “Although the number of members in a proposed class is not determinative of whether joinder is impracticable,” classes with a significantly high number of potential members easily satisfy this requirement. Mullen v. Treasure Chest Casino, LLC, 186 F.3d 620, 624 (5th Cir. 1999) (finding class of 100 to 150 members satisfied the numerosity requirement). Other factors, such as “the geographical dispersion of the class” and “the nature of the action,” may also support a finding that the numerosity element has been met. Id. at 624-25.

Here, Plaintiffs state that both Class 1 and Class 2 are “sufficiently numerous that joinder of all members is impracticable.” Plaintiffs reference the “content of hundreds of users with, collectively, hundreds of thousands or millions of followers” who were affected by Defendants' alleged censorship. Thus, based on a surface-level look at potential class members, it appears that the numerosity requirement would be satisfied because the class members' numbers reach at least into the thousands, if not the millions.

[Doc. No. 268 at ¶¶492-93]

[Id. at ¶492]

However, the numerosity requirement merely serves to highlight the same issue described above: the potential class is simply too broad to even begin to fathom who would fit into the class. Joinder of all the potential class members is more than impractical-it is impossible. Thus, while the sheer number of potential class members may tend towards class certification, the Court is only further convinced by Plaintiffs' inability to estimate the vast number of class members that certification is improper here.

3. Commonality

The commonality requirement ensures that there are “questions of law or fact common to the class.” Fed.R.Civ.P. 23(a)(2). “The test for commonality is not demanding and is met ‘where there is at least one issue, the resolution of which will affect all or a significant number of the putative class members.'” Mullen, 186 F.3d at 625 (quoting Lightbourn v. County of El Paso, 118 F.3d 421, 426 (5th Cir. 1997)).

Here, Plaintiffs state that both classes share common questions of law or fact, including “the question whether the government is responsible for a social-media company's suppression of content that the government flags to the company for suppression” for Class 1 and “the question whether the government is responsible for a social-media company's suppression of content pursuant to a policy or enforcement practice that the government induced the company to adopt or enforce” for Class 2. These questions of law are broadly worded and may not properly characterize the specific issues being argued in this case.

[Id. at ¶¶494-95]

At the hearing for the preliminary injunction, Plaintiffs' counsel clarified that the alleged campaign of censorship “involv[es] a whole host of common questions whose resolution are going to determine whether or not there's a First Amendment violation.” The Court agrees that there is certainly a common question of First Amendment law that impacts each member of the proposed classes, but notes Defendants' well-reasoned argument that Plaintiffs may be attempting to aggregate too many questions into one class action. The difficulty of providing “a single, class-wide answer,” as highlighted by Defendants, further proves to this Court that class certification is likely not the best way to proceed with this litigation. Although commonality is a fairly low bar, the Court is not convinced Plaintiffs have met their burden on this element of Rule 23(a).

Hearing Transcript, at 183, lines 19-21.

[Doc. No. 244 at 10]

[Id. at 13]

4. Typicality

The typicality requirement mandates that named parties' claims or defenses “are typical . . . of the class.” Fed.R.Civ.P. 23(a)(3). “Like commonality, the test for typicality is not demanding.” Mullen, 186 F.3d at 625. It “focuses on the similarity between the named plaintiffs' legal and remedial theories and the theories of those whom they purport to represent.” Lightbourn, 118 F.3d at 426.

Here, Plaintiffs assert that the Individual Plaintiffs' claims are typical of both Class 1 and Class 2 members' claims because they “all arise from the same course of conduct by Defendants . . . namely, the theory that such conduct violates the First Amendment.” Further, Plaintiffs state that the Individual Plaintiffs “are not subject to any affirmative defenses that are inapplicable to the rest of the class and likely to become a major focus of the case.”

[Doc. No. 268 at ¶496-97]

[Id.]

While the general claims of each potential class member would arise from the Defendants' alleged First Amendment violations, the Individual Plaintiffs have not explained how their claims are typical of each proposed class specifically. For example, Class 2 includes those social-media users who “follow, subscribe to, are friends with, or are otherwise connected to the accounts of users” subject to censorship. While the Individual Plaintiffs detail at length their own censorship, they do not clarify how they have been harmed by the censorship of other users. Again, this confusion highlights the myriad issues with this proposed class action as a result of the ill-defined and over-broad class definitions. The Court cannot make a finding that the Individual Plaintiffs' claims are typical of all class members' claims, simply because the Court cannot identify who would fit in the proposed class. Merely stating that the Rule 23(a) requirements have been met is not enough to persuade this Court that the class should be certified as stated.

[Id. at ¶491]

5. Adequate Representation

The final element of a class certification analysis requires that the class representatives “fairly and adequately protect the interests of the class.” Fed.R.Civ.P. 23(a)(4). “Differences between named plaintiffs and class members render the named plaintiffs inadequate representatives only if those differences create conflicts between the named plaintiffs' interests and the class members' interests.” Mullen, 186 F.3d at 626.

On this element, Plaintiffs state that they “are willing and able to take an active role in the case, control the course of litigation, and protect the interest of absentees in both classes.” Plaintiffs also state that “[n]o conflicts of interest currently exist or are likely to develop” between themselves and the absentees. This element is likely met, absent evidence to the contrary.

[Id. at ¶498]

[Id.]

However, without a working class definition, and with the issues concerning the other Rule 23(a) elements discussed above, the Court finds class certification inappropriate here, regardless of the adequacy of the Individual Plaintiffs' representation. Thus, for the foregoing reasons, the Court declines to certify this matter as a class action.

V. CONCLUSION

Once a government is committed to the principle of silencing the voice of opposition, it has only one place to go, and that is down the path of increasingly repressive measures, until it becomes a source of terror to all its citizens and creates a country where everyone lives in fear.
Harry S. Truman

The Plaintiffs are likely to succeed on the merits in establishing that the Government has used its power to silence the opposition. Opposition to COVID-19 vaccines; opposition to COVID-19 masking and lockdowns; opposition to the lab-leak theory of COVID-19; opposition to the validity of the 2020 election; opposition to President Biden's policies; statements that the Hunter Biden laptop story was true; and opposition to policies of the government officials in power. All were suppressed. It is quite telling that each example or category of suppressed speech was conservative in nature. This targeted suppression of conservative ideas is a perfect example of viewpoint discrimination of political speech. American citizens have the right to engage in free debate about the significant issues affecting the country.

Although this case is still relatively young, and at this stage the Court is only examining it in terms of Plaintiffs' likelihood of success on the merits, the evidence produced thus far depicts an almost dystopian scenario. During the COVID-19 pandemic, a period perhaps best characterized by widespread doubt and uncertainty, the United States Government seems to have assumed a role similar to an Orwellian “Ministry of Truth.”

An “Orwellian 'Ministry of Truth'” refers to the concept presented in George Orwell's dystopian novel, '1984.' In the novel, the Ministry of Truth is a governmental institution responsible for altering historical records and disseminating propaganda to manipulate and control public perception.

The Plaintiffs have presented substantial evidence in support of their claims that they were the victims of a far-reaching and widespread censorship campaign. This Court finds that they are likely to succeed on the merits of their First Amendment free speech claim against the Defendants. Therefore, a preliminary injunction should issue immediately against the Defendants as set out herein. The Plaintiffs' Motion for Preliminary Injunction [Doc. No. 10] is GRANTED IN PART and DENIED IN PART.

The Plaintiffs' request to certify this matter as a class action pursuant to Fed.R.Civ.P. 23(b)(2) is DENIED.

