Executive Order Directed to Section 230 to Increase Regulatory Scrutiny of Online Services
On May 28, 2020, President Trump signed an "Executive Order on Preventing Online Censorship" directed to Section 230 of the Communications Decency Act, 47 U.S.C. § 230. Section 230 has long afforded protections to interactive computer services against litigation over their hosting and moderation of online content. In hundreds of cases since the law was passed in 1996, courts have held that Section 230 immunizes online services from claims based upon their hosting of third-party content and their editorial decisions to remove such content from their platforms. The Executive Order does not, and could not, undo those precedents, much less unilaterally rewrite Section 230. Instead, it directs various activities within the Executive Branch that will increase regulatory scrutiny of online services.
Background on Section 230
Online service providers routinely rely on Section 230 to defend against claims related to their hosting or removal of user content. While Section 230 has several components, the most frequently invoked is Section 230(c)(1). Courts have uniformly held that Section 230(c)(1) affords a broad immunity against claims that seek to hold online services liable for hosting or publishing third-party content.1 This immunity applies regardless of how a plaintiff styles its claims, regardless of whether the service provider is on notice of alleged problems with the content, and regardless of the service provider's motivation in hosting or providing access to the content.2 Where the immunity applies, service providers can invoke it to secure dismissal at the earliest stage of a case, allowing them to avoid costly litigation battles.3
Many courts have also found that Section 230(c)(1) protects service providers against claims based on their decision to remove or limit access to third-party content.4 These courts reason that a decision to remove content is no different from a decision to host it: both are core publishing functions.
A separate provision of the statute, Section 230(c)(2)(A), expressly shields online services from liability for actions they take in good faith to "restrict access to or availability of" material that they or their users "consider[] to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." 47 U.S.C. § 230(c)(2)(A). Unlike Section 230(c)(1), this protection requires the online service invoking it to establish that its removal decision was made "in good faith." Courts have split on the meaning of this requirement: some find good faith where the service provider subjectively believes the content falls within one of the enumerated categories, while others apply a more objective standard.5 Typically, where a plaintiff makes a colorable allegation that a service acted with some improper motive in removing content (such as for anticompetitive reasons), courts have ruled that the service provider's "good faith" is a fact issue that cannot be resolved at the pleading stage of the case.
The Executive Order's Discussion of Section 230
The new Executive Order focuses on Section 230(c)(2)(A) rather than Section 230(c)(1). That focus limits the impact of the Order because, as noted, service providers more frequently rely on the latter protection.
In any event, citing concerns that service providers engage in viewpoint discrimination when they remove content, the Order suggests that such "discriminatory" removals may fall outside the protections of Section 230(c)(2)(A). Specifically, the Order maintains that service providers lack the "good faith" required by Section 230(c)(2)(A) if they "engage in deceptive and pretextual actions ... to stifle viewpoints with which they disagree." The Order thus appears most concerned not with the removal of content generally, but with service providers who mislead about the reasons for such removals. As examples, the Order calls out services that remove content in ways that are inconsistent with their own Terms of Service, and services that remove content to serve some political bias but pretextually claim that the removal was for other reasons. The Order declares that such removals are in "bad faith" and should not be protected under Section 230(c)(2)(A).
On its face, the Order's interpretation of Section 230(c)(2)(A) is not a radical departure from current law. Private litigants who sue online platforms for removing their content often claim that the removal was pretextual and that the service holds some political or other bias against them. In the rare case where such claims have been supported by plausible allegations that the service did not act in good faith, courts have allowed them to survive the Section 230(c)(2)(A) immunity. Given that risk, services have increasingly invoked the separate immunity in Section 230(c)(1) to secure dismissal of claims based on the service's decision to remove, or to withdraw from publication, user content. The Executive Order does not speak directly to the scope or application of Section 230(c)(1).
The Executive Order's Various Directives
Beyond highlighting concerns over supposed "deceptive and pretextual" removals, the Order calls upon various elements of the Executive Branch to examine Section 230 and its impact. Specifically, the Order:
- directs all executive departments and agencies to ensure that their application of Section 230(c) properly reflects the "narrow purpose" of Section 230(c)(2)(A) and to take "all appropriate actions" in this regard;
- directs the National Telecommunications and Information Administration to request, within 60 days, that the Federal Communications Commission (FCC) open a rulemaking into the proper application of Section 230, and specifically into: (i) when a service provider acts "in good faith" under Section 230(c)(2)(A); and (ii) whether Section 230(c)(1) shields services from claims based on their editorial decisions when a service provider does not meet the requirements of Section 230(c)(2)(A);
- directs federal agencies to conduct a review of their use of online services for advertising and marketing purposes, and to prepare reports of that use for the Office of Management and Budget within 30 days; the Department of Justice is to review the reports and determine whether "any online platforms are problematic vehicles for government speech due to viewpoint discrimination, deception to consumers, or other bad practices";
- directs the Federal Trade Commission to consider whether certain platforms that are "vast arenas for public debate" are engaged in deceptive trade practices by, for example, restricting speech in ways that do not align with their public representations about their practices;
- convenes a working group of state attorneys general to discuss whether they can pursue deceptive trade practice claims based on platforms' misrepresentations about their content moderation practices;
- directs the Attorney General to develop a proposal for federal legislation "to promote the policy objectives" of the Order.
All of these measures portend heightened difficulties for online services before federal and state regulators. Services have always faced the possibility of regulatory action over supposedly deceptive content moderation practices, but the Order encourages and gives political cover to such action, making it more likely. The Order's implicit threat that federal agencies may pare back their advertising spending on services that employ "bad practices" carries direct financial risks and notable constitutional implications. And while the FCC's rulemaking authority regarding Section 230 is murky at best, any FCC pronouncements about the statute will likely complicate and increase the cost of civil litigation for service providers.
The bottom line is that a bedrock principle of internet jurisprudence is under a regulatory microscope. Online services and internet users need to remain informed, vigilant, and engaged as debate over Section 230 intensifies.
1 Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009); Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).
2 There are exceptions in the statute for intellectual property claims, wiretapping claims, claims about child pornography and sex trafficking, and federal criminal prosecutions. 47 U.S.C. § 230(e)(1)-(5).
3 Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d 250, 255 (4th Cir. 2009).
4 See, e.g., Sikhs for Justice "SFJ", Inc. v. Facebook, Inc., 144 F. Supp. 3d 1088, 1095 (N.D. Cal. 2015), aff'd, 697 F. App'x 526 (9th Cir. 2017); Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009); Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).
5 Holomaxx Techs. v. Microsoft Corp., 783 F. Supp. 2d 1097, 1105 (N.D. Cal. 2011) (considering the meaning of "good faith"). Courts have also disagreed on the breadth of the "otherwise objectionable" catch-all language. Compare e360Insight, LLC v. Comcast Corp., 546 F. Supp. 2d 605, 608 (N.D. Ill. 2008) (defendant's subjective determination that spam email messages were objectionable sufficient to grant immunity), with Sherman v. Yahoo! Inc., 997 F. Supp. 2d 1129, 1138 (S.D. Cal. 2014) ("declin[ing] to broadly interpret 'otherwise objectionable' material to include any or all information or content.").