In re Soc. Media Adolescent Addiction/Personal Injury Prods. Liab. Litig.

United States District Court, Northern District of California
Nov 14, 2023
MDL 3047 (N.D. Cal. Nov. 14, 2023)

MDL 3047 4:22-md-03047-YGR

IN RE SOCIAL MEDIA ADOLESCENT ADDICTION/PERSONAL INJURY PRODUCTS LIABILITY LITIGATION This Document Relates to: Individual Plaintiffs' Master Amended Complaint


ORDER GRANTING IN PART AND DENYING IN PART DEFENDANTS' MOTIONS TO DISMISS

Re: Dkt. Nos. 237 & 320

YVONNE GONZALEZ ROGERS UNITED STATES DISTRICT JUDGE

This Order addresses the first wave of legal arguments stemming from the filing, on behalf of children and adolescents, of hundreds of individual cases across the United States against five companies operating some of the world's most used social media platforms: Meta's Facebook and Instagram, Google's YouTube, ByteDance's TikTok, and Snap's Snapchat. Notably, this multi-district litigation (“MDL”) encompasses, in addition to individual suits, over 140 actions brought on behalf of school districts and actions filed jointly by over thirty state Attorneys General. While plaintiffs' complaint asserts eighteen claims against defendants, this Order addresses only defendants' motions to dismiss the individual plaintiffs' five priority claims.

For clarity, the primary defendants in this litigation are Alphabet, Inc.; ByteDance, Inc.; Facebook Holdings, LLC, Facebook Operations, LLC, Facebook Payments, Inc., Facebook Technologies, LLC (collectively, “Facebook”); Google LLC; Instagram, LLC; Meta Platforms, Inc., Meta Payments, Inc., Meta Technologies, LLC (collectively, “Meta”); TikTok, Inc., TikTok, LLC, TikTok, Ltd. (collectively, “TikTok”); Snap, Inc. (“Snap”); and YouTube, LLC. A Table of Contents outlining the organization of this Order is attached as Appendix A for the reader's convenience.

MDL courts frequently phase motion-to-dismiss briefing to determine whether the gravamen of the complaint can proceed to discovery while, in parallel, legal analysis of the remaining claims proceeds. Here, defendants were adamant that the entirety of the complaint should be dismissed under Section 230 of the Communications Decency Act of 1996 and the First Amendment. If correct, there would be no need to analyze the remaining claims.

For the reasons set forth in this Order, based on a careful review of the pleadings and the briefing submitted by the parties as well as oral argument heard on October 27, 2023, the Court GRANTS IN PART and DENIES IN PART the motions to dismiss plaintiffs' products liability claims (Priority Claims 1-4), and FINDS that Section 230 and the First Amendment do not bar plaintiffs' negligence per se claim (Priority Claim 5). By way of summary, the Court finds that the parties' “all or nothing” approach to the motions to dismiss does not sufficiently address the complexity of the issues facing this litigation. Rather, the Court has conducted an analysis of the actual functionality defects alleged in the complaint, and in that context, has determined whether (and to what extent) the claims against the platforms can proceed.

I. Background

A. Procedural Background

Plaintiffs' Master Amended Complaint (hereinafter, “MAC”) is nearly 300 pages long and asserts eighteen claims under various state laws on behalf of hundreds of plaintiffs. For efficiency purposes, the Court required plaintiffs to identify their five priority claims and preferred state law. (See Dkt. No. 131.) They are:

Claim 1: Strict Products Liability Design Defect - New York
Claim 2: Strict Products Liability Failure to Warn - New York
Claim 3: Product-Based Negligent Design Defect - Georgia
Claim 4: Product-Based Negligent Failure to Warn - Georgia
Claim 5: Negligence Per Se - Oregon

Defendants filed two separate consolidated dismissal motions in response to the MAC. The first (hereinafter, “MTD1”) addressed whether plaintiffs have legally stated each of the five priority claims. The second (hereinafter, “MTD2”) focused on immunity and protections under Section 230 and the First Amendment.

Defendant Snap filed supplemental briefs in support of MTD1. See Dkt. Nos. 238 & 324. MTD2 came later, as the Court was awaiting the possible impact of the Supreme Court's decision in Gonzalez v. Google. Though that case raised questions regarding the scope of Section 230, the Supreme Court ultimately did not reach them. See generally Gonzalez, et al. v. Google LLC, 598 U.S. 617 (2023). The Court notes here that the parties failed to focus on the law of the identified preferred states on many key issues, which increased inefficiencies in resolving the pending motions. In the future, counsel shall address any concerns regarding the import of administrative mechanisms with the Court prior to motion practice.

B. Relevant Facts Alleged

The Court focuses on the allegations relevant to the pending motions. Thus, the Master Amended Complaint alleges as follows:

Defendants are companies that own and operate “social media” platforms. (MAC ¶ 1.) Each platform allows users to create profiles and to share content including messages, videos, and photos. Significantly, these platforms are more than mere message boards or search engines. In addition to enabling users to look for content, or to send content to other specific users, in many instances, the platforms determine when and to whom certain content is shown. As noted, the platforms here at issue are Facebook and Instagram, both operated by Meta; Snapchat; TikTok; and YouTube.

Snap's argument that Snapchat is not a social media platform and is instead “a camera application” fails to persuade. Dkt. No. 238, Snap's Supplemental Brief in Support of Defs' MTD1 (hereinafter, “Snap's Supplemental MTD Brief”) at 3:6-7. Further, this is not the appropriate procedural posture to resolve such fact-based arguments.

Use of these platforms is generally free. Defendants make money primarily by selling advertising space. Marketers covet such space because defendants possess vast data about users. This enables them to target advertisements to specific audiences. Given this business model, profits from these platforms are highly dependent on the number of users, the amount of time each user spends on the platform, and the amount of information a user provides, directly or indirectly, to the platform about themselves.

i. Defect Allegations

As pled, defendants target children as a core market and design their platforms to appeal to and addict them. Because children, whose impulse control is still developing, are uniquely susceptible to harms arising out of compulsive use of social media platforms, defendants have “created a youth mental health crisis” through the defective design of their platforms. (Id. at ¶ 96.) Further, these platforms facilitate and contribute to the sexual exploitation and sextortion of children, as well as the ongoing production and spread of child sex abuse materials (“CSAM”) online.

The Court refers herein to “children,” “child,” and “adolescent” interchangeably. The Court understands that plaintiffs' claims may require, at a future date, the Court to discern between allegations or claims relative to minors of particular ages. This is not necessary for purposes of this Order.

Plaintiffs use the word “sextortion” to describe “nightmarish scheme[s]” in which “a predator threatens to circulate the sexual images of [a] minor unless the predator is paid to keep the images under wraps.” MAC ¶ 143. Law enforcement allegedly reports that sextortion is “pervasive” on defendants' platforms. Id.

To that end, defendants know, from both public and internal data, that children use their products. (See, e.g., id. at ¶ 60.) Indeed, the ability to estimate a user's age and other characteristics increases the value of defendants' platforms to advertisers. (Id. at ¶¶ 60-61.) Further, defendants specifically try to cultivate children as users. They believe that early adoption of their platforms will increase the likelihood a child will continue to use the platform as they age. Given their susceptibility to the addictive elements of the platforms, adolescents are more likely to use them for long periods of time, allowing defendants to sell more space to advertisers. (See, e.g., id. at ¶ 54.) Millions of children use defendants' platforms “compulsively.” Many report that they feel they are addicted to the platforms, wish they used them less, and feel harmed by them. (Id. at ¶¶ 91-95.)

Defendants are also aware that their platforms harm child users. (Id. at ¶ 99.) Beginning in at least 2014, researchers began demonstrating that addictive and compulsive use of defendants' platforms leads to negative mental and physical outcomes for children. (Id. at ¶ 101; see also id. at ¶¶ 96-124 (discussing nearly a decade's worth of “scientific and medical studies” linking compulsive use of defendants' platforms to negative health outcomes).) At least some defendants also knew about these harms from internal data and studies. (See, e.g., id. at ¶¶ 181-85 (as to Meta).)

The MAC describes myriad ways in which the design of defendants' platforms causes the harms described above. These aspects, or functions, include:

Endless-content: This describes the “endless feeds” of content shown to users via defendants' platforms. (Id. at ¶ 845(i).) One example is Facebook's “News Feed,” which presents users a continuous feed of stories, advertisements, and other content, and which never ends. (Id. at ¶ 202; see also id. at ¶¶ 494, 496 (as to analogous Snapchat features); 584-85, 591-92 (as to “continuous scrolling” via TikTok's “For You” page); 700-701 (as to YouTube's “autoplay” functionality).)

Lack of Screen Time Limitations: These designs concern maximizing the length of user sessions and the lack of default or user-imposed protections to limit session duration, such as by time of day or frequency of use. (Id. at ¶¶ 845(e)-(h), (j).) For instance, the TikTok app “intentionally omits the concept of time.” The app does not show users the time or date a video was uploaded, and the app “is designed to cover the clock displayed at the top of users' iPhones, preventing them from keeping track of time spent” in the app. (Id. at ¶¶ 621-22.)

Plaintiffs acknowledge that, “after receiving public criticism regarding its [platform's] effects on people's mental health,” TikTok introduced a “Take a Break” feature, which assists users in limiting their app screen time, beginning in June 2022. When a minor has spent 100 minutes using the app on a given day, the app will show them, upon re-opening the app later that day, a message reminding them that the “Take a Break” feature exists. This feature is not enabled by default. Id. at ¶ 624.

Intermittent Variable Rewards or “IVR”: Here, defendants designed algorithms to strategically time when they show content to users in order to maximize engagement. (Id. at ¶ 845(l); see also ¶¶ 77-81 (explaining how IVR works, generally, and how defendants deploy IVR on their platforms).) For example, Instagram may wait until a piece of content receives multiple likes before notifying the user who posted it. That way, the user's dopamine reaction is intensified after viewing the notification. (Id. at ¶ 79.) TikTok may similarly delay showing a video it knows a user will like until the moment before it anticipates the user would otherwise log out of the app. (Id.)

Ephemeral Content: To create a sense of urgency for users to engage with content, some defendants limit how long certain content is available. (Such content is sometimes referred to as “ephemeral” given its “disappearing” nature.) For example, the defining feature of Snapchat is the ability to send and receive “Snaps,” photo or video messages that disappear within a short period of time. (Id. at ¶ 444; see also id. at ¶¶ 294 (as to ephemeral Instagram and Facebook “Stories”); 626-27 (as to disappearing TikTok “Stories”).) Such content also makes it harder to track the spread of CSAM and enables coercive, predatory behavior toward children. (See, e.g., id. at ¶ 523 (describing how Snapchat's ephemeral content contributes to such harms).)

See, e.g., id. ¶ 143 (describing how ephemeral content works and emphasizing the risk it poses to the production and spread of CSAM).

Limitations on Content Length: The length of content that can be posted is limited to optimize use. (See, e.g., id. at ¶ 224 (as to Instagram videos of up to fifteen seconds long).)

Notifications: Defendants send users notifications on their phones, by text and by email, to draw them back to their respective platforms. For example, the platform may alert users when someone they follow creates new content, or when someone reacts to their content. (Id. at ¶¶ 292-93 (as to Meta).) This includes pushing notifications to users late at night, prompting them to re-engage with the app no matter the cost to their sleep schedule. (Id. at ¶¶ 103 (as to defendants, generally); 488 (as to Snap).) Some defendants also notify users of content created by defendants themselves. For example, Snap rewards continuous engagement with the app by providing “elevated status,” “trophies,” and other awards to frequent users. (Id. at ¶¶ 439, 468 (describing the range of social metrics through which Snapchat “reward[s] users when they engage with [the app] and punish them when they fail to [do so].”).)

Algorithmic Prioritization of Content: Defendants use engagement-based algorithms that promote content to users based on the likelihood it will keep them engaged with and using the platform, rather than posting content as specifically directed by users or in chronological order. (See, e.g., id. at ¶¶ 227 (as to Instagram); 200 (as to Facebook).) For instance, TikTok tracks user behavior, such as time spent on a given video, so that it can provide a “never-ending stream of TikToks optimized to hold [users'] attention.” (Id. at ¶ 585 (citation omitted) (alteration in original).) Plaintiffs allege this can be harmful to children not only because it promotes compulsive use, but because it may expose children to “rabbit holes” of harmful or inappropriate content. For example, a child experiencing depression may spend time on a video about suicide and then find themselves receiving an increasing number of suicide-related videos. (Id. at ¶¶ 597-601.)

Filters: Defendants provide users with tools, such as filters, so that they can edit photos and videos before posting and/or sharing them. This enables the proliferation of “idealized” content reflecting “fake appearances and experiences,” resulting in, among other things, “harmful body image comparisons.” (Id. at ¶ 88.) For example, Snapchat includes “lenses and filters” that allow users to “blur[] imperfections,” “even[] out skin tone,” and alter facial features and skin color. (Id. at ¶¶ 513-14; see also id. at ¶¶ 314-17 (explaining how Instagram filters enable users to make “improvements” to their appearance, resulting in a range of social comparison, self-hatred, and other harms).) Exposure to these filtered, sometimes unrealistic images creates body image and self-esteem issues among youth users. At present, defendants do not inform users when an image has been altered through filters or otherwise edited. As a result, young users are unable to distinguish edited from unedited content. (Id. at ¶ 845(k); see also id. at ¶ 318 (as to Meta).)

Barriers to Deletion: Each defendant makes it more challenging to delete and/or deactivate a user account than to create one in the first place, thereby creating barriers to children discontinuing use of defendants' apps, even if they want to. (See, e.g., id. at ¶¶ 358-60 (as to Facebook); 489-90 (as to Snapchat); 638-48 (as to TikTok); 774-77 (as to YouTube).) For instance, a user seeking to delete or deactivate their Facebook or Instagram account “must locate and tap on approximately seven different buttons (through seven different pages and popups) from th[eir] main feed[s].” (Id. at ¶ 359.) Yet, even after navigating that process, they are not able to immediately delete or deactivate their account; instead, Meta imposes a 30-day waiting period during which a user can reactivate their account simply by logging in. (Id. at ¶ 360.)

Connection of Child and Adult Users: Some platforms “recommend minor accounts to adult strangers.” (Id. at ¶ 845(u).) These include “quick add” functions that recommend that users “friend,” “follow,” or otherwise connect with other users. These features recommend connections between adult and child users, facilitating the exploitation of children by adult predators. (See, e.g., id. at ¶ 198 (as to Facebook); ¶¶ 509-10 (as to Snapchat).)

Private Chats: Some defendants have private chat functions, which can be harmful to children as they further enable private communication with adult predators. (See, e.g., id. at ¶¶ 197 (as to Facebook); 225 (as to Instagram).)

Geolocation: Some defendants allow children to share their location with other users, such as by geotagging posts. This too can be used by predators. (See, e.g., id. at ¶¶ 506-07 (as to Snapchat); see also id. ¶ 845(t).)

Age-Verification: Defendants either do not require users to enter their age upon sign-up or do not have effective age-verification for users, even though such verification technology is readily available and, in some instances, used by defendants in other contexts. For example, Meta purports not to allow children under thirteen to access Facebook. The platform relies on a user's self-reported age when they sign up for the platform to enforce this policy. When a user enters a birthdate showing they are under thirteen years old, they will be blocked from completing the registration process. However, immediately thereafter, the platform permits them to recomplete the sign-up form, enter an earlier birthday (even if it does not accurately reflect their age), and create an account. (Id. at ¶¶ 328-32.) Snapchat's age verification systems are similarly defective. (Id. at ¶ 461.)

See id. ¶ 59 (“None of the Defendants conduct proper age verification or authentication. Instead, each Defendant leaves it to users to self-report their age. This unenforceable and facially inadequate system allows children under 13 to easily create accounts on Defendants' apps.”).

Lack of Parental Controls: Defendants offer parents limited tools for controlling their children's access to and use of their respective platforms. Further, their apps do not require parental consent for children to create new accounts, or, where parental consent is required for child-users, children can easily circumvent the requirement by inputting a fake age, as described above. Where the platforms provide tools for parents to control or monitor their child's use, the tools are inadequate. For example, Snapchat allows parents to “link” to a child's account and see with whom they communicate, but the app does not enable parents to see what messages are being sent or to control access to many of the app's features. (Id. at ¶ 522.)

* * *

The failure to warn claims are similarly based on the above-referenced alleged defects in defendants' platforms. (See id. at ¶¶ 431-37 (Meta); 543-53 (Snap); 675-89 (TikTok); 812-19 (YouTube).)

ii. Allegations Regarding Causation and Harm

The MAC contains two categories of allegations relative to causation. As to the first, plaintiffs allege that the “defective features” of defendants' platforms caused their negative physical, mental, and emotional health outcomes, such as anxiety, depression, and self-harm. (See generally id. at ¶ 90.) They support these allegations by making three logical moves. First, they explain, in great detail, how defendants' platforms work. Second, they assert these platforms are designed to (and in fact do) addict minor users. Third, they show that compulsive use of such platforms results in the harms alleged. (See generally ¶¶ 181-437 (Meta); 438-553 (Snap); 554-689 (TikTok); 690-819 (Google).) As to the second, the MAC is also replete with references to research studies tying use of defendants' platforms to the types of injuries alleged by plaintiffs. (See id. at ¶ 101; see also id. at ¶¶ 96-124 (collecting studies).)

The MAC contains additional causation allegations relative to harm caused to plaintiffs by adult third parties using defendants' platforms. Plaintiffs allege, for instance, that the defective design of each defendant's respective platform facilitates harms to minor users arising out of third parties' use of the platforms. See, e.g., id. at ¶ 134. For example, plaintiffs plead that, “[e]ach Defendant knew or should have known that the design” of their platforms “attracts, enables, and facilitates child predators, and that such predators use [their] apps to recruit and sexually exploit children for the production of CSAM and its distribution ....” Id. at ¶ 164. Elsewhere, they contend that an increased “risk of sexual exploitation, sexual abuse, and sextortion of children” is “a direct and foreseeable consequence of Defendants' connecting children to sexual predators.” Id. at ¶ 144. Plaintiffs make such allegations as to each defendant and a range of design defects.

iii. Negligence Per Se Allegations Relative to Section 230 & the First Amendment

Here, the Court notes that the MAC alleges claims for negligence per se based on defendants' violations of two federal statutes, the Children's Online Privacy Protection Act (“COPPA”), 15 U.S.C. §§ 6501-6506, and the Protect Our Children Act (“Protect Act”), 18 U.S.C. §§ 2258A, 2258B. Reportedly, approximately two dozen cases assert these allegations, and theories as to defendants' violation of these statutes differ.

In general, plaintiffs allege defendants violate COPPA by failing to: (i) provide, through their respective websites and apps, a clear, understandable, and complete notice to parents describing each defendant's collection, use, and/or disclosure of children's personal information, in violation of 16 C.F.R. § 312.4(a) and (c); (ii) make reasonable efforts, taking into account available technology, to ensure parents receive such notices on their respective websites and apps such that they can provide informed consent, in violation of 16 C.F.R. § 312.4(b)-(c); and (iii) obtain verifiable parental consent before any collection, use, or disclosure of children's personal information, in violation of 16 C.F.R. § 312.5(a)(1). (See MAC ¶ 1010.)

Plaintiffs allege defendants violate the Protect Act by failing to: (i) “minimize the numbers of [ ] employees with access to visual depictions of [p]laintiffs,” and (ii) report “the violations of child pornography laws that they suspect[] to be in existence within [their] respective [platforms].” (Id. at ¶¶ 1004, 1006.)

II. Legal Framework

A. Law to Apply in an MDL

In an MDL, the transferee court applies the law of its circuit to issues of federal law, but on issues of state law it applies the state law that would have been applied to the underlying case as if it had never been transferred into the MDL. In re Anthem, Inc. Data Breach Litig., 2015 WL 5286992, at *2 (N.D. Cal. Sept. 9, 2015) (collecting cases). This may require a court to apply different law to the individual cases within the MDL. See In re Dow Co. “Sarabond” Prods. Liab. Litig., 666 F.Supp. 1466, 1468-70 (D. Colo. 1987) (applying the law of four different circuits to different cases in the same MDL).

B. Motion to Dismiss Standard

The standard under Federal Rule of Civil Procedure 12(b)(6) is well-known and not in dispute. “To survive a motion to dismiss for failure to state a claim after the Supreme Court's decisions in Iqbal and Twombly, plaintiffs' allegations must suggest that their claim has at least a plausible chance of success.” Levitt v. Yelp! Inc., 765 F.3d 1123, 1134-35 (9th Cir. 2014) (cleaned up). The district court must assume that the plaintiffs' allegations are true and draw all reasonable inferences in their favor. The court need not, however, construe as true conclusory statements or unreasonable inferences. In re Gilead Scis. Sec. Litig., 536 F.3d 1049, 1055 (9th Cir. 2008). These well-established standards apply with equal force in MDL proceedings. See In re Optical Disk Drive Antitrust Litig., 2011 WL 3894376, at *8-*9 (N.D. Cal. Aug. 3, 2011) (applying such standard in the context of an MDL); In re Zofran (Ondansetron) Prod. Liab. Litig., 2017 WL 1458193, at *5 (D. Mass. Apr. 24, 2017) (the “creation of an MDL proceeding does not suspend [or change] the requirements of the Federal Rules of Civil Procedure”).

C. Organization of Analysis

The claims at issue raise multiple broad and distinct theories of harm regarding a wide variety of alleged conduct by defendants. In the interest of efficiency and clarity, this Order is organized as follows:

The Court first addresses the extent to which plaintiffs' priority claims are barred, if at all, by Section 230 (Section III) or the First Amendment (Section IV). Given the complexity of the issues, the Court outlines the legal framework in detail before conducting its analysis. In that regard, the Court looks to paragraphs 845 and 864 of the MAC, as well as the parties' dismissal briefing, to define the list of alleged defects at issue. It is these detailed, conduct-specific allegations that require analysis.

At the October 27 hearing, plaintiffs agreed that the strict products liability and product-based negligence claims (Priority Claims 1-4) derive from the same alleged defects. Thus, the Court finds that the defects identified herein are alleged as to all of the products liability claims. Further, also at the hearing, plaintiffs agreed that paragraphs 845(d), (o), (q), (r), and (s) from the MAC do not apply to plaintiffs' products liability claims, and in that regard are stricken. Except as otherwise stated, the Court does not here determine if plaintiffs have alleged each defect as to each platform. Rather, the Court here refers to salient examples of each defect from the various platforms. If the Court allows a claim to proceed as to an alleged defect, defendants will have an opportunity to raise with the Court whether plaintiffs have alleged the defect is present in their platform.

Next, the Court assesses whether plaintiffs have stated their products liability claims in terms of the existence of a product (Section V), duty (Section VI), and general causation (Section VII).

III. Section 230

A. Section 230(c)(1) Overview

Defendants contend that Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, bars all of plaintiffs' priority claims. Plaintiffs, in turn, argue that Section 230 bars none. The Court finds that neither side's all-or-nothing approach fairly or accurately represents the Ninth Circuit's application of Section 230 immunity. Before starting its analysis of defendants' motion, the Court sets forth this authority.

The Court begins with the statute which provides:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
47 U.S.C. § 230(c)(1). Pursuant to Section 230(e)(3), “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

There are some exceptions that are not relevant to the claims currently alleged.

By way of background, prior to passage of the relevant portions of Section 230 in 1996, websites had a perverse incentive not to monitor or remove any harmful content from their sites. If they monitored and removed some content, they could be held liable for harmful content not otherwise removed; by contrast, if they did nothing, liability for third-party content did not attach. Congress passed the CDA “for two basic policy reasons: to promote the free exchange of information and ideas over the Internet and to encourage voluntary monitoring for offensive or obscene material.” Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1122 (9th Cir. 2003) (citations omitted).

Child safety and well-being also constitute explicit goals of the CDA. The statute itself provides that among its policy goals are: “encourag[ing] the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services; [and] remov[ing] disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children's access to objectionable or inappropriate online material.” 47 U.S.C. § 230(b)(3)-(4).

Through the CDA, online publishers are granted greater freedom from liability than their traditional counterparts. Batzel v. Smith, 333 F.3d 1018, 1026-27 (9th Cir. 2003) (cleaned up) (“As a matter of policy, Congress decided not to treat providers of interactive computer services like other information providers such as newspapers, magazines or television and radio stations . . . Congress . . . has chosen to treat cyberspace differently.”).

B. Tests to Determine Applicability of Immunity Protections

i. The Barnes Test

The Ninth Circuit has articulated a three-part test for determining if a claim is entitled to Section 230(c)(1) immunity:

Section 230(c)(1) of the CDA “only protects from liability (1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.”
Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100 (9th Cir. 2009), as amended (Sept. 28, 2009) (footnotes omitted). Hereinafter, the Court refers to this as the Barnes test.

Though usually addressed as separate, sequential prongs, the three parts of the test overlap, making them redundant in some cases. For example, if the Court finds that a platform is acting as an information content provider at prong one, it necessarily finds that the information at issue is not entirely “provided by another” for purposes of prong three. Regardless, the Court sets forth each step as a separate analysis.

Here, plaintiffs allege that defendants fail to meet the second prong. The Court thus directs the bulk of its analysis there.

a. Prong 1: Interactive Computer Services and Information Content Providers

Prong one provides that the statute only protects providers or users of an “interactive computer service.” As this element is not disputed, the Court only briefly addresses this prong.

“The term ‘information content provider' means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” 47 U.S.C. § 230(f)(3). The term “‘interactive computer service' means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.” 47 U.S.C. § 230(f)(2).

“The term ‘access software provider' means a provider of software (including client or server software) or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.” 47 U.S.C. § 230(f)(4). Defendants' motion implies that Section 230 provides protection for all these activities. Not so. It provides companies that perform these activities with the limitation on liability described in Section 230(c)(1). For example, if they are transmitting or displaying their own content rather than third-party content, that activity is not entitled to immunity.

A platform can be both an interactive computer service and an information content provider. For example, a website that provides articles that it writes as well as comments written by third parties is both an interactive computer service and an information content provider. In conducting the Section 230(c)(1) analysis, courts must consider whether the platform is a service or content provider “for the portion of the statement or publication at issue.” Carafano, 339 F.3d at 1123 (emphasis supplied).

b. Prong 2: Publisher or Speaker

The second prong of the Barnes test focuses on “whether ‘the duty the plaintiff alleges' stems ‘from the defendant's status or conduct as a publisher or speaker.'” Lemmon v. Snap, Inc., 995 F.3d 1085, 1091 (9th Cir. 2021) (quoting Barnes, 570 F.3d at 1107). A claim meets this prong where the claim is based on “behavior that is identical to publishing or speaking.” Barnes, 570 F.3d at 1107 (emphasis supplied). Critically, Section 230 does not create immunity simply because publication of third-party content is relevant to or a but-for cause of the plaintiff's harm. The issue is whether the defendant's alleged duty to the plaintiff could “have been satisfied without changes to the content posted by the website's users and without conducting a detailed investigation.” Doe v. Internet Brands, Inc., 824 F.3d 846, 851 (9th Cir. 2016).

Doe v. Internet Brands is thus instructive. There, the plaintiff was an aspiring model who was assaulted at a fake audition third parties had posted on defendant's website. The plaintiff alleged that defendant was aware of the third parties' scheme to assault models using the website, but failed to provide any warning to users such as plaintiff. Importantly, defendant's alleged awareness stemmed “from an outside source, not from monitoring postings” on its site. Id. at 849. The court held that Section 230 did not bar the claim because “Jane Doe's failure to warn claim has nothing to do with Internet Brands' efforts, or lack thereof, to edit, monitor, or remove user generated content. Plaintiff's theory [was] that Internet Brands should be held liable, based on its knowledge of the rape scheme.... The duty to warn allegedly imposed by California law would not require Internet Brands to remove any user content or otherwise affect how it publishes or monitors such content.” Id. Additionally, “[a]ny alleged obligation to warn could have been satisfied without changes to the content posted by the website's users and without conducting a detailed investigation.” Id. While the harm was inextricable from the third-party content (the fake audition post), the conduct at issue was not the defendant's publication of the advertisement; it was the failure to warn about the advertisement while holding information indicating that it was likely fake and dangerous.

Further, in Lemmon v. Snap, Inc., 995 F.3d 1085, 1089 (9th Cir. 2021), the Ninth Circuit reversed a district court that found Section 230 barred plaintiffs' claims against Snap. The case concerned a “speed filter” created by Snap. Users could open Snapchat and take a video of themselves while the filter showed the speed at which they were moving. Such content could then be posted or shared on Snapchat. Plaintiffs alleged that it was commonly believed that Snap would somehow reward photos or videos posted with the filter showing the user went more than 100 mph, and that Snap was aware of this belief. Plaintiffs' children died in a car accident after their car ran off the road while traveling at over one hundred miles per hour. During the accident, one of them had the speed filter open on their phone.

Section 230 did not apply because plaintiffs did not seek to hold Snap liable as a publisher or speaker of third-party content. No content was shared. Instead, the claim derived from the alleged dangerous feature of Snap's platform, i.e., a filter that showed the user's speed. Though the incentive to use the filter was to create and then post third-party content, the conduct directly at issue (Snap's creation of the filter) is distinct from its role as publisher: “Snap could have satisfied its ‘alleged obligation' - to take reasonable measures to design a product more useful than it was foreseeably dangerous - without altering the content that Snapchat's users generate . . . . Snap's alleged duty in this case thus ‘has nothing to do with' its editing, monitoring, or removing of the content that its users generate through Snapchat.” Lemmon, 995 F.3d at 1092 (quoting Internet Brands, 824 F.3d at 851).

See also Barnes, 570 F.3d at 1107 (defendant liable where liability derived from promise to remove third-party content from website, not merely from failure to remove the content) (“liability here would come not from Yahoo's publishing conduct, but from Yahoo's manifest intention to be legally obligated to do something, which happens to be removal of material from publication”).

With respect to the term “publishing” itself, courts understand it to mean “deciding whether to publish or to withdraw from publication third-party content.” Id. The most basic example of online publishing that Section 230(c)(1) is intended to protect is a message board on which third parties post content. For example, Dyroff v. Ultimate Software Grp., Inc., 934 F.3d 1093 (9th Cir. 2019), held that the plaintiff could not proceed on a claim that defendant, an online platform, contributed, through message board postings, to the death of her son, who purchased heroin from another user on the platform. In this circumstance, the defendant was merely a publisher of third-party content.

“Publishing” also includes editorial decisions and functions ancillary to the decision to make content available. Thus, publishing has been found to “involve[] reviewing [and] editing,” such as “review[ing] material submitted, perhaps edit[ing] it for style or technical fluency,” Barnes, 570 F.3d at 1102, and “deciding whether to exclude material . . . .” Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1170-71 (9th Cir. 2008). In general, it is any conduct “rooted in the common sense and common definition of what a publisher does.” Barnes, 570 F.3d at 1102 (also “deciding whether to publish, withdraw, postpone or alter content” and other of “‘publisher's traditional editorial functions'”) (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997)). The Ninth Circuit has also indicated that Section 230 may bar any claim that would “necessarily require an internet company to monitor third-party content.” HomeAway.com, Inc. v. City of Santa Monica, 918 F.3d 676, 682 (9th Cir. 2019); Lemmon, 995 F.3d at 1092.

c. Prong 3: “Information Provided by Another Information Content Provider”

The third prong overlaps with the prior two and concerns information provided by another information content provider. In re Apple Inc. App Store Simulated Casino-Style Games Litig., 625 F.Supp.3d 971, 978 (N.D. Cal. 2022) (“Practically speaking, the second and third factor tend to overlap in significant ways.”) No further articulation is required.

ii. The Roommates Test

Given the complexity with which online platforms function, it is not always clear whether a platform is merely acting as an interactive services provider and publisher of another's content, or if the platform's involvement or intervention in the posting or presentation of that content crosses the line into what courts generally refer to as “development.” Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269 (9th Cir. 2016) (“The meanings of the words ‘creation' and ‘development' are hardly self-evident in the online world, and our cases have struggled with determining their scope.”).

The Ninth Circuit has established a test for determining whether a platform's actions in altering or presenting content constitute development: a platform that merely provides “neutral tools” for the creation or dissemination of content does not destroy its Section 230 immunity. Roommates.Com, 521 F.3d at 1172. However, if the platform's conduct materially alters the content, enhancing its alleged illegality, the tool is not neutral, and Section 230 does not bar liability. Id. at 1174-75 (“Where it is very clear that the website directly participates in developing the alleged illegality . . . immunity will be lost.”). Hereinafter, the Court refers to this as the Roommates test.

Importantly, in Roommates, the Ninth Circuit found some of the alleged conduct by defendants was neutral, and protected under Section 230, while other conduct was not. There, the defendant, Roommates.com, hosted a website that allegedly required people posting and searching for roommates to include certain information about themselves or the people they were looking for, such as race and familial status, and also provided an “additional comments” box for users to provide more information. Plaintiffs alleged that this violated fair housing laws, including the federal Fair Housing Act.

The court held that Section 230 did not grant immunity to the extent that Roommates.com created and required the category choices. In that way, it constituted a “developer” of the content at issue (the housing ads and searches). Its decision to add those categories about race and other characteristics substantively altered the third-party speech, and further, contributed to the wrongful nature of the content by inserting discriminatory criteria. In contrast, the Roommates court also held that the “additional comments” text box was a “neutral tool” entitled to Section 230 immunity. It was merely a “generic text prompt with no direct encouragement to perform illegal searches or to publish illegal content.” Id. at 1175.

The court distinguished this from an earlier case, Carafano v. Metrosplash.com, Inc. There, a platform had provided categories (such as the user's name) for people to add to profiles. The plaintiff's defamation claim was based on a fake profile someone made of her on defendant's website. The court explained that Section 230 barred the claim: “[t]he salient fact in Carafano was that the website's classifications of user characteristics did absolutely nothing to enhance the defamatory sting of the message, to encourage defamation or to make defamation easier.” Roommates, 521 F.3d at 1172.

Similarly, in Dyroff, plaintiff brought various claims seeking to hold defendant liable for using an algorithm to recommend a message board to her son based on his past interests and for sending him notifications when others posted on the message board after he joined. Her son joined that message board and used it to purchase drugs from another user, leading to his overdose. Plaintiff alleged that the recommendation to join the message board was defendant's own content, and thus that immunity was unavailable under Barnes: defendant was not acting as an interactive computer service or publishing another party's content. The Ninth Circuit disagreed, finding that the notifications and recommendations “were content-neutral tools used to facilitate . . . user-to-user communication, [and] it did not materially contribute [] to the alleged unlawfulness of the content” that ultimately harmed plaintiff's son (i.e., the third-party drug sale). Dyroff, 934 F.3d at 1096, 1099.

C. Analysis

i. Parties' “All or Nothing” Approach

As noted at the outset, defendants argue that Section 230 bars plaintiffs' product claims in their entirety, both because they are based on defendants' conduct as publishers of third-party content and because plaintiffs' harm is inextricably related to the third-party content they see on defendants' sites. Plaintiffs disagree, arguing they do not target publishing conduct.

Neither side persuades with its all or nothing approach. As described above, application of Section 230 is more nuanced. The Court must consider the specific conduct through which the defendants allegedly violated their duties to plaintiffs. Here, plaintiffs allege a wide array of conduct through which defendants allegedly failed in their duty to create a safe product for users or to warn about defects. Accordingly, the Court uses a conduct-specific approach to the analysis.

ii. Claim 1: Strict Liability Design Defect and Claim 3: Negligent Design Defect

a. Defect Allegations Not Barred By Section 230

The Court begins with plaintiffs' design defect products liability claims. As relevant thereto, plaintiffs make myriad allegations that do not implicate publishing or monitoring of third-party content and thus are not barred by Section 230. The defects pled as part of such allegations are not equivalent to speaking or publishing and can be fixed by defendants without altering the publishing of third-party content. They are, as identified by the Court:

• Not providing effective parental controls including notification to parents that children are using the platforms (MAC ¶ 845(b)-(c));
• Not providing options to users to self-restrict time used on a platform (id. at ¶ 845(f)-(g));
• Making it challenging for users to choose to delete their account (id. at ¶ 845(m));
• Not using robust age verification (id. at ¶ 845(a));
• Making it challenging for users to report predator accounts and content to the platform (id. at ¶ 845(p));
• Offering appearance-altering filters (id. at ¶ 864(d));
• Not labelling filtered content (id. at ¶ 845(k));
• Timing and clustering notifications of defendants' content to increase addictive use (id. at ¶ 845(l)); and
• Not implementing reporting protocols to allow users or visitors of defendants' platforms to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting (id. at ¶ 845(p)).

The Court proceeds to consider defendants' arguments relative to the above-referenced defects, to the extent defendants addressed them specifically. For instance, defendants do not directly address application of Section 230 to the defects related to parental controls.

Defendants' assertion that other courts have found claims targeting age verification barred by Section 230 does not persuade. Those cases are not controlling, and further, are consistent with this Court's position. Doe v. MySpace, Inc., 528 F.3d 413 (5th Cir. 2008), for example, did not find that defendants were immunized from claims that they should verify users' ages. Rather, it held that Section 230 immunized defendant from claims that it should have used age verification to then limit platform access to children like plaintiff. Id. at 422 (“We therefore hold, without considering the Does' content-creation argument, that their negligence and gross negligence claims are barred by the CDA, which prohibits claims against Web-based interactive computer services based on their publication of third-party content.”).

Here, in contrast, plaintiffs' allegations are broader. They allege that defendants could use age-verification information to take steps that would not impact their publication of third-party content, such as by notifying parents that a child is on the site, enabling the parent to either limit the child's access to the site or talk to them about the content they may see. Accordingly, they pose a plausible theory under which failure to validly verify user age harms users that is distinct from harm caused by consumption of third-party content on defendants' platforms.

Again, defendants do not directly address plaintiffs' filter-related allegations. Plaintiffs allege that defendants' products are defective because they provide filters for children to use and because defendants do not label filtered images. Defendants ignore these allegations, arguing only that they cannot be liable for publishing content made using a filter. At the hearing, defendants did suggest that holding them liable for providing the filters is indistinguishable or inseparable from holding them liable for publishing the images created with those filters. The Court disagrees. Plaintiffs plausibly allege that the filters are harmful regardless of whether children eventually post the images that they filtered. Plaintiffs allege that children are harmed simply by creating and then seeing their own altered images. No posting or publication is necessary. There is a defect and a harm separate and apart from publication of any third-party content. Lemmon, 995 F.3d at 1092 (product-defect claim based on speed filter was not barred by Section 230 because “Snap's alleged duty in this case [] ‘has nothing to do with' its editing, monitoring, or removing of the content that its users generate through Snapchat”) (citation omitted).

Next, Section 230 does not entirely immunize defendants from plaintiffs' allegations respecting the ways in which defendants time and cluster notifications. (Id. at ¶ 845(l).) Some of the notifications at issue concern content created by defendants, not third parties. For example, the “awards” allegedly given by Snap fit within this category, as they allegedly are based on data collected and used by Snap and sent to users, and are not published at the request of a third-party content-creator. To the extent defendants send notifications of their own content, Section 230 provides no immunity. Defendants “remain on the hook when they create or develop their own internet content.” Lemmon, 995 F.3d at 1093.

Finally, defendants have not addressed how altering the ways in which they allow users and visitors to their platforms to report CSAM is barred by Section 230. Defendants implied at the hearing that making reporting more accessible would necessarily require them to remove CSAM, in violation of Section 230. Not so. Receiving more reports does not require them to remove the content. They could respond by taking other steps, such as reporting the content to a government agency or providing relevant warnings.

The motion to dismiss the product defect claims based on Section 230 is denied as to the defects listed above.

b. Defect Allegations Barred By Section 230

By contrast, the following alleged design defects directly target defendants' roles as publishers of third-party content and are barred by Section 230:

• Failing to put “[d]efault protective limits to the length and frequency of sessions” (MAC ¶ 845(e));
• Failing to institute “[b]locks to use during certain times of day (such as during school hours or late at night)” (id. at ¶ 845(h));
• Not providing a beginning and end to a user's “Feed” (id. at ¶ 845(i));
• Publishing geolocating information for minors (id. at ¶ 845(t));
• Recommending minor accounts to adult strangers (id. at ¶ 845(u));
• Limiting content to short-form and ephemeral content, and allowing private content (id. at ¶ 864(l); briefing, passim);
• Timing and clustering of notifications of third-party content in a way that promotes addiction (id. at ¶ 845(l)); and
• Use of algorithms to promote addictive engagement (id. at ¶ 845(j)).

This applies to the continual feed feature of Facebook and Instagram, as well as “autoplay” as used on the various platforms. (See ¶ 289 (as to Facebook and Instagram); ¶ 493 (as to Snap); ¶ 615 (as to TikTok); ¶¶ 731-35 (as to YouTube) and any other equivalent features.)

First, addressing the defects in paragraph 845(e), (h), and (i) would necessarily require defendants to publish less third-party content. Unlike the opt-in restrictions described above, which allow users to choose to view or receive less content but do not limit defendants' ability to post such content on their platforms, these alleged defects would inherently limit what defendants are able to publish. Similarly, limiting the publication of user-provided geolocation data inherently targets the publishing of third-party content and would require defendants to refrain from publishing such content.

Unlike with notifications, the MAC does not allege that any of the content at issue in plaintiffs' feeds and private messages is created by defendants. The Court reads the MAC to allege only that the messaging functions publish third-party content.

While the Court finds this result self-evident based on application of the Ninth Circuit tests, it notes that conduct relative to geolocation has been repeatedly protected under Section 230. See Herrick v. Grindr LLC, 765 Fed.Appx. 586, 590 (2d Cir. 2019); Marshall's Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1270-71 (D.C. Cir. 2019). Other than generally critiquing defendants' reliance on out-of-circuit authority, plaintiffs do not respond to these cases or to the issue of geolocation at all. In the absence of Ninth Circuit authority, the Court finds these cases persuasive. Both apply the Roommates standard used in the Ninth Circuit to factual allegations similar to those at issue here. Grindr, 765 Fed.Appx. at 591 (relying on Roommates); Marshall's, 925 F.3d at 1270 n.5 (relying on Kimzey for same standard).

Second, Section 230 also immunizes defendants from allegations that they recommend adult accounts to adolescents. The publishing conduct covered by Section 230 immunity includes recommending content to users. Dyroff, 934 F.3d at 1096. Plaintiffs do not dispute that user accounts or profiles are third-party “content” published by the platform. Thus, recommending one user's profile to another is publishing of third-party content, which is entitled to Section 230 immunity. See L.W. through Doe v. Snap Inc., 2023 WL 3830365, at *4 (S.D. Cal. June 5, 2023) (holding claims based on Snapchat's “Quick Add” function for friending other users barred by Section 230); accord generally Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019). In essence, the recommendation function challenged is indistinguishable from publishing; it is the means through which defendants publish third-party content to users. Plaintiffs do not explain how the alleged defects could be addressed without requiring defendants to change how they publish such content. Indeed, the only solution they suggest for addressing the alleged problems caused by the connection of child and adult profiles is to eliminate product features that recommend accounts between children and adult strangers.

See also Carafano, 339 F.3d at 1124; Barnes, 570 F.3d at 1103.

Plaintiffs assert that L.W. was wrongly decided and misapplies Ninth Circuit precedent by holding that “a duty not to design features that facilitate[] sex crimes against children,” [citation], is the same as a duty “to remove CSAM distributed . . . by third parties.” Opp.2 at 8-9 (quoting L.W., 2023 WL 3830365, at *1, *4). This mischaracterizes L.W. The court there did not make such a sweeping generalization. Instead, it carefully applied Barnes and other precedent and held that the specific feature upon which plaintiffs based their claim was indistinguishable from publishing and that the only identified means to address the issue required alterations to the publication of third-party content.

Lemmon, supra, and A.M. v. Omegle.com, LLC, 614 F.Supp.3d 814 (D. Or. 2022), do not compel a different result. In both, the plaintiffs alleged the defendants had violated their duty to plaintiffs through conduct other than publishing third-party content and could have met their duty without changes to publishing conduct. In Lemmon, the plaintiffs alleged the speed filter was defective irrespective of content being posted or published and that defendants could have met their duty to create a safe product by no longer providing the filter, not by changing how they publish any third-party content. Here, in contrast, plaintiffs have not alleged that the recommendation functions are themselves dangerous; they allege the functions are dangerous because they recommend third-party content: adult profiles. Plaintiffs do not explain how such defect could be rectified other than through limitations on defendants' publication of third-party content.

Similarly, the plaintiff in Omegle alleged that the defendant's product was defective because it randomly paired her to chat with an adult, who then abused her. The defendant argued that because it matched users in order for them to chat, it was acting as a publisher of third-party content (the conversation). The court disagreed. The recommendation of a chat partner was distinct from the recommendation or publication of content. Indeed, at the time the matching occurred, the “content” or conversation did not exist. Omegle, 614 F.Supp.3d at 820-21 (“Omegle has attempted to make this a case about [abuser's] communications to the Plaintiff, but as discussed above, Plaintiff's case does not rest on third party content. Plaintiff's contention is that the product is designed in a way that connects individuals who should not be connected (minor children and adult men) and that it does so before any content is exchanged between them.”). There is no such distinction between the recommendation and publication of content here. Defendants recommend existing third-party content (profiles) to users, which is publishing conduct.

Third, Section 230 also immunizes defendants where the products are allegedly defective because they provide short-form and ephemeral content. Editorial decisions such as determining the length of content published and how long to publish content are “traditional editorial functions” immune under Section 230, where exercised with regard to third-party content. Barnes, 570 F.3d at 1102 (stating publishing includes “deciding whether to publish, withdraw, postpone or alter content” and other of a “publisher's traditional editorial functions”) (quoting Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997)).

Fourth, with respect to private messaging, plaintiffs cite no authority indicating that posting third-party content is not publishing where it is posted only to one other person. Indeed, when confronted with this exact question in Fields, the court held that private messaging functions do fall within the publishing umbrella. See, e.g., Fields v. Twitter, Inc., 217 F.Supp.3d 1116, 1128-29 (N.D. Cal. 2016), aff'd, 881 F.3d 739 (9th Cir. 2018) (“[A] number of courts have applied [Section 230] to bar claims predicated on a defendant's transmission of nonpublic messages.”).

Plaintiffs argue that Fields is not applicable here because plaintiffs there alleged only that Twitter was “liable for the substance of ISIS's private messages.” Opp.2 at 13. Not so. The court held that Section 230 applied to the direct messaging capabilities generally, regardless of the content of those messages. Fields v. Twitter, Inc., 217 F.Supp.3d at 1128, aff'd, 881 F.3d 739 (9th Cir. 2018) (“Publishing activity under section 230(c)(1) extends to Twitter's Direct Messaging capabilities.”).

Fifth, where notifications are made to alert users to third-party content, Section 230 bars plaintiffs' product defect claims. Dyroff, 934 F.3d at 1093 (Section 230 barred a claim against the defendant for notifying users when other users posted content on a message board). This includes notifications that someone has commented on or liked a user's post.

Plaintiffs do not allege that defendants modify or develop the content beyond deciding when to publish it; they allege only that defendants make strategic decisions about when to publish such notifications and whether to publish content to a given user. If plaintiffs alleged defendants altered the content in a material way, that would constitute development. See Roommates, 521 F.3d at 1167 (“Roommate designed its search system so it would steer users based on the preferences and personal characteristics that Roommate itself forces subscribers to disclose. If Roommate has no immunity for asking the discriminatory questions, as we concluded above [] it can certainly have no immunity for using the answers to the unlawful questions to limit who has access to housing.”).

Sixth, to the extent plaintiffs challenge defendants' use of algorithms to determine whether, when, and to whom to publish third-party content, Section 230 immunizes defendants. Whether done by an algorithm or an editor, these are traditional editorial functions that are essential to publishing. Further, plaintiffs identify no means by which defendants could fix this alleged defect other than by altering when, and to whom, they publish third-party content.

The parties' cited cases support this approach. Courts addressing the use of an algorithm to connect a user with certain third-party content have found Section 230 provides immunity. See, e.g., Force v. Facebook, Inc., 934 F.3d 53, 66 (2d Cir. 2019) (holding that a claim based on a Facebook algorithm that promoted terrorist content to some users, because of data indicating interest in or engagement with such content, was barred by Section 230).

To be clear, this is not a case where the defendant allegedly developed the content. For example, in the unpublished Ninth Circuit decision Vargas v. Facebook, Inc., 2023 WL 6784359 (9th Cir. Oct. 13, 2023), the court held the defendant was not immune from claims based on selectively publishing housing advertisements where Facebook had created the tools to selectively send advertisements based on unlawful criteria (e.g., race, familial status).

Plaintiffs argue that their claims are distinct from those at issue in cases such as Force. They focus not on defendants' conduct as publishers of third-party content but rather on the process through which the decision to publish is made. Said differently, they argue that these algorithms are not formulated merely to connect people with content; rather, they are crafted to increase the quantity of user interaction with the platform, regardless of content, and thus generate profit for defendants. For example, while TikTok's recommendation algorithm has some content-based metrics, it relies more heavily on non-content data such as the length of time the user spends on a video. (MAC at ¶¶ 585-92.) As such, the algorithm is calibrated to show what a user is most likely to spend the most time watching, rather than the content they are most interested in viewing. Thus, the purpose of the algorithm is not “to curate information” but rather “to maximize the duration and intensity of children's usage” of the platform. (Opp.2 at 11.) They “challenge how Defendants have automated attention farming. They do not seek to change any content on Defendants' platforms.” (Id.)

Plaintiffs' framing does not change the analysis. Nothing in Section 230 or existing case law indicates that Section 230 applies to publishing only where a defendant's sole intent is to convey or curate information. To hold otherwise would essentially be to hold that any website that generates revenue by maintaining the interest of users, and that publishes content with the intent of meeting this goal, is no longer entitled to Section 230 immunity. Regardless of defendants' intent, they are executing it through conduct that is identical to publishing. Because plaintiffs do not show how the conduct at issue is distinct from determinations of what to publish and how, or that the alleged duty could be met other than by changing the way defendants publish third-party content, Section 230 bars the claim as to the recommendation algorithms.

As such, this case is distinguishable from Lemmon. As noted, the Ninth Circuit did not treat Snap as a publisher or speaker because the claim at issue did not target the publication of third-party content (the plaintiffs' children's use of the speed filter), and thus Snap could have satisfied its alleged duty without changes to the publication of third-party content (by removing the filter, created by Snap and not a third party, from the app).

Accordingly, the motion to dismiss the product defect claims based on Section 230 is GRANTED as to the defects listed above.

iii. Claim 2: Strict Liability - Failure to Warn and Claim 4: Negligence - Failure to Warn

Claims 2 and 4 allege that defendants distributed defective and unreasonably dangerous products without adequately warning users of risks, including the risks of abuse, addiction, and compulsive use. The Court defines the risks as those created by the defects addressed in Claims 1 and 3. Defendants do not brief the application of Section 230 to any of the failure to warn claims. This alone is a basis to deny the motion as to these claims. In any event, the Court finds these claims plausibly allege that defendants are liable for conduct other than the publishing of third-party content and that they could satisfy their duty without changing what they publish. The duty arises not from their publishing conduct, but from their knowledge, based on public studies or internal research, of the ways that their products harm children. Plaintiffs allege through these claims that defendants could meet this duty without making any changes to how they publish content, by providing warnings for any and all of the alleged defects.

iv. Claim 5: Negligence Per Se

Plaintiffs' negligence per se claim is based on defendants' alleged violations of COPPA and the PROTECT Act. Defendants do not address the application of Section 230 to the negligence per se claim beyond asserting that it is barred because the harm alleged is inextricable from third-party content published by defendants. As already addressed, this is not an adequate basis for Section 230 immunity. Plaintiffs allege that defendants violated the PROTECT Act by failing to make required reports of CSAM and by allowing too many of their employees to access CSAM on the platform. Neither of these alleged duties derives from defendants' status as publishers. Plaintiffs do not allege that defendants violated the PROTECT Act by publishing CSAM, but because they failed in their separate, non-publishing duty to report such content. Limiting employee access to CSAM also has nothing to do with the publication of such content.

The alleged COPPA violations also do not treat defendants as publishers. The claim alleges defendants failed to provide required notice and obtain parental consent before collecting certain information from children. That in no way impacts their role as publishers of third-party content.

Accordingly, the Court FINDS no Section 230 immunity as to the negligence per se claim.

IV. First Amendment

A. Overview and Legal Framework

Defendants broadly assert that the First Amendment entirely bars plaintiffs' claims in that it “precludes tort liability for protected speech, including choices about presenting and disseminating content.” MTD2 at iii (cleaned up). As discussed, myriad allegations do not seek to hold defendants liable for protected speech. Defendants' all or nothing approach therefore fails.

“The Free Speech Clause of the First Amendment . . . can serve as a defense in state tort suits.” Snyder v. Phelps, 562 U.S. 443, 451 (2011). “[A]s a general matter, . . . government has no power to restrict expression because of its message, its ideas, its subject matter, or its content.” Brown v. Ent. Merchants Ass'n, 564 U.S. 786, 790-91 (2011) (citation omitted). “[T]he basic principles of freedom of speech and the press, like the First Amendment's command, do not vary when a new and different medium for communication appears.” Id. at 790 (citation omitted). Additionally, under the First Amendment, “the creation and dissemination of information are speech . . . .” Sorrell v. IMS Health Inc., 564 U.S. 552, 570 (2011); Bartnicki v. Vopper, 532 U.S. 514, 527 (2001) (“[I]f the acts of ‘disclosing' and ‘publishing' information do not constitute speech, it is hard to imagine what does fall within that category.”) (citation omitted). Dissemination of speech is different from “expressive conduct,” which is conduct that has its own expressive purpose and may be entitled to First Amendment protection. Id. (stating that disclosing or publishing of information is “distinct from the category of expressive conduct”).

As such, plaintiffs' argument that finding First Amendment protection here would “bend the law beyond its breaking point” because it would require addressing novel questions regarding what constitutes speech and expressive content on the internet is overstated.

That said, “well-defined and narrowly limited classes of speech” provide exceptions to First Amendment protections. Brown, 564 U.S. at 791. For instance, obscenity, fighting words, and incitement may fall outside the protection of free speech. Id. Further, First Amendment rights may also be subject to reasonable time, place, and manner restrictions. As the parties have not argued any of these limitations are relevant here, the Court does not address them further.

B. Design Defect Claims (Claims 1 & 3)

Defendants argue that the First Amendment protects them from liability for the speech they publish as well as for all choices they have made in disseminating it. Even adopting this premise in full, much of the conduct alleged by plaintiffs does not constitute speech or expression, or the publication of either. Indeed, defendants' briefing ignores these defects and does not explain how holding them liable in that context would be akin to holding them liable for speech.

First, several of the defects relate to how users interact with the platforms. As the Court has already found certain defect allegations barred under Section 230, it addresses only those that remain:

• Not providing effective parental controls including notification to parents that children are using the platforms (MAC at ¶¶ 845(b)-(c));
• Not providing options to users to self-restrict time used on a platform (id. at ¶¶ 845(f)-(g));
• Making it challenging for users to choose to delete their account (id. at ¶ 845(m));
• Not using robust age verification (id. at ¶ 845(a)); and
• Not implementing reporting protocols to allow users or visitors of defendants' platforms to report CSAM and adult predator accounts specifically, without the need to create or log in to the products prior to reporting (id. at ¶ 845(p)).

Defendants raise no arguments regarding whether the First Amendment protects them from having to give notices or warnings. Thus, the Court considers none.

Addressing these defects would not require that defendants change how or what speech they disseminate. For example, parental notifications could plausibly empower parents to limit their children's access to the platform or discuss platform use with them. Providing users with tools to limit the amount of time they spend on a platform does not alter what the platform is able to publish for those who choose to visit it. As discussed with regard to Section 230, allowing users to report CSAM is not the same as requiring defendants to monitor or remove CSAM. The motion to dismiss on First Amendment grounds is DENIED as to these defects.

Second, plaintiffs' filter-related defects identified in ¶ 845(k) and ¶ 864(d) of the MAC also survive at this stage. Defendants raise no arguments regarding how the First Amendment protects them from having to label filtered content. The parties instead focus on the filters themselves: plaintiffs argue that the filters are “tools” to enable users to “modify[] their own expression” and to “facilitate interactive speech.” Defendants disagree and submit the First Amendment protects the filters in the same way a magazine's use of “computer technology to alter famous film stills” for fashion photography is protected. (MTD2 at 20 (citing Hoffman v. Capital Cities/ABC, Inc., 255 F.3d 1180, 1183 (9th Cir. 2001)).)

The Court is not persuaded. In Hoffman, the Ninth Circuit found the images and the alteration of those images were protected by the First Amendment, not the computer editing technology that created the speech. It was the images, the speech, that were protected. Defendants make a distinctly different argument here. They do not contend that they created the filters with any expressive intent or that the filters are in any way their own “speech.” Based on defendants' own description, the filters are neutral, non-expressive tools provided by defendants. They are not entitled to First Amendment protection. The motion to dismiss on First Amendment grounds is DENIED as to the filter defects.

Third, the timing and clustering of notifications of defendants' content to increase addictive use (id. at ¶ 845(l)) is entitled to First Amendment protection. There is no dispute that the content of the notifications themselves, such as awards, is speech. The Court conceives of no way to interpret plaintiffs' claim with respect to the frequency of the notifications that would not require defendants to change when and how much speech they publish. This is barred by the First Amendment. Bartnicki, 532 U.S. at 527. Accordingly, the Court finds that the First Amendment protects defendants as to the timing and clustering of notifications they publish to users regarding content created by defendants themselves. These are fundamentally choices about when and to whom to publish content notifications. The motion to dismiss the product defect claims as to this defect is GRANTED.

In summary, with respect to the identified defects, the First Amendment affords protection only with respect to the timing and clustering of notifications of defendants' content to increase addictive use (MAC at ¶ 845(l)); otherwise, it does not.

C. Failure to Warn Claims (Claims 2 & 4)

As plaintiffs note in their opposition, defendants' motion did not raise any arguments specifically addressing the application of the First Amendment to the failure to warn claims. For the first time on reply, defendants attempt to distinguish plaintiffs' cases and cite other First Amendment cases focused on the duty to warn where the alleged danger is caused by a defendant's publication or dissemination of speech. Even if this were not procedurally improper, the cited cases are not dispositive. Finding the issue effectively waived by defendants at this stage, and not fully briefed, the Court DENIES any belated motion on First Amendment grounds as to the failure to warn claims.

D. Claim 5: Negligence Per Se

Defendants did not raise any arguments on First Amendment grounds with respect to the negligence per se claim. Plaintiffs identify this issue, and defendants remain silent in reply. The Court deems the silence a concession that the First Amendment does not bar this claim and DENIES the motion to dismiss this claim on First Amendment grounds.

V. Products Liability: Whether the Defects Alleged Concern “Products”

Having determined the applicability of Section 230 and the First Amendment to all of plaintiffs' claims, the Court turns next to plaintiffs' products liability claims, specifically Priority Claims 1-4. Given the lack of sufficient briefing, other issues related to plaintiffs' negligence per se claims will be addressed in a separate, subsequent order.

* * *

Plaintiffs state both design defect and failure to warn claims relative to defendants' platforms. As discussed at length at the hearing, the claims are predicated upon the existence of a “product.” Thus, the Court begins there.

Parties agree that strict products liability claims are not cognizable under the laws of Massachusetts, Michigan, North Carolina, and Virginia. Compare MTD1 at 14 n.9 with Pls' Opp'n at 22 n.8. Any strict products liability claims brought under the laws of these states are therefore DISMISSED WITH PREJUDICE. Parties disagree as to whether products liability claims are available to plaintiffs bringing suit under the laws of Alabama and Delaware. The Court declines to resolve those state-specific disputes at this early stage given the brevity of briefing. Separately, parties agree that product-based negligence claims are not cognizable under the laws of Connecticut, Louisiana, New Jersey, Ohio, and Washington. Compare MTD1 at 14:10 & n.9 with Pls' Opp'n at 22:14 & n.8 and Defs' Reply at 3:5 & n.1. Any product-based negligence claims brought under the laws of these states are similarly DISMISSED WITH PREJUDICE.

A. Background

i. Overview of Parties' Arguments

As in the Section 230 context, the parties proceed with an “all or nothing” approach to determining whether defendants' social media platforms are products. Defendants seek dismissal on the basis that (i) their platforms are services, not products; and (ii) even if they are not services, plaintiffs' allegations primarily concern access to and distribution of content posted on the platforms, which cannot form the basis of a cognizable products liability claim. Plaintiffs emphasize that they do not claim that content caused them harm; instead, they purport to challenge design choices by defendants regarding various features of their respective platforms' user interfaces.

These approaches are overly simplistic and misguided. While acknowledging that these proceedings implicate novel questions of law, including the applicability of products liability torts to the digital world, the parties repeatedly downplay nuances in the caselaw and the facts. The Court declines to adopt either party's desired approach. Cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed. This is borne out in the cases relied upon by all parties. The cases generally concern a specific product defect, and the determination of whether a specific technology is a product hinges on the specifics of that defect. The same applies here. The Court determines it is necessary to analyze each defect pled by plaintiffs to determine whether plaintiffs have adequately alleged the existence of a product (or products).

Parties cite myriad cases, arguing that they definitively establish that defendants' platforms are, or are not, at a global level, products. Many of these cases do not actually analyze whether the technology at issue is a product; others merely conclude, without meaningful analysis, that certain technologies are services, not products. See, e.g., Jacobs v. Meta Platforms, Inc., 2023 WL 2655586 (Cal. Super. Ct. Mar. 10, 2023) (holding Facebook is more akin to a service, with minimal analysis); Grossman v. Rockaway Twp., 2019 WL 2649153 (N.J. Super. Ct. June 10, 2019) (declining to “reach a definitive conclusion” on the question of whether Snapchat is a product); Wickersham v. Ford Motor Co., 194 F.Supp.3d 434 (D.S.C. 2016) (suggesting that an algorithm that comprises part of a Ford truck's airbag deployment system is a product but doing so only in the context of assessing whether plaintiffs adequately pled the existence of qualifying alternative designs); Crouch v. Ruby Corp., 639 F.Supp.3d 1065 (S.D. Cal. 2022) (observing, in the context of a gender discrimination suit, that an online dating site sells a service, not a product); In re MyFord Touch Consumer Litig., 2016 WL 7734558 (N.D. Cal. Sept. 14, 2016) (certifying a class of individuals who purchased Ford vehicles containing an “infotainment” system and stated strict products liability claims regarding such system; declining to conduct a robust analysis of whether the infotainment system is in fact a product); Hardin v. PDX, Inc., 227 Cal.App.4th 159 (2014) (denying an anti-SLAPP motion, noting that whether the at-issue technology was a product was not argued, and declining to reach a conclusion as to the product question). The cases do not stand for the proposition that products should be analyzed at a global level. Rather, as explained, they are best understood in terms of the functionality which was causing the alleged harm.

ii. The Court's Approach to Analyzing Such Arguments

Given the parties' tactical choices, it is not surprising that neither provides a comprehensive legal framework through which the Court can assess what is and is not a “product.” Further, despite plaintiffs having identified Georgia and New York as their preferred states for the challenged products liability claims, parties rely on cases from various jurisdictions. They spend little time analyzing plaintiffs' claims under Georgia or New York law, and they provide limited insights as to how the laws of those states compare to others. Thus, the Court begins by setting out a framework, then applies it to plaintiffs' alleged defects.

Plaintiffs specified their priority claims and preferred law relative to those claims. They selected New York law for their strict products liability claims (Priority Claims 1-2) and Georgia law for their product-based negligence claims (Priority Claims 3-4). See Dkt. No. 131. As previously explained, the application of these states' laws was intended to promote judicial efficiency in addressing the many claims and jurisdictions implicated by this litigation. Parties nonetheless give short shrift to the law of these jurisdictions, seldom addressing either. To the extent they do, it is almost exclusively in footnotes.

As stated above, many of plaintiffs' alleged defects are barred by Section 230 or the First Amendment. The Court therefore limits its analysis of plaintiffs' products liability claims to the subset of defects that are not barred, which are:

(i) failure to implement robust age verification processes to determine users' ages;
(ii) failure to implement effective parental controls;
(iii) failure to implement effective parental notifications;
(iv) failure to implement opt-in restrictions to the length and frequency of use sessions;
(v) failure to enable default protective limits to the length and frequency of use sessions;
(vi) creating barriers that make it more difficult for users to delete and/or deactivate their accounts than to create them in the first instance;
(vii) failure to label content that has been edited, such as by applying a filter;
(viii) making filters available to users so they can, among other things, manipulate their appearance; and
(ix) failure to create adequate processes for users to report suspected CSAM to defendants' platforms.

B. Legal Framework

i. Plaintiffs' Preferred Law

The Court begins with plaintiffs' preferred law. Neither Georgia nor New York has a codified definition of what constitutes a “product” for the purpose of applying the doctrine of products liability. In the absence of such definitions, the Court looks to well-accepted persuasive authority concerning the scope of a “product.” This is especially appropriate where, as here, the technologies at issue are emergent. To that end, the Court finds both Georgia and New York courts have turned to the Third Restatement of Torts when assessing products liability claims. See, e.g., Matter of Eighth Jud. Dist. Asbestos Litig., 33 N.Y.3d 488, 493-94 (2019) (noting that “none of our strict liability case law provides a clear definition of a ‘product'” and approvingly citing to commentary to the Third Restatement); Johns v. Suzuki Motors of America, Inc., 310 Ga. 159 (2020); Banks v. ICI Americas, Inc., 264 Ga. 732 (1994).

The Court is satisfied, based on its own limited analysis of Georgia and New York law, that neither state's strict products liability framework defines the scope of a “product.” Georgia's strict liability statute does not define the “products” to which it applies. See Ga. Code Ann. § 51-1-11; see also id. § 51-1-11.1 (defining the scope of the “product sellers” to whom Georgia's strict products liability law applies, without defining the scope of the products or property to which it applies). New York does not appear to have a strict liability statute; instead, the state high court created a cause of action for strict products liability in a seminal 1973 decision. See Codling v. Paglia, 32 N.Y.2d 330, 335 (1973) (“We hold that today the manufacturer of a defective product may be held liable to an innocent bystander, without proof of negligence, for damages sustained in consequence of the defect.”).

While the Court was unable to identify a Georgia case that cites specifically to the Third Restatement's definition of a product, the Court is satisfied that the Georgia Supreme Court has repeatedly referenced the Third Restatement or a draft thereof when resolving novel issues in the past. See Banks, 264 Ga. at 734 (reviewing the ways “foreign jurisdictions and learned treatises” assess whether “a product's design specifications [are] partly or totally defective”). The Court finds further support for its approach in parties' own briefs. For instance, plaintiffs' Opposition notes that defendants acknowledge that Georgia looks to the Third Restatement for guidance regarding the impact of “tangibility” on the products analysis. Pls' Opp'n at 24:8 & n. 8. Defendants do not respond to this argument in their Reply. The Court construes the silence as a concession. As to New York, plaintiffs explicitly address the applicability of the Third Restatement in the appendix to their Opposition brief, writing, “There is no clear definition of ‘product' [under New York law], but courts have looked to the Third Restatement for guidance.” Pls' Opp'n at 92:9-11. Again, defendants do not respond, which the Court deems a concession.

The Court is satisfied, on this basis, that applying the approach taken by the Restatement is in keeping with the approaches likely to be taken by the high courts of the states whose law plaintiffs have selected. In particular, use of the Third Restatement of Torts is consistent with this Court's obligation “to predict how the state high court would rule” based on the information available. In re Lithium Ion Batteries Antitrust Litig., 2014 WL 4955377 (N.D. Cal. Oct. 2, 2014) (citing Hayes v. Cnty. of San Diego, 658 F.3d 867, 871 (9th Cir. 2011)).

Further, neither party has articulated any reason why this Court should not apply the Restatement's framework to plaintiffs' claims, despite having ample opportunity to do so at the hearing, at which the Court explained its view as to the appropriate legal framework. Indeed, both parties refer to the Restatement in their briefs and analyze the pending motions thereunder.

ii. Restatements of Torts

The Restatements of Torts collectively reflect the evolution of the doctrine of strict products liability over time. For instance, the Second Restatement of Torts focused on individuals who “sell[] any product in a defective condition unreasonably dangerous to the user or consumer or to his property.” Restatement (Second) of Torts § 402A(1) (AM. LAW. INST. 1965) (hereinafter, “Second Restatement”). Such individuals or entities were liable for physical harm caused by their products where they “engaged in the business of selling such a product” and the product was “expected to and [did] reach the user or consumer without substantial change in the condition in which it [was] sold.” Id. at § 402A(1)(a)-(b). The Second Restatement did not, however, define what constituted a “product,” nor did the accompanying commentaries. See generally id. at § 402A.

At times, plaintiffs argue from the Second rather than the Third Restatement. For instance, plaintiffs argue that the Second Restatement, unlike the Third Restatement, does not address tangibility. Instead, it refers to products liability for “any product which, if it should prove to be defective, may be expected to cause physical harm to the consumer or his property.” Pls' Opp'n at 24:18-20 (quoting Second Restatement § 402A(1) & cmt. b). In general, the Third Restatement builds upon, and extends, the Second. Thus, the definition of “product” in the Third Restatement is what controls here.

The Third Restatement, published in 1998, did include a definition of “products” for purposes of products liability actions. This definition reads:

(a) A product is tangible personal property distributed commercially for use or consumption. Other items, such as real property and electricity, are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply the rules stated in th[e] Restatement.
(b) Services, even when provided commercially, are not products.
(c) Human blood and human tissue, even when provided commercially, are not subject to [the] Restatement.
Restatement (Third) of Torts § 19 (Am. Law Inst. 1998) (hereinafter, “Third Restatement”). This definition, as well as the Restatement's explanatory notes, identifies three circumstances in which intangible things may be deemed “products.” To summarize:

This bar extends to “professionally-provided services, such as medical or legal help” as well as to services related to products, such as their “installation or repair.” Third Restatement § 19(a) & cmt. f; see also id. at Reporters' Note, cmt. f.

First, intangible things can be products when analogized to “tangible personal property” based on “the context of [its] distribution and use.” Id.

Second, strict products liability has been imposed in unique circumstances where harm is caused by (i) the distribution of objectively false information or (ii) electricity. Courts have attached products liability to “maps and navigational charts” containing “false information.” Id. & cmt. d. They have also done so with respect to certain intangible forces, such as electricity. Specifically, “a majority of courts have held that electricity becomes a product when it passes through the customer's meter and enters the customer's premises.” Id. (cleaned up).

The Restatement nonetheless cautioned that “the better view [of such maps and navigational charts] is that false information in such documents constitutes a misrepresentation that the user may properly rely upon.” Id. at § 19(a) & cmt. d. Even so, courts concluded that, unlike other content, defects in maps and charts are “unambiguous” and are therefore “more akin to a classic product defect” than alleged defects implicating other forms of content or the interpretation thereof. For example, a captain of a ship navigating poor visibility near a harbor might rely on mass-marketed nautical maps to discern the depths of the ocean floor and to guide the ship safely to shore. Were the map to inaccurately reflect the depths through which the captain traveled, the ship and the well-being of those aboard could be severely threatened. The contents of the map are therefore distinguishable from, for instance, the contents of a work of fiction, whose accuracy is neither presumed nor relied upon as are navigational maps and charts.

Third, the Restatement clarifies that ideas, content, and free expression have consistently been held not to support a products liability claim. The seminal case of Winter v. G.P. Putnam's Sons, 938 F.2d 1033 (9th Cir. 1991), illustrates the point. There, plaintiffs were mushroom enthusiasts who became severely ill after eating wild mushrooms they identified as non-dangerous based on a reference book. Id. at 1033. They subsequently sued the book's publisher under a strict products liability theory, alleging the book was defectively designed in that it contained erroneous and misleading information about the identification of deadly mushrooms. Id. at 1034. The district court granted summary judgment to the book publisher, finding that the content of the book (i.e., whether a specific mushroom was safe to eat) was not a product. The Ninth Circuit affirmed. Id. at 1034, 1036-38.

The Ninth Circuit's opinion in Winter is routinely cited for the proposition that ideas, thoughts, and free expression cannot form a product upon which a products liability claim can be based. The Ninth Circuit began with a statement of first principles:

A book containing Shakespeare's sonnets consists of two parts, the material and print therein, and the ideas and expression thereof. The first may be a product, but the second is not. The latter, were Shakespeare alive, would be governed by copyright laws[, among others.] These doctrines applicable to the second part are aimed at the delicate issues that arise with respect to intangibles such as ideas and expression. Products liability law is geared to the tangible world.
Id. at 1034 (emphasis supplied). The court applied this framework to the mushroom reference book. Id. at 1034-36. Following the logic excerpted above, the Ninth Circuit allowed that the mushroom reference book itself could be a product, although its contents, as “pure thought and expression,” were not. Id. at 1035-36.

The logic of Winter has been repeatedly reaffirmed in the case law. For instance, roughly ten years after Winter was decided, the Sixth Circuit relied upon its framework in a case involving claims more akin to those presently before the Court, i.e., one involving the digital world. See James v. Meow Media, Inc., 300 F.3d 683 (6th Cir. 2002) (affirming James v. Meow Media, Inc., 90 F.Supp.2d 798, 800 (W.D. Ky. 2000)). There, a 14-year-old student shot his high school classmates, killing some and wounding others. Parents brought suit against entities that developed and distributed violent online content which the assailant consumed through video games and movies. 90 F.Supp.2d at 809. They alleged that the “inherent dangerousness” of the content rendered it a “product” for which its developers should be held strictly liable. Id. The trial court disagreed, dismissing plaintiffs' products liability claims on the grounds that “intangible thoughts, ideas, and expressive content are not ‘products' within the realm of the strict liability doctrine.” Id. at 810-11. The Sixth Circuit ultimately affirmed. See James, 300 F.3d at 701 (quoting Watters v. TSR, 904 F.2d 378 (6th Cir. 1990)) (“The video game cartridges, movie cassette, and internet transmissions are not sufficiently ‘tangible' to constitute products in the sense of their communicative content.”) (cleaned up).

C. Analysis

The Court uses the legal framework outlined above to analyze plaintiffs' products liability claims. First, the Court uses the framework to address the parties' “all or nothing” approach to whether defendants' platforms are products. In this regard, the Court considers whether defendants' platforms are: (1) services; (2) tangible; (3) analogous to tangible personal property; (4) akin to ideas, content, and free expression upon which products liability claims cannot be based; and/or (5) akin to “software,” and should, on that basis, be treated as products. Second, the Court conducts an analysis of the functionalities of defendants' platforms challenged by plaintiffs.

i. Parties' “All or Nothing” Approach

a. Whether Defendants' Platforms are Services

The parties dispute whether defendants' platforms should be classified globally as services, not products. On the one hand, defendants maintain that the platforms are simply “interactive communication services.” (MTD1 at 3:3.) Because they merely allow “users to communicate with each other or to interact with other users' content,” they cannot be subject to products liability. (Id. at 15:2-3.) On the other hand, plaintiffs urge that, whether or not defendants also provide services to consumers, their platforms, as pled, are products. Caselaw to the contrary amounts to nothing more than unreasoned “statements that an app is a service.” Further, defendants themselves, publicly and privately, describe the platforms as “products” and should be held to those representations.

These arguments are wanting. As to defendants, a review of the cases reveals that, where courts actually considered whether web-based platforms such as defendants' are services, they offered minimal, if any, rationale for such classifications. This is presumably because the issue appeared either obvious or was not contested. Plaintiffs meanwhile fail to persuade that defendants transfigure their platforms into “products” simply by using the word “product” in internal and external communications. (See generally MAC ¶¶ 171-80.) Hiring “Product Managers” to work on a platform does not render that platform a product. See, e.g., Jacobs, 2023 WL 2655586, at *3 & n.1 (“[T]he use by Facebook of the term ‘product' does not resolve the question of whether Facebook represents a ‘product' for the purposes of [the at-issue products liability claims].”). In myriad circumstances, courts look to the substance of an issue, not merely the label. A label can be a factor, but without more, is hollow. The Court will not rest its analysis purely on a label. Accordingly, parties' global arguments as to whether defendants' platforms are services do not resolve this dispute.

Take, for instance, Jackson. There, a 17-year-old shot and killed another using a gun purchased on Snapchat while staying at an Airbnb property. Jackson v. Airbnb, Inc., 639 F.Supp.3d 994, 1000 (C.D. Cal. 2022). Decedent's mother sued Airbnb on a products liability claim, which the court did not consider on the basis that Airbnb was “more akin to a service than a product.” Id. at 1011. The court provided no meaningful analysis of why Airbnb was more like a service than a product. Id. Other, similar cases are Quinteros, Burghart, Jacobs, and Zienick. See, e.g., Quinteros v. InnoGames, 2022 WL 898560, at *7 (W.D. Wash. Mar. 28, 2022) (concluding that an online game called Forge of Empires, “as plead in [that] case,” was “software as a service,” not a product); Burghart v. South Correctional Entities, 2023 WL 1766258, at *3 (W.D. Wash. 2023) (applying Quinteros to hold that an electronic health software system used to monitor prisoners' medical conditions at a jail was “software-as-service” technology, not a product); Jacobs v. Meta Platforms, Inc., 2023 WL 2655586, at *4 (Cal. Super. Ct. Mar. 10, 2023) (relying on Jackson to find “that, as a social media platform that connects its users, Facebook is more akin to a service than a product.”); Zienick v. Snap, Inc., 2023 WL 2638314, at *4 (C.D. Cal. Feb. 3, 2023) (“Snapchat is more like a service than a product, and services are not subject to the laws of strict products liability.”).

For instance, it is surely not the case that hiring “Product Managers” to work on a platform makes that platform a product in a legal sense. See MAC ¶ 175 (“Defendants employ ‘product managers' and have established ‘product teams' responsible for the development, management, operation, and marketing of their apps.”)

b. Whether Defendants' Platforms are Tangible

Second, plaintiffs argue that defendants' platforms are in fact tangible in the sense that they “have very tangible manifestations to their users.” (Pls' Opp'n at 25:22-23.) They contend that defendants “design their apps to be visually stimulating, to make noises and vibrate, and to prompt Plaintiffs and other users to pick up their devices to swipe, click, and flick the user interface ....” (Id. at 25:23-24.) Further, defendants “track their users' physical interactions with the apps,” such as measuring “how long a child ‘hovers' on an image before touching it.” (Id. at 26:4-5.)

While creative, the argument does not persuade. It is the phones that vibrate, make sounds, or otherwise manifest, physically, defendants' design choices. Any connection between defendants and such haptics is therefore too attenuated for the Court to find that defendants' platforms are in fact tangible products. Doing so would erode any distinction between phone manufacturers (for example, when they calibrate how a phone vibrates in a user's hands) and platform operators like defendants.

Accordingly, the Court determines defendants' platforms are not tangible.

Because Ohio's products liability framework limits the scope of “products” to tangible things, plaintiffs' products liability claims are DISMISSED WITH PREJUDICE insofar as they are brought under Ohio law. Parties agreed as much at the hearing. See Ohio Rev. Code § 2307.71(A)(12); see also Pls' Opp'n at 93:13-15 (admitting that Ohio's products liability framework is limited to “tangible” products).

c. Whether Defendants' Platforms are Analogous to Tangible Personal Property

Third, plaintiffs barely argue that defendants' platforms are sufficiently analogous to tangible personal property to be products. See Third Restatement § 19(a) (defining products to encompass intangible things that are analogous to tangible personal property in “the context of their distribution and use”). The Court addresses each of their two arguments.

Plaintiffs devote two paragraphs to the issue.

One, plaintiffs contend that the “distribution” of defendants' platforms is akin to that of tangible goods: the platforms are made by “product designers,” overseen by “product managers,” and then packaged and shipped to the public via stores. The Court has already addressed, and incorporates here, the flaws in relying solely on generic labels. As for the analogy that the platforms, like tangible goods, are purchased in a store, i.e., an online app store, that specific analogy fails to persuade. The Court determines that the analogy is not sufficiently direct. For example, defendants' platforms are not exclusively accessed by downloading an app from an online “store.” They can be accessed via their webpages and can even come pre-loaded on certain connected devices. (See, e.g., MAC ¶¶ 186-89 (implicitly acknowledging that Facebook was initially developed as a website but later was configured into a mobile phone app); 578 (noting that TikTok can be accessed via web browser); 692 (explaining that YouTube “comes pre-installed on many Smart-TV's.”).)

The fact that one might purchase a tangible item like a television, phone, or other device in order to then access defendants' platforms is not relevant. The direct analogy being drawn is between the tangible thing purchased from a store and defendants' platform.

Two, plaintiffs submit, without analysis, as follows:

Consumers store Defendants' apps on their personal electronic devices and use them for personal purposes. There is no functional difference between downloading an app from the App Store and using it on your phone, and buying a container from the Container Store and using it on your countertop.
(Pls' Opp'n at 25:11-14.) Treating as self-evident the similarities between defendants' platforms and a physical container purchased from a storage solutions company fails to persuade. The Court cannot discern what plaintiffs seek to demonstrate through this analogy. A social media platform is not like a container. In sum, plaintiffs have not established as a global matter that defendants' platforms are akin to tangible personal property such that they are products.

d. Whether Defendants' Platforms are Akin to Ideas, Content, or Free Expression

Fourth, the Court considers whether defendants' platforms are akin to ideas, content, and free expression upon which products liability claims cannot be based. Defendants emphatically argue that plaintiffs' claims rise and fall with the content-based allegations made in the MAC. Plaintiffs again emphasize that they do not challenge any content hosted on defendants' platforms, but challenge defendants' design choices in how to structure and operate their platforms, which subsequently caused them harm.

In light of the above and for efficiency, the Court analyzes the parties' key cases focusing on the distinction between design- and content-focused claims to determine whether resolution of the pending motions on their “all-or-nothing” arguments is possible. To this end, the Court begins with defendants' cases. In addition to Winter and its progeny, such as James, defendants also rely on cases like Estate of B.H. v. Netflix, Inc., 2022 WL 551701, at *1 (N.D. Cal. Jan. 12, 2022), appeal docketed, No. 22-15260 (9th Cir. Feb. 23, 2022) and Rodgers v. Christie, 795 Fed.Appx. 878 (3d Cir. 2020) to assert that products liability claims relating to content-delivery systems are not cognizable.

In Netflix, decided by this Court in 2022, plaintiffs brought a range of claims, including for products liability, against Netflix in connection with Netflix's production, dissemination, and recommendation of a television show depicting suicide to a young girl who later died by suicide. Netflix, 2022 WL 551701, at *3; see also Am. Compl. ¶ 6, Netflix, No. 4:21-CV-06561-YGR (N.D. Cal. Sept. 22, 2021), ECF No. 22 (alleging Netflix “used its sophisticated, targeted recommendation systems to push the [at issue s]how” on children). This Court determined that plaintiffs failed to state a strict products liability claim and granted defendant Netflix's motion to dismiss because plaintiffs “premised” the operative complaint “on the content and dissemination of the show.” Netflix, 2022 WL 551701, at *3. Relying on Winter, the Court likened plaintiffs' claims to those against “books, movies, or other forms of media.” Id.

This litigation is distinguishable from Netflix, however. There, plaintiffs' injuries were inseparable from a specific show. The Court was therefore left with no option but to conclude that, “[w]ithout the content,” meaning the show in question, “there would be no claim.” Id. Not so here. As pled, plaintiffs allege harms stemming from the design of defendants' platforms.

Rodgers similarly does not persuade, but on different grounds. There, plaintiff brought suit against the entity responsible for designing “a multifactor risk estimation model” used by the New Jersey state court to make pretrial release determinations, and which played a role in the decision to release a man who killed plaintiff's son. Rodgers, 795 Fed. App'x at 878-79. The district court dismissed the complaint, finding that the model was not a product under New Jersey law, and the Third Circuit affirmed. Id. at 879. In doing so, the Third Circuit emphasized two things: (i) the model at issue was not distributed commercially; and (ii) the model was not “remotely analogous” to tangible personal property because “information, guidance, ideas, and recommendations are not products under the Third Restatement.” Id. at 879-80 (cleaned up). Rodgers is therefore distinguishable on two grounds. First, defendants' platforms are commercially available, unlike the model there at issue. Second, plaintiffs' allegations regarding defendants' recommendation algorithms are barred by Section 230 and no longer part of this case.

Further, the opinion in Rodgers “does not constitute binding precedent” as it was “not an opinion of the full Court.” Rodgers, 795 Fed. App'x at 878.

Viewed in context, therefore, Netflix and Rodgers do not demonstrate why dismissal of all plaintiffs' products liability claims is necessary. By contrast, Brookes, Lemmon, and Omegle are examples of cases in which courts took plaintiffs' preferred approach of distinguishing between products liability claims focused primarily on content (not cognizable) and those focused primarily on design (cognizable). The Court analyzes these cases next.

Defendants' arguments, based on Brookes, Lemmon, and Maynard, that products liability law does not apply “where the plaintiff alleges harm from provision of a service or from information, ideas, and content expressed through the service,” fail to persuade. MTD1 at 8:7-9 (collecting additional cases in support). First, this argument simply repackages the framework already developed in this Order. Second, the Court determines defendants have not established that their platforms consist solely of services as a matter of law.

The Court begins with Brookes as it is most analogous to this litigation. There, a Florida circuit court held, with the benefit of a full record and on summary judgment, that the ridesharing company Lyft's mobile app was a product. Brookes v. Lyft, Inc., 2022 WL 19799628, at *3 (Fla. Cir. Ct. Sept. 30, 2022). The court clarified, first, that Lyft's app was not a service, writing that “Lyft's connection to the application is not simply the use of it to provide a service.” Id. Instead, “Lyft [was] the designer and distributor of the application,” which was “defective because of the way it habituat[ed] and distract[ed] Lyft drivers to constantly monitor the application,” including while driving. Id. at *1, *3. The court concluded that the “design choices” gave rise to the harms alleged and that Lyft could be held accountable under Florida's products liability framework. Id. at *2. The logic of Brookes therefore tracks plaintiffs' products liability theory in this MDL. Here, plaintiffs allege harm arising out of defendants' choices about how to design their platforms, including the at-issue defects.

Defendants sought at the hearing to distinguish Brookes by emphasizing that, unlike here, the plaintiff in that case was harmed by a physical object, the car operated by a Lyft driver distracted by the mobile app. Brookes, 2022 WL 19799628, at *1. Although the car in question was part of the chain of causation in that case, the Court discerns no indication that the court relied on the fact that the injury was ultimately caused by a tangible object in determining whether the Lyft app was a product. Defendants similarly argued that unless the allegations assert the “software made the operation of a vehicle more dangerous,” the action should be dismissed. Defs' Reply at 7:16-19 (citing Jane Doe No. 1 v. Uber Techs., Inc., 79 Cal.App.5th 410, 419 (2022)). This is misguided. First, Jane Doe did not conduct a product analysis. Second, the trial court below had found that the Uber ridesharing app was not a product in part because it was used to provide a service to the plaintiff (i.e., obtaining a ride). See Doe v. Uber Tech., Inc., 2020 WL 13801354, at *7 (Cal. Super. Ct. Nov. 30, 2020) (“By plaintiffs' own allegations, the Uber App was used to gain a service: a ride.”) (citation omitted). No such services are implicated here.

Like Brookes, Lemmon also supports the use of products liability in this litigation. While that case is discussed in more detail later in this Order, it suffices at this point to note that the Ninth Circuit there assumed (perhaps because it found the point obvious) that plaintiffs adequately alleged a product-based negligence claim against Snap. Lemmon, 995 F.3d at 1093. Plaintiffs there challenged Snap's Speed Filter functionality, a tool that enabled users to overlay the speed at which they were traveling in real life onto digital content that could be shared through the app. Id. Thus, the claims at issue in that case are similar to plaintiffs' allegations in the MAC, in that they focused on the design of Snapchat functionalities more than on any content shared through the platform or accessed there. See, e.g., id. (“This case presents a clear example of a claim that simply does not rest on third-party content .... [Plaintiffs'] negligent design claim faults Snap solely for Snapchat's architecture, contending that the app's Speed Filter and reward system worked together to encourage users to drive at dangerous speeds.”). To that end, Lemmon provides, at minimum, an example validating plaintiffs' theory that products liability claims may be cognizable against social media platforms, including defendants.

Finally, Omegle is also instructive and invokes Lemmon. That case involved products liability claims against Omegle.com, a chat platform that randomly connects users for video calls. Plaintiff there alleged the platform's design was defective insofar as it randomly connected minor and adult users before any contact. Omegle, 614 F.Supp.3d at 817. Relying in part on Lemmon, the court determined that plaintiff adequately pled her products claims. See id. at 819 (noting that defendant Omegle “could have satisfied its alleged obligation to Plaintiff by designing its product differently-for example, by designing a product so that it did not match minors and adults.”). In doing so, the court rejected the notion that defendant would have “needed to review, edit, or withdraw any third-party content” in response to plaintiff's claims. Id. at 820. The court reiterated that plaintiff's “case [did] not rest on third party content” because she contended “that the product [was] designed [in] a way that connects individuals who should not be connected (minor children and adult men).” Id. at 820-21. Thus, Omegle, like Brookes, stands for the proposition that products claims focused on the design of digital platforms, as opposed to their content, may be cognizable.

Accordingly, the Court determines that defendants' global argument that plaintiffs' allegations concern only third-party content, and should be dismissed on that basis, fails to persuade. However, as discussed infra, this does not end the analysis. Instead, a more detailed and searching analysis of the specific defects alleged is required.

e. Whether Defendants' Platforms, as Software, are Products

Fifth, the Court examines whether defendants' platforms are akin to “software” and on that basis are products, globally speaking. The Third Restatement anticipated that courts might, at some future date, be asked to determine whether software is a product. It did not express a view on the matter and instead simply made two notes. First, academics have long urged such an extension of tort doctrine. Second, courts could turn to the Uniform Commercial Code's (“UCC”) treatment of mass-marketed software as “goods” for persuasive authority. See generally Third Restatement § 19(a) & cmt. d.

The Ninth Circuit, in Winter, similarly anticipated this. It suggested, in dicta, that “computer software that fails to yield the result for which it was designed may [also] be [a product].” Winter, 938 F.2d at 1036.

Bespoke software designed specifically for a customer is not considered a good, however. In that context, software is a service. Third Restatement § 19(a) & cmt. d.

Relying on those notes, plaintiffs argue that, under the above-referenced framework, defendants' platforms are “software” and should be treated as products. They rely on three cases to support this argument: Communications Groups, RRX Industries, and Neilson Business Equipment Center. These cases are not products liability cases, however; they are cases in which courts determined that contracts for software implicate “goods” and are therefore governed by the UCC (or its state analogs).

Commc'ns Grps., Inc. v. Warner Commc'ns, Inc., 527 N.Y.S.2d 341 (N.Y. Civ. Ct. 1988); RRX Indus., Inc. v. Lab-Con, Inc., 772 F.2d 543 (9th Cir. 1985); Neilson Bus. Equip. Ctr., Inc. v. Monteleone, 524 A.2d 1172 (Del. 1987).

Defendants submit that plaintiffs' preferred approach would extend the limits of products liability too far by finding, in effect, that any software can be a product, even software that operates as a service or deals primarily with ideas, content, and free expression that cannot typically form the basis of a products liability claim. That said, neither plaintiffs nor defendants analyze in detail plaintiffs' cases, nor do they apply the facts of such cases to this litigation. The Court nonetheless addresses each.

At the outset, the Court determines that Communications Groups is irrelevant to this litigation because it deals with custom software, which is a service.

See, supra, note 45. Communications Groups arose from a dispute over a contract for “the installation” of “specifically designed software equipment for defendant's particular telephone and computer system, needs, and purposes.” Communications Groups, 138 Misc.2d at 83. The transaction at issue involved multiple pieces of “identifiable and movable equipment such as recording, accounting and traffic analysis and optimizations, modules, buffer, directories, and an operational user guide and other items.” Id.

RRX Industries is similar to Communications Groups but does not appear to have involved bespoke software. Rather, it arose from a “computer software contract” dispute involving “a software system for use in [] medical laboratories.” RRX Industries, 772 F.2d at 545. The Ninth Circuit, in finding the software constituted a “good,” applied a California law defining as “goods” “all things . . . which are movable at the time of identification to the contract for sale ....” Id. at 546 (quoting Cal. Comm. Code § 2105 (West 1964)) (emphasis supplied). Neilson Business Equipment Center is similar. That case also involved a contract for a software package consisting of “hardware, software and services,” which were collectively determined to be a good. Neilson Business Equipment Center, 524 A.2d at 1174.

The Court therefore finds plaintiffs' analogy to the treatment of software under the UCC insufficiently developed to persuade. First, read together and viewed in context, RRX Industries and Neilson Business Equipment Center stand for a narrower proposition than plaintiffs suggest: that certain software packages, typically including physical hardware and delivered to specific customers, can be “goods.” Plaintiffs do not explain how this litigation implicates this rule. This is especially striking where, as here, defendants' platforms are intangible and do not include hardware. Second, plaintiffs give this Court no recent cases to support their preferred approach. The cases upon which they rely were decided between 1985 and 1988. The “software” referenced therein was far less sophisticated than the technologies implicated by this litigation, which were created decades later. As plaintiffs cite no authority that interprets the UCC's treatment of software in a modern context, and as they do not meaningfully explain how defendants' platforms are similar to the software discussed in their cases, the Court cannot adopt their logic. Said differently, there may be a workable analogy here, but plaintiffs have not identified it.

Accordingly, the Court declines to treat the platforms as products by way of analogy to how the UCC treats some mass-marketed software.

ii. The Court's Defect-Specific Approach

As repeatedly emphasized herein, the allegations in the MAC warrant a more thorough analysis than the global approaches taken. Thus, the Court analyzes whether the various functionalities of defendants' platforms challenged by plaintiffs are products. For each, the Court draws on the various considerations outlined above (i.e., whether the functionality is analogizable to tangible personal property or more akin to ideas, content, and free expression) to inform the analysis. Depending on the functionality at issue, the Court's analysis may be limited to one consideration; for other defects, multiple considerations may determine the outcome.

To the extent the parties diverge from their “all or nothing” approach, they do so by sorting plaintiffs' alleged defects into sweeping categories and analyzing them at an abstract level. See, e.g., MTD1 at 23:10-25:19; Reply at 10:3-11:25. As discussed, supra, this is of marginal value.

a. Defective Parental Controls and Age Verification (Defects i, ii, and iii)

The first three design defects relate to defendants' allegedly defective parental controls and age verification systems, namely: (i) a failure to implement robust age verification processes to determine users' ages (MAC ¶ 845(a); see also, e.g., id. at ¶¶ 59, 134, 140, 327-35 (Meta), 461-62 (Snap), 568-74 (TikTok)); (ii) a failure to implement effective parental controls (Id. at ¶ 845(b); see also, e.g., id. at ¶¶ 134, 141, 262 & 346 (Meta), 566 & 579 (TikTok)); and (iii) a failure to implement effective parental notifications (Id. at ¶ 845(c)).

The Court begins by asking whether these alleged defects are analogous to tangible personal property in the context of their use and distribution. See Third Restatement § 19(a). The answer is yes. Myriad tangible products contain parental locks or controls to protect young children. Take, for instance, parental locks on bottles containing prescription medicines. Other examples include parental locks on televisions that enable adults to determine which channels or shows young children should be permitted to watch while unsupervised.

The Court also considers whether these defects concern design elements of defendants' platforms and are content-agnostic, as plaintiffs argue, or are more akin to ideas, content, and free expression upon which products liability claims cannot be based. Again, these identified defects primarily relate to the manner in which young users are able to access defendants' apps, including whether their age is accurately assessed during the sign-up process and whether, subsequent to signing up, their activity and settings can be accessed and controlled by their parents.

These defects are therefore more akin to user interface/experience choices, such as those found to be products in Brookes, where a Florida intermediate appellate court determined the Lyft mobile app was a product. See generally Brookes, 2022 WL 19799628. As in Omegle, the defects alleged here also concern minors' abilities to access online platforms. 614 F.Supp.3d at 817, 821. Such claims are therefore content-agnostic.

Defendants' counterarguments do not persuade otherwise. They urge that these defects are not products because the alleged harm is derived from words, images, and content, relying primarily on James to do so. See, supra, Section VI.B.ii (discussing James). Defendants' repackaging of the MAC for their own purposes (such as by asserting these defects are inseparable from content parents may wish to block) does not control, however. The Court determines plaintiffs' pleadings plausibly support their contentions, and therefore distinguish this litigation from James.

Defendants also rely on Grossman v. Rockaway Twp., 2019 WL 2649153 (N.J. Super. Ct. June 10, 2019), which implicated Snap's age verification processes. The decision is inapposite. While the court granted a motion to dismiss products liability claims against Snapchat, it did so on the grounds that plaintiffs had not pled sufficient facts to enable it to analyze whether Snapchat was a product. See id. at *15. This is not the case here.

For these reasons, these three design defects are classified as products.

While not necessary for the finding, the Court notes here that the Third Restatement suggests courts should also consider public policy factors in determining what constitutes a “product” for the purposes of products liability claims. Third Restatement § 19, Reporter's Note at cmt. a. The Reporters to the Third Restatement identify such factors as including “the public interest in life and health,” as well as, among others, “the justice of imposing the loss on the manufacturer who created the risk and reaped the profit.” These considerations also support the Court's conclusion here. The alleged defects concern children's well-being and safeguards to ensure adequate parental oversight of their online activities. As such, they advance the public's interest in young people's well-being. See, e.g., Wyke v. Polk Cty. Sch. Bd., 129 F.3d 560, 573 (11th Cir. 1997) (“Society as a whole has a strong interest in ensuring the health and well-being of its children.”). Moreover, defendants are best positioned to impose such safeguards, especially where, as here, they are alleged to intentionally target younger users. See, e.g., MAC ¶ 54.

b. Failure to Assist Users in Limiting In-App Screen Time (Defects iv and v)

The next two design defects pertain to app session duration: (i) a failure to implement opt-in restrictions to the length and frequency of use sessions (MAC ¶ 845(f); see also, e.g., ¶¶ 195 & 263 (summarizing such allegations against Meta)); and (ii) a failure to implement default protective limits to the length and frequency of use sessions (Id. ¶ 845(e); see also, e.g., ¶¶ 195 & 263 (Meta)).

Again, the Court begins with an analogy to tangible personal property. The most obvious analogs to these identified defects are physical timers and alarms, which have long been in use. Modern examples are also available. For instance, many of us carry in our pockets smartphones, which are tangible products. These phones contain features that enable users to receive automatic notifications should they exceed pre-set “screen time” limits. These examples are sufficiently analogous to tangible personal property in terms of their use and distribution.

Where such features are built into the phone itself, rather than dependent on phone connectivity or integrated into apps thereon, they have been considered part of the physical product. See Holbrook v. Prodomax Automation, Ltd., 2021 WL 4260622, at *1-*2 (W.D. Mich. Sept. 20, 2021) (finding that operating software controlling an assembly line was, for all intents and purposes, part of that physical assembly line and a product).

As noted, supra, plaintiffs' arguments that defendants' platforms are tangible failed to persuade because any physical manifestations of the platforms are facilitated by the phones (or other devices), over which defendants are not alleged to have any control. By contrast, the Court here focuses on an analogy to functionalities of the phones as tangible items.

Importantly, these alleged defects are also content-agnostic. Plaintiffs' theory concerns the manner in which users access the apps (i.e., for uninterrupted, long periods of time), not the content they view there. For this reason, these alleged defects are not excluded on the grounds that they pertain to “ideas, thoughts, and expressive content” under Winter and its progeny. Cf. James, 300 F.3d at 701 (holding that such content cannot form the basis of a products liability claim).

Accordingly, the Court finds the two above-referenced design defects are product components and therefore appropriately fall within a product liability claim.

c. Creating Barriers to Account Deactivation and/or Deletion (Defect vii)

Plaintiffs allege that each defendant's account deactivation/deletion process is needlessly complicated and serves to disincentivize users from leaving their respective social media platforms. (MAC ¶ 845(m); see also, e.g., ¶¶ 358-59, 362 (Meta), 489 (Snap), 639, 645, 647 (TikTok), 774 (YouTube).)

Here, defendants' global arguments casting all of plaintiffs' allegations as essentially content-related are particularly lacking. The manner in which an individual user is able to deactivate or delete an account does not pertain directly to ideas, content, or free expression and is content-agnostic. Cf. Winter, 938 F.2d at 1034 (concluding, in relevant part, that “ideas and [the] expression thereof” are not products). Defendants' suggestion otherwise strains credulity.

Defendants include a passing reference to “account deletion” in MTD1. There, defendants argue that this defect is part of an attempt by plaintiffs to “use product liability law to restrict the ‘words and images' that Defendants make available-content that Plaintiffs say Defendants should have prevented from reaching minors.” MTD1 at 11:19-21 (citing James, 300 F.3d at 697, 701) (additional citations omitted). The Court disagrees. It strains plaintiffs' MAC to characterize their allegations as seeking to require defendants to enable smoother account deletion pathways in order to prevent minors from accessing content. As pled, plaintiffs seek the creation of such pathways in order to permit users to extricate themselves from defendants' platforms for any reason, including that they no longer wish to view content available there. It is simply not true that the only reason a child may wish to deactivate a Facebook account, for instance, is the content it publishes.

Further, the Court is not inclined to view this alleged defect as akin to a service. In some senses, account deletion and deactivation may be analogized to interactions a consumer might have with a service provider (such as closing an account with a bank or credit card company). The distinction here, however, is that account deletion and deactivation, as pled, constitute a user-directed process. The MAC does not assert that employees of defendants must assist users in processing such requests. This distinguishes the alleged barriers to account deletion and deactivation here from account-related services provided in other contexts.

The Court similarly determines that defendants' account deletion and deactivation processes are distinct from product-related services, such as “installation or repair,” that courts typically view as services, not products. See Third Restatement § 19(a), Reporter's Note at cmt. f. A better example of such services, which again are not products, would be a technician assisting a consumer with the installation of a new appliance and subsequently servicing that appliance.

Given the procedural posture of the action, the Court therefore finds a sufficiently plausible basis to classify the defect as a product.

d. Failure to Label Edited Content (Defect vi)

Next, plaintiffs allege defendants fail to label images and videos that have been edited through in-app “filters” as edited content. (MAC ¶ 845(k); see also, e.g., ¶ 318 (“Meta has intentionally designed its products to not alert adolescent users when images have been altered through filters or edited. Meta has therefore designed its product so that users, including plaintiffs, cannot know which images are real and which are fake, deepening negative appearance comparison.”).)

This alleged defect concerns the design of defendants' social media platforms rather than the content made available through such platforms. See, e.g., Brookes, 2022 WL 19799628, at *3. That said, the Court recognizes that labeling, or failing to label, content is in some measure tied to the nature of the content itself. See, e.g., James v. Meow Media, Inc., 90 F.Supp.2d 798, 810-11 (W.D. Ky. 2000), aff'd 300 F.3d 683 (6th Cir. 2002) (“[I]ntangible thoughts, ideas, and expressive content are not products within the realm of the strict liability doctrine.”) (cleaned up). However, that connection relates to the output of the labeling, not the labeling tool itself.

On balance, and given the posture of this litigation, the Court is required to accept plaintiffs' allegations as true when testing the sufficiency of their claims. For this reason, the Court finds that this design defect may proceed as a product to the extent that plaintiffs' allegations center on the design of the filter. For instance, labeling a photo as “edited” does not alter the underlying photo so much as it guides the user in better understanding how to interpret that photo. The Court finds this distinction meaningful. Accordingly, while a closer question, plaintiffs have plausibly stated the existence of a product relative to this defect.

e. Making Filters Available to Users to Manipulate Content (Defect viii)

The next alleged defect concerns defendants' filters, which enable users to manipulate content prior to posting it on defendants' platforms or otherwise sharing it with others. (MAC ¶ 864(d); see also, e.g., ¶¶ 88 (all defendants), 131 & 649-53 (TikTok), 210 & 314-26 (Instagram), 256 (Facebook), 513-19 (Snapchat).)

Plaintiffs challenge two main categories of filters. One, they target filters that permit users to “blur imperfections” and otherwise enhance their appearance in order to “create the perfect selfie.” (Id. at ¶ 514.) Plaintiffs assert the widespread use of such filters promotes unattainable beauty standards and facilitates social comparison, which combine to cause negative mental health outcomes for users, particularly young girls. Two, they target filters like Snapchat's Speed Filter, which enable users to overlay content on top of existing content. Specifically, the Speed Filter is a functionality that enables users to overlay the speed they are traveling in real life onto a photo or video before sharing that content with others via the Snapchat app. (Id. at ¶¶ 132, 517-18 (describing the Speed Filter).)

The Court examines these categories of filters separately.

With respect to the filters that permit appearance alteration, the Court notes that defendants, admittedly in the First Amendment context, have referred to such filters as “tools that allow users to speak to one another,” such as by “creating or modifying their own expression (including with visual effects that change the look of images).” (MTD1 at 19:19-21 (emphasis supplied).) Defendants' use of the word “tools” here is notable because defendants implicitly concede that a distinction exists between a “tool,” or functionality, that permits users to manipulate content and the content itself. Here, the concession inures to plaintiffs' benefit as it bolsters their contention that this alleged defect is really about design, not content. Given the procedural posture, plaintiffs' products liability claims may proceed with respect to defendants' appearance-altering filters.

Defendants' articulation reoccurs in other filings. For instance, defendants argue that the “presence of photo or video ‘filters' is determined by the user in deciding how to personalize their own content.” MTD1 at 23:16-18 (internal citations omitted). Defendants frame use of such filters as pertaining to “how users can create or view messages, images, or other content generated by third parties online” and assert that they therefore cannot be products. Again, they focus on the output rather than the instrumentality itself.

With respect to Snapchat's Speed Filter, the Court views the Ninth Circuit's opinion in Lemmon as on point. See, supra, Section VI.C.i.d (discussing Lemmon). Like in Lemmon, plaintiffs here also challenge the design of Snapchat's platform insofar as it provides users with the Speed Filter as a tool for overlaying their speed onto photos and videos. As plaintiffs challenge essentially the same functionality as was at issue in that case and plead their allegations in similar ways, the Court determines Lemmon applies here. Thus, the Court finds that plaintiffs have adequately pled that the Speed Filter is a product or component thereof, and that plaintiffs' products liability claims may proceed as to that defect.

In their Reply, defendants assert Snapchat's Speed Filter “has been defunct for years and is not at issue in this litigation.” Defs' Reply at 5:16-17; see also Dkt. No. 324, Snap's Supplemental Brief in Support of Defendants' Reply in Support of MTD1 (hereinafter, “Snap's Supplemental Reply Brief”) at 4:3 & n.5 (explaining that the filter is no longer in operation). However, the Speed Filter is referenced in the MAC, see, e.g., MAC ¶¶ 517-18, and so the Court analyzes it here. To the extent plaintiffs agree that the Speed Filter is no longer operational and was not operational during all relevant periods, then the Court would expect to receive clarification as to the scope of the filters defect as this litigation progresses.

The Court disagrees, therefore, with defendants' attempt to distinguish Lemmon on the grounds that the Ninth Circuit “did not address whether Snapchat . . . was ‘a product' for purposes of California tort law.” MTD1 at 16 n.12 (citing Jacobs v. Meta Platforms, Inc., 2023 WL 2655586, at *3 (Cal. Super. Ct. Mar. 10, 2023)). Maynard, the other Speed Filter case to which parties point, also supports this conclusion. There, plaintiffs brought a product-based negligent design claim against Snapchat arising out of Snap's design of the Snapchat Speed Filter functionality. Maynard v. Snapchat, Inc., 870 S.E.2d 739, 743 (Ga. 2022). While the Georgia Supreme Court concentrated its analysis on the duty owed by Snap and proximate causation, it tacitly assumed that plaintiffs had adequately pled that the filter was a product in the first instance. Defendants urge this Court to disregard Maynard because the court there “did not analyze or decide whether” the filter was a product. MTD1 at 16 n.12. This is true. However, this may have been because the Georgia Supreme Court viewed the filter's status as a product as obvious.

Accordingly, the Court determines plaintiffs have plausibly alleged that both categories of filters are products and permits their claims to proceed on that basis.

f. Failure to Enable Processes to Report CSAM (Defect ix)

Finally, the Court analyzes plaintiffs' allegations that defendants failed to design their platforms to include “reporting protocols [that] allow users or visitors” “to report CSAM and adult predator accounts specifically without the need to create or log in to the products prior to reporting.” (MAC ¶ 845(p) (emphasis supplied).)

The Court determines this allegation specifically concerns the design of defendants' platforms. Plaintiffs seek to hold defendants accountable for requiring users to have logged into a registered account in order to report certain obscene content or profiles. This is quintessentially a matter of design, user interface, and system architecture rather than content. See generally Brookes, 2022 WL 19799628, at *3.

Accordingly, the Court determines that plaintiffs have adequately alleged that the designs of defendants' CSAM and adult predator account reporting mechanisms are products.

The Court notes, however, that additional fact development is likely to be helpful on this defect because, as set forth above, plaintiffs are barred from holding defendants liable for their content moderation activities. Their theory of harm relative to this defect must therefore not take issue with defendants' choices as to what content to take down or censor.

D. Conclusion

For the foregoing reasons, the Court determines that plaintiffs adequately plead the existence of product components as to each alleged defect analyzed herein. As such, the Court reaches the remaining elements of plaintiffs' products liability claims: duty and causation.

Notably, the product defects here identified by the Court and in relation to which the Court permits plaintiffs' products liability claims to proceed are not meaningfully challenged in Snap's supplemental filings. See generally Snap's Supplemental MTD Brief; Snap's Supplemental Reply Brief.

In general, defendants failed to brief specific grounds to dismiss plaintiffs' failure to warn claims. Given that there are at least some bases on which plaintiffs' failure to warn claims can proceed (i.e., insofar as such claims are based on the defects analyzed herein), the Court declines to undertake an unbriefed, complete analysis of all alleged defects on which the failure to warn claims could conceivably be based. Such an analysis would involve defects not barred by Section 230 and the First Amendment as to plaintiffs' failure to warn claims.

VII. Duty

The Court now addresses whether plaintiffs have adequately pled the duty element of their product-based negligence claims (Priority Claims 3-4). The Court analyzes two issues: one, whether plaintiffs have pled that defendants owe a duty to users of their social media platforms; and two, whether defendants owe a duty to prevent third parties, such as adult predators, from using defendants' platforms to harm plaintiff users.

A. Duty to Users of Defendants' Platforms

First, with respect to whether defendants owe a duty to plaintiff users of their social media platforms, the analysis is straightforward. It is well-established, including in this circuit, that manufacturers of products owe such duties to users. See Third Restatement § 1. The parties agree. (See Oct. 27, 2023 Hrg Tr. 131:15-25.)

Third Restatement § 1 (“One engaged in the business of selling or otherwise distributing products who sells or distributes a defective product is subject to liability for harm to persons or property caused by the defect.”); see also id. at cmt. a (“[This is] a general rule of tort liability applicable to commercial sellers and other distributors of products generally.”); see also Lemmon, 995 F.3d at 1092 (“Manufacturers have a specific duty to refrain from designing a product that poses an unreasonable risk of injury or harm to consumers.”) (citing Dan B. Dobbs, et al., Dobbs' Law of Torts § 478 (2d ed., June 2020 Update)). Similarly, plaintiffs' preferred law, that of Georgia and New York, imposes duties on product manufacturers. See, e.g., Micallef v. Miehle Co., Division of Miehle-Goss Dexter, Inc., 39 N.Y.2d 376, 385 (Ct. App. 1976) (“[W]e hold that a manufacturer is obligated to exercise that degree of care in his plan or design so as to avoid any unreasonable risk of harm to anyone who is likely to be exposed to the danger when the product is used in the manner for which the product was intended as well as an unintended yet reasonably foreseeable use ....”) (citations omitted); Reece v. J.D. Posillico, Inc., 164 A.D.3d 1285, 1287-88 (N.Y. App. Div. 2018) (“A product may be defective when it contains a manufacturing flaw, is defectively designed, or is not accompanied by adequate warnings for the use of the product. A manufacturer has a duty to warn against latent dangers resulting from foreseeable uses of its product of which it knew or should have known.”) (citations omitted); Chrysler Corp. v. Batten, 264 Ga. 723, 724 (1994) (“[A] manufacturer has a duty to exercise reasonable care in manufacturing its products so as to make products that are reasonably safe for intended or foreseeable uses, the manufacturer of a product which, to its actual or constructive knowledge, involves danger to users, has a duty to give warning of such danger.”) (cleaned up). The Court recognizes that nuances exist, however. For example, under Oregon law, “there is typically no freestanding duty, only a fact question of foreseeability.” Pls' Opp'n at 35 n.20 (citing Towe v. Sacagawea, Inc., 347 P.3d 766, 774-75 (Or. 2015)). That said, the Court declines to consider such granular distinctions at this early stage in the proceedings, especially given the posture and insufficient briefing.

Here, in the preceding section of this Order, the Court determined that plaintiffs adequately pled the existence of products in connection with the defects analyzed. Thus, defendants owe users the duty to design such products in a reasonably safe manner and to warn about risks they pose. This duty is informed by the context at issue, namely that plaintiffs are minor children. It is not, however, heightened on that basis.

Having determined that plaintiffs have established the existence of a duty, the Court declines to consider the parties' arguments regarding whether defendants owe any other duties to plaintiff users of their social media platforms. The Court also notes here that plaintiffs raised a third issue, namely the impact of public policy in limiting or narrowing any duty recognized by the Court. Given that the parties agree that product makers owe a duty relative to introducing their products into the stream of commerce, and that the Court does not find a duty in the third-party context, the Court declines to reach that issue or to opine on the relevance of such issues to topics that may be before the Court in future briefing.

Throughout their briefing, plaintiffs make convoluted arguments about the impact of plaintiffs' status as minors on the duty analysis. At times, they suggest that defendants owe a heightened duty to the MDL plaintiffs on account of their age. The Court here clarifies that neither the Second Restatement, nor any of the other authority upon which plaintiffs rely, stands for the proposition that defendants' duty to plaintiffs is somehow elevated. At most, defendants are obligated to perform their duties to plaintiffs while bearing in mind that plaintiffs are children. They are not subject to a heightened or more exacting duty on this basis. See Second Restatement § 290, cmt. k (“The actor as a reasonable [person] should also know the peculiar habits, traits, and tendencies which are known to be characteristic of certain well defined classes of human beings. [They] should realize,” for instance, “that the inexperience and immaturity of young children may lead them to act innocently in a way which an adult would recognize as culpably careless ....”); see also Swix v. Daisy Mfg. Co., 373 F.3d 678, 686-88 (6th Cir. 2004) (determining that the duty owed should be construed relative to the typical user of the product); In re JUUL Labs, Inc. Marketing, Sales Practices, and Prods. Liab. Litig., 497 F.Supp.3d 552, 656 (N.D. Cal. 2020) (opining as to the public “policy in favor of preventing future harm” to children and recognizing a duty of care (not a heightened duty) in part on that basis); Sims v. United States, 2020 WL 3273040, at *3 (W.D. Mo. June 17, 2020) (finding that defendant owed “a duty to supervise and protect” the minor plaintiffs but without commenting on whether that duty was heightened).

B. Duty to Prevent Third Party Harm

Second, with respect to whether defendants' duty extends to preventing third parties from harming plaintiff users by using the social media platforms, the parties disagree. Defendants argue that plaintiffs have not established the requisite misfeasance upon which such a duty could be based. Plaintiffs assert that the factual allegations in the MAC are sufficient.

As above, the Court begins with the Restatements. In general, entities do not owe a duty to prevent harm by third parties to their users, subject to two exceptions. Namely, duties may attach where (i) a “special relationship” exists between the entity and its users or between the entity and the third parties potentially causing the harm, or (ii) the entity itself creates a risk of harm by third parties to its users. Plaintiffs concede that no qualifying special relationship exists here; thus, the Court concentrates its analysis on the second exception.

See Second Restatement § 315 (“There is no duty so to control the conduct of a third person as to prevent him from causing physical harm to another unless [a cognizable special relationship exists].”); Third Restatement: Physical & Emotional Harm § 37 (“An actor whose conduct has not created a risk of physical or emotional harm to another has no duty of care to the other unless a court determines that one of the affirmative duties provided [herein] is applicable[,]” such as the existence of a special relationship between the actor and the third party or the affected person) (emphasis supplied).

Oct. 27, 2023 Hrg Tr. 132:9-11.

In terms of whether an entity is creating a risk of harm itself, common law principles draw a “distinction between misfeasance and nonfeasance.” See Dyroff, 2017 WL 5665670, at *12 (citation omitted). Duty is typically not imposed for nonfeasance, which is defined as “a failure to act.” Id. (citations omitted). By contrast, misfeasance can create a duty when the defendants are “responsible for making the plaintiff's position worse, i.e., defendant has created a risk.” Id. (citations omitted); see also Ziencik v. Snap, Inc., 2023 WL 2638314, *5 (C.D. Cal. Feb. 3, 2023) (acknowledging that, while defendants “generally owe[] no duty to protect another from the conduct of third parties,” “such a duty may arise when a defendant engages in risk-causing conduct.”); Weirum v. RKO Gen., Inc., 15 Cal.3d 40, 49 (1975) (recognizing a duty to protect where “the defendant is responsible for making the plaintiff's position worse, i.e., defendant has created a risk”). Thus, cases embrace a general proposition that a duty to protect users from third party harm is recognized where the actor has created a risk of harm to another or permitted the risk of such harm to increase. See Restatement (Third) of Torts: Liability for Physical and Emotional Harm (“Third Restatement: Physical and Emotional Harm”) § 37 (AM. LAW. INST. 2010).

This basic framework is articulated slightly differently in Vesely v. Armslist LLC, 762 F.3d 661 (7th Cir. 2014), a case applying Illinois law. There, the Seventh Circuit explained that a duty to protect can arise from the creation of “a risk of harm to others,” which “implicates in-concert liability.” Id. at 666. The court then suggested that creating a risk of harm to others is comparable to “assist[ing] the third party” alleged to have committed the harm and to “giving substantial assistance or encouragement” to such third party. Id. (citations omitted). Applying this framework to the facts there at issue, the Seventh Circuit explained that “simply enabling consumers to use a legal service,” such as a web-based platform, “is far removed from encouraging them to commit an illegal act” and is therefore not misfeasance from which a duty to protect would arise. Id. (alteration in original). Defendants read Vesely to require that a defendant “actively encourage a third party to commit the unlawful act” in order for a duty to attach. See MTD1 at 37:3-4 (citing Vesely, 762 F.3d at 666 (additional citations omitted)); see also Defs' Reply at 20:3-6 (citing Vesely for the proposition that active assistance to a criminal third party is necessary to trigger the existence of a duty). However, the Court declines to adopt such a high bar for finding misfeasance. This is for three reasons. First, the bulk of the authority cited by the parties and which the Court has reviewed stops short of requiring active encouragement of criminality. Second, the recognition of such a rule would further restrict the scope of the liability recognized by the Third Restatement in ways this Court does not believe were intended. Third, the court in Vesely applied the law of Illinois, which is not the state selected by plaintiffs as their preferred law. That said, it is the controlling law for Illinois claims.

Notably, these principles align with defendants' own articulation of the standard:

[The] standard requires affirmative wrongdoing on the part of the defendant-generally in the form of directing or encouraging a third party to commit an unlawful act, acting in concert with another tortfeasor, or giving substantial assistance to another's tortious conduct-to subject that defendant to a duty of care.
MTD1 at 36 n.17 (emphasis in original) (citations omitted).

Having articulated a general standard, the next question is whether the MAC sufficiently alleges facts to support misfeasance. Again, the parties disagree as to the specificity required.

Defendants rely heavily on Jackson v. Airbnb, Inc. There, the court declined to impose a duty on the web-based, short-term rental service to safeguard renters from criminal acts by third parties on the grounds that plaintiffs' claims relative to the platform were conclusory. 639 F.Supp.3d at 1009. Defendants further emphasize that courts have required affirmative, concerted conduct that increases the risk of harm in order to find misfeasance. See, e.g., Buchler v. State ex rel. Or. Corr. Div., 853 P.2d 798, 805 (Or. 1993) (en banc) (“[M]ere ‘facilitation' of an unintended adverse result, where intervening intentional criminality of another person is the harm-producing force, does not cause the harm so as to support liability for it.”) (citation omitted).

Because the Court determines defendants' platforms are not only services, it declines to consider defendants' arguments that no duty should be imposed on them as interactive communications services providers. See, e.g., MTD1 at 35:26-36:4.

Defendants also rely on Taamneh. That case is distinguishable for two reasons. First, the Supreme Court there construed the text of the Antiterrorism Act (“ATA”) and considered whether Facebook, Twitter, and Google should be liable for aiding and abetting a terrorist organization by permitting that organization to use their platforms. See generally Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023). The ATA is not implicated by the instant proceedings. Second, the Supreme Court's tort law analysis is not on point for this litigation. Defendants are correct that, in the context of assessing liability under the ATA, the Supreme Court considered “the typical limits on tort liability.” Id. at 503. However, the Supreme Court's analysis did not consider the scope of tort liability in the products context here at issue.

By contrast, plaintiffs focus on Ileto and Hacala. Those cases found a duty. Defendants distinguish the cases on myriad grounds, including that the products at issue were physical. The Court addresses each.

Plaintiffs offer minimal citation to or discussion of other authority supporting their theory of the scope of misfeasance, as opposed to simply distinguishing defendants' cases. See generally Pls' Opp'n at 47-48.

First, in Ileto, the Ninth Circuit found that plaintiffs' allegations that defendant gun manufacturers intentionally overproduced weapons, thereby creating “an illegal secondary firearms market,” were “more than sufficient to raise a factual question as to whether the Defendants owed the plaintiff a duty of care.” Ileto v. Glock, 349 F.3d 1191, 1204 (9th Cir. 2003). The Ninth Circuit emphasized that defendants themselves had “created an illegal secondary market targeting prohibited purchasers” and it was those actions that had “placed plaintiffs in a situation in which they were exposed to an unreasonable risk of harm through the foreseeable conduct of a prohibited purchaser,” such as the assailant who committed the mass shooting there at issue. Id. Thus, plaintiffs adequately pled that Glock, Inc.'s duty stemmed from placing its products, i.e., guns, essentially into the black market and by extension into criminals' hands. See generally id. at 1204-05.

Plaintiffs suggest that Ileto supports the proposition that where a company has a business plan that leads to foreseeable third-party risk, a duty should be extended to third parties. Defendants distinguish it on three grounds: “(1) firearms are tangible products; (2) they were the sole cause of the direct and immediate physical harms at issue (injury and death of multiple individuals); and (3) the plaintiff alleged that the defendant took ‘affirmative action' to ‘creat[e] an illegal secondary market for guns that targets illegal purchasers,'” none of which, they contend, apply here. Defs' Reply at 21:6-9.

The Court agrees with defendants' third argument. In short, the allegations in Ileto explicitly connected the defendant manufacturers' and distributors' actions and knowledge with “criminals and underage end users.” Id. at 1197. Plaintiffs further alleged that the Bureau of Alcohol, Tobacco, and Firearms contacted them regarding defendants' illegal gun distribution and yet defendants failed to implement even basic safeguards, such as contractual provisions in their distributor contracts “to address the risks associated with prohibited purchasers.” Id. at 1198.

Here, plaintiffs allege that defendants designed their platforms to push children to use their platforms as much as possible. In that context, defendants enabled features that encourage children to connect with other users, such as adults. (See, e.g., MAC ¶ 530 (Snapchat “allows users to voice or video call one another in the app” which, “when paired with the many others that permit easy access to minors by predators, such as Quick Add and Snap Map,” can facilitate contact between adult predators and minors).) Plaintiffs urge that a duty should be imposed where defendants “knew or should have known that the design of [their] products attracts, enables, and facilitates child predators, and that such predators use [their] apps to recruit and sexually exploit children for the production of CSAM and its distribution on [their platforms].” (Id. at ¶ 164; see also id. at ¶ 144.) Ultimately, however, plaintiffs only allege that defendants sought to increase minors' use of their platforms while “knowing or having reason to know” that adult predators also used the sites and therefore increased the risk to the minors.

The generality of these allegations is insufficient to show misfeasance. See, e.g., Ziencik, 2023 WL 2638314, at *5; Dyroff, 2017 WL 5665670, at *15 (collectively, holding that merely operating a website or web-based platform used by malicious third parties is insufficient to constitute misfeasance). Moreover, at least with respect to defendant TikTok, plaintiffs would appear to acknowledge that defendants have taken more precautionary steps than the defendants in Ileto.

The Court agrees with plaintiffs that, under Ileto and the framework articulated, supra, defendants' conduct need not “rise to the level of involvement described in Gersh v. Anglin” to constitute misfeasance. Id. at 48:6-7. The Court declines to consider Gersh as the facts are far different from those at issue here. There, plaintiffs sought to hold the publisher of a far-right website accountable for “publishing personal and professional contact information for [her] family, and offering samples of the types of anti-Semitic and misogynistic messages his readers should leave” for them. Gersh v. Anglin, 353 F.Supp.3d 958, 969 (D. Mont. 2018). Plaintiffs do not allege defendants here incited analogous action by wrongdoers.

Defendants note that their Terms of Service prohibit criminal activity and that their platforms therefore cannot be said to encourage such activity by third parties, including those who caused harm to plaintiffs here. See MTD1 at 37-38 n.18 (referring to various defendants' online terms of service or similar policies, user contracts, or disclosures). The Court notes that plaintiffs' MAC cites to and thereby incorporates by reference defendant TikTok's Terms of Service, which do prohibit criminal activity, including with respect to minors. See MAC ¶ 644. This further distinguishes this litigation from Ileto.

Plaintiffs' reliance on Hacala does not compel a different result. There, a California intermediate appellate court imposed a duty on the scooter ride-sharing company based on the foreseeable “risk that third parties would negligently leave Bird scooters in hazardous locations” since the scooters, as deployed on public streets, were “dock-less.” 90 Cal.App.5th 292, 317 (Cal. Ct. App. 2023). Thus, the court extended the duty to a plaintiff who tripped on an errant scooter and sustained injuries. Id. at 319. Said differently, the company's affirmative decision to deploy dock-less scooters on city streets, and its failure to educate app users about how and where to safely leave scooters after their rides, “contributed to the risk of harm that resulted in plaintiffs' injuries.” Id. at 311, 313.

Hacala is distinguishable for three reasons. First, the decision related to a California statutory duty. See id. at 310 (“[E]veryone is responsible . . . for an injury occasioned to another by his or her want of ordinary care or skill in the management of his or her property or person.”) (citation omitted). Second, the court's decision was not based on a misfeasance analysis, but rather Bird's own responsibilities with respect to the deployment of scooters. Third and relatedly, the court's decision was informed by Bird's own agreement “to take measures to prevent” injuries by pedestrians, such as tripping over haphazardly abandoned scooters, “when it obtained [its] permit from the City [of Los Angeles]” to operate. Id. at 301. Given these distinctions, the ultimate decision has less persuasive value.

This is not to say that the court in Hacala did not consider misfeasance. See Hacala, 90 Cal.App.5th at 316-18. However, such discussion came after the court had already concluded that the above-referenced statutory general duty applied and was therefore tangential to the analysis.

Accordingly, the MAC does not sufficiently allege misfeasance such that a duty should attach for third party conduct. However, it may be possible for plaintiffs, especially with the benefit of discovery, to amend their pleadings to more explicitly and specifically explain the basis for the misfeasance by defendants that they claim. The Court therefore DISMISSES WITH LEAVE TO AMEND plaintiffs' product-based negligence claims to the extent such claims are premised on the existence of a duty to protect users from third party actors using their platforms.

Plaintiffs cite to paragraphs 129-32 (encouraging social media “challenges”), 137-39 (making minors' profiles public by default), 142, 145, and 147-55 (discussing existing approaches to CSAM and related adult sexual abuse) of the MAC to support their argument that their pleadings are “replete with examples of Defendants' conduct creating a risk of harm.” Pls' Opp'n at 48:20-22 (citing the MAC). The Court reviewed these paragraphs and views the allegations contained therein as insufficient to establish misfeasance, as addressed above.

* * *

For the foregoing reasons, the Court determines defendants owe plaintiff users of their social media platforms duties owing to their status as product makers, which are limited in scope to the defects previously determined by this Court to be product components. Defendants do not, however, owe plaintiffs a duty to protect them from harm from third-party users of defendants' platforms. As set forth herein, plaintiffs' product-based negligence claims may proceed to the extent they (i) are based on the product defects identified in the preceding section of this Order; and (ii) arise out of defendants' duty to design reasonably safe products and to warn users of known defects. Plaintiffs' product-based negligence claims arising out of duties allegedly owed by defendants to plaintiffs regarding third-party conduct on the at-issue platforms may not proceed at this time.

The parties devote scant time in their briefs to examining the scope of defendants' duty to warn product users of defects in those products. Given the insufficient briefing, the Court declines to conduct its own inquiry into the matter and is instead satisfied, based on the analysis set forth, supra, that such a duty exists under plaintiffs' preferred laws. Because plaintiffs' claims may proceed at least under the law of their preferred states, the Court determines their failure to warn claims survive dismissal.

VIII. Causation

Defendants move to dismiss on the issue of causation. Here, they focus on the inadequacy of the short-form complaints. With respect to the issue of general causation, the parties brief Ninth Circuit and California law. The Court need not separately determine the law of New York, Georgia, or other jurisdictions as they are sufficiently consistent for pleading purposes, and no party has suggested otherwise. In summary, plaintiffs have generally, and adequately, alleged causation for purposes of their strict products liability and product-based negligence claims. Issues regarding short-form complaints are more appropriately addressed in a parallel track of revised disclosures and, perhaps, additional motion practice.

State-specific variations may exist but are beyond the scope of this Order to address. For instance, the applicability of the substantial factor test may vary. See, e.g., State Dept. of State Hospitals v. Super. Ct., 61 Cal.4th 339, 352 n.12 (2015) (explaining that, in California, where injuries involve “concurrent independent causes,” courts “apply the ‘substantial factor test' of the [Second Restatement], which subsumes traditional ‘but for' causation.”)

To that end, the Court does not address the sufficiency of plaintiffs' allegations of causation relative to harms purportedly caused by third parties using defendants' platforms.

A. Allegations of General Causation

i. Master Amended Complaint

The MAC includes two main categories of allegations on the causation element: those specific to how particular design features caused plaintiffs' harms, and those asserting that research studies have tied use of defendants' platforms to harms similar to those alleged.

As to the first category, plaintiffs contend that the “defective features” of defendants' platforms have caused and contributed to a range of negative physical, mental, and emotional health outcomes, including anxiety, depression, and self-harm. (MAC ¶ 90.) Plaintiffs support these allegations by describing, in great detail, how defendants' social media offerings work. (See, e.g., id. at ¶¶ 181-437 (Meta); 438-553 (Snap); 554-689 (ByteDance); 690-819 (Google).) Then, they tie the mechanics of the platforms to plaintiffs by asserting that the platforms are designed to induce compulsive use by minors, such as the MDL plaintiffs. Defendants are alleged to do this through efforts to, among other things, “addict users” to their platforms, id. at ¶ 247 (Facebook); “exploit[] and monetize[] social comparison,” id. at ¶ 312 (Instagram); “promote compulsive and excessive use,” id. at ¶¶ 491-97 (Snap); “inundate users with features” that “maximize the time users (including children) spend using” the platforms, id. at ¶¶ 727-29 (YouTube); and avoid controlling the spread of child sexual abuse material, id. ¶¶ 654-74 (TikTok).

As to the second category, plaintiffs argue that myriad studies tie defendants' design and operation of their platforms to the types of injuries alleged by plaintiffs. (See id. at ¶ 101; see also Id. at ¶¶ 96-124 (compiling studies over many years).)

ii. Analysis

As stated above, the Court determines the law of the relevant state jurisdictions is sufficiently consistent to permit a general analysis of whether plaintiffs' MAC adequately alleges causation. The Third Restatement's approach is generally representative. Where, as here, but-for causation is not at issue, the focus is on “proximate cause.” See Third Restatement: Physical and Emotional Harm § 29. Under the Restatement, whether proximate cause exists hinges on whether the harm alleged is a foreseeable result of the at-issue conduct. Id. at § 19 (“The conduct of a defendant can lack reasonable care insofar as it foreseeably combines with or permits the improper conduct of the plaintiff or a third party.”).

For instance, the Restatement approach summarized herein accords with the frameworks employed by the Ninth Circuit when assessing federal law causes of action and by the California Supreme Court. See State Dept. of State Hospitals v. Super. Ct., 61 Cal.4th 339, 352-53 n.11 (2015) (“Proximate cause is also a necessary element” of a negligence claim in California and implicates both but-for causation and limitations on liability based on public policy grounds); Pacific Shores Properties, LLC v. City of Newport Beach, 730 F.3d 1142, 1168 (9th Cir. 2013) (observing, as a matter of federal common law, that “[t]he doctrine of proximate cause serves merely to protect defendants from unforeseeable results of their negligence ....”).

The Court refers to the Third Restatement: Physical and Emotional Harm as a guide when analyzing parties' causation arguments because the Third Restatement: Products Liability permits it to do so. See Third Restatement § 15 (“Whether a product defect caused harm to persons or property is determined by the prevailing rules and principles governing causation in tort.”).

As outlined above, plaintiffs allege defendants made design choices with respect to their platforms which caused plaintiffs' injuries, including adolescent addiction, and negative physical, mental, and emotional health outcomes. The allegations are rooted in academic studies empirically demonstrating causal connections. Thus, given the procedural posture and Rule 8 standards, plaintiffs' allegations are sufficient. Defendants' assertion that more is required at this preliminary stage lacks merit.

Rule 8's requirement that plaintiffs provide a “short and plain statement of the claim showing that [they are] entitled to relief” applies here. Fed.R.Civ.P. 8(a)(2).

The Court need not consider the impact of defendants' obvious possession of better information as plaintiffs have proffered a sufficient basis for a plausibility finding. See Pls' Opp'n at 50.

The Court also here notes that Snap's supplemental filings fail to distinguish Snapchat from the other challenged platforms in terms of the causation element of plaintiffs' products liability claims. The Court addresses two relevant arguments. First, Snap repeatedly argued, both in its supplemental filings and at the hearing, that plaintiffs have not pled adequately severe (and therefore cognizable) harms arising from the design of Snapchat. This is not accurate. The MAC contains sufficiently detailed allegations as to the harms caused by the design of Snapchat to satisfy Rule 8 and sufficiently plead general causation. See MAC ¶ 440 (“Snapchat's design features cause its young users to suffer increased anxiety, depression, disordered eating, sleep deprivation, suicide, and other severe mental and physical injuries.”); see also id. (linking such allegations to specific functionalities of the platform). Snap's attempt to explain away these allegations fails to persuade. Snap's Supplemental Brief in Support of Defendants' Reply in Support of MTD1 at 3 n.3. Second, Snap appears to suggest that a recent district court case in California is relevant to the causation analysis conducted above, writing that the court there “rejected the argument that alleged Snapchat ‘design defects' . . . were the source of harm suffered by Plaintiffs who experienced sexual exploitation by bad actors who abused the platform.” See Snap's Supplemental Reply Brief at 1:16-19. However, that argument relies on L.W., a case decided on Section 230 grounds and discussed by this Order in that context. See, supra, note 20 & accompanying text. L.W. provides no grounds for holding, as a matter of causation, that claims against Snap should be dismissed. For the reasons addressed here as well as those detailed, supra, at notes 3, 10, and 58, the Court DENIES Snap's request for dismissal of plaintiffs' claims relative to the Snapchat platform.

B. Specific Causation

With respect to the adequacy of the plaintiffs' short-form complaints, defendants' objections were made previously and addressed. The Court sees no reason to depart from its prior position. MDLs are designed to allow for an efficient progression of litigation and the approach here is consistent with other, similar cases, such as In re Allergan Biocell Textured Breast Implant Products Liability Litigation. There, as here, an analysis of the “common facts” alleged in plaintiffs' MAC is sufficient for plaintiffs to plausibly allege, at this preliminary stage, that defendants' actions proximately caused plaintiffs' injuries.

See Dkt. No. 153-4, Notice of Submissions Concerning Short-Form Complaints, Defendants' Letter Brief and Exhibit at 2 (expressing defendants' view that “each plaintiff must specify the app features and experiences that allegedly caused their injuries.”) (cleaned up). See also Dkt. No. 164, Case Management Order No. 5 at 3:16-17 (“[P]laintiffs are put on notice that they will be required to provide the type of information the defendants are requesting if the case proceeds past dispositive motions”) (citation omitted).

There, the court analyzed, on a motion to dismiss, the master and short-form complaints relative to products liability claims brought by plaintiffs implanted with allegedly defective medical devices. See generally In re Allergan Biocell Textured Breast Implant Prods. Liability Litig., 537 F.Supp.3d 679 (D.N.J. 2021). The court reviewed, “with substantial leniency, the facts that may be specific to each individual [p]laintiff or largely within the control of [the defendants].” Id. at 721. The court found that, “the lack of potentially individualized factual allegations of [p]laintiffs, such as causation of individual [p]laintiffs' injuries, [would] not be a ground for dismissal.” Id. By contrast, such leniency was not afforded to the “common facts” alleged in the master complaint. Id.

Defendants' reliance on Adams v. BRG Sports, Inc. does not compel a different result. There, the court dismissed on the grounds that the complaint was insufficiently particularized and had “obfuscate[d] whether each and every plaintiff [alleged] that his injury [was] caused by the defendants' negligence, defective design, and/or inadequate warnings.” Id. at *3. There is no such confusion here. The MDL plaintiffs allege harm stemming from defendants' platforms, as set forth in the MAC and their individual short-form complaints.

There, plaintiff high school football players sued designers, manufacturers, and sellers of football helmets under a products liability theory. Adams v. BRG Sports, Inc., Nos. 17 C 8544, 17 C 8972, 18 C 129, 2018 WL 4853130, at *1 (N.D. Ill. Oct. 5, 2018). The district court used a mini-MDL approach to address the suits filed, which involved requiring plaintiffs to file a master complaint and individual short-form complaints. Id. The court granted the motion to dismiss on the basis that plaintiffs' causation allegations were made “on a group basis and only in the master complaint.” Id. at *3. However, it granted plaintiffs leave to amend their pleadings. Id. at *5.

Accordingly, the Court DENIES defendants' motion to dismiss for failure to adequately plead causation.

VIII. Conclusion

For the foregoing reasons, the Court GRANTS IN PART and DENIES IN PART the pending motions to dismiss. As such, discovery will be allowed to proceed, and the Court will work with the parties on the next phase of briefing.

The Court summarizes its key rulings as follows:

• MTD2 is GRANTED on Section 230 grounds as to plaintiffs' products liability design defect claims (Claims 1 and 3) to the extent they are based on the defects alleged at paragraphs 845(e), (h), (i), (t), (u), (l), and (j), as well as paragraph 864(l) of the MAC. MTD2 is GRANTED on First Amendment grounds as to plaintiffs' products liability design defect claims (Claims 1 and 3) arising out of the defect alleged at paragraph 845(l) of the MAC insofar as that defect concerns the timing and clustering of notifications of defendants' content. MTD2 is otherwise DENIED as set forth herein.

To clarify, MTD2 is granted on Section 230 grounds as to the defect pled at paragraph 845(l) of the MAC insofar as it describes the timing and clustering of notifications of third-party content in a way that promotes addiction.

• The Court FINDS Plaintiffs' negligence per se claim (Claim 5) not barred by Section 230 or the First Amendment.

• With respect to the functionalities that remain after the rulings with respect to Section 230 and the First Amendment, MTD1 is GRANTED WITH LEAVE TO AMEND as to plaintiffs' claims that defendants had and breached a duty to protect platform users from harm by third parties, such as adult predators. MTD1 is otherwise DENIED as set forth herein.

• The arguments regarding the remaining elements of plaintiffs' negligence per se claim (Claim 5) require adequate briefing; accordingly, those arguments are deemed WITHDRAWN without prejudice.

• Snap's supplemental request for dismissal of plaintiffs' claims specific to the Snapchat platform is DENIED.

This terminates Dkt. Nos. 237 and 320.

IT IS SO ORDERED.

