
Daniels-Feasel v. Forest Pharm.

United States District Court, S.D. New York
Sep 3, 2021
17 CV 4188-LTS-JLC (S.D.N.Y. Sep. 3, 2021)

Opinion

17 CV 4188-LTS-JLC

09-03-2021

NICHOLE DANIELS-FEASEL, et al., Plaintiffs, v. FOREST PHARMACEUTICALS, INC., et al., Defendants.


MEMORANDUM OPINION AND ORDER

LAURA TAYLOR SWAIN, Chief United States District Judge.

This case involves product liability claims regarding the effects of Lexapro®, a prescription antidepressant medication. Plaintiffs are women who allegedly ingested Lexapro during pregnancy, and their minor children who allegedly suffer from autism spectrum disorder (“ASD”) as a result of their mothers' prenatal use of the drug. Defendants Forest Laboratories Inc., Forest Laboratories LLC, Allergan plc, and Forest Pharmaceuticals, Inc. (collectively, “Defendants”), are pharmaceutical companies that were involved in the design, manufacturing, and/or marketing of Lexapro.

Escitalopram, the single active isomer of the generic drug compound citalopram, is marketed and sold under the trade name Lexapro®. For the purposes of this motion, the Court will refer to the drug at issue as “Lexapro.”

Pending before the Court is Defendants' omnibus motion, pursuant to Federal Rules of Evidence 104(a), 702, 703, and 403, and Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993) (“Daubert”), and its progeny, to preclude from introduction into evidence the expert testimony tendered by Plaintiffs, regarding the alleged causal relationship between Lexapro and ASD, of Lemuel Moyé, M.D., Ph.D. (“Dr. Moyé”), Laura Plunkett, Ph.D. (“Dr. Plunkett”), and Patricia Whitaker-Azmitia, Ph.D. (“Dr. Whitaker-Azmitia”). (See docket entry no. 80, “Motion”.) Plaintiffs filed papers in opposition to Defendants' Motion. (See docket entry no. 85, “Opp.”) Defendants filed a reply in further support of their omnibus Motion. (See docket entry no. 91, “Reply”.) Defendants filed a further submission regarding supplemental authority, and Plaintiffs filed a response. (See docket entry nos. 101-02).

The Court has jurisdiction of this case pursuant to 28 U.S.C. §1332. The Court has considered carefully the parties' voluminous submissions. For the reasons stated below, Defendants' Motion is granted in its entirety.

Background

The propositions recited herein are either agreed to by the parties in their submissions or drawn from the reports of the experts whose testimony is targeted by Defendants' omnibus motion. The Court cites facts contained within the expert reports at issue solely as background factual propositions that the Court understands not to be meaningfully disputed.

Lexapro is a prescription antidepressant medication and member of the therapeutic class of selective serotonin reuptake inhibitors (“SSRI”). (Motion at 1, 7-8; Opp. at 1.) SSRIs are molecules that affect the level and availability of the neurotransmitter serotonin in living tissue. (Expert Report of Lemuel A. Moyé, M.D., Ph.D., dated September 14, 2018, docket entry no. 81 Exh. 10 (“Moyé Rpt.”) at ¶ 36.) SSRIs play an established role in treating anxiety disorders and major depressive illnesses. (Id. at ¶ 40.) The U.S. Food and Drug Administration (“FDA”) has approved the prescription of Lexapro for the treatment of major depressive disorder and generalized anxiety disorder in pregnant women. (Motion at 1.)

Autism is a complex neurodevelopmental disorder that is typified by impaired social interactions, poor communication skills, and repetitive motion and behavior. (Moyé Rpt. at ¶¶ 29-30). Although “changes in neural growth during prenatal and postnatal periods” and genetics may play a role in causing ASD, there is no “gene for autism” and the precise cause of the disorder is unknown. (See Moyé Rpt. at ¶ 35; Opp. at 18; Motion at 8-9.)

The question of whether there is a causal relationship between SSRIs such as Lexapro and neurodevelopmental disorders, including ASD, is studied by epidemiologists. Epidemiology is the study of the cause of disease and its distribution in human populations. (Moyé Rpt. at ¶ 41.) As Dr. Moyé explains, an epidemiologist is responsible for “[d]etermining whether the universe of the effects of SSRIs have been discovered or whether alternatively there are dangerous effects of these compounds[.]” (Id.) The process undertaken to reach any conclusions in epidemiology generally begins with an observation suggesting the possible link between an exposure and disease. (Id. at ¶ 43.) Such an observation would lead to the formulation of a hypothesis that is then tested through “epidemiological studies of individuals who have been both exposed and unexposed to the putative risk factor, measuring the occurrence of disease in both groups.” (Id.)

An epidemiologist then collects and analyzes the resulting data to determine “whether a statistical association exists, that is, whether the disease more commonly occurs in the presence of the risk factor than its absence.” (Id.) Where the reported risk ratio between the two variables being tested is 1.0, there is no statistical association. (Reference Manual on Scientific Evidence (3d ed. 2011) (“RMSE”) at 574.) A relative risk (“RR”) is computed when an investigator can follow the development of a disease following an exposure during the passage of time. (Moyé Rpt. at ¶ 49.) Where exposed patients are not followed over time, odds ratios (“OR”) are used, which require that one simply know how common a disease is in the exposed and the unexposed. (Id.) Both RRs and ORs attempt to measure the strength of an exposure. (Id.) A study that reports an association is one in which the reported RR or OR is greater than 1.0. (Id.) If a study reports a risk ratio of less than 1.0, that may indicate a decreased risk. (Teratology Primer (2010), docket entry no. 81, Exh. 87, at 11-31.)
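To illustrate the arithmetic behind these measures, the following is a minimal sketch that computes a relative risk and an odds ratio from a single 2x2 table; the counts are hypothetical and invented solely for illustration, and are not drawn from any study discussed in this opinion.

```python
# Illustrative only: hypothetical 2x2 counts, not data from any study in this case.
exposed_cases = 20         # children with ASD whose mothers used an SSRI
exposed_noncases = 980     # exposed children without ASD
unexposed_cases = 100      # unexposed children with ASD
unexposed_noncases = 8900  # unexposed children without ASD

# Relative risk: compares the incidence of disease in the exposed and unexposed
# groups (used when subjects are followed over time, as in a cohort study).
risk_exposed = exposed_cases / (exposed_cases + exposed_noncases)
risk_unexposed = unexposed_cases / (unexposed_cases + unexposed_noncases)
relative_risk = risk_exposed / risk_unexposed

# Odds ratio: compares the odds of disease in the exposed and unexposed groups
# (used when subjects are not followed over time, as in a case-control study).
odds_exposed = exposed_cases / exposed_noncases
odds_unexposed = unexposed_cases / unexposed_noncases
odds_ratio = odds_exposed / odds_unexposed

print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}")
# A value of 1.0 indicates no association; above 1.0, a positive association;
# below 1.0, a possibly decreased risk.
```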

Where a positive association is observed, its validity is assessed by evaluating the role of possible alternative explanations, such as chance, bias, or confounding. (Moyé Rpt. at ¶ 43.) Chance, or random error, is typically evaluated through measures of “statistical significance,” which is usually reported using a range of values referred to as the “95% confidence interval” (“CI”). (RMSE at 247, 579-80.) The CI estimates the random error inherent in the study data. Defendants proffer the following examples, which reflect the results of two studies at issue in this case that examine a potential association between SSRIs and ASD, and report a risk ratio greater than 1.0:

• Croen (2011): OR 2.2; 95% CI 1.2-4.3.
• Hviid (2013): RR 1.20; 95% CI 0.90-1.61.

The studies cited herein are referred to by the short titles assigned to each study by Defendants in their Motion. (See Motion at vii-xvii.)

The result in Croen (2011) is nominally statistically significant because the entire 95% confidence interval lies above the null value of 1.0. The Hviid (2013) result is not statistically significant because its 95% confidence interval includes the null value of 1.0, which indicates no association. (Motion at 16.)
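The significance check described above can be sketched in code: the following computes an approximate 95% confidence interval for an odds ratio using the standard log-odds-ratio normal approximation and reports whether the interval excludes the null value of 1.0. The counts and the helper names are hypothetical, chosen only to illustrate the calculation.

```python
import math

def ci_95_for_odds_ratio(a, b, c, d):
    """Approximate 95% CI for an odds ratio from a 2x2 table
    (a = exposed cases, b = exposed non-cases, c = unexposed cases,
    d = unexposed non-cases), via the log-odds-ratio normal approximation."""
    odds_ratio = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    return odds_ratio, lower, upper

def nominally_significant(lower, upper, null_value=1.0):
    """A result is nominally significant at the 5% level when the 95% CI
    excludes the null value of 1.0 (no association)."""
    return not (lower <= null_value <= upper)

# Hypothetical counts for illustration only.
or_, lo, hi = ci_95_for_odds_ratio(20, 980, 100, 8900)
print(f"OR {or_:.2f}; 95% CI {lo:.2f}-{hi:.2f}; "
      f"significant: {nominally_significant(lo, hi)}")
```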

Bias is a systematic, non-random error that may appear, for example, in the case of information bias, where the available records for one group are more likely to include relevant information than those for another. (RMSE at 249.)

Confounding refers to “[a] factor that is both a risk factor for the disease and a factor associated with the exposure of interest.” (Id. at 621.) In studies that examine the relationship between SSRIs and ASD, “[c]onfounding by indication refers to a factor that is associated both with the indication for the prescription and with the outcome of interest.” (Motion at 17.) Defendants submit that confounding by indication is of concern because, where it is not taken into account, it can give rise to a false association between SSRI use and ASD due to the fact that women with depression or other symptoms necessitating SSRI use are more likely to have children with ASD as a result of their underlying mental health conditions. (Id. (citing Hviid (2013) at 2414).)
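Confounding by indication can be illustrated with a small simulation. In the hypothetical sketch below, the underlying condition (maternal depression) raises both the probability of SSRI use and the risk of ASD, while the drug itself is given no effect on the outcome; the crude odds ratio nonetheless comes out well above 1.0. All probabilities are invented for illustration and do not reflect any study in this case.

```python
import random

random.seed(0)
N = 200_000

# 2x2 counts: a = exposed with ASD, b = exposed without ASD,
#             c = unexposed with ASD, d = unexposed without ASD.
a = b = c = d = 0
for _ in range(N):
    depressed = random.random() < 0.10                     # the confounder
    ssri = random.random() < (0.40 if depressed else 0.02) # indication drives exposure
    asd_risk = 0.04 if depressed else 0.01                 # confounder raises outcome risk;
    asd = random.random() < asd_risk                       # the SSRI itself has NO effect here
    if ssri and asd:
        a += 1
    elif ssri:
        b += 1
    elif asd:
        c += 1
    else:
        d += 1

crude_odds_ratio = (a / b) / (c / d)
print(f"Crude OR = {crude_odds_ratio:.2f}")  # well above 1.0 despite no true drug effect
```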

Where an association remains statistically significant, a judgment is then made as to whether a cause-effect relationship exists between the exposure and the alleged injury. (Moyé Rpt. at ¶ 43.) “The mere presence of an association does not imply causation, since association may merely reflect the simultaneous or coincidental relationship between exposure and a disease . . . [s]pecifically, the relationship is causal if the presence of the exposure excites the production of the disease in an individual.” (Id. at ¶ 46.) However, a causation analysis must consider whether a disease has multiple causes to accurately assess whether an exposure-disease relationship is merely associative or is in fact causal. (Id.) To determine the existence of a causal relationship, epidemiologists commonly analyze the relevant information and data using the so-called “Bradford Hill” criteria, which comprise the following: 1) strength of association; 2) biologic gradient; 3) temporality; 4) biologic plausibility; 5) consistency; 6) coherence; 7) specificity; 8) experimental evidence; and 9) analogy. (Id. at ¶ 48.) Consideration of the Bradford Hill criteria enables “differentiation of causal relationships from the ones that are merely associative.” (Id.)

Epidemiologists conduct general causation analyses based on the results of different types of epidemiological studies. These include human clinical trials, observational studies, and animal studies. Due to ethical constraints, there are no clinical trial studies on the relationship between maternal use of SSRIs in pregnancy and ASD. (Motion at 18.) Observational studies produce the vast majority of human data on maternal use of SSRIs. (See Expert Report of Michael Bracken, PhD, dated Nov. 7, 2018, docket entry no. 81 Exh. 1, at 6-9.) However, because observational studies do not randomize participants according to exposure or non-exposure, the risks of bias and confounding are considered when interpreting observational study results. (RMSE at 219.) Additionally, where it is unethical to expose humans, animal toxicological evidence may provide valuable scientific information about the risk of disease from a chemical exposure. (RMSE at 639.)

Discussion

Defendants argue that Plaintiffs' experts' opinions relating to causation are inadmissible under the Federal Rules of Evidence and the standards set by the Supreme Court in Daubert. Trial courts are charged with a “gatekeeping” responsibility to “ensur[e] that an expert's testimony both rests on a reliable foundation and is relevant to the task at hand” before it is deemed admissible. Daubert, 509 U.S. at 597. Relevance is broadly established where the proffered testimony has “any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.” Amorgianos v. Nat'l R.R. Passenger Corp., 303 F.3d 256, 265 (2d Cir. 2002) (internal quotation marks omitted); Fed.R.Evid. 401.

If expert testimony is relevant, the court must determine “whether the proffered testimony has a sufficiently ‘reliable foundation' to permit it to be considered” by the trier of fact. Amorgianos, 303 F.3d at 265 (internal citation omitted). Federal Rule of Evidence 702 (“Rule 702”), which governs the admissibility of expert testimony, provides that the party seeking to admit the testimony must show by a preponderance of the evidence that: 1) the witness is qualified as an expert by knowledge, skill, experience, training, or education; 2) the testimony is based upon sufficient facts or data; 3) the testimony is the product of reliable principles and methods; and 4) the witness has applied the principles and methods reliably to the facts of the case. Fed.R.Evid. 702. The court's examination of the proffered testimony requires a case-specific and “rigorous” consideration of the Rule 702 standards. Amorgianos, 303 F.3d at 267.

In addition to the Rule 702 standards, the court should consider the specific criteria identified by the Supreme Court in Daubert “where they are reasonable measures of the reliability of expert testimony.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999). The Daubert factors call for consideration of 1) whether a theory or technique can be (and has been) tested; 2) whether it has been subjected to peer review or publication; 3) the “known or potential rate of error” for the expert's technique and whether there are “standards controlling the technique's operation”; and 4) whether the expert's technique or theory is generally accepted in the relevant scientific community. Daubert, 509 U.S. at 593-94. Courts also consider whether the proffered expert opinions were developed for the purpose of litigation. In re Rezulin Prods. Liab. Litig., 369 F.Supp.2d 398, 420 (S.D.N.Y. 2005) (“Rezulin II”). “A proffered opinion may fail all four Daubert reliability factors and still be admitted. Before admitting proposed testimony in those circumstances, however, a court must ‘carefully scrutinize,' pause, and take a ‘hard look' at the expert's methodology.” In re Mirena Ius Levonorgestrel-Related Prod. Liab. Litig. (No. II), 341 F.Supp.3d 213, 240 (S.D.N.Y. 2018) (“Mirena II”) (quotation omitted).

The district court has “considerable leeway in deciding in a particular case how to go about determining whether particular expert testimony is reliable,” including how to test reliability and whether special briefing or other proceedings are necessary to determine reliability. Kumho Tire Co., 526 U.S. at 152. In making its reliability determination, a court should consider whether the expert's analysis is reliable at every step. The court need not “admit opinion evidence that is connected to existing data only by the ipse dixit of the expert,” and may conclude that there is “too great an analytical gap between the data and the opinion proffered” to permit admission. Gen. Elec. Co. v. Joiner, 522 U.S. 136, 146 (1997). Indeed, “‘courts have a duty to inspect the reasoning of qualified scientific experts' . . . including whether an expert's sources support his conclusions.” Rodriguez v. Stryker Corp., 680 F.3d 568, 572 (6th Cir. 2012) (citations omitted).

The district court must also ensure that experts are employing “in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co., 526 U.S. at 152. “[W]hen an expert relies on the studies of others, he must not exceed the limitations the authors themselves place on the study.” In re Accutane Prods. Liab., No. 8:04-MD-2523-T-30TBM, 2009 WL 2496444, at *2 (M.D. Fla. Aug. 11, 2009), aff'd, 378 Fed.Appx. 929 (11th Cir. 2010). A court should not admit opinions that assume a conclusion and “reverse-engineer[ ] a theory” to fit that conclusion. Mirena II, 341 F.Supp.3d at 248 (internal quotation marks and citations omitted). Such an approach is the “antithesis of” a scientific method. Claar v. Burlington N. R. Co., 29 F.3d 499, 503-04 (9th Cir. 1994); see Rizzo v. Applied Materials, Inc., No. 615CV557MADATB, 2017 WL 4005625, at *13 (N.D.N.Y. Sept. 11, 2017) (causation opinion excluded where expert “only reviewed articles that support an association” in forming his opinion), aff'd, 743 Fed.Appx. 484 (2d Cir.).

An expert must not cherry-pick from the “scientific landscape and present the Court with what he believes the final picture looks like.” In re Rezulin Prod. Liab. Litig., 309 F.Supp.2d 531, 563 (S.D.N.Y. 2004) (“Rezulin”) (citation omitted). Sound scientific methodology in assessing general causation requires an expert to evaluate “all of the scientific evidence when making causation determinations.” In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., 26 F.Supp.3d 449, 463 (E.D. Pa. 2014); see also In re Abilify (Aripiprazole) Prod. Liab. Litig., 299 F.Supp.3d 1291, 1311 (N.D. Fla. 2018) (“[The] ‘weight of the evidence' approach to analyzing causation can be considered reliable, provided the expert considers all available evidence carefully and explains how the relative weight of the various pieces of evidence led to his conclusion.”). Cherry-picking is a form of “[r]esult-driven analysis,” which “undermines principles of the scientific method” by “applying methodologies (valid or otherwise) in an unreliable fashion.” In re Lipitor (Atorvastatin Calcium) Mktg., Sales Practices & Prod. Liab. Litig. (No. II), MDL 2502, 892 F.3d 624, 634 (4th Cir. 2018). Therefore, exclusion of proffered testimony is warranted where the expert fails to address evidence that is highly relevant to his or her conclusion. See Mirena II, 341 F.Supp.3d at 242.

Where an expert employs reliable methodology to reach his or her conclusions, lack of textual support may go to the weight rather than the admissibility of the expert's testimony. See McCullock v. H.B. Fuller Co., 61 F.3d 1038, 1043 (2d Cir. 1995) (affirming district court's admission of medical expert testimony despite the fact that the expert did not proffer any literature that specifically supported his opinion). Accordingly, the court must focus on an expert's principles and methodologies rather than on his or her conclusions, or the “court's belief as to the correctness of those conclusions.” Amorgianos, 303 F.3d at 266. However, the Supreme Court has noted that “conclusions and methodology are not entirely distinct from one another.” Joiner, 522 U.S. at 146. Therefore, “when an expert opinion is based on data, a methodology, or studies that are simply inadequate to support the conclusions reached, Daubert and Rule 702 mandate the exclusion of that unreliable opinion testimony.” Amorgianos, 303 F.3d at 266.

Here, Plaintiffs proffer three experts who offer general causation and biological plausibility opinions regarding the relationship between Lexapro and ASD. Causation in pharmaceutical products liability, or toxic tort, cases “has two components, general and specific, and the plaintiff must establish both in order to prevail.” Rezulin II, 369 F.Supp.2d at 401-02. “General causation is whether a substance is capable of causing a particular injury or condition in the general population, while specific causation is whether a substance caused a particular individual's injury.” Id. at 402 (citation omitted). “The concept of biological plausibility . . . asks whether the hypothesized causal link is credible in light of what is known from science and medicine about the human body and the potentially offending agent.” Milward v. Acuity Specialty Prod. Grp., Inc., 639 F.3d 11, 25 (1st Cir. 2011).

The Court will now examine the opinions proffered by each of Plaintiffs' experts.

Dr. Lemuel Moyé

Dr. Moyé is an epidemiologist who works as a tenured, full-time professor of biostatistics at the University of Texas School of Public Health in Houston, Texas. See Moyé Rpt. at ¶ 2. Dr. Moyé received his Ph.D. in Community Sciences - Biostatistics from the University of Texas in 1987, his M.S. in Statistics from Purdue University in 1981, his M.D. from Indiana University in 1978, and his B.A. in mathematical sciences from Johns Hopkins University in 1974. Id. at ¶¶ 2, 21. In 1984, Dr. Moyé became a licensed physician in Texas, and practiced general medicine until 1992. Id. at ¶ 4. His formal training has included courses in mathematical statistics, epidemiology, and biostatistics. Id. at ¶ 2. Dr. Moyé has experience in designing, executing, and analyzing the results of large clinical trials, and as a consultant with the pharmaceutical industry. Id. at ¶¶ 6, 16. His many publications include three articles using epidemiologic procedures to address the relationship between the occurrence of autism and environmental exposures. Id. at ¶ 19. Defendants do not contest Dr. Moyé's qualifications as an expert witness in this case.

Dr. Moyé offers a general causation opinion that maternal use of SSRIs during gestation is a cause of autism separate and apart from any relationship between maternal depression and autism. He proffers that, in forming his opinion, he employed the weight of the evidence methodology and conducted a Bradford Hill analysis. Defendants argue that Dr. Moyé's opinions are inadmissible because he ignores findings contrary to his opinion, relies on studies for conclusions they did not reach, and fails to reliably conduct a Bradford Hill analysis. The Court agrees.

Bradford Hill Analysis

Experts conducting a causation analysis based on a statistically significant association may utilize a combination of two methods: the “weight of the evidence” analysis and the Bradford Hill criteria. The weight of the evidence method is one “involv[ing] a series of logical steps used to infer to the best explanation.” In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., 858 F.3d 787, 795 (3d Cir. 2017) (“Zoloft II”) (internal citations omitted). Dr. Moyé explains that the methodology is “the process by which a body of evidence is examined component by component whereby each component is sifted and assessed using a transparent and standard method. As this study-by-study evidentiary examination proceeds, contributions are made to the arguments for or against causality.” Rebuttal Report of Lemuel A. Moyé, M.D., Ph.D., dated Dec. 6, 2018, docket entry no. 81 Exh. 11 (“Moyé Reb. Rpt.”) at 1, 8. The Bradford Hill criteria, explained further below, “form the generally accepted criteria by which, when reliably applied, modern practicing epidemiologists assign causality to an association.” Id. Therefore, an epidemiologist who uses the weight of the evidence methodology and applies the Bradford Hill criteria to produce a general causation opinion undertakes the following steps: 1) identification of relevant sources; 2) a study-by-study review of useful information; and 3) filtering of the information using each of the Bradford Hill criteria. Id. Courts have found that the weight of the evidence methodology is scientifically acceptable because “[i]t is not intrinsically ‘unscientific' for experienced professionals to arrive at a conclusion by weighing all available scientific evidence-this is not the sort of ‘junk science' with which Daubert was concerned.” Joiner, 522 U.S. at 153 (Stevens, J., concurring in part and dissenting in part).

A Bradford Hill analysis involves examination of nine “metrics that epidemiologists use to distinguish a causal connection from a mere association.” Zoloft II, 858 F.3d at 795; see also Mirena II, 341 F.Supp.3d at 242. These metrics comprise the following:

1) Strength of Association/Statistical Association. There must be some degree of statistical association between a cause and its effect. A strong association is more likely to represent causation than a weak association.
2) Temporality. A cause must precede its effect. Strength in temporality, such as when a cause immediately precedes its effect, supports an inference of causation.
3) Biological Plausibility. A cause and effect relationship between exposure to medication and disease should be biologically plausible with other information about the disease or harm.
4) Biologic Coherence. A cause and effect relationship between exposure and disease should be consistent with other information about the disease or harm.
5) Biologic Gradient/Dose-Response Effect. Causation is more likely if greater amounts of the putative cause are associated with corresponding increases in the occurrence of disease or harm.
6) Consistency. When similar findings are generated by several epidemiological studies involving various investigators, causation tends to be supported.
7) Analogy. Substantiation of relationships similar to the putative causal relationship increases the likelihood of causation.
8) Experimental Evidence. Causation is more likely if removing the exposure in a population results in a decrease in the occurrence of disease or harm.
9) Specificity. When there is but a single putative cause for the disease or harm, causation is supported.
See Mirena II, 341 F.Supp.3d. at 242-43; Zoloft, 26 F.Supp.3d at 474; Moyé Rpt. at ¶¶ 48-58.

Experts who apply multi-criteria methodologies such as Bradford Hill within a “weight of the evidence” framework must “rigorously explain how they have weighted the criteria” considered. Mirena II, 341 F.Supp.3d at 247. “Otherwise, such methodologies are virtually standardless and their applications to a particular problem can prove unacceptably manipulable.” Id. Additionally, “while the expert's bottom-line conclusion need not be independently supported by each of the nine Bradford Hill factors . . . the expert must employ ‘the same level of intellectual rigor that he employs in his academic work.'” Id. (quoting Milward, 639 F.3d at 26). For example, in Zoloft II, the Third Circuit explained that “[a]n expert can theoretically assign the most weight to only a few factors, or draw conclusions about one factor based on a particular combination of evidence.” 858 F.3d at 796. Regardless of the method, the analysis must be reliable; “all of the relevant evidence must be gathered, and the assessment or weighing of that evidence must not be arbitrary, but must itself be based on methods of science.” Id. (internal quotations omitted).

Analysis of Dr. Moyé's Opinion

A rigorous examination of Dr. Moyé's opinion reveals that his conclusion-that there is a statistically significant association between prenatal exposure to SSRIs and neurodevelopmental injury-is unreliable. Dr. Moyé described his use of the “weight of the evidence” methodology as gathering evidence from multiple sources and then intelligently and carefully studying the sources for the “clearest view of the cause-effect relationship, differentiating the associative from the truly causal.” Moyé Rpt. at ¶ 60. His analysis utilized the nine Bradford Hill criteria to distill the information he reviewed. However, Dr. Moyé's application of the Bradford Hill factors using the weight of the evidence methodology is flawed; he fails to adequately support his conclusions using the selectively favorable data he relies upon, unjustifiably disregards inconsistent data, and admittedly ignores categories of relevant evidence.

To begin, the Court finds that Dr. Moyé's opinion fails to meet the four Daubert reliability factors. He has not tested his theory on the general causation relationship between Lexapro and ASD, nor has he submitted it for peer review or publication. Dr. Moyé also does not identify an error rate for his application of the Bradford Hill factors. Indeed, “vetting of a multi-factor inquiry to yield a numeric error rate appears realistically impossible, as there are no standards controlling the technique's operation.” Mirena II, 341 F.Supp.3d at 247 (internal quotation marks and citation omitted). Finally, Dr. Moyé's opinion is not generally accepted, as no regulatory agency, professional organization, peer-review study, or medical treatise concludes that Lexapro causes ASD, and the FDA has approved its prescription to pregnant women. Therefore, Dr. Moyé's failure to meet Daubert standards requires a “hard look” at his methodology. Id. at 240-47 (finding that, where expert's proffered opinion based on the application of a multi-criteria methodology failed to satisfy the four Daubert reliability factors, the court's heightened scrutiny was warranted because without rigorous explanations as to how the evaluated criteria were weighted, such methodologies are “virtually standardless and their applications to a particular problem can prove unacceptably manipulable”).

Close scrutiny reveals that Dr. Moyé's application of the Bradford Hill criteria is unreliable because he does not sufficiently explain how he has weighted the evidence used to conduct his Bradford Hill analysis. Additionally, he repeatedly cherry-picks the findings on which he chooses to rely while disregarding the limitations expressed by the studies he cites in support of his conclusions, and dismisses inconsistent findings without explanation.

1. Strength of Association

Dr. Moyé opines that the strength of association factor weighs in favor of causation because “[t]here is substantial evidence demonstrating an increase in the incidence of autism associated with SSRI.” Moyé Rpt. at ¶ 101. The strength of association factor is a “gating” factor that “requires a statistical, or strong, association between the cause under review and its asserted effect.” Mirena II, 341 F.Supp.3d at 258. Dr. Moyé grounds his conclusion regarding the threshold factor on the findings of statistically significant associations in Croen (2011), Harrington (2014), and Boukhris (2016), but fails to acknowledge the limitations expressed by these studies.

For example, Croen (2011) conducted a population-based case-control study in which the medical records of children with and without autism were examined for maternal use of antidepressant medications. Moyé Rpt. at ¶ 92. Croen (2011) reported a statistically significant association between certain SSRIs and ASD (OR 2.2; 95% CI 1.2-4.3). Mirena II, 341 F.Supp.3d at 258. However, the Croen (2011) study explains that its findings are subject to limitations, including “detection bias, such that women who were prescribed SSRIs as treatment for anxiety may be more concerned about their child's development and more likely to have their child assessed, leading to more diagnoses of ASD.” Croen (2011) at 1110. Dr. Moyé does not mention this limitation in his report. When confronted with this limitation during his deposition, Dr. Moyé stated that the authors' suggestion that “their findings disappear when one takes into account detection bias” is “an overstatement of the issue” and that it would be “fairer to say that the findings may be influenced by detection bias” instead. See Deposition Transcript of Lemuel Moyé, dated March 22, 2019, docket entry no. 81 Exh. 12 (“Moyé Tr.”), at 74:1-16. He did not offer any explanation for this opinion. Moreover, the Croen (2011) study authors stated that their findings are preliminary and should be “treated with caution, pending results from further studies designed to address the very complex question of whether prenatal exposure to SSRIs may be etiologically linked to later diagnoses of ASDs in offspring.” Croen (2011) at 1110-11. Dr. Moyé's failure to mention the limitations of this study in his report, and to explain the weight he assigns to the sources he has reviewed, undermines his conclusion regarding the strength of association factor. See Mirena II, 341 F.Supp.3d at 248 (finding that the study “on which Dr. Moyé base[d] his strength of association finding stopped well short of finding a strong association, ” and the study author “pointedly cautioned” that its finding of a positive association may have been a result of other, external factors).

Additionally, Dr. Moyé disregards Hviid (2013) and Sorensen (2013), studies that failed to report a statistically significant association, as “unworthy of consideration” due to what he characterizes as their “critical weaknesses” resulting from inadequate exposure validation. Hviid (2013) is a Denmark-based study that reports an association between SSRI use in pregnancy and ASD, but no statistically significant association after adjusting for confounders (RR 1.20; 95% CI 0.90-1.61). Hviid (2013) at 2406; Moyé Rpt. at ¶ 93. The study authors state that interpretation of their findings, like those of all observational studies, should take into account “residual and unmeasured confounding or ascertainment bias with respect to exposure [which] adds to the imprecision of [their] estimates.” Hviid (2013) at 2415. Dr. Moyé both failed to mention this directive in his report, and dismissed it as a “politically correct comment” during his deposition. Moyé Tr. at 133:16-134:6. Instead, he comments that the absence of a statistically significant association finding in this study is because the study authors overestimated exposure by “assum[ing] that a prescription is prima facie evidence that the pregnant woman actually ingested the pill.” Moyé Rpt. at ¶ 93. As a result, Dr. Moyé submits that, “[a]bsent information about compliance, this study offers little helpful information on the relationship between maternal SSRI use and autism.” Id. Dr. Moyé provides similar reasoning dismissing Sorensen (2013), another study that reported no statistically significant association between antidepressants and ASD after all adjustments. Id. at ¶ 94 (commenting that “[n]o special steps were taken to ensure that prescription records actually tracked with compliance and pill ingestion; low compliance with medication is just as plausible [an] explanation for the findings as whether the medication was not related to the development of autism.”).

Dr. Moyé fails to address the fact that lack of compliance validation is a concern that was also noted by studies he cites for the existence of a statistically significant association. See, e.g., Croen (2011) at 1110 (“Another limitation of our reliance on medical records is that we were unable to validate actual use of antidepressants by the mothers during the time period of interest because we relied on documentation of dispensed prescriptions.”); Boukhris (2016) at 118-19 (“We defined [antidepressant] exposure as having at least 1 prescription filled at any time during pregnancy or a prescription filled before pregnancy that overlapped the first day of gestation.”). Indeed, Dr. Moyé's propensity to cherry-pick the findings he agrees with and his failure to acknowledge the express limitations that render those findings unreliable, while disregarding those studies that do not support his conclusions because they suffer from the same limitations, casts significant doubt on the reliability of both his weighting of the studies he reviewed, which he does not explain, as well as his subsequent analyses. See In re Accutane Prods. Liab., 2009 WL 2496444, at *2 (“[W]hen an expert relies on the studies of others, he must not exceed the limitations the authors themselves place on the study.”); Mirena II, 341 F.Supp.3d at 247 (finding that Dr. Moyé's opinion on the causal link between Mirena, an intrauterine device, and a disease known as idiopathic intracranial hypertension (“IIH”), was flawed, in part, due to his “failure to consider known contrary evidence” when conducting a Bradford Hill analysis).

2. Temporality

This factor assesses whether “the exposure is present before there are signs or symptoms of autism.” Moyé Rpt. at ¶ 50. Dr. Moyé opines that “[t]he pathophysiology of ASD and research designs render this concern moot.” Id. at ¶ 102.

3. Biologic Gradient/Dose-Response

The biologic gradient or dose-response factor is the “observation that the more intense the exposure, the greater the risk of disease” or “damage.” Id. at ¶ 51. Dr. Moyé opines that there is “some evidence that increasing SSRI exposure produces increased risk of ASD.” Id. at ¶ 103. He does not comment on the weight he has assigned to the evidence supporting his implication that dose response supports general causation, or comment on the weight he places on other studies that do not support this conclusion.

4. Biologic Plausibility

This Bradford Hill criterion takes into account data that is helpful in understanding “how the risk factor produces the disease.” Id. at ¶ 53. Dr. Moyé opines that “[m]echanistic studies point to the effect of SSRIs in the developing fetus.” Id. at ¶ 104. As support, he cites studies that find children with ASD commonly have higher serotonin levels, known as hyperserotonemia. Id. With respect to the question of whether hyperserotonemia leads to ASD, Dr. Moyé relies on several studies that conducted experiments in animals to support the impact and threat of serotonin in the fetus. However, Dr. Moyé does not comment on the weight he assigns to any of the studies he cites, explain the value of experiments conducted in animals within his biologic plausibility analysis, or mention whether other existing studies fail to weigh in favor of or satisfy the biologic plausibility criterion.

5. Consistency

This factor considers consistency with other research findings. It is important because “[r]esearch findings become more convincing when they are replicated in different populations and using different research methods and designs.” Id. at ¶ 54. In his analysis of this factor, Dr. Moyé states that “[m]ultiple but not all studies demonstrate the risk and are commonly of different design and population.” Id. at ¶ 105. He cites the studies that support the consistency factor, such as Croen (2011) and Boukhris (2016), which support a strong statistical association between prenatal SSRI use and ASD. He comments that those that do not support the factor, such as Sorensen (2013), Clements (2015), and Castro (2016), which also do not report a strong statistical association, “are crippled by inadequate exposure validation.” Id. However, as in his analysis of the strength of association factor, Dr. Moyé cherry-picks those findings that support his conclusions while failing to note that they also suffer from the same weaknesses as the studies he disregards. See, e.g., Croen (2011) at 1110 (noting the study's limitations due to inability to validate actual use of antidepressants by subject mothers); Boukhris (2016) at 118-19 (noting the study's inability to accurately track compliance).

6. Specificity

The specificity factor “inquires into the number of causes of a disease.” Mirena II, 341 F.Supp.3d at 249. Dr. Moyé explains that “[t]he greater the number of causes of a disease (i.e. the more multifactorial the risk factors causing the disease), the more nonspecific the disease is, and the more difficult it is to demonstrate a new causal agent is involved in the production of the disease. The ‘but for' causality argument may fail when there are multiple true, concurrent causes.” Moyé Rpt. at ¶ 55. In analyzing the specificity factor, Dr. Moyé states that “a summary of the findings that is consistent with the data is that SSRI and depression are additive influences on the occurrence of autism.” Id. at ¶ 106.

As support, Dr. Moyé cites Rai (2013), but fails to mention the limitations expressed by the authors of that study that directly contradict his conclusions. Rai (2013) is an epidemiological analysis that finds a statistically significant association between use of SSRIs during pregnancy and autism in offspring (OR 3.34, 95% CI: 1.50 to 7.47). Rai (2013) at 1. Dr. Moyé comments on this study in depth before diving into his Bradford Hill analysis, particularly regarding its controlling for the use of antidepressants as part of its assessment of the relationship between parental depression, SSRI use during pregnancy, and the occurrence of autism. Moyé Rpt. at ¶ 95. He concludes in his analysis of the specificity factor that the Rai (2013) study findings support the proposition that maternal depression and antidepressants each have their own relationship with autism, which do not cancel each other out but instead amplify one another. Id. at ¶ 106. However, Dr. Moyé fails, both in his lengthy discussion of the Rai (2013) study and his analysis of the specificity factor, to mention the limitations expressed by the study authors. For example, in direct contradiction to Dr. Moyé's conclusion, Rai (2013) reports that “it is not possible to conclude whether the [reported association] reflects severe depression during pregnancy or is a direct effect of the drug.” Rai (2013) at 5. Furthermore, the authors state that caution is required before making causal assumptions or clinical decisions based on observational studies. Id. In his deposition, Dr. Moyé dismissed Rai's disclaimer without any coherent explanation, likening the need for caution with observational studies to that which is “required when I cross the street” but that “doesn't mean I don't cross the street.” Moyé Tr. at 118:12-22. Rai (2013) also suggests that further studies are needed to clarify the roles that maternal depression and the SSRIs used to treat it play in causing ASD. Rai (2013) at 5. Dr. Moyé testified that he disagrees with this statement, deeming it unfair, “unbalanced, ” and “politically correct” because “[a]t some point, you have to make a determination and decide whether this is actionable, and there's no statement about that.” Moyé Tr. at 120:3-11. Indeed, Dr. Moyé presses conclusions that the Rai (2013) authors were not willing to make, thereby demonstrating the unreliability of his own conclusions because there is “too great an ‘analytical gap' between the conclusions reached by the authors of [Rai (2013)] and the conclusions [he] draws from their work.” Amorgianos, 303 F.3d at 270 (quotations and citation omitted) (finding that the “district court did not abuse its discretion in excluding [the expert's] testimony upon reasonably concluding that the analytical gap between the studies on which she relied and her conclusions was simply too great and that her opinion was thus unreliable”).

In his analysis of the specificity factor, Dr. Moyé also relies on Rai (2017), without acknowledging the authors' explicitly expressed limitations. In his analysis, Dr. Moyé claims that Rai (2017) “voided the concern about confounding by indication by demonstrating that [the] odds ratio reflecting the relationship between maternal antidepressant use [and] ASD was statistically significantly elevated for the relationship of maternal antidepressant use and ASD when the control group was children whose mothers has psychiatric disorder and no ASD.” Moyé Rpt. at ¶ 106. Rai (2017) reported a statistically significant association between SSRI use during pregnancy and ASD in offspring (OR 1.45, 95% CI 1.13-1.85) after conducting an observational prospective cohort study involving Stockholm youth. Rai (2017) at 1; Moyé Rpt. at ¶ 95. However, the study discloses that its findings are subject to a series of methodological limitations that make it “difficult to conclusively dismiss the possibility that the observed associations are wholly attributable to confounding.” Rai (2017) at 5. Dr. Moyé fails to mention this express limitation in his analyses of both the study and the Bradford Hill specificity factor. Furthermore, during his deposition, he dismisses the express limitation as unfair and “of no value, ” speculating “that a reviewer or editor required this statement to go in.” Moyé Tr. at 122:9-123:11. Nevertheless, Dr. Moyé cites an editorial that “did not refute” the authors' findings to bolster his citation to the study, without providing any information on the cited editorial, how it supported the Rai (2017) study, or any other context whatsoever. Moyé Rpt. ¶ 95. Again, Dr. Moyé's analysis here proves unreliable in light of his consistent cherry-picking of those studies he agrees with while refusing to acknowledge their deficiencies. See In re Accutane Prods. Liab., 2009 WL 2496444, at *2.

7. Challenge-Dechallenge

This factor takes into account “in vitro studies, laboratory experiments on animals and human experiments” where a harmful exposure is removed and sometimes reintroduced after the removal or discontinuation. Moyé Rpt. at ¶ 57. Dr. Moyé explains that “[t]his criterion is of limited utility in assessing the SSRI-autism relationship in individuals” because “[e]thical reasons preclude the examination of this issue in humans.” Id. at ¶¶ 57, 107.

8. Analogy

The analogy factor requires “substantiation of relationships similar to the putative causal relationship . . .” Mirena II, 341 F.Supp.3d at 249 (citation and quotations omitted). In discussing this factor, Dr. Moyé cursorily states that “[m]any examples of birth defects related to perinatal exposure are available.” Moyé Rpt. at ¶ 108. He does not cite any studies or further elaborate on this Bradford Hill criterion, including whether he believes it supports a causal relationship between maternal SSRI use and ASD.

As a whole, Dr. Moyé's Bradford Hill analysis proves to be unreliable because he repeatedly cherry-picks favorable studies to support his conclusions, fails to explain the weight he attributed to the studies he reviewed, and also ignores entire categories of relevant studies in his report. As demonstrated above, particularly in relation to Dr. Moyé's analyses of the strength of association, consistency, and specificity Bradford Hill factors, Dr. Moyé only gives credence to those studies that support his conclusions, such as Croen (2011) and Boukhris (2016), without discussing their limitations, and disregards those studies that do not support his opinions because they suffer from many of the same limitations noted in the studies he relies on. Indeed, Dr. Moyé admitted during his deposition that he only gave weight to observational studies that support his conclusions, and no weight to studies that did not, which further highlights his flawed “weight of the evidence” methodology. See Moyé Tr. at 217:14-218:5.

Additionally, Dr. Moyé fails to explain rigorously how he has weighted the criteria he considered and the studies he cites in support of his conclusions. For example, Dr. Moyé does not clearly identify exactly which criteria he believes support a causal relationship between maternal use of SSRIs and ASD. Plaintiffs claim that Dr. Moyé found that strength of association, dose response, biologic plausibility, consistency, and specificity factors “are particularly strong in identifying a causal association.” Opp. at 48. Yet, in his report, Dr. Moyé merely states that there is “substantial evidence” in favor of the strength of association factor and “some evidence” in support of the dose-response factor, and fails to provide any similar indication of the magnitude of support for the remaining factors that Plaintiffs highlight. Also, “he nowhere concedes that any criterion [ ] is only weakly supportive of a finding of causation.” Mirena II, 341 F.Supp.3d at 248. Plaintiffs argue that Dr. Moyé concluded that the analogy factor weighs weakly in support of causation. Yet, as stated above, Dr. Moyé's report only devotes one conclusory sentence to his discussion of the analogy factor, fails to cite any sources, and omits his alleged allocation of weak weight to the factor. Such treatment of the factor is misleading because one could read his single conclusory sentence as a conclusion that the factor weighs in favor of causation. Ultimately, Dr. Moyé's analysis of the analogy factor is an example of his failure to provide the requisite rigorous explanation to establish reliability. See, e.g., id. at 250 (finding that Dr. Moyé's analysis of the specificity factor “depart[ed] from rigorous methodology” where he devoted two sentences to his discussion of the factor, one of which was conclusory, and failed to “cite any study” to support his conclusion). Indeed, “[b]y leaving obscure the weight that he attaches to each of the nine Bradford Hill factors and the relationship among them, Dr. Moyé's approach effectively disables a finder of fact from critically evaluating his work.” Id. at 248.

Finally, Dr. Moyé completely disregards entire categories of relevant evidence in his report. For example, Dr. Moyé does not cite any meta-analyses in his report. Meta-analyses are “studies that attempt to discern relationships by combining the data from multiple studies.” Opp. at 35. “When epidemiologists hypothesize that there is a ‘true' association which individual studies are underpowered to detect at a statistically significant level, the widely accepted approach to combining data from multiple studies-thus increasing the power to detect an association-is to conduct a systematic meta-analysis.” Zoloft, 26 F.Supp.3d at 457. Despite authoring meta-analyses himself, Dr. Moyé criticizes their use because the individual studies they analyze “were not designed, collected or intended to be combined with data from other studies.” See Moyé Rebuttal Rpt. at 10. However, guidelines exist for using pre-specified criteria to appropriately conduct such analyses. See Motion at 30 (citing Bracken 2d Supp. Rpt. at 8). In light of the fact that several meta-analyses examine the association between in utero SSRI exposure and ASD, as evidenced by Dr. Moyé's own list of referenced sources, and the fact that experts opining on the issue generally consider meta-analyses as part of their evaluations, Dr. Moyé's categorical omission of such evidence in his analysis is concerning. See Rizzo, 2017 WL 4005625 at *13 (excluding causation opinion formed by an expert who “only reviewed articles that support an association”).

Dr. Moyé also fails to address relevant reviews of epidemiological studies, most notably the report generated by the European Medicines Agency (“EMA”), Europe's regulatory agency for medicinal products. See docket entry no. 81 Exh. 83 (the “EMA Report”). In 2016, the EMA comprehensively reviewed the extensive peer-reviewed literature on the association and causal link between SSRIs and ASD, unequivocally concluding that, with respect to escitalopram and citalopram specifically, “[t]he data currently available on prenatal exposure to SSRI/SNRI and ASD do not support a causal relation.” EMA Report at 75. Dr. Moyé ignores the EMA Report and testified that he is unsure of the relevance of such a conclusion, one regarding the “causal association between the risk of ASD and maternal exposure to SSRIs during pregnancy.” Moyé Tr. at 188:7-21, 191:23-192:8. Dr. Moyé's rejection of a conclusion that could not be more relevant to his opinions is alarming. Moreover, his testimony that he would not expect a regulatory agency to state that an exposure causes a disease is unfounded. In fact, as Defendants correctly point out, regulatory agencies do so where warranted, for example with smoking. See K.E. v. GlaxoSmithKline LLC, No. 3:14-CV-1294(VAB), 2017 WL 440242, at *10 (D. Conn. Feb. 1, 2017) (“When experts rely on epidemiological evidence to support causation, they must provide . . . a full picture of the state of the field.”) (citing Guardians Assoc. of N.Y.C. Police Dept., Inc. v. Civil Serv. Com., 633 F.2d 232, 240 (2d Cir. 1980)).

Dr. Moyé's selective and biased reliance on favorable sources to support his opinions on causation, his failure to rigorously explain his application of the Bradford Hill factors under the weight of the evidence methodology, and his disregard of pertinent categories and sources of information in his report demonstrate an unreliable application of purportedly sound scientific methodology, which fails to meet the requisite standards outlined under both Daubert and Rule 702. For these reasons, the Court finds that Dr. Moyé's general causation opinion is inadmissible.

Dr. Laura Plunkett

Dr. Laura Plunkett is a pharmacologist, toxicologist, FDA regulatory specialist, and principal of Integrative Biostrategies, LLC, a consulting company. See Expert Report of Laura M. Plunkett, dated Sept. 14, 2018, docket entry no. 81 Exh. 13 (“Plunkett Rpt.”), at ¶ 1. She received her Ph.D. in pharmacology from the University of Georgia, College of Pharmacy in 1984 and her B.S. from the University of Georgia in 1980. Id. at ¶ 3. Dr. Plunkett was a Pharmacology Research Associate Training fellow at the National Institute of General Medical Sciences, Bethesda, Maryland, where she worked in a neurosciences laboratory and focused on the role of various brain neurochemical systems involved in the control of autonomic nervous system and cardiovascular function. Id. at ¶ 4. Her academic research has included studying the effect of antidepressants on brain function, including research focused on the serotoninergic system in the brain and its role in controlling different functions in humans. Id. at ¶ 7. Defendants do not contest Dr. Plunkett's qualifications as an expert witness in this matter.

Dr. Plunkett opines that 1) the hypothesis that SSRIs cause neurodevelopmental damage, including ASD, is “biologically plausible” mechanistically; 2) SSRI use by pregnant women exposes the developing fetal brain to SSRIs and their effects; and 3) the dose of exposure is sufficient to cause neurodevelopmental damage in the developing human brain. Opp. at 55. As a threshold matter, for the same reasons as those discussed with respect to Dr. Moyé, the Court notes that Dr. Plunkett's methodology does not meet the Daubert reliability factors. She has not tested her theory on the biological plausibility relationship between Lexapro and ASD, nor has she submitted it for peer review or publication. She does not identify an error rate for her partial application of the Bradford Hill factors, discussed below, and her opinion is not generally accepted. Therefore, a hard look at her methodology is warranted.

Defendants argue that Dr. Plunkett's purported opinion on general causation is in reality one on biological plausibility, and that it is unreliable under Rule 702 because 1) she primarily relies on animal studies, which cannot reliably prove general causation in humans in this case; and 2) her weight of the evidence methodology is flawed because she did not reliably apply the Bradford Hill criteria or analyze existing epidemiological data in forming her opinion.

Defendants contend, and Dr. Plunkett admits, that she has not conducted “a full, general causation analysis, ” and, in her report, was “not focusing on trying to do a general causation assessment across all data.” Deposition Transcript of Laura Plunkett, dated April 12, 2019, docket entry no. 81 Exh. 15 (“Plunkett Tr.”) at 131:20-25, 39:10-16, 78:2-4. She also makes clear that she is not “attempting to do a detailed review of all the epidemiological data.” Id. at 181:21-22. Dr. Plunkett testified that, in forming her opinion for the purposes of this litigation, she conducted three analyses that toxicological experts should undertake in order to form valid and reliable, and therefore admissible, opinions: 1) “analyze whether the disease can be related to chemical exposure by a biologically plausible theory”; 2) examine whether the plaintiff was “exposed to the chemical in a manner that can lead to absorption into the body”; and, 3) opine on “whether the dose to which the plaintiff was exposed is sufficient to cause the disease.” Opp. at 57 (citing RMSE at 661). Her conclusion that exposure is satisfied under the first step is primarily supported by in vivo data in humans showing that SSRIs as a class, including citalopram and escitalopram, readily cross the placental barrier, as evidenced by maternal, cord, and infant blood samples, and can be detected in the amniotic fluid of women taking the drugs orally during pregnancy. See Plunkett Rpt. at ¶¶ 39-42. However, as illustrated below, her analysis under the second and third steps is almost entirely reliant on animal data.

Reliance on Animal Data

Defendants contend that Dr. Plunkett's testimony should be excluded because it is not an opinion on general causation, but rather a “biological plausibility opinion [that is] based on [ ] animal studies [and therefore] cannot move the needle on general causation” in this case. Motion at 42. Under Rule 702, the Court must consider whether the proffered testimony is based on sufficient facts or data. In addition to epidemiological evidence, expert witnesses opining on general causation may rely on animal or in vitro studies. However, “laboratory animal studies are generally viewed with more suspicion than epidemiological studies, because they require making the assumption that chemicals behave similarly in different species.” In re Agent Orange Prod. Liab. Litig., 611 F.Supp. 1223, 1241 (E.D.N.Y. 1985), aff'd sub nom. In re Agent Orange Prod. Liab. Litig. MDL No. 381, 818 F.2d 187 (2d Cir. 1987) (internal citation omitted). Consistent “extrapolation from animal studies to humans entails some risks, as physiological differences and dosage differences can complicate comparisons.” In re Fosamax Prod. Liab. Litig., 645 F.Supp.2d 164, 186 (S.D.N.Y. 2009).

As a result of methodological difficulties in applying animal data to humans, courts have found that “causation opinions based primarily upon in vitro and live animal studies are unreliable and do not meet the Daubert standards.” Zoloft, 26 F.Supp.3d at 475; see also Chapman v. Procter & Gamble Distrib., LLC, 766 F.3d 1296, 1308 (11th Cir. 2014) (excluding expert witness testimony based on “secondary methodologies,” including animal studies, which offer “insufficient proof of general causation.”). The unreliability of animal studies is particularly apparent where there is overwhelming contradictory epidemiological evidence. See Raynor v. Merrell Pharm. Inc., 104 F.3d 1371, 1375 (D.C. Cir. 1997) (“[W]here sound epidemiological studies produce opposite results from nonepidemiological ones, the rate of error of the latter is likely to be quite high”). Accordingly, expert opinions relying on animal studies may only be admitted where “the gap between what [they] reasonably imply and more definitive scientific proof of causality is not too great,” and the “inferences are of a kind that physicians and scientists reasonably make from good but inconclusive science.” In re Ephedra Prod. Liab. Litig., 393 F.Supp.2d 181, 197 (S.D.N.Y. 2005); see also In re Fosamax, 645 F.Supp.2d at 187 (finding animal studies by themselves insufficient to prove general causation, but admitting them because “they serve as pieces of the scientific puzzle that contribute to the reliability of the experts' opinions”).

Dr. Plunkett's expert report reveals that she relies heavily on animal studies to justify her opinion on biological plausibility. With regard to the biological plausibility consideration, Dr. Plunkett opines that “altering serotonin activity in the developing organism results in adverse effects on the developing fetus, effects that include specifically an increased risk of neurodevelopmental toxicity.” Plunkett Rpt. at ¶ 49. To support her conclusion, Dr. Plunkett cites two animal studies that observed behavioral abnormalities in rodents that were exposed to SSRIs in high doses pre- and post-natally: 1) Levitt, P. (2011), a study that found that disruptions in serotonin levels result in “increased anxiety-like and depression-like behavior in animals”; and 2) Glover and Clinton (2016), which reviewed available evidence from animal studies related to the relationship between serotonergic activity, SSRI exposure, and brain development. Id. Dr. Plunkett purports to bolster her animal study-based conclusion by providing extensive string cites to human studies that have allegedly associated or linked SSRIs with adverse developmental effects and neurodevelopmental toxicity, including autism. She states that the cited “human data are consistent with the known toxicity profile of Celexa and Lexapro, as well as other SSRIs, based on [ ] preclinical animal studies.” Id. at ¶ 51. However, she provides no explanation as to what these cited human studies examined or concluded, their methodologies, or whether they specifically support or contradict her proffered opinions. In fact, as explained below, at least one of the studies she cites, Clements (2015), directly contradicts her opinion. Such vague treatment of highly relevant human studies and data, which were presumably analyzed through the lens of Dr. Plunkett's own expertise and judgment and to which she does not assign any weight, cannot be discerned as “a scientific method of weighting that is used and explained,” Zoloft, 858 F.3d at 796, and therefore falls short of meeting the rigorous explanation standards of Daubert and Rule 702 under the weight of the evidence methodology. See O'Conner v. Commonwealth Edison Co., 807 F.Supp. 1376, 1392 (C.D. Ill. 1992) (“[M]ere recitation of a list of studies is not a magical incantation paving the way to the witness stand unless it is accompanied by reasoned and scientifically accepted analysis”), aff'd, 13 F.3d 1090 (7th Cir. 1994).

Bradford Hill Analysis

In opining on the link between the dose and the alleged injury under the third step of the analysis generally followed by toxicological experts proffering a reliable opinion, Dr. Plunkett concludes that available studies indicate that the “duration of exposure, a measure of dose, and the timing of that exposure may both impact the risk of neurodevelopmental toxicity in humans.” Plunkett Rpt. at 38-39. In reaching her conclusion, Dr. Plunkett purportedly reviewed “numerous peer-reviewed human epidemiology studies, as well as numerous animal toxicity studies” and applied four of the Bradford Hill criteria (biologic gradient/dose-response, biologic plausibility, biological coherence, and experimental evidence) to the available data using a weight of the evidence methodology. Opp. at 57-58. During her deposition, she explained that the weight she placed on the studies she reviewed under a weight of the evidence methodology, and subsequently relied on when conducting her Bradford Hill analysis, was based on her scientific training and judgment, which she has honed throughout her career in pharmacology and toxicology, and therefore cannot be quantified. Plunkett Tr. at 307:16-311:13. She also claimed to have used objective standards to determine how much weight to place on studies, considering whether articles were peer-reviewed or Good Laboratory Practice (GLP) quality, although these standards are not discussed in her report. Id. at 314:5-15.

An inquiry as to whether a study qualifies as GLP would question whether the study has “been reviewed by a regulatory agency and used for decision-making.” Plunkett Tr. at 314:5-15.

Generally, as explained above, the weight of evidence methodology is scientifically acceptable where experts “rigorously explain how they have weighted the criteria” considered. Mirena II, 341 F.Supp.3d at 247. Thus, in reliably employing a weight of the evidence methodology, an expert must thoroughly analyze the strengths and weaknesses of any inconsistent research and sufficiently reconcile her opinion with contrary authority. Zoloft, 26 F.Supp.3d at 457. Without such explanations, the proffered testimony cannot meet the reliability standards of Rule 702 and Daubert. Id. Here, Defendants argue that Dr. Plunkett's utilization of the weight of the evidence methodology is unreliable because, in conducting an incomplete and misleading Bradford Hill analysis, she 1) cherry-picks data that supports her conclusions; and 2) fails to reconcile data that is inconsistent with or weakens her conclusions. The Court agrees.

1. Biologic Gradient/Dose-Response

The biologic gradient or dose-response factor prompts consideration of whether the risk of seeing an effect of the drug increases as the dose, or level of exposure, increases. With regard to this factor, Dr. Plunkett opined that there is a general dose-response relationship for developmental toxicity based on available animal data. Dr. Plunkett begins her analysis by noting the differences in human and animal gestation. Plunkett Rpt. at 36-37. For example, she explains, “[h]uman brain development that occurs during the third trimester is not completed in rodents until after birth.” Id. at 37. As a result, she explains, standard animal developmental toxicity studies “are limited in terms of what they can contribute to the relationship between a disorder such as autism and drug exposure unless the studies involve exposure” during the postnatal period in rodents. Id. Consequently, in order to establish a dose-response relationship for the purposes of a causation analysis, Dr. Plunkett explains, standard animal studies involving exposure in utero “must be considered in conjunction with” data evaluating the postnatal period in rodents and human data. Id. She cites a few of the former that support her conclusion.

With regard to human data related to neurodevelopmental toxicity, which she claims must be considered, Dr. Plunkett explains that epidemiological investigations comprise the available authority because clinical trials cannot be conducted on pregnant women. Representing that she performed a “review of available [epidemiological] studies,” Dr. Plunkett summarily cites five such studies that yielded results supporting her conclusion concerning a dose-response relationship. However, her citations are selective and fail to represent the studies' underlying conclusions accurately. For example, Dr. Plunkett first cites Croen (2011) and Harrington (2014) for the assertion that a “review of available studies shows that exposure during the first trimester was reported to be associated with an increased risk for neurodevelopmental disorders . . . .” Plunkett Rpt. at 38. However, she omits the fact that Boukhris (2016) found that “use of [antidepressants] in the first trimester . . . was not associated with the risk of ASD.” Boukhris (2016) at 120. Nevertheless, in the very same sentence, Dr. Plunkett cites Boukhris (2016) for the proposition that some studies “reported an increased risk with SSRI exposure during the second and third trimesters.” Plunkett Rpt. at 38. Indeed, she acknowledges that studies with data contrary to or inconsistent with her opinions would be “relevant data” and “useful in forming [her] opinions,” yet she excludes from her report “those findings of no effect in animal studies.” Plunkett Tr. at 49:9-17; 323:21-25; 324:15-20. Dr. Plunkett's conclusion that “adequate data” exists to support a dose-response relationship is only supported by favorable animal data, and her failure to reconcile data that does not support her conclusions casts doubt on the reliability of her opinion on this Bradford Hill factor. See Zoloft, 26 F.Supp.3d at 456-57.

2. Biological Plausibility

In discussing the biological plausibility factor, Dr. Plunkett submits that there is data available to support a plausible mechanistic basis for neurodevelopmental toxicity linked to in utero exposure to drugs such as Lexapro. To support her opinion, Dr. Plunkett cites to five animal-focused sources, all of which find adverse effects of exposure to citalopram in animals. Plunkett Rpt. at 39-41 (citing Borue et al. (2007); Simpson et al. (2011); Rodriguez-Porcel et al. (2011); Darling et al. (2011); and Zahra et al. (2018)). Failing to comment on whether any animal studies may conclude differently and the weight she would assign such studies, or on the existence of any epidemiological studies whatsoever, Dr. Plunkett concludes that “there is a large body of data available” to support a biologically plausible relationship between exposure to SSRIs during pregnancy and autism. Plunkett Rpt. at 41. Her failure to present a representative picture of the “scientific landscape and present the Court with what [s]he believes the final picture looks like” casts doubt on the reliability of her opinion. See Rezulin, 309 F.Supp.2d at 563.

3. Biological Coherence

The biological coherence factor considers whether the reported effect of the drug fits with the known pattern of the disease. In analyzing this factor, Dr. Plunkett concludes that it weighs in favor of causation because neurodevelopmental toxicity includes outcomes consistent with the pattern of effects seen in autism. Although she claims to have reached her conclusion upon “[r]eview of the human epidemiological data, as well as the [relevant] animal data,” Dr. Plunkett again does not discuss any human epidemiological data that supports or opposes her conclusion; she only discusses cherry-picked studies on rats that support her opinion. Plunkett Rpt. at 41.

4. Experimental Evidence

Finally, the experimental evidence factor involves a consideration of the ability to collect data in order to analyze the cause-and-effect relationship alleged. In support of her conclusion that “there are experimental data that inform on the cause-and-effect relationship between SSRI exposure during pregnancy and the risk of [autism,]” Dr. Plunkett comments only on the availability of animal developmental toxicity data. Plunkett Rpt. at 42. She does not address the unavailability of clinical data or the value provided by epidemiological data in analyzing a cause-and-effect relationship, which she acknowledged in her discussion of the biologic gradient/dose-response factor.

Indeed, a rigorous examination of Dr. Plunkett's analysis reveals that she conducted a flawed and misleading Bradford Hill analysis where she selectively analyzed four of the nine factors, primarily relied on cherry-picked, favorable animal data that supports her conclusions within those analyses, and failed to mention, much less reconcile, other categories of relevant data constituting contrary authority. Her application of the factors is fundamentally flawed from the outset because she is completely silent on the threshold “gating” factor, strength of association. As the Court observed in Mirena II, “if [ ] a statistical association is not found, there is no charter to undertake a Bradford Hill analysis at all.” Mirena II, 341 F.Supp.3d at 258. Additionally, her selective review of only four Bradford Hill factors, without explaining the relationship between them, casts doubt on the reliability of her proffered opinion because she fails both to consider the remaining five factors and to provide justification for her partial analysis.

Additionally, Dr. Plunkett chooses to discuss only those studies, and findings within studies, that support her conclusions, and presents to the Court “what [s]he believes the final picture looks like” rather than the entire “scientific landscape.” Rezulin, 309 F.Supp.2d at 563. This is particularly concerning because she relies primarily on animal data, despite the significant differences between animals and humans. As Dr. Plunkett herself explains, animals cannot even be diagnosed with autism in the same way humans can, because “human brains are different than rodent brains[,]” and animals “are not communicative in the way . . . humans are[.]” See Plunkett Tr. at 268:1-5; 144:16-145:12. Further, she does not discuss other relevant animal studies that constitute contrary authority. For example, she does not discuss, analyze, or explain Bairy (2007), which concludes that fluoxetine, an SSRI tested in studies on which Dr. Plunkett relies to support her opinion, “does not cause any cognitive deficits” and “does not produce any major defect in development of the [central nervous system] in rats and is safe at the therapeutic dose.” Bairy (2007) at 10. Dr. Plunkett's propensity to cherry-pick data that supports her conclusions and disregard contrary data that is highly relevant to her conclusions renders her opinion unreliable. See Mirena II, 341 F.Supp.3d at 242, 261 (finding Dr. Plunkett's opinion unreliable in part because she “fail[ed] to consider evidence that did not support her opinion”).

Furthermore, Dr. Plunkett implies in her report that epidemiological data are consistent with her conclusions, but admits that she did not necessarily discuss, analyze, or explain studies that did not “support the exact statement [she was] making,” nor cite them in her report outside of a footnote or her reference list. Plunkett Tr. at 319:24-320:11. Dr. Plunkett's purported incorporation of relevant epidemiological evidence into her report is half-hearted and misleading. She states that she has reviewed the available epidemiological evidence, but her references to such evidence are bare-bones, often in the form of string cites, and entirely omit any analysis, much less a thorough one, of the strengths and weaknesses of the underlying conclusions. See, e.g., Plunkett Rpt. at ¶ 51 n. 10. Where cited studies reach a conclusion contrary to Dr. Plunkett's opinion, she fails to acknowledge, discuss, or analyze the inconsistent data. For example, in the string cites provided in footnote 10 of her report, Dr. Plunkett cites Castro (2016) as an epidemiological study she reviewed. Castro (2016) reports an odds ratio of less than 1.0, which suggests the absence of an increased risk and may indicate a decreased or negative association. See Castro (2016) at 3, Tbl.2. However, Dr. Plunkett does not discuss Castro (2016) anywhere in her report. She similarly failed to analyze the contrary conclusions of the Boukhris (2016) study in presenting her conclusions concerning the biologic gradient and dose-response Bradford Hill factors (see supra pg. 33). See Zoloft, 26 F.Supp.3d at 477 (excluding causation testimony where experts had “given scant attention to the epidemiology research in their reports [and] failed to reconcile inconsistent epidemiological evidence with their opinions on human causation”); Rezulin II, 369 F.Supp.2d at 425 (“[I]f the relevant scientific literature contains evidence tending to refute the expert's theory and the expert does not acknowledge or account for that evidence, the expert's opinion is unreliable[.]”).

Moreover, on at least one occasion, Dr. Plunkett misrepresented the underlying epidemiological data that she claims supports her opinion. She cites Clements (2015) for the proposition that “SSRIs as a class have been associated with adverse developmental effects of various types in humans.” Plunkett Rpt. at ¶ 51. However, as the title of the study itself represents, the study finds that prenatal exposure to antidepressants is not associated with ASD and thus does not support any association that is relevant to this case. See Clements (2015) (titled “Prenatal antidepressant exposure is associated with risk for attention-deficit hyperactivity disorder but not autism spectrum disorder in a large health system”).

Finally, Dr. Plunkett dissembled when confronted about her disregard of relevant evidence. When asked at her deposition why she did not address the EMA report, which directly contradicts her opinion, Dr. Plunkett claimed that she was not asked to offer a regulatory opinion, and therefore did not need to discuss a regulatory document. Plunkett Tr. at 110:4-111:12. However, she cited regulatory laws, codes, and documents as part of her 12-page discussion of the regulation of SSRIs by the FDA in her report. See Plunkett Rpt. at ¶¶ 19-34. Similarly, she admittedly did not address Kaplan (2017) and Morales (2018), two inconsistent meta-analyses that she claims contain redundant information. Plunkett Tr. at 173:17-175:6. Yet, she cites other meta-analyses in her report that allegedly support her opinion. See, e.g., Plunkett Rpt. at ¶ 51 (citing Man (2015)). The unreliability of her methodology is thus revealed by her inconsistent application of the principles she claims to respect.

Moreover, despite her admission at her deposition of the importance of assessing chance, bias and confounding as alternate explanations for epidemiological study results, Plunkett Tr. at 172:19-173:5, 185:23-187:21, Dr. Plunkett's report fails to discuss which studies “had not controlled for important confounding factors . . . and therefore could not be read as finding causality.” Mirena II, 341 F.Supp.3d at 262. Therefore, she has not “adequately accounted for obvious alternative explanations” for her conclusions. Id. (citation omitted). Dr. Plunkett's divergent methods highlight the unscientific approach she has employed, first, in formulating her conclusions, and then in performing her research. See Claar, 29 F.3d at 502-03 (“Coming to a firm conclusion first and then doing research to support it is the antithesis of” a scientific method.).

When confronted about her methodological deficiencies, Dr. Plunkett repeatedly claimed that she is entitled to utilize her scientific judgment to decide which studies are worth mentioning in her report and which are not. See, e.g., Plunkett Tr. at 307:16-311:13. However, “[w]ithout more than credentials and a subjective opinion, an expert's testimony that ‘it is so' is not admissible.” Viterbo v. Dow Chem. Co., 826 F.2d 420, 424 (5th Cir. 1987). Furthermore, her claim that she was not providing a general causation analysis, and so did not have to conduct a thorough review of the available epidemiological evidence or a Bradford Hill analysis, does not justify her proffer of an incomplete, selective, misleading, and ultimately unreliable opinion. Indeed, she is obligated to utilize and explain “a scientific method of weighting” to avoid rendering her opinion the product of a “mere conclusion-oriented selection process.” Zoloft II, 858 F.3d at 796; see Zenith Elecs. Corp. v. WH-TV Broad. Corp., 395 F.3d 416, 419 (7th Cir. 2005) (commenting that a putative expert's “method, ‘expert intuition,' is neither normal among social scientists nor testable-and conclusions that are not falsifiable aren't worth much to either science or the judiciary”). To fail to do so constitutes a “malleable and vague approach [that] is in tension with the first principles under Daubert, because it makes it all too easy for an expert to manipulate the Bradford Hill factors to support a desired conclusion of causation, and far too hard for an ensuing expert to replicate and rigorously test the expert's analytical approach.” Mirena II, 341 F.Supp.3d at 268. For these reasons, the Court finds Dr. Plunkett's proffered testimony to be unreliable and therefore inadmissible under Daubert and Rule 702.

Dr. Patricia Whitaker-Azmitia

Dr. Patricia Whitaker-Azmitia is a Professor of Integrative Neuroscience in the Department of Psychology, and a Professor of Psychiatry, at the State University of New York at Stony Brook, where she has been a Full Professor since 2001. See Expert Report of Patricia M. Whitaker-Azmitia, dated Sept. 14, 2018, docket entry no. 81 Exh. 16 (“Whitaker-Azmitia Rpt.”) at 1. She received her Ph.D. in pharmacology in 1979 from the University of Toronto, where she trained with a world-renowned expert on neurotransmitters and mental illness. Id. Her thesis involved identifying and quantifying serotonin receptors in human and animal brains. Id. Dr. Whitaker-Azmitia has published over 80 papers on serotonin receptors and the mechanisms involved in directing brain development. Id. In particular, Dr. Whitaker-Azmitia has worked to develop animal models of increased serotonin in developing rats, in order to ascertain if this could result in the behavioral and neurochemical changes associated with autism. Id. Defendants do not contest Dr. Whitaker-Azmitia's qualifications to serve as an expert witness in this case. Id.

Defendants first argue that Dr. Whitaker-Azmitia's proffered testimony on the issue of biological plausibility must be excluded because it is not a general causation opinion and is unreliable in that 1) she primarily relies on inadmissible animal data to support her hypothesis; and 2) her use of the “face validity” methodology renders her evaluation of epidemiological data unreliable. Dr. Whitaker-Azmitia opines that “manipulation of the serotonergic system during early development, specifically including the manipulation of this system by SSRIs [ ], can and do[es] cause broad disruptions in brain structure and circuitry and lead to associated disturbances in brain function and behavior, including [ASD].” Whitaker-Azmitia Rpt. at 17. In her report, she identifies several factors that have been associated to varying degrees with an increased likelihood of autism (i.e., obstetrical complications, genetics, and maternal immune activation), but posits that hyperserotonemia (an increase in the amount of serotonin in the blood) during fetal development is “the single most common contributing factor to autism.” Id. at 9. Dr. Whitaker-Azmitia identifies a mechanism of action through which maternal use of SSRIs causes or contributes to the development of ASD. She opines as follows: 1) serotonin levels play a critical role in brain development of the fetus; 2) these levels of serotonin are regulated by a protein called “SERT”; 3) SSRIs inhibit SERT and disrupt the levels of serotonin in the developing brain; 4) the level of serotonin must be in a “narrow range” as too little can cause cognitive defects and too much can cause social deficits; 5) perturbations, either high or low, can cause neurodevelopmental damages, including autism; and 6) this disruption in the developing brain leads to permanent alteration in the formed brain, which manifests as ASD. Id. at 3. Dr. Whitaker-Azmitia supports her hyperserotonemia hypothesis with animal and human studies.

Reliance on Animal Data

The Court begins its analysis by examining Dr. Whitaker-Azmitia's opinion under the Daubert reliability factors. First, Dr. Whitaker-Azmitia has tested her hyperserotonemia hypothesis by replicating her serotonin findings through the development of animal models for increasing serotonin in developing rats in order to ascertain behavioral and neurochemical changes associated with autism. Through conducting strictly controlled animal studies, Dr. Whitaker-Azmitia observed changes in animal behavior consistent with autism. However, she has not tested her hypothesis in humans, noting that “[t]o study serotonin in [a] fetal human brain would be next to impossible.” Rebuttal Expert Report of Dr. Whitaker-Azmitia, dated Dec. 6, 2018, docket entry no. 81 Exh. 17 (“Whitaker-Azmitia Reb. Rpt.”), at 9. Second, Dr. Whitaker-Azmitia has submitted her hypothesis, in which she has specifically identified and opined on the role of hyperserotonemia in autism in humans, for peer review and publication. Third, Dr. Whitaker-Azmitia does not identify an error rate for her technique or standards that control the application of her methodology.

With regard to the fourth Daubert factor, the parties dispute whether Dr. Whitaker-Azmitia's hyperserotonemia hypothesis is generally accepted in the relevant scientific community. Given that Dr. Whitaker-Azmitia primarily relies on her own peer-reviewed research and conclusions regarding the effects of hyperserotonemia to form her opinion here, this Daubert inquiry is instructive in the Court's consideration of the first Rule 702 factor, i.e., whether Dr. Whitaker-Azmitia's opinion is grounded in reliable data.

In her report, Dr. Whitaker-Azmitia explains that, in order to test her hypothesis that developmental hyperserotonemia leads to ASD, the principal method she has used in her laboratory has been treating pregnant rats with an appropriate serotonin receptor agonist. Whitaker-Azmitia Rpt. at 9. She notes that other models include conducting tests on genetically engineered mice, and all models have demonstrated behavioral disruptions in adult animals, some of which relate to ASD. Id. With regard to the resulting data, Dr. Whitaker-Azmitia summarizes studies finding that two diagnostic criteria for autism, social deficits and repeated behaviors, have been found to be altered in animal models of developmental SSRI exposure. Id.

As a result of her exclusive reliance on animal data, Dr. Whitaker-Azmitia's hyperserotonemia hypothesis has been rejected as “speculative” and unsubstantiated with respect to its “support for a causal relation between prenatal SSRI/SNRI and development of ASD.” See Motion at 51-52 (citing EMA Report at 14). Additionally, as Defendants contend, and Plaintiffs do not refute, much of the peer-reviewed literature that has considered the hyperserotonemia hypothesis has found it beset with “uncertainty.” Motion at 51 (citing Garbarino (2019) at 2). As discussed in detail above, there is a wealth of epidemiological data on the associative and causal relationship between maternal SSRI use and ASD. For this reason, the reliability of the animal data upon which Dr. Whitaker-Azmitia relies is concerning at the outset. See Raynor, 104 F.3d at 1375. Nevertheless, the Court need not conclusively determine whether such data is admissible under Rule 702 in light of Dr. Whitaker-Azmitia's unreliable methodology, as discussed below.

Defendants also argue that Dr. Whitaker-Azmitia's hypothesis is litigation-driven and therefore unreliable because she has never before opined that antenatal exposure to SSRIs can cause elevations in serotonin, let alone ASD. Here she opines that hyperserotonemia during development is the single most common contributing factor to autism, whereas in the past she has published papers attributing ASD to the imbalance between oxytocin and cortisol on one occasion, and low progesterone on another. See Aiello & Whitaker-Azmitia (2011) at 1663; Whitaker-Azmitia (2014) at 313. However, in light of the Court's decision to exclude Dr. Whitaker-Azmitia's testimony as unreliable under Rule 702, the Court need not address whether her hypothesis here is litigation-driven or inconsistent with her prior published works.

Face Validity Methodology

In addition to drawing from her own animal-based research and conclusions, Dr. Whitaker-Azmitia claims to have reviewed other lines of evidence, including peer-reviewed human epidemiology. Dr. Whitaker-Azmitia employed the “face validity” methodology in deciding how much weight to place on the studies she reviewed. Under this approach, which calls for a comparison between animal models and human anatomy and biology, Dr. Whitaker-Azmitia admittedly only considered models that give face validity (those experimental results highlighting similarities between SSRI-exposed animals and people with autism). See Deposition Transcript of Patricia M. Whitaker-Azmitia, dated April 4, 2019, docket entry no. 81 Exh. 18 (“Whitaker-Azmitia Tr.”), at 133:21-134:8. Defendants argue that Dr. Whitaker-Azmitia's methodology is unreliable under the second Rule 702 factor because it inherently calls for cherry-picked data that supports her hypothesis, and because she failed to employ sufficient intellectual rigor in analyzing relevant epidemiological data. The Court agrees.

To begin, Defendants submit that the face validity method has been consistently rejected by peer-reviewed literature as a method for determining causality because it “is based on a subjective assessment and is therefore prone to bias.” Motion at 56 (citing Wagner (2018) at 215; Pittenger (2017) at 324; Royal (2016) at 1026); see also Lipitor, 892 F.3d at 634. Dr. Whitaker-Azmitia's application of the methodology exemplifies the truth behind this criticism. First, the methodology is inherently focused on analyzing consistent data, i.e., studies showcasing similarities between animals and humans displaying autistic-like features. Dr. Whitaker-Azmitia testified that, when reviewing and weighing the importance of relevant literature, she looked exclusively, in rodents exposed to SSRIs prenatally or perinatally, for neurochemical and morphological changes that are known to occur in children with autism. Whitaker-Azmitia Tr. at 133:21-134:8. In doing so, Dr. Whitaker-Azmitia admitted that she deliberately disregarded those studies that showed no similarities and was only looking for those that supported her hypothesis. Id. As a result, she does not consider, much less discuss in her report, any studies that found no changes in the brains of prenatally or perinatally SSRI-exposed animals, or studies showing variations among the changes found. Nor does she explain how she weighed any line of evidence that she reviewed when selecting the data she relies on. Indeed, “opinion evidence that is connected to existing data only by the ipse dixit of the expert” is not reliable. Joiner, 522 U.S. at 146.

Second, Dr. Whitaker-Azmitia proffers a misleading analysis of epidemiological data, which she admits she is unqualified to interpret. In her report, she presents a discussion of epidemiological data that purportedly supports her opinion. However, during her deposition, she admitted that epidemiology “isn't entirely my field. I have to rely on epidemiologists' interpretation of the data.” Whitaker-Azmitia Tr. at 150:10-151:11. As a result of her lack of expertise, she explains, she “looked for things in [the epidemiological literature] that were things [she] knew about autism.” Id. at 145:9-19. In fact, she testified that she relied on her colleague to explain basic epidemiological terms and concepts to her, such as what meta-analyses are, and to perform a Bradford Hill analysis based on the facts of this case. Id. at 144:9-145:8 (Q: “Did you apply the Bradford Hill methodology in reaching your opinions in this case?” A: “I rely on Anne Moyér, who's the epidemiologist in our department . . . so I would have left it up to her.”). The unreliability of her epidemiological discussion is further underscored by her testimony regarding the claim in her report that “[t]o date, there have been 15 [epidemiological] studies examining the incidence of autism in children exposed in utero to antidepressants, which satisfy quality assessment for inclusion in review analyses.” Whitaker-Azmitia Rpt. at 15. When asked to identify these studies, Dr. Whitaker-Azmitia testified that she was not sure, as she did not conduct the quality assessment mentioned; it was completed by an epidemiologist whom she could not identify. See Whitaker-Azmitia Tr. at 150:25-151:25.

Furthermore, Dr. Whitaker-Azmitia entirely fails to highlight, much less analyze, the strengths and weaknesses of any inconsistent epidemiological data. For example, as Defendants point out, and Plaintiffs do not refute, Dr. Whitaker-Azmitia does not cite or address the fact that there are several epidemiological studies that find no association or find a false association between in-utero SSRI exposure and autism. Additionally, she relies on Bellissima (2015) to opine that “infants that were previously exposed to SSRIs in utero have an increase in blood levels of S100B [(a protein)], indicating that excess serotonin levels were reached before birth.” Whitaker-Azmitia Rpt. at 15. However, she does not address or consider a separate study that reached exactly the opposite conclusion-that “[p]renatal SSRI exposure was associated with decreased neonatal serum S100B levels.” See Pawluski (2009) at 662. Similarly, Dr. Whitaker-Azmitia relies on Cabrera (1994) to opine that embryonic SSRI exposure reduces the “density and function” of serotonin receptors. See Whitaker-Azmitia Rpt. at 15. Yet, she fails to address later studies by the same author that found SSRI exposure led to no such change. See Cabrera (1997) at 138; Cabrera (1998) at 1474.

Finally, Dr. Whitaker-Azmitia's mischaracterization of a cited study on at least one occasion further casts doubt on the reliability of her opinion. Compare Whitaker-Azmitia Rpt. at 16 (“Although other factors may explain variability in outcomes of infants exposed to SSRIs, the use of SSRIs itself should be considered the largest contributing factor.”) (citing Rotem-Kohavi (2017)) with Rotem-Kohavi (2017) at 915 (“[R]ecent studies add to the growing literature suggesting that empirical evidence is starting to overtake biological plausibility and that the association between prenatal antidepressant medication exposure and ASD may not be causal.”).

In explaining her misleading representation of the available epidemiological data and failure to highlight or analyze inconsistent data, Dr. Whitaker-Azmitia took the position that “that's how science is done.” Whitaker-Azmitia Tr. at 358:5-19. The Supreme Court disagrees. Daubert requires experts to disclose a method subject to replication and testing, for it is the testing of hypotheses to “see if they can be falsified” that “distinguishes science from other fields of human inquiry.” Daubert, 509 U.S. at 593. For these reasons, the Court finds Dr. Whitaker-Azmitia's proffered testimony to be unreliable and therefore inadmissible under Daubert and Rule 702.

Conclusion

For the foregoing reasons, Defendants' omnibus motion to exclude the testimony proffered by Plaintiffs' experts is granted in its entirety. This case remains referred to Magistrate Judge Cott for general pretrial management.

This Memorandum Opinion and Order resolves docket entry no. 79.

SO ORDERED.

