United States v. Chavez

United States District Court, Northern District of California
Dec 13, 2021
No. 15-CR-00285-LHK-1 (N.D. Cal. Dec. 13, 2021)

Opinion


UNITED STATES OF AMERICA, Plaintiff, v. DANIEL CHAVEZ, et al., Defendants.


ORDER DENYING CHAVEZ'S AND SKATES' MOTION TO LIMIT PRESENTATION OF BALLISTICS EVIDENCE BY THE UNITED STATES RE: DKT. NO. 941

LUCY H. KOH, United States District Judge.

On September 10, 2021, Defendants Daniel Chavez and Victor Skates (collectively, “Defendants”) moved to limit the presentation of ballistics comparison evidence by the United States (“the government”). ECF No. 941 (“Mot.”). The government filed an opposition on October 1, 2021. ECF No. 951 (“Opp'n”). Defendants filed a reply on October 15, 2021. ECF No. 961 (“Reply”). Having considered the filings of the parties, the relevant law, and the record in this case, the Court DENIES Defendants' Daubert motion to limit the presentation of ballistics comparison evidence by the government.

Defendants also filed a single motion seeking leave to file overlength briefing in two separate Daubert motions. See ECF No. 940 (seeking leave to file overlength briefs with respect to ECF Nos. 941 and 942). The Court grants Defendants' request to file an overlength brief in the instant motion, ECF No. 941. The Court will rule on Defendants' request to file an overlength brief in Defendants' Motion to Limit Gang Expert Testimony when the Court rules on that motion, ECF No. 942.

Defendants move to limit the government's ballistics expert testimony under Federal Rule of Evidence 702 and Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993). Mot. at 1. Specifically, Defendants seek to effectively limit any ballistics expert testimony to opinions that “a particular bullet or casing cannot be excluded from having been fired by a particular firearm.” Id. The government contends that ballistics expert testimony easily meets the Daubert standard for admissibility and that Defendants rely on outlier rulings. Opp'n at 1-2. The Court agrees with the government.

Federal Rule of Evidence 702 governs the admission of expert testimony and imposes a “gatekeeping obligation” on the district court. See Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137, 147 (1999). To be admitted, expert testimony must be “both relevant and reliable.” United States v. Vallejo, 237 F.3d 1008, 1019 (9th Cir. 2001). Although Defendants do not challenge the relevance of the government's ballistics expert testimony, the Court explicitly finds that the government's proposed expert testimony that “a particular bullet or cartridge was fired from a particular gun,” Opp'n at 1, is relevant to the instant case, which involves “dozens of shootings allegedly committed by alleged Norteño gang members in Salinas, California,” Mot. at 2. Defendants' own expert agrees. See Mot. at 16 (acknowledging that “the comparison of toolmarks to a particular tool such as a gun are certainly relevant”).

To be reliable, the testimony must have “a reliable basis in the knowledge and experience of the relevant discipline.” Kumho Tire, 526 U.S. at 149 (citation and alterations omitted). In Daubert, the Supreme Court outlined five factors for courts to consider when determining whether testimony is reliable: “(1) whether the method has been tested; (2) whether the method ‘has been subjected to peer review and publication;' (3) ‘the known or potential rate of error;' (4) whether there are ‘standards controlling the technique's operation;' and (5) the general acceptance of the method within the relevant community.” United States v. Johnson, 875 F.3d 1265, 1280 (9th Cir. 2017) (quoting Daubert, 509 U.S. at 592-95). However, “the reliability inquiry is a flexible one and the district court has broad latitude to determine what factors in Daubert, if any, are relevant to the reliability determination.” United States v. Valencia-Lopez, 971 F.3d 891, 898 (9th Cir. 2020) (cleaned up).

Here, relying primarily on out-of-circuit authority, Defendants challenge the reliability of the Association of Firearm and Toolmark Examiners (“AFTE”) methodology used by firearms identification experts. See, e.g., Mot. at 4-6. The Ninth Circuit has previously rejected similar reliability challenges to AFTE methodology under Daubert. See Johnson, 875 F.3d at 1280-81 (rejecting the argument that “the AFTE is inherently unreliable and fails to satisfy the Daubert factors.”). Moreover, the Ninth Circuit in Johnson found that qualifying the expert's conclusion to a “reasonable degree of ballistics certainty” coupled with allowing the defendant to cross-examine the expert and present a defense ballistics expert witness “provided adequate safeguards to allow the jury to properly evaluate the probative value” of the expert opinion and written report. Id.

The government represents that its experts will not express identification conclusions with absolute certainty, or even that these conclusions are “held to a reasonable degree of scientific certainty,” and will not testify that the “identification is being made to the exclusion of all other firearms in the world.” Opp'n at 24. These representations fall comfortably within the safeguards the Ninth Circuit found adequate in Johnson. Defendants, however, seek to impose two limiting jury instructions that would reject the Ninth Circuit's general approval of the AFTE methodology in Johnson.

Below, the Court analyzes in turn whether each Daubert factor weighs in favor of reliability. The Court ultimately concludes that, even in light of a minority of cases decided after the Ninth Circuit's Johnson decision, the AFTE methodology is relevant and reliable and that no limiting instruction is necessary.

1. The AFTE Methodology Can and Has Been Tested

The first Daubert factor, whether the relevant technique or method can and has been tested, is a “key question” when deciding whether expert testimony should be admitted. See United States v. Romero-Lobato, 379 F.Supp.3d 1111, 1119 (D. Nev. 2019) (citing Daubert, 509 U.S. at 593). Courts across this country nearly uniformly conclude that AFTE methodology can, and has, been tested. See, e.g., United States v. Johnson, No. (S5) 16 Cr. 281 (PGG), 2019 WL 1130258, at *15 (S.D.N.Y. March 11, 2019) (collecting cases and concluding that “[t]here appears to be little dispute that toolmark identification is testable as a general matter”); United States v. Harris, 502 F.Supp.3d 28, 37 (D.D.C. 2020) (collecting cases concluding that “firearm toolmark identification can be tested and reproduced”); United States v. Diaz, 2007 WL 485967, at *5 (N.D. Cal. 2007) (holding that the theory of firearms identification, though based on examiners' subjective assessment of individual characteristics, “has been and continues to be tested”). Even in United States v. Tibbs, the primary case on which Defendants rely, the court concluded that “virtually every court that has evaluated the admissibility of firearms and toolmark identification has found the AFTE method to be testable and that the method has been repeatedly tested.” No. 2016-CF1-19431, 2019 WL 4359486, at *7 (D.C. Super. Ct. Sept. 5, 2019) (collecting cases). The Court agrees with these cases and finds this factor weighs in favor of admissibility.

Defendants' reliance on United States v. Adams, 444 F.Supp.3d 1248, 1259-60 (D. Or. 2020), is unpersuasive. Although the Adams court found that the AFTE method was not testable, it failed to engage with, much less distinguish, the reasoning of the previous cases that found the AFTE methodology has been and can be tested. As other courts have noted, “[t]he fact that numerous studies have been conducted testing the validity and accuracy of the AFTE method” strongly suggests the method has been and can be tested. See Romero-Lobato, 379 F.Supp.3d at 1119; see also Harris, 502 F.Supp.3d at 37 (“The fact that there are subjective elements to the firearm and toolmark identification methodology is not enough to show that the theory is not ‘testable.'”).

2. The AFTE Methodology Has Been Subject to Peer Review and Publication

Under the second Daubert factor, the Court considers whether the AFTE method has been subject to “peer review and publication.” Daubert, 509 U.S. at 594. As with the first factor, courts considering challenges to the AFTE methodology have nearly uniformly concluded that this factor has been satisfied. See Tibbs, 2019 WL 4359486, at *9 (collecting cases and noting that other courts have found that “publication in the AFTE Journal satisfies this prong of the admissibility analysis”). The Court again agrees with the weight of authority and finds this factor weighs in favor of admissibility.

Defendants rely on two unpersuasive cases, Tibbs and Adams, to suggest that existing literature on the AFTE method is insufficient to meet this Daubert factor. Mot. at 36-42. However, both the Tibbs and Adams courts focused on whether AFTE Journal publications were sufficient to weigh in favor of admissibility. See Tibbs, 2019 WL 4359486 at *10 (“Ultimately, the Court has seen only two meaningfully peer reviewed journal articles regarding the foundational validity of the field, as the vast majority of the studies are published in a journal that uses a flawed and suspect review process.”); Adams, 444 F.Supp.3d at 1265-66 (rejecting reliance on the AFTE Journal because “the purpose of publication in the AFTE Journal is not to review the methodology for flaws but to review studies for their adherence to the methodology.”).

However, even if the Court were to fully discount the studies published in the AFTE Journal, those studies are not the only examples of peer-reviewed research published in this field. For example, Defendants' own expert acknowledges that the Journal of Forensic Sciences has published at least three error rate studies of firearm examiners. ECF No. 941-3 (“Scurich Decl.”) ¶¶ 14, 19. Moreover, the two government reports on which Defendants repeatedly rely, the 2009 NAS Report and the 2016 PCAST Report, “themselves constitute peer review despite the unfavorable view the two reports have of the AFTE method.” Romero-Lobato, 379 F.Supp.3d at 1119. In addition, the government's expert provides a non-exhaustive list of over fifty peer-reviewed studies published outside of the AFTE Journal. ECF No. 951-2 (“Weller Decl.”) App'x A. These additional studies alone “would fulfill the required publication and peer review requirement.” Harris, 502 F.Supp.3d at 40. As such, Defendants' reliance on Tibbs and Adams is unpersuasive.

3. The AFTE Methodology Error Rate Is Not Impermissibly High

Under the third Daubert factor, the Court considers “whether the technique has a known or potential rate of error.” Romero-Lobato, 379 F.Supp.3d at 1119 (citing Daubert, 509 U.S. 579). Defendants and the government vigorously dispute the potential error rate based on the relevant studies. As an initial matter, the Court agrees with the government, Opp'n at 19, that the focal point of the inquiry should be on the rate of false positives, “as this is the type of error that could lead to a conviction premised on faulty evidence.” Harris, 502 F.Supp.3d at 39. The government argues the false positive rate is very low and relies on a sample of twelve studies in which the false positive rate ranged from 0% to around 1%, depending on the study design. Opp'n at 19-20. Defendants in turn contest the validity of the studies and the error rate calculation. Mot. at 31-35. According to Defendants, the defects in the study designs and error rate calculation undermine the validity of the calculated error rate. Id.

District courts have generally concluded that “validation studies as a whole show a low error rate for firearm and toolmark identification.” Harris, 502 F.Supp.3d at 39 (collecting cases). In so doing, some courts have concluded that the “error rate is difficult or impossible to determine and, at any rate, is presently unknown,” but based on the available evidence “the error rate is sufficiently low to weigh in favor of admissibility.” Johnson, 2019 WL 1130258, at *18 (collecting cases). However, several recent district court decisions have gone even further and have questioned the underlying methodology and the associated low error rate of these validation studies. See Tibbs, 2019 WL 4359486 at *18 (finding “most of the studies on which the government relies . . . cannot provide an accurate accounting of the error rate”); Adams, 444 F.Supp.3d at 1264-65 (concluding that “[i]t is possible that the error rate for toolmarking testing is very low, but it is more likely that it is not.”); United States v. Shipp, 422 F.Supp.3d 762, 777-79 (E.D.N.Y. 2019) (finding the potential error rate “does not favor a finding of reliability at this time”). Both the Adams and Shipp courts found that the test that most closely resembled real-world analysis could have an error rate as high as 2.2%. Adams, 444 F.Supp.3d at 1265; Shipp, 422 F.Supp.3d at 778. Both courts determined that an error rate of 1 in every 46 comparisons was too high to favor a finding of reliability. Adams, 444 F.Supp.3d at 1265; Shipp, 422 F.Supp.3d at 778; but see Johnson, 2019 WL 1130258 (concluding that a 2.2% error rate is not impermissibly high).

The Court finds this factor weighs slightly in favor of admissibility for two reasons. First, the weight of authority suggests the potential error rate is between 0% and 1%. Harris, 502 F.Supp.3d at 39 (collecting cases); Johnson, 2019 WL 1130258, at *18 (same). Moreover, even if the error rate is as high as 2.2%, the Court disagrees with the conclusion of the Adams and Shipp courts that such an error rate is impermissibly high. The 2016 PCAST Report, on which Defendants primarily rely to critique the majority of the validation studies, posits that an acceptable error rate from a scientific perspective is 5%. Opp'n at 21. Thus, a 2.2% error rate would still be significantly lower than the threshold recommended by scientific experts. See Johnson, 2019 WL 1130258 (concluding that a 2.2% error rate is not impermissibly high).

Second, and more importantly, according to the government, accredited labs, like the lab in the instant case, use “a second examiner who independently examines the results of the first examiner.” Opp'n at 20. The government also notes that there is a third layer of review as well: the defense firearms expert, who can also challenge the testimony of the prosecution firearms expert. Id. Defendants fail to contest these points on reply, and thus implicitly concede the likelihood that additional layers of review would drastically reduce the ultimate false positive rate at trial.

Thus, the underlying concern expressed in Adams and Shipp, the potential for a false conviction based on a false positive, appears to be much lower with the inclusion of another independent examiner, and lower still when there is a defense firearms expert. With just one additional independent examiner, the cumulative probability of a false positive could be as low as 0.05%. Including a defense firearms expert who conducts an independent analysis would make the risk potentially negligible. Accordingly, the potential error rate, even when using the highest measured false positive rate, likely remains within the range supporting a finding of reliability. See Harris, 502 F.Supp.3d at 39.

Assuming each examiner's false positive rate is independent, the probability of a false positive with two examiners would be about 0.05%: initial examiner (2.2%) multiplied by second lab examiner (2.2%), as sketched below. With three examiners, the potential false positive rate would be orders of magnitude lower still. However, the Court notes the record is not entirely clear as to what extent the second lab examiner would be conducting an independent analysis, nor is there any evidence that in every instance a defendant rebuts such testimony with their own firearms expert.
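For illustration only, the compounding arithmetic underlying these figures can be sketched as follows. The sketch assumes, as the discussion above does, that each reviewer's 2.2% false positive rate is independent of the others; as noted, the record does not conclusively establish that assumption.

% Illustrative sketch: compounding an assumed independent 2.2% false positive rate
\[
P(\text{false positive with two examiners}) \approx 0.022 \times 0.022 = 0.000484 \approx 0.05\%
\]
\[
P(\text{false positive with three examiners}) \approx 0.022^{3} \approx 0.0000106 \approx 0.001\%
\]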

4. The AFTE Methodology Does Not Have An Objective Standard

Under the fourth Daubert factor, the Court considers whether the field of firearm examination has a set of standards. “[T]his is the Daubert factor on which firearm toolmark analysis scores the lowest.” Shipp, 422 F.Supp.3d at 779. The government contends that published manuals such as the AFTE Training Manual, the AFTE Technical Procedures Manual, and the AFTE Theory of Identification support a finding that there is a set of objective standards. Opp'n at 21. The AFTE Theory of Identification, “the primary standard that governs the discipline,” explains that “examiners can conclude that a firearm and cartridges have a common origin when a comparison of toolmarks shows there is ‘sufficient agreement.'” Harris, 502 F.Supp.3d at 41; see also Johnson, 2019 WL 1130258 at *17 (the AFTE standard for toolmark identification is “sufficient agreement”).

However, “both courts and the scientific community have voiced serious concerns about the ‘sufficient agreement' standard, characterizing it as ‘tautological,' ‘wholly subjective,' ‘circular,' ‘leav[ing] much to be desired,' and ‘not scientific.'” Johnson, 2019 WL 1130258 at *17. As another court has explained, under the sufficient agreement standard “matching two tool marks essentially comes down to the examiner's subjective judgment based on his training, experience, and knowledge of firearms.” Romero-Lobato, 379 F.Supp.3d at 1121; see also Harris, 502 F.Supp.3d at 41 (same). The government's opposition does not directly address any of Defendants' arguments, or the relevant case law, about the subjectivity of the sufficient agreement standard. Rather, the government notes that the subjective aspect of the methodology “is not fatal to admissibility.” Opp'n at 22.

The Court thus concludes this factor weighs against a finding of reliability, but agrees with the government that such a conclusion is not fatal to admissibility. See Harris, 502 F.Supp.3d at 42 (concluding the factor weighs against admissibility but is not “an immediate bar to admissibility” because even “a partially subjective methodology is not inherently unreliable”).

5. There Is Sufficiently General Acceptance in the Relevant Community

Under the fifth Daubert factor, the Court considers whether firearm examination enjoys general acceptance in the scientific community. Courts have generally concluded that “[t]here is no dispute here that toolmark identification analysis is a generally accepted method in the community of forensic scientists, and firearms examiners in particular.” Johnson, 2019 WL 1130258, at *19. Moreover, “[e]ven courts that have been critical of the validity of the discipline have conceded that it does enjoy general acceptance as a reliable methodology in the relevant scientific community of examiners.” Harris, 502 F.Supp.3d at 40. In addition to the 209 accredited labs in the United States, the government points to accredited labs practicing firearm identification throughout the world, including in “England (Scotland Yard), New Zealand, Canada, South Africa, Australia, Germany, Sweden, Greece, Turkey, China, Mexico, Singapore, Malaysia, Belgium, Netherlands, and Denmark.” Opp'n at 23.

Here again, Defendants rely on a small minority of cases which have found that firearm examination does not enjoy “general acceptance” in the scientific community because several scientists outside the firearms expert community have been critical of the methodology. Mot. at 44; see also Tibbs, 2019 WL 4359486 at *21; Adams, 444 F.Supp.3d at 1266; Shipp, 422 F.Supp.3d at 783; People v. Ross, 68 Misc.3d 899, 913 (N.Y. Sup. Ct. 2020). In each of these cases, the court expanded the definition of the relevant scientific community beyond the firearm expert community to include at minimum the members of the 2009 NRC and 2016 PCAST reports, who were “admittedly not members of the forensic ballistic community” but were “preeminent scientists and scholars” who were “undoubtedly capable of assessing the validity of a metrological method.” Shipp, 422 F.Supp.3d at 782-83.

Setting aside whether this is a correct definition of the relevant scientific community under Daubert, the Court respectfully disagrees with the minority view. In Daubert, the United States Supreme Court explained that this “general acceptance” factor can include an “express determination of a particular degree of acceptance within that community.” 509 U.S. at 594. Thus, “[w]idespread acceptance can be an important factor in ruling particular evidence admissible,” while “a known technique which has been able to attract only minimal support within the community, may be properly viewed with skepticism.” Id. (emphasis added).

Here, there is no question on the record that, even including the non-firearm experts from the NRC and PCAST reports as well as defense experts, there is still overwhelming acceptance in the United States and worldwide of firearm identification methodology. More importantly, even accepting Defendants' implicit premise that more weight should be given to neutral scientists and independent scientific organizations, Defendants cannot show that the technique in question has “only minimal support” so as to warrant skepticism. See id.; see also Romero-Lobato, 379 F.Supp.3d at 1122 (“[T]echniques do not need to have universal acceptance before they are allowed to be presented before a court.”).

Accordingly, the Court finds this factor weighs in favor of admissibility.

CONCLUSION

In sum, four of the five Daubert factors weigh in favor of reliability. Balancing these factors, the Court concludes that the AFTE methodology is generally relevant and reliable. Moreover, given the government's voluntary limitations on the expert testimony, Opp'n at 24, the Court concludes that no additional limiting instructions are necessary because the government's representations adequately address any potential concerns with error rates or the subjective nature of the AFTE methodology. Defendants may cross-examine the government's experts and present their own ballistics experts. Under Ninth Circuit law, these safeguards are sufficient and no additional limitations are necessary. See Johnson, 875 F.3d at 1281.

For all these reasons, the Court DENIES Defendants' Daubert motion to limit the presentation of ballistics comparison evidence by the government.

IT IS SO ORDERED.

