
United States v. Blackman

United States District Court, Northern District of Illinois
May 12, 2023
No. 18-CR-00728 (N.D. Ill. May. 12, 2023)

Opinion


UNITED STATES OF AMERICA, Plaintiff, v. ROMEO BLACKMAN, TERRANCE SMITH, JOLICIOUS TURMAN, and NATHANIEL MCELROY, Defendants.


MEMORANDUM OPINION AND ORDER

John Robert Blakey United States District Judge

This case comes before the Court upon Defendants' joint motion to exclude the Government's Ballistics/Toolmarks Experts, [256]. Defendants Romeo Blackman, Terrance Smith, Jolicious Turman, and Nathaniel McElroy together move to bar or limit the testimony of four government ballistics and toolmarks experts pursuant to Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993); Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999); and Federal Rules of Evidence 702 and 403. For the reasons explained, the Court denies the motion to exclude, but grants, to the degree set forth herein, the motion to limit testimony.

The Court may exercise its discretion and admit expert testimony without conducting a Daubert hearing. Kumho Tire, 526 U.S. at 152-53. The parties agreed no hearing was necessary here, and thus the Court rules based upon the subject matter and content of the written submissions. [256]; [267].

The parties agreed that the record shall include background materials in this discipline as well as the materials submitted.

I. Proposed Firearms and Toolmarks Identification Testimony

Firearms and toolmark identification constitutes a forensic discipline in which examiners seek to evaluate, through comparison, whether an evidentiary sample is or is not associated with a source sample. The Association of Firearms and Toolmark Examiners (“AFTE”), a professional organization for practitioners of firearm and toolmark identification, has developed a theory that an examiner's identification should fall within one of four categories: (1) identification, meaning that the pieces of evidence come from the same source; (2) elimination, meaning that they come from different sources; (3) inconclusive, meaning there is not enough evidence for the examiner to reach either of the first two conclusions; and (4) unsuitable, meaning that the recovered evidence lacks discernable class and individual characteristics. United States v. Shipp, 422 F.Supp.3d 762, 771 (E.D.N.Y. 2019) (internal citations omitted).

The theory relies upon the premise that “tools used in the manufacture of a firearm leave distinct marks on various firearm components, such as the barrel, breech face or firing pin.” United States v. Otero, 849 F.Supp.2d 425 (D.N.J. 2012). The theory further posits that:

[T]he marks are individualized to a particular firearm through changes the tool undergoes each time it cuts and scrapes metal to create an item in the production of the weapon. Toolmark identification thus rests on the premise that any two manufactured products, even those produced consecutively off the same production line, will bear microscopically different marks. With regard to firearms, these toolmarks are transferred to the surface of a bullet or shell casing in the process of firearm discharge. Depending on the tool and the type of impact it makes on the bullet or casing, these surface marks consist of either contour scratch lines, known as striations (or striae), or impressions. For example, rifling (spiraled indentations) inside of a gun barrel will leave raised and depressed striae, known as lands and grooves, on the bullet as it is fired from the weapon, whereas the striking of the firing pin against the base of the cartridge, which initiates discharge of the ammunition, will leave an impression but not striae.
Comparing a test bullet or cartridge fired from a firearm of known origin to another bullet or cartridge of unknown origin, the examiner seeks to determine congruence in the pattern of marks left on the examined specimens. This process is known as “pattern matching.” ... An examiner observes three types of characteristics on spent bullets or cartridges: class, subclass and individual. Class characteristics are gross features common to most if not all bullets and cartridge cases fired from a type of firearm, for example, the caliber and the number of lands and grooves on a bullet. Individual characteristics are microscopic markings produced in the manufacturing process by the random imperfections of tool surfaces (the constantly changing tool as described above) and by use of and/or damage to the gun post-manufacture. According to the theory of toolmark identification espoused by the Association of Firearms and Toolmarks Examiners (“AFTE”), individual characteristics “are unique to that tool and distinguish it from all other tools.” Subclass characteristics generally fill the gap between the class and individual characteristics categories. They are produced incidental
to manufacture but apply only to a subset of the firearms produced, for example, as may occur when a batch of barrels is formed by the same irregular tool.
Id. at 427-28 (quoting Theory of Identification as it Relates to Toolmarks, 30 AFTE J. 1, 87 (Winter 1998)).

The AFTE theory of toolmark comparison permits an examiner to conclude that two bullets or two cartridges are of common origin if the microscopic surface contours of their toolmarks are in “sufficient agreement.” In turn, “sufficient agreement” requires:

significant duplication of random toolmarks as evidenced by the correspondence of a pattern or combination of patterns of surface contours. Significance is determined by the comparative examination of two or more sets of surface contour patterns comprised of individual peaks, ridges and furrows. Specifically, the relative height or depth, width, curvature and spatial relationship of the individual peaks, ridges and furrows within one set of surface contours are defined and compared to the corresponding features in the second set of surface contours. Agreement is significant when the agreement in individual characteristics exceeds the best agreement demonstrated between toolmarks known to have been produced by different tools and is consistent with agreement demonstrated by toolmarks known to have been produced by the same tool. The statement that “sufficient agreement” exists between two toolmarks means the agreement of individual characteristics is of a quantity and quality that the likelihood another tool could have made the mark is so remote as to be considered a practical impossibility.

Ass'n of Firearm & Tool Mark Examiners, Theory of Identification as it Relates to Tool Marks: Revised, 43 AFTE J. 287 (2011); see also Keith L. Monson, et al., Planning, Design and Logistics of a Decision Analysis Study: The FBI/Ames Study Involving Forensic Firearms Examiners, Forensic Sci. Int'l: Synergy 4 (2022) (hereinafter “Monson et al.”) (“The Theory allows for an opinion of common origin (identification) when the surface contours of two toolmarks are in ‘sufficient agreement.' Sufficient agreement is decided when the level of microscopic agreement is similar to the microscopic agreement seen from specimens known to have originated from the same source and exceeds microscopic agreement occurring between the Best-Known Non-Match (worst case scenario).”). The AFTE agrees that the interpretation of individualization/identification is subjective in nature, although it emphasizes that it remains “founded on scientific principles and based on the examiner's training and experience.” 43 AFTE J. 287 (2011).

Here, the Government seeks to offer four experts to testify about cartridge casings recovered from the scenes of several murders and whether they “were fired” from particular firearms recovered by law enforcement during their investigation. [267] at 4-5. More specifically, the parties anticipate that: (1) Gregory Hickey will testify that “casings associated with the bullets that killed Krystal Jackson ‘were fired from' a gun recovered from Quincent Hayes;” (2) Aimee Stevens will testify that “cartridges found near the scene of the Davon Horace murder ‘were fired' from a certain weapon;” (3) Diana Pratt will testify that “a cartridge found near the scene of the Stanley Bobo murder ‘was fired' from a certain weapon;” and (4) Brian Sokeniewicz will testify that “cartridge casings found near the scene of the Andre Donner murder ‘were fired' from a certain weapon.” Id. at 6; [256-1]. Notably, the Government does not intend to elicit testimony that the experts' testing would be “100% certain” or “to the exclusion of any other firearm in the world.” [267] at 2. Instead, each will testify as noted above and that based upon their training and experience, they “would not expect another firearm to make the exact same marks that were identified on a shell casing, cartridge, or bullet.” Id. at 3.

Hickey, Stevens, and Pratt work as forensic scientists for the Illinois State Police, while Sokeniewicz works at the Chicago Police Department Forensic Science Unit. See [267-1]. Both laboratories are accredited, employ AFTE methodology, and require peer review of all identifications. [267] at 13.

II. Rule 702 Analysis

Federal Rule of Evidence 702 provides that a person may testify as an expert if: (1) the testimony will “help the trier of fact to understand the evidence or to determine a fact in issue”; (2) the testimony is “based on sufficient facts or data”; (3) the testimony is “the product of reliable principles and methods”; and (4) the expert has “reliably applied the principles and methods to the facts of the case.” Fed.R.Evid. 702. Under this rule, the district courts serve a gatekeeping function to prevent the admission of irrelevant or unreliable testimony. See Daubert, 509 U.S. at 597; Lapsley v. Xtek, Inc., 689 F.3d 802, 809 (7th Cir. 2012). The proponent of the evidence bears the burden of establishing its admissibility by a preponderance of the evidence. See Lewis v. CITGO Petroleum Corp., 561 F.3d 698, 705 (7th Cir. 2009). So long as the proponent establishes a threshold level of reliability and relevance, then the testimony is admissible under Rule 702. Daubert, 509 U.S. at 596.

To begin, Defendants do not attack the reliability of the Government's experts in particular: they do not, for example, question any expert's credentials or the manner in which any of the experts applied the AFTE methodology to the evidence here. Nor do they attack the relevance of the testimony to the issues at trial. Nor could they, since the Government easily satisfies its burden as to those criteria. Its experts are appropriately credentialed, see [267-1], employed AFTE methodology, and intend to provide testimony that is highly relevant both to Count One (the RICO conspiracy) and to individual VICAR counts.

As the court observed in United States v. Shipp: “it is uncontroversial that toolmark analysis testimony should not be admitted if, for example, examiners reach different conclusions when examining different evidence from the same firearm (i.e. the conclusions must be repeatable), different examiners reach different conclusions (i.e., the conclusions must be reproducible), or examiners make incorrect conclusions (i.e. the conclusions must be accurate).” 422 F.Supp.3d at 774-75. The record indicates no such issues here.

Instead, Defendants attack the experts' testimony under the third factor of Rule 702: whether it is the product of reliable principles and methods. To evaluate the general reliability of expert testimony, Daubert provided a list of five factors that a court may consider: (1) whether a method can or has been tested; (2) the known or potential rate of error; (3) whether the methods have been subject to peer review; (4) whether there are standards controlling the technique's operation; and (5) the general acceptance of the method within the relevant community. Daubert, 509 U.S. at 593-94. This list remains non-exhaustive, however, since “reliability is determined on a case-by-case basis” and the trial court has “the same broad latitude when it decides how to determine reliability as it enjoys in respect to its ultimate reliability determination.” Gopalratnam v. Hewlett-Packard Co., 877 F.3d 771, 780 (7th Cir. 2017) (first quoting C.W. ex rel. Wood v. Textron, 807 F.3d 827, 835 (7th Cir. 2015), and then quoting Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137, 142 (1999)).

Before turning to any enumerated factors, however, this Court addresses one of Defendants' global arguments about the reliability of the AFTE method. Defendants doubt the fundamental premises of the discipline itself as a scientific matter, namely, that manufacturing processes leave unique, identifiable toolmarks on firearm components, which in turn leave identifiable markings on ammunition. [256] at 20-22. They also question whether such processes create markings that remain consistent over time.

Defendants suggest that “the manufacturing process has completely undermined the uniqueness assumption.” [257] at 20. They rely heavily upon a 2008 report published by the National Research Council, which called for further research in firearms/toolmarks analysis. See Nat'l Rsch. Council, Ballistics Imaging Report (2008) (“2008 NRC Report”). The 2008 NRC Report indicated that “the validity of the fundamental assumptions of uniqueness and reproducibility has not yet been fully demonstrated,” and called upon the field to generate further validation studies. Id. at 81. Nonetheless, it also acknowledged that “firearms-related toolmarks are not completely random and volatile; one can find similar marks on bullets and cartridge cases from the same gun.” Id. at 3. Another NRC publication released a year later echoed the call for further research to validate the fundamental assumptions of this and other forensic fields. See Nat'l Rsch. Council, Strengthening Forensic Science in the United States: A Path Forward (2009) (“2009 NRC Report”).

Defendants argue that the 2008 and 2009 NRC Reports are new and devastating to the admissibility of firearms/toolmarks expertise. Not so. The courts have considered the matter for well over a decade, and none has categorically excluded this type of testimony based upon these reports. See United States v. Harris, 502 F.Supp.3d 28, 35 (D.D.C. 2020) (“[T]he 2008 Ballistic Imaging Report and the 2009 National Academy of Science Report are both ‘outdated by over a decade' due to intervening scientific studies and as a result have been repeatedly rejected by courts as a proper basis to exclude firearm and toolmark identification testimony.”); United States v. Lee, No. 16-CR-641, 2022 WL 3586164, at *7 (N.D. Ill. Aug. 22, 2022) (noting that, following the 2008 and 2009 NRC Reports and other critical commentary, courts “unanimously continue to allow firearms identification testimony, finding that cross examination, as opposed to exclusion, is the appropriate remedy to counter the criticisms”). Notably, the Seventh Circuit recently affirmed a district court's decision not to give the NRC reports, and subsequent critiques, dispositive weight as to admissibility. United States v. Brown, 973 F.3d 667, 703-04 (7th Cir. 2020).

While providing no basis for the exclusion of the field or the opinions themselves, Defendants' general critique also challenges the degree of certainty that ballistics experts' opinions often present: they suggest that experts in this field inappropriately attest that their identifications can be supported by absolute scientific certainty. [256] at 18 (citing 2008 NRC Report at 82). To do so is obviously inappropriate, and such absolute conclusions will not be permitted at trial. By its own terms, AFTE methodology relies upon individual examiners' observations, not scientific formulas or mathematical calculations. See United States v. Monteiro, 407 F.Supp.2d 351, 354 (D. Mass. 2006) (noting “the process of deciding that a cartridge case was fired by a particular gun is based primarily on a visual inspection of patterns of toolmarks, and is largely a subjective determination based on experience and expertise”). The Government does not deny this fact; its cited sources acknowledge the subjectivity involved in firearms/toolmarks examination, noting that experts in this field rely upon experience and training to render identifications. See Richard Gryzybowski et al., Firearms/Toolmarks Identification: Passing the Reliability Test Under Federal and State Evidentiary Standards, 35 AFTE J. 2, 4 (2003) (hereinafter “Gryzybowski et al.”).

Overall, for the reasons discussed below and despite the unpersuasive questions raised by the 2008 and 2009 NRC Reports, this Court finds that the proffered ballistics expertise meets the reliability requirements of Daubert and Rule 702. As discussed further herein, numerous studies, including those conducted in the time since the NRC Reports were released, have continued to bolster the underlying premises of the field. The Court turns to a review of the Daubert factors to further explain its reliability analysis.

1. Testability Factor

The first Daubert factor asks whether a technique “can be (and has been) tested.” Daubert, 509 U.S. at 592. As described in the Advisory Committee Notes to Rule 702, “testability” refers to “whether the expert's theory can be challenged in some objective sense, or whether it is instead simply a subjective, conclusory approach that cannot be reasonably assessed for reliability.” Testability enables meaningful cross-examination, for while it is impossible to disprove an unfalsifiable proposition, test results can be presented to a jury for evaluation. As Defendants note, this prong requires empirical, rather than adversarial, testing. [256] at 12.

Ostensibly, Defendants dispute the Government's argument (often made in response to defense challenges to fingerprint experts) that adversarial testing is sufficient to satisfy this Daubert factor. See [257] at 18. The Government does not rely upon that argument here.

AFTE theory can be (and has been) tested. As the Government points out, numerous validation studies have examined the method's propositions. See Otero, 849 F.Supp.2d at 432 (collecting studies); Monson et al. at 2-3 (providing an overview of recent work in the field). Indeed, multiple studies “have been published documenting the ability of F/T examiners to correctly identify breech face marks after repetitive firings of the same firearm and to differentiate and identify those produced from consecutively manufactured slides or breech bolts. Other studies have investigated the ability of F/T examiners to correctly identify bullets fired from the same barrel and to distinguish those fired from consecutively manufactured barrels.” Monson et al. at 3. Validation studies have probed not only the average case but have specifically chosen consecutively manufactured items for study “because they are universally acknowledged to present the greatest challenge to distinguish due to their similarity in individual characteristics and likelihood of exhibiting subclass characteristics.” Id.

In arguing to the contrary, Defendants cite a single, out-of-circuit district court case from 2005 for the proposition that the theories underlying firearms identification “have never been tested in the field in general,” [256] at 7 (quoting United States v. Green, 405 F.Supp.2d 104, 119 (D. Mass. 2005)), and then suggest that the fundamental premise of uniqueness is “unfalsifiable.” [256] at 19.

This Court does not find Green persuasive, for the district court there did not critique or even acknowledge any efforts at empirical testing, despite other courts' near-unanimous agreement that such tests can be, and indeed have been, conducted. Further, the Green decision came out nearly two decades ago and, as discussed above, significant validation studies have been published since then. And while it is indeed practically impossible to compare every firearm in the world, the studies on which the Government relies certainly test, and validate, the premise that AFTE methodology can be employed to reach reliable identifications, even in cases where similarity would be expected.

Moreover, the trial court's decision in Green was colored by numerous case-specific issues with the examiner, who lacked professional credentials, had never undergone proficiency testing, admitted that he had not followed standard methodology, and did not work for an accredited laboratory. 405 F.Supp.2d at 133-16. Further, the court did not wholly exclude the testimony but only held that the expert could not testify that he observed a match “to the exclusion of all other guns.” Id. at 124. The record here demonstrates no parallel credibility concerns and, as discussed below, the Government agrees that its experts will comply with the same limitation on the degree of certainty of the opinion testimony. Green thus presents no meaningful support for exclusion, or further limitation, of the expert testimony proffered here.

Defendants further argue that insufficient testing has been conducted to establish meaningful distinctions between class, subclass, and individual characteristics. [256] at 17. The Court does not find this a sufficient basis to exclude the testimony. To the contrary, testing of the method as a whole incorporates the application of these concepts. See Monson et al. at 2 (describing the methods examiners employ). Defendants may probe the limits of what validation testing has shown on cross-examination. These factors go to weight, rather than admissibility.

Lastly, and more importantly, this Court's finding that AFTE methodology is testable aligns with the Seventh Circuit's observations in Brown and with those of numerous other courts considering the same question. See Brown, 973 F.3d at 704 (affirming district court's finding that the “AFTE method has been tested”); Colonel (Ret.) Jim Agar, The Admissibility of Firearms and Toolmarks Expert Testimony in the Shadow of PCAST, 74 Baylor L. Rev. 93 (2022) (concluding that virtually every court to consider the testability question has found AFTE theory testable). This factor, then, weighs in favor of the admissibility of the proffered firearms/toolmarks expert testimony.

2. Error Rate Factor

The second Daubert factor considers whether the technique has a high “known or potential rate of error.” Kumho Tire Co., 526 U.S. at 149. In the firearms/toolmarks context, the critical inquiry under this factor “is the rate of error in which an examiner makes a false positive identification, as this is the type of error that could lead to a conviction premised on faulty evidence.” United States v. Harris, 502 F.Supp.3d 28, 39 (D.D.C. 2020).

The Supreme Court has not put a precise number on what constitutes too “high” a rate of error. Certainly, perfection is not required, for expert testimony “is still testimony, not irrefutable fact, and its ultimate persuasive power is for the jury to decide.” Brown, 973 F.3d at 704. In Brown, the Seventh Circuit affirmed the district court's decision that the error rates associated with AFTE methodology do not place it beyond the pale, noting that “although the error rate of this method varies slightly from study to study, overall it is low-in the single digits.” Id.

Defendants repeatedly suggest that the field has not established an error rate. [256] at 17-20. But the Government cites several studies which establish a low rate of error. See Monson et al. (collecting studies which place the error rate in the low single digits); United States v. Cloud, 576 F.Supp.3d 827, 843 (E.D. Wash. 2021) (noting that the recent FBI/Ames study described in Monson et al. produced an estimated false positive error rate in the 0.933%-1.57% range); Otero, 849 F.Supp.2d at 433-34 (finding that proficiency testing data gave rise to error rates between 0.9% and 1.5%).

Defendants do not marshal any competing studies indicating unacceptably high rates of error, nor do they offer any meaningful critique of the Government's cited studies sufficient to cast doubt on their reliability. While studies differ in their precise estimations of error rates, this Court remains satisfied that AFTE methodology gives rise to low rates of error. This factor, too, supports admissibility.

Other courts have entertained countless arguments regarding study designs, the need for black-box studies in the field, and the like. While Defendants do not levy any specific critiques regarding study design here, the Court notes the amount of research that has been conducted in response to the NRC Reports and other critiques and finds that recent research, including several black-box studies, bolsters the low estimated error rates in this field. See Monson et al. at 2-3 (describing recent research in the field and addressing critiques regarding the lack of black-box studies and double-blind assessments); Cloud, 576 F.Supp.3d at 842 (identifying four recent black-box studies).

3. Peer Review and Publication Factor

The next Daubert factor asks whether the methodology employed is subject to peer review and publication. 509 U.S. at 594. While this factor is far from a prerequisite to admissibility, in this case, the Government has shown that the methodology has undergone significant peer review processes. As the Seventh Circuit noted in Brown, “three different peer-reviewed journals address the AFTE method.” Brown, 973 F.3d at 704. Among these, the Government highlights the AFTE Journal, which focuses specifically on articles, studies, and reports on firearm and toolmark evidence. [267] at 12. Prior to publication, articles are peer-reviewed by experts in the field; following publication, an additional process permits interested persons to comment on published articles. Otero, 849 F.Supp.2d at 433. Defendants advance no basis to critique these review processes. The Court thus finds the peer review and publication factor satisfied.

A party waives undeveloped arguments. Schaefer v. Universal Scaffolding & Equip., LLC, 839 F.3d 599, 607 (7th Cir. 2016) (“Perfunctory and undeveloped arguments are waived, as are arguments unsupported by legal authority.”).

4. Governing Standards Factor

The next Daubert factor considers whether the AFTE methodology provides standards to govern the technique's operation. Daubert, 509 U.S. at 594. As described above, AFTE theory employs the “sufficient agreement” standard, which relies upon an examiner's background and training to determine whether matching characteristics exceed the microscopic agreement occurring between the best-known non-match of which the examiner is aware. Monson et al. at 2. The Government adds that, in addition to following the AFTE theory, the examiners in this case also work for accredited laboratories with their own procedural requirements, including documentation and peer review of any identifications. [267] at 13.

Nevertheless, Defendants suggest that “sufficient agreement” is a hopelessly vague standard, which results in “identifications” better characterized as “rank speculation.” [256] at 27. In short, according to Defendants, examiners simply “know a match” when they “see a match.” Id. at 14. Defendants highlight that the AFTE does not require a particular number or quality of class, subclass, or individual markings to reach an identification. See United States v. Glynn, 578 F.Supp.2d 567, 574 (S.D.N.Y. 2008) (relying on this reasoning to distinguish firearms identification from fingerprint analysis); Monteiro, 407 F.Supp.2d at 364 (“This conclusion is not based on any quantitative standard for how many striations or marks need to match or line up. Instead, it is based on a holistic assessment of what the examiner sees.”).

Even though the “sufficient agreement” standard contains a degree of intrinsic subjectivity, this does not undermine the reliability or admissibility of the proffered testimony. In a real way, Defendants' challenge presents a zero-sum proposition, improperly replacing the Rule 702 analysis with an unreasonable requirement of DNA-like certainty or total exclusion. See [257] at 19. They cite no case, however, which has drawn that kind of line. Instead, as Kumho Tire made clear, Daubert applies to all kinds of expertise, whether rooted in scientific testing or experience-based observation. See Kumho Tire, 526 U.S. at 152. Firearm identification evidence “straddles the line between testimony based on science and experience,” Monteiro, 407 F.Supp.2d at 365, but that fact does not render it inadmissible.

Given the rigor of the underlying analysis and methodology, as well as the nature of the proffered opinions (that the experts would not expect, based upon their training and experience, another firearm to produce the markings observed), the Court finds that this factor does not help Defendants either.

5. General Acceptance Factor

The last Daubert factor considers whether AFTE methodology enjoys “general acceptance” in the relevant scientific or technical community. 509 U.S. at 593-94. The standard is general, not universal, acceptance, and courts routinely find competing methodologies reliable.

In this case, Defendants do not argue that AFTE methodology lacks general acceptance in the relevant field; instead, they merely suggest that “storm clouds... are gathering” over toolmarks and firearms identifications, [256] at 6 (citing United States v. Monteiro, 407 F.Supp.2d 351, 355 (D. Mass. 2006)), and cite several skeptics of the methodology.

As the Government notes, however, the AFTE methodology is the primary approach in the field. See Agar at 162. Numerous colleges and universities offer courses in firearm and toolmark identification. See United States v. Wrensford, No. 13-CR-0003, 2014 WL 3715036 at *13 (D.V.I. Jul. 28, 2014). Crime labs undergo accreditation processes, and examiners employing AFTE methodology in those laboratories are subject to routine proficiency testing. See Agar at 162. Put simply, these are the hallmarks of general acceptance.

While some critics have called for further research to support the foundations of the methodology, nothing in the record suggests that the “tides have turned” such that the AFTE methodology lacks general acceptance. A methodology need not be free from critique to satisfy this Daubert factor. After all, Daubert emphasized that “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are traditional and appropriate means of attacking” potential weaknesses or limitations in evidence. Daubert, 509 U.S. at 596. AFTE remains the dominant approach in the field, and thus this Court easily concludes that the general acceptance factor has been satisfied and supports admissibility.

6. Weighing the Factors

Finding the AFTE methodology testable, supported by a low error rate, subject to peer review and publication, and generally accepted in the relevant field, and further finding that the governing standards employed in the process do not render firearms/toolmarks identification an inappropriate area for expert testimony, this Court concludes that the Government has met its reliability burden under Daubert for all four experts.

Defendants raise one other complaint, which does not fit squarely within any of the Daubert factors: they claim that pro-prosecution bias must infect the firearms/toolmarks examination discipline, because nearly all of those in the field work for law enforcement agencies. See [256] at 6 (citing 2009 NRC Report at 6 (noting this concern with regard to forensic disciplines as a whole)). Thus, Defendants assert, examiners receive only one suspect weapon and recovered ammunition, and, like “show-up” identifications of suspects, this creates a fatal risk that the examiner will be predisposed to find a match. A handful of courts have noted this concern, but none has found ballistics evidence unreliable or generally inadmissible on this basis. See, e.g., Green, 405 F.Supp.2d at 107-08; United States v. Taylor, 663 F.Supp.2d 1170, 1179 (D.N.M. 2009). Likewise, this Court finds the critique unconvincing and, in fact, finds that the complaint rests on false premises. Such examinations constitute a routine part of countless criminal investigations, naturally conducted by criminal investigators, but only find their way into prosecutions if a match is found. Obviously, in all other instances, the inconclusive or no-match exams do not lead to charges and, naturally, defense attorneys remain unaware of them. While the critique certainly presents a ripe area of cross-examination to attack the weight of this evidence, Defendants' complaints of perceived bias fail to undermine the reliability of the methodology as a whole, which, as noted above, has been thoroughly tested in contexts other than individual casework.

As a result, this Court will permit the proffered firearms/toolmarks experts to testify at trial, subject to the limitations discussed below.

7. Limitations of the Expert Testimony

While the Court finds the methodology described above sufficiently reliable to support the bulk of the proffered expert testimony, certain conclusions (which Defendants suggest the experts might draw) remain beyond the bounds of the Court's reliability finding. First, the Court will require (and the Government has already agreed) that the experts will not testify to 100% certainty or “to the exclusion of any other firearm in the world.” [267] at 2.

Both in their original motion and in Defendant Blackman's motion in limine, Defendants ask this Court to also exclude certain additional conclusions the experts may attempt to draw. Consistent with the findings above, the Court grants this request, in part.

The Defendants point to several judicial decisions which have limited the testimony of experts in this field to varying degrees, apparently, though not explicitly, requesting that this Court employ one or more of these limitations. See, e.g., Green, 405 F.Supp.2d at 108-09 (“I will not allow him to conclude that the shell casings come from a specific pistol ‘to the exclusion of every other firearm in the world.'”); Glynn, 578 F.Supp.2d at 570-75 (expert only permitted to conclude that a match was more likely than not); Monteiro, 407 F.Supp.2d at 372 (experts permitted to testify to a “reasonable degree of ballistic certainty” but not “100% certainty”); United States v. Tibbs, 2019 WL 4359486 (D.C. Super. Ct. Sept. 5, 2019) (expert permitted to testify only that the source “cannot be excluded”). In Defendant Blackman's fifth motion in limine, he also cites the Department of Justice Uniform Language for Testimony applicable to federal ballistics examiners, which requires examiners to testify within a particular set of parameters. See [386] at 9; Dep't of Just., Uniform Language for Testimony and Reports for the Forensic Firearms/Toolmarks Discipline Pattern Examination 2 (2020) (available at www.justice.gov/olp/uniform-language-testimony-and-reports) (hereinafter “DOJ Ballistics ULTR”).

The Government represents that its experts will testify consistent with the state-level requirements applicable to them as state forensic examiners, and that it expects the experts' testimony to comply with Defendants' suggested limitations even absent the Court's requirement. Because the precise contours of each expert's testimony have not been proffered, the Court nonetheless articulates the restrictions it deems necessary.

The Court has reviewed the various iterations of limitations Defendants propose and finds the following limitations reasonable and necessary to ensure the reliability of testimony pursuant to Rule 702.

As noted above, the Government has already mitigated concerns about the degree of certainty with which its experts will testify in this case, agreeing in advance that its experts will not testify to 100% certainty in their identifications but rather that, based upon their training and experience, they would not expect any other firearm to produce the markings observed. In the same vein, this Court holds that the experts shall not use language that implies the methods are an exact science or reflect any specific statistical degree of certainty (100% or otherwise).

Defendants levy a critique against the way statistics have been used in forensic disciplines broadly, arguing that forensic scientists inappropriately estimate the likelihood of error using misleading calculations: for example, calculating the likelihood of a coincidental match without adequate “base-rate” data about how rare a particular characteristic is. See [257] at 8-11. The Government does not argue that sufficient data exists to permit reliable determination of the probability of an accidental match in the firearms/toolmarks context through statistical reasoning. As such, the experts should refrain from testifying about statistical guarantees (e.g. there is a “one in a million chance” the match is a coincidence). By contrast, testimony about empirical error rate research, where based upon appropriate and relevant data, remains admissible.

Defendants also highlight that error rates cannot be developed for an individual examiner's casework, because there is no “answer key” in the real world. [256] at 25. Defendants suggest that examiners in past cases, knowing this, have inappropriately relied upon the volume of their previous examinations to bolster the jury's impression that the results of a given case are accurate. [256] at 24. The Court excludes misleading testimony of this sort here, adopting the line that the Department of Justice has set forth: “An examiner shall not cite the number of examinations conducted in the forensic firearms/toolmarks discipline performed in his or her career as a direct measure for the accuracy of a conclusion provided. An examiner may cite the number of examinations conducted in the forensic firearms/toolmarks discipline performed in his or her career for the purpose of establishing, defending, or describing his or her qualifications or experience.” See DOJ Ballistics ULTR at 3. The Court directs the examiners in this case to follow the same limitation.

Defendants' proposed limitations would also bar the Government's experts from using the terms “individualize” and “uniqueness” to describe the identification, see id., but this Court finds such a request impracticable given that similar terms remain core to parts of the AFTE methodology. So long as the experts acknowledge, as the Government has indicated they will, that the “uniqueness” of which they speak is not absolute or “to the exclusion of any other firearm in the world,” the Court finds the proffered testimony remains appropriate under Daubert.

Beyond the limits noted above (which remain consistent with DOJ guidelines), this Court imposes no further restrictions.

III. Rule 403 Analysis

As an alternative to their arguments under Daubert and Rule 702, the Defendants ask this Court to exclude the firearms identification testimony under Rule 403, which provides that the trial court “may exclude relevant evidence if its probative value is substantially outweighed by a danger of one or more of the following: unfair prejudice, confusing the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence.” Defendants argue that the aura of infallibility attached to forensic experts in the “CSI” era creates unfair prejudice to Defendants, and that the experts' conclusions are so unreliable as to lack any probative value.

Finding the proffered testimony sufficiently reliable under Rule 702, this Court further finds its probative value significant. The expert testimony will assist the jury in understanding the forensic evidence collected by law enforcement. The limitations discussed above serve to exclude the testimony from which any potential danger of unfair prejudice might arise. Testimony within those bounds does not give rise to unfair prejudice, for prejudice is not unfair where it stems from the probative value of the testimony. While Rule 403 provides an alternative basis for the Court's previously articulated limitations, the Court does not exclude any additional testimony under Rule 403.

IV. Conclusion

Defendants' joint motion to bar the ballistics experts' testimony is denied. Their motion, in the alternative, to limit the testimony is granted to the degree described above.

