Earley Info. Sci. v. Omega Eng'g, Inc.

United States District Court, D. Massachusetts
Dec 10, 2021
575 F. Supp. 3d 242 (D. Mass. 2021)

Opinion

Civil Action No. 19-10364-FDS

EARLEY INFORMATION SCIENCE, INC., Plaintiff, v. OMEGA ENGINEERING, INC., Defendant.

Evans Huber, Jeremiah W. Doyle, V., Frieze Cramer Rosen & Huber LLP, Wellesley, MA, for Plaintiff. Joseph C. Merschman, Laura Ann Froning, Wiggin and Dana, New Haven, CT, for Defendant.


MEMORANDUM AND ORDER ON PLAINTIFF'S MOTION TO EXCLUDE OPINIONS AND TESTIMONY OF JOSEPH J. SHEPLEY, III, PH.D.

SAYLOR, C.J.

This is a lawsuit arising out of a contract for data-organization services. In 2017, Omega Engineering, Inc., retained Earley Information Science, Inc., to organize and migrate product data to modernize Omega's e-commerce platform. For reasons disputed by the parties, that effort was largely unsuccessful. Earley has sued Omega for breach of contract and several related claims. Omega has asserted counterclaims arising out of the same contract.

Earley has filed a motion to exclude the testimony of Omega's expert, Joseph J. Shepley, III, Ph.D. For the reasons set forth below, that motion will be granted.

I. Standard of Review

Federal Rule of Evidence 702 provides as follows:

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:

(a) the expert's scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

Fed. R. Evid. 702. The adoption of Rule 702 in its present form codified the standard of admissibility for expert testimony that was set forth in Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993). United States v. Diaz, 300 F.3d 66, 73 (1st Cir. 2002).

Under Rule 702, district courts considering the admissibility of expert testimony must "act as gatekeepers, ensuring that an expert's proffered testimony ‘both rests on a reliable foundation and is relevant to the task at hand.’" Samaan v. St. Joseph Hosp., 670 F.3d 21, 31 (1st Cir. 2012) (quoting Daubert, 509 U.S. at 597, 113 S.Ct. 2786). That gatekeeping function requires that the court consider three sets of issues: (1) whether the proposed expert is qualified by "knowledge, skill, experience, training or education"; (2) whether the subject matter of the proposed testimony properly concerns "scientific, technical, or other specialized knowledge"; and (3) "whether the testimony [will be] helpful to the trier of fact, i.e., whether it rests on a reliable foundation and is relevant to the facts of the case." Bogosian v. Mercedes-Benz of N. Am., Inc., 104 F.3d 472, 476 (1st Cir. 1997) (quoting Fed. R. Evid. 702) (internal quotation marks omitted). "These two requirements—a reliable foundation and an adequate fit—are separate and distinct." Samaan, 670 F.3d at 31.

The requirement that an expert's testimony must be based on reliable methods is often the "central focus of a Daubert inquiry." Ruiz-Troche v. Pepsi Cola of P.R. Bottling Co., 161 F.3d 77, 81 (1st Cir. 1998). In Daubert, the Supreme Court enumerated a non-exhaustive list of factors that a court may consider in undertaking its reliability analysis: (1) whether the scientific theory or technique can be (and has been) tested; (2) whether it has been subjected to peer review and publication; (3) whether it has a known rate of error; (4) whether there are standards controlling its application or operation; and (5) whether it is generally accepted in the relevant scientific community. Daubert, 509 U.S. at 593-94, 113 S.Ct. 2786; see also Samaan, 670 F.3d at 31-32.

Less centrally, but importantly, Rule 702 also requires the court to examine whether those methods have been reliably applied. In other words, the court must "ensure that there is an adequate fit between the expert's methods and his conclusions." Samaan, 670 F.3d at 32 (citing Daubert, 509 U.S. at 591, 113 S.Ct. 2786). "This prong of the Daubert inquiry addresses the problem that arises when an expert's methods, though impeccable, yield results that bear a dubious relationship to the questions on which he proposes to opine." Id. (citing Daubert, 509 U.S. at 591-92, 113 S.Ct. 2786).

In evaluating whether expert testimony will be helpful to the trier of fact, the court must determine whether it is relevant, "not only in the sense that all evidence must be relevant, but also in the incremental sense that the expert's proposed opinion, if admitted, likely would assist the trier of fact to understand or determine a fact in issue." Ruiz-Troche, 161 F.3d at 81 (citations omitted); see also Cipollone v. Yale Indus. Prods., Inc., 202 F.3d 376, 380 (1st Cir. 2000) ("The ultimate purpose of the Daubert inquiry is to determine whether the testimony of the expert would be helpful to the jury in resolving a fact in issue.").

The focus of the inquiry is on the principles and methodology employed by the expert, not the ultimate conclusions. Daubert, 509 U.S. at 595, 113 S.Ct. 2786. The court may not subvert the role of the factfinder in assessing credibility or in weighing conflicting expert opinions. Rather, "[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence." Id. at 596, 113 S.Ct. 2786; see also Ruiz-Troche, 161 F.3d at 85 (admitting testimony notwithstanding a lack of peer-reviewed publications because the opinion rested upon good grounds generally and should be tested by the "adversary process").

Expert testimony that is admissible under Rule 702 may nonetheless be excluded under Rule 403 "if its probative value is substantially outweighed by the danger of one or more of the following: unfair prejudice, confusion of the issues, misleading the jury, undue delay, wasting time, or needlessly presenting cumulative evidence." Fed. R. Evid. 403; see also Daubert, 509 U.S. at 595, 113 S.Ct. 2786. Thus, expert testimony that is relevant and that passes muster from a scientific or technical standpoint may nonetheless be excluded if it is likely to be misinterpreted or misused by the jury.

II. Analysis

Plaintiff seeks to exclude the testimony of defendant's expert, Dr. Joseph Shepley. Plaintiff challenges both Dr. Shepley's qualifications, an issue that the Court does not reach, and his methods.

Dr. Shepley is the Managing Director of Data and Technology at Ankura Consulting Group. According to defendant, he is a "subject matter expert in developing and deploying data models and other electronic data management tools." (Defendant Opp. at 1). Dr. Shepley was engaged by defendant to provide an expert opinion as to whether Earley's work met the specifications required by Phase 2 of Omega's Product Information Management (PIM) project.

Dr. Shepley concluded that four out of twelve deliverables that Earley was to provide during Phase 2 of the PIM project did not meet 50% of the specifications, and therefore "likely would not serve the purpose for which [each deliverable] was intended." (Shepley Report ¶ 15). The four deliverables were Deliverable 3.5 (SKU Classification Data); Deliverable 3.6 (Data Migration Plan); Deliverable 3.8 (Migrated Data); and Deliverable 3.9 (QA Plan and Results).

Overall, Dr. Shepley concluded that 42.4% of the specifications in the Statement of Work were not met, making it unlikely, in his experience, "that the project would have enabled the successful launch of an e-commerce website such as the one Omega hired [Earley] to assist with." (Shepley Report ¶ 94).

A. Reliable Methods

The central dispute is whether Dr. Shepley's expert testimony is based on reliable methods.

Dr. Shepley's general methodology was as follows. First, he reviewed the descriptions of deliverables listed in the Statement of Work. Second, he "collated the details about each deliverable and created a specification list that enumerates the characteristics agreed to in the [Statement of Work] for each deliverable and organized them in a table." (Shepley Report ¶ 34(b)). Third, he classified and reviewed the subset of documents that could be considered deliverables and matched them to deliverables in the Statement of Work. Finally, he formed an opinion as to whether each specification was out of scope; not met by the documents received; met by the documents received; or "could not be evaluated" in relation to the documents reviewed. (Shepley Report ¶ 34(e)).

There are five principal flaws in Dr. Shepley's methodology that, taken together, require exclusion of his opinion under Fed. R. Evid. 702.

First, Dr. Shepley's methods for creating a specification list appear to be arbitrary. Dr. Shepley parsed each narrative description from the Statement of Work into sentences or sentence fragments, and then deemed them as "specifications" that were to be met as part of each deliverable. For example, Deliverable 3.6 was one of the four deliverables that exceeded Dr. Shepley's 50% test. One of the portions of Deliverable 3.6 in the Statement of Work provided as follows:

EIS will craft a detailed document which outlines the approach it will take to migrating the legacy product information into the new PIM environment. The process document will be based on the results of the POC, the selected loading methods for the selected PIM platform, and the project plan.

From that portion of Deliverable 3.6, Dr. Shepley parsed out four specifications:

3.6.6: be a detailed document (not met)

3.6.7: outline the approach EIS would use to migrate the legacy product information into the new PIM environment based on results of the Proof of Concepts ("POC") (met)

3.6.8: include the selected loading methods for the selected PIM platform (not met)

3.6.9: include the project plan (met)

Dr. Shepley does not explain how creating specifications from sentence fragments of deliverables constitutes a reliable method. It is unclear why small items such as "be a detailed document" and "include the project plan" are separate specifications, whereas "outline the approach EIS would use to migrate the legacy product information into the new PIM environment based on results of the Proof of Concepts" is not. If Dr. Shepley had added another specification, such as "be based on the results of the Proof of Concepts," then three, not two, specifications would have been met for that portion of Deliverable 3.6. Thus, as plaintiff notes, Dr. Shepley has designated specifications in "such a way as to increase the number of specifications he says are not met, while decreasing the number of specifications that are met." (Plaintiff Reply at 5). Even assuming he did not intend that result, his methods for deriving the number and substance of specifications appear to be entirely arbitrary—or, at the very least, he does not explain those methods. This is particularly concerning because the number of specifications drives the rest of his analysis, including his ultimate conclusions about plaintiff's contract performance.

It appears that there are other inconsistencies with Deliverable 3.6, as discussed below.

Second, Dr. Shepley's decision to deem certain specifications "out of scope" made his methods less reliable. After parsing each deliverable in the Statement of Work, he arrived at a total of 91 specifications. He then excluded 25 as out of scope, either because the specifications were not document-based, because they duplicated other specifications, or because they were beyond his expertise to evaluate. Specifications that were not document-based included stakeholder meetings and storage on SharePoint. (Shepley Report ¶¶ 11, 34(e)(i)).

Dr. Shepley's report did not explain why those specifications could not be assessed, nor what criteria made a specification document-based. He offered no analysis as to whether those out-of-scope specifications would be material to his opinion about plaintiff's performance. For example, Specification 3.3.7.6 is listed as "out of scope" in Dr. Shepley's Deliverable Specification Matrix. It provides for "[t]raining/knowledge transfer of internal Omega data production team to maintain and extend the information ongoing." (Shepley Report at p. 88). While no expert can take into account the entire universe of data, it is troubling that Dr. Shepley rendered an opinion about the overall completion of each deliverable while also excluding what could be material components. And, as plaintiff argues, by excluding 25 specifications as out of scope, he reduced the denominator for his 50% test, "thereby artificially and arbitrarily increasing the percentage of ‘specifications not met.’" (Plaintiff Mem. at 12).

One example is Deliverable 3.8, which is one of the four deliverables Dr. Shepley claims did not meet 50% or more of the specifications. Dr. Shepley parsed Deliverable 3.8 into eight subcomponents and concluded that one specification was met; two were not met; one could not be evaluated; and four were out of scope. (Shepley Report ¶¶ 65, 67-71). He calculated a 50% failure rate by taking the two specifications not met and dividing them by the four that were either met, not met, or could not be evaluated. However, if the four out-of-scope specifications were included, the denominator becomes eight, not four, and the failure rate drops to 25%, below Dr. Shepley's 50% threshold. Finally, Dr. Shepley's exclusion of 25 specifications as out of scope is not supported by externally validated principles or methods, or at least he does not cite any. And again, when his ultimate conclusions are based on the fraction of specifications not met, any defects in Dr. Shepley's intermediate steps are particularly troubling.

Another judge in this District excluded expert testimony in a securities fraud case based on similar concerns. Bricklayers & Trowel Trades Int'l Pension Fund v. Credit Suisse First Bos., 853 F. Supp. 2d 181, 188 (D. Mass. 2012). There, the expert excluded 22 days from his event study because they were "irrelevant." Id. at 187. The district court ultimately granted the Daubert motion, interpreting these exclusions as "cherry-pick[ing]." Id. at 188. The First Circuit affirmed, emphasizing that "[t]he district court was not obligated to prune away all of the problematic events in order to preserve [the expert's] testimony." Bricklayers & Trowel Trades Int'l Pension Fund v. Credit Suisse Secs. (USA) LLC, 752 F.3d 82, 96 (1st Cir. 2014).

The third issue with Dr. Shepley's report concerns his method of determining that deliverables with a 50% or greater failure rate are not likely to "serve the purpose for which [they were] intended." (Shepley Report ¶ 92). His approach appears to have been created solely for the purpose of this litigation. See In re Nexium (Esomeprazole) Antitrust Litig., 842 F.3d 34, 52 (1st Cir. 2016) (affirming exclusion of expert testimony where only other example of expert's methodology was "unpublished, non-peer-reviewed working paper that [expert] co-authored during the course of this litigation"); McGovern ex rel. McGovern v. Brigham & Women's Hosp., 584 F. Supp. 2d 418, 424 (D. Mass. 2008) (considering factors such as "whether [experts] have developed their opinions expressly for purposes of testifying") (internal citations omitted).

Dr. Shepley acknowledged during his deposition that he had never seen that approach utilized, discussed, or published in the relevant industry. (Shepley Dep. at 87-89). He has offered no basis for that methodology, other than that it is generally derived from his 20 years of experience in the industry. Generic appeals to an expert's experience are no substitute for reliable principles and methods, as other courts have held:

Asked repeatedly during his deposition what methods he had used to generate projections, [the expert] repeatedly answered "my expertise" or some variant ("my industry expertise," "[my] awareness," and "my curriculum vitae")—which is to say that he either had no method or could not describe one. He was relying on intuition, which won't do.... Someone else using the same data and methods must be able to replicate the result. [The expert's] method, "expert intuition," is neither normal among social scientists nor testable—and conclusions that are not falsifiable aren't worth much to either science or the judiciary.

Zenith Elecs. Corp. v. WH-TV Broad. Corp., 395 F.3d 416, 418-19 (7th Cir. 2005); see also Gen. Elec. Co. v. Joiner, 522 U.S. 136, 137, 118 S.Ct. 512, 139 L.Ed.2d 508 (1997) (stating that "[n]othing in either Daubert or the Federal Rules of Evidence requires a district court to admit opinion evidence that is connected to existing data only by the ipse dixit of the expert"); Leese v. Lockheed Martin Corp., 6 F. Supp. 3d 546, 556 (D.N.J. 2014) (excluding expert testimony where expert's 10 percent figure "appear[ed] to have [been] selected ... out of thin air"). Similarly, it is not enough for Dr. Shepley to say that his 50% test is based on his years of experience in the industry, without "explain[ing] how that experience leads to the conclusion reached, why that experience is a sufficient basis for the opinion, and how that experience is reliably applied to the facts." McGovern, 584 F. Supp. 2d at 426 (quoting Brown v. Wal-Mart Stores, Inc., 402 F. Supp. 2d 303, 308-09 (D. Me. 2005)).

Thus, Dr. Shepley's methods have not been tested; they have not been subjected to peer review or publication; they do not have a known rate of error; there are no standards controlling their application or operation; and there is no evidence that they are generally accepted in the relevant scientific community. See Daubert, 509 U.S. at 593-94, 113 S.Ct. 2786; see also Samaan, 670 F.3d at 31-32. It is also concerning that Dr. Shepley's 50% test creates an illusion of mathematical certainty when the underlying procedures are untested and, at times, appear entirely arbitrary.

The fourth issue is that Dr. Shepley, without explanation, weighs all his derived specifications equally. See Leese, 6 F. Supp. 3d at 556 (excluding expert testimony where expert could not "sufficiently explain his weighting"); Davis v. Carroll, 937 F. Supp. 2d 390, 417 (S.D.N.Y. 2013) (admonishing that "[a]t no point in [expert's] report, deposition, or declaration does [expert] clarify the full set of factors that play a role in this analysis, nor does [expert] explain how these factors interact or how much weight each factor is assigned in his calculus"); Laber v. Long View R.V., Inc., 454 F. Supp. 3d 158, 166 (D. Conn. 2020) (excluding expert testimony where expert "provide[d] no analysis as to how [15 factors] impacted his valuation or the weight given to them in that valuation").

In response, defendant contends that "EIS faults Dr. Shepley for refusing to inject subjectivity into his analysis and conclusions, remaining objective in order to provide reliable, unbiased testimony." (Defendant Opp. at 12). But it cannot be true that all specifications are equally important. Although this Court is not an expert in product information management, it nonetheless seems obvious that Specification 3.1.3 (deliver via a Microsoft Word document) is not as important as Specification 3.1.1 (represent EIS’ advice and recommendations on the selection of a PIM platform). That is particularly concerning because Dr. Shepley's ultimate conclusion—that certain deliverables likely did not serve their intended purpose—depends on a mathematical calculation that fails to take into account each specification's relative importance, or indeed weigh them in any way.

Finally, Dr. Shepley excluded at least one item that was "beyond his expertise," which again effectively manipulates the denominator in the percentage of specifications not met. Specifically, he excluded Specification 3.4.1 because it concerned Resource Description Frameworks, a subject outside his expertise. (Shepley Report ¶ 47). The expert's experience, or lack thereof, should not change his calculations about plaintiff's performance.

The Court does not believe that these issues could be addressed by "[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof." Daubert, 509 U.S. at 596, 113 S.Ct. 2786. Under the circumstances, and taken as a whole, the Court cannot conclude that Dr. Shepley's analysis was the product of reliable principles and methods in accordance with Daubert and Rule 702.

Plaintiff also contends that Dr. Shepley's methods are unreliable because he concluded that Deliverable 3.3 (Taxonomy Design) had a failure rate of 45%, which is "a failure rate that would make it less likely to serve the purpose for which it was intended than if it had a failure rate closer to 0%." (Shepley Report ¶ 93). As plaintiff points out, defendant's own Global Director of Master Data Management, Kathleen Wiener, stated in her deposition that the one project she thought that plaintiff performed well was taxonomy. (Plaintiff Ex. A at 176; Plaintiff Reply at 7-8). At a minimum, that evidence surely tends to undermine the reliability of his methodology.

B. Application of Methods

Pursuant to Rule 702, the Court must also ensure that the expert's methods have been reliably applied. Here, even assuming that Dr. Shepley's methods are reliable, there are significant inconsistencies in the application of those methods.

The first inconsistency led Dr. Shepley to withdraw his opinion with respect to Deliverable 3.9 (QA Plan and Results). He originally concluded that Deliverable 3.9 had 80% of specifications unmet and "likely did not serve the purpose for which [it] was intended." (Shepley Report ¶ 92). However, during his deposition, he abandoned that opinion after plaintiff's counsel pointed out several errors. (Shepley Dep. at 175-78). If Dr. Shepley had applied his methods consistently and correctly, the percentage of specifications that were unmet would actually have been 20%. A similar inconsistency can be found with respect to Deliverable 3.6 (Data Migration Plan). It is particularly troubling that Dr. Shepley opined that four deliverables failed to meet 50% or more of their specifications, yet two of those four opinions were based on material mathematical mistakes. See Dart v. Kitchens Bros. Mfg. Co., 253 F. App'x 395, 399 (5th Cir. 2007) (affirming exclusion of expert testimony based on "basic mathematical errors and flaws in methodology"); Dzielak v. Whirlpool Corp., 2017 WL 1034197, at *27 (D.N.J. Mar. 17, 2017) (stating that "mathematical errors and flawed data can of course be significant in the Daubert calculus"); United States v. Mills, 2019 WL 2464782, at *4 (W.D. Pa. June 13, 2019) (excluding expert report where mathematical error and withdrawn portion of opinion called into question "reliability and admissibility").

In the Deliverable Specification Matrix, Dr. Shepley mislabeled Specifications 3.9.2, 3.9.4, and 3.9.5 as "not met," when earlier his report stated that these specifications were unable to be confirmed. (Shepley Report ¶ 77). Correcting those labels changes the percentage of specifications not met from 80% (4 out of 5) to 20% (1 out of 5).

In the Deliverable Specification Matrix, Dr. Shepley mislabeled Specifications 3.6.1 and 3.6.6 as "not met," when earlier his report stated that these specifications were out of scope. (Shepley Report ¶ 56). He also, apparently, mislabeled Specification 3.6.7 as "met," when earlier his report stated that this was "not met." (Shepley Report ¶ 57). Correcting these errors changes the percentage of specifications not met from 58.3% (7 out of 12) to 60% (6 out of 10).

C. Conclusion

If it were a question of one error or one methodological problem, it is perhaps possible that Dr. Shepley's expert opinion could survive analysis. However, the flaws here are simply too pervasive. See Bricklayers, 853 F. Supp. 2d at 191 (stating that "[h]ad [expert's] event study suffered from only one of the four methodological defects identified by this Court, or suffered from those flaws jointly but to a lesser degree, today's ruling might have been different. Given, however, the pervasiveness of [expert's] methodological errors and the lack of congruity between his theory and the data, the Court is compelled to exercise its role as gatekeeper and to exclude his event study as unreliable."). Under the circumstances, his expert opinion does not meet the standards of Fed. R. Evid. 702, and accordingly will be excluded.

III. Conclusion

For the foregoing reasons, plaintiff's motion to exclude the testimony of Joseph J. Shepley, III, Ph.D., is GRANTED.

So Ordered.

