The State of Technology Assisted Review in 2017

It has been almost five years since Judge Andrew Peck of the Southern District of New York first approved the use of technology assisted review (“TAR,” also called “predictive coding” or “computer assisted review”) in Da Silva Moore v. Publicis Groupe. Since Da Silva Moore, courts and government agencies have continued to support the use of TAR to expedite discovery and reduce litigation costs. Attorneys, however, have been slow to adopt TAR even though they recognize its benefits. Despite the statistics and new terminology that surround it, the good news is that TAR is not as complicated as one might think.

Courts Continue to Give TAR a Thumbs Up

“[T]he case law has developed to the point that it is now black letter law that where the producing party wants to utilize TAR for document review, courts will permit it.”[1] Those were the words of Judge Andrew Peck of the Southern District of New York in 2015. And it is true that courts across the country continue to accept and even encourage the use of TAR to reduce costs and create efficiencies in the litigation process. Federal or state courts in California, Delaware, Georgia, Illinois, Minnesota, Nebraska, New Mexico, New York, Pennsylvania, Tennessee, Texas, and Virginia have accepted TAR to search for relevant electronic information since 2012.

Government agencies, including the Department of Justice, the Federal Trade Commission, and the Securities and Exchange Commission, may also authorize the use of TAR so long as the TAR methodology is communicated to the agency. In 2016, courts in Ireland, England, and Australia also approved TAR.[2]

In 2016, a producing party’s TAR results were challenged in Dynamo Holdings v. Commissioner of Internal Revenue.[3] The Commissioner claimed the TAR production was flawed and insufficient and moved to compel a production of documents using keyword searches and manual human review. The court concluded that there was “no question” that Dynamo Holdings satisfied its discovery obligations by using TAR.

The Commissioner’s motion was “predicated on two myths,” according to the court: (1) that human review is accurate and complete and constitutes the gold standard for searching; and (2) that the Rules require a perfect response. Citing studies undermining the accuracy of keyword searches and human review, the court countered that “[t]his myth of human review is exactly that: a myth. Research shows that human review is far from perfect.” The court further explained that the Rules require a “reasonable inquiry,” not “perfection,” and cited Judge Peck, who has stressed that TAR should not be held to a higher standard than keyword searches or manual review.[4]

Several other courts have indicated that TAR can be superior to using search terms and/or human review. In Hyles v. New York City, Judge Peck described TAR as “cheaper, more efficient and superior to keyword searching,” and thus “the best and most efficient search tool” for most cases.[5] The court in Malone v. Kantner Ingredients observed that TAR is “promoted (and gaining acceptance) as not only a more efficient and cost effective method of ESI review, but a more accurate one.”[6] Judge Denise Cote of the Southern District of New York has recognized that TAR “may be more reliable — not just as reliable but more reliable than manual review, and certainly more cost effective[.]”[7] The District of Minnesota entered an order in 2016 governing the use of computer assisted review that was designed to “reap the benefits” of TAR, including “greater accuracy than keyword search or manual review, reduced costs, and the more rapid document delivery time frames[.]”[8]

Despite acknowledging that TAR may be more accurate and certainly less costly, courts typically defer to the producing party to determine the appropriate search methodology. This approach is consistent with Principle 6 of the Sedona Principles:

Responding Parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.[9]

Accordingly, several courts have declined to dictate the search methodology a party must use.[10]

Lawyers Slow to Use TAR

While many courts are sold on the benefits of TAR, attorneys in general have yet to embrace it with the same enthusiasm and endorsement. There is much speculation as to why that is, as well as about how many law firms are actually using TAR and how frequently, but its use still seems to be the exception. Attorney Ralph Losey, a TAR proponent, recently suggested that predictive coding is “for average lawyers today, still just a far off remote mountaintop; something they have heard about, but never tried.”[11]

Two other TAR thought leaders, Maura Grossman and Gordon Cormack, noted:

[A]doption of TAR has been remarkably slow, considering the amount of attention these offerings have received since the publication of the first federal opinion approving TAR use . . . . The complex vocabulary and rituals that have come to be associated with TAR, including statistical control sets, stabilization, F1 measure, overturns, and elusion, have dissuaded many practitioners from embracing TAR.[12]

However, the needle may be starting to move. In a 2016 Exterro survey, 90% of the attorneys surveyed reported seeing benefits in using TAR.[13] As the legal profession becomes more knowledgeable about TAR – and perhaps as TAR software becomes more intuitive to use – the use of TAR should continue to increase.

So How Does TAR Work?

Have you ever wondered how Pandora creates your music playlist, Amazon recommends the next book you should read, or your email filters spam out of your inbox? These are examples of machine learning. Likewise, TAR uses machine learning algorithms, or systematic rules, to determine which documents are likely relevant based on the review of a smaller set of documents by “subject matter experts” – i.e., the attorneys who know the case best, as opposed to contract attorneys.
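
For illustration only, here is a minimal sketch of that idea, written in Python with the scikit-learn library (an assumption of this example, not something the article references): a simple classifier learns from a handful of attorney-coded documents and then scores unreviewed documents for likely relevance. The documents and labels are invented.

```python
# Minimal sketch: learn relevance from a few attorney-coded documents.
# Hypothetical example; the documents, labels, and library choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# "Seed set": documents already reviewed and coded by the subject matter expert.
seed_docs = [
    "Board minutes discussing the disputed transaction",
    "Cafeteria lunch menu for the week of June 5",
    "Email chain negotiating the indemnification clause",
    "Company newsletter announcing the softball league",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant

# Turn the text into numeric features and fit a simple classifier.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score unreviewed documents; higher scores suggest likely relevance.
unreviewed = [
    "Draft amendment to the indemnification clause",
    "Reminder: parking lot repaving on Friday",
]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in zip(unreviewed, scores):
    print(f"{score:.2f}  {doc}")
```

Commercial TAR platforms perform this kind of modeling behind the scenes; the attorney’s contribution is accurate coding decisions, not code.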

TAR software has not developed to the point where a user pushes a “magic button” and out come the relevant documents. However, the TAR process is not overly complex. At a very high level, it involves the following basic steps:

1. Define the parameters of the review including:

  • Data sources and/or custodians to be searched;
  • Time frames to be applied;
  • Measurements for sampling, recall, and precision; and
  • Subject matter expert(s).

2. Prepare a review protocol that defines responsiveness and privilege.

3. The subject matter expert(s) review and code a sample of documents (commonly referred to as the “seed set”) to “train” the system.

4. Apply the training from the seed set to a larger set of documents.

5. The subject matter expert(s) should validate the TAR results through different forms of testing and sampling to determine the recall and precision rates.

6. Evaluate the machine review results and then repeat Steps 3, 4, and 5 above until satisfied with the results (a rough sketch of this loop follows the list).
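
As a rough, hypothetical illustration of Steps 3 through 6 (again in Python with scikit-learn; the document collection, the expert_review stand-in, the sample sizes, and the target rates are all invented), the sketch below trains on the coded documents, scores the collection, checks recall (the share of truly relevant documents the model finds) and precision (the share of flagged documents that are truly relevant) on a reviewed sample, and repeats until the measurements chosen in Step 1 are met.

```python
# Sketch of the iterative TAR loop (Steps 3-6): train, score, validate, repeat.
# Hypothetical example; data, sample sizes, and target rates are invented.
import random

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

TARGET_RECALL, TARGET_PRECISION = 0.75, 0.70  # measurements defined in Step 1


def expert_review(doc):
    """Stand-in for the subject matter expert's coding decision under the Step 2 protocol."""
    return 1 if "contract" in doc else 0


# Invented collection: relevant documents happen to mention "contract".
relevant = [f"contract amendment draft {i}" for i in range(50)]
other = [f"office party memo {i}" for i in range(150)]
collection = relevant + other
random.seed(1)
random.shuffle(collection)

# Step 3: the expert codes an initial seed set containing both kinds of documents.
coded = {doc: expert_review(doc) for doc in relevant[:10] + other[:10]}

for round_num in range(1, 6):
    # Step 4: train on everything coded so far and score the whole collection.
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(list(coded))
    model = LogisticRegression().fit(X, list(coded.values()))
    predicted = dict(zip(collection, model.predict(vectorizer.transform(collection))))

    # Step 5: validate against a random sample the expert reviews by hand.
    sample = random.sample(collection, 40)
    truth = [expert_review(d) for d in sample]
    preds = [predicted[d] for d in sample]
    recall = recall_score(truth, preds, zero_division=0)
    precision = precision_score(truth, preds, zero_division=0)
    print(f"Round {round_num}: recall={recall:.2f}, precision={precision:.2f}")

    # Step 6: stop when satisfied; otherwise code more documents and repeat.
    if recall >= TARGET_RECALL and precision >= TARGET_PRECISION:
        break
    for doc in random.sample(collection, 20):
        coded[doc] = expert_review(doc)
```

In a real matter, the expert_review step is an attorney’s judgment, the sample sizes are chosen with statistical care, and the TAR platform manages the loop; the point is simply that TAR is an iterative train, score, and validate cycle rather than a magic button.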

Considerations Before Using TAR

  • Prepare a TAR protocol that is reasonable and proportional to the needs of the case – so that the search methodology is defensible if challenged.
  • Consider the requirements in your jurisdiction as well as the advantages and disadvantages of disclosing the use of TAR to opposing parties or entering into a stipulated order governing a TAR protocol.
  • Consider entering into a Rule 502(d) order to protect against the inadvertent production of privileged documents.

[1] Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015).

[2] Irish Bank Resolution Corp. v. Quinn; David Brown v. BCA Trading; Pyrrho Investments Ltd. v. MWB Property Ltd.; Money Max v. QBE Ins. Group.

[3] Dynamo Holdings Ltd. P’ship v. Comm’r of Internal Revenue, No. 2685-11, 2016 WL 4204067 (T.C. July 13, 2016).

[4] See also In re Biomet M2a Magnum Hip Implant Products Liability Litig., No. 3:12-MD-2391, 2013 WL 1729682 (N.D. Ind. Apr. 18, 2013) (holding that the defendant’s keyword and TAR search methodologies satisfied its discovery obligations under Rule 26 and Rule 34).

[5] Hyles v. N.Y. City, No. 10CIV3119ATAJP, 2016 WL 4077114, at *2-3 (S.D.N.Y. Aug. 1, 2016).

[6] Malone v. Kantner Ingredients, Inc., No. 4:12CV3190, 2015 WL 1470334, at *3 n.7 (D. Neb. Mar. 31, 2015).

[7] Fed. Hous. Fin. Agency v. HSBC N. Am. Holdings Inc., No. 11 CIV. 6189 DLC (S.D.N.Y. July 24, 2014).

[8] In re Bair Hugger Forced Air Warming Prod. Liab. Litig., No. MDL 15-2666 (JNE/FLN), 2016 WL 3702959, at *1 (D. Minn. July 8, 2016).

[9] The Sedona Principles: Second Edition, Best Practices Recommendations & Principles for Addressing Electronic Document Production, Principle 6 (available at www.TheSedonaConference.org).

[10] In re Viagra (Sildenafil Citrate) Prod. Liab. Litig., No. 16-MD-02691-RS (SK), 2016 WL 7336411, at *1 (N.D. Cal. Oct. 14, 2016); Kleen Prod. LLC v. Packaging Corp. of Am., No. 10 C 5711, 2012 WL 4498465, at *5 (N.D. Ill. Sept. 28, 2012); Hyles, 2016 WL 4077114.

[11] Ralph Losey, “Predictive Coding 4.0” available at: https://ralphlosey.files.wordpress.com/2016/11/predictive_coding_4-0.pdf.

[12] Grossman & Cormack, “Continuous Active Learning for TAR” (Practical Law, April/May 2016), available at: http://cormack.uwaterloo.ca/caldemo/AprMay16_EdiscoveryBulletin.pdf.

[13] “2nd Annual Federal Judges Survey - Views from Both Sides of the Bench” available at: http://www.exterro.com/judges-survey/.