Computer Assisted Review and Reasonable Solutions under Rule 26

December 17, 2009 webinar sponsored by Aphelion Legal Solutions, featuring Anne Kershaw, Patrick Oot, and Michael Roman Geske

  • SLIDE 1 Mike: Welcome, everyone, to today’s webinar about discovery solutions that meet the reasonableness requirements of Rule 26 of the Federal Rules of Civil Procedure. Today’s event, which will focus on how to meet the requirement of reasonable discovery in an age dominated by Electronically Discoverable Information, is provided by Aphelion and the E-Discovery Institute. The slides being used today will be available to you afterwards and are filled with hyperlinks to many of the source materials and other useful information. I’m Mike Geske, and I’m the Chief Operating Officer of Aphelion Legal Solutions.
  • SLIDE 2: Mike Before joining Aphelion about 2 years ago, I was special counsel at Arnold & Porter in Washington, DC, where I litigated for 18 years. After being lucky enough to have some trial experience early in my career, I developed a reputation for successfully managing massive document discovery for clients facing the worst document and privilege problems in the worst kinds of cases, while keeping in mind that the whole point is to win at trial, not just survive the discovery process. Aphelion Legal Solutions consults on and provides professional staffing and management for the same type of work: offshore at our facility in India, through domestic legal staffing, and by providing project management alone for clients who have already lined up their own staff for a project. Over a decade ago, Anne Kershaw was consulting for a client of mine. Back then she was already thinking of ways that technology could help rather than hinder the discovery process. She’s continued to work on that, and she’s also a presenter for today’s webinar.
  • SLIDE 3 Anne: My name is Anne Kershaw. I am an attorney and the founder of A.Kershaw, PC//Attorneys & Consultants, a litigation management consulting firm. Before starting my firm 10 years ago, I was a trial attorney in multi-state mass tort products litigations. We have expertise in electronic discovery and assist corporations and law firms in developing sensible and defensible discovery and data management strategies. I teach at Columbia University, serve on the advisory board of Georgetown’s Advanced E-Discovery Institute, and serve on the advisory board and faculty of Georgetown’s E-Discovery Training Academy. I have also been involved in measuring the effect of technology on litigation processes since the early 1990s and am a co-founder of the E-Discovery Institute.
  • SLIDE 4: Patrick Thanks, Anne. I am Patrick Oot, co-founder of the Electronic Discovery Institute. I am currently working on judicial education and outreach programs through the use of studies and surveys (like the ones we will discuss today). I am also known for my former role as Director of Electronic Discovery and Senior Counsel at Verizon, where I was responsible for Verizon’s e-discovery program. At Verizon I worked on a variety of complex litigation matters and regulatory filings, including rocket docket patent cases and HSR filings, and I did considerable work on FRE 502.
  • SLIDE 5: Ryan/Mike Ryan: If you would like to submit a question, click the Q&A button on the Live Meeting console. (If you’re logged in through a Live Meeting client, you’ll be presented with a Questions and Answers window; click the Q&A tab there, too.) Type your question into the text box and click “ask.” Mike: We will respond to as many questions as possible while we’re all gathered online. To the extent we cannot respond here and now, we will retain your questions, and amongst the three of us and our companies we will strive to provide a response later through email.
  • SLIDE 6: Mike: So, let’s begin. Today’s session will first review the Rule 26 requirement that discovery be requested and produced “reasonably.” We’ll discuss the leading cases that have applied that rule to ESI. Then we’ll discuss four studies about how technology can increase the efficiency of searching for and retrieving ESI. Throughout the presentation, we’ll provide practical advice for all of you that will help you confront ESI successfully and reasonably. As the amount of ESI has increased in recent years, so has the time and cost required to search that ESI and retrieve information relevant to litigation. As everyone attending today’s webinar already knows, the largest category of litigation expense is attorney time spent reviewing documents to select responsive materials and protect privileged items. Current, widely used techniques to search ESI for relevant information are still centered on attorney review of each individual document, the cost of which threatens to swamp the amounts at issue. And the time that such searches still take can significantly delay resolution of the merits. But for better or worse, document by document review by attorneys and legal staff is generally considered the quintessentially “reasonable” method to find and produce responsive information and protect privileged items. This situation can accurately be described as approaching crisis levels for the profession and the justice system. So it is increasingly important that the bench and bar learn about, consider, and when appropriate, begin to gain experience using automated, or at least more automated, ways to search for and retrieve relevant ESI in a manner that meets the reasonableness standards of the discovery rules. Recent studies suggest that currently available automated techniques might be used now in ways that could be defended as “reasonable” under the discovery rules. Those recent studies are why we thought it would be a good idea to bring together the insights of those who participated in them. A lot of the work has been suggested and conducted by the Electronic Discovery Institute.
  • SLIDE 7: Anne: The E-Discovery Institute is a non-profit research institution founded in late 2007 in response to requests from Judges and in-house counsel for more independent, peer-reviewed research on the effects of technology on litigation. This call for research came on the heels of a private study I did in 2005 comparing traditional manual document review with technology assisted document review. That study demonstrated that using technology could reduce the chances of missing relevant documents by as much as 90%. When I met Patrick Oot and he told me he had real data that we could use for further studies, we founded the E-Discovery Institute.  
  • SLIDE 8: Anne The Institute received seed money from a corporate sponsor and since then has obtained its principal funding from an annual fundraiser called the Pizza Party,
  • SLIDE 9: Anne which is held every November immediately following the Georgetown Advanced E-Discovery Institute in Washington, DC.
  • SLIDE 10: Mike: Aphelion sponsors the E-Discovery Institute, and we do so for more than altruistic purposes. Because Aphelion’s work is focused on document discovery, and in the vast majority of cases that discovery is conducted on electronically discoverable information, it’s essential that we not only stay abreast of developments in the field of EDI, but also that we get ahead of the curve, both for purposes of advising our clients and for long-term planning and shaping our business model. A company like Aphelion, both because of its experience and expertise and also its own strategic interests, needs to be aware of such changes and also, to the extent possible, to be involved in setting and articulating those standards. Anne: The important thing for all of us to remember, of course, is that while trying to do discovery as well and efficiently as possible, everyone is obligated to act “reasonably” under Rule 26. Patrick is going to start with that.
  • SLIDE 11: Patrick: The basic rule, of course, is that discovery must be conducted reasonably in light of the circumstances. In other words, the Rules do not demand, and never have demanded, that discovery be done perfectly. Rather, discovery must be conducted reasonably in light of the circumstances. Rule 26(g)(2) sets out the enforceable standard. It requires that every discovery request, response, and objection be signed by counsel. That signature is a certification that “a reasonable inquiry” has been made and that, in light of that “reasonable inquiry,” the request or response is not unreasonable and not unduly burdensome or expensive given the needs of the case, the amount in controversy, and the issues in the litigation. Likewise, Rule 26(g)(1) requires a signature on every disclosure as a certification that it is accurate and complete when it is made by the party “after reasonable inquiry.” The signature-certification is the basis for enforcing reasonableness. Under Rule 26(g)(3), the person signing the discovery request or response, the person represented, or both may be sanctioned by the court if the reasonableness standard is violated “without substantial justification.” Not only can the sanction include attorneys’ fees for work resulting from the unreasonable request or response, but the court can impose a sanction on its own motion, without a request from an opponent.
  • SLIDE 12: Patrick The Rule 26 standard has been applied with explanation in several leading cases: Equity Analytics, O’Keefe, Victor Stanley, and William Gross, the citations for which are here. We suggest reading these cases because they are the leading cases on the subject.
  • SLIDE 13: Patrick In United States v. O’Keefe, the government charged defendants with receiving gifts for expediting visas while working at the Department of State in Canada. The government searched for and produced documents using a method self-selected by a representative from the Department of State: a Boolean query of keyword search terms. In O’Keefe, Judge Facciola ruled that “if defendants are going to contend that the search terms used by the government were insufficient, they will have to specifically so contend in a motion to compel and their contention must be based on evidence that meets the requirements of Rule 702 of the Federal Rules of Evidence.” Interestingly, Judge Facciola commented on the perils of keyword search terms in his “angels fear to tread” opinion.
  • SLIDE 14: Patrick Coincidentally, Judge Facciola revisited his O’Keefe ruling shortly thereafter in the employment case Equity Analytics. Again, Judge Facciola ruled on the difficulty of selecting a search methodology and the court’s need for evidence to determine the validity of the search technique. In essence, this means that not only is testing required to determine the general accuracy of a search device, but also that litigants using a particular tool in a particular case must test the tool’s effectiveness. Doing so will advance your client’s interests in two ways. First, it will help when negotiating and preparing for the Rule 26 conference. Second, if the methods and scope of document retrieval must be resolved by the court, your test results will go a long way toward demonstrating both that your client’s approach is reasonable as required by Rule 26 and that it should be adopted by the court in its resolution of the dispute.
  • SLIDE 15: Patrick In Victor Stanley v. Creative Pipe, a litigant settled on keyword search terms to cull for privilege, with ineffective results. Ruling the search methodology unreasonable, Judge Grimm put forth a multi-factor analysis litigants should deploy when selecting search techniques.
  • SLIDE 16 Mike: The Gross Construction court saw itself as issuing a “wake-up call” to the bar. Just supplying a list of keywords obvious to a lawyer-layman isn’t sufficient. Rather, one must understand how keyword search techniques differ from one another, select the technique best suited to the needs of the litigation, test the results, and disclose. You also need to understand the differences in what is being searched. Here are some practical questions you need to ask in order to help determine which tool to use and to make supportable assertions about that tool’s effectiveness: whether files were extracted prior to being indexed, and the differences between, and best ways to use, proximity searches and Boolean techniques. It’s interesting to see the development of a common theme from the courts.
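
To make the proximity-versus-Boolean distinction concrete, here is a minimal, purely illustrative sketch; the search terms, the sample document, and the five-word window are hypothetical, and real review platforms implement these operators with their own indexing and tokenization rules.

```python
import re

def boolean_and(text, term_a, term_b):
    """Plain Boolean AND: both terms appear anywhere in the document."""
    words = re.findall(r"\w+", text.lower())
    return term_a in words and term_b in words

def proximity(text, term_a, term_b, window=5):
    """Proximity search: the terms appear within `window` words of each other."""
    words = re.findall(r"\w+", text.lower())
    hits_a = [i for i, w in enumerate(words) if w == term_a]
    hits_b = [i for i, w in enumerate(words) if w == term_b]
    return any(abs(i - j) <= window for i in hits_a for j in hits_b)

doc = "The invoice was approved in March. Months later, auditors alleged fraud."
print(boolean_and(doc, "fraud", "invoice"))   # True: both terms appear somewhere
print(proximity(doc, "fraud", "invoice"))     # False: the terms are more than 5 words apart
```

The same document can satisfy one operator and not the other, which is exactly why the choice of technique has to be matched to the needs of the case and then tested.
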
  • SLIDE 17: Patrick In our forthcoming University of Denver Law Review article, Anne, Herb, and I distilled a framework from the Grimm and Peck opinions in the two cases. I think we have a great takeaway outlining what the judges are really asking for. MIKE: But if you’re new to this field or to setting up such tests, be sure that you discuss the testing and the tool with your technical folks so that the testing is analytically sound. As stated by the court in Equity Analytics, "[D]etermining whether a particular search methodology, such as keywords, will or will not be effective certainly requires knowledge beyond the ken of a lay person (and a lay lawyer) . . . ." This doesn’t mean that you have to hire a full-time expert and pay such bills throughout the discovery process. But you do need to invest the time, brain power, and whatever help you and your particular task require, and you need to do so early in the process, probably as early as preparing for the Rule 26 conference.
  • SLIDE 18: Patrick Finally, we should also note that Federal Rule of Evidence 502 imposes a similar reasonableness requirement for protecting privileged materials. Parties must take “reasonable steps” to prevent inadvertent disclosure and to remedy any actual disclosure. In addition, to carry those protections outside an individual case, the discovery regime ordered by the court as a whole must also be “reasonable” in light of the needs and circumstances of the case, the parties, and the court.
  • Without going down a full 502 rabbit hole (we could spend a whole webinar on that subject), I suggest folks check out an article I drafted last spring for the Sedona Conference. In the appendix, I included a model protective order that was the work product of several members. By getting an order with this language, you outline and define reasonable precautions before you have a problem, rather than leaving a litigant to the disparate analyses applied by various courts.
  • Slide 20: Patrick As we saw from the cases, measurement is necessary. Studies develop and refine methods for measuring a discovery technique’s results and how they compare with those of other available techniques. This in turn allows one to compare techniques and determine whether a given technique is suited to a particular discovery task. Sedona Conference Best Practices Commentary on Search & Retrieval Methods in E-Discovery, Practice Point 7: “Parties should expect that their choice of search methodology will need to be explained, either formally or informally in subsequent legal contexts (including in depositions, evidentiary proceedings, and trials).”
  • SLIDE 21: ANNE Duplication is a huge problem in e-discovery. Not only do we see duplicate printed versions of electronic files, we also have duplicates of those electronic files as attachments in email. And then we have massive duplication in email, sending copies to everyone we think might want to know. The Institute’s Director of Metric Development, Joe Howie, suggested that many lawyers might fail to understand the technology of single-instance storage and be missing the advantages of full de-duplication in data sets. Specifically, he believed that attorneys were only de-duplicating email within custodians and not across data populations.   So we did a study. We surveyed leading e-discovery providers to learn what their clients requested with respect to the management of duplicate records. A “duplicate” for purposes of electronic files is an exact copy of the text and a duplicate in email is an exact copy of text, author, recipients, subject and date. We received responses from ACT, BIA, CaseCentral, Clearwell, Daticon, Encore, Fios, FTI, GGO, Iris, Kroll Ontrack, LDM Global, LDSI, Rational Retention, Recommind, StoredIQ Trilantic and Valora.  
  • SLIDE 22: ANNE We found that, on average, single-custodian deduping removes one out of five records, but across-custodian deduping almost doubles that reduction. Review costs are proportional to the volume reviewed, and if a review based on single-custodian deduping cost $500,000, having deduped across custodians would have saved, on average, $106,000 of that money, with prospects of saving up to $200,000 or more. Survey responses indicated that although all of the respondents offered across-custodian deduping, only 52% of projects received across-custodian deduping, 41% received single-custodian deduping, and 7% received no deduping. In light of the potential savings, the obvious question is, “Why don’t all projects have across-custodian deduping?” Anyone seeking more information can obtain both the report and the article published in Law Technology News from the EDI website.
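
For readers who want to see the mechanics behind those numbers, here is a minimal, purely illustrative sketch of how an exact-duplicate key might be built from the definitions above, plus a check of the savings arithmetic using the survey's average reduction figures (21.4% single-custodian, 38.1% across-custodian, from the slide later in the deck). The hashing scheme and field handling are assumptions for illustration, not any particular provider's implementation.

```python
import hashlib

def file_dedup_key(text):
    """Exact-duplicate key for a loose electronic file: the text alone."""
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

def email_dedup_key(text, author, recipients, subject, date):
    """Exact-duplicate key for an email: text, author, recipients, subject, and date."""
    fields = "|".join([text, author, ",".join(sorted(recipients)), subject, date])
    return hashlib.sha1(fields.encode("utf-8")).hexdigest()

# Savings arithmetic, assuming review cost is proportional to the volume reviewed.
single_custodian_share = 1 - 0.214   # share of records left after single-custodian deduping
across_custodian_share = 1 - 0.381   # share of records left after across-custodian deduping
cost_of_single_deduped_review = 500_000
savings = cost_of_single_deduped_review * (
    (single_custodian_share - across_custodian_share) / single_custodian_share)
print(round(savings))  # ~106,000: the extra volume removed by deduping across custodians
```
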
  • SLIDE 23: Patrick: Because of this study, attorneys can feel comfortable asking for de-duplication across custodians and can reliably reduce the cost of the document review. Indeed, full de-duplication may now become the standard for data processing for e-discovery, which would save litigants tremendous sums. Failure to advise your client to deduplicate data is like ordering your client to dump cash into the shredder.
  • SLIDE 24 Mike: TREC is the Text REtrieval Conference, sponsored by NIST (the National Institute of Standards and Technology, an agency of the U.S. Department of Commerce), in cooperation with the Department of Defense. Each year’s program presents a set of tasks to vendors of litigation-document searching products and evaluates their performance. The exercise is intended to inform consideration of industry best practices and articulate standards for evaluating computerized search and retrieval programs. The tasks are modeled on real life circumstances that arise when litigants must discover and produce large amounts of electronically stored information.  
  • SLIDE 26: MIKE This was the fourth year of the Legal Track exercise. Since the beginning, TREC has come to some basic conclusions. The difficulties stem from two basic problems: the inherent ambiguity of human language and human error in reviewing documents. Keyword searches alone miss the great majority of relevant documents. Boolean keyword searches (those which tie terms together with AND, OR, and “within so many words” connectors) usually retrieve only 20-57% of the relevant documents retrieved by using a variety of search methods. Each type of search usually retrieves somewhere around 20% of responsive documents, but different searches usually retrieve different documents, so the responsiveness rate can be additive or cumulative if more than one search method is used. It is important to use sampling to see whether the method being used is retrieving known responsive documents.
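
A small, purely illustrative sketch of why results are cumulative across methods and how a sample of known responsive documents can serve as a check; the document IDs and the three result sets are made up for illustration.

```python
# Hypothetical document IDs retrieved by three different search methods.
keyword_hits = {1, 4, 7, 9}
boolean_hits = {2, 4, 8, 11}
concept_hits = {3, 5, 9, 12}

# Different methods tend to find different documents, so the union is larger
# than any single method's result set.
combined = keyword_hits | boolean_hits | concept_hits
print(len(combined))  # 10 distinct documents, versus 4 from any single method

# Sampling check: estimate recall against documents already known to be responsive
# (for example, from a hand-reviewed sample of the collection).
known_responsive = {1, 2, 3, 4, 5, 6}
print(len(combined & known_responsive) / len(known_responsive))  # about 0.83: five of six found
```
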
  • SLIDE 27: Mike In the end, the results of each tool’s techniques were judged in two ways: for the proportion of responsive documents that were retrieved (recall) and for the proportion of retrieved documents that were actually responsive (precision). The project’s significance has been judicially recognized: Victor Stanley Inc. v. Creative Pipe, 250 F.R.D. 251, 260 n.10 (D. Md. 2008). [T]here is room for optimism that as search and information retrieval methodologies are studied and tested, this will result in identifying those that are most effective and least expensive to employ for a variety of ESI discovery tasks. … This project can be expected to identify both cost effective and reliable search and information retrieval methodologies and best practice recommendations, which, if adhered to, certainly would support an argument that the party employing them performed a reasonable ESI search, whether for privilege review or other purposes. It’s also important because the interactive exercise models how real-life discovery happens. When you begin a review, you’re not familiar with everything that will be responsive. Instead, you learn as the project proceeds, you share that knowledge with your vendor and reviewers, they respond with additional materials, and you improve the results through that iterative process. This is something quite different from merely loading a database and typing in a function key, then producing the results. The TREC retrieval studies are different from, but have been running at the same time as, another study by the E-Discovery Institute.
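
For concreteness, here is a minimal, purely illustrative sketch of how recall and precision are computed; the collection sizes are hypothetical.

```python
def recall_precision(retrieved, responsive):
    """Recall: share of the responsive documents that were retrieved.
    Precision: share of the retrieved documents that are responsive."""
    true_positives = len(retrieved & responsive)
    return true_positives / len(responsive), true_positives / len(retrieved)

# Hypothetical collection: 100 responsive documents; the tool retrieves 120,
# of which 80 are actually responsive.
responsive = set(range(100))
retrieved = set(range(20, 140))
print(recall_precision(retrieved, responsive))  # (0.8, 0.666...): recall 80%, precision ~67%
```
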
  • SLIDE 28: PATRICK The study that Mike just referred to is the EDI Document Categorization Study. For better or worse, the de facto standard for reasonable categorization of documents (deciding whether particular documents are responsive to a request) has been human review of each document to determine whether it is responsive or not, or privileged. The goal of this study was to determine whether automated systems could categorize documents for responsiveness at least as well as human reviewers and thereby meet the needs of the legal community for effective tools to identify the responsive documents in a collection and reduce the costs associated with lawyer review of every individual document. The study was conducted on a dataset consisting of a large set of categorized documents that had been produced in response to a Department of Justice Second Request for an HSR premerger investigation in the telecommunications industry. After eliminating duplicates, 1,600,047 items were reviewed by a team of 225 attorneys, deciding whether each item was responsive or not. That work took 4 months, 7 days a week, 16 hours per day, and cost millions of dollars. After review, a total of 176,440 items were produced to the DOJ. The study enlisted two legal service providers who used their automated categorization tools (computer programs) to perform independent categorizations of the original dataset. The results were compared to the categorization that had been prepared for the DOJ Second Request. The study also used two teams of lawyers from the original project to re-review a random sample of 5,000 documents from the original dataset.
  • SLIDE 29: PATRICK Comparing the results of the original project to the re-review results indicates the amount of agreement that can be expected from the traditional process of using human review projects. To the extent that the computer systems show similar levels of agreement between themselves, the automated tools could be thought of as a reasonable substitute for the traditional process. The level of agreement among human reviewers was not strikingly high. The two re-review teams agreed with the original review on about 76% and 72% of the documents. They agreed with one another on about 70% of the documents. Although low, these levels are realistic; they are comparable to those observed in the TREC studies and other studies. Vendor 1’s tool classified 15.99% of the documents and Vendor 2’s tool classified 16.92% of the documents as responsive, both higher than the proportion identified as responsive by the original team. Vendor 1 agreed with the original classification on 83.2% of the documents and Vendor 2 agreed with the original classification on 83.6% of the documents. This is big news: the computer-assisted systems matched the original review better than a second human review did.
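
The agreement figures above are simple overlap rates; here is a minimal, purely illustrative sketch of how such a rate is computed, with made-up responsiveness calls for ten documents (this is not the study's own scoring code).

```python
def agreement_rate(calls_a, calls_b):
    """Share of documents on which two reviews made the same responsive call
    (simple overlap, not corrected for chance agreement)."""
    matches = sum(a == b for a, b in zip(calls_a, calls_b))
    return matches / len(calls_a)

# Hypothetical responsive (True) / non-responsive (False) calls on ten documents.
original  = [True, False, False, True, False, True, False, False, True, False]
re_review = [True, False, True,  True, False, False, False, False, True, False]
print(agreement_rate(original, re_review))  # 0.8: the two reviews agreed on 8 of 10
```
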
  • SLIDE 30: Mike: This study is a substantial advance for those of us who have to provide legal advice to clients because they are, not unexpectedly, the driving force behind finding ways to reduce the expense of discovery review. The results support the idea that machine-assisted categorization is no less accurate at identifying relevant/responsive documents than employing a team of reviewers. Based on these results, it would appear that using machine-assisted categorization can be a reasonable substitute for purely human review. Moreover, the results of human review are not nearly as impressive as any professional would hope, including the best of the best of document reviewers and lawyers highly skilled in managing large, document-intensive discovery projects using a large number of human reviewers. In addition, this study does not suggest that discovery can be conducted simply by choosing any vendor’s tool, loading the data set, hitting F7, and then providing the results to the requestor. Rather, the discovery task must be matched to the tool and vice versa. That means, at the very least, that the lawyers must know the basic operations of the tool and whether those operations are suitable to the discovery to be undertaken. Finally, there are other easy ways to reduce a client’s discovery costs; most of the time, all you need to do is ask the vendor. So let’s close with a really good example of that, which is the E-Discovery Institute’s study about email threading, something that’s often available just for the asking.
  • SLIDE 31 Anne: Think about email conversations, also called email chains or threads. Similar to the duplicates problem, when you collect email, with each message building on the conversation, you are also re-collecting all the email in the chain below it. In addition, as you review this email, you have no way of knowing whether you are reading the last email of the thread – or not – which can also have profound effects on privilege review. We knew that technologies existed that could find the last email of the chain, and we suspected that attorneys were not using them. An email “thread” is a series of connected emails brought about by recipients replying to or forwarding emails that they have received. While there are many different implementations of email threading, in its most basic form it involves associating the initiating email with all of the subsequent replies or forwards so that reviewers can examine all of the emails within the thread at the same time. We surveyed 13 electronic data discovery providers who offer threading technology. The results suggest that e-mail threading technology can reduce legal review time and cost by more than 33%.
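
As a rough illustration of the basic form described above (and not how any particular provider does it), the sketch below groups messages by a normalized subject line; production tools also rely on Message-ID, In-Reply-To, and References headers and on content analysis, so treat this as an assumption-laden simplification.

```python
import re
from collections import defaultdict

def normalize_subject(subject):
    """Strip reply/forward prefixes so 'FW: RE: Budget' and 'Budget' group together."""
    return re.sub(r"^\s*((re|fw|fwd)\s*:\s*)+", "", subject, flags=re.IGNORECASE).strip().lower()

def build_threads(emails):
    """Group emails into threads by normalized subject, oldest first, so the
    last (most inclusive) message in each thread sits at the end of its list."""
    threads = defaultdict(list)
    for msg in emails:
        threads[normalize_subject(msg["subject"])].append(msg)
    for msgs in threads.values():
        msgs.sort(key=lambda m: m["date"])
    return threads

emails = [
    {"subject": "Budget", "date": "2009-11-02", "author": "alice"},
    {"subject": "RE: Budget", "date": "2009-11-03", "author": "bob"},
    {"subject": "FW: RE: Budget", "date": "2009-11-04", "author": "carol"},
]
for subject, msgs in build_threads(emails).items():
    print(subject, "->", [m["author"] for m in msgs])  # budget -> ['alice', 'bob', 'carol']
```
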
  • SLIDE 32: ANNE In the threading survey, the average number of e-mails per thread was 4.9, with individual respondents reporting e-mails per thread that ran as high as 11 on specific projects. Being able to focus on the one e-mail that contains all of the message content, without having to read (or worry about) earlier ones, creates some obvious advantages — especially when your review software has bulk tagging features that permit users to tag all e-mails in the thread with one click. The study also revealed that very few attorneys requested or even inquired about email threading when purchasing e-discovery processing services. While cost reduction typically is the most discussed advantage of e-mail threading, there is also a qualitative improvement to the review when the same reviewer understands the complete conversation within the context of the entire thread at the time of making relevance and privilege determinations.
  • SLIDE 33: Patrick: This is huge. Every document review should be set up so that the reviewers are reading only the full chain of the email conversation. Hopefully, this study will help to make that happen. If a reviewer can review an entire thread at once by reviewing the longest e-mail in the chain, it could knock out considerable repeat review time.
  • SLIDE 34: Mike: The last decade’s significant increase in affordable computing power is well known to every laptop and cell phone user. But a major problem is that the legal profession and justice system remain largely unaware of parallel increases in computerized search and retrieval techniques and how those techniques can be applied effectively to discovery tasks. To avoid a crisis in the legal profession and to restore a more reasonable way of life for lawyers, courts, and clients who are increasingly and routinely faced with enormous ESI discovery tasks, it is important that all of us at today’s webinar begin to learn about, consider, evaluate, and promote the further development and refinement of computerized search and retrieval techniques. The first step is to learn about the empirical studies that have provided solid grounds to hope that we can successfully improve the current situation and avoid the potential crisis. That requires us to learn what automated techniques purport to do, how effectively they do it, and which techniques are reasonably matched to specific discovery tasks. Today’s webinar has provided the essential elements for all of us to take that first step. The next step is to develop and refine that empirical work until it provides an acceptable level of comfort that automated techniques can and do meet the reasonableness requirements of Rule 26 when they are applied after appropriately matching the tools to the specific discovery tasks in specific cases. At that point, their use will be legally defensible and we can confidently provide legal advice about how clients can protect and advance their legal interests by using them. But that next step requires your assistance. When you see a request for volunteer help that you can fulfill, respond, or consider who amongst your colleagues might have the experience to be able to help. It’s usually not a heavy burden. For instance, my work as a TREC Topic Authority took around 40 hours, stretched through the summer and some of the fall this year. And although this was my first year participating, my comments and suggestions were always taken seriously and professionally. And it gave me entrance to interactions with some of the foremost thinkers and pioneers in the field. It was truly enjoyable.

    1. Solutions that Meet FRCP 26’s Reasonableness Requirements Michael Roman Geske, Esq. Aphelion Legal Solutions Anne E. Kershaw, Esq. A.Kershaw, PC // Attorneys & Consultants Patrick Oot, Esq. The Electronic Discovery Institute
    2. Michael Roman Geske, Esq. Chief Operating Officer, Aphelion Legal Solutions. Aphelion Legal Solutions is a litigation consultancy. We conduct and manage large-scale document discovery review and privilege projects. Aphelion does that work with offshore legal staff at our operations facility in Chennai, India; through domestic legal staffing; and through project management services. Prior to joining Aphelion, Mike litigated for 18 years at Arnold & Porter, LLP in Washington, DC. He has worked with Anne Kershaw for over 10 years. Mike is serving as a Topic Authority in the 2009 TREC Legal Track exercise.
    3. A. Kershaw, P.C. // Attorneys & Consultants (www.AKershaw.com) is a nationally recognized litigation management consulting firm. Ms. Kershaw teaches at Columbia University and the Georgetown University E-Discovery Academy. She is also the President and a founding sponsor of the E-Discovery Institute, a not-for-profit corporation dedicated to testing the use of technology in litigation to resolve discovery challenges facing the legal community (www.eDiscoveryInstitute.org). Anne E. Kershaw, Esq. A.Kershaw, PC // Attorneys & Consultants
    4. Patrick Oot is an experienced corporate attorney and co-founder of The Electronic Discovery Institute, a non-profit organization dedicated to resolving litigation challenges by conducting studies of litigation processes for the benefit of the federal and state judiciary. Mr. Oot is also known for his former role as Director of Electronic Discovery and Senior Counsel at Verizon in Washington, DC. He has extensive experience in discovery practices involving commercial litigation, regulatory filings, and antitrust matters including Hart-Scott-Rodino Second Requests. Mr. Oot is a member of the advisory boards of ALM’s LegalTech, The Georgetown University Law Center’s Advanced eDiscovery Institute, and The Council on Litigation Management. Mr. Oot lectures regularly at educational events and legal conferences internationally, has been interviewed on National Public Radio’s Morning Edition, and appeared in the August 2008 edition of The Economist. Patrick Oot, Esq. The Electronic Discovery Institute
    5. Questions? If you would like to submit a question, click the Q&A button on the Live Meeting console at the TOP of your screen.
    6. Overview: Meeting a Reasonableness Standard in Litigation and Regulatory Filings
         • Electronic Discovery Institute
         • Federal Rules of Civil Procedure
         • Federal Rules of Evidence
         • Case Law Focusing on the Reasonableness of a Litigant’s Search Techniques
         • Studies and Surveys Linking Technology to the Reasonableness Requirement: Increased Speed, Greater Accuracy, Lower Cost
    7. The E-Discovery Institute is a non-profit research institution founded in late 2007 in response to requests from Judges and in-house counsel for more independent, peer-reviewed research on the effects of technology on litigation.
    8. Annual Fundraiser – Gourmet Pizza After Party. In 2009 it was held at the Newseum in Washington, DC.
    9. Advanced E-Discovery Institute
    10.
    11. Reasonableness. Federal Rule of Civil Procedure 26(g) promulgates the duty of care by requiring the responding party (or its attorney) to certify “that to the best of the person's knowledge, information, and belief formed after a reasonable inquiry … with respect to a disclosure, it is complete and correct as of the time it is made,” and that a discovery request, response, or objection is not unreasonable and not unduly burdensome or expensive given the needs of the case, the amount in controversy, and the issues in the litigation.
    12. Caselaw Discussing Search and Retrieval
         • Equity Analytics, LLC v. Lundin, 248 F.R.D. 331, 335 (D.D.C. 2008)
         • United States v. O’Keefe, 537 F. Supp. 2d 14, 24 (D.D.C. 2008)
         • Victor Stanley, Inc. v. Creative Pipe Inc., 250 F.R.D. 251, 254 (D. Md. 2008)
         • William A. Gross Constr. Assocs. v. American Mfgrs. Mut. Ins. Co., 256 F.R.D. 134 (S.D.N.Y. 2009)
    13. United States v. O’Keefe, 537 F. Supp. 2d 14, 24 (D.D.C. 2008): “Whether search terms or 'keywords' will yield the information sought is a complicated question involving the interplay, at least, of the sciences of computer technology, statistics and linguistics . . . . Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread.”
    14. Equity Analytics, LLC v. Lundin, 248 F.R.D. 331 (D.D.C. 2008): “whether a particular search methodology, such as keywords, will or will not be effective certainly requires knowledge beyond the ken of a lay person (and a lay lawyer)”
    15. Victor Stanley, Inc. v. Creative Pipe Inc., 250 F.R.D. 251, 254 (D. Md. 2008): “… selection of the appropriate search and information retrieval technique requires careful advance planning by persons qualified to design effective search methodology. The implementation of the methodology selected should be tested for quality assurance; and the party selecting the methodology must be prepared to explain the rationale for the method chosen to the court, demonstrate that it is appropriate for the task, and show that it was properly implemented.”
    16. William A. Gross Constr. Assocs. v. American Mfgrs. Mut. Ins. Co., 256 F.R.D. 134 (S.D.N.Y. 2009): “This Opinion should serve as a wake-up call to the Bar in this District about the need for careful thought, quality control, testing, and cooperation with opposing counsel in designing search terms or "keywords" to be used to produce emails or other electronically stored information ("ESI").”
    17. Meeting the Reasonable Inquiry Standard: Framework Distilled from Judges Grimm & Peck
         • Explain how what was done was sufficient;
         • Show that it was reasonable and why;
         • Set forth the qualifications of the persons selected to design the search;
         • Carefully craft the appropriate keywords with input from the ESI's custodians as to the words and abbreviations they use;
         • Quality control tests the methodology to assure accuracy in retrieval and the elimination of false positives.
    18. Fed. R. Evid. 502. Report of the Advisory Committee on Evidence Rules, May 15, 2007. In determining whether waiver applies for inadvertent disclosures, courts should consider:
         • The reasonableness of the precautions taken;
         • The time taken to rectify the error;
         • The scope of discovery;
         • The extent of discovery; and
         • The overriding issue of fairness.
    19. Fed. R. Evid. 502. Consider Protective Order Language: “The Producing Party will be deemed to have taken reasonable steps to prevent communications or information from inadvertent disclosure if that party utilized either attorney screening, keyword search term screening, advanced analytical software applications and/or linguistic tools in screening for privilege, work product or other protection.” – “The Protective Order Toolkit: Protecting Privilege with FRE 502,” The Sedona Conference Journal, Fall 2009, vol. X.
    20. Why Studies Are Important. Sedona Conference Best Practices Commentary on Search & Retrieval Methods in E-Discovery, Practice Point 7: “Parties should expect that their choice of search methodology will need to be explained, either formally or informally in subsequent legal contexts (including in depositions, evidentiary proceedings, and trials).”
    21. The De-Dupe Study
    22. The De-Dupe Study (average of responses*)
         Type of De-Duping: Average % Reduction / Minimum % Reduction / Maximum % Reduction
         Single-Custodian: 21.4 / 9.7 / 40.2
         Across Multiple Custodians: 38.1 / 22.6 / 62.7
    23. Why is the De-Dupe Study Important? $106,000 wasted attorney review hours
    24.
    25. TREC Legal Track Insights
         • Keyword searches alone miss the great majority of relevant documents.
         • Boolean searches usually retrieve 20-57% of the documents found with a variety of methods.
         • Each type of search usually retrieves somewhere around 20% of responsive documents.
         • Sampling is essential.
         More information about TREC Legal Track 2009 is at http://trec-legal.umiacs.umd.edu/ and http://trec-legal.umiacs.umd.edu/ol3.pdf
    26. Published Studies of Prior TREC Legal Track Exercises. The TREC home page will lead you to published studies by participants in previous Legal Track exercises. An Overview of the 2008 Legal Track exercise is available online. Online legal journals have recently published some excellent reviews and summaries about TREC Legal Track, including In Search of the Perfect Search by Jason Krause in the April 2009 ABA Journal and TREC 2008 Stresses Human Element in EDD by Jason Krause in the May 2009 Law Technology News.
    27. The EDI Document Categorization Study
    28. Figure 1. The level of agreement with the original review and chance levels to be expected from the marginals for the two human teams and the two computer systems (the four re-assessments). Error bars show standard error. The EDI Document Categorization Study
    29. Why is the Document Categorization Study Important? The results support the idea that machine categorization is no less accurate at identifying relevant/responsive documents than employing a team of reviewers. Based on these results, it would appear that using machine categorization can be a reasonable substitute for human review.
    30. The Threading Study
    31. The Threading Study. Columns: project-level average emails per thread (Q4.1 average, Q4.2 highest, Q4.3 lowest) and savings in review from email threading compared to no email threading (Q4.4 average, Q4.5 highest, Q4.6 lowest). Values are listed as reported; companies with no figures gave no data.
         Anacomp: (no data reported)
         Capital Legal Solutions: 4.2 / 11 / 2; savings 10% / 20% / 5%
         Clearwell Systems: 5 / 10 / 2; savings 30% / 55% / 10%
         Daticon EED: 4.8 / 11.4 / 1.7; savings 58% / 82%
         Equivio: 3-5
         InterLegis: 2.1 / 3.5 / 1.5
         Kroll Ontrack: (no data reported)
         Logik: 10-15
         OrcaTec: (no data reported)
         Recommind: 3 / 10 / 2; savings 20% / 50% / 5%
         TCDI: 4.6 / 9.2 / 2.3; savings 25%
         TRILANTIC: 2.6
         Valora: 6 / 12 / 1.5; savings 75%
         TOTALS: 48.8 / 67.1 / 13; savings 218 / 287 / 30
         NUMBER OF RESPONSES: 10 / 7 / 7; savings 6 / 5 / 4
         AVERAGE: 4.9 / 9.6 / 1.9; savings 36% / 57% / 8%
    32. Why is the Threading Study Important? Save money on review time.
    33. Conclusions & Observations
         • Many lawyers and judges need education regarding “reasonable inquiry” discovery response techniques.
         • Litigants should consider cooperation with an opponent early to establish a search protocol.
         • All categorization systems require some level of educated interaction. Better results occur when knowledge is transferred early and continuously throughout the process.
         • The use of auto-categorization systems can potentially reduce document request response times from over four months to as little as thirty days for even the largest datasets.
         • Human review is of unknown accuracy and consistency.
         • Measurement against an accepted standard is essential to evaluating reasonableness.
         • Using auto-categorization will save money and time.
         • As data volumes increase, auto-categorization may be the only practical solution to the massive data sets common in today’s corporations.
