EDI-Oracle Study: Electronic Document Review Inferior to Human Review

In 2012, Oracle teamed up with scientists at Stanford to form the nonprofit Electronic Discovery Institute (EDI). The goal was to benchmark the “accuracy performance of different [EDI] providers as they compared to different standards, but the cost of deploying their process as if it were a real case,” said Patrick Oot, EDI cofounder and senior special counsel to the Securities and Exchange Commission. In short, EDI’s objective is to better understand the capabilities and limitations of electronic discovery as compared to human review of documents.

Electronic discovery has become increasingly common over the last five years. For instance, in a high-profile litigation involving Landow Aviation, the company asked the judge to allow a computer program to perform much of the initial review of nearly two million electronic documents. Landow hoped to avoid hiring dozens of lawyers to review the documents, which spanned 8,000 gigabytes, roughly the capacity of eight computer hard drives.

Firms have begun to use predictive coding to perform particular aspects of pretrial document review, primarily electronic discovery. Computer algorithms are trained to determine which documents are relevant to a case: a human reviewer codes a small seed set of documents, and the software learns from those judgments to rank the rest of the collection. The theory is that only 10% of the documents in any given case are actually relevant to the trial, so with the costs of high-stakes litigation rising by the month, predictive coding promises a cost-effective alternative to exhaustive manual review. At least, that’s the theory. EDI was formed to determine whether these so-called state-of-the-art predictive coding platforms are all they’re cracked up to be.
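
To make the idea concrete, here is a minimal sketch of the supervised text classification that underlies predictive coding. It is an illustration built on assumptions, not any EDI participant’s actual platform: the scikit-learn library, the TF-IDF features, the logistic-regression model, and the toy documents are all invented for demonstration.

```python
# A minimal sketch of predictive coding as supervised text classification.
# Everything here is illustrative: the library choice, features, model, and
# toy documents are assumptions, not any vendor's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents an attorney has already coded (1 = responsive, 0 = not).
seed_docs = [
    "memo on hangar roof load limits before the storm",
    "invoice for catering at the holiday party",
    "email chain about structural damage to the aviation hangar",
    "newsletter announcing the employee wellness program",
]
seed_labels = [1, 0, 1, 0]

# Unreviewed collection to be ranked by predicted responsiveness.
unreviewed = [
    "engineering report on snow load tolerance of the hangar roof",
    "reminder to renew parking passes",
]

# Learn from the human judgments, then score the unreviewed documents.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]

# Highest-scoring documents go to human reviewers first.
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

Note what drives the sketch: the human judgments on the seed set are the only signal the classifier has, which is consistent with EDI’s finding that the software is only as good as its operators.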

As documented by Law Technology News, the first phase of the EDI research project has been completed, and it confirms what many proponents of technology-assisted document review have been saying (and what we here at American Discovery have been arguing for quite some time as well): spending more money doesn’t correlate with greater quality; senior attorneys know what they are doing; and you can’t turn discovery over to robots, because humans are still the most vital component of the project.

Oot identified the key takeaways from Phase I, in which each provider’s output was compared against actual document review performed by skilled professionals:

  • Software is only as good as its operators. Human contribution is the most significant element.
  • Spending more money does not correlate with greater quality. An inexpensive service provider performed very well.
  • The best performance in Phase I came not from a document review company but from a single senior-level attorney who spent 64.5 hours on review and analysis. That attorney was the most effective at finding both responsive documents and privileged documents (see the metrics sketch after this list).
  • The second-best performance came from a middle-tier-priced company that paired a U.S.-based team with an overseas team. This may dispel the myth that overseas teams perform less effectively than U.S.-based document review teams, especially since other technology providers performed poorly with U.S.-based teams.
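
Comparisons like these rest on the standard e-discovery yardsticks of recall (the share of truly responsive documents a review finds) and precision (the share of its responsive calls that are correct). The sketch below computes both against an authoritative “gold” coding; the document IDs are invented for illustration and are not EDI data.

```python
# Hedged illustration: recall and precision of one review against a gold
# standard. Document IDs are invented; no EDI data is used.
gold_responsive = {"DOC-001", "DOC-002", "DOC-005", "DOC-009"}  # authoritative coding
review_responsive = {"DOC-001", "DOC-002", "DOC-007"}           # one reviewer's calls

true_positives = gold_responsive & review_responsive
recall = len(true_positives) / len(gold_responsive)       # found / all truly responsive
precision = len(true_positives) / len(review_responsive)  # correct / all flagged

print(f"recall={recall:.2f}  precision={precision:.2f}")  # recall=0.50  precision=0.67
```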

If the results of EDI’s Phase I study could be put even more succinctly, it would be in the words of Tom M. Mitchell, a computer scientist at Carnegie Mellon University: “For all their brilliance, computers can be thick as a brick.”

Phases II & III will continue to refine these results using different methodologies and reporting processes, but it’s likely the data will continue to point to the limitations of predictive coding and electronic document review. There is no substitute for human review of discovery documents.