Part 5 in this series of webinars on Demystifying Technology Assisted Review covers Dispelling Myths and Offering Practice Tips. Sonya Sigler of SFL Data, Paige Hunt of Perkins Coie, and Chris Mammen of Hogan Lovells cover this topic in depth.
2. Overview
Introduction of Panelists: Paige Hunt, Perkins Coie; Chris Mammen, Hogan Lovells
Dispelling TAR Myths
TAR Practice Tips
Interactive Questions, Comments, Concerns
3. TAR: Spectrum of Solutions
Linear Review
Culling
Iterative Search & Review
Accelerated Review: Email Threading, Near Duplicate Detection, Clustering
Categorization (Supervised)
Automated Review: Relevance Ranking, Machine Learning, Latent Semantic Indexing (statistical probability), Pattern Analysis
Sampling Data for High Precision and Recall Rates (see the sketch below)
Per-Document Cost
Organization Commitment
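Precision and recall are the two accuracy measures the spectrum ends with. A minimal sketch of how they are computed from a reviewed validation sample; the counts below are hypothetical and not taken from any particular TAR tool:

```python
# Illustrative only: precision and recall from a human-reviewed validation sample.

def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Precision = fraction of retrieved documents that are actually relevant.
    Recall = fraction of all relevant documents that were retrieved."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical example: the tool flagged 1,200 documents as relevant; review of a
# sample projects 900 true positives, 300 false positives, and about 100 relevant
# documents missed entirely (false negatives).
p, r = precision_recall(true_positives=900, false_positives=300, false_negatives=100)
print(f"precision={p:.0%}, recall={r:.0%}")  # precision=75%, recall=90%
```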
4. Myth: I Should Use TAR on EVERY Case
Timeline Pressures: Deposition Preparation; Quick Production Deadlines (2nd Requests, M&A)
Understanding Your Data (Advanced Analysis): Investigations (Internal, Government, Regulatory)
Priority Review: Opposing Productions (Clustering); Hot Documents; Issue Coding & Prioritization (Categorization)
Costs (Proportionality, Cost Control): Review More Relevant Data (Cull Out Non-Relevant Data); Increased Reviewer Efficiency; Review Like Documents; Review With Equivio
5. Practice Tip: Lower-Risk Ways to Get Started
Measure Linear Review Accuracy
Prioritized Review
Internal Investigations
Arbitrations
Third-Party Productions
Opposing Productions
6. Myth: TAR Tools Work Well on All Cases
Practice Tip: Choosing the Right TAR Tool
• Easy to Understand
• Easy to Use
• Flexible Workflow
• Understand Tool Limitations
7. Practice Tip: Know TAR Tool Limitations
Minimum Case or Data Requirements:
• > 50,000 documents (~10 GB+)
• Data types (images, OCR'd data, PPT or XLS)
• Clearly defined relevancy and/or case issues
• ~10%+ relevant documents (data richness; see the sketch below)
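Richness (the share of relevant documents in the collection) is normally estimated by coding a random sample before committing to TAR. A minimal sketch, assuming a simple binomial estimate with a normal-approximation confidence interval; the sample numbers are hypothetical:

```python
import math

def estimate_richness(sample_relevant: int, sample_size: int, z: float = 1.96):
    """Estimate the fraction of relevant documents (richness) from a random sample,
    with an approximate 95% confidence interval (normal approximation)."""
    p = sample_relevant / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical example: a 400-document random sample with 52 coded relevant.
richness, low, high = estimate_richness(sample_relevant=52, sample_size=400)
print(f"estimated richness {richness:.1%} (95% CI {low:.1%}-{high:.1%})")
# If the lower bound sits well under ~10%, the collection may be too sparse for
# some TAR tools, per the limitation noted above.
```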
8. Myth: All TAR Tools Work the Same
Some tools start from a Seed Set and then apply Machine Categorization; others apply Machine Categorization first and then build a Seed Set.
Practice Tip: Machine Categorization, or Seed Set, or Both
9. Myth: I Don't Need a Search Strategy
Scoping & Filtering Data (Objective): Custodians, Date Ranges, deNISTing, Duplicates
Culling Data (Subjective): Junk Analysis, Domain Analysis, Subject Matter, Case-Specific
Reviewing Data: Near Duplicates, Clustering, Categorization, Relevance Ranking, Predictive Coding, Sampling
10. Practice Tip: Workflow Includes Search Strategy
Objective Scoping: Custodians; Date Ranges; Deduplication (horizontal or vertical); File Exclusion (DeNISTing); File Inclusion (Images) (see the sketch below)
Subjective Culling (Optional): Domain Analysis (include or exclude); Junk Analysis (Spam or Permissive); Non-Business Communications; Subject Matter; Case-Specific
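The objective scoping steps are mechanical and are normally handled inside the review platform itself. The sketch below only illustrates the ideas: hash-based deduplication, date-range scoping, and NIST-list exclusion. The folder path, date range, and empty NIST hash set are assumptions for illustration, not part of any real workflow:

```python
import hashlib
from datetime import date
from pathlib import Path

COLLECTION = Path("collection")          # assumed location of collected files
NIST_HASHES = set()                      # would be loaded from the NSRL hash list
DATE_RANGE = (date(2010, 1, 1), date(2013, 12, 31))

def md5_of(path: Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

seen_hashes = set()
survivors = []
for f in COLLECTION.rglob("*"):
    if not f.is_file():
        continue
    digest = md5_of(f)
    if digest in NIST_HASHES:            # file exclusion (deNISTing)
        continue
    if digest in seen_hashes:            # horizontal deduplication across custodians
        continue
    modified = date.fromtimestamp(f.stat().st_mtime)
    if not (DATE_RANGE[0] <= modified <= DATE_RANGE[1]):   # date-range scoping
        continue
    seen_hashes.add(digest)
    survivors.append(f)

print(f"{len(survivors)} documents survive objective culling")
```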
11. Myth: Anyone Can Train the Software
Practice Tip: Choose a Case Expert With Care
• Knows the case strategy, case issues, and case data
• Is willing and able to learn a new platform/tool (e.g., Equivio Zoom, Relativity, Clearwell, OrcaTec)
• Is open to a more interactive review while in the predictive coding tool
• Is available to train the software (see the sketch below):
• 1-2 days for the initial machine learning Assessment Phase of 500-1,000 documents
• 2-5 days for the machine learning Interactive Ranking/Training Phase of 1,000-3,000 documents
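The Assessment and Interactive Training phases described above amount to an iterative loop: the case expert codes a batch, the model re-ranks the collection, and the next batch is drawn from the documents the model is least sure about. A minimal sketch of that loop using scikit-learn; it is not any vendor's actual algorithm, and the feature extraction, batch sizes, and the documents/expert_code placeholders are all assumptions:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# documents: list of extracted text; expert_code(doc) returns 1 (relevant) or 0.
# Both are placeholders for the review platform's data and the case expert's calls.
def train_interactively(documents, expert_code, seed_size=500, batch_size=200, rounds=5):
    X = TfidfVectorizer(max_features=50_000).fit_transform(documents)
    # Assessment Phase: code a random seed set (assumed to contain both relevant
    # and non-relevant examples, which the classifier needs to train at all).
    labeled_idx = list(np.random.choice(len(documents), seed_size, replace=False))
    labels = {i: expert_code(documents[i]) for i in labeled_idx}
    model = LogisticRegression(max_iter=1000)
    # Interactive Ranking/Training Phase: repeatedly retrain and code new batches.
    for _ in range(rounds):
        model.fit(X[labeled_idx], [labels[i] for i in labeled_idx])
        scores = model.predict_proba(X)[:, 1]
        # Uncertainty sampling: send the expert the documents scored nearest 0.5.
        unlabeled = [i for i in range(len(documents)) if i not in labels]
        next_batch = sorted(unlabeled, key=lambda i: abs(scores[i] - 0.5))[:batch_size]
        for i in next_batch:
            labels[i] = expert_code(documents[i])
        labeled_idx = list(labels)
    return model, scores
```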
12. Myth: A 95 Relevancy Ranking = 95% Relevant
Practice Tip: It Depends on the TAR Tool's Results (see the sketch below)
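A ranking score of 95 is a model output, not a guarantee that 95% of the documents at that score are relevant; the only way to know is to sample at that score and measure. A minimal sketch of that check; the scores mapping and expert_code function are hypothetical placeholders for the tool's output and the reviewer's calls:

```python
import random

def measured_precision_at(scores, expert_code, threshold=95, sample_size=100):
    """Sample documents scored at or above `threshold` and measure how many are
    actually relevant. `scores` maps doc_id -> tool score; `expert_code` returns
    1 (relevant) or 0 (not). Both are placeholders."""
    above = [doc_id for doc_id, s in scores.items() if s >= threshold]
    sample = random.sample(above, min(sample_size, len(above)))
    relevant = sum(expert_code(doc_id) for doc_id in sample)
    return relevant / len(sample)

# A score of 95 only means "95% relevant" if the measured precision comes back
# near 0.95; depending on the tool it may be materially higher or lower.
```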
13. Myth: No Need for Document Review(ers)
[Diagram comparing three review approaches. Legend: Privilege Screen, Non-relevant, Relevant, Review Sample. Captions: Accuracy Matters, Cost Matters, Time Matters.]
14. Review Scenarios #1: Accuracy Matters, Review It All
Prioritized Review Using Batch Rankings
Low-Ranked Docs – Contract Reviewers?
Middle-Ranked Docs – Law Firm or Outsourced?
Highly Relevant Docs – Law Firm or In-House?
15. Review Scenarios #2: Accuracy Matters & Have Time
Review all documents above the cut-off point
Sample documents that are below the cut-off point (see the sketch below)
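Sampling below the cut-off is how you estimate what the review would otherwise miss. A minimal sketch of that projection; the document-ID list, the expert_code function, and the example counts are all hypothetical:

```python
import random

def estimate_missed_relevant(below_cutoff_ids, expert_code, sample_size=400):
    """Randomly sample documents ranked below the cut-off, have them coded, and
    project how many relevant documents would go unreviewed.
    `below_cutoff_ids` and `expert_code` are placeholders for platform data."""
    sample = random.sample(below_cutoff_ids, min(sample_size, len(below_cutoff_ids)))
    hit_rate = sum(expert_code(doc_id) for doc_id in sample) / len(sample)
    return hit_rate, hit_rate * len(below_cutoff_ids)

# Hypothetical example: if 6 of 400 sampled documents are relevant (1.5%), roughly
# 1.5% of the entire below-cut-off population is projected to be relevant; whether
# that is acceptable is a proportionality call, not a software setting.
```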
16. Review Scenarios #3: Cost Matters Most
Review all documents caught in the Privilege Screen
Sample all other documents above the cut-off point (but not in the Privilege Screen)
Sample documents that are below the cut-off point
17. Review Scenarios #4: Time or Compliance
Sample all documents
Withhold all documents caught in the Privilege Screen
Sample/turn over all other documents above the cut-off point (but not privilege-screened)
Withhold, and sample, documents below the cut-off point
18. Myth: Open Kimono = Disclose Everything
Da Silva Moore
Actos Products
Global Aerospace
Hooters (EORHB)
Biomet
Judge Andrew Peck
19. Practice Tip: The TAR Protocol Is Negotiable
No Easy Button
No Preset Workflow
No One-Size-Fits-All Protocol
Transparency Is Key
Collaboration Is Key
20. Dispelling Myths & Practice Tips
One Size Fits All
Using TAR Tools on Every Case
Choosing a TAR Tool (Analytics)
Machine Categorization or Seed Set
To Cull or Not to Cull in Your Workflow: Objective Culling, Subjective Culling
Choosing a Case Expert
Collaborative Training
21. Dispelling Myths & Practice Tips
What Does Relevancy Ranking Mean?
Eliminating Document Review
Eliminating Document Reviewers
Open Kimono Required
Disclosing Non-Responsive Documents
22. Q&A - Thank you!
Sonya L. Sigler, VP Product Strategy & Consulting, SFL Data, 415-321-8385, sonya@sfldata.com, www.sfldata.com
Christian Mammen, Partner, Hogan Lovells US LLP, 415-374-2325, chris.mammen@hoganlovells.com, www.hoganlovells.com
Paige Hunt, Director of E-Discovery Services, Perkins Coie LLP, 206-359-8339, phunt@perkinscoie.com, www.perkinscoie.com