Getting started with technology assisted review (TAR) can be difficult for lawyers who aren't used to this type of technology. Part 4 of this webinar series provides in-depth coverage of how to get started with TAR tools.
Part 5 in this series of webinars on Demystifying Technology Assisted Review covers Dispelling Myths and Offering Practice Tips. Sonya Sigler of SFL Data, Paige Hunt of Perkins Coie, and Chris Mammen of Hogan Lovells cover this topic in depth.
Learning from Big Data – Simplify Your Workflow Using Technology Assisted Review – Daegis
Technology assisted review (TAR) or predictive coding has received both good and bad press in the eDiscovery arena. Proponents of TAR tout its abilities to speed up review and decrease costs without sacrificing accuracy. Opponents assert the technology is unproven and may be indefensible. Ultimately, it may be a necessity in the era of Big Data. This webinar examines the legal, economic, and technological issues surrounding conventional technology assisted review and new predictive technologies, and addresses the following:
- Is TAR becoming essential for law firms and legal departments?
- What are the risks associated with using TAR?
- Can TAR fit into existing workflows instead of requiring legal professionals to adapt to the technology?
- Can alternative methods of TAR relieve senior attorneys from the burden of creating seed sets to jump start reviews?
Featured Speakers:
- David Horrigan, Esq., Analyst, E-Discovery and Information Governance, 451 Research
- Anita Engles, Vice President of Product Marketing, Daegis
- Doug Stewart, EnCE, Vice President of Technology and Innovation, Daegis
Technology Assisted Review (TAR): Opening, Exploring and Bringing Transparen... – Daegis
It’s time to set the record straight on technology assisted review (TAR). Some people object to what they mistakenly believe is the “black box” nature of the technology, while others are hesitant to adopt an approach that they perceive as novel. This panel will dispel the myths, clarify the definitions, and shed light on the so-called “black box” of technology assisted review.
Some audience members may be surprised to learn that technology assisted review is nothing new. Search and clustering technology, for example, have been commonplace for many years. The phrase "technology assisted review" simply refers to a more efficient use of people, process, and technology that is the next evolutionary step in electronic discovery. As with other legal technologies, human expertise and a proven workflow are the keys to success. This panel will clearly explain what technology assisted review is all about and how it can be used as a tool in your practice, so that you can make an informed decision about adopting it in your organization.
The PAC aims to promote engagement between experts from around the world and to create relevant, value-added content sharing between members. For Neotys, it is a way to strengthen our position as a thought leader in load & performance testing.
Since its beginning, the PAC has been designed to connect performance experts during a single event. In June, over 24 hours, 20 participants convened to explore several topics on the minds of today's performance testers, such as DevOps, Shift Left/Right, Test Automation, Blockchain, and Artificial Intelligence.
Kairntech combines technologies from natural language processing (NLP) and machine learning to support clients in analysing large amounts of text-based information.
You can find more information at https://kairntech.com/
Migrating to Alfresco Part II: The “How” – Tools & Best Practices for Renovat... – Zia Consulting
In the first presentation of this Migration series from Alfresco Partner of the Year, Zia Consulting, we focused on the “Why” and the “What”: why should you migrate to Alfresco, and what are people migrating from. We looked at the costs associated with legacy ECM systems–both license and maintenance costs–as well as the costs associated with systems that aren’t being used, won’t integrate with your critical business applications, or won’t support modern initiatives including cloud and mobile. We then discussed moving from technologies like Documentum or SharePoint, as well as moving from embedded or vertical-specific ECM systems, or even moving from content repositories in file shares or email.
For this second presentation of the Migration series, we focused on the “How”. Specifically, we covered:
-Best practices for migrating your content to Alfresco based on experience from dozens of successful Alfresco migration projects
-Recommended approaches for “phased” migrations
-Opportunities for “multi-repository” solutions, keeping existing documents within legacy systems
-Migrating records to Alfresco Records Management (RM) 2.1
Tom DeMarco states that “You can’t control what you can’t measure”, but how much can we change and control (with) what we measure? This talk investigates the opportunities and limits of data-driven software engineering, shows which opportunities lie ahead of us when we engage in mining and analyzing software engineering process data, but also highlights important factors that influence the success and adaptability of data-based improvement approaches.
Continuous Performance Testing and Monitoring in Agile Development – Neotys
Continuous performance testing and monitoring is the best way to ensure application performance with quicker development cycles. Balancing Agile and DevOps velocity with the need for ongoing performance testing and monitoring is essential. We call it Continuous Performance Validation.
Gain a deeper understanding of what Exploratory Testing (ET) is, the essential elements of the practice with practical tips and techniques, and finally, ideas for integrating ET into the cadence of an agile process.
An introduction to Elasticsearch's advanced relevance ranking toolbox – Elasticsearch
The hallmark of a great search experience is always delivering the most relevant results, quickly, to every user. The difficulty lies behind the scenes in making that happen elegantly and at scale. From App Search’s intuitive drag and drop interface to the advanced relevance capabilities built into the core of Elasticsearch — Elastic offers a range of tools for developers to tune relevance ranking and create incredible search experiences. In this session, we’ll explore some of Elasticsearch’s advanced relevance ranking features, such as dense vector fields, BM25F, ranking evaluation, and more. Plus, we’ll give you some ideas for how these features are being used by other Elastic users to create world-class, category-defining search experiences.
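As a concrete (hypothetical) illustration of combining the features mentioned, here is the shape of a query that mixes BM25 text matching with dense-vector cosine similarity via Elasticsearch's `script_score` query; the index layout and field names (`title`, `title_vector`) are made up for this sketch.

```python
# Sketch of a hybrid relevance query: a BM25 match supplies the candidate
# set and base score, then cosineSimilarity against a dense_vector field
# boosts vector-similar documents. Field names are illustrative only.
import json

def hybrid_query(text, query_vector):
    """Build the JSON body you would send to Elasticsearch's _search API."""
    return {
        "query": {
            "script_score": {
                # Lexical (BM25) matching on the title field.
                "query": {"match": {"title": text}},
                # cosineSimilarity is Elasticsearch's built-in vector
                # function for dense_vector fields.
                "script": {
                    "source": "cosineSimilarity(params.v, 'title_vector') + _score",
                    "params": {"v": query_vector},
                },
            }
        }
    }

body = hybrid_query("relevance ranking", [0.12, 0.48, 0.33])
print(json.dumps(body, indent=2))
```

In a real deployment the vector would come from an embedding model and the body would be sent with an Elasticsearch client; this only shows the query's structure.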
How to Build an Attribution Solution in 1 Day – Phillip Law
Presented at the London Measurecamp Conference in September 2016 - This presentation runs through how to build a basic attribution model using Tableau and Python. This is meant as a starter to get you up and running in attribution.
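In the spirit of that starter model (the slides' exact approach may differ), a minimal equal-credit ("linear") attribution model fits in a few lines of Python; the journey data below is invented for illustration.

```python
# Minimal linear attribution: every channel touched on a converting
# user's path gets an equal share of that conversion's credit.
from collections import defaultdict

def linear_attribution(journeys):
    """journeys: list of channel paths that each ended in one conversion."""
    credit = defaultdict(float)
    for path in journeys:
        share = 1.0 / len(path)          # split one conversion evenly
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Hypothetical converting journeys.
journeys = [
    ["search", "email", "direct"],
    ["social", "direct"],
    ["search"],
]
print(linear_attribution(journeys))
```

Total credit always sums to the number of conversions, which makes the output easy to sanity-check before loading it into Tableau.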
How to Build an Attribution Solution in 1 Day – Phillip Law
I presented this at the London Measurecamp Conference in September 2016. It is an overview of how to build an attribution solution with Python and Tableau, meant as a starter solution.
Machine learning has become an important tool in the modern software toolbox, and high-performing organizations are increasingly coming to rely on data science and machine learning as a core part of their business. eBay introduced machine learning to its commerce search ranking and drove double-digit increases in revenue. Stitch Fix built a multibillion dollar clothing retail business in the US by combining the best of machines with the best of humans. And WeWork is bringing machine-learned approaches to the physical office environment all around the world. In all cases, algorithmic techniques started simple and slowly became more sophisticated over time. This talk will use these examples to derive an agile approach to machine learning, and will explore that approach across several different dimensions. We will set the stage by outlining the kinds of problems that are most amenable to machine-learned approaches as well as describing some important prerequisites, including investments in data quality, a robust data pipeline, and experimental discipline. Next, we will choose the right (algorithmic) tool for the right job, and suggest how to incrementally evolve the algorithmic approaches we bring to bear. Most fancy cutting-edge recommender systems in the real world, for example, started out with simple rules-based techniques or basic regression. Finally, we will integrate machine learning into the broader product development process, and see how it can help us to accelerate business results.
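The progression the abstract describes (a hand-written rule first, basic regression next) can be shown in miniature; the rule, data, and variable names below are invented for illustration.

```python
# Step 1 of the "start simple" progression: a rules-based baseline.
# Step 2: a one-variable ordinary-least-squares fit, no libraries needed.
def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Rules-based baseline: recommend iff the user visited more than 3 times.
rule = lambda visits: visits > 3

# Regression baseline learned from made-up (visits, purchases) pairs.
xs, ys = [1, 2, 3, 4, 5], [0, 1, 1, 3, 4]
a, b = fit_line(xs, ys)
print(f"fitted: purchases ~ {a:.1f}*visits {b:+.1f}")
```

The point is that both baselines are cheap to build and easy to reason about, which is exactly why they make good first steps before anything fancier.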
The challenge for every product is to ship bug-free code as often as possible. Whether you are an early stage startup with a pilot application or a large corporation with myriad services, you’re dealing with this problem every day.
We usually end up with either too little or too much testing and it’s hard to find the sweet spot. Too little testing and you have bugs and application instability, leading to time spent fixing bugs and manually regression testing your apps. You’re asking yourself, “isn’t there an easier way to do this?” Too much testing and you have slow release times and high automation maintenance costs. In this scenario, you’re asking yourself, “are the bugs I’m catching worth the time I’m spending maintaining this code?”
In this webinar, software engineer Kate Green will go over a framework for evaluating your testing situation in order to find your organization’s sweet spot.
Key Takeaways
- Understanding where you are today
- Identifying weak, brittle, or buggy parts of your application
- Figuring out where to test first, and with what types of tests
- How to pare down an excessively large automation suite
- Measuring test effectiveness
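One simple way to start the "where you are today" assessment above is to track defect escape rate; this particular metric is my assumption, not necessarily the webinar's framework.

```python
# Defect escape rate: the share of bugs that slipped past the test suite
# and were instead found in production. A rising rate suggests too little
# (or the wrong kind of) testing.
def defect_escape_rate(found_in_test, found_in_prod):
    total = found_in_test + found_in_prod
    return found_in_prod / total if total else 0.0

# Hypothetical quarter: 40 bugs caught by tests, 10 reported from production.
rate = defect_escape_rate(40, 10)
print(f"escape rate: {rate:.0%}")  # escape rate: 20%
```

Tracked per release, a metric like this gives the cost side ("bugs worth catching") something concrete to weigh against automation maintenance time.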
Qiagram is a collaborative visual data exploration environment that enables investigator-initiated, hypothesis-driven data exploration, allowing business users as well as IT professionals to easily ask complex questions against complex data sets.
Similar to 2013 3 27 TAR Webinar Part 4 Getting Started Sigler (20)
Guest Lecture on Litigation Holds, Preservation, and Search Methodologies. If you want to download this rather than just view, please email me at sonya@sonyasigler.com
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 – Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, an extremely active community, and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
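The core idea behind active learning, as used in sessions like this, is uncertainty sampling: route the documents the model is least sure about to a human labeler. The sketch below is the generic textbook technique, not UiPath's implementation, and its scores are stubbed.

```python
# Uncertainty sampling: pick the k documents the model is least confident
# about -- labeling those teaches the model the most per human annotation.
def pick_for_labeling(docs, confidence, k=2):
    """Return the k docs with the lowest model confidence."""
    return sorted(docs, key=lambda d: confidence(d))[:k]

# Stub confidences for five unlabeled documents; in practice these would
# come from a trained classifier's predictions.
scores = {"d1": 0.99, "d2": 0.51, "d3": 0.95, "d4": 0.55, "d5": 0.90}
batch = pick_for_labeling(list(scores), scores.get)
print(batch)  # ['d2', 'd4'] -- least confident, most informative
```

Each labeled batch is fed back into training, so model confidence (and document throughput) improves with far fewer labels than labeling everything up front.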
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss which cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security as an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
2. Demystifying Technology Assisted Review – Agenda
- Overview of TAR Spectrum
- Your Purpose for Using TAR
- What the TAR Technology Does
- The TAR Balancing Act: People, Process, Technology
- Questions

3. Overview of TAR Spectrum
- Linear Review
- Culling
- Iterative Search Review
- Accelerated Review: Email Threading, Near Duplicate Detection, RA – Clustering, Categorization (Supervised)
- Automated Review: Relevance Ranking, Machine Learning, Latent Semantic Indexing (statistical probability), Pattern Analysis
- Sampling Data for High Precision and Recall Rates
- (Spectrum axes: Per Document Cost, Organization Commitment)

4. Underlying Technologies
- Statistical – #s based
- Linguistic – word based
- Keyword Search
- Ontologies
- Lucene, dtSearch, Other Search Engines
- Rules Based Systems
- Bayesian Classification
- Latent Semantic Indexing
- Support Vector Models

5. Your Purpose for Using TAR
- Better, Cheaper, Faster – or a Combination of All 3…
- Money (Cheaper)
- Time (Faster)
- Accuracy (Better)

6. Purpose of the TAR Technology Itself
- Learn from case expert
- Propagate information to entire data set
- Provide Forum
- Quality Control of the Training
- Quality Control of the Propagation

7. Balancing Act
- Technology
- Process
- People

8. Technology/Data
- TAR Tools
- Data Types
- Data Size
- Types of Cases

9. TAR Tools
- Seed Set
- Training Set
- Control Set
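The seed/training/control-set workflow above can be sketched in a few lines. This is a toy illustration of the idea (train on attorney-coded seed documents, check against a held-out control set), not any TAR vendor's actual algorithm; the documents and scoring model are invented.

```python
# Toy TAR workflow: word-frequency profiles stand in for the real
# (much more sophisticated) machine-learned model.
from collections import Counter

def train(seed_set):
    """Build word profiles from attorney-coded seed documents."""
    rel, nonrel = Counter(), Counter()
    for text, label in seed_set:
        (rel if label == "relevant" else nonrel).update(text.lower().split())
    return rel, nonrel

def score(model, text):
    """Positive score leans relevant, negative leans not relevant."""
    rel, nonrel = model
    words = text.lower().split()
    return sum(rel[w] - nonrel[w] for w in words) / max(len(words), 1)

# Seed set: a small sample coded by the case expert.
seed = [
    ("merger price negotiation antitrust", "relevant"),
    ("price fixing meeting competitor", "relevant"),
    ("lunch menu office party", "not"),
    ("parking pass building access", "not"),
]
model = train(seed)

# Control set: coded documents held out from training, used only to
# measure how well the trained model agrees with human coding.
control = [
    ("negotiation with competitor on price", "relevant"),
    ("office party invitation", "not"),
]
correct = sum((score(model, t) > 0) == (lbl == "relevant") for t, lbl in control)
print(f"control-set agreement: {correct}/{len(control)}")
```

The separation matters: the control set is never trained on, which is what lets it honestly estimate the quality of the propagation across the full data set.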
10. Data Types
- Text based
- Metadata
- Images
- Excel files
- PowerPoint
- Funky file types – normalize (Sun, Mac)
- Online (IMs, Facebook, SharePoint, Wiki)

11. Data Size
- # of GBs
- # of Files
- # of Each File Type
- Families

12. When to Consider TAR?
- Timeline Pressures: 2nd Requests, M&A Transactions
- Understanding Your Data: Investigations (Internal, Government, Regulatory)
- Advanced Analysis: Antitrust Cases, Complex Litigation
- Productions
- Costs: Mere Compliance Trumps; Large Data Sets, Multiple Data Sets; Proportional, Managed Costs

13. Low Risk Case Types for Getting Started
- Measure Linear Review Accuracy
- Internal Investigations
- Productions
- Opposing Productions

14. Case Type: Investigations
- People Involved
- Subject Matter
- Bad Behavior

15. Case Type: Productions
- Time is of the Essence
- Volume is Ever Expanding
- 2 Case Studies: 1.7M docs – narrow review set; 3 weeks

16. Case Type: Arbitration – 3 Weeks
- 366,960 total documents
- 108,750 docs – Iterative Keyword Culling; 92,067 to Zoom; 13,844 reviewed after Zoom
- 258,210 sampled; 16,683 to Linear Review
- ~1,000+ families; ~2,000+ families
- 2,440 docs trained
- 24,209 docs reviewed; 2,026 docs produced

17. Case Type: Arbitration (1.7M Docs)
- 1,713,860 total documents; 1,646,062 docs to Zoom
- 112,285 Junk File; 56,949 Junk Domain Analysis (file types, size, no text)
- 137,000 reviewed after Zoom
- 1,600 docs trained (Ongoing Negotiations, Ltd Custodians)
- ~1.1M docs Sampling Below the Cutoff – 500 docs (95% +/- 4.38%)
- ~600K docs Above the Cutoff – Review Priv Screen Docs Only
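The sampling figure above (500 docs at 95% +/- 4.38%) matches the standard margin of error for a simple random sample at the worst-case proportion p = 0.5; a quick check:

```python
# Margin of error (half-width of the normal-approximation confidence
# interval) for a simple random sample of n documents at 95% confidence,
# using the worst-case proportion p = 0.5.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(500)
print(f"+/- {moe:.2%}")  # +/- 4.38%
```

So sampling 500 documents below the cutoff supports statements like "we are 95% confident the true rate is within about 4.4 points of what we observed", which is what makes the decision to leave ~1.1M docs unreviewed defensible.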
18. Case Type: Opposing Productions
- Time is of the Essence
- What is There? What is Missing?
- Re-use Training: Your Production Training

19. TAR Workflow/Process
- Prioritized Review
- Culling (Responsive v. Not Responsive)
- Privilege Screen
- Quality Control
- (Workflow diagram: Relevant, Non-relevant, and Privilege Screen document sets)

21. How to Decide What to Review, Sample or Ignore?
- (Diagram: Relevant, Non-relevant, and Privilege Screen sets, each marked Review or Sample)
- Legend: Accuracy Matters, Cost Matters, Time Matters

22. Process: Prioritized Review
- Review It ALL: 0 v. 1, 0 -> 1
- But How? Priority Batching, Ranking/Scores, Topics, Email Threads, Deduplicated Sets
- Why?

23. Process: Culling
- Non-responsive v. Responsive
- Where is the Cutoff Point?

24. Ignoring, Sampling, or Reviewing
- Documents that are highly likely to NOT be relevant
- Documents that could go either way
- Documents that are highly likely to be relevant
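Choosing the cutoff point between those three bands is usually done by estimating recall and precision at candidate cutoffs from a coded control set. The sketch below uses hypothetical scores; it is the generic calculation, not any specific tool's method.

```python
# Estimate recall and precision at a score cutoff from a coded control
# set of (model_score, is_relevant) pairs.
def recall_precision(scored_docs, cutoff):
    retrieved = [rel for s, rel in scored_docs if s >= cutoff]
    total_relevant = sum(rel for _, rel in scored_docs)
    recall = sum(retrieved) / total_relevant if total_relevant else 0.0
    precision = sum(retrieved) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical control-set scores (1 = coded relevant, 0 = not).
control = [(0.95, 1), (0.90, 1), (0.80, 0), (0.70, 1), (0.40, 0), (0.20, 0)]
for cutoff in (0.85, 0.60):
    r, p = recall_precision(control, cutoff)
    print(f"cutoff {cutoff}: recall {r:.0%}, precision {p:.0%}")
```

A high cutoff buys precision at the cost of recall, and vice versa; the "accuracy matters / cost matters / time matters" legend above is exactly this trade-off.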
25. Process: Privilege Screen
- How much data?
- How is it Prioritized for Review?
- Who Reviews this Data?

26. Process: Quality Control
- What was Left Behind? Review, Sampling, Nothing
- What is Moving Forward? Produce; Sample, then Produce; Review, then Produce
- (Diagram: Relevant, Non-relevant, and Privilege Screen sets)

27. People
- Case Expert
- Project Manager
- Defensibility

28. Who Trains the TAR Tool?

29. People: Case Expert
- Who, Why
- What Do They Do?
- How Long Does It Take?
- Benefits

30. Assessment and Training View in Equivio
- Document List; Document Contents
- Ranking Palette for ranking all issues at once
- Ranking Palette for ranking issues separately
- Progress status

31. People: Project Manager
- Who, Why
- How Involved
- Training, Expertise

32. People: Defensibility
- Who, Expertise, Why
- What Do They Do? Reports, Affidavits, Testimony

33. Defensibility Report
- Document, Document, Document
- Transparency
- Workflow: What Was Considered, By Whom?
- QC Process
- Metrics

34. Delicate Balance: People, Process, Technology
- Technology Sophistication Level Required
- Subject Matter Expertise Involvement
- Elapsed Time
- Document Review Expenditure
- Required Human Resources

35. Q&A - Thank you!
Sonya L. Sigler
Vice President, Product Strategy & Consulting, SFL Data
415-321-8385 | sonya@sfldata.com | www.sfldata.com
Post your questions to the presenter in the chat section.