spChains: A Declarative Framework for Data Stream Processing in Pervasive App...Fulvio Corno
Presentation given at the 3rd International Conference on Ambient Systems, Networks and Technologies
August 27-29, 2012, Niagara Falls, Ontario, Canada.
The paper is available on the PORTO open access repository: http://porto.polito.it/2496720/
2013 3 27 TAR Webinar Part 4 Getting Started SiglerSonya Sigler
Getting started with technology assisted review can be difficult if lawyers aren't used to this type of technology. Part 4 of this webinar series provides in-depth coverage of how to get started with TAR tools.
“Who’s Afraid of E-Discovery” was presented by George E. Pallas and Jason Copley from the law firm of Cohen Seglias Pallas Greenhall & Furman PC for the members of the Mid-Atlantic Steel Fabricators Association.
Part 5 in this series of webinars on Demystifying Technology Assisted Review covers Dispelling Myths and Offering Practice Tips. Sonya Sigler of SFL Data, Paige Hunt of Perkins Coie, and Chris Mammen of Hogan Lovells cover this topic in depth.
Julia Brickell - Your "Big Buckets" Are Full Of "Big Data" - The Information ...ARMA International
The End Game? To Retain What's Needed.
-Know what you need to keep
-Employ the right expertise to find it
-The right tools
-The right expertise
-Deployed effectively against diverse sources
-Securely dispose of the rest
• Explored and cleaned a huge volume of user activity logs (JSON) from a movies website using MapReduce jobs in Python.
• Classified user accounts into adults and children for targeted advertising by implementing a similarity-ranking algorithm.
• Grouped user sessions based on user behavior using K-means clustering to observe outliers and find distinctive groups.
• Predicted movie ratings with user-user and item-item recommendation algorithms in Mahout.
This presentation describes an intelligent IT monitoring solution that uses Nagios as the source of information, Esper as the CEP engine, and a PCA algorithm.
EUGM 2014 - Brock Luty (Dart Neuroscience): A ChemAxon/KNIME based tool for ...ChemAxon
As the usage of parallel synthesis in early stage drug discovery has evolved, medicinal chemists have demanded ever more sophisticated tools for the design and virtual screening of potential chemical libraries. We have created and deployed a chemical library design tool (LDT) using ChemAxon technology along with the Infocom nodes in KNIME. Users enumerate potential libraries with
Reactor, employing curated reactions and add standardized calculated properties. Custom KNIME nodes call back-end services on a high-performance computing grid to enable computationally intensive calculations (e.g. Open Eye ROCS) with result sets pushed back to the user on reconnection. Library profile shaping in Spotfire allows the selection of reaction sets with optimized properties, which are then pushed back into KNIME for further processing and export.
From Surendra Reddy's presentation "Walking Through Cloud Serving at Yahoo!" at the 2009 Cloud Computing Expo in Santa Clara, CA, USA. Here's the talk description on the Expo's site: http://cloudcomputingexpo.com/event/session/508
6th Session - Application areas in the search for advanced statistical technologies...Jürgen Ambrosi
In this session we will see, with the usual hands-on demo approach, how to use the R language to perform value-added analyses.
We will experience first-hand the parallelization performance of the algorithms, a fundamental aspect in helping researchers reach their goals.
This session features the participation of Lorenzo Casucci, Data Platform Solution Architect at Microsoft.
In this deck from the Stanford HPC Conference, Ryan Quick from Providentia Worldwide describes how DNNs can be used to improve EDA simulation runs.
"Systems Intelligence relies on a variety of methods for providing insight into the core mechanisms for driving automated behavioral changes in self-healing command and control platforms. This talk reports on initial efforts with leveraging Semiconductor Electronic Design Automation (EDA) telemetry data from cross-domain sources including power, network, storage, nodes, and applications in neural networks as a driving method for insight into SI automation systems."
Watch the video: https://youtu.be/2WbR8tq-XbM
Learn more: http://www.providentiaworldwide.com/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
One of the biggest problems of software projects is that, while the practice of software development is commonly thought of as engineering, it is inherently a creative discipline; hence, many things about it are hard to measure. While simple yardsticks like test coverage and cyclomatic complexity are important for code quality, what other metrics can we apply to answer questions about our code? What coding conventions or development practices can we implement to make our code easier to measure? We'll take a tour through some processes and tools you can implement to begin improving code quality in your team or organization, and see what a difference it makes to long-term project maintainability. More importantly, we'll look at how we can move beyond today's tools to answer higher-level questions of code quality. Can 'good code' be quantified?
Keynote talk at the International Conference on Supercomputing 2009, at IBM Yorktown in New York. This is a major update of a talk first given in New Zealand last January. The abstract follows.
The past decade has seen increasingly ambitious and successful methods for outsourcing computing. Approaches such as utility computing, on-demand computing, grid computing, software as a service, and cloud computing all seek to free computer applications from the limiting confines of a single computer. Software that thus runs "outside the box" can be more powerful (think Google, TeraGrid), dynamic (think Animoto, caBIG), and collaborative (think FaceBook, myExperiment). It can also be cheaper, due to economies of scale in hardware and software. The combination of new functionality and new economics inspires new applications, reduces barriers to entry for application providers, and in general disrupts the computing ecosystem. I discuss the new applications that outside-the-box computing enables, in both business and science, and the hardware and software architectures that make these new applications possible.
Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud ...Priyanka Aash
Machine learning algorithms are key to modern at-scale cyberdefense. Transfer learning is a state of the art ML paradigm that enables applying knowledge and algorithms developed from one field to another, resulting in innovative solutions. This talk presents transfer learning in action wherein techniques created from other areas are successfully re-purposed and applied to cybersecurity.
(Source: RSA Conference USA 2018)
Apache Calcite: A Foundational Framework for Optimized Query Processing Over ...Julian Hyde
A talk given at ACM SIGMOD 2018 in support of the paper <a href="https://arxiv.org/abs/1802.10233"> Calcite: A Foundational Framework for Optimized Query Processing Over Heterogeneous Data Sources</a>.
Apache Calcite is a foundational software framework that provides query processing, optimization, and query language support to many popular open-source data processing systems such as Apache Hive, Apache Storm, Apache Flink, Druid, and MapD. Calcite's architecture consists of a modular and extensible query optimizer with hundreds of built-in optimization rules, a query processor capable of processing a variety of query languages, an adapter architecture designed for extensibility, and support for heterogeneous data models and stores (relational, semi-structured, streaming, and geospatial). This flexible, embeddable, and extensible architecture is what makes Calcite an attractive choice for adoption in big-data frameworks. It is an active project that continues to introduce support for new types of data sources, query languages, and approaches to query processing and optimization.
New developments in open source ecosystem spark3.0 koalas delta lakeXiao Li
In this talk, we will highlight major efforts happening in the Spark ecosystem. In particular, we will dive into the details of adaptive and static query optimizations in Spark 3.0 to make Spark easier to use and faster to run. We will also demonstrate how new features in Koalas, an open source library that provides Pandas-like API on top of Spark, helps data scientists gain insights from their data quicker.
2012 11 7 TAR Webinar Part 3 Sigler
1. Demystifying Technology Assisted Review, Part 3: Deconstructing the Technology
Sonya L. Sigler
2. Agenda
-Review/Overview
-Underlying Search Technology: dtSearch, Lucene (open source), others – MySQL, etc.
-Underlying Statistical Based Technology
-Rules Based Technology (Linguistic or Statistical)
-Bayesian Probabilistic Technologies
-Latent Semantic Indexing
-Q & A
3. Review/Overview – Search & Review Spectrum
-Linear Review
-Culling
-Iterative Search Review
-Accelerated Review: Email Threading, Near Duplicate Detection
-Automated Review (Per CA): Clustering, Relevance Ranking, Document Categorization, (Supervised) Machine Learning, Latent Semantic Indexing (statistical probability), Pattern Analysis
-Sampling Data for High Precision and Recall Rates
[Spectrum diagram axes: Cost vs. Organization Commitment]
4. Underlying Technologies
-Rules Based Systems: dtSearch, Keyword Search, Ontologies, Lucene, other search engines; Linguistic – word based
-Statistical – numbers based: Bayesian Classification, Support Vector Models, Latent Semantic Indexing
5. Database Normalization
Original:
From: Nuala Coogan Nuala@SFLData.com
Subject: EDI Summit – Florida
Date: October 3, 2012 10:11:21 AM PDT
To: Sigler L. Sonya Sonya@sigler.name
Normalized:
From: Nuala Coogan
Subject: EDI Summit – Florida
Date: 10/03/12
To: Sonya Sigler
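The normalization step above can be sketched in code. This is a minimal illustration, not any vendor's implementation: the field names, date format, and regex are assumptions chosen to match the slide's before/after example.

```python
# Sketch: normalizing email metadata so the same sender and date compare
# equal across documents. Formats are illustrative only.
from datetime import datetime
import re

def normalize_header(raw):
    """Map a raw header dict to a canonical form."""
    out = {}
    # Strip an embedded address like "Nuala Coogan Nuala@SFLData.com"
    out["From"] = re.sub(r"\S+@\S+", "", raw["From"]).strip()
    out["Subject"] = raw["Subject"].strip()
    # Collapse a verbose date (time zone dropped for simplicity) to MM/DD/YY
    dt = datetime.strptime(raw["Date"], "%B %d, %Y %I:%M:%S %p")
    out["Date"] = dt.strftime("%m/%d/%y")
    out["To"] = re.sub(r"\S+@\S+", "", raw["To"]).strip()
    return out

raw = {
    "From": "Nuala Coogan Nuala@SFLData.com",
    "Subject": "EDI Summit - Florida",
    "Date": "October 3, 2012 10:11:21 AM",
    "To": "Sonya Sigler Sonya@sigler.name",
}
print(normalize_header(raw))
```

A real pipeline would also canonicalize name order ("Sigler L. Sonya" vs. "Sonya Sigler"), which needs a name dictionary rather than a regex.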
6. Tokenization
-Words, Phrases, Symbols; mostly at the word level
-Numbers, Punctuation
-Meaningful Elements or Pieces –> Tokens
-Parsing and Text Mining
-Treatment of Contractions, Hyphenated words, Emoticons, and Larger Constructs (like URLs)
-Look-up tables
7. Linguistic Based Technologies – Keyword Sample, Ontology Sample
Keyword samples:
-Simple: "legal systems" OR legalsystems; "Mike Custodian"
-Medium: mail(custodian@domain.com) AND "legal systems"; (Custodian w/3 (Mike OR Michael OR M))
-Complex: (privilege OR privileged OR legally OR "work product") NOT w/35 (((original OR intended OR designated OR named) w/3 (recipient OR recipients OR addressee OR addressees OR solely)) OR ("message in error") OR ("received in error") OR ("named above") OR ((electronic OR email OR e-mail) w/3 (message OR transmission)) OR ("confidentiality notice"))
Ontology sample (std:/actor: concept expressions):
-((+(std:%CapacityReports_% std:%DINCapacity_%) (std:%ACMEEPPlant_% std:%ProductName_%))
-(+(std:%ACMEPNPlant_% std:%ProductName_%) (std:%ProductiveCapability_% std:%CapacityReports_%))
-(+(std:%CapacityCreep_% std:%OperationsImprovement_% std:%CapacityExpansion_% std:%CapacityRestoration_%) +(std:%ACMEPNPlant_% std:%ProductName_%))
-(+(std:%EquipmentReplacement_% std:%FinishingColumn_%) +(std:%ACMEPNPlant_% std:%ProductName_%))
-(std:%Audit_% actor:%Audit_%)
-(+(std:%SettlementNegotiations_% std:%ContractNegotiations_%) +(actor:%ACMEOutsideCounsel_% std:%ACMEOutsideCounsel_% actor:%ACMEUBOutsideCounsel_% std:%AcmeSubOutsideCounsel_% actor:%AcmeSub_% std:%AcmeSub_%))
-(std:%FTC_% actor:%FTC_%)
-((+subject:%ProductName_% +(std:swap std:"supply agreement" std:"exchange agreement" std:"agree to exchange")) std:"name
8. Search Engines
-dtSearch: dtSearch Corp., founded 1991; incorporated into Symantec's Norton Navigator; SDKs available, most license off the shelf; http://support.dtsearch.com/faq/search.html
-Lucene: open source - http://lucene.apache.org/core/; Doug Cutting, 1999; part of Apache projects in 2001; APIs, customizable
-Other: MySQL, SQL (DBMS, RDBMS)
9. dtSearch
-Relativity, Concordance, Viewpoint, others
-Single-user desktop license $199
-Little customization – more similarities across apps
-Includes Boolean operators
-Includes proximity searching
-Includes fuzzy searching: Alphabet -> Alphaqet, alpphabet, alpkaqet
10. Lucene
-Clearwell, Intella, Cataphora, SHIFT, others
-Open source tool – meant to be customized
-Little similarities across apps – know your defaults!
-Includes Boolean operators
-Includes proximity searching
12. Boolean Operators – AND, OR, NOT
dtSearch:
-Multiple words searched together are treated as a phrase
-ANY – treats a word list as separated by OR
-ALL – treats a word list as separated by AND
Lucene:
-Depends on customization
-Know your defaults
-Spell out variations
13. Proximity
dtSearch:
-Pre/post
-w/ – order doesn't matter: house white, white house
-pre/ – finds first word prior to second word: white house
Lucene:
-w/ – order doesn't matter
-No pre usage
14. Punctuation
dtSearch:
-Letters; space treated as a word break
-Hyphens
-% – fuzzy searching
-_ – ignored
Lucene:
-All punctuation ignored
16. Noise Words – Unindexed, Ignored
dtSearch:
-Unindexed; can create a custom index
-Many, but a few examples: do, not, for, your, only, under, made, way
Lucene:
-Ignores * in quotes: ("Quality Control*") = "Quality Control" but nothing else
-Know defaults
17. Stemming v. Wild Cards
Stemming:
-Syntactic variations: regular verbs, irregular verbs
-dtSearch performs poorly with irregular verbs
-Time consuming; spelling out recommended
Wild Cards:
-Strings of characters; replacements for beginnings, parts, or endings
-Lucene: *
-dtSearch: ? for a single character, * for any number of characters
-Wild cards in quotes
18. Stemming v. Wild Cards Example
Wild Cards (Catch* – Lucene): Catch, Catches, Catching, Catcher, Catch1234
Stemming (Catch~ – dtSearch): Catch, Catches, Catching, Catcher
-Catch1234 – not in stemming
-Caught – not in dtSearch stemming
21. Statistical Based Technologies
Concept – Categorization:
-User created
-Supervised
-Control topics
-Time consuming
22. Statistical Based Technologies
Rules Based Systems:
-If... Then...
-If email = person 1 to person 2, then return it
-If email = person 1 or person 2, then return it
Artificial Intelligence Systems:
-Entity extraction (& dictionaries)
-Time consuming
-Mirror human thinking
-Case, subject matter
-Transparent system
24. Bayesian
Bayesian illustration:
-Baseball, glove, diamond, bats, hit, home run
-Diamond, pendant, jewelry
Co-occurrence:
-Local – within a document
-Global – across document population
-Frequency – how often does it appear
-Weighting – uniqueness counts
30. Defensibility Report
-Document, Document, Document
-Transparency
-Workflow
-What was considered, by whom?
-QC process
-Metrics
31. Q&A - Thank you!
Post your questions to the presenter in the chat section
Sonya L. Sigler
Vice President, Product Strategy & Consulting
SFL Data
415-321-8385
sonya@sfldata.com
www.sfldata.com