Technology Assisted Review Moving Beyond the First Generation
John Tredennick CEO/Founder
Catalyst
§ 1,800 Exabytes
§ 1.8 million Petabytes
§ 1.8 billion Terabytes
§ 1.8 trillion Gigabytes
§ 1.8 quadrillion Megabytes
1.8 Zettabytes a year
Library of Congress—30 Terabytes
Exploding Content >> Big Data
Sixty Million Libraries of Congress each year... and growing!
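The unit ladder above is the same 1.8 Zettabytes at every rung, and the Library of Congress comparison follows directly from it. A quick arithmetic check, using the 30 Terabyte figure from the slide:

```python
ZETTABYTE = 10 ** 21
TERABYTE = 10 ** 12

yearly_bytes = 1.8 * ZETTABYTE      # content created per year
loc_bytes = 30 * TERABYTE           # one Library of Congress, per the slide

# 1.8 ZB / 30 TB = 60 million Libraries of Congress per year
print(f"{int(yearly_bytes / loc_bytes):,}")  # 60,000,000
```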
[Chart: Case Size (in Gigabytes), 2003-2012; y-axis 0-300 GB]
Big Data >> Big Discovery
Telling Stories
1. Your job has not changed.
2. But it has gotten a bit harder...
✓ Find the story
✓ Tell the story
✓ Prove the story
Trust
Is This New?
We Already Use It
Predictive Ranking
What is the Process?
1. Assemble your files
Shredding the Documents
What is the Process?
1. Assemble your files
2. Add seed documents to the mix
3. Analyze seeds and rank similar documents
How Does it Work?
§ Support Vector Machines
§ Naïve Bayes
§ K-Nearest Neighbor
§ Geospatial Predictive Modeling
§ Latent Semantic Indexing
"I may be less interested in the science behind the 'black box' than in whether it produced responsive documents with reasonably high recall and high precision."
Peck, M.J. (SDNY)
What Goes on Under the Hood?
The computer builds a big, complex search!
What terms are most likely to be associated with good documents?
What terms are most likely to be associated with bad documents?
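One way to picture that "big, complex search" is as a weight learned for every term: terms common in good seed documents pull a document's score up, terms common in bad seeds pull it down. A minimal naive-Bayes-style sketch for illustration only (not Catalyst's actual algorithm; the seed strings are invented examples):

```python
import math
from collections import Counter

def train_weights(good_docs, bad_docs):
    """Learn a log-odds weight per term from labeled seed documents."""
    good = Counter(t for d in good_docs for t in set(d.split()))
    bad = Counter(t for d in bad_docs for t in set(d.split()))
    ng, nb = len(good_docs), len(bad_docs)
    weights = {}
    for t in set(good) | set(bad):
        # Laplace-smoothed probability that the term appears in each class
        p_good = (good[t] + 1) / (ng + 2)
        p_bad = (bad[t] + 1) / (nb + 2)
        weights[t] = math.log(p_good / p_bad)  # positive = "good" term
    return weights

def score(doc, weights):
    """Sum the learned weights over the document's distinct terms."""
    return sum(weights.get(t, 0.0) for t in set(doc.split()))

good_seeds = ["price fixing agreement memo", "fixing the market price"]
bad_seeds = ["lunch menu friday", "office party friday menu"]
w = train_weights(good_seeds, bad_seeds)

docs = ["price agreement draft", "friday lunch plans"]
ranked = sorted(docs, key=lambda d: score(d, w), reverse=True)
print(ranked[0])  # the price-fixing draft ranks first
```

Real TAR engines use far richer features and models (see the list above), but the ranking principle is the same.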
What is the Process?
1. Assemble your files
2. Add seed documents to the mix
3. Analyze seeds and rank similar documents
4. Test results and provide more samples (iterative process)
5. Order review by ranking
Cut Point
Ranking a Document Set
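Once every document has a rank, the cut point is simply the review depth at which enough of the relevant documents have been found. A minimal sketch of the idea (it assumes relevance of each ranked document is known; in practice recall at a given depth is estimated from a random sample):

```python
def cut_point_for_recall(ranked_relevance, target_recall):
    """Given documents ordered best-first (True = relevant), return how
    many must be reviewed from the top to reach the target recall."""
    total_relevant = sum(ranked_relevance)
    found = 0
    for depth, relevant in enumerate(ranked_relevance, start=1):
        found += relevant
        if found >= target_recall * total_relevant:
            return depth
    return len(ranked_relevance)

# Toy ranking: relevant documents concentrated near the top
ranking = [True, True, False, True, True, False, False, False, False, True]
print(cut_point_for_recall(ranking, 0.80))  # review top 5 of 10 for 80% recall
```

The better the ranking concentrates relevant documents at the top, the shallower the cut point for a given recall target, which is where the savings below come from.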
Understanding the Savings
[Yield curve chart: x-axis = percentage of documents reviewed; y-axis = percentage of relevant documents found (recall); baseline shown for linear review, where relevant documents are found in proportion to documents reviewed]
[Yield curve chart: % relevant found vs. % of documents reviewed]
Review 12% and get 80% recall
Understanding the Savings
[Yield curve chart: % relevant found vs. % of documents reviewed]
Review 25% and get 95% recall
Understanding the Savings
[Bar chart: responsive documents found (0-12,000) vs. documents reviewed (10,000-90,000)]
Wellington F Responsive Review
80% Recall Review: 29,248
95% Recall Review: 39,132
100% (Linear) Review: 85,725
Predictive Review     80% Recall    95% Recall
Responsive                 9,168        10,887
Reviewed                  29,248        39,112
Reduction                 56,477        46,613
Saving ($4/doc)         $225,908      $186,452
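The reduction and saving rows follow arithmetically from the review counts at the assumed $4-per-document review cost:

```python
TOTAL_DOCS = 85_725    # full linear review of the Wellington F collection
COST_PER_DOC = 4       # assumed review cost in dollars per document

reviewed = {"80% recall": 29_248, "95% recall": 39_112}
for label, n in reviewed.items():
    reduction = TOTAL_DOCS - n  # documents never reviewed
    print(f"{label}: skip {reduction:,} docs, save ${reduction * COST_PER_DOC:,}")
# 80% recall: skip 56,477 docs, save $225,908
# 95% recall: skip 46,613 docs, save $186,452
```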
The Six Myths of TAR
1. You only get one bite at the apple.
2. Subject matter experts are required for training.
3. You must train on randomly selected documents.
4. You can't start TAR training until you have all of your documents.
5. TAR doesn't work on foreign (Asian) language documents.
6. TAR doesn't work with sparse collections.