Get Another Label? Improving Data Quality and Machine Learning Using Multiple, Noisy Labelers
Foster Provost, New York University
Joint work with Panos Ipeirotis, Victor Sheng, and Jing Wang
Outsourcing machine learning preprocessing

Traditionally, modeling teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing.
– As Raghu Ramakrishnan put it in his SIGKDD Innovation Award Lecture (2008): “the best you can expect are noisy labels”

Now, we can outsource preprocessing tasks such as labeling, feature extraction, verifying information extraction, etc.
– using Mechanical Turk, Rent-a-Coder, etc.
– quality may be (much?) lower than expert labeling
– but low costs can allow massive scale

The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.
Example: Build a web-page classifier for inappropriate content

Need a large number of hand-labeled web pages. Get people to look at pages and classify them; for example, for adult content:
G (general), PG (parental guidance), R (restricted), X (porn)

Cost/speed statistics:
– Undergrad intern: 200 web pages/hr, cost: $15/hr
– MTurk: 2,500 web pages/hr, cost: $12/hr
Noisy labels can be problematic

Many tasks rely on high-quality labels for objects:
– web-page classification for safe advertising
– learning predictive models
– searching for relevant information
– finding duplicate database records
– image recognition/labeling
– song categorization
– sentiment analysis

Noisy labels can lead to degraded task performance.
Quality and Classification Performance

[Plot: classifier accuracy vs. number of training examples (mushroom data set); one learning curve per labeler quality P = 0.5, 0.6, 0.8, 1.0]

As labeling quality increases, classification quality increases. Here, labels are values for the target variable.
Summary of results

– Repeated labeling can improve data quality and model quality (but not always)
– When labels are noisy, repeated labeling can be preferable to single labeling
– When labels are relatively cheap, repeated labeling can do much better
– Round-robin repeated labeling does well
– Selective repeated labeling improves significantly
I won’t talk about …
Related topic

Estimating (and using) labeler quality:
– for multi-labeled data: Dawid & Skene 1979; Raykar et al. JMLR 2010; Donmez et al. KDD-09
– for single-labeled data with variable-noise labelers: Donmez & Carbonell 2008; Dekel & Shamir 2009a,b
– to eliminate/down-weight poor labelers: Dekel & Shamir; Donmez et al.; Raykar et al. (implicitly)
– to correct labeler biases: Ipeirotis et al. HCOMP-10
– example-conditional labeler performance: Yan et al. 2010a,b

Using a learned model to find bad labelers/labels: Brodley & Friedl 1999; Dekel & Shamir; us (I’ll discuss)
Setting for this talk (I)

An unknown process provides data points to be labeled, drawn randomly from some fixed probability distribution:
– data points, or “examples”, comprise a vector of features or descriptive attributes
– we sometimes consider a fixed subset S of examples
– labels are binary

Set L of labelers, L1, L2, … (potentially unbounded for this talk):
– each Li has “quality” pi, the probability that Li will label any given example correctly
– pi = pj for most of this talk (sometimes called q)
– some subset of L will label each example
– some strategies will acquire k labels for each example

Total acquisition cost includes the cost CU of acquiring the unlabeled “feature portion” of an example and the cost CL of acquiring its label:
– for most of the talk I’ll ignore CU
– ρ = CU/CL gives the cost ratio
Setting for this talk (II)

We select a fixed process for producing an integrated label from a set of labels (e.g., majority voting).

We care about:
1) the quality of the labeling, i.e., the probability that an integrated label is correct
2) the generalization performance of predictive models induced from the data plus integrated labels, e.g., measured on hold-out data (accuracy, AUC, etc.)
Majority Voting and Label Quality

[Plot: integrated quality vs. number of labelers (1 to 13); one curve per individual labeler quality P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

Ask multiple labelers and keep the majority label as the “true” label. Quality is the probability of the integrated label being correct; P is the probability of an individual labeler being correct.
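As a concrete illustration of these curves, here is a minimal sketch (Python, not from the talk) of integrated quality under majority voting, assuming independent labelers of equal quality p and an odd number of labelers k so ties cannot occur:

```python
from math import comb

def majority_vote_quality(p: float, k: int) -> float:
    """Probability that the majority of k independent labelers,
    each correct with probability p, yields the correct label (odd k)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# Quality rises with k when p > 0.5, stays flat at p = 0.5,
# and falls with k when p < 0.5 -- matching the plotted curves.
for p in (0.4, 0.5, 0.7, 0.9):
    print(p, [round(majority_vote_quality(p, k), 3) for k in (1, 3, 5, 11, 13)])
```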
Tradeoffs for Modeling

– Get more examples → improve classification
– Get more labels → improve label quality → improve classification
[Plot: classifier accuracy vs. number of examples (mushroom data set); learning curves for labeler quality P = 0.5, 0.6, 0.8, 1.0]
Basic Labeling Strategies

Single Labeling:
– get as many data points as possible
– one label each

Round-robin Repeated Labeling:
– Fixed Round Robin (FRR): keep labeling the same set of points in some order
– Generalized Round Robin (GRR): repeatedly label data points, giving the next label to the point with the fewest labels so far (see the sketch below)
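A minimal sketch of GRR (Python; the `get_label` callable standing in for a noisy labeler is an illustrative assumption, not from the talk):

```python
import heapq

def generalized_round_robin(items, k, get_label):
    """GRR: always give the next label to the example with the fewest
    labels so far, until every example has k labels."""
    heap = [(0, i) for i in range(len(items))]  # (labels so far, example index)
    heapq.heapify(heap)
    labels = [[] for _ in items]
    while heap:
        n, i = heapq.heappop(heap)
        labels[i].append(get_label(items[i]))   # acquire one noisy label
        if n + 1 < k:
            heapq.heappush(heap, (n + 1, i))
    return labels
```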
Fixed Round Robin vs. Single Labeling

[Plot: FRR (100 examples) vs. SL; labeling quality p = 0.6, 100 examples]

With high noise or many examples, repeated labeling is better than single labeling.
Fixed Round Robin vs. Single Labeling

[Plot: FRR (50 examples) vs. single labeling; labeling quality p = 0.8, 50 examples]

With low noise and few examples, getting more (single-labeled) examples is better.
Gen. Round Robin vs. Single Labeling

[Plot: labeling quality vs. data acquisition cost, 0 to 20,000 (mushroom, p = 0.6, CU = 0 i.e. ρ = 0, k = 10); curves for SL and GRR; an annotation marks where all examples are used up. ρ: cost ratio; k: number of labels per example]

Repeated labeling is better than single labeling for this setting.
Gen. Round Robin vs. Single Labeling

[Plot: labeling quality vs. data acquisition cost, 0 to 16,000 (mushroom, p = 0.6, ρ = CU/CL = 3, k = 5); curves for SL and GRR. ρ: cost ratio; k: number of labels per example]

Repeated labeling is better than single labeling for this setting.
Gen. Round Robin vs. Single Labeling

[Plot: labeling quality vs. data acquisition cost, 0 to 44,000 (mushroom, p = 0.6, ρ = CU/CL = 10, k = 12); curves for SL and GRR. ρ: cost ratio; k: number of labels per example]

Repeated labeling is better than single labeling for this setting.
Selective Repeated-Labeling

We have seen so far:
– with enough examples and noisy labels, getting multiple labels is better than single labeling
– when we consider costly preprocessing, the benefit is magnified

Can we do better than the basic strategies? Key observation: we have additional information to guide the selection of data for repeated labeling, namely the current multiset of labels.

Example: {+, -, +, -, -, +} vs. {+, +, +, +, +, +}
Natural Candidate: Entropy

Entropy is a natural measure of label uncertainty:
– E({+,+,+,+,+,+}) = 0
– E({+,-,+,-,-,+}) = 1

Strategy: get more labels for high-entropy label multisets.

E(S) = -(|S+|/|S|) log2(|S+|/|S|) - (|S-|/|S|) log2(|S-|/|S|)

where S+ is the multiset of positive labels in S and S- the multiset of negative labels.
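A quick sketch (Python, illustrative only) of this entropy score on a label multiset, reproducing the values above:

```python
import math

def label_entropy(pos: int, neg: int) -> float:
    """Entropy E(S) of a multiset with pos positive and neg negative labels."""
    n = pos + neg
    e = 0.0
    for c in (pos, neg):
        if c:  # skip empty classes (0 * log 0 = 0)
            e -= (c / n) * math.log2(c / n)
    return e

print(label_entropy(6, 0))   # 0.0
print(label_entropy(3, 3))   # 1.0
# Entropy is scale invariant, a problem flagged on the next slides:
print(label_entropy(3, 2), label_entropy(600, 400))   # both ~0.971
```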
What Not to Do: Use Entropy

[Plot: labeling quality (0.6 to 1.0) vs. number of labels, 0 to 4,000 (mushroom, p = 0.6); curves for GRR and ENTROPY]

Entropy-based selection improves at first, but hurts in the long run.
Why not Entropy

– In the presence of noise, entropy will be high even with many labels.
– Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-), even though the latter is far stronger evidence.
Estimating Label Uncertainty (LU)

Observe the +’s and -’s and compute Pr{+|obs} and Pr{-|obs}. Label uncertainty S_LU is the tail of the Beta distribution on the “wrong” side of 0.5.

[Figure: Beta probability density function over [0, 1], with the tail below 0.5 shaded as S_LU]
Label Uncertainty: examples (p = 0.7)

Labels       Observed     Entropy    CDFb (S_LU)
5 labels     (3+, 2-)     ~0.97      0.34
10 labels    (7+, 3-)     ~0.88      0.11
20 labels    (14+, 6-)    ~0.88      0.04

With more labels and the same mix, entropy barely moves while the Beta-tail uncertainty shrinks.
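A minimal sketch of the S_LU computation (Python with SciPy; the uniform Beta(1,1) prior is an assumption, but it reproduces the CDFb values above):

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """S_LU: posterior mass of Beta(pos + 1, neg + 1) on the 'wrong'
    side of 0.5, i.e., against the current majority label."""
    tail = beta.cdf(0.5, pos + 1, neg + 1)
    return tail if pos >= neg else 1.0 - tail

print(label_uncertainty(3, 2))    # ~0.34
print(label_uncertainty(7, 3))    # ~0.11
print(label_uncertainty(14, 6))   # ~0.04
```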
Label Uncertainty vs. Round Robin

[Plot: labeling quality (0.6 to 1.0) vs. number of labels, 0 to 4,000 (mushroom, p = 0.6); curves for GRR and LU]

Similar results across a dozen data sets.
More sophisticated label uncertainty? (using estimated instance-specific label quality)
More sophisticated LU improves labeling quality under class imbalance and fixes some pesky LU learning curve glitches
Both techniques perform essentially optimally with balanced classes
Another strategy: Model Uncertainty (MU)

Learning models of the data provides an alternative source of information about label certainty (a random forest for the results to come).

Model uncertainty: get more labels for instances that cause model uncertainty.

Intuition?
– for modeling: why improve training-data quality where the model already is certain?
– for data quality: low-certainty “regions” may be due to incorrect labeling of the corresponding instances
[Diagram: models learned from labeled examples; a self-healing process in which the regions of +’s and -’s the model is uncertain about become candidates for relabeling]
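A rough sketch of an MU score (Python with scikit-learn; the talk mentions a random forest, but the training regime and hyperparameters here are guesses, not the authors’ exact procedure):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def model_uncertainty(X_train, y_train, X_pool):
    """S_MU: score each example by how uncertain a model trained on the
    current (noisy) integrated labels is about it; 1.0 = maximally
    uncertain (predicted probability 0.5), 0.0 = fully certain.
    Assumes binary labels {0, 1}."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    p_pos = clf.predict_proba(X_pool)[:, 1]   # P(+) for each example
    return 1.0 - 2.0 * np.abs(p_pos - 0.5)
```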
Yet another strategy: Label & Model Uncertainty (LMU)

Label and model uncertainty (LMU): avoid examples where either strategy is certain, combining the two scores by geometric mean:

S_LMU = sqrt(S_LU × S_MU)
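As a one-line sketch (illustrative) of the combination:

```python
import numpy as np

def lmu_score(s_lu: np.ndarray, s_mu: np.ndarray) -> np.ndarray:
    """Geometric mean of label and model uncertainty: high only when
    neither source is certain about the example."""
    return np.sqrt(s_lu * s_mu)
```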
[Plot: labeling quality (0.6 to 1.0) vs. number of labels, 0 to 2,000 (waveform, p = 0.6); curves for UNF (uniform round robin), MU, LU, and LMU. Label + model uncertainty performs best; model uncertainty alone also improves quality.]
Comparison: Model Quality (I)

[Plot: model accuracy (70 to 100) vs. number of labels, 0 to 4,000 (sick data set, p = 0.6); curves for GRR, MU, LU, and LMU (label & model uncertainty)]

Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.
Comparison: Model Quality (II)

[Plot: model accuracy (65 to 100) vs. number of labels, 0 to 4,000 (mushroom, p = 0.6); curves for GRR, MU, LU, LMU, and SL]

Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.
Why does Model Uncertainty (MU) work?

[Histogram: MU score distributions for correctly labeled (blue) and incorrectly labeled (purple) cases]
Why does Model Uncertainty (MU) work?

[Diagram: models learned from the examples; the self-healing process. Annotations contrast “self-healing MU” with “active learning” MU]
Adult content classification
Summary of results

– Micro-task outsourcing (e.g., MTurk, Rent-a-Coder, the ESP Game) changes the landscape for data formulation
– Repeated labeling improves data quality and model quality (but not always)
– With noisy labels, repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap
– When labels are relatively cheap, repeated labeling can do much better
– Round-robin repeated labeling works well
– Selective repeated labeling improves substantially
– Best performance comes from combining model-based and label-set-based indications of uncertainty
Opens up many new directions…

– Strategies using the “learning-curve gradient”
– Estimating and correcting the quality of each labeler (cf. the related work listed earlier)
– Example-conditional labeling difficulty
– Increased compensation vs. labeler quality
– Multiple “real” labels
– Truly “soft” labels
– Selective repeated tagging
Example: Build a web-page classifier for inappropriate content

Need a large number of hand-labeled web pages. Get people to look at pages and classify them as:
G (general), PG (parental guidance), R (restricted), X (porn)

Cost/speed statistics:
– Undergrad intern: 200 websites/hr, cost: $15/hr
– MTurk: 2,500 websites/hr, cost: $12/hr
Bad news: Spammers!

Worker ATAMRO447HWJQ labeled X (porn) sites as G (general audience).
Solution: Repeated Labeling

The probability of correctness increases with the number of workers and with the quality of the workers:
– 1 worker: 70% correct
– 11 workers: 93% correct

Single-vote statistics: MTurk: 2,500 websites/hr, cost: $12/hr; undergrad: 200 websites/hr, cost: $15/hr
11-vote statistics: MTurk: 227 websites/hr, cost: $12/hr; undergrad: 200 websites/hr, cost: $15/hr
But Majority Voting can be Expensive

Spammer among 9 workers: our “friend” ATAMRO447HWJQ mainly marked sites as G. Obviously a spammer…
We can compute error rates for each worker

Error rates for ATAMRO447HWJQ:
– P[X → X] = 9.847%    P[X → G] = 90.153%
– P[G → X] = 0.053%    P[G → G] = 99.947%
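A sketch of how such per-worker error rates could be computed (Python; the data structures and the use of majority votes as stand-in true labels are illustrative assumptions, not the talk’s exact procedure):

```python
import numpy as np

CLASSES = ["G", "PG", "R", "X"]

def worker_error_rates(worker_labels, true_labels):
    """Row-normalized confusion matrix for one worker.
    worker_labels: dict item -> label this worker assigned
    true_labels:   dict item -> current best guess at the true label
                   (e.g., the majority vote over all workers)."""
    idx = {c: i for i, c in enumerate(CLASSES)}
    counts = np.zeros((len(CLASSES), len(CLASSES)))
    for item, assigned in worker_labels.items():
        counts[idx[true_labels[item]], idx[assigned]] += 1
    totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
```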
Rejecting spammers and Benefits

Random answers would give an error rate of 50%; the average error rate for ATAMRO447HWJQ is 45.2%:
– P[X → X] = 9.847%    P[X → G] = 90.153%
– P[G → X] = 0.053%    P[G → G] = 99.947%

Action: REJECT and BLOCK.

Results:
– over time you block all spammers
– spammers learn to avoid your HITs
– you can decrease redundancy, as the quality of the remaining workers is higher
After rejecting spammers, quality goes up

                1 worker       5 workers      11 workers
With spam       70% correct    –              93% correct
Without spam    80% correct    94% correct    –
Correcting biases

Sometimes workers are careful but biased. Worker ATLJIK76YH1TF classifies G → P and P → R; his average error rate is 45.0%. Is ATLJIK76YH1TF a spammer?

Error rates for worker ATLJIK76YH1TF (rows: true class; columns: assigned class):
       G        P        R         X
G      20.0%    80.0%    0.0%      0.0%
P      0.0%     0.0%     100.0%    0.0%
R      0.0%     0.0%     100.0%    0.0%
X      0.0%     0.0%     0.0%      100.0%
Correcting biases

For ATLJIK76YH1TF, we simply need to compute the “non-recoverable” error rate (technical details omitted). Non-recoverable error rate for ATLJIK76YH1TF: 9%.

The “condition number” of the error-rate matrix [how easy it is to invert the matrix] is a good indicator of spamminess (error rates for ATLJIK76YH1TF as in the table above).
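A sketch of the condition-number check (Python; the “careful” matrix is a hypothetical comparison point, not from the talk):

```python
import numpy as np

# 2x2 error-rate matrices over {X, G} (rows: true class, cols: assigned).
spammer = np.array([[0.09847, 0.90153],   # ATAMRO447HWJQ: marks ~everything G
                    [0.00053, 0.99947]])
careful = np.array([[0.95, 0.05],         # hypothetical diligent worker
                    [0.05, 0.95]])

# A spammer's rows are nearly identical: the assigned label carries almost
# no information about the true one, so the matrix is nearly singular and
# hard to invert. The condition number quantifies "how easy it is to invert".
print(np.linalg.cond(spammer))   # large (about 19)
print(np.linalg.cond(careful))   # close to 1 (about 1.1)
```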
A different sort of Label Uncertainty:

If we knew the labeler quality and the class prior, we could estimate label uncertainty directly from p & n.
But we estimated the distribution over the quality q above…

[Figure: Beta probability density function over [0, 1]]
“New” label uncertainty (NLU) I
“New” label uncertainty (NLU) II
More sophisticated LU improves labeling quality under class imbalance and fixes some pesky LU learning curve glitches
… but doesn’t systematically help learning, even on the same data
What if different labelers have different qualities?

(Sometimes) the quality of multiple noisy labelers is better than the quality of the best labeler in the set. Here, 3 labelers with qualities p-d, p, p+d.
Estimating Labeler Quality

(Dawid & Skene, 1979): “multiple diagnoses”
– initialize by assuming equal labeler qualities
– estimate “true” labels for the examples
– estimate the quality of each labeler given the “true” labels
– repeat until convergence (a minimal sketch follows this list)
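A minimal EM sketch of the Dawid & Skene idea for the binary case (Python/NumPy; dense votes and a single accuracy per worker are simplifying assumptions, since the 1979 model uses full per-class confusion matrices and handles missing labels):

```python
import numpy as np

def dawid_skene_binary(votes: np.ndarray, n_iter: int = 50):
    """votes: (n_items, n_workers) array of 0/1 labels, one per worker per item.
    Returns (posterior P(y=1) per item, estimated accuracy per worker).
    Initialization: majority vote, i.e., all workers assumed equally good."""
    mu = votes.mean(axis=1)  # soft "true" labels
    for _ in range(n_iter):
        # M-step: each worker's accuracy against the current soft labels
        acc = (mu[:, None] * votes + (1 - mu)[:, None] * (1 - votes)).mean(axis=0)
        acc = np.clip(acc, 1e-6, 1 - 1e-6)
        prior = np.clip(mu.mean(), 1e-6, 1 - 1e-6)
        # E-step: posterior over true labels given worker accuracies
        log_odds = np.log(prior / (1 - prior)) + (
            votes * np.log(acc / (1 - acc))
            + (1 - votes) * np.log((1 - acc) / acc)
        ).sum(axis=1)
        mu = 1.0 / (1.0 + np.exp(-log_odds))
    return mu, acc
```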
Soft Labeling vs. Majority Voting

MV: majority voting; ME: soft labeling

[Plot: model accuracy (55 to 70) vs. number of examples, 10 to 1,610 (bmg data set, p = 0.6); curves for MV and ME]
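One plausible reading of soft labeling, as a sketch (Python; the vote-fraction weights are a simple stand-in for however the talk estimates the label posterior):

```python
def soft_labeled_examples(X, pos_counts, neg_counts):
    """Soft labeling: instead of one majority-voted label per example,
    emit up to two weighted copies, one per class, with weights given by
    the vote fractions. The triples (x, label, weight) can be fed to any
    learner that accepts instance weights (e.g., sklearn's sample_weight)."""
    out = []
    for x, p, n in zip(X, pos_counts, neg_counts):
        total = p + n
        if p:
            out.append((x, 1, p / total))
        if n:
            out.append((x, 0, n / total))
    return out
```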
Thanks!
Q & A?