TRANSCRIPT
2nd Detection / Segmentation Challenge
Yin Cui, Tsung-Yi Lin, Matteo Ruggero Ronchi, Genevieve Patterson
ImageNet and COCO Visual Recognition Challenges Workshop, Sunday, October 9th, ECCV 2016
Workshop Organizers
Yin Cui, Cornell Tech
Tsung-Yi Lin, Cornell Tech
Matteo Ruggero Ronchi, Caltech
Genevieve Patterson, Brown University

Award Committee: Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Larry Zitnick, Piotr Dollár

Workshop Advisors: Deva Ramanan, Pietro Perona, Michael Maire, Lubomir Bourdev, Serge Belongie, Matteo Ruggero Ronchi, Genevieve Patterson, Yin Cui
Outline
1. Download MS COCO Train / Val set
2. Develop the algorithm
Participate in challenge?
• Yes: 3. Download MS COCO Test-Full, then 4. Upload to CodaLab (5 times max)
• No: 3. Download MS COCO Test-Dev, then 4. Upload to CodaLab (unlimited)
COCO Dataset
• 80 object categories
• 200k images
• 1.2M instances (350k people)
• Every instance segmented
• 106k people with keypoints

Available for download at mscoco.org
COCO 3rd Party Datasets

Available for download at mscoco.org/external
Shout-out to previous algorithms!
Challenges at ECCV 2016
MS COCO Test Sets
The 2015/2016 MS COCO Test set consists of ~80k test images.

Test-dev (development): Debugging, validation and ablation studies. Allows unlimited submissions to the evaluation server.
Test-standard (publications): Used to score entries for the Public Leaderboard.
Test-challenge (competitions): Used to score the workshop competition.
Test-reserve (security): Used to estimate overfitting. Scores on this set are never released.
Evaluation Server Usage
Submissions to all test sets
Evaluation Metrics
Challenge Score: AP
• AP is averaged over multiple IoU values between 0.5 and 0.95.
• More comprehensive metric than the traditional AP at a fixed IoU value (0.5 for PASCAL).
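The averaging described above can be sketched in a few lines. The per-threshold AP values below are illustrative placeholders, not real challenge numbers:

```python
import numpy as np

# Illustrative per-threshold AP values (not real challenge results):
# AP at IoU = 0.50, 0.55, ..., 0.95; stricter thresholds score lower.
ap_per_threshold = [0.62, 0.60, 0.57, 0.53, 0.48, 0.42, 0.35, 0.27, 0.18, 0.08]

# The COCO challenge score is the mean AP over the ten IoU thresholds,
# unlike PASCAL-style AP, which is reported at IoU = 0.5 only.
coco_ap = float(np.mean(ap_per_threshold))
pascal_style_ap = ap_per_threshold[0]  # AP at IoU = 0.5

print(round(coco_ap, 2))          # 0.41
print(round(pascal_style_ap, 2))  # 0.62
```

Note how the averaged score is much lower than AP at IoU = 0.5 alone: the strict thresholds reward precise localization.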
Evaluation Metrics
Other Scores: Size AP
• AP is averaged over instance size:
  • small (A < 32×32)
  • medium (32×32 < A < 96×96)
  • large (A > 96×96)
Evaluation Metrics
Other Scores: AR
• Measures the maximum recall given a fixed number of detections allowed per image: 1, 10, or 100.
• AR is averaged over small (A < 32×32), medium (32×32 < A < 96×96) and large (A > 96×96) object instances.
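The recall-at-k idea behind AR can be sketched as below. This is a simplified illustration with hypothetical inputs (the real metric also averages over IoU thresholds and over all images, and matching itself depends on IoU):

```python
def recall_at_k(matches, num_gt, k):
    """Fraction of ground-truth instances recovered when only the k
    highest-scoring detections in an image are kept.

    matches: booleans, one per detection, sorted by descending score;
    True means the detection matched a ground-truth instance.
    """
    return sum(matches[:k]) / float(num_gt)

# Hypothetical image with 4 ground-truth objects and 6 detections.
matches = [True, True, False, True, False, True]

print(recall_at_k(matches, num_gt=4, k=1))   # 0.25
print(recall_at_k(matches, num_gt=4, k=10))  # 1.0
```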
Evaluation Ambiguity
Which one is better?

[Figure: a Ground-Truth BBox with candidate Detection BBoxes at IoU = 0.5, IoU = 0.7, and IoU = 0.95]
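The IoU values in the figure follow directly from the box coordinates. A minimal sketch (this `bbox_iou` helper is illustrative, not the official pycocotools implementation):

```python
def bbox_iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes are disjoint).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

gt = (0, 0, 100, 100)
det = (0, 0, 100, 50)      # covers exactly the top half of the ground truth
print(bbox_iou(gt, det))   # 0.5
```

A detection covering half the object scores IoU 0.5, the classic PASCAL threshold; the COCO score also demands much tighter overlaps, up to 0.95.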
COCO Challenges Results
Bounding Boxes Leaderboard (I)
COCO AP (over all IoU)

[Bar chart, COCO AP (0% to 50%) per team: G-RMI, MSRAVC**, Trimps-Soushen, Imagine Lab, Cmu-a2-vgg16, ToConcoct, Pellucid Wall, hust-mclab, Fast R-CNN* (VGG-16)]
[Chart annotations: +22% absolute, +110% relative; +4.2% absolute, +11.2% relative]

(*) Performance on Test-Dev
(**) 2015 Winner
Segmentation Leaderboard (I)
COCO AP (over all IoU)

[Bar chart, COCO AP (0% to 40%) per team: MSRA, G-RMI, MSRAVC**, anonymous]
[Chart annotation: +9.1% absolute, +32.3% relative]

COCO AP for the segmentation winner trails the one for bbox detection by ~4%:
• Last year the gap was ~10%
• Localization is harder for segmentation

(**) 2015 Winner
Bounding Boxes Leaderboard (II)
Object Localization is improving

[Bar chart, AP_50 and AP_75 (0% to 70%) per team: G-RMI, MSRAVC**, Trimps-Soushen, Imagine Lab, Cmu-a2-vgg16, ToConcoct, Pellucid Wall, hust-mclab]
[Chart annotations: objects correctly detected but not well localized; 17% AP, 19% AP]

(**) 2015 Winner
Segmentation Leaderboard (II)
Mask localization can improve

[Bar chart, AP_50 and AP_75 (0% to 70%) per team: MSRA, G-RMI, MSRAVC**, anonymous]
[Chart annotation: 20% AP]

(**) 2015 Winner
Bounding Boxes vs Segmentation
Segmentation provides great bounding boxes!

[Bar chart, AP, AP_75 and AP_50 (0% to 70%) for: G-RMI, MSRA (segm), MSRAVC (bbox)*]

COCO AP for the segmentation winner trails the one for bbox detection by ~2%:
• Results in 2nd place in the bbox challenge!
• Gap is about constant at multiple IoU values.
• Participate in the Segmentation Challenge!

(*) 2015 Winner
Performance Breakdown (I)
COCO AP varies across supercategories and size

[Bar chart, COCO AP (0% to 60%) per supercategory: animal, outdoor, vehicle, electronic, person, appliance, furniture, sports, food, indoor, kitchen, accessory]

Performance across teams improved on all supercategories:
• Average AP increase of ~10%.
• Average Standard Deviation decrease of ~1%.
Performance Breakdown (II)
Impact of size on performance

[Bar chart, 2015 vs 2016 AP (0% to 60%) by instance size: small, medium, large; annotations: +33%, +53%, +118%!!]
Correlation between methods
How similarly do algorithms perform?

[Scatter plots of per-category AP (0% to 80% on both axes): G-RMI vs MSRAVC* for Bounding Boxes (R² = 0.98), G-RMI vs MSRA for Segmentation (R² = 0.97)]

(*) 2015 Winner
Bounding Box Detection Errors
How similarly do top algorithms perform?

[Precision-recall curves, overall-all-all, with progressive error removal: AP @ IoU = [0.5; 0.75], AP @ IoU = 0.1, Super-category FP removed, Category FP removed, Background FP removed, All errors removed]

G-RMI: [.456] C75, [.623] C50, [.686] Loc, [.700] Sim, [.723] Oth, [.925] BG, [1.00] FN
MSRAVC*: [.399] C75, [.589] C50, [.682] Loc, [.695] Sim, [.713] Oth, [.870] BG, [1.00] FN

(*) 2015 Winner
Bounding Box Detection Errors (I)
What types of errors are algorithms making?

[Precision-recall curves, person-person-all, with progressive error removal: AP @ IoU = [0.5; 0.75], AP @ IoU = 0.1, Super-category FP removed, Category FP removed, Background FP removed, All errors removed]

G-RMI: [.582] C75, [.812] C50, [.875] Loc, [.875] Sim, [.886] Oth, [.970] BG, [1.00] FN
MSRAVC*: [.510] C75, [.724] C50, [.832] Loc, [.832] Sim, [.841] Oth, [.911] BG, [1.00] FN

(*) 2015 Winner
Bounding Box Detection Errors (II)
What types of errors are algorithms making?

[Precision-recall curves, overall-all-small, with progressive error removal: AP @ IoU = [0.5; 0.75], AP @ IoU = 0.1, Super-category FP removed, Category FP removed, Background FP removed, All errors removed]

G-RMI: [.244] C75, [.416] C50, [.506] Loc, [.518] Sim, [.533] Oth, [.824] BG, [1.00] FN
MSRAVC*: [.175] C75, [.343] C50, [.469] Loc, [.476] Sim, [.484] Oth, [.709] BG, [1.00] FN

(*) 2015 Winner
Summary of Findings
2016 Detection Challenge Take-aways
• MSRAVC 2015 set a very high bar for performance.
• G-RMI improved COCO AP by 4% absolute, 11% relative.
• The MSRA 2016 segmentation algorithm is great on bboxes.
• Performance on all classes has improved across entries.
• Localization improved greatly in both challenges.
• High relative improvement on small object instances.
• False negatives are reduced, so teams achieve better recall.
Challenges Ranking

Team            | BBox | Segmentation
G-RMI           | 1st  | 2nd
MSRA            | -    | 1st
Trimps-Soushen  | 2nd  | -
Imagine Lab     | 3rd  | -
UofA            | 5th  | -
1026            | -    | 3rd

Invited Speakers:
• G-RMI / Object Detection / (2:30pm - 2:45pm)
• MSRA / Segmentation / (2:45pm - 3:00pm)