Analysis of scores, datasets, and models in visual saliency modeling



Ali Borji, Hamed R. Tavakoli, Dicky N. Sihite, and Laurent Itti,

Toronto dataset

Visual Saliency

Why important? Current status:

Methods: numerous / 8 categories (Borji and Itti, PAMI, 2012)

Databases:

Measures: scan-path analysis, correlation-based measures, ROC analysis

How well does my method work?
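As a concrete example of a correlation-based measure, here is a minimal sketch of the linear correlation coefficient (CC) between a model saliency map and a human fixation density map. This is a generic NumPy illustration, not the benchmark's exact implementation; the argument names and the small epsilon are assumptions.

```python
import numpy as np

def correlation_coefficient(saliency_map, fixation_map):
    """CC between a model saliency map and a human fixation density map.

    Both inputs are 2-D arrays of the same shape; the score is the Pearson
    correlation of the two maps (1 = perfect linear agreement, 0 = none).
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-12)
    return float(np.mean(s * f))
```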

Benchmarks

Judd et al. http://people.csail.mit.edu/tjudd/SaliencyBenchmark/

Borji and Itti https://sites.google.com/site/saliencyevaluation/

Yet another benchmark!!!?

Dataset Challenge

Dataset bias:

Center bias (CB)

Border effect

Metrics are affected by these phenomena.

MIT, Le Meur, and Toronto datasets

Tricking the metric
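To illustrate the trick, here is a toy sketch (hypothetical fixation data, generic NumPy code, not from the paper) in which a pure center-Gaussian prior with no image content scores well under a plain AUC that samples negatives uniformly over the image, simply because human fixations are center biased.

```python
import numpy as np

def auc_uniform(saliency_map, fixations, n_neg=10000, seed=0):
    """Plain AUC: fixated pixels are positives, uniformly sampled pixels are
    negatives (rank-based / Mann-Whitney formulation)."""
    rng = np.random.default_rng(seed)
    h, w = saliency_map.shape
    pos = saliency_map[fixations[:, 0], fixations[:, 1]]
    neg = saliency_map[rng.integers(0, h, n_neg), rng.integers(0, w, n_neg)]
    scores = np.concatenate([pos, neg])
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = len(pos)
    return (ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# A pure center-Gaussian "saliency" map that ignores the image entirely...
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
center_prior = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * (0.25 * h) ** 2))

# ...evaluated against hypothetical center-biased fixations.
rng = np.random.default_rng(1)
fix = np.stack([rng.normal(h / 2, h / 6, 200), rng.normal(w / 2, w / 6, 200)], axis=1)
fix = np.clip(fix, 0, [h - 1, w - 1]).astype(int)
print("Plain AUC of the center prior:", auc_uniform(center_prior, fix))  # well above 0.5
```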

Solution?

• sAUC (sketched below)

• Best smoothing factor

• More than one metric
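A minimal sketch of the shuffled AUC (sAUC) idea from the first bullet: negatives are drawn from fixations on other images instead of uniformly over the image, so a shared center bias no longer helps. Same assumptions as the sketch above; this is generic code, not the benchmark's implementation.

```python
import numpy as np

def shuffled_auc(saliency_map, fixations, other_fixations):
    """sAUC: positives are this image's fixations; negatives are fixations
    pooled from other images. A pure center prior scores near 0.5 because
    both sets share the same center bias."""
    pos = saliency_map[fixations[:, 0], fixations[:, 1]]
    neg = saliency_map[other_fixations[:, 0], other_fixations[:, 1]]
    scores = np.concatenate([pos, neg])
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(pos), len(neg)
    return (ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

With the center-prior map and center-biased fixations from the previous sketch, replacing the uniform negatives with fixations from other (equally center-biased) images pulls the score back toward 0.5, while a map that captures image-specific saliency keeps its advantage.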

The Benchmark: fixation prediction

The Feature Crisis

Features

Low level: intensity, orientation, color, size, depth

High level: people, symmetry, car, text, signs
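For concreteness, a toy sketch of the kind of low-level feature maps listed above (intensity, color opponency, and a crude orientation/edge-energy proxy). This is a generic illustration, not the feature set of any particular model; the function name and the gradient-based orientation proxy are assumptions.

```python
import numpy as np

def low_level_features(rgb):
    """Toy low-level feature maps from an RGB image with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                       # red/green opponency
    by = b - (r + g) / 2.0           # blue/yellow opponency
    gy, gx = np.gradient(intensity)  # simple edge energy as an orientation proxy
    orientation = np.hypot(gx, gy)
    return {"intensity": intensity, "rg": rg, "by": by, "orientation": orientation}
```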

Do these features capture any semantic scene properties or affective stimuli?

Challenge of performance on stimulus categories & affective stimuli

The Benchmark: image categories and affective data

… vs 0.64 (non-emotional)

The Benchmark: predicting scanpaths

Scanpaths are encoded as strings of region labels (aA, bB, cC, dD), e.g. aAbBcCaA vs aAdDbBcCaA…, and pairs of strings are compared with a matching score.
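A minimal sketch of string-based scanpath matching, assuming the scanpaths have already been encoded as strings of region labels as in the example above. The similarity used here is a normalized Levenshtein edit distance, which is one common choice and not necessarily the exact matching score used in the benchmark.

```python
def scanpath_similarity(s1, s2):
    """Normalized edit-distance similarity between two region-label strings
    (1.0 = identical scanpaths, 0.0 = completely different)."""
    m, n = len(s1), len(s2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 1.0 - d[m][n] / max(m, n, 1)

print(scanpath_similarity("abca", "adbca"))  # 0.8: one insertion over five symbols
```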

The Benchmark: predicting scanpaths (scores)

Category Decoding
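A hedged sketch of what category decoding from eye movement data could look like: build a per-image feature vector from fixation statistics and saliency values at the fixated locations, then feed it to an off-the-shelf classifier. The feature set and classifier here are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def eye_movement_features(fixations, durations, saliency_map):
    """Hypothetical per-image features from eye movement statistics and
    saliency values sampled at the fixated locations."""
    sal_at_fix = saliency_map[fixations[:, 0], fixations[:, 1]]
    return np.array([
        len(fixations),        # number of fixations
        durations.mean(),      # mean fixation duration
        durations.std(),       # variability of fixation duration
        sal_at_fix.mean(),     # mean saliency at fixations
        sal_at_fix.max(),      # peak saliency at fixations
    ])

# X: one feature vector per image, y: image category labels (both assumed given).
# scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```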

Lessons learned

We recommend using the shuffled AUC (sAUC) score for model evaluation.

The stimuli affect performance.

A combination of saliency and eye movement statistics can be used for category recognition (as sketched above).

The gap between models and the inter-observer (IO) model seems small (though statistically significant), which signals the need for new datasets.

The challenge of task decoding from eye movement statistics remains open.

New saliency evaluation scores can still be introduced.

Questions?
