
A Novel Method for Generation of Motion Saliency

Yang Xia, Ruimin Hu, Zhenkun Huang, and Yin Su

ICIP 2010

Outline

• Introduction
• Itti’s model
• Proposed visual saliency
  – Generation of motion feature map
  – Enhancement of motion sub-saliency map
• Experimental results
• Conclusion

Introduction

• Visual saliency
  – Bottom-up saliency
  – Top-down saliency
  – Applications: image segmentation, motion detection, image/video compression, ...
• Motion saliency
  – A moving object is more salient to the human visual system (HVS) than spatial contrast in video.

Itti’s Model

• Visual saliency model for images
  – Spatial features: color, intensity, and orientation
  – Feature maps, combined by normalizing and fusing the activation maps (see the fusion sketch below)
• Visual saliency model for video
  – Temporal features: flicker and motion
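As a rough illustration of the fusion step only (not the authors’ exact code), the Python sketch below min-max normalizes hypothetical per-channel conspicuity maps and averages them into one saliency map; the normalization and the equal weighting are assumptions standing in for Itti’s N(·) operator and fusion weights.

import numpy as np

def normalize_map(m, eps=1e-8):
    # Min-max scaling to [0, 1]; a simplified stand-in for Itti's N(.) operator.
    m = m - m.min()
    return m / (m.max() + eps)

def fuse_channels(channel_maps):
    # Average the normalized per-channel maps (color, intensity, orientation,
    # flicker, motion) into a single saliency map; equal weights are assumed.
    return np.mean([normalize_map(m) for m in channel_maps], axis=0)

# Toy usage with five random channel maps of the same size:
channels = [np.random.rand(120, 160) for _ in range(5)]
saliency = fuse_channels(channels)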

Itti’s Model

• Motion feature map
  – Computed from spatially-shifted differences between Gabor pyramids of the current frame n and the previous frame n−1
  – Motion feature (a code sketch follows this slide):
    $M_n(\sigma,\theta) = \left|\, O_n(\sigma,\theta)\, S_{n-1}(\sigma,\theta) - O_{n-1}(\sigma,\theta)\, S_n(\sigma,\theta) \,\right|$
  – The minimum captured object velocity at scale $\sigma$:
    $v_{\min}(\sigma) = f \cdot 2^{\sigma} \sqrt{dx^2 + dy^2}$
  – $M_n(\sigma,\theta)$: motion feature map for scale $\sigma$ and orientation $\theta$
    $O_n(\sigma,\theta)$: Gabor pyramid of the original frame n
    $S_n(\sigma,\theta)$: the shifted Gabor pyramid of the original frame n
    dx, dy: horizontal and vertical shift distances
    f: the frame rate
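A minimal sketch of the shifted-difference motion feature and the velocity bound above, assuming the Gabor pyramid levels for frames n and n−1 are already available; the wrap-around shift and the Reichardt-style product form are simplifications for illustration, not the paper’s exact implementation.

import numpy as np

def shifted(level, dx, dy):
    # Spatial shift S of a pyramid level by (dx, dy) pixels (wrap-around for simplicity).
    return np.roll(np.roll(level, dy, axis=0), dx, axis=1)

def motion_feature(O_n, O_prev, dx=1, dy=0):
    # |O_n * S_{n-1} - O_{n-1} * S_n| for one scale and one orientation.
    return np.abs(O_n * shifted(O_prev, dx, dy) - O_prev * shifted(O_n, dx, dy))

def min_captured_velocity(scale, dx, dy, frame_rate):
    # Smallest object speed (pixels/s at full resolution) a shift of (dx, dy)
    # at pyramid scale `scale` can capture: f * 2**scale * sqrt(dx^2 + dy^2).
    return frame_rate * (2 ** scale) * np.hypot(dx, dy)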

Itti’s Model

• Drawbacks
  – Inaccurate when objects move slowly
    • When the velocity is smaller than the minimum captured velocity, none of the pyramidal scales can capture the movement, so the object is grouped into the background
  – Only the edges of objects are labeled salient
    • A consequence of using spatially-shifted differences

Proposed fixes:
• Multi-reference frames
• Enhance the motion saliency map with spatial saliency information

Proposed Visual Saliency

• Generation of motion feature map
  – Multiple reference frames enhance the ability to capture object movement (see the sketch after this slide)
  – Motion feature map computed between the current frame n and each reference frame $n_p$
  – Processed by graph theory to form the activation map [1]

[1] J. Harel, C. Koch, and P. Perona, “Graph-Based Visual Saliency,” in Advances in Neural Information Processing Systems 19, Cambridge, MA: MIT Press, 2007.

– Diagram: using an earlier reference frame $n_p$ adds two additional (lower) velocity profiles
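A hedged sketch of the multi-reference-frame idea: computing the shifted difference against an earlier frame n−p divides the minimum detectable velocity by p, which is what adds the extra velocity profiles. The reference offsets (1, 2, 4) and the max fusion across profiles are assumptions for illustration, not the paper’s stated settings.

import numpy as np

def shifted(level, dx, dy):
    # Spatial shift by (dx, dy) pixels (wrap-around for simplicity).
    return np.roll(np.roll(level, dy, axis=0), dx, axis=1)

def pair_motion(O_cur, O_ref, dx, dy):
    # Shifted difference between Gabor pyramid levels of two frames.
    return np.abs(O_cur * shifted(O_ref, dx, dy) - O_ref * shifted(O_cur, dx, dy))

def multi_ref_motion_feature(gabor_levels, ref_offsets=(1, 2, 4), dx=1, dy=0):
    # gabor_levels[k]: Gabor pyramid level of frame n-k (same scale/orientation).
    # Using frame n-p as reference captures velocities p times slower than the
    # single-pair detector, adding lower velocity profiles.
    O_n = gabor_levels[0]
    maps = [pair_motion(O_n, gabor_levels[p], dx, dy) for p in ref_offsets]
    return np.max(maps, axis=0)  # assumed fusion rule: strongest response wins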

Proposed Visual Saliency

• Enhancement of motion sub-saliency map
  – Compute the spatial sub-saliency map
  – Find the points that belong to the salient object
    • Check whether an un-salient point is near a salient point that has a large saliency value in both the motion and spatial sub-saliency maps, and whether its spatial saliency value is close to that of the salient point
  – Update the motion saliency
  – Generate the whole saliency map

Enhancement of Motion Sub-saliency Map

– {𝑆𝑖}: top 25% of locations with the largest saliency values in the spatial sub-saliency map (SMS)
– {𝑀𝑖}: top 5% of locations with the largest saliency values in the motion sub-saliency map (SMM)
– For an un-salient point 𝑝𝑖 with nearest salient neighbour 𝑞𝑖: if the distance 𝑑(𝑝𝑖) is small and the difference of the spatial saliency values between 𝑝𝑖 and 𝑞𝑖 is small, 𝑝𝑖 joins the new saliency location set
– The new saliency locations and the motion saliency points are summed (+) into the whole saliency map
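A rough Python sketch of this enhancement step under stated assumptions: the 25% / 5% selections follow the diagram, but the distance threshold max_dist, the saliency-difference threshold max_diff, the restriction of candidates to spatially salient points, the copy-value update, and the simple motion+spatial sum for the whole map are placeholders that the slides do not specify.

import numpy as np

def enhance_motion_saliency(smm, sms, top_m=0.05, top_s=0.25,
                            max_dist=3.0, max_diff=0.05):
    # smm: motion sub-saliency map (SMM), sms: spatial sub-saliency map (SMS),
    # both the same shape with values in [0, 1].
    m_thr = np.quantile(smm, 1.0 - top_m)   # {M_i}: top 5% of SMM locations
    s_thr = np.quantile(sms, 1.0 - top_s)   # {S_i}: top 25% of SMS locations
    salient = (smm >= m_thr) & (sms >= s_thr)          # salient in both maps
    sal_pts = np.argwhere(salient)

    updated = smm.copy()
    # Candidate un-salient points (restricted to spatially salient ones to keep
    # the sketch small -- an assumption, not stated on the slides).
    for p in np.argwhere(~salient & (sms >= s_thr)):
        d = np.hypot(*(sal_pts - p).T)                 # distances to salient points
        near = d <= max_dist
        if near.any():
            q = sal_pts[near][np.argmin(d[near])]      # nearest salient neighbour
            if abs(sms[tuple(p)] - sms[tuple(q)]) <= max_diff:
                updated[tuple(p)] = updated[tuple(q)]  # p joins the salient object
    return updated, updated + sms                      # whole map: motion + spatial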

Experimental Results

• Dataset: CAVIAR, ThreePastShop1cor sequence
  – ROC (relative operating characteristic) score between the estimated saliency maps (ESMs) and the ground-truth saliency maps (GSMs)

Anchor1: Itti’s model
Anchor2: Itti’s model using the activation operator based on graph theory
SMRF: saliency model with multi-reference frames
SMRF+STE: SMRF plus spatio-temporal enhancement
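For reference, an ROC score between an estimated saliency map and a binary ground-truth map can be computed by treating every pixel as a scored sample; the sketch below uses scikit-learn’s roc_auc_score and toy data, and is not tied to the authors’ evaluation code.

import numpy as np
from sklearn.metrics import roc_auc_score

def roc_score(esm, gsm):
    # esm: estimated saliency map (continuous values), gsm: ground-truth map
    # (nonzero = salient pixel). Returns the area under the ROC curve.
    labels = (np.asarray(gsm).ravel() > 0).astype(int)
    return roc_auc_score(labels, np.asarray(esm, dtype=float).ravel())

# Toy usage (a real evaluation would use CAVIAR frames and their ground truth):
gt = np.zeros((120, 160)); gt[40:60, 50:80] = 1
est = 0.8 * gt + 0.2 * np.random.rand(120, 160)
print(roc_score(est, gt))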

Experimental Results

– Figure: ROC curves for the motion channel alone (anchor1, anchor2, SMRF+STE) and for all five channels (anchor1, anchor2, SMRF, SMRF+STE)

Conclusion

• First analyzed the drawbacks of Itti’s motion saliency model.
• Proposed a novel motion saliency model in which the motion saliency map is obtained through multiple reference frames and enhanced by spatial saliency information.