Grouping and Segmentation
• Previously: model-based grouping (e.g., straight line, figure 8…)
• Now: general “bottom-up” image organization
(Analogous to detecting specific brightness patterns vs. finding “interesting” patterns/interest points/corners)
Grouping
• Grouping is the process of associating similar image features together
• The Gestalt School:
– Proximity: nearby image elements tend to be grouped
– Similarity: similar image elements ...
– Common fate: image elements with similar motion ...
– Common region: image elements in the same closed region ...
– Parallelism: parallel curves or other parallel image elements ...
– Closure: image elements forming closed curves ...
– Symmetry: symmetrically positioned image elements ...
– Good continuation: image elements that join up nicely ...
– Familiar configuration: image elements giving a familiar object ...
Good Continuation
Common Form (includes color and texture)
Connectivity
Symmetry
Convexity (stronger than symmetry?)
Good continuation also stronger than symmetry?
Closure: closed curves are much easier to pick out than open ones
Grouping and Segmentation
• Segmentation is the process of dividing an image into regions of “related content”
Courtesy Uni Bonn
Grouping and Segmentation
• Both are ill-defined problems: “related” and “similar” are high-level concepts that are hard to apply directly to image data
• How are they used? Are they “early” processes that precede recognizing objects?
• A lot of research, some pretty good algorithms. No single best approach.
Boundaries
Problem: Find best path between two boundary points.
User can help by clicking on endpoints
How do we decide how good a path is? Which of two paths is better?
Discrete Grid
• A curve is good if it follows the object boundary, so it should have:
– Strong gradients along the curve (on average)
– Curve direction roughly perpendicular to the gradients
(since these are usually true for boundaries)
• The curve should be smooth, not jagged, since:
– Object boundaries are typically smooth
– Curves that follow “noise edges” are typically jagged
So a good curve has:
• Low curvature (on average) along the curve
• Slow change in gradient direction along the curve
Discrete Grid
• How to find the best curve?
• Approach:
– Define a cost function on curves (given any curve, a rule for computing its cost)
– Define it so that good curves have lower cost (non-wiggly curves passing mostly through edgels have low cost)
– Search for the best curve, the one with lowest cost (minimize the cost function over all curves)
• How to compute cost?
– At every pixel in the image, define costs for “curve fragments” connecting it to adjoining pixels
– Compute the curve cost by summing up the costs for the fragments
Defining cost: smoothness
• Path: a series of pixels p_1, p_2, …, p_i, p_{i+1}, …
• A good curve has small curvature at each pixel
– Small direction changes
– Small on average (summing along the curve)
• A good curve has small changes in gradient direction

One possible term in the cost definition, for the fragment joining p_i and p_{i+1}, with unit fragment direction u_i = (p_{i+1} − p_i) / |p_{i+1} − p_i|:

fragment cost: |(∇I_i / |∇I_i|) · u_i| + |(∇I_{i+1} / |∇I_{i+1}|) · u_i|

This is small when the gradients at both p_i and p_{i+1} are nearly perpendicular to the fragment direction p_{i+1} − p_i. If it is small at both i and i+1, then the change in gradient direction at i is small. A low average value of this quantity along the curve means a good curve.
Path Cost Function (better path has smaller cost)
• Path: p_1, p_2, …, p_i, p_{i+1}, …, p_n
• Total cost: sum the cost for each pixel over the whole path. One possible cost definition, with u_i = (p_{i+1} − p_i) / |p_{i+1} − p_i|:

cost(path) = Σ_{i=1}^{n−1} |p_{i+1} − p_i| [ g(p_i) + |(∇I_i / |∇I_i|) · u_i| + |(∇I_{i+1} / |∇I_{i+1}|) · u_i| ]

g(p_i) = 1 / (ε + (|∇I_i| / |∇I|_max)²)   (ε some constant)

This cost is small for high gradient and small direction change.
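The path cost idea can be sketched numerically. A minimal illustration assuming numpy; the function name, the eps default, and the handling of zero gradients are our own choices, following the example cost definition above:

```python
import numpy as np

def path_cost(path, gr, gc, eps=0.05):
    """Cost of a pixel path (list of (row, col)) given image gradient
    components gr, gc (e.g., from np.gradient). Lower is better: cheap
    steps lie on strong edges with gradients perpendicular to the path."""
    gmax = max(np.hypot(gr, gc).max(), 1e-9)   # for normalizing |grad I|
    total = 0.0
    for (r0, c0), (r1, c1) in zip(path[:-1], path[1:]):
        step = np.array([r1 - r0, c1 - c0], float)
        step_len = np.linalg.norm(step)
        u = step / step_len                    # unit fragment direction
        for r, c in ((r0, c0), (r1, c1)):      # gradients at both endpoints
            g = np.array([gr[r, c], gc[r, c]])
            gnorm = np.linalg.norm(g)
            if gnorm > 0:                      # small when gradient is perpendicular to u
                total += step_len * abs(np.dot(g / gnorm, u))
        gmag = np.hypot(gr[r0, c0], gc[r0, c0])
        total += step_len / (eps + (gmag / gmax) ** 2)   # g(p): cheap on strong edges
    return total
```

On an image with a vertical step edge, a path running along the edge costs much less than the same path shifted into a flat region, as intended.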
How do we find the best path? Computer Science…
Remember:
Curve is path through grid.
Cost sums over each step of path.
We want to minimize cost.
Map problem to Graph
• Each pixel is a node
• Edges connect adjacent pixels
• Edge weight = cost of the fragment connecting the two pixels
Example cost: use the fragment cost from the path cost function defined earlier as the edge weight.
Note: this is just an example of a cost function; the cost function actually used differs.
Algorithm: basic idea
• Two steps:
1) Compute least costs for best paths from start pixel to all other pixels.
2) Given a specific end pixel, use the cost computation to get best path between start and end pixels.
Dijkstra’s shortest path algorithm
(Figure: example graph; the numbers on the links are link costs)
• Algorithm: compute the least costs from the start node to all other nodes
– Iterate outward from the start node, adding one node at a time. Cost estimates start high; nodes gradually “learn” their true cost.
– Always choose the next node so that we can calculate its correct cost.
Example: choose the left node L, with cost estimate 1. The minimum cost to L equals 1, since any other path to L already incurs cost > 1 on its first step from the start node.
Dijkstra’s shortest path algorithm
• Computes minimum costs from seed to every pixel
– Running time for N pixels: O(N log N) using a priority queue (heap)
– A fraction of a second for a typical (640×480) image
• Then find the best path from the seed to any point in O(N).
Real time!
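The two steps above can be sketched with Python’s heapq. A minimal illustration on an 8-connected pixel grid; the function names and the choice of 8-connectivity are ours, and the fragment-cost function cost(p, q) is supplied by the caller:

```python
import heapq

def dijkstra(seed, cost, n_rows, n_cols):
    """Step 1: least path cost from the seed pixel to every pixel,
    O(N log N) with a heap. cost(p, q) gives the fragment cost between
    8-connected neighbor pixels p and q."""
    dist = {seed: 0.0}
    prev = {}
    done = set()
    heap = [(0.0, seed)]
    while heap:
        d, p = heapq.heappop(heap)
        if p in done:                      # stale heap entry, skip
            continue
        done.add(p)                        # p's cost is now correct
        r, c = p
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                q = (r + dr, c + dc)
                if q == p or not (0 <= q[0] < n_rows and 0 <= q[1] < n_cols):
                    continue
                nd = d + cost(p, q)
                if nd < dist.get(q, float("inf")):
                    dist[q] = nd
                    prev[q] = p
                    heapq.heappush(heap, (nd, q))
    return dist, prev

def best_path(end, prev):
    """Step 2: trace the best path back from an end pixel to the seed."""
    path = [end]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return path[::-1]
```

With a uniform fragment cost this reduces to plain grid distance; in the boundary-tracing application the cost comes from the image, as in the slides.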
Intelligent Scissors
Results
Voting again
• Recall the Hough Transform
– Idea: the data votes for the best model (e.g., the best line passing through the data points)
– Implementation:
• Choose the type of model in advance (e.g., straight lines)
• Each data point votes on the parameters of the model (e.g., location + orientation of the line)
(Figure: example data points, and the corresponding Hough vote space with axes “Line Direction (degrees)” and “Distance to origin”)
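As a reminder of how this voting works, a minimal Hough-transform sketch for straight lines in the (direction, distance-to-origin) parameterization, assuming numpy; the bin counts, range, and names are our choices:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=2.0):
    """Each point (x, y) votes for all (theta, rho) lines through it,
    where rho = x cos(theta) + y sin(theta). The peak of the vote
    accumulator gives the best-supported line."""
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # one vote per theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    t, r = np.unravel_index(acc.argmax(), acc.shape)    # peak of the accumulator
    return thetas[t], rhos[r], acc
```

For points on the line y = x, the peak lands near direction −45° with distance ≈ 0 to the origin.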
Another type of voting
• Popularity contest
– Each data point votes for the others that it “likes,” the ones it feels “most related to”
– The “most popular” points form a distinct group
– (Like identifying a social network from a cluster of web links)
B “feels related” to A and votes for it.
C also “feels related” to A and votes for it.
So do D and E.
“Popular” points form a group, with “friends” reinforcing each other.
Most points feel unconnected to an isolated point and don’t vote for it (or vote against it). Only B feels connected and votes for it, but that’s not enough: the isolated point is unpopular and left out of the group.
Note: even though the isolated point can connect smoothly to the group, it doesn’t feel related since it’s too far away.
Identifying groups by “popularity”
• In vision, technical name for “popularity” is saliency
• Procedure
– Voting: Pixels accumulate votes from their neighbors
– Grouping: Find smooth curves made up of the most salient pixels (the pixels with the most votes)
Grouping:
• Start at a salient point
• Repeatedly grow the curve toward the most related salient point
Affinity
• In vision, the amount of “relatedness” between two data points is called their affinity
– Related curve fragments: high affinity (since they can be connected by a smooth curve)
– Unrelated curve fragments: low affinity (they can’t be connected by a smooth curve)
Computing Affinity
• Co-circularity: a simple measure of affinity between line fragments
– Consider two fragments with centers (x₁, y₁) and (x₂, y₂). Let α be the angle between the chord joining the fragment centers and the x-axis, θ₁ the angle of the first fragment with the x-axis, and θ₂ the angle of the second fragment.
– Two fragments are smoothly related if both lie on the same circle.
– Two fragments on the same circle have the property that θ₂ = 2α − θ₁.
– Co-circularity: the second fragment’s angle is determined by the angle of the first and the angle of the connecting chord.
Computing Affinity
• θ₂^CoC = 2α − θ₁ is the orientation that the second fragment should have if it connects smoothly to the first fragment (θ₁^CoC = 2α − θ₂ is defined analogously).
• Measure affinity by the difference from the actual orientation: θ₂^CoC − θ₂.

Example, for edgels at locations E₁ = (x₁, y₁) and E₂ = (x₂, y₂) with orientations θ₁ and θ₂ (use Gaussians so affinity falls off smoothly):

A(E₁, E₂) = exp( −[ (θ₂^CoC − θ₂)² + (θ₁^CoC − θ₁)² ] / (2σ_orient²) ) · exp( −|E₁ − E₂|² / (2σ_dist²) )

– Treat both edge fragments in the same way: sum the orientation discrepancy for both
– The second factor makes more distant edges have less affinity
Computing Affinity
• Why give smaller affinity to distant fragments?
– Distant fragments are likely to come from different objects
– There are many distant fragments, so some line up just by random chance
Computing Affinity
• For faster computation, compute fewer affinities: set affinities to 0 beyond some distance T:

A(E₁, E₂) = 0 for |E₁ − E₂| > T

(Often choose T small: e.g., less than 7 pixels, or just enough to include the nearest neighbors)
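A minimal sketch of such an affinity, assuming numpy: it combines the co-circularity discrepancy, the two Gaussians, and the distance threshold T. The function name, the default sigma values, and the wrapping of orientations to a period of π (treating lines as undirected) are our own choices:

```python
import numpy as np

def cocircularity_affinity(e1, e2, sigma_orient=0.5, sigma_dist=3.0, T=7.0):
    """Affinity between edgels e = (x, y, theta). If alpha is the chord
    angle, a smooth (co-circular) continuation of e1 predicts orientation
    2*alpha - theta1 for e2; the discrepancy from the actual orientation
    is penalized with a Gaussian, as is the distance between the edgels."""
    x1, y1, t1 = e1
    x2, y2, t2 = e2
    d = np.hypot(x2 - x1, y2 - y1)
    if d > T:                                  # threshold distant pairs to zero
        return 0.0
    alpha = np.arctan2(y2 - y1, x2 - x1)       # chord angle with x-axis
    # Discrepancy (2*alpha - t1) - t2, wrapped to [-pi/2, pi/2); by symmetry
    # the discrepancy for e1, (2*alpha - t2) - t1, is the same quantity.
    disc = (2 * alpha - t1 - t2 + np.pi / 2) % np.pi - np.pi / 2
    orient = np.exp(-(disc ** 2 + disc ** 2) / (2 * sigma_orient ** 2))
    prox = np.exp(-d ** 2 / (2 * sigma_dist ** 2))
    return float(orient * prox)
```

Collinear or co-circular edgel pairs score much higher than twisted ones, and pairs farther apart than T score exactly zero.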
Eigenvector Grouping
Reminder: Grouping with an Affinity Matrix
Task: partition the image elements into groups G_1, …, G_n of similar elements.
Affinities should be:
– high within a group: i, j ∈ G_a ⇒ Aff(i, j) high
– low between groups: i ∈ G_a, j ∈ G_b (a ≠ b) ⇒ Aff(i, j) low
Eigenvector grouping
• Eigenvector grouping is just improved popularity grouping!
• Idea:
– To judge your own popularity, weight the votes from your friends according to their popularity.
– Recompute using the better estimate of your friends’ popularity (use weighted voting for them also).
– Even better: recompute again using this improved estimate for your friends.
– Iterating this, “weighted popularity” converges to an eigenvector!
![Page 82: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/82.jpg)
• Your popularity = sum of friend’s votes 11
P Aff11
![Page 83: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/83.jpg)
• Your popularity = sum of friend’s votes 11
1P= AffN
11
Usually add normalization so
popularities always add to 1
![Page 84: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/84.jpg)
• Your popularity = sum of friend’s votes
But don’t care about normalization.Important thing is that P is proportional to this
11
P ~ Aff11
![Page 85: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/85.jpg)
• Your popularity = sum of friend’s votes
• Now use new estimate of friend’s popularity to weight their vote
P ~ Aff Pnew old
11
P ~ Aff11
![Page 86: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/86.jpg)
• Your popularity = sum of friend’s votes
• Now use new estimate of friend’s popularity to weight their vote
2
11
P ~ Aff P ~ Aff
1
new old
11
P ~ Aff11
![Page 87: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/87.jpg)
• Your popularity = sum of friend’s votes
• Again use new estimate…
3
11
P ~ Aff P ~ Aff
1
newer new
11
P ~ Aff11
![Page 88: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/88.jpg)
• Your popularity = sum of friend’s votes
• Keep going…
11
P ~ lim Aff
1
Nbest N
11
P ~ Aff11
![Page 89: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/89.jpg)
• Your popularity = sum of friend’s votes
• Keep going…
11
P ~ Aff11
11
P ~ lim Aff
1
Nbest N
An eigenvector of ! Aff
![Page 90: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/90.jpg)
• Your popularity = sum of friend’s votes
• Keep going…
11
P Aff11
11
P ~ lim Aff
1
Nbest N
An eigenvector of !
Why?
For very large N,
(one more time makes little difference)
Aff
N+1Aff ~ Aff NV V
NAff Aff ~ Aff NV V
![Page 91: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/91.jpg)
• Your popularity = sum of friend’s votes
• Keep going…
11
P Aff11
11
P ~ lim Aff
1
Nbest N
An eigenvector of !
Why?
For very large N,
(one more time makes little difference)
Aff
N+1Aff ~ Aff NV V
NAff Aff ~ Aff NV VEigenvector of Aff, by definition
![Page 92: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/92.jpg)
• Your popularity = sum of friend’s votes
• Another view
11
P Aff11
big big
11
P ~ lim Aff ~
1
N Nbest N E
Multiplying by Aff many times,
only the component E with largest |Aff E| remains from (1 1 1 1…1)
![Page 93: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/93.jpg)
• Your popularity = sum of friends' votes
• Another view

P_best ≈ lim_{N→∞} Aff^N (1, 1, …, 1)^T ≈ (big constant) · E

Multiplying by Aff many times, only the component E with the largest |Aff E| remains from (1, 1, 1, …, 1)^T.

So this is the eigenvector of Aff with the biggest eigenvalue.
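This repeated-multiplication argument is exactly power iteration. A minimal NumPy sketch; the 5-element affinity matrix and its values are made up for illustration:

```python
import numpy as np

# Made-up 5-element affinity matrix (symmetric): elements 0-2 are strongly
# mutually similar, elements 3-4 only weakly.
Aff = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.1],
    [0.9, 1.0, 0.9, 0.1, 0.1],
    [0.8, 0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.2],
    [0.1, 0.1, 0.1, 0.2, 1.0],
])

# Start from "everyone equally popular" and keep re-estimating: P <- Aff P.
# Normalizing each step only fixes the scale, not the direction.
P = np.ones(5)
for _ in range(100):
    P = Aff @ P
    P /= np.linalg.norm(P)

# Compare with the leading eigenvector computed directly.
vals, vecs = np.linalg.eigh(Aff)   # eigenvalues in ascending order
E = vecs[:, -1]                    # eigenvector of the largest eigenvalue
E = E * np.sign(E.sum())           # fix the arbitrary sign

print(np.allclose(P, E, atol=1e-6))  # → True
```

Normalizing each step keeps the numbers finite; the direction converges to the leading eigenvector, which is exactly the limit described above.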
![Page 94: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/94.jpg)
Biggest Eigenvector: Math
The eigenvectors E_i of Aff (or of any symmetric matrix) give a rotated coordinate system, so we can write

(1, 1, …, 1)^T = Σ_i c_i E_i      (each E_i an eigenvector)
![Page 95: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/95.jpg)
Biggest Eigenvector: Math
The eigenvectors E_i of Aff (or of any symmetric matrix) give a rotated coordinate system, so we can write

(1, 1, …, 1)^T = Σ_i c_i E_i      (each E_i an eigenvector)

Aff^N (1, 1, …, 1)^T = Aff^N Σ_i c_i E_i = Σ_i c_i λ_i^N E_i
![Page 96: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/96.jpg)
Biggest Eigenvector: Math
The eigenvectors E_i of Aff (or of any symmetric matrix) give a rotated coordinate system, so we can write

(1, 1, …, 1)^T = Σ_i c_i E_i      (each E_i an eigenvector)

Aff^N (1, 1, …, 1)^T = Aff^N Σ_i c_i E_i = Σ_i c_i λ_i^N E_i

(λ_i is the eigenvalue for E_i.)
![Page 97: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/97.jpg)
Biggest Eigenvector: Math
The eigenvectors E_i of Aff (or of any symmetric matrix) give a rotated coordinate system, so we can write

(1, 1, …, 1)^T = Σ_i c_i E_i      (each E_i an eigenvector)

Aff^N (1, 1, …, 1)^T = Aff^N Σ_i c_i E_i = Σ_i c_i λ_i^N E_i

(λ_i is the eigenvalue for E_i.)

Biggest eigenvalue dominates: λ_i^N ≫ λ_j^N when λ_i > λ_j.
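The dominance argument above can be checked numerically. A small sketch, with a made-up 3×3 symmetric matrix standing in for Aff: expanding (1, 1, 1)^T in the eigenbasis, Aff^N scales each component by λ_i^N, and for large N only the largest-eigenvalue term survives.

```python
import numpy as np

# Made-up symmetric matrix standing in for Aff.
Aff = np.array([[3.0, 1.0, 0.0],
                [1.0, 2.0, 0.5],
                [0.0, 0.5, 1.0]])

vals, vecs = np.linalg.eigh(Aff)   # ascending eigenvalues; columns are E_i

ones = np.ones(3)
c = vecs.T @ ones                  # coefficients c_i of (1,1,1)^T in the eigenbasis

# Aff^N (1,1,1)^T == sum_i c_i * lambda_i^N * E_i, exactly, for any N:
N = 8
assert np.allclose(np.linalg.matrix_power(Aff, N) @ ones,
                   vecs @ (c * vals**N))

# For large N, the biggest-eigenvalue term dominates the whole sum:
N = 50
full = vecs @ (c * vals**N)
dominant = c[-1] * vals[-1]**N * vecs[:, -1]
print(np.allclose(full / np.linalg.norm(full),
                  dominant / np.linalg.norm(dominant)))  # → True
```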
![Page 98: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/98.jpg)
Eigenvector Grouping: Another View
• Toy problem – single foreground group:

Aff(i, j) = 1 if i, j both in the foreground group
Aff(i, j) = 0 otherwise (i or j in the background)
![Page 99: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/99.jpg)
Eigenvector Grouping
• Toy problem – single foreground group:

Aff(i, j) = 1 if i, j both in the foreground group
Aff(i, j) = 0 otherwise (i or j in the background)

Aff = [1 1 0 1 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; …]

(rows/columns of 1s at the foreground elements, rows/columns of 0s at the background elements)
![Page 100: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/100.jpg)
Eigenvector Grouping
• Toy problem – single foreground group:

Aff(i, j) = 1 if i, j both in the foreground group
Aff(i, j) = 0 otherwise (i or j in the background)

Aff = [1 1 … 1 0 … 0; 1 1 … 1 0 … 0; …; 1 1 … 1 0 … 0; 0 0 … 0 0 … 0; …]

This is what Aff looks like if the pixels in the foreground group come before the background pixels.
![Page 101: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/101.jpg)
Grouping
• Toy problem: single foreground group
• Algorithm to find the group:
– Find the unit vector V that makes |Aff V| largest

Affinity matrix:
Aff = [1 1 0 1 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; …]
![Page 102: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/102.jpg)
Grouping
• Toy problem: single foreground group
• Algorithm to find the group:
– Find the unit vector V that makes |Aff V| largest
– A large entry in V indicates a group member.

Affinity matrix:
Aff = [1 1 0 1 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; …]
![Page 103: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/103.jpg)
Grouping
• Toy problem: single foreground group
• Algorithm to find the group:
– Find the unit vector V that makes |Aff V| largest
– Answer: V ∝ (1 1 0 1 0 … 0)^T, the foreground indicator
(because every nonzero entry of V then contributes as much as possible to |Aff V|)

Affinity matrix:
Aff = [1 1 0 1 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; …]
![Page 104: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/104.jpg)
Grouping
• Toy problem: single foreground group
• Algorithm to find the group:
– Find the unit vector V that makes |Aff V| largest
– To compute V: find the "largest" eigenvector of Aff (the one with the largest eigenvalue)

Affinity matrix:
Aff = [1 1 0 1 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; …]
![Page 105: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/105.jpg)
Grouping
• Toy problem: single foreground group
• Algorithm to find the group:
– Find the unit vector V that makes |Aff V| largest
– To compute V: find the "largest" eigenvector of Aff
(This gives the largest value of |Aff V|.)

Affinity matrix:
Aff = [1 1 0 1 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; 1 1 0 1 0 …; 0 0 0 0 0 …; …]
![Page 106: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/106.jpg)
Eigenvector Grouping
• Toy problem – single foreground group:
Aff(i, j) = 1 if i, j both in the foreground group; 0 otherwise (i or j in the background)
• (Toy) Algorithm summary:
– Compute the largest eigenvector E of Aff
– Assign i to the foreground if |E_i| > 0
– Assign i to the background if |E_i| = 0
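The toy summary fits in a few lines of NumPy. Assumptions for illustration: 5 elements, with foreground group {0, 1, 3} (the indices are made up):

```python
import numpy as np

# Toy setting: Aff(i,j) = 1 when both i and j are in the foreground
# group {0, 1, 3}, and 0 otherwise.
fg = [0, 1, 3]
Aff = np.zeros((5, 5))
for i in fg:
    for j in fg:
        Aff[i, j] = 1.0

vals, vecs = np.linalg.eigh(Aff)
E = vecs[:, -1]                    # largest eigenvector

# Foreground: |E_i| > 0 (up to round-off); background: |E_i| == 0.
group = [i for i in range(5) if abs(E[i]) > 1e-8]
print(group)  # → [0, 1, 3]
```

Here Aff is rank one (the outer product of the foreground indicator with itself), so the largest eigenvector is exactly the indicator, up to scale and sign.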
![Page 107: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/107.jpg)
Eigenvector Grouping
Why does the largest eigenvector work?

The leading eigenvector E of A has these properties:
• A E = λ_max E (by definition)
![Page 108: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/108.jpg)
Eigenvector Grouping
Why does the largest eigenvector work?

The leading eigenvector E of A has these properties:
• A E = λ_max E (by definition)
(λ_max is an eigenvalue, a number; it should be the largest eigenvalue.)
![Page 109: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/109.jpg)
Eigenvector Grouping
Why does the largest eigenvector work?

The leading eigenvector E of A has these properties:
• A E = λ_max E (by definition)
• Fact: max_V |A V| / |V| occurs at V = E (max value λ_max)
![Page 110: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/110.jpg)
Eigenvector Grouping
Why does the largest eigenvector work?

The leading eigenvector E of A has these properties:
• A E = λ_max E (by definition)
• Fact: max_V |A V| / |V| occurs at V = E (max value λ_max)

Interpretation: E is the "central direction" of all the rows of A (it has the largest dot products with the rows of A).
![Page 111: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/111.jpg)
Aside: Math
• Fact: max_V |A V|² / |V|² occurs at V = E
(equivalent: maximizing the square maximizes the original ratio)
![Page 112: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/112.jpg)
Aside: Math
• Derive from calculus: the max is at a stationary point.

Show: max_V |A V|² / |V|² occurs for V = E.

d/dV [ |A V|² / |V|² ] = d/dV [ (V^T A^T A V) / (V^T V) ] = 0

d/dV [ (V^T A^T A V) / (V^T V) ] = 2 A^T A V / (V^T V) − 2 (V^T A^T A V) V / (V^T V)²
![Page 113: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/113.jpg)
Aside: Math
• Derive from calculus: the max is at a stationary point.

Show: max_V |A V|² / |V|² occurs for V = E.

d/dV [ (V^T A^T A V) / (V^T V) ] = 2 A^T A V / (V^T V) − 2 (V^T A^T A V) V / (V^T V)² = 0

Setting this to zero gives A^T A V = λ V, with λ = (V^T A^T A V) / (V^T V).
![Page 114: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/114.jpg)
Aside: Math
• Derive from calculus: the max is at a stationary point.

d/dV [ (V^T A^T A V) / (V^T V) ] = 2 A^T A V / (V^T V) − 2 (V^T A^T A V) V / (V^T V)² = 0

Setting this to zero gives A^T A V = λ V, with λ = (V^T A^T A V) / (V^T V).

At the max, V is an eigenvector of A^T A.
![Page 115: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/115.jpg)
Aside: Math
• Derive from calculus: the max is at a stationary point.

d/dV [ (V^T A^T A V) / (V^T V) ] = 2 A^T A V / (V^T V) − 2 (V^T A^T A V) V / (V^T V)² = 0

Setting this to zero gives A^T A V = λ V, with λ = (V^T A^T A V) / (V^T V).

At the max, V is an eigenvector of A^T A.

⇒ V is an eigenvector of A.
![Page 116: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/116.jpg)
Aside: Math
• Derive from calculus: the max is at a stationary point.

d/dV [ (V^T A^T A V) / (V^T V) ] = 2 A^T A V / (V^T V) − 2 (V^T A^T A V) V / (V^T V)² = 0

Setting this to zero gives A^T A V = λ V, with λ = (V^T A^T A V) / (V^T V).

At the max, V is an eigenvector of A^T A.

⇒ V is an eigenvector of A.

Why? A symmetric A has the same eigenvectors as A^T A = A², since
A E = λ E ⇒ A² E = λ A E = λ² E.
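A quick numerical check of the two claims above, on a random symmetric matrix (made up for illustration): the eigenvector E of largest-magnitude eigenvalue also satisfies A^T A E = λ² E, and no other unit vector gives a larger |A V|.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix standing in for A.
A = rng.random((6, 6))
A = (A + A.T) / 2

vals, vecs = np.linalg.eigh(A)
k = np.argmax(np.abs(vals))
lam, E = vals[k], vecs[:, k]       # eigenvalue of largest magnitude

# E is also an eigenvector of A^T A, with eigenvalue lam^2:
assert np.allclose(A.T @ A @ E, lam**2 * E)

# And no unit vector beats E at maximizing |A V|:
best = np.linalg.norm(A @ E)       # equals |lam|
for _ in range(1000):
    V = rng.standard_normal(6)
    V /= np.linalg.norm(V)
    assert np.linalg.norm(A @ V) <= best + 1e-9
print("max of |A V| over unit vectors V is attained at the eigenvector E")
```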
![Page 117: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/117.jpg)
Eigenvector Grouping
• More realistic problem – single foreground group:

Aff(i, j) = large if i, j both in the foreground group
Aff(i, j) = small otherwise (i or j in the background)

With the foreground items first, the matrix has the block form

Aff = [ BIG   small ]
      [ small small ]
![Page 118: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/118.jpg)
Eigenvector Grouping
• More realistic problem – single foreground group:

Aff(i, j) = large if i, j both in the foreground group
Aff(i, j) = small otherwise (i or j in the background)

Aff = [ BIG   small ]
      [ small small ]

The eigenvector then also has the form E = (BIG; small).
![Page 119: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/119.jpg)
Eigenvector Grouping
• More realistic problem – single foreground group:

Aff(i, j) = large if i, j both in the foreground group
Aff(i, j) = small otherwise (i or j in the background)

Aff = [ BIG   small ]
      [ small small ]

The eigenvector then also has the form E = (BIG; small).

(E is a unit vector. To get the largest product |Aff E|, it is more efficient to put its big entries where they multiply the big entries of Aff.)
![Page 120: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/120.jpg)
Eigenvector Grouping
• More realistic problem – single foreground group:

Aff(i, j) = large if i, j both in the foreground group
Aff(i, j) = small otherwise (i or j in the background)

Aff = [ BIG   small ]
      [ small small ]

The eigenvector then also has the form E = (BIG; small).

We can identify the foreground items by the big entries of E.
![Page 121: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/121.jpg)
Eigenvector Grouping
• Real algorithm summary:
– Compute the largest eigenvector E of Aff
– Assign i to the foreground if |E_i| > T
– Assign i to the background if |E_i| < T
(T is a threshold chosen by you)
![Page 122: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/122.jpg)
Eigenvector Grouping
• Even more realistic problem – several groups:

Aff(i, j) = large if i, j both in the same group
Aff(i, j) = small otherwise (i and j in different groups)

– Example: two groups (Group 1 first, then Group 2):

Aff′ = [ BIG   small ]
       [ small BIG   ]
![Page 123: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/123.jpg)
Eigenvector Grouping
• Even more realistic problem – several groups:

Aff(i, j) = large if i, j both in the same group
Aff(i, j) = small otherwise (i and j in different groups)

– Example: two groups (Group 1 first, then Group 2):

Aff′ = [ BIG   small ]
       [ small BIG   ]

Usually the leading eigenvector picks out one of the groups (the one with the biggest affinities). In this case we again expect E = (BIG; small).
![Page 124: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/124.jpg)
Technical Aside
Why does the eigenvector pick out just one group?

• Toy example: 2 × 2 "affinity matrix"

A = [ B1  0 ]
    [ 0   B2 ]

B1 > B2 are "big" values; the affinities between the "groups" are 0.
![Page 125: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/125.jpg)
Technical Aside
Why does the eigenvector pick out just one group?

• Toy example: 2 × 2 "affinity matrix"

A = [ B1  0 ]
    [ 0   B2 ]

B1 > B2 are "big" values; the affinities between the "groups" are 0.

The leading eigenvector is E = (1, 0)^T, with eigenvalue B1.

Intuition: E has to be a unit vector. To be most efficient at getting a large product |A E|, it puts all its large values into the group with the largest affinities.
![Page 126: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/126.jpg)
Eigenvector Grouping
• For several groups, the leading eigenvector picks out the "leading" (most self-similar) group.
(Remember: the eigenvector can be calculated by SVD on A^T A.)
• To identify the other groups, remove the points of the leading group and repeat the leading-eigenvector computation on the remaining points.
![Page 127: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/127.jpg)
Eigenvector Grouping
• For several groups, the leading eigenvector (or singular vector) picks out the "leading" (most self-similar) group.
• To identify the other groups, remove the points of the leading group and repeat the leading-eigenvector computation on the remaining points.
• Alternative: use the non-leading eigenvectors/eigenvalues to pick out the non-leading groups.
![Page 128: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/128.jpg)
(Alternative) Algorithm Summary
• Choose affinity measures and create the affinity matrix Aff
• Compute the eigenvalues of Aff; assign all elements to an active list L
• Repeat for each eigenvalue, starting from the largest:
– Compute the corresponding eigenvector E
– Choose a threshold T; assign the set {i ∈ L : |E_i| > T} to a new group
– Remove the new group's elements from the active list L
– Stop if L is empty or the new group is too small
• "Clean up" the groups (optional)
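The alternative summary can be sketched in NumPy. The threshold T, the minimum group size, and the example two-group affinity matrix below are all illustrative choices, not values from the slides:

```python
import numpy as np

def eigenvector_groups(Aff, T=0.2, min_size=2):
    """Walk the eigenvectors from the largest eigenvalue down, peeling off
    one group per eigenvector. T and min_size are illustrative tuning
    choices, not values from the slides."""
    vals, vecs = np.linalg.eigh(Aff)
    active = set(range(Aff.shape[0]))           # the active list L
    groups = []
    for k in np.argsort(vals)[::-1]:            # largest eigenvalue first
        E = vecs[:, k]
        group = sorted(i for i in active if abs(E[i]) > T)
        if len(group) < min_size:               # empty or too small: stop
            break
        groups.append(group)
        active -= set(group)
        if not active:                          # L empty: stop
            break
    return groups

# Two groups of different strength: within-group affinity 1.0 and 0.6,
# between-group affinity 0.05 (all values made up).
Aff = np.full((6, 6), 0.05)
for block, s in (([0, 1, 2], 1.0), ([3, 4, 5], 0.6)):
    for i in block:
        for j in block:
            Aff[i, j] = s

print(eigenvector_groups(Aff))  # → [[0, 1, 2], [3, 4, 5]]
```

Note the groups are deliberately given different strengths; as the next slides point out, groups of similar strength can confuse the leading eigenvector.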
![Page 129: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/129.jpg)
Eigenvector Grouping
• Problems:
– The method picks the "strongest" group first.
– When there are several groups of similar strength, it can get confused.
• The eigenvector algorithm described so far is not used in current practice.
![Page 130: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/130.jpg)
Eigenvectors + Graph Cut Methods
![Page 131: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/131.jpg)
Graph Cut Methods
• Image → undirected graph G = (V, E)
• Each graph edge has a weight w(E)
• Example:
– Vertex nodes = edgels i
– Graph edges go between edgels i, j
– Graph edge weight w(i, j) = Aff(i, j)
• Task: partition V into V1 … Vn such that similarity is high within groups and low between groups
![Page 132: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/132.jpg)
Issues
• What is a good partition?
• How can you compute such a partition efficiently?
![Page 133: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/133.jpg)
Graph Cut
• G=(V,E)
• Sets A and B are a disjoint partition of V
![Page 134: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/134.jpg)
Graph Cut
• G = (V, E)
• Sets A and B are a disjoint partition of V
• Measure of dissimilarity between the two groups:

Cut(A, B) = Σ_{u∈A, v∈B} w(u, v)
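The cut definition translates directly into code. A sketch with a hypothetical 4-node weighted graph (the weights are made up for illustration):

```python
import numpy as np

def cut(W, A, B):
    """Cut(A, B) = sum of edge weights w(u, v) with u in A, v in B."""
    return sum(W[u, v] for u in A for v in B)

# Hypothetical 4-node weighted graph: two tight pairs {0,1} and {2,3}
# (strong edges, weights 5 and 6) joined by two weak edges (weight 1).
W = np.array([[0, 5, 1, 0],
              [5, 0, 0, 1],
              [1, 0, 0, 6],
              [0, 1, 6, 0]], dtype=float)

print(cut(W, [0, 1], [2, 3]))  # → 2.0  (cuts only the two weak edges)
print(cut(W, [0, 2], [1, 3]))  # → 11.0 (cuts both strong edges)
```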
![Page 135: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/135.jpg)
Graph Cut
• G = (V, E)
• Sets A and B are a disjoint partition of V
• Measure of dissimilarity between the two groups:

Cut(A, B) = Σ_{u∈A, v∈B} w(u, v)

(Figure: the cut separating A from B.)
![Page 136: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/136.jpg)
Graph Cut
• G = (V, E)
• Sets A and B are a disjoint partition of V
• Measure of dissimilarity between the two groups:

Cut(A, B) = Σ_{u∈A, v∈B} w(u, v)

(Figure: the cut links that get summed over.)
![Page 137: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/137.jpg)
The temptation
• Cut is a measure of disassociation.
• Minimizing Cut gives the partition with maximum disassociation.
• Efficient polynomial-time algorithms exist.
![Page 138: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/138.jpg)
The problem with MinCut
It usually outputs segments that are too small!
![Page 139: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/139.jpg)
The problem with MinCut
It usually outputs segments that are too small!
You can get a small cut just by having fewer cut links (e.g., by isolating a single node).
![Page 140: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/140.jpg)
The Normalized Cut (Shi + Malik)
![Page 141: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/141.jpg)
The Normalized Cut (Shi + Malik)
Given a partition (A, B) of the vertex set V:

Ncut(A, B) = cut(A, B) / assoc(A, V) + cut(B, A) / assoc(B, V)

where assoc(A, V) = Σ_{u∈A, t∈V} w(u, t).

Ncut(A, B) measures the difference between the two groups, normalized by how similar each group is within itself.

If A is small, assoc(A, V) is small and Ncut is large (a bad partition).

Problem: find the partition minimizing Ncut.
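Ncut can be computed directly from these definitions. A sketch on a hypothetical 4-node graph (weights made up): the balanced split gets a low Ncut, while splitting off a single node, which plain MinCut might prefer, is penalized by the normalization.

```python
import numpy as np

def cut(W, A, B):
    return sum(W[u, v] for u in A for v in B)

def assoc(W, A):
    """assoc(A, V) = total weight from nodes in A to all nodes in the graph."""
    return sum(W[u, t] for u in A for t in range(W.shape[0]))

def ncut(W, A, B):
    c = cut(W, A, B)               # W symmetric, so cut(A,B) == cut(B,A)
    return c / assoc(W, A) + c / assoc(W, B)

# Hypothetical 4-node graph: two tight pairs {0,1} and {2,3}.
W = np.array([[0, 5, 1, 0],
              [5, 0, 0, 1],
              [1, 0, 0, 6],
              [0, 1, 6, 0]], dtype=float)

print(ncut(W, [0, 1], [2, 3]))  # balanced split: low Ncut
print(ncut(W, [0], [1, 2, 3]))  # isolating one node: normalization penalizes it
```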
![Page 142: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/142.jpg)
Matrix formulation
Definitions:

• D is an n × n diagonal matrix with entries D(i, i) = Σ_j w(i, j)
• W is an n × n symmetric matrix with W(i, j) = w(i, j)
![Page 143: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/143.jpg)
Normalized cuts
• Transform the problem into one that can be approximated by eigenvector methods.
• After some algebra, the Ncut problem becomes

Min Ncut = min_y [ y^T (D − W) y / (y^T D y) ]

subject to the constraints:
• y^T D 1 = 0
• y_i ∈ {1, −b} for some constant b
![Page 144: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/144.jpg)
Normalized cuts
• Transform the problem into one that can be approximated by eigenvector methods.
• After some algebra, the Ncut problem becomes

Min Ncut = min_y [ y^T (D − W) y / (y^T D y) ]

subject to the constraints:
• y^T D 1 = 0
• y_i ∈ {1, −b} for some constant b

NP-complete!
![Page 145: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/145.jpg)
Normalized cuts
• Drop the discreteness constraints to make the problem easier, solvable by eigenvectors:

ApproxMin Ncut = min_y [ y^T (D − W) y / (y^T D y) ]

subject to the constraint y^T D 1 = 0.
![Page 146: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/146.jpg)
Normalized cuts
• Drop the discreteness constraints to make the problem easier, solvable by eigenvectors:

ApproxMin Ncut = min_y [ y^T (D − W) y / (y^T D y) ]

subject to the constraint y^T D 1 = 0.

• Solution: y is the eigenvector with the second-smallest eigenvalue of the matrix

M = D^{−1/2} (D − W) D^{−1/2}
![Page 147: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/147.jpg)
Normalized cuts
• Drop the discreteness constraints to make the problem easier, solvable by eigenvectors:

ApproxMin Ncut = min_y [ y^T (D − W) y / (y^T D y) ]

subject to the constraint y^T D 1 = 0.

• Solution: y is the eigenvector with the second-smallest eigenvalue of the matrix

M = D^{−1/2} (D − W) D^{−1/2}

(Computing D^{−1/2} is easy, since D is diagonal.)
![Page 148: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/148.jpg)
Normalized cuts
• Drop the discreteness constraints to make the problem easier, solvable by eigenvectors:

ApproxMin Ncut = min_y [ y^T (D − W) y / (y^T D y) ]

subject to the constraint y^T D 1 = 0.

• Solution: y is the eigenvector with the second-smallest eigenvalue of the matrix

M = D^{−1/2} (D − W) D^{−1/2}

(Why second-smallest? Because of the constraint!)
![Page 149: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/149.jpg)
Normalized cuts
• Drop the discreteness constraints to make the problem easier, solvable by eigenvectors:

ApproxMin Ncut = min_y [ y^T (D − W) y / (y^T D y) ]

subject to the constraint y^T D 1 = 0.

• Solution: y is the eigenvector with the second-smallest eigenvalue of the matrix

M = D^{−1/2} (D − W) D^{−1/2}

MATLAB: [U,S,V] = svd(M); y = V(:,end-1);
![Page 150: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/150.jpg)
Normalized Cuts: algorithm
• Define affinities w_{i,j}
• Compute the matrices W, D, D^{−1/2}, and M = D^{−1/2} (D − W) D^{−1/2}
• [U,S,V] = svd(M); y = V(:,end-1);
• Threshold y:

A = {i : y_i ≥ mean(y)}
B = {i : y_i < mean(y)}

(You could also use a threshold other than the mean of y.)
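The whole algorithm fits in a few lines of NumPy. A sketch mirroring the MATLAB calls above; the example weight matrix is made up for illustration:

```python
import numpy as np

def normalized_cut(W):
    """One bipartition step of normalized cuts, mirroring the MATLAB above:
    M = D^(-1/2) (D - W) D^(-1/2); take the eigenvector of the
    second-smallest eigenvalue; threshold it at its mean."""
    d = W.sum(axis=1)                       # D(i,i) = sum_j w(i,j)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # easy: D is diagonal
    M = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
    # M is symmetric positive semidefinite, so its singular values equal its
    # eigenvalues and svd sorts them in descending order; MATLAB's
    # V(:,end-1) is the second-smallest eigenvector, i.e. Vt[-2] here.
    U, S, Vt = np.linalg.svd(M)
    y = Vt[-2, :]
    A = [i for i in range(len(y)) if y[i] >= y.mean()]
    B = [i for i in range(len(y)) if y[i] < y.mean()]
    return A, B

# Two tight pairs {0,1} and {2,3} joined by weak edges.
W = np.array([[0, 5, 1, 0],
              [5, 0, 0, 1],
              [1, 0, 0, 6],
              [0, 1, 6, 0]], dtype=float)

A, B = normalized_cut(W)
print(sorted(A), sorted(B))  # the two tight pairs (labels may swap)
```

The eigenvector's sign is arbitrary, so which pair is labeled A and which B can swap; the partition itself is the same.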
![Page 151: Grouping and Segmentation](https://reader036.vdocuments.us/reader036/viewer/2022081505/56816363550346895dd436e0/html5/thumbnails/151.jpg)
Normalized Cuts
• Very powerful algorithm, widely used
• Results shown later