
Segmentation In The Field Of Medicine

Advanced Image Processing course

By: Ibrahim Jubran

Presented To: Prof. Hagit Hel-Or

What we will go through today

• A little inspiration.

• Medical image segmentation methods:

- Deformable Models.

- Markov Random Fields.

• Results.

Why Let A Human Do It, When The Computer Does It Better?

• "Image data is of immense practical importance in medical informatics."

• For instance: CT (CAT), MRI, X-Ray, Ultrasound. All are represented as images, and as images they can be processed to extract meaningful information, such as the volume, shape, and motion of organs or layers, or to detect any abnormalities.

Why Let A Human Do It, When The Computer Does It Better? Cont.

• Here's a task for you:

Look at this image: could you manually mark the boundaries of the two abnormal regions?

Answer: Maybe…

Not Bad...

And… What if I told you to do it in 3D?

Answer? You would probably fail badly.

But… the computer, on the other hand, dealt with it perfectly:

Common Methods: Deformable Models

• Deformable models are curves whose deformations are determined by the displacement of a discrete number of control points along the curve.

• Advantage: usually very fast convergence, depending on the predetermined number of control points.

• Disadvantage: Topology dependent: a model can capture only one ROI, therefore in images with multiple ROIs we need to initialize multiple models.

Deformable models

• A widely used method in the medical field is Deformable Models, which divide into two main categories:

- Parametric Deformable Models.

- Geometric Deformable Models.

• We shall discuss each of them briefly.

Geometric Models

• Geometric Models use a distance transformation to define the shape, lifting it from an n-dimensional to an (n+1)-dimensional domain (where n=1 for curves, n=2 for surfaces on the image plane…)

Example of a transformation

• Here you see a transformation from 1D to 2D.

Geometric Models cont.

• Advantages:

1) The evolving interface can be described by a single function even if it consists of more than one curve.

2) The shape can be defined in a domain with dimensionality similar to the dataset space (for example, for 2D segmentation, a curve is transformed into a 2D surface) -> more mathematically straightforward integration of shape and appearance.

In Other Words…

• We transform the n-dimensional image into an (n+1)-dimensional image, then we try to find the best position for a "plane", called the "zero level set", to be in.

• We start from the highest point and descend, until the change in the gradient is below a predefined threshold.

• The distance function φ embeds the contour, which evolves by ∂φ/∂t = g · (κ + ν) · |∇φ|.

• g is the speed function, C is our zero level set.

• The curvature term κ forces the boundaries to be smooth.

And Formally…
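As a concrete illustration, here is a minimal numpy sketch of one plausible form of this evolution, the standard geometric model ∂φ/∂t = g · (κ + ν) · |∇φ|. This is our own illustrative code, not the presentation's; real implementations also reinitialize φ periodically and restrict updates to a narrow band around the contour.

```python
import numpy as np

def evolve_level_set(phi, g, nu=0.2, dt=0.25, n_iter=500):
    """Evolve the embedding phi by dphi/dt = g * (kappa + nu) * |grad phi|.

    phi: (H, W) signed-distance-like function; its zero level set is the contour C.
    g:   (H, W) speed function, small near image edges so the front stops there.
    """
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)                 # spatial gradient of phi
        mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8   # |grad phi|, guarded against zero
        # Curvature kappa = div(grad phi / |grad phi|): this term smooths the boundary.
        kappa = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
        phi = phi + dt * g * (kappa + nu) * mag   # one gradient-descent step
    return phi
```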

Geometric Deformable Models Example

Geometric Models Results

Geometric Deformable Models Short demonstration

Click to watch a demonstration of the level set evolution

Parametric Models

• Also known as "Active Contours", or Snakes. Sounds familiar?

• The following slides are taken from Saar Arbel’s presentation about Snakes.

Five instances of the evolution of a region based deformable model

What is a snake?

• A framework for drawing an object outline from a possibly noisy 2D image.

• An energy-minimizing curve guided by external constraint forces and influenced by image forces that pull it towards features (lines, edges).

• Represents an object boundary or some other salient image feature as a parametric curve.

Every snake includes:

• An external energy function.

• An internal energy function.

• A set of k points (in the discrete world) or a continuous function that will represent the points.

Snakes are autonomous and self-adapting in their search for a minimal energy state

So... Why snakes?

They can be easily manipulated using external image forces

They can be used to track dynamic objects in the temporal as well as the spatial dimensions
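As an illustration, here is a hedged sketch using scikit-image's active_contour (our stand-in, not the code from Saar Arbel's presentation); a synthetic bright disk plays the role of an anatomical region, and alpha/beta weight the internal elasticity and stiffness energies.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic stand-in for a medical image: a bright "organ" on a dark background.
img = np.zeros((200, 200))
rr, cc = np.ogrid[:200, :200]
img[(rr - 100) ** 2 + (cc - 100) ** 2 < 40 ** 2] = 1.0

# Initialize the snake as a circle of k points enclosing the region of interest.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 70 * np.sin(s), 100 + 70 * np.cos(s)])

# The smoothed image supplies the external image forces; alpha and beta
# weight the internal (elasticity / stiffness) energy terms.
snake = active_contour(gaussian(img, sigma=3), init,
                       alpha=0.015, beta=10.0, gamma=0.001)
```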

Common Methods: Learning-Based Classification

• Learning based pixel and region classification is among the popular approaches for image segmentation.

• These methods use the advantages of supervised learning (training from examples) to assign to each image site a probability of belonging to the region of interest (ROI), as in the sketch below.
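A minimal sketch of this idea, assuming synthetic per-pixel feature vectors and a plain logistic-regression classifier (any supervised model would do):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: 2-D feature vectors for background (0) and ROI (1) pixels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(3.0, 1.0, (500, 2))])
y = np.repeat([0, 1], 500)

clf = LogisticRegression().fit(X, y)
p_roi = clf.predict_proba(X)[:, 1]  # per-site probability of belonging to the ROI
```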

The MRF & The Cartoon Model

A cartoon model

The Markov Random Field

• The name “Markov Random Field” might sound like a hard and scary subject at first… I thought so too when I started reading about it…

• Unfortunately I still do.

An unrelated photo of Homer Simpson

• Click to watch a demonstration of the MRF

• https://www.youtube.com/watch?v=hfOfAqLWo5c

The MRF & The Cartoon Model

• The MRF uses a model called the "cartoon model", which assumes that the "world" consists of regions where low-level features change slowly, but across the boundaries these features change abruptly.

• Our goal is to find ω, a "cartoon", which is a simplified version of the input image but with labels attached to the regions.

The MRF & The Cartoon Model cont.

Each label ω_s is modeled as a discrete random variable taking values in the label set Λ.

The Cartoon Model Cont.

• The discontinuities between those regions form a curve (the contour).

• (ω, Γ) form a segmentation.

• We will only focus on finding the best ω, because once ω is determined, Γ can be easily obtained.

More Cartoon Model Examples

Original vs. labelled (ω, Γ)

The Probabilistic Approach For Finding The Model

• For each possible segmentation / cartoon of the input image G we want to give a probability measure that describes how suitable the cartoon is for this specific image.

• Let Ω be the set of all possible segmentations. Note that Ω is finite!

The Probabilistic Approach cont.

• Assumptions: in this approach we assume that we have two sets of variables:

1) The observation random variables Y. The observation ℱ represents the low-level features in the image.

2) The hidden random variables X. The hidden entity X represents the segmentation itself.

Observation and Hidden Variables

Low-level features, for example:

Defining the Parameters needed

• First we need to define how well a segmentation fits the image features ℱ. P(ω | ℱ) – the image model.

• We want every segmentation to possess a set of properties. P(ω) – the prior – tells us how well ω satisfies these properties.

Illustration Of P(ω | ℱ)

original · P(ω | ℱ) is high · P(ω | ℱ) is low

Example

•We want the regions to be more homogeneous.

For example, in this image P(ω) would be a large number

Example cont.

But in this image P(ω) would be a very small number

Our Goal

• Our goal is to maximize P(ω | ℱ), since the higher this probability is, the better the segmentation ω fits the image features ℱ.

An unrelated photo of Homer Simpson (again)

• Click to watch a demonstration of the MRF

A Lesson In Probability

• As you might remember (probably not) from Probability lectures, Bayes' rule gives P(ω | ℱ) = P(ℱ | ω) · P(ω) / P(ℱ).

• Since P(ℱ) is constant for each image, it is dropped; therefore, we are looking for the ω that maximizes the posterior.
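Written out, this is the usual MAP derivation:

```latex
P(\omega \mid \mathcal{F}) = \frac{P(\mathcal{F} \mid \omega)\, P(\omega)}{P(\mathcal{F})}
\quad\Longrightarrow\quad
\hat{\omega} = \operatorname*{arg\,max}_{\omega \in \Omega} P(\mathcal{F} \mid \omega)\, P(\omega)
```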

Defining the Parameters needed Cont.

• In addition to the probability distributions that we defined, our model also depends on certain parameters that we denote by Θ.

• In the supervised segmentation we assume these parameters are either known or that a training set is available.

• In the unsupervised case, we will have to infer both ω and Θ from the observable entity ℱ.

The MRF cont.

• There are many features that one can take as observation for the segmentation process: gray-level, color, motion, different texture features, etc.

• In our lesson we will be using a combination of classical, gray-level based texture features and color, instead of direct modeling of color textures.

Feature extraction

• For each pixel s, we define a vector f⃗_s, which represents the features at that pixel.

• The set of all the feature vectors forms a vector field ℱ = { f⃗_s | s ∈ S }, where S is the set of pixel sites. And as you remember, ℱ is the observation, and will be the input of the MRF segmentation algorithm.

Notes

• REMINDER: our features will be texture and color.

• We use the CIE-L*u*v* color space, so regions will be formed where both features are homogeneous, while boundaries will be present where there is a discontinuity in either color or texture.
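A small sketch of such a feature extraction, assuming scikit-image and a single Gabor response as the texture feature (the actual slides and paper use a richer texture set):

```python
import numpy as np
from skimage import data
from skimage.color import rgb2gray, rgb2luv
from skimage.filters import gabor

img = data.astronaut()                            # stand-in RGB image
luv = rgb2luv(img)                                # 3 color features (CIE-L*u*v*)
texture, _ = gabor(rgb2gray(img), frequency=0.2)  # 1 texture feature (Gabor response)

# Per-pixel feature vectors f_s, stacked into the observation F: shape (H*W, 4).
F = np.dstack([luv, texture[..., np.newaxis]]).reshape(-1, 4)
```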

CIE-L*u*v* VS. RGB

CIELUV color histogram

RGB color histogram

The Markov Random Field Segmentation Model

• Let's start by defining P(ω): P(ω) = (1/Z) · exp(−Σ_C V_C(ω)), a Gibbs distribution whose clique potentials V_C penalize neighboring pixels with different labels.

• We defined P(ω) in a way that it represents the simple fact that segmentation should be locally homogeneous.

Let's call this SQUIRREL

Definitions

• |Ω| = the number of possible cartoons.

• ω_s = the label of pixel s.

And now… the FUN part !!

• Don’t listen to me, just RUN!

The Image Process

• We assume P(f⃗_s | ω_s = λ) follows a normal distribution N(μ⃗_λ, Σ_λ).

The Image Process cont.

• n = the dimension of our color-texture space.

• λ = a pixel class.

• μ⃗_λ = the mean vector (the average of all the feature vectors within the class λ).

• Σ_λ = the covariance matrix, which describes the correlation between each two features in a given class.
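For instance, evaluating log P(f⃗_s | λ) with made-up class parameters (a sketch, using scipy):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical parameters of one pixel class lambda (all values invented):
mu = np.array([120.0, 6.0, 140.0])     # mean feature vector of the class
Sigma = np.diag([25.0, 4.0, 16.0])     # covariance of the class features

f_s = np.array([117.0, 6.0, 140.0])    # feature vector of pixel s
log_p = multivariate_normal(mean=mu, cov=Sigma).logpdf(f_s)  # log P(f_s | lambda)
```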

Intuition

Our feature vectors would look like this: [figure: six numbered example feature vectors clustered around three class means μ⃗_1, μ⃗_2, μ⃗_3]

Intuition cont.

• For example: if we want to include vector 1 in class λ_1, then its distance from the class mean μ⃗_1 would be a very small number, and so the energy (the negative log-probability) is very low.

• If we include vector 5 in class λ_1, then that distance would be a big number and the energy would be high.

The Image Process cont.

• We assume each pixel feature is independent given its label, and so: P(ℱ | ω) = Π_{s∈S} P(f⃗_s | ω_s).

Let's call this CAT. Let's call this DOG.

Fun Equations cont.

• REMINDER: P(ω | ℱ) ∝ P(ℱ | ω) · P(ω) (but we drop P(ℱ)).

Fun Equations cont.

• REMINDER: We need to find the ω that MAXIMIZES this expression.

• Or, equivalently, we need to find the ω that MINIMIZES the expression inside the "−" (the exponent).

MINIMIZATION

• There are two main methods used to minimize our expression (a sketch of ICM follows below):

1) ICM (Iterated Conditional Modes).

2) The Gibbs sampler.

• In some of the results we will be comparing those two methods.
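A minimal sketch of ICM under the assumptions above (Gaussian class models plus a Potts-style smoothness penalty β per disagreeing neighbor); this is our own illustration, not the code used for the results:

```python
import numpy as np

def icm_segment(F, means, covs, beta=1.0, n_iter=5):
    """ICM for a Potts MRF with Gaussian class models.

    F:     (H, W, d) per-pixel feature vectors (the observation).
    means: (L, d) class mean vectors mu_lambda.
    covs:  (L, d, d) class covariance matrices Sigma_lambda.
    """
    H, W, d = F.shape
    L = len(means)
    nll = np.empty((H, W, L))      # -log P(f_s | omega_s = l), up to a constant
    for l in range(L):
        diff = F - means[l]
        inv = np.linalg.inv(covs[l])
        _, logdet = np.linalg.slogdet(covs[l])
        nll[..., l] = 0.5 * (np.einsum("hwi,ij,hwj->hw", diff, inv, diff) + logdet)
    labels = nll.argmin(axis=-1)   # start from the maximum-likelihood labeling
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                energy = nll[i, j].copy()
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts prior: penalize labels differing from the neighbor's.
                        energy += beta * (np.arange(L) != labels[ni, nj])
                labels[i, j] = energy.argmin()   # greedy local update
    return labels
```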

Parameter estimation

• There are some parameters in our equations that should be estimated, with or without supervision:

1) If a training set is provided, then those parameters can be easily calculated based on the given data.

2) If we do not have such a training set, we would have to use an iterative EM algorithm.

Supervised Parameter Estimation cont.

• We can estimate μ⃗_λ by summing the feature vectors of class λ in the given dataset (and normalizing the result).

• We can estimate Σ_λ by summing (f⃗_s − μ⃗_λ)(f⃗_s − μ⃗_λ)ᵀ for every vector of class λ (and normalizing the result).
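In numpy terms, the two estimates are just per-class sample means and covariances (a sketch with invented training data):

```python
import numpy as np

# Hypothetical labeled training set: X holds feature vectors, y their class ids.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = rng.integers(0, 3, size=300)

# Per-class mean and covariance: the "sum and normalize" from the slides.
mu = np.array([X[y == l].mean(axis=0) for l in range(3)])
Sigma = np.array([np.cov(X[y == l], rowvar=False) for l in range(3)])
```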

Unsupervised Parameter Estimation cont.

• When no dataset is available we will be using Expectation Maximization (EM) for Gaussian mixture identification.

• Our goal is to find parameter values which maximize the normalized log-likelihood function (EM increases it monotonically at every iteration).

The EM Algorithm

• E step: compute a distribution on the labels based on the current parameter estimates.

• M step: compute the parameters again based on the new label distribution, very similar to the supervised case.

• We repeat those two steps until convergence.

• K-Means is a specific case of the EM algorithm.

• The EM approach is similar in spirit to gradient descent: each iteration moves the parameters uphill on the likelihood.
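scikit-learn's GaussianMixture runs exactly this E/M alternation; a sketch on invented, unlabeled feature vectors:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical unlabeled feature vectors drawn from two mixed classes.
rng = np.random.default_rng(0)
F = np.vstack([rng.normal(0.0, 1.0, (400, 3)), rng.normal(4.0, 1.0, (400, 3))])

# fit() alternates the E step and M step until the log-likelihood converges.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(F)
labels = gmm.predict(F)                    # hard labels after convergence
mu, Sigma = gmm.means_, gmm.covariances_   # estimated class parameters
```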

MRF Results (Supervised): Texture / Color / Combined

ICM

Gibbs Sampler

MRF Results (Unsupervised): Texture / Color / Combined

ICM

Gibbs Sampler

The Finale

• Segmentation in the medical field covers many topics and methods. Today we covered two of them, saw some results, and introduced a small estimation algorithm (EM) that is widely used in these topics.

Thank You For Listening!!

References

• Zoltan Kato and Ting-Chuen Pong. "A Markov Random Field Image Segmentation Model for Color Textured Images."

• Xiaolei Huang and Gavriil Tsechpenakis. "Medical Image Segmentation."

• Gavriil Tsechpenakis. "Deformable Model-Based Medical Image Segmentation."

• http://en.wikipedia.org/wiki/Markov_random_field

• Saar Arbel's presentation about Snakes.

• http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm
