CS654: Digital Image Analysis
Lecture 33: Introduction to Image Compression and Coding

Upload: cuthbert-greer

Post on 17-Jan-2016

Page 1

CS654: Digital Image Analysis

Lecture 33: Introduction to Image Compression and Coding

Page 2

Outline of Lecture 33

• Introduction to image compression

• Measure of information content

• Introduction to Coding

• Redundancies

• Image compression model

Page 3

Goal of Image Compression

• The goal of image compression is to reduce the amount of data required to represent a digital image

Page 4

Approaches

Lossless

• Information preserving

• Low compression ratios

Lossy

• Not information preserving

• High compression ratios

Page 5

Data ≠ Information

• Data and information are NOT synonymous terms

• Data is the means by which information is conveyed

• Data compression aims to reduce the amount of data required to represent a given quantity of information

• To preserve as much information as possible

• The same amount of information can be represented by various amounts of data

Page 6

Compression Ratio (CR)

• If the original representation uses $n_1$ units of data and the compressed representation uses $n_2$, the compression ratio is

$C_R = \dfrac{n_1}{n_2}$

Page 7

Data Redundancy

• Relative data redundancy:

$R_D = 1 - \dfrac{1}{C_R}$

Example:

A compression engine represents 10 bits of information of the source data with only 1 bit. Calculate the data redundancy present in the source data.

$C_R = \dfrac{10}{1} = 10$

$\therefore R_D = 1 - \dfrac{1}{10} = 0.9 = 90\%$
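The example above can be checked with a few lines of code (a minimal Python sketch; the function names are mine):

```python
def compression_ratio(n1, n2):
    """C_R = n1 / n2: data in the original vs. compressed representation."""
    return n1 / n2

def relative_redundancy(c_r):
    """R_D = 1 - 1/C_R: fraction of the source data that is redundant."""
    return 1 - 1 / c_r

c_r = compression_ratio(10, 1)   # 10 bits of source data coded with 1 bit
r_d = relative_redundancy(c_r)
print(c_r, r_d)                  # 10.0 and 0.9, i.e. 90% redundancy
```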

Page 8

Types of Data Redundancy

1. Coding redundancy: the number of bits required to code the gray levels
2. Inter-pixel redundancy: similarity among neighboring pixel values
3. Psychovisual redundancy: information irrelevant to the human visual system

Compression attempts to reduce one or more of these redundancy types.

Page 9

Coding Redundancy

• Code: a list of symbols (letters, numbers, bits etc.)

• Code word: a sequence of symbols used to represent a piece of information or an event (e.g., gray levels).

• Code word length: number of symbols in each code word

For an $N \times M$ image, let $r_k$ be the $k$-th gray level, $P(r_k)$ its probability of occurrence, and $l(r_k)$ the number of bits used to represent it.

Expected value:

$E(X) = \sum_{x} x \, P(X = x)$

Average number of bits:

$L_{avg} = E(l(r_k)) = \sum_{k=0}^{L-1} l(r_k) P(r_k)$

Total number of bits: $N M L_{avg}$

Page 10

Coding Redundancy: Fixed-length coding

P(r_k)   Code 1   l_1(r_k)
0.19     000      3
0.25     001      3
0.21     010      3
0.16     011      3
0.08     100      3
0.06     101      3
0.03     110      3
0.02     111      3  bits

Total number of bits $= 3NM$

Page 11

Coding Redundancy: Variable-length coding

P(r_k)   Code 1   l_1(r_k)   Code 2    l_2(r_k)
0.19     000      3          11        2
0.25     001      3          01        2
0.21     010      3          10        2
0.16     011      3          001       3
0.08     100      3          0001      4
0.06     101      3          00001     5
0.03     110      3          000001    6
0.02     111      3          000000    6  bits

$C_R = \dfrac{3}{2.7} = 1.11$, $R_D = 1 - \dfrac{1}{1.11} = 0.099 \approx 10\%$
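The table's averages can be verified directly (a Python sketch using the probabilities and Code 2 word lengths from the table above):

```python
# Probabilities and Code 2 word lengths from the variable-length table
p       = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
l_code2 = [2, 2, 2, 3, 4, 5, 6, 6]

l_avg = sum(pk * lk for pk, lk in zip(p, l_code2))  # expected word length
c_r = 3 / l_avg                                     # vs. the 3-bit fixed code
r_d = 1 - 1 / c_r
print(l_avg, c_r, r_d)   # 2.7 bits, C_R ≈ 1.11, R_D ≈ 0.1
```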

Page 12

Inter-pixel redundancy

• Inter-pixel redundancy implies that pixel values are correlated

• A pixel value can be reasonably predicted from its neighbors
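This can be illustrated with simple left-neighbor prediction (a Python sketch with made-up pixel values):

```python
# A slowly varying scanline: neighboring pixels are strongly correlated
row = [100, 101, 101, 102, 104, 104, 103, 103]

# Predict each pixel by its left neighbor; keep only the prediction error
residual = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]
print(residual)   # [100, 1, 0, 1, 2, 0, -1, 0]

# The residuals cluster around zero, so they need far fewer bits per
# value than the raw pixels, and the original row is fully recoverable.
```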

Page 13

Psychovisual redundancy

• The human eye does not respond with equal sensitivity to all visual information.

• It is more sensitive to the lower frequencies than to the higher frequencies in the visual spectrum.

• Discard data that is perceptually insignificant!
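One simple way to discard perceptually less important data is uniform requantization of the gray levels (a minimal Python sketch; the helper name is mine):

```python
def quantize(pixel, levels=16):
    """Requantize an 8-bit gray value to `levels` uniform levels,
    mapped back onto the 0-255 range for display."""
    step = 256 // levels        # 16 input values share one output level
    return (pixel // step) * step

print(quantize(200))   # 192: every value in [192, 207] maps to 192
```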

Page 14

Example

[Figure: the same image shown with 256 gray levels, with 16 gray levels plus random noise, and with 16 gray levels]

Page 15

Measuring Information

• What is the minimum amount of data that is sufficient to describe completely an image without loss of information?

• How do we measure the information content of a message or image?

Page 16

Modeling Information

• We assume that information generation is a probabilistic process.

• Associate information with probability!

A random event $E$ with probability $P(E)$ contains

$I(E) = \log \dfrac{1}{P(E)} = -\log P(E)$

units of information.

Note: $I(E) = 0$ when $P(E) = 1$ (a certain event carries no information).

Page 17

How much information does a pixel contain?

• Suppose that gray-level values are generated by a random process; then observing the value $r_k$ contains

$I(r_k) = -\log P(r_k)$

units of information (bits, if the base-2 logarithm is used).

Page 18

How much information does an image contain?

• Average information content of an image (assuming statistically independent random events):

$E(I(r_k)) = \sum_{k=0}^{L-1} I(r_k) P(r_k)$

• Using $I(r_k) = -\log_2 P(r_k)$, this gives the entropy

$H = -\sum_{k=0}^{L-1} P(r_k) \log_2 P(r_k)$

units/pixel (e.g., bits/pixel)
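Applied to the histogram from the coding-redundancy example, the entropy can be estimated directly (Python sketch):

```python
import math

def entropy(probs):
    """First-order entropy H = -sum p log2 p, in bits per pixel."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Gray-level probabilities from the coding-redundancy example
p = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
h = entropy(p)
print(h)   # ≈ 2.65 bits/pixel, just below the 2.7-bit average of Code 2
```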

Page 19

Redundancy (revisited)

• Redundancy:

$R_D = 1 - \dfrac{1}{C_R}$, where $C_R = \dfrac{L_{avg}}{H}$

Note: if $L_{avg} = H$, then $R_D = 0$ (no redundancy)

Page 20

Image Compression Model

$f(x,y)$ → Source Encoder → Channel Encoder → Channel → Channel Decoder → Source Decoder → $f'(x,y)$

• Encoder = source encoder followed by channel encoder; Decoder = channel decoder followed by source decoder

• The source encoder/decoder pair performs the compression; the channel encoder/decoder pair provides noise tolerance

Page 21

Functional blocks

Page 22

Functional blocks

Mapper: transforms input data in a way that facilitates reduction of inter-pixel redundancies.

Page 23

Functional blocks

Quantizer: reduces the accuracy of the mapper’s output in accordance with some pre-established fidelity criteria.

Page 24

Functional blocks

Symbol encoder: assigns the shortest code to the most frequently occurring output values.
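The classic way to assign shorter codes to more frequent symbols is Huffman coding. Below is a minimal sketch that computes Huffman code-word lengths for the probabilities used earlier (the function is my illustration, not the course's code):

```python
import heapq

def huffman_lengths(probs):
    """Code-word length per symbol from a Huffman tree (sketch).
    Returns a dict {symbol_index: length}."""
    # Heap entries: (probability, unique tiebreak, symbol indices in subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = {i: 0 for i in range(len(probs))}
    tiebreak = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                 # each merge adds one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

p = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
L = huffman_lengths(p)
l_avg = sum(p[i] * L[i] for i in L)
print(l_avg)   # ≈ 2.7 bits, matching the average of Code 2
```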

Page 25

Fidelity Criteria

• How close is $f'(x,y)$ to $f(x,y)$?

• Criteria

• Subjective: based on human observers

• Objective: mathematically defined criteria

Page 26

Subjective Fidelity Criteria

Page 27

Objective Fidelity Criteria

• Root-mean-square error (RMS):

$e_{rms} = \left[ \dfrac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left( f'(x,y) - f(x,y) \right)^2 \right]^{1/2}$

• Mean-square signal-to-noise ratio (SNR):

$SNR_{ms} = \dfrac{\sum_{x}\sum_{y} f'(x,y)^2}{\sum_{x}\sum_{y} \left( f'(x,y) - f(x,y) \right)^2}$
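Both criteria are easy to compute; a minimal Python sketch over flattened images (function names and sample values are mine):

```python
def rms_error(f, g):
    """Root-mean-square error between original f and reconstruction g."""
    n = len(f)
    return (sum((gv - fv) ** 2 for fv, gv in zip(f, g)) / n) ** 0.5

def snr_ms(f, g):
    """Mean-square SNR: reconstruction energy over error energy."""
    return sum(gv ** 2 for gv in g) / sum(
        (gv - fv) ** 2 for fv, gv in zip(f, g))

f = [10, 20, 30, 40]       # original (flattened) image
g = [11, 19, 30, 42]       # reconstructed image
print(rms_error(f, g), snr_ms(f, g))   # ≈ 1.22 and ≈ 524.3
```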

Page 28

Lossless Compression

Types of coding:

• Repetitive sequence encoding: RLE
• Statistical encoding: Huffman, Arithmetic, LZW
• Predictive coding: DPCM
• Bit-plane coding
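As an example of repetitive sequence encoding, a minimal run-length encoding (RLE) sketch:

```python
def rle_encode(seq):
    """Encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Invert rle_encode: expand each run back into its values."""
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]   # typical binary-image scanline
print(rle_encode(row))                  # [(0, 3), (255, 2), (0, 4)]
```

RLE is lossless: decoding the runs reproduces the original sequence exactly, and it pays off whenever runs are long, as in bi-level or bit-plane images.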