IT-101 Section 001 Lecture #7 Introduction to Information Technology

Upload: austen-martin-gallagher

Post on 12-Jan-2016


TRANSCRIPT

Page 1: IT-101 Section 001 Lecture #7 Introduction to Information Technology

IT-101Section 001

Lecture #7

Introduction to Information Technology

Page 2:

Overview

Chapter 5: From the Real World to Images and Video
- Introduction to visual representation and display
- Converting images to gray scale
- Color representation
- Video
- Image compression

Page 3:

Introduction to Visual Representation and Display

Images play a fundamental role in the representation, storage and transmission of information

In the previous chapters we learned how to represent information that was in the form of numbers and text

In this chapter we will learn how to represent still and time varying images with binary digits

“A picture is worth ten thousand words” But it takes a whole lot more than that!

Page 4:

Image Issues

The world we live in is analog, or continuously varying

Digitizing, or making this analog information discrete, involves some problems, so we make approximations and weigh the tradeoffs involved.

While digitizing, we need to keep the following facts in mind:
- We are producing information for human use
- Human vision has limitations
- We can take advantage of these limitations to produce displays that are “good enough”

Page 5:

Digital Information for Humans

Many digital systems take advantage of human limitations (visual, aural, etc.)

Human gray-scale acuity is about 2% of full brightness; in other words, most people can distinguish at most about 50 gray levels, so 6 bits per pixel are enough.

The human eye can resolve about 60 “lines per degree of visual arc”, a measure of the ability of the eye to resolve fine detail.

When we look at an 8.5 x 11” sheet of paper at 1 foot (landscape), the viewing angles are 49.25 degrees for the horizontal dimension and 39 degrees for the vertical dimension.

We can therefore distinguish: 49.25 degrees x 60 = 2955 horizontal lines, and 39 degrees x 60 = 2340 vertical lines.

These numbers give us a clue about the length of the code needed to capture images
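The viewing-angle figures above can be reproduced in a few lines; the page size, viewing distance, and the 60 lines-per-degree figure are taken from the slides:

```python
import math

# Geometry from the slide: an 8.5 x 11 inch page viewed from 12 inches
# away, in landscape (11" horizontal, 8.5" vertical).
DISTANCE = 12.0  # inches from eye to page
HALF_W, HALF_H = 11.0 / 2, 8.5 / 2

# Full viewing angle = 2 * arctan(half dimension / distance), in degrees
h_angle = 2 * math.degrees(math.atan(HALF_W / DISTANCE))
v_angle = 2 * math.degrees(math.atan(HALF_H / DISTANCE))

# The eye resolves roughly 60 lines per degree of visual arc
h_lines = h_angle * 60
v_lines = v_angle * 60

print(f"{h_angle:.2f} deg -> {h_lines:.0f} lines")  # ~49.25 deg -> ~2955 lines
print(f"{v_angle:.2f} deg -> {v_lines:.0f} lines")  # ~39 deg -> ~2340 lines
```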

Page 6:

“lines per degree of visual arc”

When an image is brought closer to the eye, we can resolve more detail.

Humans can resolve 60 lines per degree of visual arc

A line requires two strings of pixels – one black, one white

Pixel – The smallest unit of representation for visual information


Page 7:

Pixels

A pixel is the smallest unit of representation for visual information

Each pixel in a digitized image represents one intensity (brightness) level (gray scale or color)

13 x 13 grid = 169 pixels (examples shown in gray scale and in color)

Page 8:

To form a black line, you need two rows of pixels (one black and one white) to give a visual cue of the transition from black to white.

For our paper example, the number of pixels needed to represent the total image on a page is (2 x 2955) x (2 x 2340) = 27,658,800 pixels per page.

This number of pixels would be sufficient to represent any image on the page with no visible degradation compared to a perfect (unpixelized) image at a distance of one foot
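The pixel count above, as a quick check of the slide's arithmetic:

```python
# From the slide: a line needs two pixel rows (one black, one white),
# so each line count from the earlier slide is doubled in its dimension.
h_pixels = 2 * 2955   # horizontal lines -> horizontal pixels
v_pixels = 2 * 2340   # vertical lines -> vertical pixels
total_pixels = h_pixels * v_pixels
print(total_pixels)   # 27658800, matching the 27,658,800 on the slide
```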

As the number of pixels that form an image (spatial resolution) decreases, the amount of data that needs to be transmitted, stored or processed decreases as well. However, the tradeoff is that the quality of the image degrades as a result

Page 9:

A note about printer resolution

When dealing with printers, we often quote the resolution in dots per inch (dpi), which corresponds to pixels per inch in our example.

It is common to set laser or ink-jet printer resolution to 600 dpi.

However, if we hold the paper closer, we would need a higher-resolution printer: for example, 720 dpi, 1200 dpi or greater.

Page 10:

How many pixels should be used?

If too few pixels are used, the image appears “coarse”

16 x 16 (256 pixels) vs. 64 x 64 (4096 pixels)

Page 11:

Digitizing Images (gray scale)

The first step in digitizing a “black and white” image composed of an array of gray shades is to divide the image into a number of pixels, depending on the required spatial resolution.

The number of brightness levels to be represented by each pixel is assigned next

If we wish to use, for example, 6 bits for the brightness level of each pixel, then each pixel can represent one of 64 different brightness levels (shades of gray, from black to white).

Then, each pixel would have a 6-bit number associated with it, representing the brightness level (shade) that is closest to the actual brightness level at that pixel

Page 12:

This process is known as quantization (we will learn more about it later in the course). Quantization is the process of rounding off actual continuous values so that they can be represented by a fixed number of binary digits.

As a result of the operations just described, the analog image is digitized and represented by a string of binary digits…
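A minimal sketch of the quantization just described, assuming brightness values normalized to the range [0, 1]; the `quantize` helper is illustrative, not from the text:

```python
def quantize(brightness: float, bits: int = 6) -> int:
    """Round a continuous brightness in [0.0, 1.0] to the nearest of 2**bits levels."""
    levels = 2 ** bits                       # 64 levels for 6 bits
    return round(brightness * (levels - 1))  # integer code in 0..levels-1

# Full black, full white, and a mid gray as 6-bit codes
print(quantize(0.0), quantize(1.0), quantize(0.5))  # 0 63 32

# The code only approximates the true brightness; reconstructing
# the level shows the rounding error introduced by quantization
code = quantize(0.3721)
print(code, code / 63)
```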

Page 13:

In the figures below, each pixel in the image is represented by 6 bits, 3 bits and 1 bit, respectively. The effect of varying the number of bits used to represent each pixel is evident.

6-bit image (64 gray levels)

3-bit image (8 gray levels)

Page 14:

1-bit image (black and white)

Page 15:

How much storage is needed?

Total number of bits required for storage = total number of pixels x number of bits used per pixel

For example, a black and white photo:
64 x 64 pixels, 32 gray levels (5 bits per pixel)
64 x 64 x 5 = 20,480 bits
20,480 / 8 = 2,560 bytes
2,560 / 1024 = 2.5 KB

Remember: data storage is measured in bytes, and 1 KB represents 2^10 = 1024 bytes.

Page 16:

Another example

Black and white photo: 256 x 256 pixels, 6 bits per pixel (64 gray levels). How much storage is needed?
256 x 256 x 6 = 393,216 bits
393,216 / 8 = 49,152 bytes
49,152 / 1024 = 48 KB
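Both storage examples follow the same formula; a small sketch (the `storage_kb` helper name is made up for illustration):

```python
def storage_kb(width: int, height: int, bits_per_pixel: int) -> float:
    """Uncompressed image storage in KB (1 KB = 1024 bytes)."""
    total_bits = width * height * bits_per_pixel
    return total_bits / 8 / 1024   # bits -> bytes -> KB

print(storage_kb(64, 64, 5))     # 2.5  (the 64 x 64, 32-gray-level photo)
print(storage_kb(256, 256, 6))   # 48.0 (the 256 x 256, 64-gray-level photo)
```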

Page 17:

A note about resolution

Since the total number of bits required for storage = total number of pixels x number of bits used per pixel, there are two ways to reduce the number of bits needed to represent an image:

- Reduce the total number of pixels
- Reduce the number of bits used per pixel

Applying either, however, reduces the quality of the image. The first results in low spatial resolution (the image appears coarse). The second results in poor brightness resolution, as seen in the previous couple of slides.

The amount of storage can, however, be reduced by applying image compression.

Page 18:

Digitizing Images (color)

Recall that any color can be created by adding the right proportions of red, green and blue light.

If we wish to digitize a color image, we must first divide the image into pixels.

We must then determine the amount of red, green and blue (RGB) that comprises the color at each pixel location.

Finally, we must convert these three levels to a binary number of a predefined length.

For example, if we use 3 bits for each color value, we can represent 8 intensity levels each of red, green and blue. This representation requires 9 bits per pixel and gives us 2^9 = 512 different colors per pixel.

Page 19:

Example

Color photo: 256 x 256 pixels, 9 bits per pixel (3 bits each for red, green and blue)
256 x 256 x 9 = 589,824 bits
589,824 / 8 = 73,728 bytes
73,728 / 1024 = 72 KB of storage is needed to store this color photo
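The color-photo arithmetic can be checked the same way:

```python
# 3 bits per color component, as on the slide
bits_per_pixel = 3 * 3                # 9 bits per pixel
colors = 2 ** bits_per_pixel          # 512 distinct colors per pixel
total_bits = 256 * 256 * bits_per_pixel
total_kb = total_bits / 8 / 1024      # bits -> bytes -> KB
print(colors, total_bits, total_kb)   # 512 589824 72.0
```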

Page 20:

Another approach to color representation of images is Hue, Luminance and Saturation (HLS).

This system does not represent colors by combinations of other colors, but it still uses 3 numerical values:

- Hue: where the pure color component falls on a scale that extends across the visible light spectrum
- Luminance: how light or dark a pixel is
- Saturation: how “pure” the color is, i.e. how much it is diluted by the addition of white (100% saturation means no dilution with white)

Let us see how this system works with the PowerPoint color palette.
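Python's standard `colorsys` module converts RGB to this representation (it calls the triple HLS, in the order hue, luminance, saturation); a quick sketch of how the three values behave:

```python
import colorsys

# Pure red: hue 0, mid luminance, 100% saturated (no dilution with white)
print(colorsys.rgb_to_hls(1.0, 0.0, 0.0))     # (0.0, 0.5, 1.0)

# Mixing the same red toward gray keeps hue and luminance
# but halves the saturation
print(colorsys.rgb_to_hls(0.75, 0.25, 0.25))  # (0.0, 0.5, 0.5)
```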

Page 21:

Video

Human perception of movement is slow.

Studies show that humans can only take in 20 different images per second before they begin to blur together

If these images are sufficiently similar, then the blurring which takes place appears to the eye to resemble motion, in the same way we discern it when an object moves smoothly in the real world.

We can detect higher rates of flicker, but only to about 50 per second

This phenomenon has been used since the beginning of the 20th century to produce “moving pictures,” or movies.

Movies show 24 frames per second. TV works similarly, but instead of whole frames, TV refreshes in lines across the tube. This same phenomenon can be used to create digitized video: a video signal stored in binary form.

Page 22:

Video

We have already discussed how individual images are digitized; digital video simply consists of a sequence of digitized still images, displayed at a rate sufficiently high to appear as continuous motion to the human visual system. The individual images are obtained by a digital camera that acquires a new image at a fast enough rate (say, 60 times per second) to create a time-sampled version of the scene in motion.

Because of human visual latency, these samples at certain instants in time are sufficient to capture all of the information that we are capable of taking in!

Page 23:

Adding up the bits

Assume a screen that is 512 x 512 pixels, about the same resolution as a good TV set. Assume 3 bits per color per pixel, for a total of 9 bits per pixel. Let's say we want the scene to change 60 times per second, so that we don't see any flicker or choppiness.

This means we will need 512 x 512 pixels x 9 bits per pixel x 60 frames per second x 3600 seconds ≈ 500 billion bits per hour, just for the video. Francis Ford Coppola's The Godfather, at over 3 hours, would require nearly 191 GB (over 191 billion bytes) of memory using this approach. This almost sounds like an offer we can refuse. But do films actually require this much storage? Fortunately, no.
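The slide's arithmetic can be checked in a few lines:

```python
pixels = 512 * 512
bits_per_pixel = 9    # 3 bits per color component
fps = 60              # frames per second

bits_per_second = pixels * bits_per_pixel * fps
bits_per_hour = bits_per_second * 3600
print(bits_per_hour)           # 509607936000, i.e. roughly 500 billion per hour

# A 3-hour film, in gigabytes (10**9 bytes)
film_gb = bits_per_hour * 3 / 8 / 10**9
print(round(film_gb))          # 191
```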

The reason we can represent video with significantly fewer bits than in this example is compression techniques, which take advantage of certain predictabilities and redundancies in video information to reduce the amount of information to be stored.

Page 24:

Image Compression

Near-photographic-quality image: 1,280 rows of 800 pixels each, with 24 bits of color information per pixel. Total = 24,576,000 bits.

Over a 56 Kbps modem (56,000 bits/sec), how long does it take to download?
24,576,000 / 56,000 ≈ 439 seconds
439 / 60 ≈ 7.31 minutes
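The download-time arithmetic, as a quick check:

```python
image_bits = 1280 * 800 * 24     # near-photographic-quality image
modem_bps = 56_000               # 56 Kbps modem

seconds = image_bits / modem_bps
print(image_bits)                # 24576000
print(round(seconds))            # 439 seconds
print(round(seconds / 60, 2))    # 7.31 minutes
```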

Obviously image compression is essential.

Page 25:

Images are well suited for compression:

- Images have more redundancy than other types of data
- Images contain a large amount of structure
- The human eye is very tolerant of approximation error

There are 2 types of image compression:

Lossless coding: Every detail of the original data is restored upon decoding. Examples: Run Length Encoding, lossless JPEG, GIF

Lossy coding: A portion of the original data is lost, but the loss is undetectable to the human eye. Good for images and audio. Example: JPEG
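The run length encoding idea, as a minimal sketch (the function names are illustrative; real encoders pack runs into bits rather than Python tuples):

```python
def rle_encode(pixels):
    """Encode a sequence of pixel values as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((p, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original pixels."""
    return [value for value, count in runs for _ in range(count)]

row = [0, 0, 0, 0, 1, 1, 0, 0, 0]
encoded = rle_encode(row)
print(encoded)                        # [(0, 4), (1, 2), (0, 3)]
assert rle_decode(encoded) == row     # lossless: decoding restores every pixel
```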

Page 26:

JPEG - Joint Photographic Experts Group

JPEG defines 29 distinct coding systems for compression, 2 of them lossless. Lossless JPEG uses a technique called predictive coding to attempt to identify pixels later in the image in terms of previous pixels in that same image. Lossy JPEG consists of image simplification, removing image complexity at some loss of fidelity.

GIF - Graphics Interchange Format

Developed by CompuServe, GIF is a lossless image compression system, an application of Lempel-Ziv-Welch (LZW) coding.

The two compressed image formats most often encountered on the Web are JPEG and GIF.

Page 27:

Digital Video Compression (MPEG)

MPEG is a series of techniques for compressing streaming digital information. DVDs use MPEG coding, and MPEG achieves compression results on the order of 1/35 of the original size.

If we examine two still images from a video sequence of images, we will almost always find that they are similar, since many pixels will not change from one image to the next. This fact can be exploited by transmitting only the changes from one image to the next. This is called IMAGE DIFFERENCE CODING.

MPEG is the Moving Picture Experts Group standard for video compression.
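Image difference coding can be sketched on two frames held as flat lists of pixel values (an illustrative toy, not the actual MPEG algorithm, which adds motion compensation and transform coding):

```python
def frame_difference(prev, curr):
    """Record only the (position, new_value) pairs that changed between frames."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

def apply_difference(prev, changes):
    """Reconstruct the current frame from the previous frame plus the changes."""
    frame = list(prev)
    for i, v in changes:
        frame[i] = v
    return frame

prev = [5, 5, 5, 7, 7]
curr = [5, 5, 6, 7, 7]
changes = frame_difference(prev, curr)
print(changes)                                # [(2, 6)]: one changed pixel
assert apply_difference(prev, changes) == curr
```

Because successive frames are similar, the change list is usually far shorter than the frame itself, which is exactly the redundancy the slide describes.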