
IMAGE PROCESSING

Mr. J. P. Patil, Lecturer, RCPIT, Shirpur. Mob.: 9423193448
Mr. R. D. Badgujar, Lecturer, RCPIT, Shirpur. Mob.: 9881804224
Mr. M. L. Patel, Lecturer, RCPIT, Shirpur. Mob.: 9372092305
Email: [email protected], [email protected]

ABSTRACT

Image and speech processing used to be a single unified field in the early sixties and seventies. Today it has expanded and diversified into several branches based on mathematical tools as well as applications. For instance, there are separate topics dealing with fuzzy IP, morphological IP, knowledge-based IP, etc. Similarly, several topics deal with diverse application-specific tools for remote sensing, industrial vision, and so forth.

Image analysis issues such as segmentation, edge/line detection, feature extraction, image description and pattern recognition have been covered in great detail, and the state-of-the-art concepts have been discussed in many papers.

The main motivation for extracting the information content is the accessibility problem, a problem that is even more relevant for dynamic multimedia data, which also have to be searched and retrieved. While content extraction techniques are reasonably developed for text, video data is still essentially opaque. Its richness and complexity suggest that there is a long way to go in extracting video features, and the implementation of more suitable and effective processing procedures is an important goal to be achieved.

1. INTRODUCTION

Image processing developed from the art and technique of producing images known as photographs. Photography is so much a part of life today that the average person may encounter more than 1000 camera images a day. Photographs preserve personal memories (family snapshots) and inform us of public events (news photos). They provide a means of identification (driver's license photos) and of glamorization (movie-star portraits); views of far-off places on Earth (travel photographs) and in space (astral photographs); as well as microscopic scenes from inside the human body (medical and scientific photos). Many specialized commercial categories, including fashion, product, and architectural photography, also fit under the broad umbrella that defines photography's function in the world today.

In speech processing, improved speech recognition will make the operation of a computer easier. Virtual reality, the technology of interacting with a computer using all of the human senses, will also contribute to better human-computer interfaces. Standards for virtual-reality programming languages, for example the Virtual Reality Modeling Language (VRML), are currently in use or are being developed for the World Wide Web.

Synchronization of image and speech processing plays a very important role in this emerging world. Other, exotic models of computation are also being developed, including biological computing that uses living organisms, molecular computing that uses molecules with particular properties, and computing that uses deoxyribonucleic acid (DNA), the basic unit of heredity, to store data and carry out operations. These are examples of possible future computational platforms that, so far, are limited in their abilities or are strictly theoretical. Scientists investigate them because of the physical limitations of


miniaturizing circuits embedded in silicon. There are also limitations related to the heat generated by even the tiniest of transistors.

3. BACKGROUND

The content of an image includes resolution, color, intensity, and texture. Image resolution is simply the size of the image in terms of display pixels. Color is represented in the computer using the RGB color model: for each pixel on the screen there are three bytes (the R, G, B color components) to represent its color, and each color component is in the range 0 to 255. Intensity is the gray-level information of a pixel, represented by one byte; the intensity value is in the range 0 to 255. Texture characterizes local variations of image color or intensity. Although texture-based methods have been widely used in computer vision and graphics, there is no single commonly accepted definition of texture; each texture analysis method defines texture according to its own model. We consider texture as a symbol of local color or intensity variation: image regions that are detected to have a similar texture have a similar pattern of local variation of color or intensity.
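
As a small illustration of the representation described above, the following sketch converts a three-byte-per-pixel RGB image into a single-byte intensity image; the luma weights are a common convention (ITU-R BT.601) chosen for illustration and are not prescribed by this paper.

import numpy as np

def rgb_to_intensity(rgb):
    # rgb: array of shape (H, W, 3), one byte per channel, values 0..255
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # A weighted average of the three colour components gives one
    # gray-level (intensity) byte per pixel, also in 0..255.
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)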

4. BASIC IMAGE PROCESSING

4.1 THEORY OF IMAGE PROCESSING

Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. The goal of this manipulation can be divided into three categories:

* Image processing: image in -> image out
* Image analysis: image in -> measurements out
* Image understanding: image in -> high-level description out

Common image processing techniques:

4.1.1 Dithering :-

Dithering is a process of using a pattern of solid dots to simulate shades of gray. Different shapes and patterns of dots have been employed in this process, but the effect is the same: when viewed from a great enough distance that the dots are not discernible, the pattern appears as a solid shade of gray.
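
A minimal sketch of one such dot pattern, assuming an ordered (Bayer-matrix) dither; the paper does not name a specific pattern, so the 4x4 matrix below is only an illustrative choice.

import numpy as np

# 4x4 Bayer matrix scaled to thresholds in the 0..255 gray range
BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]], dtype=float) + 0.5) / 16.0 * 255.0

def ordered_dither(gray):
    # gray: 2-D array of intensities 0..255; output is pure black/white.
    h, w = gray.shape
    thresholds = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > thresholds).astype(np.uint8) * 255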

4.1.2 Erosion :-

Erosion is the process of eliminating all the boundary points from an object, leaving the object smaller in area by one pixel all around its perimeter. If the object narrows to less than three pixels thick at any point, it will become disconnected (into two objects) at that point. Erosion is useful for removing, from a segmented image, objects that are too small to be of interest.
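
A rough sketch of this operation on a binary image, assuming a 3x3 square structuring element (the paper does not fix a neighbourhood, so that choice is illustrative).

import numpy as np

def binary_erode(img):
    # img: binary array (0/1); a pixel survives only if it and all
    # eight of its neighbours are 1, i.e. boundary points are removed.
    h, w = img.shape
    p = np.pad(img, 1, mode="constant", constant_values=0)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out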

Shrinking is a special kind of erosion in which single-pixel objects are left intact. This is useful when the total object count must be preserved.

Thinning is another special kind of erosion. It is implemented as a two-step process: the first step marks all candidate pixels for removal, and the second step actually removes those candidates that can be removed without destroying object connectivity.

4.1.3 Dilation :-

Dilation is the process of incorporating into the object all the background pixels that touch it, leaving it larger in area by that amount. If two objects are separated by less than three pixels at any point, they will become connected (merged into one object) at that point. Dilation is useful for filling small holes in segmented objects.
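
The complementary sketch for dilation, under the same illustrative 3x3-neighbourhood assumption as the erosion example above.

import numpy as np

def binary_dilate(img):
    # img: binary array (0/1); any pixel with at least one object pixel
    # in its 3x3 neighbourhood becomes part of the object.
    h, w = img.shape
    p = np.pad(img, 1, mode="constant", constant_values=0)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out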

Thickening is a special kind of dilation. It is implemented as a two-step process: the first step marks all candidate pixels for addition, and the second step adds those candidates that can be added without merging objects.

4.1.4 Opening :-

The process of erosion followed by dilation is called opening. It has the effect of eliminating small and thin objects, breaking objects at thin points, and generally smoothing the boundaries of larger objects without significantly changing their area.

4.1.5 Closing :-

The process of dilation followed by erosion is called closing. It has the effect of filling small and thin holes in objects, connecting nearby objects, and generally


smoothing the boundaries of objects without significantly changing their area.
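
Since opening and closing are just compositions of the two earlier operations, they can be demonstrated directly. The sketch below uses SciPy's binary morphology routines as a stand-in for the erode/dilate pair; the object layout is invented purely for illustration.

import numpy as np
from scipy import ndimage

img = np.zeros((12, 12), dtype=bool)
img[3:9, 3:9] = True        # a solid 6x6 object
img[5, 0:3] = True          # a thin one-pixel "arm"
img[10, 10] = True          # an isolated speck
img[5, 5] = False           # a one-pixel hole inside the object

opened = ndimage.binary_opening(img)  # erosion then dilation: arm and speck removed
closed = ndimage.binary_closing(img)  # dilation then erosion: the small hole is filled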

4.1.6 Filtering :-

Image filtering can be used for noise reduction, image sharpening, and image smoothing. By applying a low-pass or high-pass filter to the image, the image can be smoothed or sharpened respectively. A low-pass filter is used to reduce the amplitude of high-frequency components. Simple low-pass filters apply local averaging: the gray level at each pixel is replaced with the average of the gray levels in a square or rectangular neighborhood. A Gaussian low-pass filter is often applied by way of the Fourier transform of the image. A high-pass filter is used to increase the amplitude of high-frequency components.
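
A small sketch of the local-averaging idea, assuming SciPy is available; the "sharpened" image here is simply the original plus the high-frequency residual, which is one common, but not paper-specified, way of applying a high-pass boost.

import numpy as np
from scipy import ndimage

def smooth_and_sharpen(gray):
    # gray: 2-D intensity image, values 0..255
    gray = gray.astype(float)
    low = ndimage.uniform_filter(gray, size=3)   # 3x3 local averaging (low-pass)
    high = gray - low                            # high-frequency components
    sharpened = np.clip(gray + high, 0, 255)     # boost high frequencies
    return low.astype(np.uint8), sharpened.astype(np.uint8)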

4.2 IMAGE ANALYSIS

Image processing techniques are used to enhance, improve, or otherwise alter an image and to prepare it for image analysis. The various techniques employed in image processing and analysis are:

1. Image data reduction

2. Segmentation

3. Feature extraction

4. Object recognition

SEGMENTATION

Segmentation is the generic name for a number of different techniques that divide the image into segments of its constituents. In segmentation, the objective is to group areas of an image having similar characteristics or features into distinct entities representing parts of the image. One of the most important techniques, and the one this paper deals with, is thresholding.

THRESHOLDING

Thresholding is a binary conversion technique in which each pixel is converted into a binary value, either black or white. This is accomplished by utilizing a frequency histogram of the image and establishing what intensity (gray level) is to be the border between black and white. To improve the ability to differentiate, special lighting techniques must often be employed. It should be pointed out that the above method of using a histogram is only one of a large number of ways to threshold an image. Such a method is said to use a global threshold for an entire image. When it is not possible to find a single threshold for an entire image, an approach is to partition the total image into smaller rectangular areas and determine the threshold for each window being analyzed. Images of a weld pool were taken in real time and digitized using the thresholding technique. The images were thresholded at various threshold values and also at the optimum value to show the importance of choosing an appropriate threshold.
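
As one concrete way of choosing a global threshold from the frequency histogram, the sketch below uses Otsu's method; the paper only requires some histogram-based choice, so this particular criterion is an assumption made for illustration.

import numpy as np

def otsu_threshold(gray):
    # gray: 2-D array of intensities 0..255
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_sep = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()          # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        sep = w0 * w1 * (mu0 - mu1) ** 2                 # between-class variance
        if sep > best_sep:
            best_sep, best_t = sep, t
    return best_t

# binary = (gray >= otsu_threshold(gray)).astype(np.uint8) * 255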

FEATURE EXTRACTION

In image analysis, or any visual pattern recognition problem, the camera takes a picture of the scene and passes it to a feature extractor, whose purpose is data reduction by measuring certain features or properties that distinguish objects or their parts. Feature extraction is usually associated with another method called feature selection. The objective of feature selection and extraction techniques is to reduce this dimensionality. The objective of feature extraction is to represent an object in a compact way that facilitates the image analysis task in terms of algorithmic simplicity and computational efficiency.
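
A minimal sketch of one such compact representation, assuming a coarse joint colour histogram is an acceptable feature vector; the paper does not prescribe a specific feature set, so this choice is illustrative.

import numpy as np

def color_histogram_features(rgb, bins=8):
    # rgb: (H, W, 3) array with byte values 0..255.
    # Quantise each channel to a few bins and build a joint histogram,
    # reducing the whole region to a short, fixed-length feature vector.
    q = (rgb.reshape(-1, 3) // (256 // bins)).astype(int)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()   # normalised bins**3-dimensional vector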

OBJECT RECOGNITION

The most difficult part of image processing is object recognition. Although there are many image segmentation algorithms that can segment an image into regions with some continuous feature, it is still very difficult to recognize objects from these regions.

There are several reasons for this. First, image segmentation is an ill-posed task and there is always some degree of uncertainty in the segmentation result. Second, an object may contain several regions, and how to connect different regions is another problem.


At present, no algorithm can segment general images into objects automatically with high accuracy. When there is some a priori knowledge about the foreground objects or the background scene, the accuracy of object recognition can be quite good. Usually the image is first segmented into regions according to the pattern of color or texture, and separate regions are then grouped to form objects. The grouping process is important for the success of object recognition. Fully automatic grouping is possible only when a priori knowledge about the foreground objects or background scene exists; in other cases, human interaction may be required to achieve good accuracy of object recognition.

5. DEMOS

This is a demo showing different image processing techniques. Here is the ORIGINAL image, taken from the photo "Robin Jeffers at Ton House" (1927) by Edward Weston.

Here is the image with every 3rd pixel sampled, and the intermediate pixels filled in with the sampled values. Note the blocky appearance of the new image.

QUANTIZATION: Here is the image with only 5 grayscale shades; the original has 184 shades. Note how much detail is retained with only 5 shades.

LOW PASS FILTERING: Here is the image filtered with a 3-by-3 mean filter. Notice how it smoothes the texture of the image while blurring out the edges.

LOW PASS FILTERING II: Here is the image filtered again. Notice the difference between the images from the two filters.

EDGE DETECTION: This filter is a 2-dimensional Laplacian (actually the negative of the Laplacian). Notice how it brings out the edges in the image.


EDGE DETECTION II: This is the Laplacian filter with the original image added back in. Notice how it brings out the edges in the image while maintaining the underlying grey-scale information.
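
A sketch of this "Laplacian added back in" effect, assuming SciPy for the convolution; the particular 3x3 kernel is the usual discrete Laplacian and is an illustrative choice rather than the one used for the demo images.

import numpy as np
from scipy import ndimage

LAPLACIAN = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])

def laplacian_sharpen(gray):
    # gray: 2-D intensity image, values 0..255
    gray = gray.astype(float)
    lap = ndimage.convolve(gray, LAPLACIAN, mode="nearest")
    # Subtracting the Laplacian (with this sign convention) adds the
    # edge detail back onto the original grey levels.
    return np.clip(gray - lap, 0, 255).astype(np.uint8)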

6. IMAGE PROCESSING VERSUS IMAGE ANALYSIS

Image processing relates to the preparation of an image for later analysis and use. Images captured by a camera or a similar technique (e.g. by a scanner) are not necessarily in a form that can be used by image analysis routines. Some may need improvement to reduce noise, others may need to be simplified, and still others may need to be enhanced, altered, segmented, filtered, etc. Image processing is the collection of routines and techniques that improve, simplify, enhance, or otherwise alter an image. Image analysis is the collection of processes in which a captured image that has been prepared by image processing is analyzed in order to extract information about the image and to identify objects or facts about the objects or their environment.

In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (a region) might be processed to suppress motion blur while another part might be processed to improve color rendition.

The image is stored in the computer only as a set of pixels with RGB values; the computer knows nothing about the meaning of these pixel values. The content of an image is quite clear to a person, but it is not so easy for a computer. For example, it is a piece of cake to recognize yourself in an image, even in a crowd, but this is extremely difficult for a computer. Preprocessing helps the computer to understand the content of the image. What is this so-called content? Here, content means features of the image or its objects, such as color, texture, resolution, and motion. An object can be viewed as a meaningful component in an image; a moving car, a flying bird, or a person are all objects. There are a lot of techniques for image processing. This section starts with an introduction to general image processing techniques and then talks about video processing techniques. The reason we introduce image processing first is that image processing techniques can be used on video if we treat each picture of a video as a still image.

7. APPLICATIONS

REAL-TIME MEASUREMENT OF TRAFFIC QUEUE PARAMETERS BY USING IMAGE PROCESSING TECHNIQUES

The real-time measurement of traffic queue parameters is required in many traffic situations, such as accident and congestion monitoring and adjusting the timings of traffic lights. So far, the reported image processing methods have been targeted at measuring simple traffic parameters. Here, image processing techniques are described, together with results, to measure queue traffic parameters in real time. The proposed queue detection algorithm consists of a motion detection and a vehicle detection operation, both based on extracting edges of the scene. The results show that the proposed algorithms are able to measure various queue parameters such as queue detection, length of the queue, period of occurrence of the queue, slope of the queue, etc.
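
A very rough sketch of the two tests mentioned above for a single camera zone; the function name, thresholds, and the Sobel-based edge measure are all invented for illustration and are not taken from the cited work.

import numpy as np
from scipy import ndimage

def queue_present(prev_frame, curr_frame, motion_thresh=5.0, edge_thresh=20.0):
    # A zone is flagged as queued when there is little frame-to-frame
    # motion (vehicles are stationary) but strong edge content
    # (vehicles are present).
    prev = prev_frame.astype(float)
    curr = curr_frame.astype(float)
    motion = np.abs(curr - prev).mean()        # motion detection measure
    gx = ndimage.sobel(curr, axis=1)
    gy = ndimage.sobel(curr, axis=0)
    edge_strength = np.hypot(gx, gy).mean()    # vehicle detection measure
    return motion < motion_thresh and edge_strength > edge_thresh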


BAW-Project - Digital Image Processing

The aim of this project is to investigate the effects that lead to structural changes in river embankments. Changes on the microscopic scale can eventually cause complete destabilization of shore fortifications. We study the microscopic movement that occurs at boundaries between sediment layers (or geotextiles) due to hydraulic pressure changes. To this end, endoscopes are used to gather images from within the sediment. The images are in turn analyzed by digital image sequence analysis techniques, which yield information on the frequency of motion and the occurring velocity fields. Another aspect of our research is the estimation of flow fields through sediment layers, which again can be done using endoscopes in conjunction with image processing techniques.

Remote sensing

Natural resources survey and management; estimation related to agriculture, hydrology, forestry, mineralogy; urban planning; environment and pollution control; cartography, registration of satellite images with terrain maps; monitoring traffic along roads, docks, air fields; etc.

Bio-medical

ECG, EEG, EMG analysis; cytological, histological and stereological applications; automated radiology and pathology; X-ray image analysis; mass screening of medical images such as chromosome slides for detection of various diseases, mammograms, cancer smears; CAT, MRI, PET, SPECT, USG and other tomography images.

Military Applications

Missile guidance and detection; target identification; navigation of pilotless vehicles; reconnaissance; range finding; etc.

8. SPEECH PROCESSING

Speech processing is the study of speech signals and the processing methods of these signals.

The signals are usually processed in a digital representation, whereby speech processing can be seen as the intersection of digital signal processing and natural language processing.

Speech processing can be divided into the following categories:

Speech recognition, which deals with analysis of the linguistic content of a speech signal.

Speaker recognition, where the aim is to recognize the identity of the speaker.

Enhancement of speech signals, e.g. noise reduction.

Speech coding, for compression and transmission of speech; see also telecommunication.

Voice analysis for medical purposes, such as analysis of vocal loading and dysfunction of the vocal cords.

Speech synthesis: the artificial synthesis of speech, which usually means computer-generated speech.

Speech compression is important in the telecommunications area for increasing the amount of information which can be transferred, stored, or heard for a given set of time and space constraints.

Speech can be described as an act of producing voice through the use of the vocal folds and vocal apparatus to create a linguistic act designed to convey information.

There are various types of linguistic acts in which the audience consists of more than one individual, including public speaking, oration, and quotation.

The physical act of speaking occurs primarily through the use of the vocal cords to produce voice. See phonology and linguistics for more detailed information on the physical act of speaking.

However, speech can also take place inside one's head, known as intrapersonal communication, for example when one thinks or utters sounds of approval or disapproval. At


a deeper level, one could even consider subconscious processes, including dreams where aspects of oneself communicate with each other (see Sigmund Freud), as part of intrapersonal communication, even though most human beings do not seem to have direct access to such communication.

Speech recognition (in many contexts also known as 'automatic speech recognition', computer speech recognition, or, erroneously, voice recognition) is the process of converting a speech signal to a sequence of words by means of an algorithm implemented as a computer program. Speech recognition applications that have emerged over the last few years include voice dialing (e.g., "Call home"), call routing (e.g., "I would like to make a collect call"), simple data entry (e.g., entering a credit card number), and preparation of structured documents (e.g., a radiology report).

Voice recognition or speaker recognition is a related process that attempts to identify the person speaking, as opposed to what is being said.

CONCLUSION

So, these were some of the primitive processing operations which are applied to the captured image. Not all of the operations are necessary; which ones to apply depends on our needs. In speech processing, improved speech recognition will make the operation of a computer easier. Virtual reality, the technology of interacting with a computer using all of the human senses, will also contribute to better human-computer interfaces. Standards for virtual-reality programming languages, for example the Virtual Reality Modeling Language (VRML), are currently in use or are being developed for the World Wide Web.

REFERENCES:

1. ACM Transaction on graphics

2. Digital Image Processing and Analysis - B. Chanda, D. Dutta Majumder

3. http://www.google.com

4. www.howstuffworks.com

5. http://www.baw.de
