
    HYPOTHESIS SELECTION FILTER:

    THEORY & APPLICATIONS

    SEMINAR REPORT

    Submitted by

    PRADEEP N MOOSATH

    in partial fulfilment for the award of the degree

    of

    Master of Technology

    in

    ELECTRONICS WITH SPECIALIZATION IN SIGNAL PROCESSING

    of

    COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY

    DEPARTMENT OF ELECTRONICS ENGINEERING

    MODEL ENGINEERING COLLEGE

    COCHIN 682 021

    MARCH 2013


    MODEL ENGINEERING COLLEGE

    THRIKKAKARA, KOCHI-21

    DEPARTMENT OF ELECTRONICS ENGINEERING

    COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY

    BONAFIDE CERTIFICATE

    This is to certify that the interim seminar report entitled

    Hypothesis Selection Filter: Theory & Applications

    Submitted by

    PRADEEP N MOOSATH

    is a bonafide account of the work done by him under my supervision

    Ms. Aparnadevi P.S.

    Seminar Guide


    TABLE OF CONTENTS

Chapter No.    Item

               Abstract
1              Introduction
2              Hypothesis Selection Filter
3              HSF For JPEG Decoding
4              Conclusion
               References


    List of Symbols

    M - An arbitrary number

    X,Y,Z,J,L - Random Variables

    x,y,z,j,l - Realizations of the Random Variables.

μ, σ² - Mean and Variance of a Gaussian Process


    ABSTRACT

For many images, a single image enhancement tool will not produce the required result, and many such filters may have to be applied to different regions to get the job done. In such a scenario lies the importance of a new kind of tool called the Hypothesis Selection Filter (HSF). A number of filters are chosen beforehand, which are assumed to be applicable to different regions of the image. These filters are applied to the image on a region-by-region basis, according to the needs of each region. Another interesting point to be noted is that any number of filters may be used, or, for that matter, any algorithms with any number of iterations. The individual outputs are then combined with the help of a probabilistic model in which the probability that a pixel should be processed by each filter is calculated. At each pixel, the HSF uses a feature vector to predict the contribution of each filter to the final output of the HSF. The final output is formulated as the Bayesian minimum mean square error estimate of the original image. An unsupervised training procedure is applied to the model parameters of the pixel classification to arrive at maximum likelihood estimates.

As a practical application, the HSF is also applied to JPEG-encoded images. The images under consideration are document images, which contain all components such as text, graphics, and natural images, and the quality of decoding is observed.


    Chapter 1

    INTRODUCTION

    In many applications, a single specific image enhancement filter will not

    produce the best quality at all locations in a complex image. For example, a filter

designed to remove Gaussian noise from smooth regions of an image will tend to blur edge detail, while a non-linear filter designed to preserve edge detail may produce undesirable artefacts in smooth, noisy regions of an image. Ideally, the best result is

    achieved by using a suite of linear or nonlinear filters, with each filter being applied to

    regions of the image for which it is best suited. However, this approach requires some

    methodology for selecting the best filter, among an available set of filters, at each

    location of an image.

    The Hypothesis Selection Filter (HSF) is presented as a new approach for

    combining the outputs of distinct filters, each of which is chosen to improve a

particular type of content in an image. In its formulation, M pixel classes are defined,

    where each pixel class is associated with one of the M image filters chosen for the

    scheme. During the processing of an image, HSF performs a soft classification of

    each pixel into the M pixel classes through the use of a locally computed feature

    vector. After classification, the filter outputs are weighted by the resulting class

    probabilities and combined to form the final processed output. The major

    contributions of research by the authors include the basic architecture of HSF and a

    novel probabilistic model which is used to define the pixel classes and to capture the

    dependence between the feature vector and the defined pixel classes. Based on this

    model, an unsupervised training procedure for the design of the classifier is derived.

    The training procedure uses a set of example degraded images and their

    corresponding high-quality original images to estimate the probability distribution of

    the feature vector conditioned on each pixel class.


    As an example to demonstrate its potential, the HSF is applied as a post-

    processing step for reducing the artefacts in JPEG-encoded document images. A

document image, which is the type considered here, typically consists of text, graphics, and natural images in a complex layout. Each type of content is distorted

    differently by the JPEG artefacts due to its unique characteristics. Here, four image

    filters are used in the HSF. The image filters are selected to reduce the JPEG artefacts

for different types of image content. Compared with several other state-of-the-art

    approaches, HSF tends to improve the decoding quality more consistently over

    different types of image content.

    A variety of approaches have been proposed for enhancing image quality by

    various researchers in the field. Examples include different approaches varying from

    basic traditional linear filters with rich theory support to nonlinear filtering

    approaches and even to spatially adaptive methods. Coefficients are calculated using

    different operators like Laplacian of Gaussian operators and other different algorithms

    to reduce noise. The hypothesis selection filter is introduced, which may be

    considered as a spatially adaptive image enhancement method, with the advantage

that it provides a framework for applying and combining any set of filters or

    algorithms, whether linear or non-linear, iterative or non-iterative.


    Chapter 2

    HYPOTHESIS SELECTION FILTER

    The Hypothesis Selection Filter is presented in a general setting. For a given

image enhancement problem, suppose that there is a set of image filters, each of

    which is effective in improving the image quality for a certain type of content. The

    objective will be to combine the filter outputs to obtain an overall best quality result.

Here, upper-case letters denote random variables and random vectors, and lower-case letters denote their realizations.

    Figure 2.1: Block Diagram for HSF [1]

The structure of the HSF is shown in Figure 2.1. In the upper portion, the HSF processes the input image using the M filters that have been selected. The input image is considered as a distorted version of a high-quality original image x. For the n-th pixel, the filter outputs, denoted by $z_n = [z_{n,1}, \dots, z_{n,M}]^T$, will each serve as a different intensity estimate of the corresponding pixel in the original image, x_n. The assumption is that a judiciously selected feature vector y_n carries information on


how to combine the filter outputs to produce the best estimate of x_n. Also, M pixel classes corresponding to the M image filters are defined. In the lower portion of Figure 2.1, the HSF computes the feature vector for the current pixel, followed by a soft classification of the current pixel into the M pixel classes. The soft classification computes the conditional probability that the pixel belongs to each one of the pixel classes, given the feature vector. The resulting probabilities are used to weight the filter outputs, which are then summed to form the final processed output. The original pixel intensity x_n, the feature vector y_n, and the filter outputs z_n are realizations of the random variables/vectors X_n, Y_n, and Z_n respectively, whose probabilistic model is described next.

    A. Probabilistic Model

The probabilistic model presented here is based on four assumptions about the original pixel intensity X_n, the feature vector Y_n, and the filter outputs Z_n. Based on this model, the output of the HSF is derived as the minimum mean square error estimate of X_n.

Assumption 1: Given Z_n = z_n, the conditional distribution of X_n is a Gaussian mixture with M components:

$$f_{X_n|Z_n}(x_n \mid z_n) = \sum_{j=1}^{M} \pi_j \, \mathcal{N}\!\left(x_n;\, z_{n,j},\, \sigma_j^2\right) \qquad (2.1)$$

where the means of the Gaussian components are the filter outputs z_{n,j}; π_j and σ_j² are parameters of the distribution function; and N(x; μ, σ²) denotes the univariate Gaussian distribution function with mean μ and variance σ².

An unobserved random variable J_n ∈ {1,...,M} is defined to represent the component index in 2.1. This random variable is independent of Z_n, and the prior probability mass function of J_n is given by

$$\mathrm{Prob}\{J_n = j\} = \pi_j, \qquad j = 1,\dots,M \qquad (2.2)$$
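To make Assumption 1 concrete, the following sketch (in Python, with illustrative numbers that are not taken from the report) evaluates the mixture density of 2.1 at one pixel for M = 3 filters.

import numpy as np

def gaussian_pdf(x, mean, var):
    # Univariate Gaussian density N(x; mean, var).
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def f_x_given_z(x, z_n, pi, sigma2):
    # Eq. 2.1: an M-component mixture whose j-th component mean is the filter output z_{n,j}.
    return sum(p * gaussian_pdf(x, z, s2) for p, z, s2 in zip(pi, z_n, sigma2))

# Illustrative numbers for one pixel and M = 3 filters (not taken from the report).
z_n    = np.array([120.0, 131.0, 118.0])   # filter outputs z_{n,1}, z_{n,2}, z_{n,3}
pi     = np.array([0.5, 0.2, 0.3])         # class probabilities pi_j (sum to 1)
sigma2 = np.array([25.0, 100.0, 36.0])     # per-filter error variances sigma_j^2
print(f_x_given_z(121.0, z_n, pi, sigma2))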


    A graphical illustration of 2.1 is shown in Figure 2.2.

Figure 2.2: Generative probabilistic model for the original pixel X_n. [1]

Assumption 2: Given the component index J_n = j, the conditional distribution of the feature vector Y_n is a multivariate Gaussian mixture:

$$f_{Y_n|J_n}(y_n \mid j) = \sum_{l=1}^{K_j} \omega_{j,l} \, \mathcal{N}\!\left(y_n;\, m_{j,l},\, R_{j,l}\right) \qquad (2.3)$$

where K_j is the order of the Gaussian mixture distribution; ω_{j,l}, m_{j,l}, and R_{j,l} are respectively the weight, the mean vector, and the covariance matrix of the l-th Gaussian component; and N(y; m, R) denotes the multivariate Gaussian distribution function with mean vector m and covariance matrix R. Similar to Assumption 1, a discrete random variable L_n ∈ {1,...,K} is defined to denote the component index in 2.3, where K = max_j K_j and ω_{j,l} = 0 whenever l > K_j. With this definition, the conditional probability of L_n = l given J_n = j is

$$\mathrm{Prob}\{L_n = l \mid J_n = j\} = \omega_{j,l}, \qquad l = 1,\dots,K \qquad (2.4)$$

The conditional probability distribution of Y_n given J_n = j and L_n = l is

$$f_{Y_n|J_n,L_n}(y_n \mid j, l) = \mathcal{N}\!\left(y_n;\, m_{j,l},\, R_{j,l}\right) \qquad (2.5)$$

Combining 2.2 and 2.3, Y_n is assumed to be a mixture of Gaussian mixtures:

$$f_{Y_n}(y_n) = \sum_{j=1}^{M} \pi_j \left[ \sum_{l=1}^{K_j} \omega_{j,l} \, \mathcal{N}\!\left(y_n;\, m_{j,l},\, R_{j,l}\right) \right] \qquad (2.6)$$
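A companion sketch for Assumption 2, again with placeholder parameters rather than trained values, evaluates the class-conditional feature density of 2.3 for a five-dimensional feature vector.

import numpy as np
from scipy.stats import multivariate_normal

def f_y_given_j(y, weights, means, covs):
    # Eq. 2.3: a K_j-component multivariate Gaussian mixture for pixel class j.
    return sum(w * multivariate_normal.pdf(y, mean=m, cov=R)
               for w, m, R in zip(weights, means, covs))

# Illustrative two-component mixture over a 5-dimensional feature vector.
weights = [0.6, 0.4]                        # omega_{j,1}, omega_{j,2}
means   = [np.zeros(5), 3.0 * np.ones(5)]   # m_{j,1}, m_{j,2}
covs    = [np.eye(5), 2.0 * np.eye(5)]      # R_{j,1}, R_{j,2}
y       = np.array([0.5, 0.2, -0.1, 0.3, 0.0])
print(f_y_given_j(y, weights, means, covs))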


Assumption 3: The two sets of random quantities {X_n, Z_n} and {Y_n, L_n} are conditionally independent given the component index J_n = j:

$$f_{X_n,Z_n,Y_n,L_n|J_n}(x_n, z_n, y_n, l \mid j) = f_{X_n,Z_n|J_n}(x_n, z_n \mid j)\, f_{Y_n,L_n|J_n}(y_n, l \mid j) \qquad (2.7)$$

Assumption 4: Given X_n = x_n and Z_n = z_n, the component index J_n is conditionally independent of the feature vector Y_n, or equivalently

$$f_{J_n|X_n,Z_n,Y_n}(j \mid x_n, z_n, y_n) = f_{J_n|X_n,Z_n}(j \mid x_n, z_n) \qquad (2.8)$$

In Figure 2.3, the graphical model for the HSF is shown to illustrate the conditional dependency of the different random quantities. The graphical model corresponds to the factorization of the joint probability distribution function

$$f(x_n, y_n, z_n, j, l) = f_{X_n|Z_n,J_n}(x_n \mid z_n, j)\, f_{Y_n|J_n,L_n}(y_n \mid j, l)\, f_{L_n|J_n}(l \mid j)\, f_{J_n}(j)\, f_{Z_n}(z_n) \qquad (2.9)$$

    Figure 2.3: Graphical Model for the HSF. [1]

    In Assumption 1, the use of eqn. 2.1 provides a mechanism whereby

    the M pixel classes may be defined from training based on the errors made by the

    image filters for a set of training samples. Another assumption is that the training

samples are of the form (x_n, y_n, z_n), where x_n is the ground-truth value of the original undistorted pixel, y_n is the feature vector extracted from the distorted image, and z_n are the corresponding filter outputs. Therefore, the absolute error made by each filter, |x_n − z_{n,j}|, can be computed during training. As will be explained, the training procedure uses 2.1 to compute the conditional class probability given the ground truth and the filter outputs, $f_{J_n|X_n,Z_n}(j \mid x_n, z_n)$.


By applying Bayes' rule to eqn. 2.1, this conditional class probability can be written as

$$f_{J_n|X_n,Z_n}(j \mid x_n, z_n) = \frac{\pi_j \, \mathcal{N}\!\left(x_n;\, z_{n,j},\, \sigma_j^2\right)}{\sum_{i=1}^{M} \pi_i \, \mathcal{N}\!\left(x_n;\, z_{n,i},\, \sigma_i^2\right)} \qquad (2.10)$$

Therefore, this conditional class probability is a monotonically decreasing function of the absolute error |x_n − z_{n,j}| made by the j-th filter. This ensures that filters making larger errors on the training data will have less influence in the soft classifier. In fact, the conditional class probability is also a monotonically decreasing function of the standard score, |x_n − z_{n,j}| / σ_j. As explained in the coming section, the error variance of each filter, σ_j², is also estimated during the training procedure. Therefore, if filter j has been poorly chosen for the scheme and consistently makes large errors, its role in the scheme will be diminished by a large estimate of σ_j².
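The behaviour described above can be seen in a small sketch of 2.10: with illustrative numbers, the filter whose output deviates most from the ground-truth pixel receives the smallest posterior class probability.

import numpy as np

def class_posterior_given_truth(x, z_n, pi, sigma2):
    # Eq. 2.10: f_{J|X,Z}(j | x_n, z_n) obtained by Bayes' rule from the mixture in 2.1.
    lik = pi * np.exp(-(x - z_n) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
    return lik / lik.sum()

x_n    = 125.0                              # ground-truth pixel from a training pair
z_n    = np.array([124.0, 140.0, 126.0])    # the three filter outputs at this pixel
pi     = np.array([1/3, 1/3, 1/3])
sigma2 = np.array([16.0, 16.0, 16.0])
print(class_posterior_given_truth(x_n, z_n, pi, sigma2))
# The second filter, with the largest absolute error, receives the smallest weight.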

In Assumption 2, a Gaussian mixture distribution is used as a general parametric model to characterize the feature vector for each pixel class. In Assumption 3 and Assumption 4, the latent variable J_n is used to decouple the dependence among the variables X_n, Y_n, and Z_n so as to obtain a simpler probabilistic model.


    B. Optimal Filtering

The HSF computes the Bayesian minimum mean square error (MMSE) estimate of X_n based on observing the feature vector Y_n and the filter outputs Z_n. The MMSE estimate is the conditional expectation of X_n given Y_n and Z_n:

$$\hat{x}_n = E[X_n \mid y_n, z_n] \qquad (2.11)$$

$$= \sum_{j=1}^{M} E[X_n \mid J_n = j, y_n, z_n] \, f_{J_n|Y_n,Z_n}(j \mid y_n, z_n) \qquad (2.12)$$

$$= \sum_{j=1}^{M} E[X_n \mid J_n = j, z_n] \, f_{J_n|Y_n,Z_n}(j \mid y_n, z_n) \qquad (2.13)$$

$$= \sum_{j=1}^{M} z_{n,j} \, f_{J_n|Y_n,Z_n}(j \mid y_n, z_n) \qquad (2.14)$$

$$= \sum_{j=1}^{M} z_{n,j} \, f_{J_n|Y_n}(j \mid y_n) \qquad (2.15)$$

$$= \sum_{j=1}^{M} z_{n,j} \, \frac{\pi_j \, f_{Y_n|J_n}(y_n \mid j)}{\sum_{i=1}^{M} \pi_i \, f_{Y_n|J_n}(y_n \mid i)} \qquad (2.16)$$

where the total expectation theorem has been used to go from 2.11 to 2.12; Assumption 3, that the pixel value X_n and the feature vector Y_n are conditionally independent given J_n = j, to go from 2.12 to 2.13; and Assumption 1, that z_{n,j} is the conditional mean of X_n given J_n = j and Z_n, to go from 2.13 to 2.14. To go from 2.14 to 2.15, it is assumed that the filter outputs Z_n and the class label J_n are conditionally independent given the feature vector Y_n. This assumption does not result in any loss of generality, since the filter outputs can be included in the feature vector. Note that 2.15 expresses the estimate as a weighted sum of the filter outputs, where the weights are the conditional probabilities of the pixel classes given the feature vector. To go from 2.15 to 2.16, Bayes' rule has been used. In order to evaluate the right-hand side of 2.16, the following unknown parameters need to be estimated: the component weights π_j and component variances σ_j² from 2.1, and all the parameters of the M different Gaussian mixture distributions f_{Y_n|J_n}(y_n | j) from 2.3.
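A minimal sketch of the filtering step in 2.15 and 2.16 is given below; it weights each filter output by the class posterior computed from the feature vector and sums them. The mixture parameters are placeholders, not trained estimates.

import numpy as np
from scipy.stats import multivariate_normal

def hsf_output(z_n, y_n, pi, gmm_params):
    # Eq. 2.16: weight each filter output z_{n,j} by pi_j * f_{Y|J}(y_n | j), then normalize.
    scores = np.array([
        p * sum(w * multivariate_normal.pdf(y_n, mean=m, cov=R) for w, m, R in comps)
        for p, comps in zip(pi, gmm_params)
    ])
    weights = scores / scores.sum()          # f_{J|Y}(j | y_n)
    return float(np.dot(weights, z_n))       # MMSE estimate: a convex combination of the filter outputs

# Illustrative setup: M = 2 filters, 2-dimensional features, one mixture component per class.
gmm_params = [
    [(1.0, np.array([0.0, 0.0]), np.eye(2))],   # class 1: list of (weight, mean, covariance)
    [(1.0, np.array([4.0, 4.0]), np.eye(2))],   # class 2
]
pi  = np.array([0.5, 0.5])
z_n = np.array([118.0, 130.0])               # the two filter outputs at this pixel
y_n = np.array([0.3, -0.2])                  # feature vector lying closer to class 1
print(hsf_output(z_n, y_n, pi, gmm_params))  # result lies much nearer to 118 than to 130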


Maximum likelihood estimates of these parameters are obtained through the expectation-maximization algorithm, as detailed in the next subsection.

    C. Parameter Estimation

Our statistical model consists of three sets of parameters: θ = {π_j, σ_j²} from 2.1; φ = {ω_{j,l}, m_{j,l}, R_{j,l}} from 2.3; and the orders K_j of the Gaussian mixture distributions f_{Y_n|J_n}(y_n | j), also from 2.3. The parameters are estimated by an unsupervised training procedure. The training procedure makes use of a set of training samples, each of which is in the form of a triplet (x_n, y_n, z_n), extracted from example pairs of original and degraded images.

The maximum likelihood (ML) estimates of θ and φ are required here, which are defined as the values that maximize the likelihood of the observed training samples:

$$(\hat{\theta}, \hat{\phi}) = \arg\max_{\theta, \phi} \sum_{n} \log f_{X_n,Y_n,Z_n}(x_n, y_n, z_n \mid \theta, \phi) \qquad (2.17)$$

where the probability distribution f_{X_n,Y_n,Z_n}(x_n, y_n, z_n | θ, φ) is obtained by marginalizing the random variables J_n and L_n in f_{X_n,Y_n,Z_n,J_n,L_n}(x_n, y_n, z_n, j, l | θ, φ). Because the random variables J_n and L_n are unobserved, this so-called incomplete data problem can be solved using the expectation-maximization (EM) algorithm.
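The EM updates themselves are not reproduced in this report, but the E-step such a procedure relies on can be sketched generically under the model of 2.1, 2.3, and 2.9: for each training triplet, compute the joint responsibility of every latent pair (J_n = j, L_n = l). The example values below are illustrative only.

import numpy as np
from scipy.stats import norm, multivariate_normal

def e_step_responsibilities(x_n, y_n, z_n, pi, sigma2, omega, means, covs):
    # Joint responsibility of each latent pair (J_n = j, L_n = l) for one training triplet,
    # following the factorization in 2.9:
    #   pi_j * N(x_n; z_{n,j}, sigma_j^2) * omega_{j,l} * N(y_n; m_{j,l}, R_{j,l}).
    M, K = omega.shape
    r = np.zeros((M, K))
    for j in range(M):
        px = norm.pdf(x_n, loc=z_n[j], scale=np.sqrt(sigma2[j]))
        for l in range(K):
            if omega[j, l] > 0:
                r[j, l] = pi[j] * px * omega[j, l] * \
                    multivariate_normal.pdf(y_n, mean=means[j][l], cov=covs[j][l])
    return r / r.sum()

# Illustrative example: M = 2 classes, K = 1 mixture component each, 2-dimensional features.
pi, sigma2 = np.array([0.5, 0.5]), np.array([25.0, 25.0])
omega = np.array([[1.0], [1.0]])
means = [[np.zeros(2)], [np.array([4.0, 4.0])]]
covs  = [[np.eye(2)], [np.eye(2)]]
print(e_step_responsibilities(122.0, np.array([0.1, 0.4]),
                              np.array([121.0, 128.0]), pi, sigma2, omega, means, covs))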


    Chapter 3

    HSF FOR JPEG DECODING

    The Hypothesis Selection Filter is presented in a general context for image

enhancement in the previous chapter. This method can be applied to any filtering problem. To apply the HSF to a particular application, one only needs to specify the

    set of desired image filters, and a feature vector that is well suited for selecting among

    the filters. Once the filters and the feature vector have been chosen, then the training

    procedure can be used to estimate the parameters of the HSF from training data. In

    order to illustrate how the HSF can be utilized for image enhancement, an application

of the HSF is presented to improve the decoding quality of JPEG-encoded document images. JPEG compression typically leads to ringing and blocking

    artefacts in the decoded image. For a document image that contains text, graphics, and

    natural images, the different types of image content will be affected differently by the

    JPEG artefacts. For example, text and graphics regions usually contain many sharp

    edges that lead to severe ringing artefacts in these regions, while natural images are

    usually more susceptible to blocking artefacts. To improve the quality of the decoded

    document image, the image is first decoded using a conventional JPEG decoder and

then post-processed using an HSF to reduce the JPEG artefacts. The specific filters

and the feature vector used to construct the HSF are specified in the coming sections. Then the training procedure is briefly described. For simplicity, the HSF is designed for monochrome images. Consequently, the training procedure is also

    performed using a set of monochrome training images. To process a colour image, the

    trained HSF is applied to the R, G, and B colour components of the image separately.


    A. Filter Selection

    Document images generally contain both text and pictures. Text regions

    contain many sharp edges and suffer significantly from ringing artefacts. Picture

    regions suffer from both blocking artefacts (in smooth parts) and ringing artefacts

    (around sharp edges). The HSF employs four filters to eliminate these artefacts. The

first filter selected for the HSF is a Gaussian filter with kernel standard deviation σ = 1. The value of σ is chosen large enough so that the Gaussian filter can effectively

    eliminate the coding artefacts in the smooth picture regions, even for images encoded

    at a high compression ratio. Applied alone, however, the Gaussian filter will lead to

    significant blurring of image detail. Therefore, to handle the picture regions that

contain edges and textures, a bilateral filter with geometric spread σ_d = 0.5 and photometric spread σ_r = 10 is used as the second filter in the HSF. The σ_d value is selected to be relatively small compared to the Gaussian filter's kernel width σ, so

    that the bilateral filter can moderately remove the coding artefacts without blurring

    edges and image detail for non-smooth picture regions. To eliminate the ringing

    artefacts around the text, another two filters are applied in the HSF. A text region

    typically consists of pixels which concentrate around two intensities, corresponding to

    the background and the text foreground. Assuming that the local region is a two-

    intensity text region, the third filter estimates the local foreground gray-level, and the

fourth filter estimates the local background gray-level. For each 8×8 JPEG block, a 16×16 window is centred on the block, and the K-means clustering algorithm is applied to partition the 256 pixels of the window into two groups. Then, the two cluster medians, c_f(s) and c_b(s), are used as the local foreground gray-level estimate

    and the local background gray-level estimate of the JPEG block. For a text region that

    satisfies the two-intensity assumption, the ringing artefacts can be reduced by first

    classifying each pixel into either foreground or background. Foreground pixels will

    then be clipped to the foreground gray-level estimate, and background pixels will be

    clipped to the background gray-level estimate. In the feature vector, a feature

    component is designed to detect whether the local region is a two-intensity text

    region. Another two feature components detect whether a pixel is a foreground pixel

    or a background pixel. Lastly, it should be noted that there are also text regions which

do not satisfy the two-intensity assumption. For example, in graphic design, a text

    region may use different colours for the individual letters or a smoothly varying text

    foreground. Even in a text region that satisfies the two-intensity assumption, the edge


    pixels may still deviate substantially from the foreground and background intensities.

    For these cases, the pixels will be handled by the first two filters of the HSF.
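The first two filters are standard and can be taken from common libraries (for example, scipy.ndimage.gaussian_filter with σ = 1 for the Gaussian filter, and any off-the-shelf bilateral filter). The third and fourth filters are less standard, so a rough Python sketch of the per-block foreground/background estimation is given below; the assignment of the darker cluster to the text foreground is an assumption made for illustration, not something stated in the report.

import numpy as np

def kmeans_two_medians(values, iters=20):
    # Two-cluster K-means on scalar gray-levels, followed by the median of each cluster,
    # as described for the third and fourth filters.
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):
                c[k] = values[assign == k].mean()
    assign = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
    med = [np.median(values[assign == k]) if np.any(assign == k) else c[k] for k in range(2)]
    return min(med), max(med)                     # (darker median, brighter median)

def foreground_background_estimates(img):
    # For each 8x8 JPEG block, cluster the gray-levels of the surrounding 16x16 window into
    # two groups; the cluster medians serve as c_f(s) and c_b(s) for that block.
    # Assumes dimensions are multiples of 8 and at least 16, and dark text on a bright background.
    h, w = img.shape
    cf = np.zeros((h // 8, w // 8))
    cb = np.zeros((h // 8, w // 8))
    for bi in range(h // 8):
        for bj in range(w // 8):
            r0 = min(max(bi * 8 - 4, 0), h - 16)      # 16x16 window centred on the block,
            c0 = min(max(bj * 8 - 4, 0), w - 16)      # clipped at the image borders
            window = img[r0:r0 + 16, c0:c0 + 16].astype(float).ravel()
            dark, bright = kmeans_two_medians(window)
            cf[bi, bj], cb[bi, bj] = dark, bright     # foreground = darker cluster (assumption)
    return cf, cb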

    B. Feature Vector

    A five-dimensional feature vector has been used to separate out the different

    types of image content which are most suitably processed by each of the filters

    employed in the HSF. More specifically, the feature vector must capture the relevant

    information to distinguish between picture regions and text regions. For the picture

    regions, the feature vector also separates the smooth regions from the regions that

    contain textures and edges. For the text regions, the feature vector detects whether an

    individual pixel belongs to the background or to the text foreground. For the first

    feature component, the variance of the JPEG block associated with the current pixel is

    computed. The block variance is used to detect the smooth regions in the image,

    which generally have small block variance. For the second feature component, the

    local gradient magnitude at the current pixel is computed using the Sobel operator to

detect major edges. The Sobel operator consists of a pair of 3×3 kernels. Convolving

    the image with these two kernels produces a 2-dimensional local gradient estimate at

    each pixel. The second feature component is then computed as the norm of the

    gradient vector. The third feature component is used to evaluate how well the JPEG

    block associated with the current pixel can be approximated by using only the local

    foreground gray-level and the local background gray-level. The feature is designed to

    detect the text regions that satisfy the two-intensity assumption.

For a JPEG block s, suppose {u_{s,i} : i = 0,...,63} are the gray-levels of the 64 pixels in the block. The two-level normalized variance can be defined as

$$y_{n,3} = \frac{\sum_{i=0}^{63} \min\!\left(|u_{s,i} - c_f(s)|^2,\; |u_{s,i} - c_b(s)|^2\right)}{|c_f(s) - c_b(s)|^2} \qquad (3.1)$$

where c_f(s) and c_b(s) are the estimates of the local foreground gray-level and the local background gray-level respectively, obtained from the third and the fourth filters. The two-level normalized variance of the JPEG block associated with the current pixel is computed as the third feature component of the pixel. In the above equation, the


    numerator is proportional to the mean square error between the image block and its

    two-level quantization to the local foreground and background estimates. The

    denominator is proportional to the local contrast. Therefore, the feature component for

    a two-level text block will tend to be small. The fourth feature component is the

difference between the pixel's gray-level and the local foreground gray-level c_f(s); the fifth component is the difference between the pixel's gray-level and the local background gray-level c_b(s):

$$y_{n,4} = u_n - c_f(s) \qquad (3.2)$$

$$y_{n,5} = u_n - c_b(s) \qquad (3.3)$$

where u_n is the gray-level of the current pixel, and s is the JPEG block associated with

    the pixel. When the local region is a text region, these two features are used to

    distinguish between the text foreground and background. They help avoid applying

    the outputs of the last two filters when their errors are large.
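Putting the five components together, a rough Python sketch of the feature vector for a single pixel is given below; it assumes the per-block estimates c_f(s) and c_b(s) have already been computed (for instance, as in the sketch under Filter Selection) and uses scipy's Sobel operator. The small constant added to the denominator of 3.1 is only there to avoid division by zero and is not part of the report's definition.

import numpy as np
from scipy.ndimage import sobel

def feature_vector(img, cf, cb, row, col):
    # Five-dimensional feature vector y_n for the pixel at (row, col).
    bi, bj = row // 8, col // 8                        # index of the 8x8 JPEG block s
    block = img[bi * 8:(bi + 1) * 8, bj * 8:(bj + 1) * 8].astype(float)

    y1 = block.var()                                   # 1: block variance (smoothness cue)

    gx = sobel(img.astype(float), axis=1)              # 2: Sobel gradient magnitude at the pixel
    gy = sobel(img.astype(float), axis=0)              #    (recomputed here for clarity; in practice
    y2 = np.hypot(gx[row, col], gy[row, col])          #     it would be computed once per image)

    f, b = cf[bi, bj], cb[bi, bj]                      # 3: two-level normalized variance, eq. 3.1
    num = np.minimum((block - f) ** 2, (block - b) ** 2).sum()
    y3 = num / ((f - b) ** 2 + 1e-12)                  #    small constant only to avoid division by zero

    u = float(img[row, col])
    y4 = u - f                                         # 4: distance to the foreground level, eq. 3.2
    y5 = u - b                                         # 5: distance to the background level, eq. 3.3
    return np.array([y1, y2, y3, y4, y5])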

    C. Training

    To perform training for the HSF, a set of 24 natural images and six text images

    is used. The natural images are from the Kodak lossless true colour image suite, each

of size 512×768. Each text image is 7200×6800, with text of different fonts, sizes, and

contrasts. Also, the Independent JPEG Group's implementation of the JPEG encoder is

    used to encode the original training images. Each original image is JPEG encoded at

    three different quality settings corresponding to medium to high compression ratios,

    resulting in 90 pairs of original and degraded images. Then the unsupervised training

procedure described in the parameter estimation section is applied to the pairs of original and JPEG-encoded images. For simplicity, only monochrome images are

    used to perform training. That is, a true colour image is first converted to a

    monochrome image before JPEG encoding. To process a colour image, the trained

    HSF is applied to the different colour components of the image separately. In

the experiments presented in the next section, none of the test images are used in training.
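The degraded training images can be generated with any JPEG encoder; the report uses the Independent JPEG Group implementation, while the sketch below uses Pillow purely for illustration, with three arbitrary quality settings standing in for the report's unspecified "medium to high compression" settings.

from PIL import Image

def make_training_pairs(paths, qualities=(20, 35, 50)):
    # Convert each original image to monochrome, JPEG-encode it at several quality settings,
    # and return (original, degraded) pairs for the unsupervised training procedure.
    # The quality values are placeholders for the report's "medium to high compression" settings.
    pairs = []
    for i, path in enumerate(paths):
        original = Image.open(path).convert("L")        # monochrome original
        for q in qualities:
            tmp = "degraded_%02d_q%d.jpg" % (i, q)      # hypothetical output filename
            original.save(tmp, format="JPEG", quality=q)
            pairs.append((original, Image.open(tmp)))
    return pairs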


    Chapter 4

    CONCLUSION

    The Hypothesis Selection Filter (HSF) was proposed as a new approach for

    image enhancement. The HSF provides a systematic method for combining the

    advantages of multiple image filters, linear or nonlinear, into a single general

    framework. The major contributions of this research work include the basic

    architecture of the HSF and a novel probabilistic model which leads to an

    unsupervised training procedure for the design of an optimal soft classifier. The

    resulting classifier distinguishes the different types of image content and appropriately

    adjusts the weighting factors of the image filters. This method is particularly

    appealing in applications where image data are heterogeneous and can benefit from

the use of more than one filter. The effectiveness of the HSF has been demonstrated

    by applying it as a post-processing step for JPEG decoding so as to reduce the JPEG

    artefacts in the decoded document image. In the above scheme, four different image

    filters were incorporated for reducing the JPEG artefacts in the different types of

    image content that are common in document images, like text, graphics, and natural

    images. Based on several evaluation methods, including visual inspection of a variety

    of image patches with different types of content, global PSNR, and a global

    blockiness measure, this method outperforms state-of-the-art JPEG decoding

methods. Potential future research includes exploring other options for more advanced

    image filters and features for the application of JPEG artefact reduction, as well as

    applying the scheme to other quality enhancement and image reconstruction tasks.


    REFERENCES

1. T.-S. Wong, C. A. Bouman, and I. Pollak, "Image Enhancement Using the Hypothesis Selection Filter: Theory and Application to JPEG Decoding," IEEE Transactions on Image Processing, vol. 22, no. 3, March 2013.

2. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proc. 6th Int. Conf. Comput. Vis., Jan. 1998, pp. 839–846.

3. S. M. Smith and J. M. Brady, "SUSAN: A new approach to low level image processing," Int. J. Comput. Vis., vol. 23, no. 1, pp. 45–78, May 1997.

4. C. B. Atkins, C. A. Bouman, and J. P. Allebach, "Optimal image scaling using pixel classification," in Proc. Int. Conf. Image Process., Oct. 2001, pp. 864–867.

5. B. Zhang, J. S. Gondek, M. T. Schramm, and J. P. Allebach, "Improved resolution synthesis for image interpolation," in Proc. Int. Conf. Digit. Print. Technol., Sep. 2006, pp. 343–345.

6. Independent JPEG Group Website. (2012) [Online]. Available: http://www.ijg.org/

7. T.-S. Wong, C. A. Bouman, I. Pollak, and Z. Fan, "A document image model and estimation algorithm for optimized JPEG decompression," IEEE Trans. Image Process., vol. 18, no. 11, pp. 2518–2535, Nov. 2009.