VenSoft Technologies Bangalore IEEE 2014 MATLAB Projects Academic Year 2014-2015

Upload: vensoft

Post on 06-Feb-2018



VenSoft Technologies www.ieeedeveloperslabs.in Email: [email protected] Contact: +91 9448847874

IEEE 2014 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/B.E/B.Tech

VENTM14001 Regularized Simultaneous Forward-Backward Greedy Algorithm for Sparse Unmixing of Hyperspectral Data

Abstract: Sparse unmixing assumes that each observed signature of a hyperspectral image is a linear combination of only a few spectra (endmembers) in an available spectral library. It then estimates the fractional abundances of these endmembers in the scene. The sparse unmixing problem still remains a great difficulty due to the usually high correlation of the spectral library. Under such circumstances, this paper presents a novel algorithm termed the regularized simultaneous forward-backward greedy algorithm (RSFoBa) for sparse unmixing of hyperspectral data. The RSFoBa has low computational complexity, obtains an approximate solution to the l0 problem directly, and can exploit the joint sparsity among all the pixels in the hyperspectral data. In addition, the combination of the forward greedy step and the backward greedy step makes the RSFoBa more stable and less likely to be trapped in a local optimum than conventional greedy algorithms. Furthermore, when updating the solution in each iteration, a regularizer that enforces the spatial-contextual coherence within the hyperspectral image is considered to make the algorithm more effective. We also show that the sublibrary obtained by the RSFoBa can serve as input for any other sparse unmixing algorithm to make it more accurate and time-efficient. Experimental results on both synthetic and real data demonstrate the effectiveness of the proposed algorithm.

Published in: Geoscience and Remote Sensing, IEEE Transactions on (Volume: 52, Issue: 9)

    Date of Publication: Sept. 2014

Index Terms: Dictionary pruning, greedy algorithm (GA), hyperspectral unmixing, multiple-measurement vector (MMV), sparse unmixing.
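
The forward greedy selection at the heart of such algorithms can be sketched in a few lines: at each step, pick the library spectrum most correlated with the current residual, then re-fit the abundances by least squares over the selected set. The Python sketch below is only an illustration of that step (the toy library and mixture are invented; the backward step, the joint sparsity across pixels, and the spatial regularizer are omitted):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve(G, b):
    """Solve G x = b by Gaussian elimination (small dense systems only)."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(G)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def forward_greedy_unmix(library, y, k):
    """Select k endmembers by forward greedy steps, then least-squares abundances."""
    support, residual = [], y[:]
    for _ in range(k):
        # pick the library spectrum most correlated with the current residual
        best = max((i for i in range(len(library)) if i not in support),
                   key=lambda i: abs(dot(library[i], residual))
                   / math.sqrt(dot(library[i], library[i])))
        support.append(best)
        # re-fit abundances over the support and update the residual
        G = [[dot(library[i], library[j]) for j in support] for i in support]
        b = [dot(library[i], y) for i in support]
        x = solve(G, b)
        approx = [sum(x[m] * library[i][t] for m, i in enumerate(support))
                  for t in range(len(y))]
        residual = [y[t] - approx[t] for t in range(len(y))]
    return support, x

# Toy library of 3 "spectra"; the pixel mixes endmembers 0 and 2 (abundances 0.7 / 0.3).
library = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]
y = [0.7, 0.0, 0.3, 1.0]
support, abundances = forward_greedy_unmix(library, y, k=2)
print(support, [round(a, 3) for a in abundances])  # → [0, 2] [0.7, 0.3]
```

The greedy step recovers the true support exactly here because the toy library has low correlation; the regularizers discussed above target the realistic, highly correlated case.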

    VENTM14002 Mixed Noise Removal by Weighted Encoding with Sparse Nonlocal Regularization

    Abstract: Mixed noise removal from natural images is a challenging task since the noise distribution

    usually does not have a parametric model and has a heavy tail. One typical kind of mixed noise is

    additive white Gaussian noise (AWGN) coupled with impulse noise (IN).

Many mixed noise removal methods are detection-based methods. They first detect the locations of IN

    pixels and then remove the mixed noise. However, such methods tend to generate many artifacts when

    the mixed noise is strong. In this paper, we propose a simple yet effective method,

namely weighted encoding with sparse nonlocal regularization (WESNR), for mixed noise removal. In WESNR, there is no explicit step of impulse pixel detection; instead, soft impulse pixel detection

    via weighted encoding is used to deal with IN and AWGN simultaneously. Meanwhile, the image sparsity

    prior and nonlocal self-similarity prior are integrated into a regularization term and introduced into the

    variational encoding framework. Experimental results show that the proposed WESNR method achieves

    leading mixed noise removal performance in terms of both quantitative measures and visual quality.

  • 7/21/2019 Vensoft Technologies Bangalore IEEE 2014 Matlab Projects Academic Year 2014-2015

    2/32

    VenSoft Technologies www.ieeedeveloperslabs.in Email: [email protected]: +91 9448847874

    VenSoft Technologies www.ieeedeveloperslabs.in Email: [email protected] Contact: +91 9448847874

Published in: Image Processing, IEEE Transactions on (Volume: 23, Issue: 6)

    Date of Publication: June 2014

Index Terms: Mixed noise removal, weighted encoding, nonlocal, sparse representation.
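
Stripped to its core, soft impulse detection via weighted encoding is iteratively reweighted estimation: samples with large residuals receive small weights, so impulses are suppressed without an explicit detection step. A minimal Python sketch on an invented 1-D toy signal, using a Cauchy-type weight (the actual WESNR model, with its sparse nonlocal regularization, is far richer):

```python
def robust_mean(samples, iters=20, sigma=1.0):
    """Iteratively reweighted estimate of a constant signal level: samples with
    large residuals (impulses) get small weights, i.e. soft impulse detection."""
    est = sum(samples) / len(samples)              # start from the plain mean
    for _ in range(iters):
        w = [1.0 / (1.0 + ((s - est) / sigma) ** 2) for s in samples]
        est = sum(wi * s for wi, s in zip(w, samples)) / sum(w)
    return est

# A constant level of 5 with small perturbations, plus two large impulses.
samples = [5.1, 4.9, 5.0, 5.2, 4.8, 50.0, 60.0, 5.05]
plain = sum(samples) / len(samples)
robust = robust_mean(samples)
print(round(plain, 2), round(robust, 2))
```

The plain mean is dragged far from 5 by the impulses, while the reweighted estimate stays close to it; WESNR applies the same weighting idea inside a sparse coding model rather than to a simple mean.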

    VENTM14003 Subspace Matching Pursuit for Sparse Unmixing of Hyperspectral Data

Abstract: Sparse unmixing assumes that each mixed pixel in the hyperspectral image can be expressed

    as a linear combination of only a few spectra (end members) in a spectral library, known a priori. It then

    aims at estimating the fractional abundances of these endmembers in the scene. Unfortunately,

    because of the usually high correlation of the spectral library, the sparse unmixing problem still remains

    a great challenge. Moreover, most related work focuses on the l1 convex relaxation methods, and little

    attention has been paid to the use of simultaneous sparse representation via greedy algorithms (GAs)

(SGA) for sparse unmixing. SGA has notable advantages: it can obtain an approximate solution to the l0 problem directly, without smoothing the penalty term, at low computational complexity, and it can exploit the spatial information of the hyperspectral data. Thus, it is necessary to explore the potential of

    using such algorithms for sparse unmixing. Inspired by the existing SGA methods, this paper presents a

novel GA termed subspace matching pursuit (SMP) for sparse unmixing of hyperspectral data. SMP makes use of the low-degree mixed pixels in the hyperspectral image to iteratively find a subspace to

    reconstruct the hyperspectral data. It is proved that, under certain conditions, SMP can recover the

    optimal endmembers from the spectral library. Moreover, SMP can serve as a dictionary pruning

algorithm. Thus, it can boost other sparse unmixing algorithms, making them more accurate and time

    efficient. Experimental results on both synthetic and real data demonstrate the efficacy of the proposed

    algorithm.

Published in: Geoscience and Remote Sensing, IEEE Transactions on (Volume: 52, Issue: 6)

Date of Publication: June 2014

Index Terms: Dictionary pruning, greedy algorithm (GA), hyperspectral unmixing, multiple-measurement vector (MMV), simultaneous sparse representation, sparse unmixing, subspace matching pursuit (SMP).

    VENTM14004 Sparse Unmixing of Hyperspectral Data Using Spectral a Priori Information

    Abstract: Given a spectral library, sparse unmixing aims at finding the optimal subset of endmembers

    from it to model each pixel in the hyperspectral scene. However, sparse unmixing still remains a

challenging task due to the usually high mutual coherence of the spectral library. In this paper, we exploit the spectral a priori information in the hyperspectral image to alleviate this difficulty. We assume that some materials in the spectral library are known to exist in the scene. Such information can be

    obtained via field investigation or hyperspectral data analysis. Then, we propose a novel model to

    incorporate the spectral a priori information into sparse unmixing. Based on the alternating direction

    method of multipliers, we present a new algorithm, which is

termed sparse unmixing using spectral a priori information (SUnSPI), to solve the model. Experimental


results on both synthetic and real data demonstrate that the spectral a priori information is beneficial

    to sparse unmixing and that SUnSPI can exploit this information effectively to improve the abundance

    estimation.

Published in: Geoscience and Remote Sensing, IEEE Transactions on (Volume: 53, Issue: 2)

    Date of Publication: Feb. 2015

Index Terms: Hyperspectral unmixing, sparse unmixing, alternating direction method of multipliers (ADMM), spectral a priori information.
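
SUnSPI is built on the alternating direction method of multipliers (ADMM). The ADMM pattern, shown here on the smallest possible instance, a scalar l1-regularized least-squares problem rather than the SUnSPI model itself, alternates a quadratic update, a shrinkage step, and a dual update:

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of t*|.|."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def admm_lasso_scalar(b, lam, rho=1.0, iters=100):
    """ADMM for min_x 0.5*(x - b)^2 + lam*|x| via the splitting x = z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # quadratic subproblem
        z = soft(x + u, lam / rho)             # shrinkage subproblem
        u = u + x - z                          # dual (scaled multiplier) update
    return z

print(admm_lasso_scalar(3.0, 1.0))  # → 2.0, the closed-form soft(3, 1)
```

In SUnSPI the quadratic subproblem involves the spectral library and the known-endmember constraint rather than a scalar, but the alternation has the same three-step shape.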

    VENTM14005 Gradient Histogram Estimation and Preservation for Texture Enhanced Image Denoising

    Abstract: Natural image statistics plays an important role in image denoising, and various

    natural image priors, including gradient-based, sparse representation-based, and nonlocal self-

    similarity-based ones, have been widely studied and exploited for noise removal. In spite of the great

    success of many denoising algorithms, they tend to smooth the fine scale image textures when

    removing noise, degrading the image visual quality. To address this problem, in this paper, we propose

    a texture enhanced image denoising method by enforcing the gradient histogram of the

    denoised image to be close to a reference gradient histogram of the original image. Given the

    reference gradient histogram, a novel gradient histogram preservation (GHP) algorithm is developed

    to enhance the texture structures while removing noise. Two region-based variants of GHP are proposed

    for the denoising of images consisting of regions with different textures. An algorithm is also developed

to effectively estimate the reference gradient histogram from the noisy observation of the unknown image. Our experimental results demonstrate that the proposed GHP algorithm can well preserve the texture appearance in the denoised images, making them look more natural.

Published in: Image Processing, IEEE Transactions on (Volume: 23, Issue: 6)

    Date of Publication: June 2014

Index Terms: Image denoising, histogram specification, nonlocal similarity, sparse representation.
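
Enforcing a reference gradient histogram amounts to histogram specification on the gradient values. A minimal 1-D sketch of the rank-based mapping (the data are invented, and the actual GHP algorithm embeds this step inside an iterative denoising loop):

```python
def match_histogram(values, reference):
    """Monotone histogram specification: the k-th smallest value is replaced
    by the k-th smallest reference value (exact when lengths match)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ref = sorted(reference)
    out = [0.0] * len(values)
    for rank, i in enumerate(order):
        out[i] = ref[rank]
    return out

# Over-smoothed gradient magnitudes pushed back toward a reference histogram.
smoothed = [0.1, 0.4, 0.2, 0.3]
reference = [0.2, 0.6, 1.0, 1.4]
print(match_histogram(smoothed, reference))  # → [0.2, 1.4, 0.6, 1.0]
```

The mapping preserves the ordering of the gradients (where edges are) while restoring their overall strength distribution, which is why textures look less washed out.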

VENTM14006 Image Set-Based Collaborative Representation for Face Recognition

    Abstract: With the rapid development of digital imaging and communication technologies, image set-

    based face recognition (ISFR) is becoming increasingly important. One key issue of ISFR is how to

effectively and efficiently represent the query face image set using the gallery face image sets. The set-to-set distance-based methods ignore the relationship between gallery sets, whereas representing the

    query set images individually over the gallery sets ignores the correlation between query set images. In

this paper, we propose a novel image set-based collaborative representation and classification method for ISFR. By modeling the query set as a convex or regularized hull, we represent this hull collaboratively over all the gallery sets. With the resolved representation coefficients, the distance between the

    query set and each gallery set can then be calculated for classification. The proposed model naturally


and effectively extends the image-based collaborative representation to an image set-based one, and our

    extensive experiments on benchmark ISFR databases show the superiority of the proposed method to

    state-of-the-art ISFR methods under different set sizes in terms of both recognition rate and efficiency.

Published in: Information Forensics and Security, IEEE Transactions on (Volume: 9, Issue: 7)

    Date of Publication: July 2014

Index Terms: Image set, collaborative representation, set-to-sets distance, face recognition.

    VENTM14007 Fast Compressive Tracking

Abstract: It is a challenging task to develop effective and efficient appearance models for robust

    object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur.

    Existing online tracking algorithms often update models with samples from observations in recent

frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and degrade

    the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm

    with an appearance model based on features extracted from a multiscale image feature space with

    data-independent basis. The proposed appearance model employs non-adaptive random projections

    that preserve the structure of the image feature space of objects. A very sparse measurement matrix is

    constructed to efficiently extract the features for the appearance model. We compress sample images

    of the foreground target and the background using the same sparse measurement matrix. The tracking

    task is formulated as a binary classification via a naive Bayes classifier with online update in the

    compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational

    complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time

    and performs favorably against state-of-the-art methods on challenging sequences in terms of

    efficiency, accuracy and robustness.

Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume: PP, Issue: 99)

Date of Publication: April 2014

Index Terms: Visual tracking, random projection, compressive sensing, feature extraction, image coding, object tracking, robustness, sparse matrices, target tracking.
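
The very sparse measurement matrix can be sketched with an Achlioptas-style construction: most entries are zero, so projecting a feature vector into the compressed domain is cheap. Dimensions and data below are invented for illustration:

```python
import random

def sparse_projection_matrix(n_rows, n_cols, s=3, seed=0):
    """Very sparse random matrix: entries are +sqrt(s) or -sqrt(s) with
    probability 1/(2s) each, and 0 otherwise (zero with probability 1 - 1/s)."""
    rng = random.Random(seed)
    scale = s ** 0.5
    R = []
    for _ in range(n_rows):
        row = []
        for _ in range(n_cols):
            u = rng.random()
            row.append(scale if u < 1 / (2 * s)
                       else -scale if u < 1 / s else 0.0)
        R.append(row)
    return R

def compress(R, x):
    """Project a high-dimensional feature vector into the compressed domain."""
    return [sum(r * xi for r, xi in zip(row, x)) for row in R]

features = [float(i % 7) for i in range(50)]   # stand-in multiscale features
R = sparse_projection_matrix(8, 50)
v = compress(R, features)
zero_frac = sum(1 for row in R for e in row if e == 0.0) / (8 * 50)
print(len(v), round(zero_frac, 2))
```

Because roughly two thirds of the entries are zero, only about a third of the feature dimensions contribute to each compressed coordinate, which is the source of the tracker's efficiency.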

    VENTM14008 Speech Intelligibility Prediction Based on Mutual Information

    Abstract: This paper deals with the problem of predicting the average intelligibility of noisy and

    potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a

    model which performs this prediction based on the hypothesis that intelligibility is monotonically

    related to the mutual information between critical-band amplitude envelopes of the clean signal and the

    corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple


    function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude

    using a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The

    proposed model predicts that speech intelligibility cannot be improved by any processing of noisy

    critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well ( > 0.95) in

    predicting the intelligibility of speech signals contaminated by additive noise and potentially non-linearly

    processed using time-frequency weighting.

Published in: Audio, Speech, and Language Processing, IEEE/ACM Transactions on (Volume: 22, Issue: 2)

    Date of Publication: Feb. 2014

Index Terms: Instrumental measures, noise reduction, objective distortion measures, speech enhancement, speech intelligibility prediction.
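
The reduction from mutual information to mse can be made concrete for jointly Gaussian variables: with clean variance var and estimation error mse = var*(1 - rho^2), the mutual information is I = -0.5*ln(mse/var) nats. The toy calculation below illustrates only this relation, not the full critical-band model:

```python
import math

def mutual_information_from_mse(mse, var_clean):
    """For jointly Gaussian clean/processed amplitude pairs, mutual information
    is a simple decreasing function of the normalized MMSE."""
    return -0.5 * math.log(mse / var_clean)

# Correlation rho between clean and noisy amplitudes gives mmse = var*(1 - rho^2).
var_clean, rho = 1.0, 0.9
mse = var_clean * (1 - rho ** 2)
print(round(mutual_information_from_mse(mse, var_clean), 4))  # → 0.8304
```

As the mse grows toward the clean variance, the predicted information (and hence predicted intelligibility) falls to zero, matching the monotonic relation hypothesized in the paper.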

    VENTM14009 Super-Resolution Compressed Sensing: An Iterative Reweighted Algorithm for Joint

    Parameter Learning and Sparse Signal Recovery

Abstract: In many practical applications such as direction-of-arrival (DOA) estimation and line spectral

    estimation, the sparsifying dictionary is usually characterized by a set of unknown parameters in a

    continuous domain. To apply the conventional compressed sensing to such applications, the

    continuous parameter space has to be discretized to a finite set of grid points. Discretization, however,

    incurs errors and leads to deteriorated recovery performance. To address this issue, we propose

an iterative reweighted method which jointly estimates the unknown parameters and the sparse signals. Specifically, the proposed algorithm is developed by iteratively decreasing a surrogate function majorizing a given objective function, which results in a gradual and interweaved iterative process to

    refine the unknown parameters and the sparse signal. Numerical results show that

    the algorithm provides superior performance in resolving closely-spaced frequency components.

Published in: Signal Processing Letters, IEEE (Volume: 21, Issue: 6)

    Date of Publication: June 2014

Index Terms: Compressed sensing, super-resolution, parameter learning, sparse signal recovery.

    VENTM14010 Variants of non-negative least-mean-square algorithm and convergence analysis

    Abstract: Due to the inherent physical characteristics of systems under investigation, non-negativity is

one of the most interesting constraints that can usually be imposed on the parameters to estimate. The Non-Negative Least-Mean-Square algorithm (NNLMS) was proposed to adaptively find solutions of a

    typical Wiener filtering problem but with the side constraint that the resulting weights need to be non-

    negative. It has been shown to have good convergence properties. Nevertheless, certain practical

    applications may benefit from the use of modified versions of this algorithm. In this paper, we derive

    three variants of NNLMS. Each variant aims at improving the NNLMS performance regarding one of the

following aspects: sensitivity to input power, unbalance of convergence rates for different weights, and


    computational cost. We study the stochastic behavior of the adaptive weights for these three

    new algorithms for non-stationary environments. This study leads to analytical models to predict the

    first and second order moment behaviors of the weights for Gaussian inputs. Simulation results are

    presented to illustrate the performance of the new algorithms and the accuracy of the derived models.

Published in: Signal Processing, IEEE Transactions on (Volume: 62, Issue: 15)

    Date of Publication: Aug.1, 2014

Keywords: Adaptive signal processing, convergence analysis, exponential algorithm, least-mean-square algorithms, non-negativity constraints, normalized algorithm, sign-sign algorithm.
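
The basic NNLMS recursion scales the usual LMS correction entrywise by the current weight, so weights initialized positive can never cross zero. A minimal system-identification sketch with invented data (the paper's variants modify exactly this update):

```python
import random

def nnlms(xs, ds, mu=0.05, n_taps=2):
    """Non-negative LMS: the LMS correction mu*e*x is scaled entrywise by the
    current weight, which keeps positively initialized weights non-negative."""
    w = [0.5] * n_taps                     # positive initialization
    for x, d in zip(xs, ds):
        e = d - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + mu * e * wi * xi for wi, xi in zip(w, x)]
    return w

rng = random.Random(1)
true_w = [0.5, 0.8]                        # non-negative system to identify
xs = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(5000)]
ds = [sum(wi * xi for wi, xi in zip(true_w, x)) for x in xs]
w = nnlms(xs, ds)
print([round(wi, 3) for wi in w])
```

Note the weight-dependent step size: components with small weights adapt slowly, which is precisely the convergence-rate unbalance that the paper's variants set out to correct.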

    VENTM14011 Training-Free Non-Intrusive Load Monitoring of Electric Vehicle Charging with Low

    Sampling Rate

Abstract: Non-intrusive load monitoring (NILM) is an important topic in smart grids and smart homes.

    Many energy disaggregation algorithms have been proposed to detect various individual appliances

    from one aggregated signal observation. However, few works studied the energy disaggregation of

plug-in electric vehicle (EV) charging in the residential environment, since EV charging at home has emerged only recently. Recent studies showed that EV charging has a large impact on the smart grid,

    especially in summer. Therefore, EV charging monitoring has become a more important and urgent

    missing piece in energy disaggregation. In this paper, we present a novel method to disaggregate EV

    charging signals from aggregated real power signals. The proposed method can effectively mitigate

    interference coming from air-conditioner (AC), enabling accurate EV charging detection and energy

    estimation under the presence of AC power signals. Besides, the proposed algorithm requires no

    training, demands a light computational load, delivers high estimation accuracy, and works well for data

    recorded at the low sampling rate 1/60 Hz. When the algorithm is tested on real-world data recorded

from 11 houses over about a whole year (125 months' worth of data in total), the average error in estimating the energy consumption of EV charging is 15.7 kWh/month (while the true average energy consumption of EV charging is 208.5 kWh/month), and the average normalized mean square error in

    disaggregating EV charging load signals is 0.19.

Keywords: Non-intrusive load monitoring (NILM); electric vehicle (EV); smart grid; energy disaggregation.
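
A training-free detection heuristic in the spirit of the paper can be sketched as follows: flag sustained, roughly flat power steps of EV-charging magnitude, so that short air-conditioner compressor bursts fail the duration test. All numbers are invented, and the published algorithm's AC-interference mitigation is far more elaborate:

```python
def detect_ev(power, step_kw=3.0, min_len=10):
    """Flag samples belonging to a sustained power step of EV-charging
    magnitude; short spikes (e.g. AC compressor cycles) fail the duration test."""
    base = sorted(power)[len(power) // 10]      # robust baseline estimate
    on = [p - base >= step_kw for p in power]
    flags = [False] * len(power)
    i = 0
    while i < len(power):
        if on[i]:
            j = i
            while j < len(power) and on[j]:
                j += 1
            if j - i >= min_len:                # long enough to be EV charging
                for k in range(i, j):
                    flags[k] = True
            i = j
        else:
            i += 1
    return flags

# 0.5 kW base load, a short 4.5 kW AC burst, then a long 3.8 kW EV charge.
power = [0.5] * 20 + [4.5] * 4 + [0.5] * 16 + [3.8] * 30 + [0.5] * 10
flags = detect_ev(power)
print(sum(flags), flags[45], flags[21])  # → 30 True False
```

At a 1/60 Hz sampling rate each sample is a minute, so `min_len=10` corresponds to requiring at least ten minutes of sustained draw before declaring an EV charge.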


IEEE 2013 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/B.E/B.Tech

VENTM13001 2-Dimensional Wavelet Packet Spectrum for Texture Analysis

Abstract: This brief derives a 2-D spectrum estimator from some recent results on the statistical properties of wavelet packet coefficients of random processes. It provides an analysis of the bias of this estimator with respect to the wavelet order. This brief also discusses the performance of this wavelet-based estimator, in comparison with the conventional 2-D Fourier-based spectrum estimator, on texture analysis and content-based image retrieval. It highlights the effectiveness of the wavelet-based spectrum estimation.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 6)

    Date of Publication: June 2013

Keywords: 2-D wavelet packet transforms; random fields; spectral analysis; spectrum estimation; similarity measurements.

    VENTM13002 Supervised and Unsupervised Speech Enhancement Using Nonnegative Matrix

    Factorization

Abstract: Reducing the interference noise in a monaural noisy speech signal has been a challenging task

    for many years. Compared to traditional unsupervised speech enhancement methods, e.g., Wiener

    filtering, supervised approaches, such as algorithms based on hidden Markov models (HMM), lead to

    higher-quality enhanced speech signals. However, the main practical difficulty of these approaches is

    that for each noise type a model is required to be trained a priori. In this paper, we investigate a new

    class of supervised speech denoising algorithms using nonnegative matrix factorization (NMF). We

    propose a novel speech enhancement method that is based on a Bayesian formulation of NMF (BNMF).

    To circumvent the mismatch problem between the training and testing stages, we propose two

    solutions. First, we use an HMM in combination with BNMF (BNMF-HMM) to derive a minimum mean

    square error (MMSE) estimator for the speech signal with no information about the underlying noise

    type. Second, we suggest a scheme to learn the required noise BNMF model online, which is then used

    to develop an unsupervised speech enhancement system. Extensive experiments are carried out to

    investigate the performance of the proposed methods under different conditions. Moreover, we

    compare the performance of the developed algorithms with state-of-the-art speech enhancement

    schemes using various objective measures. Our simulations show that the proposed BNMF-based

    methods outperform the competing algorithms substantially.

Published in: Audio, Speech, and Language Processing, IEEE Transactions on (Volume: 21, Issue: 10)

    Date of Publication: Oct. 2013

Index Terms: Nonnegative matrix factorization (NMF), speech enhancement, PLCA, HMM, Bayesian inference.
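
The factorization underneath these methods can be sketched with the classic multiplicative NMF updates; the paper's Bayesian NMF replaces these with posterior updates, so this is only the underlying idea, shown on an invented toy matrix:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=200):
    """Multiplicative updates for V ≈ W H with nonnegative factors. Each update
    multiplies by a nonnegative ratio, so the factors stay nonnegative."""
    eps = 1e-9
    n, m = len(V), len(V[0])
    W = [[0.5 + 0.1 * ((i + k) % 3) for k in range(rank)] for i in range(n)]
    H = [[0.5 + 0.1 * ((k + j) % 3) for j in range(m)] for k in range(rank)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)          # H <- H * (WtV)/(WtWH)
        H = [[H[k][j] * num[k][j] / (den[k][j] + eps) for j in range(m)]
             for k in range(rank)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)          # W <- W * (VHt)/(WHHt)
        W = [[W[i][k] * num[i][k] / (den[i][k] + eps) for k in range(rank)]
             for i in range(n)]
    return W, H

# A noiseless rank-1 "spectrogram": the outer product of [1, 2] and [1, 2, 3].
V = [[1, 2, 3], [2, 4, 6]]
W, H = nmf(V, rank=1)
WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(2) for j in range(3))
print(round(err, 6))
```

In speech enhancement, the columns of W play the role of learned spectral basis vectors for speech and noise, and H holds their time-varying activations.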


    VENTM13003 Image Segmentation Using a Sparse Coding Model of Cortical Area V1

    Abstract: Algorithms that encode images using a sparse set of basis functions have previously been

shown to explain aspects of the physiology of the primary visual cortex (V1), and have been used for

    applications, such as image compression, restoration, and classification. Here, a sparse coding algorithm,

that has previously been used to account for the response properties of orientation-tuned cells in

    primary visual cortex, is applied to the task of perceptually salient boundary detection. The proposed

    algorithm is currently limited to using only intensity information at a single scale. However, it is shown

to outperform the current state-of-the-art image segmentation method (Pb) when this method is also

    restricted to using the same information.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)

Index Terms: Image segmentation; edge detection; neural networks; predictive coding; sparse coding; primary visual cortex.

    VENTM13004 How to SAIF-ly Boost Denoising Performance

Abstract: Spatial domain image filters (e.g., bilateral filter, non-local means, locally adaptive regression kernel) have achieved great success in denoising. Their overall performance, however, has not generally surpassed the leading transform domain-based filters (such as BM3D). One important reason is that spatial domain filters lack the means to adaptively fine-tune their denoising strength; something that is relatively easy to do in transform domain methods with shrinkage operators. In the pixel domain, the

    smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In

this paper, we propose spatially adaptive iterative filtering (SAIF), a new strategy to control the denoising strength locally for any spatial domain method. This approach is capable of filtering local image

    content iteratively using the given base filter, and the type of iteration and the iteration number are

    automatically optimized with respect to estimated risk (i.e., mean-squared error). In exploiting the

    estimated local signal-to-noise-ratio, we also present a new risk estimator that is different from the

    often-employed SURE method, and exceeds its performance in many cases. Experiments illustrate that

    our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters,

and effectively boost the performance of several existing denoising filters to generate state-of-the-art

    results under both simulated and practical conditions.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)

Index Terms: Image denoising, spatial domain filter, risk estimator, SURE, pixel aggregation.

    VENTM13005 Nonlocally Centralized Sparse Representation for Image Restoration

    Abstract: Sparse representation models code an image patch as a linear combination of a few atoms

    chosen out from an over-complete dictionary, and they have shown promising results in

    various image restoration applications. However, due to the degradation of the observed image (e.g.,


    noisy, blurred, and/or down-sampled), the sparse representations by conventional models may not be

    accurate enough for a faithful reconstruction of the original image. To improve the performance

    of sparse representation-based image restoration, in this paper the concept of sparse coding noise is

    introduced, and the goal of image restoration turns to how to suppress the sparse coding noise. To this

    end, we exploit the image nonlocal self-similarity to obtain good estimates of the sparse coding

    coefficients of the original image, and then centralize the sparse coding coefficients of the

observed image to those estimates. The so-called nonlocally centralized sparse representation (NCSR)

    model is as simple as the standard sparse representation model, while our extensive experiments on

various types of image restoration problems, including denoising, deblurring and super-resolution,

    validate the generality and state-of-the-art performance of the proposed NCSR algorithm.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)

    Date of Publication: April 2013

Index Terms: Image restoration, nonlocal similarity, sparse representation.

VENTM13006 Sparse Representation Based Image Interpolation With Nonlocal Autoregressive Modeling

    Abstract: Sparse representation is proven to be a promising approach to image super-resolution, where

    the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR)

    counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is

    directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes

    an image interpolation problem. In such cases, however, the conventional sparse representation

    models (SRM) become less effective, because the data fidelity term fails to constrain the image local

    structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide

    nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity

    into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is

    proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix

    is less coherent with the representation dictionary, and consequently makes SRM more effective for

    image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based

    image interpolation method can effectively reconstruct the edge structures and suppress the

    jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as

    perceptual quality metrics such as SSIM and FSIM.

Published in: Image Processing, IEEE Transactions on (Volume: 22, Issue: 4)

    Date of Publication: April 2013

Index Terms: Image interpolation, nonlocal autoregressive model, sparse representation, super-resolution.

    VENTM13007 Acceleration of the Shiftable Algorithm for Bilateral Filtering and Nonlocal

    Means


    Abstract: A direct implementation of the bilateral filter requires O(s2) operations per pixel, where

    s is the(effective) width of the spatial kernel. A fast implementation of the bilateral filter that

    required O(1) operations per pixel with respect to s was recently proposed. This was done by using

    trigonometric functions for the range kernel of the bilateral filter, and by exploiting their so-called shift

    ability property. In particular, a fast implementation of the Gaussian bilateral filter was realized by

    approximating the Gaussian range kernel using raised cosines. Later, it was demonstrated that this idea

    could be extended to a larger class of filters, including the popular non-local means filter. As already

observed, a flip side of this approach was that the run time depended on the width r of the range

kernel. For an image with dynamic range [0,T], the run time scaled as O(T^2/r^2). This made it

    difficult to implement narrow range kernels, particularly for images with large dynamic range. In this

    paper, we discuss this problem, and propose some simple steps to accelerate the implementation, in

    general, and for small r in particular. We provide some experimental results to

    demonstrate the acceleration that is achieved using these modifications.

    Published in: Image Processing, IEEE Transactions on (Volume:22 , Issue: 4 )

Date of Publication: April 2013

Index Terms: Bilateral filter, non-local means, shiftability, constant-time algorithm, Gaussian kernel,

    truncation, running maximum, max filter, recursive filter, O(1) complexity.
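The shiftability trick rests on approximating the Gaussian range kernel by a raised cosine, since cos(x/(σ√N))^N tends to exp(−x²/2σ²) as N grows, and each cosine term is shiftable. A minimal numerical sketch of that approximation (the value of σ and the test grid below are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))

def raised_cosine(x, sigma, N):
    # [cos(x / (sigma*sqrt(N)))]^N -> exp(-x^2 / (2 sigma^2)) as N grows
    return np.cos(x / (sigma * np.sqrt(N)))**N

sigma = 40.0                      # range-kernel width (hypothetical value)
x = np.linspace(-100, 100, 201)   # intensity differences
for N in (4, 16, 64):
    err = np.max(np.abs(raised_cosine(x, sigma, N) - gaussian(x, sigma)))
    print(N, round(float(err), 4))
```

Because each cosine factors into shiftable components, the bilateral sum decomposes into a fixed number of spatial convolutions, which is what yields the O(1) per-pixel cost discussed above.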

    VENTM13008 Incremental Learning of 3D-DCT Compact Representations for Robust Visual Tracking

    Abstract: Visual tracking usually requires an object appearance model that is robust to changing

    illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance

    samples in previous frames to form the bases upon which the object appearance model is built. This

    approach has the following limitations: 1) The bases are data driven, so they can be easily corrupted,

and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a

    set of cosine basis functions which are determined by the dimensions of the 3D signal and thus

    independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum

    whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these

    high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based

    object representation and a signal reconstruction-based similarity measure (reflecting the information

    loss from signal reconstruction). To efficiently update the object representation, we propose

an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D

    discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data.

As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational

    complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to

    evaluate the likelihood of a test sample belonging to the foreground object. We then embed the

    discriminative criterion into a particle filtering framework for object state inference over time.

    Experimental results demonstrate the effectiveness and robustness of the proposed tracker.

    Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume:35 , Issue: 4 )


    Date of Publication: April 2013

Index Terms: Visual tracking, appearance model, compact representation, discrete cosine transform

    (DCT), incremental learning, template matching.
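The incremental update exploits the separability of the DCT: the 3D-DCT of a patch volume equals a 2D-DCT applied to each frame followed by a 1D-DCT along the frame axis, so a newly added frame costs only its own 2D-DCT plus the third-axis 1D-DCT. A sketch with SciPy (toy sizes, not the paper's configuration):

```python
import numpy as np
from scipy.fft import dct, dctn

rng = np.random.default_rng(0)
cube = rng.standard_normal((8, 8, 5))   # 8x8 patches over 5 frames (toy sizes)

# Direct 3D-DCT of the whole volume
full = dctn(cube, type=2, norm='ortho')

# Separable route: 2D-DCT on each frame, then 1D-DCT along the frame axis
per_frame = dctn(cube, type=2, norm='ortho', axes=(0, 1))
separable = dct(per_frame, type=2, norm='ortho', axis=2)

print(bool(np.allclose(full, separable)))  # separability of the DCT
```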

    VENTM13009 Visual Saliency Based on Scale-Space Analysis in the Frequency Domain

    Abstract: We address the issue of visual saliency from three perspectives. First, we

    consider saliency detection as a frequency domain analysis problem. Second, we achieve this by

employing the concept of non-saliency. Third, we simultaneously consider the detection of salient

    regions of different size. The paper proposes a new bottom-up paradigm for detecting visual saliency,

    characterized by a scale-space analysis of the amplitude spectrum of natural images. We show

    that the convolution of the image amplitude spectrum with a low-pass Gaussian kernel of an

    appropriate scale is equivalent to an image saliency detector. The saliency map is obtained by

    reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at

    a scale selected by minimizing saliency map entropy. A Hypercomplex Fourier Transform

    performs the analysis in the frequency domain. Using available databases, we demonstrate

    experimentally that the proposed model can predict human fixation data. We also introduce a new

    image database and use it to show that the saliency detector can highlight both small and large salient

    regions, as well as inhibit repeated distractors in cluttered images. In addition, we show that it is able to

    predict salient regions on which people focus their attention.

    Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume:35 , Issue: 4 )

    Date of Publication: April 2013

Index Terms: Visual attention, saliency, Hypercomplex Fourier Transform, eye-tracking, scale space

    analysis.
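The core pipeline — smooth the amplitude spectrum with a low-pass Gaussian at some scale, then reconstruct with the original phase — can be sketched on a grayscale toy image. Note this is a simplification: the paper uses a hypercomplex Fourier transform over color channels and selects the scale by entropy minimization, both omitted here; the sigmas and test image below are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy 64x64 scene: a repeating stripe texture plus one distinct blob
y, x = np.mgrid[0:64, 0:64]
img = 0.3 * np.sin(2 * np.pi * x / 8)
img[40:44, 40:44] += 1.0

F = np.fft.fft2(img)
amp, phase = np.abs(F), np.angle(F)

# Low-pass the amplitude spectrum at one scale (sigma chosen arbitrarily here)
amp_s = gaussian_filter(amp, sigma=3)

# Reconstruct with smoothed amplitude and the ORIGINAL phase, then smooth
sal = np.abs(np.fft.ifft2(amp_s * np.exp(1j * phase))) ** 2
sal = gaussian_filter(sal, sigma=2)
print(bool(sal[40:44, 40:44].mean() > sal.mean()))  # blob stands out
```

Smoothing the amplitude spectrum suppresses the sharp spectral peaks of the repeated texture while leaving the broadband blob largely intact, which is why the distinct region survives as salient.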

VENTM13010 Demosaicking of Noisy Bayer-Sampled Color Images With Least-Squares Luma-Chroma Demultiplexing and Noise Level Estimation

Abstract: This paper adapts the least-squares luma-chroma demultiplexing (LSLCD) demosaicking

method to noisy Bayer color filter array (CFA) images. A model is presented for the noise in

    white-balanced gamma-corrected CFA images. A method to estimate the noise level in each of the red,

    green, and blue color channels is then developed. Based on the estimated noise parameters, one of a

finite set of configurations adapted to a particular level of noise is selected to demosaic the noisy data.

The noise-adaptive demosaicking scheme is called LSLCD with noise estimation (LSLCD-NE).

    Experimental results demonstrate state-of-the-art performance over a wide

range of noise levels, with low computational complexity. Many results with several algorithms, noise levels, and images are presented on our companion web site along with software to

    allow reproduction of our results.

    Published in: Image Processing, IEEE Transactions on (Volume:22 , Issue: 1 )

    Date of Publication: Jan. 2013

Index Terms: Color filter array, Bayer sampling, demosaicking, noise estimation, noise reduction, noise

    model
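Per-channel noise-level estimation can be illustrated with a generic MAD-based estimator on first differences; this is a stand-in for the paper's estimator, which is tuned specifically to white-balanced, gamma-corrected CFA data:

```python
import numpy as np

def estimate_noise_std(channel):
    """Robust noise estimate from horizontal first differences (a generic
    MAD-based estimator, not the exact procedure of the paper)."""
    d = np.diff(channel, axis=1).ravel() / np.sqrt(2)  # differencing doubles the variance
    return np.median(np.abs(d - np.median(d))) / 0.6745

rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256))  # smooth ramp
for true_sigma in (0.02, 0.05, 0.1):
    noisy = clean + rng.normal(0, true_sigma, clean.shape)
    print(round(float(estimate_noise_std(noisy)), 3), true_sigma)
```

The median absolute deviation makes the estimate robust to the smooth image content, so only the additive noise drives the result.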


    VENTM13011 Fuzzy C-Means Clustering With Local Information and Kernel Metric for Image

    Segmentation

    Abstract: In this paper, we present an improved fuzzy C-means (FCM)

algorithm for image segmentation by introducing a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends on the spatial distance of all neighboring pixels and their

    gray-level difference simultaneously. By using this factor, the new algorithm can accurately estimate the

    damping extent of neighboring pixels. In order to further enhance its robustness to noise and outliers,

    we introduce a kernel distance measure to its objective function. The new algorithm adaptively

    determines the kernel parameter by using a fast bandwidth selection rule based on the distance

    variance of all data points in the collection. Furthermore, the tradeoff

    weighted fuzzy factor and the kernel distance measure are both parameter free. Experimental results on

synthetic and real images show that the new algorithm is effective and efficient, and is relatively

insensitive to noise.

    Published in: Image Processing, IEEE Transactions on (Volume:22 , Issue: 2 )

    Date of Publication: Feb. 2013

Index Terms: Fuzzy clustering, gray-level constraint, image segmentation, kernel metric, spatial

    constraint.
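Stripped of the spatial tradeoff factor, the kernel-metric part of the algorithm reduces to kernelized fuzzy c-means. A 1-D sketch with a fixed Gaussian-kernel bandwidth (the paper selects the bandwidth automatically from the data; the clusters and parameters below are synthetic):

```python
import numpy as np

def kfcm(x, m=2.0, sigma=2.0, iters=50):
    # Two-cluster kernelized FCM; centers initialised at the data extremes.
    v = np.array([x.min(), x.max()])
    for _ in range(iters):
        kern = np.exp(-(x[:, None] - v[None, :]) ** 2 / sigma ** 2)  # K(x_i, v_j)
        d = np.clip(1.0 - kern, 1e-12, None)        # kernel-induced distance
        u = d ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)           # fuzzy memberships
        w = (u ** m) * kern
        v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)  # center update
    return np.sort(v)

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 0.3, 200), rng.normal(5.0, 0.3, 200)])
print(kfcm(x).round(2))
```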

VENTM13012 Reinitialization-Free Level Set Evolution via Reaction Diffusion

    Abstract: This paper presents a novel reaction-diffusion (RD) method for implicit active contours that is

completely free of the costly reinitialization procedure in level set evolution (LSE). A diffusion term is

    introduced into LSE, resulting in an RD-LSE equation, from which a piecewise constant solution can be

derived. In order to obtain a stable numerical solution from the RD-based LSE, we propose a two-step splitting method to iteratively solve the RD-LSE equation, where we first iterate the LSE equation, then

    solve the diffusion equation. The second step regularizes the level set function obtained in the first step

to ensure stability, and thus the complex and costly reinitialization procedure is completely eliminated

    from LSE. By successfully applying diffusion to LSE, the RD-LSE model is stable by means of the simple

    finite difference method, which is very easy to implement. The proposed RD method can be generalized

    to solve the LSE for both variational level set method and partial differential equation-based

level set method. The RD-LSE method shows very good performance on boundary anti-leakage. The

    extensive and promising experimental results on synthetic and real images validate the effectiveness of

    the proposed RD-LSE approach.

Published in: Image Processing, IEEE Transactions on (Volume:22 , Issue: 1 )

    Date of Publication: Jan. 2013

Index Terms: Active contours, image segmentation, level set, partial differential equation (PDE),

    reaction-diffusion, variational method.
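The second (regularization) step of the splitting is just a few explicit finite-difference iterations of the diffusion (heat) equation applied to the level set function. A sketch on a noisy signed-distance function for a disc (step size and iteration count are illustrative):

```python
import numpy as np

def diffuse(phi, dt=0.2, steps=5):
    """One regularization stage of the two-step splitting: a few explicit
    finite-difference iterations of the heat equation phi_t = laplace(phi)."""
    for _ in range(steps):
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        phi = phi + dt * lap
    return phi

def grad_energy(p):
    gy, gx = np.gradient(p)
    return float((gx**2 + gy**2).sum())

# A noisy signed-distance-like level-set function for a disc of radius 15
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 15.0
rng = np.random.default_rng(0)
noisy = phi + rng.normal(0, 0.5, phi.shape)

smoothed = diffuse(noisy)
print(grad_energy(smoothed) < grad_energy(noisy))  # diffusion regularizes
```

The diffusion step smooths the level set function (reducing its gradient energy) while leaving the sign of the function, and hence the rough position of the zero level set, unchanged; dt = 0.2 respects the explicit-scheme stability limit of 0.25.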

    VENTM13013 Online Object Tracking With Sparse Prototypes


    Abstract: Online object tracking is a challenging problem as it entails learning an effective model to

    account for appearance change caused by intrinsic and extrinsic factors. In this paper, we propose a

novel online object tracking algorithm with sparse prototypes, which combines classic principal

component analysis (PCA) with recent sparse representation schemes for learning effective

    appearance models. We introduce l1 regularization into the PCA reconstruction, and develop a novel

    algorithm to represent an object by sparse prototypes that account explicitly for data and noise.

For tracking, objects are represented by sparse prototypes that are learned and updated online. In order to

    reduce tracking drift, we present a method that takes occlusion and motion blur into account rather

than simply including image observations for model update. Both qualitative and quantitative

    evaluations on challenging image sequences demonstrate that the proposed tracking algorithm

    performs favorably against several state-of-the-art methods.

    Published in: Image Processing, IEEE Transactions on (Volume:22 , Issue: 1 )

    Date of Publication: Jan. 2013

Index Terms: Appearance model, l1 minimization, object tracking, principal component analysis (PCA),

    sparse prototypes
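The heart of the model — representing an observation as an orthonormal PCA basis plus an l1-penalized noise term — can be sketched by alternating a least-squares coefficient update with soft thresholding. The basis size, lambda, and the corruption pattern below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 50, 5
U = np.linalg.qr(rng.standard_normal((d, r)))[0]   # orthonormal PCA-like basis
z0 = rng.standard_normal(r)
e0 = np.zeros(d); e0[[7, 19, 33]] = 5.0            # sparse "occlusion" noise
x = U @ z0 + e0                                    # observation = subspace + spikes

# Minimize ||x - U z - e||^2 + lam * ||e||_1 by alternating updates
lam = 0.4
e = np.zeros(d)
for _ in range(100):
    z = U.T @ (x - e)                              # least-squares coefficients
    rsd = x - U @ z
    e = np.sign(rsd) * np.maximum(np.abs(rsd) - lam / 2, 0.0)  # soft threshold
print(sorted(np.nonzero(np.abs(e) > 2.0)[0].tolist()))
```

Since the objective is jointly convex in (z, e) and U is orthonormal, this alternation is exact coordinate descent; the recovered e isolates the spike positions, which is how the tracker separates occlusion from appearance change.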

    VENTM13014 Reversible Data Hiding in Encrypted Images by Reserving Room Before Encryption

Abstract: Recently, increasing attention has been paid

    to reversible data hiding (RDH) in encrypted images, since it maintains the excellent property that the

original cover can be losslessly recovered after the embedded data is extracted while protecting

    the image content's confidentiality. All previous methods embed data by reversibly vacating room from

    the encrypted images, which may be subject to some errors on data extraction

    and/or image restoration. In this paper, we propose a novel method by reserving room before

    encryption with a traditional RDH algorithm, and thus it is easy for the data hider to reversibly embed

    data in the encrypted image. The proposed method can achieve real reversibility, that is, data extraction

and image recovery are free of any error. Experiments show that this novel method can embed payloads

more than 10 times larger than previous methods for the same image quality, for example at

PSNR = 40 dB.

    Published in: Information Forensics and Security, IEEE Transactions on (Volume:8 , Issue: 3 )

    Date of Publication: March 2013

Index Terms: Reversible data hiding, image encryption, privacy protection, histogram shift.

    VENTM13015 Fast and Accurate Matrix Completion via Truncated Nuclear Norm Regularization

    Abstract: Recovering a large matrix from a small subset of its entries is a challenging problem arising in

    many real applications, such as image inpainting and recommender systems. Many existing approaches

    formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is

nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex

    relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that

    all the singular values are simultaneously minimized, and thus the rank may not be well approximated in


    practice. In this paper, we propose to achieve a better approximation to the rank

of a matrix by the truncated nuclear norm, which is defined as the nuclear norm minus the sum of the

    largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing

    the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM,

    TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the

alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal

    gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an

    adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our

    empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-

    art matrix completion algorithms on both synthetic and real visual datasets.

    Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume:35 , Issue: 9 )

    Date of Publication: Sept. 2013

Index Terms: Matrix completion, nuclear norm minimization, alternating direction method of

multipliers, accelerated proximal gradient method
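The truncated nuclear norm itself is cheap to evaluate: subtracting the r largest singular values from the nuclear norm leaves the sum of the remaining ones, which vanishes exactly when the rank is at most r. A small check (matrix sizes are arbitrary):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """||X||_* minus the r largest singular values, i.e. the sum of the
    remaining ones -- zero exactly when rank(X) <= r."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(s[r:].sum())

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank 3
print(truncated_nuclear_norm(A, 3))   # ~0 for a rank-3 matrix
print(truncated_nuclear_norm(A, 2) > 0)
```

Minimizing this quantity therefore penalizes only the small singular values, which is why it approximates the rank better than the full nuclear norm.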

    VENTM13016 A Saliency Detection Model Using Low-Level Features Based on Wavelet Transform

    Abstract: Researchers have been taking advantage of visual attention in various image processing

    applications such as image retargeting, video coding, etc. Recently, many saliency detection algorithms

    have been proposed by extracting features in spatial or transform domains. In this paper, a

    novel saliency detection model is introduced by utilizing low-level features obtained from

the wavelet transform domain. Firstly, the wavelet transform is employed to create the multiscale

feature maps which can represent different features from edge to texture. Then, we propose a

    computational model for the saliency map from these features. The proposed model aims to modulate

    local contrast at a location with its global saliency computed based on the likelihood of the features, and

    the proposed model considers local center-surround differences and global contrast in the

    final saliency map. Experimental evaluation depicts the promising results from the proposed model by

outperforming the relevant state-of-the-art saliency detection models.

    Published in: Multimedia, IEEE Transactions on (Volume:15 , Issue: 1 )

    Date of Publication: Jan. 2013

Index Terms: Feature map, saliency detection, saliency map, visual attention, wavelet transform.

    VENTM13017 Robust Point Matching Revisited: A Concave Optimization Approach

Abstract: The well-known robust point matching (RPM) method uses deterministic annealing for

    optimization, and it has two problems. First, it cannot guarantee the global optimality of the solution

and tends to align the centers of two point sets. Second, deformation needs to be regularized to avoid

    the generation of undesirable results. To address these problems, in this paper we show that the energy

    function of RPM can be reduced to a concave function with very few non-rigid terms after eliminating

    the transformation variables and applying linear transformation; we then propose to use concave


    optimization technique to minimize the resulting energy function. The proposed method scales well

    with problem size, achieves the globally optimal solution, and does not need regularization for simple

    transformations such as similarity transform. Experiments on synthetic and real data validate the

    advantages of our method in comparison with state-of-the-art methods.

    VENTM13018 Phase Noise in MIMO Systems: Bayesian Cramer-Rao Bounds and Soft-Input

    Estimation

    Abstract: This paper addresses the problem of estimating time varying phase noise caused by

    imperfect oscillators in multiple-input multiple-output (MIMO) systems. The estimation

problem is parameterized in detail, and based on an equivalent signal model its dimensionality

is reduced. New exact and

closed-form expressions for the Bayesian Cramér-Rao lower bounds (BCRLBs) and soft-input

    maximum a posteriori (MAP) estimators for online, i.e., filtering, and offline, i.e., smoothing,

    estimation of phase noise over the length of a frame are derived. Simulations demonstrate that

    the proposed MAP estimators' mean-square error (MSE) performances are very close to the

    derived BCRLBs at moderate-to-high signal-to-noise ratios. To reduce the overhead and

    complexity associated with tracking the phase noise processes over the length of a frame, a

    novel soft-input extended Kalman filter (EKF) and extended Kalman smoother (EKS) that use

    soft statistics of the transmitted symbols given the current observations are proposed.

Numerical results indicate that by employing the proposed phase tracking approach, the bit-error

rate performance of a MIMO system affected by phase noise can be significantly

    improved. In addition, simulation results indicate that the proposed phase noise estimation

    scheme allows for application of higher order modulations and larger numbers of antennas in

    MIMO systems that employ imperfect oscillators.

    Published in: Signal Processing, IEEE Transactions on (Volume:61 , Issue: 10 )

Issue Date: May 15, 2013

Index Terms: Multi-input multi-output (MIMO), Wiener phase noise, Bayesian Cramér-Rao lower

    bound (BCRLB), maximum-a-posteriori (MAP), soft-decision extended Kalman filter (EKF), and extended

    Kalman smoother (EKS).

VENTM13019 Multiscale Gossip for Efficient Decentralized Averaging in Wireless Packet

    Networks

Abstract: This paper describes and analyzes a hierarchical algorithm called Multiscale

Gossip for solving the distributed average consensus problem in wireless sensor networks.

    The algorithm proceeds by recursively partitioning a given network. Initially, nodes at the finest

    scale gossip to compute local averages. Then, using multi-hop communication and geographic

    routing to communicate between nodes that are not directly connected, these


    local averages are progressively fused up the hierarchy until the global average is computed.

We show that the proposed hierarchical scheme with k = Θ(log log n) levels of hierarchy is

    competitive with state-of-the-art randomized gossip algorithms in terms of message

complexity, achieving ε-accuracy with high probability after O(n log log n log(1/ε)) single-hop

messages. Key to our analysis is the way in which the network is recursively partitioned. We

find that the above scaling law is achieved when subnetworks at scale j contain O(n^((2/3)^j)) nodes;

then the message complexity at any individual scale is O(n log(1/ε)). Another important

    consequence of the hierarchical construction is that the longest distance over which messages

are exchanged is O(n^(1/3)) hops (at the highest scale), and most messages (at lower scales) travel

    shorter distances. In networks that use link-level acknowledgements, this results in less

    congestion and resource usage by reducing message retransmissions. Simulations illustrate that

    the proposed scheme is more efficient than state-of-the-art randomized gossip algorithms

    based on averaging along paths.

    Published in: Signal Processing, IEEE Transactions on (Volume:61 , Issue: 9 )

Date of Publication: May 1, 2013
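For contrast with the hierarchical scheme, plain single-scale randomized gossip is easy to simulate: nodes repeatedly average with a single-hop neighbor, the global average is preserved, and all values slowly contract to it. The ring topology, network size, and iteration count below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal(n)    # initial node values
target = x.mean()             # pairwise averaging preserves this (up to rounding)

for _ in range(50_000):       # single-scale randomized gossip on a ring
    i = int(rng.integers(n))
    j = (i + 1) % n           # one-hop neighbor
    x[i] = x[j] = (x[i] + x[j]) / 2

print(abs(x.mean() - target) < 1e-6, float(np.ptp(x)))
```

The slow convergence of this flat scheme on large networks is exactly what the hierarchical partitioning in the paper is designed to avoid.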

VENTM13020 Compressed Sensing of EEG for Wireless Telemonitoring with Low Energy

    Consumption and Inexpensive Hardware

Abstract: Telemonitoring of electroencephalogram (EEG) through wireless body-area networks

    is an evolving direction in personalized medicine. Among various constraints in designing such a

    system, three important constraints are energy consumption, data compression, and device

cost. Conventional data compression methodologies, although effective in data compression, consume significant energy and cannot reduce device cost. Compressed sensing (CS), as an

    emerging data compression methodology, is promising in catering to these constraints.

However, EEG is nonsparse in the time domain and also nonsparse in transformed domains

    (such as the wavelet domain). Therefore, it is extremely difficult for current CS algorithms to

    recover EEG with the quality that satisfies the requirements of clinical

    diagnosis and engineering applications. Recently, block sparse Bayesian learning (BSBL) was

proposed as a new method for the CS problem. This study introduces the technique to the

telemonitoring of EEG. Experimental results show that its recovery quality is better than that of state-of-

the-art CS algorithms, and sufficient for practical use. These results suggest that BSBL is very promising for telemonitoring of EEG and other nonsparse physiological signals.

    Published in: Biomedical Engineering, IEEE Transactions on (Volume:60 , Issue: 1 )

    Date of Publication: Jan. 2013

Index Terms: Telemonitoring, Healthcare, Wireless Body-Area Network (WBAN), Compressed

    Sensing (CS), Block Sparse Bayesian Learning (BSBL), electroencephalogram (EEG)


VENTM13021 Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Non-

    Invasive Fetal ECG via Block Sparse Bayesian Learning

Abstract: Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The

design of a telemonitoring system via a wireless body area network with

    low energy consumption for ambulatory use is highly desirable. As an emerging

    technique, compressed sensing (CS) shows great promise in compressing/reconstructing data

    with low energy consumption. However, due to some specific characteristics of raw FECG

recordings such as nonsparsity and strong noise contamination, current CS algorithms

    generally fail in this application. This paper proposes to use the block sparse Bayesian

learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental

    results show that the framework can reconstruct the raw recordings with high quality.

    Especially, the reconstruction does not destroy the interdependence relation among the

    multichannel recordings. This ensures that the independent component analysis

    decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework

    allows the use of a sparse binary sensing matrix with much fewer nonzero entries

    to compress recordings. Particularly, each column of the matrix can contain only two nonzero

    entries. This shows that the framework, compared to other algorithms such as current CS

    algorithms and wavelet algorithms, can greatly reduce code execution in CPU in the data

    compression stage.

Published in: Biomedical Engineering, IEEE Transactions on (Volume:60 , Issue: 2 )

Date of Publication: Feb. 2013

Index Terms: Fetal ECG (FECG), Telemonitoring, Telemedicine, Healthcare, Block Sparse

    Bayesian Learning (BSBL), Compressed Sensing (CS), Independent Component Analysis (ICA)
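The sparse binary sensing matrix mentioned above is simple to build: each column carries exactly two nonzero entries, so compressing a block requires only additions, no multiplications. A sketch (the block length and number of measurements below are arbitrary, and recovery via BSBL is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 96                       # signal length, number of measurements

# Sparse binary sensing matrix: exactly two nonzero entries per column,
# so compressing a block costs only 2n additions on the sensor node.
Phi = np.zeros((m, n))
for col in range(n):
    rows = rng.choice(m, size=2, replace=False)
    Phi[rows, col] = 1.0

x = rng.standard_normal(n)           # one block of raw recording (toy stand-in)
y = Phi @ x                          # compressed measurements
print(Phi.shape, y.shape, int(Phi.sum(axis=0).min()), int(Phi.sum(axis=0).max()))
```

This extreme sparsity of the encoder is what makes the scheme attractive for low-energy ambulatory hardware; the heavy lifting is deferred to the BSBL decoder on the receiving side.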

    IEEE 2012 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

    VENTM12001 Image Forgery Localization via Fine-Grained Analysis of CFA Artifacts

    Abstract: In this paper, a forensic tool able to discriminate between original and forged regions in

    an image captured by a digital camera is presented. We make the assumption that the image is acquired

using a Color Filter Array, and that tampering removes the artifacts due to the demosaicking algorithm.

The proposed method is based on a new feature measuring the presence of demosaicking artifacts at a

local level, and on a new statistical model that allows deriving the tampering probability of each 2×2

image block without requiring a priori knowledge of the position of the forged region. Experimental results

on different cameras equipped with different demosaicking algorithms demonstrate both the

    validity of the theoretical model and the effectiveness of our scheme.


    Published in: Information Forensics and Security, IEEE Transactions on (Volume:7 , Issue: 5 )

    Date of Publication: Oct. 2012

Index Terms: Image forensics, CFA artifacts, digital camera demosaicing, tampering probability map,

    forgery localization.

VENTM12002 Bottom-Up Saliency Detection Model Based on Human Visual Sensitivity and Amplitude Spectrum

    Abstract: With the wide applications of saliency information in visual signal processing, many saliency

    detection methods have been proposed. However, some key characteristics of the human visual system

(HVS) are still neglected in building these saliency detection models. In this paper, we propose a new

saliency detection model based on the human visual sensitivity and the amplitude spectrum of

    quaternion Fourier transform (QFT). We use the amplitude spectrum of QFT to represent the color,

    intensity, and orientation distributions for image patches. The saliency value for each image patch is

    calculated by not only the differences between the QFT amplitude spectrum of this patch and other

    patches in the whole image, but also the visual impacts for these differences determined by the human

visual sensitivity. The experimental results show that the proposed saliency detection model outperforms

    the state-of-the-art detection models. In addition, we apply our proposed model in the application of

    image retargeting and achieve better performance over the conventional algorithms.

    Published in: Multimedia, IEEE Transactions on (Volume:14 , Issue: 1 )

    Date of Publication: Feb. 2012

Index Terms: Amplitude spectrum, Fourier transform, human visual sensitivity, saliency detection,

    visual attention.

    VENTM12003 Monogenic Binary Coding: An Efficient Local Feature Extraction Approach to Face

    Recognition

    Abstract: Local-feature-based face recognition (FR) methods, such as Gabor features encoded

    by local binary pattern, could achieve state-of-the-art FR results in large-scale face databases such as

    FERET and FRGC. However, the time and space complexity of Gabor transformation are too high for

    many practical FR applications. In this paper, we propose a new

    and efficient local feature extraction scheme, namely monogenic binary coding (MBC),

    for face representation and recognition. Monogenic signal representation decomposes an original signal

    into three complementary components: amplitude, orientation, and phase. We encode

    the monogenic variation in each local region and monogenic feature in each pixel, and then calculate

    the statistical features (e.g., histogram) of the extracted local features.

    The local statistical features extracted from the complementary monogenic components (i.e.,

    amplitude, orientation, and phase) are then fused for effective FR. It is shown that the proposed MBC

    scheme has significantly lower time and space complexity than the Gabor-transformation-based local

    feature methods. The extensive FR experiments on four large-scale databases demonstrated the

effectiveness of MBC, whose performance is competitive with and even better than state-of-the-art

local-feature-based FR methods.


    Published in: Information Forensics and Security, IEEE Transactions on (Volume:7 , Issue: 6 )

    Biometrics Compendium, IEEE

    Date of Publication: Dec. 2012

Index Terms: Face recognition, Gabor filtering, LBP, monogenic binary coding, monogenic signal

    analysis.

    VENTM12004 A Joint Time-Invariant Filtering Approach to the Linear Gaussian Relay Problem

Abstract: In this paper, the linear Gaussian relay problem is considered. Under the linear time-

invariant (LTI) model, the rate maximization problem in the linear Gaussian relay channel is formulated in

    the frequency domain based on the Toeplitz distribution theorem. Under the further assumption of

    realizable input spectra, the rate maximization problem is converted to the problem of joint source

    and relay filter design with two power constraints, one at the source and the other at the relay, and a

    practical solution to this problem is proposed based on the (adaptive) projected (sub)gradient method.

    Numerical results show that the proposed method yields a considerable gain over the instantaneous

    amplify-and-forward (AF) scheme in inter-symbol interference (ISI) channels. Also, the optimality of the

    AF scheme within the class of one-tap relay filters is established in flat-fading channels.

    Published in: Signal Processing, IEEE Transactions on (Volume:60 , Issue: 8 )

    Date of Publication: Aug. 2012

    Index Terms: Filter design, linear Gaussian relay, linear time-invariant model, projected subgradient

    method, Toeplitz distribution theorem.
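The projected (sub)gradient method used above can be illustrated on a toy power-constrained problem: take a gradient step, then project the coefficients back onto the power budget by rescaling. The objective and all names below are illustrative, not the paper's filter-design formulation.

```python
import math

def project_onto_ball(w, power):
    """Scale w back onto {||w||^2 <= power} if it violates the power constraint."""
    norm_sq = sum(x * x for x in w)
    if norm_sq <= power:
        return list(w)
    scale = math.sqrt(power / norm_sq)
    return [x * scale for x in w]

def projected_gradient_ascent(grad, w0, power, step=0.1, iters=500):
    """Maximise a concave objective subject to a single power constraint."""
    w = list(w0)
    for _ in range(iters):
        g = grad(w)
        w = [wi + step * gi for wi, gi in zip(w, g)]
        w = project_onto_ball(w, power)
    return w

# Toy concave objective f(w) = -(w0-3)^2 - (w1-4)^2, unconstrained optimum (3, 4);
# with power budget 1 the constrained optimum lies on the unit circle at (0.6, 0.8).
w_star = projected_gradient_ascent(lambda w: [-2 * (w[0] - 3), -2 * (w[1] - 4)],
                                   [0.0, 0.0], 1.0)
```

The paper's problem has two such constraints (source and relay power), handled by alternating projections in the adaptive projected subgradient framework.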

    VENTM12005 Monotonic Regression: A New Way for Correlating Subjective and Objective Ratings in

    Image Quality Research

    Abstract: To assess the performance of image quality metrics (IQMs), some regressions, such as logistic

    regression and polynomial regression, are used to correlate objective ratings with subjective scores.

    However, these regressions exhibit defects in optimality. In this correspondence,

    monotonic regression (MR) is found to be an effective correlation method in the performance

    assessment of IQMs. Both theoretical analysis and experimental results have proven that MR performs

    better than any other regression. We believe that MR could be an effective tool for performance

    assessment in IQM research.

    Published in: Image Processing, IEEE Transactions on (Volume:21 , Issue: 4 )

    Date of Publication: April 2012

    Index Terms: Image quality assessment, image quality metric (IQM), metric performance, monotonic

    regression (MR).
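Monotonic (isotonic) least-squares regression can be computed with the classical pool-adjacent-violators algorithm (PAVA); a minimal sketch, not taken from the paper:

```python
def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares fit to y that is non-decreasing."""
    # Each block holds [sum, count]; merge backwards while block means decrease.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit
```

For IQM assessment, the objective scores would first be sorted by subjective score, and the fitted monotone curve plays the role of the logistic or polynomial mapping.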


    VENTM12006 An efficient leaf recognition algorithm for plant classification using support vector

    machine

    Abstract: Recognition of plants has become an active area of research as most plant species are at

    the risk of extinction. This paper uses an efficient machine learning approach for

    the classification task. The proposed approach consists of three phases: preprocessing, feature

    extraction, and classification. The preprocessing phase involves typical image processing steps

    such as conversion to gray scale and boundary enhancement. The feature extraction phase derives the

    common DMFs (Digital Morphological Features) from five fundamental features. The main contribution of this approach is

    the Support Vector Machine (SVM) classification for efficient leaf recognition. 12 leaf features, which are

    extracted and orthogonalized into 5 principal variables, are given as the input vector to the SVM. Tested

    on the Flavia dataset and a real dataset, and compared with a k-NN approach, the proposed classifier

    achieves very high accuracy with much shorter execution time.

    Published in: Pattern Recognition, Informatics and Medical Engineering (PRIME), 2012 International

    Conference on

    Date of Conference: 21-23 March 2012

    Keywords- Digital Morphological Features (DMFs); Leaf Recognition; Support Vector Machine
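The orthogonalization of 12 leaf features into 5 principal variables is a PCA step. A dependency-free sketch via power iteration with deflation is below; it illustrates the projection only, and all names are ours (the paper's exact pipeline may differ).

```python
import random

def pca_components(data, k, iters=200):
    """Top-k principal directions of `data` (list of feature rows),
    found by power iteration on the covariance matrix, with deflation."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    # d x d sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    comps = []
    for _ in range(k):
        v = [random.random() for _ in range(d)]
        for _ in range(iters):
            # multiply by covariance, re-orthogonalise against found comps, normalise
            w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
            for c in comps:
                dot = sum(wi * ci for wi, ci in zip(w, c))
                w = [wi - dot * ci for wi, ci in zip(w, c)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        comps.append(v)
    return comps
```

Projecting each 12-dimensional leaf feature vector onto the top 5 such directions yields the 5 principal variables that would then be fed to the SVM.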

    VENTM12007 Image Signature: Highlighting Sparse Salient Regions

    Abstract: We introduce a simple image descriptor referred to as the image signature. We show, within

    the theoretical framework of sparse signal mixing, that this quantity spatially approximates the

    foreground of an image. We experimentally investigate whether this approximate foreground overlaps

    with visually conspicuous image locations by developing a saliency algorithm based on the image

    signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce

    and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment,

    we demonstrate with a change blindness data set that the distance between images induced by the

    image signature is closer to human perceptual distance than can be achieved using other saliency

    algorithms, pixel-wise, or GIST [2] descriptor methods.

    Published in: Pattern Analysis and Machine Intelligence, IEEE Transactions on (Volume:34 , Issue: 1 )

    Date of Publication: Jan. 2012

    Index Terms: Saliency, visual attention, change blindness, sign function, sparse signal analysis.
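The image signature is the sign of the image's DCT coefficients, and a saliency map is obtained by reconstructing from the signature alone and squaring pointwise. A small sketch with an explicit orthonormal DCT-II matrix follows (the final Gaussian smoothing of the map is omitted, and this is our reading of the descriptor, not the authors' code):

```python
import math

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n); inverse transform is its transpose."""
    m = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([scale * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n)])
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def sign(x):
    return (x > 0) - (x < 0)

def image_signature(img):
    """Sign of the 2-D DCT of the image: coef = C @ img @ D^T, then sign()."""
    c, d = dct_matrix(len(img)), dct_matrix(len(img[0]))
    coef = matmul(matmul(c, img), transpose(d))
    return [[sign(v) for v in row] for row in coef]

def saliency_map(img):
    """Inverse-DCT the signature (C^T @ sig @ D) and square pointwise."""
    c, d = dct_matrix(len(img)), dct_matrix(len(img[0]))
    recon = matmul(matmul(transpose(c), image_signature(img)), d)
    return [[v * v for v in row] for row in recon]
```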

    VENTM12008 An Efficient Algorithm for Level Set Method Preserving Distance Function

    Abstract: The level set method is a popular technique for tracking moving interfaces in several

    disciplines, including computer vision and fluid dynamics. However, despite its high flexibility, the

    original level set method is limited by two important numerical issues. First, the level set method does

    not implicitly preserve the level set function as a distance function, which is necessary to estimate

    geometric features accurately, such as the curvature or the contour normal. Second,


    the level set algorithm is slow because the time step is limited by the standard Courant-Friedrichs-Lewy

    (CFL) condition, which is also essential to the numerical stability of the iterative scheme. Recent

    advances with graph cut methods and continuous convex relaxation methods provide powerful

    alternatives to the level set method for image processing problems because they are fast, accurate, and

    guaranteed to find the global minimizer independently of the initialization. These recent techniques use

    binary functions to represent the contour rather than distance functions, which are usually considered

    for the level set method. However, the binary function cannot provide the distance information, which

    can be essential for some applications, such as the surface reconstruction problem from scattered points

    and the cortex segmentation problem in medical imaging. In this paper, we propose a

    fast algorithm to preserve distance functions in level set methods. Our algorithm is inspired by

    recent efficient l1 optimization techniques, which make it efficient and easy to implement.

    It is interesting to note that our algorithm is not limited by the CFL condition and

    it naturally preserves the level set function as a distance function during the evolution, which avoids the

    classical re-distancing problem in level set methods. We apply the proposed algorithm to carry out

    image segmentation, where our methods prove to be 5-6 times faster than

    standard distance-preserving level set techniques. We also present two applications where

    preserving a distance function is essential. Nonetheless, our method stays generic and can be applied to

    any level set methods that require the distance information.

    Published in: Image Processing, IEEE Transactions on (Volume:21 , Issue: 12 )

    Date of Publication: Dec. 2012

    Index Terms: Image segmentation, level set, numerical scheme, signed distance function, splitting,

    surface reconstruction
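The classical re-distancing that this paper's algorithm avoids can be sketched in 1-D: evolve phi_t = sign(phi0) * (1 - |phi_x|) with upwind differences until |phi_x| is approximately 1, which restores the signed distance property while keeping the zero crossing. This is the standard reinitialization PDE, shown here only to make the "re-distancing problem" concrete; grid and parameter choices are ours.

```python
def reinitialise_1d(phi, dx=1.0, steps=200, dt=0.3):
    """Classical 1-D re-distancing of a level set function phi."""
    n = len(phi)
    sgn = [(v > 0) - (v < 0) for v in phi]   # sign of the initial phi, frozen
    phi = list(phi)
    for _ in range(steps):
        new = list(phi)
        for i in range(1, n - 1):
            # Godunov upwind gradient magnitude
            dminus = (phi[i] - phi[i - 1]) / dx
            dplus = (phi[i + 1] - phi[i]) / dx
            if sgn[i] > 0:
                grad = max(max(dminus, 0.0) ** 2, min(dplus, 0.0) ** 2) ** 0.5
            else:
                grad = max(min(dminus, 0.0) ** 2, max(dplus, 0.0) ** 2) ** 0.5
            new[i] = phi[i] + dt * sgn[i] * (1.0 - grad)
        # simple linear extrapolation at the boundaries
        new[0] = 2 * new[1] - new[2]
        new[-1] = 2 * new[-2] - new[-3]
        phi = new
    return phi
```

Starting from a function with slope 2 and zero crossing at the grid centre, the iteration relaxes the slope to 1 on both sides, i.e. to a signed distance function.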

    VENTM12009 Structure Extraction from Texture via Relative Total Variation

    Abstract: It is ubiquitous that meaningful structures are formed by or appear over textured surfaces.

    Extracting them under the complication of texture patterns, which could be regular, near-regular, or

    irregular, is very challenging, but of great practical importance. We propose new inherent variation and

    relative total variation measures, which capture the essential difference of these two types of visual

    forms, and develop an efficient optimization system to extract main structures. The new variation

    measures are validated on millions of sample patches. Our approach finds a number of new applications

    to manipulate, render, and reuse the immense number of structure-with-texture images and drawings

    that were traditionally difficult to edit properly.

    Keywords: texture, structure, smoothing, total variation, relative total variation, inherent variation,

    prior, regularized optimization

    Date of Publication: 2012
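The two measures can be illustrated in 1-D: the windowed total variation sums |gradients|, while the inherent variation takes |sum of gradients|, so their ratio is near 1 at a structural edge (gradients agree in sign) and near 0 over oscillating texture (gradients cancel). This is a simplified, unweighted reading of the paper's measures; the epsilon and window radius are our choices.

```python
def rtv_1d(signal, radius=2, eps=1e-3):
    """Per-point ratio of inherent variation |sum grads| to
    windowed total variation sum |grads|; structure -> ~1, texture -> ~0."""
    grads = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    out = []
    for i in range(len(grads)):
        lo, hi = max(0, i - radius), min(len(grads), i + radius + 1)
        window = grads[lo:hi]
        wtv = sum(abs(g) for g in window)   # windowed total variation
        ivar = abs(sum(window))             # inherent variation
        out.append(ivar / (wtv + eps))
    return out
```

In the paper this contrast is used inside a regularized optimization that penalizes texture while keeping structure.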

    VENTM12010 Quick Detection of Brain Tumors and Edemas: A Bounding Box Method Using Symmetry

    Abstract: A significant medical informatics task is indexing patient databases according to size, location,

    and other characteristics of brain tumors and edemas, possibly based on magnetic resonance (MR)


    imagery. This requires segmenting tumors and edemas within images from different MR modalities. To

    date, automated brain tumor or edema segmentation from MR modalities remains a challenging as well

    as computationally intensive task. In this paper, we propose a novel automated, fast, and approximate

    segmentation technique. The input is a patient study consisting of a set of MR slices. The output is a

    corresponding set of the slices that circumscribe the tumors with axis-parallel bounding boxes. The

    proposed approach is based on an unsupervised change detection method that searches for the most

    dissimilar region (axis-parallel bounding boxes) between the left and the right halves of a brain in an

    axial view MR slice. This change detection process uses a novel score function based on the Bhattacharyya

    coefficient computed with gray level intensity histograms. We prove that this score function admits a

    very fast (linear in image height and width) search to locate the bounding box. The average Dice

    coefficients for localizing brain tumors and edemas, over ten patient studies, are 0.57 and 0.52,

    respectively, which significantly exceed the scores for two other competitive region-based bounding

    box techniques.

    Index Terms: MR image segmentation, Bhattacharyya coefficient, brain tumor, edema.
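The dissimilarity search above is driven by the Bhattacharyya coefficient between grey-level histograms of the left and right brain halves; a minimal sketch (the bin count and normalisation are our choices, not the paper's):

```python
import math

def grey_histogram(pixels, bins=8, max_val=256):
    """Normalised grey-level histogram of a flat list of integer pixels."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalised histograms:
    1 means identical distributions, 0 means disjoint support."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))
```

A low coefficient between a region and its mirrored counterpart signals high dissimilarity, which is what the bounding-box search maximizes.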

    VENTM12011 Efficient Misalignment-Robust Representation for Real-Time Face Recognition

    Abstract: Sparse representation techniques for robust face recognition have been widely studied in the

    past several years. Recently face recognition with simultaneous misalignment, occlusion and other

    variations has achieved interesting results via robust alignment by sparse representation (RASR). In

    RASR, the best alignment of a testing sample is sought subject by subject in the database. However, such

    an exhaustive search strategy can make the time complexity of RASR prohibitive in large-scale face

    databases. In this paper, we propose a novel scheme, namely misalignment robust representation

    (MRR), by representing the misaligned testing sample in the transformed face space spanned by all

    subjects. The MRR seeks the best alignment via a two-step optimization with a coarse-to-fine search

    strategy, which needs only two deformation-recovery operations. Extensive experiments on

    representative face databases show that MRR has almost the same accuracy as RASR in various face

    recognition and verification tasks but it runs tens to hundreds of times faster than RASR. The running

    time of MRR is less than 1 second in the large-scale Multi-PIE face database, demonstrating its great

    potential for real-time face recognition.

    VENTM12012 Multi-User Diversity vs. Accurate Channel State Information in MIMO Downlink

    Channels

    Abstract: In a multiple transmit antenna, single antenna per receiver downlink channel with limited

    channel state feedback, we consider the following question: given a constraint on the total system-wide

    feedback load, is it preferable to get low-rate/coarse channel feedback from a large number of receivers

    or high-rate/high-quality feedback from a smaller number of receivers? Acquiring feedback from many

    receivers allows multi-user diversity to be exploited, while high-rate feedback allows for very precise


    selection of beamforming directions. We show that there is a strong preference for obtaining high-

    quality feedback, and that obtaining near-perfect channel information from as many receivers as

    possible provides a significantly larger sum rate than collecting a few feedback bits from a large number

    of users. In terms of system design, this corresponds to a preference for acquiring high-quality feedback

    from a few users on each time-frequency resource block, as opposed to coarse feedback from many

    users on each block.

    Published in: Wireless Communications, IEEE Transactions on (Volume:11 , Issue: 9 )

    Date of Publication: September 2012

    Index Terms: MIMO downlink channels, MU-MIMO communication, multi-user diversity

    VENTM12013 Joint Estimation of Channel and Oscillator Phase Noise in MIMO Systems

    Abstract: Oscillator phase noise limits the performance of high-speed communication systems since

    it results in time-varying channels and rotation of the signal constellation from symbol to

    symbol. In this paper, joint estimation of channel gains and Wiener phase noise in multi-input multi-

    output (MIMO) systems is analyzed. The signal model for the estimation problem is

    outlined in detail and new expressions for the Cramér-Rao lower bounds (CRLBs) for the multi-

    parameter estimation problem are derived. A data-aided least-squares (LS) estimator for jointly

    obtaining the channel gains and phase noise parameters is derived. Next, a decision-directed

    weighted least-squares (WLS) estimator is proposed, where pilots and estimated data symbols are

    employed to track the time-varying phase noise parameters over a frame. In order to reduce the

    overhead and delay associated with the estimation process, a new decision-directed extended Kalman

    filter (EKF) is proposed for tracking the MIMO phase noise throughout a frame. Numerical results show

    that the proposed LS, WLS, and EKF estimators' performances are close to the CRLB. Finally, simulation

    results demonstrate that, by employing the proposed channel and time-varying phase noise estimators, the bit-error rate performance of a MIMO system can be significantly

    improved.

    Published in: Signal Processing, IEEE Transactions on (Volume:60 , Issue: 9 )

    Date of Publication: Sept. 2012

    Index Terms: Channel estimation, Cramér-Rao lower bound (CRLB), extended Kalman filter (EKF), multi-

    input multi-output (MIMO), weighted least squares (WLS), Wiener phase noise.
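The Wiener phase noise model is a random walk, which a scalar Kalman filter can track once the observation is linearised. The toy sketch below is far simpler than the paper's MIMO EKF (scalar state, direct noisy observation of the phase), and the noise variances q and r are illustrative parameters.

```python
def kalman_phase_track(obs, q, r):
    """Scalar Kalman filter for a random-walk (Wiener) phase:
    theta_k = theta_{k-1} + w_k (var q), observed as y_k = theta_k + v_k (var r)."""
    est, p = 0.0, 1.0
    track = []
    for y in obs:
        p = p + q                    # predict: random walk adds variance q
        k = p / (p + r)              # Kalman gain
        est = est + k * (y - est)    # update with the innovation
        p = (1 - k) * p
        track.append(est)
    return track
```

Tracking the phase this way typically reduces the mean squared error well below that of the raw observations, which is the benefit the decision-directed EKF exploits across a frame.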

    IEEE 2011 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

    VENTM11001 An Augmented Lagrangian Method for Total Variation Video Restoration

    Abstract: This paper presents a fast algorithm for restoring video sequences. The proposed algorithm, as

    opposed to existing methods, does not consider video restoration as a sequence of image restoration

    problems. Rather, it treats a video sequence as a space-time volume and poses a space-time total

    variation regularization to enhance the smoothness of the solution. The optimization problem is solved

    by transforming the original unconstrained minimization problem to an equivalent constrained

    minimization problem. An augmented Lagrangian method is used to handle the constraints, and an


    alternating direction method is used to iteratively find solutions to the subproblems. The proposed

    algorithm has a wide range of applications, including video deblurring and denoising, video disparity

    refinement, and hot-air turbulence effect reduction.

    Published in: Image Processing, IEEE Transactions on (Volume:20 , Issue: 11 )

    Date of Publication: Nov. 2011

    Index Terms: Alternating direction method (ADM), augmented Lagrangian, hot-air turbulence,

    total variation (TV), video deblurring, video disparity, video restoration
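In such augmented Lagrangian / alternating direction schemes, the l1 (total variation) subproblem has a closed-form soft-thresholding (shrinkage) solution; a minimal sketch of that one step:

```python
def shrink(x, t):
    """Soft-thresholding: the closed-form minimiser of t*|u| + 0.5*(u - x)^2,
    which is the scalar u-subproblem in each alternating direction iteration."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0
```

In the full algorithm this operator is applied to the space-time gradient variables on every iteration, while a quadratic subproblem updates the video volume itself.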

    VENTM11002 On Optimal Power Control for Delay-Constrained Communication Over Fading Channels

    Abstract: In this paper, the problem of optimal power control for delay-constrained communication

    over fading channels is studied. The objective is to find a power control law that optimizes the link layer

    performance, specifically, minimizes delay bound violation probability (or equivalently, the packet drop

    probability), subject to constraints on average power, arrival rate and delay bound. The transmission

    buffer size is assumed to be finite; hence, when the buffer is full, there will be packet drop. The fading

    channel under study has a continuous state, e.g., Rayleigh fading. Since directly solving the power

    control problem (which optimizes the link layer performance) is particularly challenging, the problem is

    decomposed into three sub problems and the three sub problems are solved iteratively; the resulting

    scheme is called joint queue length aware (JQLA) power control, which produces a local optimal solution

    to the three sub problems. It is proved that the solution that simultaneously solves the three sub

    problems is also an optimal solution to the optimal power control problem. Simulation results show that

    the JQLA scheme achieves superior performance over the time domain water filling and the truncated

    channel inversion power control.

    Published in: Information Theory, IEEE Transactions on (Volume:57 , Issue: 6 )

    Date of Publication: June 2011

    Index Terms: Delay-constrained communication, power control, queuing analysis, delay bound

    violation probability, packet drop probability.
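Truncated channel inversion, one of the baselines above, inverts the channel only when the gain exceeds a cutoff and then rescales so the average transmit power meets the budget; a sketch (the parameter names are ours):

```python
def truncated_channel_inversion(gains, cutoff, avg_power=1.0):
    """Power per fading state: invert the channel above the cutoff,
    transmit nothing below it, then rescale to the average-power budget."""
    raw = [1.0 / g if g >= cutoff else 0.0 for g in gains]
    mean = sum(raw) / len(raw)
    if mean == 0:
        return raw
    scale = avg_power / mean
    return [p * scale for p in raw]
```

The JQLA scheme differs in that its power law also depends on the queue length, which is what lets it outperform such channel-only policies on delay-bound violations.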

    VENTM11003 A Level Set Method for Image Segmentation in the Presence of Intensity

    Inhomogeneities With Application to MRI

    Abstract: Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-

    based and typically rely on the homogeneity of the image intensities in the regions of interest, which

    often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper

    proposes a novel region-based method for image segmentation, which is able to deal

    with intensity inhomogeneities in the segmentation. First, based on the model

    of images with intensity inhomogeneities, we derive a local intensity clustering property of


    the image intensities, and define a local clustering criterion function for the image intensities in a

    neighborhood of each point. This local clustering criterion function is then integrated with respect to the

    neighborhood center to give a global criterion of image segmentation. In a level set formulation, this

    criterion defines an energy in terms of the level set functions that represent a partition of

    the image domain and a bias field that accounts for the intensity inhomogeneity of the image.

    Therefore, by minimizing this energy, our method is able to simultaneously segment the image and

    estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction

    (or bias correction). Our method has been validated on synthetic images and real images of various

    modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show

    that our method is more robust to initialization, faster and more accurate than the well-known

    piecewise smooth model. As an application, our method has been used for segmentation and bias

    correction of magnetic resonance (MR) images with promising results.

    Published in: Image Processing, IEEE Transactions on (Volume:20 , Issue: 7 )

    Date of Publication: July 2011

    Index Terms: Bias correction, MRI, image segmentation, intensity inhomogeneity, level set

    VENTM11004 Hybrid DE algorithm with adaptive crossover operator for solving real-world numerical

    optimization problems

    Abstract: In this paper, the results for the CEC 2011 Competition on testing evolutionary algorithms on

    real world optimization problems using a hybrid differential evolution algorithm are presented. The

    proposal uses a local search routine to improve convergence and an adaptive crossover operator.

    According to the obtained results, the algorithm is able to find solutions competitive with

    previously reported results.

    Index Terms: Differential Evolution algorithm, parameter selection, CEC competition.

    Published in: Evolutionary Computation (CEC), 2011 IEEE Congress on

    Date of Conference: June 2011
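A plain DE/rand/1/bin loop can be sketched as follows; note the paper adapts the crossover rate online and adds a local search, whereas this sketch keeps both the crossover rate and the scale factor fixed.

```python
import random

def differential_evolution(f, bounds, pop_size=20, f_w=0.7, cr=0.9, gens=200, seed=1):
    """Minimise f over box bounds with DE/rand/1/bin and greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f_w * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)     # clamp to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:                   # greedy one-to-one replacement
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```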

    VENTM11005 An Improved Algorithm for Blind Reverberation Time Estimation

    Abstract: An improved algorithm for the estimation of the reverberation time (RT) from reverberant

    speech signals is presented. This blind estimation of the RT is based on a simple statistical model for the

    sound decay such that the RT can be estimated by means of a maximum-likelihood (ML) estimator. The

    proposed algorithm has a significantly lower computational complexity than previous ML-based

    algorithms for RT estimation. This is achieved by a downsampling operation and a simple pre-selection

    of possible sound decays. The new algorithm is more suitable to track time-varying RTs than related

    approaches. In addition, it can also estimate the RT in the presence of (moderate) background noise.

    The proposed algorithm can be employed to measure the RT of rooms from sound recordings without


    using a dedicated measurement setup. Another possible application is its use within speech

    dereverberation systems for hands-free devices or digital hearing aids.

    Index Terms: Reverberation time, blind estimation, low complexity, speech dereverberation
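A much simpler, non-ML stand-in illustrates the quantity being estimated: fit a line to the log-energy of a decaying envelope and convert the slope to the time of a 60 dB drop (RT60). This is not the paper's maximum-likelihood estimator, which works blindly on reverberant speech.

```python
import math

def rt60_from_decay(envelope, fs):
    """Least-squares line through the dB curve of a decay envelope;
    returns the time (seconds) for a 60 dB drop implied by the slope."""
    n = len(envelope)
    db = [20.0 * math.log10(abs(e) + 1e-12) for e in envelope]
    xs = list(range(n))
    xbar = sum(xs) / n
    ybar = sum(db) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, db))
             / sum((x - xbar) ** 2 for x in xs))   # dB per sample (negative)
    return -60.0 / (slope * fs)
```

On a clean synthetic exponential decay this recovers the nominal RT exactly; the difficulty the paper addresses is doing so blindly, from speech, at low complexity.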

    IEEE 2010 MATLAB PROJECTS ACADEMIC YEAR 2014-2015 FOR M.Tech/ B.E/B.Tech

    VENTM10001 Distance Regularized Level Set Evolution and Its Application to Im