
Upload: citl-tech-varsity

Post on 10-May-2015


DESCRIPTION

SIMULATION: Image Processing, Power Electronics, Power Systems, Communication, Biomedical, Geoscience & Remote Sensing, Digital Signal Processing, VANETs, Wireless Sensor Networks, Mobile Ad Hoc Networks

TRANSCRIPT

Page 1: Matlab 2013,IEEE 2013 matlab projects,Mtech Matlab Projects 2013,IEEE power electronics projects, Simulation projects 2013

MATLAB PROJECT ABSTRACT (Image Processing, Wireless Sensor Network, Power Electronics, Signal Processing, Power System, Communication, Wireless communication, Geoscience & Remote sensing)

IEEE GEOSCIENCE AND REMOTE SENSING

1. Remote Sensing Image Fusion via Sparse Representations Over Learned Dictionaries

Remote sensing image fusion can integrate the spatial detail of a panchromatic (PAN) image and the spectral information of a low-resolution multispectral (MS) image to produce a fused MS image with high spatial resolution. In this paper, a remote sensing image fusion method is proposed with sparse representations over learned dictionaries. The dictionaries for the PAN image and the low-resolution MS image are learned adaptively from the source images. Furthermore, a novel strategy is designed to construct the dictionary for the unknown high-resolution MS image without a training set, which makes the proposed method more practical. The sparse coefficients of the PAN image and the low-resolution MS image are sought by the orthogonal matching pursuit algorithm. Then, the fused high-resolution MS image is calculated by combining the obtained sparse coefficients with the dictionary for the high-resolution MS image. By comparison with six well-known methods in terms of several universal quality evaluation indexes, with and without references, the simulated and real experimental results on QuickBird and IKONOS images demonstrate the superiority of our method.
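The sparse-coding step named in this abstract, orthogonal matching pursuit, fits in a few lines. The random dictionary and 2-sparse signal below are toy stand-ins, not the dictionaries learned by the paper:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to represent y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares refit on the whole selected support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# toy example: y is an exact 2-sparse combination of dictionary atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(128)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = omp(D, y, k=2)
```

With a well-conditioned dictionary and enough measurements, OMP recovers the true sparse coefficients exactly in this noiseless setting.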

2. Evaluation of Spatial and Spectral Effectiveness of Pixel-Level Fusion Techniques

Along with the launch of a number of very high-resolution satellites in the last decade, efforts have been made to increase the spatial resolution of the multispectral bands using the panchromatic information. Quality evaluation of pixel-fusion techniques is a fundamental issue in benchmarking and optimizing different algorithms. In this letter, we present a thorough analysis of the spatial and spectral distortions produced by eight pan-sharpening techniques. The study was conducted using real data from different types of land covers, and also a synthetic image with different colors and spatial structures for comparison purposes. Several spectral and spatial quality indexes and visual information were considered in the analysis. Experimental results have shown that fusion methods cannot simultaneously incorporate the maximum spatial detail without degrading the spectral information. The Atrous_IHS, Atrous_PCA, IHS, and eFIHS algorithms provide the best spatial–spectral tradeoff for wavelet-based and algebraic or component-substitution methods. Finally, inconsistencies between some quality indicators were detected and analyzed.
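One common spectral quality index of the kind this letter evaluates is the spectral angle mapper (SAM), the mean angle between reference and fused spectra at each pixel; a minimal NumPy version (the letter's exact set of indexes is not specified in the abstract):

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Mean spectral angle in degrees between two images of shape
    (rows, cols, bands); 0 means identical spectral directions."""
    a = ref.reshape(-1, ref.shape[-1]).astype(float)
    b = fused.reshape(-1, fused.shape[-1]).astype(float)
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    ang = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return float(ang.mean())

# scaling every spectrum by a constant changes brightness but not its angle
ident = np.ones((4, 4, 3))
angle_same = sam(ident, 2.0 * ident)      # essentially zero
```

Because SAM ignores per-pixel scaling, it is usually paired with a radiometric index when judging fusion quality.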

3. Spatiotemporal Satellite Image Fusion Through One-Pair Image Learning

This paper proposes a novel spatiotemporal fusion model for generating images with high-spatial and high-temporal resolution (HSHT) through learning with only one pair of prior images. For this purpose, this method establishes correspondence between low-spatial-resolution but high-temporal-resolution (LSHT) data and high-spatial-resolution but low-temporal-resolution (HSLT) data through the superresolution of the LSHT data and further fusion using high-pass modulation. Specifically, this method is implemented in two stages. In the first stage, the spatial resolution of the LSHT data on the prior and prediction dates is improved simultaneously by means of sparse representation; in the second stage, the known HSLT data and the superresolved LSHT data are fused via high-pass modulation to generate the HSHT data on the prediction date. Remarkably, this method forms a unified framework for blending remote sensing images with temporal reflectance changes, whether phenology change (e.g., seasonal change of vegetation) or land-cover-type change (e.g., conversion of farmland to built-up area), based on a two-layer spatiotemporal fusion strategy that accounts for the large spatial-resolution difference between HSLT and LSHT data. This method was tested on both a simulated data set and two actual data sets of Landsat Enhanced Thematic Mapper Plus–Moderate Resolution Imaging Spectroradiometer acquisitions. It was also compared with other well-known spatiotemporal fusion algorithms on two types of data: images primarily with phenology changes and images primarily with land-cover-type changes. Experimental results demonstrated that our method performed better in capturing surface reflectance changes on both types of images.
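The high-pass modulation used in the second stage can be illustrated on a single image pair: each band is scaled by the ratio of the sharp image to its low-pass version, injecting the sharp image's high frequencies. The 3x3 box blur below is a stand-in for whatever low-pass filter separates the scales (the paper's actual filter and gain model are not given in the abstract):

```python
import numpy as np

def box3(img):
    """3x3 box blur with edge padding (a stand-in for any low-pass filter)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def hpm_fuse(ms_up, pan, eps=1e-6):
    """High-pass modulation: scale each upsampled band (axis 0 = bands)
    by the ratio of the sharp image to its low-pass version."""
    gain = pan / (box3(pan) + eps)
    return np.stack([band * gain for band in ms_up], axis=0)

# sanity check: a flat sharp image carries no detail, so bands pass through
rng = np.random.default_rng(0)
ms = rng.random((3, 8, 8))
fused = hpm_fuse(ms, np.ones((8, 8)))
```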
4. A Sparse Image Fusion Algorithm With Application to Pan-Sharpening

Data provided by most optical Earth observation satellites such as IKONOS, QuickBird, and GeoEye are composed of a panchromatic channel of high spatial resolution (HR) and several multispectral channels at a lower spatial resolution (LR). The fusion of an HR panchromatic image and the corresponding LR spectral channels is called “pan-sharpening.” It aims at obtaining an HR multispectral image. In this paper, we propose a new pan-sharpening method named Sparse Fusion of Images (SparseFI, pronounced as “sparsify”). SparseFI is based on compressive sensing theory and explores the sparse representation of HR/LR multispectral image patches in dictionary pairs co-trained from the panchromatic image and its downsampled LR version. Compared with conventional methods, it “learns” from, i.e., adapts itself to, the data and has generally better performance than existing methods. Because the SparseFI method does not assume any spectral composition model of the panchromatic image, and thanks to the super-resolution capability and robustness of sparse signal reconstruction algorithms, it gives higher spatial resolution and, in most cases, less spectral distortion than the conventional methods.


5. Hybrid Pansharpening Algorithm for High Spatial Resolution Satellite Imagery to Improve Spatial Quality

Most pansharpened images from existing algorithms are apt to present a tradeoff between spectral preservation and spatial enhancement. In this letter, we developed a hybrid pansharpening algorithm based on primary and secondary high-frequency information injection to efficiently improve the spatial quality of the pansharpened image. The injected high-frequency information in our algorithm is composed of two types of data: the difference between the panchromatic and intensity images, and the Laplacian-filtered image of that high-frequency information. The extracted high frequencies are injected into the multispectral image using a locally adaptive fusion parameter and postprocessing of the fusion parameter. In experiments using various satellite images, our results show better spatial quality than those of other fusion algorithms while maintaining as much spectral information as possible.
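A stripped-down sketch of the two-stage injection described here: the PAN-minus-intensity difference as primary high frequencies, plus its Laplacian as secondary. The global weights stand in for the paper's locally adaptive fusion parameter, which the abstract does not specify:

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian with edge padding."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img

def hybrid_sharpen(ms_up, pan, w_primary=1.0, w_secondary=0.25):
    """Inject primary (PAN - intensity) and secondary (its Laplacian)
    high frequencies into every band; weights are illustrative globals."""
    intensity = ms_up.mean(axis=0)        # simple intensity component
    primary = pan - intensity
    secondary = laplacian(primary)
    return ms_up + w_primary * primary + w_secondary * secondary

# sanity check: if PAN already equals the intensity image, nothing is injected
rng = np.random.default_rng(0)
ms = rng.random((3, 8, 8))
out = hybrid_sharpen(ms, ms.mean(axis=0))
```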

IEEE TRANSACTIONS ON COMMUNICATION SYSTEMS

1. Sum-Product Algorithm Utilizing Soft Distances on Additive Impulsive Noise Channels

In this letter, a sum-product algorithm (SPA) utilizing soft distances is shown to be more resilient to impulsive noise than conventional likelihood-based SPAs when the noise distribution is unknown. An efficient version of the soft-distance SPA is also developed, with half the storage requirements and running time.

2. Downlink Optimization with Interference Pricing and Statistical CSI

In this paper, we propose a downlink transmission strategy based on intercell interference pricing and a distributed algorithm that enables each base station (BS) to design locally its own beamforming vectors without relying on downlink channel state information of links from other BSs to the users. This algorithm is the solution to an optimization problem that minimizes a linear combination of data transmission power and the resulting weighted intercell interference with pricing factors at each BS, while maintaining the required signal-to-interference-plus-noise ratios (SINRs) at user terminals. We provide a convergence analysis for the proposed distributed algorithm and derive conditions for its existence. We characterize the impact of the pricing factors in expanding the operational range of SINR targets at user terminals in a power-efficient manner. Simulation results confirm that the proposed algorithm converges to a network-wide equilibrium point by balancing and stabilizing the intercell interference levels and assigning power-optimal beamforming vectors to the BSs. The results also show the effectiveness of the proposed algorithm in closely following the performance limits of its centralized coordinated beamforming counterpart.

3. Evaluation of the Low Error-Rate Performance of LDPC Codes over Rayleigh Fading Channels Using Importance Sampling

In this paper we propose a novel importance sampling (IS) scheme to estimate the low error-rate performance of low-density parity-check (LDPC) codes over Rayleigh fading channels. The proposed scheme exploits the structural weakness of LDPC codes due to trapping sets (TSs). The Rayleigh fading distribution on the bits belonging to a TS is biased by parameter scaling (PS), while the noise distribution on them is biased via mean translation (MT) according to their fading coefficients. The biases in PS and MT are determined so that the variance of the proposed IS estimator is minimized. The proposed IS scheme is compared with the Monte Carlo (MC) simulator and with other IS schemes adapted from the conventional IS scheme employed for performance estimation of LDPC codes over an AWGN channel. Numerical results show that it provides much more accurate performance estimates than the other IS schemes. Furthermore, the proposed IS estimator is even more efficient than the MC estimator and the other IS estimators from the viewpoint of the number of required simulation runs.
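Mean translation, one of the two biasing devices named here, is easiest to see on a scalar toy problem: estimating the Gaussian tail probability P(X > 4), an event plain Monte Carlo almost never samples. The threshold and sample size below are illustrative, not the paper's setup:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n, t = 100_000, 4.0                        # samples, rare-event threshold

# plain Monte Carlo: the event {X > 4} is hit only a handful of times
p_mc = np.mean(rng.standard_normal(n) > t)

# mean translation: sample from N(t, 1) and reweight by the density ratio
x = rng.standard_normal(n) + t
w = np.exp(-t * x + t ** 2 / 2)            # phi(x) / phi(x - t)
p_is = np.mean((x > t) * w)

p_true = 0.5 * (1 - erf(t / sqrt(2)))      # exact tail, about 3.17e-5
```

Shifting the sampling distribution onto the rare event and correcting with likelihood-ratio weights is exactly the MT idea the paper applies to the noise on trapping-set bits.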

4. Extended Reed-Solomon Codes for Optical CDMA

In this paper, the extended Reed-Solomon codes are modified to construct a new family of 2-D codes for synchronous optical code-division multiple access (O-CDMA). In addition to having expanded and asymptotically optimal cardinality, these 2-D asynchronous optical codes can be partitioned into multiple tree structures of code subsets, in which code cardinality is a function of the (periodic) cross-correlation value assigned to the subset. The performance of these 2-D optical codes is analyzed and compared with that of the multilevel prime codes. Our results show that the unique partition property of the new optical codes supports a trade-off between code cardinality and performance for meeting different system requirements, such as user capacity and throughput. In addition, the multiple tree structures of the new codes potentially support applications that require rapid switching of many codewords, such as an O-CDMA-network gateway or strategic environments where code obscurity is essential.

5. The Multicell Multiuser MIMO Uplink with Very Large Antenna Arrays and a Finite-Dimensional Channel

We consider multicell multiuser MIMO systems with a very large number of antennas at the base station (BS). We assume that the channel is estimated by using uplink training. We further consider a physical channel model where the angular domain is separated into a finite number of distinct directions. We analyze the so-called pilot contamination effect discovered in previous work, and show that this effect persists under the finite-dimensional channel model that we consider. In particular, we consider a uniform array at the BS. For this scenario, we show that when the number of BS antennas goes to infinity, the system performance under a finite-dimensional channel model with P angular bins is the same as the performance under an uncorrelated channel model with P antennas. We further derive a lower bound on the achievable rate of uplink data transmission with a linear detector at the BS. We then specialize this lower bound to the cases of maximum-ratio combining (MRC) and zero-forcing (ZF) receivers, for a finite and an infinite number of BS antennas. Numerical results corroborate our analysis and show a comparison between the performances of MRC and ZF in terms of sum-rate.
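The two linear detectors compared above differ in a single line: MRC correlates the received vector with the channel, while ZF inverts it. A toy uplink with illustrative dimensions (64 BS antennas, 4 single-antenna QPSK users) shows both:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 64, 4                              # BS antennas, users
# i.i.d. Rayleigh channel, unit-variance complex entries
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
# QPSK symbols, one per user
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=K) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = H @ s + noise

s_mrc = H.conj().T @ y / M                # maximum-ratio combining (scaled)
s_zf = np.linalg.pinv(H) @ y              # zero-forcing removes inter-user interference
```

With many more antennas than users, even the MRC estimate lands close to the transmitted symbols because the user channels become nearly orthogonal.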

6. A Study on Inter-Cell Subcarrier Collisions due to Random Access in OFDM-Based Cognitive Radio Networks

In cognitive radio (CR) systems, one of the main implementation issues is spectrum sensing, because of uncertainties in the propagation channel, the hidden primary user (PU) problem, sensing duration, and security issues. This paper considers an orthogonal frequency-division multiplexing (OFDM)-based CR spectrum sharing system that assumes random access of primary network subcarriers by secondary users (SUs) and absence of the PU's spectrum utilization information; i.e., no spectrum sensing is employed to acquire information about the PU's activity or the availability of free subcarriers. In the absence of information about the PU's activity, the SUs randomly access (utilize) the subcarriers of the primary network and collide with the PU's subcarriers with a certain probability. In addition, inter-cell collisions among the subcarriers of SUs (belonging to different cells) can occur due to the inherent nature of the random access scheme. This paper conducts a stochastic analysis of the number of subcarrier collisions between the SUs' and PU's subcarriers, assuming fixed and random numbers of subcarrier requirements for each user. The performance of the random access scheme in terms of capacity, and the capacity (rate) loss caused by the subcarrier collisions, is investigated by assuming an interference power constraint at the PUs to protect their operation.
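When an SU picks k of N subcarriers uniformly at random and the PU occupies m of them, the number of collisions follows a hypergeometric law with mean km/N; a quick simulation with toy parameters (not the paper's setup) confirms this:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, k = 64, 16, 8            # total subcarriers, PU-occupied, SU-requested
pu = set(range(m))             # the m subcarriers the PU holds (first m, WLOG)

trials = 20_000
collisions = [len(pu.intersection(rng.choice(N, size=k, replace=False)))
              for _ in range(trials)]
mean_sim = float(np.mean(collisions))
mean_theory = k * m / N        # hypergeometric mean: 8 * 16 / 64 = 2.0
```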

7. Normalized Adaptive Channel Equalizer Based on Minimal Symbol-Error-Rate

Existing minimum-symbol-error-rate equalizers were derived from the symbol-error-rate objective function. Due to the complexity of this objective function, the derivation is not straightforward. In this paper we present a new approach to deriving minimum-symbol-error-rate adaptive equalizers. The problem is formulated as minimizing the norm between two subsequent parameter vectors under the constraint of symbol-error-rate minimization. The constrained optimization problem is then solved with the Lagrange multiplier method, which results in an adaptive algorithm with normalization. Simulation results show that the proposed algorithm outperforms the existing adaptive minimum-symbol-error-rate equalizer in convergence speed and steady-state performance.
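The minimal-disturbance construction described here (minimize the change in the weight vector subject to a constraint, solve via a Lagrange multiplier) is the same one that yields the classical normalized LMS filter. The sketch below uses an MSE-style error rather than the paper's symbol-error-rate constraint, so it is an analogue, not the proposed algorithm:

```python
import numpy as np

def nlms(x, d, taps=4, mu=0.5, eps=1e-8):
    """Normalized LMS: each update is the smallest weight change that
    (approximately) cancels the current error, per the Lagrangian solution."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]      # regressor [x[n], x[n-1], ...]
        e = d[n] - w @ u
        w += mu * e * u / (u @ u + eps)      # normalized step
    return w

# identify a known FIR channel from its input/output (illustrative setup)
rng = np.random.default_rng(5)
h = np.array([1.0, 0.5, -0.25])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]               # d[n] = sum_i h[i] * x[n - i]
w_hat = nlms(x, d)
```

In this noiseless identification problem the weights converge to the channel taps, with the extra tap near zero.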

8. Generalized Mean Detector for Collaborative Spectrum Sensing

In this paper, a unified eigenvalue-based spectrum sensing framework referred to as the generalized mean detector (GMD) is introduced. It generalizes three detectors: (i) the eigenvalue ratio detector (ERD), involving the ratio of the largest to the smallest eigenvalue; (ii) the geometric mean detector (GEMD), involving the ratio of the largest eigenvalue to the geometric mean of the eigenvalues; and (iii) the arithmetic mean detector (ARMD), involving the ratio of the largest eigenvalue to the arithmetic mean of the eigenvalues. The foundation of the proposed unified framework is the calculation of exact analytical moments of the random test statistics of the respective detectors. In this context, we approximate the probability density function (PDF) of the test statistics by Gaussian/Gamma PDFs using the moment-matching method. Finally, we derive closed-form expressions for the decision thresholds of the eigenvalue-based detectors by exchanging the derived exact moments of the test statistics with the moments of the Gaussian/Gamma distribution function. The performance of the eigenvalue-based detectors is compared with that of traditional detectors such as the energy detector (ED) and the cyclostationary detector (CSD), validating the importance of the eigenvalue-based detectors, particularly in realistic wireless cognitive environments. Analytical and simulation results show that the GEMD and the ARMD yield a considerable performance advantage in realistic spectrum sensing scenarios. Moreover, our results based on the proposed simple and tractable approximation approaches are in perfect agreement with the empirical results.
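The simplest member of this family, the eigenvalue ratio detector, is easy to demonstrate: under noise only the sample-covariance eigenvalues are nearly equal, while a common primary signal inflates the largest one. The sensor count, sample size, and signal amplitude below are illustrative:

```python
import numpy as np

def erd_statistic(Y):
    """Eigenvalue ratio detector: largest over smallest eigenvalue of the
    sample covariance of the received matrix Y (sensors x samples)."""
    R = Y @ Y.conj().T / Y.shape[1]
    ev = np.linalg.eigvalsh(R)              # ascending order
    return ev[-1] / ev[0]

rng = np.random.default_rng(4)
L, N = 4, 2000                              # sensors, samples
noise = rng.standard_normal((L, N))
t_h0 = erd_statistic(noise)                 # noise only: ratio close to 1

s = rng.standard_normal(N)                  # common primary signal
t_h1 = erd_statistic(noise + 2.0 * np.outer(np.ones(L), s))
```

A threshold between the two regimes separates the hypotheses; the paper's contribution is computing that threshold in closed form via moment matching.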


9. Energy and Spectral Efficiency of Very Large Multiuser MIMO Systems

A multiplicity of autonomous terminals simultaneously transmits data streams to a compact array of antennas. The array uses imperfect channel-state information derived from transmitted pilots to extract the individual data streams. The power radiated by the terminals can be made inversely proportional to the square root of the number of base-station antennas with no reduction in performance. In contrast, if perfect channel-state information were available, the power could be made inversely proportional to the number of antennas. Lower capacity bounds for maximum-ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) detection are derived. An MRC receiver normally performs worse than ZF and MMSE. However, as power levels are reduced, the cross-talk introduced by the inferior maximum-ratio receiver eventually falls below the noise level, and this simple receiver becomes a viable option. The tradeoff between energy efficiency (as measured in bits/J) and spectral efficiency (as measured in bits/channel use/terminal) is quantified for a channel model that includes small-scale fading but not large-scale fading. It is shown that the use of moderately large antenna arrays can improve the spectral and energy efficiency by orders of magnitude compared to a single-antenna system.

10. Stochastic Decoding of LDPC Codes over GF(q)

Despite the outstanding performance of non-binary low-density parity-check (LDPC) codes over many communication channels, they are not in widespread use yet. This is due to the high implementation complexity of their decoding algorithms, even those that compromise performance for the sake of simplicity. In this paper, we present three algorithms based on stochastic computation to reduce the decoding complexity. The first is a purely stochastic algorithm with error-correcting performance matching that of the sum-product algorithm (SPA) for LDPC codes over Galois fields with low order and a small variable node degree. We also present a modified version which reduces the number of decoding iterations required while remaining purely stochastic and having a low per-iteration complexity. The second algorithm, relaxed half-stochastic (RHS) decoding, combines elements of the SPA and the stochastic decoder and uses successive relaxation to match the error-correcting performance of the SPA. Furthermore, it uses fewer iterations than the purely stochastic algorithm and does not have limitations on the field order and variable node degree of the codes it can decode. The third algorithm, NoX, is a fully stochastic specialization of RHS for codes with a variable node degree of 2 that offers similar performance at a significantly lower computational complexity. We study the performance and complexity of the algorithms, noting that all have lower per-iteration complexity than the SPA, that RHS can have comparable average per-codeword computational complexity, and that NoX has a lower one.

11. SHARP: Spectrum Harvesting with ARQ Retransmission and Probing in Cognitive Radio

In underlay cognitive radio, a secondary user transmits in the transmission band of a primary user without serious degradation in the performance of the primary user. This paper proposes a method of underlay cognitive radio where the secondary pair listens to the primary ARQ feedback to glean information about the primary channel. The secondary transmitter may also probe the channel by transmitting a packet and listening to the primary ARQ, thus getting additional information about the relative strength of the cross channel and primary channel. The method is entitled Spectrum Harvesting with ARQ Retransmission and Probing (SHARP). The probing is done only infrequently to minimize its impact on the primary throughput. Two varieties of spectrum sharing, named conservative and aggressive SHARP, are introduced. Both methods avoid introducing any outage in the primary; their difference is that conservative SHARP leaves primary operations altogether unaffected, while aggressive SHARP may occasionally force the primary to use two transmission cycles instead of one for a packet, in order to harvest better throughput for the secondary. The performance of the proposed system is analyzed, and it is shown that the secondary throughput can be significantly improved via the proposed approach, possibly with a small loss of primary throughput during the transmission as well as the probing period.


12. Per-Antenna Constant Envelope Precoding for Large Multi-User MIMO Systems

We consider the multi-user MIMO broadcast channel with M single-antenna users and N transmit antennas under the constraint that each antenna emits signals having a constant envelope (CE). The motivation is that CE signals facilitate the use of power-efficient RF power amplifiers. Analytical and numerical results show that, under certain mild conditions on the channel gains, for a fixed M, an array gain is achievable even under the stringent per-antenna CE constraint. Essentially, for a fixed M, at sufficiently large N the total transmitted power can be reduced with increasing N while maintaining a fixed information rate to each user. Simulations for the i.i.d. Rayleigh fading channel show that the total transmit power can be reduced linearly with increasing N (i.e., an O(N) array gain). We also propose a precoding scheme which finds near-optimal CE signals to be transmitted and has O(MN) complexity. Also, in terms of the total transmit power required to achieve a fixed desired information sum-rate, despite the stringent per-antenna CE constraint, the proposed CE precoding scheme performs close to the sum-capacity-achieving scheme for an average-only total-transmit-power-constrained channel.

IEEE TRANSACTIONS ON IMAGE PROCESSING

1. Perceptual Quality Metric With Internal Generative Mechanism

Objective image quality assessment (IQA) aims to evaluate image quality consistently with human perception. Most existing perceptual IQA metrics cannot accurately represent degradations from different types of distortion; e.g., existing structural similarity metrics perform well on content-dependent distortions but not as well as peak signal-to-noise ratio (PSNR) on content-independent distortions. In this paper, we integrate the merits of the existing IQA metrics, guided by the recently revealed internal generative mechanism (IGM). The IGM indicates that the human visual system actively predicts sensory information and tries to avoid residual uncertainty for image perception and understanding. Inspired by the IGM theory, we adopt an autoregressive prediction algorithm to decompose an input scene into two portions: the predicted portion, with the predicted visual content, and the disorderly portion, with the residual content. Distortions on the predicted portion degrade the primary visual information, and structural similarity procedures are employed to measure its degradation; distortions on the disorderly portion mainly change the uncertain information, and PSNR is employed for it. Finally, according to the noise-energy deployment on the two portions, we combine the two evaluation results to acquire the overall quality score. Experimental results on six publicly available databases demonstrate that the proposed metric is comparable with state-of-the-art quality metrics.
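PSNR, the metric applied to the disorderly portion, is a one-liner over the mean squared error; the peak value of 255 below assumes 8-bit images:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0        # uniform error of 10 grey levels, so MSE = 100
db = psnr(ref, noisy)     # 10 * log10(255^2 / 100), roughly 28.1 dB
```

The proposed metric pairs this pixel-wise measure with a structural similarity term on the predicted portion and fuses the two scores by noise-energy weighting.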

2. Local Edge-Preserving Multiscale Decomposition for High Dynamic Range Image Tone Mapping

Local energy pattern, a statistical histogram-based representation, is proposed for texture classification. First, we use normalized local-oriented energies to generate local feature vectors, which describe the local structures distinctively and are less sensitive to imaging conditions. Then, each local feature vector is quantized by self-adaptive quantization thresholds determined in the learning stage using histogram specification, and the quantized local feature vector is transformed to a number by N-nary coding, which helps to preserve more structure information during vector quantization. Finally, the frequency histogram is used as the representation feature. The performance is benchmarked by material categorization on the KTH-TIPS and KTH-TIPS2-a databases. Our method is compared with typical statistical approaches, such as basic image features, local binary pattern (LBP), local ternary pattern, completed LBP, Weber local descriptor, and VZ algorithms (VZ-MR8 and VZ-Joint). The results show that our method is superior to the other methods on the KTH-TIPS2-a database, and achieves competitive performance on the KTH-TIPS database. Furthermore, we extend the representation from static images to dynamic textures, and achieve favorable recognition results on the University of California at Los Angeles (UCLA) dynamic texture database.


3. Fast Positive Deconvolution of Hyperspectral Images

In this brief, we provide an efficient scheme for performing deconvolution of large hyperspectral images under a positivity constraint, while accounting for spatial and spectral smoothness of the data.

4. Fuzzy C-Means Clustering With Local Information and Kernel Metric for Image Segmentation

In this paper, we present an improved fuzzy C-means (FCM) algorithm for image segmentation, introducing a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends simultaneously on the spatial distance of all neighboring pixels and their gray-level difference. Using this factor, the new algorithm can accurately estimate the damping extent of neighboring pixels. To further enhance its robustness to noise and outliers, we introduce a kernel distance measure into its objective function. The new algorithm adaptively determines the kernel parameter using a fast bandwidth selection rule based on the distance variance of all data points in the collection. Furthermore, the tradeoff weighted fuzzy factor and the kernel distance measure are both parameter free. Experimental results on synthetic and real images show that the new algorithm is effective and efficient, and relatively independent of the type of noise.
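For reference, the baseline FCM iteration that the paper augments (with the weighted fuzzy factor and kernel metric) alternates between membership and center updates; a 1-D toy version on two grey-level clumps:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100):
    """Baseline fuzzy C-means on 1-D data: no spatial factor, no kernel metric.
    Alternates membership (U) and center (V) updates that minimize
    sum_ik U[i,k]^m * |X[k] - V[i]|^2."""
    V = np.linspace(X.min(), X.max(), c)       # spread initial centers over the range
    for _ in range(iters):
        d = np.abs(X[None, :] - V[:, None]) + 1e-12   # point-to-center distances
        U = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=0)
        V = (U ** m @ X) / (U ** m).sum(axis=1)       # fuzzily weighted centers
    return U, V

# two well-separated 1-D clumps around 0 and 10
X = np.concatenate([np.linspace(-1, 1, 20), np.linspace(9, 11, 20)])
U, V = fcm(X)
```

The centers settle near the clump means, and each point's membership concentrates on its nearby center; the paper's additions make these memberships robust to noisy neighborhoods.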

5. Image-Difference Prediction: From Grayscale to Color

Abstract—Existing image-difference measures show excellent accuracy in predicting distortions, such as lossy compression, noise, and blur. Their performance on certain other distortions could be improved; one example of this is gamut mapping. This is partly because they either do not interpret chromatic information correctly or they ignore it entirely. We present an image-difference framework that comprises image normalization, feature extraction, and feature combination. Based on this framework, we create image-difference measures by selecting specific implementations for each of the steps. Particular emphasis is placed on using color information to improve the assessment of gamut-mapped images. Our best image-difference measure shows significantly higher prediction accuracy on a gamut-mapping dataset than all other evaluated measures. Index Terms—Color, image difference, image quality.

6. Modified Gradient Search for Level Set Based Image Segmentation

Abstract—Level set methods are a popular way to solve the image segmentation problem. The solution contour is found by solving an optimization problem where a cost functional is minimized. Gradient descent methods are often used to solve this optimization problem since they are very easy to implement and applicable to general nonconvex functionals. They are, however, sensitive to local minima and often display slow convergence. Traditionally, cost functionals have been modified to avoid these problems. In this paper, we instead propose using two modified gradient descent methods, one using a momentum term and one based on resilient propagation. These methods are commonly used in the machine learning community. In a series of 2-D/3-D experiments using real and synthetic data with ground truth, the modifications are shown to reduce the sensitivity to local optima and to increase the convergence rate. The parameter sensitivity is also investigated. The proposed methods are very simple modifications of the basic method, and are directly compatible with any type of level set implementation. Downloadable reference code with examples is available online.
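The momentum modification is the standard one from the machine-learning literature; a minimal sketch on a toy 1-D cost (not a level-set implementation) is:

```python
import numpy as np

def gd_momentum(grad, x0, lr=0.1, beta=0.9, n_iter=200):
    """Gradient descent with a momentum term: the velocity v accumulates
    past gradients, damping oscillations and helping pass shallow minima."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iter):
        v = beta * v - lr * grad(x)
        x = x + v
    return x

# toy example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_min = gd_momentum(lambda x: 2.0 * (x - 3.0), x0=[0.0])
```

In the level-set setting, x would be the level-set function (or its parameters) and grad the functional's variational derivative; the momentum update itself is unchanged.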

7. Variational Approach for the Fusion of Exposure Bracketed Pairs

Abstract—When taking pictures of a dark scene with artificial lighting, ambient light is not sufficient for most cameras to obtain both accurate color and detail information. The exposure bracketing feature usually available in many camera models enables the user to obtain a series of pictures taken in rapid succession with different exposure times; the implicit idea is that the user picks the best image from this set. But in many cases, none of these images is good enough; in general, good brightness and color information are retained from longer-exposure settings, whereas sharp details are obtained from shorter ones. In this paper, we propose a variational method for automatically combining an exposure-bracketed pair of images within a single picture that reflects the desired properties of each one. We introduce an energy functional consisting of two terms, one measuring the difference in edge information with the short-exposure image and the other measuring the local color difference with a warped version of the long-exposure image. This method is able to handle camera and subject motion as well as noise, and the results compare favorably with the state of the art.

8. Catching a Rat by Its Edglets

Abstract—Computer vision is a noninvasive method for monitoring laboratory animals. In this article, we propose a robust tracking method that is capable of extracting a rodent from a frame under uncontrolled normal laboratory conditions. The method consists of two steps. First, a sliding window combines three features to coarsely track the animal. Then, it uses the edglets of the rodent to adjust the tracked region to the animal’s boundary. The method achieves an average tracking error that is smaller than a representative state-of-the-art method.

9. Image Denoising With Dominant Sets by a Coalitional Game Approach

Abstract—Dominant sets are a new graph partition method for pairwise data clustering proposed by Pavan and Pelillo. We address the problem of dominant sets with a coalitional game model, in which each data point is treated as a player and similar data points are encouraged to group together for cooperation. We propose betrayal and hermit rules to describe the cooperative behaviors among the players. After applying the betrayal and hermit rules, an optimal and stable graph partition emerges, and all the players in the partition will not change their groups. For computational feasibility, we design an approximate algorithm for finding a dominant set of mutually similar players and then apply the algorithm to an application such as image denoising. In image denoising, every pixel is treated as a player who seeks similar partners according to its patch appearance in its local neighborhood. By averaging the noisy effects with the similar pixels in the dominant sets, we improve nonlocal means image denoising to restore the intrinsic structure of the original images and achieve competitive denoising results with the state-of-the-art methods in visual and quantitative qualities.

10. Human Detection in Images via Piecewise Linear Support Vector Machines

Abstract—Human detection in images is challenged by the view and posture variation problem. In this paper, we propose a piecewise linear support vector machine (PL-SVM) method to tackle this problem. The motivation is to exploit the piecewise discriminative function to construct a nonlinear classification boundary that can discriminate multiview and multiposture human bodies from the backgrounds in a high-dimensional feature space. A PL-SVM training is designed as an iterative procedure of feature space division and linear SVM training, aiming at the margin maximization of local linear SVMs. Each piecewise SVM model is responsible for a subspace, corresponding to a human cluster of a special view or posture. In the PL-SVM, a cascaded detector is proposed with block orientation features and a histogram of oriented gradient features. Extensive experiments show that compared with several recent SVM methods, our method reaches the state of the art in both detection accuracy and computational efficiency, and it performs best when dealing with low-resolution human regions in cluttered backgrounds.

11. Nonedge-Specific Adaptive Scheme for Highly Robust Blind Motion Deblurring of Natural Images

Abstract—Blind motion deblurring estimates a sharp image from a motion blurred image without knowledge of the blur kernel. Although significant progress has been made on tackling this problem, existing methods, when applied to highly diverse natural images, are still far from stable. This paper focuses on the robustness of blind motion deblurring methods toward image diversity—a critical problem that has been previously neglected for years. We classify the existing methods into two schemes and analyze their robustness using an image set consisting of 1.2 million natural images. The first scheme is edge-specific, as it relies on the detection and prediction of large-scale step edges. This scheme is sensitive to the diversity of the image edges in natural images. The second scheme is nonedge-specific and explores various image statistics, such as the prior distributions. This scheme is sensitive to statistical variation over different images. Based on the analysis, we address the robustness by proposing a novel nonedge-specific adaptive scheme (NEAS), which features a new prior that is adaptive to the variety of textures in natural images. By comparing the performance of NEAS against the existing methods on a very large image set, we demonstrate its advance beyond the state-of-the-art.

12. Missing Texture Reconstruction Method Based on Error Reduction Algorithm Using Fourier Transform Magnitude Estimation Scheme

Abstract—A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme of Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
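The phase-retrieval core of the ER algorithm alternates between a Fourier-magnitude constraint and an object-domain constraint; a generic Gerchberg–Saxton-style sketch of that core (the paper's patch-selection and magnitude-estimation steps are omitted) is:

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=500, seed=0):
    """ER phase retrieval: alternately enforce the known Fourier magnitude
    and a nonnegative, support-limited object-domain constraint."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support       # random nonnegative start
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))    # keep phase, fix magnitude
        x = np.real(np.fft.ifft2(X))
        x = np.clip(x, 0.0, None) * support         # object-domain constraint
    return x
```

A known property of the ER iteration is that the Fourier-domain error is nonincreasing, which is what the paper monitors when selecting similar known patches.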

13. Video Deblurring Algorithm Using Accurate Blur Kernel Estimation and Residual Deconvolution Based on a Blurred–Unblurred Frame Pair

Abstract—Blurred frames may happen sparsely in a video sequence acquired by consumer devices such as digital camcorders and digital cameras. In order to avoid visually annoying artifacts due to those blurred frames, this paper presents a novel motion deblurring algorithm in which a blurred frame can be reconstructed utilizing the high-resolution information of adjacent unblurred frames. First, a motion-compensated predictor for the blurred frame is derived from its neighboring unblurred frame via specific motion estimation. Then, an accurate blur kernel, which is difficult to directly obtain from the blurred frame itself, is computed using both the predictor and the blurred frame. Next, a residual deconvolution is applied to both of those frames in order to reduce the ringing artifacts inherently caused by conventional deconvolution. The blur kernel estimation and deconvolution processes are iteratively performed for the deblurred frame. Simulation results show that the proposed algorithm provides superior deblurring results over conventional deblurring algorithms while preserving details and reducing ringing artifacts.

14. Comments on “A Robust Fuzzy Local Information C-Means Clustering Algorithm”

Abstract—In a recent paper, Krinidis and Chatzis proposed a variation of the fuzzy c-means algorithm for image clustering. The local spatial and gray-level information are incorporated in a fuzzy way through an energy function. Local minimizers of the designed energy function are derived to obtain the fuzzy membership of each pixel and the cluster centers. In this paper, it is shown that the local minimizers of Krinidis and Chatzis, obtained in an iterative manner, are not exclusively solutions for true local minimizers of their designed energy function. Thus, the iterative scheme of Krinidis and Chatzis fails to converge to the correct local minima of the designed energy function not because it becomes trapped in local minima, but because of the design of the energy function itself.

15. Multiscale Image Fusion Using the Undecimated Wavelet Transform With Spectral Factorization and Nonorthogonal Filter Banks

Abstract—Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities, which usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.

16. Efficient Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution

Abstract—This paper proposes an efficient method to modify histograms and enhance contrast in digital images. Enhancement plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique that improves the brightness of dimmed images via the gamma correction and probability distribution of luminance pixels. To enhance video, the proposed image enhancement method uses temporal information regarding the differences between each frame to reduce computational complexity. Experimental results demonstrate that the proposed method produces enhanced images of comparable or higher quality than those produced using previous state-of-the-art methods. Index Terms—Contrast enhancement, gamma correction, histogram equalization, histogram modification.
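A commonly described form of adaptive gamma correction with a weighting distribution derives a per-intensity gamma from a weighted histogram CDF; the following sketch follows that recipe (the parameter alpha and the exact normalization are assumptions for illustration, not details taken from the abstract):

```python
import numpy as np

def agcwd(img, alpha=0.5):
    """Adaptive gamma correction with a weighting distribution (sketch):
    per-intensity gamma = 1 - weighted CDF, so dark levels are lifted more."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    pdf = hist / hist.sum()
    lo, hi = pdf.min(), pdf.max()
    pdf_w = hi * ((pdf - lo) / (hi - lo + 1e-12)) ** alpha  # weighting distribution
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    levels = np.arange(256) / 255.0
    lut = np.round(255.0 * levels ** (1.0 - cdf_w)).astype(np.uint8)
    return lut[img]                                         # apply lookup table
```

Because the transform is a single 256-entry lookup table, applying it to video frames is cheap, which matches the abstract's emphasis on low computational complexity.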

17. Wavelet Bayesian Network Image Denoising

Abstract—From the perspective of the Bayesian approach, the denoising problem is essentially a prior probability modeling and estimation task. In this paper, we propose an approach that exploits a hidden Bayesian network, constructed from wavelet coefficients, to model the prior probability of the original image. Then, we use the belief propagation (BP) algorithm, which estimates a coefficient based on all the coefficients of an image, as the maximum-a-posterior (MAP) estimator to derive the denoised wavelet coefficients. We show that if the network is a spanning tree, the standard BP algorithm can perform MAP estimation efficiently. Our experiment results demonstrate that, in terms of the peak-signal-to-noise-ratio and perceptual quality, the proposed approach outperforms state-of-the-art algorithms on several images, particularly in the textured regions, with various amounts of white Gaussian noise.

18. Nonlinearity Detection in Hyperspectral Images Using a Polynomial Post-Nonlinear Mixing Model

Abstract—This paper studies a nonlinear mixing model for hyperspectral image unmixing and nonlinearity detection. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated by polynomials leading to a polynomial post-nonlinear mixing model. We have shown in a previous paper that the parameters involved in the resulting model can be estimated using least squares methods. A generalized likelihood ratio test based on the estimator of the nonlinearity parameter is proposed to decide whether a pixel of the image results from the commonly used linear mixing model or from a more general nonlinear mixing model. To compute the test statistic associated with the nonlinearity detection, we propose to approximate the variance of the estimated nonlinearity parameter by its constrained Cramér–Rao bound. The performance of the detection strategy is evaluated via simulations conducted on synthetic and real data. More precisely, synthetic data have been generated according to the standard linear mixing model and three nonlinear models from the literature. The real data investigated in this study are extracted from the Cuprite image, in which some minerals appear to be nonlinearly mixed. Finally, it is interesting to note that the estimated abundance maps obtained with the post-nonlinear mixing model are in good agreement with results obtained in previous studies.

19. Image Quality Assessment Using Multi-Method Fusion

Abstract—A new methodology for objective image quality assessment (IQA) with multi-method fusion (MMF) is presented in this paper. The research is motivated by the observation that there is no single method that can give the best performance in all situations. To achieve MMF, we adopt a regression approach. The new MMF score is set to be the nonlinear combination of scores from multiple methods with suitable weights obtained by a training process. In order to improve the regression results further, we divide distorted images into three to five groups based on the distortion types and perform regression within each group, which is called “context-dependent MMF” (CD-MMF). One task in CD-MMF is to determine the context automatically, which is achieved by a machine learning approach. To further reduce the complexity of MMF, we apply algorithms to select a small subset from the candidate method set. The result is very good even if only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases.
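The fusion idea can be illustrated with a plain linear regression stand-in for the paper's support vector regression (the scores below are synthetic, purely for illustration):

```python
import numpy as np

def fuse_scores(method_scores, subjective, new_scores):
    """Linear multi-method fusion: fit weights (plus a bias) mapping several
    metrics' scores to subjective scores, then score new images."""
    A = np.column_stack([method_scores, np.ones(len(method_scores))])
    w, *_ = np.linalg.lstsq(A, subjective, rcond=None)      # trained weights
    B = np.column_stack([new_scores, np.ones(len(new_scores))])
    return B @ w
```

The paper's CD-MMF additionally routes each image to a distortion-type group and fits one such regressor per group; the nonlinear SVR replaces the least-squares fit shown here.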

20. Unified Blind Method for Multi-Image Super-Resolution and Single/Multi-Image Blur Deconvolution

Abstract—This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs. The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image. The blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges, while filtering out weak structures. The parameters are updated gradually so that the number of salient edges used for blur estimation increases at each iteration. For better performance, the blur estimation is done in the filter domain rather than the pixel domain, i.e., using the gradients of the LR and HR images. The regularization term for the blur is Gaussian (L2 norm), which allows for fast noniterative optimization in the frequency domain. We accelerate the processing time of SR reconstruction by separating the upsampling and registration processes from the optimization procedure. Simulation results on both synthetic and real-life images (from a novel computational imager) confirm the robustness and effectiveness of the proposed method.
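The claim that a Gaussian (L2) regularizer admits fast noniterative optimization in the frequency domain corresponds to the classical Tikhonov closed form; a generic sketch of that single step (not the paper's full alternating-minimization scheme, which also uses an HMRF image prior) is:

```python
import numpy as np

def l2_deconv(y, h, lam=1e-2):
    """Tikhonov-regularized deconvolution: the minimizer of
    ||h*x - y||^2 + lam*||x||^2, computed in closed form via the FFT."""
    H = np.fft.fft2(h, s=y.shape)                   # zero-padded kernel spectrum
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# demo: blur a random image (circular convolution) and invert the blur
rng = np.random.default_rng(0)
truth = rng.random((16, 16))
h = np.array([[0.6, 0.2], [0.2, 0.0]])              # kernel with no spectral zeros
y = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(h, s=truth.shape)))
x_hat = l2_deconv(y, h, lam=1e-6)
```

Because convolution diagonalizes under the DFT, each frequency is solved independently, which is why the blur update needs no iterations.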

21. In-Plane Rotation and Scale Invariant Clustering Using Dictionaries

Abstract—In this paper, we present an approach that simultaneously clusters images and learns dictionaries from the clusters. The method learns dictionaries and clusters images in the radon transform domain. The main feature of the proposed approach is that it provides both in-plane rotation and scale invariant clustering, which is useful in numerous applications, including content-based image retrieval (CBIR). We demonstrate the effectiveness of our rotation and scale invariant clustering method on a series of CBIR experiments. Experiments are performed on the Smithsonian isolated leaf, Kimia shape, and Brodatz texture datasets. Our method provides both good retrieval performance and greater robustness compared to standard Gabor-based and three state-of-the-art shape-based methods that have similar objectives.

22. Analysis Operator Learning and its Application to Image Reconstruction

Abstract—Practical image-acquisition systems are often modeled as a continuous-domain prefilter followed by an ideal sampler, where generalized samples are obtained after convolution with the impulse response of the device. In this paper, our goal is to interpolate images from a given subset of such samples. We express our solution in the continuous domain, considering consistent resampling as a data-fidelity constraint. To make the problem well posed and ensure edge-preserving solutions, we develop an efficient anisotropic regularization approach that is based on an improved version of the edge-enhancing anisotropic diffusion equation. Following variational principles, our reconstruction algorithm minimizes successive quadratic cost functionals. To ensure fast convergence, we solve the corresponding sequence of linear problems by using multigrid iterations that are specifically tailored to their sparse structure. We conduct illustrative experiments and discuss the potential of our approach both in terms of algorithmic design and reconstruction quality. In particular, we present results that use as little as 2% of the image samples.

23. Robust Ellipse Fitting Based on Sparse Combination of Data Points

Abstract—Ellipse fitting is widely applied in the fields of computer vision and automatic industry control, in which the procedure of ellipse fitting often follows the preprocessing step of edge detection in the original image. Therefore, the ellipse fitting method also depends on the accuracy of edge detection besides its own performance, especially because outliers and edge-point errors introduced by edge detection can cause severe performance degradation. In this paper, we develop a robust ellipse fitting method to alleviate the influence of outliers. The proposed algorithm solves ellipse parameters by linearly combining a subset of (“more accurate”) data points (formed from edge points) rather than all data points (which contain possible outliers). In addition, considering that squaring the fitting residuals can magnify the contributions of these extreme data points, our algorithm replaces it with the absolute residuals to reduce this influence. Moreover, the norm of data point errors is bounded, and the worst case performance optimization is formed to be robust against data point errors. The resulting mixed l1–l2 optimization problem is further derived as a second-order cone programming one and solved by the computationally efficient interior-point methods. Note that the fitting approach developed in this paper specifically deals with the overdetermined system, whereas the current sparse representation theory is only applied to underdetermined systems. Therefore, the proposed algorithm can be looked upon as an extended application and development of the sparse representation theory. Some simulated and experimental examples are presented to illustrate the effectiveness of the proposed ellipse fitting approach. Index Terms—Diameter control, edge points, ellipse fitting, iris recognition, least squares (LS), minimax criterion, outliers, overdetermined system, silicon single crystal, sparse representation.
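For contrast with the robust method, the standard algebraic least-squares conic fit (the baseline the paper improves upon, and one that is sensitive to exactly the outliers it addresses) can be written in a few lines:

```python
import numpy as np

def fit_conic(x, y):
    """Algebraic least-squares conic fit: the coefficient vector of
    A x^2 + B xy + C y^2 + D x + E y + F = 0 is the smallest right
    singular vector of the design matrix."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# demo: points on the ellipse x^2/4 + y^2 = 1
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
coef = fit_conic(2.0 * np.cos(t), np.sin(t))
```

Squaring the algebraic residuals, as this baseline implicitly does, lets a few gross outliers dominate the fit; the paper's l1 residuals and point-subset selection are aimed at precisely that failure mode.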

24. Learning Dynamic Hybrid Markov Random Field for Image Labeling

Abstract—Using shape information has attracted increasing attention in the task of image labeling. In this paper, we present a dynamic hybrid Markov random field (DHMRF), which explicitly captures middle-level object shape and low-level visual appearance (e.g., texture and color) for image labeling. Each node in the DHMRF is described by either a deformable template or an appearance model as visual prototype. On the other hand, the edges encode two types of interactions: co-occurrence and spatial layered context, with respect to the labels and prototypes of connected nodes. To learn the DHMRF model, an iterative algorithm is designed to automatically select the most informative features and estimate model parameters. The algorithm achieves high computational efficiency since a branch-and-bound schema is introduced to estimate model parameters. Compared with previous methods, which usually employ implicit shape cues, our DHMRF model seamlessly integrates color, texture, and shape cues to infer the labeling output, and thus produces more accurate and reliable results. Extensive experiments validate its superiority over other state-of-the-art methods in terms of recognition accuracy and implementation efficiency on: 1) the MSRC 21-class dataset, and 2) the Lotus Hill Institute 15-class dataset.

25. Coupled Variational Image Decomposition and Restoration Model for Blurred Cartoon-Plus-Texture Images With Missing Pixels

Abstract—In this paper, we develop a decomposition model to restore blurred images with missing pixels. Our assumption is that the underlying image is the superposition of cartoon and texture components. We use the total variation norm and its dual norm to regularize the cartoon and texture, respectively. We propose an efficient numerical algorithm based on the splitting versions of the augmented Lagrangian method to solve the problem. Theoretically, the existence of a minimizer to the energy function and the convergence of the algorithm are guaranteed. In contrast to recently developed methods for deblurring images, the proposed algorithm not only gives the restored image, but also gives a decomposition of cartoon and texture parts. These two parts can be further used in segmentation and inpainting problems. Numerical comparisons between this algorithm and some state-of-the-art methods are also reported.

26. Computationally Tractable Stochastic Image Modeling Based on Symmetric Markov Mesh Random Fields

Abstract—In this paper, the properties of a new class of causal Markov random fields, named symmetric Markov mesh random field, are initially discussed. It is shown that the symmetric Markov mesh random fields from the upper corners are equivalent to the symmetric Markov mesh random fields from the lower corners. Based on this new random field, a symmetric, corner-independent, and isotropic image model is then derived which incorporates the dependency of a pixel on all its neighbors. The introduced image model comprises the product of several local 1D density and 2D joint density functions of pixels in an image thus making it computationally tractable and practically feasible by allowing the use of histogram and joint histogram approximations to estimate the model parameters. An image restoration application is also presented to confirm the effectiveness of the model developed. The experimental results demonstrate that this new model provides an improved tool for image modeling purposes compared to the conventional Markov random field models.

27. Image Sharpness Assessment Based on Local Phase Coherence

Abstract—Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we understand the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to be applied to complex coefficients spread in three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms.

28. Colorization-Based Compression Using Optimization

Abstract—In this paper, we formulate the colorization-based coding problem into an optimization problem, i.e., an L1 minimization problem. In colorization-based coding, the encoder chooses a few representative pixels (RP) for which the chrominance values and the positions are sent to the decoder, whereas in the decoder, the chrominance values for all the pixels are reconstructed by colorization methods. The main issue in colorization-based coding is how to extract the RP well so that both the compression rate and the quality of the reconstructed color image are good. By formulating the colorization-based coding into an L1 minimization problem, it is guaranteed that, given the colorization matrix, the chosen set of RP becomes the optimal set in the sense that it minimizes the error between the original and the reconstructed color image. In other words, for a fixed error value and a given colorization matrix, the chosen set of RP is the smallest set possible. We also propose a method to construct the colorization matrix that colorizes the image in a multiscale manner. This, combined with the proposed RP extraction method, allows us to choose a very small set of RP. It is shown experimentally that the proposed method outperforms conventional colorization-based coding methods as well as the JPEG standard and is comparable with the JPEG2000 compression standard, both in terms of the compression rate and the quality of the reconstructed color image.

29. A Generalized Random Walk With Restart and Its Application in Depth Up-Sampling and Interactive Segmentation

Abstract—In this paper, the origin of random walk with restart (RWR) and its generalization are described. It is well-known that the random walk (RW) and the anisotropic diffusion models share the same energy functional, i.e., the former provides a steady-state solution and the latter gives a flow solution. In contrast, the theoretical background of the RWR scheme is different from that of the diffusion-reaction equation, although the restarting term of the RWR plays a role similar to the reaction term of the diffusion-reaction equation. The behaviors of the two approaches with respect to outliers reveal that they possess different attributes in terms of data propagation. This observation leads to the derivation of a new energy functional, where both volumetric heat capacity and thermal conductivity are considered together, and provides a common framework that unifies both the RW and the RWR approaches, in addition to other regularization methods. The proposed framework allows the RWR to be generalized (GRWR) in semilocal and nonlocal forms. The experimental results demonstrate the superiority of GRWR over existing regularization approaches in terms of depth map up-sampling and interactive image segmentation.
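The basic RWR iteration that the paper generalizes can be sketched as a restarted power iteration over a row-normalized affinity matrix (a small graph example, not a depth-map pipeline):

```python
import numpy as np

def random_walk_restart(W, seed_idx, c=0.15, n_iter=100):
    """RWR: iterate the walker distribution over a row-normalized affinity
    matrix, restarting at the seed node with probability c per step."""
    P = W / W.sum(axis=1, keepdims=True)            # transition probabilities
    e = np.zeros(len(W))
    e[seed_idx] = 1.0                               # restart distribution
    r = e.copy()
    for _ in range(n_iter):
        r = (1.0 - c) * P.T @ r + c * e             # propagate, then restart
    return r
```

The steady-state r ranks nodes by relevance to the seed; in interactive segmentation, seeds are user scribbles and W encodes pixel affinities.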

30. Library-Based Illumination Synthesis for Critical CMOS Patterning

Abstract—In optical microlithography, the illumination source for critical complementary metal–oxide–semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources permits us to increase pattern fidelity and relax mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full wafer optimization.

31. Variational Optical Flow Estimation Based on Stick Tensor Voting

Abstract—Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark.


32. GPU Accelerated Edge-Region Based Level Set Evolution Constrained by 2D Gray-Scale Histogram

Abstract—Due to its intrinsic ability to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with image data and is defined from the proposed LSE. The proposed LSM is then implemented on NVIDIA graphics processing units to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information make it possible to detect objects with and without edges, and the 2D histogram information ensures the effectiveness of the method in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.

33. Orientation Imaging Microscopy With Optimized Convergence Angle Using CBED Patterns in TEMs

Grain size statistics, texture, and grain boundary distribution are microstructural characteristics that greatly influence materials properties. These characteristics can be derived from an orientation map obtained using orientation imaging microscopy (OIM) techniques. The OIM techniques are generally performed using transmission electron microscopy (TEM) for nanomaterials. Although some of these techniques have limited applicability in certain situations, others have limited availability because of the external hardware required. In this paper, an automated method to generate orientation maps using convergent beam electron diffraction patterns obtained in a conventional TEM setup is presented. This method is based upon dynamical diffraction theory, which describes electron diffraction more accurately than the kinematical theory used by several existing OIM techniques. In addition, the method of this paper uses wide-angle convergent beam electron diffraction for performing OIM. It is shown in this paper that the use of the wide-angle convergent electron beam provides additional information that is not available otherwise. Together, the presented method exploits the additional information and combines it with the calculations from the dynamical theory to provide accurate orientation maps in a conventional TEM setup. The automated method of this paper is applied to a platinum thin film sample. The presented method correctly identified the texture preference in the sample.

34. Multivariate Slow Feature Analysis and Decorrelation Filtering for Blind Source Separation

We generalize the method of Slow Feature Analysis (SFA) for vector-valued functions of several variables and apply it to the problem of blind source separation, in particular to image separation. It is generally necessary to use multivariate SFA instead of univariate SFA for separating multi-dimensional signals. For the linear case, an exact mathematical analysis is given, which shows in particular that the sources are perfectly separated by SFA if and only if they and their first-order derivatives are uncorrelated. When the sources are correlated, we apply the following technique called Decorrelation Filtering: use a linear filter to decorrelate the sources and their derivatives in the given mixture, then apply the unmixing matrix obtained on the filtered mixtures to the original mixtures. If the filtered sources are perfectly separated by this matrix, so are the original sources. A decorrelation filter can be numerically obtained by solving a nonlinear optimization problem. This technique can also be applied to other linear separation methods, whose output signals are decorrelated, such as ICA. When there are more mixtures than sources, one can determine the actual number of sources by using a regularized version of SFA with decorrelation filtering. Extensive numerical experiments using SFA and ICA


with decorrelation filtering, supported by a mathematical analysis, demonstrate the potential of our methods for solving problems involving blind source separation.

35. A Variational Approach for Pan-Sharpening

Pan-sharpening is a process of acquiring a high resolution multispectral (MS) image by combining a low resolution MS image with a corresponding high resolution panchromatic (PAN) image. In this paper, we propose a new variational pan-sharpening method based on three basic assumptions: 1) the gradient of the PAN image could be a linear combination of those of the pan-sharpened image bands; 2) the upsampled low resolution MS image could be a degraded form of the pan-sharpened image; and 3) the gradient in the spectrum direction of the pan-sharpened image should approximate those of the upsampled low resolution MS image. An energy functional, whose minimizer corresponds to the best pan-sharpened result, is built based on these assumptions. We discuss the existence of a minimizer of our energy and describe the numerical procedure based on the split Bregman algorithm. To verify the effectiveness of our method, we qualitatively and quantitatively compare it with some state-of-the-art schemes using QuickBird and IKONOS data. In particular, we classify the existing quantitative measures into four categories and choose two representatives in each category for more reasonable quantitative evaluation. The results demonstrate the effectiveness and stability of our method in terms of the related evaluation benchmarks. Besides, the computational efficiency comparison with other variational methods also shows that our method is remarkably efficient.

36. Segment Adaptive Gradient Angle Interpolation

We introduce a new edge-directed interpolator based on locally defined, straight-line approximations of image isophotes. Spatial derivatives of image intensity are used to describe the principal behavior of pixel-intersecting isophotes in terms of their slopes. The slopes are determined by inverting a tridiagonal matrix and are forced to vary linearly from pixel to pixel within segments. Image resizing is performed by interpolating along the approximated isophotes. The proposed method can accommodate arbitrary scaling factors, provides state-of-the-art results in terms of PSNR as well as other quantitative visual quality metrics, and has the advantage of reduced computational complexity that is directly proportional to the number of pixels.
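The slope computation above hinges on inverting a tridiagonal matrix, which is conventionally done in O(n) with the Thomas algorithm. The following is a generic sketch of that solver (Python for illustration; the paper's actual system matrix is not reproduced here):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (n-1 entries),
    b = diagonal (n), c = super-diagonal (n-1), d = right-hand side (n)."""
    n = len(b)
    cp = [0.0] * (n - 1)
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward elimination
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For the system with diagonal 2 and off-diagonals 1, the right-hand side [3, 4, 3] yields the all-ones solution.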

37. Texture Enhanced Histogram Equalization Using TV-L1 Image Decomposition

Histogram transformation defines a class of image processing operations that are widely applied in the implementation of data normalization algorithms. In this paper, we present a new variational approach for image enhancement that is constructed to alleviate the intensity saturation effects introduced by standard contrast enhancement (CE) methods based on histogram equalization. We initially apply total variation (TV) minimization with an L1 fidelity term to decompose the input image into cartoon and texture components. Contrary to previous work that relies solely on the information encompassed in the distribution of the intensity information, the texture information is also employed here to emphasize the contribution of the local textural features in the CE process. This is achieved by implementing a nonlinear histogram warping CE strategy that is able to maximize the information content in the transformed image. Our experimental study addresses the CE of a wide variety of image data, and comparative evaluations are provided to illustrate that our method produces better results than conventional CE strategies.
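As context for the baseline this paper improves on, standard histogram equalization can be sketched as follows (a plain-Python illustration on integer grayscale data; the `equalize` helper and its layout are our own, not the paper's):

```python
def equalize(img, levels=256):
    """Classic histogram equalization on a 2-D list of ints in [0, levels).
    Maps the cumulative distribution onto the full intensity range."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, s = [], 0
    for h in hist:
        s += h
        cdf.append(s)
    cdf_min = min(c for c in cdf if c > 0)
    n = len(flat)  # assumes a non-constant image (n > cdf_min)
    lut = [round((c - cdf_min) * (levels - 1) / (n - cdf_min)) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

A low-contrast input occupying levels 100..103 is stretched to span the full 0..255 range, which is exactly the saturation-prone behavior the paper's texture-aware warping moderates.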

38. Novel True-Motion Estimation Algorithm and Its Application to Motion Compensated Temporal Frame Interpolation

In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate


up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by blending the forward and backward predictions. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better than that of existing MCFRUC techniques.

39. Nonlocal Regularization of Inverse Problems: A Unified Variational Framework

We introduce a unifying energy minimization framework for nonlocal regularization of inverse problems. In contrast to the weighted sum of square differences between image pixels used by current schemes, the proposed functional is an unweighted sum of inter-patch distances. We use robust distance metrics that promote the averaging of similar patches, while discouraging the averaging of dissimilar patches. We show that the first iteration of a majorize–minimize algorithm to minimize the proposed cost function is similar to current nonlocal methods. The reformulation thus provides a theoretical justification for the heuristic approach of iterating nonlocal schemes, which reestimate the weights from the current image estimate. Thanks to the reformulation, we now understand that the widely reported alias amplification associated with iterative nonlocal methods is caused by the convergence to a local minimum of the nonconvex penalty. We introduce an efficient continuation strategy to overcome this problem. The similarity of the proposed criterion to widely used nonquadratic penalties (e.g., total variation and ℓp semi-norms) opens the door to the adaptation of fast algorithms developed in the context of compressive sensing; we introduce several novel algorithms to solve the proposed nonlocal optimization problem. Thanks to the unifying framework, these fast algorithms are readily applicable to a large class of distance metrics.

40. Image Inpainting on the Basis of Spectral Structure from 2-D Nonharmonic Analysis

The restoration of images by digital inpainting is an active field of research and such algorithms are, in fact, now widely used. Conventional methods generally apply textures that are most similar to the areas around the missing region or use a large image database. However, this produces discontinuous textures and thus unsatisfactory results. Here, we propose a new technique to overcome this limitation by using signal prediction based on the nonharmonic analysis (NHA) technique proposed by the authors. NHA can be used to extract accurate spectra, irrespective of the window function, and its frequency resolution is finer than that of the discrete Fourier transform. The proposed method sequentially generates new textures on the basis of the spectrum obtained by NHA. Missing regions from the spectrum are repaired using an improved cost function for 2D NHA. The proposed method is evaluated using the standard images Lena, Barbara, Airplane, Pepper, and Mandrill. The results show an improvement in MSE of about 10∼20 compared with the exemplar-based method and good subjective quality.

41. Image Completion by Diffusion Maps and Spectral Relaxation

We present a framework for image inpainting that utilizes the diffusion framework approach to spectral dimensionality reduction. We show that, on formulating the inpainting problem in the embedding domain, the domain to be inpainted is generally smoother, particularly for textured images. Thus, textured images can be inpainted through simple exemplar-based and variational methods. We discuss the properties of the induced smoothness and relate it to the underlying assumptions used in contemporary inpainting schemes. As the diffusion embedding is nonlinear and noninvertible, we propose a novel computational approach to approximate the inverse mapping from the inpainted embedding space to the image domain. We formulate the mapping as a discrete optimization problem, solved through spectral relaxation. The effectiveness of the


presented method is exemplified by inpainting real images, where it is shown to compare favorably with contemporary state-of-the-art schemes.

42. Gaussian Blurring-Invariant Comparison of Signals and Images

We present a Riemannian framework for analyzing signals and images in a manner that is invariant to their level of blurriness, under Gaussian blurring. Using a well known relation between Gaussian blurring and the heat equation, we establish an action of the blurring group on image space and define an orthogonal section of this action to represent and compare images at the same blur level. This comparison is based on geodesic distances on the section manifold which, in turn, are computed using a path-straightening algorithm. The actual implementations use coefficients of images under a truncated orthonormal basis and the blurring action corresponds to exponential decays of these coefficients. We demonstrate this framework using a number of experimental results, involving 1D signals and 2D images. As a specific application, we study the effect of blurring on the recognition performance when 2D facial images are used for recognizing people.
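The key device above, that Gaussian blurring acts as exponential decay of orthonormal-basis coefficients under the heat equation, can be illustrated with a toy 1D sketch. The decay rate exp(-k²t) for basis index k is an assumption of this simplified model, not the paper's exact basis or group action:

```python
import math

def blur_coeffs(coeffs, t):
    """Heat-equation view of Gaussian blurring: the k-th basis
    coefficient decays by exp(-k**2 * t) after blur 'time' t."""
    return [c * math.exp(-(k ** 2) * t) for k, c in enumerate(coeffs)]

def blur_time_to_match(c_k, target_val, k):
    """Invert the decay for one coefficient to estimate the blur time
    that aligns two signals at a common blur level (the 'section' idea)."""
    return math.log(c_k / target_val) / (k ** 2)
```

Higher-frequency coefficients decay faster, and the decay can be inverted to place two signals at the same blur level before comparing them.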

43. Corner Detection and Classification Using Anisotropic Directional Derivative Representations

This paper proposes a corner detector and classifier using anisotropic directional derivative (ANDD) representations. The ANDD representation at a pixel is a function of the oriented angle and characterizes the local directional grayscale variation around the pixel. The proposed corner detector fuses the ideas of contour- and intensity-based detection. It consists of three cascaded blocks. First, the edge map of an image is obtained by the Canny detector, from which contours are extracted and patched. Next, the ANDD representation at each pixel on the contours is calculated and normalized by its maximal magnitude. The area surrounded by the normalized ANDD representation forms a new corner measure. Finally, nonmaximum suppression and thresholding are applied to each contour to find corners in terms of the corner measure. Moreover, a corner classifier based on the peak number of the ANDD representation is given. Experiments are conducted to evaluate the proposed detector and classifier. The proposed detector is competitive with two recent state-of-the-art corner detectors, the He & Yung detector and the CPDA detector, in detection capability and attains higher repeatability under affine transforms. The proposed classifier can effectively discriminate simple corners, Y-type corners, and higher order corners.

44. Fusion of Multifocus Images to Maximize Image Information

When an image of a 3-D scene is captured, only scene parts at the focus plane appear sharp. Scene parts in front of or behind the focus plane appear blurred. In order to create an image where all scene parts appear sharp, it is necessary to capture images of the scene at different focus levels and fuse the images. In this paper, registration of multifocus images is first discussed and then an algorithm to fuse the registered images is described. The algorithm divides the image domain into uniform blocks and for each block identifies the image with the highest contrast. The images selected in this manner are then locally blended to create an image that has overall maximum contrast. Examples demonstrating registration and fusion of multifocus images are given and discussed.
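The block-wise selection step described above can be sketched roughly as follows (a hypothetical plain-Python version using local variance as the contrast measure and hard selection instead of the paper's local blending):

```python
def fuse_blocks(img_a, img_b, bs=2):
    """Per bs-by-bs block, copy pixels from whichever registered
    input image has the higher local variance (a contrast proxy)."""
    h, w = len(img_a), len(img_a[0])
    out = [[0] * w for _ in range(h)]

    def var(img, r, c):
        vals = [img[i][j] for i in range(r, min(r + bs, h))
                for j in range(c, min(c + bs, w))]
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    for r in range(0, h, bs):
        for c in range(0, w, bs):
            src = img_a if var(img_a, r, c) >= var(img_b, r, c) else img_b
            for i in range(r, min(r + bs, h)):
                for j in range(c, min(c + bs, w)):
                    out[i][j] = src[i][j]
    return out
```

With one image sharp on the left and the other sharp on the right, the fused result takes each half from the sharper source.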

45. Inception of Hybrid Wavelet Transform using Two Orthogonal Transforms and Its Use for Image Compression

This paper presents a novel hybrid wavelet transform generation technique using two orthogonal transforms. The orthogonal transforms are used for analysis of the global properties of the data in the frequency domain. For studying the local properties of the signal, the concept of the wavelet transform is introduced, where the mother wavelet function gives the global properties of the signal and the wavelet basis functions, which are compressed versions of the mother wavelet, are used to study its local properties. Wavelets derived from some orthogonal transforms extract the global characteristics of the data better, while those from other orthogonal transforms may capture the local characteristics better. The idea of the hybrid wavelet transform arises from combining the traits of two different orthogonal transform wavelets to exploit the strengths of both.


46. A Comparative Analysis of Image Fusion Methods

There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity–hue–saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, and the à trous algorithm-based modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.
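One of the GIF special cases named above, high-pass modulation, is simple enough to sketch: each low-resolution MS pixel is scaled by the ratio of the PAN image to its low-resolution approximation. A minimal illustration, assuming all three inputs have already been resampled onto a common high-resolution grid:

```python
def hpm_fuse(ms_low, pan, pan_low):
    """High-pass modulation: MS_high = MS_low * (PAN / PAN_low), per pixel.
    ms_low is the upsampled MS band; pan_low is the low-resolution
    approximation of the PAN image, both on the PAN grid."""
    return [[m * p / pl for m, p, pl in zip(rm, rp, rpl)]
            for rm, rp, rpl in zip(ms_low, pan, pan_low)]
```

Where the PAN image is twice as bright as its smoothed version (local detail), the MS value is doubled as well, injecting the spatial detail while preserving the local spectral ratio.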

47. A New DCT-based Multiresolution Method for Simultaneous Denoising and Fusion of SAR Images

Individual multiresolution techniques for separate image fusion and denoising have been widely researched. We propose a novel multiresolution Discrete Cosine Transform based method for simultaneous image denoising and fusion, demonstrating its efficacy with respect to the Discrete Wavelet Transform and the dual-tree complex Wavelet Transform. We incorporate Laplacian pyramid multiresolution analysis and a sliding-window Discrete Cosine Transform for simultaneous denoising and fusion of the multiresolution coefficients. The impact of image denoising on the results of fusion is demonstrated, and the advantages of simultaneous denoising and fusion for SAR images are also presented.

48. Brain Segmentation Using Fuzzy C-Means Clustering to Detect Tumour Region

Tumor segmentation from MRI data is an important but time-consuming manual task performed by medical experts. Research that addresses diseases of the brain through computer vision is one of the recent challenges in medicine, and engineers and researchers have launched efforts to carry out technological innovations in medical imagery. This paper focuses on a new algorithm for brain segmentation of MRI images by the fuzzy C-means algorithm to accurately diagnose the region of cancer. In the first step it proceeds by noise filtering, later applying the FCM algorithm to segment only the tumor area. In this research, multiple MRI images of the brain can be used to detect glioma (tumor) growth by an advanced diameter technique.
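The FCM step at the core of the method can be sketched on 1-D intensity data as follows (a generic fuzzy c-means illustration with fuzzifier m = 2 and min/max initialization, not the paper's tuned implementation):

```python
def fcm(data, c=2, m=2.0, iters=100):
    """Fuzzy c-means on 1-D data; returns sorted cluster centers."""
    centers = [min(data), max(data)] if c == 2 else data[:c]
    eps = 1e-12
    n = len(data)
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_i/d_j)^(2/(m-1))
        U = []
        for x in data:
            d = [abs(x - v) + eps for v in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # center update: membership-weighted mean
        centers = [sum((U[k][i] ** m) * data[k] for k in range(n))
                   / sum(U[k][i] ** m for k in range(n)) for i in range(c)]
    return sorted(centers)
```

On two well-separated intensity groups the centers converge near the group means; thresholding pixels by their highest membership then yields the segmentation.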

49. Comprehensive and Comparative Study of Image Fusion Techniques

Image fusion is one of the major research fields in image processing. Image fusion is a process of combining the relevant information from a set of images into a single image, wherein the resultant fused image is more informative and complete than any of the input images. The image fusion process can be defined as the integration of information from a number of registered images without the introduction of distortion. It is often not possible to get an image that contains all relevant objects in focus. One way to overcome this problem is image fusion, in which one can acquire a series of pictures with different focus settings and fuse them to produce an image with extended depth of field. Image fusion techniques can improve the quality and increase the application of these data. This paper discusses three categories of image fusion algorithms: the basic fusion algorithms, the pyramid-based algorithms, and the basic DWT algorithms. It gives a literature review of some of the existing image fusion techniques, such as primitive fusion (averaging, select maximum, and select minimum), Discrete Wavelet transform based fusion, and principal component analysis (PCA) based fusion. The purpose of the paper is to present a wide range of algorithms together with their comparative study. There are many techniques proposed by different authors in order to fuse images and produce a clear visual of the scene. Hierarchical multiscale and multiresolution image processing techniques and pyramid decomposition are the basis for the majority of image fusion algorithms. All these available techniques are designed for particular kinds of images. Until now, of highest relevance for remote sensing data processing and analysis have been techniques for pixel level image fusion for


which many different methods have been developed and a rich theory exists. Researchers have shown that fusion techniques that operate on such features in the transform domain yield subjectively better fused images than pixel-based techniques. For this purpose, feature-based fusion techniques that are usually based on empirical or heuristic rules are employed. Because a general theory is lacking, fusion algorithms are usually developed for certain applications and datasets. To implement pixel level fusion, arithmetic operations are widely used in the time domain and frequency transformations are used in the frequency domain. Image fusion plays an important role in many application areas, such as navigation guidance, object detection and recognition, medical diagnosis, satellite imaging for remote sensing, robot vision, and military and civilian surveillance. The paper also surveys some of the various existing techniques applied for image fusion, and a comparative study of all the techniques concludes with the better approach for future research.
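The primitive fusion rules the survey mentions (averaging, select maximum, select minimum) amount to a single pixel-wise operation; a minimal sketch over two registered images of equal size:

```python
def fuse(img_a, img_b, rule="average"):
    """Primitive pixel-level fusion: average, select-maximum, select-minimum."""
    ops = {"average": lambda a, b: (a + b) / 2.0,
           "max": max,
           "min": min}
    f = ops[rule]
    return [[f(a, b) for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
```

Averaging trades noise for contrast, while select-maximum tends to keep the sharper (higher-activity) pixel, which is why the survey treats these as baselines for the pyramid and DWT methods.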

50. Efficient image compression technique using full, column and row transforms on colour images

This paper presents an image compression technique based on the column transform, row transform, and full transform of an image. Different transforms, namely DFT, DCT, Walsh, Haar, DST, Kekre’s Transform, and Slant transform, are applied to colour images of size 256x256x8 by separating the R, G, and B colour planes. These transforms are applied in three different ways: column, row, and full transform. From each transformed image, a specific number of low-energy coefficients is eliminated, and compressed images are reconstructed by applying the inverse transform. The Root Mean Square Error (RMSE) between the original image and the compressed image is calculated in each case. From the implementation of the proposed technique it has been observed that the RMSE values and visual quality of images obtained by the column transform are close to those given by the full transform. The row transform gives considerably higher RMSE values than the column and full transforms at higher compression ratios. The aim of the proposed technique is to achieve compression with acceptable image quality and fewer computations by using the column transform.
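The column-transform pipeline described above (transform each column, discard low-energy coefficients, invert, score with RMSE) can be sketched with an orthonormal 1-D DCT. This is a simplified single-plane illustration, not the paper's full set of transforms:

```python
import math

def dct1(v):
    """Orthonormal DCT-II of a 1-D sequence."""
    n = len(v)
    return [sum(v[k] * math.cos(math.pi * (k + 0.5) * u / n) for k in range(n))
            * (math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n))
            for u in range(n)]

def idct1(c):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(c)
    return [sum(c[u] * (math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n))
                * math.cos(math.pi * (k + 0.5) * u / n) for u in range(n))
            for k in range(n)]

def column_compress(img, keep):
    """DCT each column, keep only the `keep` largest-magnitude
    coefficients per column, and reconstruct."""
    out_cols = []
    for col in zip(*img):
        c = dct1(list(col))
        thresh = sorted((abs(x) for x in c), reverse=True)[keep - 1]
        c = [x if abs(x) >= thresh else 0.0 for x in c]
        out_cols.append(idct1(c))
    return [list(r) for r in zip(*out_cols)]

def rmse(a, b):
    n = sum(len(r) for r in a)
    return math.sqrt(sum((x - y) ** 2 for ra, rb in zip(a, b)
                         for x, y in zip(ra, rb)) / n)
```

Keeping all coefficients reconstructs the image exactly (RMSE near zero); dropping coefficients trades RMSE for compression, which is the curve the paper measures per transform.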

51. Grading of rice grains by image processing

The purpose of this paper is the grading of rice grains by image processing techniques. Commercially, the grading of rice is done according to the size of the grain kernel (full, half, or broken). Food grain types and their quality are rapidly assessed through visual inspection by human inspectors. The decision-making capabilities of human inspectors are subject to external influences such as fatigue, vengeance, bias, etc.; with the help of image processing we can overcome these. By image processing we can also identify any broken grains mixed in. Here we discuss the various procedures used to obtain the percentage quality of rice grains.

52. Innovative Multilevel Image Fusion Algorithm using Combination of Transform Domain and Spatial Domain Methods with Comparative Analysis of Wavelet and Curvelet Transform

Image fusion is a widely used term in different applications, namely satellite imaging, remote sensing, multifocus imaging, and medical imaging. In this paper, we have implemented multilevel image fusion in which fusion is carried out in two stages. First, the Discrete Wavelet or Fast Discrete Curvelet transform is applied to both source images; second, image fusion is carried out either with spatial domain methods like averaging, minimum selection, maximum selection, and PCA, or with pyramid transform methods like the Laplacian pyramid transform. Further, a comparative analysis of the fused images obtained from the Discrete Wavelet and Fast Discrete Curvelet transforms is done, which shows that the proposed Curvelet transform yields more effective image fusion than the Wavelet transform, as evidenced by the enhanced visual quality of the fused image and the analysis of 7 quality metric parameters. The proposed method is very innovative and can be applied to medical and multifocus imaging applications in real time. These analyses can be useful for further research work in image fusion, and the fused image obtained using the Curvelet transform can be helpful for better medical diagnosis.

53. Multi-layer information hiding - a blend of steganography and visual cryptography


This study combines the notions of steganography [1] and visual cryptography [2]. Recently, a number of innovative algorithms have been proposed in the fields of steganography and visual cryptography with the goals of improving security, reliability, and efficiency, because there will always be new kinds of threats in the field of information hiding. Steganography and visual cryptography are, in fact, two sides of the same coin: visual cryptography has the problem of revealing the existence of the hidden data, whereas steganography hides the existence of hidden data. This study suggests multiple layers of encryption by hiding the hidden data: first encrypt the information using visual cryptography, and then hide the share(s) [3] in images or audio files using steganography. The proposed system has fewer drawbacks and can better resist attacks.
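The layering described above, cryptographic shares concealed by steganography, can be sketched with (2,2) XOR secret sharing plus LSB embedding. This is a minimal illustrative scheme; the study's actual share construction [3] may differ:

```python
import random

def make_shares(secret_bits, rng=None):
    """(2,2) XOR secret sharing: either share alone is uniform noise;
    XOR-ing both shares recovers the secret."""
    rng = rng or random.Random(42)
    s1 = [rng.randint(0, 1) for _ in secret_bits]
    s2 = [a ^ b for a, b in zip(secret_bits, s1)]
    return s1, s2

def embed(pixels, bits):
    """Hide one bit per pixel in the least significant bit."""
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, nbits):
    """Read the hidden bits back out of the LSBs."""
    return [p & 1 for p in pixels[:nbits]]
```

Each stego pixel differs from its cover pixel by at most 1, and an attacker who recovers only one stego image learns nothing about the secret, which is the two-layer protection the study argues for.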

54. Non-destructive Quality Analysis of Indian Basmati Oryza Sativa SSP Indica (Rice) Using Image Processing

The agricultural industry is one of the oldest, and quality assessment of grains has been a challenge since time immemorial. This paper presents a solution for quality evaluation and grading in the rice industry using computer vision and image processing. The basic quality-assessment problem of the rice industry, traditionally solved manually by a human inspector, is defined, and machine vision is offered as an automated, non-destructive, and cost-effective alternative. With the proposed computer-vision method of quality assessment via image analysis and processing, a higher degree of quality is achieved than with human visual inspection. The paper proposes a new method for counting long and small Oryza sativa L. (rice) seeds using image processing with a high degree of quality, and then quantifies seed quality based on combined measurements.

55. Quality Evaluation of Rice Grains Using Morphological Methods

In this paper we present an automatic evaluation method for determining the quality of milled rice. Among the milled rice samples, the quantity of broken kernels is determined with the help of shape descriptors and geometric features. Kernels whose lengths are less than 75% of the full grain size are considered broken. The proposed method gives good results in the evaluation of rice quality.
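
The 75% broken-kernel rule above can be sketched in a few lines of code (Python used as an illustration; the lengths and the 0.75 threshold are assumptions based on the description, not the paper's implementation):

```python
def classify_kernels(lengths_mm, full_length_mm, threshold=0.75):
    """Split measured kernel lengths into whole and broken kernels.
    A kernel counts as broken when shorter than `threshold` times the
    reference full-grain length (the 75% rule described above)."""
    whole, broken = [], []
    for length in lengths_mm:
        (broken if length < threshold * full_length_mm else whole).append(length)
    return whole, broken

# Hypothetical major-axis lengths (mm) from a segmented sample image.
whole, broken = classify_kernels([7.1, 6.8, 3.9, 5.0, 7.0], full_length_mm=7.0)
print(len(whole), len(broken))  # 3 2
```

In a real pipeline the lengths would come from the shape descriptors of each segmented kernel rather than a hand-written list.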

56. Algorithmic Approach to Quality Analysis of Indian Basmati Rice using Digital Image Processing

The aim of this paper is to suggest an algorithm for quality analysis of Indian Basmati rice using image processing techniques. With this algorithm, an automated software system can be built to avoid human inspection and its related drawbacks, and convenient software tools compatible with the hardware platform can be selected. Analysis and classification of rice is currently done visually and manually by human inspectors, whose decisions may be affected by external factors like tiredness, bias, revenge, or human psychological limitations. We can overcome this by using image processing techniques: digital image processing can classify rice grains with speed and accuracy. Here we discuss the different parameters used for analysis of rice grains and how the algorithm can measure them and compare them with accepted standards.

57. SVD Based Image Processing Applications: State of the Art Contributions and Research Challenges

Singular Value Decomposition (SVD) has recently emerged as a new paradigm for processing different types of images. SVD is an attractive algebraic transform for image processing applications, and this paper proposes an experimental survey of the SVD as an efficient transform in such applications. Despite the well-known fact that SVD offers attractive properties in imaging, the exploration of these properties in various image applications is still in its infancy. Since many of the SVD's attractive properties have not yet been utilized, this paper contributes by applying these properties to new image applications and strongly recommends further research challenges. The SVD properties for images are experimentally presented so that they can be utilized in developing new SVD-based image processing applications. The paper offers a survey of the developed SVD-based image applications and also proposes some new contributions that originated from an analysis of SVD properties in different image processing tasks. The aim of this paper is to provide a better understanding of the SVD in image processing and to identify important applications and open research directions in this increasingly important area of future research.
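
One property repeatedly exploited by SVD-based applications is that truncating the singular values gives the best low-rank approximation of an image. A minimal numpy sketch (random data standing in for an image):

```python
import numpy as np

def svd_approx(img, k):
    """Best rank-k approximation of a matrix (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))     # stand-in for a grayscale image
approx = svd_approx(img, 8)
# The Frobenius reconstruction error equals the energy in the discarded
# singular values, which is what makes SVD attractive for compression.
err = float(np.linalg.norm(img - approx))
```

Storing only the leading k singular triplets is the basis of SVD compression; watermarking schemes instead perturb the singular values.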

58. Two-stage image denoising by principal component analysis with local pixel grouping

This paper presents an efficient image denoising scheme by using principal component analysis (PCA) with local pixel grouping (LPG). For a better preservation of image local structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by using block matching based LPG. Such an LPG procedure guarantees that only the sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that the image local features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in image fine structure preservation, compared with state-of-the-art denoising algorithms.
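
The grouping-then-shrinkage idea can be sketched on a 1-D signal. This is a heavily simplified illustration of one LPG-PCA pass, with a fixed patch size, Euclidean block matching, and Wiener-like shrinkage assumed; the paper operates on 2-D image blocks and adapts the noise level in its second stage:

```python
import numpy as np

def lpg_pca_step(noisy, patch=8, n_similar=24, sigma=0.1):
    """One simplified LPG-PCA pass over a 1-D signal."""
    # All overlapping patches form the "local window" of candidates.
    patches = np.stack([noisy[i:i + patch]
                        for i in range(len(noisy) - patch + 1)])
    out = np.zeros_like(noisy)
    weight = np.zeros_like(noisy)
    for i, ref in enumerate(patches):
        # Local pixel grouping: keep only patches similar to the reference.
        dist = np.sum((patches - ref) ** 2, axis=1)
        group = patches[np.argsort(dist)[:n_similar]]
        mean = group.mean(axis=0)
        # PCA basis estimated from the grouped samples only.
        eigval, eigvec = np.linalg.eigh(np.cov((group - mean).T))
        coef = (ref - mean) @ eigvec
        # Wiener-like shrinkage of the PCA coefficients removes noise.
        shrink = np.maximum(eigval - sigma ** 2, 0) / np.maximum(eigval, 1e-12)
        out[i:i + patch] += mean + (coef * shrink) @ eigvec.T
        weight[i:i + patch] += 1
    return out / weight          # average the overlapping estimates

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 128))
noisy = clean + 0.1 * rng.standard_normal(128)
denoised = lpg_pca_step(noisy)
```

Because the PCA basis is estimated only from similar patches, the shrinkage should suppress noise while leaving the local structure largely intact.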

IEEE TRANSACTIONS ON BIOMEDICAL

1. A brain tumor segmentation framework based on outlier detection

This paper describes a framework for automatic brain tumor segmentation from MR images. Edema is detected simultaneously with tumor segmentation, as knowledge of the extent of edema is important for diagnosis, planning, and treatment. Whereas many other tumor segmentation methods rely on the intensity enhancement produced by the gadolinium contrast agent in the T1-weighted image, the method proposed here does not require contrast-enhanced image channels. The only required input for the segmentation procedure is the T2 MR image channel, but it can make use of any additional non-enhanced image channels for improved tissue segmentation. The segmentation framework is composed of three stages. First, we detect abnormal regions using a registered brain atlas as a model for healthy brains, and then use robust estimates of the location and dispersion of the normal brain tissue intensity clusters to determine the intensity properties of the different tissue types. In the second stage, we determine from the T2 image intensities whether edema appears together with tumor in the abnormal regions. Finally, we apply geometric and spatial constraints to the detected tumor and edema regions. The segmentation procedure has been applied to three real datasets representing different tumor shapes, locations, sizes, image intensities, and enhancements.

2. A Multi-Resolution Image Fusion Scheme for 2D Images based on Wavelet Transform

The fusion of images is the process of combining two or more images into a single image that retains important features from each. A scheme for fusion of multi-resolution 2D gray-level images based on the wavelet transform is presented in this paper. If the images are not already registered, a point-based registration using an affine transformation is performed prior to fusion. The images to be fused are first decomposed into sub-images of different frequencies, and information fusion is then performed on these sub-images under the proposed gradient and relative-smoothness criterion. Finally, the sub-images are reconstructed into the result image with plentiful information. A quantitative measure of the degree of fusion is estimated by the cross-correlation coefficient, and a comparison with some existing wavelet-transform-based image fusion techniques is carried out.
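
The decompose-fuse-reconstruct pipeline can be illustrated with a single-level Haar transform, using the common maximum-absolute-coefficient rule in place of the paper's gradient and relative-smoothness criterion (an assumption made for brevity):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH bands."""
    a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Fuse two registered, equal-size grayscale images."""
    c1, c2 = haar2(img1), haar2(img2)
    fused = [(c1[0] + c2[0]) / 2]                        # average approximations
    fused += [np.where(np.abs(d1) >= np.abs(d2), d1, d2) # keep stronger detail
              for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2(*fused)
```

Fusing an image with itself reconstructs it exactly, which is a convenient sanity check that the decomposition and reconstruction are consistent.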

3. Brain Segmentation using Fuzzy C means clustering to detect tumour Region

Tumor segmentation from MRI data is an important but time-consuming manual task performed by medical experts. Computer-vision research addressing diseases of the brain is one of the recent challenges in medicine, and engineers and researchers have launched efforts to bring technological innovation to medical imagery. This paper focuses on a new algorithm for brain segmentation of MRI images by the fuzzy C-means algorithm to accurately diagnose the region of cancer. In the first step it proceeds by noise filtering, and then applies the FCM algorithm to segment only the tumor area. In this research, multiple MRI images of the brain can be used to detect glioma (tumor) growth by an advanced diameter technique.
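
A minimal fuzzy C-means on raw pixel intensities sketches the clustering step (noise filtering and the diameter-based growth measurement are omitted; the two-population test data are illustrative):

```python
import numpy as np

def fcm(values, c=2, m=2.0, iters=50):
    """Fuzzy C-means on a 1-D array of pixel intensities."""
    rng = np.random.default_rng(0)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ values / um.sum(axis=1)
        dist = np.abs(values[None, :] - centers[:, None]) + 1e-9
        u = dist ** (-2.0 / (m - 1.0))       # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Two synthetic intensity populations (e.g. tumor vs. background).
pixels = np.concatenate([np.full(50, 0.2), np.full(50, 0.9)])
centers, u = fcm(pixels)
print(np.sort(centers))  # centers converge near the two intensity levels
```

Thresholding the membership map `u` would then yield the tumor mask.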

4. Detection of Epileptic Activity in the Human EEG Based on Wavelet Transforms

Epilepsy is a chronic neurological disorder characterized by successive unexpected seizures. The electroencephalogram (EEG) is the electrical signal of the brain and contains valuable information about its normal or epileptic activity. In this work, the EEG and its frequency sub-bands have been analyzed to detect epileptic seizures. A discrete wavelet transform (DWT) is applied to decompose the EEG into its sub-bands, and statistical features (energy and covariance) are calculated for each sub-band. The extracted features are applied to a feed-forward neural network for classification, achieving a classification accuracy of 98%.
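
The sub-band energy feature can be sketched with a hand-rolled orthonormal Haar DWT (the paper's wavelet family and its covariance feature are not specified in the abstract, so this only shows the general shape of the feature extraction):

```python
import numpy as np

def haar_subband_energies(x, levels=3):
    """Energies of the detail sub-bands of an orthonormal Haar DWT."""
    approx = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        energies.append(float(np.sum(detail ** 2)))   # one feature per band
    energies.append(float(np.sum(approx ** 2)))       # final approximation
    return energies

x = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 256, endpoint=False))
feats = haar_subband_energies(x)
# The transform is orthonormal, so the band energies sum to the signal energy.
```

The resulting feature vector (one energy per band) is what would be fed to the neural network classifier.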

5. Morphological image processing approach on the Detection of tumor and cancer cells

Image processing is one of the fastest-growing research areas and is now deeply integrated with the medical and biotechnology fields. It can be used to analyze different medical and MRI images to detect abnormalities. This paper proposes an efficient K-means clustering algorithm under Morphological Image Processing (MIP). Medical image segmentation deals with segmenting tumors in CT and MR images for improved quality in medical diagnosis; it is an important process and a challenging problem due to the noise present in input images during image analysis, and it is needed for applications involving estimation of object boundaries, classification of tissue abnormalities, shape analysis, and contour detection. Segmentation is defined as the process of dividing an image into disjoint homogeneous regions, which simplifies the amount of resources required to describe a large set of data and is used here for tissue segmentation. In our paper, this segmentation is carried out using the K-means clustering algorithm for better performance, which enhances tumor detection.

6. EEG signal classification for Epilepsy Seizure Detection using Improved Approximate Entropy

Epileptic seizures are the result of transient and unexpected electrical disturbances of the brain. About 50 million people worldwide have epilepsy, and nearly two out of every three new cases are discovered in developing countries. Epilepsy is more likely to occur in young children or people over the age of 65 years; however, it can occur at any age. The detection of epilepsy is possible by analyzing EEG signals. This paper presents a hybrid technique to classify EEG signals for identification of epileptic seizures. The proposed system is a combination of the multi-wavelet transform and an artificial neural network. The Approximate Entropy algorithm is enhanced (called Improved Approximate Entropy, IApE) to measure irregularities present in the EEG signals. The proposed technique is implemented, tested, and compared with an existing method based on performance indices such as sensitivity, specificity, and accuracy. EEG signals are classified as normal or epileptic seizures with an accuracy of ~90%.
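
For reference, the baseline Approximate Entropy measure that IApE builds upon can be implemented directly (the improved variant is not specified in the abstract, so only standard ApEn(m, r) is shown):

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) with r scaled by the signal std."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def phi(mm):
        tmpl = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(tmpl[:, None, :] - tmpl[None, :, :]), axis=2)
        c = np.mean(dist <= tol, axis=1)     # match fraction, self included
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))
irregular = rng.standard_normal(300)
print(apen(regular) < apen(irregular))  # True: regular signals score lower
```

This ordering (low entropy for regular activity, high for irregular) is exactly what makes the measure useful for seizure detection.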

7. Higuchi fractal dimension as a measure of analgesia

Avoidance of patients' intraoperative awareness and explicit recall of pain during surgery is important. Conventional methods of depth-of-anesthesia (DoA) monitoring involve physiological monitoring, which is influenced by the administered anesthetic drugs. Balanced anesthesia is a fusion of four components: analgesia, amnesia, motor blockade, and hypnosis. One major component is analgesia, the inability to feel pain during surgery. Pain cannot be estimated from any single physiopathological signal; a proper analgesia index proportional to the degree of pain experienced by the patient is required. The electroencephalogram (EEG) is a reliable means to determine real-time DoA. In the present study, EEG of 12 volunteer subjects was recorded both while relaxed and during pain. It was found that the Higuchi fractal dimension (HFD) feature of the EEG from the parietal region of the brain reflects the sensation of pain and gives an overall accuracy of 95% in determining the pain experienced by the patient.
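
The HFD feature itself is straightforward to compute. A sketch of Higuchi's curve-length method follows (kmax and the normalisation are conventional choices, not values from the study):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension via curve lengths at scales 1..kmax."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)                 # every k-th sample
            norm = (n - 1) / ((len(idx) - 1) * k)    # Higuchi normalisation
            lk.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        lengths.append(np.mean(lk))
    # HFD is the slope of log L(k) against log(1/k).
    return np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)),
                      np.log(lengths), 1)[0]

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)                    # irregular: HFD near 2
tone = np.sin(np.linspace(0, 10 * np.pi, 2000))      # smooth: HFD near 1
print(higuchi_fd(tone) < higuchi_fd(noise))  # True
```

For an EEG-based index, this scalar would be computed over sliding windows of the parietal-channel signal.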

8. Hybrid DWT-DCT Coding Techniques for Medical Images

In this paper, a hybrid image compression coding technique using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) is applied to medical images. The aim is to achieve higher compression rates by applying different compression thresholds to the LL and HH band wavelet coefficients. The DCT is applied to the HL and LH bands while maintaining the quality of the reconstructed images. The image is then quantized to calculate a probability index for each unique quantity, so as to find a unique binary code for each unique symbol for encoding.
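
The DCT half of the hybrid scheme rests on an orthonormal DCT-II, which inverts exactly before any thresholding or quantization is applied. A numpy sketch of one 8x8 block (the wavelet band split is omitted):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in image block
coeffs = C @ block @ C.T          # forward 2-D DCT
restored = C.T @ coeffs @ C       # inverse 2-D DCT reconstructs exactly;
                                  # lossy compression enters only when the
                                  # coefficients are thresholded/quantised
```

Zeroing small entries of `coeffs` before inverting is the crude form of the band-wise thresholding the paper tunes per sub-band.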

9. A New Approach to Image Segmentation for Brain Tumor Detection Using Pillar K-means Algorithm

This paper presents a new approach to image segmentation using the Pillar K-means algorithm. This segmentation method includes a new mechanism for grouping the elements of high-resolution images in order to improve accuracy and reduce computation time. The system uses K-means for image segmentation, optimized by the Pillar algorithm. The Pillar algorithm considers that pillars should be placed as far from each other as possible to withstand the pressure distribution of a roof, just like the placement of centroids within the data distribution. The algorithm distributes all initial centroids according to the maximum cumulative distance metric and is thereby able to optimize K-means clustering for image segmentation in terms of both accuracy and computation time. This paper evaluates the proposed approach by comparing it with the K-means clustering algorithm and the Gaussian mixture model, with the participation of the RGB, HSV, HSL, and CIELAB color spaces. Experimental results clarify the effectiveness of our approach in improving segmentation quality as well as accuracy and computing time.
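
The "maximum cumulative distance" seeding can be sketched as farthest-first selection (a simplified reading of the Pillar idea; the full algorithm adds outlier handling and other refinements):

```python
import numpy as np

def pillar_seeds(points, k):
    """Pick k initial centroids by maximum accumulated distance."""
    # Start from the point farthest from the grand mean.
    first = np.argmax(np.linalg.norm(points - points.mean(axis=0), axis=1))
    seeds = [points[first]]
    acc = np.zeros(len(points))
    for _ in range(k - 1):
        acc += np.linalg.norm(points - seeds[-1], axis=1)
        seeds.append(points[np.argmax(acc)])     # farthest from all so far
    return np.array(seeds)

rng = np.random.default_rng(0)
pts = np.concatenate([rng.normal(c, 0.1, size=(30, 2))
                      for c in ((0, 0), (5, 0), (0, 5))])
seeds = pillar_seeds(pts, 3)  # one seed lands in each of the three clusters
```

Seeding K-means this way avoids the degenerate initializations that random seeding can produce, which is where the accuracy and run-time gains come from.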

10. Wavelet Based Image Fusion for Detection of Brain Tumor

Brain tumor is one of the major causes of increased mortality among children and adults, and detecting the affected regions of the brain is the major challenge in tumor detection. In the field of medical image processing, multi-sensor images are widely used as potential sources to detect brain tumors. In this paper, a wavelet-based image fusion algorithm is applied to Magnetic Resonance (MR) images and Computed Tomography (CT) images, which serve as primary sources from which redundant and complementary information is extracted to enhance tumor detection in the resultant fused image. The main features taken into account for detection of a brain tumor are its location and size, which are further optimized through fusion of images using various wavelet transform parameters. We discuss and enforce the principle of evaluating and comparing the performance of the algorithm with respect to the wavelet types used for the analysis. The performance efficiency of the algorithm is evaluated on the basis of PSNR values, and the obtained results are compared with gradient vector field and big bang optimization. The algorithms are analyzed in terms of accuracy in estimating the tumor region and computational efficiency.
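
PSNR, the evaluation metric used above, is simple to compute (a peak value of 255 is assumed for 8-bit images):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 16.0                   # constant error of 16 grey levels
print(round(psnr(ref, noisy), 2))    # 10*log10(255^2/16^2) ≈ 24.05
```

Higher PSNR against a reference indicates a fused image closer to it; identical images give infinite PSNR.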

IEEE TRANSACTIONS ON POWER ELECTRONICS

1. Review of Battery Charger Topologies, Charging Power Levels, and Infrastructure for Plug-In Electric and Hybrid Vehicles

This paper reviews the current status and implementation of battery chargers, charging power levels, and infrastructure for plug-in electric vehicles and hybrids. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues, while bidirectional charging supports battery energy injection back to the grid. Typical on-board chargers restrict power because of weight, space, and cost constraints; they can be integrated with the electric drive to avoid these problems. The availability of charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive, whereas an off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed, and future aspects such as roadbed charging are presented. Various power-level chargers and infrastructure configurations are presented, compared, and evaluated based on the amount of power, charging time and location, cost, equipment, and other factors.

2. An Integrated Three-Port Bidirectional DC–DC Converter for PV Application on a DC Distribution System

In this paper, an integrated three-port bidirectional dc–dc converter for a dc distribution system is presented. One port on the low-voltage side of the proposed converter is chosen as a current source port, which suits photovoltaic (PV) panels with wide voltage variation. In addition, the interleaved structure of the current source port provides the small current ripple needed for the PV panel to achieve maximum power point tracking (MPPT). Another port on the low-voltage side is chosen as a voltage source port interfaced with a battery that has small voltage variation; therefore, the PV panel and energy storage element can be integrated using one converter topology. The voltage port on the high-voltage side is connected to the dc distribution bus. A high-frequency transformer in the proposed converter not only provides galvanic isolation between the energy sources and the high-voltage dc bus, but also helps to remove the leakage current resulting from the PV panels. The MPPT and power flow regulations are realized by duty cycle control and phase-shift angle control, respectively. Unlike the single-phase dual-half-bridge converter, the power flow between the low-voltage side and the high-voltage side depends only on the phase-shift angle over a large operating area. The system operation modes under different conditions are analyzed, and zero-voltage switching can be guaranteed in the PV application even when the dc-link voltage varies. Finally, system simulation and experimental results on a 3-kW hardware prototype are presented to verify the proposed technology.
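
The phase-shift power-flow control mentioned above follows the standard dual-active-bridge relation P = n·V1·V2·φ(π − |φ|)/(2π²·f·L). A sketch with illustrative component values (assumptions, not taken from the paper):

```python
import numpy as np

def dab_power(phi, v1=48.0, v2=400.0, n=400.0 / 48.0, f=50e3, L=20e-6):
    """Transferred power versus phase-shift angle phi (radians).
    v1/v2 are the port voltages, n the transformer turns ratio,
    f the switching frequency, L the series (leakage) inductance."""
    return n * v1 * v2 * phi * (np.pi - np.abs(phi)) / (2 * np.pi**2 * f * L)

phis = np.linspace(0, np.pi / 2, 50)
powers = dab_power(phis)
# Power rises monotonically with phase shift and peaks at phi = pi/2,
# which is what makes the phase-shift angle a convenient control handle.
```

Duty cycle then remains free for MPPT on the current-source port, matching the decoupled control described in the abstract.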

3. The Multilevel Modular DC Converter

The modular multilevel converter (M2C) has become an increasingly important topology in medium- and high-voltage applications. A limitation is that it relies on positive and negative half-cycles of the ac output voltage waveform to achieve charge balance on the submodule capacitors. To overcome this constraint a secondary power loop is introduced that exchanges power with the primary power loops at the input and output. Power is exchanged between the primary and secondary loops by using the principle of orthogonality of power flow at different frequencies. Two modular multilevel topologies are proposed to step up or step down dc in medium- and high-voltage dc applications: the tuned filter modular multilevel dc converter and the push–pull modular multilevel dc converter. An analytical simulation of the latter converter is presented to explain the operation.

4. Mitigation of Lower Order Harmonics in a Grid-Connected Single-Phase PV Inverter

In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology is considered, consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid. Ideally, this topology does not inject any lower order harmonics into the grid owing to its high-frequency pulse width modulation operation. However, nonideal factors in the system, such as the distorted magnetizing current of the transformer caused by core saturation and the dead time of the inverter, contribute a significant amount of lower order harmonics to the grid current. A novel design of the inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are proposed; this controller eliminates the dc component in the control system, which would otherwise introduce even harmonics in the grid current of the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme are also analyzed. The complete design has been validated with experimental results, and good agreement with the theoretical analysis of the overall system is observed.

5. Cascaded Multilevel Converter-Based Transmission STATCOM: System Design Methodology and Development of a 12 kV ±12 MVAr Power Stage

This paper deals with the design methodology for a cascaded multilevel converter (CMC)-based transmission-type STATCOM (T-STATCOM) and the development of a ±12 MVAr, 12 kV line-to-line, wye-connected, 11-level CMC. Sizing of the CMC module, the number of H-bridges (HBs) in each phase of the CMC, the ac voltage rating of the CMC, the number of paralleled CMC modules in the T-STATCOM system, the optimum value of the series filter reactors, and the determination of the busbar in the power grid to which the T-STATCOM system is to be connected are also discussed, in view of IEEE Std. 519-1992, the current status of high-voltage (HV) insulated-gate bipolar transistor (IGBT) technology, and the required reactive power variation range for the T-STATCOM application. In the field prototype of the CMC module, the ac voltages are approximated to sinusoidal waves by the selective harmonic elimination method (SHEM), and the equalization of dc-link capacitor voltages is achieved by the modified selective swapping (MSS) algorithm. In this study, an L-shaped laminated bus has been designed, and the HV IGBT driver circuit has been modified for optimum switching performance of the HV IGBT modules in each HB. The laboratory and field performances of the CMC module and of the resulting T-STATCOM system are found to be satisfactory and quite consistent with the design objectives.

6. Adaptive Voltage Control of the DC/DC Boost Stage in PV Converters With Small Input Capacitor

In the case of photovoltaic (PV) systems, an adequate PV voltage regulation is fundamental in order to both maximize and limit the power. For this purpose, a large input capacitor has traditionally been used. However, when reducing that capacitor's size, the nonlinearities of the PV array make the performance of the voltage regulation become highly dependent on the operating point. This paper analyzes the nonlinear characteristics of the PV generator and clearly states their effect on the control of the dc/dc boost stage of commercial converters by means of a linearization around the operating point. It then proposes an adaptive control, which enables the use of a small input capacitor while preserving the performance of the original system with a large capacitor. Experimental results are carried out for a commercial converter with a 40 μF input capacitor and a 4 kW PV array. The results corroborate the theoretical analysis; they evidence the problems of the traditional control and validate the proposed control with such a small capacitor.

7. Variable Switching Frequency PWM for Three-Phase Converters Based on Current Ripple Prediction

Compared with the widely used constant-switching-frequency pulse-width-modulation (PWM) method, variable-switching-frequency PWM benefits from the extra degree of freedom. Based on the analytical expression for the current ripple of three-phase converters, variable switching frequency control methods are proposed to satisfy different ripple requirements. The switching cycle Ts is updated in the DSP in every interrupt period based on the ripple requirement. Two methods are discussed in this paper: the first is designed to keep the current ripple peak value within a certain limit and can reduce the equivalent switching frequency and electromagnetic interference (EMI) noise; the second is designed to keep the ripple current RMS value constant while reducing EMI noise. Simulation and experimental results show that variable switching frequency control can improve EMI performance and efficiency without impairing power quality.
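
The first method's period update can be sketched with a simple single-phase buck ripple model standing in for the paper's three-phase ripple expression (an assumption made purely for illustration):

```python
# Assumed ripple model: peak-to-peak inductor current ripple
# dI = Vdc * D * (1 - D) / (L * fs) for a buck-type leg.
def next_switching_freq(vdc, duty, L, ripple_max, f_min=2e3, f_max=100e3):
    """Lowest switching frequency that keeps the predicted peak ripple
    at ripple_max, clamped to a practical frequency range."""
    fs = vdc * duty * (1 - duty) / (L * ripple_max)
    return min(max(fs, f_min), f_max)

# Ripple is worst near 50% duty, so the controller must switch fastest there.
f_mid = next_switching_freq(400.0, 0.5, 1e-3, 4.0)
f_low = next_switching_freq(400.0, 0.1, 1e-3, 4.0)
print(f_mid > f_low)  # True
```

Evaluating such an update once per interrupt period is what lets the converter slow down its switching (and hence its EMI and losses) whenever the predicted ripple allows it.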

8. An Inner Current Suppressing Method for Modular Multilevel Converters

Ideally, the inner (upper- or lower-arm) current of a modular multilevel converter (MMC) is assumed to be the sum of a dc component and an ac component at the fundamental frequency. However, as ac current flows through the submodule (SM) capacitors, the capacitor voltages fluctuate with time. Consequently, the inner current is usually distorted, and its peak/RMS value is increased compared with the theoretical value. The increased currents increase power losses and may threaten the safe operation of the power devices and capacitors. This paper proposes a closed-loop method for suppressing the inner current in an MMC. The method is very simple, is implemented in a stationary frame, and needs no harmonic extraction algorithm; hence, it can be applied to single-phase or three-phase MMCs. Moreover, it does not influence the balancing of the SM capacitor voltages. Simulation and experimental results show that the proposed method can suppress the peak and RMS values of the inner currents dramatically.

Fault Detection for Modular Multilevel Converters Based on Sliding Mode Observer

This letter presents a fault detection method for modular multilevel converters which is capable of locating a faulty semiconductor switching device in the circuit. The proposed fault detection method is based on a sliding mode observer (SMO) and a switching model of a half-bridge; the approach taken is to conjecture the location of the fault, modify the SMO accordingly, and then compare the observed and measured states to verify, or otherwise, the assumption. This technique requires no additional measurement elements and can easily be implemented in a DSP or microcontroller. The operation and robustness of the fault detection technique are confirmed by simulation results for the fault condition of a semiconductor switching device appearing as an open circuit.

An Improved Soft-Switching Buck Converter With Coupled Inductor

This letter presents a novel topology for a buck dc–dc converter with soft-switching capability, which operates under a zero-current-switching condition at turn-on and a zero-voltage-switching condition at turn-off. In order to realize soft switching, the proposed converter adds to a basic buck converter a small inductor, a diode, and an inductor coupled with the main inductor. Because of soft switching, the proposed converter achieves high efficiency under heavy load conditions. Moreover, high efficiency is also achieved under light load conditions, which is significantly different from other soft-switching buck converters. Detailed theoretical analyses of the steady-state operation modes are presented, along with detailed design methods and some simulation results. Finally, a 600 W prototype is built to validate the theoretical principles, and the switching waveforms and efficiencies are measured to validate the proposed topology.

9. A Bridgeless Boost Rectifier for Low-Voltage Energy Harvesting Applications

In this paper, a single-stage ac–dc power electronic converter is proposed to efficiently manage the energy harvested from electromagnetic microscale and mesoscale generators with low-voltage outputs. The proposed topology combines a boost converter and a buck-boost converter to condition the positive and negative half portions of the input ac voltage, respectively. Only one inductor and one capacitor are used in both circuitries to reduce the size of the converter. A 2 cm × 2 cm, 3.34-g prototype has been designed and tested at a 50-kHz switching frequency, demonstrating 71% efficiency at 54.5 mW. The input ac voltage with 0.4-V amplitude is rectified and stepped up to 3.3-V dc. Detailed design guidelines are provided with the purpose of minimizing the size, weight, and power losses. The theoretical analyses are validated by the experimental results.

10. A High Step-Up Three-Port DC–DC Converter for Stand-Alone PV/Battery Power Systems

A three-port dc–dc converter integrating photovoltaic (PV) and battery power for high step-up applications is proposed in this paper. The topology includes five power switches, two coupled inductors, and two active-clamp circuits. The coupled inductors are used to achieve a high step-up voltage gain and to reduce the voltage stress of the input-side switches. Two sets of active-clamp circuits are used to recycle the energy stored in the leakage inductors and to improve the system efficiency. The operation mode does not need to be changed when a transition between charging and discharging occurs. Moreover, maximum power point tracking of the PV source and regulation of the output voltage can be performed simultaneously during charging/discharging transitions. As long as the sun irradiation level is not too low, the maximum power point tracking (MPPT) algorithm is disabled only when the battery charging voltage is too high. Therefore, the control scheme of the proposed converter provides maximum utilization of PV power most of the time. As a result, the proposed converter has the merits of a high boosting level, a reduced number of devices, and a simple control strategy. Experimental results of a 200-W laboratory prototype are presented to verify the performance of the proposed three-port converter.
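The abstract does not state which MPPT algorithm is used, so the sketch below shows a generic perturb-and-observe tracker of the kind commonly paired with such converters; the toy PV curve, step size, and starting point are illustrative assumptions only.

```python
# Generic perturb-and-observe MPPT sketch (illustrative, not the paper's).

def po_mppt_step(v, p, v_prev, p_prev, step=0.5):
    """Return the next PV voltage reference given two operating points."""
    if p >= p_prev:                  # power rose: keep perturbing the same way
        direction = 1.0 if v >= v_prev else -1.0
    else:                            # power fell: reverse the perturbation
        direction = -1.0 if v >= v_prev else 1.0
    return v + direction * step

def pv_power(v):
    """Toy PV power curve with its maximum near 30 V (assumed shape)."""
    return max(0.0, 200.0 - 0.25 * (v - 30.0) ** 2)

v_prev, v = 20.0, 20.5
for _ in range(100):
    v_prev, v = v, po_mppt_step(v, pv_power(v), v_prev, pv_power(v_prev))
# v now oscillates within one step of the 30-V maximum power point
```

Disabling the tracker, as the abstract describes for a full battery, simply means holding the last voltage reference instead of calling the step function.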

11. Γ-Z-Source Inverters

Voltage-type Γ-Z-source inverters are proposed in this letter. They use a unique Γ-shaped impedance network for boosting their output voltage in addition to their usual voltage-buck behavior. Compared with other topologies, the proposed inverters use fewer components and a coupled transformer to produce a high gain and modulation ratio simultaneously. The obtained gain can be tuned by varying the turns ratio γΓZ of the transformer within the narrow range of 1 < γΓZ ≤ 2. This leads to fewer winding turns at high gain, as compared with other related topologies. Experimental testing has proven the validity of the proposed inverters.
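To see why gain rises as the turns ratio shrinks toward 1, one can use the ideal expressions commonly quoted for Γ-shaped impedance networks: a boost factor B = 1/(1 − K·d) with winding factor K = γ/(γ − 1) and shoot-through duty ratio d. Treat both formulas as assumptions for illustration, not as the letter's exact derivation.

```python
# Illustrative Γ-network boost factor (assumed ideal expressions, see lead-in).

def gamma_boost(gamma, d):
    """Ideal boost factor for turns ratio gamma and shoot-through duty d."""
    K = gamma / (gamma - 1.0)        # winding factor grows as gamma -> 1
    return 1.0 / (1.0 - K * d)       # valid only while K * d < 1

# Same shoot-through duty, smaller turns ratio -> higher boost:
for g in (2.0, 1.5, 1.25):
    print(g, round(gamma_boost(g, 0.15), 3))   # 1.429, 1.818, 4.0
```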

12. Modular Multilevel Inverter with New Modulation Method and Its Application to Photovoltaic Grid-Connected Generator

This paper proposes an improved phase-disposition pulse width modulation (PDPWM) for a modular multilevel inverter used for photovoltaic grid connection. The new modulation method is based on selective virtual loop mapping and achieves dynamic capacitor voltage balance without the help of an extra compensation signal. The concept of the virtual submodule (VSM) is first established, and by changing the loop-mapping relationships between the VSMs and the real submodules, the voltages of the upper/lower arm's capacitors can be well balanced. The method does not require sorting the voltages from highest to lowest; it only identifies the indices of the minimum and maximum capacitor voltages, which makes it suitable for a modular multilevel converter with a large number of submodules in one arm. Compared with carrier phase-shifted PWM (CPSPWM), this method is easier to realize in a field-programmable gate array, has much stronger dynamic regulation ability, and is conducive to the control of the circulating current. Its feasibility and validity have been verified by simulations and experiments.
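The selection step described above can be sketched as follows: rather than fully sorting all submodule capacitor voltages (O(N log N)), a single O(N) pass finds only the indices of the minimum and maximum voltages. Variable names are illustrative; the loop-mapping logic itself is the paper's contribution and is not reproduced here.

```python
# One-pass min/max index search, as used in place of a full voltage sort.

def min_max_indices(cap_voltages):
    """Return (index_of_min, index_of_max) in a single pass."""
    i_min = i_max = 0
    for i, v in enumerate(cap_voltages):
        if v < cap_voltages[i_min]:
            i_min = i
        if v > cap_voltages[i_max]:
            i_max = i
    return i_min, i_max

# Example: insert the most-discharged capacitor, bypass the most-charged.
volts = [1.52, 1.48, 1.55, 1.50, 1.47]
lo, hi = min_max_indices(volts)      # -> (4, 2)
```

For an arm with hundreds of submodules, avoiding the sort is exactly what makes the method FPGA-friendly.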

13. Soft-Switched Dual-Input DC–DC Converter Combining a Boost-Half-Bridge Cell and a Voltage-Fed Full-Bridge Cell

This letter presents a new zero-voltage-switching (ZVS) isolated dc–dc converter which combines a boost-half-bridge (BHB) cell and a full-bridge (FB) cell, so that two different types of power sources, i.e., both current-fed and voltage-fed, can be coupled effectively by the proposed converter for various applications, such as fuel cell and supercapacitor hybrid energy systems. By fully utilizing two high-frequency transformers and a shared leg of switches, the number of power devices and associated gate-driver circuits can be reduced. With phase-shift control, the converter can achieve ZVS turn-on of the active switches and zero-current-switching (ZCS) turn-off of the diodes. In this letter, the derivation, analysis, and design of the proposed converter are presented. Finally, a 25–50-V input, 300–400-V output prototype with a 600-W nominal power rating is built and tested to demonstrate the effectiveness of the proposed converter topology.

14. Integration and Operation of a Single-Phase Bidirectional Inverter With Two Buck/Boost MPPTs for DC-Distribution Applications

This study focuses on the integration and operation of a single-phase bidirectional inverter with two buck/boost maximum power point trackers (MPPTs) for dc-distribution applications. In a dc-distribution system, a bidirectional inverter is required to control the power flow between the dc bus and the ac grid, and to regulate the dc bus within a certain range of voltages. A droop regulation mechanism based on the inverter inductor current levels is proposed to reduce the capacitor size, balance the power flow, and accommodate load variation. Since the photovoltaic (PV) array voltage can vary from 0 to 600 V, especially with thin-film PV panels, the MPPT topology is formed with buck and boost converters to operate at a dc-bus voltage around 380 V, reducing the voltage stress on the inverter that follows. Additionally, the controller can check the input configuration of the two MPPTs online, equally distribute the PV-array output current to the two MPPTs in parallel operation, and switch control laws to smooth out mode transitions. A comparison between the conventional boost MPPT and the proposed buck/boost MPPT integrated with a PV inverter is also presented. Experimental results obtained from a 5-kW system have verified the discussion and feasibility.
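The droop mechanism is described only qualitatively above. A minimal sketch, assuming a linear droop law on the inverter inductor current with an illustrative 380-V nominal bus and a clamped regulation band (the gain and band values are not the paper's), might look like:

```python
# Generic current-mode droop sketch; numbers are illustrative assumptions.

def droop_reference(i_ind, v_nom=380.0, k_droop=0.5, v_band=10.0):
    """DC-bus voltage reference that sags as exported current rises."""
    v_ref = v_nom - k_droop * i_ind
    # Clamp so the bus stays within the allowed regulation band.
    return min(max(v_ref, v_nom - v_band), v_nom + v_band)

print(droop_reference(0.0))     # 380.0 at no load
print(droop_reference(10.0))    # 375.0 when exporting 10 A
print(droop_reference(-30.0))   # clamped at 390.0 when absorbing heavily
```

Letting the bus sag with current is what allows a smaller dc-bus capacitor: transient power imbalances are absorbed by the permitted voltage band rather than by bulk capacitance.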

IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS

1. QoS-Aware and Energy-Efficient Resource Management in OFDMA Femtocells

We consider the joint resource allocation and admission control problem for Orthogonal Frequency-Division Multiple Access (OFDMA)-based femtocell networks. We assume that Macrocell User Equipments (MUEs) can establish connections with Femtocell Base Stations (FBSs) to mitigate the excessive cross-tier interference and achieve better throughput. A cross-layer design model is considered, where multiband opportunistic scheduling at the Medium Access Control (MAC) layer and admission control at the network layer are assumed to operate at different time scales. We assume that both MUEs and Femtocell User Equipments (FUEs) have minimum average rate constraints, which depend on their geographical locations and their application requirements. In addition, blocking probability constraints are imposed on each FUE so that the connections from MUEs result only in controllable performance degradation for FUEs. We present an optimal design for the admission control problem by using the theory of Semi-Markov Decision Processes (SMDPs). Moreover, we devise a novel distributed femtocell power adaptation algorithm, which converges to the Nash equilibrium of a corresponding power adaptation game. This power adaptation algorithm reduces energy consumption for femtocells while still maintaining individual cell throughput by adapting the FBS power to the traffic load in the network. Finally, numerical results are presented to demonstrate the desirable operation of the optimal admission control solution, the significant performance gain of the proposed hybrid access strategy over the closed-access counterpart, and the great power saving achieved by the proposed power adaptation algorithm.

2. Spectrum Sharing Scheme Between Cellular Users and Ad-hoc Device-to-Device Users

In an attempt to utilize spectrum resources more efficiently, protocols sharing licensed spectrum with unlicensed users are receiving increased attention. From the perspective of cellular networks, spectrum underutilization makes spatial reuse a feasible complement to existing standards. Interference management is a major component in designing these schemes, as it is critical that licensed users maintain their expected quality of service. We develop a distributed dynamic spectrum protocol in which ad-hoc device-to-device users opportunistically access the spectrum actively in use by cellular users. First, channel gain estimates are used to set feasible transmit powers for device-to-device users that keep the interference they cause within the allowed interference temperature. Then, network information is distributed by route discovery packets in a random access manner to help establish either a single-hop or multi-hop route between two device-to-device users. We show that network information in the discovery packet can decrease the failure rate of route discovery and reduce the number of transmissions necessary to find a route. Using the found route, we show that two device-to-device users can communicate with a low probability of outage while only minimally affecting the cellular network, and can achieve significant power savings by communicating directly with each other instead of through the cellular base station.
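The first step above, setting feasible D2D transmit powers from channel-gain estimates, amounts to a simple cap: transmit no more than the interference temperature divided by the estimated gain toward the protected cellular receiver. The gains, interference temperature, and hardware limit below are illustrative linear-scale values, not the paper's.

```python
# Interference-temperature power cap for a D2D transmitter (illustrative).

def d2d_tx_power(i_temp, g_to_cellular, p_hw_max):
    """Largest Tx power keeping interference at the cellular Rx <= i_temp."""
    return min(p_hw_max, i_temp / g_to_cellular)

# 1e-9 W allowed interference, estimated gain 1e-7 to the cellular user,
# 100-mW hardware limit: here the interference cap, not hardware, binds.
p = d2d_tx_power(i_temp=1e-9, g_to_cellular=1e-7, p_hw_max=0.1)   # 0.01 W
```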

3. A Practical Cooperative Multicell MIMO-OFDMA Network Based on Rank Coordination

An important challenge of wireless networks is to boost cell-edge performance and enable multi-stream transmissions to cell-edge users. Interference mitigation techniques relying on multiple antennas and coordination among cells are heavily studied in the literature. Typical strategies in OFDMA networks include coordinated scheduling, beamforming, and power control. In this paper, we propose a novel and practical type of coordination for OFDMA downlink networks relying on multiple antennas at the transmitter and the receiver. The transmission ranks, i.e., the number of transmitted streams, and the user scheduling in all cells are jointly optimized in order to maximize a network utility function accounting for fairness among users. A distributed coordinated scheduler motivated by an interference pricing mechanism and relying on a master-slave architecture is introduced. The proposed scheme operates on the user's report of a recommended rank for the interfering cells, accounting for the receiver's interference suppression capability. It incurs very low feedback and backhaul overhead and enables efficient link adaptation. It is moreover robust to channel measurement errors and applicable to both open-loop and closed-loop MIMO operation. A 20% cell-edge performance gain over an uncoordinated LTE-A system is shown through system-level simulations.

4. Downlink Resource Allocation for Next Generation Wireless Networks with Inter-Cell Interference

This paper presents a novel downlink resource allocation scheme for OFDMA-based next-generation wireless networks subject to inter-cell interference (ICI). The scheme consists of radio resource and power allocations, which are implemented separately. Low-complexity heuristic algorithms are first proposed for the radio resource allocation, where a graph-based framework and fine physical resource block (PRB) assignment are used to mitigate major ICI and hence improve network performance. Given the solution of the radio resource allocation, a novel distributed power allocation is then performed to optimize the performance of cell-edge users under the condition that desirable performance for cell-center users is maintained. The power optimization is formulated as an iterative barrier-constrained water-filling problem and solved using the Lagrange method. Simulation results indicate that, compared with other schemes, the proposed scheme achieves significantly more balanced performance improvement between cell-edge and cell-center users in multi-cell networks, and therefore realizes the goal of future wireless networks of providing high performance to anyone from anywhere.
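The power optimization builds on classic water-filling. The sketch below shows only that textbook core (bisection on the water level μ), not the paper's barrier-constrained, cell-edge-aware variant; the channel values and power budget are made up.

```python
# Textbook water-filling by bisection on the water level mu (illustrative).

def waterfill(inv_gains, p_total, iters=60):
    """Allocate p_total over channels: p_k = max(0, mu - inv_gains[k])."""
    lo, hi = 0.0, max(inv_gains) + p_total    # bracket the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - n) for n in inv_gains)
        if used > p_total:
            hi = mu                  # water level too high: spend less
        else:
            lo = mu                  # budget not exhausted: raise level
    return [max(0.0, mu - n) for n in inv_gains]

# inv_gains are noise-to-gain ratios: lower is a better channel.
powers = waterfill([0.1, 0.5, 1.2], p_total=1.0)
# best channel (0.1) receives the most power; the worst gets none here
```

The paper's barrier constraints would, in effect, reshape this allocation so that cell-center users keep a guaranteed share while cell-edge users are boosted.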

5. SINR and Throughput Analysis for Random Beamforming Systems with Adaptive Modulation

In this paper, we derive the exact probability distribution of the post-scheduling signal-to-interference-plus-noise ratio (SINR), considering both user feedback and scheduling. We also develop an optimized adaptive modulation scheme for orthogonal random beamforming systems with M transmit antennas and K single-antenna users. The exact probability distributions of each user's feedback SINR and of the post-scheduling SINR are derived rigorously by direct integration and the multinomial distribution. It is also shown that the derived cumulative distribution function (CDF) of the post-scheduling SINR happens to be identical to the existing approximate CDF for SINRs higher than 0 dB. Closed-form expressions for system performance, such as the average spectral efficiency (ASE) and average bit error ratio (A-BER), are derived using the CDF of the post-scheduling SINR. The optimal SINR thresholds that maximize the ASE under a target A-BER constraint are solved using the derived closed-form CDF and a Lagrange multiplier. Key contributions of this paper include the derivation of the exact CDF of the post-scheduling SINR by direct integration, and its application to an optimized adaptive modulation scheme based on a Lagrange multiplier. Simulations show the correspondence between the theoretical and empirical CDFs, and the performance improvement of the proposed adaptive modulation method in terms of ASE.
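Once the optimal SINR thresholds are found, adaptive modulation reduces to a lookup: pick the largest constellation whose threshold the post-scheduling SINR meets. The thresholds below are illustrative placeholders, not the optimized values from the paper.

```python
# Threshold-based adaptive modulation lookup (placeholder thresholds).

THRESHOLDS_DB = [(18.0, "64-QAM"), (12.0, "16-QAM"),
                 (6.0, "QPSK"), (0.0, "BPSK")]

def select_modulation(sinr_db):
    """Pick the largest constellation whose SINR threshold is met."""
    for thr, mod in THRESHOLDS_DB:   # ordered from highest threshold down
        if sinr_db >= thr:
            return mod
    return None                      # below the BPSK threshold: defer Tx

mod = select_modulation(13.7)        # "16-QAM"
```

The paper's contribution is choosing the threshold values themselves, by maximizing ASE under the A-BER constraint using the exact post-scheduling SINR CDF.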

6. Robust and Efficient Multi-Cell Cooperation under Imperfect CSI and Limited Backhaul

Future cellular networks need to harvest existing spectral resources more efficiently. Networks will therefore be deployed at a higher density in order to increase spatial reuse, which requires advanced interference mitigation techniques to cope with the increased interference level. In this paper, the two-way interference channel is analyzed as a model for a typical inter-cell interference scenario. Based on this model, a new inter-cell interference mitigation approach is derived. This new approach reshapes interference by asymmetrically assigning uplink and downlink to communication pairs, i.e., one communication pair operates in the uplink while an adjacent communication pair is in the downlink. In addition, backhaul resources are taken into account, which are used to exchange support information between radio access points and to support the interference mitigation process. The introduced approach is compared with cooperative multi-point (CoMP) techniques that employ joint transmission and reception algorithms. The evaluation is done under consideration of limited backhaul resources and imperfect channel state information. It shows that assigning uplink and downlink asymmetrically can outperform cooperative multi-point techniques for terminals close to the cell border, with gains of up to about 20% compared with noncooperative transmission and 10% compared with CoMP.

7. Hierarchical Competition for Downlink Power Allocation in OFDMA Femtocell Networks

This paper considers the problem of downlink power allocation in an orthogonal frequency-division multiple access (OFDMA) cellular network with macrocells underlaid with femtocells. The femto access points (FAPs) and the macro base stations (MBSs) in the network are assumed to compete with each other to maximize their capacity under power constraints. This competition is captured in the framework of a Stackelberg game with the MBSs as the leaders and the FAPs as the followers. The leaders are assumed to have enough foresight to consider the responses of the followers while formulating their own strategies. The Stackelberg equilibrium is introduced as the solution of the game, and it is shown to exist under some mild assumptions. The game is expressed as a mathematical program with equilibrium constraints (MPEC), and the best response for a one-leader, multiple-follower game is derived. The best response is also obtained when a quality-of-service constraint is placed on the leader. Orthogonal power allocation between the leader and the followers is obtained as a special case of this solution under high interference. These results are used to build algorithms that iteratively calculate the Stackelberg equilibrium, and a sufficient condition is given for their convergence. The performance of the system at a Stackelberg equilibrium is found to be much better than that at a Nash equilibrium.

8. Minimum Energy Channel Codes for Nanoscale Wireless Communications

It is essential to develop energy-efficient communication techniques for nanoscale wireless communications. In this paper, a new modulation and a novel minimum energy coding scheme (MEC) are proposed to achieve energy efficiency in wireless nanosensor networks (WNSNs). Unlike existing studies, MEC maintains the desired code distance to provide reliability while minimizing energy. It is analytically shown that, with MEC, codewords can be decoded perfectly for large code distances if the source set cardinality is less than the inverse of the symbol error probability. Performance evaluations show that MEC outperforms popular codes such as Hamming, Reed–Solomon, and Golay codes in terms of average codeword energy.

9. Spectrum Sensing for Digital Primary Signals in Cognitive Radio: A Bayesian Approach for Maximizing Spectrum Utilization

With the prior knowledge that the primary user is highly likely to be idle and that the primary signals are digitally modulated, we propose an optimal Bayesian detector for spectrum sensing to achieve higher spectrum utilization in cognitive radio networks. We derive the optimal detector structure for MPSK-modulated primary signals with known order over AWGN channels and give the corresponding suboptimal detectors in both the low- and high-SNR (signal-to-noise ratio) regimes. Through approximations, it is found that, in the low-SNR regime, for MPSK (M > 2) signals the suboptimal detector is the energy detector, while for BPSK signals the suboptimal detector is energy detection on the real part. In the high-SNR regime, it is shown that, for BPSK signals, the test statistic is the sum of the signal magnitudes but uses the real part of the phase-shifted signals as the input. We provide a performance analysis of the suboptimal detectors in terms of the probabilities of detection and false alarm, and the selection of the detection threshold and number of samples. Simulations show that the Bayesian detector performs similarly to the energy detector in the low-SNR regime but performs better in the high-SNR regime in terms of spectrum utilization and secondary users' throughput.
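The low-SNR suboptimal detectors mentioned above, plain energy detection for MPSK and energy detection on the real part for BPSK, can be sketched with a small Monte-Carlo comparison; the sample count, signal amplitude, noise model, and seed are illustrative assumptions rather than the paper's simulation setup.

```python
# Toy comparison of the two low-SNR test statistics (illustrative setup).
import random

def energy_stat(samples):
    """Energy detector: sum of |y|^2 over the sensing window."""
    return sum(abs(y) ** 2 for y in samples)

def real_energy_stat(samples):
    """Energy detection on the real part (low-SNR BPSK detector)."""
    return sum(y.real ** 2 for y in samples)

random.seed(1)                       # deterministic toy experiment
N, A = 200, 0.7                      # samples per window, BPSK amplitude
noise = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
bpsk = [A * random.choice((-1, 1)) + n for n in noise]

t_h0 = real_energy_stat(noise)       # statistic under "channel idle"
t_h1 = real_energy_stat(bpsk)        # statistic under "BPSK primary on"
# t_h1 exceeds t_h0 because all of the BPSK signal energy lies in the
# real part, which is why discarding the imaginary part helps for BPSK.
```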