what is blind signal separation


  • 8/8/2019 What is Blind Signal Separation


PROJECT REPORT
ON
BLIND SOURCE SEPARATION USING H-INFINITY FILTER

PROJECT REPORT SUBMITTED TOWARDS PARTIAL FULFILLMENT OF THE
REQUIREMENTS FOR THE DEGREE OF

BACHELOR OF TECHNOLOGY
IN
APPLIED ELECTRONICS & INSTRUMENTATION ENGINEERING

UNDER
BIJU PATTNAIK UNIVERSITY OF TECHNOLOGY, ROURKELA

    BY

    VIJAY PRASAD POUDEL 0701209230

    SAURAV SAMANTRAY 0701209357

    NAIRITA BANERJEE

    0701209374

    DEEPAK KUMAR SWAIN 0701209376

    JYOTI RANJAN SAHOO 0821209029

    UNDER THE GUIDANCE OF Er. SUDHANSHU MOHAN BISWAL

DEPARTMENT OF APPLIED ELECTRONICS & INSTRUMENTATION


ENGINEERING, SILICON INSTITUTE OF TECHNOLOGY

SILICON HILLS, PATIA, BHUBANESWAR-751024

SILICON INSTITUTE OF TECHNOLOGY, BHUBANESWAR

DEPARTMENT OF APPLIED ELECTRONICS AND INSTRUMENTATION

BIJU PATTNAIK UNIVERSITY OF TECHNOLOGY

CERTIFICATE

This is to certify that the project entitled BLIND SOURCE SIGNAL SEPARATION BY H-INFINITY FILTER is submitted by VIJAY PRASAD POUDEL, SAURAV SAMANTRAY, NAIRITA BANERJEE, DEEPAK KUMAR SWAIN & JYOTI RANJAN SAHOO, bearing Regd. No. 0701209230, 0701209357, 0701209374, 0701209376 & 0821209029 respectively, in partial fulfillment of the Bachelor of Technology under Biju Pattnaik University of Technology, Rourkela, in Applied Electronics and Instrumentation Engineering, and is a bona fide work carried out by them under the supervision and guidance of Er. SUDHANSHU MOHAN


BISWAL during the 7th semester of the academic session 2010-2011.

    H.O.D PROJECT GUIDE EXTERNAL

    ACKNOWLEDGEMENT

    The satisfaction that accompanies the successful completion of any task would

    be incomplete without the people, whose constant guidance and encouragement

    crowns all effort with success.

We take this opportunity to express our sincere thanks to the management of SILICON INSTITUTE OF TECHNOLOGY, which has been a constant source of inspiration and strength to us throughout our study.

It gives us immense pleasure to have the privilege of expressing our indebtedness and gratitude to our respected H.O.D, Er. NARAYAN NAYAK, for always being a motivational force.

We express our gratitude to our project guide, Er. SUDHANSHU MOHAN BISWAL, whose positive criticism, valuable suggestions and guidance helped us to complete our work successfully.


Lastly, words fail to express our gratitude to all the lecturers and friends for their co-operation, constructive criticism and valuable suggestions during the project work.

Thanks to all.

    CONTENTS

CHAPTER-1                                                      Pages
  What Is Blind Signal Separation ----------------------------- 7
  Why Blind Signal Separation? -------------------------------- 8
  Applications Of BSS ------------------------------------------ 8
  Assumptions Made In Blind Source Separation ------------------ 8

CHAPTER-2
  Techniques Used In Blind Source Separation ------------------- 9
  Principal Component Analysis --------------------------------- 10
  Independent Component Analysis ------------------------------- 10
  Sparse Component Analysis ------------------------------------ 11

CHAPTER-3
  Adaptive Filtering Technique --------------------------------- 18
  Least Mean Square Filters ------------------------------------ 19
  Kalman Filter ------------------------------------------------ 20
  Why H-Infinity Filtering ------------------------------------- 21

CHAPTER-4
  BSS Problem Formulation -------------------------------------- 22
  Matlab Code -------------------------------------------------- 26

CHAPTER-5
  Applications ------------------------------------------------- 36

    FIGURES pages

    FIG-1-------------------------------------------------------------------- 7

    FIG-2-------------------------------------------------------------------- 7

    FIG-3-------------------------------------------------------------------- 7

    FIG-4-------------------------------------------------------------------- 15

    FIG-5-------------------------------------------------------------------- 16

    FIG-6-------------------------------------------------------------------- 18

    FIG-7-------------------------------------------------------------------- 19

    FIG-8-------------------------------------------------------------------- 23

    FIG-9-------------------------------------------------------------------- 36


CHAPTER-1

BLIND SIGNAL SEPARATION


WHAT IS BLIND SIGNAL SEPARATION?

In blind signal separation, multiple streams of information are extracted from linear mixtures of these signal streams. This process is blind if examples of the source signals, along with their corresponding mixtures, are unavailable for training.

Figure 1: An illustration of blind source separation. This figure shows three source signals, or independent components.

Figure 2: Due to some external circumstances, only linear mixtures of the source signals in Fig. 1, as depicted here, can be observed.


Figure 3: Using only the linear mixtures in Fig. 2, the source signals in Fig. 1 can be estimated, up to some multiplying factors. This figure shows the estimates of the source signals.
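The mixing model of Figs. 1-3 can be sketched numerically. The following Python example is an illustrative editorial addition, not part of the original report: the sources, the 2x2 mixing matrix A, and the sample count are arbitrary choices. It shows that if A were known, inversion would recover the sources exactly; BSS is the problem of achieving the same recovery without knowing A.

```python
import math

# Two made-up source signals: a sine and a square wave.
N = 200
s1 = [math.sin(2 * math.pi * 5 * k / N) for k in range(N)]
s2 = [1.0 if (k // 20) % 2 == 0 else -1.0 for k in range(N)]

# Arbitrary invertible 2x2 mixing matrix: each observation is a
# different linear combination of the sources (as in Fig. 2).
A = [[0.8, 0.3],
     [0.4, 0.9]]
x1 = [A[0][0] * a + A[0][1] * b for a, b in zip(s1, s2)]
x2 = [A[1][0] * a + A[1][1] * b for a, b in zip(s1, s2)]

# If A were known, inverting it would recover the sources exactly;
# BSS must estimate such an inverse from x1 and x2 alone (Fig. 3).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]
r1 = [Ainv[0][0] * a + Ainv[0][1] * b for a, b in zip(x1, x2)]
r2 = [Ainv[1][0] * a + Ainv[1][1] * b for a, b in zip(x1, x2)]
print(max(abs(a - b) for a, b in zip(r1, s1)))   # essentially zero
```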

    WHY BLIND SIGNAL SEPARATION?

Interest in blind signal separation has recently developed for three reasons:
(1) the development of statistical frameworks for understanding the BSS task,
(2) a corresponding development of several useful BSS methods, and
(3) the identification of many potential applications of BSS.

    APPLICATIONS OF BSS

1. Blind I/Q Signal Separation for Receiver Signal Processing
2. Blind Separation of Co-Channel Signals Using an Antenna Array
3. Separation of Overlapping Radio Frequency Identification (RFID) Signals by Antenna Arrays
4. Blind Signal Separation in Biomedical Applications like Electroencephalography (EEG), Magnetoencephalography (MEG), Electrocardiography (ECG/EKG) and Functional Magnetic Resonance Imaging (fMRI)
5. Communication systems: to separate/reduce intersymbol interference (ISI)

    ASSUMPTIONS MADE IN BLIND SOURCE SEPARATION


1. Assumes no information about the mixing process or sources: BLIND.
2. Does not assume knowledge of the direction of arrival (DOA) of sources.
3. Does not require signals to be separable in the frequency domain.
4. Mutual statistical independence of sources, i.e. they don't have mutual information.

CHAPTER-2

TECHNIQUES USED IN BLIND SIGNAL SEPARATION


    TECHNIQUES USED IN BLIND SOURCE SEPARATION

1. Principal Component Analysis (PCA)
2. Independent Component Analysis (ICA)
3. Sparse Component Analysis (SCA)

    PRINCIPAL COMPONENT ANALYSIS

Principal component analysis is one of the unsupervised learning methods. PCA projects the data into a new space spanned by the principal components. Each successive principal component is selected to be orthonormal to the previous ones and to capture the maximum variance that is not already present in the previous components. The constraint of mutual orthogonality of components implied by classical PCA, however, may not be appropriate for biomedical data. Moreover, since PCA uses second-order statistics to achieve the correlation learning rules, and only covariances between the observed variables are used in the estimation, its features are sensitive only to second-order statistics. The failure of correlation-based learning algorithms is that they reflect only the amplitude spectrum of the signal and ignore the phase spectrum. Extracting and characterizing the most informative features of the signals, however, requires higher-order statistics.

    INDEPENDENT COMPONENT ANALYSIS

The goal of ICA is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a


representation can be used to capture the essential structure of the data in many applications, including feature extraction and signal separation. ICA is already widely used for performing blind source separation (BSS) in signal processing. ICA is a fairly new and generally applicable method for several challenges in signal processing. It reveals a diversity of theoretical questions and opens a variety of potential applications. Successful results in EEG, fMRI, speech recognition and face recognition systems indicate the power of, and the optimistic hope in, the new paradigm.

    SPARSE COMPONENT ANALYSIS

The blind separation technique includes two steps: the first is to estimate a mixing matrix (the basis matrix in the sparse representation), and the second is to estimate the sources (the coefficient matrix). If the sources are sufficiently sparse, blind separation can be carried out directly in the time domain. Otherwise, blind separation can be implemented in the time-frequency domain after applying wavelet packet transformation preprocessing to the observed mixtures. In these cases sparse component analysis is used.
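The sparsity argument above can be sketched in Python (an illustrative editorial addition; the sources, activity probability, amplitude threshold and angle tolerance below are arbitrary assumptions). When the sources are rarely active at the same instant, each observation vector points along a single column of the mixing matrix, so the columns can be identified from the directions of the observed samples:

```python
import math, random

random.seed(0)
N = 1000
# Hypothetical sparse sources: zero most of the time, and rarely
# active at the same instant.
s1 = [random.gauss(0, 1) if random.random() < 0.1 else 0.0 for _ in range(N)]
s2 = [random.gauss(0, 1) if random.random() < 0.1 else 0.0 for _ in range(N)]

A = [[0.9, 0.2],   # arbitrary mixing matrix; its columns are the
     [0.1, 0.8]]   # directions we hope to recover from the mixtures
x1 = [A[0][0] * a + A[0][1] * b for a, b in zip(s1, s2)]
x2 = [A[1][0] * a + A[1][1] * b for a, b in zip(s1, s2)]

# Where only one source is active, (x1[k], x2[k]) is proportional to a
# single column of A, so its direction identifies that column.
angles = []
for a, b in zip(x1, x2):
    if math.hypot(a, b) > 0.3:     # skip near-silent samples
        ang = math.atan2(b, a)
        if ang < 0:
            ang += math.pi         # fold away the sign ambiguity
        angles.append(ang)

col1 = math.atan2(A[1][0], A[0][0])    # true column directions
col2 = math.atan2(A[1][1], A[0][1])
near = sum(1 for t in angles
           if min(abs(t - col1), abs(t - col2)) < 0.15)
print(near / len(angles))    # most active samples sit near a column
```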

PRINCIPAL COMPONENT ANALYSIS

The following is a detailed description of PCA using the covariance method.

    Organize the data set

Suppose we have data comprising a set of observations of M variables, and we want to reduce the data so that each observation can be described with only L variables, L < M. Suppose further that the data are arranged as a set of N data vectors, with each vector representing a single grouped observation of the M variables.

Write the data as column vectors x1, ..., xN, each of which has M rows. Place the column vectors into a single matrix X of dimensions M × N.

    Calculate the empirical mean

    Find the empirical mean along each dimension m = 1, ..., M .


Place the calculated mean values into an empirical mean vector u of dimensions M × 1.

    Calculate the deviations from the mean

Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data [6]. Hence we proceed by centering the data as follows:

Subtract the empirical mean vector u from each column of the data matrix X, and store the mean-subtracted data in the M × N matrix B:

B = X - u h

where h is a 1 × N row vector of all 1s.

    Find the covariance matrix

Find the M × M empirical covariance matrix C from the outer product of matrix B with itself:

C = E[ B ⊗ B ] = (1/N) B B*

where E is the expected value operator, ⊗ is the outer product operator, and * is the conjugate transpose operator. Note that if B consists entirely of real numbers, which is the case in many applications, the "conjugate transpose" is the same as the regular transpose.

Find the eigenvectors and eigenvalues of the covariance matrix

Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C:

V^(-1) C V = D

where D is the diagonal matrix of eigenvalues of C. Matrix D will take the form of an M × M diagonal matrix, where D[m,m] = λm is the mth eigenvalue of the covariance matrix C, and the off-diagonal entries are zero.


Matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired: the mth eigenvalue corresponds to the mth eigenvector.

Rearrange the eigenvectors and eigenvalues

Sort the columns of the eigenvector matrix V and eigenvalue matrix D in order of decreasing eigenvalue. Make sure to maintain the correct pairings between the columns in each matrix.

Compute the cumulative energy content for each eigenvector

The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m:

g[m] = D[1,1] + D[2,2] + ... + D[m,m]

    Select a subset of the eigenvectors as basis vectors

Save the first L columns of V as the M × L matrix W:

W[p,q] = V[p,q] for p = 1, ..., M and q = 1, ..., L, where 1 ≤ L ≤ M.

Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent. In this case, choose the smallest value of L such that

g[L] / g[M] ≥ 0.9

    Convert the source data to z-scores

Create an M × 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C:

s[m] = sqrt( C[m,m] ), for m = 1, ..., M


Calculate the M × N z-score matrix:

Z = B ./ (s h)   (divide element-by-element)

Project the z-scores of the data onto the new basis

The projected vectors are the columns of the matrix

Y = W* Z

where W* is the conjugate transpose of the eigenvector matrix.
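The covariance-method steps listed above can be sketched end to end in Python for the M = 2 case. This is an illustrative editorial addition with synthetic data; the 2x2 eigendecomposition uses the closed-form quadratic formula rather than a library routine:

```python
import math, random

random.seed(1)
N = 500
# Synthetic data: M = 2 correlated variables, N = 500 observations.
data = []
for _ in range(N):
    t = random.gauss(0, 2)        # strong direction of variation
    n = random.gauss(0, 0.5)      # weak direction of variation
    data.append((t + 0.3 * n, 0.5 * t - n))

# Empirical mean, and deviations from the mean (matrix B).
mu = [sum(p[i] for p in data) / N for i in range(2)]
B = [(p[0] - mu[0], p[1] - mu[1]) for p in data]

# Covariance matrix C = B B^T / N.
C = [[sum(a[i] * a[j] for a in B) / N for j in range(2)]
     for i in range(2)]

# Eigenvalues of the symmetric 2x2 C from the quadratic formula.
tr = C[0][0] + C[1][1]
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
l1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # largest eigenvalue
l2 = (tr - math.sqrt(tr * tr - 4 * det)) / 2
# Unit eigenvector for l1: the principal direction.
v = (C[0][1], l1 - C[0][0])
nv = math.hypot(*v)
v = (v[0] / nv, v[1] / nv)

# Project the centered data on the principal component; the captured
# variance equals the top eigenvalue.
pc1 = [a[0] * v[0] + a[1] * v[1] for a in B]
var_pc1 = sum(q * q for q in pc1) / N
print(var_pc1, l1)
```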

LIMITATIONS OF PRINCIPAL COMPONENT ANALYSIS

There are, however, some limitations of PCA that we should take into consideration. First of all, it is a linear method. While PCA still tries to produce components by variance, it fails when the largest variance is not along a single vector but along a non-linear path. Neural networks, on the other hand, are perfectly capable of dealing with nonlinear problems on their own. In addition, they can do scaling directly, so that the principal components can be scaled by their importance (eigenvalues). So while PCA in theory is an optimal linear feature extractor, it can be bad for non-linear problems.

LINEAR INDEPENDENT COMPONENT ANALYSIS

Independent component analysis (ICA) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals.

Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. In the definitions, the observed m-dimensional random vector is denoted by x = (x1, ..., xm)^T.

General definition

The data is represented by the random vector x = (x1, ..., xm)^T and the components as the random vector s = (s1, ..., sn)^T. The task is to transform the observed data x, using a linear static transformation W, as

s = W x,


into maximally independent components s, measured by some function F(s1, ..., sn) of independence.

Noisy ICA model

ICA of a random vector x consists of estimating the following generative model for the data:

x = A s + n

where the latent variables (components) si in the vector s = (s1, ..., sn)^T are assumed independent. The matrix A is a constant m × n 'mixing' matrix, and n is an m-dimensional random noise vector. This definition reduces the ICA problem to ordinary estimation of a latent variable model.

Noise-free ICA model

ICA of a random vector x consists of estimating the following generative model for the data:

x = A s

where A and s are as in the noisy model. Here the noise vector has been omitted, and the natural relation n = m is used.

    One of the applications of ICA is in solving the cocktail party problem


    FIG-4

A simple application of ICA is the cocktail party problem, where the underlying speech signals are separated from sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. An important note to consider is that if N sources are present, at least N observations (e.g. microphones) are needed to get the original signals. This constitutes the square case (M = N, where M is the input dimension of the data and N is the dimension of the model).
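A minimal Python sketch of this idea follows. It is an editorial addition, not the report's own method: the mixtures are first whitened (as the report's ICA MATLAB code also does), after which only a rotation remains unknown, and that rotation is found by maximizing non-Gaussianity, measured here by kurtosis. The sources, mixing matrix and one-degree search grid are arbitrary assumptions:

```python
import math, random

random.seed(2)
N = 4000
# Two independent, sub-Gaussian (uniform) sources and an arbitrary mix.
s = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(2)]
A = [[0.7, 0.5],
     [0.3, 0.9]]
x = [[A[i][0] * s[0][k] + A[i][1] * s[1][k] for k in range(N)]
     for i in range(2)]

def centre(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

x = [centre(v) for v in x]

# Whitening: z = C^(-1/2) x makes the outputs uncorrelated with unit
# variance; afterwards only an unknown rotation remains.
c00 = sum(a * a for a in x[0]) / N
c11 = sum(a * a for a in x[1]) / N
c01 = sum(a * b for a, b in zip(x[0], x[1])) / N
tr, det = c00 + c11, c00 * c11 - c01 * c01
l1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
l2 = (tr - math.sqrt(tr * tr - 4 * det)) / 2
e1 = (c01, l1 - c00)
n1 = math.hypot(*e1)
e1 = (e1[0] / n1, e1[1] / n1)
e2 = (-e1[1], e1[0])                     # orthogonal eigenvector
W = [[e1[i] * e1[j] / math.sqrt(l1) + e2[i] * e2[j] / math.sqrt(l2)
      for j in range(2)] for i in range(2)]
z = [[W[i][0] * x[0][k] + W[i][1] * x[1][k] for k in range(N)]
     for i in range(2)]

def kurt(v):
    m2 = sum(a * a for a in v) / len(v)
    m4 = sum(a ** 4 for a in v) / len(v)
    return m4 / (m2 * m2) - 3            # excess kurtosis

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = math.sqrt(sum((p - ma) ** 2 for p in a))
    vb = math.sqrt(sum((q - mb) ** 2 for q in b))
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (va * vb)

# Search for the rotation angle that makes one output maximally
# non-Gaussian (largest |kurtosis|): that output aligns with a source.
best = max((abs(kurt([math.cos(t) * a + math.sin(t) * b
                      for a, b in zip(z[0], z[1])])), t)
           for t in (math.pi * j / 180 for j in range(180)))
theta = best[1]
u1 = [math.cos(theta) * a + math.sin(theta) * b
      for a, b in zip(z[0], z[1])]
print(max(abs(corr(u1, s[0])), abs(corr(u1, s[1]))))  # near 1
```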

FIG-5: Sources s1 and s2 pass through the mixing matrix A (x = As) to give the observations x1 and x2; n sources, m = n observations.


CHAPTER-3

ADAPTIVE FILTERING TECHNIQUE


    ADAPTIVE FILTERING TECHNIQUE

An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm. Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing and adapt their performance based on the input signal.

A conventional fixed filter, which is used to extract information from an input time sequence, is linear and time invariant. An adaptive filter is a filter which automatically adjusts its coefficients to optimize an objective function. A conceptual adaptive filter is shown in Fig. 6, where the filter minimizes the objective function of mean square error by modifying itself, and is thus a time-varying system. An adaptive filter is useful when an exact filtering operation may be unknown and/or this operation may be mildly non-stationary.

    FIG-6


e(k) = d(k) - y(k)

where w1, w2, w3 are the adjustment weights,
d(k) = desired signal
y(k) = filter output signal
e(k) = error signal

    Least mean square filters

Least mean squares (LMS) algorithms are a class of adaptive filters used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method, in that the filter is adapted based only on the error at the current time.

    FIG-7

Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt its estimate h_hat(n) to make it as close as possible to h(n), while using only the observable signals x(n), d(n) and e(n); y(n), v(n) and h(n) are not directly observable.

Definition of symbols


d(n) = y(n) + v(n)

    LMS algorithm summary

The LMS algorithm for a pth-order filter can be summarized as:

Parameters:     p = filter order, mu = step size
Initialisation: h_hat(0) = zero vector of length p
Computation:    For n = 0, 1, 2, ...
                x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
                e(n) = d(n) - h_hat^H(n) x(n)
                h_hat(n+1) = h_hat(n) + mu e*(n) x(n)

where h_hat^H(n) denotes the Hermitian transpose of h_hat(n) and e*(n) is the complex conjugate of e(n).
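The summary above can be turned into a short Python sketch (an illustrative editorial addition; the unknown 2-tap system, step size, and noise level are arbitrary choices). It identifies an unknown real-valued FIR system, the standard LMS test case:

```python
import random

random.seed(3)
# Hypothetical unknown system to identify: a 2-tap FIR filter.
h = [0.5, -0.3]
p = 2        # filter order (number of taps)
mu = 0.05    # step size

x = [random.gauss(0, 1) for _ in range(3000)]
d = [0.0] * len(x)
for n in range(1, len(x)):
    # Desired signal d(n) = y(n) + v(n): system output plus small noise.
    d[n] = h[0] * x[n] + h[1] * x[n - 1] + random.gauss(0, 0.01)

h_hat = [0.0] * p                 # initialisation: zero weight vector
for n in range(1, len(x)):
    xv = [x[n], x[n - 1]]         # input vector [x(n), x(n-1)]
    y = sum(w * a for w, a in zip(h_hat, xv))
    e = d[n] - y                  # error e(n) = d(n) - h_hat' x(n)
    h_hat = [w + mu * e * a for w, a in zip(h_hat, xv)]  # weight update

print(h_hat)    # approaches [0.5, -0.3]
```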

    KALMAN FILTER

Its purpose is to use measurements that are observed over time, contain noise (random variations) and other inaccuracies, and produce values that tend to be closer to the true values of the measurements and their associated calculated values. The Kalman filter has many applications in technology, and is an essential part of the development of space and military technology. The Kalman filter produces estimates of the true values of measurements and their associated calculated values by predicting a value, estimating the uncertainty of the predicted value, and computing a weighted average of the predicted value and the measured value. The most weight is given to the value with the least uncertainty. The estimates produced by the method tend to be closer to the true values than the original measurements, because the weighted average has a better estimated uncertainty than either of the values that went into it. The Kalman filter is a recursive estimator. This means that only the estimated state from


the previous time step and the current measurement are needed to compute the estimate for the current state.
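A scalar predict-update sketch in Python may make this concrete (an editorial addition; the random-walk state model and the noise variances q and r are arbitrary assumptions):

```python
import random

random.seed(4)
# Scalar random-walk state with noisy measurements:
#   x(k+1) = x(k) + w(k),   z(k) = x(k) + v(k)
q, r = 0.001, 1.0            # process / measurement noise variances
x_true, x_hat, p_cov = 0.0, 0.0, 1.0
errs_raw, errs_kf = [], []
for _ in range(2000):
    x_true += random.gauss(0, q ** 0.5)
    z = x_true + random.gauss(0, r ** 0.5)
    p_cov += q                       # predict: uncertainty grows
    K = p_cov / (p_cov + r)          # gain: weight given to z
    x_hat += K * (z - x_hat)         # update: weighted average
    p_cov *= (1 - K)                 # uncertainty shrinks after update
    errs_raw.append(abs(z - x_true))
    errs_kf.append(abs(x_hat - x_true))

# Filtered estimates track the state much better than raw measurements.
print(sum(errs_kf) / len(errs_kf), sum(errs_raw) / len(errs_raw))
```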

    WHY H-INFINITY FILTERING

H2 filtering, also known as Kalman filtering, is an estimation method which minimizes the "average" estimation error. More precisely, the Kalman filter minimizes the variance of the estimation error. But there are a couple of serious limitations to the Kalman filter.

The Kalman filter assumes that the noise properties are known. What if we don't know anything about the system noise?

The Kalman filter minimizes the "average" estimation error. What if we would prefer to minimize the worst-case estimation error?

These limitations gave rise to H-infinity filtering, also known as minimax filtering. Minimax filtering minimizes the "worst-case" estimation error. More precisely, the minimax filter minimizes the maximum singular value of the transfer function from the noise to the estimation error. While the Kalman filter requires knowledge of the noise statistics of the filtered process, the minimax filter requires no such knowledge.


    CHAPTER-4

    BSS PROBLEM FORMULATION


    BSS PROBLEM FORMULATION

    FIG-8

R unobserved source signals: x[n] = (x1[n], x2[n], ..., xR[n])
R random signals: y[n] = (y1[n], y2[n], ..., yR[n]) as linear instantaneous mixtures, given as:

y[n] = W x[n]

Consider the problem of estimating the variables of some system. In dynamic systems (that is, systems which vary with time) the system variables are often denoted by the term state variables. Assume that the system variables, represented by the vector x, are governed by the equation x_(k+1) = A x_k + w_k, where w_k is random process noise, and the subscripts on the vectors represent the time step. Now suppose we can measure some combination of the states. Then our measurement can be represented by the equation z_k = H x_k + v_k, where v_k is random measurement noise.

Now suppose we want to find an estimator for the state x based on the measurements z and our knowledge of the system equation. The estimator structure is assumed to be in the following predictor-corrector form:

xhat_(k+1) = A xhat_k + K_k (z_k - H xhat_k)

(FIG-8: sources x1[n] and x2[n] pass through mixing gains g11, g12, g21, g22 to form the observations y1[n] and y2[n], which pass through separating gains h11, h12, h21, h22 to form the outputs u1[n] and u2[n].)


where K_k is some gain which we need to determine. If we want to minimize the 2-norm (the variance) of the estimation error, then we will choose K_k based on the Kalman filter. However, if we want to minimize the infinity-norm (the "worst-case" value) of the estimation error, then we will choose K_k based on the minimax filter.

Several minimax filtering formulations have been proposed. The one we will consider here is the following: find a filter gain K_k such that the maximum singular value of the transfer function from the noise to the estimation error is less than g. This is a way of bounding the worst-case estimation error. This problem will have a solution for some values of g, but not for values of g which are too small. If we choose a g for which the stated problem has a solution, then the minimax filtering problem can be solved by a constant gain K, which is found by solving the following simultaneous equations:

In the above equations, the superscript -1 indicates matrix inversion, the superscript T indicates matrix transposition, and I is the identity matrix. The simultaneous solution of these three equations is a problem in itself, but once we have a solution, the matrix K gives the minimax filtering solution. If g is too small, then the equations will not have a solution.

One method to solve the three simultaneous equations is to use an iterative approach. A more analytical approach is as follows:

Form the following 2n × 2n matrix Z:

Find the eigenvectors of Z. Denote those eigenvectors corresponding to eigenvalues outside the unit circle as c_i (i = 1, ..., n).

If we have a square matrix D, then the eigenvalues of D are defined as all values of λ that satisfy the equation Dg = λg for some vector g. It turns out that if D is an n × n matrix, then there are always n values of λ that satisfy this equation; that is, D has n eigenvalues. For instance, consider the following D matrix:


The eigenvalues of this matrix are 1 and 3. In other words, the two solutions to the Dg = λg equation are λ = 1 and λ = 3. Eigenvalues are difficult to compute in general, but if you have a 2 × 2 matrix of the following form:

    The two eigenvalues of this matrix are:
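Since the example matrix itself did not survive transcription, the Python sketch below (an editorial addition) uses [[2, 1], [1, 2]] as a stand-in with the same eigenvalues, 1 and 3, computes them from the characteristic polynomial of a general 2 × 2 matrix, and checks the defining relation Dg = λg:

```python
import math

# Stand-in matrix with eigenvalues 1 and 3 (the report's own example
# matrix was lost in transcription).
D = [[2.0, 1.0],
     [1.0, 2.0]]

# For a 2x2 matrix [[a, b], [c, d]] the eigenvalues are the roots of
# the characteristic polynomial: l^2 - (a + d) l + (ad - bc) = 0.
a, b = D[0]
c, d = D[1]
tr = a + d
det = a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
l1 = (tr + disc) / 2
l2 = (tr - disc) / 2
print(l1, l2)   # 3.0 1.0

# Check the defining relation D g = l g for an eigenvector of l1.
g = [b, l1 - a] if b != 0 else [1.0, 0.0]
Dg = [a * g[0] + b * g[1], c * g[0] + d * g[1]]
print(all(abs(p - l1 * q) < 1e-9 for p, q in zip(Dg, g)))  # True
```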

    Form the following matrix:

where X1 and X2 are n × n matrices. Compute M = X2 X1^(-1).

This method only works if X1 has an inverse. If X1 does not have an inverse, that means that the chosen value of g is too small.

At this point we see that both Kalman and minimax filtering have their pros and cons. The Kalman filter assumes that the noise statistics are known. The minimax filter does not make this assumption, but instead assumes that absolutely nothing is known about the noise.


    MATLAB CODE

1. FOR SIGNAL SEPARATION (INPUT IS A SINE WAVE, SAWTOOTH WAVE AND NOISE SIGNAL)

function [sources,mixtures,A] = input_data(samples)
% Get sources and mixtures.
no_sources = 3;
no_mixtures = no_sources;
%sources = zeros(no_sources,samples);
fc = 50e6;
t = 0:1/100/fc:60/fc;
y = sin(2*pi*fc*t);
sources(1,:) = y(1:samples)';
y = sawtooth(4*pi*fc*t);
sources(2,:) = y(1:samples)';
load gong;                          % loads a sound sample into y
sources(3,:) = y(1:samples)';
sources = sources';
% Make mixing matrix A.
A = randn(no_mixtures,no_sources);
mixtures = sources*A;


    BLIND SOURCE SEPARATION USING H INFINITY FILTER BY PCA

clear all; clc;

no_of_sources = 3;
no_of_mixtures = no_of_sources;

% INPUT DATA.
[sources,mixtures,A] = input_data_test();   % one mixture per column.

% Evaluation of R and P.
[gamma1,gamma2] = gamma();

% Filter each column of mixtures array.
S = filter(gamma1,1,mixtures);
L = filter(gamma2,1,mixtures);

% Find short-term and long-term covariance matrices.
R = cov(S,1);
P = cov(L,1);

% Find eigenvectors W and eigenvalues d.
[W,d] = eig(R,P);
W = real(W);

% Recover source signals.
ys = mixtures*W;

% PLOT RESULTS
figure(1);
subplot(3,1,1); plot(sources(:,1));
subplot(3,1,2); plot(sources(:,2));
subplot(3,1,3); plot(sources(:,3));

figure(2);


subplot(3,1,1); plot(mixtures(:,1));
subplot(3,1,2); plot(mixtures(:,2));
subplot(3,1,3); plot(mixtures(:,3));

figure(3);
subplot(3,1,1); plot(ys(:,2));
subplot(3,1,2); plot(ys(:,1));
subplot(3,1,3); plot(ys(:,3));

FUNCTION TO INPUT VOICE FILES

function [sources,mixtures,A,f] = input_data_test()
% Get sources and mixtures.
no_sources = 3;
no_mixtures = no_sources;
%sources = zeros(no_sources,samples);
[temp,f] = wavread('voice1.wav');
temp = temp';
sources = temp(1,:);
N = length(sources);
[temp,f] = wavread('voice2.wav');
temp = temp';
sources(2,:) = temp(1,1:N);
[temp,f] = wavread('voice3.wav');
temp = temp';
sources(3,:) = temp(1,1:N);

%fc = 50e6;
%t = 0:1/100/fc:60/fc;
%y = sin(2*pi*fc*t);
%sources(1,:) = y(1:samples)';
%y = sawtooth(4*pi*fc*t);
%sources(2,:) = y(1:samples)';


%load gong; sources(3,:) = y(1:samples)';

sources = sources';
% Make mixing matrix A.
A = randn(no_mixtures,no_sources);
mixtures = sources*A;

    CODE FOR THE GAMMA ESTIMATION

function [gamma1,gamma2] = gamma()

lower_bound = 1;
upper_bound = 900000;
max_len = 5000;
n = 8;

h = lower_bound;
t = n*h;
gamma = 2^(-1/h);
temp = [0:t-1]';
gammas = ones(t,1)*gamma;
mask = gamma.^temp;
mask(1) = 0;
mask = mask/sum(abs(mask));
mask(1) = -1;
gamma1 = mask;
s_len = length(gamma1);

h = upper_bound;
t = n*h;
t = min(t,max_len);
t = max(t,1);
gamma = 2^(-1/h);
temp = [0:t-1]';
gammas = ones(t,1)*gamma;
mask = gamma.^temp;
mask(1) = 0;
mask = mask/sum(abs(mask));
mask(1) = -1;
gamma2 = mask;
l_len = length(gamma2);


    Input voice signals

    LINEAR MIXTURE OF THE SOURCE SIGNALS


    SIGNALS RECOVERED THROUGH BSS

    BLIND SOURCE SEPARATION USING H INFINITY FILTER BY ICA

clc
fc = 50;
t = 0:1/1000/fc:6/fc;
y = sin(2*pi*fc*t);
sources(1,:) = y';
samples = length(y);
y = sawtooth(4*pi*fc*t);
sources(2,:) = y(1:samples)';
y = sin(200*pi*fc*t);              % high-frequency sine component
y = awgn(y,10,'measured');         % add white Gaussian noise (10 dB SNR)
sources(3,:) = y(1:samples)';

A = [23 4 50; 3 21 43; 67 26 7];
r = A*sources;
c = r*r';
c = c^-(1/2);                      % whitening matrix
w = c*r;
fNorm = 50/(6000/2);               % normalized cutoff frequency
[b,a] = butter(2,fNorm,'low');     % low-pass filter coefficients b and a
w(1,:) = filtfilt(b,a,w(1,:));
w(2,:) = filtfilt(b,a,w(2,:));
w(3,:) = filtfilt(b,a,w(3,:));

figure(1);
subplot(3,1,1); plot(sources(1,:));
subplot(3,1,2); plot(sources(2,:));
subplot(3,1,3); plot(sources(3,:));

figure(2);
subplot(3,1,1); plot(r(1,:));
subplot(3,1,2); plot(r(2,:));
subplot(3,1,3); plot(r(3,:));

figure(3);
subplot(3,1,1); plot(w(3,:));
subplot(3,1,2); plot(w(2,:));
subplot(3,1,3); plot(w(1,:));


    Input signals

    LINEAR MIXTURE OF THE SOURCE SIGNALS


    SIGNALS RECOVERED THROUGH BSS

    H-Infinity Performance


The modern approach to characterizing closed-loop performance objectives is to measure the size of certain closed-loop transfer function matrices using various matrix norms. Matrix norms provide a measure of how large output signals can get for certain classes of input signals. Optimizing these types of performance objectives over the set of stabilizing controllers is the main thrust of recent optimal control theory, such as L1, H2, H-infinity, and mu optimal control.

    Vector norms

A vector norm is defined as follows:

||x|| = sqrt( x1^2 + x2^2 + ... + xn^2 )

In order to avoid the square root sign in the rest of this article, we will use the square of the norm:

||x||^2 = x1^2 + x2^2 + ... + xn^2 = x^T x

    CHAPTER-5


    APPLICATIONS


    Blind I/Q Signal Separation for Receiver Signal Processing

    In order to increase the receiver flexibility whilst one also emphasizes the receiverintegrability and other implementation related aspects, the design of radio receivers`is no longer dominated by the traditional superheterodyne architecture. Instead,alternative receiver structures, like the direct conversion and low-IF architectures,are receiving more and more attention. The analog front-end of these types ofreceivers is partially based on complex or I/Q signal processing.

    In direct-conversion receivers, the image signal is inherently a self-image (thedesired signal itself at negative frequencies), and the analog front-end imageattenuation might be sufficient with low-order modulations. However, practicalanalog implementations of the needed I/Q signal processing have mismatches in theamplitude and phase responses of the I and Q branches, leading to finite attenuationof the image band signal. W ith higher-order modulations, such as 16- or 64-QAM,the distortion due to self-image cannot be neglected and again some kind ofcompensation is needed. The I/Q mismatches and carrier offsets as well as the lineardistortion due to general bandpass channels were shown to create crosstalk betweenthe transmitted I and Q signals.

    Using blind signal separation (BSS) techniques, the crosstalk, or mixing, of the I and Q signals can be removed in receiver signal processing. Combining the presented I/Q mismatch


    and carrier offset compensation and the channel equalizer principles into a single I/Q separator (or a cascade of two) results in a versatile receiver building block for future radio communication systems. Generally speaking, this idea offers new views on applying complex or I/Q signal processing efficiently in radio receiver design and on taking full advantage of the rich signal structure inherent in complex-valued communications signals.
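As an illustrative sketch of the mismatch-induced crosstalk and its blind removal, the following assumes a simplified real-valued I/Q model in which the I branch is ideal and only the Q branch carries a gain error g and phase error phi (both values invented); the compensation uses only second-order statistics of the received branches:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
s_i = rng.choice([-1.0, 1.0], n)   # transmitted I stream (QPSK-like)
s_q = rng.choice([-1.0, 1.0], n)   # transmitted Q stream

# Receiver I/Q mismatch: the Q branch has gain error g and phase error phi
g, phi = 1.2, np.deg2rad(10)
z_i = s_i
z_q = g * (np.sin(phi) * s_i + np.cos(phi) * s_q)  # crosstalk from I into Q

# Blind estimation of the mismatch from second-order statistics alone
theta1 = np.mean(z_i * z_q)        # converges to g*sin(phi)
theta2 = np.mean(z_q ** 2)         # converges to g**2
y_q = (z_q - theta1 * z_i) / np.sqrt(theta2 - theta1 ** 2)

# y_q now tracks s_q, with the I-to-Q crosstalk suppressed
crosstalk = np.mean(y_q * s_i)     # close to zero after compensation
```

No training signal is used here: the estimator only exploits the fact that the transmitted I and Q streams are uncorrelated with equal power.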

    FIG-9

    Blind Separation of Co-Channel Signals Using an Antenna Array

    Wireless communication systems are witnessing rapid advances in the volume and range of services. A major challenge for these systems today is the limited radio-frequency spectrum available. Approaches that increase spectrum efficiency are therefore of great interest. The key idea here is that if one can make a sufficient number of measurements of independent linear combinations of the message signals from several sources, separation of the signals can be accomplished by solving a system of linear equations. One way to make such independent measurements is to use an antenna array at the base station. Array processing techniques can then be used to receive and transmit multiple signals that are separated in space. Hence, multiple co-channel users can be supported per cell to increase capacity. We study the problem of separating multiple synchronous digital signals received at an antenna array. The goal is to reliably demodulate each signal in the presence of other co-channel signals and noise.
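The "system of linear equations" view can be made concrete with a toy NumPy example. Here the array response matrix is assumed known purely for illustration (the blind algorithms discussed below estimate it instead), and all sizes and values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
K, M, N = 2, 4, 1000                     # sources, antenna elements, symbols
s = rng.choice([-1.0, 1.0], size=(K, N)) # two co-channel BPSK sources
A = rng.standard_normal((M, K))          # array response (assumed known here)
x = A @ s + 0.01 * rng.standard_normal((M, N))  # noisy array snapshots

# Each snapshot gives M independent linear combinations of the K sources,
# so separation reduces to a least-squares solve of x = A s
s_hat = np.linalg.lstsq(A, x, rcond=None)[0]
accuracy = np.mean(np.sign(s_hat) == s)  # fraction of symbols recovered
```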

    Several algorithms have been proposed in the array processing literature for separating co-channel signals based on the availability of prior spatial or temporal information. The traditional spatial algorithms combine high-resolution direction-finding techniques such as MUSIC and ESPRIT with optimum beamforming to estimate the signal waveforms. However, these algorithms require that the number of signal wavefronts, including multipath reflections, be less than the number of sensors, which restricts their applicability in a wireless setting. In the recent past, several techniques have been developed that exploit the temporal structure of communication signals while assuming no prior spatial knowledge. These


    techniques take advantage of signal properties such as constant modulus (CM), discrete alphabet, self-coherence, and higher-order statistical properties.

    We developed a lightweight iterative least-squares (ILS) method for blind separation of co-channel signals using an antenna array. This algorithm can blindly recover the co-channel digital signals without requiring a training signal or knowledge of the direction of arrival. This is particularly useful in situations where training signals are not available. For example, in communications intelligence, training signals are not accessible. In cellular applications, blind algorithms can be used to reject interference from adjacent cells.
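A minimal sketch of the ILS idea under invented assumptions (BPSK ±1 sources, a random unknown mixing matrix): the algorithm alternates projecting the symbol estimates onto the finite alphabet with a least-squares update of the mixing matrix, and a few random restarts guard against local minima:

```python
import numpy as np

rng = np.random.default_rng(3)
K, M, N = 2, 4, 400                      # sources, antennas, snapshots
s = rng.choice([-1.0, 1.0], size=(K, N)) # BPSK co-channel sources
A = rng.standard_normal((M, K))          # unknown spatial mixing
x = A @ s + 0.05 * rng.standard_normal((M, N))

def ils(x, K, iters=30):
    """One ILS run from a random initial guess of the mixing matrix."""
    A_hat = rng.standard_normal((x.shape[0], K))
    for _ in range(iters):
        s_hat = np.sign(np.linalg.pinv(A_hat) @ x)  # project onto the +/-1 alphabet
        A_hat = x @ np.linalg.pinv(s_hat)           # least-squares mixing update
    return A_hat, s_hat

# Several restarts; keep the run with the smallest reconstruction residual
A_hat, s_hat = min((ils(x, K) for _ in range(8)),
                   key=lambda p: np.linalg.norm(x - p[0] @ p[1]))
corr = np.abs(s_hat @ s.T) / N   # recovery up to permutation and sign
```

As with all blind methods, the sources come out in arbitrary order and with arbitrary sign, which is why recovery is checked through the correlation matrix rather than element-wise.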

    Separation of Overlapping Radio Frequency Identification (RFID) Signals by Antenna Arrays

    Radio frequency identification (RFID) is a generic term used to describe a system that transmits the identity (in the form of a unique serial number) of an object or person wirelessly, using radio waves. RFID has become a key technology in mainstream applications that support the efficient tracking of manufactured goods and materials, enabled by achievements in microelectronics and communications. Unlike barcode technology, RFID does not require a line of sight. Some uses of RFID technology can be found in general application areas such as security and access control, transportation, and supply chain management.

    An RFID system includes three primary components: a transponder (tag), a transceiver (reader), and a data collection device. The operation of RFID systems often involves a situation in which numerous transponders are present in the reading zone of a single reader at the same time. The reader's ability to process a large quantity of tags simultaneously for data collection is important. If multiple tags are activated simultaneously, their messages can collide and cancel each other at the reader. This situation requires retransmission of the tag IDs, which wastes bandwidth and increases the overall delay in identifying the objects. A mechanism for handling tag collisions is therefore necessary.

    Current solutions are based on collision avoidance using MAC protocols (e.g., slotted ALOHA and binary tree algorithms). If tags collide, they are instructed to wait a random time up to a certain maximum, which is doubled at each iteration until no collisions are reported. In other standards, spread-spectrum or similar techniques are used to deterministically separate reader and tag transmissions, where permitted by local regulations. This can be a time-consuming process.
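A toy simulation of framed slotted ALOHA with frame doubling illustrates why MAC-level collision handling takes multiple reader rounds; the tag count and initial frame size are invented for illustration:

```python
import random

def rounds_to_identify(n_tags, frame=4, seed=0):
    """Framed slotted ALOHA: tags in collided slots retry in a doubled frame."""
    rng = random.Random(seed)
    rounds = 0
    while n_tags:
        slots = [0] * frame
        for _ in range(n_tags):
            slots[rng.randrange(frame)] += 1      # each tag picks a random slot
        n_tags -= sum(1 for c in slots if c == 1) # singleton slots are read OK
        frame *= 2                                # back off after collisions
        rounds += 1
    return rounds
```

A single tag is always identified in one round, while a crowded reading zone needs several rounds of retransmission; this per-round overhead is the delay that a signal-processing (array-based) approach tries to avoid.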

    The collision problem has hardly been studied from a signal processing perspective. If the reader is equipped with an antenna array, we arrive at a MIMO problem


    ("multiple input, multiple output"), and it may be possible to separate the overlapping collisions based on differences in the spatial locations of the tags. Therefore, an antenna array in combination with blind source separation techniques can be used to separate multiple overlapping tag signals. The source signals can be modeled as Zero Constant Modulus (ZCM) signals, and with antenna arrays, blind source separation algorithms can efficiently separate overlapping RFID signals.

    Blind Signal Separation in Biomedical Applications

    The term Blind Signal Separation (BSS) refers to a wide class of problems in signal and image processing in which one needs to extract the underlying sources from a set of mixtures. Almost no prior knowledge about the sources, or about the mixing, is available; hence the name blind. In practice, the sources can be one-dimensional (e.g.

    acoustic signals), two-dimensional (images), or three-dimensional (volumetric data). The mixing can be linear or nonlinear, and instantaneous or convolutive; in the latter case, the problem is referred to as multichannel blind deconvolution (BD) or convolutive BSS. In many medical applications the instantaneous linear mixing model holds, so the most common situation is that the mixtures are formed by superposition of sources with different scaling coefficients. These coefficients are usually referred to as mixing or crosstalk coefficients and can be arranged into a mixing (crosstalk) matrix. The number of mixtures can be smaller than, larger than, or equal to the number of sources.

    In medical signal and image processing, the BSS problem arises in the analysis of electroencephalogram (EEG), magnetoencephalogram (MEG), and electrocardiogram (ECG/EKG) signals and of functional magnetic resonance images (fMRI). In these applications, the linear mixture assumption is usually justified by the physical principles of signal formation, and the high signal propagation velocity allows the use of the instantaneous mixture model. Otherwise, nonlinear BSS or BD methods are used.
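The instantaneous linear mixing model can be exercised end to end with a small NumPy sketch. The two synthetic sources, their frequencies, and the crosstalk matrix below are invented for illustration, and the separation step follows the classical second-order AMUSE idea (whiten the mixtures, then diagonalize a time-lagged covariance), which applies when the sources have distinct temporal structure:

```python
import numpy as np

t = np.linspace(0, 1, 4000, endpoint=False)
# Two synthetic sources with different temporal structure (values invented)
s = np.vstack([np.sin(2 * np.pi * 7 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])               # instantaneous mixing (crosstalk) matrix
x = A @ s                                # the recorded mixtures

# AMUSE-style separation: whiten, then diagonalize a time-lagged covariance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E / np.sqrt(d)).T @ x               # whitened mixtures
lag = 100
C = z[:, :-lag] @ z[:, lag:].T / (z.shape[1] - lag)
_, V = np.linalg.eigh((C + C.T) / 2)     # rotation that aligns the sources
y = V.T @ z                              # recovered up to permutation/scale/sign
```

The permutation, scale, and sign of the recovered rows are inherently ambiguous, which is exactly the ambiguity noted for blind methods throughout this chapter.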

    Electroencephalography (EEG)

    The brain cortex can be thought of as a field of K tiny sources, which in turn are modeled as current dipoles. The j-th dipole is characterized by its location vector r_j and its dipole moment vector q_j. The electromagnetic field produced by the neural activity determines the potential on the scalp surface, sampled at a set of M sensors.


    Magnetoencephalography (MEG)

    Similarly to EEG, the forward model in MEG is also essentially linear. The sensors measure the magnetic field vector b around the scalp. The forward field at sensor i due to dipole j can be expressed as b_ij = G_ij q_j, where G_ij = G(r_j, r'_i) is the matrix kernel depending on the geometry and the electromagnetic properties of the head. BSS can be used for separation of independent temporal components in the same way as in EEG.

    Electrocardiography (ECG/EKG)

    The mechanical action of the heart is initiated by a quasi-periodic electrical stimulus, which causes an electrical current to propagate through the body tissues and results in potential differences. The potential difference measured as a function of time by electrodes on the skin (cutaneous recording) is termed the electrocardiogram (ECG/EKG). The measured ECG/EKG signal can be considered a superposition of several independent processes resulting, for example, from electromyographic activity (electrical potentials generated by muscles), 50 Hz or 60 Hz mains interference, or the electrical activity of a fetal heart (FECG/FEKG). The latter contains important indications about the fetus' health. BSS methods have been successfully used for separation of interference in ECG/EKG data.

    Functional Magnetic Resonance Imaging (fMRI)

    The principle of fMRI is based on the different magnetic properties of oxygenated and deoxygenated hemoglobin, which allows us to obtain a Blood Oxygenation Level Dependent (BOLD) signal. The observed spatio-temporal signal q(r,t) of magnetic induction can be considered a superposition of N spatially independent components, each associated with a unique time course and a spatial map. Each source represents the loci of concurrent neural activity and can be either task-related or non-task-related (e.g., physiological pulsations, head movements, background brain activity). The spatial map corresponding to each source determines its influence in each volume element (voxel) and is assumed to be fixed in time. Spatial maps can be overlapping. The main advantage of BSS techniques over other fMRI analysis tools is that there is no need to assume any a priori information about the time course of the processes contributing to the measured signals.


    BIBLIOGRAPHY

    1. Simon Haykin, Adaptive Filter Theory, Prentice Hall.
    2. Bernard Widrow and Samuel D. Stearns, Adaptive Signal Processing, Prentice Hall.
    3. Dan Simon, Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches, Wiley, 2006.