
Research Article

Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram

Beomjun Min, Jongin Kim, Hyeong-jun Park, and Boreom Lee

Department of Biomedical Science and Engineering (BMSE), Institute of Integrated Technology (IIT), Gwangju Institute of Science and Technology (GIST), Gwangju, Republic of Korea

Correspondence should be addressed to Boreom Lee; leebr@gist.ac.kr

Received 30 September 2016; Revised 4 November 2016; Accepted 17 November 2016

Academic Editor: Han-Jeong Hwang

Copyright © 2016 Beomjun Min et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels, /a/, /e/, /i/, /o/, and /u/. We divided each single trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggested that EEG responses to imagined speech could be successfully classified in a single trial using an extreme learning machine with a radial basis function and linear kernel. This study with classification of imagined speech might contribute to the development of silent speech BCI systems.

1. Introduction

People communicate with each other by exchanging verbal and visual expressions. However, paralyzed patients with various neurological diseases such as amyotrophic lateral sclerosis and cerebral ischemia have difficulties in daily communications because they cannot control their body voluntarily. In this context, the brain-computer interface (BCI) has been studied as a tool of communication for these types of patients. BCI is a computer-aided control technology based on brain activity data such as EEG, which is appropriate for BCI systems because of its noninvasive nature and convenience of recording [1, 2].

The classification of EEG signals recorded during the motor imagery paradigm has been widely studied as a BCI controller [3–5]. According to these studies, different imagined tasks induce different EEG patterns on the contralateral hemisphere, mainly in the mu (7.5–12.5 Hz) and beta (13–30 Hz) frequency bands. Many researchers have successfully constructed BCI systems based on the limb movement imagination paradigm, such as right hand, left hand, and foot movement [5–7]. However, EEG signals recorded during imagination of speech without any movement of either mouth or tongue are still difficult to classify; however, this topic has become an interesting issue for researchers because speech imagination has high similarity to real voice communication. For example, Deng et al. proposed a method to classify imagined syllables, /ba/ and /ku/, in three different rhythms using Hilbert spectrum methods, and the classification results were significantly greater than the chance level [8]. In addition, DaSalla et al. classified /a/ and /u/ as vowel speech imagery for EEG-based BCI [9]. Furthermore, a study to discriminate syllables embedded in spoken and imagined words using an electrocorticogram (ECoG) was conducted [10].

Obviously, for the BCI system, the use of optimized classification algorithms that categorize a set of data into different classes is essential, and these algorithms are usually divided into five groups: linear classifiers, neural networks, nonlinear Bayesian classifiers, nearest neighbor classifiers, and combinations of classifiers [11].



[Figure 1 diagram: beep, syllable cue, beep, beep, speech imagination (segment durations 1 s, 1 s, 300 ms, 3 s). Syllable cue: vowels /a/, /e/, /i/, /o/, /u/, and mute are randomly presented. Beep: beep sound for preparation of listening to the sound or covert vowel articulation. Speech imagination: covert vowel articulation (to imagine the vowel).]

Figure 1: Schematic sequence of the experimental paradigm. Vowels /a/, /e/, /i/, /o/, /u/, and mute were randomly presented 1 s after the beginning of each trial. After the third beep sound, the subject imagines the same vowel heard at the beginning of the trial. The EEG data acquired during the speech imagination period were used for signal processing and classification in this study.

For instance, various algorithms for speech classification have been used, such as the k-nearest neighbor classifier (KNN) [12], support vector machine (SVM) [9, 13], and linear discriminant analysis (LDA) [8].

The extreme learning machine (ELM) is a type of feedforward neural network for classification proposed by Huang et al. [14]. ELM has high speed and good generalization performance compared to the classic gradient-based learning algorithms. There is growing interest in the application of ELM and its variants in the biomedical field, such as epileptic EEG pattern recognition [15, 16], MRI studies [17], and BCI [18].

In this study, we measured the EEG activities of speech imagination and attempted to classify those signals using the ELM algorithm and its variants with kernels. In addition, we compared the results to the support vector machine with a radial basis function (SVM-R) kernel and linear discriminant analysis (LDA). As far as we know, applications of ELM as a classifier for EEG data of imagined speech have rarely been studied. In the present study, we will examine the validity of using ELM and its variants in the classification of imagined speech and the possibility of our method for applications in BCI systems based on silent speech.

2. Materials and Methods

2.1. Participants. Five healthy human participants (5 males; mean age 28.25 ± 2.71 years, range 26–32) participated in this study. All participants were native Koreans with normal hearing and right-handedness. None of the participants had any known neurological disorders or other significant health problems. All participants gave written informed consent, and the experimental protocol was approved by the Institutional Review Board (IRB) of the Gwangju Institute of Science and Technology (GIST). The approval process of the IRB complies with the Declaration of Helsinki.

2.2. Experimental Paradigm. Participants were seated in a comfortable armchair and wore earphones (ER-4P, Etymotic Research Inc., IL 60007, USA) providing auditory stimuli. Five types of Korean syllables, /a/, /e/, /i/, /o/, and /u/, as well as a mute (zero volume) sound, were utilized in the experiment. Figure 1 describes the overall experimental paradigm. At the beginning of each trial, a beep sound was presented to prepare the participants for perception of the target syllable. These six auditory cues (including the mute sound) were recorded using GoldWave software (GoldWave Inc., St. John's, Newfoundland, Canada), and the source audio was from Oddcast's online demo (http://www.oddcast.com/home/demos/tts/tts_example.php). The five vowels and mute sound were randomly presented. Another 1 s after the onset of the target syllable, two beep sounds were given sequentially with a 300 ms interval between them. After the two beep sounds, participants were instructed to imagine the same syllable heard at the beginning of the trial. The time for imagination was 3 s for each trial. Participants performed 5 sessions, with each session consisting of 10 trials for each syllable. Resting times of 1 min were given between sessions. Therefore, 50 trials were recorded for each syllable and the mute sound, and the total time for the experiment was approximately 10 min. All sessions were carried out in a day.

The experimental procedure was designed with E-Prime 2.0 software (Psychology Software Tools Inc., Sharpsburg, PA, USA). A HydroCel Geodesic Sensor Net with 64 channels and Net Amps 300 amplifiers (Electrical Geodesics Inc.,


Eugene, OR, USA) were used to record the EEG signals using a 1000 Hz sampling rate (Net Station, version 4.5.6).

[Figure 2 diagram: each 3 s trial is split into 0.2 s blocks with 0.1 s overlap; feature extraction (mean, variance, standard deviation, skewness) gives 4 features × 60 channels = 240 feature dimensions; 9000 samples = 6 conditions × 50 trials × 30 blocks; sparse-regression-model-based feature selection feeds the classifier (ELM, ELM-R, ELM-L, LDA, SVM-R), followed by a majority report.]

Figure 2: Overall signal processing procedure for classification. First, each trial was divided into thirty blocks with a 0.2 s length and 0.1 s overlap. Mean, variance, standard deviation, and skewness were extracted from all blocks and channels. Sequentially, sparse-regression-model-based feature selection was employed to reduce the dimension of the features. All features were used as the input of the trained classifier. Because each trial includes thirty blocks, thirty classifier outputs were acquired; therefore, the label of each trial was determined by selecting the most frequent output of the thirty classifier outputs.

2.3. Data Processing and Classification Procedure

2.3.1. Preprocessing. First, we resampled the acquired EEG data to 250 Hz to speed up the preprocessing procedure. The EEG data were bandpass filtered at 1–100 Hz. Sequentially, an IIR notch filter (Butterworth, order 4, bandwidth 59–61 Hz) was applied to remove the power line noise.
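As an illustration only (the authors used their own recording and analysis pipeline; filter design details beyond the stated order and band edges are assumptions), the described preprocessing could be sketched with SciPy as follows:

```python
import numpy as np
from scipy.signal import resample_poly, butter, filtfilt

FS_RAW, FS = 1000, 250                      # original and downsampled rates (Hz)

def preprocess(eeg):
    """eeg: (n_channels, n_samples) raw recording at 1000 Hz."""
    # 1) Resample 1000 Hz -> 250 Hz (factor 4) for a faster pipeline
    eeg = resample_poly(eeg, up=1, down=4, axis=1)
    # 2) Bandpass 1-100 Hz (Butterworth; the order used here is an assumed value)
    b, a = butter(4, [1, 100], btype='bandpass', fs=FS)
    eeg = filtfilt(b, a, eeg, axis=1)
    # 3) IIR notch: 4th-order Butterworth band-stop at 59-61 Hz for line noise
    b, a = butter(4, [59, 61], btype='bandstop', fs=FS)
    return filtfilt(b, a, eeg, axis=1)
```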

In general, EEG classification has problems in terms of poor generalization performance and the overfitting phenomenon because the number of samples is much smaller than the dimension of the features. Therefore, to obtain enough samples for learning and testing the classifier, we divided each 3 s imagination trial into 30 time segments with a 0.2 s length and 0.1 s overlap. Therefore, we obtained a total of 9000 segments (6 conditions × 50 trials per condition × 30 segments) to learn and test the classifier. We calculated the mean, variance, standard deviation, and skewness from each segment to acquire the feature vector for the classifier. The dimension of the feature vector is 240 (4 types of features × 60 channels). Additionally, to reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. The selected set of features extracted from all segments was employed to learn and test the classifier. Because a trial consists of thirty segments, a trial has thirty outputs of the classifier. Therefore, the label of the test trial was determined by selecting the most frequent output among the outputs of the thirty segments. The training and testing of the classifier model were conducted using the segments extracted only from the training data and testing data, respectively. Finally, to accurately estimate the classification performance, we applied 10-fold cross-validation. The classification accuracies of ELM, extreme learning machine with a linear kernel (ELM-L), extreme learning machine with a radial basis function kernel (ELM-R), and SVM-R for all five subjects were compared to select the optimal classifier to discriminate the vowel imagination. The overall signal processing procedures are briefly described in Figure 2.
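To make the segmentation and majority-vote scheme concrete, here is a minimal Python sketch (NumPy/SciPy; array shapes, names, and the exact number of segments that fit are my assumptions, not the authors' code). It splits one imagination period into 0.2 s windows with 0.1 s overlap, computes the four statistics per channel, and aggregates the segment-level predictions by majority vote:

```python
import numpy as np
from scipy.stats import skew

FS = 250                  # sampling rate after resampling (Hz)
WIN = int(0.2 * FS)       # 0.2 s window -> 50 samples
HOP = int(0.1 * FS)       # 0.1 s step (i.e., 0.1 s overlap) -> 25 samples

def segment_features(trial):
    """trial: (n_channels, n_samples) EEG of one imagination period.
    Returns an (n_segments, 4 * n_channels) feature matrix with the
    per-channel mean, variance, standard deviation, and skewness."""
    feats = []
    for start in range(0, trial.shape[1] - WIN + 1, HOP):
        seg = trial[:, start:start + WIN]
        feats.append(np.concatenate([seg.mean(axis=1), seg.var(axis=1),
                                     seg.std(axis=1), skew(seg, axis=1)]))
    return np.vstack(feats)

def predict_trial(clf, trial, selected=None):
    """Label a whole trial by majority vote over its segment predictions."""
    X = segment_features(trial)
    if selected is not None:              # column indices kept by feature selection
        X = X[:, selected]
    labels, counts = np.unique(clf.predict(X), return_counts=True)
    return labels[np.argmax(counts)]      # most frequent segment label
```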

2.3.2. Sparse-Regression-Model-Based Feature Selection. Tibshirani developed a sparse regression model known as the Lasso estimate [19]. In this study, we employed the sparse regression model to select the discriminative set of features to classify the EEG responses to covert articulation. The formula for selecting discriminative features based on the sparse regression model can be described as follows:

$$\mathbf{z}^{*} = \arg\min_{\mathbf{z}} \left\| \mathbf{F}\mathbf{z} - \mathbf{t} \right\|_{2}^{2} + \lambda \left\| \mathbf{z} \right\|_{1}, \quad (1)$$

where $\|\cdot\|_{p}$ denotes the $\ell_{p}$-norm, $\mathbf{z}$ is a sparse vector to be learned, and $\mathbf{z}^{*}$ indicates an optimal sparse vector. $\mathbf{t} \in \mathbb{R}^{N_{t}\times 1}$ is a vector of the true class labels for the $N_{t}$ training samples, and $\lambda$ is a positive regularization parameter that controls the sparsity of $\mathbf{z}$. $\mathbf{F}$ is the matrix that consists of the mean, variance, standard deviation, and skewness for each channel:

$$\mathbf{F} = \left[\mathbf{f}_{1}, \mathbf{f}_{2}, \ldots, \mathbf{f}_{240}\right], \quad (2)$$

where $\mathbf{f}_{p} \in \mathbb{R}^{N_{t}\times 1}$ is the $p$th column vector of $\mathbf{F}$. The coordinate descent algorithm is adopted to solve the optimization problem in (1) [20].

The column vectors in $\mathbf{F}$ corresponding to the zero entries in $\mathbf{z}$ are excluded to form an optimized feature set $\hat{\mathbf{F}}$ that is of lower dimensionality than $\mathbf{F}$.
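A minimal sketch of this selection step, assuming scikit-learn's coordinate-descent Lasso as the solver and numerically coded class labels (the helper name and the example λ are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_features(F, t, lam):
    """Solve z* = argmin ||F z - t||_2^2 + lam * ||z||_1 and return the indices
    of the columns of F (features) that receive nonzero coefficients.
    F: (n_training_segments, 240) feature matrix
    t: (n_training_segments,) numeric class labels, e.g. -1 / +1."""
    # Note: sklearn's Lasso minimizes (1 / (2 n)) * ||t - F z||^2 + alpha * ||z||_1,
    # so alpha corresponds to a rescaled lambda from equation (1).
    model = Lasso(alpha=lam, max_iter=10000).fit(F, t)
    return np.flatnonzero(model.coef_)

# Example usage: keep only the selected columns
# cols = select_features(F_train, t_train, lam=0.01); F_hat = F_train[:, cols]
```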

2.3.3. Extreme Learning Machine. Conventional feedforward neural networks require the weights and biases of all layers to be adjusted by gradient-based learning algorithms. However, the procedure for tuning the parameters of all layers is very slow because it is repeated many times, and its solutions easily fall into local optima. For this reason, Huang et al. proposed ELM, which randomly assigns the input weights and analytically calculates only the output weights.


Therefore, the learning speed of ELM is much faster than that of conventional learning algorithms, and ELM has outstanding generalization performance [21–23]. If we assume $N_{t}$ training samples $\{(\mathbf{v}_{k}, \mathbf{l}_{k})\}_{k=1}^{N_{t}}$, where $\mathbf{v}_{k}$ is an $n$-dimensional feature vector $\mathbf{v}_{k} = [v_{k1}, v_{k2}, \ldots, v_{kn}]^{T}$ and $\mathbf{l}_{k}$ is the true label vector over $m$ classes $\mathbf{l}_{k} = [l_{k1}, l_{k2}, \ldots, l_{km}]^{T}$, a standard single-hidden-layer feedforward network (SLFN) with $N_{h}$ hidden neurons and activation function $a(\cdot)$ can be formulated as follows:

$$\sum_{j=1}^{N_{h}} \mathbf{w}_{j}^{h}\, a\!\left(\mathbf{w}_{j}^{i} \cdot \mathbf{v}_{k} + b_{j}\right) = \mathbf{o}_{k}, \quad k = 1, \ldots, N_{t}, \quad (3)$$

where $\mathbf{w}_{j}^{i} = [w_{j1}^{i}, w_{j2}^{i}, \ldots, w_{jn}^{i}]^{T}$ is the weight vector for the input layer between the $j$th hidden neuron and the input neurons, $\mathbf{w}_{j}^{h} = [w_{j1}^{h}, w_{j2}^{h}, \ldots, w_{jm}^{h}]^{T}$ is the weight vector for the hidden layer between the $j$th hidden neuron and the output neurons, $\mathbf{o}_{k} = [o_{k1}, o_{k2}, \ldots, o_{km}]^{T}$ is the output vector of the network, and $b_{j}$ is the bias of the $j$th hidden neuron. The operator $\cdot$ indicates the inner product. We can now reformulate the equation into matrix form as follows:

$$\mathbf{A}\mathbf{W}^{h} = \mathbf{O}, \quad (4)$$

where

A

=[[[[[

119886 (w1198941 sdot v1 + b1) sdot sdot sdot 119886 (w119894119873ℎ sdot v1 + b119873ℎ) sdot sdot sdot

119886 (w1198941 sdot v119873119905 + b1) sdot sdot sdot 119886 (w119894119873ℎ sdot v119873119905 + b119873ℎ)

]]]]]119873119905times119873ℎ

Wℎ = [wℎ1 sdot sdot sdot wℎ119873ℎ]119879

119873ℎtimes119898

O = [o1 sdot sdot sdot o119873119905]119879119873119905times119898

(5)

where the matrix $\mathbf{A}$ is the output matrix of the hidden layer and the operator $T$ indicates the transpose of the matrix. Because the ELM algorithm randomly selects the input weights $\mathbf{w}_{j}^{i}$ and biases $b_{j}$, we can find the weights for the hidden layer $\mathbf{w}_{j}^{h}$ by solving the following optimization problem:

$$\min_{\mathbf{w}_{j}^{h}} \left\|\mathbf{A}\mathbf{W}^{h} - \mathbf{L}\right\|^{2}, \quad (6)$$

where $\mathbf{L}$ is the matrix of the true labels for the training samples:

$$\mathbf{L} = \left[\mathbf{l}_{1} \cdots \mathbf{l}_{N_{t}}\right]^{T}_{N_{t} \times m}. \quad (7)$$

The above problem is known as a linear system optimization problem, and its unique least-squares solution with a minimum norm is as follows:

$$\mathbf{W}^{h} = \mathbf{A}^{\dagger}\mathbf{L}, \quad (8)$$

where $\mathbf{A}^{\dagger}$ is the Moore-Penrose generalized inverse of the matrix $\mathbf{A}$. According to the analyses of Bartlett and Huang, the ELM algorithm achieves not only the minimum square training error but also the best generalization performance on novel test samples [14, 24].

In this paper, the activation function $a(\cdot)$ was determined to be a sigmoidal function, and the probability density function for assigning the input weights and biases was set to be a uniform distribution function.
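The training rule in equations (3)-(8) is compact enough to implement directly. The following NumPy sketch is a simplified illustration under the stated choices of a sigmoidal activation and uniformly distributed random input weights; it is not the authors' implementation and omits the kernel variants ELM-L and ELM-R:

```python
import numpy as np

class SimpleELM:
    """Basic ELM: random input weights and biases, sigmoidal hidden layer,
    output weights solved with the Moore-Penrose pseudoinverse (equation (8))."""

    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # A = a(X W_i + b) with a sigmoidal activation a(.)
        return 1.0 / (1.0 + np.exp(-(X @ self.W_i + self.b)))

    def fit(self, X, y):
        y = np.asarray(y)
        self.classes_ = np.unique(y)
        L = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        # Input weights and biases drawn from a uniform distribution (never trained)
        self.W_i = self.rng.uniform(-1, 1, size=(X.shape[1], self.n_hidden))
        self.b = self.rng.uniform(-1, 1, size=self.n_hidden)
        A = self._hidden(X)                       # hidden-layer output matrix
        self.W_h = np.linalg.pinv(A) @ L          # W_h = A_dagger L
        return self

    def predict(self, X):
        O = self._hidden(X) @ self.W_h            # network outputs
        return self.classes_[np.argmax(O, axis=1)]
```

In practice the number of hidden neurons would need tuning, and the ELM-L and ELM-R variants compared in this study are commonly formulated with a kernel matrix in place of the random hidden layer, as in [21].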

3. Results and Discussion

3.1. Time-Frequency Analysis for Imagined Speech EEG Data. We computed the time-frequency representation (TFR) of imagined speech EEG data for every subject to identify speech-related brain activities. The TFR of each trial was calculated using a Morlet wavelet and averaged over all trials. Among the five subjects, we plotted the TFRs of subjects 2 and 5, which showed notable patterns in the gamma frequency. As shown in Figure 3, much of the gamma band (30–70 Hz) power of the five vowel conditions (/a/, /e/, /i/, /o/, and /u/) in the left temporal area is totally distinct from and much higher than that of the control condition (mute sound). In addition, a topographical head plot of subject 5 is presented in Figure 4. Increased gamma activities were observed in both temporal regions when the subject imagined vowels.
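For reference, a single-channel Morlet TFR of the kind shown in Figure 3 can be computed along the following lines (a generic sketch; the wavelet width and normalization are assumptions rather than the authors' exact settings):

```python
import numpy as np

def morlet_tfr(x, fs=250.0, freqs=np.arange(10, 71, 1), n_cycles=7):
    """Single-channel TFR via convolution with complex Morlet wavelets.
    x: 1-D signal; returns a power array of shape (len(freqs), len(x))."""
    power = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)                 # wavelet time spread
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = (np.exp(2j * np.pi * f * t)                # complex carrier
                   * np.exp(-t**2 / (2 * sigma_t**2)))       # Gaussian envelope
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))     # energy normalization
        analytic = np.convolve(x, wavelet, mode='same')
        power[i] = np.abs(analytic) ** 2                     # time-frequency power
    return power

# Averaging morlet_tfr(...) over trials and temporal electrodes, per condition,
# would give maps comparable in spirit to Figure 3.
```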

3.2. Classification Results. Figure 5 shows the classification accuracies averaged over all pairwise classifications for five subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA. We also conducted SVM and SVM with a linear kernel, but the results of these two classifiers are excluded because they did not converge within many iterations (100,000 times). All classification accuracies were estimated by 10 × 10-fold cross-validation. In the cases of subjects 1, 3, and 4, ELM-L shows the best classification performance compared to the other four classifiers. However, ELM-R shows the best classification accuracies in subjects 2 and 5. For all subjects, the classification accuracies of ELM, ELM-L, and ELM-R are much better than those of SVM-R, which are approximately at the chance level of 50%. To identify the best classifier to discriminate the vowel imagination, we conducted paired t-tests between the classification accuracies of ELM-R and those of the other three classifiers. As a result, the classification performance of ELM-R is significantly better than those of ELM (p < 0.01), LDA (p < 0.01), and SVM-R (p < 0.01). However, there is no significant difference between the classification accuracies of ELM-R and ELM-L (p = 0.46).
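A small sketch of this statistical comparison, assuming the per-subject, per-combination accuracies are collected in aligned arrays (the dictionary layout is illustrative, not the authors' code):

```python
from scipy.stats import ttest_rel

# acc[name]: 1-D array of accuracies, one entry per (subject, pairwise
# combination), aligned identically across classifiers.
def compare_to_elm_r(acc):
    for name in ('ELM', 'ELM-L', 'LDA', 'SVM-R'):
        t_stat, p_value = ttest_rel(acc['ELM-R'], acc[name])   # paired t-test
        print(f"ELM-R vs {name}: t = {t_stat:.2f}, p = {p_value:.3f}")
```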

Table 1 describes the classification accuracies of subject 2, who shows the highest overall accuracies among all subjects, after 10 × 10-fold cross-validation for all pairwise combinations. In almost all pairwise combinations, ELM-R has better classification performance than the other four classifiers for subject 2. The most discriminative pairwise combination for subject 2 is vowels /a/ and /i/, which shows 100% classification accuracy using ELM-R.

Table 2 contains the results of ELM-R for the pairwise combinations and shows the top five classification performances for each subject. There is no pairwise combination selected from all subjects; however, /a/ versus mute and /i/ versus mute are selected from four subjects, and /a/ versus /i/ is selected from three subjects.


Table 1: Classification accuracies in employing SVM-R, ELM, ELM-L, ELM-R, and LDA for subject 2. The highest classification accuracy among the classifiers is marked in bold for each pairwise combination. Classification accuracies are expressed as mean and associated standard deviation. SVM-R, ELM, ELM-L, ELM-R, and LDA denote the support vector machine with a radial basis function, extreme learning machine, extreme learning machine with a linear kernel, extreme learning machine with a radial basis function, and linear discriminant analysis, respectively.

[Table 1 body: mean ± standard deviation classification accuracies (%) for subject 2, with one row per classifier (SVM-R, ELM, ELM-L, ELM-R, LDA) and one column per pairwise combination (/a/ versus /e/, /a/ versus /i/, /a/ versus /o/, /a/ versus /u/, /e/ versus /i/, /e/ versus /o/, /e/ versus /u/, /i/ versus /o/, /i/ versus /u/, /o/ versus /u/, /a/ versus mute, /e/ versus mute, /i/ versus mute, /o/ versus mute, and /u/ versus mute).]


[Figure 3 panels: TFRs (frequency 10–70 Hz versus time 0–2 s; power in µV²) for vowels /a/, /e/, /i/, /o/, /u/, and mute, for subject 2 (left column) and subject 5 (right column).]

Figure 3: Time-frequency representation (TFR) of EEG signals averaged over all trials for subjects 2 and 5. The EEG signals were obtained from eight electrodes in the left temporal areas during each of the six experimental conditions (vowels /a/, /e/, /i/, /o/, /u/, and mute). The EEG data were bandpass filtered at 1–100 Hz, and a Morlet mother wavelet transform was used to calculate the TFR. The TFRs are plotted for the first 2 s after the final beep sound and for the frequency range of 10–70 Hz.

[Figure 4 panels: topographical maps of gamma activity for conditions /a/, /e/, /i/, /o/, /u/, and mute; color scale 0.9–1.5.]

Figure 4: Topographical distribution of gamma activities during vowel imagination for subject 5. Increased activities were observed in both temporal areas when the subject imagined vowels. The time interval for the analysis is 0–3 sec.


Table 3 indicates the confusion matrix for all pairwise combinations and subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA. In terms of sensitivity and specificity, ELM-L is the best classifier for our EEG data. Although SVM-R shows higher specificity than the other classifiers in this table, SVM-R classified almost all conditions as positive and resulted in poor sensitivity; therefore, the high specificity of SVM-R is possibly invalid. Thus, SVM-R might be an unsuitable classifier for our study.

3.3. Discussion. Overall, ELM, ELM-L, and ELM-R showed better performance than the SVM-R and LDA algorithms in this study. In several previous studies, ELM achieved similar or better classification accuracy rates with much less training time compared to other algorithms using EEG data [16, 25–27]. However, we could not find studies on classification of imagined speech using ELM algorithms. Deng et al. reported classification rates using LDA for imagined speech with 72.67% as the highest accuracy, but the average results were not much better than the chance level [8]. DaSalla et al., using SVM, showed approximately 82% as the best accuracy and 73% as the average result overall [9], whereas Huang et al. reported that ELM tends to have a much higher learning speed and comparable generalization performance in binary classification [21].


[Figure 5 bars: classification accuracies (%) for subjects S1–S5 using SVM-R, LDA, ELM, ELM-L, and ELM-R; y-axis from 40 to 90%.]

Figure 5: Averaged classification accuracies over all pairwise classifications using a support vector machine with a radial basis function kernel (SVM-R), extreme learning machine (ELM), extreme learning machine with a linear kernel (ELM-L), and extreme learning machine with a radial basis function kernel (ELM-R) for all five subjects.

Table 2: Classification accuracies in employing ELM-R for the pairwise combinations which show the top five classification performances for each subject. Classification accuracies are expressed as mean and associated standard deviation.

S1: 86.47 ± 1.07 (/a/ versus /i/), 81.21 ± 1.03 (/a/ versus mute), 80.01 ± 3.73 (/a/ versus /u/), 73.35 ± 3.17 (/a/ versus /e/), 72.44 ± 1.71 (/i/ versus /o/)
S2: 99.02 ± 0.76 (/a/ versus /i/), 99.30 ± 0.14 (/i/ versus mute), 98.22 ± 0.22 (/i/ versus /o/), 95.14 ± 1.03 (/a/ versus mute), 93.01 ± 0.73 (/o/ versus mute)
S3: 92.08 ± 1.08 (/e/ versus mute), 90.19 ± 0.63 (/i/ versus mute), 89.15 ± 1.37 (/u/ versus mute), 87.27 ± 0.71 (/o/ versus mute), 70.38 ± 1.38 (/a/ versus /i/)
S4: 93.33 ± 0.31 (/i/ versus mute), 92.27 ± 1.03 (/u/ versus mute), 92.24 ± 2.13 (/a/ versus mute), 91.12 ± 0.54 (/e/ versus mute), 90.05 ± 1.83 (/o/ versus mute)
S5: 96.32 ± 2.31 (/e/ versus mute), 94.01 ± 0.17 (/i/ versus mute), 92.29 ± 1.14 (/o/ versus mute), 90.07 ± 0.58 (/a/ versus mute), 88.06 ± 1.23 (/u/ versus mute)

In another study, Huang argued that ELM has fewer optimization constraints owing to its special separability feature, which results in simpler implementation, faster learning, and better generalization performance [23]. Thus, our results showed characteristics consistent with others' previous research using ELM and even similar or better classification results for imagined speech compared to other research using different algorithms. Recently, ELM algorithms have been extensively applied in many other medical and biomedical studies [28–31]. More detailed information about ELM can be found in a recent review [32].

In this study, each trial was divided into thirty time segments of 0.2 s in length with a 0.1 s overlap. Each time segment was considered as a sample for training the classifier, and the final label of the test sample was determined by selecting the most frequent output (see Figure 2). We also compared the classification accuracy of our method with that of a conventional method that does not divide the trials into multiple time segments. As a result, our method showed superior classification accuracy compared to the conventional method. In our opinion, dividing the trials might have effects such as increasing the number of samples for classifier training, and each time segment with a 0.2 s length is likely to retain enough information for discrimination of EEG vowel imagination. Generally, EEG classification has problems in terms of poor generalization performance and the overfitting phenomenon because of the deficiency in the number of samples for the classifier. Therefore, an increased number of samples obtained by dividing trials could mitigate the aforementioned problems. However, further analyses are required to prove our assumptions in subsequent studies.

To reduce the dimension of the feature vector, we employed a feature selection algorithm based on the sparse regression model. In the sparse-regression-model-based feature selection algorithm, the regularization parameter λ of equation (1) must be carefully selected because λ determines the dimension of the optimized feature set. For example, when the selected λ is too large, the algorithm excludes discriminative features from the optimal feature set. However, when users set λ too small, redundant features are not excluded from the optimal feature set. Therefore, the optimal value for λ was selected by cross-validation on the training session in our study.


Table 3: Confusion matrix for all pairwise combinations and subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA.

Classifier: Test positive (condition positive / condition negative); Test negative (condition positive / condition negative); Sensitivity; Specificity
ELM:   2516 / 1234; 1509 / 2241; Sensitivity = 0.6251; Specificity = 0.6449
ELM-L: 2649 / 1101; 1261 / 2489; Sensitivity = 0.6775; Specificity = 0.6933
ELM-R: 2635 / 1115; 1297 / 2453; Sensitivity = 0.6701; Specificity = 0.6875
SVM-R: 3675 / 75;   3525 / 225;  Sensitivity = 0.5104; Specificity = 0.7500
LDA:   2556 / 1194; 1398 / 2352; Sensitivity = 0.6464; Specificity = 0.6633


[Figure 6 axes: classification accuracy (%) from 40 to 85 on the y-axis versus the regularization parameter from 0 to 0.18 on the x-axis.]

Figure 6: Effects of varying the regularization parameter on the classification accuracies obtained by ELM-R with sparse-regression-model-based feature selection for subject 2. The parameter value giving the highest accuracy is highlighted with a red circle.

For example, the change of classification accuracy caused by varying λ for subject 1 is illustrated in Figure 6. In the case of /a/ and /i/ using ELM-R, the best classification accuracy reached a plateau at λ = 0.08 and declined after 0.14. However, the optimal values of λ are totally different among the pairwise combinations and all subjects.
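A hedged sketch of such a selection loop, reusing the select_features helper from the earlier sketch, any classifier exposing fit/predict, and numerically coded labels for the Lasso step (the λ grid and fold count are placeholders, not the authors' exact settings):

```python
import numpy as np
from sklearn.model_selection import KFold

def choose_lambda(make_clf, F_train, y_train,
                  lambdas=np.linspace(0.01, 0.18, 18), n_folds=10):
    """Return the lambda whose Lasso-selected feature subset yields the best
    cross-validated accuracy, evaluated on the training data only.
    make_clf: zero-argument factory returning a fresh classifier."""
    best_lam, best_score = None, -np.inf
    for lam in lambdas:
        cols = select_features(F_train, y_train, lam)  # helper from earlier sketch
        if cols.size == 0:                             # lambda too large: nothing kept
            continue
        scores = []
        for tr, va in KFold(n_folds, shuffle=True, random_state=0).split(F_train):
            clf = make_clf().fit(F_train[tr][:, cols], y_train[tr])
            scores.append(np.mean(clf.predict(F_train[va][:, cols]) == y_train[va]))
        if np.mean(scores) > best_score:
            best_lam, best_score = lam, float(np.mean(scores))
    return best_lam
```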

Furthermore, our optimized results were achieved in the gamma frequency band (30–70 Hz). We also tested other frequency ranges such as beta (13–30 Hz), alpha (8–13 Hz), and theta (4–8 Hz); however, the classification rates of those bands were not much better than the chance level in every subject and pairwise combination of syllables. In addition, the results of our TFR and topographical analyses (Figures 3 and 4) could support some relationship between gamma activities and imagined speech processing. As far as we know, in the EEG classification of imagined speech, there have been only a few studies that examined the differences between multiple frequency bands including the gamma frequency [33, 34]. Therefore, our study might be the first report that the gamma frequency band could play an important role as a feature for the EEG classification of imagined speech. Moreover, several studies using ECoG reported quite good results in the gamma frequency for imagined speech classification [35, 36], and these findings are consistent with our results. Several studies have also suggested a role of the gamma frequency band in speech processing from a neurophysiological perspective [37–39]; however, those studies usually used intracranial recordings and focused on the analysis of the high gamma (70–150 Hz) frequency band. Thus, suggesting a relevance between those results and our classification study is not easy. However, a certain relation between some information in low gamma frequencies as a feature for classification and its implication from a neurophysiological view will be specified in future studies.

Currently, communication systems with various BCI technologies have been developed for disabled people [40]. For instance, the P300 speller is one of the most widely researched BCI technologies to decode verbal thoughts from EEG [41]. Despite many efforts toward better and faster performance, the P300 speller is still insufficient for use in normal conversation [42, 43], whereas, independent of the P300 component, efforts toward extraction and analysis of EEG or ECoG induced by imagined speech have been conducted [44, 45]. In this context, our results of high performance from the application of ELM and its variants have the potential to advance BCI research using silent speech communication. However, the pairwise combinations with the highest accuracies (see Table 2) differed for each subject. After the experiment, each participant reported different patterns of vowel discrimination. For example, one subject reported that he could not discriminate /e/ from /i/, and another subject reported that a different pair was not easy to distinguish. Although those reports were not exactly matched to the results of classification, these discrepancies of subjective sensory perception might be related to the process of imagining speech and the classification results. Besides, we have not tried multiclass classification in this study, although some attempts at multiclass classification of imagined speech have been performed by others [8, 46, 47]. These issues related to intersubject variability and multiclass systems should be considered in our future study to develop more practical and generalized BCI systems using silent speech.

4. Conclusions

In the present study, we used classification algorithms for EEG data of imagined speech. Particularly, we compared ELM and its variants to the SVM-R and LDA algorithms and observed that ELM and its variants showed better performance than the other algorithms with our data. These results might lead to the development of silent speech BCI systems.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors' Contributions

Beomjun Min and Jongin Kim contributed equally to this work.


Acknowledgments

This research was supported by the GIST Research Institute (GRI) in 2016 and the Pioneer Research Center Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (Grant no. 2012-0009462).

References

[1] H.-J. Hwang, S. Kim, S. Choi, and C.-H. Im, "EEG-based brain-computer interfaces: a thorough literature survey," International Journal of Human-Computer Interaction, vol. 29, no. 12, pp. 814–826, 2013.
[2] U. Chaudhary, N. Birbaumer, and A. Ramos-Murguialday, "Brain-computer interfaces for communication and rehabilitation," Nature Reviews Neurology, vol. 12, no. 9, pp. 513–525, 2016.
[3] M. Hamedi, S.-H. Salleh, and A. M. Noor, "Electroencephalographic motor imagery brain connectivity analysis for BCI: a review," Neural Computation, vol. 28, no. 6, pp. 999–1041, 2016.
[4] C. Neuper, R. Scherer, S. Wriessnegger, and G. Pfurtscheller, "Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface," Clinical Neurophysiology, vol. 120, no. 2, pp. 239–247, 2009.
[5] H.-J. Hwang, K. Kwon, and C.-H. Im, "Neurofeedback-based motor imagery training for brain-computer interface (BCI)," Journal of Neuroscience Methods, vol. 179, no. 1, pp. 150–156, 2009.
[6] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, "Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks," NeuroImage, vol. 31, no. 1, pp. 153–159, 2006.
[7] M. Ahn and S. C. Jun, "Performance variation in motor imagery brain-computer interface: a brief review," Journal of Neuroscience Methods, vol. 243, pp. 103–110, 2015.
[8] S. Deng, R. Srinivasan, T. Lappas, and M. D'Zmura, "EEG classification of imagined syllable rhythm using Hilbert spectrum methods," Journal of Neural Engineering, vol. 7, no. 4, Article ID 046006, 2010.
[9] C. S. DaSalla, H. Kambara, M. Sato, and Y. Koike, "Single-trial classification of vowel speech imagery using common spatial patterns," Neural Networks, vol. 22, no. 9, pp. 1334–1339, 2009.
[10] X. Pei, D. L. Barbour, E. C. Leuthardt, and G. Schalk, "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans," Journal of Neural Engineering, vol. 8, no. 4, Article ID 046028, 2011.
[11] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 4, no. 2, article R01, pp. R1–R13, 2007.
[12] K. Brigham and B. V. K. V. Kumar, "Imagined speech classification with EEG signals for silent communication: a preliminary investigation into synthetic telepathy," in Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE '10), Chengdu, China, June 2010.
[13] J. Kim, S.-K. Lee, and B. Lee, "EEG classification in a single-trial basis for vowel speech perception using multivariate empirical mode decomposition," Journal of Neural Engineering, vol. 11, no. 3, Article ID 036010, 2014.
[14] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489–501, 2006.
[15] Y. Song and J. Zhang, "Automatic recognition of epileptic EEG patterns via Extreme Learning Machine and multiresolution feature extraction," Expert Systems with Applications, vol. 40, no. 14, pp. 5477–5489, 2013.
[16] Q. Yuan, W. Zhou, S. Li, and D. Cai, "Epileptic EEG classification based on extreme learning machine and nonlinear features," Epilepsy Research, vol. 96, no. 1-2, pp. 29–38, 2011.
[17] M. N. I. Qureshi, B. Min, H. J. Jo, and B. Lee, "Multiclass classification for the differential diagnosis on the ADHD subtypes using recursive feature elimination and hierarchical extreme learning machine: structural MRI study," PLoS ONE, vol. 11, no. 8, Article ID e0160697, 2016.
[18] L. Gao, W. Cheng, J. Zhang, and J. Wang, "EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine," Review of Scientific Instruments, vol. 87, no. 8, Article ID 085110, 2016.
[19] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
[20] J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.
[21] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[22] J. Cao, Z. Lin, and G.-B. Huang, "Composite function wavelet neural networks with extreme learning machine," Neurocomputing, vol. 73, no. 7-9, pp. 1405–1416, 2010.
[23] G.-B. Huang, X. Ding, and H. Zhou, "Optimization method based extreme learning machine for classification," Neurocomputing, vol. 74, no. 1-3, pp. 155–163, 2010.
[24] P. L. Bartlett, "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.
[25] L.-C. Shi and B.-L. Lu, "EEG-based vigilance estimation using extreme learning machines," Neurocomputing, vol. 102, pp. 135–143, 2013.
[26] Y. Peng and B.-L. Lu, "Discriminative manifold extreme learning machine and applications to image and EEG signal classification," Neurocomputing, vol. 174, pp. 265–277, 2014.
[27] N.-Y. Liang, P. Saratchandran, G.-B. Huang, and N. Sundararajan, "Classification of mental tasks from EEG signals using extreme learning machine," International Journal of Neural Systems, vol. 16, no. 1, pp. 29–38, 2006.
[28] J. Kim, H. S. Shin, K. Shin, and M. Lee, "Robust algorithm for arrhythmia classification in ECG using extreme learning machine," BioMedical Engineering OnLine, vol. 8, article 31, 2009.
[29] S. Karpagachelvi, M. Arthanari, and M. Sivakumar, "Classification of electrocardiogram signals with support vector machines and extreme learning machine," Neural Computing and Applications, vol. 21, no. 6, pp. 1331–1339, 2012.
[30] W. Huang, Z. M. Tan, Z. Lin et al., "A semi-automatic approach to the segmentation of liver parenchyma from 3D CT images with Extreme Learning Machine," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 3752–3755, Buenos Aires, Argentina, September 2012.
[31] R. Barea, L. Boquete, S. Ortega, E. López, and J. M. Rodríguez-Ascariz, "EOG-based eye movements codification for human computer interaction," Expert Systems with Applications, vol. 39, no. 3, pp. 2677–2683, 2012.
[32] G. Huang, G.-B. Huang, S. Song, and K. You, "Trends in extreme learning machines: a review," Neural Networks, vol. 61, pp. 32–48, 2015.
[33] B. M. Idrees and O. Farooq, "Vowel classification using wavelet decomposition during speech imagery," in Proceedings of the International Conference on Signal Processing and Integrated Networks (SPIN '16), pp. 636–640, Noida, India, February 2016.
[34] A. Riaz, S. Akhtar, S. Iftikhar, A. A. Khan, and A. Salman, "Inter comparison of classification techniques for vowel speech imagery using EEG sensors," in Proceedings of the 2nd International Conference on Systems and Informatics (ICSAI '14), pp. 712–717, IEEE, Shanghai, China, November 2014.
[35] S. Martin, P. Brunner, I. Iturrate et al., "Word pair classification during imagined speech using direct brain recordings," Scientific Reports, vol. 6, Article ID 25803, 2016.
[36] X. Pei, J. Hill, and G. Schalk, "Silent communication: toward using brain signals," IEEE Pulse, vol. 3, no. 1, pp. 43–46, 2012.
[37] A.-L. Giraud and D. Poeppel, "Cortical oscillations and speech processing: emerging computational principles and operations," Nature Neuroscience, vol. 15, no. 4, pp. 511–517, 2012.
[38] V. L. Towle, H.-A. Yoon, M. Castelle et al., "ECoG γ activity during a language task: differentiating expressive and receptive speech areas," Brain, vol. 131, no. 8, pp. 2013–2027, 2008.
[39] S. Martin, P. Brunner, C. Holdgraf et al., "Decoding spectrotemporal features of overt and covert speech from the human cortex," Frontiers in Neuroengineering, vol. 7, article 14, 2014.
[40] J. S. Brumberg, A. Nieto-Castanon, P. R. Kennedy, and F. H. Guenther, "Brain-computer interfaces for speech communication," Speech Communication, vol. 52, no. 4, pp. 367–379, 2010.
[41] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.
[42] D. J. Krusienski, E. W. Sellers, F. Cabestaing et al., "A comparison of classification techniques for the P300 Speller," Journal of Neural Engineering, vol. 3, no. 4, article 007, pp. 299–305, 2006.
[43] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, and D. Hu, "A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm," Journal of Neural Engineering, vol. 10, no. 2, Article ID 026012, 2013.
[44] C. Herff and T. Schultz, "Automatic speech recognition from neural signals: a focused review," Frontiers in Neuroscience, vol. 10, article 429, 2016.
[45] S. Chakrabarti, H. M. Sandberg, J. S. Brumberg, and D. J. Krusienski, "Progress in speech decoding from the electrocorticogram," Biomedical Engineering Letters, vol. 5, no. 1, pp. 10–21, 2015.
[46] K. Mohanchandra and S. Saha, "A communication paradigm using subvocalized speech: translating brain signals into speech," Augmented Human Research, vol. 1, no. 1, article 3, 2016.
[47] O. Ossmy, I. Fried, and R. Mukamel, "Decoding speech perception from single cell activity in humans," NeuroImage, vol. 117, pp. 151–159, 2015.


2 BioMed Research International

a

a

Beep Syllable cue Beep Speech imaginationBeep

bull Syllable cue vowels a e i o u and mute are randomly presented

bull Beep beep sound for preparation of listening the sound or covert vowel articulation

bull Speech Imagination covert vowel articulation (to imagine the vowel)

3 sec300 msec1 sec1 sec

Figure 1 Schematic sequence of the experimental paradigm Vowels a e i o u and mute were randomly presented 1 s after thebeginning of each trial After the third beep sound the subject imagines the same vowel heard at the beginning of the trial The EEG dataacquired during the speech imagination period were used for signal processing and classification in this study

divided into five groups linear classifiers neural networksnonlinear Bayesian classifiers nearest neighbor classifiersand combinations of classifiers [11] For instance variousalgorithms for speech classification have been used suchas k-nearest neighbor classifier (KNN) [12] support vectormachine (SVM) [9 13] and linear discriminant analysis(LDA) [8]

The extreme learningmachine (ELM) is a type of feedfor-ward neural network for classification proposed by Huanget al [14] ELM has high speed and good generalizationperformance compared to the classic gradient-based learningalgorithms There is growing interest in the application ofELM and its variants in the biomedical field such as epilepticEEG pattern recognition [15 16] MRI study [17] and BCI[18]

In this study we measured the EEG activities of speechimagination and attempted to classify those signals using theELM algorithm and its variants with kernels In addition wecompared the results to the support vector machine with aradial basis function (SVM-R) kernel and linear discriminantanalysis (LDA) As far as we know applications of ELM as aclassifier for EEG data of imagined speech have been rarelystudied In the present study we will examine the validity ofusing ELM and its variants in the classification of imaginedspeech and the possibility of our method for applications inBCI systems based on silent speech

2 Materials and Methods

21 Participants Five healthy human participants (5 malesmean age 2825 plusmn 271 range 26ndash32) participated in thisstudy All participants were native Koreans with normalhearing and right-handedness None of the participants hadany known neurological disorders or other significant healthproblems All participants gave written informed consent

and the experimental protocol was approved by the Institu-tional ReviewBoard (IRB) of theGwangju Institute of Scienceand Technology (GIST) The approval process of the IRBcomplies with the declaration of Helsinki

22 Experimental Paradigm Participants were seated in acomfortable armchair and wore earphones (er-4p Etymoticresearch Inc IL 60007 United States of America) pro-viding auditory stimuli Five types of Korean syllablesmdasha e i o and u as well as a mute (zero volume)soundmdashwere utilized in the experiment Figure 1 describesthe overall experimental paradigm At the beginning ofeach trial a beep sound was presented to prepare theparticipants for perception of the target syllable These sixauditory cues (including the mute sound) were recordedusing Goldwave software (GoldWave Inc St Johnrsquos New-foundland Canada) and the source audio was fromOddcastrsquos online (httpwwwoddcastcomhomedemosttstts examplephpsitepa) The five vowels and mute soundwere randomly presented Another 1 s after the onset of thetarget syllable two beep sounds were given sequentially witha 300ms interval between them After the two beep soundsparticipants were instructed to imagine the same syllableheard at the beginning of the trial The time for imaginationwas 3 s for each trial Participants performed 5 sessions witheach session consisting of 10 trials for each syllable Restingtimes were given between sessions for 1min Therefore 50trials were recorded for each syllable and themute sound andthe total time for the experiment was approximately 10minAll sessions were carried out in a day

The experimental procedure was designed with e-Prime20 software (Psychology Software Tools Inc SharpsburgPAUSA) AHydroCel Geodesic SensorNet with 64 channelsand Net Amps 300 amplifiers (Electrical Geodesics Inc

BioMed Research International 3

02 sec

01 sec3 sec

4 (features) times 60 (channels)= 240 (dimension of features)

Featureextraction

Featureselection

(Mean variancestandard deviation

skewness)

Sparse regressionmodel based

feature selection

ClassifierMajority

report

9000 samples = 6 (conditions) times 50 (trials) times 30 (blocks)

(ELM ELM-R ELM-LLDA SVM-R)

Figure 2 Overall signal processing procedure for classification First each trial was divided into thirty blocks with a 02 s length and 01 soverlap Mean variance standard deviation and skewness were extracted from all blocks and channels Sequentially sparse-regression-model-based feature selection was employed to reduce the dimension of the features All features were used as the input of the trainedclassifier Because each trial includes thirty blocks thirty classifier outputs were acquired therefore the label of each trial was determined byselecting the most frequent output of the thirty classifier outputs

Eugene OR USA)were used to record the EEG signals usinga 1000Hz sampling rate (Net Station version 456)

23 Data Processing and Classification Procedure

231 Preprocessing First we resampled the acquired EEGdata into 250Hz for fast preprocessing procedure The EEGdata was bandpass filtered with 1ndash100Hz Sequentially an IIRnotch filter (Butterworth order 4 bandwidth 59ndash61Hz) wasapplied to remove the power line noise

In general EEG classification has problems in terms ofpoor generalization performance and the overfitting phe-nomenon because the number of samples is much smallerthan the dimension of the features Therefore to obtainenough samples for learning and testing the classifier wedivided each imagination trial for 3 s into 30 time segmentswith a 02 s length and 01 s overlap Therefore we obtaineda total of 9000 segments = (6 (conditions) times 50 (trials pereach condition) times 30 segments) to learn and test the classifierWe calculated the mean variance standard deviation andskewness from each segment to acquire the feature vectorfor the classifier The dimension of the feature vector is240 (4 (types of features) times 60 (the number of channels))Additionally to reduce the dimension of the feature vectorwe applied a feature selection algorithm based on the sparseregression model The selected set of features extracted fromall segments was employed to learn and test the classifierBecause a trial consists of thirty segments a trial has thirtyoutputs of the classifier Therefore the label of the test trialwas determined by selecting themost frequent output amongthe outputs of the thirty segments The training and testingof the classifier model are conducted using the segmentsextracted only from training data and testing data respec-tively Finally to accurately estimate the classification perfor-mance we applied 10-fold cross-validationThe classificationaccuracies of ELM extreme learning machine with linearfunction (ELM-L) extreme learning machine with radialbasis function (ELM-R) and SVM-R for all five subjects werecompared to select the optimal classifier to discriminate the

vowel imagination The overall signal processing proceduresare briefly described in Figure 2

232 Sparse-Regression-Model-Based Feature Selection Tib-shirani developed a sparse regression model known as theLasso estimate [19] In this study we employed the sparseregression model to select the discriminative set of featuresto classify the EEG responses to covert articulation Theformula for selecting discriminative features based on thesparse regression model can be described as follows

$\mathbf{z}^{\ast} = \arg\min_{\mathbf{z}} \left\| \mathbf{F}\mathbf{z} - \mathbf{t} \right\|_{2}^{2} + \lambda \left\| \mathbf{z} \right\|_{1},$  (1)

where $\|\cdot\|_{p}$ denotes the $\ell_{p}$-norm, $\mathbf{z}$ is a sparse vector to be learned, and $\mathbf{z}^{\ast}$ indicates an optimal sparse vector. $\mathbf{t} \in \mathbb{R}^{N_{t} \times 1}$ is a vector of the true class labels for the $N_{t}$ training samples, and $\lambda$ is a positive regularization parameter that controls the sparsity of $\mathbf{z}$. $\mathbf{F}$ is the matrix that consists of the mean, variance, standard deviation, and skewness for each channel:

$\mathbf{F} = \left[ \mathbf{f}_{1}, \mathbf{f}_{2}, \ldots, \mathbf{f}_{240} \right],$  (2)

where $\mathbf{f}_{p} \in \mathbb{R}^{N_{t} \times 1}$ is the $p$th column vector of $\mathbf{F}$. The coordinate descent algorithm is adopted to solve the optimization problem in (1) [20].

The column vectors in $\mathbf{F}$ corresponding to the zero entries in $\mathbf{z}$ are excluded to form an optimized feature set $\tilde{\mathbf{F}}$ that is of lower dimensionality than $\mathbf{F}$.
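As an illustration only, the selection in (1)–(2) can be sketched with scikit-learn's Lasso, which also uses coordinate descent; the regularization value and the ±1 label encoding below are assumptions, and this is a stand-in for, not a reproduction of, the authors' implementation.

```python
# Hypothetical Lasso-based feature selection over the 240-dimensional matrix F:
# columns whose learned weights are exactly zero are dropped.
import numpy as np
from sklearn.linear_model import Lasso

def select_features(F, t, lam=0.01):
    # F: (n_segments x 240) features, t: class labels encoded as +1/-1 (assumed)
    lasso = Lasso(alpha=lam, max_iter=10000)
    lasso.fit(F, t)
    kept = np.flatnonzero(lasso.coef_)   # indices of nonzero entries of z*
    return F[:, kept], kept
```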

2.3.3. Extreme Learning Machine. Conventional feedforward neural networks require the weights and biases of all layers to be adjusted by gradient-based learning algorithms. However, tuning the parameters of all layers is very slow because it is repeated many times, and its solutions easily fall into local optima. For this reason, Huang et al. proposed ELM, which randomly assigns the input weights and analytically calculates only the output weights. Therefore, the learning speed of ELM is much faster than that of conventional learning algorithms, and ELM has outstanding generalization performance [21–23]. If we assume $N_{t}$ training samples $(\mathbf{v}_{k}, \mathbf{l}_{k})_{k=1}^{N_{t}}$, where $\mathbf{v}_{k}$ is an $n$-dimensional feature vector $\mathbf{v}_{k} = [v_{k1}, v_{k2}, \ldots, v_{kn}]^{T}$ and $\mathbf{l}_{k}$ is the true label vector over $m$ classes, $\mathbf{l}_{k} = [l_{k1}, l_{k2}, \ldots, l_{km}]^{T}$, a standard SLFN with $N_{h}$ hidden neurons and activation function $a(\cdot)$ can be formulated as follows:

$\sum_{j=1}^{N_{h}} \mathbf{w}_{j}^{h}\, a\!\left( \mathbf{w}_{j}^{i} \cdot \mathbf{v}_{k} + b_{j} \right) = \mathbf{o}_{k}, \quad k = 1, \ldots, N_{t},$  (3)

where $\mathbf{w}_{j}^{i} = [w_{j1}^{i}, w_{j2}^{i}, \ldots, w_{jn}^{i}]^{T}$ is the weight vector for the input layer between the $j$th hidden neuron and the input neurons, $\mathbf{w}_{j}^{h} = [w_{j1}^{h}, w_{j2}^{h}, \ldots, w_{jm}^{h}]^{T}$ is the weight vector for the hidden layer between the $j$th hidden neuron and the output neurons, $\mathbf{o}_{k} = [o_{k1}, o_{k2}, \ldots, o_{km}]^{T}$ is the output vector of the network, and $b_{j}$ is the bias of the $j$th hidden neuron. The operator $\cdot$ indicates the inner product. We can now reformulate the equation in matrix form as follows:

$\mathbf{A}\mathbf{W}^{h} = \mathbf{O},$  (4)

where

$\mathbf{A} = \begin{bmatrix} a(\mathbf{w}_{1}^{i} \cdot \mathbf{v}_{1} + b_{1}) & \cdots & a(\mathbf{w}_{N_{h}}^{i} \cdot \mathbf{v}_{1} + b_{N_{h}}) \\ \vdots & \ddots & \vdots \\ a(\mathbf{w}_{1}^{i} \cdot \mathbf{v}_{N_{t}} + b_{1}) & \cdots & a(\mathbf{w}_{N_{h}}^{i} \cdot \mathbf{v}_{N_{t}} + b_{N_{h}}) \end{bmatrix}_{N_{t} \times N_{h}},$

$\mathbf{W}^{h} = \left[ \mathbf{w}_{1}^{h} \cdots \mathbf{w}_{N_{h}}^{h} \right]^{T}_{N_{h} \times m}, \qquad \mathbf{O} = \left[ \mathbf{o}_{1} \cdots \mathbf{o}_{N_{t}} \right]^{T}_{N_{t} \times m},$  (5)

where the matrix $\mathbf{A}$ is the output matrix of the hidden layer and the operator $T$ indicates the transpose of the matrix. Because the ELM algorithm randomly selects the input weights $\mathbf{w}_{j}^{i}$ and biases $b_{j}$, we can find the weights for the hidden layer $\mathbf{w}_{j}^{h}$ by solving the following optimization problem:

$\min_{\mathbf{w}_{j}^{h}} \left\| \mathbf{A}\mathbf{W}^{h} - \mathbf{L} \right\|,$  (6)

where $\mathbf{L}$ is the matrix of true labels for the training samples:

$\mathbf{L} = \left[ \mathbf{l}_{1} \cdots \mathbf{l}_{N_{t}} \right]^{T}_{N_{t} \times m}.$  (7)

The above problem is a linear system optimization problem, and its unique least-squares solution with minimum norm is

$\mathbf{W}^{h} = \mathbf{A}^{\dagger}\mathbf{L},$  (8)

where $\mathbf{A}^{\dagger}$ is the Moore–Penrose generalized inverse of the matrix $\mathbf{A}$. According to the analyses of Bartlett and Huang, the ELM algorithm achieves not only the minimum squared training error but also the best generalization performance on novel test samples [14, 24].

In this paper, the activation function $a(\cdot)$ was determined to be a sigmoidal function, and the probability density function for assigning the input weights and biases was set to be a uniform distribution function.
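The procedure in (3)–(8) can be summarized in a short sketch: random uniform input weights and biases, a sigmoidal hidden layer, and output weights computed from the Moore–Penrose pseudoinverse. The class design, hidden-layer size, and one-hot target encoding below are assumptions for illustration, not the authors' code.

```python
# Minimal ELM sketch: random input layer, sigmoid a(.), Wh = A^dagger L (Eq. (8)).
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.Wi + self.b)))   # sigmoid activation

    def fit(self, X, y):
        # X: (N_t x n) features, y: (N_t,) integer class labels
        n_features = X.shape[1]
        self.classes_ = np.unique(y)
        L = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        self.Wi = self.rng.uniform(-1, 1, (n_features, self.n_hidden))
        self.b = self.rng.uniform(-1, 1, self.n_hidden)
        A = self._hidden(X)                      # hidden-layer output matrix
        self.Wh = np.linalg.pinv(A) @ L          # output weights via pseudoinverse
        return self

    def predict(self, X):
        O = self._hidden(X) @ self.Wh
        return self.classes_[np.argmax(O, axis=1)]
```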

3. Results and Discussion

3.1. Time-Frequency Analysis for Imagined Speech EEG Data. We computed the time-frequency representation (TFR) of the imagined speech EEG data for every subject to identify speech-related brain activities. The TFR of each trial was calculated using a Morlet wavelet and averaged over all trials. Among the five subjects, we plotted the TFRs of subjects 2 and 5, which showed notable patterns in the gamma frequency range. As shown in Figure 3, the gamma band (30–70 Hz) powers of the five vowel conditions (/a/, /e/, /i/, /o/, and /u/) in the left temporal area are clearly distinct from and much higher than those of the control condition (mute sound). In addition, the topographical head plot of subject 5 is presented in Figure 4. Increased gamma activities were observed in both temporal regions when the subject imagined vowels.
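The TFR computation is only specified as a Morlet wavelet transform averaged over trials; the following is a generic complex-Morlet sketch in which the number of cycles, the normalization, and the frequency grid are assumptions rather than the parameters actually used.

```python
# Hypothetical Morlet-wavelet TFR: convolve single-channel trials with complex
# Morlet wavelets and average the power over trials.
import numpy as np

def morlet_tfr(trials, fs=250.0, freqs=np.arange(10, 71, 2), n_cycles=7):
    """trials: (n_trials x n_samples) EEG; returns (n_freqs x n_samples) mean power."""
    n_trials, n_samples = trials.shape
    power = np.zeros((len(freqs), n_samples))
    for fi, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)                  # temporal width
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))      # energy normalization
        for trial in trials:
            power[fi] += np.abs(np.convolve(trial, wavelet, mode="same")) ** 2
    return power / n_trials                                    # average over trials
```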

3.2. Classification Results. Figure 5 shows the classification accuracies averaged over all pairwise classifications for the five subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA. We also ran SVM and SVM with a linear kernel, but their results are excluded because these classifiers did not converge even after many iterations (100,000). All classification accuracies are estimated by 10 × 10-fold cross-validation. For subjects 1, 3, and 4, ELM-L shows the best classification performance among the five classifiers, whereas ELM-R shows the best classification accuracies for subjects 2 and 5. For all subjects, the classification accuracies of ELM, ELM-L, and ELM-R are much better than those of SVM-R, which are approximately at the chance level of 50%. To identify the best classifier for discriminating vowel imagination, we conducted paired t-tests between the classification accuracies of ELM-R and those of the other classifiers. As a result, the classification performance of ELM-R is significantly better than those of ELM (p < 0.01), LDA (p < 0.01), and SVM-R (p < 0.01). However, there is no significant difference between the classification accuracies of ELM-R and ELM-L (p = 0.46).
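A paired comparison of this kind can be run with SciPy's ttest_rel, as sketched below; the accuracy lists are placeholders only, not values from this study, and would be replaced by the per-subject accuracies being compared.

```python
# Paired t-test sketch for comparing two classifiers across subjects.
from scipy.stats import ttest_rel

elm_r_acc = [0.0, 0.0, 0.0, 0.0, 0.0]   # placeholder: per-subject ELM-R accuracies
elm_l_acc = [0.0, 0.0, 0.0, 0.0, 0.0]   # placeholder: per-subject ELM-L accuracies
t_stat, p_value = ttest_rel(elm_r_acc, elm_l_acc)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```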

Table 1 describes the classification accuracies of subject 2, who shows the highest overall accuracies among all subjects, after 10 × 10-fold cross-validation for all pairwise combinations. In almost all pairwise combinations, ELM-R has better classification performance than the other four classifiers for subject 2. The most discriminative pairwise combination for subject 2 is /a/ versus /i/, which reaches 100% classification accuracy using ELM-R.

Table 2 contains the results of ELM-R for the pairwise combinations that show the top five classification performances for each subject. No pairwise combination is selected for all subjects; however, /a/ versus mute and


Table 1: Classification accuracies (%) employing SVM-R, ELM, ELM-L, ELM-R, and LDA for subject 2. The highest classification accuracy among the classifiers is marked in bold for each pairwise combination. Classification accuracies are expressed as mean and associated standard deviation. SVM-R, ELM, ELM-L, ELM-R, and LDA denote the support vector machine with a radial basis function kernel, extreme learning machine, extreme learning machine with a linear kernel, extreme learning machine with a radial basis function kernel, and linear discriminant analysis, respectively.

[Table 1 body: mean ± SD classification accuracy for subject 2, one row per classifier (SVM-R, ELM, ELM-L, ELM-R, LDA) and one column per pairwise combination of a, e, i, o, u, and mute.]


Figure 3: Time-frequency representation (TFR) of EEG signals averaged over all trials for subjects 2 and 5. The EEG signals were obtained from eight electrodes in the left temporal areas during each of the six experimental conditions (vowels a, e, i, o, u, and mute). The EEG data were bandpass filtered between 1 and 100 Hz, and a Morlet mother wavelet transform was used to calculate the TFR. The TFRs are plotted for the first 2 s after the final beep sound and for the frequency range of 10–70 Hz.

Figure 4: Topographical distribution of gamma activities during vowel imagination for subject 5 (conditions A, E, I, O, U, and mute). Increased activities were observed in both temporal areas when the subject imagined vowels. The time interval for the analysis is 0–3 s.

/i/ versus mute are selected in four subjects, and /a/ versus /i/ is selected in three subjects.

Table 3 shows the confusion matrix for all pairwise combinations and subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA. In terms of sensitivity and specificity, ELM-L is the best classifier for our EEG data. Although SVM-R shows higher specificity than the other classifiers in this table, SVM-R classified almost all conditions as positive and therefore had poor sensitivity; the high specificity of SVM-R is thus possibly invalid, and SVM-R might be an unsuitable classifier for our study.
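The sensitivity and specificity values in Table 3 follow directly from the confusion counts; the short check below reproduces the ELM column as an example.

```python
# Sensitivity and specificity from confusion counts (ELM column of Table 3).
def sens_spec(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

sensitivity, specificity = sens_spec(tp=2516, fp=1234, fn=1509, tn=2241)
print(round(sensitivity, 4), round(specificity, 4))   # 0.6251 0.6449
```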

3.3. Discussion. Overall, ELM, ELM-L, and ELM-R showed better performance than the SVM-R and LDA algorithms in this study. In several previous studies, ELM achieved similar or better classification accuracy with much less training time compared to other algorithms applied to EEG data [16, 25–27]. However, we could not find studies on the classification of imagined speech using ELM algorithms. Deng et al. reported classification rates using LDA for imagined speech with 72.67% as the highest accuracy, but the average results were not much better than the chance level [8]. DaSalla et al., using SVM, reported approximately 82% as the best accuracy and

Figure 5: Averaged classification accuracies over all pairwise classifications using a support vector machine with a radial basis function kernel (SVM-R), extreme learning machine (ELM), extreme learning machine with a linear kernel (ELM-L), extreme learning machine with a radial basis function kernel (ELM-R), and linear discriminant analysis (LDA) for all five subjects (S1–S5).

Table 2: Classification accuracies (%) employing ELM-R for the pairwise combinations showing the top five classification performances for each subject. Classification accuracies are expressed as mean and associated standard deviation.

S1: 86.47 ± 1.07 (a versus i), 81.21 ± 1.03 (a versus mute), 80.01 ± 3.73 (a versus u), 73.35 ± 3.17 (a versus e), 72.44 ± 1.71 (i versus o)
S2: 99.02 ± 0.76 (a versus i), 99.30 ± 0.14 (i versus mute), 98.22 ± 0.22 (i versus o), 95.14 ± 1.03 (a versus mute), 93.01 ± 0.73 (o versus mute)
S3: 92.08 ± 1.08 (e versus mute), 90.19 ± 0.63 (i versus mute), 89.15 ± 1.37 (u versus mute), 87.27 ± 0.71 (o versus mute), 70.38 ± 1.38 (a versus i)
S4: 93.33 ± 0.31 (i versus mute), 92.27 ± 1.03 (u versus mute), 92.24 ± 2.13 (a versus mute), 91.12 ± 0.54 (e versus mute), 90.05 ± 1.83 (o versus mute)
S5: 96.32 ± 2.31 (e versus mute), 94.01 ± 0.17 (i versus mute), 92.29 ± 1.14 (o versus mute), 90.07 ± 0.58 (a versus mute), 88.06 ± 1.23 (u versus mute)

73% as the average result overall [9], whereas Huang et al. reported that ELM tends to have a much higher learning speed and comparable generalization performance in binary classification [21]. In another study, Huang argued that ELM has fewer optimization constraints owing to its special separability feature, which results in simpler implementation, faster learning, and better generalization performance [23]. Thus, our results are consistent with previous research using ELM and show similar or better classification results for imagined speech compared to other research using different algorithms. Recently, ELM algorithms have been extensively applied in many other medical and biomedical studies [28–31]. More detailed information about ELM can be found in a recent review [32].

In this study, each trial was divided into thirty time segments of 0.2 s in length with a 0.1 s overlap. Each time segment was considered a sample for training the classifier, and the final label of the test trial was determined by selecting the most frequent output (see Figure 2). We also compared the classification accuracy of our method with that of a conventional method that does not divide the trials into multiple time segments. As a result, our method showed superior classification accuracy compared to the conventional method. In our opinion, dividing the trials effectively increases the number of samples available for classifier training, and each 0.2 s time segment is likely to retain enough information to discriminate EEG vowel imagination. Generally, EEG classification suffers from poor generalization performance and overfitting because of the deficiency in the number of samples available for the classifier. Therefore, the increased number of samples obtained by dividing trials could mitigate these problems. However, further analyses are required to verify these assumptions in subsequent studies.

To reduce the dimension of the feature vector, we employed a feature selection algorithm based on the sparse regression model. In the sparse-regression-model-based feature selection algorithm, the regularization parameter λ in equation (1) must be carefully selected because λ determines the dimension of the optimized feature set. For example, when the selected λ is too large, the algorithm excludes discriminative features from the optimal feature set F̃; however, when λ is set too small, redundant features are not excluded from the optimal feature set F̃. Therefore, the optimal value of λ was selected by cross-validation on

Table 3: Confusion matrix for all pairwise combinations and subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA. Counts are ordered as (condition positive, condition negative) within each row.

ELM — test positive: 2516, 1234; test negative: 1509, 2241; sensitivity = 0.6251, specificity = 0.6449.
ELM-L — test positive: 2649, 1101; test negative: 1261, 2489; sensitivity = 0.6775, specificity = 0.6933.
ELM-R — test positive: 2635, 1115; test negative: 1297, 2453; sensitivity = 0.6701, specificity = 0.6875.
SVM-R — test positive: 3675, 75; test negative: 3525, 225; sensitivity = 0.5104, specificity = 0.7500.
LDA — test positive: 2556, 1194; test negative: 1398, 2352; sensitivity = 0.6464, specificity = 0.6633.

Figure 6: Effects of varying the regularization parameter on the classification accuracies obtained by ELM-R with sparse-regression-model-based feature selection for subject 2. The parameter value giving the highest accuracy is highlighted with a red circle.

the training session in our study. For example, the change in classification accuracy caused by varying λ for subject 1 is illustrated in Figure 6. In the case of /a/ versus /i/ using ELM-R, the best classification accuracy reached a plateau at λ = 0.08 and declined after 0.14. However, the optimal values of λ differ substantially among the pairwise combinations and subjects.
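A hedged sketch of such a λ sweep on the training data is given below, reusing the hypothetical select_features and SimpleELM helpers sketched earlier; the candidate grid, the 10-fold split, and the ±1 label encoding are all assumptions.

```python
# Hypothetical selection of lambda by cross-validation on the training set only.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def choose_lambda(F_train, y_train, candidates=np.arange(0.02, 0.19, 0.02)):
    best_lam, best_acc = None, -np.inf
    for lam in candidates:
        fold_acc = []
        for tr, va in StratifiedKFold(n_splits=10).split(F_train, y_train):
            # y assumed binary in {0, 1}; map to +/-1 for the Lasso targets
            F_sel, kept = select_features(F_train[tr], 2 * y_train[tr] - 1, lam=lam)
            clf = SimpleELM(n_hidden=100).fit(F_sel, y_train[tr])
            fold_acc.append(np.mean(clf.predict(F_train[va][:, kept]) == y_train[va]))
        if np.mean(fold_acc) > best_acc:
            best_lam, best_acc = lam, np.mean(fold_acc)
    return best_lam
```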

Furthermore, our optimized results were achieved in the gamma frequency band (30–70 Hz). We also tested other frequency ranges such as beta (13–30 Hz), alpha (8–13 Hz), and theta (4–8 Hz); however, the classification rates of those bands were not much better than the chance level for any subject or pairwise combination. In addition, the results of our TFR and topographical analyses (Figures 3 and 4) support some relationship between gamma activities and imagined speech processing. As far as we know, only a few studies on the EEG classification of imagined speech have examined the differences between multiple frequency bands including the gamma band [33, 34]. Therefore, our study might be the first report that the gamma frequency band could play an important role as a feature for the EEG classification of imagined speech. Moreover, several ECoG studies reported quite good results in the gamma frequency range for imagined speech classification [35, 36], and these findings are consistent with our results. Other studies have suggested a role for the gamma band in speech processing from a neurophysiological perspective [37–39]; however, those studies usually used intracranial recordings and focused on the high gamma (70–150 Hz) band, so relating those results directly to our classification study is not straightforward. The relation between information in the low gamma frequencies as a classification feature and its neurophysiological implications will be specified in future studies.

Currently, communication systems based on various BCI technologies have been developed for disabled people [40]. For instance, the P300 speller is one of the most widely researched BCI technologies for decoding verbal thoughts from EEG [41]. Despite many efforts toward better and faster performance, the P300 speller is still insufficient for use in normal conversation [42, 43], whereas, independent of the P300 component, efforts toward the extraction and analysis of EEG or ECoG induced by imagined speech have also been made [44, 45]. In this context, our results of high performance from the application of ELM and its variants have the potential to advance BCI research on silent speech communication. However, the pairwise combinations with the highest accuracies (see Table 2) differed across subjects. After the experiment, each participant reported different patterns of vowel discrimination; for example, one subject reported that he could not discriminate /e/ from /i/, and another subject reported that a different pair was not easy to distinguish. Although those reports did not exactly match the classification results, these discrepancies in subjective perception might be related to the process of imagining speech and to the classification results. In addition, we have not attempted multiclass classification in this study, although some attempts at multiclass classification of imagined speech have been made by others [8, 46, 47]. These issues related to intersubject variability and multiclass systems should be considered in our future work to develop more practical and generalized silent speech BCI systems.

4. Conclusions

In the present study, we applied classification algorithms to EEG data of imagined speech. In particular, we compared ELM and its variants to the SVM-R and LDA algorithms and observed that ELM and its variants showed better performance than the other algorithms on our data. These results might lead to the development of silent speech BCI systems.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors' Contributions

Beomjun Min and Jongin Kim contributed equally to this work.


Acknowledgments

This research was supported by the GIST Research Institute (GRI) in 2016 and by the Pioneer Research Center Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (Grant no. 2012-0009462).

References

[1] H.-J. Hwang, S. Kim, S. Choi, and C.-H. Im, "EEG-based brain-computer interfaces: a thorough literature survey," International Journal of Human-Computer Interaction, vol. 29, no. 12, pp. 814–826, 2013.

[2] U. Chaudhary, N. Birbaumer, and A. Ramos-Murguialday, "Brain–computer interfaces for communication and rehabilitation," Nature Reviews Neurology, vol. 12, no. 9, pp. 513–525, 2016.

[3] M. Hamedi, S.-H. Salleh, and A. M. Noor, "Electroencephalographic motor imagery brain connectivity analysis for BCI: a review," Neural Computation, vol. 28, no. 6, pp. 999–1041, 2016.

[4] C. Neuper, R. Scherer, S. Wriessnegger, and G. Pfurtscheller, "Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface," Clinical Neurophysiology, vol. 120, no. 2, pp. 239–247, 2009.

[5] H.-J. Hwang, K. Kwon, and C.-H. Im, "Neurofeedback-based motor imagery training for brain–computer interface (BCI)," Journal of Neuroscience Methods, vol. 179, no. 1, pp. 150–156, 2009.

[6] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, "Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks," NeuroImage, vol. 31, no. 1, pp. 153–159, 2006.

[7] M. Ahn and S. C. Jun, "Performance variation in motor imagery brain-computer interface: a brief review," Journal of Neuroscience Methods, vol. 243, pp. 103–110, 2015.

[8] S. Deng, R. Srinivasan, T. Lappas, and M. D'Zmura, "EEG classification of imagined syllable rhythm using Hilbert spectrum methods," Journal of Neural Engineering, vol. 7, no. 4, Article ID 046006, 2010.

[9] C. S. DaSalla, H. Kambara, M. Sato, and Y. Koike, "Single-trial classification of vowel speech imagery using common spatial patterns," Neural Networks, vol. 22, no. 9, pp. 1334–1339, 2009.

[10] X. Pei, D. L. Barbour, E. C. Leuthardt, and G. Schalk, "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans," Journal of Neural Engineering, vol. 8, no. 4, Article ID 046028, 2011.

[11] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 4, no. 2, article R01, pp. R1–R13, 2007.

[12] K. Brigham and B. V. K. V. Kumar, "Imagined speech classification with EEG signals for silent communication: a preliminary investigation into synthetic telepathy," in Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE '10), Chengdu, China, June 2010.

[13] J. Kim, S.-K. Lee, and B. Lee, "EEG classification in a single-trial basis for vowel speech perception using multivariate empirical mode decomposition," Journal of Neural Engineering, vol. 11, no. 3, Article ID 036010, 2014.

[14] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489–501, 2006.

[15] Y. Song and J. Zhang, "Automatic recognition of epileptic EEG patterns via Extreme Learning Machine and multiresolution feature extraction," Expert Systems with Applications, vol. 40, no. 14, pp. 5477–5489, 2013.

[16] Q. Yuan, W. Zhou, S. Li, and D. Cai, "Epileptic EEG classification based on extreme learning machine and nonlinear features," Epilepsy Research, vol. 96, no. 1-2, pp. 29–38, 2011.

[17] M. N. I. Qureshi, B. Min, H. J. Jo, and B. Lee, "Multiclass classification for the differential diagnosis on the ADHD subtypes using recursive feature elimination and hierarchical extreme learning machine: structural MRI study," PLoS ONE, vol. 11, no. 8, Article ID e0160697, 2016.

[18] L. Gao, W. Cheng, J. Zhang, and J. Wang, "EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine," Review of Scientific Instruments, vol. 87, no. 8, Article ID 085110, 2016.

[19] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 58, no. 1, pp. 267–288, 1996.

[20] J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.

[21] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.

[22] J. Cao, Z. Lin, and G.-B. Huang, "Composite function wavelet neural networks with extreme learning machine," Neurocomputing, vol. 73, no. 7-9, pp. 1405–1416, 2010.

[23] G.-B. Huang, X. Ding, and H. Zhou, "Optimization method based extreme learning machine for classification," Neurocomputing, vol. 74, no. 1-3, pp. 155–163, 2010.

[24] P. L. Bartlett, "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.

[25] L.-C. Shi and B.-L. Lu, "EEG-based vigilance estimation using extreme learning machines," Neurocomputing, vol. 102, pp. 135–143, 2013.

[26] Y. Peng and B.-L. Lu, "Discriminative manifold extreme learning machine and applications to image and EEG signal classification," Neurocomputing, vol. 174, pp. 265–277, 2014.

[27] N.-Y. Liang, P. Saratchandran, G.-B. Huang, and N. Sundararajan, "Classification of mental tasks from EEG signals using extreme learning machine," International Journal of Neural Systems, vol. 16, no. 1, pp. 29–38, 2006.

[28] J. Kim, H. S. Shin, K. Shin, and M. Lee, "Robust algorithm for arrhythmia classification in ECG using extreme learning machine," BioMedical Engineering OnLine, vol. 8, article 31, 2009.

[29] S. Karpagachelvi, M. Arthanari, and M. Sivakumar, "Classification of electrocardiogram signals with support vector machines and extreme learning machine," Neural Computing and Applications, vol. 21, no. 6, pp. 1331–1339, 2012.

[30] W. Huang, Z. M. Tan, Z. Lin et al., "A semi-automatic approach to the segmentation of liver parenchyma from 3D CT images with Extreme Learning Machine," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 3752–3755, Buenos Aires, Argentina, September 2012.

[31] R. Barea, L. Boquete, S. Ortega, E. López, and J. M. Rodríguez-Ascariz, "EOG-based eye movements codification for human computer interaction," Expert Systems with Applications, vol. 39, no. 3, pp. 2677–2683, 2012.

[32] G. Huang, G.-B. Huang, S. Song, and K. You, "Trends in extreme learning machines: a review," Neural Networks, vol. 61, pp. 32–48, 2015.

[33] B. M. Idrees and O. Farooq, "Vowel classification using wavelet decomposition during speech imagery," in Proceedings of the International Conference on Signal Processing and Integrated Networks (SPIN '16), pp. 636–640, Noida, India, February 2016.

[34] A. Riaz, S. Akhtar, S. Iftikhar, A. A. Khan, and A. Salman, "Inter comparison of classification techniques for vowel speech imagery using EEG sensors," in Proceedings of the 2nd International Conference on Systems and Informatics (ICSAI '14), pp. 712–717, IEEE, Shanghai, China, November 2014.

[35] S. Martin, P. Brunner, I. Iturrate et al., "Word pair classification during imagined speech using direct brain recordings," Scientific Reports, vol. 6, Article ID 25803, 2016.

[36] X. Pei, J. Hill, and G. Schalk, "Silent communication: toward using brain signals," IEEE Pulse, vol. 3, no. 1, pp. 43–46, 2012.

[37] A.-L. Giraud and D. Poeppel, "Cortical oscillations and speech processing: emerging computational principles and operations," Nature Neuroscience, vol. 15, no. 4, pp. 511–517, 2012.

[38] V. L. Towle, H.-A. Yoon, M. Castelle et al., "ECoG γ activity during a language task: differentiating expressive and receptive speech areas," Brain, vol. 131, no. 8, pp. 2013–2027, 2008.

[39] S. Martin, P. Brunner, C. Holdgraf et al., "Decoding spectrotemporal features of overt and covert speech from the human cortex," Frontiers in Neuroengineering, vol. 7, article 14, 2014.

[40] J. S. Brumberg, A. Nieto-Castanon, P. R. Kennedy, and F. H. Guenther, "Brain-computer interfaces for speech communication," Speech Communication, vol. 52, no. 4, pp. 367–379, 2010.

[41] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.

[42] D. J. Krusienski, E. W. Sellers, F. Cabestaing et al., "A comparison of classification techniques for the P300 Speller," Journal of Neural Engineering, vol. 3, no. 4, article 007, pp. 299–305, 2006.

[43] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, and D. Hu, "A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm," Journal of Neural Engineering, vol. 10, no. 2, Article ID 026012, 2013.

[44] C. Herff and T. Schultz, "Automatic speech recognition from neural signals: a focused review," Frontiers in Neuroscience, vol. 10, article 429, 2016.

[45] S. Chakrabarti, H. M. Sandberg, J. S. Brumberg, and D. J. Krusienski, "Progress in speech decoding from the electrocorticogram," Biomedical Engineering Letters, vol. 5, no. 1, pp. 10–21, 2015.

[46] K. Mohanchandra and S. Saha, "A communication paradigm using subvocalized speech: translating brain signals into speech," Augmented Human Research, vol. 1, no. 1, article 3, 2016.

[47] O. Ossmy, I. Fried, and R. Mukamel, "Decoding speech perception from single cell activity in humans," NeuroImage, vol. 117, pp. 151–159, 2015.

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

BioMed Research International 3

02 sec

01 sec3 sec

4 (features) times 60 (channels)= 240 (dimension of features)

Featureextraction

Featureselection

(Mean variancestandard deviation

skewness)

Sparse regressionmodel based

feature selection

ClassifierMajority

report

9000 samples = 6 (conditions) times 50 (trials) times 30 (blocks)

(ELM ELM-R ELM-LLDA SVM-R)

Figure 2 Overall signal processing procedure for classification First each trial was divided into thirty blocks with a 02 s length and 01 soverlap Mean variance standard deviation and skewness were extracted from all blocks and channels Sequentially sparse-regression-model-based feature selection was employed to reduce the dimension of the features All features were used as the input of the trainedclassifier Because each trial includes thirty blocks thirty classifier outputs were acquired therefore the label of each trial was determined byselecting the most frequent output of the thirty classifier outputs

Eugene OR USA)were used to record the EEG signals usinga 1000Hz sampling rate (Net Station version 456)

23 Data Processing and Classification Procedure

231 Preprocessing First we resampled the acquired EEGdata into 250Hz for fast preprocessing procedure The EEGdata was bandpass filtered with 1ndash100Hz Sequentially an IIRnotch filter (Butterworth order 4 bandwidth 59ndash61Hz) wasapplied to remove the power line noise

In general EEG classification has problems in terms ofpoor generalization performance and the overfitting phe-nomenon because the number of samples is much smallerthan the dimension of the features Therefore to obtainenough samples for learning and testing the classifier wedivided each imagination trial for 3 s into 30 time segmentswith a 02 s length and 01 s overlap Therefore we obtaineda total of 9000 segments = (6 (conditions) times 50 (trials pereach condition) times 30 segments) to learn and test the classifierWe calculated the mean variance standard deviation andskewness from each segment to acquire the feature vectorfor the classifier The dimension of the feature vector is240 (4 (types of features) times 60 (the number of channels))Additionally to reduce the dimension of the feature vectorwe applied a feature selection algorithm based on the sparseregression model The selected set of features extracted fromall segments was employed to learn and test the classifierBecause a trial consists of thirty segments a trial has thirtyoutputs of the classifier Therefore the label of the test trialwas determined by selecting themost frequent output amongthe outputs of the thirty segments The training and testingof the classifier model are conducted using the segmentsextracted only from training data and testing data respec-tively Finally to accurately estimate the classification perfor-mance we applied 10-fold cross-validationThe classificationaccuracies of ELM extreme learning machine with linearfunction (ELM-L) extreme learning machine with radialbasis function (ELM-R) and SVM-R for all five subjects werecompared to select the optimal classifier to discriminate the

vowel imagination The overall signal processing proceduresare briefly described in Figure 2

232 Sparse-Regression-Model-Based Feature Selection Tib-shirani developed a sparse regression model known as theLasso estimate [19] In this study we employed the sparseregression model to select the discriminative set of featuresto classify the EEG responses to covert articulation Theformula for selecting discriminative features based on thesparse regression model can be described as follows

zlowast = argminz

1003817100381710038171003817Fz minus t100381710038171003817100381722 + 120582 1199111 (1)

where sdot 119901 denotes the 119897119901-norm z is a sparse vector to belearned and zlowast indicates an optimal sparse vector t isin R119873119905times1

is a vector about the true class label for the number of trainingsamples119873119905 and 120582 is a positive regularization parameter thatcontrols the sparsity of z F is the matrix that consists of themean variance standard deviation and skewness for eachchannel

F = [f1 f2 f240] (2)

where f119901 isin R119873119905times1 is the 119901th column vector of F The coor-dinate descent algorithm is adopted to solve the optimizationproblem in (1) [20]

The column vectors in F corresponding to the zero entriesin z are excluded to form an optimized feature set F that isof lower dimensionality than F

233 Extreme Learning Machine Conventional feedforwardneural networks require weights and biases for all layersto be adjusted by the gradient-based learning algorithmsHowever the procedure for tuning the parameters of alllayers is very slow because it is repeated many times andits solutions easily fall into local optima For this reasonHuang et al proposed ELM which randomly assigns theinput weights and analytically calculates only the output

4 BioMed Research International

weights Therefore the learning speed of ELM is much fasterthan conventional learning algorithms and has outstandinggeneralization performance [21ndash23] If we assume the 119873119905training samples (v119896 l119896)119873119905119896=1 where v119896 is an 119899-dimensionalfeature vector v119896 = [V1198961 V1198962 V119896119899]119879 and l119896 is the truelabels which consists of 119898-classes l119896 = [1198971198961 1198971198962 119897119896119898]119879a standard SLFN with 119873ℎ hidden neurons and activationfunction 119886(sdot) can be formulated as follows

119873ℎsum119895=1

wℎ119895119886 (w119894119895 sdot v119896 + b119895) = o119896 119896 = 1 119873119905 (3)

where w119894119895 = [1199081198941198951 1199081198941198952 119908119894119895119899]119879 is the weight vector for theinput layer between the 119895th hidden neuron and the inputneurons wℎ119895 = [119908ℎ1198951 119908ℎ1198952 119908ℎ119895119898]119879 is the weight vector forthe hidden layer between the 119895th hidden neuron and theoutput neurons o119896 = [1199001198961 1199001198962 119900119896119898]119879 is the output vectorof the network and b119895 is the bias of the 119895th hidden neuronThe operator sdot indicates the inner product We can nowreformulate the equation into matrix form as follows

AWℎ = O (4)

where

A

=[[[[[

119886 (w1198941 sdot v1 + b1) sdot sdot sdot 119886 (w119894119873ℎ sdot v1 + b119873ℎ) sdot sdot sdot

119886 (w1198941 sdot v119873119905 + b1) sdot sdot sdot 119886 (w119894119873ℎ sdot v119873119905 + b119873ℎ)

]]]]]119873119905times119873ℎ

Wℎ = [wℎ1 sdot sdot sdot wℎ119873ℎ]119879

119873ℎtimes119898

O = [o1 sdot sdot sdot o119873119905]119879119873119905times119898

(5)

where matrix A is the output matrix of the hidden layer andthe operator 119879 indicates the transpose of the matrix Becausethe ELMalgorithm randomly selects the inputweightsw119894119895 andbiases b119895 we can find weights for the hidden layer wℎ119895 bysolving the following optimization problem

minwℎ119895

10038171003817100381710038171003817AWℎ minus L100381710038171003817100381710038172 (6)

where L is the matrix of true labels for training samples

L = (l1 sdot sdot sdot l119873119905)119879119873119905times119898 (7)

The above problem is known as a linear system optimizationproblem and its unique least-squares solution with a mini-mum norm is as follows

Wℎ = AdaggerL (8)

where Adagger is the MoorendashPenrose generalized inverse of thematrix A According to the analysis of Bartlett and Huang

the ELM algorithms achieve not only the minimum squaretraining error but also the best generalization performanceon novel test samples [14 24]

In this paper the activation function 119886(sdot)was determinedto be a sigmoidal function and the probability densityfunction for assigning the input weights and biases was setto be a uniform distribution function

3 Results and Discussion

31 Time-Frequency Analysis for Imagined Speech EEG DataWe computed the time-frequency representation (TFR) ofimagined speech EEG data for every subject to identifyspeech-related brain activities TFR of each trial was cal-culated using a Morlet wavelet and averaged over all trialsAmong the five subjects we plotted TFRs of subjects 2and 5 which showed notable patterns in gamma frequencyAs shown in Figure 3 much of the gamma band (30ndash70Hz) powers of five vowel conditions (a e i o andu) in the left temporal area are totally distinct and muchhigher than those of the control condition (mute sound) Inaddition topographical head plot of subject 5 was presentedin Figure 4 Increased gamma activities were observed in bothtemporal regions when the subject imagined vowels

32 Classification Results Figure 5 shows the classificationaccuracies averaged over all pairwise classifications for fivesubjects using ELM ELM-L ELM-R SVM-R and LDA Wealso conducted SVM and SVM with a linear kernel but theresults of SVM and SVM with a linear kernel are excludedbecause these classifiers could not be converged during manyiterations (100000 times) All classification accuracies areestimated by 10 times 10-fold cross-validation In the cases ofsubjects 1 3 and 4 ELM-L shows the best classificationperformance compared to the other four classifiers HoweverELM-R shows the best classification accuracies in subjects2 and 5 In the cases of all subjects the classificationaccuracies of ELM ELM-L and ELM-R are much better thanthose of SVM-R which are approximately the chance levelof 50 To identify the best classifier to discriminate thevowel imagination we conducted paired 119905-tests between theclassification accuracies of ELM-R and those of the otherthree classifiers As a result the classification performance ofELM-R is significantly better than those of ELM (119901 lt 001)LDA (119901 lt 001) and SVM-R (119901 lt 001) However there is nosignificant difference between the classification accuracies ofELM-R and ELM-L (119901 = 046)

Table 1 describes the classification accuracies of subject2 which shows the highest overall accuracies among allsubjects after 10 times 10-fold cross-validation for all pairwisecombinations In almost all pairwise combinations ELM-R has better classification performance than the other fourclassifiers for subject 2 The most discriminative pairwisecombination for subject 2 is vowels a and i which shows100 classification accuracy using ELM-R for subject 2

Table 2 contains the results of ELM-R for the pairwisecombinations and shows the top five classification perfor-mances for each subject There is no pairwise combinationto be selected from all subjects however a versus mute and

BioMed Research International 5

Table1Classifi

catio

naccuracies

in

employingSV

M-RE

LME

LM-LE

LM-Rand

LDAfors

ubject

2Th

ehigh

estclassificatio

naccuracy

amon

gthefour

classifiersism

arkedin

bold

forp

airw

isecombinatio

nClassifi

catio

naccuracies

areexpressedas

meanandassociated

stand

arddeviation

SVM-RE

LME

LM-LE

LM-Rand

LDAdeno

tethesupp

ortv

ectorm

achine

with

radialbasis

functio

nextre

melearningm

achineextremelearningm

achine

with

alineark

ernelextre

melearningm

achine

with

aradialbasisfunctio

nandlin

eard

iscrim

inantanalysis

respectiv

ely

Classifi

erav

ersus

e

av

ersus

iav

ersus

o

av

ersus

u

ev

ersus

iev

ersus

o

ev

ersus

u

iversus

o

iversus

u

oversus

u

av

ersus

mute

ev

ersus

mute

iversus

mute

ov

ersus

mute

u

versus

mute

SVM-R

5024

plusmn10

155

32plusmn

215

5141

plusmn05

150

22plusmn

060

5031

plusmn01

949

18plusmn

023

5241

plusmn15

752

47plusmn

071

5111

plusmn02

251

02plusmn

132

4948

plusmn07

050

35plusmn

022

5122

plusmn12

250

14plusmn

160

5123

plusmn02

3EL

M81

41plusmn

118

9423

plusmn10

262

42plusmn

215

7334

plusmn32

190

55plusmn

178

6976

plusmn12

9456

23plusmn

348

9632

plusmn01

897

47plusmn

076

6681

plusmn38

392

85plusmn

243

8231

plusmn21

699

41plusmn

012

8816

plusmn37

480

28plusmn

287

ELM-L

8132

plusmn04

798

15plusmn

067

6711

plusmn10

482

22plusmn

240

8831

plusmn17

378

25plusmn

223

5314

plusmn31

098

16plusmn

037

9823

plusmn21

268

36plusmn

163

9228

plusmn08

380

49plusmn

370

9912

plusmn03

393

14plusmn

221

8725

plusmn11

2EL

M-R

8628

plusmn11

299

02plusmn

076

7303

plusmn46

283

14plusmn

102

8908

plusmn01

478

27plusmn

016

5836

plusmn34

198

15plusmn

022

9707

plusmn14

273

38plusmn

273

9509

plusmn10

389

28plusmn

313

9901

plusmn01

493

25plusmn

073

9301

plusmn11

2

LDA

7925

plusmn16

290

32plusmn

261

6057

plusmn21

384

12plusmn

114

8823

plusmn16

770

04plusmn

143

5638

plusmn14

197

07plusmn

039

9628

plusmn16

265

14plusmn

173

9126

plusmn14

380

38plusmn

470

9807

plusmn13

290

39plusmn

162

8225

plusmn12

7

6 BioMed Research International

Time (seconds)

Freq

uenc

y (H

z)Vowel a

Vowel e

Vowel i

Vowel o

Vowel u

Mute2151

Vowel a

Vowel e

Vowel i

Vowel o

Vowel u

Mute

Subject 2 Subject 5

60

20

60

2040

60

2040

60

2040

60

2040

20

6040

40

0 02 04 06 08 1 12 14 16 18 2

Time (seconds)0 02 04 06 08 1 12 14 16 18 2

2151

2151

2151

2151

2151

uV2

uV2

uV2

uV2

uV2

uV2

Figure 3 Time-frequency representation (TFR) of EEG signals averaged over all trials for subjects 2 and 5 The EEG signals were obtainedfrom eight electrodes in the left temporal areas during each of the six experimental conditions (vowels a e i o u and mute) TheEEG data were bandpass filtered with 1ndash100Hz and a Morlet mother wavelet transform was used to calculate the TFR The TFRs are plottedfor the first 2 s after final beep sound and for the frequency range of 10ndash70Hz

15

14

13

12

11

10

09

15

14

13

12

11

10

09

A E I

O U Mute

Figure 4 Topographical distribution of gamma activities during vowel imagination for subject 5 Increased activities were observed in bothtemporal areas when the subject imagined vowels Time interval for the analysis is 0ndash3 sec

i versus mute are selected from four subjects and a versusi is selected from three subjects

Table 3 indicates the confusion matrix for all pairwisecombinations and subjects using ELM ELM-L ELM-RSVM-R and LDA In terms of sensitivity and specificityELM-L is the best classifier for our EEG data AlthoughSVM-R shows higher specificity than those of the other threeclassifiers in this table SVM-R classified almost all conditionsas positive and resulted in poor sensitivity therefore the highspecificity of the SVM-R is possibly invalid Thus SVM-Rmight be an unsuitable classifier for our study

33 Discussion Overall ELM ELM-L and ELM-R showedbetter performance than the SVM-R and LDA algorithms inthis study In several previous studies ELM achieved similaror better classification accuracy rates with much less trainingtime compared to other algorithms using EEG data [16 25ndash27] However we could not find studies on classification ofimagined speech using ELM algorithms Deng et al reportedclassification rates using LDA for imagined speech with7267 of the highest accuracy but the average results werenot much better than the chance level [8] DaSalla et al usingSVM showed approximately 82 of the best accuracy and

BioMed Research International 7

S4S5S2

S3

S1SV

M-R

LDA

ELM

ELM

-LEL

M-R

SVM

-RLD

AEL

MEL

M-L

ELM

-RSV

M-R

LDA

ELM

ELM

-LEL

M-R

SVM

-RLD

AEL

MEL

M-L

ELM

-RSV

M-R

LDA

ELM

ELM

-LEL

M-R

4045505560657075808590

Clas

sifica

tion

accu

raci

es (

)

Figure 5 Averaged classification accuracies over all pairwise classification using a support vector machine with a radial basis function kernel(SVM-R) extreme learning machine (ELM) extreme learning machine with a linear kernel (ELM-L) and extreme learning machine with aradial basis function kernel (ELM-R) for all five subjects

Table 2 Classification accuracies in employing ELM-R for the pairwise combinations which shows the top five classification performancesfor each subject Classification accuracies are expressed as mean and associated standard deviation

Subjects

S1 8647 plusmn 107 8121 plusmn 103 8001 plusmn 373 7335 plusmn 317 7244 plusmn 171(a versus i) (a versus mute) (a versus u) (a versus e) (i versus o)

S2 9902 plusmn 076 9930 plusmn 014 9822 plusmn 022 9514 plusmn 103 9301 plusmn 073(a versus i) (i versus mute) (i versus o) (a versus mute) (o versus mute)

S3 9208 plusmn 108 9019 plusmn 063 8915 plusmn 137 8727 plusmn 071 7038 plusmn 138(e versus mute) (i versus mute) (u versus mute) (o versus mute) (a versus i)

S4 9333 plusmn 031 9227 plusmn 103 9224 plusmn 213 9112 plusmn 054 9005 plusmn 183(i versus mute) (u versus mute) (a versus mute) (e versus mute) (o versus mute)

S5 9632 plusmn 231 9401 plusmn 017 9229 plusmn 114 9007 plusmn 058 8806 plusmn 123(e versus mute) (i versus mute) (o versus mute) (a versus mute) (u versus mute)

73 of the average result overall [9] whereas Huang et alreported that ELM tends to have a much higher learningspeed and comparable generalization performance in binaryclassification [21] In another study Huang argued that ELMhas fewer optimization constraints owing to its special sepa-rability feature and results in simpler implementation fasterlearning and better generalization performance [23] Thusour results showed consistent characterswith othersrsquo previousresearch using ELM and even similar or better classificationresults for imagined speech compared to other research usingdifferent algorithms Recently ELM algorithms have beenextensively applied in many other medical and biomedicalstudies [28ndash31] More detailed information about ELM canbe found in a recent review [32]

In this study each trial was divided into the thirty timesegments of 02 s in length and a 01 s overlap Each timesegment was considered as a sample for training the classifierand the final label of the test sample was determined byselecting the most frequent output (see Figure 2) We alsocompared the classification accuracy of our method withthose of a conventional method that does not divide the trialsinto multiple time segments As a result our method showedsuperior performance in terms of classification accuracy to

the conventional method In our opinion by dividing thetrials some effects such as increasing number of trials forclassifier training might occur and each time segment witha 02 s length is likely to retain enough information fordiscrimination of EEG vowel imagination Generally EEGclassification has problems in terms of poor generalizationperformance and the overfitting phenomenon because ofthe deficiency of the number of samples for the classifierTherefore an increased number of samples by dividingtrials could mitigate the aforementioned problems Howeverfurther analyses are required to prove our assumptions insubsequent studies

To reduce the dimension of the feature vector weemployed a feature selection algorithm based on the sparseregression model In the sparse-regression-model-based fea-ture selection algorithm the regularization parameter 120582 ofequation (1) must be carefully selected because 120582 determinesthe dimension of the optimized feature parameter Forexample when the selected 120582 is too large the algorithmexcludes discriminative features from an optimal feature setF However when users set 120582 too small redundant featuresare not excluded from an optimal feature set F Thereforethe optimal value for 120582 was selected by cross-validation on

8 BioMed Research International

Table3Con

fusio

nmatrix

fora

llpairw

isecombinatio

nsandsubjectsusingEL

ME

LM-LE

LM-RSVM-Rand

LDA

Classifi

ers

ELM

ELM-L

ELM-R

SVM-R

LDA

Con

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

npo

sitive

Testpo

sitive

2516

1234

2649

1101

2635

1115

3675

752556

1194

Testnegativ

e1509

2241

1261

2489

1297

2453

3525

225

1398

2352

Sensitivity=

06251

Specificity=

064

49Sensitivity=

06775

Specificity=

06933

Sensitivity=

06701

Specificity=

06875

Sensitivity=

05104

Specificity=

07500

Sensitivity=

064

64Specificity=

06633

BioMed Research International 9

40455055606570758085

Clas

sifica

tion

accu

racy

()

002 004 016008 01 012 014 0180 006Regularization parameter

Figure 6 Effects of varying the regularization parameter on the classification accuracies obtained by ELM-R with sparse-regression-model-based feature selection for subject 2 The parameter value giving the highest accuracy is highlighted with a red circle

the training session in our study For example the changeof classification accuracy caused by varying 120582 for subject 1 isillustrated in Figure 6 In the case of a and i using ELM-R the best classification accuracy reached a plateau at 120582 =008 and declined after 014 However the optimal values of 120582are totally different among the pairwise combinations and allsubjects

Furthermore our optimized results were achieved in thegamma frequency band (30ndash70Hz) We also tested the otherfrequency ranges such as beta (13ndash30Hz) alpha (8ndash13Hz)and theta (4ndash8Hz) however the classification rates of thosebands were not much better than the chance level in everysubject and pairwise combination of syllables In additionthe results of our TFR and topographical analysis (Figures3 and 4) could support some relationship between gammaactivities and imagined speech processing As far as we knowin the EEG classification of imagined speech there have beenonly a few studies that examined the differences betweenmultiple frequency bands including gamma frequency [3334] Therefore our study might be the first report that thegamma frequency band could play an important role asfeatures for the EEG classification of imagined speech More-over several studies using ECoG reported quite good resultsin the gamma frequency for imagined speech classification[35 36] and these findings are consistent with our resultsHowever several studies have been conducted that suggestedthe role of gamma frequency band for speech processingin neurophysiological perspectives [37ndash39] However thosestudies usually used intracranial recordings and focused onthe analysis for the high gamma (70ndash150Hz) frequency bandThus suggesting a relevance between those results and ourclassification study is not easy However a certain relationbetween some information in low gamma frequencies as afeature for classification and its implication from a neuro-physiological view will be specified in future studies

Currently communication systems with various BCItechnologies have been developed for disabled people [40]For instance the P300 speller is one of the most widelyresearched BCI technologies to decode verbal thoughts fromEEG [41] Despite many efforts toward better and fasterperformance the P300 speller is still insufficient for usein normal conversation [42 43] whereas independent of

the P300 component efforts toward extraction and analysisof EEG or ECoG induced by imagined speech have beenconducted [44 45] In this context our results of highperformance from the application of ELM and its variantshave potential to advance BCI research using silent speechcommunication However the pairwise combinations withthe highest accuracies (see Table 2) differed in each subjectAfter experiment each participant reported different patternsof vowel discrimination For example one subject reportedthat he could not discriminate e from i and the othersubject reported the other pair was not easy to distinguishAlthough those reports were not exactly matched to theresults of classification these discrepancies of subjectivesensory perception might be related to process of imaginingspeech and classification results Besides we have not triedmulticlass classification in this study yet some attemptsin multiclass classification of imagined speech have beenperformed by others [8 46 47] These issues related tointersubject variability and multiclass systems should beconsidered for our future study to developmore practical andgeneralized BCI systems using silent speech

4. Conclusions

In the present study, we applied classification algorithms to EEG data of imagined speech. In particular, we compared ELM and its variants with the SVM-R and LDA algorithms and observed that ELM and its variants showed better performance on our data. These results might contribute to the development of silent speech BCI systems.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors' Contributions

Beomjun Min and Jongin Kim equally contributed to this work.


Acknowledgments

This research was supported by the GIST Research Institute (GRI) in 2016 and by the Pioneer Research Center Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (Grant no. 2012-0009462).

References

[1] H.-J. Hwang, S. Kim, S. Choi, and C.-H. Im, "EEG-based brain-computer interfaces: a thorough literature survey," International Journal of Human-Computer Interaction, vol. 29, no. 12, pp. 814–826, 2013.

[2] U. Chaudhary, N. Birbaumer, and A. Ramos-Murguialday, "Brain–computer interfaces for communication and rehabilitation," Nature Reviews Neurology, vol. 12, no. 9, pp. 513–525, 2016.

[3] M. Hamedi, S.-H. Salleh, and A. M. Noor, "Electroencephalographic motor imagery brain connectivity analysis for BCI: a review," Neural Computation, vol. 28, no. 6, pp. 999–1041, 2016.

[4] C. Neuper, R. Scherer, S. Wriessnegger, and G. Pfurtscheller, "Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface," Clinical Neurophysiology, vol. 120, no. 2, pp. 239–247, 2009.

[5] H.-J. Hwang, K. Kwon, and C.-H. Im, "Neurofeedback-based motor imagery training for brain–computer interface (BCI)," Journal of Neuroscience Methods, vol. 179, no. 1, pp. 150–156, 2009.

[6] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, "Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks," NeuroImage, vol. 31, no. 1, pp. 153–159, 2006.

[7] M. Ahn and S. C. Jun, "Performance variation in motor imagery brain-computer interface: a brief review," Journal of Neuroscience Methods, vol. 243, pp. 103–110, 2015.

[8] S. Deng, R. Srinivasan, T. Lappas, and M. D'Zmura, "EEG classification of imagined syllable rhythm using Hilbert spectrum methods," Journal of Neural Engineering, vol. 7, no. 4, Article ID 046006, 2010.

[9] C. S. DaSalla, H. Kambara, M. Sato, and Y. Koike, "Single-trial classification of vowel speech imagery using common spatial patterns," Neural Networks, vol. 22, no. 9, pp. 1334–1339, 2009.

[10] X. Pei, D. L. Barbour, E. C. Leuthardt, and G. Schalk, "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans," Journal of Neural Engineering, vol. 8, no. 4, Article ID 046028, 2011.

[11] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 4, no. 2, article R01, pp. R1–R13, 2007.

[12] K. Brigham and B. V. K. V. Kumar, "Imagined speech classification with EEG signals for silent communication: a preliminary investigation into synthetic telepathy," in Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE '10), Chengdu, China, June 2010.

[13] J. Kim, S.-K. Lee, and B. Lee, "EEG classification in a single-trial basis for vowel speech perception using multivariate empirical mode decomposition," Journal of Neural Engineering, vol. 11, no. 3, Article ID 036010, 2014.

[14] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[15] Y. Song and J. Zhang, "Automatic recognition of epileptic EEG patterns via Extreme Learning Machine and multiresolution feature extraction," Expert Systems with Applications, vol. 40, no. 14, pp. 5477–5489, 2013.

[16] Q. Yuan, W. Zhou, S. Li, and D. Cai, "Epileptic EEG classification based on extreme learning machine and nonlinear features," Epilepsy Research, vol. 96, no. 1-2, pp. 29–38, 2011.

[17] M. N. I. Qureshi, B. Min, H. J. Jo, and B. Lee, "Multiclass classification for the differential diagnosis on the ADHD subtypes using recursive feature elimination and hierarchical extreme learning machine: structural MRI study," PLoS ONE, vol. 11, no. 8, Article ID e0160697, 2016.

[18] L. Gao, W. Cheng, J. Zhang, and J. Wang, "EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine," Review of Scientific Instruments, vol. 87, no. 8, Article ID 085110, 2016.

[19] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.

[20] J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.

[21] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.

[22] J. Cao, Z. Lin, and G.-B. Huang, "Composite function wavelet neural networks with extreme learning machine," Neurocomputing, vol. 73, no. 7–9, pp. 1405–1416, 2010.

[23] G.-B. Huang, X. Ding, and H. Zhou, "Optimization method based extreme learning machine for classification," Neurocomputing, vol. 74, no. 1–3, pp. 155–163, 2010.

[24] P. L. Bartlett, "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.

[25] L.-C. Shi and B.-L. Lu, "EEG-based vigilance estimation using extreme learning machines," Neurocomputing, vol. 102, pp. 135–143, 2013.

[26] Y. Peng and B.-L. Lu, "Discriminative manifold extreme learning machine and applications to image and EEG signal classification," Neurocomputing, vol. 174, pp. 265–277, 2014.

[27] N.-Y. Liang, P. Saratchandran, G.-B. Huang, and N. Sundararajan, "Classification of mental tasks from EEG signals using extreme learning machine," International Journal of Neural Systems, vol. 16, no. 1, pp. 29–38, 2006.

[28] J. Kim, H. S. Shin, K. Shin, and M. Lee, "Robust algorithm for arrhythmia classification in ECG using extreme learning machine," BioMedical Engineering OnLine, vol. 8, article 31, 2009.

[29] S. Karpagachelvi, M. Arthanari, and M. Sivakumar, "Classification of electrocardiogram signals with support vector machines and extreme learning machine," Neural Computing and Applications, vol. 21, no. 6, pp. 1331–1339, 2012.

[30] W. Huang, Z. M. Tan, Z. Lin, et al., "A semi-automatic approach to the segmentation of liver parenchyma from 3D CT images with Extreme Learning Machine," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 3752–3755, Buenos Aires, Argentina, September 2012.

[31] R. Barea, L. Boquete, S. Ortega, E. López, and J. M. Rodríguez-Ascariz, "EOG-based eye movements codification for human computer interaction," Expert Systems with Applications, vol. 39, no. 3, pp. 2677–2683, 2012.

[32] G. Huang, G.-B. Huang, S. Song, and K. You, "Trends in extreme learning machines: a review," Neural Networks, vol. 61, pp. 32–48, 2015.

[33] B. M. Idrees and O. Farooq, "Vowel classification using wavelet decomposition during speech imagery," in Proceedings of the International Conference on Signal Processing and Integrated Networks (SPIN '16), pp. 636–640, Noida, India, February 2016.

[34] A. Riaz, S. Akhtar, S. Iftikhar, A. A. Khan, and A. Salman, "Inter comparison of classification techniques for vowel speech imagery using EEG sensors," in Proceedings of the 2nd International Conference on Systems and Informatics (ICSAI '14), pp. 712–717, IEEE, Shanghai, China, November 2014.

[35] S. Martin, P. Brunner, I. Iturrate, et al., "Word pair classification during imagined speech using direct brain recordings," Scientific Reports, vol. 6, Article ID 25803, 2016.

[36] X. Pei, J. Hill, and G. Schalk, "Silent communication: toward using brain signals," IEEE Pulse, vol. 3, no. 1, pp. 43–46, 2012.

[37] A.-L. Giraud and D. Poeppel, "Cortical oscillations and speech processing: emerging computational principles and operations," Nature Neuroscience, vol. 15, no. 4, pp. 511–517, 2012.

[38] V. L. Towle, H.-A. Yoon, M. Castelle, et al., "ECoG γ activity during a language task: differentiating expressive and receptive speech areas," Brain, vol. 131, no. 8, pp. 2013–2027, 2008.

[39] S. Martin, P. Brunner, C. Holdgraf, et al., "Decoding spectrotemporal features of overt and covert speech from the human cortex," Frontiers in Neuroengineering, vol. 7, article 14, 2014.

[40] J. S. Brumberg, A. Nieto-Castanon, P. R. Kennedy, and F. H. Guenther, "Brain-computer interfaces for speech communication," Speech Communication, vol. 52, no. 4, pp. 367–379, 2010.

[41] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.

[42] D. J. Krusienski, E. W. Sellers, F. Cabestaing, et al., "A comparison of classification techniques for the P300 Speller," Journal of Neural Engineering, vol. 3, no. 4, article 007, pp. 299–305, 2006.

[43] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, and D. Hu, "A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm," Journal of Neural Engineering, vol. 10, no. 2, Article ID 026012, 2013.

[44] C. Herff and T. Schultz, "Automatic speech recognition from neural signals: a focused review," Frontiers in Neuroscience, vol. 10, article 429, 2016.

[45] S. Chakrabarti, H. M. Sandberg, J. S. Brumberg, and D. J. Krusienski, "Progress in speech decoding from the electrocorticogram," Biomedical Engineering Letters, vol. 5, no. 1, pp. 10–21, 2015.

[46] K. Mohanchandra and S. Saha, "A communication paradigm using subvocalized speech: translating brain signals into speech," Augmented Human Research, vol. 1, no. 1, article 3, 2016.

[47] O. Ossmy, I. Fried, and R. Mukamel, "Decoding speech perception from single cell activity in humans," NeuroImage, vol. 117, pp. 151–159, 2015.




[14] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1-3 pp 489ndash501 2006

[15] Y Song and J Zhang ldquoAutomatic recognition of epileptic EEGpatterns via Extreme Learning Machine and multiresolutionfeature extractionrdquoExpert Systemswith Applications vol 40 no14 pp 5477ndash5489 2013

[16] Q YuanWZhou S Li andDCai ldquoEpileptic EEG classificationbased on extreme learning machine and nonlinear featuresrdquoEpilepsy Research vol 96 no 1-2 pp 29ndash38 2011

[17] M N I Qureshi B Min H J Jo and B Lee ldquoMulticlass clas-sification for the differential diagnosis on the ADHD subtypesusing recursive feature elimination and hierarchical extremelearning machine structural MRI studyrdquo PLoS ONE vol 11 no8 Article ID e0160697 2016

[18] L Gao W Cheng J Zhang and J Wang ldquoEEG classificationfor motor imagery and resting state in BCI applications usingmulti-class Adaboost extreme learning machinerdquo Review ofScientific Instruments vol 87 no 8 Article ID 085110 2016

[19] R Tibshirani ldquoRegression shrinkage and selection via the LassoRobert Tibshiranirdquo Journal of the Royal Statistical Society SeriesB (Statistical Methodology) vol 58 no 1 pp 267ndash288 2007

[20] J Friedman T Hastie and R Tibshirani ldquoRegularization pathsfor generalized linear models via coordinate descentrdquo Journal ofStatistical Software vol 33 no 1 pp 1ndash22 2010

[21] G-B Huang H Zhou X Ding and R Zhang ldquoExtremelearning machine for regression and multiclass classificationrdquoIEEE Transactions on Systems Man and Cybernetics Part BCybernetics vol 42 no 2 pp 513ndash529 2012

[22] J Cao Z Lin and G-B Huang ldquoComposite function waveletneural networks with extreme learning machinerdquo Neurocom-puting vol 73 no 7-9 pp 1405ndash1416 2010

[23] G-B Huang X Ding and H Zhou ldquoOptimization methodbased extreme learning machine for classificationrdquo Neurocom-puting vol 74 no 1-3 pp 155ndash163 2010

[24] P L Bartlett ldquoThe sample complexity of pattern classificationwith neural networks the size of the weights is more importantthan the size of the networkrdquo Institute of Electrical and Electron-ics Engineers Transactions on InformationTheory vol 44 no 2pp 525ndash536 1998

[25] L-C Shi and B-L Lu ldquoEEG-based vigilance estimation usingextreme learning machinesrdquo Neurocomputing vol 102 pp 135ndash143 2013

[26] Y Peng and B-L Lu ldquoDiscriminative manifold extreme learn-ing machine and applications to image and EEG signal classifi-cationrdquo Neurocomputing vol 174 pp 265ndash277 2014

[27] N-Y Liang P Saratchandran G-B Huang and N Sundarara-jan ldquoClassification of mental tasks from EEG signals usingextreme learning machinerdquo International Journal of NeuralSystems vol 16 no 1 pp 29ndash38 2006

[28] J Kim H S Shin K Shin and M Lee ldquoRobust algorithmfor arrhythmia classification in ECG using extreme learningmachinerdquo Biomedical Engineering OnLine vol 8 article 312009

[29] S Karpagachelvi M Arthanari and M Sivakumar ldquoClas-sification of electrocardiogram signals with support vectormachines and extreme learning machinerdquo Neural Computingand Applications vol 21 no 6 pp 1331ndash1339 2012

[30] W Huang Z M Tan Z Lin et al ldquoA semi-automatic approachto the segmentation of liver parenchyma from 3D CT imageswith Extreme Learning Machinerdquo in Proceedings of the IEEE

BioMed Research International 11

Annual International Conference of the IEEE Engineering inMedicine and Biology Society (EMBCrsquo 12) pp 3752ndash3755 Bue-nos Aires Argentina September 2012

[31] R Barea L Boquete S Ortega E Lopez and J M Rodrıguez-Ascariz ldquoEOG-based eye movements codification for humancomputer interactionrdquo Expert Systems with Applications vol 39no 3 pp 2677ndash2683 2012

[32] GHuangG-BHuang S Song andKYou ldquoTrends in extremelearning machines a reviewrdquo Neural Networks vol 61 pp 32ndash48 2015

[33] B M Idrees and O Farooq ldquoVowel classification using waveletdecomposition during speech imageryrdquo in Proceedings of theInternational Conference on Signal Processing and IntegratedNetworks (SPIN rsquo16) pp 636ndash640 Noida India Feburary 2016

[34] A Riaz S Akhtar S Iftikhar A A Khan and A SalmanldquoInter comparison of classification techniques for vowel speechimagery using EEG sensorsrdquo in Proceedings of the 2nd Interna-tional Conference on Systems and Informatics (ICSAI rsquo14) pp712ndash717 IEEE Shanghai China November 2014

[35] S Martin P Brunner I Iturrate et al ldquoWord pair classificationduring imagined speech using direct brain recordingsrdquo Scien-tific Reports vol 6 Article ID 25803 2016

[36] X Pei J Hill and G Schalk ldquoSilent communication towardusing brain signalsrdquo IEEE Pulse vol 3 no 1 pp 43ndash46 2012

[37] A-L Giraud and D Poeppel ldquoCortical oscillations and speechprocessing emerging computational principles and operationsrdquoNature Neuroscience vol 15 no 4 pp 511ndash517 2012

[38] V L Towle H-A Yoon M Castelle et al ldquoECoG 120574 activityduring a language task differentiating expressive and receptivespeech areasrdquo Brain vol 131 no 8 pp 2013ndash2027 2008

[39] SMartin P Brunner C Holdgraf et al ldquoDecoding spectrotem-poral features of overt and covert speech from the humancortexrdquo Frontiers in Neuroengineering vol 7 article 14 2014

[40] J S Brumberg A Nieto-Castanon P R Kennedy and F HGuenther ldquoBrain-computer interfaces for speech communica-tionrdquo Speech Communication vol 52 no 4 pp 367ndash379 2010

[41] L A Farwell and E Donchin ldquoTalking off the top of your headtoward a mental prosthesis utilizing event-related brain poten-tialsrdquo Electroencephalography and Clinical Neurophysiology vol70 no 6 pp 510ndash523 1988

[42] D J Krusienski EW Sellers F Cabestaing et al ldquoA comparisonof classification techniques for the P300 Spellerrdquo Journal ofNeural Engineering vol 3 no 4 article 007 pp 299ndash305 2006

[43] E Yin Z Zhou J Jiang F Chen Y Liu and D Hu ldquoA novelhybrid BCI speller based on the incorporation of SSVEP intothe P300 paradigmrdquo Journal of Neural Engineering vol 10 no2 Article ID 026012 2013

[44] C Herff and T Schultz ldquoAutomatic speech recognition fromneural signals a focused reviewrdquo Frontiers in Neuroscience vol10 article 429 2016

[45] S Chakrabarti H M Sandberg J S Brumberg and D JKrusienski ldquoProgress in speech decoding from the electrocor-ticogramrdquoBiomedical Engineering Letters vol 5 no 1 pp 10ndash212015

[46] K Mohanchandra and S Saha ldquoA communication paradigmusing subvocalized speech translating brain signals intospeechrdquoAugmentedHumanResearch vol 1 no 1 article 3 2016

[47] O Ossmy I Fried and R Mukamel ldquoDecoding speech percep-tion from single cell activity in humansrdquo NeuroImage vol 117pp 151ndash159 2015

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

Figure 3: Time-frequency representations (TFRs) of EEG signals averaged over all trials for subjects 2 and 5. The EEG signals were obtained from eight electrodes in the left temporal areas during each of the six experimental conditions (vowels a, e, i, o, u, and mute). The EEG data were bandpass filtered at 1–100 Hz, and a Morlet mother wavelet transform was used to calculate the TFR. The TFRs are plotted for the first 2 s after the final beep sound and for the frequency range of 10–70 Hz (one panel per condition for each subject; axes: time in seconds, frequency in Hz; power scale in μV²).
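As an illustration of the TFR computation described in the caption, the following minimal Python sketch convolves one EEG channel with complex Morlet wavelets to obtain power in the 10–70 Hz range. This is not the authors' analysis code: the sampling rate, wavelet width (n_cycles), and the random placeholder signal are assumptions made for the example.

```python
import numpy as np

def morlet_tfr(signal, fs, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets.

    signal : 1-D array (single EEG channel, already bandpass filtered)
    fs     : sampling rate in Hz
    freqs  : frequencies of interest in Hz
    """
    power = np.empty((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)              # temporal width of the wavelet
        t = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))    # energy normalization
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2                    # power (arbitrary units of uV^2)
    return power

# Hypothetical usage: 2 s of one temporal-lobe channel sampled at 512 Hz (assumed rate).
fs = 512
eeg = np.random.randn(2 * fs)                # placeholder for a real EEG segment
freqs = np.arange(10, 71, 2)                 # 10-70 Hz, as plotted in Figure 3
tfr = morlet_tfr(eeg, fs, freqs)
print(tfr.shape)                             # (n_freqs, n_samples)
```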

Figure 4: Topographical distribution of gamma activities during vowel imagination for subject 5. Increased activities were observed in both temporal areas when the subject imagined vowels. The time interval for the analysis is 0–3 s (panels: A, E, I, O, U, and Mute; color scale approximately 0.9–1.5).

Among the pairwise combinations with the top five classification performances (Table 2), i versus mute appears for four subjects, and a versus i appears for three subjects.

Table 3 shows the confusion matrix for all pairwise combinations and subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA. In terms of sensitivity and specificity, ELM-L is the best classifier for our EEG data. Although SVM-R shows higher specificity than the other three classifiers in this table, SVM-R classified almost all conditions as positive, which resulted in poor sensitivity; its high specificity is therefore possibly invalid. Thus, SVM-R might be an unsuitable classifier for our study.
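For reference, the sensitivity and specificity values in Table 3 follow the standard definitions computed from the pooled confusion-matrix counts. The short sketch below reproduces the ELM numbers; the helper function is ours, written for illustration only.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity (recall of the positive condition) and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positives / all condition-positive samples
    specificity = tn / (tn + fp)   # true negatives / all condition-negative samples
    return sensitivity, specificity

# Pooled ELM counts from Table 3: test+/cond+, test+/cond-, test-/cond+, test-/cond-
sens, spec = sensitivity_specificity(tp=2516, fp=1234, fn=1509, tn=2241)
print(f"sensitivity={sens:.4f}, specificity={spec:.4f}")   # 0.6251, 0.6449
```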

3.3. Discussion. Overall, ELM, ELM-L, and ELM-R showed better performance than the SVM-R and LDA algorithms in this study. In several previous studies, ELM achieved similar or better classification accuracy with much less training time than other algorithms applied to EEG data [16, 25–27]. However, we could not find studies on the classification of imagined speech using ELM algorithms. Deng et al. reported classification rates for imagined speech using LDA, with a highest accuracy of 72.67%, but the average results were not much better than the chance level [8]. DaSalla et al., using SVM, reported a best accuracy of approximately 82% and an average of 73% overall [9].

Figure 5: Averaged classification accuracies over all pairwise classifications using a support vector machine with a radial basis function kernel (SVM-R), extreme learning machine (ELM), extreme learning machine with a linear kernel (ELM-L), and extreme learning machine with a radial basis function kernel (ELM-R) for all five subjects (the plotted classifier groups also include LDA; vertical axis: classification accuracy, %).

Table 2: Classification accuracies obtained with ELM-R for the pairwise combinations showing the top five classification performances for each subject. Accuracies (%) are expressed as mean ± standard deviation.

S1: 86.47 ± 1.07 (a versus i); 81.21 ± 1.03 (a versus mute); 80.01 ± 3.73 (a versus u); 73.35 ± 3.17 (a versus e); 72.44 ± 1.71 (i versus o)
S2: 99.02 ± 0.76 (a versus i); 99.30 ± 0.14 (i versus mute); 98.22 ± 0.22 (i versus o); 95.14 ± 1.03 (a versus mute); 93.01 ± 0.73 (o versus mute)
S3: 92.08 ± 1.08 (e versus mute); 90.19 ± 0.63 (i versus mute); 89.15 ± 1.37 (u versus mute); 87.27 ± 0.71 (o versus mute); 70.38 ± 1.38 (a versus i)
S4: 93.33 ± 0.31 (i versus mute); 92.27 ± 1.03 (u versus mute); 92.24 ± 2.13 (a versus mute); 91.12 ± 0.54 (e versus mute); 90.05 ± 1.83 (o versus mute)
S5: 96.32 ± 2.31 (e versus mute); 94.01 ± 0.17 (i versus mute); 92.29 ± 1.14 (o versus mute); 90.07 ± 0.58 (a versus mute); 88.06 ± 1.23 (u versus mute)

In contrast, Huang et al. reported that ELM tends to have a much higher learning speed and comparable generalization performance in binary classification [21]. In another study, Huang argued that ELM has fewer optimization constraints owing to its special separability feature, which results in simpler implementation, faster learning, and better generalization performance [23]. Thus, our results are consistent with previous research using ELM and show similar or better classification results for imagined speech compared with studies using different algorithms. Recently, ELM algorithms have been applied extensively in many other medical and biomedical studies [28–31]; more detailed information about ELM can be found in a recent review [32].
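The speed advantage discussed above comes from the structure of ELM itself: the input-to-hidden weights are random and fixed, so training reduces to a closed-form least-squares solve for the output weights. The following is a minimal sketch of a basic (non-kernel) ELM under our own choices of hidden-layer size, activation, and regularization constant C; it is not the authors' implementation and does not cover the ELM-L/ELM-R kernel variants.

```python
import numpy as np

class SimpleELM:
    """Basic ELM: random hidden layer + regularized least-squares output weights."""

    def __init__(self, n_hidden=200, C=1.0, seed=0):
        self.n_hidden, self.C, self.rng = n_hidden, C, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)          # random nonlinear projection

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # beta = (H^T H + I/C)^(-1) H^T T  -- closed-form ridge solution for the output weights
        A = H.T @ H + np.eye(self.n_hidden) / self.C
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        scores = self._hidden(X) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]

# Hypothetical usage on random stand-in features (segments x features).
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(300, 40)), rng.integers(0, 2, 300)
X_test = rng.normal(size=(60, 40))
clf = SimpleELM(n_hidden=200, C=10.0).fit(X_train, y_train)
print(clf.predict(X_test)[:10])
```

In the kernel formulation of [21], the explicit random hidden layer is replaced by a kernel matrix over the training samples, which is how linear- and RBF-kernel variants such as ELM-L and ELM-R are obtained.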

In this study, each trial was divided into thirty time segments of 0.2 s in length with a 0.1 s overlap. Each time segment was treated as a sample for training the classifier, and the final label of a test trial was determined by selecting the most frequent output among its segments (see Figure 2). We also compared the classification accuracy of our method with that of a conventional method that does not divide the trials into multiple time segments; our method showed superior classification accuracy.

In our opinion, dividing the trials increases the number of samples available for classifier training, and each 0.2 s segment is likely to retain enough information to discriminate imagined vowels from EEG. In general, EEG classification suffers from poor generalization and overfitting because of the limited number of samples available for training the classifier; the increased number of samples obtained by dividing trials could mitigate these problems. However, further analyses are required to verify these assumptions in subsequent studies.
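The segment-and-vote scheme can be summarized in a few lines of Python. The 0.2 s window, 0.1 s hop, and the statistical features (mean, variance, standard deviation, skewness) follow the text; the sampling rate, channel count, trial length, and the stand-in classifier are assumptions made for the example.

```python
import numpy as np
from collections import Counter

def segment_trial(trial, fs, win=0.2, hop=0.1):
    """Slice one EEG trial (channels x samples) into overlapping time segments."""
    w, h = int(win * fs), int(hop * fs)
    return [trial[:, s:s + w] for s in range(0, trial.shape[1] - w + 1, h)]

def features(segment):
    """Per-channel statistics used as features (mean, variance, std, skewness)."""
    m = segment.mean(axis=1)
    v = segment.var(axis=1)
    s = segment.std(axis=1)
    sk = ((segment - m[:, None]) ** 3).mean(axis=1) / (s ** 3 + 1e-12)
    return np.concatenate([m, v, s, sk])

def predict_trial(clf, trial, fs):
    """Label a whole trial by majority vote over its segment-level predictions."""
    segs = np.vstack([features(seg) for seg in segment_trial(trial, fs)])
    labels = clf.predict(segs)
    return Counter(labels.tolist()).most_common(1)[0][0]

class _MajorityDummy:
    """Stand-in for any classifier exposing .predict (e.g., the ELM sketch above)."""
    def predict(self, X):
        return np.zeros(len(X), dtype=int)

fs = 512                                        # assumed sampling rate
trial = np.random.randn(8, int(3.1 * fs))       # placeholder: 8 channels, ~30 segments
print(predict_trial(_MajorityDummy(), trial, fs))   # -> 0
```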

To reduce the dimension of the feature vector, we employed a feature selection algorithm based on the sparse regression model. In this algorithm, the regularization parameter λ in equation (1) must be selected carefully because λ determines the dimension of the optimized feature set. When the selected λ is too large, the algorithm excludes discriminative features from the optimal feature set F; when λ is too small, redundant features are not excluded from F. Therefore, the optimal value of λ was selected by cross-validation on the training session in our study.
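A hedged sketch of this selection procedure is shown below, using scikit-learn's Lasso as a stand-in for the sparse regression model of equation (1) and an off-the-shelf SVC (rather than the ELM variants) as the inner cross-validated classifier; the λ grid, data, and feature dimensions are illustrative only.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC  # stand-in downstream classifier, not the ELM used in the paper

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 120))                              # segments x candidate features
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=400) > 0).astype(int)  # synthetic labels

best_lam, best_acc, best_mask = None, -np.inf, None
for lam in np.arange(0.02, 0.20, 0.02):                      # grid over the regularization parameter
    mask = Lasso(alpha=lam).fit(X, y).coef_ != 0             # keep features with nonzero weights
    if mask.sum() == 0:
        continue                                             # lambda too large: everything pruned
    acc = cross_val_score(SVC(), X[:, mask], y, cv=5).mean() # cross-validation on training data
    if acc > best_acc:
        best_lam, best_acc, best_mask = lam, acc, mask

print(best_lam, round(best_acc, 3), int(best_mask.sum()))
```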

Table 3: Confusion matrix for all pairwise combinations and subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA. Rows give the classifier output (test positive/negative); within each row, counts are listed as condition positive / condition negative.

ELM:   test positive 2516 / 1234; test negative 1509 / 2241; sensitivity = 0.6251, specificity = 0.6449
ELM-L: test positive 2649 / 1101; test negative 1261 / 2489; sensitivity = 0.6775, specificity = 0.6933
ELM-R: test positive 2635 / 1115; test negative 1297 / 2453; sensitivity = 0.6701, specificity = 0.6875
SVM-R: test positive 3675 / 75; test negative 3525 / 225; sensitivity = 0.5104, specificity = 0.7500
LDA:   test positive 2556 / 1194; test negative 1398 / 2352; sensitivity = 0.6464, specificity = 0.6633

Figure 6: Effects of varying the regularization parameter (horizontal axis, 0–0.18) on the classification accuracy (%, vertical axis) obtained by ELM-R with sparse-regression-model-based feature selection for subject 2. The parameter value giving the highest accuracy is highlighted with a red circle.

For example, the change of classification accuracy caused by varying λ for subject 1 is illustrated in Figure 6. In the case of a and i using ELM-R, the best classification accuracy reached a plateau at λ = 0.08 and declined after 0.14. However, the optimal values of λ differ considerably among the pairwise combinations and subjects.

Furthermore, our optimized results were achieved in the gamma frequency band (30–70 Hz). We also tested other frequency ranges, namely beta (13–30 Hz), alpha (8–13 Hz), and theta (4–8 Hz); however, the classification rates of those bands were not much better than the chance level for every subject and pairwise combination of vowels. In addition, the results of our TFR and topographical analyses (Figures 3 and 4) support a relationship between gamma activity and imagined speech processing. To our knowledge, only a few studies on the EEG classification of imagined speech have examined differences between multiple frequency bands including the gamma band [33, 34]; therefore, our study might be the first to report that the gamma frequency band could provide important features for the EEG classification of imagined speech. Moreover, several ECoG studies reported good classification results for imagined speech in the gamma band [35, 36], and these findings are consistent with our results. Several studies have also suggested a role of the gamma band in speech processing from a neurophysiological perspective [37–39]; however, those studies usually used intracranial recordings and focused on the high gamma (70–150 Hz) band, so relating those results directly to our classification study is difficult. The relation between information in the low gamma frequencies as a classification feature and its neurophysiological implications will be specified in future studies.
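For concreteness, the band-limited signals referred to above can be obtained with a standard zero-phase Butterworth bandpass filter, as in the sketch below; the filter order, sampling rate, and placeholder data are assumptions, and the exact filtering method used in the study may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, lo, hi, order=4):
    """Zero-phase Butterworth bandpass filter applied along the time axis."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, data, axis=-1)

fs = 512                                      # assumed sampling rate
eeg = np.random.randn(8, 4 * fs)              # placeholder: 8 channels, 4 s of EEG
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 70)}
filtered = {name: bandpass(eeg, fs, lo, hi) for name, (lo, hi) in bands.items()}
print({name: arr.shape for name, arr in filtered.items()})
```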

Currently, communication systems based on various BCI technologies have been developed for disabled people [40]. For instance, the P300 speller is one of the most widely researched BCI technologies for decoding verbal intentions from EEG [41]. Despite many efforts toward better and faster performance, the P300 speller is still insufficient for use in normal conversation [42, 43]. Independent of the P300 component, efforts toward the extraction and analysis of EEG or ECoG induced by imagined speech have also been made [44, 45]. In this context, the high performance obtained by applying ELM and its variants has the potential to advance BCI research on silent speech communication. However, the pairwise combinations with the highest accuracies (see Table 2) differed across subjects. After the experiment, each participant reported different patterns of vowel discrimination; for example, one subject reported that he could not discriminate e from i, and another subject reported that a different pair was hard to distinguish. Although these reports did not exactly match the classification results, such discrepancies in subjective perception might be related to the process of imagining speech and hence to the classification results. In addition, we have not yet attempted multiclass classification in this study, although some attempts at multiclass classification of imagined speech have been made by others [8, 46, 47]. These issues of intersubject variability and multiclass operation should be addressed in future work to develop more practical and generalized silent speech BCI systems.

4. Conclusions

In the present study, we applied classification algorithms to EEG data of imagined speech. In particular, we compared ELM and its variants with the SVM-R and LDA algorithms and observed that ELM and its variants showed better performance than the other algorithms on our data. These results might contribute to the development of silent speech BCI systems.

Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors' Contributions

Beomjun Min and Jongin Kim equally contributed to this work.


Acknowledgments

This research was supported by the GIST Research Institute (GRI) in 2016 and by the Pioneer Research Center Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (Grant no. 2012-0009462).

References

[1] H.-J. Hwang, S. Kim, S. Choi, and C.-H. Im, "EEG-based brain-computer interfaces: a thorough literature survey," International Journal of Human-Computer Interaction, vol. 29, no. 12, pp. 814–826, 2013.
[2] U. Chaudhary, N. Birbaumer, and A. Ramos-Murguialday, "Brain–computer interfaces for communication and rehabilitation," Nature Reviews Neurology, vol. 12, no. 9, pp. 513–525, 2016.
[3] M. Hamedi, S.-H. Salleh, and A. M. Noor, "Electroencephalographic motor imagery brain connectivity analysis for BCI: a review," Neural Computation, vol. 28, no. 6, pp. 999–1041, 2016.
[4] C. Neuper, R. Scherer, S. Wriessnegger, and G. Pfurtscheller, "Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface," Clinical Neurophysiology, vol. 120, no. 2, pp. 239–247, 2009.
[5] H.-J. Hwang, K. Kwon, and C.-H. Im, "Neurofeedback-based motor imagery training for brain–computer interface (BCI)," Journal of Neuroscience Methods, vol. 179, no. 1, pp. 150–156, 2009.
[6] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, "Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks," NeuroImage, vol. 31, no. 1, pp. 153–159, 2006.
[7] M. Ahn and S. C. Jun, "Performance variation in motor imagery brain-computer interface: a brief review," Journal of Neuroscience Methods, vol. 243, pp. 103–110, 2015.
[8] S. Deng, R. Srinivasan, T. Lappas, and M. D'Zmura, "EEG classification of imagined syllable rhythm using Hilbert spectrum methods," Journal of Neural Engineering, vol. 7, no. 4, Article ID 046006, 2010.
[9] C. S. DaSalla, H. Kambara, M. Sato, and Y. Koike, "Single-trial classification of vowel speech imagery using common spatial patterns," Neural Networks, vol. 22, no. 9, pp. 1334–1339, 2009.
[10] X. Pei, D. L. Barbour, E. C. Leuthardt, and G. Schalk, "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans," Journal of Neural Engineering, vol. 8, no. 4, Article ID 046028, 2011.
[11] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 4, no. 2, pp. R1–R13, 2007.
[12] K. Brigham and B. V. K. V. Kumar, "Imagined speech classification with EEG signals for silent communication: a preliminary investigation into synthetic telepathy," in Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE '10), Chengdu, China, June 2010.
[13] J. Kim, S.-K. Lee, and B. Lee, "EEG classification in a single-trial basis for vowel speech perception using multivariate empirical mode decomposition," Journal of Neural Engineering, vol. 11, no. 3, Article ID 036010, 2014.
[14] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[15] Y. Song and J. Zhang, "Automatic recognition of epileptic EEG patterns via Extreme Learning Machine and multiresolution feature extraction," Expert Systems with Applications, vol. 40, no. 14, pp. 5477–5489, 2013.
[16] Q. Yuan, W. Zhou, S. Li, and D. Cai, "Epileptic EEG classification based on extreme learning machine and nonlinear features," Epilepsy Research, vol. 96, no. 1-2, pp. 29–38, 2011.
[17] M. N. I. Qureshi, B. Min, H. J. Jo, and B. Lee, "Multiclass classification for the differential diagnosis on the ADHD subtypes using recursive feature elimination and hierarchical extreme learning machine: structural MRI study," PLoS ONE, vol. 11, no. 8, Article ID e0160697, 2016.
[18] L. Gao, W. Cheng, J. Zhang, and J. Wang, "EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine," Review of Scientific Instruments, vol. 87, no. 8, Article ID 085110, 2016.
[19] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
[20] J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.
[21] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[22] J. Cao, Z. Lin, and G.-B. Huang, "Composite function wavelet neural networks with extreme learning machine," Neurocomputing, vol. 73, no. 7–9, pp. 1405–1416, 2010.
[23] G.-B. Huang, X. Ding, and H. Zhou, "Optimization method based extreme learning machine for classification," Neurocomputing, vol. 74, no. 1–3, pp. 155–163, 2010.
[24] P. L. Bartlett, "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.
[25] L.-C. Shi and B.-L. Lu, "EEG-based vigilance estimation using extreme learning machines," Neurocomputing, vol. 102, pp. 135–143, 2013.
[26] Y. Peng and B.-L. Lu, "Discriminative manifold extreme learning machine and applications to image and EEG signal classification," Neurocomputing, vol. 174, pp. 265–277, 2014.
[27] N.-Y. Liang, P. Saratchandran, G.-B. Huang, and N. Sundararajan, "Classification of mental tasks from EEG signals using extreme learning machine," International Journal of Neural Systems, vol. 16, no. 1, pp. 29–38, 2006.
[28] J. Kim, H. S. Shin, K. Shin, and M. Lee, "Robust algorithm for arrhythmia classification in ECG using extreme learning machine," BioMedical Engineering OnLine, vol. 8, article 31, 2009.
[29] S. Karpagachelvi, M. Arthanari, and M. Sivakumar, "Classification of electrocardiogram signals with support vector machines and extreme learning machine," Neural Computing and Applications, vol. 21, no. 6, pp. 1331–1339, 2012.
[30] W. Huang, Z. M. Tan, Z. Lin et al., "A semi-automatic approach to the segmentation of liver parenchyma from 3D CT images with Extreme Learning Machine," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 3752–3755, Buenos Aires, Argentina, September 2012.
[31] R. Barea, L. Boquete, S. Ortega, E. Lopez, and J. M. Rodríguez-Ascariz, "EOG-based eye movements codification for human computer interaction," Expert Systems with Applications, vol. 39, no. 3, pp. 2677–2683, 2012.
[32] G. Huang, G.-B. Huang, S. Song, and K. You, "Trends in extreme learning machines: a review," Neural Networks, vol. 61, pp. 32–48, 2015.
[33] B. M. Idrees and O. Farooq, "Vowel classification using wavelet decomposition during speech imagery," in Proceedings of the International Conference on Signal Processing and Integrated Networks (SPIN '16), pp. 636–640, Noida, India, February 2016.
[34] A. Riaz, S. Akhtar, S. Iftikhar, A. A. Khan, and A. Salman, "Inter comparison of classification techniques for vowel speech imagery using EEG sensors," in Proceedings of the 2nd International Conference on Systems and Informatics (ICSAI '14), pp. 712–717, IEEE, Shanghai, China, November 2014.
[35] S. Martin, P. Brunner, I. Iturrate et al., "Word pair classification during imagined speech using direct brain recordings," Scientific Reports, vol. 6, Article ID 25803, 2016.
[36] X. Pei, J. Hill, and G. Schalk, "Silent communication: toward using brain signals," IEEE Pulse, vol. 3, no. 1, pp. 43–46, 2012.
[37] A.-L. Giraud and D. Poeppel, "Cortical oscillations and speech processing: emerging computational principles and operations," Nature Neuroscience, vol. 15, no. 4, pp. 511–517, 2012.
[38] V. L. Towle, H.-A. Yoon, M. Castelle et al., "ECoG γ activity during a language task: differentiating expressive and receptive speech areas," Brain, vol. 131, no. 8, pp. 2013–2027, 2008.
[39] S. Martin, P. Brunner, C. Holdgraf et al., "Decoding spectrotemporal features of overt and covert speech from the human cortex," Frontiers in Neuroengineering, vol. 7, article 14, 2014.
[40] J. S. Brumberg, A. Nieto-Castanon, P. R. Kennedy, and F. H. Guenther, "Brain-computer interfaces for speech communication," Speech Communication, vol. 52, no. 4, pp. 367–379, 2010.
[41] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.
[42] D. J. Krusienski, E. W. Sellers, F. Cabestaing et al., "A comparison of classification techniques for the P300 Speller," Journal of Neural Engineering, vol. 3, no. 4, pp. 299–305, 2006.
[43] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, and D. Hu, "A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm," Journal of Neural Engineering, vol. 10, no. 2, Article ID 026012, 2013.
[44] C. Herff and T. Schultz, "Automatic speech recognition from neural signals: a focused review," Frontiers in Neuroscience, vol. 10, article 429, 2016.
[45] S. Chakrabarti, H. M. Sandberg, J. S. Brumberg, and D. J. Krusienski, "Progress in speech decoding from the electrocorticogram," Biomedical Engineering Letters, vol. 5, no. 1, pp. 10–21, 2015.
[46] K. Mohanchandra and S. Saha, "A communication paradigm using subvocalized speech: translating brain signals into speech," Augmented Human Research, vol. 1, no. 1, article 3, 2016.
[47] O. Ossmy, I. Fried, and R. Mukamel, "Decoding speech perception from single cell activity in humans," NeuroImage, vol. 117, pp. 151–159, 2015.

Submit your manuscripts athttpwwwhindawicom

Stem CellsInternational

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MEDIATORSINFLAMMATION

of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Behavioural Neurology

EndocrinologyInternational Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Disease Markers

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

BioMed Research International

OncologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Oxidative Medicine and Cellular Longevity

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

PPAR Research

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

ObesityJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational and Mathematical Methods in Medicine

OphthalmologyJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Diabetes ResearchJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Research and TreatmentAIDS

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Gastroenterology Research and Practice

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Parkinsonrsquos Disease

Evidence-Based Complementary and Alternative Medicine

Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom

BioMed Research International 7

S4S5S2

S3

S1SV

M-R

LDA

ELM

ELM

-LEL

M-R

SVM

-RLD

AEL

MEL

M-L

ELM

-RSV

M-R

LDA

ELM

ELM

-LEL

M-R

SVM

-RLD

AEL

MEL

M-L

ELM

-RSV

M-R

LDA

ELM

ELM

-LEL

M-R

4045505560657075808590

Clas

sifica

tion

accu

raci

es (

)

Figure 5 Averaged classification accuracies over all pairwise classification using a support vector machine with a radial basis function kernel(SVM-R) extreme learning machine (ELM) extreme learning machine with a linear kernel (ELM-L) and extreme learning machine with aradial basis function kernel (ELM-R) for all five subjects

Table 2 Classification accuracies in employing ELM-R for the pairwise combinations which shows the top five classification performancesfor each subject Classification accuracies are expressed as mean and associated standard deviation

Subjects

S1 8647 plusmn 107 8121 plusmn 103 8001 plusmn 373 7335 plusmn 317 7244 plusmn 171(a versus i) (a versus mute) (a versus u) (a versus e) (i versus o)

S2 9902 plusmn 076 9930 plusmn 014 9822 plusmn 022 9514 plusmn 103 9301 plusmn 073(a versus i) (i versus mute) (i versus o) (a versus mute) (o versus mute)

S3 9208 plusmn 108 9019 plusmn 063 8915 plusmn 137 8727 plusmn 071 7038 plusmn 138(e versus mute) (i versus mute) (u versus mute) (o versus mute) (a versus i)

S4 9333 plusmn 031 9227 plusmn 103 9224 plusmn 213 9112 plusmn 054 9005 plusmn 183(i versus mute) (u versus mute) (a versus mute) (e versus mute) (o versus mute)

S5 9632 plusmn 231 9401 plusmn 017 9229 plusmn 114 9007 plusmn 058 8806 plusmn 123(e versus mute) (i versus mute) (o versus mute) (a versus mute) (u versus mute)

73 of the average result overall [9] whereas Huang et alreported that ELM tends to have a much higher learningspeed and comparable generalization performance in binaryclassification [21] In another study Huang argued that ELMhas fewer optimization constraints owing to its special sepa-rability feature and results in simpler implementation fasterlearning and better generalization performance [23] Thusour results showed consistent characterswith othersrsquo previousresearch using ELM and even similar or better classificationresults for imagined speech compared to other research usingdifferent algorithms Recently ELM algorithms have beenextensively applied in many other medical and biomedicalstudies [28ndash31] More detailed information about ELM canbe found in a recent review [32]

In this study each trial was divided into the thirty timesegments of 02 s in length and a 01 s overlap Each timesegment was considered as a sample for training the classifierand the final label of the test sample was determined byselecting the most frequent output (see Figure 2) We alsocompared the classification accuracy of our method withthose of a conventional method that does not divide the trialsinto multiple time segments As a result our method showedsuperior performance in terms of classification accuracy to

the conventional method In our opinion by dividing thetrials some effects such as increasing number of trials forclassifier training might occur and each time segment witha 02 s length is likely to retain enough information fordiscrimination of EEG vowel imagination Generally EEGclassification has problems in terms of poor generalizationperformance and the overfitting phenomenon because ofthe deficiency of the number of samples for the classifierTherefore an increased number of samples by dividingtrials could mitigate the aforementioned problems Howeverfurther analyses are required to prove our assumptions insubsequent studies

To reduce the dimension of the feature vector weemployed a feature selection algorithm based on the sparseregression model In the sparse-regression-model-based fea-ture selection algorithm the regularization parameter 120582 ofequation (1) must be carefully selected because 120582 determinesthe dimension of the optimized feature parameter Forexample when the selected 120582 is too large the algorithmexcludes discriminative features from an optimal feature setF However when users set 120582 too small redundant featuresare not excluded from an optimal feature set F Thereforethe optimal value for 120582 was selected by cross-validation on

8 BioMed Research International

Table3Con

fusio

nmatrix

fora

llpairw

isecombinatio

nsandsubjectsusingEL

ME

LM-LE

LM-RSVM-Rand

LDA

Classifi

ers

ELM

ELM-L

ELM-R

SVM-R

LDA

Con

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

nnegativ

eCon

ditio

npo

sitive

Con

ditio

npo

sitive

Testpo

sitive

2516

1234

2649

1101

2635

1115

3675

752556

1194

Testnegativ

e1509

2241

1261

2489

1297

2453

3525

225

1398

2352

Sensitivity=

06251

Specificity=

064

49Sensitivity=

06775

Specificity=

06933

Sensitivity=

06701

Specificity=

06875

Sensitivity=

05104

Specificity=

07500

Sensitivity=

064

64Specificity=

06633

BioMed Research International 9

40455055606570758085

Clas

sifica

tion

accu

racy

()

002 004 016008 01 012 014 0180 006Regularization parameter

Figure 6 Effects of varying the regularization parameter on the classification accuracies obtained by ELM-R with sparse-regression-model-based feature selection for subject 2 The parameter value giving the highest accuracy is highlighted with a red circle

the training session in our study For example the changeof classification accuracy caused by varying 120582 for subject 1 isillustrated in Figure 6 In the case of a and i using ELM-R the best classification accuracy reached a plateau at 120582 =008 and declined after 014 However the optimal values of 120582are totally different among the pairwise combinations and allsubjects

Furthermore our optimized results were achieved in thegamma frequency band (30ndash70Hz) We also tested the otherfrequency ranges such as beta (13ndash30Hz) alpha (8ndash13Hz)and theta (4ndash8Hz) however the classification rates of thosebands were not much better than the chance level in everysubject and pairwise combination of syllables In additionthe results of our TFR and topographical analysis (Figures3 and 4) could support some relationship between gammaactivities and imagined speech processing As far as we knowin the EEG classification of imagined speech there have beenonly a few studies that examined the differences betweenmultiple frequency bands including gamma frequency [3334] Therefore our study might be the first report that thegamma frequency band could play an important role asfeatures for the EEG classification of imagined speech More-over several studies using ECoG reported quite good resultsin the gamma frequency for imagined speech classification[35 36] and these findings are consistent with our resultsHowever several studies have been conducted that suggestedthe role of gamma frequency band for speech processingin neurophysiological perspectives [37ndash39] However thosestudies usually used intracranial recordings and focused onthe analysis for the high gamma (70ndash150Hz) frequency bandThus suggesting a relevance between those results and ourclassification study is not easy However a certain relationbetween some information in low gamma frequencies as afeature for classification and its implication from a neuro-physiological view will be specified in future studies

Currently communication systems with various BCItechnologies have been developed for disabled people [40]For instance the P300 speller is one of the most widelyresearched BCI technologies to decode verbal thoughts fromEEG [41] Despite many efforts toward better and fasterperformance the P300 speller is still insufficient for usein normal conversation [42 43] whereas independent of

the P300 component efforts toward extraction and analysisof EEG or ECoG induced by imagined speech have beenconducted [44 45] In this context our results of highperformance from the application of ELM and its variantshave potential to advance BCI research using silent speechcommunication However the pairwise combinations withthe highest accuracies (see Table 2) differed in each subjectAfter experiment each participant reported different patternsof vowel discrimination For example one subject reportedthat he could not discriminate e from i and the othersubject reported the other pair was not easy to distinguishAlthough those reports were not exactly matched to theresults of classification these discrepancies of subjectivesensory perception might be related to process of imaginingspeech and classification results Besides we have not triedmulticlass classification in this study yet some attemptsin multiclass classification of imagined speech have beenperformed by others [8 46 47] These issues related tointersubject variability and multiclass systems should beconsidered for our future study to developmore practical andgeneralized BCI systems using silent speech

4 Conclusions

In the present study we used classification algorithms forEEG data of imagined speech Particularly we comparedELM and its variants to SVM-R and LDA algorithms andobserved that ELM and its variants showed better perfor-mance than other algorithms with our data These resultsmight lead to the development of silent speech BCI systems

Competing Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Authorsrsquo Contributions

Beomjun Min and Jongin Kim equally contributed to thiswork

10 BioMed Research International

Acknowledgments

This research was supported by the GIST Research Institute(GRI) in 2016 and the Pioneer Research Center Programthrough the National Research Foundation of Korea fundedby theMinistry of Science ICTamp Future Planning (Grant no2012-0009462)

References

[1] H-J Hwang S Kim S Choi and C-H Im ldquoEEG-based brain-computer interfaces a thorough literature surveyrdquo InternationalJournal of Human-Computer Interaction vol 29 no 12 pp 814ndash826 2013

[2] U Chaudhary N Birbaumer and A Ramos-MurguialdayldquoBrainndashcomputer interfaces for communication and rehabilita-tionrdquoNature Reviews Neurology vol 12 no 9 pp 513ndash525 2016

[3] M Hamedi S-H Salleh and A M Noor ldquoElectroencephalo-graphic motor imagery brain connectivity analysis for BCI areviewrdquo Neural Computation vol 28 no 6 pp 999ndash1041 2016

[4] C Neuper R Scherer S Wriessnegger and G PfurtschellerldquoMotor imagery and action observation modulation of sen-sorimotor brain rhythms during mental control of a brain-computer interfacerdquoClinical Neurophysiology vol 120 no 2 pp239ndash247 2009

[5] H-J Hwang K Kwon and C-H Im ldquoNeurofeedback-basedmotor imagery training for brainndashcomputer interface (BCI)rdquoJournal of Neuroscience Methods vol 179 no 1 pp 150ndash1562009

[6] G Pfurtscheller C Brunner A Schlogl and F H Lopes daSilva ldquoMu rhythm (de)synchronization and EEG single-trialclassification of different motor imagery tasksrdquo NeuroImagevol 31 no 1 pp 153ndash159 2006

[7] M Ahn and S C Jun ldquoPerformance variation inmotor imagerybrain-computer interface a brief reviewrdquo Journal of Neuro-science Methods vol 243 pp 103ndash110 2015

[8] S Deng R Srinivasan T Lappas and M DrsquoZmura ldquoEEG clas-sification of imagined syllable rhythm using Hilbert spectrummethodsrdquo Journal of Neural Engineering vol 7 no 4 Article ID046006 2010

[9] C S DaSalla H Kambara M Sato and Y Koike ldquoSingle-trialclassification of vowel speech imagery using common spatialpatternsrdquo Neural Networks vol 22 no 9 pp 1334ndash1339 2009

[10] X Pei D L Barbour E C Leuthardt and G Schalk ldquoDecodingvowels and consonants in spoken and imagined words usingelectrocorticographic signals in humansrdquo Journal of NeuralEngineering vol 8 no 4 Article ID 046028 2011

[11] F Lotte M Congedo A Lecuyer F Lamarche and B ArnaldildquoA review of classification algorithms for EEG-based brain-computer interfacesrdquo Journal of Neural Engineering vol 4 no2 article R01 pp R1ndashR13 2007

[12] K Brigham and B V K V Kumar ldquoImagined speech classifica-tion with EEG signals for silent communication a preliminaryinvestigation into synthetic telepathyrdquo in Proceedings of the4th International Conference on Bioinformatics and BiomedicalEngineering (iCBBE rsquo10) Chengdu China June 2010

[13] J Kim S-K Lee and B Lee ldquoEEG classification in a single-trialbasis for vowel speech perception using multivariate empiricalmode decompositionrdquo Journal of Neural Engineering vol 11 no3 Article ID 036010 2014

[14] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1-3 pp 489ndash501 2006

[15] Y Song and J Zhang ldquoAutomatic recognition of epileptic EEGpatterns via Extreme Learning Machine and multiresolutionfeature extractionrdquoExpert Systemswith Applications vol 40 no14 pp 5477ndash5489 2013

[16] Q YuanWZhou S Li andDCai ldquoEpileptic EEG classificationbased on extreme learning machine and nonlinear featuresrdquoEpilepsy Research vol 96 no 1-2 pp 29ndash38 2011

[17] M N I Qureshi B Min H J Jo and B Lee ldquoMulticlass clas-sification for the differential diagnosis on the ADHD subtypesusing recursive feature elimination and hierarchical extremelearning machine structural MRI studyrdquo PLoS ONE vol 11 no8 Article ID e0160697 2016

[18] L Gao W Cheng J Zhang and J Wang ldquoEEG classificationfor motor imagery and resting state in BCI applications usingmulti-class Adaboost extreme learning machinerdquo Review ofScientific Instruments vol 87 no 8 Article ID 085110 2016

[19] R Tibshirani ldquoRegression shrinkage and selection via the LassoRobert Tibshiranirdquo Journal of the Royal Statistical Society SeriesB (Statistical Methodology) vol 58 no 1 pp 267ndash288 2007

[20] J Friedman T Hastie and R Tibshirani ldquoRegularization pathsfor generalized linear models via coordinate descentrdquo Journal ofStatistical Software vol 33 no 1 pp 1ndash22 2010

[21] G-B Huang H Zhou X Ding and R Zhang ldquoExtremelearning machine for regression and multiclass classificationrdquoIEEE Transactions on Systems Man and Cybernetics Part BCybernetics vol 42 no 2 pp 513ndash529 2012

[22] J Cao Z Lin and G-B Huang ldquoComposite function waveletneural networks with extreme learning machinerdquo Neurocom-puting vol 73 no 7-9 pp 1405ndash1416 2010

[23] G-B Huang X Ding and H Zhou ldquoOptimization methodbased extreme learning machine for classificationrdquo Neurocom-puting vol 74 no 1-3 pp 155ndash163 2010

[24] P L Bartlett ldquoThe sample complexity of pattern classificationwith neural networks the size of the weights is more importantthan the size of the networkrdquo Institute of Electrical and Electron-ics Engineers Transactions on InformationTheory vol 44 no 2pp 525ndash536 1998

[25] L-C Shi and B-L Lu ldquoEEG-based vigilance estimation usingextreme learning machinesrdquo Neurocomputing vol 102 pp 135ndash143 2013

[26] Y Peng and B-L Lu ldquoDiscriminative manifold extreme learn-ing machine and applications to image and EEG signal classifi-cationrdquo Neurocomputing vol 174 pp 265ndash277 2014

[27] N-Y Liang P Saratchandran G-B Huang and N Sundarara-jan ldquoClassification of mental tasks from EEG signals usingextreme learning machinerdquo International Journal of NeuralSystems vol 16 no 1 pp 29ndash38 2006

[28] J Kim H S Shin K Shin and M Lee ldquoRobust algorithmfor arrhythmia classification in ECG using extreme learningmachinerdquo Biomedical Engineering OnLine vol 8 article 312009

[29] S Karpagachelvi M Arthanari and M Sivakumar ldquoClas-sification of electrocardiogram signals with support vectormachines and extreme learning machinerdquo Neural Computingand Applications vol 21 no 6 pp 1331ndash1339 2012

[30] W Huang Z M Tan Z Lin et al ldquoA semi-automatic approachto the segmentation of liver parenchyma from 3D CT imageswith Extreme Learning Machinerdquo in Proceedings of the IEEE

BioMed Research International 11

Annual International Conference of the IEEE Engineering inMedicine and Biology Society (EMBCrsquo 12) pp 3752ndash3755 Bue-nos Aires Argentina September 2012

[31] R Barea L Boquete S Ortega E Lopez and J M Rodrıguez-Ascariz ldquoEOG-based eye movements codification for humancomputer interactionrdquo Expert Systems with Applications vol 39no 3 pp 2677ndash2683 2012

[32] GHuangG-BHuang S Song andKYou ldquoTrends in extremelearning machines a reviewrdquo Neural Networks vol 61 pp 32ndash48 2015

[33] B M Idrees and O Farooq ldquoVowel classification using waveletdecomposition during speech imageryrdquo in Proceedings of theInternational Conference on Signal Processing and IntegratedNetworks (SPIN rsquo16) pp 636ndash640 Noida India Feburary 2016

[34] A Riaz S Akhtar S Iftikhar A A Khan and A SalmanldquoInter comparison of classification techniques for vowel speechimagery using EEG sensorsrdquo in Proceedings of the 2nd Interna-tional Conference on Systems and Informatics (ICSAI rsquo14) pp712ndash717 IEEE Shanghai China November 2014

[35] S Martin P Brunner I Iturrate et al ldquoWord pair classificationduring imagined speech using direct brain recordingsrdquo Scien-tific Reports vol 6 Article ID 25803 2016

[36] X Pei J Hill and G Schalk ldquoSilent communication towardusing brain signalsrdquo IEEE Pulse vol 3 no 1 pp 43ndash46 2012

[37] A-L Giraud and D Poeppel ldquoCortical oscillations and speechprocessing emerging computational principles and operationsrdquoNature Neuroscience vol 15 no 4 pp 511ndash517 2012

[38] V L Towle H-A Yoon M Castelle et al ldquoECoG 120574 activityduring a language task differentiating expressive and receptivespeech areasrdquo Brain vol 131 no 8 pp 2013ndash2027 2008

[39] SMartin P Brunner C Holdgraf et al ldquoDecoding spectrotem-poral features of overt and covert speech from the humancortexrdquo Frontiers in Neuroengineering vol 7 article 14 2014

[40] J S Brumberg A Nieto-Castanon P R Kennedy and F HGuenther ldquoBrain-computer interfaces for speech communica-tionrdquo Speech Communication vol 52 no 4 pp 367ndash379 2010

[41] L A Farwell and E Donchin ldquoTalking off the top of your headtoward a mental prosthesis utilizing event-related brain poten-tialsrdquo Electroencephalography and Clinical Neurophysiology vol70 no 6 pp 510ndash523 1988

[42] D J Krusienski EW Sellers F Cabestaing et al ldquoA comparisonof classification techniques for the P300 Spellerrdquo Journal ofNeural Engineering vol 3 no 4 article 007 pp 299ndash305 2006

[43] E Yin Z Zhou J Jiang F Chen Y Liu and D Hu ldquoA novelhybrid BCI speller based on the incorporation of SSVEP intothe P300 paradigmrdquo Journal of Neural Engineering vol 10 no2 Article ID 026012 2013

[44] C Herff and T Schultz ldquoAutomatic speech recognition fromneural signals a focused reviewrdquo Frontiers in Neuroscience vol10 article 429 2016

[45] S Chakrabarti H M Sandberg J S Brumberg and D JKrusienski ldquoProgress in speech decoding from the electrocor-ticogramrdquoBiomedical Engineering Letters vol 5 no 1 pp 10ndash212015

[46] K Mohanchandra and S Saha ldquoA communication paradigmusing subvocalized speech translating brain signals intospeechrdquoAugmentedHumanResearch vol 1 no 1 article 3 2016

[47] O Ossmy I Fried and R Mukamel ldquoDecoding speech percep-tion from single cell activity in humansrdquo NeuroImage vol 117pp 151ndash159 2015



Table 3: Confusion matrix for all pairwise combinations and subjects using ELM, ELM-L, ELM-R, SVM-R, and LDA (TP = test positive and condition positive, FP = test positive and condition negative, FN = test negative and condition positive, TN = test negative and condition negative).

Classifier    TP      FP      FN      TN      Sensitivity    Specificity
ELM           2516    1234    1509    2241    0.6251         0.6449
ELM-L         2649    1101    1261    2489    0.6775         0.6933
ELM-R         2635    1115    1297    2453    0.6701         0.6875
SVM-R         3675    75      3525    225     0.5104         0.7500
LDA           2556    1194    1398    2352    0.6464         0.6633
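The sensitivity and specificity values in Table 3 follow directly from the confusion-matrix counts. As a minimal worked example (not the authors' code; the function name is ours), the following snippet reproduces the ELM column:

```python
# Minimal sketch: sensitivity and specificity from 2x2 confusion-matrix counts.
# The counts below are the ELM column of Table 3.
def sensitivity_specificity(tp, fp, fn, tn):
    """tp/fp/fn/tn = test+/cond+, test+/cond-, test-/cond+, test-/cond-."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=2516, fp=1234, fn=1509, tn=2241)
print(f"ELM: sensitivity = {sens:.4f}, specificity = {spec:.4f}")  # 0.6251, 0.6449
```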


Figure 6: Effects of varying the regularization parameter on the classification accuracies obtained by ELM-R with sparse-regression-model-based feature selection for subject 2. The parameter value giving the highest accuracy is highlighted with a red circle. (Plot: classification accuracy (%), 40 to 85, versus regularization parameter, 0 to 0.18.)

the training session in our study. For example, the change of classification accuracy caused by varying λ for subject 1 is illustrated in Figure 6. In the case of /a/ and /i/ using ELM-R, the best classification accuracy reached a plateau at λ = 0.08 and declined after 0.14. However, the optimal values of λ differed considerably among the pairwise combinations and subjects.
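To make such a λ sweep concrete, the sketch below tunes the regularization parameter of a kernel (RBF) ELM on a held-out split and reports validation accuracy for each value, mirroring the curve in Figure 6. The grid, the synthetic data, and all function names are illustrative assumptions, not the paper's implementation, and the sparse-regression feature selection step is omitted here.

```python
# Hedged sketch: grid search over the regularization parameter (lambda) of a
# kernel ELM, solving alpha = (K + lambda*I)^{-1} T with T a +1/-1 target
# matrix. Data, grid, and names are illustrative, not the paper's pipeline.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances -> RBF (Gaussian) kernel matrix.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def train_kernel_elm(X, y, classes, lam, gamma=1.0):
    T = np.where(y[:, None] == classes[None, :], 1.0, -1.0)  # one-vs-rest targets
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), T)      # output weights

def predict_kernel_elm(X_train, alpha, X_test, classes, gamma=1.0):
    scores = rbf_kernel(X_test, X_train, gamma) @ alpha
    return classes[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(120, 8)), rng.integers(0, 2, 120)  # dummy features/labels
X_va, y_va = rng.normal(size=(40, 8)), rng.integers(0, 2, 40)
classes = np.unique(y_tr)
for lam in np.arange(0.02, 0.20, 0.02):                          # grid spanning Figure 6's x-axis
    alpha = train_kernel_elm(X_tr, y_tr, classes, lam)
    acc = np.mean(predict_kernel_elm(X_tr, alpha, X_va, classes) == y_va)
    print(f"lambda = {lam:.2f}  validation accuracy = {acc:.3f}")
```

In practice the grid search would be repeated per subject and per vowel pair, which is why the optimal λ can differ across them as noted above.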

Furthermore, our optimized results were achieved in the gamma frequency band (30–70 Hz). We also tested other frequency ranges, such as beta (13–30 Hz), alpha (8–13 Hz), and theta (4–8 Hz); however, the classification rates for those bands were not much better than chance level in any subject or pairwise combination of vowels. In addition, the results of our TFR and topographical analyses (Figures 3 and 4) are consistent with a relationship between gamma activity and imagined speech processing. As far as we know, only a few studies on the EEG classification of imagined speech have examined differences between multiple frequency bands including the gamma band [33, 34]. Therefore, our study might be the first to report that the gamma frequency band can provide useful features for the EEG classification of imagined speech. Moreover, several ECoG studies reported good results in the gamma band for imagined speech classification [35, 36], and those findings are consistent with ours. Several studies have also suggested a neurophysiological role of the gamma band in speech processing [37–39]; however, those studies usually used intracranial recordings and focused on the high-gamma (70–150 Hz) band, so relating their results directly to our classification study is not straightforward. The relation between information in low gamma frequencies as a classification feature and its neurophysiological implications will be specified in future studies.
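As a point of reference, band-limited EEG such as the theta, alpha, beta, and gamma signals compared above can be obtained with a standard zero-phase band-pass filter. The sketch below is a generic preprocessing step under assumed settings (sampling rate, filter order, dummy data), not the authors' pipeline.

```python
# Hedged sketch: split multichannel EEG into the frequency bands discussed here
# using a zero-phase Butterworth band-pass filter (assumed settings).
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 70)}

def bandpass(eeg, low, high, fs, order=4):
    """Zero-phase band-pass; eeg has shape (channels, samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

fs = 500                                   # assumed sampling rate (Hz)
eeg = np.random.randn(30, 2 * fs)          # 30 channels, 2 s of dummy data
band_signals = {name: bandpass(eeg, lo, hi, fs) for name, (lo, hi) in BANDS.items()}
print({name: x.shape for name, x in band_signals.items()})
```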

Currently, communication systems based on various BCI technologies have been developed for disabled people [40]. For instance, the P300 speller is one of the most widely researched BCI technologies for decoding verbal intentions from EEG [41]. Despite many efforts toward better and faster performance, the P300 speller is still insufficient for normal conversation [42, 43], whereas, independent of the P300 component, efforts toward the extraction and analysis of EEG or ECoG induced by imagined speech have been conducted [44, 45]. In this context, our results showing high performance with ELM and its variants have the potential to advance BCI research on silent speech communication. However, the pairwise combinations with the highest accuracies (see Table 2) differed across subjects. After the experiment, each participant reported different patterns of vowel discrimination; for example, one subject reported that he could not discriminate /e/ from /i/, whereas another subject reported that a different pair was difficult to distinguish. Although these reports did not exactly match the classification results, such discrepancies in subjective perception might be related to the process of imagining speech and thus to the classification results. In addition, we have not yet attempted multiclass classification in this study, although some attempts at multiclass classification of imagined speech have been made by others [8, 46, 47]. These issues of intersubject variability and multiclass systems should be addressed in future work to develop more practical and generalized silent speech BCI systems.

4. Conclusions

In the present study, we applied classification algorithms to EEG data of imagined speech. In particular, we compared ELM and its variants with the SVM-R and LDA algorithms and observed that ELM and its variants showed better performance than the other algorithms on our data. These results might contribute to the development of silent speech BCI systems.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors' Contributions

Beomjun Min and Jongin Kim contributed equally to this work.


Acknowledgments

This research was supported by the GIST Research Institute (GRI) in 2016 and by the Pioneer Research Center Program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (Grant no. 2012-0009462).

References

[1] H.-J. Hwang, S. Kim, S. Choi, and C.-H. Im, "EEG-based brain-computer interfaces: a thorough literature survey," International Journal of Human-Computer Interaction, vol. 29, no. 12, pp. 814–826, 2013.
[2] U. Chaudhary, N. Birbaumer, and A. Ramos-Murguialday, "Brain–computer interfaces for communication and rehabilitation," Nature Reviews Neurology, vol. 12, no. 9, pp. 513–525, 2016.
[3] M. Hamedi, S.-H. Salleh, and A. M. Noor, "Electroencephalographic motor imagery brain connectivity analysis for BCI: a review," Neural Computation, vol. 28, no. 6, pp. 999–1041, 2016.
[4] C. Neuper, R. Scherer, S. Wriessnegger, and G. Pfurtscheller, "Motor imagery and action observation: modulation of sensorimotor brain rhythms during mental control of a brain-computer interface," Clinical Neurophysiology, vol. 120, no. 2, pp. 239–247, 2009.
[5] H.-J. Hwang, K. Kwon, and C.-H. Im, "Neurofeedback-based motor imagery training for brain–computer interface (BCI)," Journal of Neuroscience Methods, vol. 179, no. 1, pp. 150–156, 2009.
[6] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, "Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks," NeuroImage, vol. 31, no. 1, pp. 153–159, 2006.
[7] M. Ahn and S. C. Jun, "Performance variation in motor imagery brain-computer interface: a brief review," Journal of Neuroscience Methods, vol. 243, pp. 103–110, 2015.
[8] S. Deng, R. Srinivasan, T. Lappas, and M. D'Zmura, "EEG classification of imagined syllable rhythm using Hilbert spectrum methods," Journal of Neural Engineering, vol. 7, no. 4, Article ID 046006, 2010.
[9] C. S. DaSalla, H. Kambara, M. Sato, and Y. Koike, "Single-trial classification of vowel speech imagery using common spatial patterns," Neural Networks, vol. 22, no. 9, pp. 1334–1339, 2009.
[10] X. Pei, D. L. Barbour, E. C. Leuthardt, and G. Schalk, "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans," Journal of Neural Engineering, vol. 8, no. 4, Article ID 046028, 2011.
[11] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 4, no. 2, pp. R1–R13, 2007.
[12] K. Brigham and B. V. K. V. Kumar, "Imagined speech classification with EEG signals for silent communication: a preliminary investigation into synthetic telepathy," in Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE '10), Chengdu, China, June 2010.
[13] J. Kim, S.-K. Lee, and B. Lee, "EEG classification in a single-trial basis for vowel speech perception using multivariate empirical mode decomposition," Journal of Neural Engineering, vol. 11, no. 3, Article ID 036010, 2014.
[14] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[15] Y. Song and J. Zhang, "Automatic recognition of epileptic EEG patterns via extreme learning machine and multiresolution feature extraction," Expert Systems with Applications, vol. 40, no. 14, pp. 5477–5489, 2013.
[16] Q. Yuan, W. Zhou, S. Li, and D. Cai, "Epileptic EEG classification based on extreme learning machine and nonlinear features," Epilepsy Research, vol. 96, no. 1-2, pp. 29–38, 2011.
[17] M. N. I. Qureshi, B. Min, H. J. Jo, and B. Lee, "Multiclass classification for the differential diagnosis on the ADHD subtypes using recursive feature elimination and hierarchical extreme learning machine: structural MRI study," PLoS ONE, vol. 11, no. 8, Article ID e0160697, 2016.
[18] L. Gao, W. Cheng, J. Zhang, and J. Wang, "EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine," Review of Scientific Instruments, vol. 87, no. 8, Article ID 085110, 2016.
[19] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society, Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996.
[20] J. Friedman, T. Hastie, and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," Journal of Statistical Software, vol. 33, no. 1, pp. 1–22, 2010.
[21] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[22] J. Cao, Z. Lin, and G.-B. Huang, "Composite function wavelet neural networks with extreme learning machine," Neurocomputing, vol. 73, no. 7–9, pp. 1405–1416, 2010.
[23] G.-B. Huang, X. Ding, and H. Zhou, "Optimization method based extreme learning machine for classification," Neurocomputing, vol. 74, no. 1–3, pp. 155–163, 2010.
[24] P. L. Bartlett, "The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network," IEEE Transactions on Information Theory, vol. 44, no. 2, pp. 525–536, 1998.
[25] L.-C. Shi and B.-L. Lu, "EEG-based vigilance estimation using extreme learning machines," Neurocomputing, vol. 102, pp. 135–143, 2013.
[26] Y. Peng and B.-L. Lu, "Discriminative manifold extreme learning machine and applications to image and EEG signal classification," Neurocomputing, vol. 174, pp. 265–277, 2014.
[27] N.-Y. Liang, P. Saratchandran, G.-B. Huang, and N. Sundararajan, "Classification of mental tasks from EEG signals using extreme learning machine," International Journal of Neural Systems, vol. 16, no. 1, pp. 29–38, 2006.
[28] J. Kim, H. S. Shin, K. Shin, and M. Lee, "Robust algorithm for arrhythmia classification in ECG using extreme learning machine," BioMedical Engineering OnLine, vol. 8, article 31, 2009.
[29] S. Karpagachelvi, M. Arthanari, and M. Sivakumar, "Classification of electrocardiogram signals with support vector machines and extreme learning machine," Neural Computing and Applications, vol. 21, no. 6, pp. 1331–1339, 2012.
[30] W. Huang, Z. M. Tan, Z. Lin et al., "A semi-automatic approach to the segmentation of liver parenchyma from 3D CT images with Extreme Learning Machine," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 3752–3755, Buenos Aires, Argentina, September 2012.
[31] R. Barea, L. Boquete, S. Ortega, E. López, and J. M. Rodríguez-Ascariz, "EOG-based eye movements codification for human computer interaction," Expert Systems with Applications, vol. 39, no. 3, pp. 2677–2683, 2012.
[32] G. Huang, G.-B. Huang, S. Song, and K. You, "Trends in extreme learning machines: a review," Neural Networks, vol. 61, pp. 32–48, 2015.
[33] B. M. Idrees and O. Farooq, "Vowel classification using wavelet decomposition during speech imagery," in Proceedings of the International Conference on Signal Processing and Integrated Networks (SPIN '16), pp. 636–640, Noida, India, February 2016.
[34] A. Riaz, S. Akhtar, S. Iftikhar, A. A. Khan, and A. Salman, "Inter comparison of classification techniques for vowel speech imagery using EEG sensors," in Proceedings of the 2nd International Conference on Systems and Informatics (ICSAI '14), pp. 712–717, IEEE, Shanghai, China, November 2014.
[35] S. Martin, P. Brunner, I. Iturrate et al., "Word pair classification during imagined speech using direct brain recordings," Scientific Reports, vol. 6, Article ID 25803, 2016.
[36] X. Pei, J. Hill, and G. Schalk, "Silent communication: toward using brain signals," IEEE Pulse, vol. 3, no. 1, pp. 43–46, 2012.
[37] A.-L. Giraud and D. Poeppel, "Cortical oscillations and speech processing: emerging computational principles and operations," Nature Neuroscience, vol. 15, no. 4, pp. 511–517, 2012.
[38] V. L. Towle, H.-A. Yoon, M. Castelle et al., "ECoG γ activity during a language task: differentiating expressive and receptive speech areas," Brain, vol. 131, no. 8, pp. 2013–2027, 2008.
[39] S. Martin, P. Brunner, C. Holdgraf et al., "Decoding spectrotemporal features of overt and covert speech from the human cortex," Frontiers in Neuroengineering, vol. 7, article 14, 2014.
[40] J. S. Brumberg, A. Nieto-Castanon, P. R. Kennedy, and F. H. Guenther, "Brain-computer interfaces for speech communication," Speech Communication, vol. 52, no. 4, pp. 367–379, 2010.
[41] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.
[42] D. J. Krusienski, E. W. Sellers, F. Cabestaing et al., "A comparison of classification techniques for the P300 Speller," Journal of Neural Engineering, vol. 3, no. 4, pp. 299–305, 2006.
[43] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, and D. Hu, "A novel hybrid BCI speller based on the incorporation of SSVEP into the P300 paradigm," Journal of Neural Engineering, vol. 10, no. 2, Article ID 026012, 2013.
[44] C. Herff and T. Schultz, "Automatic speech recognition from neural signals: a focused review," Frontiers in Neuroscience, vol. 10, article 429, 2016.
[45] S. Chakrabarti, H. M. Sandberg, J. S. Brumberg, and D. J. Krusienski, "Progress in speech decoding from the electrocorticogram," Biomedical Engineering Letters, vol. 5, no. 1, pp. 10–21, 2015.
[46] K. Mohanchandra and S. Saha, "A communication paradigm using subvocalized speech: translating brain signals into speech," Augmented Human Research, vol. 1, no. 1, article 3, 2016.
[47] O. Ossmy, I. Fried, and R. Mukamel, "Decoding speech perception from single cell activity in humans," NeuroImage, vol. 117, pp. 151–159, 2015.
