
Hindawi Publishing Corporation
Advances in Artificial Neural Systems
Volume 2013, Article ID 560969, 7 pages
http://dx.doi.org/10.1155/2013/560969

Research Article
Artificial Neural Network Analysis of Sierpinski Gasket Fractal Antenna: A Low Cost Alternative to Experimentation

Balwinder S. Dhaliwal (1) and Shyam S. Pattnaik (2)

(1) Guru Nanak Dev Engineering College, Ludhiana, Punjab 141006, India
(2) National Institute of Technical Teachers' Training and Research, Chandigarh 160019, India

Correspondence should be addressed to Balwinder S. Dhaliwal; bsdhaliwal@ymail.com

Received 29 June 2013; Accepted 9 September 2013

Academic Editor: Ozgur Kisi

Copyright © 2013 B. S. Dhaliwal and S. S. Pattnaik. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Artificial neural networks, due to their general-purpose nature, are used to solve problems in diverse fields. Artificial neural networks (ANNs) are very useful for fractal antenna analysis, as the development of mathematical models of such antennas is very difficult due to their complex shapes and geometries. Since the empirical approach of performing experiments is costly and time consuming, this paper presents the application of artificial neural network analysis, taking the Sierpinski gasket fractal antenna as an example. The performance of three different types of networks is evaluated, and the best network for this type of application is proposed. The comparison of ANN results with experimental results validates that this technique is an alternative to experimental analysis. This low cost method of antenna analysis will be very useful for understanding various aspects of fractal antennas.

1. Introduction

Artificial neural networks (ANNs) have been used as efficient tools for modeling and prediction in almost all disciplines. The use of ANN has become widely accepted in antenna design and analysis applications. This is evident from the increasing number of publications in research/academic journals [1-8]. Angiulli and Versaci proposed a technique to evaluate the resonant frequency of microstrip antennas using neuro-fuzzy networks [1]. The use of ANN for the design of rectangular patch antennas is explained in [2]. Applications of ANN in various types of antennas and antenna arrays are explained in [3]. Neog et al. [4] have used a tunnel-based ANN for the parameter calculation of a wideband microstrip antenna. Lebbar et al. [5] employed a geometrical-methodology-based ANN for the design of a compact broadband microstrip antenna. In [6] the authors proposed an ANN to predict the input impedance of a broadband antenna as a function of its geometric parameters. Guney and Sarikaya [7] presented a hybrid method based on a combination of ANN and fuzzy inference system to calculate simultaneously the resonant frequencies of various microstrip antennas of regular geometries. An equilateral

triangular microstrip antenna has been designed using particle swarm optimization driven radial basis function neural networks in [8]. However, the use of ANN in the analysis and design of fractal antennas is at a very early stage, and only a limited number of studies are available for this class of antennas [9-12]. In this paper, the performance of three different ANNs for Sierpinski gasket fractal antenna analysis is investigated in terms of two measures: mean absolute error (MAE) and coefficient of correlation.

The experimental analysis of antennas involves a costly setup, which includes an anechoic chamber and equipment such as a vector network analyzer, synthesized signal generator, and pattern recorder, in addition to other equipment required for antenna fabrication. Availability of such resources is limited in academic institutions due to limited budgets. Thus, the proposed technique is a very low cost method, as it requires only a computer and software. Hence this method may be used as part of the lab experiments in undergraduate and postgraduate classes. The method is explained by taking the Sierpinski gasket fractal antenna as an example for analysis.

The term fractal was originally coined by Mandelbrot todescribe a family of complex shapes that possess an inherent


Figure 1: First iterations (0th to 3rd) of the Sierpinski gasket antenna.

self-similarity in their geometrical structure [15]. Fractals represent a class of geometries with very unique properties that have attracted antenna designers in recent years. The general concept of fractals can be applied to develop various antenna elements, and such antennas are known as fractal antennas [16]. Applying fractals to antenna elements results in smaller resonant antennas that are multiband and may be optimized for gain. The advantages of fractal antennas include (i) miniaturization and space-filling, (ii) multiband performance, (iii) input impedance matching, (iv) efficiency and effectiveness, and (v) improved gain and directivity [17].

Various fractal geometries have been explored by researchers in the past two decades. However, the Sierpinski gasket fractal antenna geometry has been investigated more widely than any other geometry since its introduction in 1998 by Puente-Baliarda et al. [13]. The Sierpinski triangle, also called the Sierpinski gasket, is a fractal geometry named after the Polish mathematician Waclaw Sierpinski, who described it in 1915. The first iterations of the Sierpinski gasket are shown in Figure 1. The first-iteration gasket is constructed by subtracting a central inverted triangle from a main triangle shape. After the subtraction, three equal triangles remain in the structure, each one being half the size of the original. If the same subtraction procedure is repeated on the remaining triangles, the 2nd iteration is obtained; similarly, if the iteration is carried out an infinite number of times, the ideal Sierpinski gasket is obtained [18]. The ideal Sierpinski gasket is a self-similar structure, and each of its three main parts has exactly the same shape as the whole object, but reduced by a factor of two [13].
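The counting behind this construction is simple: each subtraction step triples the number of solid triangles and halves their side. A short illustrative sketch in plain Python (not part of the paper):

```python
def gasket_stats(s, n):
    """Number of solid triangles and their side length after n
    subtraction iterations of a Sierpinski gasket with base side s."""
    # Each iteration replaces every solid triangle with 3 half-size copies.
    return 3 ** n, s / 2 ** n

# e.g. the 3rd iteration keeps 27 solid triangles, each 1/8 of the base side
count, side = gasket_stats(8.0, 3)
```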

The Sierpinski triangle can also be generated by the use of an iterated function system (IFS), which is generally used to construct self-similar fractal shapes. The IFS is based on a set of affine transforms; if the side length s of the base equilateral triangle is known, then the following iterated function system gives the self-similar shape of the Sierpinski gasket [18]:

f_1(x) = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix} x,

f_2(x) = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix} x + \begin{pmatrix} 0.5 \\ 0 \end{pmatrix},

f_3(x) = \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix} x + \begin{pmatrix} 0.25 \\ 0.433 \end{pmatrix}.    (1)
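For illustration, the maps in (1) can be iterated as a "chaos game" to generate points of the gasket. A minimal sketch in plain Python for a unit base triangle (illustrative only, not code from the paper):

```python
import random

# The three affine maps of (1): each halves the point and shifts it
# toward one vertex of the base triangle (side s = 1 here).
MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.433),
]

def chaos_game(n_points, seed=0):
    """Generate n_points of the Sierpinski gasket by applying a
    randomly chosen IFS map at each step."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n_points):
        x, y = rng.choice(MAPS)(x, y)
        pts.append((x, y))
    return pts
```

Plotting the returned points reproduces the self-similar pattern of Figure 1.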

The Sierpinski antenna is a multiband antenna, and the number of bands depends on the number of iterations n [13]. The base triangular shape, also called the 0th iteration, has a single fundamental resonant frequency; the first fractal iteration has two resonant frequencies, and so on.

Figure 2: Basic structure of MLPNN, with an input layer (layer 1, N neurons), hidden layers (layers 2 to L − 1), and an output layer (layer L, N_L neurons).

Also, the actual values of the resonant frequencies depend on the dielectric constant ε_r, the thickness t of the substrate, and the side length s. So, to determine the resonant frequencies f_n in a particular iteration, the values of ε_r, t, s, and the number of iterations n must be known. An expression for predicting the resonant frequency of this antenna was proposed in 2008 by Mishra et al. [14].

The rest of the paper is organized as follows. Section 2 briefly describes the ANN types which are investigated. Section 3 describes the details of the ANN models, the results, and the performance comparison. The conclusion is presented in Section 4.

2. Artificial Neural Networks: Brief Description

Brief descriptions of the three types of networks used in this work, namely, the multilayer perceptron neural network (MLPNN), the radial basis function neural network (RBFNN), and the general regression neural network (GRNN), are given in the following sections.

2.1. Multilayer Perceptron Neural Network. The multilayer perceptron neural network (MLPNN) is a widely used neural network structure in antenna applications. It consists of multiple layers of neurons connected in a feedforward manner [6]. The basic structure of an MLPNN is shown in Figure 2.

Figure 2 depicts that the network has L layers. The first layer is the input layer, and the last layer (the Lth layer) is the output layer. The other layers, that is, layers 2 to L − 1, are hidden layers. Each layer has a number of neurons [19]. In the input and output layers, the number of neurons is equal to the number of inputs and outputs, respectively. The number of neurons in the hidden layers is chosen by a trial and error process so that accurate output is obtained. Usually, MLPs with one or two hidden layers are used for antenna applications. Each neuron processes its inputs to produce an output using a function called the activation function. Input neurons normally use a relay activation function and simply relay (pass) the external inputs to the hidden layer neurons. The most commonly used activation function for hidden layer neurons is the sigmoid function, defined as

\sigma(r) = \frac{1}{1 + e^{-r}}.    (2)


The connections between the units in subsequent layers have associated weights, which are computed during training using the error backpropagation algorithm [20]. The backpropagation learning algorithm consists of two steps. In the first step, the input signal is passed forward through each layer of the network; the actual output of the network is calculated, and this output signal is subtracted from the desired output signal to generate an error signal. In the second step, the error is fed backward through the network from the output layer through the hidden layers, and the synaptic weights between each layer are updated based on the computed error. These two steps are repeated for each input-output pair of training data, and the process is iterated until the output value converges to a desired solution [21]. The details of MLPNN can be found in [22-25].
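The two training steps can be sketched for a single hidden layer with sigmoid activation. A toy numpy example (the data is random and the sizes are illustrative assumptions, not the paper's antenna data):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(r):
    return 1.0 / (1.0 + np.exp(-r))

# Toy data: 45 samples with 4 inputs and 1 output, mirroring the
# shape of the antenna model described later in the paper.
X = rng.random((45, 4))
Y = rng.random((45, 1))

# One hidden layer of 15 neurons, linear output neuron.
W1 = rng.normal(0.0, 0.5, (4, 15))
W2 = rng.normal(0.0, 0.5, (15, 1))
lr = 0.2

losses = []
for _ in range(200):
    # Step 1: forward pass and error signal.
    H = sigmoid(X @ W1)
    out = H @ W2
    err = out - Y
    losses.append(float(np.mean(err ** 2)))
    # Step 2: propagate the error backward and update the weights.
    grad_W2 = H.T @ err / len(X)
    dH = (err @ W2.T) * H * (1 - H)   # sigmoid derivative
    grad_W1 = X.T @ dH / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

Repeating the two steps drives the mean squared error down, which is the convergence behaviour described above.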

2.2. Radial Basis Function Neural Networks. The radial basis function neural network (RBFNN) has several special characteristics, such as a simple architecture and faster performance [26]. It is a special neural network with a single hidden layer. The hidden layer neurons use radial basis functions to apply a nonlinear transformation from the input space to the hidden space [27]. The nonlinear basis function is a function of the normalized radial distance between the input vector and the weight vector. The most widely used form of radial basis function is the Gaussian function, given by [26]

\Phi_j(x) = \exp\left( -\frac{\| x - \mu_j \|^2}{2\sigma_j^2} \right),    (3)

where x is a d-dimensional input vector with elements x_i, and μ_j is the vector determining the centre of the basis function Φ_j, with elements μ_ji. The symbol σ_j denotes the width parameter, whose value is determined during training. The RBFNN involves a two-stage training procedure. In the first stage, the parameters governing the basis functions, that is, the centres (μ_j) and widths (σ_j), are determined using relatively fast unsupervised methods. The unsupervised training methods use only the input data and not the target data. In the second stage of training, the weights of the output layer are determined [27]. For these networks the output layer is a linear layer, so fast linear supervised methods can be used; for this reason these networks train faster [26]. The number of neurons in all layers of the RBFNN, that is, the input, hidden, and output layers, is determined by the dimensions and number of input-output data sets. The basic structure of RBFNN is shown in Figure 3. The RBFNN is explored in detail in [25-29].
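A minimal sketch of this two-stage procedure (numpy, illustrative only): in stage one the centres are simply placed on the training inputs, in the spirit of the paper's one-neuron-per-sample hidden layer, and in stage two the linear output weights are solved by least squares.

```python
import numpy as np

def rbf_design(X, centres, sigma):
    """Gaussian basis matrix: Phi[i, j] = exp(-||x_i - mu_j||^2 / (2 sigma^2)),
    following equation (3)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, sigma):
    # Stage 1 (unsupervised): place one centre on each training input.
    centres = X.copy()
    # Stage 2 (supervised, linear): solve for the output-layer weights.
    Phi = rbf_design(X, centres, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, w

def predict_rbf(X_new, centres, w, sigma):
    return rbf_design(X_new, centres, sigma) @ w
```

Because stage two is a linear solve rather than an iterative descent, training is fast, which is the speed advantage noted above.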

2.3. General Regression Neural Network. The general regression neural network (GRNN), introduced by Specht in 1991, is a one-pass learning algorithm network with a highly parallel structure [30]. It is a memory-based network that provides estimates of continuous variables and converges to the underlying linear or nonlinear regression surface. Nonlinear regression analysis forms the theoretical base of the GRNN. If the joint probability density function of the random variables x and y is f(x, y) and the measured value of x is X, then the regression

Figure 3: Basic structure of RBFNN (input layer, hidden layer, and output layer).

of y with respect to X (which is also called the conditional mean) is [31]

\hat{Y}(X) = E[y \mid X] = \frac{\int_{-\infty}^{\infty} y \, f(X, y) \, dy}{\int_{-\infty}^{\infty} f(X, y) \, dy},    (4)

where \hat{Y}(X) is the prediction output of Y when the input is X. For the GRNN, Nose-Filho et al. [32] have shown that \hat{Y}(X) can be calculated by

\hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i \exp(-D_i^2 / 2\sigma^2)}{\sum_{i=1}^{n} \exp(-D_i^2 / 2\sigma^2)},    (5)

where X_i and Y_i are sample values of the random variables x and y, n is the number of samples in the training set, σ is the smoothing parameter, and D_i^2 is the squared Euclidean distance defined by

D_i^2 = (X - X_i)^T (X - X_i).    (6)

The estimate \hat{Y}(X) can be visualized as a weighted average of all of the observed values Y_i, where each observed value is weighted exponentially according to its Euclidean distance from X.
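Equations (5) and (6) translate directly into a few lines; an illustrative sketch in plain Python:

```python
import math

def grnn_predict(X_train, Y_train, x_new, sigma):
    """GRNN estimate of Y(x_new) per (5)-(6): a Gaussian-weighted
    average of the training targets."""
    num = den = 0.0
    for Xi, Yi in zip(X_train, Y_train):
        d2 = sum((a - b) ** 2 for a, b in zip(x_new, Xi))   # equation (6)
        w = math.exp(-d2 / (2.0 * sigma ** 2))
        num += Yi * w
        den += w
    return num / den
```

With a small σ the estimate follows the target of the nearest training sample; with a large σ it tends toward the mean of all observed values.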

The basic structure of GRNN is shown in Figure 4. The input units are merely distribution units, which provide all of the inputs (scaled measurement variables) to all of the neurons of the second layer, called pattern units. The number of neurons in the input layer and output layer of the GRNN is equal to the dimensions of the input data and output data, respectively. The pattern layer consists of neurons equal in number to the samples in the training set, and each pattern unit is dedicated to one cluster centre. When a new vector X is entered into the network, it is subtracted from the stored vector representing each cluster centre. The sum of squares or the absolute values of the differences is calculated and fed into a nonlinear activation function, which is generally an exponential function.

The outputs of the pattern units are passed on to the third-layer neurons, known as summation units. There are two types of neurons in the summation units: numerator neurons, whose number is equal to the output dimension, and one denominator neuron. The first type generates an estimate of f(X)K by adding the outputs of the pattern units weighted by the number


Figure 4: Basic structure of GRNN (layer 1: input layer; layer 2: pattern layer; layer 3: summation layer; layer 4: output layer).

Figure 5: Proposed ANN model, with inputs ε_r, t, s, and n and output f_n.

of observations each cluster centre represents. The second type generates the estimate Yf(X)K, multiplying each value from a pattern unit by the sum of samples associated with cluster centre X_i. K is a constant determined by the Parzen window. The desired estimate of Y is yielded by the output unit by dividing Yf(X)K by f(X)K [32]. The detailed explanation of GRNN can be found in [30-32].

3. The Proposed ANN Analysis of the Sierpinski Gasket Fractal Antenna

In the presented work, the three ANN models discussed above have been implemented and compared for the Sierpinski gasket fractal antenna. In each model the parameters dielectric constant ε_r, thickness t of the substrate, side length s, and number of iterations n of the antenna are taken as inputs, and the resonant frequency (f_n) is taken as the output. A set of 45 input values along with the desired output values is taken for training. The block diagram of the proposed models is depicted in Figure 5. Thus, the network models have four inputs and one output, with intermediate hidden layers. The parametric details of the models are shown in Figure 5.

3.1. Multilayer Perceptron Neural Network Parameters. For the training of the MLPNN, 4 input neurons, 15 neurons in the hidden layer, and 1 output neuron are used. The trainlm function is used as the training function, and the learning rate is selected as 0.2. trainlm is a network training function that updates weight and bias values according to the Levenberg-Marquardt algorithm. The architecture used for the analysis is shown in Figure 6.

3.2. Radial Basis Function Neural Network Parameters. For the training of the RBFNN, the value of the spread constant has been

Figure 6: Architecture of MLPNN (input layer with 4 neurons, hidden layer with 15 neurons, output layer with 1 neuron; inputs ε_r, t, s, n and output f_n).

Figure 7: Architecture of RBFNN (input layer with 4 neurons, hidden layer with 45 neurons, output layer with 1 neuron; inputs ε_r, t, s, n and output f_n).

selected as 1.05. The value of the spread constant should be large enough that several radial basis neurons always have fairly large outputs at any given moment. This makes the network function smoother and results in better generalization for new input vectors lying between the input vectors used in the design. However, for a very large value of the spread constant, each neuron effectively responds in the same large area of the input space. As there are 45 sets of training data, the number of neurons in the hidden layer is 45. The detailed architecture is depicted in Figure 7.

3.3. General Regression Neural Network Parameters. For the training of the GRNN, the value of the spread constant has been selected as 0.85. In the GRNN, the transfer function of the first hidden layer is a radial basis function. For a small spread constant, the radial basis function is very steep, so the neuron with the weight vector closest to the input will have a much larger output than the other neurons; the network will tend to respond with the target vector associated with the nearest design input vector. For large values of the spread constant, the radial basis function's slope becomes smoother, and several neurons can respond to an input vector. The network then acts as if it is taking a weighted average between the target vectors whose design input vectors are closest to the new input vector. As in the case of the RBFNN, the number of neurons in the hidden layer is 45. The architecture is shown in Figure 8.

The trained ANN models are used to predict the resonant frequencies (f_0, f_1, f_2, f_3, f_4) of the 4th iteration Sierpinski gasket antenna with parameters ε_r = 2.5, s = 102.7683 mm,


Table 1: Simulation results of all three ANN models. Absolute errors are calculated with respect to the experimental results.

Resonant    Experimental         Theoretical [14]      MLPNN                 RBFNN                 GRNN
frequency   results (GHz) [13]   Output     Abs. err.  Output     Abs. err.  Output     Abs. err.  Output     Abs. err.
f_0         0.520                0.5140     0.0060     0.5122     0.0078     0.5123     0.0077     0.8922     0.3722
f_1         1.740                1.7420     0.0020     1.7362     0.0038     1.7360     0.0040     1.9097     0.1697
f_2         3.510                3.4840     0.0260     3.4712     0.0388     3.4717     0.0383     3.9282     0.4182
f_3         6.950                6.9680     0.0180     6.9437     0.0063     6.9434     0.0066     7.6243     0.6743
f_4         13.890               13.9340    0.0440     13.8865    0.0035     13.8867    0.0033     11.8325    2.0575

(All outputs in GHz.)

Table 2: Values of performance measures.

Performance measure              Theoretical results   MLPNN results   RBFNN results   GRNN results
Mean absolute error (MAE)        0.0192                0.0120          0.0120          0.7384
Coefficient of correlation (R)   1.0000                1.0000          1.0000          0.9899

Figure 8: Architecture of GRNN (input layer with 4 neurons, pattern layer with 45 neurons, summation layer with 2 neurons, output layer with 1 neuron; inputs ε_r, t, s, n and output f_n).

and t = 1.588 mm. The ANN results are compared with the experimental and theoretical results, as shown in Table 1. The absolute error values are calculated by comparison with the measured results of Puente-Baliarda et al. [13]. The theoretical results obtained using the expressions of Mishra et al. [14] are also provided for comparison.

The performance comparison of different types of ANNs is important to find the model best suited to a specific problem. In recent years, a number of studies have compared the performance of ANNs for a range of applications [19, 21, 26, 33-39]. The performance of the proposed models is compared on the basis of two different measures: (1) the mean absolute error (MAE), which indicates the quality of training of the model (the smaller the MAE of a model, the better its performance), and (2) the coefficient of correlation (R), which measures the strength of the linear relationship developed by a particular model; values of R close to 1.0 indicate good model performance. The quantitative performance is given in Table 2. The values of MAE and R for the theoretical results are also shown for comparison purposes.
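The MAE figures in Table 2 follow directly from the absolute-error columns of Table 1 and are easy to verify (plain Python):

```python
# Absolute errors from Table 1 (GHz), rows f_0 .. f_4
errors = {
    "Theoretical": [0.0060, 0.0020, 0.0260, 0.0180, 0.0440],
    "MLPNN":       [0.0078, 0.0038, 0.0388, 0.0063, 0.0035],
    "RBFNN":       [0.0077, 0.0040, 0.0383, 0.0066, 0.0033],
    "GRNN":        [0.3722, 0.1697, 0.4182, 0.6743, 2.0575],
}

# Mean absolute error, as reported in Table 2
mae = {name: sum(e) / len(e) for name, e in errors.items()}
```

Rounded to four decimal places, these values reproduce the MAE row of Table 2.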

The comparison of the three algorithms on the basis of MAE shows that the MLPNN and RBFNN have the smallest error values, even better than those of the theoretical results. It may be seen that the values of R are equal to 1 for the MLPNN, the RBFNN, and the theoretical results, and R is sufficiently high for the GRNN, indicating satisfactory performance by all models.

When both performance criteria are considered together, the most satisfactory model is the RBFNN, which performs better than the MLPNN and GRNN and is even more accurate than the theoretical method in predicting the resonant frequencies.

4. Conclusion

The application of the artificial neural network method to Sierpinski gasket fractal antenna analysis is evaluated in this work. The performances of three neural networks are evaluated to realize the possible applications of ANN in fractal antenna design. The obtained ANN results are compared with the published experimental results, which show a percentage error of less than 1.5% for the RBFNN. Due to the fast adaptive properties of ANN, the required simulation time is very short. It has been observed that the RBFNN method outperforms the theoretical method in accuracy. Among the different neural models compared, the RBFNN is found to be the model best suited to this type of antenna analysis. Thus, the ANN approach to Sierpinski fractal antenna analysis appears to be a low cost, accurate, and computationally fast approach. The same concept can be explored for other fractal geometries.

References

[1] G. Angiulli and M. Versaci, "Resonant frequency evaluation of microstrip antennas using a neural-fuzzy approach," IEEE Transactions on Magnetics, vol. 39, no. 3, pp. 1333–1336, 2003.

[2] R. K. Mishra and A. Patnaik, "Designing rectangular patch antenna using the neurospectral method," IEEE Transactions on Antennas and Propagation, vol. 51, no. 8, pp. 1914–1921, 2003.

[3] A. Patnaik, D. E. Anagnostou, R. K. Mishra, C. G. Christodoulou, and J. C. Lyke, "Applications of neural networks in wireless communications," IEEE Antennas and Propagation Magazine, vol. 46, no. 3, pp. 130–137, 2004.

[4] D. K. Neog, S. S. Pattnaik, D. C. Panda, S. Devi, B. Khuntia, and M. Dutta, "Design of a wideband microstrip antenna and the use of artificial neural networks in parameter calculation," IEEE Antennas and Propagation Magazine, vol. 47, no. 3, pp. 60–65, 2005.

[5] S. Lebbar, Z. Guennoun, M. Drissi, and F. Riouch, "A compact and broadband microstrip antenna design using a geometrical-methodology-based artificial neural network," IEEE Antennas and Propagation Magazine, vol. 48, no. 2, pp. 146–154, 2006.

[6] Y. Kim, S. Keely, J. Ghosh, and H. Ling, "Application of artificial neural networks to broadband antenna design based on a parametric frequency model," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 669–674, 2007.

[7] K. Guney and N. Sarikaya, "A hybrid method based on combining artificial neural network and fuzzy inference system for simultaneous computation of resonant frequencies of rectangular, circular, and triangular microstrip antennas," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 659–668, 2007.

[8] V. S. Chintakindi, S. S. Pattnaik, O. P. Bajpai, and S. Devi, "PSO driven RBFNN for design of equilateral triangular microstrip patch antenna," Indian Journal of Radio and Space Physics, vol. 38, pp. 233–237, 2009.

[9] B. S. Dhaliwal, S. S. Pattnaik, and S. Devi, "Design of Sierpinski gasket fractal patch antenna using artificial neural networks," in Proceedings of the IASTED International Conference on Antennas, Radar and Wave Propagation (ARP '09), pp. 76–78, July 2009.

[10] P. H. D. F. Silva, E. E. C. Oliveira, and A. G. D'Assuncao, "Using a multilayer perceptrons for accurate modeling of quasi-fractal patch antennas," in Proceedings of the International Workshop on Antenna Technology (iWAT '10), pp. 1–4, Lisbon, Portugal, March 2010.

[11] P. Arora and B. S. Dhaliwal, "Parameter estimation of dual band elliptical fractal patch antenna using ANN," in Proceedings of the International Conference on Devices and Communications (ICDeCom '11), pp. 1–4, February 2011.

[12] A. Anuradha, A. Patnaik, and S. N. Sinha, "Design of custom-made fractal multi-band antennas using ANN-PSO," IEEE Antennas and Propagation Magazine, vol. 53, no. 4, pp. 94–101, 2011.

[13] C. Puente-Baliarda, J. Romeu, R. Pous, and A. Cardama, "On the behavior of the Sierpinski multiband fractal antenna," IEEE Transactions on Antennas and Propagation, vol. 46, no. 4, pp. 517–524, 1998.

[14] R. K. Mishra, R. Ghatak, and D. Poddar, "Design formula for Sierpinski gasket pre-fractal planar-monopole antennas," IEEE Antennas and Propagation Magazine, vol. 50, no. 3, pp. 104–107, 2008.

[15] D. H. Werner, R. L. Haupt, and P. L. Werner, "Fractal antenna engineering: the theory and design of fractal antenna arrays," IEEE Antennas and Propagation Magazine, vol. 41, no. 5, pp. 37–59, 1999.

[16] J. P. Gianvittorio and Y. Rahmat-Samii, "Fractal antennas: a novel antenna miniaturization technique and applications," IEEE Antennas and Propagation Magazine, vol. 44, no. 1, pp. 20–36, 2002.

[17] N. S. Song, K. L. Chin, D. B. B. Liang, and M. Anyi, "Design of broadband dual-frequency microstrip patch antenna with modified Sierpinski fractal geometry," in Proceedings of the 10th IEEE Singapore International Conference on Communications Systems (ICCS '06), pp. 1–5, Singapore, November 2006.

[18] D. H. Werner and S. Ganguly, "An overview of fractal antenna engineering research," IEEE Antennas and Propagation Magazine, vol. 45, no. 1, pp. 38–57, 2003.

[19] S. Shrivastava and M. P. Singh, "Performance evaluation of feed-forward neural network with soft computing techniques for hand written English alphabets," Applied Soft Computing Journal, vol. 11, no. 1, pp. 1156–1182, 2011.

[20] Q. J. Zhang, K. C. Gupta, and V. K. Devabhaktuni, "Artificial neural networks for RF and microwave design: from theory to practice," IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 4, pp. 1339–1350, 2003.

[21] A. Jain and A. Kumar, "An evaluation of artificial neural network technique for the determination of infiltration model parameters," Applied Soft Computing Journal, vol. 6, no. 3, pp. 272–282, 2006.

[22] V. Bourdes, S. Bonnevay, P. Lisboa et al., "Comparison of artificial neural network with logistic regression as classification models for variable selection for prediction of breast cancer patient outcomes," Advances in Artificial Neural Systems, vol. 2010, Article ID 309841, 11 pages, 2010.

[23] K. Y. Huang and K. J. Chen, "Multilayer perceptron for prediction of 2006 world cup football game," Advances in Artificial Neural Systems, vol. 2011, Article ID 374816, 8 pages, 2011.

[24] P. T. Pearson, "Visualizing clusters in artificial neural networks using Morse theory," Advances in Artificial Neural Systems, vol. 2013, Article ID 486363, 8 pages, 2013.

[25] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, USA, 1998.

[26] P. Singh and M. C. Deo, "Suitability of different neural networks in daily flow forecasting," Applied Soft Computing Journal, vol. 7, no. 3, pp. 968–978, 2007.

[27] H. Sarimveis, P. Doganis, and A. Alexandridis, "A classification technique based on radial basis function neural networks," Advances in Engineering Software, vol. 37, no. 4, pp. 218–221, 2006.

[28] S. Mehrabi, M. Maghsoudloo, H. Arabalibeik, R. Noormand, and Y. Nozari, "Application of multilayer perceptron and radial basis function neural networks in differentiating between chronic obstructive pulmonary and congestive heart failure diseases," Expert Systems with Applications, vol. 36, no. 3, pp. 6956–6959, 2009.

[29] S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302–309, 1991.

[30] D. F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568–576, 1991.

[31] D. Tomandl and A. Schober, "A modified general regression neural network (MGRNN) with new, efficient training algorithms as a robust 'black box'-tool for data analysis," Neural Networks, vol. 14, no. 8, pp. 1023–1034, 2001.

[32] K. Nose-Filho, A. D. P. Lotufo, and C. R. Minussi, "Short-term multinodal load forecasting using a modified general regression neural network," IEEE Transactions on Power Delivery, vol. 26, no. 4, pp. 2862–2869, 2011.

[33] K. L. Du, A. K. Y. Lai, K. K. M. Cheng, and M. N. S. Swamy, "Neural methods for antenna array signal processing: a review," Signal Processing, vol. 82, no. 4, pp. 547–561, 2002.

[34] R Shavlt and I Taig ldquoComparison study of pattern-synthesistechniques using neural networksrdquo Microwave and OpticalTechnology Letters vol 42 no 2 pp 175ndash179 2004

[35] M O Elish ldquoA comparative study of fault density predictionin aspect-oriented systems using MLP RBF KNN RT DENFISand SVR modelsrdquo Artificial Intelligence Review 2012

[36] P Jeatrakul and K W Wong ldquoComparing the performance ofdifferent neural networks for binary classification problemsrdquo

Advances in Artificial Neural Systems 7

in Proceedings of the 8th International Symposium on NaturalLanguage Processing (SNLP rsquo09) pp 111ndash115 BangkokThailandOctober 2009

[37] A Katidiotis K Tsagkaris and P Demestichas ldquoPerformanceevaluation of artificial neural network-based learning schemesfor cognitive radio systemsrdquoComputers and Electrical Engineer-ing vol 36 no 3 pp 518ndash535 2010

[38] M Mohamadnejad R Gholami and M Ataei ldquoComparisonof intelligence science techniques and empirical methods forprediction of blasting vibrationsrdquo Tunnelling and UndergroundSpace Technology vol 28 no 1 pp 238ndash244 2012

[39] A Rawat R N Yadav and S C Shrivastava ldquoNeural networkapplications in smart antenna arrays a reviewrdquo AEUmdashInternational Journal of Electronics and Communications vol66 no 11 pp 903ndash912 2012


Figure 1: First three iterations of the Sierpinski gasket antenna (0th through 3rd iteration).

self-similarity in their geometrical structure [15]. Fractals represent a class of geometries with unique properties that have attracted antenna designers in recent years. The general concept of fractals can be applied to develop various antenna elements, and such antennas are known as fractal antennas [16]. Applying fractals to antenna elements results in smaller resonant antennas that are multiband and may be optimized for gain. The advantages of fractal antennas include (i) miniaturization and space filling, (ii) multiband performance, (iii) input impedance matching, (iv) efficiency and effectiveness, and (v) improved gain and directivity [17].

Various fractal geometries have been explored by researchers in the past two decades. However, the Sierpinski gasket fractal antenna geometry has been investigated more widely than any other geometry since its introduction in 1998 by Puente-Baliarda et al. [13]. The Sierpinski triangle, also called the Sierpinski gasket, is a fractal geometry named after the Polish mathematician Waclaw Sierpinski, who described it in 1915. The first three iterations of the Sierpinski gasket are shown in Figure 1. The first-iteration gasket is constructed by subtracting a central inverted triangle from a main triangle shape. After the subtraction, three equal triangles remain in the structure, each one half the size of the original. If the same subtraction procedure is repeated on the remaining triangles, the 2nd iteration is obtained, and if the iteration is carried out an infinite number of times, the ideal Sierpinski gasket is obtained [18]. The ideal Sierpinski gasket is a self-similar structure: each of its three main parts has exactly the same shape as the whole object, but reduced by a factor of two [13].

The Sierpinski triangle can also be generated by the use of the iterated function system (IFS), which is generally used to construct self-similar fractal shapes. The IFS is based on a set of affine transforms, and if the side length "s" of the base equilateral triangle is known, then the following iterated function system gives the self-similar shape of the Sierpinski gasket [18]:

\[
f_1(x) = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix} x,
\qquad
f_2(x) = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix} x + \begin{bmatrix} 0.5 \\ 0 \end{bmatrix},
\qquad
f_3(x) = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix} x + \begin{bmatrix} 0.25 \\ 0.433 \end{bmatrix}.
\tag{1}
\]
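For illustration, the three maps of (1) can be iterated numerically with the "chaos game": repeatedly applying a randomly chosen map drives any starting point onto the gasket. This is a minimal sketch; the point count and seed are arbitrary choices, not values from the paper.

```python
import random

# The three affine maps of (1): each is x -> 0.5*x + offset.
OFFSETS = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.433)]

def chaos_game(n_points=10000, seed=0):
    """Approximate the Sierpinski gasket by randomly applying the IFS maps."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        ox, oy = rng.choice(OFFSETS)
        x, y = 0.5 * x + ox, 0.5 * y + oy
        points.append((x, y))
    return points

pts = chaos_game()
# Every generated point stays inside the bounding box of the unit-side triangle.
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 0.866 for x, y in pts)
```

Plotting `pts` (e.g., as a scatter plot) reproduces the self-similar triangular pattern of Figure 1.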

The Sierpinski antenna is a multiband antenna, and the number of bands depends on the number of iterations "n" [13]. The base triangular shape, also called the 0th iteration, has a single fundamental resonant frequency; the first fractal iteration has two resonant frequencies, and so on.

Figure 2: Basic structure of MLPNN: input layer (Layer 1, N neurons), hidden layers (Layer 2 through Layer L − 1), and output layer (Layer L, N_L neurons), with inputs x1, ..., xN and outputs y1, ..., yL.

Also, the actual values of the resonant frequencies depend on the dielectric constant ε_r, the thickness t of the substrate, and the side length s. So, to determine the resonant frequencies f_n in a particular iteration, the values of ε_r, t, s, and the number of iterations "n" must be known. An expression for predicting the resonant frequency of this antenna was proposed in 2008 by Mishra et al. [14].

The rest of the paper is organized as follows. Section 2 briefly describes the ANN types that are investigated. Section 3 describes the details of the ANN models, the results, and the performance comparison. The conclusion is presented in Section 4.

2. Artificial Neural Networks: Brief Description

Brief descriptions of the three types of networks used in this work, namely, the multilayer perceptron neural network (MLPNN), the radial basis function neural network (RBFNN), and the general regression neural network (GRNN), are given in the following sections.

2.1. Multilayer Perceptron Neural Network. The multilayer perceptron neural network (MLPNN) is a widely used neural network structure in antenna applications. It consists of multiple layers of neurons that are connected in a feedforward manner [6]. The basic structure of an MLPNN is shown in Figure 2.

Figure 2 depicts a network with L layers. The first layer is the input layer, and the last layer (the Lth layer) is the output layer. The other layers, that is, layers 2 to L − 1, are hidden layers. Each layer has a number of neurons [19]. In the input and output layers, the numbers of neurons are equal to the numbers of inputs and outputs, respectively. The number of neurons in the hidden layers is chosen by a trial and error process so that accurate output is obtained; MLPs with one or two hidden layers are commonly used for antenna applications. Each neuron processes its inputs to produce an output using a function called the activation function. Input neurons normally use a relay activation function and simply relay (pass) the external inputs to the hidden layer neurons. The most commonly used activation function for hidden layer neurons is the sigmoid function, defined as

\[
\sigma(r) = \frac{1}{1 + e^{-r}}.
\tag{2}
\]

Advances in Artificial Neural Systems 3

The connections between the units in subsequent layers have associated weights, which are computed during training using the error backpropagation algorithm [20]. The backpropagation learning algorithm consists of two steps. In the first step, the input signal is passed forward through each layer of the network; the actual output of the network is calculated, and this output signal is subtracted from the desired output signal to generate an error signal. In the second step, the error is fed backward through the network from the output layer through the hidden layers, and the synaptic weights between the layers are updated based on the computed error. These two steps are repeated for each input-output pair of training data, and the process is iterated a number of times until the output value converges to a desired solution [21]. Further details of the MLPNN can be found in [22-25].
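The two-step procedure above can be sketched in a few lines of NumPy. The single training pair, layer sizes, iteration count, and learning rate below are made-up toy values for illustration, not the models of Section 3.

```python
import numpy as np

def sigmoid(r):
    # Logistic activation of (2).
    return 1.0 / (1.0 + np.exp(-r))

# Toy training pair (4 inputs -> 1 output); values are illustrative only.
X = np.array([[0.2, 0.4, 0.6, 0.8]])
y = np.array([[0.5]])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 15))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(15, 1))   # hidden -> output weights
lr = 0.1                                   # learning rate

for _ in range(1000):
    # Step 1: forward pass; subtract from the desired output for the error signal.
    h = sigmoid(X @ W1)
    out = h @ W2                           # linear output neuron
    err = out - y
    # Step 2: propagate the error backward and update the synaptic weights.
    grad_W2 = h.T @ err
    grad_W1 = X.T @ ((err @ W2.T) * h * (1.0 - h))  # via sigmoid derivative
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

assert abs(out[0, 0] - 0.5) < 1e-3         # training has converged to the target
```

In practice the loop would iterate over all input-output pairs of the training set rather than a single sample.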

2.2. Radial Basis Function Neural Networks. The radial basis function neural network (RBFNN) has several special characteristics, such as a simple architecture and fast training [26]. It is a special neural network with a single hidden layer. The hidden layer neurons use radial basis functions to apply a nonlinear transformation from the input space to the hidden space [27]. The nonlinear basis function is a function of the normalized radial distance between the input vector and the weight vector. The most widely used form of radial basis function is the Gaussian function, given by [26]

\[
\Phi_j(x) = \exp\left(-\frac{\|x - \mu_j\|^2}{2\sigma_j^2}\right),
\tag{3}
\]

where x is the d-dimensional input vector with elements x_i, and μ_j is the vector determining the centre of basis function Φ_j, with elements μ_ji. The symbol σ_j denotes the width parameter, whose value is determined during training. The RBFNN involves a two-stage training procedure. In the first stage, the parameters governing the basis functions, that is, the centers (μ_j) and widths (σ_j), are determined using relatively fast unsupervised methods, which use only the input data and not the target data. In the second stage of training, the weights of the output layer are determined [27]. For these networks the output layer is linear, so fast supervised linear methods can be used; for this reason these networks train quickly [26]. The numbers of neurons in all layers of the RBFNN, that is, the input, output, and hidden layers, are determined by the dimensions and number of input-output data sets. The basic structure of the RBFNN is shown in Figure 3. The RBFNN can be explored in detail in [25-29].

2.3. General Regression Neural Network. The general regression neural network (GRNN), introduced by Specht in 1991, is a one-pass learning algorithm with a highly parallel structure [30]. It is a memory-based network that provides estimates of continuous variables and converges to the underlying linear or nonlinear regression surface. Nonlinear regression analysis forms the theoretical base of the GRNN. If the joint probability density function of the random variables x and y is f(x, y), and the measured value of x is X, then the regression

Figure 3: Basic structure of RBFNN (input layer, hidden layer, and output layer).

of y with respect to X (which is also called the conditional mean) is [31]

\[
\hat{Y}(X) = E\left[\,y \mid X\,\right]
= \frac{\int_{-\infty}^{\infty} y \, f(X, y)\, dy}{\int_{-\infty}^{\infty} f(X, y)\, dy},
\tag{4}
\]

where Ŷ(X) is the predicted output of Y when the input is X. For the GRNN, Nose-Filho et al. [32] have shown that Ŷ(X) can be calculated by

\[
\hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i \exp\left(-D_i^2 / 2\sigma^2\right)}{\sum_{i=1}^{n} \exp\left(-D_i^2 / 2\sigma^2\right)},
\tag{5}
\]

where X_i and Y_i are sample values of the random variables x and y, n is the number of samples in the training set, σ is the smoothing parameter, and D_i² is the squared Euclidean distance defined by
\[
D_i^2 = (X - X_i)^T (X - X_i).
\tag{6}
\]

The estimate Ŷ(X) can be visualized as a weighted average of all of the observed values Y_i, where each observed value is weighted exponentially according to its Euclidean distance from X.

The basic structure of the GRNN is shown in Figure 4. The input units are merely distribution units, which provide all of the inputs (scaled measurement variables) to all of the neurons in the second layer, called pattern units. The numbers of neurons in the input layer and output layer of the GRNN are equal to the dimensions of the input data and output data, respectively. The pattern layer consists of a number of neurons equal to the number of samples in the training set, and each pattern unit is dedicated to one cluster centre. When a new vector X is entered into the network, it is subtracted from the stored vector representing each cluster centre. The sum of squares or the absolute values of the differences is calculated and fed into a nonlinear activation function, which is generally an exponential function.

The outputs of the pattern units are passed on to the third-layer neurons, known as summation units. There are two types of neurons among the summation units: numerator neurons, whose number is equal to the output dimension, and one denominator neuron. The first type generates an estimate of Yf(x)K by adding the outputs of the pattern units weighted by the number

Figure 4: Basic structure of GRNN (input layer, pattern layer, summation layer, and output layer).

Figure 5: Proposed ANN model (inputs ε_r, t, s, n; output f_n).

of observations each cluster centre represents. The second type generates an estimate of f(x)K, multiplying each value from a pattern unit by the number of samples associated with cluster centre X_i; K is a constant determined by the Parzen window. The desired estimate of Y is yielded by the output unit by dividing Yf(x)K by f(x)K [32]. A detailed explanation of the GRNN can be found in [30-32].
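Equations (5) and (6) amount to a Gaussian-weighted average of the training targets, which can be sketched in a few lines. The training data below are made up; only the smoothing parameter value follows Section 3.3.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.85):
    """GRNN estimate of (5): targets weighted by exp(-D_i^2 / 2*sigma^2),
    with D_i^2 the squared Euclidean distance of (6)."""
    d2 = ((X_train - x) ** 2).sum(axis=1)     # D_i^2 for each stored sample
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-unit outputs
    return (w * y_train).sum() / w.sum()      # numerator / denominator units

# Made-up toy data: the prediction is pulled toward the target of the
# nearest stored sample, as the weighted-average interpretation suggests.
X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
y_train = np.array([0.0, 10.0])
y_hat = grnn_predict(X_train, y_train, np.array([0.1, 0.1]))
assert 0.0 < y_hat < 5.0   # closer to the first sample's target
```

Note that training is "one pass" exactly because the samples themselves are stored; only σ has to be chosen.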

3. The Proposed ANN Analysis of the Sierpinski Gasket Fractal Antenna

In the presented work, the three ANN models discussed above have been implemented and compared for the Sierpinski gasket fractal antenna. In each model, the parameters of the antenna, namely, the dielectric constant ε_r, the thickness t of the substrate, the side length s, and the number of iterations n, are taken as inputs, and the resonant frequency f_n is taken as the output. A set of 45 input values along with the desired output values is taken for training. The block diagram of the proposed models is depicted in Figure 5. Thus, the network models have four inputs and one output, with intermediate hidden layers. The parametric details of the models are shown in Figure 5.

3.1. Multilayer Perceptron Neural Network Parameters. For the training of the MLPNN, 4 input neurons, 15 neurons in the hidden layer, and 1 output neuron are used. The trainlm function is used as the training function, and the learning rate is selected as 0.2. Trainlm is a network training function that updates weight and bias values according to the Levenberg-Marquardt algorithm. The architecture used for the analysis is shown in Figure 6.

3.2. Radial Basis Function Neural Network Parameters. For the training of the RBFNN, the value of the spread constant has been

Figure 6: Architecture of MLPNN (input layer, hidden layer with 15 neurons, output layer with 1 neuron).

Figure 7: Architecture of RBFNN (4 input neurons, 45 hidden neurons, 1 output neuron).

selected as 1.05. The value of the spread constant should be large enough that several radial basis neurons always have fairly large outputs at any given moment. This makes the network function smoother and results in better generalization for new input vectors lying between the input vectors used in the design. However, for a very large value of the spread constant, each neuron effectively responds in the same large area of the input space. As there are 45 sets of training data, the number of neurons in the hidden layer is 45. The detailed architecture is depicted in Figure 7.

3.3. General Regression Neural Network Parameters. For the training of the GRNN, the value of the spread constant has been selected as 0.85. In the GRNN, the transfer function of the first hidden layer is the radial basis function. For a small spread constant, the radial basis function is very steep, so the neuron with the weight vector closest to the input will have a much larger output than the other neurons, and the network will tend to respond with the target vector associated with the nearest design input vector. For large values of the spread constant, the radial basis function's slope becomes smoother, and several neurons can respond to an input vector; the network then acts as if it is taking a weighted average between target vectors whose design input vectors are closest to the new input vector. As in the case of the RBFNN, the number of neurons in the hidden layer is 45. The architecture is shown in Figure 8.

The trained ANN models are used to predict the resonant frequencies (f0, f1, f2, f3, f4) of the 4th-iteration Sierpinski gasket antenna with parameters ε_r = 2.5, s = 102.7683 mm,


Table 1: Simulation results of all three ANN models. Absolute errors are calculated by subtracting from the experimental results.

Resonant frequency | Experimental results (GHz) [13] | Theoretical [14]: output (GHz), abs. error | MLPNN: output (GHz), abs. error | RBFNN: output (GHz), abs. error | GRNN: output (GHz), abs. error
f0 | 0.520 | 0.5140, 0.0060 | 0.5122, 0.0078 | 0.5123, 0.0077 | 0.8922, 0.3722
f1 | 1.740 | 1.7420, 0.0020 | 1.7362, 0.0038 | 1.7360, 0.0040 | 1.9097, 0.1697
f2 | 3.510 | 3.4840, 0.0260 | 3.4712, 0.0388 | 3.4717, 0.0383 | 3.9282, 0.4182
f3 | 6.950 | 6.9680, 0.0180 | 6.9437, 0.0063 | 6.9434, 0.0066 | 7.6243, 0.6743
f4 | 13.890 | 13.9340, 0.0440 | 13.8865, 0.0035 | 13.8867, 0.0033 | 11.8325, 2.0575

Table 2: Values of performance measures.

Performance measure | Theoretical results | MLPNN results | RBFNN results | GRNN results
Mean absolute error (MAE) | 0.0192 | 0.0120 | 0.0120 | 0.7384
Coefficient of correlation (R) | 1.0000 | 1.0000 | 1.0000 | 0.9899

Figure 8: Architecture of GRNN (4 input neurons, 45 pattern neurons, summation layer, 1 output neuron).

and t = 1.588 mm. The ANN results are compared with the experimental and theoretical results, as shown in Table 1. The absolute error values are calculated by comparison with the measured results of Puente-Baliarda et al. [13]. The theoretical results obtained using the expressions of Mishra et al. [14] are also provided for comparison.

A performance comparison of different types of ANNs is important for finding the model best suited to a specific problem, and in recent years a number of studies have compared the performance of ANNs for a range of applications [19, 21, 26, 33-39]. The performance of the proposed models is compared on the basis of two measures: (1) the mean absolute error (MAE), which reflects the quality of training of the model; the smaller the MAE of a model, the better its performance; and (2) the coefficient of correlation (R), which measures the strength of the linear relationship developed by a particular model, with values of R close to 1.0 indicating good model performance. The quantitative performance is given in Table 2. The values of MAE and R for the theoretical results are also shown for comparison purposes.
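Both measures can be computed directly from the Table 1 values; the short script below does so for the RBFNN column and reproduces the corresponding Table 2 entries.

```python
import math

# Measured resonant frequencies (GHz) from [13] and the RBFNN
# predictions of Table 1.
measured = [0.520, 1.740, 3.510, 6.950, 13.890]
rbfnn = [0.5123, 1.7360, 3.4717, 6.9434, 13.8867]

# Mean absolute error.
mae = sum(abs(m - p) for m, p in zip(measured, rbfnn)) / len(measured)

# Pearson coefficient of correlation R.
n = len(measured)
mx = sum(measured) / n
mp = sum(rbfnn) / n
cov = sum((m - mx) * (p - mp) for m, p in zip(measured, rbfnn))
r = cov / math.sqrt(sum((m - mx) ** 2 for m in measured)
                    * sum((p - mp) ** 2 for p in rbfnn))

print(round(mae, 4), round(r, 4))  # MAE ≈ 0.0120 and R ≈ 1.0, as in Table 2
```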

A comparison of the three algorithms on the basis of MAE shows that the MLPNN and RBFNN have the smallest error values, even better than those of the theoretical results. The values of R are equal to 1 for the MLPNN, the RBFNN, and the theoretical results, and R is sufficiently high for the GRNN, indicating satisfactory performance by all models.

When both performance criteria are considered together, the most satisfactory model is the RBFNN, which performs better than the MLPNN and the GRNN and is even more accurate than the theoretical method in predicting resonant frequencies.

4. Conclusion

The application of the artificial neural network method to Sierpinski gasket fractal antenna analysis is evaluated in this work. The performances of three neural networks are evaluated to assess the possible applications of ANNs in fractal antenna design. The obtained ANN results are compared with the published experimental results, which show a percentage error of less than 1.5% for the RBFNN. Due to the fast adaptive properties of ANNs, the required simulation time is very small. It has been observed that the RBFNN method outperforms the theoretical method in accuracy. Among the different neural models compared, the RBFNN is found to be the model best suited to this type of antenna analysis. Thus, the ANN approach to Sierpinski fractal antenna analysis appears to be a low cost, accurate, and computationally fast approach. The same concept can be explored for other fractal geometries.

References

[1] G. Angiulli and M. Versaci, "Resonant frequency evaluation of microstrip antennas using a neural-fuzzy approach," IEEE Transactions on Magnetics, vol. 39, no. 3, pp. 1333-1336, 2003.

[2] R. K. Mishra and A. Patnaik, "Designing rectangular patch antenna using the neurospectral method," IEEE Transactions on Antennas and Propagation, vol. 51, no. 8, pp. 1914-1921, 2003.

[3] A. Patnaik, D. E. Anagnostou, R. K. Mishra, C. G. Christodoulou, and J. C. Lyke, "Applications of neural networks in wireless communications," IEEE Antennas and Propagation Magazine, vol. 46, no. 3, pp. 130-137, 2004.

[4] D. K. Neog, S. S. Pattnaik, D. C. Panda, S. Devi, B. Khuntia, and M. Dutta, "Design of a wideband microstrip antenna and the use of artificial neural networks in parameter calculation," IEEE Antennas and Propagation Magazine, vol. 47, no. 3, pp. 60-65, 2005.

[5] S. Lebbar, Z. Guennoun, M. Drissi, and F. Riouch, "A compact and broadband microstrip antenna design using a geometrical-methodology-based artificial neural network," IEEE Antennas and Propagation Magazine, vol. 48, no. 2, pp. 146-154, 2006.

[6] Y. Kim, S. Keely, J. Ghosh, and H. Ling, "Application of artificial neural networks to broadband antenna design based on a parametric frequency model," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 669-674, 2007.

[7] K. Guney and N. Sarikaya, "A hybrid method based on combining artificial neural network and fuzzy inference system for simultaneous computation of resonant frequencies of rectangular, circular, and triangular microstrip antennas," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 659-668, 2007.

[8] V. S. Chintakindi, S. S. Pattnaik, O. P. Bajpai, and S. Devi, "PSO driven RBFNN for design of equilateral triangular microstrip patch antenna," Indian Journal of Radio and Space Physics, vol. 38, pp. 233-237, 2009.

[9] B. S. Dhaliwal, S. S. Pattnaik, and S. Devi, "Design of Sierpinski gasket fractal patch antenna using artificial neural networks," in Proceedings of the IASTED International Conference on Antennas, Radar and Wave Propagation (ARP '09), pp. 76-78, July 2009.

[10] P. H. D. F. Silva, E. E. C. Oliveira, and A. G. D'Assuncao, "Using a multilayer perceptrons for accurate modeling of quasi-fractal patch antennas," in Proceedings of the International Workshop on Antenna Technology (iWAT '10), pp. 1-4, Lisbon, Portugal, March 2010.

[11] P. Arora and B. S. Dhaliwal, "Parameter estimation of dual band elliptical fractal patch antenna using ANN," in Proceedings of the International Conference on Devices and Communications (ICDeCom '11), pp. 1-4, February 2011.

[12] A. Anuradha, A. Patnaik, and S. N. Sinha, "Design of custom-made fractal multi-band antennas using ANN-PSO," IEEE Antennas and Propagation Magazine, vol. 53, no. 4, pp. 94-101, 2011.

[13] C. Puente-Baliarda, J. Romeu, R. Pous, and A. Cardama, "On the behavior of the Sierpinski multiband fractal antenna," IEEE Transactions on Antennas and Propagation, vol. 46, no. 4, pp. 517-524, 1998.

[14] R. K. Mishra, R. Ghatak, and D. Poddar, "Design formula for Sierpinski gasket pre-fractal planar-monopole antennas," IEEE Antennas and Propagation Magazine, vol. 50, no. 3, pp. 104-107, 2008.

[15] D. H. Werner, R. L. Haupt, and P. L. Werner, "Fractal antenna engineering: the theory and design of fractal antenna arrays," IEEE Antennas and Propagation Magazine, vol. 41, no. 5, pp. 37-59, 1999.

[16] J. P. Gianvittorio and Y. Rahmat-Samii, "Fractal antennas: a novel antenna miniaturization technique, and applications," IEEE Antennas and Propagation Magazine, vol. 44, no. 1, pp. 20-36, 2002.

[17] N. S. Song, K. L. Chin, D. B. B. Liang, and M. Anyi, "Design of broadband dual-frequency microstrip patch antenna with modified Sierpinski fractal geometry," in Proceedings of the 10th IEEE Singapore International Conference on Communications Systems (ICCS '06), pp. 1-5, Singapore, November 2006.

[18] D. H. Werner and S. Ganguly, "An overview of fractal antenna engineering research," IEEE Antennas and Propagation Magazine, vol. 45, no. 1, pp. 38-57, 2003.

[19] S. Shrivastava and M. P. Singh, "Performance evaluation of feed-forward neural network with soft computing techniques for hand written English alphabets," Applied Soft Computing Journal, vol. 11, no. 1, pp. 1156-1182, 2011.

[20] Q. J. Zhang, K. C. Gupta, and V. K. Devabhaktuni, "Artificial neural networks for RF and microwave design - from theory to practice," IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 4, pp. 1339-1350, 2003.

[21] A. Jain and A. Kumar, "An evaluation of artificial neural network technique for the determination of infiltration model parameters," Applied Soft Computing Journal, vol. 6, no. 3, pp. 272-282, 2006.

[22] V. Bourdes, S. Bonnevay, P. Lisboa, et al., "Comparison of artificial neural network with logistic regression as classification models for variable selection for prediction of breast cancer patient outcomes," Advances in Artificial Neural Systems, vol. 2010, Article ID 309841, 11 pages, 2010.

[23] K. Y. Huang and K. J. Chen, "Multilayer perceptron for prediction of 2006 world cup football game," Advances in Artificial Neural Systems, vol. 2011, Article ID 374816, 8 pages, 2011.

[24] P. T. Pearson, "Visualizing clusters in artificial neural networks using Morse theory," Advances in Artificial Neural Systems, vol. 2013, Article ID 486363, 8 pages, 2013.

[25] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, USA, 1998.

[26] P. Singh and M. C. Deo, "Suitability of different neural networks in daily flow forecasting," Applied Soft Computing Journal, vol. 7, no. 3, pp. 968-978, 2007.

[27] H. Sarimveis, P. Doganis, and A. Alexandridis, "A classification technique based on radial basis function neural networks," Advances in Engineering Software, vol. 37, no. 4, pp. 218-221, 2006.

[28] S. Mehrabi, M. Maghsoudloo, H. Arabalibeik, R. Noormand, and Y. Nozari, "Application of multilayer perceptron and radial basis function neural networks in differentiating between chronic obstructive pulmonary and congestive heart failure diseases," Expert Systems with Applications, vol. 36, no. 3, pp. 6956-6959, 2009.

[29] S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302-309, 1991.

[30] D. F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568-576, 1991.

[31] D. Tomandl and A. Schober, "A modified general regression neural network (MGRNN) with new, efficient training algorithms as a robust 'black box'-tool for data analysis," Neural Networks, vol. 14, no. 8, pp. 1023-1034, 2001.

[32] K. Nose-Filho, A. D. P. Lotufo, and C. R. Minussi, "Short-term multinodal load forecasting using a modified general regression neural network," IEEE Transactions on Power Delivery, vol. 26, no. 4, pp. 2862-2869, 2011.

[33] K. L. Du, A. K. Y. Lai, K. K. M. Cheng, and M. N. S. Swamy, "Neural methods for antenna array signal processing: a review," Signal Processing, vol. 82, no. 4, pp. 547-561, 2002.

[34] R. Shavit and I. Taig, "Comparison study of pattern-synthesis techniques using neural networks," Microwave and Optical Technology Letters, vol. 42, no. 2, pp. 175-179, 2004.

[35] M. O. Elish, "A comparative study of fault density prediction in aspect-oriented systems using MLP, RBF, KNN, RT, DENFIS and SVR models," Artificial Intelligence Review, 2012.

[36] P. Jeatrakul and K. W. Wong, "Comparing the performance of different neural networks for binary classification problems," in Proceedings of the 8th International Symposium on Natural Language Processing (SNLP '09), pp. 111-115, Bangkok, Thailand, October 2009.

[37] A. Katidiotis, K. Tsagkaris, and P. Demestichas, "Performance evaluation of artificial neural network-based learning schemes for cognitive radio systems," Computers and Electrical Engineering, vol. 36, no. 3, pp. 518-535, 2010.

[38] M. Mohamadnejad, R. Gholami, and M. Ataei, "Comparison of intelligence science techniques and empirical methods for prediction of blasting vibrations," Tunnelling and Underground Space Technology, vol. 28, no. 1, pp. 238-244, 2012.

[39] A. Rawat, R. N. Yadav, and S. C. Shrivastava, "Neural network applications in smart antenna arrays: a review," AEU - International Journal of Electronics and Communications, vol. 66, no. 11, pp. 903-912, 2012.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 3: Research Article Artificial Neural Network Analysis of ...downloads.hindawi.com/archive/2013/560969.pdfFractal Antenna: A Low Cost Alternative to Experimentation BalwinderS.Dhaliwal


The connections between the units in subsequent layers have associated weights, which are computed during training using the error backpropagation algorithm [20]. The backpropagation learning algorithm consists of two steps. In the first step, the input signal is passed forward through each layer of the network; the actual output of the network is calculated, and this output signal is subtracted from the desired output signal to generate an error signal. In the second step, the error is fed backward through the network from the output layer through the hidden layers, and the synaptic weights between each layer are updated based on the computed error. These two steps are repeated for each input-output pair of training data, and the process is iterated a number of times until the output value converges to a desired solution [21]. The details of MLPNN can be found in [22-25].
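The two-step procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration only, not the network used in the paper; the layer sizes, activation, learning rate, and toy data here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer MLP: 4 inputs -> 5 hidden (sigmoid) -> 1 linear output.
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.1):
    global W1, b1, W2, b2
    # Step 1: forward pass -- compute the actual output and the error signal.
    h = sigmoid(W1 @ x + b1)
    y = W2 @ h + b2
    err = y - target                       # error signal (actual - desired)
    # Step 2: backward pass -- propagate the error and update the weights.
    delta_h = (W2.T @ err) * h * (1 - h)   # error fed back through hidden layer
    W2 -= lr * np.outer(err, h);     b2 -= lr * err
    W1 -= lr * np.outer(delta_h, x); b1 -= lr * delta_h
    return float(err[0] ** 2)

# Repeat the two steps on a training pair until the error converges.
x, t = np.array([0.2, 0.4, 0.1, 0.7]), np.array([0.5])
losses = [train_step(x, t) for _ in range(200)]
print(losses[0] > losses[-1])  # squared error decreases with training
```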

22 Radial Basis Function Neural Networks Radial basisfunction neural network (RBFNN) has several special charac-teristics like simple architecture and faster performance [26]It is a special neural network with a single hidden layer Thehidden layer neurons use radial basis function to apply anonlinear transformation from the input space to the hiddenspace [27] The nonlinear basis function is a function of thenormalized radial distance between the input vector and theweight vector The most widely used form of radial basisfunction is the Gaussian function given by [26]

$$\Phi_j(x) = \exp\left(-\frac{\left\|x-\mu_j\right\|^2}{2\sigma_j^2}\right), \qquad (3)$$

where x is the d-dimensional input vector with elements x_i, and μ_j is the vector determining the centre of basis function Φ_j, with elements μ_ji. The symbol σ_j denotes the width parameter, whose value is determined during training. The RBFNN involves a two-stage training procedure. In the first stage, the parameters governing the basis functions, that is, the centres μ_j and the widths σ_j, are determined using relatively fast unsupervised methods, which use only the input data and not the target data. In the second stage of training, the weights of the output layer are determined [27]. For these networks the output layer is linear, so fast linear supervised methods are used; this is why these networks train quickly [26]. The number of neurons in all layers of the RBFNN, that is, the input, hidden, and output layers, is determined by the dimensions and number of input-output data sets. The basic structure of the RBFNN is shown in Figure 3. RBFNN can be explored in detail in [25-29].

2.3. General Regression Neural Network. The general regression neural network (GRNN), introduced by Specht in 1991, is a one-pass learning algorithm with a highly parallel structure [30]. It is a memory-based network that provides estimates of continuous variables and converges to the underlying linear or nonlinear regression surface. Nonlinear regression analysis forms the theoretical basis of GRNN. If the joint probability density function of the random variables x and y is f(x, y), and the measured value of x is X, then the regression

Figure 3: Basic structure of RBFNN (input layer x_1 ... x_N, hidden layer, outputs y_1 ... y_L).

of y with respect to X (which is also called the conditional mean) is [31]

$$\hat{Y}(X) = E\left[y \mid X\right] = \frac{\int_{-\infty}^{\infty} y\, f(X, y)\, dy}{\int_{-\infty}^{\infty} f(X, y)\, dy}, \qquad (4)$$

where Ŷ(X) is the predicted output when the input is X.

For GRNN, Nose-Filho et al. [32] have shown that Ŷ(X) can be calculated by

$$\hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i \exp\left(-D_i^2 / 2\sigma^2\right)}{\sum_{i=1}^{n} \exp\left(-D_i^2 / 2\sigma^2\right)}, \qquad (5)$$

where X_i and Y_i are the sample values of the random variables x and y, n is the number of samples in the training set, σ is the smoothing parameter, and D_i^2 is the squared Euclidean distance defined by

$$D_i^2 = \left(X - X_i\right)^{T} \left(X - X_i\right). \qquad (6)$$

The estimate Ŷ(X) can be visualized as a weighted average of all of the observed values Y_i, where each observed value is weighted exponentially according to its Euclidean distance from X.
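Equations (5) and (6) make GRNN prediction a one-pass computation. A minimal sketch (NumPy; the sample data and smoothing parameter are invented for illustration):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma):
    """GRNN estimate of equation (5), with squared distances from equation (6)."""
    d2 = ((X_train - x) ** 2).sum(axis=1)     # D_i^2 = (X - X_i)^T (X - X_i)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # exponential distance weights
    return (w * Y_train).sum() / w.sum()      # weighted average of the Y_i

# Illustrative one-dimensional data: y = x^2 sampled at a few points.
X_train = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
Y_train = X_train[:, 0] ** 2

y_hat = grnn_predict(X_train, Y_train, np.array([0.75]), sigma=0.25)
print(y_hat)  # ~0.63: a weighted average dominated by the two nearest samples
```

Note how the smoothing parameter σ plays the same role as the spread constants discussed in Sections 3.2 and 3.3: a small σ makes the estimate snap to the nearest stored sample, while a large σ averages over many samples.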

The basic structure of GRNN is shown in Figure 4. The input units are merely distribution units, which provide all of the inputs (scaled measurement variables) to all of the neurons of the second layer, called pattern units. The number of neurons in the input layer and the output layer of GRNN is equal to the dimension of the input data and the output data, respectively. The pattern layer consists of neurons equal in number to the samples in the training set, and each pattern unit is dedicated to one cluster centre. When a new vector X is entered into the network, it is subtracted from the stored vector representing each cluster centre. The sum of squares or the absolute values of the differences is calculated and fed into a nonlinear activation function, which is generally an exponential function.

The outputs of the pattern units are passed on to the third-layer neurons, known as summation units. There are two types of neurons in the summation units: numerator neurons, whose number is equal to the output dimension, and one denominator neuron. The first type generates an estimate of f(X)K by adding the outputs of the pattern units weighted by the number of observations each cluster centre represents. The second type generates an estimate of Yf(X)K, multiplying each value from a pattern unit by the sum of the sample values associated with cluster centre X_i. Here K is a constant determined by the Parzen window. The desired estimate of Y is yielded by the output unit by dividing Yf(X)K by f(X)K [32]. The detailed explanation of GRNN can be found in [30-32].

Figure 4: Basic structure of GRNN (input, pattern, summation, and output layers).

Figure 5: The proposed ANN model (inputs ε_r, t, s, n; output f_n).

3. The Proposed ANN Analysis of the Sierpinski Gasket Fractal Antenna

In the presented work, the three ANN models discussed above have been implemented and compared for the Sierpinski gasket fractal antenna. In each model, the dielectric constant "ε_r", the thickness "t" of the substrate, the side length "s", and the number of iterations "n" of the antenna are taken as inputs, and the resonant frequency (f_n) is taken as the output. A set of 45 input patterns, along with the desired output values, is used for training. The block diagram of the proposed models is depicted in Figure 5. Thus, the network models have four inputs and one output, with intermediate hidden layers. The parametric details of the models are shown in Figures 6-8.

3.1. Multilayer Perceptron Neural Network Parameters. For the training of the MLPNN, 4 input neurons, 15 hidden-layer neurons, and 1 output neuron are used. The trainlm function is used as the training function, and the learning rate is set to 0.2. trainlm is a network training function that updates weight and bias values according to the Levenberg-Marquardt algorithm. The architecture used for the analysis is shown in Figure 6.
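The 4-15-1 topology fixes the number of trainable parameters. A quick check in Python (illustrative only: trainlm is a MATLAB Neural Network Toolbox routine, and the tanh hidden activation here is an assumption, since the paper does not state the activation used):

```python
import numpy as np

n_in, n_hidden, n_out = 4, 15, 1   # architecture used for the MLPNN

# Weight matrices plus bias vectors for each layer.
n_params = (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)
print(n_params)  # 91 trainable parameters

# Forward pass of the 4-15-1 network (random weights, tanh hidden units).
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

def forward(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

fn = forward(np.array([2.5, 1.588, 102.7683, 4.0]))  # (eps_r, t, s, n) input
print(fn.shape)  # (1,): a single predicted resonant frequency
```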

Figure 6: Architecture of MLPNN (4 input neurons, 15 hidden neurons, 1 output neuron).

3.2. Radial Basis Function Neural Network Parameters. For the training of the RBFNN, the value of the spread constant has been selected as 1.05. The value of the spread constant should be large enough that several radial basis neurons always have fairly large outputs at any given moment. This makes the network function smoother and results in better generalization for new input vectors lying between the input vectors used in the design. However, for a very large value of the spread constant, each neuron effectively responds in the same large area of the input space. As there are 45 sets of training data, the number of neurons in the hidden layer is 45. The detailed architecture is depicted in Figure 7.

Figure 7: Architecture of RBFNN (4 input neurons, 45 hidden neurons, 1 output neuron).

3.3. General Regression Neural Network Parameters. For the training of the GRNN, the value of the spread constant has been selected as 0.85. In GRNN, the transfer function of the first hidden layer is a radial basis function. For a small spread constant, the radial basis function is very steep, so the neuron with the weight vector closest to the input will have a much larger output than the other neurons; the network will tend to respond with the target vector associated with the nearest design input vector. For large values of the spread constant, the radial basis function's slope becomes smoother and several neurons can respond to an input vector. The network then acts as if it is taking a weighted average between target vectors whose design input vectors are closest to the new input vector. As in the case of the RBFNN, the number of neurons in the hidden layer is 45. The architecture is shown in Figure 8.

The trained ANN models are used to predict the resonant frequencies (f_0, f_1, f_2, f_3, f_4) of the 4th-iteration Sierpinski gasket antenna with parameters ε_r = 2.5, s = 102.7683 mm,


Table 1: Simulation results of all three ANN models (absolute error calculated by subtracting from the experimental results).

Resonant frequency | Experimental (GHz) [13] | Theoretical [14] output (GHz) | Abs. error | MLPNN output (GHz) | Abs. error | RBFNN output (GHz) | Abs. error | GRNN output (GHz) | Abs. error
f0 | 0.520 | 0.5140 | 0.0060 | 0.5122 | 0.0078 | 0.5123 | 0.0077 | 0.8922 | 0.3722
f1 | 1.740 | 1.7420 | 0.0020 | 1.7362 | 0.0038 | 1.7360 | 0.0040 | 1.9097 | 0.1697
f2 | 3.510 | 3.4840 | 0.0260 | 3.4712 | 0.0388 | 3.4717 | 0.0383 | 3.9282 | 0.4182
f3 | 6.950 | 6.9680 | 0.0180 | 6.9437 | 0.0063 | 6.9434 | 0.0066 | 7.6243 | 0.6743
f4 | 13.89 | 13.9340 | 0.0440 | 13.8865 | 0.0035 | 13.8867 | 0.0033 | 11.8325 | 2.0575

Table 2: Values of performance measures.

Performance measure | Theoretical | MLPNN | RBFNN | GRNN
Mean absolute error (MAE) | 0.0192 | 0.0120 | 0.0120 | 0.7384
Coefficient of correlation (R) | 1.0000 | 1.0000 | 1.0000 | 0.9899

Figure 8: Architecture of GRNN (4 input neurons, 45 pattern neurons, 2 summation neurons, 1 output neuron).

and t = 1.588 mm. The ANN results are compared with the experimental and theoretical results, as shown in Table 1. The absolute error values are calculated by comparison with the measured results of Puente-Baliarda et al. [13]. The theoretical results obtained using the expressions of Mishra et al. [14] are also provided for comparison.

The performance comparison of different types of ANNs is important to find the model best suited to a specific problem, and in recent years a number of studies have compared the performance of ANNs for a range of applications [19, 21, 26, 33-39]. The performance of the proposed models is compared on the basis of two measures: (1) the mean absolute error (MAE), which reflects the quality of model training; the smaller the MAE of a model, the better its performance; and (2) the coefficient of correlation (R), a measure of the strength of the linear relationship captured by a particular model; values of R close to 1.0 indicate good model performance. The quantitative performance is given in Table 2. The values of MAE and R for the theoretical results are also shown for comparison purposes.
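Both measures are straightforward to reproduce from Table 1. The sketch below (NumPy) recomputes MAE and R for the MLPNN column against the measured frequencies; the results match the corresponding entries of Table 2:

```python
import numpy as np

# Columns of Table 1 (GHz): measured results [13] and MLPNN predictions.
measured = np.array([0.520, 1.740, 3.510, 6.950, 13.89])
mlpnn    = np.array([0.5122, 1.7362, 3.4712, 6.9437, 13.8865])

mae = np.abs(measured - mlpnn).mean()    # mean absolute error
r = np.corrcoef(measured, mlpnn)[0, 1]   # coefficient of correlation

print(round(mae, 4))  # 0.012  (Table 2: 0.0120)
print(round(r, 4))    # 1.0    (Table 2: 1.0000)
```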

The comparison of the three algorithms on the basis of MAE shows that the MLPNN and RBFNN have the smallest error values, even better than those of the theoretical results. The values of R are equal to 1 for the MLPNN, the RBFNN, and the theoretical results, and R is sufficiently high for the GRNN, indicating satisfactory performance by all models.

When both performance criteria are considered together, the most satisfactory model is the RBFNN, which performs better than the MLPNN and GRNN and is even more accurate than the theoretical method in predicting the resonant frequencies.

4. Conclusion

The application of the artificial neural network method to Sierpinski gasket fractal antenna analysis is evaluated in this work. The performances of three neural networks are evaluated to realize the possible applications of ANN in fractal antenna design. The obtained ANN results are compared with the published experimental results and show a percentage error of less than 1.5% for the RBFNN. Due to the fast adaptive properties of ANNs, the required simulation time is very short. It has been observed that the RBFNN method outperforms the theoretical method in accuracy. Among the different neural models compared, the RBFNN is found to be the best suited for this type of antenna analysis. Thus, the ANN approach to Sierpinski fractal antenna analysis appears to be a low-cost, accurate, and computationally fast alternative to experimentation. The same concept can be explored for other fractal geometries.

References

[1] G. Angiulli and M. Versaci, "Resonant frequency evaluation of microstrip antennas using a neural-fuzzy approach," IEEE Transactions on Magnetics, vol. 39, no. 3, pp. 1333-1336, 2003.

[2] R. K. Mishra and A. Patnaik, "Designing rectangular patch antenna using the neurospectral method," IEEE Transactions on Antennas and Propagation, vol. 51, no. 8, pp. 1914-1921, 2003.

[3] A. Patnaik, D. E. Anagnostou, R. K. Mishra, C. G. Christodoulou, and J. C. Lyke, "Applications of neural networks in wireless communications," IEEE Antennas and Propagation Magazine, vol. 46, no. 3, pp. 130-137, 2004.

[4] D. K. Neog, S. S. Pattnaik, D. C. Panda, S. Devi, B. Khuntia, and M. Dutta, "Design of a wideband microstrip antenna and the use of artificial neural networks in parameter calculation," IEEE Antennas and Propagation Magazine, vol. 47, no. 3, pp. 60-65, 2005.

[5] S. Lebbar, Z. Guennoun, M. Drissi, and F. Riouch, "A compact and broadband microstrip antenna design using a geometrical-methodology-based artificial neural network," IEEE Antennas and Propagation Magazine, vol. 48, no. 2, pp. 146-154, 2006.

[6] Y. Kim, S. Keely, J. Ghosh, and H. Ling, "Application of artificial neural networks to broadband antenna design based on a parametric frequency model," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 669-674, 2007.

[7] K. Guney and N. Sarikaya, "A hybrid method based on combining artificial neural network and fuzzy inference system for simultaneous computation of resonant frequencies of rectangular, circular, and triangular microstrip antennas," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 659-668, 2007.

[8] V. S. Chintakindi, S. S. Pattnaik, O. P. Bajpai, and S. Devi, "PSO driven RBFNN for design of equilateral triangular microstrip patch antenna," Indian Journal of Radio and Space Physics, vol. 38, pp. 233-237, 2009.

[9] B. S. Dhaliwal, S. S. Pattnaik, and S. Devi, "Design of Sierpinski gasket fractal patch antenna using artificial neural networks," in Proceedings of the IASTED International Conference on Antennas, Radar and Wave Propagation (ARP '09), pp. 76-78, July 2009.

[10] P. H. D. F. Silva, E. E. C. Oliveira, and A. G. D'Assuncao, "Using a multilayer perceptrons for accurate modeling of quasi-fractal patch antennas," in Proceedings of the International Workshop on Antenna Technology (iWAT '10), pp. 1-4, Lisbon, Portugal, March 2010.

[11] P. Arora and B. S. Dhaliwal, "Parameter estimation of dual band elliptical fractal patch antenna using ANN," in Proceedings of the International Conference on Devices and Communications (ICDeCom '11), pp. 1-4, February 2011.

[12] A. Anuradha, A. Patnaik, and S. N. Sinha, "Design of custom-made fractal multi-band antennas using ANN-PSO," IEEE Antennas and Propagation Magazine, vol. 53, no. 4, pp. 94-101, 2011.

[13] C. Puente-Baliarda, J. Romeu, R. Pous, and A. Cardama, "On the behavior of the Sierpinski multiband fractal antenna," IEEE Transactions on Antennas and Propagation, vol. 46, no. 4, pp. 517-524, 1998.

[14] R. K. Mishra, R. Ghatak, and D. Poddar, "Design formula for Sierpinski gasket pre-fractal planar-monopole antennas," IEEE Antennas and Propagation Magazine, vol. 50, no. 3, pp. 104-107, 2008.

[15] D. H. Werner, R. L. Haupt, and P. L. Werner, "Fractal antenna engineering: the theory and design of fractal antenna arrays," IEEE Antennas and Propagation Magazine, vol. 41, no. 5, pp. 37-59, 1999.

[16] J. P. Gianvittorio and Y. Rahmat-Samii, "Fractal antennas: a novel antenna miniaturization technique, and applications," IEEE Antennas and Propagation Magazine, vol. 44, no. 1, pp. 20-36, 2002.

[17] N. S. Song, K. L. Chin, D. B. B. Liang, and M. Anyi, "Design of broadband dual-frequency microstrip patch antenna with modified Sierpinski fractal geometry," in Proceedings of the 10th IEEE Singapore International Conference on Communications Systems (ICCS '06), pp. 1-5, Singapore, November 2006.

[18] D. H. Werner and S. Ganguly, "An overview of fractal antenna engineering research," IEEE Antennas and Propagation Magazine, vol. 45, no. 1, pp. 38-57, 2003.

[19] S. Shrivastava and M. P. Singh, "Performance evaluation of feed-forward neural network with soft computing techniques for hand written English alphabets," Applied Soft Computing Journal, vol. 11, no. 1, pp. 1156-1182, 2011.

[20] Q. J. Zhang, K. C. Gupta, and V. K. Devabhaktuni, "Artificial neural networks for RF and microwave design—from theory to practice," IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 4, pp. 1339-1350, 2003.

[21] A. Jain and A. Kumar, "An evaluation of artificial neural network technique for the determination of infiltration model parameters," Applied Soft Computing Journal, vol. 6, no. 3, pp. 272-282, 2006.

[22] V. Bourdes, S. Bonnevay, P. Lisboa, et al., "Comparison of artificial neural network with logistic regression as classification models for variable selection for prediction of breast cancer patient outcomes," Advances in Artificial Neural Systems, vol. 2010, Article ID 309841, 11 pages, 2010.

[23] K. Y. Huang and K. J. Chen, "Multilayer perceptron for prediction of 2006 world cup football game," Advances in Artificial Neural Systems, vol. 2011, Article ID 374816, 8 pages, 2011.

[24] P. T. Pearson, "Visualizing clusters in artificial neural networks using Morse theory," Advances in Artificial Neural Systems, vol. 2013, Article ID 486363, 8 pages, 2013.

[25] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, USA, 1998.

[26] P. Singh and M. C. Deo, "Suitability of different neural networks in daily flow forecasting," Applied Soft Computing Journal, vol. 7, no. 3, pp. 968-978, 2007.

[27] H. Sarimveis, P. Doganis, and A. Alexandridis, "A classification technique based on radial basis function neural networks," Advances in Engineering Software, vol. 37, no. 4, pp. 218-221, 2006.

[28] S. Mehrabi, M. Maghsoudloo, H. Arabalibeik, R. Noormand, and Y. Nozari, "Application of multilayer perceptron and radial basis function neural networks in differentiating between chronic obstructive pulmonary and congestive heart failure diseases," Expert Systems with Applications, vol. 36, no. 3, pp. 6956-6959, 2009.

[29] S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302-309, 1991.

[30] D. F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568-576, 1991.

[31] D. Tomandl and A. Schober, "A modified general regression neural network (MGRNN) with new, efficient training algorithms as a robust 'black box'-tool for data analysis," Neural Networks, vol. 14, no. 8, pp. 1023-1034, 2001.

[32] K. Nose-Filho, A. D. P. Lotufo, and C. R. Minussi, "Short-term multinodal load forecasting using a modified general regression neural network," IEEE Transactions on Power Delivery, vol. 26, no. 4, pp. 2862-2869, 2011.

[33] K. L. Du, A. K. Y. Lai, K. K. M. Cheng, and M. N. S. Swamy, "Neural methods for antenna array signal processing: a review," Signal Processing, vol. 82, no. 4, pp. 547-561, 2002.

[34] R. Shavit and I. Taig, "Comparison study of pattern-synthesis techniques using neural networks," Microwave and Optical Technology Letters, vol. 42, no. 2, pp. 175-179, 2004.

[35] M. O. Elish, "A comparative study of fault density prediction in aspect-oriented systems using MLP, RBF, KNN, RT, DENFIS and SVR models," Artificial Intelligence Review, 2012.

[36] P. Jeatrakul and K. W. Wong, "Comparing the performance of different neural networks for binary classification problems," in Proceedings of the 8th International Symposium on Natural Language Processing (SNLP '09), pp. 111-115, Bangkok, Thailand, October 2009.

[37] A. Katidiotis, K. Tsagkaris, and P. Demestichas, "Performance evaluation of artificial neural network-based learning schemes for cognitive radio systems," Computers and Electrical Engineering, vol. 36, no. 3, pp. 518-535, 2010.

[38] M. Mohamadnejad, R. Gholami, and M. Ataei, "Comparison of intelligence science techniques and empirical methods for prediction of blasting vibrations," Tunnelling and Underground Space Technology, vol. 28, no. 1, pp. 238-244, 2012.

[39] A. Rawat, R. N. Yadav, and S. C. Shrivastava, "Neural network applications in smart antenna arrays: a review," AEU—International Journal of Electronics and Communications, vol. 66, no. 11, pp. 903-912, 2012.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 4: Research Article Artificial Neural Network Analysis of ...downloads.hindawi.com/archive/2013/560969.pdfFractal Antenna: A Low Cost Alternative to Experimentation BalwinderS.Dhaliwal

4 Advances in Artificial Neural Systems

Layer 1Input layer

Layer 2Pattern layer

Layer 3Summation layer

Layer 4Output layer

x1

x2

x3

xn

Yf(x)K

Y(x)

Y998400f(x)K

f(x)KY998400(x)

Figure 4 Basic structure of GRNN

120576r

t

s

n

ANNmodel

fn

Figure 5 Proposed ANNModel

of observations each cluster centre represents The secondtype generates estimate of 119891(119909)119870 and multiplies each valuefrom a pattern unit by the sum of samples associated withcluster center 119883119894 119870 is a constant determined by the Parzenwindow The desired estimate of 119884 is yielded by output unitby dividing 119891(119909)119870 by 119891(119909)119870 [32] The detailed explanationof GRNN can be seen from [30ndash32]

3 The Proposed ANN Analysis ofthe Sierpinski Gasket Fractal Antenna

In presented work three above discussed ANN models havebeen implemented and compared for Sierpinski gasket fractalantenna In each model the parameters dielectric constantldquo120576119903rdquo the thickness ldquo119905rdquo of the substrate side length ldquo119904rdquo and the

number of iterations ldquo119899rdquo of the antenna are taken as inputsand the resonant frequency (119891

119899) is taken as outputs A set

of 45 values of inputs along with desired output values aretaken for trainingThe block diagram of the proposedmodelsis depicted in Figure 5 Thus the networks models have fourinputs and one output with intermediate hidden layers Theparametric details of the models are shown in Figure 5

31 Multilayer Perceptron Neural Network Parameters Forthe training of MLPNN 4 input neurons 15 neurons inhidden layer and 1 output neuron are used The trainlmfunction is used as training function and learning rate isselected as 02 Trainlm is a network training function thatupdates weight and bias values according to the Levenberg-Marquardt algorithm The architecture used for the analysisis shown in Figure 6

32 Radial Basis Function Neural Networks Parameters Forthe training of RBFNN the value of spread constant has been

Input layer(N neurons) Hidden layer

(15 neurons)

Output layer(1 neurons)

120576r

t

s

n

fn

Figure 6 Architecture of MLPNN

Input layer4 neurons Hidden layer

45 neurons

Output layer1 neuron

120576r

t

s

n

fn

Figure 7 Architecture of RBFNN

selected as 105 The value of spread constant should be largeenough so that several radial basis neurons always have fairlylarge outputs at any given moment This makes the networkfunction smoother and results in better generalization fornew input vectors occurring between input vectors used inthe design However for very large value of spread constanteach neuron is effectively responding in the same large areaof the input space As there are 45 sets of training dataso number of neurons in hidden layer is 45 The detailedarchitecture is depicted in Figure 7

33 General Regression Neural Network Parameters For thetraining of GRNN the value of spread constant has beenselected as 085 In GRNN the transfer function of first hid-den layer is radial basis function For small spread constantthe radial basis function is very steep so the neuron with theweight vector closest to the input will have a much largeroutput than other neuronsThe network will tend to respondwith the target vector associated with the nearest design inputvector For large values of spread constant the radial basisfunctionrsquos slope becomes smoother and several neurons canrespond to an input vector The network then acts as if itis taking a weighted average between target vectors whosedesign input vectors are closest to the new input vector Asin case of RBFNN the number of neurons in hidden layer is45 The architecture is shown in Figure 8

The trained ANN models are used to predict resonantfrequencies (119891

0 1198911 1198912 1198913 1198914) of 4th iteration Sierpinski gas-

ket antenna with parameters 120576119903

= 25 119904 = 1027683mm

Advances in Artificial Neural Systems 5

Table 1 Simulation results of all three ANN models Absolute error calculated by subtracting from experimental results

Resonantfrequency

Experimentalresults (GHz) [13]

Theoretical results [14] MLPNN results RBFNN results GRNN resultsOutput(GHz)

Absoluteerror

Output(GHz)

Absoluteerror

Output(GHz)

Absoluteerror

Output(GHz)

Absoluteerror

1198910

0520 05140 00060 05122 00078 05123 00077 08922 037221198911

1740 17420 00020 17362 00038 17360 00040 19097 016971198912

3510 34840 00260 34712 00388 34717 00383 39282 041821198913

6950 69680 00180 69437 00063 69434 00066 76243 067431198914

1389 139340 00440 138865 00035 138867 00033 118325 20575

Table 2 Values of performance measures

Performance measure Theoretical results MLPNN results RBFNN results GRNN resultsMean absolute error (MAE) 00192 00120 00120 07384Coefficient of correlation 119877 10000 10000 10000 09899

Input layer4 neurons Pattern layer

45 neurons

Summation layer2 neurons

Output layer1 neuron

120576r

t

s

n

fn

Figure 8 Architecture of GRNN

and 119905 = 1588mm ANN results are compared with theexperimental and theoretical results as shown in Table 1The absolute error values are calculated by comparing withmeasured results of Puente-Baliarda et al [13] Also thetheoretical results obtained using the expressions of Mishraet al [14] are provided for comparison

The performance comparison of different types of ANNsis important to find the best suitable model for specificproblem In recent years a number of studies are done onperformance comparison of ANNs for a range of applications[19 21 26 33ndash39] The performance of the proposed modelsis compared on the basis of two different measures thatis (1) mean absolute error (MAE) which gives the qualityof training the model The smaller the value of MAE ofa model better will be its performance and (2) coefficientof correlation (119877) is the other measure of strength of thelinear relationship developed by a particular model Thevalues of 119877 close to 10 indicate a good model performanceThe quantitative performance is given in Table 2 The valuesof MAE and 119877 for theoretical results are also shown forcomparison purposes

The comparison of these three algorithms on the basis of MAE shows that the MLPNN and RBFNN have the smallest error values, even better than those of the theoretical results. It may be seen that the values of R are equal to 1 for the MLPNN, the RBFNN, and the theoretical results, and R is sufficiently high for the GRNN, indicating satisfactory performance by all models.

When both performance criteria are considered together, the most satisfactory model is the RBFNN, which performs better than the MLPNN and GRNN and is even more accurate than the theoretical method in predicting resonant frequencies.

4. Conclusion

The application of the artificial neural network method to Sierpinski gasket fractal antenna analysis is evaluated in this work. The performances of three neural networks are evaluated to assess the possible applications of ANN in fractal antenna design. The obtained ANN results are compared with the published experimental results and show a percentage error of less than 1.5% for the RBFNN. Due to the fast adaptive properties of ANNs, the required simulation time is very short. It has been observed that the RBFNN method outperforms the theoretical method in accuracy. Among the different neural models compared, the RBFNN is found to be the best suited model for this type of antenna analysis. Thus, the ANN approach to Sierpinski fractal antenna analysis appears to be a low cost, accurate, and computationally fast alternative to experimentation. The same concept can be explored for other fractal geometries.
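As a quick sanity check, the RBFNN percentage errors implied by Table 1 can be recomputed in a few lines of plain Python (variable names are ours):

```python
# RBFNN percentage errors relative to the measured frequencies of Table 1.
experimental = [0.520, 1.740, 3.510, 6.950, 13.890]
rbfnn = [0.5123, 1.7360, 3.4717, 6.9434, 13.8867]
pct_errors = [abs(m - p) / m * 100 for m, p in zip(experimental, rbfnn)]
print([round(e, 2) for e in pct_errors])  # the largest error, at f0, is about 1.48%
```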

References

[1] G. Angiulli and M. Versaci, "Resonant frequency evaluation of microstrip antennas using a neural-fuzzy approach," IEEE Transactions on Magnetics, vol. 39, no. 3, pp. 1333–1336, 2003.

[2] R. K. Mishra and A. Patnaik, "Designing rectangular patch antenna using the neurospectral method," IEEE Transactions on Antennas and Propagation, vol. 51, no. 8, pp. 1914–1921, 2003.

[3] A. Patnaik, D. E. Anagnostou, R. K. Mishra, C. G. Christodoulou, and J. C. Lyke, "Applications of neural networks in wireless communications," IEEE Antennas and Propagation Magazine, vol. 46, no. 3, pp. 130–137, 2004.

[4] D. K. Neog, S. S. Pattnaik, D. C. Panda, S. Devi, B. Khuntia, and M. Dutta, "Design of a wideband microstrip antenna and the use of artificial neural networks in parameter calculation," IEEE Antennas and Propagation Magazine, vol. 47, no. 3, pp. 60–65, 2005.

6 Advances in Artificial Neural Systems

[5] S. Lebbar, Z. Guennoun, M. Drissi, and F. Riouch, "A compact and broadband microstrip antenna design using a geometrical-methodology-based artificial neural network," IEEE Antennas and Propagation Magazine, vol. 48, no. 2, pp. 146–154, 2006.

[6] Y. Kim, S. Keely, J. Ghosh, and H. Ling, "Application of artificial neural networks to broadband antenna design based on a parametric frequency model," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 669–674, 2007.

[7] K. Guney and N. Sarikaya, "A hybrid method based on combining artificial neural network and fuzzy inference system for simultaneous computation of resonant frequencies of rectangular, circular, and triangular microstrip antennas," IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 659–668, 2007.

[8] V. S. Chintakindi, S. S. Pattnaik, O. P. Bajpai, and S. Devi, "PSO driven RBFNN for design of equilateral triangular microstrip patch antenna," Indian Journal of Radio and Space Physics, vol. 38, pp. 233–237, 2009.

[9] B. S. Dhaliwal, S. S. Pattnaik, and S. Devi, "Design of Sierpinski gasket fractal patch antenna using artificial neural networks," in Proceedings of the IASTED International Conference on Antennas, Radar and Wave Propagation (ARP '09), pp. 76–78, July 2009.

[10] P. H. D. F. Silva, E. E. C. Oliveira, and A. G. D'Assuncao, "Using a multilayer perceptrons for accurate modeling of quasi-fractal patch antennas," in Proceedings of the International Workshop on Antenna Technology (iWAT '10), pp. 1–4, Lisbon, Portugal, March 2010.

[11] P. Arora and B. S. Dhaliwal, "Parameter estimation of dual band elliptical fractal patch antenna using ANN," in Proceedings of the International Conference on Devices and Communications (ICDeCom '11), pp. 1–4, February 2011.

[12] A. Anuradha, A. Patnaik, and S. N. Sinha, "Design of custom-made fractal multi-band antennas using ANN-PSO," IEEE Antennas and Propagation Magazine, vol. 53, no. 4, pp. 94–101, 2011.

[13] C. Puente-Baliarda, J. Romeu, R. Pous, and A. Cardama, "On the behavior of the Sierpinski multiband fractal antenna," IEEE Transactions on Antennas and Propagation, vol. 46, no. 4, pp. 517–524, 1998.

[14] R. K. Mishra, R. Ghatak, and D. Poddar, "Design formula for Sierpinski gasket pre-fractal planar-monopole antennas," IEEE Antennas and Propagation Magazine, vol. 50, no. 3, pp. 104–107, 2008.

[15] D. H. Werner, R. L. Haupt, and P. L. Werner, "Fractal antenna engineering: the theory and design of fractal antenna arrays," IEEE Antennas and Propagation Magazine, vol. 41, no. 5, pp. 37–59, 1999.

[16] J. P. Gianvittorio and Y. Rahmat-Samii, "Fractal antennas: a novel antenna miniaturization technique, and applications," IEEE Antennas and Propagation Magazine, vol. 44, no. 1, pp. 20–36, 2002.

[17] N. S. Song, K. L. Chin, D. B. B. Liang, and M. Anyi, "Design of broadband dual-frequency microstrip patch antenna with modified Sierpinski fractal geometry," in Proceedings of the 10th IEEE Singapore International Conference on Communications Systems (ICCS '06), pp. 1–5, Singapore, November 2006.

[18] D. H. Werner and S. Ganguly, "An overview of fractal antenna engineering research," IEEE Antennas and Propagation Magazine, vol. 45, no. 1, pp. 38–57, 2003.

[19] S. Shrivastava and M. P. Singh, "Performance evaluation of feed-forward neural network with soft computing techniques for hand written English alphabets," Applied Soft Computing Journal, vol. 11, no. 1, pp. 1156–1182, 2011.

[20] Q. J. Zhang, K. C. Gupta, and V. K. Devabhaktuni, "Artificial neural networks for RF and microwave design—from theory to practice," IEEE Transactions on Microwave Theory and Techniques, vol. 51, no. 4, pp. 1339–1350, 2003.

[21] A. Jain and A. Kumar, "An evaluation of artificial neural network technique for the determination of infiltration model parameters," Applied Soft Computing Journal, vol. 6, no. 3, pp. 272–282, 2006.

[22] V. Bourdes, S. Bonnevay, P. Lisboa, et al., "Comparison of artificial neural network with logistic regression as classification models for variable selection for prediction of breast cancer patient outcomes," Advances in Artificial Neural Systems, vol. 2010, Article ID 309841, 11 pages, 2010.

[23] K. Y. Huang and K. J. Chen, "Multilayer perceptron for prediction of 2006 world cup football game," Advances in Artificial Neural Systems, vol. 2011, Article ID 374816, 8 pages, 2011.

[24] P. T. Pearson, "Visualizing clusters in artificial neural networks using Morse theory," Advances in Artificial Neural Systems, vol. 2013, Article ID 486363, 8 pages, 2013.

[25] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, USA, 1998.

[26] P. Singh and M. C. Deo, "Suitability of different neural networks in daily flow forecasting," Applied Soft Computing Journal, vol. 7, no. 3, pp. 968–978, 2007.

[27] H. Sarimveis, P. Doganis, and A. Alexandridis, "A classification technique based on radial basis function neural networks," Advances in Engineering Software, vol. 37, no. 4, pp. 218–221, 2006.

[28] S. Mehrabi, M. Maghsoudloo, H. Arabalibeik, R. Noormand, and Y. Nozari, "Application of multilayer perceptron and radial basis function neural networks in differentiating between chronic obstructive pulmonary and congestive heart failure diseases," Expert Systems with Applications, vol. 36, no. 3, pp. 6956–6959, 2009.

[29] S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302–309, 1991.

[30] D. F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568–576, 1991.

[31] D. Tomandl and A. Schober, "A modified general regression neural network (MGRNN) with new, efficient training algorithms as a robust 'black box'-tool for data analysis," Neural Networks, vol. 14, no. 8, pp. 1023–1034, 2001.

[32] K. Nose-Filho, A. D. P. Lotufo, and C. R. Minussi, "Short-term multinodal load forecasting using a modified general regression neural network," IEEE Transactions on Power Delivery, vol. 26, no. 4, pp. 2862–2869, 2011.

[33] K. L. Du, A. K. Y. Lai, K. K. M. Cheng, and M. N. S. Swamy, "Neural methods for antenna array signal processing: a review," Signal Processing, vol. 82, no. 4, pp. 547–561, 2002.

[34] R. Shavit and I. Taig, "Comparison study of pattern-synthesis techniques using neural networks," Microwave and Optical Technology Letters, vol. 42, no. 2, pp. 175–179, 2004.

[35] M. O. Elish, "A comparative study of fault density prediction in aspect-oriented systems using MLP, RBF, KNN, RT, DENFIS, and SVR models," Artificial Intelligence Review, 2012.

[36] P. Jeatrakul and K. W. Wong, "Comparing the performance of different neural networks for binary classification problems," in Proceedings of the 8th International Symposium on Natural Language Processing (SNLP '09), pp. 111–115, Bangkok, Thailand, October 2009.

[37] A. Katidiotis, K. Tsagkaris, and P. Demestichas, "Performance evaluation of artificial neural network-based learning schemes for cognitive radio systems," Computers and Electrical Engineering, vol. 36, no. 3, pp. 518–535, 2010.

[38] M. Mohamadnejad, R. Gholami, and M. Ataei, "Comparison of intelligence science techniques and empirical methods for prediction of blasting vibrations," Tunnelling and Underground Space Technology, vol. 28, no. 1, pp. 238–244, 2012.

[39] A. Rawat, R. N. Yadav, and S. C. Shrivastava, "Neural network applications in smart antenna arrays: a review," AEU-International Journal of Electronics and Communications, vol. 66, no. 11, pp. 903–912, 2012.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 5: Research Article Artificial Neural Network Analysis of ...downloads.hindawi.com/archive/2013/560969.pdfFractal Antenna: A Low Cost Alternative to Experimentation BalwinderS.Dhaliwal

Advances in Artificial Neural Systems 5

Table 1 Simulation results of all three ANN models Absolute error calculated by subtracting from experimental results

Resonantfrequency

Experimentalresults (GHz) [13]

Theoretical results [14] MLPNN results RBFNN results GRNN resultsOutput(GHz)

Absoluteerror

Output(GHz)

Absoluteerror

Output(GHz)

Absoluteerror

Output(GHz)

Absoluteerror

1198910

0520 05140 00060 05122 00078 05123 00077 08922 037221198911

1740 17420 00020 17362 00038 17360 00040 19097 016971198912

3510 34840 00260 34712 00388 34717 00383 39282 041821198913

6950 69680 00180 69437 00063 69434 00066 76243 067431198914

1389 139340 00440 138865 00035 138867 00033 118325 20575

Table 2 Values of performance measures

Performance measure Theoretical results MLPNN results RBFNN results GRNN resultsMean absolute error (MAE) 00192 00120 00120 07384Coefficient of correlation 119877 10000 10000 10000 09899

Input layer4 neurons Pattern layer

45 neurons

Summation layer2 neurons

Output layer1 neuron

120576r

t

s

n

fn

Figure 8 Architecture of GRNN

and 119905 = 1588mm ANN results are compared with theexperimental and theoretical results as shown in Table 1The absolute error values are calculated by comparing withmeasured results of Puente-Baliarda et al [13] Also thetheoretical results obtained using the expressions of Mishraet al [14] are provided for comparison

The performance comparison of different types of ANNsis important to find the best suitable model for specificproblem In recent years a number of studies are done onperformance comparison of ANNs for a range of applications[19 21 26 33ndash39] The performance of the proposed modelsis compared on the basis of two different measures thatis (1) mean absolute error (MAE) which gives the qualityof training the model The smaller the value of MAE ofa model better will be its performance and (2) coefficientof correlation (119877) is the other measure of strength of thelinear relationship developed by a particular model Thevalues of 119877 close to 10 indicate a good model performanceThe quantitative performance is given in Table 2 The valuesof MAE and 119877 for theoretical results are also shown forcomparison purposes

The comparison between these three algorithms on thebasis of MAE shows that the MLPNN and RBFNN havesmallest error values and even better than those of theoreticalresults It may be seen that the values of 119877 are equal to 1 forMLPNN RBFNN and theoretical results and it is sufficientlyhigh for GRNN indicating satisfactory performance by allmodels

When both performance criteria are considered togetherthe most satisfactory model is RBFNN which has betterperformance thanMLPNNandGRNNand it is even accuratethan theoretical method in predicting resonant frequencies

4 Conclusion

The application of the artificial neural networks methodin Sierpinski gasket fractal antenna analysis is evaluated inthis work The performances of the three neural networksare evaluated to realize the possible applications of ANNin fractal antenna design The obtained ANN results arecompared with the published experimental results whichshow percentage of error less than 15 for RBFNN Due tofast adaptive properties of ANN the simulation time requiredis very less It has been observed that RBFNN methodoutperforms the theoretical method in accuracy Among thedifferent neural models compared the RBFNN is found asthe best suitable model for this type of antenna analysisThus ANN approach to Sierpinski fractal antenna analysisseems to be a low cost accurate and computationally fastapproachThe same concept can be explored for other fractalgeometries

References

[1] G Angiulli and M Versaci ldquoResonant frequency evaluationof microstrip antennas using a neural-fuzzy approachrdquo IEEETransactions on Magnetics vol 39 no 3 I pp 1333ndash1336 2003

[2] R K Mishra and A Patnaik ldquoDesigning rectangular patchantenna using the nurospectral methodrdquo IEEE Transactions onAntennas and Propagation vol 51 no 8 pp 1914ndash1921 2003

[3] A Patnaik D E Anagnostou R K Mishra C G Christo-doulou and J C Lyke ldquoApplications of neural networks inwireless communicationsrdquo IEEE Antennas and PropagationMagazine vol 46 no 3 pp 130ndash137 2004

[4] D K Neog S S Pattnaik D C Panda S Devi B Khuntia andM Dutta ldquoDesign of a wideband microstrip antenna and theuse of artificial neural networks in parameter calculationrdquo IEEEAntennas and Propagation Magazine vol 47 no 3 pp 60ndash652005

6 Advances in Artificial Neural Systems

[5] S Lebbar Z Guennoun M Drissi and F Riouch ldquoA compactand broadband microstrip antenna design using a geometrical-methodology-based artifical neural networkrdquo IEEE Antennasand Propagation Magazine vol 48 no 2 pp 146ndash154 2006

[6] Y Kim S Keely J Ghosh and H Ling ldquoApplication of artificialneural networks to broadband antenna design based on aparametric frequency modelrdquo IEEE Transactions on Antennasand Propagation vol 55 no 3 pp 669ndash674 2007

[7] K Guney and N Sarikaya ldquoA hybrid method based on com-bining artificial neural network and fuzzy inference systemfor simultaneous computation of resonant frequencies of rect-angular circular and triangular microstrip antennasrdquo IEEETransactions on Antennas and Propagation vol 55 no 3 pp659ndash668 2007

[8] V S Chintakindi S S Pattnaik O P Bajpai and S Devi ldquoPSOdriven RBFNN for design of equilateral triangular microstrippatch antennardquo Indian Journal of Radio and Space Physics vol38 pp 233ndash237 2009

[9] B S Dhaliwal S S Pattnaik and S Devi ldquoDesign of Sierpinskigasket fractal patch antenna using artificial neural networksrdquoin Proceedings of the IASTED International Conference onAntennas Radar and Wave Propagation (ARP rsquo09) pp 76ndash78July 2009

[10] P H D F Silva E E C Oliveira and A G DrsquoAssuncao ldquoUsinga multilayer perceptrons for accurate modeling of quasi-fractalpatch antennasrdquo in Proceedings of the International Workshopon Antenna Technology (iWAT rsquo10) pp 1ndash4 Lisbon PortugalMarch 2010

[11] P Arora and B S Dhaliwal ldquoParameter estimation of dual bandelliptical fractal patch antenna using ANNrdquo in Proceedings ofthe International Conference on Devices and Communications(ICDeCom rsquo11) pp 1ndash4 February 2011

[12] A Anuradha A Patnaik and S N Sinha ldquoDesign of custom-made fractal multi-band antennas using ANN-PSOrdquo IEEEAntennas and Propagation Magazine vol 53 no 4 pp 94ndash1012011

[13] C Puente-Baliarda J Romeu R Pous and A Cardama ldquoOnthe behavior of the sierpinski multiband fractal antennardquo IEEETransactions on Antennas and Propagation vol 46 no 4 pp517ndash524 1998

[14] R K Mishra R Ghatak and D Poddar ldquoDesign formula forSierpinski gasket pre-fractal planar-monopole antennasrdquo IEEEAntennas and Propagation Magazine vol 50 no 3 pp 104ndash1072008

[15] D H Werner R L Haupt and P L Werner ldquoFractal antennaengineering the theory and design of fractal antenna arraysrdquoIEEE Antennas and PropagationMagazine vol 41 no 5 pp 37ndash59 1999

[16] J P Gianvittorio and Y Rahmat-Samii ldquoFractal antennas anovel antenna miniaturization technique and applicationsrdquoIEEEAntennas and PropagationMagazine vol 44 no 1 pp 20ndash36 2002

[17] N S Song K L Chin D B B Liang and M Anyi ldquoDesignof broadband dual-frequency microstrip patch antenna withmodified sierpinski fractal geometryrdquo in Proceedings of the 10thIEEE Singapore International Conference on CommunicationsSystems (ICCS rsquo06) pp 1ndash5 Singapore November 2006

[18] D H Werner and S Ganguly ldquoAn overview of fractal antennaengineering researchrdquo IEEE Antennas and Propagation Maga-zine vol 45 no 1 pp 38ndash57 2003

[19] S Shrivastava and M P Singh ldquoPerformance evaluation offeed-forward neural network with soft computing techniques

for hand written English alphabetsrdquo Applied Soft ComputingJournal vol 11 no 1 pp 1156ndash1182 2011

[20] Q J Zhang K C Gupta and V K Devabhaktuni ldquoArtificialneural networks for RF and microwave designmdashfrom theoryto practicerdquo IEEE Transactions on Microwave Theory andTechniques vol 51 no 4 pp 1339ndash1350 2003

[21] A Jain and A Kumar ldquoAn evaluation of artificial neuralnetwork technique for the determination of infiltration modelparametersrdquo Applied Soft Computing Journal vol 6 no 3 pp272ndash282 2006

[22] V Bourdes S Bonnevay P Lisboa et al ldquoComparison ofartificial neural networkwith logistic regression as classificationmodels for variable selection for prediction of breast cancerpatient outcomesrdquo Advances in Artificial Neural Systems vol2010 Article ID 309841 11 pages 2010

[23] K Y Huang and K J Chen ldquoMultilayer perceptron for predic-tion of 2006 world cup football gamerdquo Advances in ArtificialNeural Systems vol 2011 Article ID 374816 8 pages 2011

[24] P T Pearson ldquoVisualizing clusters in artificial neural networksusing morse theoryrdquo Advances in Artificial Neural Systems vol2013 Article ID 486363 8 pages 2013

[25] S Haykins Neural Networks A Comprehensive FoundationPrentice Hall Upper Saddle River NJ USA 1998

[26] P Singh andM C Deo ldquoSuitability of different neural networksin daily flow forecastingrdquo Applied Soft Computing Journal vol7 no 3 pp 968ndash978 2007

[27] H Sarimveis P Doganis and A Alexandridis ldquoA classificationtechnique based on radial basis function neural networksrdquoAdvances in Engineering Software vol 37 no 4 pp 218ndash2212006

[28] S Mehrabi M Maghsoudloo H Arabalibeik R Noormandand Y Nozari ldquoApplication of multilayer perceptron andradial basis function neural networks in differentiating betweenchronic obstructive pulmonary and congestive heart failurediseasesrdquo Expert Systems with Applications vol 36 no 3 pp6956ndash6959 2009

[29] S Chen C F N Cowan and P M Grant ldquoOrthogonal leastsquares learning algorithm for radial basis function networksrdquoIEEETransactions onNeuralNetworks vol 2 no 2 pp 302ndash3091991

[30] D F Specht ldquoA general regression neural networkrdquo IEEE Trans-actions on Neural Networks vol 2 no 6 pp 568ndash576 1991

[31] D Tomandl and A Schober ldquoA modified general regressionneural network (MGRNN) with new efficient training algo-rithms as a robust ldquoblack boxrdquo-tool for data analysisrdquo NeuralNetworks vol 14 no 8 pp 1023ndash1034 2001

[32] K Nose-Filho A D P Lotufo and C R Minussi ldquoShort-termmultinodal load forecasting using amodified general regressionneural networkrdquo IEEE Transactions on Power Delivery vol 26no 4 pp 2862ndash2869 2011

[33] K L Du A K Y Lai K K M Cheng and M N S SwamyldquoNeural methods for antenna array signal processing a reviewrdquoSignal Processing vol 82 no 4 pp 547ndash561 2002

[34] R Shavlt and I Taig ldquoComparison study of pattern-synthesistechniques using neural networksrdquo Microwave and OpticalTechnology Letters vol 42 no 2 pp 175ndash179 2004

[35] M O Elish ldquoA comparative study of fault density predictionin aspect-oriented systems using MLP RBF KNN RT DENFISand SVR modelsrdquo Artificial Intelligence Review 2012

[36] P Jeatrakul and K W Wong ldquoComparing the performance ofdifferent neural networks for binary classification problemsrdquo

Advances in Artificial Neural Systems 7

in Proceedings of the 8th International Symposium on NaturalLanguage Processing (SNLP rsquo09) pp 111ndash115 BangkokThailandOctober 2009

[37] A Katidiotis K Tsagkaris and P Demestichas ldquoPerformanceevaluation of artificial neural network-based learning schemesfor cognitive radio systemsrdquoComputers and Electrical Engineer-ing vol 36 no 3 pp 518ndash535 2010

[38] M Mohamadnejad R Gholami and M Ataei ldquoComparisonof intelligence science techniques and empirical methods forprediction of blasting vibrationsrdquo Tunnelling and UndergroundSpace Technology vol 28 no 1 pp 238ndash244 2012

[39] A Rawat R N Yadav and S C Shrivastava ldquoNeural networkapplications in smart antenna arrays a reviewrdquo AEUmdashInternational Journal of Electronics and Communications vol66 no 11 pp 903ndash912 2012

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 6: Research Article Artificial Neural Network Analysis of ...downloads.hindawi.com/archive/2013/560969.pdfFractal Antenna: A Low Cost Alternative to Experimentation BalwinderS.Dhaliwal

6 Advances in Artificial Neural Systems

[5] S Lebbar Z Guennoun M Drissi and F Riouch ldquoA compactand broadband microstrip antenna design using a geometrical-methodology-based artifical neural networkrdquo IEEE Antennasand Propagation Magazine vol 48 no 2 pp 146ndash154 2006

[6] Y Kim S Keely J Ghosh and H Ling ldquoApplication of artificialneural networks to broadband antenna design based on aparametric frequency modelrdquo IEEE Transactions on Antennasand Propagation vol 55 no 3 pp 669ndash674 2007

[7] K Guney and N Sarikaya ldquoA hybrid method based on com-bining artificial neural network and fuzzy inference systemfor simultaneous computation of resonant frequencies of rect-angular circular and triangular microstrip antennasrdquo IEEETransactions on Antennas and Propagation vol 55 no 3 pp659ndash668 2007

[8] V S Chintakindi S S Pattnaik O P Bajpai and S Devi ldquoPSOdriven RBFNN for design of equilateral triangular microstrippatch antennardquo Indian Journal of Radio and Space Physics vol38 pp 233ndash237 2009

[9] B S Dhaliwal S S Pattnaik and S Devi ldquoDesign of Sierpinskigasket fractal patch antenna using artificial neural networksrdquoin Proceedings of the IASTED International Conference onAntennas Radar and Wave Propagation (ARP rsquo09) pp 76ndash78July 2009

[10] P H D F Silva E E C Oliveira and A G DrsquoAssuncao ldquoUsinga multilayer perceptrons for accurate modeling of quasi-fractalpatch antennasrdquo in Proceedings of the International Workshopon Antenna Technology (iWAT rsquo10) pp 1ndash4 Lisbon PortugalMarch 2010

[11] P Arora and B S Dhaliwal ldquoParameter estimation of dual bandelliptical fractal patch antenna using ANNrdquo in Proceedings ofthe International Conference on Devices and Communications(ICDeCom rsquo11) pp 1ndash4 February 2011

[12] A Anuradha A Patnaik and S N Sinha ldquoDesign of custom-made fractal multi-band antennas using ANN-PSOrdquo IEEEAntennas and Propagation Magazine vol 53 no 4 pp 94ndash1012011

[13] C Puente-Baliarda J Romeu R Pous and A Cardama ldquoOnthe behavior of the sierpinski multiband fractal antennardquo IEEETransactions on Antennas and Propagation vol 46 no 4 pp517ndash524 1998

[14] R K Mishra R Ghatak and D Poddar ldquoDesign formula forSierpinski gasket pre-fractal planar-monopole antennasrdquo IEEEAntennas and Propagation Magazine vol 50 no 3 pp 104ndash1072008

[15] D H Werner R L Haupt and P L Werner ldquoFractal antennaengineering the theory and design of fractal antenna arraysrdquoIEEE Antennas and PropagationMagazine vol 41 no 5 pp 37ndash59 1999

[16] J P Gianvittorio and Y Rahmat-Samii ldquoFractal antennas anovel antenna miniaturization technique and applicationsrdquoIEEEAntennas and PropagationMagazine vol 44 no 1 pp 20ndash36 2002

[17] N S Song K L Chin D B B Liang and M Anyi ldquoDesignof broadband dual-frequency microstrip patch antenna withmodified sierpinski fractal geometryrdquo in Proceedings of the 10thIEEE Singapore International Conference on CommunicationsSystems (ICCS rsquo06) pp 1ndash5 Singapore November 2006

[18] D H Werner and S Ganguly ldquoAn overview of fractal antennaengineering researchrdquo IEEE Antennas and Propagation Maga-zine vol 45 no 1 pp 38ndash57 2003

[19] S Shrivastava and M P Singh ldquoPerformance evaluation offeed-forward neural network with soft computing techniques

for hand written English alphabetsrdquo Applied Soft ComputingJournal vol 11 no 1 pp 1156ndash1182 2011

[20] Q J Zhang K C Gupta and V K Devabhaktuni ldquoArtificialneural networks for RF and microwave designmdashfrom theoryto practicerdquo IEEE Transactions on Microwave Theory andTechniques vol 51 no 4 pp 1339ndash1350 2003

[21] A. Jain and A. Kumar, "An evaluation of artificial neural network technique for the determination of infiltration model parameters," Applied Soft Computing Journal, vol. 6, no. 3, pp. 272–282, 2006.

[22] V. Bourdes, S. Bonnevay, P. Lisboa et al., "Comparison of artificial neural network with logistic regression as classification models for variable selection for prediction of breast cancer patient outcomes," Advances in Artificial Neural Systems, vol. 2010, Article ID 309841, 11 pages, 2010.

[23] K. Y. Huang and K. J. Chen, "Multilayer perceptron for prediction of 2006 world cup football game," Advances in Artificial Neural Systems, vol. 2011, Article ID 374816, 8 pages, 2011.

[24] P. T. Pearson, "Visualizing clusters in artificial neural networks using Morse theory," Advances in Artificial Neural Systems, vol. 2013, Article ID 486363, 8 pages, 2013.

[25] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Upper Saddle River, NJ, USA, 1998.

[26] P. Singh and M. C. Deo, "Suitability of different neural networks in daily flow forecasting," Applied Soft Computing Journal, vol. 7, no. 3, pp. 968–978, 2007.

[27] H. Sarimveis, P. Doganis, and A. Alexandridis, "A classification technique based on radial basis function neural networks," Advances in Engineering Software, vol. 37, no. 4, pp. 218–221, 2006.

[28] S. Mehrabi, M. Maghsoudloo, H. Arabalibeik, R. Noormand, and Y. Nozari, "Application of multilayer perceptron and radial basis function neural networks in differentiating between chronic obstructive pulmonary and congestive heart failure diseases," Expert Systems with Applications, vol. 36, no. 3, pp. 6956–6959, 2009.

[29] S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Transactions on Neural Networks, vol. 2, no. 2, pp. 302–309, 1991.

[30] D. F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568–576, 1991.

[31] D. Tomandl and A. Schober, "A modified general regression neural network (MGRNN) with new, efficient training algorithms as a robust 'black box'-tool for data analysis," Neural Networks, vol. 14, no. 8, pp. 1023–1034, 2001.

[32] K. Nose-Filho, A. D. P. Lotufo, and C. R. Minussi, "Short-term multinodal load forecasting using a modified general regression neural network," IEEE Transactions on Power Delivery, vol. 26, no. 4, pp. 2862–2869, 2011.

[33] K. L. Du, A. K. Y. Lai, K. K. M. Cheng, and M. N. S. Swamy, "Neural methods for antenna array signal processing: a review," Signal Processing, vol. 82, no. 4, pp. 547–561, 2002.

[34] R. Shavit and I. Taig, "Comparison study of pattern-synthesis techniques using neural networks," Microwave and Optical Technology Letters, vol. 42, no. 2, pp. 175–179, 2004.

[35] M. O. Elish, "A comparative study of fault density prediction in aspect-oriented systems using MLP, RBF, KNN, RT, DENFIS and SVR models," Artificial Intelligence Review, 2012.

[36] P. Jeatrakul and K. W. Wong, "Comparing the performance of different neural networks for binary classification problems," in Proceedings of the 8th International Symposium on Natural Language Processing (SNLP '09), pp. 111–115, Bangkok, Thailand, October 2009.

[37] A. Katidiotis, K. Tsagkaris, and P. Demestichas, "Performance evaluation of artificial neural network-based learning schemes for cognitive radio systems," Computers and Electrical Engineering, vol. 36, no. 3, pp. 518–535, 2010.

[38] M. Mohamadnejad, R. Gholami, and M. Ataei, "Comparison of intelligence science techniques and empirical methods for prediction of blasting vibrations," Tunnelling and Underground Space Technology, vol. 28, no. 1, pp. 238–244, 2012.

[39] A. Rawat, R. N. Yadav, and S. C. Shrivastava, "Neural network applications in smart antenna arrays: a review," AEU-International Journal of Electronics and Communications, vol. 66, no. 11, pp. 903–912, 2012.
