Spectrum Sensing using Probabilistic Neural Networks in Cognitive Radio
Network
Pavithra Roy P1 , Dr. Muralidhar M2
1 Assistant Professor, Dept. of ECE, Vemana Institute of Technology, Bangalore, India.
E-mail: [email protected]
2 Principal, Sri Venkateswara College of Engg. & Technology (Autonomous), Chittoor, India.
Abstract
Prediction of the potential idle times of different channels, based on past information, allows a cognitive radio to select
the best channels for control and data transmission. Learning is one of the most important factors for a successful
cognitive radio. Enhancing cognitive radio with Artificial Intelligence (AI) techniques leads to the design and development
of algorithms that enable the radio to learn. These learning mechanisms can exploit measurements sensed from the
environment, gathered experience and stored knowledge to improve radio operation and to guide decisions and actions.
This paper presents learning schemes based on artificial neural networks, in particular the basic Neural Network (NN)
and the Probabilistic Neural Network (PNN). Based on these learning schemes, an effective training approach is
proposed. Simulations demonstrate that the proposed technique outperforms existing hierarchical approaches.
1. Introduction
Intelligence is needed to keep up with the rapid
evolution of wireless communications, especially in
terms of managing and allocating the scarce radio
spectrum in today's highly varying environments.
Cognitive radio systems promise to handle this
situation. Cognitive radio [1] is an intelligent wireless
communication system that is aware of its environment,
can learn from experience, and can change certain
operating parameters (e.g. transmit power, carrier
frequency, and modulation strategy) to adapt to
incoming RF stimuli in real time. The main objective of
a cognitive radio is to provide highly reliable
communication with efficient utilization of the radio
spectrum in order to satisfy the users' needs [2].
A wireless network designed based on the cognitive
radio technology is referred to as a Cognitive Radio
Network (CRN). A CRN is composed of two types of
users, namely, the primary users (licensed users) and
the secondary users (unlicensed users). The primary
users have a higher priority than the secondary users in
accessing the channels in the licensed spectrum. In
most cases, the secondary users in a CRN logically
divide the channels allocated to the primary users into
slots [3]. Within each slot the secondary user has to
sense the primary user activity for a short duration and
accordingly accesses the slot when it is sensed idle. The
idle slots are also called spectrum holes or white
spaces. Thus the secondary users can access the
licensed spectrum without causing any harmful
interference to the primary users. Once the neural
networks used for this channel prediction are trained,
the computational complexity of the sensing decision is
significantly reduced.
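As a rough illustration of this slotted access model, the following Python sketch (hypothetical code, not the paper's implementation; predict_idle_prob and sense_channel are placeholder functions standing in for a learned channel-state predictor and a short per-slot sensing measurement) shows how a secondary user could rank channels by predicted idleness and transmit only on slots sensed idle:

```python
import random

def sense_channel(channel):
    """Stand-in for a short sensing measurement at the start of a slot.
    Returns True when the primary user appears idle (placeholder behaviour)."""
    return random.random() > 0.3  # assumed idle probability, for illustration only

def predict_idle_prob(channel, history):
    """Stand-in for a learned predictor (e.g. the NN/PNN discussed later)
    estimating how likely the channel is to be idle in the next slot."""
    past = history.get(channel, [])
    return sum(past) / len(past) if past else 0.5

def run_slot(channels, history):
    """One time slot: rank channels by predicted idleness, sense the best
    candidate, and transmit only if it is actually sensed idle."""
    best = max(channels, key=lambda c: predict_idle_prob(c, history))
    idle = sense_channel(best)
    history.setdefault(best, []).append(1 if idle else 0)
    if idle:
        print(f"slot: transmitting on channel {best} (sensed idle)")
    else:
        print(f"slot: channel {best} busy, deferring")

history = {}
for _ in range(5):
    run_slot(channels=[0, 1, 2], history=history)
```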
Cognitive radio is a merger of AI techniques and
wireless communication [4]. Cognitive radio enhanced
with AI techniques can be used in a rapidly changing
environment. AI is the field concerned with the design
and development of algorithms that enable the computer
to learn. It suits situations where learning proceeds from
past experience, by example and by analogy [5].
Reasoning and learning techniques are fundamental to
the cognitive radio field. The capability to improve is
provided by an intelligent software package called a
cognitive engine [6]. The cognitive engine derives and
enforces decisions on the software-based radio by
continuously adjusting its parameters, observing and
measuring the outcomes, and taking actions to move the
radio toward some desired operational state [7].
Cognitive radios are capable of learning lessons and
storing them into a knowledge base, from where they
may be retrieved, when needed, to assist future
decisions and actions. The integration of a learning
engine can be especially important for channel
estimation, for improving the stability and reliability of
the channel, and for evaluating the configuration
capabilities, without relying solely on recent
measurements. There are many different learning
techniques available that can be used by a cognitive
radio ranging from pure lookup tables to arbitrary
combinations of machine learning techniques that
include artificial neural networks, evolutionary/genetic
algorithms, reinforcement learning, hidden Markov
models, etc. In this paper we propose two learning
schemes, NN and PNN, and discuss issues such as the
learning algorithm, performance, efficiency and
learning rate. The paper aims at overcoming the
existing drawbacks faced by conventional neural
networks and at achieving improved sensing of the
channel.
Section 2 briefly presents the basic NN algorithm. The
main purpose of this section is to introduce notation
and concepts which are needed to describe the NN
algorithm. The PNN algorithm is then presented in
Section 3. In Section 4 the PNN algorithm is compared
with the NN algorithm and with a variable learning rate
variation of back propagation. Section 5 contains a
summary and conclusions.
2. NEURAL NETWORK (NN) ALGORITHM
Neural networks are computational models of the
biological brain. Like the brain, a neural network
comprises a large number of interconnected neurons.
Each neuron is capable of performing a simple
computation [8]. Artificial Neural Networks (ANN or
simply NN) on the other hand, are made up of artificial
neurons interconnected to each other to form a
programming structure that mimics the behaviour and
neural processing (organization and learning) of
biological neurons. Neural networks are basically
composed of input, hidden and output layers, whose
components learn by tuning the weights of the
parametric model. The neurons are connected by
weighted links passing signals from one neuron to
another. An intermediate layer couples the input and
output signals together, propagating the input signals
along a set of connections from the input to the output
[9]. The output signal is transmitted through the
neuron's outgoing connection, which splits into a
number of branches that transmit the same signal. The
outgoing branches terminate at the incoming
connections of other neurons in the network.
A neural network is renowned for its ability to capture
nonlinear mappings, and can simulate the input-output
behaviour of a complex system [10]. The advantage of
neural networks over statistical models is that they do
not require a priori knowledge of the underlying
distributions of the observed process. Neural networks
therefore offer an attractive choice for modelling the
predictor. A neural network can classify information
and recognize patterns, and has been used to classify
modulation types in signals with great accuracy. It can
also determine how many other radios are nearby and
what types of communication they use, which is of
great help in optimization [9].
A typical artificial neuron and the modelling of a
multilayered neural network are illustrated in Fig. 1.
Referring to Fig. 1, x1 ... xn are unidirectional inputs,
indicated by arrows, and the signal flows towards the
neuron's output O. The neuron output signal O is given
by the following relationship:
\[
O = f\!\left(\sum_{j=1}^{n} w_{j}\,x_{j}\right) \qquad (1)
\]
where wj is the weight vector, and the function f(net)
is referred to as an activation (transfer) function. The
variable 'net' is defined as the scalar product of the
weight and input vectors,
\[
\mathrm{net} = \mathbf{w}^{T}\mathbf{x} = \sum_{j=1}^{n} w_{j}\,x_{j} \qquad (2)
\]
where 'T' denotes the transpose of a matrix, and the
output value O is computed as
\[
O = f(\mathrm{net}) = \begin{cases} 1, & \mathrm{net} \ge \theta \\ 0, & \mathrm{net} < \theta \end{cases} \qquad (3)
\]
where θ is called the threshold level; this type of
node is called a linear threshold unit.
Fig. 1 Architecture of a Neural Network (NN)
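As a minimal numerical illustration of Eqs. (1)-(3) (a hypothetical NumPy sketch with made-up input and weight values, not part of the original simulations), a single linear threshold unit can be evaluated as follows:

```python
import numpy as np

def linear_threshold_unit(x, w, theta):
    """Single artificial neuron: net = w^T x (Eq. 2), hard-threshold output (Eq. 3)."""
    net = np.dot(w, x)              # weighted sum of the inputs
    return 1 if net >= theta else 0

x = np.array([0.5, -1.2, 0.8])      # inputs x1..xn (illustrative values)
w = np.array([0.4, 0.1, 0.7])       # weight vector w1..wn (illustrative values)
print(linear_threshold_unit(x, w, theta=0.3))   # neuron output O -> 1
```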
Learning in NNs is achieved through particular training
algorithms, developed in accordance with learning laws
that are assumed to simulate the learning mechanisms of
biological systems [11]. A neural network has to be
configured such that the application of a set of inputs
produces the desired set of outputs, which can be
achieved by properly adjusting the weights. The neuron
impulse is then computed as the weighted sum of the
input signals, transformed by the transfer function. This
process is called learning or training. The learning
capability of an artificial neuron is achieved by
adjusting the weights in accordance with the chosen
learning algorithm. Learning can generally be divided
into supervised and unsupervised learning. In supervised
learning, the neural network is fed with teaching
patterns and trained by letting it change its weights
according to a learning rule such as the back-propagation
rule, over a massively parallel, interconnected network
of neurons in which the neurons are organized into
layers with no feedback or lateral connections.
The training algorithm used in this work is the
Levenberg-Marquardt (LM) algorithm, a variation of
Newton's method designed for minimizing functions
that are sums of squares of nonlinear functions.
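As an illustrative sketch of this idea (not the authors' MATLAB implementation; the toy data, the network size and the use of SciPy's Levenberg-Marquardt least-squares solver are assumptions made only for the example), a small one-hidden-layer network can be fitted by minimizing a sum of squared residuals with LM:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic 1-D regression data used purely for illustration; the paper's
# actual channel-occupancy time series is not reproduced here.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(50)

n_in, n_hidden = 1, 5

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = p[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = p[i:i + n_hidden]; i += n_hidden
    W2 = p[i:i + n_hidden]; i += n_hidden
    b2 = p[i]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)   # hidden layer with tanh activation
    return h @ W2 + b2         # linear output neuron

def residuals(p):
    """LM minimizes the sum of squares of these residuals."""
    return forward(p, X) - y

p0 = 0.1 * rng.standard_normal(n_in * n_hidden + 2 * n_hidden + 1)
result = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
print("final sum-of-squares error:", np.sum(result.fun ** 2))
```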
3. PROBABILISTIC NEURAL NETWORK (PNN)
ARCHITECTURE
The Probabilistic Neural Network (PNN), first proposed
by Donald F. Specht [12], is closely related to
nearest-neighbour classifiers. This network provides a
general solution to pattern classification problems in
the manner of Bayesian classifiers [13, 14]. PNN is a
feed-forward neural network and predominantly a
classifier, since it can map any input pattern to one of a
number of classes. PNN has become an effective tool
for solving many classification problems [15, 1, 16, 17,
18].
Network structure determination is an important issue
in PNN-based pattern classification. In this study, a
supervised network structure determination algorithm is
discussed. Despite its structural complexity, PNN has
only a single training parameter: the smoothing
parameter of the Probability Density Functions (PDFs)
that are used to activate the neurons in the pattern layer.
The training of a PNN therefore requires only a single
input-output signal pass in order to compute the
network response.
The PNN [19] architecture is composed of many
interconnected processing units, or neurons, organized
in successive layers, and is often used in classification
problems. When an input is presented, the pattern layer
computes the distance from the input vector to each of
the training input vectors, producing a vector whose
elements indicate how close the input is to each training
input. The summation layer sums the contribution for each class of
inputs and produces its net output as a vector of
probabilities. Finally, a compete transfer function
applied to the output of the third layer picks the
maximum of these probabilities, producing a 1 (positive
identification) for that class and a 0 (negative
identification) for the non-targeted classes in the output
layer.
The layered structure of a PNN is shown in Figure 2.
Fig.2 Architecture of PNN
In the input layer, each neuron represents a predictor
variable. The input layer performs no computation
other than standardizing the range of the values
(subtracting the median and dividing by the
interquartile range); the input neurons then feed the
standardized values to each of the neurons in the
pattern layer. The
pattern layer consists of K pools of pattern nodes. Each
node in the pattern layer is connected with every node
in the input layer. The kth pool in the pattern layer
contains Nk pattern nodes. On receiving a pattern z
from the input layer, a neuron of the pattern layer
computes its output as
\[
\phi_{ij}(z) = \frac{1}{(2\pi)^{\varepsilon/2}\sigma^{\varepsilon}}\exp\!\left(-\frac{(z-z_{ij})^{T}(z-z_{ij})}{2\sigma^{2}}\right) \qquad (4)
\]
where ε denotes the dimension of the pattern vector z,
σ is the smoothing parameter and zij is the neuron's
stored vector. The pattern layer contains one neuron for
each case in the training data set; it stores the values of
the predictor variables for the case along with the target
value. A pattern (hidden) neuron computes the
Euclidean distance of the test case from the neuron's
centre point and then applies the RBF kernel function
using the sigma value, producing a vector whose
elements indicate how close the input is to each training
input. A Gaussian function is often chosen as the
activation function, acting as a radial basis function in
the pattern layer.
For PNN networks [19] there is one pattern neuron for
each category of the target variable. The actual target
category of each training case is stored with each
hidden neuron; the weighted value coming out of a
hidden neuron is fed only to the pattern neuron that
corresponds to the hidden neuron's category. The
pattern neurons add the values for the class they
represent; this is carried out in the summation layer.
The output layer compares the weighted votes for each
target category accumulated in the summation layer and
uses the largest vote to predict the target category.
The summation layer consists of K nodes; each
summation node receives the outputs from the pattern
nodes associated with a given class. The summation
layer neurons estimate the likelihood of the pattern
belonging to a class by summing and averaging the
outputs of all pattern neurons that belong to that class,
given by
\[
p_{i}(z) = \frac{1}{(2\pi)^{\varepsilon/2}\sigma^{\varepsilon}}\,\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\exp\!\left(-\frac{(z-z_{ij})^{T}(z-z_{ij})}{2\sigma^{2}}\right) \qquad (5)
\]
where Ni denotes the total number of samples in class
Cli. If the prior probabilities for each class are the
same, and the losses associated with making an
incorrect decision are the same for each class, the
decision layer classifies the pattern in accordance
with Bayes' decision rule based on the outputs of all
the summation layer neurons, given by
\[
\hat{Cl}(z) = \arg\max_{i}\,\{p_{i}(z)\}, \qquad i = 1, 2, \ldots, x \qquad (6)
\]
where \(\hat{Cl}(z)\) denotes the estimated class of the
pattern and x is the total number of classes in the
training samples.
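The following Python/NumPy sketch (an illustrative implementation under the equal-prior, equal-loss assumption stated above; the class PNN, the chosen σ and the synthetic data are hypothetical and not taken from the paper) ties Eqs. (4)-(6) together: stored training patterns form the pattern layer, per-class kernel averages form the summation layer, and an arg-max gives the decision layer:

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network following Eqs. (4)-(6)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # single smoothing parameter of the PDFs

    def fit(self, X, y):
        # "Training" is a single pass: each training sample becomes a
        # pattern neuron stored together with its class label.
        self.classes_ = np.unique(y)
        self.patterns_ = {c: X[y == c] for c in self.classes_}
        self.dim_ = X.shape[1]
        return self

    def class_densities(self, z):
        """Summation-layer outputs p_i(z) of Eq. (5) for one input vector z."""
        norm = (2 * np.pi) ** (self.dim_ / 2) * self.sigma ** self.dim_
        dens = []
        for c in self.classes_:
            d2 = np.sum((self.patterns_[c] - z) ** 2, axis=1)  # squared distances
            kern = np.exp(-d2 / (2 * self.sigma ** 2))          # Eq. (4) kernels
            dens.append(kern.mean() / norm)                     # class average
        return np.array(dens)

    def predict(self, Z):
        """Decision layer, Eq. (6): pick the class with the largest density."""
        return np.array([self.classes_[np.argmax(self.class_densities(z))]
                         for z in Z])

# Illustrative two-class example (synthetic data, not the paper's channel data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
model = PNN(sigma=0.7).fit(X, y)
print(model.predict(np.array([[0.2, -0.1], [2.8, 3.1]])))  # expected: [0 1]
```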
The chief advantages of the PNN are high-speed
training, an inherently parallel structure, guaranteed
convergence to an optimal classifier as the size of the
representative training set increases, and the fact that
training samples can be added or removed without
extensive retraining [20]. The PNN learns more quickly
than many neural network models and has succeeded on
a variety of applications. On the other hand, its main
disadvantages are that it is not as general as
back-propagation, requires a large memory, is slower in
execution, and requires a representative training set,
compared with other types of NNs.
4. COMPARISON BETWEEN NN AND PNN
ARCHITECTURE RESULTS
The proposed work is based on NNs, motivated by the
fact that NNs differ widely from conventional
information processing in that they are able to learn
from prior data and can therefore perform better in
cognitive tasks. Two NN-based learning schemes have
been discussed here, the basic NN and the PNN. While
the former aims at predicting the channel state and
applying such learning schemes to future cognitive
radio based systems, the latter shows that such a
learning scheme should be flexible enough to
incorporate further information in the learning process,
which is an objective merit of the approach. The
proposed NN-based schemes concern the channel state
prediction of a specific radio configuration through a
training process. To be proficient, a cognitive radio
network must be able to select all the available channel
states and train on them without exception; this could
be implemented by deploying multiple parallel training
neural networks. Furthermore, it must be noted that the
NN-based channel prediction can be further fed with
other information that might crucially affect the
throughput, idle channel prediction for the secondary
user, etc. This could be a subject of our future research.
Learning is a continuous process during which the
NN's parameters are adapted according to the
gravitational search algorithm. In order to increase
validity and robustness, NNs should be explored in
more depth, especially taking into account more
realistic input time series and environment situations.
Cognitive radio systems have gained very high attention
from the wireless research community in recent years
and will continue to do so in the upcoming years; we
advocate implementing learning mechanisms like the
ones presented in this paper to assist cognitive radios in
deciding on their operating radio configuration.

In this work, we presented an experimental evaluation
of the performance of the PNN and the NN. We
performed a comparative study of the PNN and the NN
learning technique, and the PNN outperformed the NN
under the evaluation criteria adopted. This study also
presented a new method of hybridizing artificial neural
networks with the gravitational search algorithm.
Although the obtained results have acceptable accuracy,
increasing the amount of training data can further
improve the error function. Future work can focus on
comparing the effect of changing the type of neural
network or optimization technique on minimizing the
error function.
Figures 3 to 5 give the comparative plots for the NN
and the PNN. The comparison of our technique is made
with respect to HMM and LM-based NN [21, 22, 23].
The results show the throughput, SU and SUimp as the
number of channels is varied from two to ten. Fig. 3
shows that the throughput increases with the number of
channels. Three, five and nine channel states have been
considered, each with 100 time slots, and the
corresponding plots are shown in Figs. 3a, 3b and 3c
respectively. Fig. 4 shows that SU decreases as the
number of channels increases; we can observe that the
PNN technique achieves higher values of SU. Fig. 5
shows that SUimp increases with the number of
channels. Higher SU and SUimp values show the
effectiveness of the technique. The highest SU and
SUimp values achieved by our technique are about
0.585 and 0.52 respectively.
Fig. 3 Throughput performance with varying channel states: a) 3 channels, b) 5 channels, c) 9 channels
Fig. 4 SU with varying channel states
Fig. 5 SUimp with varying channel states
5. SUMMARY AND CONCLUSIONS
Despite numerous research efforts on probabilistic
neural networks, there is still room for improvement in
spectrum sensing as far as channel prediction is concerned. In
this study, a supervised PNN structure determination
algorithm has been proposed. We have chosen the PNN
because of its implementation simplicity and high
computational speed in the training stage, when
compared to other algorithms. A key feature of this
supervised learning algorithm is that the requirements
on the network size and training of the predictive
channels are directly incorporated in the methodology.
As a consequence, the proposed algorithm often leads
to a fairly effective network structure with satisfactory
classification accuracy. This paper also introduces and
evaluates learning schemes that are based on NNs and
can be used for discovering the performance (e.g.
throughput) that can be achieved in cognitive radio
networks. The rate of channel prediction can thus be
used to determine the efficiency of the proposed
training algorithm. In order to design and use an
appropriate neural network structure, performance
analysis has been carried out in a simulation
environment using MATLAB. The results have been
compared and discussed in order to show the benefits of
incorporating artificial neural network based learning
schemes into cognitive radio systems.
REFERENCES
[1] S. Haykin, “Cognitive radio: brain-empowered
wireless communications,” IEEE Journal on Selected
Areas in Communications, vol. 23, no. 2, pp. 201-220,
2005.
[2] N.Baldo and M. Zorzi, “Learning and Adaptation
in Cognitive Radios Using Neural Networks,”.in 5th
IEEE Consumer Communications and Networking
Conference (CCNC), 2008.
[3] A. He, K. K. Bae, T. R. Newman, J. Gaeddert, K.
Kim, R.Menon,L. Morales, J. Neel, Y. Zhao, J. H.
Reed, and W.H. Tranter, “A survey of artificial
intelligence for cognitive radios,” IEEE Trans.
Veh.Technol.,vol. 59, no. 4, pp. 1578–1592, May 2010.
[4] T. W. Rondeau and C. W. Bostian, “Cognitive
Techniques: Physical and Link Layers in Cognitive
Radio Technology”, B. Fette, 1st ed., (Elsevier), 2006
[5] M. Negnevitsky, Artificial Intelligence : A Guide
to Intelligent Systems. New York : Addision-Wesley
Pearson Education Limited, 2005.
[6] Clancy, C.; Hecker, J.; Stuntebeck, E.; O'Shea, T.,
"Applications of Machine Learning to Cognitive Radio
Networks," Wireless Communications, IEEE, vol.14,
no.4, pp.47-52, August 2007
[7] Le, B., Rondeau, T. W., and Bostian, C. W. 2007.
Cognitive radio realities. Wirel. Commun.Mob.
Comput. 7, 9 (Nov. 2007), 1037-1048.
[8] D.T. Pham, E. Koc, A. Ghanbarzadeh, S. Otri.
Optimisation of the weights of multi-layered
perceptrons using the bees algorithm. Proceedings
of 5th International Symposium on Intelligent
Manufacturing Systems. Sakarya University,
Department of Industrial Engineering, pp. 38–46, 2006.
[9] T. W. Rondeau and C. W. Bostian, “Cognitive
Techniques: Physical and Link Layers in Cognitive
Radio Technology”, B. Fette, 1st ed., (Elsevier),
2006
[10] A.S. Yilmaz, Z. Ozer, Pitch angle control in wind
turbines above the rated wind speed by multi-layer
percepteron and Radial basis function neural networks.
Expert Systems with Applications. 2009; 36: pp. 9767–
9775.
[11] Z. Zhang and X. Xie, “Application research of
evolution in cognitive radio based on GA,” 3rd IEEE
Conference on Industrial Electronics and Applications
(ICIEA), 2008.
[12] D.F. Specht, “Probabilistic neural networks”,
Neural Networks, vol. 3, no. 1, pp. 109-118, 1990.
[13] G.S. Gill and J. S. Sohal, “Battlefield Decision
Making: A Neural Network Approach” Journal of
Theoretical and Applied Information Technology, pp.
697 – 699, 2009.
[14] S.N. Sivandam, S. Sumathi, and S.N. Deepa.,
“Introduction to Neural networks using MATLAB 6.0”,
Tata Mcgraw-Hill Publishing Company Limited,
Copyright 2006.
[15] You-En Lin, Kun-Hsing Liu, and Hung-Yun
Hsieh, "On Using Interference-Aware Spectrum
Sensing for Dynamic Spectrum Access in
Cognitive Radio Networks", IEEE Transactions On
Mobile Computing, Vol. 12, No. 3, March 2013.
[16] I. F. Akyildiz, W. Lee, M. C. Vuran, and S.
Mohanty, “Next generation/dynamic spectrum
access/cognitive radio wireless networks: A survey,”
Computer Networks, vol. 50, no. 13, pp. 2127–2159,
Sep. 2006.
[17] P. Chang-hyun, K. Sang-won, and L. Sun-rain,
“HMM based channel status predictor for cognitive
radio,” in Proceedings of Asia-Pacific Microwave
Conference, 2007.
[18] Holliday D, Resnick R and Walker J,
“Fundamentals of physics”, John Wiley and Sons,
1993.
[19] Mao, K.Z., Tan, K.-C. and Ser W., "Probabilistic
neural-network structure determination for pattern
classification", IEEE Transactions on Neural Networks,
Vol.11 , No.4, pp.1009-1016, 2000.
[20] P. Wasserman, Advanced Methods in Neural
Computing. New York, NY: Van Nostrand Reinhold,
1993.
[21] Pavithra Roy P., and Muralidhar M., “Hidden
Markov Model based Channel state prediction in
Cognitive Radio Networks” International Journal of
Engineering Research & Technology, pp. 391-394,
Vol. 4, Issue 2, 2015.
[22] Pavithra Roy P., and Muralidhar M., “Throughput
Maximization in Cognitive Radio Networks using
Levenberg-Marquardt Algorithm", International
Journal of Advanced Research in Computer and
Communication Engineering, pp. 315-319, Vol. 4,
Issue 2, 2015.
[23] Pavithra Roy P and Muralidhar M., “Channel
State Prediction in a Cognitive Radio Network using
Neural Network Levenberg-Marquardt Algorithm”
International Journal of Wireless Communications and
Network Technologies, pp. 24-29, 4(2), 2015.
BIOGRAPHY
Pavithra Roy P. obtained her
Bachelor's degree in Electronics
and Communication Engineering
from Jawaharlal Nehru
Technological University,
Hyderabad, AP, India. Then she
obtained her Master's degree in
Electronics and Communication Engineering with
specialization in Digital Electronics and
Communication Systems from the Jawaharlal Nehru
Technological University, Anantapur, AP, India.
Currently, she is an Assistant Professor at the Faculty
of Electronics and Communication Engineering,
Vemana Institute of Technology, Bangalore, India. Her
specializations include Communication Systems, Logic
Design and Neural Networks. Her current research
interests are Wireless Communications, Cognitive
Radio Systems, and Spectrum Utilization.
Dr. Muralidhar M. (Muralidhar
Mahankali) received B.E., M.E. and
Ph.D. degrees from Sri Venkateswara
University, Tirupati, Andhra Pradesh,
India. He worked in different cadres in
the department of Electrical &
Electronics Engineering in Sri
Venkateswara University, Tirupati, Andhra Pradesh,
India from 1977 to 2011. Currently he is working as
Principal and Professor in EEE in Sri Venkateswara
College of Engineering & Technology (Autonomous),
Chittoor, Andhra Pradesh, India. His research interests
are Wireless sensor networks and heterogeneous
wireless networks. He is a senior Member of the
Institution of Engineers (India) and a Member of the
Chartered Engineers (India).