[IEEE 2014 IEEE Students' Technology Symposium (TechSym), Kharagpur, 28 February - 2 March 2014] Proceedings of the 2014 IEEE Students' Technology Symposium. TS14P01, 978-1-4799-2608-4/14/$31.00 ©2014 IEEE.

ROBOG: Robo Guide with simple learning strategy

Harish Y ([email protected])
Kranthi Kumar R ([email protected])
Irfan Feroz G.MD ([email protected])
Chakravarthi Jada ([email protected])
Anil Kumar V ([email protected])
Mounika Mesa ([email protected])

Rajiv Gandhi University of Knowledge Technologies, Basar, India.

Abstract: This paper presents ROBOG, an experimental effort in building an autonomous robot that can learn the navigation system of a known terrain and use it for guiding. It is equipped with an Artificial Neural Network for the task of decision making. ROBOG was trained to learn the geographical structure of a floor in an academic block of RGUKT and was tested successfully in guiding a person from anywhere to any specific classroom in the trained region. The ANN is trained with the Error Back Propagation algorithm and with Particle Swarm Optimization, and results are provided showing the superiority of PSO over conventional EBP for this task. The robot can easily be trained for other types of structures as well. An outlook on future work and extensions is given.

Keywords: RoboGuide, Learning, Artificial Neural Networks, Multi Layer Perceptron, Error Back Propagation Algorithm, Particle Swarm Optimization, Branch, Node

I. INTRODUCTION

A robot can be defined as an automated machine that requires minimal human intervention in the field, with the exception of the initial programming. Any robot basically consists of actuators, sensors, processors and controllers. The advancement in robotics has been possible largely because of the speed, portability, affordability and ease of interfacing of microprocessors (controllers) with sensors and actuators. So far, the scope of robotics has been to perform work based on pre-programmed instructions, but the ultimate goal is to make robots autonomous, i.e., able to sense their surroundings and behave accordingly.

Robots are widely used because they provide excellent precision without compromising efficiency, and they can be used in unsafe environments without boredom. However, robots are not suited for tasks involving creativity, innovation or independent thinking. They cannot (as yet) learn from mistakes or take complicated decisions on their own. In short, they are not adaptable.

One unique feature of humans is the ability to create new things. Even with this capability, we have not succeeded in creating a "universal intelligent machine". However, we are capable of using robots for desired applications with minimal human assistance. They have long been used in industrial applications, and recently there has been an increase in the use of robots for surgery in medical applications. Robots for assisting people are now an active research topic.

One of the areas where robots can assist people is in guiding them. For this task, a robot programmed with the navigation information of a known terrain has to interact with people, sense the surroundings and execute instructions according to the user's choice. This can be a futuristic line of work, because a personal assisting robot would be helpful as a navigational tool in exploring unknown terrain. The Global Positioning System is widely used for navigation nowadays, but its error is of the order of tens of meters. A robot installed with a navigation map of a local space can be a viable alternative for autonomous navigation. In industrial applications it can be used for local transportation, and it can also serve for continuous monitoring and surveillance.

II. LITERATURE SURVEY

The ability of animals to learn, or to react adequately to an unperceived situation, is considered a basic property of living systems. All living organisms acquire most of their knowledge through learning by experience, and enormous research effort is going into making machines learn from their experience. Machine Learning (ML) algorithms show a way to achieve this. The essence of ML is to answer: "How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?" [1]. This covers a broad range of learning tasks. However, models of living systems and mechanisms of learning are far from clear.

Precisely, a machine learns with respect to a particular task T, performance metric P and type of experience E if the system reliably improves its performance P at task T following experience E. Depending on T, P and E, learning tasks differ: data mining, autonomous navigation, pattern recognition, etc. [1]. One way of incorporating learning from examples is to study the human learning system and mimic it.

The development of self-learning systems capable of acquiring knowledge from the environment is one of the central problems in the theory of Artificial Intelligence (AI). Models of the adaptive behavior of agents must a priori include all properties they may ever possess; new properties do not appear. Hence, present-day learning systems cannot set themselves new tasks or adapt to an arbitrary environment [8].

All modern AI systems are capable of learning only through information about the environment. A few such techniques are heuristic methods, recursive search, expert systems and machine learning [8]. They involve a pre-learning phase in which manually fed data is used for offline training; the algorithm may learn and generalize the surrounding environment from these training samples. In a few of the nature-inspired techniques, the expected generalization performance is not comparable to human performance. This may be because we perceive the environment as patterns, whereas computers take it as data. The mere ability of a computer to perform digital logic operations at high speed is not enough to achieve adaptability [6]. Though all these algorithms have their own performance limitations (limiting their areas of application), importantly they reduce the human effort needed to develop networks for new situations, eliminating the need for hand-programmed training examples.

Arthur Arsenio of MIT worked on teaching machine-learning strategies to a robotic "child" through social interactions, focusing on developing a humanoid robot's perceptual system through learning aids so that robots can learn about the world according to a child's developmental phases [2].

Artificial Neural Networks are used in robotics to give appropriate directions for specific robot tasks. A robotic manipulator can use supervised learning to pick up an object: data received from sensors is processed to calculate joint angles, and a nonlinear function is learned that yields a value of θ close to the desired θ0, giving highly accurate tracking by the end-effector. Based on the positions of the target (XTarget) and the robot (XRobot), joint angles can be calculated by a neural controller function [3].

Autonomous driving has the potential to be an ideal domain for a supervised learning algorithm, since a teaching signal is readily available in the form of the human driver's current steering direction. The Autonomous Land Vehicle In a Neural Network (ALVINN), designed by Dean A. Pomerleau at Carnegie Mellon University, uses an Artificial Neural Network architecture. With training, it can drive under different circumstances (traffic levels, road types) at a speed of 55 miles per hour. It uses a single feed-forward neural network trained with the Back Propagation algorithm: a 30x32-unit input layer takes camera information, each unit is connected to a 4-unit hidden layer, and the hidden layer is connected to an output layer that controls the steering direction [4].

The task we consider here is to create a guiding robot that can learn about its environment, showing how primitive decision-making and memory, with suitable user interaction, can be used to create a futuristic application.

III. ARTIFICIAL NEURAL NETWORKS AND PARTICLE SWARM OPTIMIZATION

A. Artificial Neural Networks

An Artificial Neural Network (ANN) is inspired by biological neural networks. It is a massively parallel distributed processor made up of simple processing units that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two aspects:

i. Knowledge is acquired by the network from its environment through a learning process (updating weights).

ii. Inter-neuron connection strengths, known as synaptic weights, are used to store acquired knowledge.

A neural network derives its computing power through its parallel distributed structure and its ability to learn and generalize. Here, generalization refers to producing reasonable outputs for inputs not encountered during the learning (training) phase. Neural networks are used to model complex relationships between inputs and outputs or to find patterns in data.

Figure 1: Multi Layer Perceptron Network Architecture

The Multi Layer Perceptron (MLP), shown in Fig 1, is a feed-forward neural network consisting of an input layer, one or more hidden layers and one output layer. The MLP is a generalization of the Single Layer Perceptron. The input signal propagates from one layer to the next in the forward direction, hence the name feed-forward neural network. A very popular error-correction learning rule known as Back Propagation is used to train the network.
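As a concrete illustration of this forward signal propagation, the sketch below passes an input through one hidden layer and one output layer with tanh activations. This is not the authors' code; the weights are arbitrary placeholders, not trained values.

```python
import math

def forward(x, W_hidden, W_out):
    """Feed-forward pass: each layer computes tanh of a weighted sum of the
    previous layer's outputs (no bias terms, for brevity)."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_hidden]
    return [math.tanh(sum(w * hi for w, hi in zip(row, h))) for row in W_out]

# Placeholder weights: 2 inputs -> 2 hidden units -> 1 output.
y = forward([1.0, 0.5], W_hidden=[[0.2, -0.1], [0.4, 0.3]], W_out=[[0.5, -0.5]])
```

Because of the tanh output activation, every output is squashed into (-1, 1), which is one reason such smooth nonlinearities make the MLP trainable by gradient methods.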

By adding one or more hidden layers, the network is enabled to extract higher-order statistics from its input. In a loose sense, the network acquires a global perspective despite its local connections, as it exhibits a high degree of connectivity; any change in connectivity entails a change in the synaptic connections.

Distinctive characteristics of MLP:

i. The non-linear activation function at the output of each neuron is smooth and differentiable everywhere, which enables the MLP to classify and learn patterns even in non-linearly separable data, making it a powerful tool in machine learning.

ii. The network contains hidden layers of neurons that enable it to learn complex tasks by extracting progressively more meaningful features from the input patterns [5].

B. Error Back Propagation

Basically, Error Back Propagation (EBP) learning consists of two passes through the different layers of the neural network. In the forward pass, the effect of an input propagates through the network layer by layer, and at the output layer a set of outputs is produced that constitutes the actual response of the network. This actual response is subtracted from the desired (target) response to produce an error signal. The error is then propagated backward through the network (backward pass) and used to adjust the weights with an error-correction rule. EBP essentially uses a gradient-descent approach to update the weights so that the actual response of the network moves closer to the desired response in a statistical sense.


Normally, a constant η, the learning rate, is defined to determine how quickly learning should occur i.e., how large the steps should be in the search for the best set of weights. If η is too small, the training could take a long time. If it is too large, there is a risk of overstepping the best set of weights. In addition to a learning rate term, it is often useful to include a momentum term, α, that gives extra weight to continue the search in the same direction. This term helps avoid oscillation during the search [6].
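The weight update with a learning rate and a momentum term can be sketched in a few lines. This is an illustrative fragment, not the authors' MATLAB code; the gradient and parameter values below are placeholders chosen only to show the arithmetic.

```python
def update_weight(w, grad, prev_delta, eta=0.0005, alpha=0.9):
    """One gradient-descent step: move against the gradient (scaled by the
    learning rate eta) plus a momentum term alpha that carries over a fraction
    of the previous step, damping oscillation. Returns (new weight, delta)."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

w, prev = 0.5, 0.0
w, prev = update_weight(w, grad=2.0, prev_delta=prev)  # w is now 0.499
```

On the next call, the momentum term alpha * prev adds to the step whenever consecutive gradients point the same way, which is exactly the "extra weight to continue the search in the same direction" described above.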

The major drawback of EBP is the possibility of getting stuck at a local optimum and failing to find the best (global) optimum.

C. Particle Swarm Optimization

Optimization problems are of real interest in many fields because finding the best solution among all possible solutions often turns out to be hard. Hence, near-optimal solutions become the choice in many cases, and Particle Swarm Optimization (PSO) is one of the preferred methods for finding such near-optimal solutions.

PSO is a swarm-intelligence-based stochastic optimization technique inspired by bird-flocking behavior. The technique, first described by James Kennedy and Russell C. Eberhart in 1995, originates from two separate concepts: the idea of swarm intelligence, based on observation of the swarming habits of certain kinds of animals (such as birds and fish), and the field of evolutionary computation [7].

The PSO algorithm works by simultaneously maintaining several candidate solutions in the search space, each particle being treated as an N-dimensional point. During each iteration, each candidate solution is evaluated by the objective function being optimized, which determines the fitness of that solution; each candidate can be thought of as a particle flying in the fitness landscape. Each agent has its own position and velocity, which are randomly initialized. Each particle keeps track of its best position so far, called the particle best ("pbest"), and the best among all particle bests is maintained as the global best ("gbest"). The velocity and position of each agent are updated using pbest and gbest so that each agent moves toward a global or local optimum. This paradigm can be implemented in a few lines of code: it requires only primitive mathematical operators and is computationally inexpensive in terms of both memory and speed.

PSO algorithm can be described as follows:

1. Evaluate the fitness of each particle.
2. Update the particle-best and global-best positions.
3. Update the velocity and position of each particle.

In PSO, the objective function used to optimize the weights is the mean square error of the neural network over the given training patterns.
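A minimal PSO loop following the three steps above might look like the sketch below. This is illustrative only: the swarm parameters are arbitrary (not those of Table III), and a simple sphere function stands in for the network's MSE objective.

```python
import random

def pso(f, dim=2, particles=20, iters=100, w=0.35, c1=2.0, c2=2.0):
    """Minimize f over dim dimensions with the pbest/gbest scheme: inertia w,
    and attraction coefficients c1 (toward pbest) and c2 (toward gbest)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    gbest_val = min(pbest_val)
    gbest = pbest[pbest_val.index(gbest_val)][:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])           # step 1: evaluate fitness
            if val < pbest_val[i]:    # step 2: update pbest / gbest
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))
```

For training a network, f would instead unpack the particle's coordinates into the 52 weights and return the MSE over the training patterns.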

There are two phases in using a neural network for robotics: first the network must be trained, and then it can be tested. Training is accomplished by adjusting the weights of the network so as to map each input pattern to the desired output pattern, using the techniques mentioned above or any other. That is, the neural network should learn a function that maps inputs to outputs; this function is represented by the weights of the network. Once the network is trained, the function is evaluated by processing the inputs without updating the weights and taking the corresponding action. The following section explains the implementation details of ROBOG.

IV. ROBOG

ROBOG is a mobile robot designed to learn an environment, specifically one floor of the academic block at RGUKT, Basara. The floor has 12 classes, and ROBOG needs to guide a person to any class from anywhere. As this is a β-project, we created a virtual floor layout in the Electrical Technology Lab at RGUKT, shown in Fig 2, on which the experiments explained in the following subsections were performed.

Figure 2: Virtual map of one floor of Academic Block

Figure 3: Block Diagram of one floor of Academic Block

In Fig 3, circles represent classrooms and connecting lines represent branches, which in turn represent the black-strip paths, labeled in sequence. ROBOG is fed the initial branch number and the destination class number, and the ANN is expected to generate the required output direction for navigation, which is parallel-ported to the motors to steer the vehicle.

A. Vehicle Capabilities

The robot designed in this work is a prototype built by us. It is a 3-wheeled robot with 4 infrared (IR) sensors for sensing the surroundings. ROBOG can go to any of the 12 classes (destinations) shown in Fig 3 using its learned decision-making capacity. The components used in the design of the mobile robot are listed with their specifications in Table I.


Figure 4: ROBOG Vehicle

TABLE I. COMPONENT SPECIFICATIONS

Component            Technical Specification
DC Motors            12 V, 100 RPM
IR Transmitters      4-20 mA, 13-28 mV
Photo Diode          4-20 mA, 13-28 mV
Arduino Mega 2560    8-bit, 16 MHz with auto-reset, using an stk500v2 boot loader
L293D                36 V, 1.2 A
Power Supply         9 V battery

B. Implementation

As mentioned in Section II, the aim is to make the robot a navigation tool in the academic block of RGUKT, Basara. The virtual map consists of 12 classes arranged symmetrically, simulating the actual environment. A black strip is used as the route from one location to another. Among the 4 IR sensors, 2 are used to detect junctions and the other 2 to track the path.
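The division of labor among the four sensors suggests control logic along the following lines. This is a hypothetical sketch, since the actual Arduino code is not given in the paper; the sensor flags and action names are placeholders.

```python
def step(front_left_on_line, front_right_on_line, side_left_dark, side_right_dark):
    """One control tick: the two side sensors flag a junction (where control is
    handed to the neural network), while the two front sensors keep the robot
    centered on the black strip between junctions."""
    if side_left_dark and side_right_dark:
        return "JUNCTION"      # let the trained ANN pick the direction
    if front_left_on_line and not front_right_on_line:
        return "STEER_LEFT"    # drifting right: correct toward the strip
    if front_right_on_line and not front_left_on_line:
        return "STEER_RIGHT"   # drifting left: correct toward the strip
    return "FORWARD"
```

The key design point is that line following needs no learning at all; the ANN is consulted only at junctions, which keeps the learned decision space small.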

The robot is coded on the Arduino Mega 2560 micro-controller to follow the path until it encounters a junction. When a junction is detected, the trained neural network gives an instruction to the mobile robot based on the destination specified by the user. The Arduino micro-controller is interfaced with MATLAB on a laptop over a serial communication cable.

The task could have been handled better with a web-cam, but a line-following robot was used to reduce the effort on hardware and environmental sensing and to keep the focus on learning in the neural network.

C. Neural Network on the Vehicle

Training the network means adjusting the weights so that input patterns are mapped to the output pattern corresponding to the correct action. The neural network inputs are the branch number and the destination class (an advantage in keeping the entire work generalized). The output actions in this case are Left, Right, Forward and Stop. The network outputs are represented according to the inputs of the motors, as specified in Table II.

In Table II, the first two digits of a motor code represent the motor inputs for the two terminals of the left wheel and the next two those of the right wheel; 0 indicates grounded voltage and 1 indicates an analog high value of 9 volts.

TABLE II. MOTOR CODES FOR NEURAL NETWORK OUTPUT

Steering Direction    Motor Code
Left                  '0010'
Right                 '1000'
Forward               '1010'
Stop                  '0000'
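Table II amounts to a simple lookup from the network's decision to the four motor-terminal levels; a hypothetical sketch (the names here are illustrative, not from the paper):

```python
# Motor codes from Table II: digits 1-2 drive the left wheel's two terminals,
# digits 3-4 the right wheel's; 0 = grounded, 1 = analog high (9 V).
MOTOR_CODES = {"Left": "0010", "Right": "1000", "Forward": "1010", "Stop": "0000"}

def terminal_levels(direction):
    """Return the four terminal levels for a steering direction."""
    return [int(bit) for bit in MOTOR_CODES[direction]]
```

For example, "Forward" drives the first terminal of each wheel high, so both wheels turn in the same sense and the robot moves straight ahead.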

There are 35 branches and 12 classes, and thus 35x12 sets of input patterns. Training the ANN with such a large data set is time consuming, i.e., the ANN takes more time to learn. One advantage we have is that the block is symmetrical. Exploiting this, the network was trained on one quarter of the entire network (3 classes and 11 branches) and hand-coded for the rest of the block using the symmetry of the block.

One difficulty faced during this project was thresholding the IR sensors, as their readings vary with the ambient light intensity. This was solved by completely shielding the sensors from the external environment.

The other difficulty was training the neural network. The network was trained in batch mode using EBP and PSO, testing was done over the entire created floor, and the observed results and conclusions are given in the next section.

V. RESULTS AND SCOPE OF FUTURE WORK

A. Results and Discussion

For training ROBOG we used MATLAB 2012a on a machine with an Intel i5 processor (3 GHz) and 4 GB RAM. For testing we used a machine with a dual-core AMD processor (2.2 GHz) and 2 GB RAM.

The neural network chosen for this application has one input layer with 3 input neurons, 3 hidden layers with 3 neurons in each layer and one output layer with 4 neurons (architecture: 3-3-3-3-4); the architecture is shown in Fig 1. The total number of free parameters to be learned is 52. The ANN was trained on one quarter of the entire network with 17x7 data samples, using both EBP and PSO. Both algorithms were run with the terminating criterion of reaching a tolerable error; the parameters chosen for the NN with EBP and the NN with PSO are given in Table III. In both algorithms the initial weights were drawn randomly from a normal distribution. Fig 5 shows the mean square error (MSE) over 5000 iterations for EBP and PSO. It is observed that PSO's error minimization converges faster than EBP's, and that the EBP solution for the weights gets stuck at a local optimum.
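The quoted figure of 52 free parameters matches the 3-3-3-3-4 architecture if a bias term is counted at every hidden and output neuron (the weights alone give only 39); a quick sanity check, written by us as an assumption about how the count was obtained:

```python
def count_parameters(layers):
    """Total free parameters of a fully connected feed-forward network:
    weights between consecutive layers plus one bias per non-input neuron."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

total = count_parameters([3, 3, 3, 3, 4])  # 52: (9+3) + (9+3) + (9+3) + (12+4)
```

Dropping the "+ n_out" bias term from the sum yields 39, so the biases account for the difference.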

TABLE III. PARAMETERS OF NN WITH EBP AND NN WITH PSO

NN with EBP                                NN with PSO
Architecture     3-3-3-3-4                 Particles           500
Learning Rate    η1 = 0.0005, η2 = 0.0005  Inertia (ω)         0.35
Momentum Term    α = 0                     Momentum (c1, c2)   2
Tolerable MSE    10e-3                     Tolerable MSE       10e-3


Figure 5: MSE Convergence in PSO and EBP

Figure 6: Motor-1 STATE GRAPH from Branch-1 to Class-8

Figure 7: Motor-2 STATE GRAPH from Branch-1 to Class-8

After performing the tests, the motor positions with respect to time while the vehicle travelled from Branch-1 to Class-8 can be seen in Fig 6 and Fig 7.

Since we used different types of IR sensors, the voltage thresholds of the 4 IR sensors vary across the sessions of a day; they are given in Table IV.

TABLE IV. THRESHOLD VALUES OF IR SENSORS

Session   Front-Left (V)   Front-Right (V)   Side-Left (V)   Side-Right (V)
I         0.3              0.3               1.46            0.3
II        0.24             0.24              1.46            0.17
III       0.49             0.49              3.17            1.71

B. Future Scope

The proposed design of the academic block as branches and nodes confines ROBOG to indoors and requires a black strip as the path for navigation. By employing a web-cam and digital image processing techniques, ROBOG can be made independent of the strip path and used for outdoor maneuvering as well. With this strategy it could learn to guide in any surroundings.

Graph theory might also provide a simpler, plausible solution to the task. As discussed in Section IV (B), every branch and node in the block diagram of the academic block is given a specific number. A connection matrix with nodes (junctions) as rows and branches (parts of the floor between two junctions) as columns can be formed and used as a training set for an ANN to generate suitable navigation directions for all classes at once. We expect that learning can then be done in a simple manner, with easier and more efficient novel algorithms as the update rule, in support of or instead of the ANN.

VI. CONCLUSIONS

In this work the vehicle was designed as a β-type prototype and is working well for the considered task. It was observed that ANN training is better with PSO than with Back Propagation, and that the IR sensor levels can be adjusted effortlessly to the atmosphere. The vehicle can also easily be trained for other types of structures. With the advantage of these capabilities, the system can be implemented in real time with a modifiable path-exploring facility.

REFERENCES

[1] Tom Mitchell, Machine Learning, McGraw Hill, New York, 1997.
[2] A. Arsenio, "Developmental Learning on a Humanoid Robot", Proceedings of the IEEE International Joint Conference on Neural Networks, Budapest, 2004.
[3] Allon Guez, Ziauddin Ahmad, "Solution to the Inverse Kinematic Problem in Robotics by Neural Networks", ICNN '88, March 1988.
[4] Dean Pomerleau, "Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving", in Robot Learning, J. Connell and S. Mahadevan, eds., 1993.
[5] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd Edition, Prentice-Hall, 1999.
[6] B. Yegnanarayana, Artificial Neural Networks, Prentice-Hall of India Private Limited, New Delhi, 1999.
[7] J. Kennedy and R. Eberhart, "Particle Swarm Optimization", Proceedings of the IEEE International Conference on Neural Networks IV, 1995, pp. 1942-1948.
[8] Alexey V. Melkikh, "Can an Organism Adapt Itself to Unforeseen Circumstances?", CoRR abs/cs/0604009, 2006.
