


International Journal of Social Robotics (2020) 12:749–764
https://doi.org/10.1007/s12369-019-00576-1

A WiSARD Network Approach for a BCI-Based Robotic Prosthetic Control

Mariacarla Staffa1 · Maurizio Giordano2 · Fanny Ficuciello3

Accepted: 3 July 2019 / Published online: 24 July 2019
© Springer Nature B.V. 2019

Abstract
There are significant research efforts underway in the area of automatic robotic-prosthesis control based on brain–computer interfaces, aiming at understanding how neural signals can be used to control these assistive devices. Although these approaches have made significant progress in the ability to control robotic manipulators, the realization of portable and easy-to-use solutions is still an ongoing research endeavor. In this paper, we propose a novel approach relying on the use of (i) a Weightless Neural Network-based classifier, whose design lends itself to an easy hardware implementation; (ii) a robotic hand designed to fit the main requirements of this kind of technology (such as low cost, high performance, and lightness); and (iii) a non-invasive, lightweight and easy-donning EEG helmet that provides a portable controller interface. The developed interface is connected to a robotic hand for controlling open/close actions. The preliminary results for this system are promising in that they demonstrate that the proposed method achieves performance similar to state-of-the-art classifiers while representing a more suitable and practicable solution due to its portability to hardware devices, which will permit its direct implementation on the helmet board.

Keywords Automatic robotic prosthetic control · Weightless neural network · Brain computer interface · EEG-signal processing

1 Introduction

In recent years, we have witnessed great interest in the field of brain–computer interface (BCI) based control of robotic devices, with a particular focus on health-related applications, where the adoption of BCI-based control of prosthetic devices aims at increasing the quality of life of patients with diseases causing temporary or permanent paralysis or, in the

B Mariacarla Staffa
[email protected]

Maurizio Giordano
[email protected]

Fanny Ficuciello
[email protected]

1 Department of Physics “E. Pancini”, University of Naples Federico II, Via Cinthia, 21, 80126 Napoli, Italy

2 High Performance Computing and Networking Institute, National Research Council of Italy, Via Pietro Castellino 111, Naples, Italy

3 Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Via Claudio, 21, 80125 Napoli, Italy

worst case, the loss of limbs [14]. There are many diseases causing temporary or permanent paralysis, and very often it is difficult or even impossible to restore lost limbs or basic functions (e.g., ALS (Amyotrophic Lateral Sclerosis) patients progressively lose the use of their limbs as the disease progresses). In most cases, the brain continues to perform its activity even if the body does not respond to stimuli as it should. For this reason, research has focused on the study of the cognitive processes triggered in any type of activity, in order to transform electrical signals into commands that can be physically executed no longer by human beings but by a machine. This technology requires interaction between the user and the device [57] through a control interface that detects the user's movement intention.

Brain Computer Interface (BCI) systems are defined as neural interfaces able to connect the central nervous system (CNS) with an external device. The aim is to exploit the potential of the physiological signals emitted by the human body, such as those of the heart, the brain, and the blood, to improve the living conditions of people with severe disabilities [40]. To decode brain impulses and understand what movements an individual intends to perform, it is necessary


to extract from the signal the characteristics connected to these activities [39,52]. Selecting this information carefully and finding the optimal classification in terms of “intentions” is the means by which BCI systems achieve their ultimate goal, namely to provide a communication channel between the brain and the outside world. The first system developed with these features dates back to 1964, when the neurophysiologist W. Grey Walter used the EEG to control a slide projector [40]. However, it was Professor Jacques Vidal who coined the expression “Brain Computer Interface” in the early 1970s, during his research in bio-communication and human–machine interaction, in which Visual Evoked Potential (VEP) waves recorded on the visual cortex were used to determine the direction of the eyes [18] and, in response, the position of a cursor on the screen [40].

The intense research activity in the field of BCI is currently fed both by medical research advances in the knowledge of the brain's mechanisms and of how it regulates the main functionalities of the body, and by progress in robotics in the creation of new components for the management of BCI outputs. In fact, intelligent prostheses and accurate, ever cheaper implants are increasingly present on the experimental and rehabilitative medicine market [1]. Although various BCI-control applications have been developed in recent years, providing significant progress in the ability to control highly dexterous robotic manipulators and prosthetic devices [10,22,69], the realization of portable and easy-to-use solutions is still an ongoing research endeavor. Many of the realized systems [69] are based on Steady-State Visual Evoked Potential (SSVEP) analysis of the EEG signals, which relates to evoked EEG activity, i.e., the measurement of specific transient cortical potentials that arise in response to external visual, acoustic or somato-sensory stimuli. In our work, we instead consider cortical spontaneous activity, i.e., potentials generated by normal brain functioning.

This paper proposes a non-invasive, cost-effective and reliable solution consisting of the adoption of: (i) a low-cost robotic hand, (ii) a non-invasive and easy-donning Electroencephalography (EEG) helmet and (iii) a classification technique suited to direct hardware implementation. Namely, the proposed system combines the autonomous PRISMA Robotic Hand with a particular kind of Weightless Neural Network, named WiSARD, to maneuver a grasp task performed by the robotic hand through the decoding of brain waves via the EPOC+ helmet during the imagination of a grasping motion. We provide an experimental analysis in which we compare our framework with the most popular state-of-the-art techniques, starting from in-house experimental tests carried out by recording the EEG signals of two subjects performing and imagining the opening-then-closing of the hand. Additionally, we perform the same comparison on a third-party dataset of EEG signals captured from a subject performing

different trials in which he/she moved, or imagined moving, one hand at a time backward and forward.

2 State of the Art

The research and development of EEG-based BCI systems has undergone explosive growth only in the last 20 years, in which researchers and teams around the world have produced prototypes of systems, tested in the laboratory, with the aim of simplifying the lives of people with serious motor handicaps. In recent years, hardware devices and open-source software projects have even been developed for everyday use as an alternative to medical equipment [47]. However, the first studies on the importance of brain waves in the control of the joints did not start in the 21st century but in the 1960s, when the neurophysiologist W. Grey Walter discovered the Contingent Negative Variation (CNV) effect, also called the Readiness Potential: through the EEG, Walter succeeded in isolating a negative electric peak that appears about half a second before a person is aware of the movements she/he wants to accomplish. The discovery made it possible to give a proper scientific definition of “conscience” or “will”. Subsequently, many other researchers began to take an interest in this type of study: these include, for example, Jacques Vidal, who, in the '70s, determined the direction of a person's eyes by exploiting a type of signal, called VEP, recorded from the scalp by means of EEG. The purpose of the experiment was to drive a cursor on a computer screen [40]. Two similar experiments were conducted by Elbert in 1980, who instead used another type of signal called SCP (Slow Cortical Potential), and by Farwell and Donchin in 1988, who used P300-type Event-Related Potentials (ERPs) to write letters on a screen [40]. These are just some of the research activities carried out in the 20th century, which have become more and more complex today.

EEG-Classifiers Commonly, the structures used for training and classification have been Artificial Neural Networks (ANNs), achieving an accuracy of 65% and 71% on the training set and on the test set respectively, through off-line differentiation of the hand and of the wrist separately [12]. The authors in [66] presented a method of analysis of EEG signals using the wavelet transform and classification using ANNs and logistic regression (LR). ANNs have also been compared to Support Vector Machines (SVMs) as classifiers for epilepsy diagnosis starting from EEG signals [49]. In 2014, Dokare [68] developed a BCI system that exploited the EEG to classify the imagined movement of the left and right hand. The signals used were the Event-Related Synchronization (ERS) and Event-Related Desynchronization (ERD) peaks, which allowed Pattern Recognition procedures to be applied in


Motor Imagery. The authors used the DWT to perform Feature Extraction and SVMs for the two-class classification. In [44], the authors exploited EEG recordings to identify movements of the fingers of one hand. EEG signals with a 0.3 Hz cutoff frequency were recorded via the EEGLAB toolbox from eleven healthy right-handed subjects. To extract the features characterizing the movements of the fingers, the authors used the Power Spectral Density (PSD) and Principal Component Analysis (PCA).

These are just a few examples of related works; to date, there are over 400 works worldwide in the BCI research field, each of them centered on different methods of signal acquisition, feature extraction and translation [46].

We here propose a novel approach based on the classification capabilities of Weightless Neural Networks (WNNs). To our knowledge, very few approaches exist that use WNNs to classify EEG signals. In [62], Simoes et al. applied WNNs to EEG classification. Differently from our approach, the authors focused on the P300 wave in the EEG signal, showing that its adoption might help increase the prediction accuracy of P300-based BCI systems compared to state-of-the-art techniques. In [26], a WNN approach is proposed for the early detection of epileptic seizures. In this latter work, the authors used raw data, meaning that the EEG data were not filtered, nor were possible artifacts removed. In contrast, we here propose a pre-processing phase that cancels the noise caused by the movement of the facial muscles and environmental noise, through the application in sequence of a Butterworth filter and an IIR (Infinite Impulse Response) filter. Additionally, Feature Extraction was performed using the Discrete Wavelet Transform (DWT) for the decomposition of the signal, and the Feature Vector was constructed using the minimum, maximum, standard deviation and average statistical measurements. We made use of the DWT method since many studies have identified it, along with the Discrete Fourier Transform, as the most efficient method for signal segmentation [8,56]. Compared to the latter, in fact, the DWT has the advantage of good spatial resolution, since it manages to capture both the value of the frequencies and their location over time. We tested the proposed EEG-signal processing pipeline of our framework on two subjects by comparing the realized classifier with the best performing state-of-the-art classifiers. Additionally, we applied the same data pre-processing pipeline, with particular attention to the discrete wavelet feature extraction step, to a third-party BCI dataset.
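To make the feature-extraction step concrete, the following is a minimal sketch of the DWT statistics described above, assuming a Haar wavelet and four decomposition levels (the paper fixes neither choice); the filtering stage is omitted and the function names are our own.

```python
import numpy as np

def haar_dwt(signal):
    """One level of a Haar discrete wavelet transform:
    returns (approximation, detail) coefficient arrays."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # pad odd-length signals
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_features(signal, levels=4):
    """Decompose the signal and summarize each sub-band with the
    four statistics named in the text: min, max, mean, std."""
    features = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        features += [d.min(), d.max(), d.mean(), d.std()]
    features += [a.min(), a.max(), a.mean(), a.std()]
    return np.array(features)

# Example: one EEG channel segment of 256 samples
rng = np.random.default_rng(0)
channel = rng.standard_normal(256)
fv = dwt_features(channel, levels=4)
print(fv.shape)   # (20,) -> 4 stats x (4 detail bands + 1 approximation)
```

The resulting fixed-length vector is the kind of input a per-channel classifier can consume regardless of the raw segment length.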

BCI for Robotic Manipulators In the last 10 years, many researchers and companies have invested in the prosthetic field to produce and commercialize multifingered prosthetic hands [21,23,54]. A complete survey on robotic devices for upper limb rehabilitation can be found in [45]. The most impressive commercial results are represented by the iLimb1 hand, a multi-articulating prosthesis enabling automated grip by means of simple gestures selectable through a mobile app; the Michelangelo prosthetic hand2 that, with its natural human-like design, permits grasping and holding objects with greater control and less effort; and the multi-articulating myoelectric BeBionic3 hand, ensuring comfort, precision and intuitiveness for end-users. Another impressive work is that of the Johns Hopkins University team, which developed a robotic arm controlled by brain signals [42]. All these devices are able to provide different grip patterns by means of myoelectric control, which utilizes electromyography (EMG) signals as inputs for proportionally activating the corresponding joint actuator(s). In [37], an electromyography (EMG)-based controller for a hand robotic assistive device is presented.

Recently, non-invasive neural recording methods actuated through electroencephalography (EEG) devices4 have been extensively used to record a person's brain waves and convert them into commands for the arm [13,61]. This method has been very successful due to its main features of being cost-effective, accurate and free of discomfort. An example is the work of the authors in [63], where EEG signals are used to interpret the user's intention to grasp an object and to translate it into the online action of a hand exoskeleton.

Although BCI control of highly dexterous robotic manipulators has made significant progress, many issues concerning the design of these prosthetic devices still exist. First of all, there are the costs associated with these devices, as well as the complexity of the control system. Furthermore, in most cases the proposed solutions are hard to install and maintain, or may require surgical procedures [13]. In order to face the above issues, we here propose the use of a robotic hand that exploits the concept of synergies to reduce complexity in terms of the number of independent degrees of freedom to be controlled, while maintaining a bio-inspired design with a number of DoFs comparable to the human counterpart, to enhance performance and functionality [29,31,33,36]. Additionally, it brings the advantage of being designed as a lightweight and low-cost device, easy to commercialize and comfortable to use. It will be used in combination with a non-invasive EEG-based brain computer interface to provide comfortable, intuitive control.

1 http://www.touchbionics.com/.
2 http://www.living-with-michelangelo.com/home/.
3 http://www.bebionic.com.
4 http://www.instructables.com/id/Brain-Controlled-Wheelchair.


Fig. 1 a EPOC+ 14-channel helmet b arrangement of the sensors on the EPOC+ headband compared to the international 10/20 system

3 Materials and Methods

3.1 BCI Helmet

In this work we used the EPOC+ 14-channel helmet produced by Emotiv5 (see Fig. 1a). EPOC+ is a non-invasive and easy-to-use EEG helmet characterized by light weight, easy donning, and wireless connection. Its sensors are made of a polymer able to exploit the humidity of both the air and the skin to optimize signal acquisition, and are removable to guarantee replacement in case of failure or wear. The electrodes of the EPOC+ simply need to be dampened using a saline disinfectant solution rather than requiring a special gel. Additionally, this helmet is endowed with wireless technology for the transfer of data to the computer. We adopted this kind of helmet since, due to its characteristics, it is easy to use even in everyday life while the user performs usual actions such as moving, eating, reading, etc.

The positioning of the sensors on the cranial vault follows the so-called International 10/20 system, although with a much lower number of occupied positions than the standard. Namely, the channels used are AF3, AF4, F3, F4, FC5, FC6, F7, F8, T7, T8, P7, P8, O1, O2, accompanied by 2 reference

5 https://www.emotiv.com.

Fig. 2 Servo-pulley to provide tendon traction

sensors in a CMS/DRL configuration, located at the P3 (CMS) and P4 (DRL) positions of the 10/20 standard (see Fig. 1b).

3.2 PRISMA Robotic Hand

As the system actuator, we used the PRISMA Hand I. It has been developed by taking into account the main issues related to the design of this kind of prosthesis, such as:

(i) low cost, to make the device accessible to a large audience. To this end, the PRISMA Hand has been constructed using inexpensive hardware and mechanical components. In particular, the fingers, palm, whiffle-tree systems and back cover of the hand are constructed using 3D printing technology and Fused Deposition Modeling (FDM), while the joints are obtained using steel bolts, with tendons made of nylon and elastic synthetic fibers.

(ii) lightness of design, for easy fit and human-like aesthetics. The total weight of the device is less than one kilogram. It is based on a bio-inspired design of kinematics and motion couplings by means of two motor synergies. The hand has two motors: a servo actuator that moves all the fingers for the closure and opening of the hand, and a small-size actuator that provides the adduction/abduction motion of the thumb [30].

(iii) high performance, comparable to that of a real limb. In fact, one of the main problems, besides cost, remains the relationship between performance and acceptability from the patient's side.

The hand closure is entrusted to the servo motor, whose motion is transformed into a linear motion for tendon traction with the aid of the servo-pulleys A and B (see Fig. 2). The hand traction system provides a single tendon per finger, which passes entirely inside the finger (see Figs. 3, 4).


Fig. 3 PRISMA robotic hand in the a open and b closed state

Fig. 4 PRISMA Hand finger during a adduction and b abduction (showing the phalanxes during the closure phase, which come into contact at the parts circled with a continuous line. Blue circles highlight the protrusions designed to prevent finger hyperextension). (Color figure online)

In this way, the hand is capable of handling a wide set of grasps, such as power and precision grasps, even of small objects.

For further details about the PRISMA Hand, please refer to [30,32,34].

3.3 Motor-Imagery Rationale

When performing motor activities, the brain oscillations change frequency and amplitude, observable above all through EEG and MEG [28]. The variations can be divided into two components: the one that precedes and the one that follows the action, or the imagination of the action. The flow before the action, lasting about 2 s, is called Event-Related Desynchronization (ERD), during which there is a decrease in the amplitude of the low band of the Alpha and Beta waves. On the contrary, the flow following the action, during which an increase in amplitude occurs, is called Event-Related Synchronization (ERS) [51]. This information is useful when accompanied by the spatio-temporal coordinates of the cerebral cortex from which these variations are generated.

Surprisingly, it has been observed that the ERS/ERD variation occurs, to a lesser extent, even when an individual does not directly perform an activity but only intends to do so, a phenomenon called Motor Imagery: the brain sets the action areas it intends to engage already 200 ms before the execution, almost allowing its intentions in the immediate future to be predicted based on the space-time pattern identified in the EEG [48].

Starting from this assumption, we designed a dataset acquisition test in which a subject concentrates first on the execution of a movement and then on the same mental task (i.e., the subject just imagines performing that task), generating different patterns in the brain waves. The BCI system has to recognize these patterns and classify them into actions defined a priori, as described by the Motor Imagery theory [48]. It has been shown that after some training sessions the user usually learns to control her/his brain activity [40], and therefore the classifiers also improve their performance with use. The testing procedure is detailed in the experimental session.
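As an illustration of the ERD/ERS rationale above, the sketch below compares alpha/beta band power in the two-second windows before and after a movement cue. The 128 Hz sampling rate, the 8–30 Hz band and the function names are our own assumptions for the example, not details fixed by the text.

```python
import numpy as np

def band_power(segment, fs, lo, hi):
    """Average power of `segment` in the [lo, hi] Hz band
    (simple FFT periodogram estimate)."""
    seg = np.asarray(segment, dtype=float)
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def ers_erd_ratio(channel, fs, onset, band=(8.0, 30.0)):
    """Ratio of post-cue (ERS) to pre-cue (ERD) band power:
    values > 1 indicate the post-movement amplitude rebound."""
    w = int(2 * fs)                    # 2-second windows, as in the text
    pre = channel[onset - w:onset]     # ERD window (amplitude decrease)
    post = channel[onset:onset + w]    # ERS window (amplitude increase)
    return band_power(post, fs, *band) / band_power(pre, fs, *band)
```

Feeding such pre/post windows, rather than the whole recording, to the classifier is what lets the spatio-temporal ERD/ERS pattern be exploited.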

3.4 The WiSARD-Classifier

Weightless Neural Networks (WNNs) differ from artificial neural networks (ANNs) in that learned information is stored in memory cells (RAMs) located in neurons, while neuron connections have no weights. WiSARD6 was originally conceived as a pattern recognition device [6]. Nevertheless, it has been widely used in the literature as a multi-class classifier in several machine learning applications such as, for example, text categorisation [11], HIV-1 sub-type classification [64], and WiFi signal pattern recognition [20]. WiSARD's pattern recognition capability has been investigated also in robotics [59,65] and in the image/video processing domain [24].

Since WiSARD is a WNN where neurons are modeled as RAM memories, it can be easily implemented in digital re-programmable hardware [7]. In addition, the capability of learning in a single iteration makes this model very fast in training and suitable for implementation on small hardware devices, which generally have low computational power and energy budgets.

A recent work [25] reported experiences and comparisons of WiSARD's performance with state-of-the-art classifiers, in which the model is employed as a general supervised classification method for multi-variable data in the real numeric domain and integrated in the popular Scikit-learn machine learning toolkit [53]. The same classifier is also implemented and integrated in the Weka 3 Machine Learning Software in Java [67]. This implementation is the one we used in the experimental part of the current work. In what follows we formally reintroduce the model as we did in the previous work [25], but here in a more succinct way, focusing on the mathematical description of the functioning of the model for clearer understanding by the reader.

We assume as input a bit array u = {u1, u2, ..., uL} of size L = N × n, with N and n integers. The bit vector is called

6 Wilkes, Stonham and Aleksander Recognition Device.


Fig. 5 WiSARD discriminator model

retina. A set of n distinct bit locations randomly selected from u forms an n-tuple. The collection of N different n-tuples (which covers the entire retina) is called the input mapping function, and can be defined as:

Mij = the j-th bit location in the i-th n-tuple, for i ∈ {1, ..., N}, j ∈ {1, ..., n}.

With this definition, through the input mapping function we have that:

uMij = the bit at the j-th location in the i-th n-tuple, extracted from the bit array u.

The WiSARD n-tuple classifier is formed by as many discriminators as the number of classes it has to discriminate between. Each discriminator consists of a set of N RAMs that, during the training phase, learn the occurrences of the n-tuples extracted from the bit patterns u loaded on the retina.

Figure 5 describes the discriminator's structure and how RAM memory cells are accessed during training and classification in WiSARD. In the model, the n-tuples selected from the input binary vector are regarded as the “features” of the input pattern to be recognized.

Let the θ function be:

θ(x) = { x if 0 ≤ x ≤ 1;  1 if x > 1 }   (1)

and let u be a binary array: it can either be a training sample labelled as “belonging to class c”, and thus input to train the one discriminator dedicated to learning samples from the class c; or u can be an unlabelled (new) sample that we want to classify, and thus input to all discriminators (one for each class of the problem) to get similarity responses from all of them. In both cases, each RAMi

of a discriminator is uniquely associated with the i-th n-tuple extracted from the input u: RAMi has 2^n cells with addresses α ∈ [0, 2^n − 1]. An address of RAMi can be generated starting from the pattern u in the following way:

αi(u) = Σ_{j=1..n} uMij 2^(j−1)   (2)

that is, the mapping Mij selects the bits of the i-th n-tuple from the pattern u, whose bit locations are randomly placed in the bit vector. The address αi(u) is used to stimulate (to access) one memory cell of RAMi, which is associated with the i-th n-tuple, either in writing (training) or reading (classification) mode.
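A sketch of the address computation of Eq. 2 under an illustrative random mapping may help; the helper names and the toy 8-bit retina below are our own.

```python
import random

def make_mapping(retina_size, n, seed=42):
    """Random input mapping: a shuffled list of bit locations split
    into N = retina_size // n tuples of n bits each (the Mij of Eq. 2)."""
    locs = list(range(retina_size))
    random.Random(seed).shuffle(locs)
    return [locs[i:i + n] for i in range(0, retina_size, n)]

def ram_address(u, ntuple):
    """Eq. 2: alpha_i(u) = sum over j of u[Mij] * 2^(j-1)."""
    return sum(u[loc] << j for j, loc in enumerate(ntuple))

u = [1, 0, 1, 1, 0, 0, 1, 0]           # an 8-bit retina
mapping = make_mapping(len(u), n=4)    # N = 2 n-tuples of 4 bits
addresses = [ram_address(u, t) for t in mapping]
print(addresses)                       # two RAM addresses, each in [0, 15]
```

Because the mapping is fixed once at network creation, the same retina always stimulates the same cell of each RAM, in both training and classification.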

Training Let mciα be the memory cell with address α in the RAMi belonging to the discriminator of class c. During the training phase, the memory cell mciα (initialized with zero contents) is updated with the formula:

mciα = θ( Σ_{u ∈ TSc} δ_{α,αi(u)} ),   δ_{α,αi(u)} = { 1 if α = αi(u);  0 elsewhere }   (3)

where δ is the Kronecker delta function, and TSc is the set of training samples u for the class c. As the formula of Eq. 3 shows, the training procedure of RAMi in WiSARD (and in RAMnet models in general) is implemented by summing the write effects at each memory cell mciα of the RAM stimulated by the i-th n-tuple extracted from the training pattern u. Moreover, the order in which patterns are input to the network during training is irrelevant with respect to the final knowledge stored after training.

It should be noted that the θ function guarantees that if at least one n-tuple is used to access mciα, then its content is updated to 1 (further occurrences of the same n-tuple cause the same update, so the memory is unchanged). Otherwise, if no n-tuple from the training samples generates the generic address α, then the memory cell mciα is not changed (remaining zeroed).

Such a definition of the training procedure is the main difference of the WiSARD model with respect to the adaptive learning of typical neural networks such as the multi-layer perceptron and deep learning models. In general, this is also the main advantage of RAMnet models, which provide faster learning procedures.

Classification Once training of WiSARD is finished, we can classify an unseen pattern u. The WiSARD n-tuple recognizer operates simply as follows:


Fig. 6 BCI Architecture modules

A pattern u is classified as belonging to the class c if it has the most features in common with at least one training pattern of that class.

Formally, pattern u is assigned to class c, if c is the class whose discriminator gives the highest output:

$$\operatorname{argmax}_{c}\left(\sum_{i=1}^{N} m_{ci\alpha_i(u)}\right). \qquad (4)$$

On the other hand, the output (response) of the discriminator associated to class c on input pattern u is a similarity probability given by:

$$r_c(u) = \frac{1}{N}\left(\sum_{i=1}^{N} m_{ci\alpha_i(u)}\right). \qquad (5)$$
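A hypothetical sketch of the classification rule of Eqs. 4 and 5, reusing a sparse-RAM representation in which each class holds one discriminator (again illustrative only, not the original software):

```python
def tuple_address(u, mapping_i):
    # alpha_i(u) as in Eq. 2
    return sum(u[bit] << j for j, bit in enumerate(mapping_i))

def response(rams, mapping, u):
    """r_c(u) of Eq. 5: the fraction of RAMs answering 1 on the addressed cell."""
    N = len(mapping)
    hits = sum(rams[i].get(tuple_address(u, m_i), 0) for i, m_i in enumerate(mapping))
    return hits / N

def classify(discriminators, mapping, u):
    """Eq. 4: pick the class whose discriminator gives the highest output."""
    return max(discriminators, key=lambda c: response(discriminators[c], mapping, u))
```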

The WiSARD-Classifier7 [25] is a supervised method for multi-variable numeric data classification in the machine learning domain, based on WiSARD. The main advantage of using this technology in this context is its easy portability to hardware devices [27,60]. Apart from hardware feasibility, training and test speed is a very important issue. For instance, the WiSARD WNN model is reported to be orders of magnitude faster than SVM in the evaluation of financial credit problems [19]. Additionally, a WNN is characterized by the capacity to learn from few examples, which is crucial for real BCI-based applications [62].

7 Available for download at: https://github.com/giordamaug/WiSARD4WEKA.

4 The Proposed Architecture

In Fig. 6 the proposed BCI-based architecture is presented. It consists of five main components, which correspond to the main phases of EEG signal elaboration, WiSARD-based classification and robotic-hand control. The first phase corresponds to data acquisition:

1. Data Acquisition—is the module responsible for the acquisition of physiological signals from brain waves. The recordings are made through the EPOC+ helmet. Data detected by the helmet are sent to a PC via wireless communication. In particular, the helmet communicates via Bluetooth with a Wireless USB Receiver connected to the PC.

Once sent to the PC, data are first pre-processed by filtering techniques and feature extraction methods and then analyzed by the WiSARD-based classifier, which will be able to classify the open/close actions thought by the patient:

2. Pre-Processing—this module serves to remove unnecessary/irrelevant information and to label data with the names of the discriminant classes the classifier has to recognize (Channel Signal Sampling phase). This module is also responsible for the selection of salient channels (Channel Selection phase), which are usually selected with respect to the saliency of the brain signals associated to a particular activity. Since, in our case study (as will be detailed in Sect. 5), we consider the signals picked up by the EEG for the movements of opening and closing of the hands, we rely on the sensors FC3, FC2, FC4, C3, C1, C2, C4, CZ, which have been shown to provide the most discriminative signals associated to hand movements [9]. It is well known that EEG signals are


highly subject to noise, which sometimes has a random distribution caused by interference and at other times has a normal distribution (White Gaussian Noise, WGN), difficult to remove completely. Hence, the pre-processing phase also includes a Signal Filtering phase, which consists in removing the DC offset (i.e., the noise caused by the movement of the facial muscles and environmental noise) from raw data, through the application of a Butterworth high-pass filter and an IIR (Infinite Impulse Response) filter.
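As an illustration only (the paper does not report the filter order or the implementation), a high-pass Butterworth filter with the 0.16 Hz cutoff used in Sect. 5.2 could be applied with SciPy as follows; the sampling rate and the filter order here are assumed values:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 128.0                                                     # assumed sampling rate (Hz)
sos = butter(4, 0.16, btype='highpass', fs=fs, output='sos')   # 4th order is an assumption

t = np.arange(0, 8.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 5.0     # toy 10 Hz signal with a DC offset
filtered = sosfilt(sos, raw)               # the DC component is attenuated, the 10 Hz content passes
```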

3. Feature Extraction—after noise removal, feature extraction from raw EEG signals is required in order to recognize brain signal patterns. Hence, this module is in charge of the extraction of ad hoc features from signals to be used to (accurately) identify the user's intention. The Continuous Wavelet Transform (CWT) was used, which analyzes the signal through functions based on scaling and translation. The filtering techniques that exploit the discrete transform (Discrete Wavelet Transform, DWT) make it possible to obtain a representation of the time scale and resolution of the digital signal [3,48].

4. Data Classification—after feature extraction from digital signal streams has been accomplished, the new EEG signal representation is an array whose elements are multi-variable numeric data: the array represents the temporal sequence of signal segments sampled from the signal streams, and each array element consists in a feature vector (DWT coefficients) calculated on the sensor signal segment originated by every helmet sensor at a specific time interval [t, t + Δt]. The array of multi-variable numeric data is thus ready to be transmitted in a standard format (e.g., CSV) to the WiSARD-Classifier in training or classification modality, depending on the phase of the application. We adopted such a weightless system because of its good performance in classification problems, its very quick training and classification times, the opportunity of being trained on-line and, maybe the main reason, because of its actual implementation in hardware [27,60].

Then the classification results are used to control the robotic hand, which is interfaced with the PC via the Arduino UNO microcontroller, based on the ATmega328P, which represents a low-cost and very versatile prototyping platform.

5. Control System (Device Output)—this module is responsible for the conversion of the classification outcomes into commands to the peripherals. These commands are interpreted and executed by the Control module, whose algorithmic logic should be designed in such a way as to adapt to continuous changes of the user's brain waves, to fully exploit the potentiality of the peripheral devices. The output devices can in general be machines, robots or prostheses. Obviously the requirements of this module must be a good adaptability of the system to the user and, more importantly, efficiency and accuracy in the execution of the task [16]. In our case study, the end-effector is represented by the PRISMA Hand I, which will be controlled to perform opening and closing actions according to the WiSARD-Classifier outputs. Since EEG-based control has some limitations when the objective is to control a multi-DoF prosthesis to generate different motion patterns, our aim here is only to recognize simple opening and closing movements, leading the PRISMA Hand I to adapt the kind of grasping action thanks to its design characteristics, which permit the robotic hand to adapt to different shapes and materials of the objects to grasp.
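The command conversion itself can be very thin. The sketch below is purely illustrative (the actual serial protocol between the PC and the Arduino UNO is not specified in the paper): it maps the classifier's open/close outcome to a hypothetical one-byte command that could then be written to the board, e.g. with pySerial.

```python
# Hypothetical one-byte protocol: 'O' -> open hand, 'C' -> close hand.
COMMANDS = {'O': b'O', 'C': b'C'}

def to_command(label):
    """Convert a WiSARD-Classifier outcome into a command byte for the hand controller."""
    if label not in COMMANDS:
        raise ValueError(f"unknown class label: {label!r}")
    return COMMANDS[label]

# With a real board one would do something like (port name and baud rate are assumptions):
#   import serial
#   port = serial.Serial('/dev/ttyACM0', 9600)
#   port.write(to_command('O'))
```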

5 Experimental Session

We start from the Motor Imagery rationale, according to which the brain sets in motion the areas assigned to the action it intends to perform already 200 ms before the execution, making it possible to almost predict its intentions while one is still thinking of actuating it [48].

5.1 Experimental Protocol

In the first step, data are acquired through an experiment conducted to control the robotic hand using EEG signals. In particular, the EEG helmet is used to collect physiological data of two patients who need to command a robotic hand with thought. Data detected by the helmet are sent to a PC via wireless communication. Once sent to the PC, data are first pre-processed by filtering techniques and feature extraction methods and then analyzed by the WiSARD-based classifier, which will be able to classify the open/close action thought by a patient to control the robotic hand. Only two subjects participated in the study, since our aim is to propose a solution whose peculiarity is not to generalize, but to work well for a single subject. At the beginning of the data acquisition process, the participants are briefly informed about the aims and objectives of the project and written consent is taken. Each subject undergoes a brief phase of training, during which he/she starts to familiarize with the BCI interface. The experiment for each patient is conducted in an isolated room to prevent any external distraction, according to typical BCI-based experimentation protocols, which are structured according to rules and conventions to obtain a brain signal that is as clean and precise as possible. As in other works focusing on the rehabilitation of patients with total amputations or paralysis, we rely on the Motor Imagery theory to emulate limb muscle activities. The imagination of a movement, in fact, produces in the brain signal fluctuations very similar to those generated during the same movement actually performed. In the absence of the latter, Motor Imagery makes it possible to exploit the peaks caused by the imagination to control mechanical limbs in place of the real ones. For this reason, we collected EEG signals for each subject in two different settings: respectively, while he/she is performing the actions of opening and closing of the hand, and while he/she is imagining performing the same actions.

Fig. 7 Experimental session time slots

In particular, each subject carried out 8 consecutive trials by acting (or imagining) the opening-then-closing of his/her hand two times in each trial, according to the session timetable of Fig. 7.

Each trial has a duration of 40 s. The subjects had enough time to relax before performing each trial. The start and end of the tasks to be performed were punctuated by a timer that emitted a sound about 0.2 s before the defined temporal instant, to allow the brain to respond and to record the actual opening and closing while taking the muscle delay into account. For example, the first Open Hand movement, defined at second 3, was introduced by the sound at instant 2.8.

5.2 Data Pre-processing

Data Acquisition The recordings were made through the EPOC+ helmet. The EPOC+ helmet produces 14-channel signal measurements at a high rate of 2048 samples per second. Nevertheless, after signal filtering and conversion from microvolts to unsigned 14-bit integers, signal data are produced at a lower rate in the range 128–256 samples per second per channel.

Pre-processing In our experiments, helmet data are pre-processed according to the following process pipeline:

– Channel Signal Sampling—once we removed in each trial the samples related to relax intervals, we obtained for each trial a dataset with 8192 labeled samples, half labeled "C", referring to the Close Hand movement/imagination, and the other half labeled "O", for the Open Hand movement/imagination. Each sample consists in 14 unsigned integers representing the recorded (and filtered) sensor signals.

– Channel Selection—once all available sensor signals were recorded in each trial, we selected the sensors which, following the results of previous works [9], provide the most discriminative signals associated to hand movements. According to the International 10/20 System these sensors are FC3, FC2, FC4, C3, C1, C2, C4, Cz, which are positioned on the top of the head. Since none of these sensors is available in our EPOC+ helmet, we chose to consider only the signals of 6 sensors which are very close to the brain area of interest, that is sensors F3, F4, FC5, FC6, T7 and T8, ignoring sensors in the occipital and frontal regions.

– Signal Filtering—once the 6 signals were selected, we applied to these sensor data the Butterworth high-pass filter at 0.16 Hz in order to remove the DC offset (i.e., the noise caused by the movement of the facial muscles and environmental noise). After noise removal, we applied an IIR (Infinite Impulse Response) filter.

Feature Extraction After signal selection and filtering, we calculated Discrete Wavelet Transforms (DWTs) of the 6 selected signals by considering segments of consecutive samples of two different sizes: in the first case we considered 32-sample segments, and we decomposed the signal with DWTs of the Daubechies 1 family, with decomposition level 3. The decomposition outputs 4 coefficient vectors with different resolutions.8 For each chosen signal and for each one of its 4 DWT decomposition coefficients, 4 features are then calculated: the minimum, maximum, standard deviation and average statistical measurements. In the second experiment setting, we considered 64-sample segments and we decomposed the signal with DWTs of the Daubechies 1 family, with decomposition level 4. The decomposition outputs 5 coefficient vectors with different resolutions.9 As before, for each chosen signal and for each one of its 5 DWT decomposition coefficients, features are computed as the minimum, maximum, standard deviation and average statistical measurements.
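The per-segment feature computation described above can be sketched with PyWavelets (an assumption; the paper does not name its DWT library). For a 32-sample segment with db1 at level 3, `wavedec` returns four coefficient vectors (cA3, cD3, cD2, cD1), yielding 4 × 4 = 16 features per channel:

```python
import numpy as np
import pywt

def segment_features(segment, wavelet='db1', level=3):
    """min/max/std/mean of every DWT coefficient vector of the segment."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([c.min(), c.max(), c.std(), c.mean()])
    return np.array(feats)

segment = np.random.randn(32)
features = segment_features(segment)    # 4 coefficient vectors -> 16 features
```

With 6 channels this gives 6 × 16 = 96 features per time segment; a 64-sample segment at level 4 gives 5 coefficient vectors, hence 20 features per channel, matching the dataset sizes reported in the text.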

By merging the data of the feature vectors of the 6 chosen signals extracted on 32-sized segments of samples and recorded in all phases of the trials, referring to both C and O movement acting (A), we obtain for subject X the dataset SubX-A-32, while for C and O movement imagination (B) the resulting dataset is SubX-B-32. Both these datasets contain 1920 new samples (corresponding to the number of segments) and 6 channels × 4 × 4 features (i.e., the minimum, maximum, mean and standard deviation of 4 DWT coefficients), for a total of 96 features.

8 cA1 with 16, cA2 with 8, cA3 and cD3 with 4 components.
9 cD1 with 32, cD2 with 16, cD3 with 8, and cD4 and cA4 with 4 components.

By merging the data of the feature vectors of the 6 chosen signals extracted on 64-sized segments of samples and recorded in all phases of the trials, referring to both C and O hand movement action (A), we obtain for subject X the dataset SubX-A-64, while for C and O hand movement imagination (B) the resulting dataset is SubX-B-64. Both these datasets contain 960 new samples (corresponding to the number of double-sized segments) and 6 channels × 4 × 5 features (i.e., the minimum, maximum, mean and standard deviation of 5 DWT coefficients), for a total of 120 features.

Data Classification The WiSARD-Classifier supports learning and classification of multi-variable data in the real numeric domain. This is fine since the feature extraction module produces a dataset representing EEG signal streams in terms of an array of feature vectors: each feature vector is calculated on one temporal segment sampled from the signal streams. For example, datasets SubX-A-32 and SubX-B-32 contain 1920 samples (segments), and each sample is a vector of 96 feature values (a multi-variable datum).

The WiSARD-Classifier relies on the WiSARD neural network model. As highlighted in Sect. 4, WiSARD is specialized to learn and recognize binary inputs. Therefore, the WiSARD-Classifier is equipped with a binary encoding mechanism to extend its learning capability to multi-variable data. Data binarization is based on the thermometer encoding introduced in [5] and used in [26]: each real component of the multi-variable datum is scaled and rounded up to an integer x in the range [0, z − 1]; each integer component x of the transformed datum is binarized into a sequence of z bits with the first x bits set to 1, while the other bits are zeroed. By grouping the bit sequences as columns of a matrix, we get a 2D binary array which will be the WiSARD stimulus loaded on the retina. Another important parameter of the method is n, that is, the number of bits randomly extracted from the binarized input and used to access one RAM cell of the neural network. The parameter n captures in the WiSARD model the data correlation in each input sample, and it also has a strong effect on its generalization capability [4].
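A minimal sketch of the thermometer encoding follows (illustrative only, with an assumed per-feature min/max normalization):

```python
import numpy as np

def thermometer(x, lo, hi, z):
    """Scale x into an integer k in [0, z-1]; emit z bits with the first k set to 1."""
    k = int(round((x - lo) / (hi - lo) * (z - 1)))
    k = max(0, min(z - 1, k))
    return [1] * k + [0] * (z - k)

def binarize(sample, lo, hi, z):
    """Group the bit sequences as columns of a 2D binary retina (z rows, one column per feature)."""
    return np.array([thermometer(v, lo, hi, z) for v in sample]).T

retina = binarize([0.0, 0.5, 1.0], lo=0.0, hi=1.0, z=8)   # shape (8, 3)
```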

In Fig. 8 we provide a preview of how EEG signals are converted10 into a binary input suitable for a WiSARD, as well as how a randomly-extracted n-tuple of bits from this retina is used as the address of one RAM neuron.

Robotic-hand Control The WiSARD-Classifier's outputs activate the PRISMA Hand I motors for respectively closing or opening it. The hand closure is entrusted to the servo motor, whose motion is transformed into a linear motion for tendon traction with the aid of servo-pulleys. The transmission, called whiffletree, is composed of 3 elements, PART1, PART2 and PART3, shown in Fig. 9. The tendons composing the

10 Internally in WiSARD-Classifier software.

Fig. 8 EEG signals conversion to binary retina

Fig. 9 Mechanical components of the PRISMA Hand I

hand will be used to pull back these three elements to reproduce the hand closing action. Analogously, they are released for the opening action. Please refer to [35] for additional details.

5.3 Validation on In-House BCI Datasets

For the validation of the datasets11 obtained from the experiments we carried out in our lab on two patients, we used 10-fold stratified cross-validation of a set of six machine learning methods implemented in the Weka 3 Machine Learning Software in Java [67]:

WNN WiSARD-Classifier with parameters n = 32 and z = 4096;

MLP Multi-Layer Perceptron implemented in Deeplearning4j12 with one 30-node hidden layer with ReLU activations, a multi-class cross-entropy loss function, and an output layer with softmax activation;

11 All datasets used in the experiments are available for download at: https://github.com/giordamaug/WiSARD4WEKA.
12 http://deeplearning4j.org.


Table 1 Ten-fold cross-validation accuracy of a set of classifiers applied on datasets produced from in-house experiments on the 1st subject

WNN (%) SVM (%) RF (%) MLP (%) kNN (%) FLDA (%)

Sub1-A-32

96 Features (all) 89.11 92.18 94.64 94.74 90.05 85.62

12 Features (by RF) 87.03 88.54 93.49 93.33 88.64 84.79

Sub1-A-64

120 Features (all) 92.60 91.66 95.94 94.58 91.87 86.87

19 Features (by RF) 92.40 86.56 95.72 94.79 90.31 86.35

Sub1-B-32

96 Features (all) 65.99 59.79 76.71 73.12 59.01 67.34

23 Features (by RF) 70.78 54.63 78.02 74.64 63.64 65.99

Sub1-B-64

120 Features (all) 67.29 58.33 76.97 74.79 64.17 71.97

36 Features (by RF) 72.5 56.67 78.44 75.31 64.79 73.12

Table 2 Ten-fold cross-validation accuracy of a set of classifiers applied on datasets produced from in-house experiments on the 2nd subject

WNN (%) SVM (%) RF (%) MLP (%) kNN (%) FLDA (%)

Sub2-A-32

96 Features (all) 65.57 65.78 76.61 74.48 55 69.16

28 Features (by RF) 68.39 64.89 75.62 73.12 55.78 69.06

Sub2-A-64

120 Features (all) 66.98 70.10 79.48 75.83 57.18 73.02

31 Features (by RF) 73.33 68.75 80.52 78.23 67.4 74.06

Sub2-B-32

96 Features (all) 53.71 52.92 58.59 53.42 54.10 57.52

42 Features (by RF) 57.03 54.88 60.64 56.74 53.52 59.37

Sub2-B-64

120 Features (all) 55.27 57.23 61.91 60.16 50.39 60.15

51 Features (by RF) 54.88 56.05 62.5 60.16 50.39 62.10

SVM a Support Vector Machine [38,43,55] with poly kernel and gamma 1.0;

RF Random Forest [15] with an ensemble by bagging of 100 decision-tree classifiers;

kNN k-nearest neighbor classification method [2];
FLDA Fisher's Linear Discriminant Analysis method, with threshold selected so that the separator is half-way between the centroids and the ridge is 10−6.

For each dataset we carried out two evaluations: first we measured the 10-fold cross-validation accuracy on the original dataset containing all features; then, before cross-validation, we carried out a feature selection by an ensemble of decision trees (Random Forest). Experimental results are summarized in Tables 1 and 2.
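The evaluation protocol can be reproduced in outline with scikit-learn (the paper used Weka 3; this is only an analogous sketch with toy stand-in data): 10-fold stratified cross-validation before and after a Random Forest-based feature selection.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 96))        # toy stand-in for a 96-feature dataset
y = rng.integers(0, 2, 200)               # toy binary O/C labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)

acc_all = cross_val_score(rf, X, y, cv=cv).mean()       # accuracy with all features

selector = SelectFromModel(rf.fit(X, y), prefit=True)   # keep features above mean importance
X_sel = selector.transform(X)
acc_sel = cross_val_score(rf, X_sel, y, cv=cv).mean()   # accuracy on the reduced feature set
```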

With reference to the datasets related to the experiments of the first patient (see Table 1), a first comment is about how feature selection affects the overall results. We can say that, in both segmentation settings of DWTs, feature removal provides, with respect to the overall best performer (RF), up to 1% accuracy degradation in case A (hand movement action), and up to 2% accuracy improvement in case B (hand movement imagination). This is interesting, since the removal of almost 80% of the features in case A and 60% in case B results, respectively, in a small degradation and a limited improvement. With reference to the hand movement imagination (case B), we experienced a lower accuracy than in the case in which we classify real hand movements (case A). Nevertheless, the performance slightly improves with the feature removal carried out by RF, reaching a 78.4% accuracy in the case of DWTs based on 64-sample segmentation and by using the RF classifier with only 36 features out of 120. In the same case study only WNN and MLP provide good responses with, respectively, 72.5% and 75.3% accuracy.

These results would require a further investigation about the importance of the features computed by the feature selection scheme, in order to realize whether other signals can be


ignored in the data acquisition without a loss in performance. For example, less discriminative feature vectors could be associated to the signals of sensors T7 and T8, which are farther from the top-central region of interest of the brain.

With reference to the datasets related to the experiments of the second patient (see Table 2), the results demonstrated how this second round of experiments produced datasets not very suitable for training any machine learning method so as to build an accurate predictor of EEG signals. In particular, signals collected from the second patient's hand movements lead to the training of models with limited accuracy (∼80% in the case of 64-sample sized segments and DWTs with decomposition level 4) when discriminating closing from opening hand movements. On the other hand, signals collected from hand motion imagery do not produce an actual discriminative capacity in the built models, as the best accuracy is evaluated around 60%.

5.4 Validation on Third-Party BCI Datasets

We decided to apply the same data pre-processing pipeline, with particular attention to the DWT feature extraction phase, on a third-party BCI dataset. For this purpose, we identified the EEG motor activity dataset13 from the BCI research group of the National University of Sciences and Technology of Pakistan.

The dataset includes EEG recordings of actual and imagined left and right hand movements with eyes closed of a 21-year-old subject, a right-handed male with no medical conditions. EEG signals are recorded by FP1, FP2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, Fz, Cz, and Pz electrodes at a 500 Hz sampling rate using a Neurofax EEG System.

The dataset is composed of data from the following trials, each one related to a specific class of event (CE) we want to detect:

– LFM1,2,3—three trials of left hand forward movement with 3008 samples each;

– LBM1,2,3—three trials of left hand backward movement with 3008 samples each;

– LFI—single trial of imagined left hand forward movement with 7040 samples;

– LBI—single trial of imagined left hand backward movement with 7040 samples;

– RFM1,2,3—three trials of right hand forward movement with 3008 samples each;

– RBM1,2,3—three trials of right hand backward movement with 3008 samples each;

– RFI—single trial of imagined right hand forward movement with 7488 samples;

13 https://sites.google.com/site/projectbci.

– RBI—single trial of imagined right hand backward movement with 7040 samples.

On this dataset of EEG signals we first selected only the electrodes that provide the most discriminative signals associated to hand movements. As already mentioned in Sect. 5.3, according to the International 10/20 System these sensors are FC3, FC2, FC4, C3, C1, C2, C4, Cz, which are positioned on the top of the head. Among these sensors, the Neurofax EEG System provides only C3, C4 and Cz. Nevertheless, this system provides electrodes F3 and F4, which are very close to FC3 and FC4 (as in the case of the EPOC+ helmet). Therefore, we extracted from the third-party EEG dataset only the signals from five sensors: C3, C4, Cz, F3 and F4.

Based only on these five channels, we computed DWT coefficients of the Daubechies 1 family with decomposition level 3 in the case of 32-sample segmentation of the signals, and DWT coefficients of the Daubechies 1 family with decomposition level 4 in the case of 64-sample segmentation. The features extracted from the data and related to the eight classes of events were packed together to form a unique dataset that we used to train models as multi-class classifiers of EEG signals.

After feature extraction, in the case of 32-sample segmentation we obtained a dataset (BCI-db1lv3-32) with a total of 2022 samples (segments) and 5 channels × 4 × 4 features (i.e., the minimum, maximum, mean and standard deviation of four DWT coefficients), for a total of 80 features. In the case of 64-sample segmentation we produced a dataset (BCI-db1lv4-64) with a total of 1011 samples (segments) and 5 channels × 4 × 5 features (i.e., the minimum, maximum, mean and standard deviation of five DWT coefficients), for a total of 100 features.

We used the same machine learning methods14 from the Weka 3 software for a 10-fold cross-validation on these datasets. The results are summarized in Table 3.

This second case study shows that the performance of the WiSARD-Classifier is not competitive with respect to state-of-the-art classifiers when the data have a high dimension in terms of the number of features. This effect was experienced also in the previous case study. Nevertheless, here the improvement of WiSARD is more significant when the dimension of the feature vector decreases. This is a limit of the proposed classifier, since its core learning scheme works fine on binary patterns: when the data are in the form of numeric vectors with a high number of elements, these are transformed and packed together into large and sparse binary 2D arrays, on which our neural models have difficulty extracting feature correlations.

14 Note that the FLDA method could not be applied since this case study is a multi-class classification problem.


Table 3 Ten-fold cross-validation accuracy of a set of classifiers applied on datasets produced from third-party experiments

WNN (%) SVM (%) RF (%) MLP (%) kNN (%)

BCI-db1lv3-32

All features 81.31 81.30 97.48 87.64 62.51

17 of 80 by RF 99.80 76.56 99.11 97.87 99.85

BCI-db1lv4-64

All features 76.66 77.54 94.56 80.71 60.93

20 of 100 by RF 99.21 79.82 97.82 95.15 97.23

Table 4 Confusion matrix of 10-fold cross-validation accuracy of WiSARD on the BCI-db1lv4-64 dataset with feature selection

LFA LBA LFI LBI RFA RBA RFI RBI

LFA 140 1 0 0 0 0 0 0

LBA 4 136 1 0 0 0 0 0

LFI 0 1 109 0 0 0 0 0

LBI 0 0 0 109 0 0 0 1

RFA 0 0 0 0 141 0 0 0

RBA 0 0 0 0 0 141 0 0

RFI 0 0 0 0 0 0 117 0

RBI 0 0 0 0 0 0 0 110

Indeed, in both datasets (segmentation sizes) the WiSARD-Classifier's performance improves significantly when the dataset is reduced by selecting only the relevant features (as evaluated by a Random Forest classifier). In both datasets, our method performs better than the other competitors; the model building time is in the same order of magnitude15 as that of the multilayer perceptron. Nevertheless, our method still has higher building times than the other methods. Table 4 reports the confusion matrix of the 10-fold cross-validation in the case of DWTs based on 64-sample sized segments.

6 Conclusions

In this work, we presented a BCI framework able to convert the activity of the human central nervous system into understandable and executable directives for a robotic hand, thus permitting a person who has lost the use of an arm or a hand to command a possible robotic prosthesis with thought.

The framework has been designed to be as little invasive as possible. To this extent, one of the strong points of the proposed architecture is the adoption of a non-invasive and easy-to-use EEG helmet characterized by light weight, easy donning, and wireless connection. The EEG helmet's portability and relative ease of use make it far more useful for the mobile collection of brain activity data, and applicable for self-use both indoors and outdoors.

15 The WiSARD model building time is 5.25 s compared to 9.3 s needed to build the MLP model in the case BCI-db1lv3-32 with 17 features; in the BCI-db1lv4-64 case study with 20 features WiSARD building takes 3.5 s compared to 3.1 s needed for MLP.

Additionally, we proposed the use of the WiSARD-Classifier keeping in mind a possible hardware implementation of the whole system, in order to have a wearable intelligent device capable of classifying EEG signals in situ and of communicating with the robotic hand. In comparison with state-of-the-art classifiers, the WiSARD-Classifier showed competitive performance, while at the same time having the advantage of an easy implementation in hardware, such as digital mobile devices with limited memory and processing capabilities, which must reduce energy consumption as they mostly rely solely on battery use.

We tested the EEG signal processing pipeline proposed in this work on two case studies: the first related to in-house experiments we carried out by recording EEG signals of two subjects acting and imagining the opening-then-closing of their hands; the second case study is a third-party dataset of EEG signals captured for a subject performing different trials in which he moved or imagined moving backward and forward one hand at a time. In both case studies we pre-processed EEG signals by elaborating features on signal time-segments of different sizes, as well as by selecting only relevant features before building the classifiers on the reduced data. Both test cases validated the EEG signal pre-processing-then-classifying pipeline proposed in the current work. The experimental results proved the suitability of using the WiSARD-Classifier as a reference model for EEG signal classification on small devices, since its performance in terms of accuracy is comparable to that offered by state-of-the-art methods.

Additionally, we adopted the PRISMA Hand prototype, characterized by a lightweight design, bio-inspired under-actuation and an easy user interface for intuitive and efficient control, whose main advantage is the use of economical hardware and mechanical components able to preserve performance, leading to a very high cost reduction with respect to commercial solutions. Its characteristics make the PRISMA Hand a very good trade-off between the state-of-the-art device


that delivers the ultimate in technology and performance, and a device easy to commercialize and to use.

As future work, we are currently working on the extension of the feature vectors to also consider the user's cognitive states. We can currently measure 6 different cognitive states in real time: Excitement (Arousal), Interest (Valence), Stress (Frustration), Engagement/Boredom, Attention (Focus) and Meditation (Relaxation). Our hypothesis is that this will allow a more accurate classification of the user's intentions not only on the basis of the brain waves, but also regarding his/her cognitive/psychological and emotional state [17,50]. Personality factors will also be considered in future works, in order to improve the adaptability of the robotic-hand moves to the users' traits [58].

We also intend to conduct more experiments by carrying out many tests on the same subject. Indeed, by collecting a large amount of data we will be able to explore EEG signal classification as a time-series prediction problem, jointly exploiting the capabilities of WiSARD and LSTM recurrent neural networks [41]. Furthermore, we aim to address the same problem with different subjects and to evaluate how the proposed approach can be adapted to cope with inter-subject variability.
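Reframing the task as time-series prediction mainly changes how the dataset is assembled: per-segment feature vectors become fixed-length sequences that a recurrent model (or a WiSARD trained over windows) would consume. The sketch below illustrates only that data-reshaping step, under assumed window and stride values; it is not the authors' pipeline.

```python
# Hypothetical sketch: turn a stream of per-segment EEG feature vectors
# into fixed-length sequences for sequence classification.

def make_sequences(feature_vectors, labels, window=4, stride=1):
    """Slide a window over the segment stream; label each sequence
    with the label of its last segment (a common convention)."""
    X, y = [], []
    for end in range(window, len(feature_vectors) + 1, stride):
        X.append(feature_vectors[end - window:end])
        y.append(labels[end - 1])
    return X, y

feats = [[i, i + 0.5] for i in range(6)]     # six dummy 2-d segment features
labs = ["rest"] * 3 + ["move"] * 3
X, y = make_sequences(feats, labs)
print(len(X), len(X[0]))  # 3 sequences, each of length 4
```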

Funding Part of this work is funded by the European Project MUSHA (MUltifunctional Smart HAnds: novel insight for new technological insight for mini-invasive surgical tools and artificial anthropomorphic hands) under Grant Agreement No. 320992, whose Principal Investigator is the third author of this paper, Fanny Ficuciello.

Compliance with Ethical Standards

Conflict of interest The first author of this paper, Mariacarla Staffa, is part of the Editorial Board of this Special Issue. The authors declare that they have no other conflict of interest to disclose.

References

1. Abdulkader SN, Atia A, Mostafa MSM (2015) Brain computer interfacing: applications and challenges. Egypt Inform J 16(2):213–230. https://doi.org/10.1016/j.eij.2015.06.002

2. Aha D, Kibler D (1991) Instance-based learning algorithms. Mach Learn 6:37–66. https://doi.org/10.1007/BF00153759

3. Al-Fahoum AS, Al-Fraihat AA (2014) Methods of EEG signal features extraction using linear analysis in frequency and time-frequency domains. ISRN Neuroscience

4. Aleksander I (1970) Microcircuit learning nets: Hamming-distance behaviour. Electron Lett 6(5):134–136. https://doi.org/10.1049/el:19700092

5. Aleksander I, Albrow RC (1968) Pattern recognition with adaptive logic elements. In: Proceedings of the IEE-NPL conference on pattern recognition, pp 68–74

6. Aleksander I, Morton H (1990) An introduction to neural computing. Chapman & Hall, London

7. Aleksander I, Thomas W, Bowden P (1984) WISARD: a radical step forward in image recognition. Sens Rev 4(3):120–124. https://doi.org/10.1108/eb007637

8. Alomari MH, Awada EA, Samaha A, AlKamha K (2014) Wavelet-based feature extraction for the analysis of EEG signals associated with imagined fists and feet movements. Comput Inf Sci 7(2):17–27

9. Alomari MH, Samaha A, AlKamha K (2013) Automated classification of L/R hand movement EEG signals using advanced feature extraction and machine learning. Int J Adv Comput Sci Appl 4(6):207–212. https://doi.org/10.14569/IJACSA.2013.040628

10. Athanasiou A, Xygonakis I, Pandria N, Kartsidis P, Arfaras G, Kavazidi KR, Foroglou N, Astaras A, Bamidis PD (2017) Towards rehabilitation robotics: off-the-shelf BCI control of anthropomorphic robotic arms. BioMed Research International

11. Badue C, Pedroni F, Souza A (2008) Multi-label text categorization using VG-RAM weightless neural networks. In: Neural networks, 2008. SBRN '08, pp 105–110. https://doi.org/10.1109/SBRN.2008.29

12. Bang J, Choi JS, Park K (2013) Noise reduction in brainwaves by using both EEG signals and frontal viewing camera images. Sensors (Switzerland) 13(5):6272–6294. https://doi.org/10.3390/s130506272

13. Beyrouthy T, Al Kork SK, Korbane JA, Abdulmonem A (2016) EEG mind controlled smart prosthetic arm. In: 2016 IEEE international conference on emerging technologies and innovative business practices for the transformation of societies (EmergiTech), pp 404–409. https://doi.org/10.1109/EmergiTech.2016.7737375

14. Bi L, Fan X, Liu Y (2013) EEG-based brain-controlled mobile robots: a survey. IEEE Trans Hum Mach Syst 43(2):161–176. https://doi.org/10.1109/TSMCC.2012.2219046

15. Breiman L (2001) Random forests. Mach Learn 45(1):5–32

16. Broquère X, Finzi A, Mainprice J, Rossi S, Sidobre D, Staffa M (2014) An attentional approach to human–robot interactive manipulation. Int J Soc Robot 6(4):533–553

17. Burattini E, Finzi A, Rossi S, Staffa M (2012) Attentional human–robot interaction in simple manipulation tasks. In: Proceedings of the seventh annual ACM/IEEE international conference on human-robot interaction. ACM, Boston, pp 129–130. https://doi.org/10.1145/2157689.2157719

18. Caesarendra W, Ariyanto M, Pambudi KA, Amri MF, Turnip A (2017) Classification of EEG signals for eye focuses using artificial neural network. Internetworking Indones J 9(1):15–20

19. Cardoso DO, Carvalho DS, Alves DSF, de Souza DFP, Carneiro HCC, Pedreira CE, Lima PMV, França FMG (2016) Financial credit analysis via a clustering weightless neural classifier. Neurocomputing 183:70–78

20. Cardoso D, Gama J, De Gregorio M, França FMG (2012) WiPS: the WiSARD indoor positioning system. In: ESANN'12, pp 521–526

21. Cempini M, Cortese M, Vitiello N (2015) A powered finger-thumb wearable hand exoskeleton with self-aligning joint axes. IEEE/ASME Trans Mechatron 20(2):705–716. https://doi.org/10.1109/TMECH.2014.2315528

22. Chen X, Zhao B, Wang Y, Xu S, Gao X (2018) Control of a 7-DOF robotic arm system with an SSVEP-based BCI. Int J Neural Syst 28:1850018. https://doi.org/10.1142/S0129065718500181

23. Colombo R, Pisano F, Micera S, Mazzone A, Delconte C, Carrozza MC, Dario P, Minuco G (2005) Robotic techniques for upper limb evaluation and rehabilitation of stroke patients. IEEE Trans Neural Syst Rehabilit Eng 13(3):311–324. https://doi.org/10.1109/TNSRE.2005.848352

24. De Gregorio M, Giordano M (2017) Background estimation by weightless neural networks. Pattern Recognit Lett 96:55–65. https://doi.org/10.1016/j.patrec.2017.05.029 (special issue on Scene Background Modeling and Initialization)

25. De Gregorio M, Giordano M (2018) An experimental evaluation of weightless neural networks for multi-class classification. Appl Soft Comput 72:338–354. https://doi.org/10.1016/j.asoc.2018.07.052




26. de Aguiar K, França FMG, Barbosa VC, Teixeira CAD (2015) Early detection of epilepsy seizures based on a weightless neural network. In: EMBC, IEEE, pp 4470–4474. http://dblp.uni-trier.de/db/conf/embc/embc2015.html#AguiarFBT15

27. Ferreira VC, França FMG, Nery AS (2018) A smart disk for in-situ face recognition. In: 2018 IEEE international parallel and distributed processing symposium workshops, pp 1241–1249

28. Festante F, Vanderwert RE, Sclafani V, Paukner A, Simpson EA, Suomi SJ, Fox NA, Ferrari PF (2018) EEG beta desynchronization during hand goal-directed action observation in newborn monkeys and its relation to the emergence of hand motor skills. Dev Cognit Neurosci 30:142–149. https://doi.org/10.1016/j.dcn.2018.02.010

29. Ficuciello F (2018) Hand-arm autonomous grasping: synergistic motions to enhance the learning process. Intell Serv Robot 12:17–25. https://doi.org/10.1007/s11370-018-0262-0

30. Ficuciello F (2018) Synergy-based control of underactuated anthropomorphic hands. IEEE Trans Ind Inf 15:1144–1152. https://doi.org/10.1109/TII.2018.2841043

31. Ficuciello F, Carloni R, Visser L, Stramigioli S (2010) Port-Hamiltonian modeling for soft-finger manipulation. In: Proceedings of IEEE/RSJ international conference on intelligent robots and systems. Taipei, Taiwan, pp 4281–4286

32. Ficuciello F, Federico A, Lippiello V, Siciliano B (2018) Synergies evaluation of the SCHUNK S5FH for grasping control. Springer, Cham, pp 225–233. https://doi.org/10.1007/978-3-319-56802-7_24

33. Ficuciello F, Palli G, Melchiorri C, Siciliano B (2012) Planning and control during reach to grasp using the three predominant UB Hand IV postural synergies. In: Proceedings of IEEE international conference on robotics and automation. Saint Paul, pp 2255–2260

34. Ficuciello F, Palli G, Melchiorri C, Siciliano B (2012) Postural synergies and neural network for autonomous grasping: a tool for dextrous prosthetic and robotic hands. In: Converging clinical and engineering research on neurorehabilitation, Biosystems and Biorobotics. Springer, Berlin, Heidelberg, pp 467–480

35. Ficuciello F, Palli G, Melchiorri C, Siciliano B (2014) Postural synergies of the UB Hand IV for human-like grasping. Robot Auton Syst 62(4):515–527. https://doi.org/10.1016/j.robot.2013.12.008

36. Ficuciello F, Zaccara D, Siciliano B (2016) Synergy-based policy improvement with path integrals for anthropomorphic hands. In: Proceedings of IEEE international conference on intelligent robots and systems. Daejeon, Korea, pp 1940–1945

37. Gandolla M, Ferrante S, Ferrigno G, Baldassini D, Molteni F, Guanziroli E, Cotti Cottini M, Seneci C, Pedrocchi A (2016) Artificial neural network EMG classifier for functional hand grasp movements prediction. J Int Med Res 45:1831–1847. https://doi.org/10.1177/0300060516656689

38. Hastie T, Tibshirani R (1998) Classification by pairwise coupling. In: Jordan MI, Kearns MJ, Solla SA (eds) Advances in neural information processing systems, vol 10. MIT Press, Cambridge

39. He B, Gao S, Yuan H, Wolpaw J (2013) Brain–computer interfaces. Springer, New York, pp 87–151. https://doi.org/10.1007/9781461452270

40. He B, Gao S, Yuan H, Wolpaw JR (2013) Brain-computer interfaces. Springer, Boston. https://doi.org/10.1007/978-1-4614-5227-0_2

41. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735

42. Jeong Y, Lee D, Kim K, Park JO (2000) A wearable robotic arm with high force-reflection capability. In: Proceedings 9th IEEE international workshop on robot and human interactive communication. IEEE RO-MAN 2000 (Cat. No.00TH8499), pp 411–416. https://doi.org/10.1109/ROMAN.2000.892639

43. Keerthi S, Shevade S, Bhattacharyya C, Murthy K (2001) Improvements to Platt's SMO algorithm for SVM classifier design. Neural Comput 13(3):637–649

44. Liao K, Xiao R, Gonzalez J, Ding L (2014) Decoding individual finger movements from one hand using human EEG signals. PLOS ONE 9(1):1–12. https://doi.org/10.1371/journal.pone.0085192

45. Maciejasz P, Eschweiler J, Gerlach-Hahn K, Jansen-Toy A, Leonhardt S (2014) A survey on robotic devices for upper limb rehabilitation. J Neuroeng Rehabilit 11:3. https://doi.org/10.1186/1743-0003-11-3

46. Mak JN, Wolpaw JR (2009) Clinical applications of brain–computer interfaces: current state and future prospects. IEEE Rev Biomed Eng 2:187–199. https://doi.org/10.1109/RBME.2009.2035356

47. Mao X, Li M, Li W, Niu L, Xian B, Zeng M, Chen G (2017) Progress in EEG-based brain robot interaction systems. Comput Intell Neurosci. https://doi.org/10.1155/2017/1742862

48. Mulder T (2007) Motor imagery and action observation: cognitive tools for rehabilitation. J Neural Transm 114(10):1265–1278. https://doi.org/10.1007/s00702-007-0763-z

49. Narang A, Batra B, Ahuja A, Yadav J, Pachauri N (2018) Classification of EEG signals for epileptic seizures using Levenberg–Marquardt algorithm based multilayer perceptron neural network. J Intell Fuzzy Syst 34:1669–1677. https://doi.org/10.3233/JIFS-169460

50. Iengo S, Origlia A, Staffa M, Finzi A (2012) Attentional and emotional regulation in human-robot interaction. In: IEEE RO-MAN: the 21st IEEE international symposium on robot and human interactive communication, pp 1135–1140. https://doi.org/10.1109/ROMAN.2012.6343901

51. Ortner R, Gruenbacher E, Guger C (2018) State of the art in sensors, signals and signal processing. http://www.gtec.at/content/download/1659/10347/file/StateOfTheArt_Physio_SensorsSignals.pdf

52. Pattnaik PK, Sarraf J (2018) Brain computer interface issues on hand movement. J King Saud Univ Comput Inf Sci 30(1):18–24. https://doi.org/10.1016/j.jksuci.2016.09.006

53. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830

54. Pedrocchi A, Ferrante S, Ambrosini E, Gandolla M, Casellato C, Schauer T, Klauer C, Pascual J, Vidaurre C, Gföhler M, Reichenfelser W, Karner J, Micera S, Crema A, Molteni F, Rossini M, Palumbo G, Guanziroli E, Jedlitschka A, Hack M, Bulgheroni M, d'Amico E, Schenk P, Zwicker S, Duschau-Wicke A, Miseikis J, Graber L, Ferrigno G (2013) Mundus project: multimodal neuroprosthesis for daily upper limb support. J NeuroEng Rehabilit 10(1):66. https://doi.org/10.1186/1743-0003-10-66

55. Platt J (1998) Fast training of support vector machines using sequential minimal optimization. In: Schoelkopf B, Burges C, Smola A (eds) Advances in kernel methods–support vector learning. MIT Press, Cambridge. https://pdfs.semanticscholar.org/d1fa/8485ad749d51e7470d801bc1931706597601.pdf

56. Prochazka A, Kukal J, Vysata O (2008) Wavelet transform use for feature extraction and EEG signal segments classification. In: 2008 3rd international symposium on communications, control and signal processing, pp 719–722. https://doi.org/10.1109/ISCCSP.2008.4537317

57. Rao RPN, Scherer R (2010) Brain-computer interfacing [in the spotlight]. IEEE Signal Process Mag 27(4):152. https://doi.org/10.1109/MSP.2010.936774

58. Rossi S, Staffa M, Bove L, Capasso R, Ercolano G (2017) User's personality and activity influence on HRI comfortable distances. In: Kheddar A, Yoshida E, Ge SS, Suzuki K, Cabibihan JJ, Eyssel F,




He H (eds) ICSR, Lecture Notes in Computer Science, vol 10652. Springer, pp 167–177. https://doi.org/10.1007/978-3-319-70022-9_17

59. Rossi S, Staffa M, Giordano M, De Gregorio M, Rossi A, Tamburro A, Vellucci C (2015) User tracking in HRI applications with the human-in-the-loop. In: Proceedings of the tenth annual ACM/IEEE international conference on human–robot interaction extended abstracts, HRI'15 extended abstracts, pp 33–34. ACM, New York, NY, USA. https://doi.org/10.1145/2701973.2701980

60. Santiago L, Patil VC, Prado CB, Alves TA, Marzulo LA, França FM, Kundu S (2017) Realizing strong PUF from weak PUF via neural computing. In: 2017 IEEE international symposium on defect and fault tolerance in VLSI and nanotechnology systems (DFT), pp 1–6. https://doi.org/10.1109/DFT.2017.8244433

61. Sequeira S, Diogo C, Ferreira F (2013) EEG-signals based control strategy for prosthetic drive systems. In: IEEE 3rd Portuguese meeting in bioengineering, Braga, pp 1–4

62. Simões M, Amaral C, França F, Carvalho P, Castelo-Branco M (2019) Applying weightless neural networks to a P300-based brain-computer interface. In: Lhotska L, Sukupova L, Lackovic I, Ibbott GS (eds) World congress on medical physics and biomedical engineering 2018. Springer, Singapore, pp 113–117

63. Soekadar SR, Witkowski M, Gómez C, Opisso E, Medina J, Cortese M, Cempini M, Carrozza MC, Cohen LG, Birbaumer N, Vitiello N (2016) Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia. Sci Robot 1(1):eaag3296-1. https://doi.org/10.1126/scirobotics.aag3296

64. Souza C, Nobre F, Lima P, Silva R, Brindeiro R, França F (2012) Recognition of HIV-1 subtypes and antiretroviral drug resistance using weightless neural networks. In: ESANN'12, pp 429–434

65. Staffa M, Rossi S, Giordano M, De Gregorio M, Siciliano B (2015) Segmentation performance in tracking deformable objects via WNNs. In: 2015 IEEE international conference on robotics and automation (ICRA), pp 2462–2467. https://doi.org/10.1109/ICRA.2015.7139528

66. Subasi A, Erçelebi E (2005) Classification of EEG signals using neural network and logistic regression. Comput Methods Prog Biomed 78(2):87–99. https://doi.org/10.1016/j.cmpb.2004.10.009

67. Witten IH, Frank E, Hall MA (2011) Data mining: practical machine learning tools and techniques, 3rd edn. Series in Data Management Systems. Morgan Kaufmann, Amsterdam

68. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM (2002) Brain–computer interfaces for communication and control. Clin Neurophysiol 113(6):767–791. https://doi.org/10.1016/S1388-2457(02)00057-3

69. Yang C, Wu H, Li Z, He W, Wang N, Su CY (2018) Mind control of a robotic arm with visual fusion technology. IEEE Trans Ind Inf 14(9):3822–3830

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Mariacarla Staffa is an Assistant Professor at the Department of Physics of the University of Naples Federico II, Italy. She received the M.Sc. degree in Computer Science from the University of Naples Federico II with honors in 2008, and a Ph.D. in Computer Science and Automation Engineering from the University of Naples in 2011. She was a visiting scholar at the "Institut des Systèmes Intelligents et de Robotique" of the University of Paris "Pierre et Marie Curie". She is a member of the PRISCA (Projects of Intelligent Robotics and Advanced Cognitive Systems) Laboratory, doing research in the fields of Cognitive Robotics, Artificial Intelligence and Social and Assistive Robotics. She is also a member of the QUASAR (Quantum Computing and Smart Systems) Laboratory, where she collaborates with experts in machine learning and signal processing. She is mainly interested in exploring computational neuroscience and cognitive robotics to generate innovative strategies and solutions for scientific problems and technological limitations.

Maurizio Giordano graduated in Physics in 1992 with a specialization in Cybernetics. He is a research scientist at the High Performance Computing and Networking Institute of the National Research Council of Italy. He has been a lecturer for twelve years at the Computer Science Faculty of the University of Naples. His main research topics are High Performance Computing and Artificial Neural Networks. He has participated in several European projects (ESPRIT, STREP and NoE, from FP4 to FP7). He has authored eighty papers published in international journals and conferences. He has been a member of conference committees and of evaluation boards for research grants and for the acquisition of HPC infrastructures. In 2016 he was awarded "Best Performer in the sbv IMPROVER SysTox Computational Challenge".

Fanny Ficuciello obtained the Laurea degree magna cum laude in Mechanical Engineering from the University of Naples Federico II in 2007 with the thesis "Control Strategies for Light-Weight Robots with Elastic Joints for a Safe and Dependable Physical Human-Robot Interaction". She received the Ph.D. degree in Computer and Automation Engineering from the University of Naples Federico II in November 2010 with the thesis "Modelling and Control for Soft Finger Manipulation and Human-Robot Interaction". From September to March 2010 she was a visiting scholar in the Control Engineering Group at the University of Twente (Netherlands) under the supervision of Prof. Stefano Stramigioli. Currently she is a Post-Doc at the University of Naples Federico II. Her research interests include: biomechanical design and bio-aware control strategies for anthropomorphic artificial hands; grasping and manipulation with hand/arm and dual hand/arm robotic systems; human-robot interaction control; variable impedance control and redundancy resolution strategies.
