
Frontiers in Artificial Intelligence and Applications 204

NEURAL NETS WIRN09

ISBN 978-1-60750-072-8
ISSN 0922-6389

L.C. Jain
Knowledge-Based Intelligent Engineering Systems Centre
SCT-Building, University of South Australia
Adelaide, Mawson Lakes, SA 5095, Australia
[email protected]

About the book series FAIA – Knowledge-Based Intelligent Engineering Systems (KBIES)
Editors: L.C. Jain and R.J. Howlett

Aim of the Series
The aim of the KBIES series is to report on the tremendous range of applications arising out of investigations into intelligent systems, coupled with the latest generic research that makes these applications possible. The series provides a leading resource for researchers, engineers, managers and all others concerned with this area of research, or wanting to know more about it.

Invitation to Propose a Contribution
Proposals are invited for textbooks, edited books, handbooks and conference proceedings. Proposals may be sent to the series editors. For details please visit:
www.kesinternational.org/bookseries.php
www.iospress.com

R.J. Howlett
Director of Brighton KTP Centre
Head of Intelligent Signal Processing Labs
School of Engineering, University of Brighton
Brighton BN2 4GJ, United Kingdom
[email protected]

Bruno Apolloni, Simone Bassis, Carlo F. Morabito
B. Apolloni, S. Bassis and C.F. Morabito (Eds.)

Proceedings of the 19th Italian Workshop on Neural Nets, Vietri sul Mare, Salerno, Italy, May 28–30, 2009


NEURAL NETS WIRN09


Frontiers in Artificial Intelligence and Applications

Volume 204

Published in the subseries

Knowledge-Based Intelligent Engineering Systems

Editors: L.C. Jain and R.J. Howlett

Recently published in KBIES:

Vol. 203. M. Džbor, Design Problems, Frames and Innovative Solutions
Vol. 196. F. Masulli, A. Micheli and A. Sperduti (Eds.), Computational Intelligence and Bioengineering – Essays in Memory of Antonina Starita
Vol. 193. B. Apolloni, S. Bassis and M. Marinaro (Eds.), New Directions in Neural Networks – 18th Italian Workshop on Neural Networks: WIRN 2008
Vol. 186. G. Lambert-Torres et al. (Eds.), Advances in Technological Applications of Logical and Intelligent Systems – Selected Papers from the Sixth Congress on Logic Applied to Technology
Vol. 180. M. Virvou and T. Nakamura (Eds.), Knowledge-Based Software Engineering – Proceedings of the Eighth Joint Conference on Knowledge-Based Software Engineering
Vol. 170. J.D. Velásquez and V. Palade, Adaptive Web Sites – A Knowledge Extraction from Web Data Approach
Vol. 149. X.F. Zha and R.J. Howlett (Eds.), Integrated Intelligent Systems for Engineering Design

Recently published in FAIA:

Vol. 202. S. Sandri, M. Sànchez-Marrè and U. Cortés (Eds.), Artificial Intelligence Research and Development – Proceedings of the 12th International Conference of the Catalan Association for Artificial Intelligence
Vol. 201. J.E. Agudo et al. (Eds.), Techniques and Applications for Mobile Commerce – Proceedings of TAMoCo 2009
Vol. 200. V. Dimitrova et al. (Eds.), Artificial Intelligence in Education – Building Learning Systems that Care: From Knowledge Representation to Affective Modelling
Vol. 199. H. Fujita and V. Mařík (Eds.), New Trends in Software Methodologies, Tools and Techniques – Proceedings of the Eighth SoMeT_09
Vol. 198. R. Ferrario and A. Oltramari (Eds.), Formal Ontologies Meet Industry
Vol. 197. R. Hoekstra, Ontology Representation – Design Patterns and Ontologies that Make Sense

ISSN 0922-6389


Neural Nets WIRN09

Proceedings of the 19th Italian Workshop on Neural Nets, Vietri sul Mare, Salerno, Italy, May 28–30, 2009

Edited by

Bruno Apolloni

Università degli Studi di Milano, Dipartimento di Scienze dell’Informazione,

Via Comelico 39, 20135 Milano, Italy

Simone Bassis

Università degli Studi di Milano, Dipartimento di Scienze dell’Informazione,

Via Comelico 39, 20135 Milano, Italy

and

Carlo F. Morabito

Università di Reggio Calabria, IMET, Loc. Feo di Vito,

89128 Reggio Calabria, Italy

Amsterdam • Berlin • Tokyo • Washington, DC


© 2009 The authors and IOS Press.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system,

or transmitted, in any form or by any means, without prior written permission from the publisher.

ISBN 978-1-60750-072-8

Library of Congress Control Number: not yet known

Publisher

IOS Press BV

Nieuwe Hemweg 6B

1013 BG Amsterdam

Netherlands

fax: +31 20 687 0019

e-mail: [email protected]

Distributor in the USA and Canada

IOS Press, Inc.

4502 Rachael Manor Drive

Fairfax, VA 22032

USA

fax: +1 703 323 3668

e-mail: [email protected]

LEGAL NOTICE

The publisher is not responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS


Preface

Human beings leave, science continues.

This volume collects the contributions to the 19th Italian Workshop of the Italian Society for Neural Networks (SIREN). The conference was held a few days after the death of Prof. Maria Marinaro, who was a founder and a solid leader of the society. The conference was saddened by this, but also more intense. With neural networks we explore thought mechanisms that are at once an efficient computational tool and a representation of the physics of our brain, having the loops of our thoughts as their ultimate product. It is not the duty of our discipline to pronounce on what happens when these loops stop, but it is a fascinating goal to shed light on how these loops run and which tracks they leave.

Science continues, and we dedicate these selected papers to Maria. We have grouped them within five themes: "modeling", "signal processing", "medicine applications", "economy", and "general applications". They come from the three regular sessions of the conference plus two specific workshops, on "Computational Intelligence for Economics and Finance" and "COST 2102: Cross Modal Analysis of Verbal and Nonverbal Communications", respectively. The editors would like to thank the invited speakers as well as all those who contributed to the success of the workshops with papers of outstanding quality. Finally, special thanks go to the referees for their valuable input.




Contents

Preface v

Chapter 1. Models

The Discriminating Power of Random Features 3

Stefano Rovetta, Francesco Masulli and Maurizio Filippone

The Influence of Noise on the Dynamics of Random Boolean Network 11

A. Barbieri, M. Villani, R. Serra, S.A. Kauffman and A. Colacci

Toward a Space-Time Mobility Model for Social Communities 19

Bruno Apolloni, Simone Bassis and Lorenzo Valerio

Notes on Cutset Conditioning on Factor Graphs with Cycles 29

Francesco Palmieri

Neural Networks and Metabolic Networks: Fault Tolerance and Robustness Features 39

Vincenzo Conti, Barbara Lanza, Salvatore Vitabile and Filippo Sorbello

Chapter 2. Signal Processing

The COST 2102 Italian Audio and Video Emotional Database 51

Anna Esposito, Maria Teresa Riviello and Giuseppe Di Maio

Face Verification Based on DCT Templates with Pseudo-Random Permutations 62

Marco Grassi and Marcos Faundez-Zanuy

A Real-Time Speech-Interfaced System for Group Conversation Modeling 70

Cesare Rocchi, Emanuele Principi, Simone Cifani, Rudy Rotili,

Stefano Squartini and Francesco Piazza

A Partitioned Frequency Block Algorithm for Blind Separation in Reverberant Environments 81

Michele Scarpiniti, Andrea Picaro, Raffaele Parisi and Aurelio Uncini

Transcription of Polyphonic Piano Music by Means of Memory-Based Classification Method 91

Giovanni Costantini, Massimiliano Todisco and Renzo Perfetti

A 3D Neural Model for Video Analysis 101

Lucia Maddalena and Alfredo Petrosino

A Wavelet Based Heuristic to Dimension Neural Networks for Simple Signal Approximation 110

Gabriele Colombini, Davide Sottara, Luca Luccarini and Paola Mello



Support Vector Machines and MLP for Automatic Classification of Seismic Signals at Stromboli Volcano 116

Ferdinando Giacco, Antonietta Maria Esposito, Silvia Scarpetta,

Flora Giudicepietro and Maria Marinaro

Chapter 3. Economy and Complexity

Thoughts on the Crisis from a Scientific Perspective 127

Jaime Gil-Aluja

Aggregation of Opinions in Multi Person Multi Attribute Decision Problems with Judgments Inconsistency 136

Silvio Giove and Marco Corazza

Portfolio Management with Minimum Guarantees: Some Modeling and Optimization Issues 146

Diana Barro and Elio Canestrelli

The Treatment of Fuzzy and Specific Information Provided by Experts for Decision Making in the Selection of Workers 154

Jaime Gil-Lafuente

An Intelligent Agent to Support City Policies Decisions 163

Agnese Augello, Giovanni Pilato and Salvatore Gaglio

“Pink Seal” a Certification for Firms’ Gender Equity 169

Tindara Addabbo, Gisella Facchinetti, Giovanni Mastroleo and Tiziana Lang

Intensive Computational Forecasting Approach to the Functional Demographic Lee Carter Model 177

Valeria D’Amato, Gabriella Piscopo and Maria Russolillo

Conflicts in the Middle-East. Who Are the Actors? What Are Their Relations? A Fuzzy LOGICal Analysis for IL-LOGICal Conflicts 187

Gianni Ricci, Gisella Facchinetti, Giovanni Mastroleo, Francesco Franci

and Vittorio Pagliaro

Chapter 4. Biological Aspects

Comparing Early and Late Data Fusion Methods for Gene Function Prediction 197

Matteo Re and Giorgio Valentini

An Experimental Comparison of Random Projection Ensembles with Linear Kernel SVMs and Bagging and BagBoosting Methods for the Classification of Gene Expression Data 208

Raffaella Folgieri

Changes in Quadratic Phase Coupling of EEG Signals During Wake and Sleep in Two Chronic Insomnia Patients, Before and After Cognitive Behavioral Therapy 217

Stephen Perrig, Pierre Dutoit, Katerina Espa-Cervena,

Vladislav Shaposhnyk, Laurent Pelletier, François Berger

and Alessandro E.P. Villa



SVM Classification of EEG Signals for Brain Computer Interface 229

G. Costantini, M. Todisco, D. Casali, M. Carota, G. Saggio, L. Bianchi,

M. Abbafati and L. Quitadamo

Role of Topology in Complex Neural Networks 234

Luigi Fortuna, Mattia Frasca, Antonio Gallo, Alessandro Spata

and Giuseppe Nunnari

Non-Iterative Imaging Method for Electrical Resistance Tomography 241

Flavio Calvano, Guglielmo Rubinacci and Antonello Tamburrino

Role of Temporally Asymmetric Synaptic Plasticity to Memorize Group-Synchronous Patterns of Neural Activity 247

Silvia Scarpetta, Ferdinando Giacco and Maria Marinaro

Algorithms and Topographic Mapping for Epileptic Seizures Recognition and Prediction 261

N. Mammone, F. La Foresta, G. Inuso, F.C. Morabito, U. Aguglia

and V. Cianci

Computational Intelligence Methods for Discovering Diagnostic Gene Targets About aGVHD 271

Maurizio Fiasché, Maria Cuzzola, Roberta Fedele, Domenica Princi,

Matteo Cacciola, Giuseppe Megali, Pasquale Iacopino and

Francesco C. Morabito

Dynamic Modeling of Heart Dipole Vector for the ECG and VCG Generation 281

Fabio La Foresta, Nadia Mammone, Giuseppina Inuso

and Francesco Carlo Morabito

Chapter 5. Applications

The TRIPLE Hybrid Cognitive Architecture: Connectionist Aspects 293

Maurice Grinberg and Vladimir Haltakov

Interactive Reader Device for Visually Impaired People 306

Paolo Motto Ros, Eros Pasero, Paolo Del Giudice, Vittorio Dante

and Erminio Petetti

On the Relevance of Image Acquisition Resolution for Hand Geometry Identification Based on MLP 314

Miguel A. Ferrera, Joan Fàbregas, Marcos Faundez-Zanuy,

Jesús B. Alonso, Carlos Travieso and Amparo Sacristan

Evaluating Soft Computing Techniques for Path Loss Estimation in Urban Environments 323

Filippo Laganà, Matteo Cacciola, Salvatore Calcagno, Domenico De Carlo,

Giuseppe Megali, Mario Versaci and Francesco Carlo Morabito



The Department Store Metaphor: Organizing, Presenting and Accessing Cultural Heritage Components in a Complex Framework 332

Umberto Maniscalco, Gianfranco Mascari and Giovanni Pilato

Subject Index 339

Author Index 341



Subject Index

ACE 163

algorithmic inference 19

ANN 271

artificial intelligence 29

artificial neural network(s) 306, 323

associative memory 247

attractors 11

audio and video recordings 51

background modeling 101

background subtraction 101

BagBoosting 208

Bagging 208

Bayesian decision networks 163

Bayesian networks 29

bicoherence 217

biometric recognition 62

biometrics 314

bispectrum 217

blind source separation 81

brain mapping 261

chatbot 163

Choquet integral 136

classification 91, 229

cognitive behavioral therapy 217

cognitive modeling 293

complex network(s) 39, 234

conflict 187

consensus management 136

constant Q transform 91

conversation modeling 70

convex weighting 154

cortical dynamics 247

cortico-cortical resonances 217

data fusion 29

data integration 197

decision fusion 197

decision making 154

decision templates 197

distance 154

DSS 163

dynamic portfolio management 146

dynamic similarity assessment 293

early fusion 197

ECG 281

electrical resistance tomography 241

electroencephalography (EEG) 229, 261

emergent computation 11

ensemble 208

entropy 261

epilepsy 261

equal opportunities 169

evaluating forecasts 323

face recognition 62

feature selection 271

forecasting 177

foreground modeling 101

frequency domain algorithms 81

functional demographic model 177

fuzzy expert system 169

fuzzy logic 169, 187

fuzzy number 154

gender equity 169

gene function prediction 197

gene targets 271

GEP 271

group decision theory 136

GVHD 271

hand-geometry 314

haptic interfaces 306

heart dipole model 281

heart diseases 281

image processing 306

insomnia 217

inverse problems 241

keyword spotting 70

late fusion 197

Lee Carter model 177

linear predictive coding 116

MATLAB 29

metabolic networks 39

Middle East 187

minimum guarantee 146

mobility model 19

moving object detection 101

multi criteria analysis 136




multilayer perceptron 116

music transcription 91

Naive Bayes combiner 197

neural network(s) 39, 101, 116, 314

neuron 234

noise 11, 234

non additive measures 136

non iterative imaging methods 241

non-destructive testing 241

onset detection 91

optical character recognition 306

pareto-like distribution law 19

partitioned block algorithms 81

path loss prediction 323

perceptual assessment 51

privacy 62

processes with memory 19

propagation of belief 29

random Boolean networks 11

random projection 208

random subspace 208

randomized maps 208

real-time systems 306

resolution 314

retrieval and mapping 293

reverberant environment 81

robustness and fault tolerance comparison 39

scenario 146

security 62

seismic signals discrimination 116

self organization 101

smoothing 177

SNR 271

social communities 19

spatio-temporal patterns 247

stopped object 101

support vector machine(s) (SVM) 91, 116, 229, 323

synchrony 247

tabletop 70

topology 234

uncertainty 154

urban environment 323

VCG 281

vector space integration 197

vocal and facial expression of emotion 51

weighted averaging 197



Author Index

Abbafati, M. 229

Addabbo, T. 169

Aguglia, U. 261

Alonso, J.B. 314

Apolloni, B. 19

Augello, A. 163

Barbieri, A. 11

Barro, D. 146

Bassis, S. 19

Berger, F. 217

Bianchi, L. 229

Cacciola, M. 271, 323

Calcagno, S. 323

Calvano, F. 241

Canestrelli, E. 146

Carota, M. 229

Casali, D. 229

Cianci, V. 261

Cifani, S. 70

Colacci, A. 11

Colombini, G. 110

Conti, V. 39

Corazza, M. 136

Costantini, G. 91, 229

Cuzzola, M. 271

D’Amato, V. 177

Dante, V. 306

De Carlo, D. 323

Del Giudice, P. 306

Di Maio, G. 51

Dutoit, P. 217

Espa-Cervena, K. 217

Esposito, A. 51

Esposito, A.M. 116

Fàbregas, J. 314

Facchinetti, G. 169, 187

Faundez-Zanuy, M. 62, 314

Fedele, R. 271

Ferrera, M.A. 314

Fiasché, M. 271

Filippone, M. 3

Folgieri, R. 208

Fortuna, L. 234

Franci, F. 187

Frasca, M. 234

Gaglio, S. 163

Gallo, A. 234

Giacco, F. 116, 247

Gil-Aluja, J. 127

Gil-Lafuente, J. 154

Giove, S. 136

Giudicepietro, F. 116

Grassi, M. 62

Grinberg, M. 293

Haltakov, V. 293

Iacopino, P. 271

Inuso, G. 261, 281

Kauffman, S.A. 11

La Foresta, F. 261, 281

Laganà, F. 323

Lang, T. 169

Lanza, B. 39

Luccarini, L. 110

Maddalena, L. 101

Mammone, N. 261, 281

Maniscalco, U. 332

Marinaro, M. 116, 247

Mascari, G. 332

Mastroleo, G. 169, 187

Masulli, F. 3

Megali, G. 271, 323

Mello, P. 110

Morabito, F.C. 261, 271, 281, 323

Motto Ros, P. 306

Nunnari, G. 234

Pagliaro, V. 187

Palmieri, F. 29

Parisi, R. 81

Pasero, E. 306

Pelletier, L. 217

Perfetti, R. 91

Perrig, S. 217

Petetti, E. 306

Petrosino, A. 101

Piazza, F. 70

Picaro, A. 81




Pilato, G. 163, 332

Piscopo, G. 177

Princi, D. 271

Principi, E. 70

Quitadamo, L. 229

Re, M. 197

Ricci, G. 187

Riviello, M.T. 51

Rocchi, C. 70

Rotili, R. 70

Rovetta, S. 3

Rubinacci, G. 241

Russolillo, M. 177

Sacristan, A. 314

Saggio, G. 229

Scarpetta, S. 116, 247

Scarpiniti, M. 81

Serra, R. 11

Shaposhnyk, V. 217

Sorbello, F. 39

Sottara, D. 110

Spata, A. 234

Squartini, S. 70

Tamburrino, A. 241

Todisco, M. 91, 229

Travieso, C. 314

Uncini, A. 81

Valentini, G. 197

Valerio, L. 19

Versaci, M. 323

Villa, A.E.P. 217

Villani, M. 11

Vitabile, S. 39



Support Vector Machines and MLP for Automatic Classification of Seismic Signals at Stromboli Volcano

Ferdinando GIACCO a,1, Antonietta Maria ESPOSITO b, Silvia SCARPETTA a,c,d, Flora GIUDICEPIETRO b and Maria MARINARO a,c,d

a Department of Physics, University of Salerno, Italy
b Istituto Nazionale di Geofisica e Vulcanologia (Osservatorio Vesuviano), Napoli, Italy
c INFN and INFM CNISM, Salerno, Italy
d Institute for Advanced Scientific Studies, Vietri sul Mare, Italy

Abstract. We applied and compared two supervised pattern recognition techniques, namely the Multilayer Perceptron (MLP) and the Support Vector Machine (SVM), to classify seismic signals recorded on Stromboli volcano. The available data are first preprocessed in order to obtain a compact representation of the raw seismic signals: each input vector is made up of 71 components, containing both spectral and temporal information extracted from the early signal. We implemented two classification strategies to discriminate three different seismic events: landslide, explosion-quake, and volcanic microtremor signals. The first method is a two-layer MLP network, with a Cross-Entropy error function and logistic activation function for the output units. The second method is a Support Vector Machine, whose multi-class setting is accomplished through a 1vsAll architecture with Gaussian kernel. The experiments show that although the MLP produces very good results, the SVM accuracy is always higher, both in terms of best performance, 99.5%, and average performance, 98.8%, obtained with different sampling permutations of the training and test sets.

Keywords. Seismic signals discrimination, Linear Predictive Coding, Neural Networks, Support Vector Machine, Multilayer Perceptron.

Introduction

Automatic discrimination among seismic events is a critical issue for the continuous monitoring of seismogenic zones and active volcanic areas. This is the case of the island of Stromboli (southern Italy), where the seismic activity is intense and the analysis of the data must be very fast in order to communicate, as soon as possible, the significance of the recorded information to civil defense authorities.

The available data are provided by a broadband seismic network, installed during the crisis of December 2002 to monitor the evolution of the volcanic processes [1,2]. Since its installation, the network has recorded many thousands of explosion-quake and

1 Corresponding Author: Ferdinando Giacco, Department of Physics, University of Salerno, Via S. Allende, 84081 Baronissi (SA), Italy; E-mail: [email protected].


landslide signals. The detection of landslide seismic signals and their discrimination from the other transient signals was one of the most useful tools for monitoring the stability and the activity of the northwest flank of the volcano.

In recent years, several methods have been proposed for detecting and discriminating among different seismic signals, based on spectral analysis [3,11], cross-correlation techniques [5,6] and neural networks [7,8,9,4,10]. In this paper we report on two different supervised approaches for the discrimination among explosion-quake, landslide and microtremor signals, which characterize the Strombolian activity. The first method is based on one of the most widely used neural networks, the Multilayer Perceptron (MLP), while the second is the Support Vector Machine (SVM) algorithm [13].

Support Vector Machines, originally developed for two-class discrimination problems, have since been extended to multi-class settings [15], and nowadays multi-class SVM architectures such as 1vs1 and 1vsAll are widely used in different fields [16,17,18,20], including recent applications to seismic signal recognition [19].

The remainder of the paper is organized as follows: Section 1 describes the data and the preprocessing techniques used to represent them in a meaningful and compressed form; Section 2 describes the classification techniques, namely the Multilayer Perceptron (2.1) and the Support Vector Machine (2.2); lastly, Section 3 reports the conclusions on the experimental results.

1. Seismic Data and Preprocessing

Stromboli is a volcanic island in the Mediterranean Sea, located north of eastern Sicily. Stromboli exhibits continuous eruptive activity generally involving the vents at the top of the cone. This activity consists of individual explosions emitting gasses and pyroclastic fragments, typically six to seven times per hour. Seismic signals recorded on Stromboli are characterized by microtremor and explosion-quakes, usually associated with Strombolian explosions. This typical Strombolian activity sometimes stops during sporadic effusive episodes characterized by lava flows. The most recent effusive phases occurred in 1930, 1974, 1985, in December 2002 and, the last one, in February 2007. The December 2002 effusive phase began with a large landslide on the "Sciara del Fuoco", a depression on the northwest flank of the volcano, that generated a tsunami with a maximum wave height of about 10 m.

After this episode the northwest flank became unstable and as many as 50 landslide signals per day were recorded by the seismic monitoring network operated by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) [12].

The broadband network operated by the INGV for the seismic monitoring of Stromboli volcano has operated since January 2003. It consists of 13 digital stations equipped with three-component broadband Guralp CMG-40 seismometers, with a frequency response of 60 sec (see Fig. 1). The data are acquired by digital recorders, with a sampling rate of 50 samples/sec, and are continuously transmitted via Internet to the recording center in Naples at the Vesuvius Observatory (INGV). A more detailed description of the seismic network and data-acquisition system can also be found at www.ov.ingv.it/stromboli.html [1]. Since its installation, the network has recorded as transient signals some hundreds of thousands of explosion-quakes and thousands of landslides, in addition to continuous volcanic microtremor signals. The explosion-quakes are


Figure 1. Map showing the current network geometry of 13 digital broadband stations deployed on Stromboli Island.

characterized by a signal exhibiting no distinct seismic phases and having a frequency range of 1–10 Hz. Landslide signals are higher in frequency than the explosion-quakes and their typical waveform has an emergent onset. The microtremor is a continuous signal having frequencies between 1 and 3 Hz. The network has also recorded local, regional and teleseismic events. The data set includes 1159 records from the three components of five seismic stations: STR1, STRA, STR8, STR5, STRB (see Fig. 1). It is made up of 430 explosion-quakes, 267 landslides, and 462 microtremor signals. For each event, a record of 23 sec is taken at a 50-Hz sampling frequency. The arrival-time picking of the explosion-quake and landslide signals has been performed by the analysts, using data windows having about 3 sec of pre-event signal. We used 5/8 of the available data as the training set (724 samples) and the remaining 3/8 as the testing set (435 samples).

The preprocessing stage is performed using the Linear Predictive Coding (LPC) technique [21], frequently used in the speech recognition field to extract compact spectral information. LPC tries to predict a signal sample by means of a linear combination of various previous signal samples, that is:

$s^{*}(n) \simeq c_1 s(n-1) + c_2 s(n-2) + \dots + c_p s(n-p) \qquad (1)$

where s(n) is the signal sample at time n, s*(n) is its prediction, and p is the model order (the number of prediction coefficients). The estimate of the prediction coefficients c_i, for i = 1, ..., p, is obtained by an optimization procedure that tries to reduce the error between the real signal at time n and its LPC estimate. The number of prediction coefficients p is problem dependent. This number must be determined via a trade-off between preserving the information content and optimizing the compactness of the representation. Here, we choose a 256-point window using p = 6 LPC coefficients for each signal. Increasing p does not improve the information content significantly, but markedly decreases the compactness of the representation. Therefore we extract six coefficients from


each of the eight Hanning windows (5 sec) into which we divided the signal, each window overlapping with the previous one by 2.5 sec. Because LPC provides frequency information [21], we have also added time-domain information. We use the function f_m, computed as the difference, properly normalized, between the maximum and minimum signal amplitudes within a 1-sec window W_m:

$f_m = \frac{(\max[s_i] - \min[s_i]) \times N}{\sum_{n=1}^{N} (\max[s_i] - \min[s_i])}, \quad i \in W_m,\ m = 1, \dots, N. \qquad (2)$

Thus, for an N = 23 sec signal, we obtain a 23-element time-feature vector. Therefore, each of the 1159 signals is encoded with a 71-feature vector (6 x 8 = 48 frequency features + 23 time features). The use of both spectral and temporal features more closely approximates the waveform characteristics considered by seismologists when visually classifying signals.
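To make the encoding concrete, the following is a minimal NumPy/SciPy sketch of the 71-feature extraction as we read it from the text: LPC coefficients are estimated per Hanning window by solving the Yule-Walker (autocorrelation) equations, and the time features follow Eq. (2). The function names, the exact window handling and the solver choice are our assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, p=6):
    """Estimate p LPC coefficients of one windowed frame by solving the
    Yule-Walker (autocorrelation) equations, a common LPC formulation."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Solve the Toeplitz system R c = r for the prediction coefficients c_1..c_p
    return solve_toeplitz((r[:p], r[:p]), r[1:p + 1])

def encode_signal(signal, fs=50, n_windows=8, win_sec=5.0, hop_sec=2.5, p=6):
    """Build the 71-component vector: 6 LPC coefficients for each of eight
    half-overlapping 5-s Hanning windows (48 spectral features) plus the 23
    normalized 1-s amplitude-range features of Eq. (2)."""
    win, hop = int(win_sec * fs), int(hop_sec * fs)
    hann = np.hanning(win)
    spectral = []
    for k in range(n_windows):
        frame = signal[k * hop:k * hop + win] * hann
        spectral.extend(lpc_coefficients(frame, p))
    tw = fs                                    # samples per 1-s time-feature window
    n = len(signal) // tw                      # 23 windows for a 23-s record
    ranges = np.array([signal[m * tw:(m + 1) * tw].ptp() for m in range(n)])
    temporal = ranges * n / ranges.sum()       # Eq. (2): normalized amplitude range
    return np.concatenate([spectral, temporal])    # 48 + 23 = 71 features
```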

In the following, we design two supervised techniques to build an automatic classifier and we train them to distinguish between landslides, explosion-quakes, and microtremor.

2. Classification techniques

2.1. Multi-layer Perceptron

The multilayer perceptron (MLP) with the back-propagation learning algorithm ([19]) is one of the most widely used neural networks. Two kinds of information processing are performed in a multilayer perceptron. The first is the forward propagation of the input through the network, from the input units to the output units. The other is the learning algorithm, which consists of the backpropagation of the errors through the network, from the output units to the input units, together with weight and bias updates. The purpose of back-propagation is to adjust the internal state (weights and biases) of the multilayer perceptron so as to produce the desired output for the specified input.

In our experiments we built a two-layer MLP network for the three-class discrimination problem [25]. Weight optimization is carried out during the training procedure through minimisation of the Cross-Entropy error function [22] using the Quasi-Newton algorithm [22]. The network output activation function is the logistic, while the hyperbolic tangent is used for the hidden units. Moreover, when logistic output units and a cross-entropy error function are used together, the network output represents the probability that an input vector belongs to one of the investigated classes. The number of hidden units and training cycles has been chosen empirically by trial and error.
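As a rough, hypothetical reconstruction (not the authors' implementation), a comparable classifier can be set up with scikit-learn's MLPClassifier; note that scikit-learn uses softmax rather than per-class logistic outputs for multi-class problems, while the hidden-layer size, quasi-Newton solver and iteration budget below simply mirror the values quoted in this section. X and y denote the hypothetical 71-feature matrix and class labels.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# X: (1159, 71) matrix of encoded signals, y: labels in
# {"landslide", "explosion-quake", "microtremor"} (hypothetical names).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=724, random_state=0)      # 5/8 train, 3/8 test split

mlp = MLPClassifier(hidden_layer_sizes=(5,),   # 5 hidden units, as in the best run
                    activation="tanh",         # hyperbolic-tangent hidden units
                    solver="lbfgs",            # quasi-Newton weight optimization
                    max_iter=110)              # roughly 110 training cycles
mlp.fit(X_train, y_train)
print(confusion_matrix(y_test, mlp.predict(X_test)))   # error matrix as in Table 1
```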

Lastly, to verify the generalization ability of the network, after the training step we test the MLP on a subset (the testing set) not used to train the network. To assess the system robustness we test the network several times, randomly changing the weight initialization and the permutation of the data. In this way the network performance is the average of the percentages of correct classification obtained over these tests. Table 1 shows the error matrix corresponding to the best classification performance, obtained with an MLP architecture made up of 5 hidden units and 110 training cycles.


Table 1. Error matrix corresponding to the best MLP performance, obtained with a network architecture made up of 5 hidden units and 110 training cycles. The overall accuracy is 98.39%.

Classes Landslide Explosion-quake Microtremor

Landslide 97 0 4

Explosion-quake 0 167 1

Microtremor 2 0 164

Figure 2. SVM optimal solution in a two-dimensional space for a non-linearly separable classification problem. The distance between the optimal hyperplane and the nearest datum is called the margin, while the data corresponding to the filled circles and the filled rectangle are support vectors. The slack variables ξ_i and ξ_j are introduced here (see Eqn. 3) to allow the violation of the constraints for some training samples.

The best overall accuracy is 98.39%, while the average value taken over 10 different permutations of the training and test sets is 97.2%.

2.2. Support Vector Machines

Support Vector Machines (SVMs) have become a popular method in pattern classification for their ability to cope with small training sets and high-dimensional data [13,14,15].

The goal of the SVM algorithm is to find the separating decision function with the maximum separating margin, in order to maximize the generalization ability when a new sample is presented. This can be formulated as a Lagrangian minimization problem with inequality constraints on data separation.

If the training data are linearly separable, all the samples lie beyond the maximum margin, and the data lying on the margins are called support vectors. In our study we used an SVM formulation assuming that the data are not linearly separable (see Fig. 2). In this case, we allow the violation of some constraints by introducing non-negative slack variables, ξ_i ≥ 0, into the Lagrangian problem. Namely, the Lagrangian Q (for M training samples) is given by


$Q(w, b, \xi) = \frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{M} \xi_i \qquad (3)$

where w is an m-dimensional vector which locates the optimal hyperplane, b is a bias term and C is a parameter determining the weight of the slack variables ξ_i. The inequality constraints are then given by

$y_i (w^T x_i + b) \ge 1 - \xi_i \quad \text{for } i = 1, \dots, M, \qquad (4)$

where x_i are the training samples and y_i the associated labels (i.e. ±1 in a binary setting). The solution of the SVM problem is then obtained by introducing Lagrange multipliers α_1, ..., α_M and solving the related "dual problem", given by

$Q(\alpha) = \sum_{i=1}^{M} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{M} \alpha_i \alpha_j y_i y_j x_i^T x_j \qquad (5)$

subject to the constraints

$\sum_{i=1}^{M} y_i \alpha_i = 0, \qquad C \ge \alpha_i \ge 0 \quad \text{for } i = 1, \dots, M. \qquad (6)$

One of the advantages of the SVM algorithm is that the solution is unique and depends only on the support vectors.

To enhance linear separability, the original input space is mapped into a high-dimensional dot-product space called the feature space. The advantage of using kernels is that we need not treat the high-dimensional feature space explicitly: namely, in solving Eqn. 5 we use K(x_i, x_j) instead of x_i^T x_j. The most widely used kernel functions in the literature are polynomials (of different degrees) and Gaussians. In our experiments we tried both kernel choices, finding that the best performance is achieved using a Gaussian kernel, namely

$K(x_i, x_j) = \exp(-\gamma \lVert x_i - x_j \rVert^2), \qquad (7)$

where γ is an additional parameter, determined manually. Concerning the extension of the originally binary SVM to multi-class settings, there has been considerable recent research [23,24]. Two main architectures were originally proposed for an l-class problem [15]:

• One versus All (1vsAll): l binary classifiers are applied, one per class versus all the others. Each sample is assigned to the class with the maximum output.

• One versus One (1vs1): l(l − 1)/2 binary classifiers are applied, one for each pair of classes. Each sample is assigned to the class getting the highest number of votes. A vote for a given class is defined as a classifier assigning the pattern to that class.


In the current case, we focus on the 1vsAll approach, building 3 different SVMs, each of which separates one specific class from all the others. We tried several values for the parameter C and used the Gaussian kernel reported in Eqn. 7. The best classification has an overall accuracy of 99.54% (see Table 2), while the average value computed over 10 different permutations of the training and test sets is 98.76%.
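A hedged sketch of this 1vsAll setup, again with scikit-learn and the hypothetical X_train/y_train split from the MLP sketch above; the C and gamma values are placeholders, since the paper tunes them by hand.

```python
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Three one-versus-all binary SVMs with a Gaussian (RBF) kernel, Eq. (7).
# C and gamma below are placeholder values, not the paper's tuned settings.
svm = OneVsRestClassifier(SVC(kernel="rbf", C=10.0, gamma=0.1))
svm.fit(X_train, y_train)

y_pred = svm.predict(X_test)
print(accuracy_score(y_test, y_pred))      # overall accuracy
print(confusion_matrix(y_test, y_pred))    # error matrix as in Table 2
```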

Table 2. Error matrix corresponding to the best 1vsAll SVM performance with Gaussian kernel. The overall accuracy is 99.54%.

Classes Landslide Explosion-quake Microtremor

Landslide 91 0 1

Explosion-quake 0 169 0

Microtremor 1 0 173

3. Conclusions

Two supervised strategies have been implemented to discriminate among three different seismic events: landslides, explosion-quakes and microtremor. Looking at the results, we can state that the discrimination performance is very good for both the MLP and the SVM algorithms. The MLP best performance has a percentage of correct classification of 98.4%, while the average value obtained over several training and test samplings is 97.2%. However, the 1vsAll SVM with Gaussian kernel always achieves higher accuracy, both in terms of best (99.5%) and average (98.8%) performance.

We also remark that the extracted features, used as a parametric and compressed representation of the seismic signals, give robust information on their nature. This can also be argued by looking at the SVM results, where the solution depends only on the support vectors, acting as the relevant part of the training set. Indeed, different permutations of the training and test sets provide different results, meaning that many support vectors are present within the data, that is, the data representation is well suited for the required classification task.

References

[1] W. De Cesare, M. Orazi, R. Peluso, G. Scarpato, A. Caputo, L. D'Auria, F. Giudicepietro, M. Martini, C. Buonocunto, M. Capello, A. M. Esposito (2009). The broadband seismic network of Stromboli volcano (Italy), Seismological Research Letters. In press.

[2] M. Martini, F. Giudicepietro, L. D'Auria, A. M. Esposito, T. Caputo, R. Curciotti, W. De Cesare, M. Orazi, G. Scarpato, A. Caputo, R. Peluso, P. Ricciolino, A. Linde, S. Sacks (2008). Seismological monitoring of the February 2007 effusive eruption of the Stromboli volcano, Annals of Geophysics, Vol. 50, N. 6, December 2007, pp. 775-788.

[3] Hartse, H. E., W. S. Phillips, M. C. Fehler, and L. S. House (1995). Single-station spectral discrimination using coda waves, Bull. Seism. Soc. Am. 85, 1464–1474.

[4] Del Pezzo, E., A. Esposito, F. Giudicepietro, M. Marinaro, M. Martini, and S. Scarpetta (2003). Discrimination of earthquakes and underwater explosions using neural networks, Bull. Seism. Soc. Am. 93, no. 1, 215–223.

[5] Joswig, M. (1990). Pattern recognition for earthquake detection, Bull. Seism. Soc. Am. 80, 170–186.



[6] Rowe, C. A., C. H. Thurber, and R. A. White (2004). Dome growth behavior at Soufriere Hills volcano, Montserrat, revealed by relocation of volcanic event swarms, 1995–1996, J. Volc. Geotherm. Res. 134, 199–221.

[7] Dowla, F. U. (1995). Neural networks in seismic discrimination, in Monitoring a Comprehensive Test Ban Treaty, E. S. Husebye and A. M. Dainty (Editors), NATO ASI, Series E, Vol. 303, Kluwer, Dordrecht, The Netherlands, 777–789.

[8] Wang, J., and T. Teng (1995). Artificial neural network based seismic detector, Bull. Seism. Soc. Am. 85, 308–319.

[9] Tiira, T. (1999). Detecting teleseismic events using artificial neural networks, Comp. Geosci. 25, 929–939.

[10] Esposito, A. M., F. Giudicepietro, L. D'Auria, S. Scarpetta, M. G. Martini, M. Coltelli, and M. Marinaro (2008). Unsupervised Neural Analysis of Very-Long-Period Events at Stromboli Volcano Using the Self-Organizing Maps, Bull. Seism. Soc. Am., Vol. 98, No. 5, pp. 2449–2459.

[11] Gitterman, Y., V. Pinky, and A. Shapira (1999). Spectral discrimination analysis of Eurasian nuclear tests and earthquakes recorded by the Israel seismic network and the NORESS array, Phys. Earth. Planet. Interiors 113, 111–129.

[12] Martini, M., B. Chouet, L. D'Auria, F. Giudicepietro, and P. Dawson (2004). The seismic source stability of the Very Long Period signals of the Stromboli volcano, in I General Assembly Abstracts, EGU, Nice, 25–30 April 2004.

[13] Vapnik, V. N. (1995). The Nature of Statistical Learning Theory, Springer.
[14] Webb, A. R. (2002). Statistical Pattern Recognition, John Wiley and Sons.
[15] Schölkopf, B. and A. J. Smola (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond, MIT Press.
[16] Melgani, F. and L. Bruzzone (2004). Classification of hyperspectral remote sensing images with support vector machines, IEEE Trans. on Geoscience and Remote Sensing, vol. 42, pp. 1778-1790.
[17] Foody, G. F. and Ajay Mathur (2004). A relative evaluation of multiclass image classification by support vector machines, IEEE Trans. on Geoscience and Remote Sensing, vol. 42, pp. 1335-1343.
[18] Hsu, C. W. and C. J. Lin (2002). A comparison of methods for multiclass support vector machines, IEEE Trans. on Neural Networks, vol. 13, pp. 415-425.
[19] Masotti, M., S. Falsaperla, H. Langer, S. Spampinato, and R. Campanini (2006). Application of Support Vector Machine to the classification of volcanic tremor at Etna, Italy, Geophys. Res. Lett., 33, L20304, doi:10.1029/2006GL027441.
[20] Kahsay, L., F. Schwenker and G. Palm (2005). Comparison of multiclass SVM decomposition schemes for visual object recognition, LNCS, Springer, vol. 3663, pp. 334-341.
[21] Makhoul, J. (1975). Linear prediction: a tutorial review, Proc. IEEE 63, 561-580.
[22] Bishop, C. (1995). Neural Networks for Pattern Recognition, Oxford University Press, New York, 500 pp.
[23] F. Giacco, S. Scarpetta, L. Pugliese, M. Marinaro and C. Thiel. Application of Self-Organizing Maps to multi-resolution and multi-spectral remote sensed images, in "New Directions in Neural Networks", Proceedings of the 18th Italian Workshop on Neural Networks (WIRN 2008), IOS Press (Netherlands), pp. 245-253.
[24] C. Thiel, F. Giacco, F. Schwenker and G. Palm. Comparison of Neural Classification Algorithms Applied to Land Cover Mapping, in "New Directions in Neural Networks", Proceedings of the 18th Italian Workshop on Neural Networks (WIRN 2008), IOS Press (Netherlands), pp. 254-263.
[25] A. M. Esposito, F. Giudicepietro, S. Scarpetta, L. D'Auria, M. Marinaro, M. Martini (2006). Automatic discrimination among landslide, explosion-quake and microtremor seismic signals at Stromboli volcano using Neural Networks, Bull. Seismol. Soc. Am. (BSSA) Vol. 96, No. 4A, pp. 1230-1240, August 2006, doi:10.1785/0120050097.