THE JOURNAL OF VISUALIZATION AND COMPUTER ANIMATION
J. Visual. Comput. Animat. 2003; 14: 269–278 (DOI: 10.1002/vis.324)
Application of multimedia techniques in the physical rehabilitation of Parkinson’s patients

By Antonio Camurri, Barbara Mazzarino, Gualtiero Volpe*, Pietro Morasso, Federica Priano and Cristina Re
This paper presents and discusses some experiments having the purpose of planning,
developing and validating aesthetically resonant environments for different types of
sensorimotor impairments which affect Parkinson’s patients. From a technical point of view
the aim is to develop a computational open architecture in which it is possible to integrate
modules for gesture analysis and recognition and for interactive construction of therapeutic
exercises based on multimedia stimulation in real time. The clinical objective is to
experiment with a sensorimotor stimulation device that supports akinesia compensation
by controlling movement rhythmic structures in subjects with Parkinson’s disease. The
EyesWeb open architecture has been used to analyse patients’ motion and to produce visual
feedback during therapy sessions in real time. A pilot study has been conducted on two
Parkinson’s disease patients in the framework of the EU IST project CARE-HERE.
Copyright © 2003 John Wiley & Sons, Ltd.
Received: 1 October 2002; Revised: 1 March 2003
KEY WORDS: therapy and rehabilitation; Parkinson’s disease; motion analysis; multimedia techniques
Introduction

In the framework of the EU IST project CARE-HERE, we
have carried out experiments with the purpose of plan-
ning, developing and validating aesthetically resonant
environments for different types of sensorimotor im-
pairments which affect Parkinson’s patients.
The underlying idea of aesthetic resonance is to give
patients visual and acoustic feedback depending on a
qualitative analysis of their (full-body) movement, in
order to evoke ludic aspects (and consequently intro-
duce emotional–motivational elements) without the
need for the rigid standardization required in typical
motion analysis labs, or invasive techniques: the subjects
are free to move in 3D with no sensors/markers on the body.
The scientific community is showing a growing inter-
est in the investigation and development of therapeutic
exercises based on interactive multimedia technologies.1–3
In the specific context of Parkinson’s disease (PD)
Albani and colleagues4 showed that virtual reality
(VR) can work as an effective external stimulus for
exploring motor plans through the creation of mental
images. In this study, a VR environment has
been employed reproducing situations related to com-
mon daily activities at home, such as eating or using the
bathroom. The VR environment has been successfully
tested on two women with PD aged 68 and 69 years, and
on 10 normal control subjects.
Music therapy, including singing, music listening,
sharing and discussion of songs, learning to play instru-
ments, song writing, moving to music and participation
in music activities, has also proved effective. For
example, Pacchetti and colleagues5,6 developed an ac-
tive music therapy programme combining both motor
and emotional rehabilitation for PD patients. Their
programme consisted of several group sessions, each
based on the correct ordering of listening tasks, exer-
cises and improvisation. Since significant differences
were found between pre- and post-therapy assess-
ments, demonstrating a significant improvement in hy-
pokinesia and in daily performance, the authors
*Correspondence to: Gualtiero Volpe, InfoMus Lab, DIST—Università degli Studi di Genova, Viale Causa 13, I-16145 Genova, Italy. E-mail: [email protected]
Contract/grant sponsor: EU IST project CARE HERE.
propose active music therapy as a new method to be
included in PD rehabilitation programmes.
Our research tackles the rehabilitation task in both its
technical and clinical aspects. From a technical point of
view the aim is twofold:
1. To develop a computational open architecture in
which it is possible (i) to integrate modules for
gesture analysis and recognition and for real-time
generation of multimedia feedback, (ii) to design
dynamic and interactive therapeutic exercises, and
(iii) to perform the exercises in real time.
2. To develop algorithms for real-time motion analysis
that, despite the limitation given by the lack of on-
body markers/sensors, are reliable and precise
enough (i) to enable the generation of suitable audio
and visual feedbacks in aesthetically resonant
environments, and (ii) to allow the therapist to
evaluate the progress of the therapy by monitoring
the measures associated with a collection of motion
features that the algorithms provide.
The clinical objective is to experiment with a sensori-
motor stimulation device that supports akinesia com-
pensation by controlling movement rhythmic structures
in subjects with PD. The training in recognizing and
reproducing these structures can help the subject to
control motor tasks that are more complex than those
adjusted with simple isochronous signal stimuli. A
greater fluency of movement recovered with these meth-
ods can be transformed into visual or acoustic information
produced by the movement itself, which can guide the
internal representation of voluntary control.
This paper discusses a pilot experiment we carried out
in order to test the developed techniques on patients
with PD. Patients were chosen on the basis of a previous
experimental study developed at the hospital of
Arenzano, Italy.7,8 The EyesWeb open software platform
(www.eyesweb.org) has been adopted as the basic
framework in which the motion analysis techniques have
been integrated and the therapeutic exercises developed.
Materials and Methods
Subjects and Protocol
In a preliminary phase of the study we selected two
parkinsonian subjects: a man (aged 67) and a woman
(aged 73). The protocol consisted of 12 trials for the male
patient during a period of 6 months and six trials for the
female patient during a period of 2 months. During
every session, held weekly, the subjects were filmed
both during the execution of some simple physical
exercises and during audio-visual analyses developed
with the EyesWeb open platform.9 In these pilot studies
we used both standard material distributed with
EyesWeb and new exercises suggested by the physicians
of the operative unit, such as, for example, an exercise in
which the subject is allowed to paint with his own body
by moving it in space.
Starting from the considered symptoms (postural
instability, gait difficulties and bradykinesia) we se-
lected some simple tasks in order to provide a preli-
minary taxonomy from which rehabilitation and
evaluation exercises can be built. In particular, here we
show an example implementation of the stand–sit
exercise, also used in PD functional evaluation, as in
the UPDRS.10
The EyesWeb Open Software Platform
The EyesWeb open hardware and software platform9
(www.eyesweb.org) has been adopted for the design,
development and real-time performance of physical
exercises, the extraction of relevant motion parameters
and analysis of the obtained data.
EyesWeb is an open hardware and software platform
originally conceived for the design and development of
real-time music and multimedia applications. It sup-
ports the user in experimenting with computational
models of non-verbal expressive communication and
in mapping gestures from different modalities (e.g. hu-
man full-body movement, music) onto multimedia out-
put (e.g. sound, music, visual media). It allows fast
development and experimentation cycles for interactive
performance set-ups through a visual programming
language that supports mapping, at different levels, of
movement and audio onto integrated music, visual and
mobile scenery.
EyesWeb is the basic platform of the MEGA (Multi-
sensory Expressive Gesture Applications) IST-20410 EU
Project (www.megaproject.org) and has also been
adopted in the IST CARE HERE Project on therapy
and rehabilitation. EyesWeb is fully available at its
website (www.eyesweb.org). Public newsgroups also
exist and are managed daily to support the growing
EyesWeb community (more than 500 users at the mo-
ment), including universities, research institutes and
industries.
In the particular framework of this study, EyesWeb
has been selected since it (i) allows interactive mapping
of motion parameters onto sounds and visual media in a
multimedia scenario, (ii) allows integration of novel
analysis techniques as new libraries or extensions to
existing libraries, (iii) allows fast design, development
and testing of novel interactive therapeutic exercises,
(iv) can display in real time the analysed physical
measurements, and (v) supports different types of sen-
sors (including wireless), one or more cameras, and can
be programmed to perform specific analysis of move-
ment in real time. For this last task, the EyesWeb
Expressive Gesture Processing and Motion Analysis
libraries11,12 have been employed, including software
modules for extraction and pre-processing of physical
signals (e.g. video from video cameras), and extraction
and processing of motion parameters (such as contrac-
tion index, directness index, stability index, quantity of
motion, pause and motion durations).
Extraction and Processing of Motion Information
EyesWeb has been used both to (i) extract and analyse
relevant motion features, and (ii) produce aesthetically
resonant visual feedback. An archive of recordings from
therapy sessions has been collected for preliminary
testing and tuning of the developed exercises. Patients
have then performed the exercises during therapy ses-
sions with the system working in real time.
A layered approach has been followed to move from
low-level physical measures (e.g. position, speed, accel-
eration of body parts) towards descriptors of overall
motion features (e.g. motion fluency, directness, impul-
siveness) related to PD symptoms.
These high-level descriptors are grounded on both the
consolidated tradition of biomechanics and on studies
by researchers on human movement in the field of
performing arts, such as the choreographer and human
movement researcher Rudolf Laban and his Theory of
Effort,13,14 already used in computational models of
human movement.11,12,15,16
Motion Detection and Tracking. Incoming video
frames are processed to segment motion and no-motion
regions and to detect and obtain information about the
motion that is actually occurring. This task is accom-
plished by means of consolidated computer vision
techniques usually employed for real-time analysis
and recognition of human motion and activity (see, for
example, the temporal templates technique for repre-
sentation and recognition of human movement de-
scribed by Bobick and Davis17). It should be noted
that, unlike the work of Bobick and Davis, here we do
not aim at detecting or recognizing a specific kind of
motion or activity, but rather our goal is to calculate
values of cues able to numerically describe qualitative
aspects of motion (such as motion fluency). The techni-
ques we use include feature tracking based on the
Lucas–Kanade algorithm,18 skin colour tracking to ex-
tract positions and trajectories of hands and head, and
silhouette motion images (SMIs).
An SMI is an image carrying information about
variations in the silhouette shape and position in the
last few frames. SMIs are inspired by motion–energy
images (MEIs) and motion–history images (MHIs).17,19
They differ from MEIs in the fact that the silhouette in
the last (more recent) frame is removed from the output
image: in this way only motion is considered, while the
current posture is skipped. Thus, SMIs can be consid-
ered as carrying information about the amount of mo-
tion occurring in the last few frames. Information about
time is implicit in the image and not explicitly recorded.
We also use an extension of SMIs, which takes into
account the internal motion in silhouettes, and decom-
position of the silhouette into sub-regions (see Figure 1).
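Under this definition, an SMI can be computed as the union of the last few binary silhouette frames with the most recent silhouette removed, so that only the regions the body has just left (i.e. motion) remain. The following is a minimal sketch using NumPy boolean images; the function name and the frame representation are our own assumptions, not part of the EyesWeb libraries:

```python
import numpy as np

def silhouette_motion_image(silhouettes):
    """Compute an SMI from binary silhouette frames (oldest first).

    The SMI is the union of the silhouettes over the last n frames
    with the most recent silhouette removed: only moving regions
    remain, and the current posture is discarded.
    """
    union = np.zeros_like(silhouettes[0], dtype=bool)
    for s in silhouettes:
        union |= s.astype(bool)
    # Subtract the most recent silhouette: keep pixels the body has left.
    return union & ~silhouettes[-1].astype(bool)
```

For example, a square silhouette that shifts one pixel to the right between two frames leaves an SMI consisting of the column of pixels it vacated.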
Information that motion detection and tracking pro-
vide to the upper levels is actually encoded in two
different forms: positions and trajectories of points on
the body (possibly related to specific body parts, e.g.
hands, head, feet), and images directly resulting from
the processing of the input frames (e.g. human silhou-
ettes, SMIs). Such information is used to extract a
collection of motion features. As examples, we describe
in the following two such features: quantity of motion
(QoM) and contraction index (CI).
Quantity of Motion. The simplest use of an SMI is
calculating its area. The result, which we call quantity of
motion, can be considered as an overall measure of the
amount of detected motion. QoM can be thought of as a
first, rough approximation of the physical momentum,
that is, q = mv, where m is the mass of the moving body
and v is its velocity. The shape of the QoM graph is close
to the shape of the velocity graph of a marker placed on
a limb.
QoM has two problems: the measure depends on the
distance from the camera, and difficulties emerge when
comparing measures from different patients. We solved
both problems by scaling the SMI area by the area of the
most recent silhouette:
QoM = Area(SMI[t, n]) / Area(Silhouette[t])
In this way, the measure becomes independent of the
camera distance (within a range depending on the resolu-
tion of the video camera), and it is expressed in terms of
fractions of the body area that moved. For example, it is
possible to say that at instant t a movement correspond-
ing to 2.5% of the total area covered by the silhouette
occurred.
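The normalization above can be sketched directly from the formula; `smi` and `silhouette` are assumed to be binary images (non-zero where motion, respectively the body, was detected), and the function name is illustrative:

```python
import numpy as np

def quantity_of_motion(smi, silhouette):
    """QoM = Area(SMI[t, n]) / Area(Silhouette[t]).

    Scaling the SMI area by the area of the most recent silhouette
    makes the measure roughly independent of the camera distance:
    the result is the fraction of the body area that moved.
    """
    body_area = np.count_nonzero(silhouette)
    if body_area == 0:
        return 0.0  # no body detected in this frame
    return np.count_nonzero(smi) / body_area
```

A QoM of 0.025 then reads exactly as in the text: a movement corresponding to 2.5% of the total area covered by the silhouette.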
Contraction Index. The contraction index is a mea-
sure, ranging from 0 to 1, of how the patient’s body is
using the space surrounding it. It is inspired by previous
studies of the authors on dance performances12 and is
related to Laban’s ‘personal space’.14
The algorithm to compute CI combines two different
techniques: the identification of an ellipse approximat-
ing the body silhouette and computations based on the
bounding region. The former is based on an analogy
between image moments and mechanical moments: in
this perspective, the three central moments of second
order form the components of the inertia tensor of
rotation of the silhouette around its centre of gravity:
this allows computation of the axes (corresponding to
the main inertial axes of the silhouette) of an ellipse that
can be considered as an approximation of the silhouette:
eccentricity of such an ellipse is related to contraction/
expansion; orientation of the axes is related to the
orientation of the body.20 The second technique used
to compute CI relates to the bounding region, i.e. the
minimum rectangle surrounding the patient’s body. The
algorithm compares the area covered by this rectangle
with the area actually covered by the silhouette. Intui-
tively, if the limbs are fully stretched and not lying along
the body, this component of CI will be low, while, if the
limbs are kept close to the body, it will be high
(near to 1). While the patient is moving, CI varies
continuously. Even when used with data from only
one camera, its information is still reliable, being almost
independent of the distance of the patient from the
camera. One use of this feature consists of sampling its
values at the beginning and the end of a stretching move-
ment, in order to classify that movement as a contraction
or expansion. Suitable multimedia feedbacks can then
be mapped onto actions of contraction or expansion
depending on the therapeutic goals (e.g. pleasant music
can be associated with expansion, if therapy aims at
motivating the patient in doing expansion actions).
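As an illustration of the second technique only (the bounding-region component; the moment-based ellipse fitting is omitted here), assuming a binary silhouette image:

```python
import numpy as np

def contraction_index(silhouette):
    """Bounding-region component of the contraction index.

    Ratio of the silhouette area to the area of its minimum bounding
    rectangle: near 1 when the limbs are held close to the body (the
    silhouette fills its bounding box), lower when they are stretched.
    """
    ys, xs = np.nonzero(silhouette)
    if ys.size == 0:
        return 0.0  # empty frame: no body detected
    height = int(ys.max() - ys.min()) + 1
    width = int(xs.max() - xs.min()) + 1
    return ys.size / float(height * width)
```

Note that this ratio, like the full CI, is largely independent of the patient’s distance from the camera, since numerator and denominator scale together.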
Motion Segmentation. SMIs have interesting prop-
erties: evolution in time of their (normalized) area (what
we called quantity of motion) resembles the evolution of
velocity of biological motion, which can be roughly
described as a sequence of bell-shaped curves (motion
bells). In order to segment motion, the sequence of these
motion bells and their features, e.g. peak value and
duration, have been extracted. A first attempt consists
of recognizing phases during which the patient is
Figure 1. (a) Multiple bounding regions. (b) Measure of internal motion using SMIs.
A. CAMURRI ET AL.* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Copyright # 2003 John Wiley & Sons, Ltd. 272 J. Visual. Comput. Animat. 2003; 14: 269–278
moving (motion phases) and phases during which he or
she does not appear to move (pause phases). An em-
pirical threshold has been defined: the patient is con-
sidered to be moving if the area of the motion image is
greater than 2.5% of the total area of the silhouette.
Figure 2 shows motion bells after segmentation: a mo-
tion bell characterizes each motion phase.
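The thresholding step can be sketched as follows over a QoM time series; the 2.5% threshold is the empirical value defined above, while the function name and the output format (runs labelled 'motion' or 'pause', end index exclusive) are our own choices:

```python
def segment_motion(qom_series, threshold=0.025):
    """Split a QoM time series into motion and pause phases.

    A frame is a motion frame when its QoM (fraction of the silhouette
    area that moved) exceeds the empirical threshold of 2.5%.
    Returns a list of (label, start, end) runs, end exclusive.
    """
    if not qom_series:
        return []
    phases = []
    start = 0
    moving = qom_series[0] > threshold
    for i, q in enumerate(qom_series[1:], start=1):
        now_moving = q > threshold
        if now_moving != moving:
            phases.append(("motion" if moving else "pause", start, i))
            start, moving = i, now_moving
    phases.append(("motion" if moving else "pause", start, len(qom_series)))
    return phases
```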
Fluency and Impulsiveness. Motion segmentation
can be considered as a first step towards the analysis
of the rhythmic aspects of a patient’s motion. Analysis of
the sequence of pause and motion phases and their
relative time durations can lead to a first evaluation of
how much motion is even and regular, or hesitating.
Parameters from pause phases are also extracted to
distinguish truly still positions from active pauses invol-
ving low motion (hesitating or oscillating movements).
Starting from these data motion fluency and impul-
siveness are evaluated. Fluency can be estimated start-
ing from an analysis of the temporal sequence of motion
bells. An action (e.g. standing up) performed with
frequent stops and restarts (i.e. characterized by a high
number of short pause and motion phases) will be less
fluent than the same movement performed in a contin-
uous, ‘harmonic’ way (i.e. along a few long motion
phases). The hesitating, bounded performance will be
characterized by a higher percentage of acceleration and
deceleration in the time unit (due to the frequent stops
and restarts), a parameter that has been demonstrated to
be of relevant importance in motion flow evaluation
(see, for example, Zhao,16 where a neural network is
used to evaluate Laban’s flow dimension). A first mea-
sure of impulsiveness can be obtained from the shape of
a motion bell. In fact, since QoM is directly related to the
amount of detected movement, a short motion bell
having a high peak value will be the result of an
impulsive movement (i.e. a movement in which speed
rapidly moves from a value near or equal to zero, to a
peak and back to zero). On the other hand, a sustained,
continuous movement will show a motion bell charac-
terized by a relatively long time duration in which the
QoM values have little fluctuations around the average
value (i.e. the speed is more or less constant during the
movement).
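A first sketch of extracting motion-bell features, under the same assumptions: each run of QoM samples above the motion threshold is one motion bell, summarized by its peak value and duration. A high peak over a short duration then points at an impulsive movement; many short bells separated by short pauses point at low fluency. The names and the returned representation are illustrative:

```python
def motion_bell_features(qom_series, threshold=0.025):
    """Extract (peak, duration) for each motion bell in a QoM series.

    A short bell with a high peak suggests an impulsive movement;
    a long bell with small fluctuations around its mean suggests a
    sustained, fluent one.
    """
    bells = []
    bell = []
    for q in qom_series:
        if q > threshold:
            bell.append(q)
        elif bell:
            bells.append((max(bell), len(bell)))
            bell = []
    if bell:  # series ended inside a motion phase
        bells.append((max(bell), len(bell)))
    return bells
```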
Therapeutic Exercises
We developed a number of EyesWeb exercises for
experiments with PD patients. One of them, for one
patient at a time, allows the subject to paint using his or
her body.
The patient sees himself on a large screen, painting in
real time through his motion in space. Previous
work in the performing arts field exists where engage-
ment of the audience is obtained by combined genera-
tion of music, sounds and visual media (see, for
example, the PAGe—Painting by Aerial Gesture sys-
tem21). With PAGe the user can interact through an
interaction paradigm like the MS Paint software, using
his hands while standing in front of a large video screen:
the user can select a colour or an action with one hand,
then can paint with that colour with the other hand, etc.
Our exercise is slightly different: the interaction is
based on some of the movement cues described in the
previous sections. For example, the colour may depend
on fluency; QoM may be associated with intensity of the
colour trace; pauses in movement (using the segmenta-
tion technique previously described) allow restarting
the process and reassigning/adapting interaction map-
pings. In this way, by a careful choice of colours, e.g. by
creating ‘pleasant’ colour associations/mappings with
fluent and non-hesitating movements, it is possible to
create a sort of visual feedback encouraging improve-
ment of movement in patients. During this exercise the
subject looks at the picture painted on the screen and
continuously changes it while moving.
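The mapping described above (colour from fluency, trace intensity from QoM) can be sketched as a small function. The concrete ranges and the linear formulas are purely illustrative assumptions, not the mappings actually used in the exercise, which the therapist tunes interactively:

```python
def paint_parameters(fluency, qom):
    """Map movement cues onto paint parameters (hypothetical mapping).

    Both cues are assumed normalized to [0, 1]. Fluent movement is
    mapped onto a different hue range ('pleasant' colours), and a
    larger quantity of motion onto a more intense colour trace.
    """
    hue = 0.1 + 0.5 * fluency        # illustrative hue range
    intensity = min(1.0, 4.0 * qom)  # brighter trace for larger QoM
    return hue, intensity
```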
On another monitor the researcher analyses the para-
meters and, if necessary, adjusts them in order to tune the
exercise to the patient’s needs. Figure 3 shows some
excerpts from a session with a patient.
Several other exercises, involving integrated audio
and visual feedback, are currently the subject of inves-
tigation. In one of them we analysed fine movements, in
particular hand recognition and position (Figure 4). The
EyesWeb system allows fine movements to be extracted
precisely and also detects movement intention by
recognizing arm movement, making it easily suited
to severely impaired persons.
Figure 2. Motion segmentation.
We also analysed simple tasks in order to provide a
preliminary taxonomy from which we may design
rehabilitation and evaluation exercises. In particular,
here we show an example of implementation of the
stand–sit exercise also used in PD functional evaluation
as in the UPDRS (Figure 5).
The challenge and the aim of our preliminary study
was to evaluate whether such systems are able to
generate aesthetically resonant feedback in PD patients,
providing the emotional/motivational involvement that
allows them at least partially to overcome their disease.
This will be the subject of a future clinical study.
At the moment, however, an analysis of a question-
naire about Quality of Life that the patients filled in after
each therapy session (12 trials for the male patient
during a period of 6 months and six trials for the female
patient during a period of 2 months) demonstrated an
overall positive trend and an average increment of
Figure 3. The painting exercise.
Figure 4. Analysis of hand movements.
Figure 5. Stand–sit exercise.
patients’ satisfaction, which rose from 33% at the begin-
ning of the therapy to 60% after it.
Discussion
The pilot experiments demonstrate the feasibility of the
project: patients understood in a natural way the
dynamics of the proposed resonant environments and
obtained benefits from them, at the very least from a
motivational point of view. The acquisitions made dur-
ing the interactive exercises highlighted a general im-
provement of movement fluency, and appropriate
EyesWeb patches for automatic analysis will be devel-
oped. Of course, this is only a preliminary study because
it could be applied only to a very limited population of
patients. However, the experience was essential for
understanding the practical problems, paving the way for
the organization of a clinical trial in the near future. In
particular, we defined the environmental requirements,
the acquiring protocol and some guidelines for design-
ing the proper exercises for analysis and improvement
of movement rhythmic structures.
Overall, the EyesWeb architecture demonstrated great
versatility, giving the physician a new possibility to
focus exercises and analyses on the desired rehabilitative
target and to evaluate the efficacy of the therapy.
ACKNOWLEDGEMENTS
The work described in this paper has been partially supported
by the EU IST project CARE HERE. We thank our colleagues
Riccardo Trocca and Matteo Ricchetti at the InfoMus Lab,
physicians Luigi Baratto and Marcello Farinelli, Psiche Gian-
noni and Emanuela Cervetto, Grazia Ornato, Luciana Tabbone
and Alessandro Tanzini at CBC for their contribution to this
work. We also thank the EyesWeb development team (Paolo
Coletta, Massimiliano Peri and Andrea Ricci) for their support.
References
1. Bianchi Bandinelli R, Saba A. Proposals and solutions for autonomy of the disabled and elderly. Studies in Health Technology and Informatics 1998; 48: 140–144.
2. Wong SK, Tam SF. Effectiveness of a multimedia programme and therapist-instructed training for children with autism. International Journal of Rehabilitation Research 2001; 24(4): 269–278.
3. Brown DJ, Powell HM, Battersby S, Lewis J, Shopland N, Yazdanparast M. Design guidelines for interactive multimedia learning environments to promote social inclusion. Disability and Rehabilitation 2002; 24(11–12): 587–597.
4. Albani G, Pignatti R, Bertella L, Priano L, Semenza C, Molinari E, Riva G, Mauro A. Common daily activities in the virtual environment: a preliminary study in parkinsonian patients. Neurological Sciences 2002; 23(Suppl. 2): S49–S50.
5. Pacchetti C, Aglieri R, Mancini F, Martignoni E, Nappi G. Active music therapy and Parkinson’s disease: methods. Functional Neurology 1998; 13: 57–67.
6. Pacchetti C, Aglieri R, Mancini F, Martignoni E, Nappi G. Active music therapy in Parkinson’s disease: an integrative method for motor and emotional rehabilitation. Psychosomatic Medicine 2000; 62: 386–393.
7. Morasso P, Baratto L, et al. A new method for the evaluation of postural stability in Parkinson’s disease. Medical and Biological Engineering and Computing 1999; 37(Suppl. 2): 822–823.
8. Re C, Baratto L, et al. Analysis of movement control strategies in Parkinson’s disease (abstract). Gait and Posture 2001; 13: 158.
9. Camurri A, Coletta P, Peri M, Ricchetti M, Ricci A, Trocca R, Volpe G. A real-time platform for interactive dance and music systems. In Proceedings of the International Computer Music Conference ICMC2000, Berlin, 2000.
10. Fahn S, Elton R. Unified Parkinson’s disease rating scale. In Recent Developments in Parkinson’s Disease, vol. 2, Fahn S, Marsden CD, Calne DB, Goldstein M (eds). Macmillan Health Care Information, 1987; pp. 153–163, 213–304.
11. Camurri A, Lagerlof G, Volpe G. Recognizing emotion from dance movement: comparison of spectator recognition and automated techniques. International Journal of Human–Computer Studies; in press.
12. Camurri A, Trocca R, Volpe G. Real-time analysis of expressive cues in human movement. In Proceedings of the International Computer Music Conference ICMC2002, Gothenburg, 2002.
13. Laban R, Lawrence FC. Effort. Macdonald & Evans: London, 1947.
14. Laban R. Modern Educational Dance. Macdonald & Evans: London, 1963.
15. Camurri A, Hashimoto S, Ricchetti M, Trocca R, Suzuki K, Volpe G. EyesWeb: toward gesture and affect recognition in dance/music interactive systems. Computer Music Journal 2000; 24: 57–69.
16. Zhao L. Synthesis and acquisition of Laban movement analysis: qualitative parameters for communicative gestures. PhD dissertation, University of Pennsylvania, 2001.
17. Bobick AF, Davis J. The recognition of human movement using temporal templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001; 23(3): 257–267.
18. Lucas B, Kanade T. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, 1981.
19. Bradski G, Davis J. Motion segmentation and pose recognition with motion history gradients. Machine Vision and Applications 2002; 13: 174–184.
20. Kilian J. Simple Image Analysis by Moments. OpenCV library documentation, 2001.
21. Tarabella L, Bertini G. Wireless technology in gesture controlled computer generated music. In Proceedings of the Workshop on Current Research Directions in Computer Music, Barcelona, 2001.
Authors’ biographies:
Antonio Camurri is Associate Professor at the University of Genova, where he teaches the courses ‘Software Engineering’ and ‘Multimedia Systems’ (Computer Engineering curriculum). He is the founder and scientific director of the Laboratorio di Informatica Musicale at DIST, member of the Board of Directors of AIMI (Italian Association for Musical Informatics), member of the Executive Committee (ExCom) of the IEEE CS Technical Committee on Computer Generated Music, founding member of the Italian Association for Artificial Intelligence (AI*IA), and Associate Editor of the international Journal of New Music Research (Swets and Zeitlinger). He is author of more than 70 international scientific publications, and has served on the scientific committees of several international conferences. He holds patents on software and computer systems, algorithms and multimedia systems. He has chaired the following scientific events: the ‘First Intl Workshop on Kansei—The Technology of Emotion’, Genova, October 1997, AIMI and IEEE CS; the track on ‘Kansei Information Processing’ at the IEEE Intl Conf SMC’98, San Diego (3 scientific sessions); and the track on ‘Kansei Information Processing’ at the IEEE Intl Conf SMC’99, Tokyo. He is co-chair of the 5th Intl. Gesture Workshop, and Guest Editor of a special issue of IEEE Multimedia J. on ‘Multi-sensory communication and experience through multimedia’ (Jul–Sep 2004).
Barbara Mazzarino was born in Genova, Italy, on July 30, 1976. She received her Laurea degree in Computer Engineering from the University of Genova on April 23, 2002, with a thesis on the analysis of expressive gesture in human full-body movement. She is now a PhD student in Computer Science Engineering and works at the DIST—InfoMus Lab of the University of Genova, directed by Prof. Antonio Camurri. Her work concerns the modelling and real-time analysis and synthesis of expressive content in dance and human movement.
Gualtiero Volpe, a computer engineer, was born in Genova on March 24, 1974. He received his degree in computer engineering from the University of Genova in 1999 and will defend his PhD dissertation in June 2003. His research interests include intelligent and expressive human–machine interaction, modelling and real-time analysis and synthesis of expressive content in music and dance, KANSEI information processing, and expressive gesture communication. He is currently a researcher at the DIST—InfoMus Lab at the University of Genova, and is co-chair of the 5th Intl Gesture Workshop (Genova, April 15–17, 2003).
Pietro G. Morasso was born in Genova, Italy, on April 30, 1944. In 1968 he received his Laurea degree in Electronic Engineering cum laude from the University of Genova. He was associated with the MIT Department of Psychology, Cambridge, Mass., USA, in the neurophysiological laboratory of Prof. Emilio Bizzi as a post-doctoral fellow (1970–1972), Fulbright fellow (1973), and visiting professor (1978, 1979, 1980). He has held various permanent positions in the Engineering Faculty of the University of Genova since 1970. Currently he is full professor of Anthropomorphic Robotics and chairman of the Laurea Programme in Biomedical Engineering.
He has been the founder and scientific director of the Bioengineering Center at the Rehabilitation Hospital La Colletta in Arenzano since 1995. His scientific interests include analysis of the motor control system, computational neuroscience, neuroengineering, anthropomorphic robotics, and rehabilitation engineering. He is author or co-author of 6 books, over 300 papers, and two patents (one on robotic navigation and one on laser therapy).
Federica Priano was born in Genoa on October 25, 1966. She received her degree in Psychology from the University of Padua in 1991 and was licensed as a psychotherapist in 1999. In 1997 she received her PhD in ‘Psychodynamics and Neurophysiology’ at the University of Genoa with a thesis on ‘Cognitive Rehabilitation’. She founded the Laboratory of Clinical and Rehabilitation Neuropsychology in the Rehabilitation Department (chairman Dr. Baratto) of ‘La Colletta’ Hospital in Arenzano (Genoa), where she provides neuropsychological assessments and cognitive rehabilitation programs for patients with neurological diseases (stroke, Parkinson’s disease, multiple sclerosis, traumatic brain injury, tumors, dementia, etc.). She also cooperates with the Bioengineering Center at the same hospital. She is author or co-author of 19 scientific publications.
Cristina Re was born in Albenga, Italy, in 1971. She received her degree in electronic engineering from the University of Genoa in 1999 with the thesis ‘Analysis and classification of posturographic parameters by neural networks computation’, supervised by Prof. P. G. Morasso. Currently she is a PhD student in Bioengineering and Bioelectronics at the University of Genoa and conducts her research with the staff of Prof. P. G. Morasso and L. Baratto, MD, at the Centre of Bioengineering, ‘La Colletta’ Hospital, Arenzano. Her research interests include posturographic and surface electromyographic data analysis, motion analysis, and neural networks. She has also been involved in the CARE-HERE project funded by the European Community.