Copyright
by
Nitish Kelagote
2017
HUMAN MOTION MODELING AND EVALUATION USING
WEARABLE SENSOR DEVICES
by
Nitish Nagendrappa Kelagote, B.E.
THESIS
Presented to the Faculty of
The University of Houston-Clear Lake
In Partial Fulfillment
Of the Requirements
For the Degree
MASTER OF SCIENCE
in Computer Engineering
THE UNIVERSITY OF HOUSTON-CLEAR LAKE
DECEMBER, 2017
HUMAN MOTION MODELING AND EVALUATION USING
WEARABLE SENSOR DEVICES
by
Nitish Nagendrappa Kelagote
APPROVED BY
__________________________________________
Jiang Lu, Ph.D., Committee Chair
__________________________________________
Kewei Sha, Ph.D., Committee Member
__________________________________________
Ishaq Unwala, Ph.D., Committee Member
APPROVED/RECEIVED BY THE COLLEGE OF SCIENCE AND
ENGINEERING:
Said Bettayeb, Ph.D., Interim Associate Dean
__________________________________________
Ju H. Kim, Ph.D., Dean
Dedicated
To my parents.
ACKNOWLEDGEMENTS
First, I would like to sincerely thank my mentor and committee chair, Professor
Dr. Jiang Lu, for giving me the opportunity to enter the world of research, and for
providing guidance, motivation, and countless pieces of advice during my academic journey. This
would not have been possible without his excellent support and the continued faith he showed
in me during my personal health and family problems. The motivation and
guidance he provided have improved me as a student and deepened my knowledge of
machine learning and its implementation in this research. I am grateful to all my
committee members for helping me complete my thesis. I would also like to
thank Dr. Kewei Sha and Dr. Ishaq Unwala for serving as my committee members. In
addition, I would like to express my special thanks to Dr. Kewei Sha for sharing his
suggestions and ideas for further improvement at every step.
My sincere thanks to Sanobar Kadiwal and Praveen Tata for sharing their insights
in improving this research and for their continuous support and motivation.
I would like to thank all the faculty and students at the University of Houston-Clear
Lake who supported me directly or indirectly and who encouraged, advised, and guided
me in many ways throughout my Master's degree.
Finally, I must express my very profound gratitude to my parents and to my
brother for the unfailing support, continuous encouragement, and financial support
they have provided throughout my years of study. This
accomplishment would not have been possible without them. Thank you.
ABSTRACT
HUMAN MOTION MODELING AND EVALUATION USING
WEARABLE SENSOR DEVICES
Nitish Nagendrappa Kelagote
University of Houston-Clear Lake, 2017
Thesis Chair: Jiang Lu, Ph.D.
Wearable sensor devices are becoming integrated into our daily lives. Providing
precise and accurate data on an individual's exercises and activities is one of the most
important tasks in pervasive computing. These sensor devices enable a range of
applications, including entertainment, medical, security, and tactical
scenarios. Even though current research provides a variety of techniques for
recognizing gesture movement, there are still key aspects that need to be addressed for
recognizing human activities.
In this research, we propose a technique that not only recognizes gesture
movements but also calculates the accuracy percentage with which the patient
or subject performs a gesture movement with respect to an accurate reference model. In
this process, we first receive the raw sensor data, which are subjected to preprocessing
and feature extraction techniques prior to calculating the accuracy
percentage. The final data obtained by passing through the activity detection
algorithm and accuracy calculation technique are then transferred to the cloud, where a
physiotherapist or specialist analyzes these data and provides the required feedback
to the patient or subject.
TABLE OF CONTENTS
List of Tables ................................................................................................................. x
List of Figures ............................................................................................................... xi
Chapter Page
CHAPTER 1: INTRODUCTION .................................................................................. 1
1.1 Background and Significance ...................................................................... 1
1.2 Motivation .................................................................................................... 4
1.3 Related Work and Proposed Solution .......................................................... 5
1.4 Organization ................................................................................................. 7
CHAPTER 2: SYSTEM SETUP ................................................................................... 9
2.1 Hardware ...................................................................................................... 9
2.2 Software ..................................................................................................... 10
2.2.1 Data Acquisition .............................................................................. 11
2.2.2 Modelling .......................................................................................... 13
2.2.3 Evaluation ......................................................................................... 15
2.3 System Overview ....................................................................................... 16
CHAPTER 3: SYSTEM ANALYSIS .......................................................................... 18
3.1 Data Preprocessing...................................................................................... 18
3.2 3D Modelling .............................................................................................. 18
3.2.1 Basic Models ..................................................................................... 19
3.2.2 Movement Models ............................................................................ 19
3.3 Store Models into Cloud ............................................................................. 20
CHAPTER 4: MOTION ANALYSIS ......................................................................... 21
4.1 Feature Extraction ....................................................................................... 21
4.1.1 One Dimensional Dynamic Time Warping ...................................... 21
4.1.2 N-Dimensional Dynamic Time Warping .......................................... 23
4.2 Sensor Fusion Algorithm ............................................................................ 25
CHAPTER 5: RESULTS ............................................................................................. 31
5.1 Experimental Setup ..................................................................................... 31
5.2 Data Collection ........................................................................................... 34
5.3 Performance Evaluation .............................................................................. 37
5.3.1 DTW accuracy calculation ................................................................ 37
5.3.2 Comparison matrix: with different activities, with different
subjects .............................................................................................. 43
CHAPTER 6: CONCLUSION AND FUTURE WORK ............................................ 46
REFERENCES ............................................................................................................ 47
LIST OF TABLES
Table Page
Table 2.1: Specifications of the MetaWear C series ....................................................... 9
Table 3.1: The basic models for gesture movements .................................................. 19
Table 3.2: List of Activities and their activity groups ................................................. 20
Table 5.1: Accuracy Percentage calculation for Movement 1 ..................................... 40
Table 5.2: Accuracy Percentage calculation for Movement 2 ..................................... 41
Table 5.3: Accuracy Percentage calculation for Movement 3 ..................................... 41
Table 5.4: Accuracy Percentage calculation for Movement 4 ..................................... 41
Table 5.5: Accuracy Percentage calculation for Movement 5 ..................................... 42
Table 5.6: Accuracy Percentage calculation for Tennis .............................................. 43
Table 5.7: Accuracy Percentage calculation for Stretching ......................................... 43
Table 5.8: Accuracy Percentage calculation for Weightlifting .................................... 43
Table 5.9: Accuracy percentage calculation for different model on different subject . 45
LIST OF FIGURES
Figures Page
Figure 1.1: Classification for Gesture Movement Recognition ................................. 5
Figure 1.2 a: Euclidean distance ................................................................................... 7
Figure 1.2 b: DTW distance .................................................................................... 7
Figure 2.1: MbientLab Metawear C series sensor ..................................................... 9
Figure 2.2: System Data Flow Chart........................................................................ 11
Figure 2.3: Raw data from sensors .......................................................................... 12
Figure 2.4(a): raw accelerometer data ......................................................... 13
Figure 2.4(b): raw gyroscope data ............................................................... 13
Figure 2.5: The filtered and unfiltered signal of accelerometer data ...................... 14
Figure 2.6: Cost matrix of length m*n ..................................................................... 15
Figure 2.7: System Diagram of motion evaluation system ...................................... 16
Figure 4.1: Optimal DTW distance path for the cost matrix ................................... 23
Figure 4.2: The necessity for ND-DTW ................................................................... 25
Figure 4.3: Shows the Sensor fusion technique for accelerometer, gyroscope, and
magnetometer ............................................................................................................... 27
Figure 5.1: Shows the real subject wearing the device and performing different
activities ....................................................................................................................... 34
Figure 5.2: Accuracy percentage calculated in spyder IDE. ..................................... 34
Figure 5.3: Represent raw sensor data of accelerometer .......................................... 35
Figure 5.4: Raw sensor data gyroscope. ................................................................... 35
Figure 5.5: Filtered accelerometer data .................................................................... 37
Figure 5.6: Cost matrix for the maximum and minimum similarity
between two N-dimensional signals.............................................................. 39
CHAPTER 1:
INTRODUCTION
1.1 Background and Significance
During the past decade, there has been a phenomenal improvement in the
computational capacity available for processing raw data acquired from sensors. Ubiquitous sensors
such as accelerometers, gyroscopes, and magnetometers, with their small size, low cost, and low
power consumption, have enabled active research in extracting
valuable motion information from the raw data these sensors provide. In particular,
wearable sensors offer an efficient way to recognize human
exercise in medical, military, and security applications [1]. For instance,
patients with obesity, diabetes, stroke, or heart disease are often required to
follow a well-defined exercise routine as part of their treatment. Therefore, recognizing
training exercises, for example walking, running, jumping, or cycling, provides very
valuable feedback to the physiotherapist about the patient's improvement [1] [2]. In
military applications, exact information on soldiers' activities, such as their locations
and health conditions, is highly valuable for their performance and safety. Such
information also helps support decision making in both combat and training
scenarios.
Recent progress in pervasive sensing research is mainly due to the
scaling down of sensor hardware and corresponding advances in wireless
technologies [29] [40]. Nowadays, with the help of wearable sensor
devices, researchers are able to analyze large amounts of data using methods such as
pattern recognition and data mining [2]. There is also a great deal of research on the
practical deployment of sensors in real time to overcome problems such as
improving sensor battery life and reducing the number
of sensor nodes needed to obtain the same throughput [2].
Gestures have been utilized to recognize and track human activities. A notable
example of current research on gesture recognition and tracking is
the Kinect game console developed by Microsoft [38] [3]. This technique allows
the user to interact with the game through gesture movement without any
controller device. Thus, capturing, recognizing, and tracking the gesture are the three key
tasks. With respect to all this, camera-based gesture learning has some issues in recognizing human
activities [31]:
1. The first is privacy, as not everybody wants to be observed and
recorded by cameras at all times.
2. The second is pervasiveness, since video recording devices are difficult to
attach to target subjects in order to capture images of their whole
body during daily living activities.
3. The third is coverage: the observed individual must stay within a boundary defined by the
position and the capabilities of the camera.
4. The last issue is complexity, since video processing techniques are
computationally expensive and require large computational capability. These devices
are not easily portable and require a high level of expertise to set up.
The other technique in gesture learning is based on wearable devices. These
devices are miniaturized such that they can be worn on the human body (e.g.,
head, wrist, waist, foot, etc.) and continuously capture people's activity signals [4]. The
advantages of using wearable sensors are that they (1) are small, low-cost, and low-power;
(2) have low data throughput; (3) can be worn in an outdoor environment;
and (4) are portable and mobile. In this research, we build a wearable body sensor
network system with gyroscopes, accelerometers, and magnetometers. We are
able to project the 3D model orientations (roll, pitch, and yaw) of human activities from
the raw data obtained from the sensors.
In medical applications, behavioral learning reveals certain intrinsic patterns of
a patient's behaviors (such as limb motions, walking balance, trajectory, etc.). For
example, a stroke patient can show the body's recovery progress with physical therapy. Such
progress can be quantified by behavior metrics such as gait patterns and
walking/body balance. Through gait patterns and upper limb mobility assessment, one
can assess rehabilitation-training progression. During rehabilitation, a patient's recovery
condition can be evaluated by the degree of similarity in motions. For example, we
can compare a patient's upper limb movement with a healthy person's or doctor's upper
limb movement, and then calculate the similarity between them.
In this research, we build a motion evaluation/assessment system for upper limbs
based on body sensor networks. We use two MetaWear C series sensors on one arm (one
attached on the elbow and one attached on the wrist). The raw accelerometer, gyroscope,
and magnetometer data are collected and converted into 3D orientation signal streams.
Various motion models are built using the 3D orientation signals. The evaluation is
done by comparing the real model with a true model. The objectives of this research
are:
(1) Build a body sensor network with multiple wearable sensors
(2) Create upper limb motion models
(3) Evaluate motion effectiveness
1.2 Motivation
Recent research on wearable sensor devices is mainly focused on
gesture movement and the normal physical and behavioral changes in the movement of
the body. In real-world applications, people are interested not only in recognizing
gestures and behaviors, but also in learning how well the training/working processes
are going. For example, patients with stroke need to know their performance in
rehabilitation; athletes need to know their playing performance. However, very
few researchers determine the accuracy percentage of a gesture movement by
comparing the real model to the true model proposed by a specialist. For example, in
medical settings the physiotherapist can treat the patient through a
supervised or unsupervised technique, as shown in Fig. 1.1. In the supervised
technique, the physiotherapist personally attends to the patient and
helps correct any deviations from the training routine. In the
unsupervised technique, the patient wears sensor devices, and the
physiotherapist collects the gesture movement data obtained from these devices
and compares it with the accurate gesture movement set by the
physiotherapist to determine the accuracy percentage [17].
Figure 1.1 Classification for Gesture Movement Recognition (external sensing, e.g., the Kinect game console, supervised or non-supervised; wearable sensing, online or offline)
The challenges to assessing the accuracy percentage of the gesture movement are:
1. How to integrate high-dimensional data and extract important features
2. How to model the gesture movements with multiple sensors
3. How to evaluate the accuracy of a movement
1.3 Related Work and Proposed Solution
Sensors such as inertial sensors, motion sensors, pressure sensors, and medical
sensors are widely used for human activity recognition and behavioral learning [5].
Wearable systems using activity recognition are appealing for applications in the
performing arts, e.g., by allowing dancers to augment their performance with interactive
multimedia content that matches their motions. Such systems employ wearable
inertial sensors combined with machine learning techniques to record, classify,
and visualize the motion of dancers. For entertainment and gaming systems in general,
adoption by users may be faster than in other domains, since classification accuracy
is less crucial than, e.g., for healthcare systems, and since they usually raise fewer privacy
concerns. Two of many examples of gaming applications are a system
in which a motion-sensing clamp attached to the body or other objects is used to control
video games, and a system in which wearable inertial sensors are used to recognize
moves to control martial arts games [5].
In this thesis, we calculate the accuracy percentage with
which the real gesture movement acquired from the subject is performed, by comparing it
with the accurate model proposed by the specialist. To find the accuracy percentage we
have proposed a technique based on Dynamic Time Warping (DTW) [6]. The DTW
technique helps us compute the similarity between two time-series, even if the lengths
of the time-series do not match. We could have used the Euclidean distance formula,
but one of the main issues with using a distance measure (such as Euclidean distance)
to measure the similarity between two time-series is that the results can sometimes be
very unintuitive. If, for example, two time-series are identical but slightly out of phase
with each other, then a distance measure such as the Euclidean distance will give a very
poor similarity measure. Figure 1.2(a) illustrates this problem. DTW overcomes this
limitation by compensating for both local and global shifts in the time dimension, as shown in
Figure 1.2(b) [6] [19].
Figure 1.2(a) Euclidean distance Figure 1.2(b) DTW distance
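To make this limitation concrete, the short sketch below (an illustrative addition, assuming only NumPy is available) computes the pointwise Euclidean distance between a sine wave and a slightly phase-shifted copy of itself. Although the two signals are identical in shape, the distance is large, which motivates the warping approach formalized in Chapter 4.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 200)
x = np.sin(t)        # reference signal
y = np.sin(t + 0.5)  # identical shape, slightly out of phase

# Pointwise (Euclidean) distance treats the phase shift as a large error.
euclidean = np.sqrt(np.sum((x - y) ** 2))
print("Euclidean distance between identical, shifted signals:", euclidean)
```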
The DTW method described above works on one-dimensional time-series data.
However, the raw data we receive from the sensors contain accelerometer, gyroscope,
and magnetometer readings, and each sensor provides X, Y, and Z components.
Therefore, we use an N-dimensional DTW technique.
1.4 Organization
The thesis is organized in the following manner. In Chapter 2, we discuss
the specifications of the hardware components used and define the system setup of the
experiment along with the system diagram. In this thesis, we find the accuracy
percentage of the different gestures performed by the participants, as explained in Chapter
3. To calculate the accuracy percentage, we perform the DTW technique on the
sensor data obtained from the sensor device. Because the standard DTW technique is used on
one-dimensional time-series data, Chapter 4 explains the DTW technique for N-dimensional
time-series data. Chapter 5 provides the accuracy percentage results for the
different activities performed by the subjects. The conclusion and future work are presented
in Chapter 6.
CHAPTER 2:
SYSTEM SETUP
2.1 Hardware
Figure 2.1 MbientLab Metawear C series sensor
The MetaWear C series comes in two flavors, the MetaWear C and the MetaWear
CPRO. The C series is a tiny round board, the size of a button or a quarter, perfect
for low-cost and low-power applications and usable as a wearable sensor. The
board is programmable from a smartphone.
Table 2.1 Specifications of the MetaWear C series

Product dimension   Diameter: 0.94 in / 24 mm
Weight              0.2 oz
Connectivity        Bluetooth LE 4.0 (2.4 GHz); up to 100 ft of range (typical 10 m);
                    stream sensor data from 1 Hz to 100 Hz; log sensor data from 1 Hz to 400 Hz
Core                Nordic nRF51822 32-bit ARM Cortex-M0 CPU with 256 kB/128 kB flash +
                    32 kB/16 kB RAM; RGB LED; micro push-button; I2C + 4 GPIOs
Logging / Memory    Streaming-only sensor; 80 KB onboard flash memory; 5K-10K sensor data
                    entries with timestamp (actual number of entries depends on the number of
                    sensors turned on, sampling frequency, and recording time); sensor data is
                    sent live directly to your device (smartphone, tablet, hub); data is available
                    as a CSV file on your device once the stream or download finishes
Battery             CR2032 coin cell battery, not rechargeable (included)
Accelerometer       ±2g: 16384 LSB/g; ±4g: 8192 LSB/g; ±8g: 4096 LSB/g; ±16g: 2048 LSB/g
Gyroscope           ±125°/s: 262.4 LSB/°/s; ±250°/s: 131.2 LSB/°/s; ±500°/s: 65.6 LSB/°/s;
                    ±1000°/s: 32.8 LSB/°/s; ±2000°/s: 16.4 LSB/°/s
Temperature         -40…85°C range; ±0.5°C typical accuracy
2.2 Software
Figure 2.2 shows the data flow chart of our system. The system consists of three
main components: Data Acquisition, Modelling, and Evaluation.
Figure 2.2 System Data Flow Chart (data acquisition from the accelerometer, gyroscope, and magnetometer; modelling via preprocessing and feature extraction; exercise type recognition and classification; evaluation of accuracy)
2.2.1. Data Acquisition
The main purpose of this step is to collect the data from the gyroscope, accelerometer,
and magnetometer, as shown in Figure 2.4. The magnetometer can easily suffer from
interference from the environment. The accelerometer data shown in
Figure 2.4(a) plot the X-axis in blue, the Y-axis in red, and the Z-axis in purple.
These signals contain noise from high-frequency oscillation and the ambient
environment, which is removed during the preprocessing step. The gyroscope data
obtained from the sensor help us to determine the periodic angular variation of the arm
rotation, as shown in Figure 2.4(b). Figure 2.3 shows the process of data acquisition
using the MetaBase sensor application, in which a Bluetooth device is connected
to the sensor device. Figure 2.3(a) shows the types of sensors present in the sensor device.
Figure 2.3(b) shows the sensors to be activated. Figure 2.3(c) starts the logging of sensor
data. Figure 2.3(d) shows the continuous stream of data acquired with the number of
samples.
(a) Types of sensors present in the sensor device (b) Sensors to be activated
(c) Start logging of sensor data (d) Streaming of sensor data
Figure 2.3 Raw data from sensors
Figure 2.4(a) Raw accelerometer data Figure 2.4(b) Raw gyroscope data
2.2.2. Modelling:
This component consists of two stages: preprocessing and feature extraction.
a. Preprocessing stage:
In this stage, several tasks will be done.
1) The raw sensor data provided by the sensor device are fairly noisy. These raw
data contain high-frequency oscillations from the device and the ambient
environment, which skew the clean, noise-free sensor data and in
turn affect the accuracy. To remove this noise, we pass the
acceleration data through a low-pass filter, which gives
us a cleaner signal to work with, as shown in Figure 2.5; a minimal filtering
sketch is given after the figure.
2) The filtered raw sensor data, consisting of accelerometer, gyroscope, and
magnetometer X, Y, and Z components, are then passed to a sensor fusion
technique which converts them into a single orientation about the X, Y, and Z axes;
normalization of the data also takes place in the sensor fusion step.
Figure 2.5 The filtered and unfiltered signal of accelerometer data
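As an illustration of this preprocessing step, the sketch below applies a zero-phase Butterworth low-pass filter to raw accelerometer samples using SciPy. The filter order, cutoff frequency, and the 50 Hz sampling rate are assumptions chosen for illustration, not values prescribed by this thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, fs=50.0, cutoff=5.0, order=4):
    """Zero-phase Butterworth low-pass filter for one accelerometer axis.

    fs, cutoff, and order are illustrative assumptions (50 Hz sampling,
    5 Hz cutoff), not values specified in this thesis.
    """
    nyquist = 0.5 * fs
    b, a = butter(order, cutoff / nyquist, btype='low')
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

# Example: filter a noisy synthetic accelerometer trace.
t = np.arange(0, 10, 1.0 / 50.0)
raw = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.random.randn(t.size)
clean = lowpass(raw)
```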
b. Feature extraction:
We extract the feature list with the help of a temporal technique, Dynamic Time
Warping (DTW). DTW is a technique mainly used to find the optimal alignment of two
signals. The DTW algorithm calculates the distance between every possible pair of
points in the two signals, which may differ in phase, and matches the nearest possible
feature values [14]. The technique calculates the distance between two points using the
Euclidean or absolute distance and builds a cost matrix, which helps us find the least
expensive path through the matrix [6]. Performance can be improved by constraining
the amount of warping allowed.
Figure 2.6 Cost matrix of size m × n
Figure 2.6 shows two time-series signals A and B of lengths m and n and the possible
warping paths between the two signals in the cost matrix. The minimum warping path is
shown with dotted points.
2.2.3. Evaluation:
In this step, there are two tasks: exercise type recognition and evaluation.
Exercise type recognition: The type of exercise can be recognized by
combining the gyroscope and accelerometer data to form a 3D model. Through gesture
recognition we are able to determine whether it is a basic model or a movement
model.
Accuracy Evaluation: We mainly focus on calculating the accuracy
percentage of the model by comparing the real model (from the patient) to the
accurate model (true model from a physical therapist). For example, the features
extracted from the signals by the temporal technique (DTW) are
compared to the accurate features to provide the accuracy percentage [20]. Moreover,
the resultant data, such as the accuracy percentage and the feature list, are transferred
to the cloud for further processing.
2.3 System Overview
Figure 2.7 System diagram of the motion evaluation system (wearable sensor devices connect to a Raspberry Pi over Bluetooth within the home area network; the health care evaluation system and hospital are reached through wireless communication over an external area network)
The system architecture is shown in Figure 2.7, in which the Raspberry Pi
receives the raw accelerometer, gyroscope, and magnetometer data from the sensor
devices. These signals are received wirelessly through a Bluetooth
connection and are then subjected to a series of modeling steps comprising data
preprocessing and feature extraction. In signal preprocessing, the data are filtered to
remove erroneous signals. In feature extraction, we obtain a feature list describing the
basic model or the movement model. These gesture movements are evaluated against
the accurate model (containing the same gesture movement) to find the accuracy
percentage [33]. The accurate model is preinstalled and can be obtained from the local
computer in a testing environment or through the cloud in the real environment. The final
results, containing the feature list and the evaluation accuracy percentage, are then passed
to the cloud or saved to the local computer (in a testing environment). The specialist or
physiotherapist can access these results from the cloud and analyze them.
The required feedback can be provided to the patient or subject based on their
improvement. The specialist can also install the accurate model in the cloud.
CHAPTER 3:
SYSTEM ANALYSIS
3.1 Data Preprocessing
Most current motion tracking devices are vision based, and the advantages of
vision-based tracking devices are visualization, accurate positioning of the movement,
and rich detail. However, these devices must be in a fixed position and have a limited
field of view. Video-based tracking or processing can be time-consuming because of
the amount of data contained in the video [34]. In addition, these devices are not easily
portable and require a high level of expertise to set up and maintain. Therefore, an
alternative approach is wearable sensor devices, which the user wears on the body; they
are very easy to carry and are not limited by the capabilities of a camera. Our wearable
sensor devices consist of an accelerometer, gyroscope, and magnetometer, which help us
record the movement of the body in real time. The raw data collected from these sensor
devices pass to the preprocessing stage, where we further process the data to remove
errors and combine the accelerometer, magnetometer, and gyroscope data into X, Y, Z
axis orientations with the help of Euler angles. The system uses the accelerometer,
gyroscope, and magnetometer as inputs to record the subject's movement.
3.2 3D Modeling
In 3D modeling, we use the raw gyroscope, accelerometer, and magnetometer
data with Euler angles to produce the axial orientation in the X, Y, and Z
directions. These axial orientations help us determine the gesture movement of the body.
3.2.1 Basic Models
There are several basic gesture movements, such as ankle movement, hand
movement, leg movement, and body movement [35]. These gesture movements can
be easily determined with the help of the orientations. Moreover, some angular
rotations are involved in the wearable sensor devices, such as the rotational movement
of the wrist, hand, and shoulder. Table 3.1 gives some basic models for gesture
movements involving the wrist, leg, arm, and thigh.
Table 3.1 The basic models for gesture movements
Movement Description
Movement 1 Lifting arm and bending toward left
Movement 2 Both hands moving above
Movement 3 Lifting both arm and bending toward left
Movement 4 Stretching both arms on both sides
Movement 5 Stretching left arm on one side
3.2.2 Movement Models
Movement models are simply sequential collections of different basic
models. An accelerometer alone may not be enough to provide the feature list for
gesture movement, which is why it needs to be combined with other
sensors, such as a gyroscope and magnetometer, to provide more accurate classification
of activity features [36]. A practical requirement of the wearable sensor devices is that
they can convey the maximum information content of gestures by being placed at
different locations. This increases wearability and avoids the need for manual labeling of
different activities. Data gathered simultaneously from a number of
wearable sensors, covering different basic models,
produce the movement models shown in the table below. Table 3.2 lists the
activities and their classification into activity groups [32] [3].
Table 3.2 List of Activities and their activity groups
Activity Activity group
Sports Tennis, cricket, swimming
Physiotherapist Working with Patients
Military Soldiers Preparation for battle
3.3 Store Models into Cloud
A database is created to save the evaluation results of movements. The database
is on a remote server. Users (trainees) and trainers can both access the database.
The saved data can be used for long-term training performance analysis. First, we
connect the sensor devices to the Raspberry Pi; the mode of communication between the
sensor devices and the Raspberry Pi is a Bluetooth connection. The data received
from the sensor devices can be published to the cloud using an MQTT connection (a protocol
for communicating messages from machine to machine) [23]. Each sensor device is
provided with a unique sensor ID, and with the help of this unique sensor ID we can
publish data to the cloud.
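As an illustration of this publishing step, the sketch below uses the paho-mqtt Python client to publish one evaluation result from the Raspberry Pi. The broker address, topic layout, and sensor ID are hypothetical placeholders, since the thesis does not specify them.

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical broker address and topic layout; these values are
# illustrative assumptions only.
BROKER_HOST = "cloud.example.com"
SENSOR_ID = "metawear-01"
TOPIC = f"motion/{SENSOR_ID}/evaluation"

def publish_result(accuracy_percent, feature_list):
    """Publish one evaluation result (accuracy + features) to the cloud."""
    client = mqtt.Client()
    client.connect(BROKER_HOST, 1883)  # default MQTT port
    payload = json.dumps({"sensor_id": SENSOR_ID,
                          "accuracy": accuracy_percent,
                          "features": feature_list})
    client.publish(TOPIC, payload)
    client.disconnect()

# Example usage on the Raspberry Pi after an evaluation finishes:
# publish_result(96.4, [0.12, 0.87, 0.45])
```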
CHAPTER 4:
MOTION ANALYSIS
4.1 Feature Extraction
In this section, we extract the feature list from the two N-dimensional
sensor data streams, which helps us find the accuracy percentage. The method
we propose for extracting the feature list is Dynamic Time Warping. Traditionally,
DTW is proposed for one-dimensional time-series data, but in this thesis we
have implemented the DTW technique for N-dimensional time-series data [30]. To
implement the N-dimensional DTW technique, we should first understand one-dimensional
DTW.
4.1.1 One Dimensional Dynamic Time Warping
Given two one-dimensional time series x = {x_1, x_2, ..., x_|x|} and
y = {y_1, y_2, ..., y_|y|}, with respective lengths |x| and |y|, construct a warping path
w = {w_1, w_2, ..., w_|w|} such that |w|, the length of w, satisfies

max{|x|, |y|} <= |w| < |x| + |y|   (1)

where the k-th element of the path is given by

w_k = (x_i, y_j)   (2)

A number of constraints are placed on the warping path, which are as follows [39]:
Boundary condition: the warping path must start at w_1 = (1, 1) and end at w_|w| = (|x|, |y|).
Continuity condition: the warping path must be continuous, i.e., if w_k = (i, j) then
w_{k+1} must be equal to (i+1, j), (i, j+1), or (i+1, j+1).
Monotonicity condition: the warping path must exhibit monotonic behavior, i.e., the
warping path cannot move backwards.
There are exponentially many warping paths that satisfy the above conditions.
However, we are interested in finding the warping path that minimizes the normalized
total warping cost, given by

min( (1/|w|) sum_{k=1}^{|w|} DIST(w_ki, w_kj) )   (3)

where DIST(w_ki, w_kj) is the distance function, calculated using the Euclidean or absolute
distance, between point i in time series x and point j in time series y, as given by w_k. The
minimum total warping distance can be found by using dynamic programming to fill a
two-dimensional (|x| by |y|) cost matrix C. Each cell in the cost matrix represents the
accumulated minimum warping cost so far in the warping between the time series x and
y up to the position of the cell. The value of the cell C(i, j) is therefore given by

C(i, j) = DIST(i, j) + min{ C(i-1, j), C(i, j-1), C(i-1, j-1) }   (4)

which is the distance between point i in time series x and point j in time series y, plus
the accumulated distance from the three previous cells that neighbor cell (i, j)
(the cell above it, the cell to its left, and the cell on its diagonal). When the cost matrix has been
filled, the minimum possible warping path can easily be calculated by navigating
through the cost matrix in reverse order, starting at C(|x|, |y|), until cell C(1, 1) has been
reached, as illustrated in Figure 4.1. At each step, the cells to the left of, above, and diagonal to the
current cell are searched to find the minimum value. The path moves to the cell with the
minimum value, and this three-cell search is repeated until C(1, 1) has been
reached. The warping path then gives the minimum normalized total warping distance
between x and y:

DTW(x, y) = (1/|w|) sum_{k=1}^{|w|} DIST(w_ki, w_kj)   (5)

Here, 1/|w| is used as a normalization factor to allow comparison of warping paths with
varying lengths.
We continue with several remarks about the DTW distance. First, note that the
DTW distance is well-defined even though there may be several warping paths of
minimal total cost [15], as shown in Figure 4.1. Second, it is easy to see that the DTW
distance is symmetric when the local cost measure c is symmetric.
Figure 4.1 Optimal DTW distance path for the cost matrix
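To make the dynamic-programming recurrence of equation (4) concrete, the sketch below gives a minimal NumPy implementation of one-dimensional DTW. It is an illustrative sketch only: variable names are assumptions, no warping-window constraint is applied, and the normalization uses the upper bound |x| + |y| on the path length rather than the exact |w|.

```python
import numpy as np

def dtw_1d(x, y):
    """Minimal 1-D DTW following equations (1)-(5).

    Returns a normalized DTW distance between sequences x and y.
    Sketch only; the warping-window constraint of Section 2.2.2 is omitted.
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dist = abs(x[i - 1] - y[j - 1])              # local distance
            cost[i, j] = dist + min(cost[i - 1, j],      # step from above
                                    cost[i, j - 1],      # step from the left
                                    cost[i - 1, j - 1])  # diagonal step
    # Normalize by an upper bound on the warping-path length (|x| + |y|).
    return cost[n, m] / (n + m)

# Example: identical but phase-shifted signals yield a small DTW distance.
t = np.linspace(0, 2 * np.pi, 100)
print(dtw_1d(np.sin(t), np.sin(t + 0.5)))
```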
4.1.2 N-Dimensional Dynamic Time Warping
For this instance, we apply Dynamic Time Warping [8] [9] between the
N-dimensional sensor data acquired from the real model and the N-dimensional sensor
data proposed by the specialist. To compute the distance between two N-dimensional
time-series, we take the summation of the distance errors between each dimension of an
N-dimensional template and the new N-dimensional time-series. The total distance
across all N dimensions is then used to construct the warping matrix C. We use the
Euclidean distance as the distance measure across the N dimensions of the template and
the new time-series:

DIST(i, j) = sqrt( sum_{n=1}^{N} (i_n - j_n)^2 )   (6)

N-dimensional Dynamic Time Warping is a direct extension of the standard
one-dimensional DTW, in which X_i and X_j are two N-dimensional sequences of lengths
i and j, each sample having the form X = {x_1, x_2, x_3, ..., x_N}, and the N-dimensional
DTW distance is computed as

ND-DTW(X, Y) = min( (1/|w|) sum_{k=1}^{|w|} DIST(w_ki, w_kj) )   (7)
The benefits of ND-DTW can be seen when multidimensional series are
considered that have synchronization information distributed over different dimensions
[18]. Take the artificial 2D series shown in Figure 4.2(a). It is clear that for the first half
(in time) of the series, dimension 1 is used for finding the correct synchronization,
whereas dimension 2 is uninformative. The converse is true for the second half of the
series. If we were to perform 1D-DTW on this series using dimension 1, the result
would be as shown in Figure 4.2(b). The second half of the series is uniformly
synchronized, since there is no information for 1D-DTW to work with, but it can be
seen that for dimension 2 this is not the ideal synchronization. 1D-DTW on dimension
2 gives a similar (but converse) result. ND-DTW takes both dimensions into account
in finding the optimal synchronization [10]. The result is a synchronization that is as
ideal as possible for both dimensions, as shown in Figure 4.2(c). This is the advantage
of ND-DTW over regular DTW.
Figure 4.2 The necessity for ND-DTW
Figure 4.2(a) shows two artificial 2D time series of equal mean and variance;
dimension 1 is shown in column 1 and dimension 2 in column 2. The series contain
synchronization information in both dimensions. If 1D-DTW is performed using the
first dimension, the result is suboptimal for dimension 2, as illustrated in Figure 4.2(b);
note that the peaks and valleys in dimension 2 are not aligned properly. 1D-DTW on
dimension 2 gives a similarly suboptimal match for dimension 1. ND-DTW takes both
dimensions into account and finds the best synchronization, as shown in Figure 4.2(c).
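The only change from the one-dimensional case is the local distance: each sample is now an N-dimensional vector, and equation (6) replaces the scalar difference. A minimal sketch of this extension is shown below, reusing the same dynamic-programming structure; array shapes and names are illustrative assumptions, and no warping-window constraint is applied.

```python
import numpy as np

def dtw_nd(X, Y):
    """N-dimensional DTW per equations (6)-(7).

    X and Y are arrays of shape (len_x, N) and (len_y, N), for example the
    six fused pitch/roll/yaw channels from the two sensors. Sketch only.
    """
    n, m = X.shape[0], Y.shape[0]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dist = np.linalg.norm(X[i - 1] - Y[j - 1])   # equation (6)
            cost[i, j] = dist + min(cost[i - 1, j],
                                    cost[i, j - 1],
                                    cost[i - 1, j - 1])
    return cost[n, m] / (n + m)   # normalized by an upper bound on |w|
```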
4.2 Sensor Fusion Algorithm
The sensor device is composed of a tri-axis gyroscope, a tri-axis accelerometer,
and a tri-axis magnetometer. A Kalman filter is implemented to yield a reliable
orientation [25] [27]. Tilt compensation is applied to compensate for the tilt error. The
gyroscope uses the Coriolis acceleration effect on a vibrating mass to
detect angular rotation [22]. The gyroscope measures the angular velocity, which is
linear in the rate of rotation. It responds quickly and accurately, and the rotation can be
computed by time-integrating the gyroscope output. The rotational angle is obtained by
trapezoidal integration of the gyroscope signal. The trapezoidal integration
equation [8] is shown in equation 8:

integral_a^b f(x) dx ≈ (b - a) f(a) + (1/2)(b - a)[f(b) - f(a)]   (8)
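A small sketch of this integration step is given below, assuming gyroscope samples arrive at a fixed rate; the 50 Hz value is an assumption taken from the data collection chapter, not a requirement of the formula.

```python
import numpy as np

def integrate_gyro(angular_velocity, fs=50.0):
    """Accumulate the rotation angle from angular-velocity samples.

    Applies the trapezoidal rule of equation (8) between consecutive
    samples. fs is the sampling rate in Hz (assumed 50 Hz here).
    """
    dt = 1.0 / fs
    w = np.asarray(angular_velocity, dtype=float)
    # Trapezoid area between sample k and k+1: dt * (w[k] + w[k+1]) / 2
    increments = dt * (w[:-1] + w[1:]) / 2.0
    return np.concatenate(([0.0], np.cumsum(increments)))  # angle over time
```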
Accelerometers measure linear acceleration relative to the acceleration of gravity [11].
The problem with accelerometers is that they measure both acceleration due to the
device's linear movement and acceleration due to the earth's gravity, which points
toward the earth. Since they cannot distinguish between these two accelerations, gravity
and motion acceleration need to be separated by filtering. Filtering makes the
response sluggish, which is why the accelerometer is mostly used together with the
gyroscope [23] [25].
By utilizing the accelerometer output, rotation around the X-axis (roll) and
around the Y-axis (pitch) can be calculated. If Accel_X, Accel_Y, and Accel_Z are the
accelerometer measurements in the X-, Y-, and Z-axes respectively, equations 9 and
10 show how to calculate the pitch and roll angles [28]:

Pitch = arctan( Accel_X / sqrt( (Accel_Y)^2 + (Accel_Z)^2 ) )   (9)

Roll = arctan( Accel_Y / sqrt( (Accel_X)^2 + (Accel_Z)^2 ) )   (10)

These equations provide angles in radians, and they can be converted to degrees later.
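A minimal sketch of this calculation is shown below, assuming the reconstructed forms of equations (9) and (10) above; NumPy's arctan2 is used instead of a plain arctangent for numerical robustness, which is a small implementation choice rather than something specified by the thesis.

```python
import numpy as np

def pitch_roll_from_accel(ax, ay, az):
    """Estimate pitch and roll (radians) from one accelerometer sample.

    Follows the reconstructed equations (9) and (10); the sensor is
    assumed to be quasi-static so that gravity dominates the reading.
    """
    pitch = np.arctan2(ax, np.sqrt(ay**2 + az**2))
    roll = np.arctan2(ay, np.sqrt(ax**2 + az**2))
    return pitch, roll

# Example: a device lying flat (gravity on the Z-axis) gives ~0 pitch and roll.
print(pitch_roll_from_accel(0.0, 0.0, 1.0))
```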
In order to measure rotation around the Z-axis (yaw), the other sensors need to be
incorporated with the accelerometer.
The applied sensor fusion system is depicted in Figure 4.3. The calibrated
accelerometer signal is used to obtain roll* and pitch* by equations 9 and 10. Roll* and
pitch* are noisy estimates, and the algorithm combines them with the gyroscope
signal through a Kalman filter to acquire clean, non-drifting roll and pitch angles
[24]. On the other hand, a tilt compensation unit is implemented, which uses the
magnetometer signal in combination with roll and pitch to calculate the challenging
yaw rotation [23].
Figure 4.3 Sensor fusion technique for the accelerometer, gyroscope, and magnetometer (the three raw tri-axis signals pass through a calibration unit; pitch* and roll* computed from the accelerometer are fused with the gyroscope in a Kalman filter to give pitch and roll, and tilt compensation combines these with the magnetometer to give yaw)
Kalman filtering is a recursive algorithm that is theoretically ideal for fusing
noisy data. Implementation of the Kalman filter calls for the physical properties of the
system [26]. The Kalman filter estimates the state of the system at time t by using the
state of the system at time t-1. The system should be described in state-space form, as
follows:

x_{k+1} = A x_k + w_k   (11)

z_k = H x_k + v_k   (12)

where x_k is the state vector at time k, A is the state transition matrix, w_k is the state
transition noise, z_k is the measurement of x at time k, H is the observation matrix, and v_k is
the measurement noise. The state variables are physical quantities of the system, such as
velocity, position, etc. Matrix A describes how the system changes with time, and matrix
H represents the relationship between the state variables and the measurement. In our
Kalman filter, the state vector, transition matrix, and observation matrix are defined as follows:

x = [w, φ]^T   (13)

A = [[1, -Δt], [0, 1]]   (14)

H = [1, 0]   (15)

where w is the angular velocity from the gyroscope and φ is the rotation angle, which
is calculated from the accelerometer signal. To implement the Kalman filter, the steps in
Algorithm 1 should be executed [10]. The matrices A, H, Q, and R should be calculated before
implementing the filter. Q and R are the covariance matrices of w_k and v_k respectively,
which are diagonal matrices. z_k is the system measurement and x_k is the filter output.
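As an illustration of how equations (11)-(15) can be applied per axis, the sketch below implements a simple angle-tracking Kalman filter that predicts with the gyroscope rate and corrects with the accelerometer-derived angle. The state layout (angle plus gyroscope bias) and the noise values q and r are common simplifications chosen for illustration, not the exact configuration used in this thesis.

```python
import numpy as np

def kalman_angle(accel_angle, gyro_rate, dt=0.02, q=1e-4, r=1e-2):
    """Fuse per-sample accelerometer angles with gyroscope rates.

    State is [angle, gyro_bias]; A = [[1, -dt], [0, 1]], H = [1, 0].
    q and r (process/measurement noise) are illustrative assumptions.
    Returns the filtered angle sequence.
    """
    A = np.array([[1.0, -dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[accel_angle[0]], [0.0]])   # initial angle, zero bias
    P = np.eye(2)
    out = []
    for z, rate in zip(accel_angle, gyro_rate):
        # Predict: advance the angle with the (bias-corrected) gyro rate.
        x = A @ x + np.array([[dt * rate], [0.0]])
        P = A @ P @ A.T + Q
        # Update: correct with the accelerometer-derived angle measurement.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0]))
    return np.array(out)
```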
As mentioned earlier, computing the rotation around the Z-axis is challenging
(the Z-axis is perpendicular to the earth's surface). This angle is also called the heading
or azimuth. If the gyroscope is used to calculate the heading, not only is the drift
problem encountered, but the initial heading must also be known [12]. The earth's magnetic
field is parallel to the earth's surface. Therefore, while the tri-axis magnetometer is
parallel to the earth's surface, it can measure the heading accurately through the
direction of the earth's magnetic field [12]. However, in most applications the
magnetometer is attached to the object, moves with the object, and goes out of the
horizontal plane. By tilting the magnetometer, the direction of axial sensitivity
changes [13]. Consequently, it becomes difficult to measure the heading; depending on
how much the magnetometer tilts, different amounts of error appear in the calculations.
The tilt compensation process maps the magnetometer data to the horizontal plane and
provides an accurate heading calculation regardless of the position of the
magnetometer. The roll and pitch angles are utilized in combination with the magnetometer
data to correct the tilt error, regardless of the magnetometer's position.
As Figure 4.3 shows, the roll and pitch angles come from the Kalman filter output.
If m_x, m_y, and m_z are the calibrated and normalized magnetometer outputs, and α, β, and γ
represent roll, pitch, and yaw respectively, the heading is calculated by equation 18.
Equations 16 and 17 are used to transform the magnetometer reading to the
horizontal plane; when the magnetometer data are in the flat plane, equation 18 obtains a
reliable calculation.
X_H = m_x cos β + m_y sin α sin β + m_z sin β cos α   (16)

Y_H = m_y cos α + m_z sin α   (17)

γ = atan2(X_H, Y_H)   (18)
The difference between the regular inverse tangent and MATLAB's "atan2"
command is that the former returns results in the range [-π/2, π/2], while
"atan2" returns results in the range [-π, π].
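A short sketch of the tilt-compensation step is given below; it is a direct transcription of equations (16)-(18) into Python. The sign conventions follow the equations as written above and ultimately depend on the sensor's axis conventions, so this should be read as an illustration rather than a definitive implementation.

```python
import numpy as np

def tilt_compensated_heading(mx, my, mz, roll, pitch):
    """Yaw (heading) from magnetometer data, compensated by roll and pitch.

    mx, my, mz are calibrated, normalized magnetometer components;
    roll and pitch are in radians (from the Kalman filter output).
    Implements equations (16)-(18) as written in this section.
    """
    xh = (mx * np.cos(pitch)
          + my * np.sin(roll) * np.sin(pitch)
          + mz * np.sin(pitch) * np.cos(roll))
    yh = my * np.cos(roll) + mz * np.sin(roll)
    return np.arctan2(xh, yh)   # heading in the range [-pi, pi]
```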
CHAPTER 5:
RESULTS
5.1 Experimental Setup
We collected datasets from eight different participants. We selected the
activities based on the movement models discussed in Section 3.2. Eight healthy male
participants (age range: 23-28) took part in our data collection experiments for a
duration of 2 minutes each. Eight activities were performed by all eight participants for both
the basic model and the movement model, and all activities were performed indoors except
playing tennis. For all the basic models the participants stood still and
moved only their hands, which provides a significant change in the accelerometer data; in the
case of the movement model, the participants moved their entire body, which
changes the gyroscope data significantly. All participants performed these activities
alone and in a controlled environment.
Lifting arm and bending toward left: When conducting this activity, the
participant has to change the movement of his hand every 20 seconds for the duration
of 2 minutes, as shown in Figure 5.1(a). The data collected from this activity are
compared with the accurate model to calculate the accuracy percentage.
Hands moving above: In this activity, the participants moved one hand
up and down while standing still at the same position, every 20 seconds for the duration
of 2 minutes, as shown in Figure 5.1(b). The sensor data are obtained by mounting the
sensor devices on the hands, and the accuracy percentage is calculated by comparing
the real model and the accurate model.
Lifting both arms and bending toward left: For this activity, depicted in Figure
5.1(c), the participants were asked to lift both arms and bend toward the left every 20
seconds for 2 minutes. The data collected from the participant are used to find the
accuracy percentage.
Stretching both arms on both sides: This activity is mainly performed by moving
both hands horizontally while standing still and bringing the hands down every 20 seconds
for the duration of 2 minutes, as shown in Figure 5.1(d).
Stretching left hand on one side: In this activity, the participant moves only the
left hand horizontally and brings the left hand down while standing still, as shown in
Figure 5.1(e).
Weightlifting: This exercise is done in the gym. The participant lies
flat on a bench holding the dumbbells directly above the chest with arms extended, then
lowers the dumbbells to the chest in a controlled manner, presses the dumbbells back
to the starting position, and repeats, as shown in Figure 5.1(f).
Stretching: In this case, the participant moves the left hand to touch the
right leg without bending the knee, then moves the right hand to touch the left leg,
and this step is repeated for a period of 2 minutes, as shown in Figure 5.1(g).
Tennis: In this case, the participant performs the forehand movement
by hitting the ball at a comfortable reach, which means the arm does not have to be fully
extended to hit the ball; the ball is hit when it is in front of the participant near the front hip.
The sensor device is mounted on the hand holding the racket, as shown in Figure 5.1(h),
and sensor data are collected from the device to calculate the accuracy percentage, as shown
in Figure 5.2.
From my analysis of the accuracy percentage calculations, I observed that
activities performed by standing still and moving only the hands produce a greater
amount of change in the accelerometer data, while activities such as stretching and
playing tennis produce a significant change in the gyroscope data. The evaluation
system proposed in this thesis can be used to expand our datasets to a wide range of
activities, which would allow the accuracy percentage to be calculated for a larger set of
activities. Figure 5.2 shows the calculation of the accuracy percentage in the Spyder IDE
for the tennis activity.
(a) Lifting arm and bending toward left (b) Hand moving above
(c) Lifting both arms and bending toward left (d) Stretching both arms on both sides
(e) Stretching left hand on one side (f) Weightlifting
(g) Stretching (h) Tennis
Figure 5.1 The real subject wearing the device and performing different activities
Figure 5.2 Accuracy percentage calculated in the Spyder IDE
5.2 Data Collection
During the data collection, all the participants wore the sensor
devices on their hands while performing the activities [16] [37]. There are other
positions where the sensor device can be mounted, such as the knee, hip, or ankle.
However, in this thesis we focus on mounting the sensor devices on the hands, where
there is maximum change in the speed and rotational movement of the hand,
resulting in significant differences in the readings of the accelerometer,
gyroscope, and magnetometer. We collected the data at 50 samples per
second from the sensor device for a duration of two minutes, as shown in Figure 5.3
and Figure 5.4. These data were collected from the sensor device with the help of the
MetaHub application, in which a Bluetooth connection is paired with the sensor device
to obtain a stream of data.
Figure 5.3 Raw sensor data of accelerometer
Figure 5.4 Raw sensor data of gyroscope
As discussed in the experimental setup, the accelerometer
and gyroscope data complement each other for various activities at a given
body position; for example, if we stand in one position and move only the hands, the
accelerometer data change significantly, whereas activities like
playing tennis and weightlifting change the gyroscope data significantly. In this
thesis, eight different subjects perform eight different activities, and
each subject is fitted with two sensor devices. Each device provides X, Y,
and Z data from its three sensors, so the two devices together yield 18 orientation
channels of X, Y, and Z data for the duration of two minutes. The
total number of datasets obtained from eight different activities performed by eight participants
is 384. The raw accelerometer data obtained from the sensor device contain noise from
high-frequency oscillation and the ambient environment; therefore, the
accelerometer data are passed through a low-pass filter, which provides
cleaner data to work with. The X, Y, and Z sensor data obtained from the
gyroscope, accelerometer, and magnetometer are then passed to the sensor fusion
technique, in which the nine orientation channels from each sensor device are converted
to pitch, roll, and yaw. The two sensor devices mounted on the subject provide
two sets of pitch, roll, and yaw angles, and these 6 dimensions are passed to the
N-dimensional Dynamic Time Warping method, which calculates the accuracy
percentage by comparing the real model with the accurate model, as explained in
Section 5.3.
Figure 5.5 Filtered accelerometer data
5.3 Performance Evaluation
5.3.1 DTW accuracy calculation
For the performance evaluation, the training data are subjected to a series of
N-dimensional Dynamic Time Warping calculations. This is an important step
in calculating the accuracy percentage, in which the sensor data collected from 8 random
people for 8 different activities are subjected to the ND-DTW technique. For each
participant, a continuous stream of data is collected and used in the DTW model. The
continuous stream of data consists of the pitch, roll, and yaw angles obtained from the sensor
fusion technique. Dynamic Time Warping is a well-known technique to find the optimal
alignment between two time-dependent sequences; the sequences are warped in a
nonlinear fashion to match each other. Traditionally, the DTW technique is used for
one-dimensional time-series data, but in our case we receive sensor data consisting
of 6 dimensions from the two sensor devices mounted on the body. We have to calculate
the best match between the received sensor data and the accurate data proposed
by the specialist.
The two N-dimensional time series obtained from the sensor device may be
out of phase, and to calculate the optimal path between the two sequences we
run a sliding window of size w over the entire length of the sequence. For
each participant, we provide a maximum warping window for
calculating the cost matrix. From the cost matrix, we obtain the optimal path
consisting of the minimum distance between the two N-dimensional sequences, as shown
in Figure 5.6. The minimum distance obtained by the DTW algorithm helps us to
calculate the accuracy percentage. Figure 5.6 shows the DTW path in the similarity matrix,
which denotes the correlation (similarity) of the two signals. Hence, the path tends to
pick darker blocks, since that maximizes the matching performance. Note that
minimizing the distance is identical to maximizing the similarity. The grayer blocks in
the cost matrix indicate minimum similarity between the two signals and maximum
DTW distance between the two signals.
Figure 5.6 Cost matrix for the maximum and minimum similarity between
two N-dimensional signals
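The thesis does not spell out the exact mapping from the DTW distance to an accuracy percentage, so the sketch below shows one plausible convention purely as an assumption for illustration: the normalized ND-DTW distance between the subject's motion and the accurate model is scaled against a worst-case reference distance for that exercise, and the remaining similarity is reported as a percentage.

```python
def accuracy_percentage(dtw_distance, worst_case_distance):
    """Map a normalized ND-DTW distance to a 0-100 accuracy score.

    Scaling against a worst-case reference distance is an illustrative
    assumption, not the exact formula used in this thesis.
    """
    similarity = max(0.0, 1.0 - dtw_distance / worst_case_distance)
    return 100.0 * similarity

# Example: a subject whose motion is close to the accurate model.
print(accuracy_percentage(dtw_distance=0.04, worst_case_distance=1.0))  # 96.0
```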
We have conducted an experiment to test the capability of ND-DTW algorithm
and to calculate the accuracy percentage. In this case we took eight different subject to
perform eight different exercise. For each exercise we have a data set of accurate model
performed by the specialist and each subject will repeat the same exercise and the
corresponding accuracy percentage is calculated as shown in the below table. Eight
different activities is performed by each subject for both the basic model and the
movement model. For all the basic model the subject will respectively move only hands
by remaining at the same place which provide a significant change in accelerometer
data. In case of the movement model the subject will be performing activities like
stretching, weightlifting and playing tennis and there will be significant change in both
gyroscope and accelerometer data. The amount of data set obtained by each subject
repeating each exercise for five times is 30 and the data collected from eight subject for
each exercise is 240. A total of eight exercise is performed in this experiment by each
subject and the total amount of data obtained from all these exercise are 1920(240 each
subject * 8 exercise). Table below shows the calculation of accuracy percentage
obtained from both the basic model and the movement model. Table 5.1, Table 5.2,
40
Table 5.1 through Table 5.5 provide the accuracy percentages for the basic-model movements
for the eight subjects. The accuracy for Movement 1 (Table 5.1) and Movement 3 (Table 5.3) is
comparatively lower than that of Movement 2, Movement 4, and Movement 5 (Table 5.2,
Table 5.4, and Table 5.5), because for Movement 1 and Movement 3 the subject has to bend the
hand to the left side while standing still. The participant from whom we obtained the accurate
model for these movements is very fit, whereas the subjects compared against the accurate
model have different physical characteristics, so the angle of the hand bending to the left
varies from subject to subject and the accuracy percentage is lower. The accuracy percentage
for Movement 2, Movement 4, and Movement 5 is higher because the subject has to move the
hand either vertically or horizontally while standing still in the same position, so only the
accelerometer values change.
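The exact formula used to turn the DTW distance into an accuracy percentage is not reproduced here, so the following sketch only illustrates one plausible normalization: the minimum cumulative distance is averaged over the warping path and expressed relative to an assumed full-scale deviation. The function name and the max_deviation value are hypothetical placeholders.

def accuracy_percentage(dtw_distance, path_length, max_deviation=180.0):
    # dtw_distance : minimum cumulative distance returned by nd_dtw()
    # path_length  : number of steps in the optimal warping path
    # max_deviation: assumed full-scale average per-frame error (degrees);
    #                this constant is a placeholder, not the thesis value.
    avg_deviation = dtw_distance / max(path_length, 1)
    return 100.0 * (1.0 - min(avg_deviation / max_deviation, 1.0))

Under this assumed normalization, a subject whose warped trajectory stays close to the specialist's model produces a small average deviation and therefore a percentage close to 100, consistent with the trend in Tables 5.1 through 5.8.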
Table 5.1 Accuracy Percentage calculation for Movement 1

Movement 1   Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1          95.84      95.18      95.64      95.12      95.89      94.72      95.03      95.28
Trial 2          96.36      95.32      96.12      94.73      95.67      94.23      94.26      94.87
Trial 3          94.83      95.96      95.71      94.84      95.80      94.62      94.59      95.11
Trial 4          95.63      94.89      95.27      95.06      95.36      93.98      95.11      94.73
Trial 5          95.12      95.05      95.91      94.61      94.97      94.41      94.48      93.95
Table 5.2 Accuracy Percentage calculation for Movement 2

Movement 2   Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1          97.10      96.58      97.43      97.18      95.86      97.12      97.24      95.71
Trial 2          96.73      97.15      96.83      96.73      96.16      97.81      96.83      95.12
Trial 3          97.23      96.92      96.37      97.04      96.81      97.61      96.95      95.30
Trial 4          96.94      96.85      97.08      96.27      95.94      96.92      96.73      96.07
Trial 5          96.37      97.41      96.57      97.53      96.21      97.27      97.16      95.57
Table 5.3 Accuracy Percentage calculation for Movement 3

Movement 3   Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1          93.89      93.78      94.26      95.83      94.08      94.13      94.71      95.06
Trial 2          94.24      94.15      94.73      95.37      93.61      93.87      94.13      94.57
Trial 3          94.85      94.02      94.08      95.04      93.78      94.04      94.26      94.88
Trial 4          94.47      93.81      93.92      94.71      94.21      95.13      95.37      95.26
Trial 5          95.17      94.31      94.13      95.17      94.57      95.37      94.85      94.90
Table 5.4 Accuracy Percentage calculation for Movement 4

Movement 4   Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1          96.57      97.71      97.92      98.06      97.16      97.27      97.09      97.43
Trial 2          96.89      97.26      98.37      97.94      96.72      96.93      96.51      97.17
Trial 3          97.01      97.57      98.12      97.50      96.85      97.04      96.86      97.38
Trial 4          97.63      97.42      98.61      97.29      97.53      97.61      97.23      96.90
Trial 5          96.93      97.85      98.03      98.42      97.37      97.50      96.95      97.07
Table 5.5 Accuracy Percentage calculation for Movement 5

Movement 5   Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1          98.74      98.51      97.83      96.07      97.34      98.53      97.58      98.13
Trial 2          98.61      98.27      97.55      96.71      97.47      98.08      96.93      98.04
Trial 3          98.24      98.15      97.38      96.19      96.95      98.40      97.04      97.50
Trial 4          98.37      98.64      97.73      96.49      97.16      97.75      97.76      97.17
Trial 5          97.91      97.92      98.04      96.83      97.67      98.17      97.37      97.28
The accuracy percentages obtained from the movement model, shown in Table 5.6, Table 5.7,
and Table 5.8 for the eight subjects, are comparatively lower than those of the basic model.
In the movement model the subject performs exercises such as playing tennis, stretching, and
weightlifting. A participant who specializes in the particular exercise is first recorded as
the accurate model, and the eight subjects are then asked to perform the same exercise. The
accuracy percentages obtained from these eight subjects are lower because each subject performs
the activities according to their own physical abilities, and the frequency and rotational
movement of the hands vary significantly compared with the accurate model. Table 5.6,
Table 5.7, and Table 5.8 provide the accuracy percentages for playing tennis, stretching, and
weightlifting.
Table 5.6 Accuracy Percentage calculation for Tennis

Tennis       Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1          89.73      91.07      90.43      91.23      92.81      92.14      91.23      92.34
Trial 2          91.26      88.57      92.37      89.81      91.51      91.85      90.64      91.73
Trial 3          90.14      89.64      90.89      91.39      92.13      91.03      89.91      91.67
Trial 4          90.84      90.18      89.43      88.73      90.73      90.71      88.51      90.93
Trial 5          89.51      89.43      88.73      90.27      90.17      89.67      89.35      91.27
Table 5.7 Accuracy Percentage calculation for Stretching

Stretching   Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1          90.75      89.37      89.17      91.34      92.13      88.73      87.93      92.31
Trial 2          89.62      88.12      88.91      91.81      91.46      88.91      87.19      91.78
Trial 3          90.61      88.20      90.03      91.40      91.30      88.24      87.75      91.62
Trial 4          91.37      87.73      90.25      90.73      91.63      87.81      88.63      89.85
Trial 5          90.59      88.91      89.61      91.16      91.87      89.63      88.35      90.47
Table 5.8 Accuracy Percentage calculation for Weightlifting

Weightlifting  Subject 1  Subject 2  Subject 3  Subject 4  Subject 5  Subject 6  Subject 7  Subject 8
Trial 1            92.18      91.33      92.25      90.33      89.06      91.57      87.83      93.04
Trial 2            91.73      89.92      91.61      90.07      88.71      91.03      87.56      92.73
Trial 3            91.57      90.68      91.70      89.45      88.19      91.25      87.43      92.02
Trial 4            90.87      90.13      91.46      88.64      87.63      89.71      89.27      91.89
Trial 5            91.26      91.49      90.81      90.48      90.17      88.92      88.62      91.75
5.3.2 Comparison matrix: with different activities, with different subjects
The sensor data obtained from each subject are considered the real model, or ground truth
data, and are subjected to a series of N-dimensional dynamic time warping calculations in
which each real model is compared with the accurate model. The cost matrix obtained from the
DTW method provides the distance matrix between the real and accurate models, which is used
to obtain the accuracy percentage. In this way we calculated the accuracy percentages for the
eight subjects performing eight different activities, as shown in Table 5.9. When calculating
the accuracy percentage for the basic movement model, the subject changes only the hand
position, while standing still in the same place, every 20 seconds for a duration of 2 minutes.
The accelerometer data therefore change significantly every 20 seconds and then remain the
same, and there is no rotational movement of the arm. Consequently, when these data are
compared with the accurate model we obtain the highest accuracy percentages. When activities
such as playing tennis and weightlifting are performed, both the gyroscope and accelerometer
data change significantly, so the accuracy percentage decreases. The results shown in
Table 5.9 suggest that the N-dimensional DTW algorithm performed well in calculating the
distance between two time-dependent sequences that are out of phase. The accuracy percentage
results obtained from this experiment help us to validate the capability of the N-dimensional
DTW algorithm.
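As a sketch of how this subject-by-activity comparison could be scripted, the loop below reuses the nd_dtw and accuracy_percentage helpers sketched earlier, compares every trial of every subject against the specialist's accurate model, and averages the five trials per activity to give a summary in the spirit of Table 5.9. The data layout (a dictionary of subjects mapping to activities mapping to lists of trial arrays) is assumed for illustration only.

import numpy as np

def summary_matrix(accurate_models, recordings, window=50):
    # accurate_models: activity name -> (frames, channels) array recorded
    #                  from the specialist (the accurate model).
    # recordings     : subject -> activity -> list of trial arrays of the
    #                  same shape; this layout is assumed for illustration.
    # Returns subject -> activity -> mean accuracy percentage over trials.
    summary = {}
    for subject, activities in recordings.items():
        summary[subject] = {}
        for activity, trials in activities.items():
            scores = []
            for trial in trials:
                dist, path = nd_dtw(accurate_models[activity], trial, window)
                scores.append(accuracy_percentage(dist, len(path)))
            summary[subject][activity] = float(np.mean(scores))
    return summary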
Table 5.9 Accuracy Percentage calculation for different models on different subjects

            Basic Movement                                               Movement
Model       Movement 1  Movement 2  Movement 3  Movement 4  Movement 5   Tennis (Forehand)  Stretching  Weightlifting
Subject 1        95.68       96.87       94.52       97.00       98.37               90.29       90.58          91.52
Subject 2        95.28       96.98       94.01       97.56       98.29               89.77       88.26          90.71
Subject 3        95.73       96.85       94.22       98.21       97.70               90.37       89.59          91.56
Subject 4        94.87       96.95       95.22       97.84       96.45               90.28       91.28          89.79
Subject 5        95.54       96.19       94.05       97.12       97.31               91.47       91.67          88.75
Subject 6        94.39       97.34       94.50       97.27       98.18               91.08       88.66          90.49
Subject 7        94.69       96.98       94.66       96.92       97.33               89.92       87.97          88.14
Subject 8        94.78       95.55       94.93       97.19       97.62               91.58       91.20          92.28

(All values are accuracy percentages.)
CHAPTER 6:
CONCLUSION AND FUTURE WORK
In this thesis, we have presented the N-dimensional Dynamic Time Warping algorithm, which is
used to calculate the accuracy percentage of a gesture movement by comparing the real model
with an accurate model. The DTW technique we have proposed provides a more accurate time
series similarity measure. Furthermore, our approach may be extended to human activity
recognition and can provide more accurate results for multi-dimensional time series. For
future work, a different algorithm could be combined with dynamic time warping to provide a
better accuracy percentage. Currently, the accuracy percentage is calculated by mounting a
sensor device on each hand, but in the future sensor devices could be mounted on different
parts of the body and additional sensor data, such as heart rate, could be collected. In
addition, some fitness exercises are very complex and take a long time to perform, which
requires a considerable amount of execution time to evaluate the accuracy percentage; a new
method could be proposed to reduce this execution time.
REFERENCES
[1] D. M. Karantonis, M. R. Narayanan, M. Mathie, N. H. Lovell and B. G. Celler,
"Implementation of a Real-Time Human Movement Classifier Using a Triaxial
Accelerometer for Ambulatory Monitoring.," IEEE Transactions on Information
Technology in Biomedicine, vol. 10, no. 1, pp. 156-167, 2006.
[2] J. Shotton, R. Girshick, A. Fitzgibbon, T. Sharp, M. Cook, M. Finocchio, R.
Moore, P. Kohli, A. Criminisi, A. Kipman and A. Blake, "Efficient Human Pose
Estimation from Single Depth Images," IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 35, no. 12, pp. 2821-2840, 2013.
[3] N. Gillian and S. O’Modhrain, "Recognition of Multivariate Temporal Musical
Gestures Using N-Dimensional Dynamic Time Warping."
[4] S. Patel, H. Park, P. Bonato, L. Chan and M. Rodgers, "A review of wearable
sensors and systems with application in rehabilitation," J. Neuroeng. Rehabil.,
2012.
[5] E. A. Heinz, K. S. Kunze, M. Gruber, D. Bannach and P. Lukowicz, "Using
wearable sensors for real-time recognition tasks in games of martial arts–An initial
experiment," Proceedings of the 2nd IEEE Symposium on Computational
Intelligence and Games, pp. 98-102, 2006.
[6] H. Sorenson, "Special issue on applications of Kalman filtering," IEEE Trans.
Automatic Control, 1983.
[7] S. Salvador and P. Chan, "Toward accurate dynamic time warping in linear time and
space," Intelligent Data Analysis, pp. 561-580, 2007.
[8] J. F. Lichtenauer, G. A. t. Holt, E. A. Hendriks and M. J. T. Reinders, "Sign
language detection using 3D visual cues," 2007 IEEE Conference on Advanced
Video and Signal Based Surveillance, pp. 435-440, 2007.
[9] P. Kim, "Kalman Filter for Beginners with Matlab Examples," CreateSpace
Independent Publishing Platform 2011.
[10] M. H. Ko, G. West, S. Venkatesh and M. Kumar, "Online Context Recognition in
Multisensor Systems using Dynamic Time Warping," 2005 International
Conference on Intelligent Sensors, Sensor Networks and Information Processing,
pp. 283-288, 2005.
[11] D. Vargha, "Motion Processing Technology Driving New Innovations in
Consumer Products," InvenSense.
[12] M. Reyes, G. Domínguez and S. Escalera, "Featureweighting in dynamic
timewarping for gesture recognition in depth data," 2011 IEEE International
Conference on Computer Vision Workshops (ICCV Workshops), pp. 1182-1188,
2011.
[13] P. O. Kristensson, T. Nicholson and A. Quigley, "Continuous recognition of one-
handed and two-handed gestures using 3d full-body motion tracking sensors," In
Proceedings of the 2012 ACM International Conference on Intelligent User
Interfaces. ACM, 2012.
[14] D. M. Mayhew, "Multi-rate sensor fusion for GPS navigation using Kalman
filtering," PhD thesis, Dept. of Electrical Engineering, Virginia Polytechnic
Institute and State University, 1999.
[15] S. Julier, J. Uhlmann and H. F. Durrant-Whyte, "A new method for the nonlinear
transformation of means and covariances in filters and estimators," IEEE
Transactions on Automatic Control, vol. 45, no. 3, pp. 447-482, 2000.
[16] R. Brown and P. Hwang, "Introduction to Applied Kalman Filtering," John Wiley,
1992.
[17] A. H. Ali, A. Atia and M. Sami, "A comparative study of user dependent and
independent accelerometer-based gesture recognition algorithms," In Distributed,
Ambient, and Pervasive Interactions, Springer, 2014.
[18] Y. Bar-Shalom, "Multi-Target Multi-Sensor Tracking," Artec House, 1990.
[19] V. Mantyla, "Discrete hidden Markov models with application to isolated user-
dependent hand gesture recognition," VTT publications, 2001.
[20] A. N. Bingaman, "Tilt-Compensated Magnetic Field Sensor," master's thesis,
Virginia Polytechnic Institute and State University, 2010.
[21] O. Amft, H. Junker and G. Troster, "Detection of eating and drinking arm gestures
using inertial body-worn sensors," Ninth IEEE International Symposium on
Wearable Computers (ISWC'05), pp. 160-163, 2005.
[22] Z. Zhang, "Microsoft Kinect Sensor and Its Effect," IEEE MultiMedia, vol. 19,
no. 2, pp. 4-10, 2012.
[23] P. Dhar and P. Gupta, "Intelligent parking Cloud services based on IoT using
MQTT protocol," 2016 International Conference on Automatic Control and
Dynamic Optimization Techniques (ICACDOT), pp. 30-34, 2016.
[24] K. Murao, T. Terada, A. Yano and R. Matsukura, "Evaluating Gesture
Recognition by Multiple-Sensor-Containing Mobile Devices," 2011 15th Annual
International Symposium on Wearable Computers, pp. 55-58, 2011.
[25] M. Vlachos, M. Hadjieleftheriou, D. Gunopulos, and E. Keogh, "Indexing multi-
dimensional time-series with support for multiple distance measures.,"
Proceedings of the 9th ACM SIGKDD int. conf. on Knowledge discovery and
data mining, 2003.
[26] G. S. Chambers, S. Venkatesh, G. A. W. West and H. H. Bui, "Hierarchical
recognition of intentional human gestures for sports video annotation," Object
recognition supported by user interaction for service robots, vol. 2, pp. 1082-
1085, 2002.
[27] D. Trabelsi, S. Mohammed, F. Chamroukhi, L. Oukhellou and Y. Amirat, "An
Unsupervised Approach for Automatic Activity Recognition Based on Hidden
Markov Model Regression," IEEE Transactions on Automation Science and
Engineering, vol. 10, no. 3, pp. 829-835, 2013.
[28] G. Welch and G. Bishop, "An Introduction to the Kalman Filter," UNC Tech. Report
TR 95-041, 2006.
[29] J. Liu, Z. Wang, L. Zhong, J. Wickramasuriya and V. Vasudevan, "uWave:
Accelerometer-based personalized gesture recognition and its applications," 2009
IEEE International Conference on Pervasive Computing and Communications,
pp. 1-9, 2009.
[30] A. Hernández-Vela, M. Á. Bautista, X. Perez-Sala, V. Ponce, X. Baró, O. Pujol,
C. Angulo and S. Escalera, "BoVDW: Bag-of-Visual-and-Depth-Words for
gesture recognition," Proceedings of the 21st International Conference on Pattern
Recognition (ICPR2012), pp. 449-452, 2012.
[31] H. R. Hashemipour, S. Roy and A. J. Laub, "Decentralized structures for parallel
Kalman filtering," IEEE Transactions on Automatic Control, vol. 33, no. 1, pp.
88-94, 1988.
[32] A. Akl and S. Valaee, "Accelerometer-based gesture recognition via dynamic-
time warping, affinity propagation, & compressive sensing," 2010 IEEE
International Conference on Acoustics, Speech and Signal Processing, pp. 2270-
2273, 2010.
[33] C. S. Myers and L. R. Rabiner, "A comparative study of several dynamic time-
warping algorithms for connected-word recognition," The Bell System Technical
Journal, vol. 60, no. 7, pp. 1389-1409, 1981.
[34] S. Preece, J. Goulermas, L. Kenney, D. Howard, K. Meijer and R. Crompton,
"Activity identification using body-mounted sensors: A review of classification
techniques," Physiol. Meas., 2009.
[35] L. Patras, I. Giosan and S. Nedevschi, "Body gesture validation using multi-
dimensional dynamic time warping on Kinect data," 2015 IEEE International
Conference on Intelligent Computer Communication and Processing (ICCP), pp.
301-307, 2015.
[36] D. Mace, W. Gao and A. Coskun, "Accelerometer-based hand gesture recognition
using feature weighted naïve Bayesian classifiers and dynamic time warping," In
ACM Conference on Intelligent User Interfaces (companion). ACM, 2013.
[37] M. Wöllmer, M. Al-Hames, F. Eyben, B. Schuller, and G. Rigoll, "A
multidimensional dynamic time warping algorithm for efficient multimodal
fusion of asynchronous data streams," Neurocomputing, 2009.
[38] H. Durrant-Whyte, "Introduction to Estimation and The Kalman Filter,"
Australian Centre for Field Robotics, 2000.
[39] D. Catlin, "Estimation, Control and the Discrete Kalman Filter," Springer Verlag,
1984.
[40] P. E. Taylor, G. J. M. Almeida, T. Kanade and J. K. Hodgins, "Classifying human
motion quality for knee osteoarthritis using accelerometers," 2010 Annual
International Conference of the IEEE Engineering in Medicine and Biology, pp.
339-343, 2010.