Inertial motion capture for ambulatory analysis of human movement and balance

FREDRIK OLSSON

ACTA UNIVERSITATIS UPSALIENSIS, UPPSALA 2020
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1946
ISSN 1651-6214 · ISBN 978-91-513-0965-1 · urn:nbn:se:uu:diva-409894


Page 1: Inertial motion capture for ambulatory analysis of human …uu.diva-portal.org/smash/get/diva2:1427920/FULLTEXT01.pdf · 2020. 5. 19. · Faculty of Science and Technology 1946. 43



Dissertation presented at Uppsala University to be publicly examined in ITC 2446, Polacksbacken, Lägerhyddsvägen 2, Uppsala, Friday, 28 August 2020 at 09:15 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Professor Peter H. Veltink (University of Twente, Enschede, The Netherlands).

Abstract
Olsson, F. 2020. Inertial motion capture for ambulatory analysis of human movement and balance. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1946. 43 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-513-0965-1.

Inertial sensors (accelerometers and gyroscopes) are ubiquitous in today's society, where they can be found in many of our everyday mobile devices. These sensors are capable of recording the movement of the device, and by extension, the movement of humans carrying or interacting with the device. Human motion capture is frequently used for medical purposes to assess individual balance performance and movement disorders. Accurate and objective assessment is important in the design of individualized interventions and rehabilitation strategies.

The increasing availability of inertial sensors, combined with their mobility and low cost, has made inertial motion capture highly relevant as a more accessible alternative to the laboratory-based gold standard. However, mobile solutions need to be adapted for plug-and-play use with the end user in mind. Methods that automatically calibrate the sensors, and methods that detect and record relevant motions, are required.

This thesis contributes to the development of human inertial motion capture as a plug-and-play technology. A method for accelerometer calibration, which allows for compensation of systematic sensor errors, is proposed. The method fuses accelerometer and gyroscope data to allow dynamic rotation of the sensor during the calibration procedure. Other proposed methods handle sensor-to-segment calibration in a biomechanical model. The position of a joint center and the direction of a hinge joint's rotation axis are identified in each sensor's intrinsic coordinate system. This is done by fitting recorded motions to the kinematic constraints of the underlying biomechanical model. The methods are evaluated on real sensor data collected from mechanical joint systems that mimic human limbs.

The state of current knowledge regarding objective human balance assessment is studied in the form of a systematic review that includes methods for modeling and identifying neuromuscular control of human balance. A similar modeling framework is then applied to identify feedback controllers in physical and artificial (simulated) systems. Finally, inertial sensors are applied to tremor quantification in Parkinson's disease (PD) and essential tremor (ET). The method uses only the inertial sensors in a standard smart phone, and is applied to data from human subjects with PD or ET.

Keywords: Inertial sensors, Human motion capture, Calibration, Neuromuscular control, Human balance, Tremor quantification

Fredrik Olsson, Department of Information Technology, Division of Systems and Control, Box 337, Uppsala University, SE-75105 Uppsala, Sweden.

© Fredrik Olsson 2020

ISSN 1651-6214
ISBN 978-91-513-0965-1
urn:nbn:se:uu:diva-409894 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-409894)


Inertial sensors (accelerometers and gyroscopes) are ubiquitous in today's society, where they are found in many of our everyday mobile devices. These sensors can record the movement of the device, and thus also the movement of people who carry or interact with the device. Human motion capture is often used for medical purposes to assess individual balance and movement ability. Accurate and objective assessment is important for designing individualized interventions and rehabilitation strategies.

The increased availability of inertial sensors, combined with their portability and low cost, has made inertial motion capture highly relevant as a more accessible alternative to the laboratory-based gold standard. However, mobile solutions must be adapted for plug-and-play use with the end user in mind. Methods that automatically calibrate the sensors, and methods that detect and capture relevant motions, are necessary.

This thesis contributes to the development of human motion capture as a plug-and-play technology. A method for accelerometer calibration, which enables compensation of systematic sensor errors, is proposed. The method combines accelerometer and gyroscope data to allow dynamic rotation of the sensor during the calibration procedure. Other proposed methods handle sensor-to-segment calibration in a biomechanical model. The position of a joint center and the direction of a hinge joint's rotation axis are identified in each sensor's own coordinate system. This is done by fitting measured motions to the kinematic constraints of the underlying biomechanical model. The methods are evaluated on real sensor data collected from mechanical joint systems that mimic human limbs.

The current state of knowledge regarding objective assessment of human balance is studied in the form of a systematic review, covering methods for modeling and identifying neuromuscular control of human balance. A similar modeling framework is then applied to estimate feedback controllers in physical and artificial (simulated) systems. Finally, inertial sensors are applied to the quantification of tremor, a movement-impairing symptom of Parkinson's disease (PD) and essential tremor (ET). The method uses only the inertial sensors in a standard smart phone, and is applied to data from human patients with PD or ET.


This thesis contains a compilation of the following papers, which are referred to in the text by their Roman numerals.

I F. Olsson, M. Kok, K. Halvorsen, and T. B. Schön. "Accelerometer calibration using sensor fusion with a gyroscope". In: Statistical Signal Processing Workshop (SSP), 2016 IEEE (Palma de Mallorca, Spain). IEEE. 2016, pp. 1–5

II F. Olsson and K. Halvorsen. "Experimental evaluation of joint position estimation using inertial sensors". In: Information Fusion (Fusion), 2017 20th International Conference on (Xi'an, China). IEEE. 2017, pp. 1–8

III F. Olsson, M. Kok, T. Seel, and K. Halvorsen. Robust plug-and-play joint axis estimation using inertial sensors. Submitted. 2020

IV F. Olsson, K. Halvorsen, and A. C. Åberg. Neuromuscular controller models for quantifying standing balance in older people: A systematic review. Under revision. 2020

V F. Olsson, K. Halvorsen, D. Zachariah, and P. Mattsson. "Identification of nonlinear feedback mechanisms operating in closed loop using inertial sensors". In: 18th IFAC Symposium on System Identification (SYSID) (Stockholm, Sweden). Vol. 51. 2018, pp. 473–478

VI F. Olsson and A. Medvedev. "Nonparametric time-domain tremor quantification with smart phone for therapy individualization". In: IEEE Transactions on Control Systems Technology 28.1 (Jan. 2020), pp. 118–129

Reprints were made with permission from the publishers.



The following papers are not included in the compilation but are of relevance to the thesis.

1. K. Halvorsen and F. Olsson. "Pose estimation of cyclic movement using inertial sensor data". In: Statistical Signal Processing Workshop (SSP), 2016 IEEE (Palma de Mallorca, Spain). IEEE. 2016, pp. 1–5

2. K. Halvorsen and F. Olsson. "Robust tracking of periodic motion in the plane using inertial sensor data". In: IEEE Sensors 2017 (Glasgow, Scotland). 2017

3. A. Medvedev, F. Olsson, and T. Wigren. "Tremor quantification through data-driven nonlinear system modeling". In: Decision and Control (CDC), 2017 IEEE 56th Annual Conference on (Melbourne, Australia). IEEE. 2017, pp. 5943–5948

4. F. Olsson and A. Medvedev. "Tremor severity rating by Markov chains". In: Proceedings of the 18th IFAC Symposium on System Identification (SYSID) (Stockholm, Sweden). 2018, pp. 317–322

5. A. Medvedev, R. Cubo, F. Olsson, V. Bro, and H. Andersson. "Control-Engineering Perspective on Deep Brain Stimulation: Revisited". In: 2019 American Control Conference (ACC). IEEE. 2019, pp. 860–865

6. F. Olsson, T. Seel, D. Lehmann, and K. Halvorsen. "Joint axis estimation for fast and slow movements using weighted gyroscope and acceleration constraints". In: Information Fusion (Fusion), 2019 22nd International Conference on (Ottawa, Canada). IEEE. 2019, pp. 1–8



Time flies when you are having fun, they say. There may be some truth in this idiom, as it feels to me as if the last five years went by really fast. I have also had a lot of fun. I have been given the opportunity to travel to many interesting locations. I will never forget the first conference I went to, ERNSI in Varberg 2015 (and the amazing road trip). Since then I have been to Mallorca, Venice, China, France, Scotland, Canada and The Netherlands. I would like to thank VR (Grant number 2015-05054) for the funding that allowed me to go to all of these places to present my research.

I have also had the privilege to meet some amazing people. First and foremost, I would like to thank my supervisor Kjartan Halvorsen for guiding me on this journey. I am deeply grateful that you were always there when I needed feedback or support, even though we live on different continents. It was always a pleasure going to conferences with you; I especially remember our trip to the Terracotta Army in 40°C! I am also glad to have the bragging rights for having an actor from an Oscar-awarded movie as my supervisor. I wish you and your family all the best for the future.

I have also been lucky to have some great collaborators who have made much of this work possible. Thanks to Manon Kok and Thomas Schön for the great collaboration on accelerometer calibration when I had just started my journey, and special thanks to Manon for inviting me to visit your research group in Delft in the last year of my PhD, which resulted in yet another great collaboration! Thanks to Thomas Seel, who was a great discussion leader at my Licentiate seminar and then became an even better collaboration partner. Thanks to Anna Cristina Åberg for your assistance in our tireless work with the systematic review. Thanks to Alexander Medvedev for involving me in the DBS project and for a great collaboration on tremor quantification. Thanks to Dave Zachariah and Per Mattsson for our collaboration on closed loop identification and for sharing your knowledge of estimation and system identification.



Life at work has always been great thanks to my awesome former and current colleagues at SysCon! Thanks to everyone for always contributing to an amazing atmosphere, both during and after work. Special thanks to Calle, Anna, David (frisbee, bodypump, running), Rubén (beer festivals and gaming), Marcus and Andreas (all-knowing oracles of SysCon). Colleagues from other divisions are also cool! Thanks for the fun ski conferences in Åre, Polhacks, the late afternoon boardgame events, TNDR social events and of course all of the awesome PhD defenses and parties I have been invited to! Thanks to my current co-organizers of the Friday pub, Johan, Gustav and Per, and all former organizers. Thanks to all members of East Aros Ultimate and everyone that I have played with at SLU, Campus or in a random park. Thanks to my second cousin, friend and former flatmate Mark for the amazing time in 27B and all of the crazy Valborgs. Also thanks to all of the Proxxi guys. Thanks to Ludwig and Marcus for the crazy shenanigans. Thanks to my cousins Tobias and Andreas and the rest of the Åhlberg family for always being great to hang out with.

Last but not least I would like to thank my sisters Viktoria and Sofia, my mother Susanne and my father Mikael. Thanks for being awesome, always believing in me and for supporting me. I would not have gotten to this point without you. I love you.

With that said, I would also like to thank anyone who decides to read this thesis. Hopefully you will find something interesting.

Fredrik Olsson
Uppsala, May 2020




Contents

1.1 Contribution
1.2 Thesis outline
1.3 Future work
2.1 Orientation
2.1.1 Euler angles
2.1.2 Unit quaternions
2.2 Accelerometers
2.3 Gyroscopes
2.4 MEMS inertial sensors
2.5 Error characteristics and calibration
2.5.1 Measurement noise
2.5.2 Bias
2.5.3 Other systematic errors
3.1 Orientation estimation
3.1.1 Strapdown gyroscope integration
3.1.2 Inclination measurement
3.1.3 Magnetometers
3.1.4 Extended Kalman filter
3.2 Position estimation
3.3 Kinematic chains
3.4 Joint models
3.5 Inverted pendulum models
3.5.1 Neuromuscular control
References


Chapter 1

Inertial sensors are ubiquitous in today's society, and can be found in many of our everyday wearable and mobile devices. These sensors are capable of recording the movement of the device, and by extension, the movement of humans interacting with the device. Many commercial products are based on the concept of humans interacting with inertial sensors. Apps for smart devices that track the physical activities of the user are common, and are often designed to engage the user through gamification, where challenges and achievements serve as a carrot on a stick to encourage an active lifestyle. Motion controls in video games have become a mainstay in the industry since the Nintendo Wii successfully introduced them to the mainstream market [32]. Both the film and the gaming industry use motion capture to create lifelike animations for computer generated characters [48].

The widespread use of inertial sensors has caught the attention of researchers in the biomedical field. Assessment of human balance and movement disorders is performed predominantly in movement laboratories or physiotherapy clinics. Assessments often rely on physicians' expert opinions. To obtain accurate and objective assessments, high-fidelity camera-based motion capture systems are often used. However, this type of assessment is expensive, since it uses physicians' work hours and limited laboratory resources. Because of the limited resources, assessment cannot be performed frequently, and the day-to-day variability cannot be captured. Because inertial sensors are wearable and cheap, they do not have these limitations. Furthermore, they allow measurements to be performed outside the laboratory, preferably even in patients' homes. Ideally, systems based on inertial sensors should be plug-and-play, require minimal user knowledge and not interfere with the normal daily lives of the users. However, inertial sensors have other types of limitations that need to be overcome in order to make this happen.


The remainder of this chapter describes the outline and the main contributions of the thesis, which includes the introductory chapters followed by a compilation of the papers I–VI. The individual contributions of each paper are summarized, followed by a discussion of potential future work.

1.1 Contribution

The first contribution of this thesis is mainly towards the application of inertial sensors in human motion capture. The problem of how the sensors should be calibrated to provide the user with accurate information is considered. Particularly for human applications, the problem of sensor-to-segment calibration, i.e. how recorded motion from the sensor can be related to the motion of a physical biomechanical model of a human, is considered. Algorithms for obtaining the desired calibration parameters are proposed and experimentally validated on real physical systems.

The second contribution of the thesis concerns mathematical modeling and quantification of human balance mechanisms and movement disorders. A modeling framework that uses system identification methods to model the neuromuscular control mechanisms of standing human balance was systematically reviewed. A similar framework was applied to estimate controller models in physical and simulated systems, using inertial sensors to collect the data required to construct a model. A method for quantifying tremor symptoms in patients with Parkinson's disease using only the inertial sensors in a standard smart phone has been proposed and applied to data from human subjects.

More specific details on the contributions of the individual papers are provided in the following section.

1.2 Thesis outline

The thesis consists of two parts. The first part is a basic introduction to inertial motion capture. The physical quantities measured by inertial sensors are explained and measurement models are derived. Methods used to estimate orientation and position, and thus capture the motion of the sensors, are described. The second part is a comprehensive summary of the main contributions in the form of the papers I–VI, with only minor corrections of the text, reformatted to fit the thesis. The papers are directed towards more specific problems, and are self-contained.

Paper I: Accelerometer calibration using sensor fusion with a gyroscope

This work concerns the calibration of accelerometers in an inertial measurement unit (IMU) with 6 degrees of freedom (DOF), which contains one


triaxial accelerometer and one triaxial gyroscope. It is known that low-quality accelerometers might require additional calibration parameters to compensate for systematic measurement errors. Other than static gain and additive bias, the measurements can also be affected by cross-axis interference and misalignment with respect to (w.r.t.) other sensors in the IMU. A method for calibrating a triaxial accelerometer in a 6-DOF IMU is proposed. The method uses a sensor fusion approach, where calibration parameters are found by solving a maximum likelihood (ML) problem for the output residuals in an orientation estimation extended Kalman filter (EKF). The EKF fuses gyroscope and accelerometer information to compute an estimate of the sensor's orientation w.r.t. a global coordinate system. The gyroscope has information about the dynamic rotation (time update) of the orientation state and the accelerometer has information about the absolute orientation of the sensor frame w.r.t. the global frame (measurement update). The method is validated using Monte Carlo simulations of virtual sensors, where the true calibration parameters are known. Furthermore, the method is evaluated on experimental data from real sensors in smart phones, and it is shown that after calibration, the accelerometer gives more accurate readings of the gravitational acceleration, which in turn improves the accuracy of the roll and pitch angles of the orientation estimates.
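The systematic errors listed above (static gain, additive bias, cross-axis interference, misalignment) are commonly collected in an affine measurement model, y = Ta + b, mapping the true specific force a to the raw reading y. The sketch below shows how such a calibration is applied once the parameters are known; the matrix and bias values are illustrative, not estimates from the paper.

```python
import numpy as np

def calibrate_accel(y, T, b):
    """Invert the affine error model y = T @ a + b to recover the true
    specific force a from a raw triaxial reading y. T (3x3) collects
    gain, cross-axis and misalignment effects; b (3,) is the additive
    bias. Parameter names and values are illustrative."""
    return np.linalg.solve(T, y - b)

# Example: a slightly mis-scaled, biased sensor at rest measuring gravity.
T = np.array([[1.02, 0.01, 0.00],
              [0.00, 0.98, 0.02],
              [0.01, 0.00, 1.01]])       # gain + cross-axis terms
b = np.array([0.05, -0.03, 0.10])        # bias (m/s^2)
a_true = np.array([0.0, 0.0, 9.81])      # gravity along the z axis
y = T @ a_true + b                       # what the sensor reports
a_hat = calibrate_accel(y, T, b)
print(np.allclose(a_hat, a_true))        # True
```

In the paper the parameters T and b are not known a priori; they are the quantities estimated by the ML problem on the EKF output residuals.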

Paper II: Experimental evaluation of joint position estimation using inertial sensors

In this work, a sensor-to-segment calibration problem is studied for a system consisting of two rigid body segments connected by a joint. Each segment has one IMU attached to it, and the sensor-to-segment calibration involves estimating the position of the joint center in the intrinsic coordinate systems of each IMU. Three estimators of the joint position vectors are compared: a linear least squares estimator (LS), a nonlinear least squares estimator (SS) and a least absolute deviation estimator (SA). The estimators use the accelerometer and gyroscope measurements from both IMUs and fit the joint position vectors to a kinematic model of the two segments. The contributions of this work are the proposal of the LS and SA estimators and the experimental evaluation and comparison of all three estimators. Data was collected from a mechanical joint system, which consisted of two rigid cylindrical segments connected by a universal joint. Wireless IMUs were attached to each segment with velcro straps. The system performed motions with different types of relative rotation of the two segments. The ground truth joint position vectors were measured with respect to the sensor placement. The SA estimator had the best performance in terms of accuracy and consistency across all different types of motion. The SS estimator performed similarly to the SA estimator for the most part, but was more sensitive to outliers and returned inconsistent estimates for some motions.


The LS estimator had the worst performance, because it also depends on accurate information about the relative orientation of the two IMUs, which adds an additional source of uncertainty.
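A common way to encode the kinematic model of two segments sharing a joint is the constraint that the joint center's acceleration must have the same magnitude whether it is computed from IMU 1 or IMU 2. The sketch below illustrates this constraint on synthetic data and how a nonlinear least squares estimator would minimize its residuals; this is one common formulation from the literature, with illustrative sign conventions, and not necessarily the paper's exact LS/SS/SA estimators.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_acc(w, wd, r):
    """Rotational acceleration contribution at offset r from a sensor,
    given angular rates w and angular accelerations wd (rows = samples)."""
    return np.cross(wd, r) + np.cross(w, np.cross(w, r))

def residuals(x, w1, wd1, a1, w2, wd2, a2):
    """Per-sample constraint: the joint-center acceleration magnitude
    must agree when computed from either IMU."""
    r1, r2 = x[:3], x[3:]
    n1 = np.linalg.norm(a1 - joint_acc(w1, wd1, r1), axis=1)
    n2 = np.linalg.norm(a2 - joint_acc(w2, wd2, r2), axis=1)
    return n1 - n2

# Synthetic check: build measurements consistent with known offsets.
rng = np.random.default_rng(0)
r1_true = np.array([0.1, 0.0, 0.2])
r2_true = np.array([-0.15, 0.05, 0.0])
N = 200
w1, wd1 = rng.normal(size=(N, 3)), rng.normal(size=(N, 3))
w2, wd2 = rng.normal(size=(N, 3)), rng.normal(size=(N, 3))
vj = rng.normal(size=(N, 3))                  # joint acceleration, frame 1
a1 = joint_acc(w1, wd1, r1_true) + vj
a2 = joint_acc(w2, wd2, r2_true) + np.roll(vj, 1, axis=1)  # equal norms
x_true = np.concatenate([r1_true, r2_true])
print(np.abs(residuals(x_true, w1, wd1, a1, w2, wd2, a2)).max() < 1e-9)

# In practice r1, r2 are unknown and found by minimizing the residuals:
sol = least_squares(residuals, np.zeros(6), args=(w1, wd1, a1, w2, wd2, a2))
```

A least absolute deviation (SA-type) estimator would minimize the same residuals under an L1 cost, which explains its robustness to outliers.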

Paper III: Robust plug-and-play joint axis estimation using inertial sensors

A similar rigid body system as in Paper II is considered, except that here the joint connecting the two segments is a hinge joint instead of a universal joint. In such hinge joint systems, the relative orientation of the segments has only one degree of freedom. When two segments are connected by a hinge joint, for example in human knee or finger joints as well as in many robotic limbs, the joint axis vector must be identified in the intrinsic sensor coordinate systems. Methods for estimating the joint axis using accelerations and angular rates of arbitrary motion have been proposed, but the user must perform sufficiently rich motion in a predefined initial time window to accomplish complete identifiability. Another drawback of current state-of-the-art methods is that the user has no way of knowing whether the calibration was successful or not. The main contribution of this work is the proposal of a novel plug-and-play method for joint axis estimation that overcomes these limitations. The limitation of a dedicated initial time window is overcome by a proposed sample selection method that makes use of recent results on joint axis identifiability [41] to select a subset of appropriate samples of arbitrary motion to use for estimation. Gyroscope and accelerometer information based on the kinematic joint constraints is fused to ensure that the motion only needs to fulfill minimum required conditions to achieve identifiability. We also provide an uncertainty quantification method that assures that the method returns valid and accurate estimates in a timely manner, once sufficient excitation of the joint system has been observed. The method is experimentally validated on a mechanical 3D-printed joint setup, which performed a large range of different motions with different identifiability properties. The results show that the method consistently identifies estimates with angular errors in the order of 2° w.r.t. the ground truth joint axis vector. Using the proposed sample selection method we achieve a similar accuracy using only a small subset of samples, which reduces the computational complexity of the estimation. The sample selection is also shown to improve the accuracy when the majority of the observed motions are non-informative. With the uncertainty quantification method, we show that accurate estimates are returned shortly after a sufficient number of samples of informative motion has been gathered.
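For hinge joints, a standard gyroscope-based constraint from this problem class states that the angular rate components perpendicular to the joint axis must have equal magnitude in both sensor frames; only rotation about the axis itself may differ. The sketch below verifies that constraint on a synthetic hinge. It deliberately omits the paper's accelerometer fusion, sample selection and uncertainty quantification, and the spherical-angle parameterization is an illustrative choice.

```python
import numpy as np

def axis(theta, phi):
    """Unit joint-axis vector parameterized by two spherical angles."""
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

def gyro_residuals(x, w1, w2):
    """Hinge constraint residuals: components of angular rate
    perpendicular to the joint axis must match in magnitude."""
    j1, j2 = axis(x[0], x[1]), axis(x[2], x[3])
    return (np.linalg.norm(np.cross(w1, j1), axis=1)
            - np.linalg.norm(np.cross(w2, j2), axis=1))

# Synthetic hinge: sensor frames related by a fixed mounting rotation R,
# plus an extra angular rate qd about the joint axis in frame 2.
rng = np.random.default_rng(1)
alpha = 0.7
R = np.array([[1, 0, 0],
              [0, np.cos(alpha), -np.sin(alpha)],
              [0, np.sin(alpha),  np.cos(alpha)]])
j1 = np.array([0.0, 0.0, 1.0])               # axis in sensor frame 1
j2 = R @ j1                                  # same axis seen in frame 2
w1 = rng.normal(size=(300, 3))
qd = rng.normal(size=(300, 1))               # hinge angle rate
w2 = w1 @ R.T + qd * j2
x_true = [0.0, np.pi / 2,                    # spherical angles of j1
          np.arctan2(j2[1], j2[0]), np.arcsin(j2[2])]
print(np.abs(gyro_residuals(x_true, w1, w2)).max() < 1e-9)
```

Minimizing these residuals over the four angles recovers the axis directions; the identifiability issues discussed above arise exactly when the observed w1, w2 do not excite this constraint enough.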


Paper IV: Neuromuscular controller models for quantifying standing balance in older people: A systematic review

Objective quantification of the balancing mechanisms in humans is strongly needed in the health care of older people, yet is largely missing among current clinical balance assessment methods. The main goal of this work was to identify methods that have the potential to meet that need. In this work, methods from 73 studies were systematically reviewed. These 73 studies were selected from an extensive literature search, where a total of 1069 studies were individually screened and included or excluded based on predefined eligibility criteria. The methods of the selected studies deal with identification of neuromuscular controller models of human upright standing from empirical data. The results contain an overview of the experimental procedures that were used to collect data from individuals, which is required to identify the unknown model parameters and construct a model that is a quantitative description of the balancing mechanisms of the individual. The experimental procedures were also categorized based on their potential usefulness if efforts were made to move these procedures from laboratory settings towards clinical use in caring facilities or at home. Furthermore, the reviewed studies were analyzed with the particular aim of understanding to what degree such methods would be useful for assessing the balance of older individuals aged above 60 years. There were 16 studies that included a subject population with individuals in this age category, and these were therefore especially examined. It was found that the vast majority of the studies included healthy individuals and that the sample sizes were rather small, especially regarding older individuals. The vast majority of the reviewed studies were research-oriented, and did not consider the clinical applicability of the methods. Therefore, it was concluded that a knowledge gap exists, specifically regarding the applicability of the methods for the general older population.

Paper V: Identification of nonlinear feedback mechanisms operating in closed loop using inertial sensors

Human balance is controlled by neuromuscular feedback mechanisms, where sensory information is processed by the central nervous system, which responds by transmitting activation signals back to specific muscle groups. The feedback dynamics of human upright standing has often been described as a linear system, which is typically a good approximation near the equilibrium, but nonlinearities tend to dominate as the system moves further from the equilibrium. In this work we study the system identification problem for such feedback mechanisms, which operate in closed loop. Two example systems are considered. The first system is a simulated standing human, modeled as an inverted double pendulum that is controlled by a time-delayed proportional-derivative controller. The second system is a physical position servo using a DC motor that is controlled by a lead filter. Static nonlinearities were applied to the input of the first controller and the output of the second controller in some of the experiments. Identification data were collected for both systems with and without the static nonlinearities. The latent variable (LAVA) approach [36] was used as the identification method. This method combines a linear dynamical predictor model (ARX) with an overparametrized nonlinear predictor model. However, the method regularizes the overparametrized model towards the linear predictor model class, promoting sparsity in the nonlinear parameter vector. Nonlinear parameters will only be nonzero when the linear model cannot sufficiently capture the observed dynamics. The LAVA method was able to identify sparse nonlinear models for the two nonlinear controllers. For the linear controllers, the method identified linear models, and even though nonlinear parameters were included they had no effect on the result. The LAVA method provides flexibility through the inclusion of nonlinear parameters, yet favours parsimonious linear and sparse models.
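As a rough illustration of the first example system, the sketch below simulates a single (rather than double) inverted pendulum stabilized by a time-delayed PD controller. All masses, lengths, gains and the delay are assumed values chosen for illustration; they are not taken from the paper.

```python
import numpy as np

def simulate(Kp=1200.0, Kd=300.0, tau=0.1, dt=0.001, T=10.0):
    """Simulate a single inverted pendulum under delayed PD feedback:
    a simplified stand-in for the double-pendulum standing-balance
    model described above. All parameter values are illustrative."""
    g, l, m = 9.81, 1.0, 70.0          # body-like mass and length
    I = m * l ** 2                     # inertia about the pivot ("ankle")
    n, d = int(T / dt), int(tau / dt)  # horizon and delay in samples
    th, om = np.zeros(n), np.zeros(n)  # lean angle and angular rate
    th[0] = 0.05                       # small initial lean (rad)
    for k in range(n - 1):
        thd = th[max(k - d, 0)]        # delayed angle measurement
        omd = om[max(k - d, 0)]        # delayed rate measurement
        u = Kp * thd + Kd * omd        # corrective torque from delayed PD
        dom = (m * g * l * np.sin(th[k]) - u) / I
        om[k + 1] = om[k] + dt * dom   # semi-implicit Euler step
        th[k + 1] = th[k] + dt * om[k + 1]
    return th

th = simulate()
print(abs(th[-1]) < 1e-3)  # the delayed feedback keeps the pendulum upright
```

Note that stabilization requires Kp to exceed the gravitational stiffness m·g·l, and that too large a delay destabilizes the loop; identification methods such as LAVA work with data generated by exactly this kind of closed-loop interconnection.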

Paper VI: Non-parametric time-domain tremor quantification withsmart phone for therapy individualization

A method for quantifying the severity of tremor symptoms in patients with Parkinson's disease (PD) or essential tremor (ET) is proposed. The method uses the inertial sensors in a standard smart phone to capture the tremor signal during a motion where the patient lifts the phone to the ear, as if answering a phone call. Data from the inertial sensors are used to estimate the orientation and position trajectories of the phone during the movement. The tremor is modeled as the high-frequency deviations from a low-frequency voluntary motion. Separation of tremor and voluntary motion is done using a Savitzky-Golay filter. A 2D tremor signal is obtained through orthogonal projection onto planes that are perpendicular to the voluntary motion. The amplitudes of the tremor signal are used as a measure of the severity of the symptom, and are modeled as a stochastic Markov chain. The stationary probability distribution of the Markov chain gives the probabilities of observing tremor of different amplitudes, which quantifies the severity of the symptoms. Distributions with a high peak at low amplitudes that decay exponentially indicate milder symptoms, whereas flatter and heavy-tailed distributions indicate the presence of debilitating tremor. The method was applied to data collected from three patients (one with PD and two with ET) and one healthy control. The patients were treated with deep brain stimulation (DBS), and data were collected with DBS off and with different stimulation settings. The proposed method was compared to conventional techniques of spectral analysis. The proposed method was clearly able to detect the case of severe tremor when DBS was turned off. The spectral estimates of the raw sensor readings of severe tremor were also clearly distinguishable, but the spectra varied significantly for different sensor axes. Furthermore, the proposed method supported the decision made by the physician as to which stimulation setting should be used, whereas the spectral method was inconclusive due to the significant variation of the spectral estimates. These results demonstrate the benefits of the proposed method.
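The Savitzky-Golay separation step can be sketched as follows on a synthetic one-dimensional signal; the sample rate, frequencies, amplitudes and filter settings are all invented for illustration and do not reproduce the paper's parameter choices.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 100.0                                  # sample rate [Hz] (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)

# Synthetic trajectory: slow voluntary motion plus an 8 Hz tremor component.
voluntary = 0.5 * np.sin(2 * np.pi * 0.3 * t)
tremor_true = 0.05 * np.sin(2 * np.pi * 8.0 * t)
signal = voluntary + tremor_true

# A Savitzky-Golay filter fits a low-order polynomial over a sliding window,
# acting as a low-pass filter that tracks the voluntary motion; the residual
# is taken as the tremor signal. Window length and order are tuning choices.
voluntary_est = savgol_filter(signal, window_length=51, polyorder=3)
tremor_est = signal - voluntary_est
```

With these settings the filter's effective cutoff lies between the two frequency bands, so the residual closely follows the tremor component.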

1.3 Future work

Inertial motion capture remains an active research field. For biomedical applications in particular, there are many open problems that still need to be solved, and current methods have the potential for further development. The results presented in Paper IV point towards a knowledge gap between research and clinical application of methods for identifying the mechanisms of standing balance. This area has great potential for development, both towards finding good mobile alternatives to expensive and obtrusive sensor technologies and towards designing effective solutions that are suitable for the general older population that would benefit most from mobile assessment of balance.

In order to make use of inertial sensors as an accessible and mobile alternative to current laboratory-based methods, they need to be adapted for plug-and-play use. Methods that automatically calibrate the sensors, and methods that detect and record relevant motions without direct interaction from the user, are desired. Furthermore, methods applied clinically need to provide timely and accurate reports to medical personnel and feedback to the user. The calibration methods presented in Papers I and II were developed to make calibration less obtrusive, by allowing the methods to work for measurements of almost arbitrary motions. However, they lack features that would make them truly plug-and-play, such as the ability to select samples of appropriate motion and the ability to quantify the uncertainty of the estimates, so that only valid estimates of sufficient accuracy are returned to the user. These features were included in the sensor-to-segment calibration method introduced in Paper III, making it a truly plug-and-play method for joint axis estimation. The validations of the methods in Papers II and III were performed on mechanical joint systems where the sensors were rigidly attached. There is a good reason for this: in such setups, accurate measurements of the ground truth can be obtained, which allows for evaluation of the ideal performance of the method. On humans, however, the sensors are not rigidly attached, since human skin and soft tissue can stretch and move with respect to the bones. Human joints are also very different from mechanical joints, and human limbs have varying degrees of mobility. It would therefore be interesting to evaluate these sensor-to-segment calibration methods for biomechanical joints, to quantify the effect of soft tissue motion, and to extend their use to different joint types.


Chapter 2

Inertial sensors collectively refer to two subcategories of sensors: accelerometers and gyroscopes. Accelerometers measure linear acceleration and gyroscopes measure angular velocity. These sensors can be microelectromechanical systems (MEMS), constructed from components on the micrometer (10⁻⁶ m) scale, which makes them very suitable for mobile devices, see Figure 2.1. A single sensor can measure in one direction, which is referred to as the sensitive axis of the sensor. By combining three sensors whose sensitive axes are orthogonal, the sensors can measure quantities in three dimensions (3D). An inertial measurement unit (IMU) contains both accelerometers and gyroscopes, and if both sensors have three sensitive axes, the IMU is sometimes referred to as a 6-axis or 6 degrees-of-freedom (DOF) IMU. Even though a 6-DOF IMU technically contains 6 sensors, an IMU is typically referred to as one sensor. A measurement from a 6-DOF IMU is a direct measurement of 3D motion, since it captures both linear and angular components in all directions. IMUs with MEMS inertial sensors are therefore particularly suitable for human motion capture due to their mobility.

In this chapter we will introduce the concepts and working principles of MEMS inertial sensors. We will formulate mathematical models used to describe the measurements, and discuss how parameters of the measurement models are calibrated to compensate for systematic errors.

2.1 Orientation

An IMU measures quantities in R^3 in a reference frame which is fixed with respect to (w.r.t.) the sensor axes. We will refer to this reference frame as the sensor frame (S), which is defined to have its axes and origin aligned with the axes and origin of the accelerometer.


Figure 2.1: Examples of devices containing inertial sensors. From left to right: Xsens MTw Awinda wireless IMU [62], Google Pixel smart phone, Garmin Vivosmart 3 activity tracker containing an accelerometer, Nintendo Wii controller containing an accelerometer with the MotionPlus addon containing a gyroscope.

In many applications, including motion capture, we want to represent the measured quantities in a reference frame that is fixed in the environment. We refer to such a reference frame as the global frame (G). Such reference frames are also commonly referred to as navigation frames. An illustration depicting the sensor frame and the global frame can be seen in Figure 2.2. In practice, global reference frames are often defined to have two horizontal axes, one aligned with the local south-north direction and one with the east-west direction, and one axis aligned with the global vertical direction, pointing radially towards or away from the center of the Earth. If these directions cannot be determined, it is common to define a global frame from an initial orientation of the sensor frame.

If we consider a quantity represented by a vector v ∈ R^3, it can be expressed in the sensor frame vS or in the global frame vG, which can be thought of as each reference frame using a different set of orthonormal basis vectors to span R^3. If we know v in the sensor frame, we can express it in the global frame in the following way

vG = RGSvS, (2.1)

where R ∈ SO(3) is a 3×3 rotation matrix, and the superscripts denote that the matrix multiplication with a vector in the sensor frame results in a vector that is expressed in the global frame. Rotation matrices belong to the special orthogonal group of dimension 3, denoted SO(3). The special orthogonal group of dimension n is defined as

SO(n) = {R ∈ R^{n×n} | R⊤R = RR⊤ = In, det R = 1}, (2.2)

where ⊤ denotes the transpose of a matrix or vector, In is the n×n identity matrix and det R is the determinant of R.

Figure 2.2: An illustration of the difference between the sensor frame (S), which has its axes fixed with respect to the sensor platform, and the global frame (G), which has its axes fixed with respect to Earth.

As a consequence of this, an important property of rotation matrices is that they preserve the magnitude of the vectors they are multiplied with

‖Rv‖ = √(v⊤R⊤Rv) = ‖v‖, (2.3)

where ‖v‖ gives us the magnitude of the vector v as the Euclidean norm, meaning that the multiplication Rv can only affect the direction of v.
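As a quick numerical check of (2.2)–(2.3), the snippet below builds a rotation matrix from an arbitrary axis-angle pair using Rodrigues' formula (a standard construction not introduced in the text) and verifies the orthogonality, determinant and norm-preservation properties; the axis, angle and test vector are arbitrary examples.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rotation matrix from an axis-angle pair via Rodrigues' formula."""
    e = np.asarray(axis, dtype=float)
    e = e / np.linalg.norm(e)
    K = np.array([[0.0, -e[2], e[1]],
                  [e[2], 0.0, -e[0]],
                  [-e[1], e[0], 0.0]])   # cross-product (skew-symmetric) matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = rotation_matrix([1.0, 2.0, -0.5], 0.7)
v = np.array([0.3, -1.2, 2.0])

print(np.allclose(R.T @ R, np.eye(3)))                        # R'R = I, as in (2.2)
print(np.isclose(np.linalg.det(R), 1.0))                      # det R = 1
print(np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v)))   # ‖Rv‖ = ‖v‖, as in (2.3)
```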

Rotation matrices in SO(3) have 9 parameters. However, due to the constraints R⊤R = RR⊤ = I3 and det R = 1, these 9 parameters are not independent. In the following sections we will show how rotation matrices can be parameterized using only 3 or 4 parameters through the use of Euler angles and unit quaternions.

2.1.1 Euler angles

Euler angles were invented by Leonhard Euler to describe any orientation in R^3 using 3 parameters. The principle behind Euler angles is that any rotation in three dimensions can be described by three subsequent rotations around linearly independent axes.

Figure 2.3: From left to right; rotations around the z–, y– and x–axes using the corresponding Euler angles φz, φy and φx.

If we consider the three standard basis vectors used to span R^3,

ex = [1 0 0]⊤, ey = [0 1 0]⊤, ez = [0 0 1]⊤, (2.4)

as the axes of rotation, the Euler angles φx, φy, φz represent the rotation angles around these axes. These Euler angles are often called the roll, pitch and yaw angles, terms that come from aircraft navigation, where they denote rotations with respect to the principal axes of an aircraft. The rotation matrix can be decomposed into three separate rotation matrices, each describing a planar rotation around one axis. One example is

R = [ 1    0       0     ] [  cosφy  0  sinφy ] [ cosφz  −sinφz  0 ]
    [ 0  cosφx  −sinφx   ] [    0    1    0   ] [ sinφz   cosφz  0 ]
    [ 0  sinφx   cosφx   ] [ −sinφy  0  cosφy ] [   0       0    1 ]

  = [ cosφy cosφz                        −cosφy sinφz                         sinφy        ]
    [ cosφx sinφz + sinφx sinφy cosφz    cosφx cosφz − sinφx sinφy sinφz     −sinφx cosφy ]
    [ sinφx sinφz − cosφx sinφy cosφz    sinφx cosφz + cosφx sinφy sinφz     cosφx cosφy  ] , (2.5)

which represents a rotation around the z–axis followed by rotations around the y– and x–axes; these rotations are illustrated in Figure 2.3. We say that the rotation sequence is z–y–x. Note that other rotation sequences are equally valid, meaning that they can be used to represent any rotation in R^3. Even rotation sequences such as z–y–z are valid if the two z–rotations have two different Euler angle parameters, since the second rotation around the y–axis changes the orientation of the z–axis, so that the first and third rotations are around linearly independent axes.
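The decomposition can be checked numerically: the snippet below multiplies the three elementary rotation matrices in the z–y–x order and compares the result to the closed-form expression in (2.5), for arbitrary example angles.

```python
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def Ry(a):
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

phi_x, phi_y, phi_z = 0.3, -0.5, 1.1     # arbitrary example angles
R = Rx(phi_x) @ Ry(phi_y) @ Rz(phi_z)    # the z-y-x sequence of (2.5)

# Closed-form expression from (2.5).
cx, sx = np.cos(phi_x), np.sin(phi_x)
cy, sy = np.cos(phi_y), np.sin(phi_y)
cz, sz = np.cos(phi_z), np.sin(phi_z)
R_closed = np.array([
    [cy * cz, -cy * sz, sy],
    [cx * sz + sx * sy * cz, cx * cz - sx * sy * sz, -sx * cy],
    [sx * sz - cx * sy * cz, sx * cz + cx * sy * sz, cx * cy]])

assert np.allclose(R, R_closed)
```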

The Euler angles are not unique in defining a specific rotation. Due to the periodicity of the trigonometric functions, a given Euler angle φ will yield the same rotation matrix as φ + 2π. The inverse mapping from a rotation matrix to Euler angles is almost one-to-one. From the rotation sequence z–y–x in (2.5) we see that one possible Euler angle parametrization can be obtained from e.g.

φy = sin⁻¹(R13), (2.6)

φz = cos⁻¹(R11 / cosφy), (2.7)

φx = cos⁻¹(R33 / cosφy). (2.8)

However, if φy = ±π/2 we can no longer find unique solutions for φz and φx. In this case, the rotation matrix becomes

R = [            0                            0               1 ]
    [ cosφx sinφz + sinφx cosφz   cosφx cosφz − sinφx sinφz   0 ]
    [ sinφx sinφz − cosφx cosφz   sinφx cosφz + cosφx sinφz   0 ]

  = [       0              0         1 ]
    [  sin(φx + φz)   cos(φx + φz)   0 ]
    [ −cos(φx + φz)   sin(φx + φz)   0 ] , (2.9)

and we see that it is only possible to observe the combined rotation φx + φz. The rotation φy = ±π/2 has aligned the x– and z–axes, and we have effectively lost one degree of freedom until φy is changed. This phenomenon is known as gimbal lock. Since information about φx and φz is lost until the orientation is moved away from the gimbal lock position, this can have unwanted consequences in navigation systems that rely on accurate information about all three degrees of freedom. For this reason, most modern systems use different parameterizations to avoid this issue.
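The inverse mapping (2.6)–(2.8) and the gimbal lock effect can both be verified numerically. Note that the arccos-based form only recovers angles in (0, π), so the example angles below are chosen accordingly.

```python
import numpy as np

def euler_zyx_matrix(px, py, pz):
    """Rotation matrix of the z-y-x sequence, closed form as in (2.5)."""
    cx, sx = np.cos(px), np.sin(px)
    cy, sy = np.cos(py), np.sin(py)
    cz, sz = np.cos(pz), np.sin(pz)
    return np.array([
        [cy * cz, -cy * sz, sy],
        [cx * sz + sx * sy * cz, cx * cz - sx * sy * sz, -sx * cy],
        [sx * sz - cx * sy * cz, sx * cz + cx * sy * sz, cx * cy]])

# Inverse mapping (2.6)-(2.8); arccos drops sign information, so this
# particular form only recovers angles in (0, pi).
R = euler_zyx_matrix(0.4, 0.2, 1.0)
phi_y = np.arcsin(R[0, 2])
phi_z = np.arccos(R[0, 0] / np.cos(phi_y))
phi_x = np.arccos(R[2, 2] / np.cos(phi_y))
assert np.allclose([phi_x, phi_y, phi_z], [0.4, 0.2, 1.0])

# Gimbal lock: at phi_y = pi/2 only the sum phi_x + phi_z is observable,
# so two different angle pairs with the same sum give the same matrix.
R1 = euler_zyx_matrix(0.3, np.pi / 2, 0.5)
R2 = euler_zyx_matrix(0.1, np.pi / 2, 0.7)   # same phi_x + phi_z = 0.8
assert np.allclose(R1, R2)
```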

2.1.2 Unit quaternions

Unit quaternions are a parameterization of rotations in R^3 in the form of a unit vector q ∈ R^4, where the elements have the basis (1, i, j, k), in which i, j, k are imaginary numbers that satisfy the formula

i² = j² = k² = ijk = −1. (2.10)

Quaternions were invented by William Rowan Hamilton [20], who is said to have carved the above formula into the foundations of the Broom Bridge in Dublin. The imaginary basis numbers of quaternions are non-commutative,

ij = −ji = k, (2.11)

jk = −kj = i, (2.12)

ki = −ik = j, (2.13)


properties which define the quaternion multiplication. If we consider two quaternions q and p,

q = [q1 q2 q3 q4]⊤, p = [p1 p2 p3 p4]⊤, (2.14)

the quaternion multiplication, expressed by the operator ⊙, equals

q ⊙ p = (q1 + q2i + q3j + q4k)(p1 + p2i + p3j + p4k) (2.15)

      = q1p1 − q2p2 − q3p3 − q4p4
      + (q1p2 + q2p1 + q3p4 − q4p3)i
      + (q1p3 − q2p4 + q3p1 + q4p2)j
      + (q1p4 + q2p3 − q3p2 + q4p1)k. (2.16)

The quaternion product can also be expressed as a matrix-vector multiplication in the following two ways

q ⊙ p = Ql(q)p = Qr(p)q, (2.17)

where Ql, Qr ∈ R^{4×4} take the left and the right quaternion in the multiplication as their respective arguments

Ql(q) = [ q1  −q2  −q3  −q4 ]
        [ q2   q1  −q4   q3 ]
        [ q3   q4   q1  −q2 ]
        [ q4  −q3   q2   q1 ] , (2.18)

Qr(p) = [ p1  −p2  −p3  −p4 ]
        [ p2   p1   p4  −p3 ]
        [ p3  −p4   p1   p2 ]
        [ p4   p3  −p2   p1 ] . (2.19)
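The product rule (2.15)–(2.16), its matrix forms (2.17)–(2.19), and the non-commutativity of the basis elements can be checked with a few lines of code; the quaternion values are arbitrary examples.

```python
import numpy as np

def quat_mul(q, p):
    """Quaternion product (2.15)-(2.16), scalar-first convention."""
    q1, q2, q3, q4 = q
    p1, p2, p3, p4 = p
    return np.array([
        q1 * p1 - q2 * p2 - q3 * p3 - q4 * p4,
        q1 * p2 + q2 * p1 + q3 * p4 - q4 * p3,
        q1 * p3 - q2 * p4 + q3 * p1 + q4 * p2,
        q1 * p4 + q2 * p3 - q3 * p2 + q4 * p1])

def Ql(q):
    q1, q2, q3, q4 = q
    return np.array([[q1, -q2, -q3, -q4],
                     [q2,  q1, -q4,  q3],
                     [q3,  q4,  q1, -q2],
                     [q4, -q3,  q2,  q1]])

def Qr(p):
    p1, p2, p3, p4 = p
    return np.array([[p1, -p2, -p3, -p4],
                     [p2,  p1,  p4, -p3],
                     [p3, -p4,  p1,  p2],
                     [p4,  p3, -p2,  p1]])

q = np.array([0.1, 0.2, -0.3, 0.4])
p = np.array([0.5, -0.6, 0.7, 0.8])
# (2.17): the product equals both matrix-vector forms.
assert np.allclose(quat_mul(q, p), Ql(q) @ p)
assert np.allclose(quat_mul(q, p), Qr(p) @ q)

# Non-commutativity of the basis (2.11): i*j = k but j*i = -k.
i, j, k = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
assert np.allclose(quat_mul(i, j), k)
assert np.allclose(quat_mul(j, i), -k)
```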

A unit quaternion can be expressed on the form

q = [cos(φ/2), e⊤ sin(φ/2)]⊤, (2.20)

where e ∈ R^3, ‖e‖ = 1 and φ ∈ R. Let v ∈ R^3; then we define the quaternion form of v as

v = [0 v1 v2 v3]⊤. (2.21)

If the rotation from a reference frame A to a reference frame B is described by the unit quaternion qBA, then the rotation of the vector v from the A–frame to the B–frame is given by

vB = qBA ⊙ vA ⊙ (qBA)^c, (2.22)


where the superscript c denotes the quaternion conjugate, which is defined as

q^c = [q1 −q2 −q3 −q4]⊤. (2.23)

With the unit quaternion defined as in (2.20), this is equal to a rotation of angle φ around the axis e. Using (2.18)–(2.19), the quaternion rotation of v can be expressed as

vB = qBA ⊙ Qr((qBA)^c) vA = Ql(qBA) Qr((qBA)^c) vA, (2.24)

where the product Ql(q)Qr(q^c) gives

Ql(q)Qr(q^c) = Q(q) = [ 1        0_{1×3} ]
                      [ 0_{3×1}  R(q)    ] , (2.25)

where R(q) is the rotation matrix given by

R(q) = [ 2q1² + 2q2² − 1    2(q2q3 − q1q4)     2(q2q4 + q1q3)  ]
       [ 2(q2q3 + q1q4)     2q1² + 2q3² − 1    2(q3q4 − q1q2)  ]
       [ 2(q2q4 − q1q3)     2(q3q4 + q1q2)     2q1² + 2q4² − 1 ] . (2.26)

From this, it is also clear that −q represents the same rotation as q , sinceR(q) = R(−q).
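The sandwich product (2.22) and the matrix form (2.26) can be cross-checked numerically, along with the sign ambiguity R(q) = R(−q); the angle, axis and test vector are arbitrary examples.

```python
import numpy as np

def quat_mul(q, p):
    """Quaternion product (2.15)-(2.16), scalar-first convention."""
    q1, q2, q3, q4 = q
    p1, p2, p3, p4 = p
    return np.array([
        q1 * p1 - q2 * p2 - q3 * p3 - q4 * p4,
        q1 * p2 + q2 * p1 + q3 * p4 - q4 * p3,
        q1 * p3 - q2 * p4 + q3 * p1 + q4 * p2,
        q1 * p4 + q2 * p3 - q3 * p2 + q4 * p1])

def R_of_q(q):
    """Rotation matrix (2.26) from a unit quaternion."""
    q1, q2, q3, q4 = q
    return np.array([
        [2*q1**2 + 2*q2**2 - 1, 2*(q2*q3 - q1*q4), 2*(q2*q4 + q1*q3)],
        [2*(q2*q3 + q1*q4), 2*q1**2 + 2*q3**2 - 1, 2*(q3*q4 - q1*q2)],
        [2*(q2*q4 - q1*q3), 2*(q3*q4 + q1*q2), 2*q1**2 + 2*q4**2 - 1]])

# Unit quaternion from (2.20): rotation of angle phi around unit axis e.
phi = 1.2
e = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
q = np.concatenate([[np.cos(phi / 2)], e * np.sin(phi / 2)])

v = np.array([0.5, -1.0, 2.0])
qc = q * np.array([1, -1, -1, -1])              # conjugate (2.23)
v_quat = np.concatenate([[0.0], v])             # quaternion form (2.21)
v_rot = quat_mul(quat_mul(q, v_quat), qc)[1:]   # sandwich product (2.22)

assert np.allclose(v_rot, R_of_q(q) @ v)        # agrees with (2.26)
assert np.allclose(R_of_q(q), R_of_q(-q))       # -q gives the same rotation
```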

Alternatively, a unit quaternion can be obtained from the exponential mapping of a vector v ∈ R^3 to R^4 (see e.g. [24])

exp(v) := [cos‖v‖, (v⊤/‖v‖) sin‖v‖]⊤  if v ≠ 0,    exp(v) := [1 0 0 0]⊤  if v = 0, (2.27)

where we see that if we let v = e φ/2, the same unit quaternion as in (2.20) is obtained. Through this exponential mapping, any vector v ∈ R^3 can be expressed as a unit quaternion that parametrizes a rotation with magnitude 2‖v‖ around the axis v/‖v‖.

To show that the Euler angles and unit quaternions can describe the same rotation matrices, consider the unit quaternion

q = [ cos(φx/2) ]   [ cos(φy/2) ]   [ cos(φz/2) ]
    [ sin(φx/2) ] ⊙ [     0     ] ⊙ [     0     ] = qx ⊙ qy ⊙ qz, (2.28)
    [     0     ]   [ sin(φy/2) ]   [     0     ]
    [     0     ]   [     0     ]   [ sin(φz/2) ]

which is composed of three unit quaternions that describe rotations around the basis vectors ex, ey, ez. If we consider v′ to be the rotation of the vector v by the unit quaternion q, this is given by

v′ = qx ⊙ qy ⊙ qz ⊙ v ⊙ qz^c ⊙ qy^c ⊙ qx^c = Q(qx)Q(qy)Q(qz)v  ⇔  v′ = R(qx)R(qy)R(qz)v. (2.29)

We have that

R(qx) = [ 2cos²(φx/2) + 2sin²(φx/2) − 1          0                      0              ]
        [ 0                        2cos²(φx/2) − 1       −2cos(φx/2)sin(φx/2) ]
        [ 0                        2cos(φx/2)sin(φx/2)    2cos²(φx/2) − 1     ]

      = [ 1     0        0     ]
        [ 0   cosφx   −sinφx   ]
        [ 0   sinφx    cosφx   ] , (2.30)

which can be recognized from (2.5). In the same fashion, R(qy) and R(qz) return the remaining two rotation matrices in (2.5).

It is always possible to compute a unit quaternion from an arbitrary rotation matrix R. The rotation axis e does not change direction when multiplied by R: we have that Re = e, and it is clear that e is the eigenvector which corresponds to the eigenvalue one. The only case where e cannot be determined is when R = I, in which case the rotation angle φ = 0, and it follows from (2.20) that q2 = q3 = q4 = 0. The rotation angle φ can be obtained from the trace of the rotation matrix

tr R = 6q1² + 2q2² + 2q3² + 2q4² − 3 = 4q1² − 1 = 4cos²(φ/2) − 1 = 2cosφ + 1  ⇔  φ = cos⁻¹((tr R − 1)/2), (2.31)

where we have used (2.26) in the first step, and the identity q1² + q2² + q3² + q4² = 1 together with the definition of q1 from (2.20) in the second step. Note that this mapping always returns a non-negative rotation angle; however, the negative rotation −φ around the flipped axis −e corresponds to the same rotation.
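The exponential mapping (2.27) and the angle recovery from the trace (2.31) can be verified together; the angle and axis below are arbitrary examples.

```python
import numpy as np

def quat_exp(v):
    """Exponential mapping (2.27) from R^3 to a unit quaternion."""
    n = np.linalg.norm(v)
    if n == 0:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(n)], v / n * np.sin(n)])

def R_of_q(q):
    """Rotation matrix (2.26) from a unit quaternion."""
    q1, q2, q3, q4 = q
    return np.array([
        [2*q1**2 + 2*q2**2 - 1, 2*(q2*q3 - q1*q4), 2*(q2*q4 + q1*q3)],
        [2*(q2*q3 + q1*q4), 2*q1**2 + 2*q3**2 - 1, 2*(q3*q4 - q1*q2)],
        [2*(q2*q4 - q1*q3), 2*(q3*q4 + q1*q2), 2*q1**2 + 2*q4**2 - 1]])

phi = 0.9
e = np.array([0.0, 0.6, 0.8])             # unit rotation axis
q = quat_exp(e * phi / 2)                 # equals (2.20): [cos(phi/2), e*sin(phi/2)]
assert np.allclose(q, np.concatenate([[np.cos(phi / 2)], e * np.sin(phi / 2)]))

# Recover the rotation angle from the trace, as in (2.31).
R = R_of_q(q)
phi_rec = np.arccos((np.trace(R) - 1) / 2)
assert np.isclose(phi_rec, phi)
# The rotation axis is the eigenvector of R with eigenvalue one: R e = e.
assert np.allclose(R @ e, e)
```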

2.2 Accelerometers

Accelerometers are sensors that measure the sum of all non-gravitational and gravitational linear accelerations of the sensor. The measured quantity is

RSG(aG + gG), (2.32)

where g is the gravitational acceleration and a is the acceleration of the sensor frame w.r.t. the global frame. The vertical direction of the global frame is defined by the direction of g, such that

gG = [0 0 ‖g‖]⊤, (2.33)


where ‖g‖ denotes the magnitude of the gravitational acceleration. The magnitude ‖g‖ changes depending on where on Earth the sensor is located: near the poles ‖g‖ ≈ 9.83 m/s² and near the equator ‖g‖ ≈ 9.78 m/s², because the distance to the center of the Earth is shorter at the poles than at the equator. Other factors, such as the density of the bedrock, also affect ‖g‖. However, in applications where the sensors do not move long distances, ‖g‖ can be assumed to be constant. A stationary accelerometer provides information about its inclination, i.e. the rotation angle between gG and gS = RSGgG.

Another approximation we make is to consider aG to be the acceleration of the sensor frame w.r.t. the global frame. However, the global frame is fixed w.r.t. the Earth, and is therefore a rotating reference frame. Accelerations due to centrifugal and Coriolis forces are therefore present, but can often be neglected. The accelerations due to centrifugal forces have the magnitude

‖−ωE × (ωE × rE)‖ ≤ ‖ωE‖²‖rE‖. (2.34)

The Earth rotates at the rate ‖ωE‖ ≈ 2π/(23 hours 56 minutes 4 seconds) = 7.2921 × 10⁻⁵ rad/s and the average distance to the center of the Earth is rE = 6371 km, which makes the acceleration due to centrifugal forces approximately 0.0339 m/s², on the order of 0.35% of the gravitational acceleration. Coriolis forces affect objects moving at a velocity v in a rotating reference frame; on Earth, accelerations due to the Coriolis force have the magnitude

‖−2ωE × v‖ ≤ 2‖ωE‖‖v‖. (2.35)

The fastest human running speed measured thus far is v = 12.42 m/s, achieved in 2009 by Usain Bolt. At that speed, the acceleration due to the Coriolis force is approximately 0.0018 m/s². For comparison, the fastest commercially operating train in the world is the Shanghai Maglev (Magnetic Levitation) train, with a top operating speed of 431 km/hour, i.e. v = 119.72 m/s, at which it experiences an acceleration of approximately 0.0175 m/s² due to the Coriolis force.
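The magnitudes quoted above follow directly from the bounds (2.34)–(2.35):

```python
import numpy as np

omega_E = 2 * np.pi / (23 * 3600 + 56 * 60 + 4)   # Earth rotation rate [rad/s]
r_E = 6371e3                                       # mean Earth radius [m]

a_centrifugal = omega_E ** 2 * r_E                 # upper bound of (2.34)
a_coriolis_runner = 2 * omega_E * 12.42            # sprinter, bound of (2.35)
a_coriolis_train = 2 * omega_E * (431 / 3.6)       # Shanghai Maglev at 431 km/h

print(omega_E)             # ~7.292e-05 rad/s
print(a_centrifugal)       # ~0.034 m/s^2, about 0.35% of g
print(a_coriolis_runner)   # ~0.0018 m/s^2
print(a_coriolis_train)    # ~0.0175 m/s^2
```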

2.3 Gyroscopes

Gyroscopes measure the angular rate ωS of the sensor frame, which on Earth is approximately equal to the angular rate of the sensor frame w.r.t. the global frame

ωS ≈ RSGωG. (2.36)

This assumes that the angular rate of the global frame is negligible. This is typically a good approximation, since the angular rate due to the Earth's rotation has the magnitude 7.2921 × 10⁻⁵ rad/s and we assume that the global frame is stationary w.r.t. the Earth.

2.4 MEMS inertial sensors

MEMS inertial sensors are in principle microscopic mass-spring systems, where displacements due to contractions or extensions of springs are measured and then related to the physical quantities we want to measure [55].

A one-axis MEMS accelerometer consists of a proof mass m suspended by springs with a known stiffness constant k. The mass is constrained so it can only move in the direction of the springs; this direction constitutes the sensitive axis of the sensor. The displacement Δx of the mass due to a force F for a linear spring is given by Hooke's law

Δx = F/k = ma/k, (2.37)

where a is the acceleration or specific force experienced by the mass in the direction of the sensitive axis. Figure 2.4a illustrates the working principle of a MEMS accelerometer that is stationary, and which is therefore only affected by the gravitational acceleration g. The magnitude of the gravitational acceleration along the sensitive axis is g cosθ, where θ is the inclination angle of the sensor frame w.r.t. the global frame.

A one-axis MEMS gyroscope uses a similar mass-spring system as the accelerometer, except that the proof mass is driven to resonate at some velocity v, which makes MEMS gyroscopes relatively complex compared to MEMS accelerometers [55]. The resonating mass-spring system is encased and is itself suspended by springs that can contract and extend in the direction perpendicular to v. When this system rotates with angular velocity ω along the direction perpendicular to the sensitive axes of both mass-spring subsystems, the subsystem with the resonating mass will be displaced due to the Coriolis force Fc, and the displacement can be related to the angular velocity through Hooke's law

Δx = Fc/k = −2mvω/k. (2.38)

This working principle is illustrated in Figure 2.4b.
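To get a feel for the scales involved in (2.37)–(2.38), one can plug in rough numbers; the proof mass, stiffness and drive velocity below are assumed values purely for illustration, not data-sheet figures.

```python
# Hypothetical MEMS parameters (assumed, for illustration only).
m = 1e-9          # proof mass [kg]
k = 100.0         # spring stiffness [N/m]

a = 9.81          # 1 g of specific force along the sensitive axis [m/s^2]
dx_accel = m * a / k                      # displacement per (2.37)

v = 1e-3          # drive (resonance) velocity of the gyroscope mass [m/s]
omega = 1.0       # angular rate to be measured [rad/s]
dx_gyro = abs(-2 * m * v * omega / k)     # displacement per (2.38)

print(dx_accel)   # ~1e-10 m: sub-nanometer deflections must be resolved
print(dx_gyro)    # far smaller still, hence the relative complexity of gyroscopes
```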

Figure 2.4: The working principles of single-axis MEMS accelerometers and gyroscopes. The physical quantities are measured from the displacements they cause in microscopic mass-spring systems.

2.5 Error characteristics and calibration

Here we will formulate the measurement models of the accelerometer and the gyroscope, which will include systematic errors and measurement noise

in addition to the measured quantities. The measured quantities are time-varying and continuous, but the sensors are sampled at discrete time instances tk, where the subscript k is an integer sample index such that tk−1 < tk < tk+1. We will use ya(tk) and yω(tk) to denote the accelerometer and gyroscope measurements at time tk, respectively. The measurement models are

ya(tk) = RSG(tk)(aG(tk) + gG) + ba + ea,k, (2.39)

yω(tk) = RSG(tk)ωG(tk) + bω + eω,k, (2.40)

where the terms ba and bω model bias as a constant offset, and ea,k and eω,k model measurement noise. In the remainder of this section we will explore the typical characteristics of these error sources.

2.5.1 Measurement noise

The measurement noise terms describe the stochastic components of the measurements. Typically, these noise terms are assumed to be zero-mean Gaussian random variables ea,k ∼ N(0, Σa) and eω,k ∼ N(0, Σω), with covariance matrices Σa and Σω, respectively. This means that the measurements themselves are random variables with a multivariate Gaussian distribution, which has a probability density function (PDF) defined by

p(y(tk)|μ(tk), Σ) = N(y(tk)|μ(tk), Σ) = (1/√((2π)³|Σ|)) exp(−(1/2)‖y(tk) − μ(tk)‖²_{Σ⁻¹}), (2.41)

where μ(tk) ∈ R^{3×1} is the mean, which depends on the observed quantity at time tk, and Σ ∈ R^{3×3} is a positive semi-definite covariance matrix. It is useful to know these covariance matrices, since they carry information about how accurate we can expect our measurements to be. The diagonal elements of the covariance matrices represent the noise variance for each sensor axis, where a larger variance means that the measurements have larger random deviations from the measured quantities. The off-diagonal elements represent the covariance of the noise terms for different axes. For MEMS sensors, where each axis can be seen as a single one-dimensional sensor, the covariances are typically close to zero, meaning that the noise terms for different sensor axes are independent. Therefore, Σa and Σω are often modeled as diagonal matrices. It is standard procedure to estimate the covariance matrices before using the sensors, as many algorithms used in motion capture make use of the covariances to combine or fuse information from multiple sensors. It is common to weigh information from multiple sensors with the inverse of the noise variance, giving more importance to sensors with reliable information. The covariance matrices can be estimated by letting the sensors be at rest, since then the measured quantities will be constant and any variation observed in the measurements can be considered to be caused by noise. Then, an estimator for the covariance matrices based on N measurements is the sample covariance

Σ = (1/(N − 1)) ∑_{k=1}^{N} (y(tk) − ȳ)(y(tk) − ȳ)⊤,  where ȳ = (1/N) ∑_{k=1}^{N} y(tk) is the sample mean, (2.42)

where the division by N − 1 is used to obtain an unbiased estimate. Figure 2.5 shows the normalized histograms of N = 6000 measurements compared to the estimated Gaussian PDFs with means and variances corresponding to the sample means and sample variances. The histograms and the estimated Gaussian PDFs appear to coincide for the most part.
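The estimator can be sketched on simulated stationary accelerometer data; the mean and per-axis noise levels below are illustrative values, and the sample mean is subtracted before averaging since stationary measurements have a nonzero mean (gravity plus bias).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stationary accelerometer: constant gravity reading plus
# independent per-axis Gaussian noise (values are illustrative).
N = 6000
mu = np.array([0.0, 0.0, 9.81])
sigma = np.array([0.02, 0.02, 0.03])
y = mu + sigma * rng.normal(size=(N, 3))

# Sample covariance with the sample mean removed.
y_bar = y.mean(axis=0)
Sigma_hat = (y - y_bar).T @ (y - y_bar) / (N - 1)

print(np.diag(Sigma_hat))                    # close to sigma**2 per axis
print(np.allclose(Sigma_hat, np.cov(y.T)))   # matches NumPy's unbiased estimator
```

Note that the off-diagonal estimates come out close to zero here by construction, consistent with the diagonal-covariance model discussed above.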

Figure 2.5: The normalized histograms of N = 6000 measurements collected from an Xsens MTw sensor that was at rest on a flat surface for one minute, compared to the PDFs p(ya) and p(yω) of Gaussian distributions (red curves) with means and variances corresponding to the sample means and sample variances.

2.5.2 Bias

The bias terms ba and bω describe systematic errors in the form of constant offsets. In truth, however, these biases are known to vary slightly over time: low-frequency flicker noise in the electronics and temperature changes cause the biases to fluctuate [61]. Sensor manufacturers often include bias stability metrics in their data sheets that indicate the expected rate of change in the bias parameters. Many modern sensors also include automatic correction for temperature changes [1]. Therefore, the bias terms can often be well approximated as constants, at least for short experiments.

The gyroscope bias can be estimated by computing the sample mean from a sensor that is not undergoing any rotation

bω = (1/N) ∑_{k=1}^{N} yω(tk). (2.43)

Estimating the accelerometer bias is not as straightforward, since the measurements of stationary sensors will still be affected by the gravitational acceleration. In order to estimate the accelerometer bias in a similar way, the orientation of the sensor in the global frame must be known

ba = (1/N) ∑_{k=1}^{N} (ya(tk) − RSG(tk)gG). (2.44)

This type of accelerometer calibration can be performed by mounting the sensor on a mechanical platform, where the orientation can be precisely controlled. More recently, less obtrusive methods have been proposed that only require the sensor to be rotated into multiple orientations, where the sensor is allowed to be stationary for a short time [10, 11, 12, 43, 45, 60].
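The two bias estimators (2.43)–(2.44) can be sketched on simulated resting-sensor data; the bias values and noise levels are illustrative, and for simplicity the accelerometer is assumed to be at rest in a known orientation RSG = I.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500

# Gyroscope at rest: measurements are bias plus noise, so the sample
# mean estimates the bias, as in (2.43). Values are illustrative.
b_omega = np.array([0.01, -0.005, 0.002])
y_omega = b_omega + 0.001 * rng.normal(size=(N, 3))
b_omega_hat = y_omega.mean(axis=0)

# Accelerometer at rest in a known orientation R_SG: subtract the rotated
# gravity vector before averaging, as in (2.44). Here R_SG = I for simplicity.
g_G = np.array([0.0, 0.0, 9.81])
b_a = np.array([0.05, -0.02, 0.1])
y_a = g_G + b_a + 0.01 * rng.normal(size=(N, 3))
b_a_hat = (y_a - g_G).mean(axis=0)

print(b_omega_hat)   # close to b_omega
print(b_a_hat)       # close to b_a
```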


Figure 2.6: The magnitude of accelerometer measurements collected from a smart phone that was at rest in six different orientations. It can be seen that a change in orientation corresponds to a distinguishable change in the average magnitude.

2.5.3 Other systematic errorsThe sensor models (2.39)–(2.40) are accurate for modern middle- to high-end sensors. However, lower quality sensors can have other systematic er-rors that need to be compensated for by calibration. For example, accelerom-eters in smart phones are known to have errors that vary with the orienta-tion of the sensor, which cannot be described by one additive bias term.This is illustrated in Figure 2.6, where the magnitude of accelerometer mea-surements, collected from a MPU-6500 inertial sensor platform [21] in asmart phone is plotted. We let the sensor be at rest in six different (approx-imately orthogonal) orientations for approximately one minute per orien-tation. It is clear that one constant bias term, which is fixed for all orien-tations, cannot describe the behaviour illustrated in Figure 2.6. In Paper Iwe present a practical method for accelerometer calibration that considersthese types of errors. There we extend the sensor model in (2.39) to

ya(tk ) = DRSG(tk )(aG(tk )+ g G)+ba +ea,k , (2.45)

where the matrix D ∈ R3×3 is used to model gains, misalignment of the sensor axes and misalignment between the accelerometer and the gyroscope axes in the IMU. A similarly formulated method for magnetometer calibration (see Section 3.1.3) has also been proposed in [25].


Chapter 3

Motion capture refers to the process of recording the movement of people, animals or objects. During this process, data is stored, which can then be used to reproduce and analyze the recorded motion. This technology sees abundant use in the film and gaming industry, to create lifelike animations for computer generated characters [39]. It is used in sports, allowing professional athletes to analyze and optimize their movement [2]. In medicine, it is used to quantify symptoms of movement disorders. The gold standard of motion capture, in terms of accuracy, is camera-based systems. These systems have cameras placed such that their collective fields of view cover an area of interest from multiple different directions. The cameras record the coordinates of reflective or light-emitting balls, typically using infrared light. By tracking and recording the positions of multiple balls attached to e.g. different body parts of a person, the motion can be reconstructed from the position trajectories. Camera-based systems have downsides, however, being expensive and often constrained to one room.

For human motion capture, biomechanical models can be used to aid in capturing variables of interest by exploiting constraints imposed by the model, especially when few sensors are used. This introductory chapter serves to introduce the theory and ideas behind widely used methods in inertial human motion capture and the biomechanical models that are used specifically in analysis of human balance control.

3.1 Orientation estimation

Here we will discuss how to obtain information about the orientation of the sensor frame with respect to an arbitrary global frame that is fixed w.r.t.


the environment. We will be using unit quaternions q (see Section 2.1.2) to parametrize the orientation of one frame w.r.t. the other, which avoids issues such as gimbal lock and uses fewer parameters than the full 3×3 rotation matrix.

3.1.1 Strapdown gyroscope integration

The gyroscope carries information about how the orientation of the sensor frame changes from one sampling instance to another. Here we will make the assumption that the angular velocity is approximately constant between sampling instances. Then the rotation of the sensor frame from time tk−1 to tk is given by (tk − tk−1)ω(tk−1), which is equivalent to a rotation of magnitude ‖(tk − tk−1)ω(tk−1)‖ around the rotation axis ω(tk−1)/‖ω(tk−1)‖. We know from (2.20) and (2.27) that a unit quaternion describing such rotations can be obtained through the vector exponential

exp((tk − tk−1)/2 · ω(tk−1)) = [ cos(‖(tk − tk−1)/2 · ω(tk−1)‖), (ω(tk−1)/‖ω(tk−1)‖) sin(‖(tk − tk−1)/2 · ω(tk−1)‖) ]ᵀ. (3.1)

This gives us the means to use gyroscope measurements to estimate how the unit quaternions evolve over time

qGS(tk ) = qGS(tk−1) ⊙ exp((tk − tk−1)/2 · (yω(tk−1) − bω)). (3.2)

This procedure is commonly known as strapdown integration [52] or dead reckoning of the gyroscope. Note that we subtract the bias bω from the gyroscope measurements. The true bias is typically not known, so in practice we will use the estimated bias b̂ω from (2.43) instead. The long-term accuracy of strapdown gyroscope integration relies heavily on accurate bias estimates. To illustrate this, we show in Figure 3.1 how the orientation of the sensor axes changes over time for 10 s of strapdown integration of a stationary sensor when bias is compensated for and when it is not.
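A minimal sketch of (3.1)–(3.2) in Python (the helper names are our own; quaternions are stored as [w, x, y, z]):

```python
import numpy as np

def quat_exp(v):
    """Quaternion exponential of a 3-vector, as in (3.1)."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(n)], np.sin(n) * v / n))

def quat_mul(p, q):
    """Quaternion product of p and q, both stored as [w, x, y, z]."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def strapdown_step(q, y_gyro, b_gyro, dt):
    """One step of (3.2): right-multiply by exp(dt/2 * (y - b))."""
    return quat_mul(q, quat_exp(0.5 * dt * (y_gyro - b_gyro)))

# Rotate at 90 deg/s about the z-axis for 1 s, starting from identity.
q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, np.pi / 2])
dt = 0.01
for _ in range(100):
    q = strapdown_step(q, omega, np.zeros(3), dt)
```

Since all incremental rotations share the same axis here, the result equals a single 90-degree rotation about z, i.e. q ≈ [cos(π/4), 0, 0, sin(π/4)].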

3.1.2 Inclination measurement

Even with optimal bias compensation, strapdown integration still involves integrating random variables. This will cause a random walk drift in the orientation estimates, with a standard deviation that increases over time [61]. For this reason, strapdown integration is only reliable for a short period of time. We can, however, obtain absolute observations of the sensor's inclination in the local gravitational field from a stationary accelerometer

ya(tk ) = Rᵀ(qGS(tk )) g G + ba + ea,k . (3.3)


Figure 3.1: The orientation drift of the sensor axes after strapdown integration of a stationary sensor. Right and left figures show the cases with and without bias compensation, respectively. Note the different scales of the vertical axes. The gyroscope bias was estimated by (2.43) to be b̂ω = [−0.0137 −0.0193 −0.0161]ᵀ rad/s.

Here we use the same measurement model as in (2.39), but for a stationary sensor (aG = 0), and the rotation matrix is parametrized by the unit quaternion RSG = Rᵀ(qGS).

3.1.3 Magnetometers

Another sensor that is often used to obtain absolute orientation information of an IMU is the magnetometer. Magnetometers are sensors that measure the local magnetic field. Magnetometers are not inertial sensors, but they are often found in sensor platforms together with inertial sensors. Sometimes, such sensor platforms are referred to as magneto-inertial measurement units (mIMU) or 9-DOF IMUs, since the magnetometer provides three additional degrees of freedom to the measurements. Their most common use is similar to that of a regular compass: to provide an absolute observation of the heading. In the most commonly used reference frames in navigation, the horizontal axes point in the longitudinal and latitudinal directions. In such a reference frame, the horizontal components of the Earth magnetic field point to the magnetic north pole and (cf. the x-axis in Figure 2.2) a heading of 0◦ typically aligns with this direction. A commonly used measurement model for magnetometers is

ym(tk ) = Rᵀ(qGS(tk )) m0 + bm + em,k , (3.4)

where m0 denotes the local magnetic field as the measured quantity, bm is additive bias and em,k ∼ N (0,Σm) is measurement noise, assumed to be zero-mean Gaussian with covariance Σm. In orientation estimation, the local magnetic field is typically assumed to be homogeneous, hence m0 is assumed to be constant and known after calibration of the magnetometer. However, magnetometers are susceptible to magnetic disturbances and the assumption of a homogeneous magnetic field does not hold when they


are surrounded by metallic or electrically conducting objects that have their own magnetic fields superimposed on the Earth magnetic field. This is important to be aware of when using magnetometer measurements to observe the heading. Having a prior model of the local Earth magnetic field can help to decide which measurements may be affected by magnetic disturbances.

3.1.4 Extended Kalman filter

We have now seen the orientation information that can be obtained from different sensors. To obtain an accurate orientation estimate we want to combine these different types of information, a process that is often referred to as sensor fusion [17]. Throughout this thesis we make frequent use of extended Kalman filters (EKF) to achieve this. The EKF is a well known adaptation for nonlinear state-space models of the popular Kalman filter proposed by Kalman [22].

We consider a state-space model of the form

x(tk ) = f (x(tk−1),u(tk ), vk ) (3.5)

y(tk ) = h(x(tk ))+ek , (3.6)

where f (·) and h(·) are nonlinear differentiable functions. The state, x(tk ), is the quantity that we are interested in estimating. Equation (3.5) is the process model, which describes how the state evolves in time, where u(tk ) is a known input signal and vk ∼ N (0,Σv ) is the process noise, which accounts for the uncertainties in the process model. Here we assume the process noise to be zero-mean Gaussian with covariance matrix Σv. Equation (3.6) is the measurement model, which describes the relationship between the measurements y(tk ) and the state, and ek ∼ N (0,Σe ) is the measurement noise, assumed to be additive zero-mean Gaussian with covariance matrix Σe.

The EKF is first initialized at the state estimate x̂(t0|t0) with the state covariance P (t0|t0), which represents the prior knowledge of the state. After initialization, the EKF is run by performing a time update followed by a measurement update repeatedly for as long as there is data available, i.e. k = 1, . . . , N. The time update performs a one-step ahead prediction using the process model (3.5) according to

x̂(tk |tk−1) = f (x̂(tk−1|tk−1), u(tk ), 0) (3.7)
P (tk |tk−1) = Fk P (tk−1|tk−1) Fkᵀ + Gk Σv Gkᵀ, (3.8)

where

Fk = ∂f (x(tk−1), u(tk ), vk ) / ∂x(tk−1), evaluated at x(tk−1) = x̂(tk−1|tk−1), vk = 0, (3.9)

Gk = ∂f (x(tk−1), u(tk ), vk ) / ∂vk , evaluated at x(tk−1) = x̂(tk−1|tk−1), vk = 0. (3.10)

The measurement update uses available measurements to update the predicted state estimates using the measurement model (3.6) according to

ỹ(tk ) = y(tk ) − h(x̂(tk |tk−1)) (3.11)
Sk = Hk P (tk |tk−1) Hkᵀ + Σe (3.12)
Kk = P (tk |tk−1) Hkᵀ Sk⁻¹ (3.13)
x̂(tk |tk ) = x̂(tk |tk−1) + Kk ỹ(tk ) (3.14)
P (tk |tk ) = (I − Kk Hk ) P (tk |tk−1), (3.15)

where

Hk = ∂h(x(tk )) / ∂x(tk ), evaluated at x(tk ) = x̂(tk |tk−1). (3.16)

Here ỹ(tk ) is often referred to as the residual or innovation, Sk is the residual covariance and Kk is the Kalman gain. The classic Kalman filter is the optimal state estimator for linear state-space models with additive Gaussian noise. The EKF replicates the Kalman filter by linearizing the state-space model with respect to the most recent state estimate as in (3.9), (3.10) and (3.16). Therefore, the EKF is not an optimal estimator in general, and its performance will depend on how close the state-space model is to a linear Gaussian model.
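The update equations (3.7)–(3.15) translate almost line by line into code. The sketch below is one generic EKF iteration with user-supplied model functions and Jacobians (all names are illustrative); for clarity it uses an explicit matrix inverse, which a production implementation would replace with a linear solve:

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F, G, H, Sv, Se):
    """One EKF iteration: time update (3.7)-(3.8), then measurement
    update (3.11)-(3.15). F, G, H return the Jacobians (3.9), (3.10), (3.16)."""
    # Time update
    x_pred = f(x, u, np.zeros_like(x))
    Fk, Gk = F(x, u), G(x, u)
    P_pred = Fk @ P @ Fk.T + Gk @ Sv @ Gk.T
    # Measurement update
    Hk = H(x_pred)
    resid = y - h(x_pred)                        # innovation (3.11)
    S = Hk @ P_pred @ Hk.T + Se                  # residual covariance (3.12)
    K = P_pred @ Hk.T @ np.linalg.inv(S)         # Kalman gain (3.13)
    x_new = x_pred + K @ resid                   # (3.14)
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred   # (3.15)
    return x_new, P_new

# Toy example: estimate a constant scalar from noisy measurements
# (a linear model, so the EKF reduces to the classic Kalman filter).
rng = np.random.default_rng(1)
f = lambda x, u, v: x
h = lambda x: x
F = lambda x, u: np.eye(1)
G = lambda x, u: np.eye(1)
H = lambda x: np.eye(1)
x, P = np.zeros(1), np.eye(1)
for _ in range(200):
    y = np.array([5.0]) + 0.1 * rng.standard_normal(1)
    x, P = ekf_step(x, P, None, y, f, h, F, G, H,
                    1e-6 * np.eye(1), 0.01 * np.eye(1))
```

After a couple of hundred measurements the estimate settles close to the true value 5.0.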

EKF for orientation estimation

EKFs have been frequently used to perform sensor fusion of inertial and magnetic sensors for orientation estimation, see e.g. [24, 35, 50]. This section will serve as a brief introduction to two different EKFs for orientation estimation. For an extensive derivation and description of these methods, we refer the reader to [24]. In [24] these two methods are presented as EKF with quaternion states and EKF with orientation deviation states.

The EKF with quaternion states has the state variables selected as the unit quaternion x(tk ) = qGS(tk ). The time update (3.7)–(3.8) is performed with f (x(tk−1), u(tk ), vk ) defined by the strapdown gyroscope integration (3.2), where the input is selected as the gyroscope measurement at the previous time step, u(tk ) = yω(tk−1), and vk = eω,k−1. The measurement updates (3.11)–(3.15) can be performed with accelerometer and magnetometer measurements, in which case h(x(tk )) is defined by stacking the measurement models (3.3) and (3.4). It is important to note that we do not want to perform measurement updates when the measurement models are not accurate. The accelerometer model assumes that the sensor is stationary and the magnetometer model is not accurate if magnetic disturbances are present. A simple solution is to check if ‖ya(tk )‖ ≈ ‖g‖ and ‖ym(tk )‖ ≈ ‖m0‖,

Page 38: Inertial motion capture for ambulatory analysis of human …uu.diva-portal.org/smash/get/diva2:1427920/FULLTEXT01.pdf · 2020. 5. 19. · Faculty of Science and Technology 1946. 43

assuming that g and m0 are known. Depending on which of these conditions are satisfied at a given time step, the EKF can perform both updates, a single update or no update. Another thing to note is that after the update step (3.14), x̂(tk |tk ) will no longer be a unit quaternion. Therefore, the state estimate x̂(tk |tk ) and its covariance P (tk |tk ) must be normalized before the next time update, because the strapdown integration requires that the state is a unit quaternion.

The EKF with orientation deviation states represents the unit quaternion as

qGS(tk ) = exp(η(tk )/2) ⊙ q̃GS(tk ), (3.17)

where q̃GS(tk ) is a linearization point and η ∈ R3 is a rotation vector describing the deviation from the linearization point. This version of the EKF uses the state x(tk ) = η(tk ) and keeps track of the linearization point q̃GS(tk ). The updates to the state differ from the standard version of the EKF. In the time update, the orientation deviation is considered to be zero, i.e. x̂(tk−1|tk−1) = 0, and the step (3.7) updates the linearization point q̃GS(tk ) instead of η(tk ). The measurement updates are performed in the same way as in the EKF with quaternion states but for the orientation deviation state, meaning that the measurement models (3.3) and (3.4) are considered as functions of η(tk ). After the measurement updates, the linearization point is updated according to

q̃GS(tk |tk ) = exp(η̂(tk |tk )/2) ⊙ q̃GS(tk |tk−1), (3.18)

and the deviation state is reset back to zero, η̂(tk |tk ) = 0, before the next time update. The orientation estimates are obtained as the updated linearization point, i.e. q̂GS(tk ) = q̃GS(tk |tk ). Note that the quaternion product always returns a unit quaternion. Therefore, there is no need to re-normalize as in the EKF with quaternion states. Another advantage of the EKF with orientation deviation states is that it uses a 3-dimensional state instead of a 4-dimensional state, which makes it computationally attractive.

3.2 Position estimation

Estimating the position and velocity of the sensor frame w.r.t. the global frame requires that we know the non-gravitational acceleration (i.e. the specific force) of the sensor. We can obtain this by rotating the accelerometer measurements to the global frame and subtracting the constant gravitational acceleration

aG(tk ) = R(qGS(tk ))ya(tk )− g G. (3.19)


Note that this operation depends on the orientation estimates q̂GS(tk ). The velocity vG and position pG can then be estimated by integrating the acceleration aG(tk ) twice

vG(tk ) = vG(tk−1) + (tk − tk−1) aG(tk ) (3.20)
pG(tk ) = pG(tk−1) + (tk − tk−1) vG(tk−1) + ((tk − tk−1)²/2) aG(tk ), (3.21)

where we assume that the acceleration and velocity are constant between sampling instances. We typically initialize the sensors from a stationary position at the origin, i.e. pG(t0) = 0 and vG(t0) = 0, unless we have some means to know the initial position and velocity. This double integration of acceleration to obtain the position relative to a known initial position is known as strapdown integration of the acceleration or dead reckoning. Similar to the strapdown gyroscope integration, the errors of the position and velocity estimates will grow over time, which makes this procedure increasingly unreliable for longer experiments.
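A sketch of (3.20)–(3.21) in Python, assuming the acceleration has already been rotated to the global frame and gravity-compensated as in (3.19):

```python
import numpy as np

def dead_reckon(acc_G, dt, p0=np.zeros(3), v0=np.zeros(3)):
    """Strapdown integration (3.20)-(3.21) of global-frame acceleration."""
    p, v = p0.copy(), v0.copy()
    for a in acc_G:
        p = p + dt * v + 0.5 * dt**2 * a   # (3.21), using the old velocity
        v = v + dt * a                     # (3.20)
    return p, v

# Constant 1 m/s^2 along x for 2 s from rest: analytically p = 2 m, v = 2 m/s.
dt, T = 0.01, 2.0
n_steps = round(T / dt)
acc = np.tile([1.0, 0.0, 0.0], (n_steps, 1))
p, v = dead_reckon(acc, dt)
```

For piecewise-constant acceleration the scheme is exact, so the integrated position and velocity match the analytic values.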

Unlike orientation estimation, the inertial sensors do not provide any measurements of absolute position and velocity. For that, we have to rely on other, external sensors. Global Positioning System (GPS) sensors are mobile and can be used to measure absolute position and velocity when the sensors have line of sight to the satellites of a Global Navigation Satellite System (GNSS) [15, 16]. However, the accuracy of GPS sensors is typically poor in indoor environments or in urban areas with tall buildings, which reduces their usefulness in human motion capture.

In indoor environments, radio signals have been used, for example ultra-wideband (UWB) [6, 23]. In these setups, a radio transmitter can be worn by a person and multiple stationary receivers can be placed in the environment. Radio pulses are transmitted and the time of arrival (TOA) at the receivers can be used to estimate the distance between the transmitter and the receiver. The position of the transmitter can then be estimated from multiple TOA measurements from different receivers. In other approaches, the received signal strength indicator (RSSI) from wireless network nodes [9, 49] or the local fluctuations in the magnetic field [26, 57] have been used to construct maps of indoor environments. Wireless receivers and magnetometers can then be used to localize the sensor using these maps. Another approach is simultaneous localization and mapping (SLAM), which builds a map of the environment while simultaneously tracking the position of the sensor. Camera sensors have been used to track visual features of the environment and perform SLAM in visual-inertial odometry [33, 34].

The major downside of the approaches mentioned above is that additional sensors are required, which increases the cost and complexity of the motion capture system. For foot-mounted inertial navigation [40], the position error due to integration drift can be significantly reduced by resetting the velocity estimates to zero when a foot is stationary on the ground, a so-called zero-velocity update [56]. Some motions are of cyclic nature, meaning that the sensor returns to the same position at regular intervals, which can be exploited to limit integration drift [18, 19]. For human motion capture we are typically interested in tracking the relative positions and orientations of different body segments rather than the absolute positions. To this end, it is useful to have a biomechanical model that describes how the body segments are connected and can move with respect to each other.
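The effect of a zero-velocity update on the strapdown integration (3.20)–(3.21) can be sketched as follows. The stationarity mask is supplied by hand here; in practice it would come from a zero-velocity detector such as those in [56]:

```python
import numpy as np

def zupt_dead_reckon(acc_G, stationary, dt):
    """Dead reckoning per (3.20)-(3.21), with the velocity reset to zero
    whenever the sensor is flagged as stationary (boolean mask)."""
    p, v = np.zeros(3), np.zeros(3)
    for a, still in zip(acc_G, stationary):
        p = p + dt * v + 0.5 * dt**2 * a
        v = np.zeros(3) if still else v + dt * a
    return p, v

# A small residual acceleration bias normally makes the velocity drift;
# flagging the sensor as stationary pins the velocity at zero.
dt = 0.01
acc = np.tile([0.02, 0.0, 0.0], (500, 1))      # 0.02 m/s^2 residual bias
p_drift, v_drift = zupt_dead_reckon(acc, [False] * 500, dt)
p_zupt, v_zupt = zupt_dead_reckon(acc, [True] * 500, dt)
```

After 5 s, the uncorrected velocity has drifted to about 0.1 m/s, while the zero-velocity-updated track stays at rest.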

3.3 Kinematic chains

Kinematic chains are models consisting of multiple rigid body segments that are connected by joints. The joints can have 1, 2 or 3 DOF, which determines how many unique variables, or joint angles, are required to describe the relative orientation of two connected segments. In human motion capture, the rigid body segments represent the limbs and the joints determine how the limbs are connected and can move with respect to each other. Therefore, the kinematic chain model can be used to constrain position estimates obtained from individual sensors if the positions of the sensors relative to the joints are known.

Consider the kinematic chain depicted in Figure 3.2, where we have two segments, each with one sensor attached, with sensor frames S1 and S2, respectively. Assuming that the sensors are rigidly attached, the relative position of the two sensors must satisfy

‖pG1 + RGS1 r S11 − (pG2 + RGS2 r S22 )‖ = 0, (3.22)

where pG1 and pG2 are the sensors' positions in the global frame and r S11 and r S22 are the positions of the joint center, which are constant in each sensor's intrinsic coordinate frame. If we estimate the positions of each sensor individually, we can use the information in (3.22) to limit the relative position error caused by integration drift.

Using the information contained in (3.22) requires that the position of the joint center is known in each sensor's coordinate frame. This process is referred to as sensor-to-segment calibration. However, it is time consuming to manually measure the sensors' positions with respect to the joints, and on human subjects it can be hard to determine the exact location of the joint centers since they are located beneath the skin. An idea that has gained traction in recent years is to fit the sensor-to-segment calibration parameters to the kinematic constraints of the system using the measured accelerations and angular velocities of the inertial sensors [42, 51, 54]. For a system with two sensors, as in Figure 3.2, we have for Sensor i ∈ {1,2} that

aSii (tk ) = aSi0 (tk ) + ωSii (tk ) × (ωSii (tk ) × r Sii ) + ω̇Sii (tk ) × r Sii
= aSi0 (tk ) + K(ω̇Sii (tk ), ωSii (tk )) r Sii , (3.23)


Figure 3.2: A kinematic chain model with two rigid body segments connected by a joint. One IMU is assumed rigidly attached to each segment. The positions of the joint center in each sensor's coordinate frame, denoted r S11 and r S22 , respectively, are important sensor-to-segment calibration parameters.

where aSi0 (tk ) denotes the linear acceleration of the joint center and ω̇Sii (tk ) is the angular acceleration of the sensor. The matrix K(ω̇Sii (tk ), ωSii (tk )) ∈ R3×3 contains the angular velocity and angular acceleration terms, which makes (3.23) linear in r Sii . In Paper II we show how we can estimate r Sii from inertial sensor data, and three estimators are evaluated using data recorded from different motions of a mechanical system with a universal joint.
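Since (3.23) is linear in r Sii , a simple least-squares estimator can be sketched as below. This toy version assumes the joint-center acceleration is known (here zero) and the data are noise-free; the estimators evaluated in Paper II do not make these simplifying assumptions:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def K_matrix(wdot, w):
    """K(wdot, w) from (3.23): w x (w x r) + wdot x r = K r."""
    return skew(w) @ skew(w) + skew(wdot)

def estimate_r(acc, acc_joint, wdot, w):
    """Least-squares estimate of the joint-center position r, stacking
    K(wdot_k, w_k) r = a_k - a0_k over all samples."""
    A = np.vstack([K_matrix(wd, wi) for wd, wi in zip(wdot, w)])
    b = (np.asarray(acc) - np.asarray(acc_joint)).ravel()
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

# Synthetic data: known r, random angular rates/accelerations, a0 = 0.
rng = np.random.default_rng(2)
r_true = np.array([0.1, -0.05, 0.2])
w = rng.standard_normal((50, 3))
wdot = rng.standard_normal((50, 3))
acc = np.stack([K_matrix(wd, wi) @ r_true for wd, wi in zip(wdot, w)])
r_hat = estimate_r(acc, np.zeros((50, 3)), wdot, w)
```

With noise-free data and sufficiently exciting rotations, the stacked system is well conditioned and the true offset is recovered exactly.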

3.4 Joint models

Joints with fewer than 3 DOF impose further constraints on the relative motion of segments. Consider for example the kinematic chain in Figure 3.3, which has a joint with 1 DOF, a so-called hinge joint. This could for example model the human knee or finger joints. In kinematic chains with hinge joints, the two segments can only rotate independently about the joint axis j ∈ R3, which means that only one joint angle φ is required to track the relative motion of the two segments. To show how knowing the joint axis simplifies motion tracking of hinge joint systems, we will outline the method proposed in [53]. The change in the joint angle over time can be estimated using strapdown integration of the two gyroscopes by projecting


the measurements onto the joint axis

φ(tk ) = φ(tk−1) + (tk − tk−1)(yω,1(tk−1)ᵀ j S11 − yω,2(tk−1)ᵀ j S22 ), (3.24)

where we use yω,1 and yω,2 to denote the gyroscope measurements of the two sensors, respectively, and the two unit vectors j S11 and j S22 correspond to the direction of the joint axis in the two sensor frames. Furthermore, if r S11 and r S22 are known, the acceleration of the joint center in each IMU's respective coordinate frame can be estimated using (3.23). For Sensor i ∈ {1,2} we have that

aSi0 (tk ) = ya,i (tk ) − K(ẏω,i (tk ), yω,i (tk )) r Sii , (3.25)

where ẏω,i (tk ) is a pseudo-measurement of the angular acceleration, which is obtained through numeric differentiation of sequential gyroscope measurements. After obtaining the accelerations of the joint center, an absolute measurement of the joint angle can be obtained by projecting these accelerations onto the joint plane that is perpendicular to the joint axis [53]. Given any unit vector e ∈ R3 that satisfies e ∦ j1 and e ∦ j2, basis vectors spanning the joint plane can be obtained as

x1 = j1 ×e, y1 = j1 ×x1, x2 = j2 ×e, y2 = j2 ×x2, (3.26)

and the absolute measurement of the joint angle can be obtained as

yφ(tk ) = ∠([aS10 (tk ) · x1, aS10 (tk ) · y1]ᵀ, [aS20 (tk ) · x2, aS20 (tk ) · y2]ᵀ), (3.27)

where ∠(v1, v2) denotes the signed angle between the two vectors v1 and v2, of the same dimension. Using (3.24) and (3.27) we can construct an EKF that fuses strapdown integration and absolute measurements of the joint angle. This approach is computationally attractive since we are estimating a single joint angle to capture the relative motion of the two segments, rather than estimating the positions and orientations of the two IMUs separately. Furthermore, this approach does not require a magnetometer, since we do not need to know the heading of the two IMUs in the global frame, and it is therefore not sensitive to magnetic disturbances.
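The projection (3.26) and angle measurement (3.27) can be sketched as follows (function names are our own; the signed angle is computed with atan2):

```python
import numpy as np

def signed_angle(v1, v2):
    """Signed angle from 2-D vector v1 to v2, as used in (3.27)."""
    return np.arctan2(v1[0] * v2[1] - v1[1] * v2[0], v1 @ v2)

def joint_angle_measurement(a1, a2, j1, j2, e):
    """Absolute joint-angle measurement (3.27): project the joint-center
    accelerations a1, a2 onto each sensor's joint plane (3.26) and take
    the signed angle between the 2-D projections."""
    x1, x2 = np.cross(j1, e), np.cross(j2, e)
    y1, y2 = np.cross(j1, x1), np.cross(j2, x2)
    u = np.array([a1 @ x1, a1 @ y1])
    v = np.array([a2 @ x2, a2 @ y2])
    return signed_angle(u, v)

# Sanity checks: identical frames and accelerations give a zero angle,
# and rotating one acceleration about the joint axis recovers that angle.
j = np.array([0.0, 0.0, 1.0])
e = np.array([1.0, 0.0, 0.0])
a = np.array([0.3, 0.4, 0.0])
phi = joint_angle_measurement(a, a, j, j, e)

theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta), np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
phi2 = joint_angle_measurement(a, Rz @ a, j, j, e)
```

The first call returns 0 and the second recovers the applied rotation of 0.5 rad.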

For hinge joint systems as in Figure 3.3, we consider j1 and j2 to be sensor-to-segment calibration parameters. Just like r1 and r2, the direction of the joint axis can be hard to determine in human subjects. For hinge joint systems, there are additional constraints on the angular velocities and accelerations of the two segments that can be exploited in order to identify j1 and j2 directly from inertial sensor data. Several such methods have been proposed [29, 38, 44, 54] and in Paper III we propose a novel method for plug-and-play joint axis estimation that is designed to work for arbitrary motions of the hinge joint system.


Figure 3.3: A kinematic chain with two rigid body segments connected by a hinge joint. One IMU is assumed rigidly attached to each segment. For such systems, the directions of the joint axis j in each sensor's coordinate frame, denoted j1 and j2, respectively, are important sensor-to-segment calibration parameters.

3.5 Inverted pendulum models

The remainder of this introductory chapter will be used to introduce biomechanical models that are used when analyzing human balance, which are major topics of Papers IV and V. The most common biomechanical models used in analysis of human balance are inverted pendulum models [13, 59]. These models are kinematic chains, often assumed to be planar, where the joints are approximated as hinge joints, which constrains the motion of the pendulum to either the frontal or sagittal anatomical planes. Unlike plain kinematic chains, inverted pendulum models also describe the forces that must be overcome in order to achieve upright balance, which can be used to determine if specific balancing strategies are efficient.

Lagrangian formalism

We will make use of the Lagrangian formalism to derive the equations of motion (EoM) that describe the rigid body dynamics of the inverted pendulum models. First we form the Lagrangian

L(q, q̇) = Ek (q, q̇) − Ep (q), (3.28)

where Ek (q, q̇) is the kinetic energy and Ep (q) is the potential energy of the system. The Lagrangian depends on the generalized coordinates q ∈ Rn and their time derivatives (generalized velocities) q̇ = dq/dt , where n is the number of degrees of freedom of the system. For each generalized coordinate we get


one EoM, which is computed as

(d/dt)(∂L/∂q̇i ) − ∂L/∂qi = 0, i = 1, . . . , n. (3.29)

For the case of the inverted pendulum, the generalized coordinates will be the angles φ between each link and the global vertical direction, and the generalized velocities will be the angular velocities φ̇.

Inverted single pendulum

Starting with the simplest case, the inverted single pendulum, we first note that the Cartesian coordinates (x, y) of the CoM are uniquely determined by the joint angle φ

x = c sinφ (3.30)

y = c cosφ, (3.31)

where c is the distance from the joint to the CoM. The Lagrangian is

L = mv²/2 + I φ̇²/2 − mg y, (3.32)

where m is the mass of the link, v is the linear velocity of the CoM, I is the moment of inertia w.r.t. the rotation axis and g is the gravitational acceleration. Using (3.30)–(3.31) we can express the Lagrangian in terms of φ and φ̇

L = (m/2)(ẋ² + ẏ²) + I φ̇²/2 − mg y
= (mc²/2) φ̇² (cos²φ + sin²φ) + I φ̇²/2 − mg c cosφ
= ((mc² + I )/2) φ̇² − mg c cosφ. (3.33)

Using (3.29) we find that the EoM is

(mc² + I ) φ̈ − mg c sinφ = 0, (3.34)

which is also straightforward to derive using Newton’s second law.
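To see the instability of the upright equilibrium numerically, (3.34) can be integrated with a simple Euler scheme; the parameter values below are illustrative, not fitted to any subject:

```python
import numpy as np

def simulate_single_pendulum(phi0, T, dt, m=70.0, c=1.0, I=20.0, g=9.81):
    """Integrate the EoM (3.34), rearranged as
    phi_dd = m*g*c*sin(phi) / (m*c^2 + I), with semi-implicit Euler."""
    phi, phid = phi0, 0.0
    for _ in range(round(T / dt)):
        phidd = m * g * c * np.sin(phi) / (m * c**2 + I)
        phid += dt * phidd
        phi += dt * phid
    return phi

# Without any stabilizing torque, a small initial lean of 0.01 rad grows:
# the upright equilibrium phi = 0 is unstable.
phi_end = simulate_single_pendulum(phi0=0.01, T=1.0, dt=0.001)
```

In the linearized regime the lean grows like cosh(ωt) with ω² = mgc/(mc² + I ), so after one second the angle has grown by roughly a factor of eight.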

Inverted double pendulum

In an inverted double pendulum, see Figure 3.4, the CoM coordinates for each separate link are given by

x1 = c1 sinφ1 (3.35)

y1 = c1 cosφ1 (3.36)


Figure 3.4: The inverted double pendulum model.

x2 = l1 sinφ1 +c2 sinφ2 (3.37)

y2 = l1 cosφ1 +c2 cosφ2, (3.38)

where l is the length of the link and subscripts now indicate which link a certain quantity belongs to. The Lagrangian then becomes

L = m1v1²/2 + m2v2²/2 + I1φ̇1²/2 + I2φ̇2²/2 − m1g y1 − m2g y2
= (m1/2)(ẋ1² + ẏ1²) + (m2/2)(ẋ2² + ẏ2²) + I1φ̇1²/2 + I2φ̇2²/2 − m1g y1 − m2g y2
= (m1c1²/2)φ̇1²(cos²φ1 + sin²φ1) + (m2/2)(l1²φ̇1² cos²φ1 + c2²φ̇2² cos²φ2 + 2l1c2φ̇1φ̇2 cosφ1 cosφ2 + l1²φ̇1² sin²φ1 + c2²φ̇2² sin²φ2 + 2l1c2φ̇1φ̇2 sinφ1 sinφ2) + I1φ̇1²/2 + I2φ̇2²/2 − m1g y1 − m2g y2
= (m1/2)c1²φ̇1² + (m2/2)(l1²φ̇1² + c2²φ̇2² + 2l1c2φ̇1φ̇2 cos(φ1 − φ2)) + I1φ̇1²/2 + I2φ̇2²/2 − m1g c1 cosφ1 − m2g (l1 cosφ1 + c2 cosφ2), (3.39)

and the two equations of motion are then given by

m1c1²φ̈1 + m2(l1²φ̈1 + l1c2φ̈2 cos(φ1 − φ2) + l1c2φ̇2² sin(φ1 − φ2)) + I1φ̈1 − m1g c1 sinφ1 − m2g l1 sinφ1 = 0, (3.40)

and

m2(c2²φ̈2 + l1c2φ̈1 cos(φ1 − φ2) − l1c2φ̇1² sin(φ1 − φ2)) + I2φ̈2 − m2g c2 sinφ2 = 0. (3.41)

For systems with multiple degrees of freedom it is often convenient to express the EoM in the form

M(φ) φ̈ + C(φ, φ̇) φ̇ + G(φ) = 0, (3.42)

where we see that inserting (3.40)–(3.41) yields (3.42) with φ = [φ1, φ2]ᵀ and

M(φ) = [ m1c1² + m2l1² + I1        m2l1c2 cos(φ1 − φ2)
         m2l1c2 cos(φ1 − φ2)       m2c2² + I2 ],

C(φ, φ̇) = [ 0                                m2l1c2φ̇2 sin(φ1 − φ2)
            −m2l1c2φ̇1 sin(φ1 − φ2)          0 ],

G(φ) = [ −m1g c1 sinφ1 − m2g l1 sinφ1
         −m2g c2 sinφ2 ]. (3.43)
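For simulation or identification it is convenient to evaluate M(φ), C(φ, φ̇) and G(φ) numerically. A direct transcription of (3.43), with illustrative parameter values:

```python
import numpy as np

def double_pendulum_matrices(phi, phid, m1, m2, c1, c2, l1, I1, I2, g=9.81):
    """Evaluate M(phi), C(phi, phid) and G(phi) from (3.43)."""
    d = phi[0] - phi[1]
    M = np.array([[m1 * c1**2 + m2 * l1**2 + I1, m2 * l1 * c2 * np.cos(d)],
                  [m2 * l1 * c2 * np.cos(d), m2 * c2**2 + I2]])
    C = np.array([[0.0, m2 * l1 * c2 * phid[1] * np.sin(d)],
                  [-m2 * l1 * c2 * phid[0] * np.sin(d), 0.0]])
    G = np.array([-(m1 * c1 + m2 * l1) * g * np.sin(phi[0]),
                  -m2 * g * c2 * np.sin(phi[1])])
    return M, C, G

# At the upright equilibrium (phi = phid = 0), the gravity term vanishes,
# the Coriolis term is zero and the mass matrix is symmetric.
M, C, G = double_pendulum_matrices(np.zeros(2), np.zeros(2),
                                   m1=50.0, m2=20.0, c1=0.5, c2=0.3,
                                   l1=0.9, I1=5.0, I2=1.0)
```

The symmetry of M(φ) is a useful sanity check on any hand-derived mass matrix.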

3.5.1 Neuromuscular control

So far, the equations of motion (EoM) that we have derived for inverted single and double pendula only contain the forces of gravity. Inverted pendula are inherently unstable; for any deviation from the vertical equilibrium point φ = 0, the pendulum will start to fall. Humans are similar: without the aid of our muscles to counter the force of gravity, we would meet the same fate. When the muscles that connect to the joint tendons contract, the result can be seen as a torque that is generated and acts on the adjacent body segments. This torque will be referred to as the joint torque. The human musculoskeletal system is inherently redundant in the sense that multiple muscles (more than the degrees of freedom of the model) act across each joint and some muscles act across multiple joints. For example, the gastrocnemius acts at the ankle and knee joints and the hamstrings act at the knee and hip joints. Therefore, the joint torque is used to describe the net torque from all contributing muscles and not the activation of individual muscles. We can expand the EoM (3.42) to include the joint torques u as external forces

M(φ) φ̈ + C(φ, φ̇) φ̇ + G(φ) = u. (3.44)

The muscle activity is controlled by the CNS, which sends activation signals out to the muscles. To choose appropriate activation signals, the CNS receives information, y, from biological sensors in our body and chooses an appropriate response depending on which task should be performed. This is similar to how controllers work in a feedback loop. The task can be seen as a certain desired configuration or state of the body segments, and is the equivalent of a reference signal, r. The reference signal is compared to the information about the current state, and the control actions generated by the CNS are the joint torques that should move the body segments closer to the desired state. Therefore, a control mechanism in the CNS that takes information from our biological sensors as input and outputs a joint torque u by activating muscles will be referred to as a neuromuscular controller. In this context, u is the neuromuscular control signal. As φ = φ̇ = 0 corresponds to upright standing in our biomechanical model, the reference signal in that case will correspond to this equilibrium point. Figure 3.5 shows a block diagram, which illustrates the closed neuromuscular control loop.

[Block diagram: the reference r and the sensor information y enter the neuromuscular controller, which outputs the control signal u; u and the external disturbance d act on the musculoskeletal system (the plant), whose configuration φ is measured by the biological sensors to produce y.]

Figure 3.5: A closed neuromuscular control loop in humans.


Apart from the forces of gravity and the joint torques generated by the neuromuscular controller, the plant may also be affected by external disturbances, denoted by d in Figure 3.5. External disturbances play an important role in the identification of the neuromuscular controller. The external disturbances can be designed as an exogenous input signal to the closed neuromuscular control loop, which allows information about the controller to be extracted via observations of the movement of the human [7, 27].
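As a toy illustration of the loop in Figure 3.5, the sketch below closes the loop around a single-segment model with a proportional-derivative (PD) law standing in for the neuromuscular controller (r = 0 for upright stance), excited by a white-noise disturbance torque d. All gains and parameters are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Closed neuromuscular control loop for a single-segment model:
# PD controller u = -Kp*phi - Kd*dphi, exogenous disturbance torque d.
m, l, g = 70.0, 1.0, 9.81
I = m * l**2
Kp, Kd = 1500.0, 400.0          # Kp must exceed m*g*l for the loop to be stable
dt, n = 0.01, 1000              # 10 s at 100 Hz

rng = np.random.default_rng(0)
d = 20.0 * rng.standard_normal(n)   # white-noise disturbance torque [N m]

phi, dphi = 0.05, 0.0               # start with a small lean
trace = np.empty(n)
for k in range(n):
    u = -Kp * phi - Kd * dphi                       # neuromuscular control signal
    ddphi = (m * g * l * np.sin(phi) + u + d[k]) / I
    phi, dphi = phi + dt * dphi, dphi + dt * ddphi  # explicit Euler step
    trace[k] = phi

# After the initial lean decays, the sway stays small instead of diverging.
print(abs(trace[-100:]).max())
```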

The most common models of the neuromuscular controller in standing human balance fall under the category of linear time-invariant (LTI) systems [14]. These models include but are not limited to:

• Frequency response functions (FRF) [3, 8, 28],
• Proportional-integral-derivative (PID) controllers [5, 58],
• PID controllers with neural time delay [4, 37, 47],
• State feedback and linear quadratic (LQ) controllers [30, 31, 46].
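A PID controller with neural time delay can be discretized by keeping a buffer of past error samples, so that the control action at time t is computed from the error at time t - τ. The sketch below is a minimal illustrative implementation; the class name, the gain values, and the 150 ms delay are assumptions for the example, not values from the cited models.

```python
from collections import deque

# Discretized PID controller with neural time delay:
#   u(t) = Kp*e(t-tau) + Ki*integral(e(t-tau)) + Kd*d/dt e(t-tau).
class DelayedPID:
    def __init__(self, Kp, Ki, Kd, tau, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        n_delay = max(1, round(tau / dt))
        self.buf = deque([0.0] * n_delay, maxlen=n_delay)  # delay line for e
        self.integral, self.prev_e = 0.0, 0.0

    def __call__(self, e):
        e_del = self.buf[0]                 # error as sensed tau seconds ago
        self.buf.append(e)                  # newest sample enters the delay line
        self.integral += e_del * self.dt
        de = (e_del - self.prev_e) / self.dt
        self.prev_e = e_del
        return self.Kp * e_del + self.Ki * self.integral + self.Kd * de

# Illustrative gains; the ~150 ms delay is in the range reported for balance.
ctrl = DelayedPID(Kp=1500.0, Ki=50.0, Kd=400.0, tau=0.15, dt=0.01)
```

For the first τ/dt samples the controller outputs zero, since no error has yet propagated through the delay line.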

For a thorough overview, we refer the reader to Paper IV, which contains a systematic review of neuromuscular controller models and methods for identifying them. In Paper V we show how such controllers that operate in closed loop can be identified from experimental and simulated IMU data.
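To make the closed-loop identification idea concrete, here is a deliberately simplified sketch (not the method of Paper V): simulate the single-segment model under PD control with a known exogenous disturbance torque, reconstruct the joint torque by inverse dynamics, and recover the controller gains by least squares. All parameter values are illustrative assumptions.

```python
import numpy as np

# Toy closed-loop identification: recover PD gains from simulated data.
m, l, g = 70.0, 1.0, 9.81
I, Kp, Kd, dt, n = 70.0, 1500.0, 400.0, 0.01, 2000

rng = np.random.default_rng(1)
d = 30.0 * rng.standard_normal(n)        # known exogenous disturbance torque

phi = np.empty(n); dphi = np.empty(n); ddphi = np.empty(n)
x, v = 0.02, 0.0
for k in range(n):
    u = -Kp * x - Kd * v                 # "unknown" controller to be identified
    a = (m * g * l * np.sin(x) + u + d[k]) / I
    phi[k], dphi[k], ddphi[k] = x, v, a
    x, v = x + dt * v, v + dt * a

# Inverse dynamics gives the joint torque; regress it on the kinematic state.
u_rec = I * ddphi - m * g * l * np.sin(phi) - d
A = np.column_stack([-phi, -dphi])
Kp_hat, Kd_hat = np.linalg.lstsq(A, u_rec, rcond=None)[0]
print(Kp_hat, Kd_hat)   # recovers the true gains 1500 and 400 (the data are noise-free)
```

In practice the state and torque are not observed directly; Paper V addresses the harder problem of identifying the controller from IMU measurements.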

References

[1] P. Aggarwal, Z. Syed, X. Niu, and N. El-Sheimy. “A standard testing and calibration procedure for low cost MEMS inertial sensors and units”. In: The Journal of Navigation 61.2 (2008), pp. 323–336.

[2] A. Avci, S. Bosch, M. Marin-Perianu, R. Marin-Perianu, and P. Havinga. “Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey”. In: Architecture of computing systems (ARCS), 2010 23rd international conference on. VDE. 2010, pp. 1–10.

[3] T. A. Boonstra, A. C. Schouten, and H. van der Kooij. “Identification of the contribution of the ankle and hip joints to multi-segmental balance control”. In: J Neuroeng Rehabil 10 (Feb. 2013), p. 23.

[4] M. Cenciarini, P. J. Loughlin, P. J. Sparto, and M. S. Redfern. “Stiffness and Damping in Postural Control Increase With Age”. In: IEEE Transactions on Biomedical Engineering 57.2 (Feb. 2010), pp. 267–275. ISSN: 0018-9294.

[5] M. Cenciarini, P. J. Loughlin, P. J. Sparto, and M. S. Redfern. “Stiffness and damping in postural control increase with age”. In: IEEE Trans Biomed Eng 57.2 (Feb. 2010), pp. 267–275.

[6] A. De Angelis, J. Nilsson, I. Skog, H. Peter, and P. Carbone. “Indoor positioning by ultrawide band radio aided inertial navigation”. In: Metrology and Measurement Systems (2010), pp. 447–460.


[7] D. Engelhart, T. A. Boonstra, R. G. K. M. Aarts, A. Schouten, and H. van der Kooij. “Comparison of closed-loop system identification techniques to quantify multi-joint balance control”. In: Annual Reviews in Control 41 (2016), pp. 58–70.

[8] D. Engelhart, A. C. Schouten, R. G. K. M. Aarts, and H. van der Kooij. “Assessment of Multi-Joint Coordination and Adaptation in Standing Balance: A Novel Device and System Identification Technique”. In: IEEE Transactions on Neural Systems and Rehabilitation Engineering 23.6 (Nov. 2015), pp. 973–982. ISSN: 1534-4320.

[9] F. Evennou and F. Marx. “Advanced integration of WiFi and inertial navigation systems for indoor mobile positioning”. In: EURASIP Journal on Advances in Signal Processing 2006.1 (2006), p. 086706.

[10] T. Forsberg, N. Grip, and N. Sabourova. “Non-iterative calibration for accelerometers with three non-orthogonal axes, reliable measurement setups and simple supplementary equipment”. In: Measurement Science and Technology 24.3 (2013), p. 035002.

[11] I. Frosio, F. Pedersini, and N. A. Borghese. “Autocalibration of MEMS Accelerometers”. In: IEEE Transactions on Instrumentation and Measurement 58.6 (June 2009), pp. 2034–2041.

[12] I. Frosio, F. Pedersini, and N. A. Borghese. “Autocalibration of Triaxial MEMS Accelerometer With Automatic Sensor Model Selection”. In: IEEE Sensors Journal 12.6 (June 2012), pp. 2100–2108.

[13] W. H. Gage, D. A. Winter, J. S. Frank, and A. L. Adkin. “Kinematic and kinetic validity of the inverted pendulum model in quiet standing”. In: Gait & posture 19.2 (2004), pp. 124–132.

[14] T. Glad and L. Ljung. Control theory. CRC Press, 2014.

[15] M. S. Grewal, L. R. Weill, and A. P. Andrews. Global positioning systems, inertial navigation, and integration. John Wiley & Sons, 2007.

[16] H. F. Grip, T. I. Fossen, T. A. Johansen, and A. Saberi. “Nonlinear observer for GNSS-aided inertial navigation with quaternion-based attitude estimation”. In: 2013 American Control Conference. IEEE. 2013, pp. 272–279.

[17] F. Gustafsson. Statistical sensor fusion. Studentlitteratur, 2010.

[18] K. Halvorsen and F. Olsson. “Pose estimation of cyclic movement using inertial sensor data”. In: Statistical Signal Processing Workshop (SSP), 2016 IEEE (Palma de Mallorca, Spain). IEEE. 2016, pp. 1–5.

[19] K. Halvorsen and F. Olsson. “Robust tracking of periodic motion in the plane using inertial sensor data”. In: IEEE Sensors 2017 (Glasgow, Scotland). 2017.


[20] W. R. Hamilton. “XI. On quaternions; or on a new system of imaginaries in algebra”. In: The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 33.219 (1848), pp. 58–60.

[21] InvenSense. 6 Axis MotionTracking. Sept. 2017. URL:.

[22] R. E. Kalman. “A new approach to linear filtering and prediction problems”. In: Journal of Basic Engineering 82.1 (1960), pp. 35–45.

[23] M. Kok, J. D. Hol, and T. B. Schön. “Indoor positioning using ultrawideband and inertial measurements”. In: IEEE Transactions on Vehicular Technology 64.4 (2015), pp. 1293–1303.

[24] M. Kok, J. D. Hol, and T. B. Schön. “Using Inertial Sensors for Position and Orientation Estimation”. In: Foundations and Trends® in Signal Processing 11.1-2 (2017), pp. 1–153. ISSN: 1932-8346.

[25] M. Kok and T. B. Schön. “Magnetometer calibration using inertial sensors”. In: IEEE Sensors Journal 16.14 (2016), pp. 5679–5689.

[26] M. Kok, N. Wahlström, T. B. Schön, and F. Gustafsson. “MEMS-based inertial navigation based on a magnetic field map”. In: Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE. 2013, pp. 6466–6470.

[27] H. van der Kooij, E. van Asseldonk, and F. C. van der Helm. “Comparison of different methods to identify and quantify balance control”. In: J. Neurosci. Methods 145.1-2 (June 2005), pp. 175–203.

[28] H. van der Kooij and E. de Vlugt. “Postural responses evoked by platform perturbations are dominated by continuous feedback”. In: J. Neurophysiol. 98.2 (Aug. 2007), pp. 730–743.

[29] A. Küderle, S. Becker, and C. Disselhorst-Klug. “Increasing the Robustness of the automatic IMU calibration for lower Extremity Motion Analysis”. In: Current Directions in Biomedical Engineering 4.1 (2018), pp. 439–442.

[30] A. D. Kuo. “An optimal control model for analyzing human postural balance”. In: IEEE Transactions on Biomedical Engineering 42.1 (Jan. 1995), pp. 87–101. ISSN: 0018-9294.

[31] A. D. Kuo. “An optimal state estimation model of sensory integration in human postural balance”. In: J Neural Eng 2.3 (Sept. 2005), S235–249.

[32] J. J. LaViola Jr. “Bringing VR and spatial 3D interaction to the masses through video games”. In: IEEE Computer Graphics and Applications 28.5 (2008).

[33] S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, and P. Furgale. “Keyframe-based visual–inertial odometry using nonlinear optimization”. In: The International Journal of Robotics Research 34.3 (2015), pp. 314–334.


[34] M. Li and A. I. Mourikis. “High-precision, consistent EKF-based visual-inertial odometry”. In: The International Journal of Robotics Research 32.6 (2013), pp. 690–711.

[35] J. L. Marins, X. Yun, E. R. Bachmann, R. B. McGhee, and M. J. Zyda. “An extended Kalman filter for quaternion-based orientation estimation using MARG sensors”. In: Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No. 01CH37180). Vol. 4. IEEE. 2001, pp. 2003–2011.

[36] P. Mattsson, D. Zachariah, and P. Stoica. “Recursive nonlinear-system identification using latent variables”. In: Automatica 93 (2018), pp. 343–351.

[37] C. Maurer, T. Mergner, and R. Peterka. “Multisensory control of human upright stance”. In: Experimental Brain Research 171.2 (May 2006), pp. 231–250.

[38] T. McGrath, R. Fineman, and L. Stirling. “An Auto-Calibrating Knee Flexion-Extension Axis Estimator Using Principal Component Analysis with Inertial Sensors”. In: Sensors 18.6 (2018). ISSN: 1424-8220.

[39] A. Menache. Understanding motion capture for computer animation. Elsevier, 2011.

[40] J.-O. Nilsson, A. K. Gupta, and P. Händel. “Foot-mounted inertial navigation made easy”. In: 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE. 2014, pp. 24–29.

[41] D. Nowka, M. Kok, and T. Seel. On Motions That Allow for Identification of Hinge Joint Axes from Kinematic Constraints and 6D IMU Data. Accepted for European Control Conference (ECC). Nice, France, 2019.

[42] F. Olsson and K. Halvorsen. “Experimental evaluation of joint position estimation using inertial sensors”. In: Information Fusion (Fusion), 2017 20th International Conference on (Xi’an, China). IEEE. 2017, pp. 1–8.

[43] F. Olsson, M. Kok, K. Halvorsen, and T. B. Schön. “Accelerometer calibration using sensor fusion with a gyroscope”. In: Statistical Signal Processing Workshop (SSP), 2016 IEEE (Palma de Mallorca, Spain). IEEE. 2016, pp. 1–5.

[44] F. Olsson, T. Seel, D. Lehmann, and K. Halvorsen. “Joint axis estimation for fast and slow movements using weighted gyroscope and acceleration constraints”. In: Information Fusion (Fusion), 2019 22nd International Conference on (Ottawa, Canada). IEEE. 2019, pp. 1–8.


[45] G. Panahandeh, I. Skog, and M. Jansson. “Calibration of the Accelerometer Triad of an Inertial Measurement Unit, Maximum Likelihood Estimation and Cramér-Rao Bound”. In: Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN). Zurich, Switzerland, Sept. 2010, pp. 1–6.

[46] S. Park, F. B. Horak, and A. D. Kuo. “Postural feedback responses scale with biomechanical constraints in human standing”. In: Exp Brain Res 154.4 (Feb. 2004), pp. 417–427.

[47] R. J. Peterka. “Sensorimotor integration in human postural control”. In: J. Neurophysiol. 88.3 (Sept. 2002), pp. 1097–1118.

[48] D. Roetenberg, H. Luinge, and P. Slycke. “Xsens MVN: full 6DOF human motion tracking using miniature inertial sensors”. In: Xsens Motion Technologies BV, Tech. Rep 1 (2009).

[49] A. R. J. Ruiz, F. S. Granja, J. C. P. Honorato, and J. I. G. Rosas. “Accurate pedestrian indoor navigation by tightly coupling foot-mounted IMU and RFID measurements”. In: IEEE Transactions on Instrumentation and Measurement 61.1 (2011), pp. 178–189.

[50] A. M. Sabatini. “Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing”. In: IEEE Transactions on Biomedical Engineering 53.7 (2006), pp. 1346–1356.

[51] S. Salehi, G. Bleser, A. Reiss, and D. Stricker. “Body-IMU autocalibration for inertial hip and knee joint tracking”. In: Proceedings of the 10th EAI International Conference on Body Area Networks. 2015, pp. 51–57.

[52] P. G. Savage. “Strapdown inertial navigation integration algorithm design part 1: Attitude algorithms”. In: Journal of Guidance, Control, and Dynamics 21.1 (1998), pp. 19–28.

[53] T. Seel, J. Raisch, and T. Schauer. “IMU-based joint angle measurement for gait analysis”. In: Sensors 14.4 (2014), pp. 6891–6909.

[54] T. Seel, T. Schauer, and J. Raisch. “Joint axis and position estimation from inertial measurement data by exploiting kinematic constraints”. In: IEEE International Conference on Control Applications. 2012, pp. 45–49.

[55] D. K. Shaeffer. “MEMS inertial sensors: A tutorial overview”. In: IEEE Communications Magazine 51.4 (2013), pp. 100–109.

[56] I. Skog, P. Handel, J.-O. Nilsson, and J. Rantakokko. “Zero-velocity detection—An algorithm evaluation”. In: IEEE Transactions on Biomedical Engineering 57.11 (2010), pp. 2657–2666.

[57] A. Solin, M. Kok, N. Wahlström, T. B. Schön, and S. Särkkä. “Modeling and interpolation of the ambient magnetic field by Gaussian processes”. In: arXiv preprint arXiv:1509.04634 (2015).


[58] D. A. Winter, A. E. Patla, F. Prince, M. Ishac, and K. Gielo-Perczak. “Stiffness control of balance in quiet standing”. In: J. Neurophysiol. 80.3 (Sept. 1998), pp. 1211–1221.

[59] D. A. Winter. “Human balance and posture control during standing and walking”. In: Gait & posture 3.4 (1995), pp. 193–214.

[60] S. P. Won and F. Golnaraghi. “A Triaxial Accelerometer Calibration Method Using a Mathematical Model”. In: IEEE Transactions on Instrumentation and Measurement 59.8 (Aug. 2010), pp. 2144–2153.

[61] O. J. Woodman. An introduction to inertial navigation. Tech. rep. 696. University of Cambridge Computer Laboratory, Aug. 2007.

[62] Xsens. MTw Awinda Wireless Motion Tracker. Online; accessed December 2017. 2017.


Acta Universitatis Upsaliensis
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1946

Editor: The Dean of the Faculty of Science and Technology

A doctoral dissertation from the Faculty of Science and Technology, Uppsala University, is usually a summary of a number of papers. A few copies of the complete dissertation are kept at major Swedish research libraries, while the summary alone is distributed internationally through the series Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology. (Prior to January, 2005, the series was published under the title “Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology”.)

Distribution: publications.uu.se
urn:nbn:se:uu:diva-409894
