Junior Year Internship Report



    MIDDLE EAST

    TECHNICAL UNIVERSITY

    ELECTRICAL AND ELECTRONICS ENGINEERING

    DEPARTMENT

    EE 400

    SUMMER PRACTICE REPORT

    KADIR FIRAT UYANIK

    1444405

    OSAKA UNIVERSITY

    INTELLIGENT ROBOTICS LABORATORY

    22.07.2008 - 10.09.2008


    Table of Contents

A. Introduction
B. Description of Workspace
   I. Osaka University
      a. Outline of the University
      b. History
   II. Intelligent Robotics Laboratory
   III. JST ERATO Asada Synergistic Intelligence Project
C. Work Done During Summer Practice
   I. Motion Capture System (MCS) Analysis
      a. MCS Calibration
      b. Logging Data
      c. Translation of MCS Coordinate System (CS)
      d. Calculation of Euler Angles
      e. Rotation of MCS CS
      f. Euler Angles to Quaternion Conversion
      g. Visualization
   II. Sensor Board Analysis
      a. Hardware
         i. Gyro Sensor
         ii. Accelerometer Sensors
         iii. Analog to Digital Converter (ADC)
      b. Analysis
         i. Logging Data
         ii. Calculation of Sensor Offsets
         iii. Rough Calibration
         iv. Calculation of Euler Angles
         v. Euler Angles to Quaternion Conversion
   III. Intensive Sensor Calibration
      a. Interpolation of MCS Data
      b. Smoothing Sensor Data
         i. Angular Positions
         ii. Sensor Data
D. Conclusion
E. References
F. Appendices
   I. Self Balancer Class Implementation
      a. Header File
      b. Implementation File
      c. Sample Codes
         i. How To Visualize Sensor Logged-Data in Java-3D
         ii. How To Visualize Sensor Stream-Data in Java-3D
         iii. How To Visualize MCS Logged-Data in Java-3D
   II. Kalman Filter Implementation
      a. Header File
      b. Implementation File
   III. Compilation
   IV. Article about Android Science


A. INTRODUCTION

I performed my second summer practice in the Intelligent Robotics Laboratory (IRL) of Osaka University. IRL is one of the world's leading laboratories in the areas of humanoid robotics and human-robot interaction. My practice started on the 22nd of July and continued until the 10th of September. My research was mainly on probabilistic approaches in robotics and on gyro sensor calibration to make a humanoid robot balance itself. During the first two weeks of my internship, I worked in the Intelligent Robotics Laboratory directed by Prof. Hiroshi Ishiguro, where I conducted research on stochastic approaches in robotics. I then moved to the ERATO laboratory directed by Prof. Minoru Asada. The robot I worked on, called CB2, is part of a joint project of those two laboratories, and I was a member of the Ishiguro group. My supervisor was Tamoyuki Noda, who received his B.Sc., M.Sc. and Ph.D. degrees from Osaka University.

CB2 (Child-robot with Biomimetic Body) is a child humanoid robot 130 cm tall and weighing 33 kg. It features 56 air cylinders that serve as muscles. With cameras for eyes, microphones for ears, and 197 tactile sensors embedded in the layer of soft silicone skin covering its entire body, CB2 is well equipped to take in environmental stimuli. When CB2's shoulders are tapped, it blinks as if surprised, stops moving, and turns its gaze toward the person who touched it; when a toy is dangled in front of its eyes, it appears to devote all its energy to reaching for it. CB2 also has a set of artificial vocal cords that it uses to speak baby talk.

My ultimate aim was to enable CB2 to balance itself whenever a side force is applied while it is in a sitting position. However, I mostly dealt with calibrating the sensor board, which consists of three one-axis gyro sensors and one three-axis accelerometer, so that it could be used during the self-balancing process.

This report includes a description of the workspace, the details of the calibration process, a conclusion, and appendices, in that order. Source code and sample code for the programs can be found in the appendices.


B. DESCRIPTION OF THE WORKSPACE

I. Osaka University [1]


    Figure 1: Osaka University


    a. Outline of the University

President: WASHIDA Kiyokazu (Field: Clinical Philosophy, Ethics)

Trustees/Vice Presidents: NISHIDA Shogo, KOIZUMI Junji, NISHIO Shojiro, TAKASUGI Eiichi, MONDEN Morito, TAKEDA Sachiko, TSUJI Kiichiro

Trustee/Director-General of Administration Bureau: TSUKIOKA Hideto

Administration Bureau: 7 departments, 23 divisions and 6 offices.

Organization:

11 Schools with 10 Corresponding Graduate Schools: School of Letters, Human Sciences, Foreign Studies (School only), Law and Politics, Economics, Science, Medicine, Dentistry, Pharmaceutical Sciences, Engineering, Engineering Science.

5 Independent Graduate Schools: Graduate School of Language and Culture, Osaka School of International Public Policy, Graduate School of Information Science and Technology, Graduate School of Frontier Biosciences, Law School.

5 Research Institutes: Research Institute for Microbial Diseases, Institute of Scientific and Industrial Research, Institute for Protein Research, Institute of Social and Economic Research, Joining and Welding Research Institute.

2 University Hospitals: University Hospital, University Dental Hospital.

1 Library with 3 Branches: Main Library, Life Sciences Branch Library, Suita Branch Library, Minoh Branch Library.

20 Joint-Use Facilities: International Student Center, Global Collaboration Center, Research Institute for World Languages, etc.

3 National Joint-Use Facilities: Research Center for Nuclear Physics, Cybermedia Center, Institute of Laser Engineering.

Faculty and Staff (2008): academic staff 2,877; non-academic staff 2,369; part-time staff 3,162; total 8,408.

Students (2008): undergraduate entering course 3,245, undergraduate total 16,204; graduate entering master's course 1,773; entering doctoral course 929; 5-year doctorate course (including master's course) 55; doctoral course (3 years) 874; Law School 100; total graduate students 8,037; research students 1,123; grand total 25,364.

International Students: undergraduate 241, graduate 784, research students 360, total 1,385.

Campus:

Suita Campus: 996,659.32 m2 (Administration Bureau, School of Human Sciences, Medicine, Dentistry, Pharmaceutical Sciences, Engineering, Joint-Use Facilities, etc.).

Toyonaka Campus: 445,851.08 m2 (University Library, School of Letters, Law, Economics, Science, Engineering Science, etc.).

Minoh Campus: 145,125.08 m2 (School of Foreign Studies, etc.).

Nakanoshima Center: 1,000 m2 (Osaka University Nakanoshima Center).

Budget (FY2008): Revenue 119.103 billion yen; Expenditure 119.103 billion yen.

b. History of the University

The academic origins of Osaka University trace back to Kaitokudo, the Edo-period school for citizens, and Tekijuku, the school of Rangaku* (founded in 1838). It is believed that the spirit of the university's humanities faculties stemmed from Kaitokudo, while that of the science faculties, including medicine, came from Tekijuku.

Kaitokudo was founded at Amagasaki, Osaka (now Imabashi, Chuo-ku, Osaka City) in 1724 by citizens. Its liberal atmosphere, free from any academic schools or dogmas, was welcomed and supported by Osaka merchants and contributed to upgrading the cultural and intellectual levels of Osaka, and it attracted students from all over the country as an academic center in western Japan. Although the school building was burnt down during World War II, books and other literature, about 48,000 items in all, survived the flames and were later presented to Osaka University by the Kaitokudo Commemorating Society. They are collectively stored in the university's library as "Kaitokudo Bunko".

Tekijuku was opened by Ogata Koan, a doctor and scholar of Rangaku, at Osaka's Kawaramachi towards the end of the Edo period (it was later relocated to present-day Kitahama, Chuo-ku, Osaka City). The school produced an array of talented individuals who pioneered Japan's modern era, mainly in medicine but also in such fields as physics, science, and military science. Among the graduates from Tekijuku are Fukuzawa Yukichi, the founder of Keio University, Omura Masujiro, known for building the basis of Japan's modern military, and Takamatsu Ryoun, who contributed to the dissemination of modern medicine in Japan. Koan himself was an able medical doctor and excellent educator. His basic ideas towards human beings have been inherited by Osaka University as its mental backbone. A plan is under way at the university to reproduce the ideas of Kaitokudo and Tekijuku in digital images, store them in a database, and pass them on as a symbol of Osaka University's spirit and ideals.

*The study of Western sciences by means of the Dutch language, the only Western language the Tokugawa regime recognized during its reign.


    Figure 3


Osaka Imperial University was inaugurated as the sixth imperial university in Japan in 1931. It started with two faculties: medicine and science. The School of Engineering was added as a third faculty two years later. Osaka Imperial University changed its name to Osaka University in 1947. In 1949, as a result of the government's education system reform, Osaka University started its postwar career with five faculties: science, medicine, engineering, letters and law. Although it is a national university, Osaka University was established in response to the requests of local industrial circles and citizens. This is reflected in the many faculties that were founded through the financing of voluntary contributors.

Unique and innovative faculties, graduate schools and research institutes have been established one after another. They include the School of Engineering Science, the first of its kind in a national university, which is situated between the Schools of Engineering and Science, and the School of Human Sciences, which covers psychology, sociology and education. In 1993, Osaka University Hospital was relocated from Nakanoshima in Osaka City to the Suita campus. This completed the implementation of the university's long-cherished plan to integrate all major facilities into the Suita and Toyonaka campuses.

In 1953, graduate schools were set up in Japanese universities as part of the government's education system reform program. All the faculties of Osaka University, which had by then increased to ten, inaugurated graduate schools. The number of graduate schools reached 15 in 2004. They include the Graduate School of Language and Culture, the Osaka School of International Public Policy, the Graduate School of Information Science and Technology, the Graduate School of Frontier Biosciences and the Law School: cross-faculty and cross-institutional independent graduate schools.

Research institutes were also established in rapid succession. In addition to the Research Institute for Microbial Diseases and the Institute of Scientific and Industrial Research, which existed before World War II, the Institute for Protein Research, the Institute of Social and Economic Research, and the Welding Research Institute (the current Joining and Welding Research Institute) were set up, each separated from its parent faculty. Added to these institutes were Nationwide Joint-Use Facilities, Campus-wide Joint-Use Facilities, and the Museum of Osaka University. In total, there are 23 centers, research facilities and laboratories in operation at Osaka University today.


    Figure 4: Osaka Imperial University

    Figure 5:


II. Intelligent Robotics Laboratory [2]

A Perceptual Information Infrastructure monitors and recognizes the real environment through sensor networks. The sensor network tracks people in real time and recognizes human behaviors, which provides rich information for understanding real-world events and helps people and robots working in the real world.

An Intelligent Robot Infrastructure is an interaction-based infrastructure. By interacting with robots, people can establish nonverbal communication with the artificial systems. That is, the purpose of a robot is to exist as a partner and to have valuable interactions with people.

Our objective is to develop technologies for the new generation of information infrastructures based on computer vision, robotics and artificial intelligence.


    Figure 6: Social Robots interacting with people

    Figure 7: Prof. Hiroshi Ishiguro and his android twin


III. JST ERATO Asada Synergistic Intelligence Project [3]

The 21st century is known as the "Century of the Brain" and will be an era of robot habitation with humans. Currently, these notions are not related to one another, despite the related subjects of understanding and realizing the emergence of human intelligence. With advanced technologies, such as noninvasive imaging devices, recent advances in neuroscience are now approaching problems regarding cognition and consciousness, which have not yet been dealt with in science. However, current means are not sufficient to proceed with research regarding communication and language acquisition based on "embodiment", which epitomizes the abilities of cognition and development. It is important for life sciences in the 21st century to regard the brain as a complete system, and to clarify how brains realize function through interactions with the external world based on their dynamics.

Humanoids are a class of human-type robot, and Japan is a world leader in this field. Despite rapid technological development, only superficial functions have been realized, and there is no established design methodology for the emergence of human intelligence based on embodiment. It is indispensable to merge science and technology through the design, implementation, and operation of artifacts (robots) after clarifying the background deep-layer structures for the emergence of intelligence, based not simply on the engineering needed to realize superficial functions but also on collaboration with scientific fields such as neuroscience and the cognitive sciences. If these fields are merged organically, a means of verification utilizing robot technologies can be expected to promote a new brain science unique to Japan. We may also synthetically tackle the subjects of consciousness and mind, which used to be difficult to understand with conventional philosophy, psychology, sociology, cognitive science, linguistics and anthropology. Designing artifacts capable of passing these verifications necessitates innovations in current materials, such as robot sensors and actuators, and in conventional artificial intelligence and control technologies. Based on this basic concept, this study derives the design of the intelligence lacking in current humanoid studies from interactions with a dynamic environment, including humans (interactions between environment and robot or between human and robot). In other words, we realize a new understanding and construction of human intelligence from a complex of science and technology through the design, implementation, and operation of humanoids, and by verifying the constructive model using the methods of the cognitive and brain sciences. The new domain based on this complex is designated "synergistic intelligence."


This name refers to both the co-creation of intelligence by science and technology, and the co-creation of intelligence through the interactions between the body and environments. This new domain has four aspects. The generation of dynamic motions by artificial muscles allows co-creation of intelligence through the interaction between the body and the environment and is called "Physically synergistic intelligence (Physio-SI)."

The emergence of the somatosensory system and motion in fetal models allows co-creation in the uterine environment. In the process of developing cognition from various interactions with the fosterer, the fosterer is the most important environmental factor. Therefore, the synthetic model of this process is called "Interpersonally synergistic intelligence (Perso-SI)."

In the emergence of communication between many humans and robots, the effect of the multi-agent environment is important, and "Socially synergistic intelligence (Socio-SI)" serves as the core.

"Synergistic intelligence mechanism (SI-Mechanism)" verifies these processes by comparing autism and Williams syndrome, which highlight extreme aspects of language and cognition capabilities, and promotes the construction of a new constructive model. We will promote harmonious study within the group, and design strategies to realize a synergistic intelligence system in humanoids that requires close linkage between groups.

The idea of "synergistic intelligence" promoted in this study is expected to help people recognize the importance of a constructive model and to introduce more macroscopic views into the conventional studies of brain science and physiology, which tend to be microscopic, for further development in various fields related to the brain. In addition, the idea is expected to lead to an understanding of humans conducting high-grade social activities, and to solutions of various problems regarding the psychological development and education of children.


C. WORK DONE DURING SUMMER PRACTICE

The work is explained under three major parts:

I. Motion Capture System (MCS) Analysis,
II. Sensor Board Analysis,
III. Intensive Sensor Calibration,

respectively.


Figure 8: (from left to right) Me, CB2 and my supervisor Tamoyuki Noda


I. MOTION CAPTURE SYSTEM (MCS) ANALYSIS

In this part, the angular variation of the sensor board is analyzed by tracking 3 markers located on the test plate.

The main parts of the analysis are:
I.a. MCS Calibration
I.b. Logging Data
I.c. Translation of MCS Coordinate System (CS)
I.d. Calculation of Euler Angles
I.e. Rotation of MCS CS
I.f. Euler Angles to Quaternion Conversion
I.g. Visualization

I.a. MCS Calibration

Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and transferring the movement onto a digital model. It is mostly used in the military, entertainment and film making. In film making it corresponds to recording the actions of human actors and using that information to animate digital character models in 3D animation.

In the motion capture system that I used there were 7 infrared cameras. The idea behind it is very simple: the markers which are tracked reflect light very well, and the infrared cameras have powerful infrared LEDs on them. The light reflected from the surface of the markers is sensed by the cameras, and a 3D model of the object is obtained through some mathematical calculations from the 2D position of each marker in each camera.

Those systems are still expensive; the one I used costs about 200,000 USD.

The calibration process of the MCS requires good care because the MCS is the major tool used to calibrate the sensor module. If there is an intolerable amount of error in the positions of the markers, this will cause a wrong sensor calibration, and during the integration of the sensor data this error will blow up because of accumulation.

Before starting the calibration we relocated the cameras as in the figure; then some materials with predefined shapes were used, and the accuracy of the MCS was measured with a ruler. The result of the MCS calibration was impressive, owing to its 0.1 mm sensitivity.


Figure 1: A dancer wearing a suit used in an optical motion capture system
Figure 2: Used to study human biomechanics
Figure 3: ERATO Lab MCS software


I.b. Logging Data

By using the MCS software GUI, the markers' positions obtained during the experiments are easily logged. To record the position of a marker, it should be seen by at least two cameras at the same time, because it is necessary to have at least two position measurements in two different coordinate systems. In other words, the real-world (3D) position of a marker can be obtained using at least two 2D locations in two different camera frames. Since the cameras have a 60 fps refresh rate, we can obtain 60 data points per second. Some sample data is given in Table 1.

The test plate is a hand-made object having 3 markers and the sensor board on it. The markers were aligned so that the one at the corner represents the origin of the test plate and the farther marker indicates the x coordinate of the 'test plate coordinate system' (tpcs). The closer marker is used to obtain the z axis of the tpcs via the cross product of those two vectors. Once the vector representing the z axis of the tpcs is produced, then by taking the cross product of the x- and z-axis vectors the correct y-axis vector is produced.


Figure 4: 3 markers indicating the test plate coordinate system
Figure 5: Tools used for calibration

PathFileT4 (X/Y/Z)  move_1.trc
DataRate  CameraRate  NumFrames  NumMarkers  Units  OrigDataRate  OrigDataStartFrame  OrigNumFrames
60        60          2021       6           mm     60            1                   2021

Frame#  Time  M1 (X1, Y1, Z1)              M2 (X2, Y2, Z2)              M3 (X3, Y3, Z3)
1       0     -430.2   499.83  -294        -361.32  499.8   -236.17    -401.03  501.29  -186.28
2       0.02  -430.15  499.89  -294.03     -361.32  499.8   -236.17    -401.12  501.15  -186.4
3       0.03  -430.16  499.67  -293.86     -361.32  499.8   -236.17    -401.09  501.29  -186.3
4       0.05  -430.2   499.91  -293.94     -361.32  499.8   -236.17    -401.09  501.29  -186.3
5       0.07  -430.13  499.8   -293.83     -361.32  499.8   -236.17    -401.05  501.24  -186.33
6       0.08  -430.16  499.97  -293.97     -361.28  499.77  -236.2     -401.09  501.29  -186.3

Table 1: 3D coordinates of the markers at the motion capture system coordinate system


To be more clear:
- the M1 marker is used to indicate the x coordinate,
- the M2 marker is the origin of the tpcs,
- the M3 marker is just used to produce the z-axis unit vector.

If we think of the M2M1 vector as V1 and the M2M3 vector as V2, it can easily be seen from the illustration that they may not be perpendicular to each other, because the test plate is hand made. Therefore, reproduction of the axis vectors is a must here.
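A minimal sketch of this axis reconstruction, assuming the marker positions are already available as 3D vectors; the function and type names are mine rather than the report's code, and the sign of the cross products depends on the desired handedness:

#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
}
static Vec3 normalize(const Vec3& v) {
    double n = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return {v[0]/n, v[1]/n, v[2]/n};
}

// Build an orthonormal test-plate frame from the three marker positions.
// M2 is the origin, M2M1 gives the x direction, M2M3 a rough in-plane direction.
void buildTestPlateFrame(const Vec3& M1, const Vec3& M2, const Vec3& M3,
                         Vec3& ex, Vec3& ey, Vec3& ez) {
    ex = normalize(sub(M1, M2));     // x axis from M2 toward M1
    Vec3 v2 = sub(M3, M2);           // not exactly perpendicular to ex (hand-made plate)
    ez = normalize(cross(ex, v2));   // z axis perpendicular to the plate
    ey = cross(ez, ex);              // re-derived y axis, guaranteed orthogonal to ex and ez
}

int main() {
    // Marker positions from frame 1 of Table 1 (mm).
    Vec3 M1 = {-430.2, 499.83, -294}, M2 = {-361.32, 499.8, -236.17}, M3 = {-401.03, 501.29, -186.28};
    Vec3 ex, ey, ez;
    buildTestPlateFrame(M1, M2, M3, ex, ey, ez);
    std::printf("ex = (%.3f, %.3f, %.3f)\n", ex[0], ex[1], ex[2]);
    std::printf("ey = (%.3f, %.3f, %.3f)\n", ey[0], ey[1], ey[2]);
    std::printf("ez = (%.3f, %.3f, %.3f)\n", ez[0], ez[1], ez[2]);
}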

I.c. Translation of MCS CS

In order to map the MCS coordinate system onto the tpcs, first we have to translate the marker coordinates so that the marker M2, which represents the origin of the tpcs, moves to the origin. Therefore, if the coordinate values of M2 are subtracted from all markers, the coordinates of the markers are translated to the origin. The result of the translation is shown in Table 2.

One can compare this table with Table 1 and see the relation between them. These coordinate values can be taken as the values with respect to the M2 marker.


PathFileT4 (X/Y/Z)  move_4.trc
DataRate  CameraRate  NumFrames  NumMarkers  Units  OrigDataRate  OrigDataStartFrame  OrigNumFrames
60        60          1044       6           mm     60            1                   1044

Frame#  Time  M1 (X1, Y1, Z1)         M2 (X2, Y2, Z2)   M3 (X3, Y3, Z3)
1       0     -68.87  0.03   -57.83   0  0  0           -39.71  1.49  49.89
2       0.02  -68.83  0.09   -57.86   0  0  0           -39.8   1.35  49.78
3       0.03  -68.84  -0.13  -57.69   0  0  0           -39.77  1.49  49.88
4       0.05  -68.88  0.11   -57.77   0  0  0           -39.77  1.49  49.88
5       0.07  -68.8   0      -57.66   0  0  0           -39.73  1.44  49.84
6       0.08  -68.88  0.2    -57.77   0  0  0           -39.82  1.52  49.91

Table 2: 3D coordinates of the markers after translating the tpcs to the origin of the mcs

Figure 6: Test plate coordinate system (tpcs)
Formula 1: Cross product of two vectors, u x v = (u2 v3 - u3 v2, u3 v1 - u1 v3, u1 v2 - u2 v1)


    I.d. Calculation of Euler Angles

    Euler Angles

Euler angles are a means of representing the spatial orientation of any frame of the space as a composition of rotations from a reference frame. In the following, the fixed system is denoted in lower-case letters (x, y, z) and the rotated system is denoted in upper-case letters (X, Y, Z). The intersection of the xy and the XY coordinate planes is called the line of nodes (N).

α is the angle between the x-axis and the line of nodes.
β is the angle between the z-axis and the Z-axis.
γ is the angle between the line of nodes and the X-axis.

Indeed, Euler sequences are divided naturally into two classes: type I sequences have no repeating axis (e.g. 123); type II sequences repeat the external axis (e.g. 131).

There are 12 possible Euler sequences (123, 132, 213, 231, 312, 321, 121, 131, 212, 232, 313, and 323). There exists a two-fold ambiguity in the internal rotation angle θ for both classes; because of this ambiguity in the internal angle, there are 24 possible Euler sequences.

For a type I sequence, the ambiguity is typically resolved by choosing θ to be between -90 and 90 degrees, which makes the cosine of the angle >= 0. For a type II sequence, the ambiguity is usually resolved by choosing sin θ >= 0, which puts θ between 0 and 180 degrees.

Note: the (α, β, γ) notation corresponds to the (φ, θ, ψ) notation.

Singularities also exist for both types of sequences. For type I sequences, singular values occur at odd multiples of 90 degrees for the internal angle θ; for type II sequences, singular values occur at multiples of 180 degrees for the internal angle θ. Note, however, that several notational conventions for the angles are in common use: Goldstein (1980, pp. 145-148) and Landau and Lifschitz (1976) use (φ, θ, ψ); Tuma (1974) says (ψ, θ, φ) is used in aeronautical engineering in the analysis of space vehicles (but claims that (φ, θ, ψ) is used in the analysis of gyroscopic motion), while Bate et al. (1971) use (Ω, i, ω). Goldstein remarks that continental authors usually use (ψ, θ, φ), and warns that left-handed coordinate systems are also in occasional use (Osgood 1937, Margenau and Murphy 1956-64). Varshalovich (1988, pp. 21-23) uses the notation (α, β, γ) or (γ, β, α) to denote the Euler angles, and gives three different angle conventions, none of which corresponds to the x-convention (namely the zxz convention) [5].


    Figure 7: Euler Angles in zxz convention


As we can see, since there are various kinds of namings, conventions and ambiguities, it is difficult to decide which sequence or convention is better to use or easier to calculate. On the other hand, Euler angles have one more disadvantage, which is gimbal lock.

Gimbal Lock

As rotations in the Euler representation are done with respect to the global axes, a rotation in one axis can 'override' a rotation in another, causing the loss of one degree of freedom. This is called gimbal lock. Say the rotation about the y axis rotates a vector (parallel to the x axis) so that the vector becomes parallel to the z axis; then any rotation about the z axis would have no effect on the vector.

In my application I mostly used the zxz convention and the zyx convention, and in order to decrease the effect of gimbal lock I used Euler-angle-to-quaternion conversions. These will be explained hereafter.

    Euler Angle Formulas

To find the Euler angles in the zxz convention one can use the formulas given in Formula 2.

Formula 2: Euler angle formulas

where arg(u + i·v) = atan2(v, u). Here the atan2() function can find the angle correctly, since it takes the quadrant information into account, whereas the atan() function does not and can give an incorrect result. More detailed information about how to find Euler angles can be found in (0).

Note: the (α, β, γ) notation corresponds to the (φ, θ, ψ) convention, where the sequence of the angles shows the rotation sequence of the coordinate system.

In my application I use Euler angles first to map the MCS coordinate system onto the test plate coordinate system, and then to find the angular deviation of the markers as the test plate moves. Those angles are then used to calibrate the sensor board by comparing the two datasets: the MCS log data and the sensor log data, respectively.


    Figure 9: Gimbal Lock


Now my aim is to make the M2M1 vector an x-axis indicator, which means M1 has to have coordinates of the form (a, 0, 0), where a is a positive real number. Besides, the M2M1 and M2M3 vectors should lie in the x-y plane, which means M3 has to have coordinates of the form (u, v, 0), where v is a positive real number and u is any number, expected to be as small as possible. Since I located these 3 markers by hand, it is almost impossible to make the M2M1 and M2M3 vectors perpendicular to each other, but I solved this problem by reproducing virtual z- and y-axis vectors.

To make things more clear, I will summarize what I did until now, and finally I will give the mapping Euler angles which are used for the mapping between the MCS CS and the tpcs:

1) logging the MCS data (Table 3),
2) translating the MCS data (Table 4),
3) mapping the MCS data (Table 5).

Euler angles used to map the data: (α, β, γ) = (0.886668, 0.0230814, -0.0132174) rad = (50.82810, 1.32313, -0.75768) deg.


Table 3: Logged MCS Data

PathFileT4 (X/Y/Z)  move_1.trc
DataRate  CameraRate  NumFrames  NumMarkers  Units  OrigDataRate  OrigDataStartFrame  OrigNumFrames
60        60          2021       6           mm     60            1                   2021

Frame#  Time  M1 (X1, Y1, Z1)              M2 (X2, Y2, Z2)              M3 (X3, Y3, Z3)
8       0.12  -430.24  499.72  -293.81     -361.4   499.73  -236.17    -401.4   501.67  -186.34
9       0.13  -430.16  499.97  -293.97     -361.28  499.77  -236.2     -401.13  501.31  -186.33
10      0.15  -430.24  499.85  -293.91     -361.32  499.8   -236.17    -401.11  501.24  -186.35

Table 4: Translated MCS Data

Frame#  Time  M1 (X1, Y1, Z1)        M2 (X2, Y2, Z2)   M3 (X3, Y3, Z3)
8       0.12  -68.84  -0.01  -57.65  0  0  0           -40     1.94  49.83
9       0.13  -68.88  0.2    -57.77  0  0  0           -39.85  1.54  49.87
10      0.15  -68.91  0.05   -57.73  0  0  0           -39.79  1.44  49.83

Table 5: Mapped MCS Data

Frame#  Time  M1 (X1, Y1, Z1)     M2 (X2, Y2, Z2)   M3 (X3, Y3, Z3)
8       0.12  89.79  0.02   0.02  0  0  0           -1.34  63.92  0.47
9       0.13  89.9   -0.04  0.23  0  0  0           -1.48  63.84  0.07
10      0.15  89.9   0      0.07  0  0  0           -1.5   63.76  -0.03


That means the two vectors in the MCS coordinate system (M1', M3') should be rotated first around the z axis by 50.83 deg, then around the x axis by 1.32 deg, and after that around the z axis again by -0.75 deg, so that norm(M1') becomes (1, 0, 0) and norm(M3') becomes (0, 1, 0). Here, M1' and M3' are the translated coordinates of the corresponding markers. The results of the rotation are shown in Table 6.

As we can see, the coordinate system mapping is done successfully. The rotation of the MCS CS is explained in the next section.

I.e. Rotation of MCS CS

In order to rotate the coordinate axis vectors, one has to find a rotation matrix. In fact, the rotation matrix consists of 3 rotation matrices; those 3 matrices correspond to 3 consecutive rotations about the 3 axis unit vectors. Therefore, if we write

A = R3(α) R1(β) R3(γ)

where A is the 3D rotation matrix, then to rotate a vector v the rotation procedure occurs as follows:

v' = A v

where v' is the rotated version of v.

From now on, the coordinate system is the test plate coordinate system, and it is ready to provide meaningful data from the motion capture system for comparison with the sensor board output.
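As an illustration of this composition (Formula 3 survives only as a caption here), the sketch below builds A = R3(α)·R1(β)·R3(γ) from elementary rotations and applies it to a vector. Whether the test-plate mapping multiplies by A or by its transpose depends on sign and ordering conventions that cannot be verified from the text alone, so this is illustrative only; the names are mine.

#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Elementary rotation about the z axis (R3) by angle a (radians).
Mat3 R3(double a) {
    double c = std::cos(a), s = std::sin(a);
    Mat3 m = {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
    return m;
}
// Elementary rotation about the x axis (R1) by angle a (radians).
Mat3 R1(double a) {
    double c = std::cos(a), s = std::sin(a);
    Mat3 m = {{{1, 0, 0}, {0, c, -s}, {0, s, c}}};
    return m;
}
Mat3 mul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
    return C;
}
Vec3 apply(const Mat3& A, const Vec3& v) {
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) r[i] += A[i][k] * v[k];
    return r;
}

int main() {
    // zxz mapping angles from the text, in radians (alpha, beta, gamma).
    Mat3 A = mul(mul(R3(0.886668), R1(0.0230814)), R3(-0.0132174));
    Vec3 v = {1.0, 0.0, 0.0};
    Vec3 r = apply(A, v);
    std::printf("A * (1,0,0) = (%.3f, %.3f, %.3f)\n", r[0], r[1], r[2]);
}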


    Formula 3: Euler angle to rotation matrix conversion

Table 6: Normalized vectors showing that the mapping is successful

Frame#  Time  norm(M1') (X, Y, Z)   M2 (X, Y, Z)      norm(M3') (X, Y, Z)
8       0.12  1.00  0.00  0.00      0.00  0.00  0.00  -0.02  1.00  0.01
9       0.13  1.00  0.00  0.00      0.00  0.00  0.00  -0.02  1.00  0.00
10      0.15  1.00  0.00  0.00      0.00  0.00  0.00  -0.02  1.00  0.00


I.f. Euler Angles to Quaternion Conversion

Now one can find how much deviation occurs in terms of Euler angles while the sensor board is moving. After applying Formula 2 during an experiment, I obtain the data shown in Table 7. The Euler angles corresponding to this data set are shown in Table 9, in radians.


Table 7: Experimental data showing the 3D coordinates of the 3 markers

PathFileT4 (X/Y/Z)  move_4.trc
DataRate  CameraRate  NumFrames  NumMarkers  Units  OrigDataRate  OrigDataStartFrame  OrigNumFrames
60        60          1044       6           mm     60            1                   1044

Frame#  Time  M1 (X1, Y1, Z1)  M2 (X2, Y2, Z2)  M3 (X3, Y3, Z3)

    491 8.17 -304.7 552.53 -147.43 -299.89 473.58 -105.73 -362.75 471.88 -96.26

    492 8.18 -304.89 552.46 -148.7 -299.9 474.14 -106.29 -362.68 471.91 -96.86

    493 8.2 -304.77 552.44 -149.72 -299.93 474.58 -106.96 -362.23 472.1 -97.5

    494 8.22 -304.67 552.92 -150.75 -299.91 474.98 -107.27 -362.43 472.87 -98.13

    495 8.23 -304.42 552.8 -151.29 -299.79 475.15 -107.61 -362.59 473.37 -98.27

    496 8.25 -304.34 552.74 -151.43 -299.69 475.11 -107.79 -362.35 473.21 -98.43

    497 8.27 -304.32 552.77 -151.45 -299.39 474.97 -108.05 -362.06 473.07 -98.47

    498 8.28 -304.27 552.69 -151.66 -298.43 476.58 -107.44 -361.92 473.23 -98.42

    499 8.3 -304.26 553.21 -152.47 -298.66 477.11 -108.15 -362.33 473.88 -98.82

    500 8.32 -304.92 553.69 -154.03 -299.08 477.87 -108.88 -362.68 474.33 -99.72

    501 8.33 -305.35 553.59 -156.18 -299.32 478.42 -110.18 -362.8 475.45 -101.21

    502 8.35 -305.76 554.06 -158.66 -299.8 479.08 -111.9 -363.24 475.89 -102.57

    503 8.37 -305.71 554.23 -160.79 -299.79 479.68 -113.17 -363.37 476.93 -103.97

    504 8.38 -305.92 554.47 -162.86 -299.77 480.37 -114.59 -363.86 477.93 -105.14

    505 8.4 -305.89 554.53 -164.23 -300.12 480.72 -115.54 -363.41 478.19 -106.33

    506 8.42 -305.71 554.44 -165.77 -299.97 480.96 -116.51 -363.25 478.77 -106.72

    507 8.43 -305.4 554.62 -167 -299.54 481.41 -117.54 -363.35 479.7 -107.3

    508 8.45 -305.65 555.01 -167.99 -299.82 481.95 -118.1 -363.25 480.09 -108.38

    509 8.47 -305.59 554.7 -169.39 -299.97 482.23 -118.98 -363.18 480.6 -109.1

    510 8.48 -305.84 554.73 -170.74 -299.77 482.41 -119.86 -363.28 480.8 -109.45

    511 8.5 -305.92 554.96 -172.29 -300.16 482.87 -120.71 -363.54 481.58 -110.49

    512 8.52 -306.01 554.89 -173.82 -300.43 483.58 -121.75 -363.69 481.68 -111.68

    513 8.53 -306.53 555.09 -175.81 -300.82 484.23 -123.07 -364.07 482.83 -112.96

    514 8.55 -306.87 554.96 -177.72 -301 484.46 -124.2 -364.43 483.45 -114.24

515 8.57 -306.64 554.93 -180.1 -301.01 485.05 -125.77 -364.36 483.86 -115.58

516 8.58 -306.94 554.91 -181.87 -301.21 485.6 -126.95 -364.26 484.44 -116.65

    517 8.6 -307.02 554.51 -184.04 -301.17 486.22 -128.65 -364.55 485.43 -118.22


Those angles are used when the quaternion matrix is formed. Once the quaternion matrix is formed, the quaternion information is sent to the visualization tool to see whether all those formulas work or not. The visualization tool will also be used during sensor calibration.

Sometimes translation information is appended to the quaternions: in Table 8, the first four entries are the quaternion rotation and the last three entries are the 3D translation.


Table 9: Euler angles of the test plate in an experiment (one row per frame, in radians):

    -1.43 1.53 -0.02

    -1.42 1.53 -0.02

    -1.41 1.53 -0.01

    -1.4 1.53 -0.02

    -1.39 1.54 -0.02

    -1.38 1.54 -0.02

    -1.37 1.54 -0.02

    -1.37 1.55 -0.02

    -1.36 1.56 -0.02

    -1.36 1.55 -0.02

    -1.35 1.55 -0.03

    -1.34 1.56 -0.02

    -1.34 1.56 -0.03

    -1.33 1.55 -0.03

    -1.32 1.56 -0.03

    -1.31 1.56 -0.03

    -1.3 1.56 -0.03

    -1.29 1.56 -0.03

    -1.28 1.57 -0.03

    -1.27 1.57 -0.04

    -1.25 1.57 -0.04

    -1.24 1.57 -0.04

    -1.23 1.58 -0.05

    -1.23 1.6 -0.04

    -1.21 1.6 -0.04

    -1.2 1.59 -0.04

-1.18 1.59 -0.04

Table 8: Quaternion conversion of the same experimental data (in each row the first four entries are the quaternion rotation and the last three the 3D translation):

    0.53 -0.45 -0.48 0.54 0 0 0

    0.53 -0.45 -0.47 0.54 0 0 0

    0.53 -0.44 -0.47 0.55 0 0 0

    0.54 -0.44 -0.47 0.55 0 0 0

    0.54 -0.44 -0.46 0.55 0 0 0

    0.54 -0.44 -0.46 0.55 0 0 0

    0.54 -0.43 -0.46 0.55 0 0 0

    0.55 -0.43 -0.46 0.55 0 0 0

    0.69 -0.08 -0.19 0.69 0 0 0

    0.55 -0.43 -0.45 0.55 0 0 0

    0.55 -0.43 -0.45 0.55 0 0 0

    0.56 -0.43 -0.45 0.55 0 0 0

    0.56 -0.43 -0.45 0.55 0 0 0

    0.56 -0.42 -0.45 0.56 0 0 0

    0.56 -0.42 -0.44 0.56 0 0 0

    0.57 -0.42 -0.44 0.56 0 0 0

    0.57 -0.41 -0.44 0.56 0 0 0

    0.57 -0.41 -0.44 0.56 0 0 0

    0.57 -0.41 -0.43 0.56 0 0 0

    0.58 -0.41 -0.43 0.56 0 0 0

    0.58 -0.4 -0.43 0.56 0 0 0

    0.58 -0.4 -0.42 0.57 0 0 0

    0.59 -0.39 -0.42 0.57 0 0 0

    0.59 -0.4 -0.41 0.56 0 0 0

    0.6 -0.4 -0.41 0.56 0 0 0

    0.6 -0.39 -0.41 0.57 0 0 0

    0.6 -0.38 -0.4 0.57 0 0 0


Quaternion Matrix

A quaternion is an element of a 4-dimensional vector space. It is defined as w + xi + yj + zk, where i, j and k are imaginary units. Alternatively, a quaternion is obtained when a scalar and a 3D vector are added.

Every unit quaternion represents a 3D rotation, and every 3D rotation has two unit quaternion representations. Unit quaternions are a way to compactly represent 3D rotations while avoiding singularities or discontinuities like gimbal lock.

Some examples of quaternions are:

Rotation                        Quaternion
Identity (no rotation)          1, -1
180 degrees about x-axis        i, -i
180 degrees about y-axis        j, -j
180 degrees about z-axis        k, -k
angle θ about a unit axis n     cos(θ/2) + (nx·i + ny·j + nz·k)·sin(θ/2)

Euler to Quaternion Conversion

There are various kinds of conversion formulas because of the wide variety of Euler angle representations. I used two of them: one converts the zxz convention to a quaternion during the MCS experiments, and the other converts the zyx convention to a quaternion during the sensor board calibration. I will talk about the second one later on.
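A minimal sketch of the first (zxz) conversion, composing the three elementary rotations as quaternion products; the (w, x, y, z) component order and the function names are assumptions rather than the report's code.

#include <cmath>
#include <cstdio>

// Quaternion stored as (w, x, y, z).
struct Quat { double w, x, y, z; };

// Hamilton product a*b (b applied first, then a, matching A = R3*R1*R3).
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

Quat aboutZ(double a) { return { std::cos(a/2), 0, 0, std::sin(a/2) }; }
Quat aboutX(double a) { return { std::cos(a/2), std::sin(a/2), 0, 0 }; }

// zxz Euler angles (alpha, beta, gamma) to a unit quaternion.
Quat zxzToQuat(double alpha, double beta, double gamma) {
    return mul(mul(aboutZ(alpha), aboutX(beta)), aboutZ(gamma));
}

int main() {
    Quat q = zxzToQuat(0.886668, 0.0230814, -0.0132174);  // mapping angles from the text
    std::printf("q = (%.3f, %.3f, %.3f, %.3f)\n", q.w, q.x, q.y, q.z);
}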


Some advantages and disadvantages of quaternions are as follows:

Advantages:
- Quaternions don't suffer from gimbal lock, unlike Euler angles.
- They can be represented as 4 numbers, in contrast to the 9 numbers of a rotation matrix.
- The conversion to and from the axis/angle representation is trivial.

Disadvantages:
- They are hard to visualize.
- One has to convert them to a human-readable representation (e.g. Euler angles).

In my application it is of course impossible to use only quaternions, since that would require solving many linear and nonlinear equations. The visualizer that I used needs a quaternion to rotate the cube representing the sensor board or the head of the robot, so I found the Euler angles and then converted them into quaternions.


I.g. Visualization

To see the results of the calculations in a more understandable way, I used a Java 3D visualizer written by my supervisor. Here are some snapshots that show the motion of the test plate. This visual data is taken from an experiment; some sample data from it was given in Tables 7-9.


In some experiments, the gimbal lock problem is observed, as illustrated in the images below. Here the movement is almost the same as in Figure 11, but the 2 middle frames indicate the gimbal lock.


II. SENSOR BOARD ANALYSIS

II.a. Hardware

During my internship I used a sensor board having three one-axis gyro sensors and one 3-axis accelerometer on board. I mainly dealt with the gyro sensors to find the angular position of the robot's head. Now I will give some information about those sensors and the other hardware equipment.


    Figure 12: Inside view of the CB2's head


i. Gyro Sensors

    Overview

- It is necessary to use the output of this sensor through an A/D converter, as shown later on.
- Positive voltage (+) and negative voltage (-) are obtained in the clockwise and counter-clockwise directions, respectively, with the static output as a reference.
- It is better to use an A/D converter of 12 bits or more. The resolution of the A/D converter will affect the measurement accuracy, and one has to decide on the sampling rate of the ADC according to the application.
- The sampling frequency used for measurement should be 50 samples/sec minimum. The sampling frequency will affect the measurement accuracy.

There are some problems related to gyro sensors, such as:

Bias: If the system operates unaided (without odometer/velocity, GPS or magnetometer aiding), the gyro bias indicates the increase of angular error over time (in deg/h or deg/s).

Scale Factor Error: Angular error which occurs during rotation.

Misalignment: A misalignment between the gyro axes causes a cross-coupling between the measurement axes. A misalignment of 0.1 mrad inside the system (e.g. residual calibration mismatch) leads to a roll error of 0.036 degree during one revolution around the yaw axis (if the system is unaided).

Random Walk: Given in deg/sqrt(hr), it characterizes the noise of the gyro used.


Drawing 1: Gyro sensor output characteristics
Figure 12: How to obtain meaningful data from the sensors


ii. Accelerometer Sensors

    Overview

Accelerometers are sensors or transducers that measure acceleration. They generally measure the acceleration forces applied to a body by being mounted directly onto a surface of the accelerated body. In this case, they are mounted on the circuit board, and the circuit board is located inside the robot's head, as in Figure 12.

Accelerometer sensors also have some problems, like the gyros:

Misalignment: A misalignment between the gyro axes (or accelerometer axes) causes a cross-coupling between the measurement axes.

Accelerometer Offset: An offset on the accelerometer leads to an error during alignment, i.e. the determination of the initial roll and pitch angles. An offset of 0.1 mg (g is the gravitational acceleration) leads to approx. 0.006 degree angular error (attitude error). The sensor offsets can be estimated during operation by the system's integrated Kalman filter*.

*"If you investigate in an inertial measurement system..." by Dr.-Ing. E. v. Hinueber, iMAR GmbH


Drawing 2: Accelerometer output characteristics
Figure 13: Sensor board; blue is the accelerometer and reds are the gyro sensors


iii. Analog to Digital Converter (ADC)

    Overview

The sensors' output signals must be converted into digital form, using a circuit called an ADC (Analog-to-Digital Converter), before they can be manipulated by digital equipment, a computer in this case. What the ADC circuit does is simply take samples of the analog signal from time to time. Each sample is converted into a number based on its voltage level. Our ADC has a 100 Hz sampling rate and 16-bit resolution, which enables us to acquire realistic digital data with a sufficient refresh rate.


Figure 14: ADC - 100 Hz sampling rate, 16-bit resolution
Drawing 3: 16-bit resolution can recognize even small changes in the sensor outputs; they are in the range of microvolts.


II.b. ANALYSIS

Now let's move on to the sensor board analysis. It mainly consists of:

i. Logging Data
ii. Calculation of Sensor Offsets
iii. Rough Calibration
iv. Calculation of Euler Angles
v. Euler Angles to Quaternion Conversion

i. Logging Data

The robot's hardware is mainly controlled by several servers, and to get the balancing module (sensor board) information it is necessary to make it run and then stream out the information. One can read this stream by connecting to the server and running the necessary programs. The output of the sensor module looks like the listing below (Table 10).

In this table the system clock is in milliseconds, Gy indicates a gyro output, Ac an accelerometer output, and _x/_y/_z indicates the axis of the test plate coordinate system. In this experiment the data is logged while the sensor module stands still. One can observe that although the plate stands still, the sensors still output some non-zero signal, which can be called the offset value. In addition, there is some fluctuation in the output signal, which is noise. I will calculate that offset value in the next section. Before that, the graphs below are taken from one of the experiments that I conducted.


DEBUG: port1 = ' 11000 '
DEBUG: host = ' 192.168.5.121 '
tring to connect to host1... DEBUG: o.k. connected to host1.

Table 10: Sensor Data

System Clock        Gy_y   Gy_x   Gy_z   Ac_x   Ac_y   Ac_z
1220896923.503553  30271  30377  30086  29742  30240  43593
1220896923.513553  30231  30390  30044  29644  30320  43616
1220896923.523567  30296  30408  30051  29752  30317  43669
1220896923.533567  30291  30345  30030  29831  30360  43706
1220896923.543555  30277  30406  30025  29832  30341  43682
1220896923.553567  30207  30418  30058  29617  30491  43705
1220896923.563569  30262  30374  30057  29782  30426  43584
1220896923.573568  30228  30431  30060  29692  30284  43712
1220896923.583567  30278  30359  30066  29871  30512  43622
1220896923.593567  30333  30357  30083  29848  30323  43639
1220896923.603567  30242  30413  30074  29736  30281  43650
1220896923.613567  30266  30367  30067  29903  30346  43640


In this experiment I tried to make a one-axis rotation only, but since the test bed was not so good, the motion became not planar but spatial. Besides, the sensor output is wavy owing to the vibration of the motion and to the sensor itself. To smooth the data I will try a Kalman filter later on. The following graphs are from the accelerometer sensor output.


As we can see, the x-axis gyro outputs higher values than the other gyros, since the movement is (approximately) a rotation about the x axis, which means motion in the yz plane. This can also be observed from the y- and z-direction accelerometer outputs.


ii. Calculation of Sensor Offsets

To have meaningful data, it is necessary to get rid of the offset value. The sensor offset behavior plays an important role in calibrating the sensors correctly. Now let's have a close look at the stand-still output of the sensors, taken from the first 400 frames of the movement above (corresponding to 4 seconds).


The first 3 graphs are from the gyro sensors and the last 3 graphs from the 3-axis accelerometer output. By taking the mean of the 400 frames, the offsets of the sensors are calculated as:

Gyro X    Gyro Y    Gyro Z
30375.8   30255.4   30048.9

Accelero X   Accelero Y   Accelero Z
29781.1      30326.3      43632.7


I also tried to calculate the offset value by finding the median of the first 400 stand-still frames, because sometimes, owing to magnetic effects, the offset value can become too large, creating a huge peak. Such a peak may change the average value by a considerable amount, but when the median of the data set is taken, it just shifts the selected element by one position, which corresponds to approximately nothing.

This approach is used extensively in image processing to get rid of salt-and-pepper noise during threshold calculations, because salt-and-pepper noise corresponds to sudden peaks in the pixel values. Of course, if the noise density is too high, one has to think of another solution for the threshold calculation.

To sort the 400 data points and find the median of the data set, a quicksort-type sorting technique is used, which has a complexity of O(n log n), while the traditional bubble sort has O(n²). The median-based offsets are:

Gyro X   Gyro Y   Gyro Z   Accelero X   Accelero Y   Accelero Z
30374    30257    30049    29786        30332        43636

If the mean-based offset and the median-based offset are compared, it can be seen that the results are pretty close to each other. This is because the data set is large. Getting that much data means waiting for 4 seconds when you start the robot, which can sometimes be a little annoying. In that case the median-based offset can be more powerful, since that much data will not be obtained; in addition, if the data set is small, the mean-based offset is more sensitive to peak noise.

For instance, the following data set is obtained from an experiment in which, to calculate the offset value, I waited for just 1/3 of a second. Here are the results:

Mean-based offset:

Gyro X   Gyro Y   Gyro Z   Accelero X   Accelero Y   Accelero Z
30374    30254    30044    29784        30327        43645.02

Median-based offset:

Gyro X   Gyro Y   Gyro Z   Accelero X   Accelero Y   Accelero Z
30372    30249    30048    29794        30335        43636

These results show that the median-based offset is more robust than the mean-based offset, because the change in the offset values is smaller than in the mean case. The accelerometer z-axis data in particular is a proof of this.

Although it looks like both of them can be used as an offset calculation method, since these values will be integrated over time to find the angular position, the errors will accumulate and reach really considerable values. Therefore robustness is a critical issue in calculating the sensor offset values.


iii. Rough Calibration

Due to the fact that the sensor output values are not in degree/sec but in something different, it is necessary to convert those values into degree/sec equivalents by finding (a) calibration coefficient(s).

To have a better idea about the calibration coefficient, first I simply integrated each offset-removed sensor data stream and equated it to the equivalent coming from the motion capture system. The figures show the offset-removed data obtained from the sensors.

If the only change occurs in one axis's angle, one can basically integrate that axis's gyro output and obtain the angular position corresponding to that axis, but this is a special case. If the object moves and changes more than one Euler angle, you cannot obtain the corresponding Euler-angle position by simply integrating the data coming from the gyro sensors, because the Euler angles affect each other. This will be explained in the section on intensive sensor calibration (Section III).

In order to calculate the calibration coefficient I assumed that this experiment causes only one Euler angle to change; in reality it does not, exactly.

Now let's see the Euler angles obtained from the motion capture system.


According to the calculations, the initial Euler angles (in the zxz convention) are:

(α, β, γ)(initial) = (-90.0925, 90.0436, 0.0317544) deg

and the final angles are:

(α, β, γ)(final) = (-88.2222, 145.183, 0.754217) deg

As we can see, the initial assumption is not so bad, since the other 2 Euler angles almost did not change, so we expect to see a calibration coefficient close to the real one.

If we call the calibration coefficient CC, then:

CC = ( β(initial) - β(final) ) / integral(gyro z) = (90.0436 - 145.183) / (-1644102) = 3.35377 x 10^-5
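A sketch of this rough-calibration step, assuming the integral is simply the sum of the offset-removed gyro counts over the experiment (the sample period is then absorbed into the coefficient); the function and parameter names are mine, not the report's code:

#include <vector>

// Rough calibration coefficient: the angle change seen by the MCS divided by
// the integral (here: plain sum) of the offset-removed gyro counts.
double roughCalibrationCoefficient(const std::vector<double>& gyroCounts,
                                   double offset,
                                   double mcsAngleChangeDeg) {
    double integral = 0.0;
    for (double c : gyroCounts) integral += c - offset;   // discrete integration of raw counts
    return mcsAngleChangeDeg / integral;                  // e.g. -55.14 / -1644102 ~= 3.35e-5
}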


iv. Calculation of Euler Angles

One of the most important parts of obtaining angular position information from gyro sensors is how to convert the gyro output to Euler angle velocities. The formula referred to as Formula 5 does this. Once the angular velocities are obtained, the rest is just integration, i.e. summation in the discrete time domain.

v. Euler Angles to Quaternion Conversion

Once we have the Euler angles, to visualize the motion on the computer it is necessary to convert them into a quaternion. We cannot apply the formula used in the MCS visualization, because the Euler angle conventions are different from each other: the gyro sensors give us rotation information about the x, y and z axes.

The formula to convert these Euler angles to a quaternion is Formula 6. With it, the sensor data can be visualized easily, as in the MCS case.
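A sketch of this conversion using the standard closed-form formula for a zyx (yaw-pitch-roll) sequence; the report calls the sensor-side convention xyz/zyx in different places, so the multiplication order below is an assumption, as are the names and the (w, x, y, z) component order.

#include <cmath>

struct Quat { double w, x, y, z; };

// Closed-form Euler-to-quaternion conversion for a zyx (yaw-pitch-roll) sequence.
Quat zyxToQuat(double roll, double pitch, double yaw) {
    double cr = std::cos(roll * 0.5),  sr = std::sin(roll * 0.5);
    double cp = std::cos(pitch * 0.5), sp = std::sin(pitch * 0.5);
    double cy = std::cos(yaw * 0.5),   sy = std::sin(yaw * 0.5);
    return { cr * cp * cy + sr * sp * sy,
             sr * cp * cy - cr * sp * sy,
             cr * sp * cy + sr * cp * sy,
             cr * cp * sy - sr * sp * cy };
}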

In the next section my effort will be on intensive sensor calibration. I will use a Kalman filter to smooth the sensor data, and I will interpolate the MCS data so that it can be compared with the sensor data.


    Formula 5: Euler Angle Velocity

    Formula 6: Euler Angles to Quaternion conversion


III. INTENSIVE SENSOR CALIBRATION

In this part my effort will be on finding the calibration coefficients more precisely.

First I would like to mention the main problems that I came across and the solutions I put forward.

Problems:
- Sampling rate difference between the MCS and the ADC
- Noisy sensor data
- Alignment problem between the test plate and the sensor board
- Alignment problem between the sensors on the sensor board
- Gyro drift

Solutions:
- Interpolation of the MCS data
- A Kalman filter as a smoothing filter
- Buying a new self-calibrated sensor module


III.a. Interpolation of MCS Data

As I mentioned before, the motion capture system has infrared cameras with a 60 Hz refresh rate, whereas the analog-to-digital converter used for digitizing the sensor output has a 100 Hz sampling rate.

This difference makes it difficult to compare the angles. To keep things simple, I just duplicate some frames of the MCS data so as to reach 100 frames per second. This approach is used in some video processing applications, for example when encoding or recording a video in a different format, and is sometimes called padding. In this application it is necessary to pad 2 frames out of every 3 MCS frames, which makes 40 padded frames per 100 frames: the idea is to copy the first and second frames and leave the third one as it is.

MCS (ms):          0      16.67  33.34  50.01
ADC (ms):          0      10     20     30     40     50
MCS_interpol (ms): 0      16.67  16.67  33.34  33.34  50.01

This table shows the relation between the data sets, assuming that data acquisition from the ADC and the MCS both starts at 0 ms and continues until 50 ms; interpolating the MCS data then gives the last row. The maximum timing error is 6.67 ms. For a motion with a maximum speed of 30 cm/s, the interpolation can therefore cause at most about 0.2 cm of error (30 cm/s × 6.67 ms ≈ 0.2 cm), which is tolerable.
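A minimal sketch of this padding scheme as a standalone function (the Balancer class in the appendix exposes a similar mcsDataInterpolator method; the function name and Frame type here are placeholders):

#include <vector>
#include <cstddef>

// Upsample a 60 Hz sequence to 100 Hz by duplicating the first and second
// frame of every group of three (3 input frames -> 5 output frames).
template <typename Frame>
std::vector<Frame> padMcsTo100Hz(const std::vector<Frame>& mcs60)
{
    std::vector<Frame> mcs100;
    mcs100.reserve(mcs60.size() * 5 / 3 + 2);

    for (std::size_t i = 0; i < mcs60.size(); ++i) {
        mcs100.push_back(mcs60[i]);
        if (i % 3 != 2)                    // 1st and 2nd frame of each triple are doubled
            mcs100.push_back(mcs60[i]);
    }
    return mcs100;
}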

Now let us look at the Euler angles obtained from the interpolated MCS data set and from the sensor data set.

I assume that the interpolated MCS data is essentially correct. Indeed, there is no error in the mathematical calculation of the Euler angles from the MCS data set, since those calculations have been checked back and forth; on the other hand there is about 0.2 mm of error in the MCS system itself and about 0.2 cm of error from the interpolation process.

Therefore this error can be neglected, and the interpolated MCS data can be taken as the true Euler angle values.


In this process I came across some problems, such as synchronization and noisy data in the MCS data set. Since I did not have much time to solve the synchronization problem, I could not finish the intensive calibration with much success. It can nevertheless easily be seen from the right-hand figure that the conversion between Euler conventions is really successful; moreover, the rough calibration done before gives almost the actual values of the calibration coefficients, because we obtain nearly the actual angles from the sensors.

On the other hand, the noisy parts of the MCS data set can easily be handled with the MCS software: after deleting the noisy parts, the software's interpolation features can merge the disconnected points with great precision.

How to change the Euler angle convention:

The ADC gives 100 samples per second while the MCS gives 60 samples per second, so in order to compare and contrast them I need data corresponding to the same instants. In addition, in the MCS analysis I used zxz-convention Euler angles, whereas in the sensor-board analysis I used zyx-convention Euler angles. Therefore I need to relate these two conventions, or convert one into the other. To do that, I converted the zxz-convention Euler angles to a quaternion and then converted the quaternion back to zyx-convention Euler angles.
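A sketch of this two-step conversion using the SelfBalancer methods declared in the appendix (the wrapper function and the zero translation values are placeholders):

#include "selfBalancer.h"

// Convert zxz-convention Euler angles (from the MCS analysis) to the zyx convention
// used in the sensor-board analysis, going through a quaternion as described above.
void zxzToZyx(Balancer& balancer, double zxzAngles[3], double zyxAngles[3])
{
    float quat[7];                                      // q1..q4 plus the x/y/z translation slots

    // zxz Euler angles -> quaternion (translation set to zero, it is only used for drawing)
    balancer.euler_zxzToQuaternion(zxzAngles, 0.0f, 0.0f, 0.0f, quat);

    // quaternion -> zyx Euler angles
    balancer.quaternionToEuler_zyx(quat, zyxAngles);
}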


Figure 15: Unsynchronized datasets corresponding to sensors and MCS. Figure 16: Manually synchronized datasets.


III.b. Smoothing Sensor Data

The output of the sensor board is quite noisy because of environmental effects such as electromagnetic fields, temperature, etc.

I implemented a Kalman filter and tested it on the sensor output and also on a mobile robot's position data. The Kalman filter both predicts the next state of the data and smooths the output with some tolerable error.

The problem I observed is the difficulty of modeling the sensors' measurement noise, process noise, and so on. The Kalman filter itself assumes that the system is linear and that the noise is Gaussian distributed; if the model is not realistic enough, we cannot expect much efficiency from these sensors.
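My Kalman filter class itself is not reproduced at this point; the following minimal single-channel sketch (illustrative names and tuning values, not the exact class interface) shows the basic idea, with the error covariance factor acting as the knob between responsiveness and smoothness:

#include <vector>
#include <cstddef>

// One-dimensional Kalman filter used purely as a smoother:
// the state is the "true" sensor value, assumed roughly constant between samples.
class ScalarKalman {
public:
    ScalarKalman(double processVar, double measurementVar)
        : q_(processVar), r_(measurementVar), x_(0.0), p_(0.0), initialized_(false) {}

    double update(double z)                  // z: raw (noisy) sensor sample
    {
        if (!initialized_) { x_ = z; p_ = r_; initialized_ = true; return x_; }
        p_ += q_;                            // predict: covariance grows by the process noise
        double k = p_ / (p_ + r_);           // Kalman gain
        x_ += k * (z - x_);                  // correct towards the measurement
        p_ *= (1.0 - k);                     // updated error covariance
        return x_;
    }

private:
    double q_, r_, x_, p_;
    bool initialized_;
};

// Smooth one logged channel (e.g. the y-axis gyro) with a given error covariance factor.
std::vector<double> smoothChannel(const std::vector<double>& raw, double errorCov)
{
    ScalarKalman kf(errorCov, 1.0);          // measurement variance normalized to 1
    std::vector<double> out;
    out.reserve(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i)
        out.push_back(kf.update(raw[i]));
    return out;
}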

Here are some results of next-state prediction using the Kalman filter and a third-order filter of my own.

Blue lines indicate the real path while the red ones show the predicted path.

    Drawing 4: Next state estimation using 3rd Order Linear Filter


    Drawing 5: Next state estimation using Kalman Filter

As we can see, the Kalman filter sometimes leads and sometimes lags the real path, and the sum of those errors is almost zero.

I used my Kalman filter class on the sensor output and observed that it works very well for smoothing noisy data. One can decide how much to smooth the data simply by changing the error covariance factor.

    Results are as follows;


i. Angular Positions:

    Drawing 6: Angular positions; kalmaned and normal data respectively

It is difficult to see the difference between the kalmaned and the normal output just by looking at the Euler angles. Indeed, the error covariance factor was close to 1, so we cannot expect much difference at the output.


ii. Sensor Data


Drawing 7: Original data (for y-axis gyro)


Drawing 8: Kalmaned data (for y-axis gyro) when error_cov = 0.001


Drawing 9: Kalmaned data (for y-axis gyro) when error_cov = 0.01


Drawing 10: Kalmaned data (for y-axis gyro) when error_cov = 0.1


Drawing 11: Kalmaned data (for y-axis gyro) when error_cov = 1


As we can see from the graphs, for this application it is more appropriate to choose an error_cov value between 0.001 and 0.01, so that the system reaches the correct value as fast as possible while eliminating as much of the noise as possible. The Kalman filter can even help against the gimbal lock problem, since it depends on past inputs and avoids sudden changes at the output.

How to find the most appropriate value for error_cov is up to the designer, and there are many approaches for this. One approach, which I also prefer, is to make the computer learn the best value or the best range of values. To do that we can define some criteria such as:

- settling time
- precision
- error

Of course, less error, a shorter settling time and more precision are always preferred for these kinds of systems, but there is a trade-off here, and the designer should decide according to the application; a simple search over candidate values is sketched below.
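One way to let the computer pick the value is to score a set of candidate error_cov values against a reference signal. The sketch below uses only the RMS error criterion and the single-channel smoother sketched in III.b; all names and candidate values are illustrative:

#include <vector>
#include <cmath>
#include <limits>
#include <cstddef>

// Pick the error covariance factor that minimizes the RMS error between the
// smoothed sensor data and a reference (e.g. interpolated MCS angles).
double pickErrorCov(const std::vector<double>& raw,
                    const std::vector<double>& reference)
{
    const double candidates[] = { 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0 };
    double best = candidates[0];
    double bestRms = std::numeric_limits<double>::max();

    for (std::size_t c = 0; c < sizeof(candidates) / sizeof(candidates[0]); ++c) {
        // smoothChannel is the scalar Kalman smoother sketched earlier
        std::vector<double> smoothed = smoothChannel(raw, candidates[c]);

        double sum = 0.0;
        std::size_t n = smoothed.size() < reference.size() ? smoothed.size() : reference.size();
        for (std::size_t i = 0; i < n; ++i)
            sum += (smoothed[i] - reference[i]) * (smoothed[i] - reference[i]);

        double rms = std::sqrt(sum / n);
        if (rms < bestRms) { bestRms = rms; best = candidates[c]; }
    }
    return best;
}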

I did not have much time to spend on this topic, so I could not completely finish the intensive calibration part. The other problems that I came across also played a major role, which is why I list them here again:

- Aligning problem in test plate and sensor board
- Aligning problem in sensors on the sensor-board
- Early sensor saturation
- Gyro drift

As I have mentioned before, I did not have enough time and tools to solve the first two problems during my internship, and the last one was perhaps the biggest problem I have ever had.

The aligning problems are simply due to the custom-designed sensor modules and the hand-made test bed, which in this case is my test plate. The sensors were saturating even though the changes were not so sudden. During my calibration process I always assumed that those errors were negligible, but in fact they are not. I will talk about the solutions in the conclusion, so let me move on to the main problem in gyro calibration, which is gyro drift.

    GYRO DRIFT

Gyro drift is a function of frictional losses in the gyro mechanism, time, temperature, ambient vibration levels and probably hundreds of other factors. It is necessary to calibrate the unit for bias and scale, and the linearity and temperature dependency (of both bias and scale) should also be modeled in order to estimate the actual angular value correctly. Gyro drift is divided into two parts as follows:

- Levelling gyro drift: the random precession of the gyros tends to turn the platform away from the horizontal, causing an oscillation as the accelerometers try to correct it. This oscillation, depending on its period, will cause velocity and subsequent distance errors.

- Azimuth gyro drift: small position errors may occur due to azimuth gyro drift. However, gyro drift about the azimuth axis produces in-flight azimuth errors that are small compared to the initial misalignment errors in azimuth [6].


In my application the major effect causing gyro drift was temperature. I did not have much time to model the temperature dependency, but I did record three different offset values within one hour: while the system was running, someone switched on the air conditioner in the laboratory, and over that hour I logged three different offset sets. Since this was a coincidence rather than a controlled experiment, I do not have the exact recording times, but the three readings below are in chronological order and show that the offset value decreases with decreasing temperature.

Gyro Y offset    Gyro X offset    Gyro Z offset
30079.7          30203.8          29876.1
30071.9          30194.3          29865.7
30065.8          30188            29858.4

Table 10: Gyro offsets changes by temperature (YXZ sequence is because of the sequence of data stream)

People generally solve the gyro drift problem by developing a model that represents the environmental factors and the sensors' behavior in response to them. The Kalman filter is also used extensively to compensate for the errors. Modeling the dependencies is usually very difficult, which is why learning techniques such as neural networks are sometimes used, as in [7].
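As a very small example of such a model (only a sketch of the simplest possible case; it assumes paired temperature and offset recordings, for instance from an embedded temperature sensor, which my coincidental measurements above do not provide), the gyro offset can be fitted linearly against temperature and the predicted bias subtracted at run time:

#include <vector>
#include <cstddef>

// Least-squares fit offset = a + b * temperature from paired recordings, so that the
// bias subtracted from the gyro output can follow temperature changes at run time.
void fitOffsetVsTemperature(const std::vector<double>& temperature,
                            const std::vector<double>& offset,
                            double& a, double& b)
{
    const std::size_t n = temperature.size();
    double sumT = 0.0, sumO = 0.0, sumTT = 0.0, sumTO = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sumT  += temperature[i];
        sumO  += offset[i];
        sumTT += temperature[i] * temperature[i];
        sumTO += temperature[i] * offset[i];
    }
    b = (n * sumTO - sumT * sumO) / (n * sumTT - sumT * sumT);
    a = (sumO - b * sumT) / n;
}

// Compensated sample at run time: raw - (a + b * currentTemperature)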

To recapitulate: the sensor calibration task is not an easy one. There are very many academic works on gyro drift, and it is almost impossible to make a perfect test plate and a perfect alignment between the sensors on the sensor board.

Because of those problems I decided to look for a ready-to-use gyro and accelerometer sensor module. I thought it would be much more exciting to work directly on the robot and develop behaviors for it. I then came across a very nice sensor module whose main specifications are as follows:

- Tri-axis gyroscope with digital range scaling: 75°/s, 150°/s, 300°/s settings, 14-bit resolution
- Tri-axis accelerometer: 10 g measurement range, 14-bit resolution
- 350 Hz bandwidth
- Factory-calibrated sensitivity, bias, and alignment
- Digitally controlled bias calibration
- Digitally controlled sample rate
- Digitally controlled filtering
- Embedded temperature sensor

The sensor is manufactured by Analog Devices, model ADIS16350 [8]. The specifications are all exactly what I wanted. Unfortunately the module arrived at the lab after I had returned to Turkey, so I did not have a chance to play with it.


E. CONCLUSION

During my internship I had a chance to work in one of the most popular areas of robotics, called probabilistic robotics. I grasped the ideas behind prediction techniques such as Bayesian estimation, Kalman filtering and condensation filters (particle filtering). I also used some popular C++ libraries such as OpenCV and OpenGL. I have decided to become a real Linux user, since I gained considerable experience with the Ubuntu and Kubuntu Linux distributions. I learned some basic networking concepts such as TCP/IP, server-client relations, file sharing and subversion usage. I also used a wiki to write short reports almost every day and to share ideas with other lab members.

Robotics laboratories in Japan such as the Intelligent Robotics Laboratory or the Asada Laboratory have great experience and background in robotics as a whole. They deal with various kinds of problems at the same time, and they approach robotics not only as a branch of engineering but also as a branch of science. For them, robotics is a cross-disciplinary framework that gathers scientists and engineers from many kinds of departments all over the world. The members of the laboratories were mainly computer scientists, electronics engineers, mechanical engineers, cognitive scientists, neuroscientists, psychologists, etc.

My work in the laboratory was an intermediate-level step towards enabling CB2 to balance itself. I used gyro modules intensively, gained a great deal of experience with these kinds of sensors, and decided on buying a new, robust and high-performance sensor for this project, which has a critical role for the other stages of the project.

If a student is interested in robotics and has some background with robots or similarly complicated systems that combine hardware and software intensively, I would recommend applying for an internship in that sort of laboratory in Japan.

My first internship was at the Tubitak* Space Technologies Research Institute, in the YAKUT** group. There I worked on image processing to recognize and classify vehicles on a highway. The two internships have common parts, since both of them were essentially software internships, and I used my knowledge of image processing in some parts of this internship as well.

This internship was much better for me, since it is directly related to my area of interest, which is humanoid robotics. On the other hand, I would again prefer doing this internship as the second one, because the first one gave me broad experience with various kinds of image processing algorithms and improved my software skills, so that I could write C++ classes and recursive functions for my second internship even though I had not taken any object-oriented programming or algorithms courses before, other than an image processing course from the computer engineering department. I used the Linux operating system in both of my internships; it was the Suse distribution in the first one, so it did not take much time to get accustomed to other Linux distributions during my second internship.

* Tubitak is the Turkish abbreviation of The Scientific and Technological Research Council of Turkey.
** Yakut is the Turkish abbreviation of the Computer Vision, Speech Processing, Pattern Recognition and


    Remote Sensing Group

F. REFERENCES

0. http://www.atacolorado.com/eulersequences.doc
1. http://www.osaka-u.ac.jp/eng/about/outline.html and http://www.osaka-u.ac.jp/eng/about/history.html
2. http://www.ed.ams.eng.osaka-u.ac.jp/aboutus/
3. http://www.jeap.org/web/top.html
4. http://en.wikipedia.org/wiki/Euler_angles
5. http://mathworld.wolfram.com/EulerAngles.html
6. http://www.flightsimaviation.com/faq_9_q1_What_is_gyro_drift.html
7. Chen Xiyuan (2003), "Modeling Random Gyro Drift by Time Series Neural Networks and by Traditional Method", IEEE Int. Conf. Neural Networks & Signal Processing, Nanjing, China, December 14-17.
8. http://www.analog.com/en/other/multi-chip/adis16355/products/data-sheets/resources.html?display=popup


G. APPENDICES

Source codes, datasets and videos can be found on the compact disc provided with this report.

I. SelfBalancer C++ Class Implementation

    a. Header File:

    #ifndef SELFBALANCER_H#define SELFBALANCER_H

    #include #include #include #include //Needed for "exit" function#include #include #include #include

    #include #include #include

    #define safetyWidth 10#define PI 3.1415enum textType{SB_SENSOR,SB_MCS};//represents the logged data type

    enum offsetCalType{SB_MEAN,SB_MEDIAN};

    enum sensorInputype{SB_LOG,SB_STREAM};

    using namespace std;

    class Balancer{

    private:int bufferSize;int dataStart;int lastDataLine;int dataLineWidth;int lineCnt;

    ofstream outFile;

    ofstream sensorOffFile;

    double* m1Data;double* m3Data;

    public:

    CvMat* m1 ;//represents the marker one as a vectorCvMat* m3 ;//represents the marker three as a vectorCvMat* mCross;//this will represent the cross product of m1 and m3


    double* sensorOffset;//6+1, 1-6 for 6 sensor and the 0 is not used

    double* mappingAngles;

    CvMat* rotMat;

    int getIt;int loopCnt;

    double* offgy;double* offgx;double* offgz;double* offax;double* offay;double* offaz;

    Balancer(void);

    //reads the log file to the bufferchar* readTextFile(string inFileName,char* buffer,int txtType) ;

    //parser functions parse logged data or stream datavector parseTextData(vector parsedData,vectordata,char* buffer,int

    txtType);vector parseStreamData(vector parsedData,vectordata);

    //finds how much m1 and m3 vectors deviated from the desired positions which are x and y axisdouble* findEulerAngles(CvMat* m1,CvMat* m3);

    //calculates euler angles using the sensor output(stream data) and the calibration coefficientsdouble* eulerAngularPosition(double* eulerAngles,vector sensorData,double kx,double

    ky,double kz);

    //calculates the rotation matrix corresponding to the input 'eulerAngles'//'rotDirection' represents whether it is from stationary frame to rotated frame or vice versa//detailed information can be found at "http://www.atacolorado.com/eulersequences.doc"CvMat* rotationMatrix(double* eulerAngles,int rotDirection);

    //converts euler angles represented in zxz convention to quaternion, commonly used for motion capture//system visualization because findEulerAngles function finds the angles in zxz convention whereas//eulerAngularPosition function is mainly used for sensor visualization which calculates in zyx conventionfloat* euler_zxzToQuaternion(double* eulerAngles,float x_trans,float y_trans,float z_trans,float* quat);

    //mainly used to convert zyx eulers to quaternion to visualize sensor logged data when offlinefloat* euler_zyxToQuaternion(double* eulerAngles,float x_trans,float y_trans,float z_trans,float* quat);

    //quaternion to euler converters can be used to check the validity of euler convention transformationsdouble* quaternionToEuler_zxz(float quat[7],double* eulerAngles);double* quaternionToEuler_zyx(float quat[7],double* eulerAngles);

    //Inerpolator can be used when it is necessary to compare sensor data and mcs data//it simply duplicates 2 of 3 frames to have 100fps dataset from 60fps dataset//and prints data to the screen. if you want, you can log data by using ">" in the terminalvoid mcsDataInterpolator(vector mcsData);

    vector translateMarkersToOrigin(vector mcsData);void createM1M3Vectors(vector mcsData);vector rotateM1M3Vectors(vectormcsData);int loopCounter();

    //maps the mcs coordinate system and sensor board coordinate system to each othervector coordSystemMapper(vector mcsData);

    //flushes the contents of the quaternion as an output, this function is commonly used to visualize


    //logged data or streamed data on the computervoid visualizer(float* quat);

    void recordSensorOffsetMedian(vector sensorData,string inFileName,int nFrame,int inType);void recordSensorOffsetMean(vector sensorData,string inFileName,int nFrame,int inType);

    //quick sort function it simply sorts the 'arr' array by a complexity of nlogndouble* quickSort(double* arr,int lMost, int rMost);double* scanFromR(double* arr,int* lIt,int* rIt,double pick,int* flag);double* scanFromL(double* arr,int* lIt,int* rIt,double pick,int* flag);

    //calculates the sensor offset values by taking the 'mean' or 'median' of first 'nFrame'//number of sensor data and subtracts them from the sensor values. In additon stores the//offset values to "offset_mean.txt" or "offset_median.txt" file. At each operation it//simply logs the offset values by tagging them with the name of the input log file.vector offsetOutSensorData(vector sensorData,string inFileName,int nFrame,int

    type,int inType);

    //no need to use those functions if you work on terminal because one can easily show//the output or store them in a txt file by '>'.void screenData(vector data,int txtType);void logData(vector data);void closeOutFile();void openOutFile(string outFileName);

    };

    #endif SELFBALANCER_H

b. Implementation File

    #include"selfBalancer.h"

    Balancer::Balancer(void){

    bufferSize=0;dataStart=0;lastDataLine=0;dataLineWidth=0;lineCnt=0;getIt=1;loopCnt=0;

    m1 = cvCreateMat(3,1,CV_64FC1);m3 = cvCreateMat(3,1,CV_64FC1);mCross= cvCreateMat(3,1,CV_64FC1);

    m1Data=new double[3];m1Data= m1->data.db;m3Data=new double[3];m3Data= m3->data.db;

    mappingAngles = new double[3];for(int i=0;i


    offgz=0;offax=0;offay=0;offaz=0;

    rotMat= cvCreateMat(3,3,CV_64FC1);

    }

    char* Balancer::readTextFile(string inFileName,char* buffer,int txtType){ifstream inFile ;inFile.open( inFileName.c_str() , ios::binary ) ;

    if( !inFile ){cout


    cout


    delete []temp;}

    parsedData.clear() ;numSpace=0 ;

    lineCnt++ ;return data;

    }

    vector Balancer::parseStreamData(vector parsedData,vectordata){int numSpace = 0 ;char buffer[80]={0};

    char* temp=0;int numData=7;//if stream data is sensorydatacin.getline(buffer,80);if(buffer[10]=='.' && buffer[17]==' '){for(int i=0;i


    double* m1Data=new double[3];m1Data= m1->data.db;double* m3Data=new double[3];m3Data= m3->data.db;

    double* mCrossData=new double[3];mCrossData=mCross->data.db;double* m3virData=new double[3];m3virData=m3vir->data.db;double* xData=new double[3];xData = x->data.db;double* yData=new double[3];yData = y->data.db;double* zData=new double[3];zData = z->data.db;

    double* NData=new double[3];NData = N->data.db;

    double* tmpData=new double[3];tmpData = tmp->data.db;

    xData[0]=1;xData[1]=0;xData[2]=0;

    yData[0]=0;yData[1]=1;yData[2]=0;

    zData[0]=0;zData[1]=0;zData[2]=1;

    double m1Length=sqrt(pow(m1Data[0],2)+pow(m1Data[1],2)+pow(m1Data[2],2));for(int i=0;i


    else{double costheta=0.0001;psi+=(-ky*sensorData[1]*sin(psi)-kz*sensorData[3]*cos(psi))/cos(theta);

    }

    theta+=-ky*sensorData[1]*cos(psi)+kz*sensorData[3]*sin(psi);if(cos(theta))psi+=-kx*sensorData[2]-ky*sensorData[1]*sin(psi)*tan(theta)-kz*sensorData[3]*cos(psi)*tan(theta);

    else{double costheta=0.0001;psi+=-kx*sensorData[2]-ky*sensorData[1]*sin(psi)*sin(theta)/costheta-kz*sensorData[3]*cos(psi)*sin(theta)/costheta;

    }eulerAngles[0]=phi;eulerAngles[1]=theta;eulerAngles[2]=psi;

    return eulerAngles;}

    double* Balancer::quaternionToEuler_zyx(float quat[7],double* eulerAngles){float q1=quat[0];float q2=quat[1];float q3=quat[2];float q4=quat[3];

    eulerAngles[0]=atan(2*(q1*q2+q3*q4)/(q1*q1-q2*q2-q3*q3+q4*q4)); //phieulerAngles[1]=asin(-2*(q1*q3-q2*q4)); //thetaeulerAngles[2]=atan(2*(q2*q3+q1*q4)/(-q1*q1-q2*q2+q3*q3+q4*q4));//psi

    return eulerAngles;}

float* Balancer::euler_zxzToQuaternion(double* eulerAngles, float x_trans, float y_trans, float z_trans, float* quat)
{
    double alpha = eulerAngles[0];
    double beta  = eulerAngles[1];
    double gama  = eulerAngles[2];

    quat[0] =  cos(alpha/2)*sin(beta/2)*cos(gama/2) + sin(alpha/2)*sin(beta/2)*sin(gama/2);
    quat[1] = -cos(alpha/2)*sin(beta/2)*sin(gama/2) + sin(alpha/2)*sin(beta/2)*cos(gama/2);
    quat[2] =  sin(alpha/2)*cos(beta/2)*cos(gama/2) + cos(alpha/2)*cos(beta/2)*sin(gama/2);
    quat[3] = -sin(alpha/2)*cos(beta/2)*sin(gama/2) + cos(alpha/2)*cos(beta/2)*cos(gama/2);
    quat[4] = x_trans;
    quat[5] = y_trans;
    quat[6] = z_trans;
    return quat;
}

float* Balancer::euler_zyxToQuaternion(double* eulerAngles, float x_trans, float y_trans, float z_trans, float* quat)
{
    double alpha = eulerAngles[0]; // phi
    double beta  = eulerAngles[1]; // theta
    double gama  = eulerAngles[2]; // psi

    quat[0] =  sin(alpha/2)*sin(beta/2)*cos(gama/2) + cos(alpha/2)*cos(beta/2)*sin(gama/2);
    quat[1] =  cos(alpha/2)*sin(beta/2)*cos(gama/2) + sin(alpha/2)*cos(beta/2)*sin(gama/2);
    quat[2] = -cos(alpha/2)*sin(beta/2)*sin(gama/2) + sin(alpha/2)*cos(beta/2)*cos(gama/2);
    quat[3] =  sin(alpha/2)*sin(beta/2)*sin(gama/2) + cos(alpha/2)*cos(beta/2)*cos(gama/2);
    quat[4] = x_trans;
    quat[5] = y_trans;
    quat[6] = z_trans;

    return quat;


    }

    CvMat* Balancer::rotationMatrix(double* eulerAngles,int rotDirection){// phi = alpha = a eulerAngles[0]

    // theta = beta = b eulerAngles[1]// psi = gama = g eulerAngles[2]

    CvMat* phiRotMat = cvCreateMat(3,3,CV_64FC1);CvMat* thetaRotMat = cvCreateMat(3,3,CV_64FC1);CvMat* psiRotMat = cvCreateMat(3,3,CV_64FC1);CvMat* rotMat = cvCreateMat(3,3,CV_64FC1);CvMat* tempMat = cvCreateMat(3,3,CV_64FC1);

    cvSetIdentity( phiRotMat , cvScalar(1) ) ;cvSetIdentity(thetaRotMat , cvScalar(1) ) ;cvSetIdentity( psiRotMat , cvScalar(1) ) ;

    double* phiRotMatData=new double[9];phiRotMatData= phiRotMat->data.db;

    double* thetaRotMatData=new double[9];thetaRotMatData = thetaRotMat->data.db;

    double* psiRotMatData=new double[9];psiRotMatData = psiRotMat->data.db;

    double* rotMatData=new double[9];rotMatData = rotMat->data.db;

    double* tempMatData=new double[9];tempMatData = tempMat->data.db;

    cvmSet(phiRotMat,0,0, cos(eulerAngles[0])); // Set M(i,j):ith row jth columncvmSet(phiRotMat,0,1, sin(eulerAngles[0]));cvmSet(phiRotMat,1,0,-sin(eulerAngles[0]));cvmSet(phiRotMat,1,1, cos(eulerAngles[0]));

    cvmSet(thetaRotMat,1,1, cos(eulerAngles[1])); // Set M(i,j):ith row jth columncvmSet(thetaRotMat,1,2, sin(eulerAngles[1]));cvmSet(thetaRotMat,2,1,-sin(eulerAngles[1]));cvmSet(thetaRotMat,2,2, cos(eulerAngles[1]));

    cvmSet(psiRotMat,0,0, cos(eulerAngles[2])); // Set M(i,j):ith row jth columncvmSet(psiRotMat,0,1, sin(eulerAngles[2]));cvmSet(psiRotMat,1,0,-sin(eulerAngles[2]));cvmSet(psiRotMat,1,1, cos(eulerAngles[2]));

    cvMatMul(psiRotMat,thetaRotMat,tempMat);cvMatMul(tempMat,phiRotMat,rotMat);

    if(rotDirection)//stationary frame to rotated framereturn rotMat;

    else//rotated frame to stationary frame,inverse rotation matrix{cvTranspose(rotMat,rotMat);return rotMat;

    }}

    void Balancer::openOutFile(string outFileName){


    outFile.open(outFileName.c_str(),ofstream::out);if(!outFile){cout


    m3Data[0]=mcsData[8];m3Data[1]=mcsData[9];m3Data[2]=mcsData[10];

    //Here by renaming the coordinates axis possibility of being caught by gimbal lock is decreasedm1Data[0]=-mcsData[4];

    m1Data[1]=-mcsData[2];m1Data[2]= mcsData[3];

    m3Data[0]=-mcsData[10];m3Data[1]=-mcsData[8];m3Data[2]= mcsData[9];

    }vector Balancer::rotateM1M3Vectors(vectormcsData){

    cvMatMul(rotMat,m1,m1);cvMatMul(rotMat,m3,m3);

    mcsData[2]=m1Data[0];mcsData[3]=m1Data[1];mcsData[4]=m1Data[2];

    mcsData[8]=m3Data[0];mcsData[9]=m3Data[1];mcsData[10]=m3Data[2];

    return mcsData;}void Balancer::visualizer(float* quat){

    for(int i=0;i


    {if(!type)recordSensorOffsetMean(sensorData,inFileName,nFrame,inType);

    elserecordSensorOffsetMedian(sensorData,inFileName,nFrame,inType);

    if(loopCnt>=nFrame){for(int i=1;i


    }}

    arr[*rIt]=pick;//*rIt=*lIt so it doesn't matter which one you use here*flag=1;return arr;

    }double* Balancer::quickSort(double* arr,int lMst, int rMst){int lMost=lMst;int rMost=rMst;

    int* lIt=new int;*lIt=lMost;int* rIt=new int;*rIt=rMost;

    double pick=arr[lMost];

    int* flag=new int;*flag=0;

    while(!(*flag)){arr=scanFromR(arr,lIt,rIt,pick,flag);if(!(*flag))

    arr=scanFromL(arr,lIt,rIt,pick,flag);else

    {if( ((*lIt)-1) > lMost)

    arr=quickSort(arr,lMost,(*lIt)-1);if( rMost > ((*lIt)+1) )

    arr=quickSort(arr,(*lIt)+1,rMost);return arr;

    }}

    if( ((*lIt)-1) > lMost)arr=quickSort(arr,lMost,(*lIt)-1);

    if( rMost > ((*lIt)+1) )

    arr=quickSort(arr,(*lIt)+1,rMost);return arr;

    }

    void Balancer::recordSensorOffsetMedian(vector sensorData,string inFileName,int nFrame,int inType){if(!loopCnt){

    offgy = new double[nFrame];offgx = new double[nFrame];offgz = new double[nFrame];

    offax = new double[nFrame];offay = new double[nFrame];offaz = new double[nFrame];

    }if(loopCnt


    offgy=quickSort(offgy,0,nFrame-1);offgx=quickSort(offgx,0,nFrame-1);offgz=quickSort(offgz,0,nFrame-1);offax=quickSort(offax,0,nFrame-1);offay=quickSort(offay,0,nFrame-1);

    offaz=quickSort(offaz,0,nFrame-1);

    if(!nFrame%2)//meaning even number of frame{

    sensorOffset[1]=(offgy[nFrame/2-1]+offgy[nFrame/2])/2;sensorOffset[2]=(offgx[nFrame/2-1]+offgx[nFrame/2])/2;sensorOffset[3]=(offgz[nFrame/2-1]+offgz[nFrame/2])/2;sensorOffset[4]=(offax[nFrame/2-1]+offax[nFrame/2])/2;sensorOffset[5]=(offay[nFrame/2-1]+offay[nFrame/2])/2;sensorOffset[6]=(offaz[nFrame/2-1]+offaz[nFrame/2])/2;

    }else

    {sensorOffset[1]=offgy[(nFrame-1)/2];sensorOffset[2]=offgx[(nFrame-1)/2];sensorOffset[3]=offgz[(nFrame-1)/2];sensorOffset[4]=offax[(nFrame-1)/2];sensorOffset[5]=offay[(nFrame-1)/2];sensorOffset[6]=offaz[(nFrame-1)/2];

    }string outFileName;outFileName="offset_median.txt";sensorOffFile.open(outFileName.c_str(),ofstream::app);if(!inType)

    sensorOffFile


    c. Sample Codes

i. How To Visualize Sensor Logged-Data in Java-3D

    //In this program sensor data is read from a text file and eulerAngles are calculated.//Then euler angles are converted to the quaternion and quaternion information is sent to the//java program by pipeing.//To do that// 1.euler velocities are found and then they are integrated// 2.Euler angles are converted to the quaternion using a conversion formula.

    #include"selfBalancer.h"

    using namespace std;

    int main(int argc,char* argv[]){string inFileNameSensor;if(argc==1)inFileNameSensor="/home/robocupper/Desktop/internship/work/selfBalancer/sensor_dataset/move1.txt";

    if(argc>1)inFileNameSensor=argv[1];

    char* sensorBuffer;vectorsensorParsedData;vector sensorData;

    for(int i=0;i=nFrame)

    {//calculates euler angles using euler angle velocity formulas
