
  • Area Coverage Optimization

    by

    Dakota Adra and Eric Jones

    Advisor: Dr. Suruz Miah

    Electrical and Computer Engineering Department

    Caterpillar College of Engineering and Technology

    Bradley University

© Dakota Adra and Eric Jones, Peoria, Illinois, 2019

    http://personalpages.bradley.edu/~smiah/

  • Abstract

The project implements an area coverage optimization algorithm using a cost-effective and open-source platform of multiple autonomous robots. The cost-effective and open-source platform developed in this work is the Multi-Agent Framework using Open-Source Software (MAFOSS). A team of mobile robots is deployed in a spatial area in a manner so that the asymptotic configuration of the robots yields the area's maximum coverage. A scalar field, called herein the density, is used to define the coverage metric of the area where the robots are deployed. A large body of research has been conducted in the literature to date for solving such area coverage optimization problems. However, most of the algorithms are either tested only in simulation or implemented using a particular set of robotic platforms, which are commercially available for limited operations. In this project, we generalize the implementation platform using the proposed MAFOSS system, where a team of mobile robots is employed not only for developing an area coverage algorithm but also for developing a set of motion control algorithms using multiple autonomous homogeneous/heterogeneous robots operating in a two-dimensional area. The proposed implementation platform is tested in a commercially available simulation environment, the Virtual Robot Experimentation Platform (V-REP), in cooperation with the robot operating system (ROS). In addition, a set of experiments using multiple open-source robotic platforms, eduMODs, in cooperation with ROS has been conducted to demonstrate the effectiveness of the proposed platform, MAFOSS.


  • Acknowledgements

We would like to thank Dr. Suruz Miah for his guidance and mentorship throughout this project, the Department of Electrical and Computer Engineering at Bradley University for the laboratory facilities and knowledge they have provided, the Flow Control and Coordinated Robotics Laboratory at the University of California San Diego for their original design of the eduMIP robot, and our families for their support throughout our college careers.


  • Table of Contents

    List of Tables vii

    List of Figures viii

    Nomenclature 1

    1 Introduction 2

    1.1 Background Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

    2 Modeling Area Coverage Optimization 5

    2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

    2.2 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    3 MAFOSS 10

    3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    3.2 Overall MAFOSS Architecture . . . . . . . . . . . . . . . . . . . . . . . . . 11

    3.3 Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

    3.4 MAFOSS Networking Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 14

    3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

    4 VREP-ROS Simulations 16

    4.1 Simulation Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

    4.2 Line Following . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

    4.3 Leader Follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

    4.4 Area Coverage Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 19


  • 5 Experimental Validation 21

    5.1 Implementation Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 21

    5.2 eduMOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

    5.3 Line Following . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

    5.4 Leader Follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

    5.5 Area Coverage Implement . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

    6 Conclusion and Future Work 27

    APPENDICES 27

    A ROS Network Setup 28

    A.1 Setting up the ROS network in general . . . . . . . . . . . . . . . . . . . . 28

    A.2 ROS Network Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

    A.3 ROS Network Implementation . . . . . . . . . . . . . . . . . . . . . . . . . 29

    B Implementation Appendix 31

    B.1 SD Card Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

    B.1.1 Non-source Installation . . . . . . . . . . . . . . . . . . . . . . . . . 33

    B.2 Building the eduMOD robot . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    C Simulation Appendix 34

    C.1 Creating and Modelling the eduMOD robot in VREP-ROS . . . . . . . . . 34

    D EduMOD Assembly Tutorial 35

    D.1 Required materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

    D.2 Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

    D.3 Drivetrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

    D.4 Bulkhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

    D.5 Complete Assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

    E OpenSCAD Tutorial 39

    E.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

    E.2 Union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

    E.3 Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

    E.4 Difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

    E.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40


  • F 3D Modeling Code 41

    F.1 Open SCAD Code for Caster . . . . . . . . . . . . . . . . . . . . . . . . . 41

    F.2 Open SCAD Code for Battery Clip . . . . . . . . . . . . . . . . . . . . . . 42

    F.3 Matlab Code for Standoffs . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

    G 3D-Printing Tutorial 44

    G.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

    G.2 Slicing Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

    G.3 Material Choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

    G.4 Print Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

    G.5 Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

    G.6 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    G.7 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    H Modelling 47

    H.1 Area Coverage Matlab Sim . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

    I ROS and VREP Interface Code 58

    I.1 Lua Code for Agent 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

    I.2 Additional Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

    J Simulation Code 60

    J.1 Line Follower VREP Pioneer . . . . . . . . . . . . . . . . . . . . . . . . . . 60

    J.2 Leader Follower VREP Pioneer . . . . . . . . . . . . . . . . . . . . . . . . 65

    J.3 Area Coverage VREP Pioneer . . . . . . . . . . . . . . . . . . . . . . . . . 71

    K Implementation Code 84

    K.1 Local Control Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

    K.2 Line Follower Matlab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

    K.3 Leader Follower Matlab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

    K.4 Area Coverage Matlab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99


  • List of Tables


  • List of Figures

    2.1 Plot of Agents in their Respective Voronoi Cells . . . . . . . . . . . . . . . 6

    2.2 Density distribution centered at point (50,85)m. . . . . . . . . . . . . . . . 7

    3.1 Overall architecture of MAFOSS. . . . . . . . . . . . . . . . . . . . . . . . 11

    3.2 High-level software architecture of MAFOSS. . . . . . . . . . . . . . . . . . 12

    3.3 ROS networking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

    4.1 V-REP simulation of line following algorithm. . . . . . . . . . . . . . . . . 17

    4.2 Performance in line–follower algorithm in simulation. . . . . . . . . . . . . 17

    4.3 V-REP simulation of leader follower algorithm. . . . . . . . . . . . . . . . 18

    4.4 Performance in leader–follower algorithm simulation. . . . . . . . . . . . . 18

    4.5 V-REP simulation of area coverage algorithm. . . . . . . . . . . . . . . . . 20

    5.1 eduMIP Robot and eduMOD Robot Comparison . . . . . . . . . . . . . . 22

5.2 Running line–following algorithm using an eduMOD differential drive mobile robot . . . . . . . . . . . . . . . . . . . . . . . . . 23

    5.3 Performance in running line–follower algorithm. . . . . . . . . . . . . . . . 23

5.4 Running leader–follower algorithm using two eduMOD differential drive mobile robots . . . . . . . . . . . . . . . . . . . . . . . 24

    5.5 Performance in running leader–follower algorithm. . . . . . . . . . . . . . . 25

    5.6 Performance of area coverage algorithm through data sent to Matlab. . . . 26

    5.7 Performance of area coverage algorithm. . . . . . . . . . . . . . . . . . . . 26

    D.1 eduMOD Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

    D.2 eduMOD Drivetrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

    D.3 eduMOD Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

    D.4 eduMOD Assembled Drivetrain . . . . . . . . . . . . . . . . . . . . . . . . 37

    D.5 eduMOD Assembled Bulkhead Front View . . . . . . . . . . . . . . . . . . 37


  • D.6 eduMOD Assembled Bulkhead Rear View . . . . . . . . . . . . . . . . . . 38

    D.7 eduMOD Complete Assembly . . . . . . . . . . . . . . . . . . . . . . . . . 38


  • Abbreviations

EduMOD Modified eduMIP inverse-pendulum robot in a differential–drive configuration

    MAFOSS Multi-Agent Framework using Open–Source Software

    Pose Position and orientation

ROS Robot Operating System

    TCP Transmission Control Protocol

CAD Computer-Aided Design

    E. Jones, D. Adra, S. Miah Page 1 of 99

  • Chapter 1

    Introduction

In a framework of multi-agent systems, the coverage optimization of a spatial area using a team of mobile agents (robots) is of paramount importance due to its variety of uses in robotic applications, most notably in roles where an agent (or a mobile robot) is preferable to a human or where a task is simply impossible for a human to perform. Typical applications for area coverage optimization in the field of mobile robotics include search and rescue, surveillance, environmental monitoring, cooperative estimation, and indoor navigation, among others. These problems are usually solved using an array of networked mobile agents operating collectively. To date, the implementation of area coverage algorithms has largely been restricted to fleets of homogeneous robots at high monetary cost. The purpose of this project is to lower the cost of entry imposed by the selected robot platform by providing an easily accessible framework for heterogeneous robots that can be implemented both quickly and efficiently. Note that the terms robots and agents will be used interchangeably from now on.

This research is a continuation of a previous project conducted by the authors in [?], wherein an area coverage algorithm with a static density is implemented in real time. The current work seeks to extend this idea by implementing a density that is dynamic. The term density refers to the importance of a particular point in a given two-dimensional plane. In addition, we look to rigorously define a framework that multiple agents will use to implement the area coverage optimization algorithm. Therefore, we focus on developing a multi-agent framework using open-source software (MAFOSS) to standardize this algorithm. MAFOSS is also intended to serve as an implementation platform for other multi-robot control algorithms, for example the leader–follower and cyclic pursuit algorithms.

    1.1 Background Study

Many papers have been published on algorithms requiring multi-agent systems to optimize area coverage. However, few of these works describe a flexible framework that provides the means to achieve such a coverage task.


  • Chapter 1. Introduction 1.1. Background Study

In [?], Kilinic attempts to optimize the area coverage of an interconnected network of sensors and actuators that act as a control system. Kilinic's goal is to maximize the area coverage of the wireless network control system (WNCS) whilst still maintaining convergence of the system in a large environment. Possible applications for such a control system include smart grids, automatic management, and navigation systems.

The system is arranged in several heterogeneous subnetworks that communicate ad hoc. A single packet of information will hop multiple times through the network until it reaches the Kalman filter. Each of the subnetworks has a connection to the Kalman filter whilst remaining separate from the others. The Kalman filter is used to account for asymmetric packet arrival times as well as packet loss.

In [?], Lee attempts to optimize coverage of a given region of interest for a time-variant density. Dynamic density coverage has additional applications compared to static density coverage. An example of this additional utility can be seen in a search and rescue scenario where the probability of a missing person being found in a particular area is time variant.

Lee's algorithm works by using Voronoi regions and a robotic cost function to determine area coverage. Each robot is responsible for covering a particular Voronoi region. Time information is used in the robot movement to account for the rapid changes in the density functions.

In [?], Miah adapts previous approaches to area coverage by using a fleet of heterogeneous, modular, cost-effective robots in real time. This allows for heterogeneous or non-uniform resource allocation and stabilizes the algorithm to allow for variation in the abilities of the constituent robots.

The algorithm in Miah's research is very similar to Lee's. Miah adds additional consideration for robot actuation limitations in the heterogeneous case, whilst Lee does not. Each individual robot is given a coverage metric to describe its area coverage performance.

In [?], Pimenta adapts the area coverage model for intruder tracking. Multiple intruders in a specified region can be tracked using this method even if their locations are unknown. The coverage is optimized, and therefore the probability of detecting any existing intruders in the area is also maximized.

Pimenta's algorithm allows an individual robot to track an intruder in its Voronoi cell whilst the other robots respond to provide optimal area coverage. When an intruder is located within a given distance from a robot, that particular robot is triggered to follow the intruder.

In [?], Varposhti uses area coverage optimization to distribute mobile directional sensors over an area. The sensors can move around freely and can adapt in the case of an outage in a particular area. Such networks can be used in target tracking, search and rescue, and surveillance.

Varposhti uses a distributed learning algorithm to achieve area coverage. Each sensor collaborates with its neighbors to determine the best position and orientation for every sensor. Each sensor moves in a random direction and turns to a random orientation until the coverage is maximized.



In [?], Yu attempts to find the optimal area coverage for deployable reconnaissance sensors. His approach involves considering the areas in which the sensors can be deployed, the connectivity, and the coverage. Such an algorithm provides the ability to deploy sensors almost anywhere for the purpose of surveillance and intelligence.

Yu's algorithm uses genetic neural networks and particle swarm optimization to achieve optimal deployment. In the genetic algorithm, the best-performing configurations are passed on to the next generation whilst the poorly performing configurations are slowly phased out. Additionally, a mutation rate is maintained to allow for behavior that was not previously available in an older generation.


  • Chapter 2

Modeling Area Coverage Optimization

    2.1 Introduction

In this chapter, the mathematical models necessary to implement the area coverage algorithm are derived. These include the mathematical models for the Voronoi regions, the coverage metric, and the density distribution.

We began this work by looking at the previously conducted project and attempting to simulate its results in MATLAB from scratch. To start, we developed a way to map a set density to a square area. For this we represented any x-y coordinate within the square area as the point q. Furthermore, we assigned q̄ to be the desired point for an agent to travel to in order to maximize coverage. As an example:

q̄ = [x̄ ȳ]ᵀ    (2.1)

The function φ is the measure of the density at any given point q. Through simplification it can be seen that:

φ(q) = exp(−0.5 ‖q − q̄‖² / σ²)    (2.2)

The variable σ seen in Equation (2.2) represents how the density is spread throughout the area; a larger σ denotes a larger spread of density. Further simplification yields:

φ(q) = exp(−0.5 ‖[x y]ᵀ − [x̄ ȳ]ᵀ‖² / σ²)    (2.4)

φ(q) = exp(−0.5 ((x − x̄)² + (y − ȳ)²) / σ²)    (2.5)
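As an illustration, the density of Equation (2.2) can be evaluated over the square workspace in a few lines. The sketch below is not the authors' MATLAB code; the peak location (50, 85) follows Figure 2.2, while the workspace size and σ = 20 are assumed values chosen only for illustration.

```python
import numpy as np

def density(q, q_bar, sigma):
    """phi(q) = exp(-0.5 * ||q - q_bar||^2 / sigma^2), for one point or an
    array of points q whose last axis holds the (x, y) coordinates."""
    d2 = np.sum((np.asarray(q, float) - np.asarray(q_bar, float)) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / sigma ** 2)

q_bar = np.array([50.0, 85.0])   # high-density point, as in Figure 2.2
sigma = 20.0                     # assumed spread parameter

# Evaluate phi over every x-y coordinate of an assumed 150 m x 150 m area.
xs, ys = np.meshgrid(np.linspace(0.0, 150.0, 151), np.linspace(0.0, 150.0, 151))
grid = np.stack([xs, ys], axis=-1)
phi = density(grid, q_bar, sigma)

print(float(phi.max()))          # 1.0, attained at the peak q = q_bar
```

The density is maximal (equal to 1) exactly at q̄ and decays radially, which is the behavior plotted in Figure 2.2.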


  • Chapter 2. Modeling Area Coverage Optimization 2.1. Introduction

    See figure 2.1 for an example of the square area we are attempting to map.

Figure 2.1: Plot of Agents in their Respective Voronoi Cells

This square area has been divided into n Voronoi regions, one for each of the n agents operating in the area. These regions are represented in figure 2.1 as V1, V2, and V3. In our case we are using three agents, though note that this number is somewhat arbitrary and can be scaled to any reasonable number.

A Voronoi region is a partition of the plane such that every point within a region is closer to that region's generating agent than to any other agent. As such, the regions themselves are convex two-dimensional shapes. Each Voronoi region represents the specific area covered by the agent within it.
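The nearest-agent rule that defines the Voronoi cells can be sketched as follows; the agent positions and test points are hypothetical examples, not values taken from the report.

```python
import numpy as np

def voronoi_labels(points, agents):
    """Index of the nearest agent for each point, i.e. the Voronoi cell it lies in."""
    points = np.asarray(points, float)[:, None, :]   # shape (N, 1, 2)
    agents = np.asarray(agents, float)[None, :, :]   # shape (1, n, 2)
    d2 = np.sum((points - agents) ** 2, axis=-1)     # squared distances, (N, n)
    return np.argmin(d2, axis=-1)                    # nearest agent per point

agents = np.array([[20.0, 20.0], [80.0, 30.0], [50.0, 90.0]])  # three agents
pts = np.array([[10.0, 10.0], [85.0, 25.0], [55.0, 95.0]])     # sample points
print(voronoi_labels(pts, agents))   # [0 1 2]: each point's covering agent
```

Applying this labeling to every grid point of the workspace reproduces the partition of Figure 2.1: the set of points labeled i is exactly V_i.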

The density function itself is applied to each x-y coordinate in the square area. Figure 2.2 shows how a single density region is represented in three-dimensional space.



Figure 2.2: Density distribution centered at point (50,85)m.

Now that we had an idea of how we would represent density, we began looking at how we would address the concept of coverage in our system.

Dr. Miah provided us with a derivation for coverage that we will detail here. The area coverage metric H in our system for n agents is defined as:

H = ∑ᵢ₌₁ⁿ ∫Vᵢ φ(q) f(rᵢ²) dQ    (2.6)

    Where f(r2i ) is a function that models sensor characteristics.

    f(r2i ) = α ∗ e−βr2i (2.7)

Note that α and β above represent ideal sensor parameters that we are not interested in modelling exactly. Our goal is to find a point p[i] in each Voronoi region such that H is maximized. In order to do so, we first find dH/dp[i] and set it to zero. For that, let us define an individual case:

Hᵢ = ∫Vᵢ φ(q) f(rᵢ²) dQ.    (2.8)

    First let us take the derivative:

dH/dp[i] = d/dp[i] ∑ᵢ₌₁ⁿ Hᵢ = ∑ᵢ₌₁ⁿ d/dp[i] ∫Vᵢ f(rᵢ²) φ(q) dQ    (2.9)



As the derivative of the sum ∑ᵢ₌₁ⁿ Hᵢ is zero at all indices other than the current agent index i, we can ignore those terms and further simplify Equation (2.9):

dH/dp[i] = dHᵢ/dp[i] = d/dp[i] ∫Vᵢ f(rᵢ²) φ(q) dQ = ∫Vᵢ (df(rᵢ²)/drᵢ²)(drᵢ²/dp[i]) φ(q) dQ    (2.10)

After this chain-rule manipulation of the derivative variables, note that we can find the derivative of the introduced range rᵢ:

drᵢ/dp[i] = d/dp[i] √((x − x[i])² + (y − y[i])²)    (2.11)

The range is then squared in order to simplify the derivative calculation:

drᵢ²/dp[i] = d/dp[i] [(x − x[i])² + (y − y[i])²] = [−2(x − x[i]) −2(y − y[i])]ᵀ = −2(q − p[i])    (2.12)
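Equation (2.12) is easy to sanity-check numerically. The sketch below (an illustration, not part of the report) compares the closed-form gradient −2(q − p[i]) against central finite differences at an arbitrarily chosen point and agent position:

```python
import numpy as np

def r2(q, p):
    """Squared range r_i^2 = ||q - p||^2 between point q and agent position p."""
    return float(np.sum((q - p) ** 2))

q = np.array([3.0, -1.0])                 # arbitrary workspace point
p = np.array([0.5, 2.0])                  # arbitrary agent position
analytic = -2.0 * (q - p)                 # gradient from Equation (2.12)

h = 1e-6                                  # central finite-difference step
numeric = np.array([
    (r2(q, p + h * e) - r2(q, p - h * e)) / (2.0 * h)
    for e in np.eye(2)                    # perturb each coordinate of p
])
print(np.allclose(analytic, numeric))     # True: the gradients agree
```

Since rᵢ² is quadratic in p, the central difference is exact up to floating-point error, so the two gradients match to high precision.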

    Using Equations (2.11) and (2.12), we are able to simplify Equation (2.10) even further:

dH/dp[i] = ∫Vᵢ [−2 φ(q) (df/drᵢ²)] (q − p[i]) dQ    (2.13)

The quantity [−2 φ(q) (df/drᵢ²)] shown in Equation (2.13) is introduced as the modified φ(q) function. This function takes into account the sensor parameters. We call this new function φ̃(q, p[i]). From Equation (2.13):

dH/dp[i] = ∫Vᵢ φ̃(q, p[i]) (q − p[i]) dQ    (2.14)

    Using properties of integrals, we split the integral into two:

dH/dp[i] = ∫Vᵢ q φ̃(q, p[i]) dQ − p[i] ∫Vᵢ φ̃(q, p[i]) dQ    (2.15)

Finally, we make an algebraic substitution and compare this equation to the mass and centroid equations discussed below:

dH/dp[i] = [∫Vᵢ φ̃(q, p[i]) dQ] · [∫Vᵢ q φ̃(q, p[i]) dQ / ∫Vᵢ φ̃(q, p[i]) dQ] − p[i] ∫Vᵢ φ̃(q, p[i]) dQ    (2.16)

We know that the mass is defined as the integral of our density function over a bounded area Vᵢ. As such, the modified mass is expressed as M̃Vᵢ = ∫Vᵢ φ̃(q, p[i]) dQ. The modified centroid is expressed as the points of the bounded area weighted by our modified φ̃(q, p[i]) function; we call it C̃Vᵢ = ∫Vᵢ q φ̃(q, p[i]) dQ / ∫Vᵢ φ̃(q, p[i]) dQ.

    Therefore we can express:

dH/dp[i] = M̃Vᵢ C̃Vᵢ − p[i] M̃Vᵢ = 0    (2.17)

M̃Vᵢ (C̃Vᵢ − p[i]) = 0    (2.18)

C̃Vᵢ = p[i]    (2.19)


  • Chapter 2. Modeling Area Coverage Optimization 2.2. Conclusion

    2.2 Conclusion

In conclusion, as the mass cannot be zero, the coverage is maximized when the position of each agent is equal to the modified centroid of its bounded area.
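This fixed-point condition suggests a Lloyd-style iteration: repeatedly move every agent to the weighted centroid of its Voronoi cell. The sketch below is only illustrative: it uses φ(q) itself as the weight rather than the full modified density φ̃(q, p[i]), approximates the integrals on a grid, and assumes all parameter values.

```python
import numpy as np

def weighted_centroids(agents, q_bar, sigma=20.0, size=100.0, n=101):
    """One Lloyd-style update: move each agent to the phi-weighted centroid
    of its Voronoi cell, with the integrals approximated on a uniform grid."""
    xs, ys = np.meshgrid(np.linspace(0.0, size, n), np.linspace(0.0, size, n))
    q = np.stack([xs.ravel(), ys.ravel()], axis=-1)
    w = np.exp(-0.5 * np.sum((q - q_bar) ** 2, axis=-1) / sigma ** 2)  # phi(q)
    owner = np.argmin(
        np.sum((q[:, None, :] - agents[None, :, :]) ** 2, axis=-1), axis=1)
    new = agents.copy()
    for i in range(len(agents)):
        in_cell = owner == i                   # grid points belonging to V_i
        mass = w[in_cell].sum()                # approximate cell mass
        if mass > 0.0:
            new[i] = (w[in_cell][:, None] * q[in_cell]).sum(axis=0) / mass
    return new

agents = np.array([[10.0, 10.0], [90.0, 20.0], [30.0, 90.0]])
q_bar = np.array([50.0, 85.0])                 # density peak (cf. Figure 2.2)
for _ in range(30):                            # iterate toward the fixed point
    agents = weighted_centroids(agents, q_bar)
print(np.round(agents, 1))                     # agents settle around the peak
```

Each update can only move an agent toward higher-density parts of its own cell; when every agent sits at its cell's weighted centroid, the update is a fixed point, matching Equation (2.19).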


  • Chapter 3

    MAFOSS

    3.1 Introduction

This chapter presents a new multi-agent framework using open-source software (MAFOSS). The proposed framework is a modular and cost-effective open-source hardware and software platform that is intended to help develop multi-agent systems for research and education. Numerous multi-agent platforms have been developed in the literature to date and used in various robotic applications, such as surveillance, target localization, and cooperative estimation, among others. However, most of them are either tailored towards particular applications or driven by expensive software and hardware. The proposed MAFOSS system is developed for robotic applications where a team of mobile agents (robots) is deployed to achieve a common goal. The software architecture of the current framework mostly relies on the robot operating system (ROS). Regardless of internal hardware and/or software architecture, appropriate actions can be applied to the actuators of an individual agent or a team of mobile agents for controlling their motions. A few case studies have been conducted to evaluate the performance of MAFOSS.

The development of multi-agent systems has been of crucial importance in modern autonomous agent-based applications, such as area coverage [?, ?, ?], perimeter surveillance and environment monitoring [?, ?, ?], cooperative estimation and formation control [?, ?, ?, ?], and indoor navigation [?, ?] using mobile robots, among others. Typical problems in such applications are usually solved using an array of networked mobile agents operating collectively. Recently, the implementation of various algorithms has been restricted to fleets of homogeneous robots at high monetary cost. The purpose of this research is to lower the cost of entry imposed by the selected robot platform by providing an easily accessible framework for heterogeneous robots that can be implemented both expediently and efficiently. Note that the terms robot and agent will be used interchangeably from now on. While the performance of the most promising multi-agent control algorithms proposed in the literature is evaluated using computer simulations only, a few multi-agent control algorithms have been validated using experiments that fit particular implementation platforms (i.e., use of specific robots, for example); see the work performed by the authors in [?, ?, ?], for instance. Therefore, this report aims to develop an open hardware/software


  • Chapter 3. MAFOSS 3.2. Overall MAFOSS Architecture

Figure 3.1: Overall architecture of MAFOSS.

architecture, MAFOSS, to implement multi-agent control algorithms. Multiple case studies have been implemented using MAFOSS. Some of the major advantages of MAFOSS include its scalability, low cost, robustness, and open-source nature. These qualities enable users to develop algorithms on a larger scale than was previously possible. The open-source environment brings a wide range of support whilst maintaining a fast-moving and non-proprietary code base. There are many packages in ROS that allow a user to interface sensors, actuators, and other hardware directly into their system with minimal development required. As such, it is an excellent tool for anyone looking to expand or help standardize their implementation of different algorithms in the fields of cooperative estimation and navigation.

    3.2 Overall MAFOSS Architecture

The overall architecture of the proposed MAFOSS is shown in Fig. 3.1. Therein, n (mobile) agents and the main control node are connected through a local or wide area network. After the network has been configured, a multi-agent algorithm can be implemented in the central node. Using this algorithm, the actuator commands determined by the central node are sent to all agents for appropriate actions. The agents collect measurements and pass relevant data back to the central node as well as prepare to receive new data. All the agent platforms run ROS, an open-source robot operating system. The client software installed in the central node depends on the particular multi-agent application. For example, the authors of this report have documented multiple case studies where MATLAB's Robotics System Toolbox∗ was used as the client software. Note that MATLAB is not open-source; however, the decision to use MATLAB was made to save time in the implementation process.
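The command/measurement exchange described above can be sketched schematically in plain Python. These classes, the toy kinematics, and the command format are hypothetical stand-ins for illustration only; they are not ROS or MAFOSS code.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical stand-in for one networked mobile agent."""
    name: str
    pose: tuple = (0.0, 0.0, 0.0)         # (x, y, heading), updated by feedback
    log: list = field(default_factory=list)

    def apply(self, cmd):
        """Stand-in for the agent's low-level API: act, then report its pose."""
        v, w = cmd                        # assumed (forward, turn) command pair
        x, y, th = self.pose
        self.pose = (x + v, y, th + w)    # toy kinematics, illustration only
        self.log.append(cmd)
        return self.pose                  # measurement sent back to the server

def control_step(agents, controller):
    """One cycle of the central node: command every agent, gather measurements."""
    return {a.name: a.apply(controller(a)) for a in agents}

agents = [Agent("agent1"), Agent("agent2")]
controller = lambda a: (1.0, 0.0)         # hypothetical policy: drive forward
poses = control_step(agents, controller)
print(poses["agent1"])                    # (1.0, 0.0, 0.0)
```

In the real framework, `control_step` corresponds to the central node's algorithm iteration, and `apply` corresponds to ROS messages dispatched to and returned from each agent's low-level control API.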

As a proof of concept, let us consider an area coverage problem [?, ?, ?, ?], where a group of mobile autonomous agents is deployed in an area of interest. The problem is to deploy the group of mobile agents such that the coverage metric is maximized. In this case,

    ∗ https://www.mathworks.com/products/robotics.html


  • Chapter 3. MAFOSS 3.3. Software Architecture

Figure 3.2: High-level software architecture of MAFOSS.

the central node of the proposed MAFOSS architecture is assigned to implement the area coverage algorithm. For that, appropriate actuator commands for each mobile agent are dispatched at regular time intervals. The details of how these area coverage algorithms work can be found in [?, ?, ?, ?]. The next section illustrates the software architecture of MAFOSS, which can be used to implement a set of algorithms that use autonomous agents for solving problems of multi-agent systems.

    3.3 Software Architecture

Fig. 3.2 shows the higher-level software architecture of the proposed MAFOSS. The software architecture is divided into four layers. Each layer executes a set of APIs and dispatches the output to the next layer. The higher-level commands are executed at the central node; therefore, the bandwidth of operations generated by the algorithm running at the main control node is low. The low-level actuator commands that run on each agent are executed at high bandwidth. A detailed description of each layer of the software architecture is provided below.



Fig. 3.1 details three different robots that act as MAFOSS agents. From left to right there is the eduMOD robot, the Khepera IV robot designed by K-TEAM∗, and the Pioneer P3-DX designed by Pioneer Robotics†. The eduMOD robot is based on the eduMIP project developed by the University of California San Diego's Flow Control and Coordinated Robotics Lab‡. This robot has been modified into a tricycle configuration (hence the name eduMOD), due to the authors' desire to improve the accuracy and consistency of its movement. The eduMOD's low-level control APIs and front caster assembly were designed and developed by the authors of this report. The eduMOD was created to act as a low-cost, easy-to-use, and powerful alternative to the large variety of similar tricycle-configuration differential-drive mobile robotic platforms.

• Higher-level application program - The high-level application program in MAFOSS takes user input, such as agents' initial poses (positions and orientations), alongside workspace parameters, and feeds them into a server that implements multi-agent algorithms. This server communicates with the main control node and forwards this information through ROS to the rest of the system.

• Robot Operating System - The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. The current MAFOSS uses the networking abilities of ROS to define individual agents as nodes, standardizing the way a user can approach an algorithm's implementation.

• Lower-level APIs - The lower-level control APIs in this architecture each depend on the target agent/robot the user is attempting to interface with. For an embedded computer, such as the BeagleBone Blue§, the authors use the well-documented robot control library, librobotcontrol, as it is specifically intended for the BeagleBone. For the Khepera and the Pioneer, the authors use the proprietary interfacing firmware developed by their respective companies: the ROSAria library and libkhepera. Direct hardware interfacing is done through the Linux headers, as described in each API.

• Agents - These are the agents that are interfaced with via the control APIs, as seen in Fig. 3.1. The agents are controlled via low-level control code that is present as part of each robot's firmware or, in the case of the eduMOD, developed by the user.

To summarize, the software architecture consists of a high-level application, the Robot Operating System, low-level high-bandwidth APIs, and the agents themselves. In general, the high-level applications consist of user-defined behaviors that determine the usage scenario for the MAFOSS architecture. Next, ROS is used for its networking abilities and flexibility in handling a variety of hardware configurations. The lower-level APIs are used to interface directly with the hardware, acting on commands received from ROS. Finally, the agents themselves are designated by the end user to accomplish the desired task.

∗ https://www.k-team.com/  † https://www.pioneer-robotics.no/  ‡ https://www.ucsdrobotics.org/  § https://beagleboard.org/blue


[Figure 3.3 depicts the MASTER node connected to Agent #1, Agent #2, and Agent #3 over the topics publishDesired, subscribeDesired, publishCurrent, and subscribeCurrent, with messages also sent to console and logs. An Example box lists, per node, XML-RPC = host : port and TCP data = host : port; e.g. for Agent #1, XML-RPC = (Agent#1 id) : 1234 and TCP data = (Agent#1 id) : 2345.]

Figure 3.3: ROS networking.

    3.4 MAFOSS Networking Setup

The purpose of this section is to highlight the setup of the network architecture of the MAFOSS. There are various ways to ship data around a network, and each has advantages and disadvantages, depending largely on the application. A communication protocol such as the transmission control protocol (TCP) is widely used because it provides a simple and reliable communication stream. TCP packets always arrive in order, and lost packets are resent from source to destination until they arrive (see [?, Ch. 3] for details).

Fig. 3.3 gives a holistic view of the networking within the MAFOSS using ROS∗. Note that the ports specified for the TCP/IP sockets in the Example section are arbitrary. Further, the full URI for the master node will include ROS's standard port, 11311.

• Nodes - Nodes are executables that use ROS to communicate with other nodes through TCPROS across a corresponding topic. In Fig. 3.3, the nodes in question are Agent #1, Agent #2, and Agent #3.

• Topics - Topics are named buses over which nodes exchange messages. In general, nodes are not aware of who they are communicating with. Instead, nodes that are interested in data subscribe to the relevant topic; nodes that generate data publish to the relevant topic. Fig. 3.3 lists four topics associated with the MAFOSS networking: subscribeCurrent, subscribeDesired, publishCurrent, and publishDesired. Each node communicating with the MASTER in the MAFOSS system gets its own set of four topics, with further designation between each node. For Agent #1, the full topic name would be agent#1PublishDesired; this convention is carried throughout the system independent of the number of agents.

• Master - The ROS Master provides naming and registration services to the rest of the nodes in the ROS system. It tracks publishers and subscribers to topics. The role of the Master is to enable individual ROS nodes to locate one another. Once these nodes have located each other, they begin to communicate peer-to-peer.

∗ http://wiki.ros.org/ROS/Concepts

• Output data - Output data in the MAFOSS is sent to the main control node, the computer running the master node, as the raw x and y positions. This data is also written to a corresponding ROS system log file for each node. Users can use these log files to create graphs relating an agent's desired position to its actual position.

• Example - Note the Example box in Fig. 3.3. This gives a more in-depth view of what occurs inside the nodes in the figure. Given a publisher URI, a subscribing node negotiates a connection with that publisher via an Extensible Markup Language Remote Procedure Call (XMLRPC). The result of the negotiation is that the two nodes are connected, with messages streaming from publisher to subscriber. Each transport has its own protocol for how the message data is exchanged. For example, using TCP, the negotiation involves the publisher giving the subscriber the IP address and port on which to call connect. The subscriber then creates a TCP/IP socket to the specified address and port. The nodes exchange a connection header that includes information such as the message type and the name of the topic, and then the publisher begins sending serialized message data directly over the socket.
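The per-agent topic naming convention described above can be sketched as a small helper. This is an illustrative sketch only: the function name and dictionary keys are ours, and the strings follow the text's convention literally (note that ROS graph resource names do not actually allow the "#" character, so a deployed system would need a sanitized form such as agent1PublishDesired).

```python
def agent_topics(agent_id):
    """Return the four per-agent MAFOSS topic names, following the
    convention given in the text (e.g. agent#1PublishDesired)."""
    prefix = "agent#%d" % agent_id
    return {
        "publish_desired": prefix + "PublishDesired",
        "subscribe_desired": prefix + "SubscribeDesired",
        "publish_current": prefix + "PublishCurrent",
        "subscribe_current": prefix + "SubscribeCurrent",
    }
```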

The key steps to implement/operate the MAFOSS using a team of mobile agents are as follows.

Step: 1 Select a target mobile robotic platform.

Step: 2 Install ROS on the target platform or on a networking-capable microcontroller that can communicate with the target platform over a wireless network.

Step: 3 Set up a ROS master node as well as an application node on a computer and connect to a router via a wireless adapter. This combination will be your main control node.

Step: 4 Develop a multi-agent control algorithm and use ROS to send pertinent information throughout the nodes in the agent network.

Step: 5 Update the multi-agent algorithm with input from the agents present in the system.

    3.5 Conclusion

The proposed framework was used to implement a series of algorithms in order to verify its use in an area coverage algorithm. This is demonstrated in the following sections on simulation and implementation.


  • Chapter 4

    VREP-ROS Simulations

    4.1 Simulation Introduction

In this section, we will be looking at the simulations run in order to test the validity of the MAFOSS framework for developing multi-agent algorithms. The simulator we decided to use is V-REP, the Virtual Robot Experimentation Platform. This software allows us to use its highly realistic physics to get a close approximation of what our implementation would be like. V-REP is used in industry and is constantly being developed and updated. Note that in the following simulations we use the Pioneer model instead of the eduMOD model, as discussed later in the implementation section 5.1. This is because the model we began testing in V-REP, which we created via CAD modelling software, was not performing adequately: it had issues with clipping and an incorrect center of balance. Both models are DDMRs, simply at different scales, so we thought it appropriate to use the Pioneer model instead.

    4.2 Line Following

To demonstrate the working principle, a simple line-following algorithm is implemented first. In this case, the model attempts to follow a line defined by

ax + by + c = 0, which in this case reduces to y = x.

The linear speed is ν(t) = ν = 0.15 [m · s−1] and the angular speed of the robot is computed by

ω(t) = (1/ℓ) tan(γ(t)),

where ℓ is the distance between the two driving wheels, which are tied by an axle, and the angle γ(t) is defined as

γ(t) = −Kp d(t) + Kγ (θ[ref] − θ(t))


with d(t) being the orthogonal distance between the robot position and the line at time t ≥ 0, θ[ref] = atan2(−a, b), and the proportional gains Kp, Kγ > 0. The agent is initially placed at (x, y) = (0.9, 0.3) m with an orientation of θ = π/2.
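One step of the steering law above can be sketched as follows. This is an illustrative sketch, not the report's code: the gain values, the wheel separation ℓ, the assumption that d(t) is signed, and the function name are all ours.

```python
import math

def line_follow_cmd(x, y, theta, a, b, c,
                    Kp=1.0, Kgamma=1.0, wheel_sep=0.1, v=0.15):
    """One control step for following the line a*x + b*y + c = 0.

    Returns (v, omega). Gains and wheel_sep (the distance ell between
    the driving wheels) are placeholder values, not the report's.
    """
    # Signed orthogonal distance from the robot to the line.
    d = (a * x + b * y + c) / math.hypot(a, b)
    # Reference heading of the line, as in the text.
    theta_ref = math.atan2(-a, b)
    # Steering angle and resulting angular speed.
    gamma = -Kp * d + Kgamma * (theta_ref - theta)
    omega = math.tan(gamma) / wheel_sep
    return v, omega
```

For the line y = x written as −x + y = 0 (a = −1, b = 1, c = 0), a robot already on the line with heading π/4 produces zero steering, while a robot above the line steers back toward it.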


    Figure 4.1: V-REP simulation of line following algorithm.

The corresponding robot trajectory is plotted in Fig. 4.2(a). Fig. 4.2(b) shows the orthogonal distance between the model and the line. Note that the purpose here is to demonstrate different mobile robot/agent-based navigation algorithms using MAFOSS.

[Figure 4.2 plots: (a) the robot's x–y trajectory [m]; (b) the orthogonal distance d [m] to the line over 0–30 s.]

Figure 4.2: Performance of the line–follower algorithm in simulation.

    4.3 Leader Follower

The aim of this section is to further evaluate MAFOSS by running a relatively complex scenario, such as the leader–follower problem, using two models. In this scenario, the leader attempts to follow a circular trajectory and the follower attempts to follow behind the leader. The radius of the target circle was set to 1 [m], its center to (0, 0) m, and the desired distance offset between the leader and follower to 50 [cm]. The leader was initially placed at (x, y) = (0.4, 0) m with an orientation of θ = 0, while the follower was initially placed at (x, y) = (0, −0.5) m with an orientation of θ = 0.



    Figure 4.3: V-REP simulation of leader follower algorithm.

The follower robot is required to follow a leader robot that navigates along a circular path defined by

    x[d](t) = xc +R cos(αt) (4.1a)

    y[d](t) = yc +R sin(αt), (4.1b)

where (xc, yc) is the coordinate of the center of the circular path with radius R > 0, and α is the angular rate parameter of the circle at time t ≥ 0. The tangent of the trajectory (4.1) gives the angle θ[d](t), which is computed as θ[d](t) = atan2(ẏ[d], ẋ[d]). The linear and angular speeds of the leader robot are given by

ν[d] = √((ẋ[d])² + (ẏ[d])²)   and   ω[d] = (ÿ[d] ẋ[d] − ẍ[d] ẏ[d]) / (ν[d])².

The follower robot can be modeled as a unicycle with its kinematics given in (5.1). The problem is to find the linear and angular speeds, ν(t) and ω(t), of the follower robot so that it follows the leader robot, which tracks the reference trajectory (4.1), while maintaining a constant geometric distance.
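The leader's reference pose and feedforward speeds above can be sketched directly from (4.1). This is an illustrative sketch; the function name and default parameter values are assumptions, not the report's. For a circle, the formulas reduce to ν[d] = Rα and ω[d] = α, which the code reproduces.

```python
import math

def leader_reference(t, xc=0.0, yc=0.0, R=1.0, alpha=0.1):
    """Reference pose and feedforward speeds for the circular
    trajectory (4.1); parameter values here are illustrative."""
    # Desired position on the circle and its first two derivatives.
    xd = xc + R * math.cos(alpha * t)
    yd = yc + R * math.sin(alpha * t)
    xdot = -R * alpha * math.sin(alpha * t)
    ydot = R * alpha * math.cos(alpha * t)
    xddot = -R * alpha**2 * math.cos(alpha * t)
    yddot = -R * alpha**2 * math.sin(alpha * t)
    theta_d = math.atan2(ydot, xdot)                 # tangent angle
    v_d = math.hypot(xdot, ydot)                     # = R*alpha on a circle
    w_d = (yddot * xdot - xddot * ydot) / v_d**2     # = alpha on a circle
    return xd, yd, theta_d, v_d, w_d
```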

[Figure 4.4 plots: (a) the leader and follower X–Y positions [m]; (b) the leader and follower linear [m/s] and angular [rad/s] speeds over 0–150 s.]

Figure 4.4: Performance in leader–follower algorithm simulation.


    For the robot to follow the circular path, let us define the position error

e(t) = √((x[d](t) − x(t))² + (y[d](t) − y(t))²) − doffset

with doffset being the distance that the robot is supposed to maintain from the desired position (x[d](t), y[d](t)). The robot's linear speed to follow the path is given by

ν(t) = Kv e(t) + Ki ∫₀ᵗ e(τ) dτ,

where Kv, Ki > 0 are constant proportional gains. The robot is directed to steer towards the circle with a steering angle given by γ = Kγ θ̃, where θ̃ = θ′ − θ with θ′ = atan2(y[d] − y, x[d] − x) and the proportional gain Kγ > 0. Note that the angular difference θ̃ ∈ (−π, π]. The values of the parameters used in this experiment are Kv = 0.35, Ki = 0, and Kγ = 0.45. The performance of MAFOSS in running a simple leader–follower simulation is demonstrated in Fig. 4.3, where the follower follows the leader robot on the circular path. The X–Y trajectories of these robots and their distance are shown in Fig. 4.4. As can be seen, the simulations are able to run multi-agent algorithms of various complexities.
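One step of this follower law can be sketched as below, using the gain values stated in the text (Kv = 0.35, Ki = 0, Kγ = 0.45) and the 0.5 m offset from Section 4.3. The function names, the angle-wrapping helper, and the way the integral term is passed in are our assumptions, not the report's code.

```python
import math

def wrap_angle(a):
    """Wrap an angle into (-pi, pi], as the text requires for theta-tilde."""
    a = math.fmod(a + math.pi, 2.0 * math.pi)
    if a <= 0.0:
        a += 2.0 * math.pi
    return a - math.pi

def follower_cmd(x, y, theta, xd, yd, e_int,
                 d_offset=0.5, Kv=0.35, Ki=0.0, Kgamma=0.45):
    """One step of the follower law; returns (v, gamma, e).

    e_int is the accumulated integral of e(t), supplied by the caller.
    """
    # Position error relative to the desired standoff distance.
    e = math.hypot(xd - x, yd - y) - d_offset
    v = Kv * e + Ki * e_int
    # Bearing to the desired point, wrapped to (-pi, pi].
    theta_prime = math.atan2(yd - y, xd - x)
    gamma = Kgamma * wrap_angle(theta_prime - theta)
    return v, gamma, e
```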

    4.4 Area Coverage Simulation

The aim of this section is to simulate the full area coverage algorithm as described in the modelling section 2.1. Fig. 4.5 shows how the simulation performed. We can see that as time goes to infinity, the agents approach the centroids and the coverage of the system is maximized.
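Section 2.1 is not reproduced here, but the centroid-seeking behavior can be illustrated with a generic Lloyd-style update over a discretized workspace: assign each sample point to its nearest agent (a discrete Voronoi partition), then move each agent to the density-weighted centroid of its cell. This is a sketch of the standard coverage-control idea under those assumptions, not the report's exact formulation; the function and its arguments are ours.

```python
def lloyd_step(agents, pts, phi):
    """One Lloyd-style centroid update (illustrative sketch).

    agents: list of (x, y) agent positions.
    pts:    list of (x, y) sample points discretizing the workspace.
    phi:    scalar density function phi(x, y) >= 0.
    Returns the new agent positions (density-weighted cell centroids).
    """
    n = len(agents)
    mass = [0.0] * n
    sx = [0.0] * n
    sy = [0.0] * n
    for (px, py) in pts:
        # Nearest agent claims the sample point (discrete Voronoi cell).
        i = min(range(n),
                key=lambda k: (agents[k][0] - px) ** 2
                              + (agents[k][1] - py) ** 2)
        w = phi(px, py)
        mass[i] += w
        sx[i] += w * px
        sy[i] += w * py
    # Empty cells leave the agent where it is.
    return [(sx[i] / mass[i], sy[i] / mass[i]) if mass[i] > 0.0 else agents[i]
            for i in range(n)]
```

Iterating this step drives the agents toward the centroidal configuration that the figures show the simulated robots approaching.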



    Figure 4.5: V-REP simulation of area coverage algorithm.


  • Chapter 5

    Experimental Validation

    5.1 Implementation Introduction

In this section, we will be looking at the results from implementing the simulation cases. To perform this, the authors introduce the eduMOD differential-drive mobile robots (DDMRs), developed in the robotics laboratory of Bradley University, as the mobile agents.

    5.2 eduMOD

The eduMOD platform is based on the pre-existing eduMIP platform developed by the University of California San Diego's Flow Control and Coordinated Robotics laboratory. The eduMIP platform uses a microcomputer, the BeagleBone Blue, attached to a chassis in order to control two motors. These two motors balance the eduMIP platform in an inverted-pendulum configuration (hence the name eduMIP, for Mobile Inverted Pendulum). We started out by purchasing the kit from a vendor and quickly realized that the inverted-pendulum configuration would not work for our application. The PID controller the robot uses to keep itself upright cannot hold the robot completely still; it would change its position by plus or minus 5 [cm]. This would not work in our application, as we wanted more accuracy in the system. To rectify this, we decided to modify the eduMIP robot and convert it into a three-wheeled configuration with a ball caster in the rear. We also wrote low-level control code to have the robot dead-reckon its current position. The eduMOD robot can be seen in Fig. 5.1.


    Figure 5.1: eduMIP Robot and eduMOD Robot Comparison

The algorithms are tested using the eduMOD DDMR, whose kinematic model is described by a unicycle model:

    ẋ(t) = ν(t) cos θ(t) (5.1a)

    ẏ(t) = ν(t) sin θ(t) (5.1b)

    θ̇(t) = ω(t). (5.1c)

where q(t) ≡ [x(t), y(t), θ(t)]T ∈ R² × S¹ is the eduMOD's pose, and ν(t) and ω(t) are its linear and angular speeds at time t ≥ 0, respectively. Note that the lower-level actuator commands ν(t) and ω(t) are implemented at high bandwidth using a dead-reckoning algorithm in cooperation with a conventional proportional-integral-derivative (PID) controller. Further note that the robots used to implement these algorithms have no way of externally verifying their position. If the wheels slip, or if a robot runs into any barriers, the dead-reckoning algorithm cannot correct itself. This is particularly noticeable on the eduMOD, as the robot is not as mechanically robust as its counterparts and is relatively small. Regarding the size, if the robot is off its target by even a few centimeters, it will be almost an entire wheelbase away from its target; if the Pioneer robot were off by a few centimeters, the difference would appear far less striking. In the following, two different examples using eduMOD robots/agents are provided.
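A minimal Euler-step integration of the unicycle model (5.1), in the spirit of the dead-reckoning update described above, can be sketched as follows. The function name and the fixed step size are our assumptions; the report's actual PID/odometry code is not shown here.

```python
import math

def dead_reckon(pose, v, w, dt):
    """Advance the unicycle model (5.1) by one Euler step of length dt.

    pose is (x, y, theta); v and w are the linear and angular speeds.
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return (x, y, theta)
```

As with any pure dead-reckoning scheme, errors from wheel slip accumulate across steps with no external correction, which is exactly the limitation noted above.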

    5.3 Line Following

To demonstrate the working principle, a simple line-following algorithm is first implemented in the proposed MAFOSS. In this case, the eduMOD robot attempts to follow a line defined by

ax + by + c = 0, which in this case reduces to y = x + 0.06.

The linear speed is ν(t) = ν = 0.15 [m · s−1] and the angular speed of the eduMOD robot is computed by

ω(t) = (1/ℓ) tan(γ(t)),


where ℓ is the distance between the two driving wheels, which are tied by an axle, and the angle γ(t) is defined as

γ(t) = −Kp d(t) + Kγ (θ[ref] − θ(t))

with d(t) being the orthogonal distance between the robot position and the line at time t ≥ 0, θ[ref] = atan2(−a, b), and the proportional gains Kp, Kγ > 0. A six-centimeter offset is added to account for some inconsistencies in the testing area. However, this is of no concern to the algorithm itself, as it can follow any reasonable line, as demonstrated in Fig. 5.2. The eduMOD robot is initially placed at (x, y) = (0.9, 0.3) m with an orientation of θ = π/2.


Figure 5.2: Running line–following algorithm using an eduMOD differential drive mobile robot.

The corresponding robot trajectory is plotted in Fig. 5.3(a). Fig. 5.3(b) shows the orthogonal distance between the robot and the line. Note that the purpose here is to demonstrate different mobile robot/agent-based navigation algorithms using the MAFOSS so that we can use it to implement the area coverage case.

[Figure 5.3 plots: (a) the robot's x–y trajectory [m]; (b) the orthogonal distance d [m] over 0–12 s.]

Figure 5.3: Performance in running line–follower algorithm.


    5.4 Leader Follower

The aim of this section is to further evaluate the MAFOSS by running a relatively complex scenario, such as the leader–follower problem, using two eduMOD robots. In this scenario, the leader attempts to follow a circular trajectory and the follower attempts to follow behind the leader. The radius of the target circle was set to 32 [cm], its center to (0, 0) m, and the desired distance offset between the leader and follower to 15 [cm]. The leader was initially placed at (x, y) = (0.1, −0.2) m with an orientation of θ = 0, while the follower was initially placed at (x, y) = (−0.2, −0.2) m with an orientation of θ = 0.


Figure 5.4: Running leader–follower algorithm using two eduMOD differential drive mobile robots.

The follower robot is required to follow a leader robot that navigates along a circular path defined by

    x[d](t) = xc +R cos(αt) (5.2a)

    y[d](t) = yc +R sin(αt), (5.2b)

where (xc, yc) is the coordinate of the center of the circular path with radius R > 0, and α is the angular rate parameter of the circle at time t ≥ 0. The tangent of the trajectory (5.2) gives the angle θ[d](t), which is computed as θ[d](t) = atan2(ẏ[d], ẋ[d]). The linear and angular speeds of the leader robot are given by

ν[d] = √((ẋ[d])² + (ẏ[d])²)   and   ω[d] = (ÿ[d] ẋ[d] − ẍ[d] ẏ[d]) / (ν[d])².

The follower robot can be modeled as a unicycle with its kinematics given in (5.1). The problem is to find the linear and angular speeds, ν(t) and ω(t), of the follower robot so that it follows the leader robot, which tracks the reference trajectory (5.2), while maintaining a constant geometric distance.


[Figure 5.5 plots: (a) the leader and follower X–Y positions [m]; (b) the leader–follower distance [m] over 0–30 s.]

Figure 5.5: Performance in running leader–follower algorithm.

    For the robot to follow the circular path, let us define the position error

e(t) = √((x[d](t) − x(t))² + (y[d](t) − y(t))²) − doffset

with doffset being the distance that the robot is supposed to maintain from the desired position (x[d](t), y[d](t)). The robot's linear speed to follow the path is given by

ν(t) = Kv e(t) + Ki ∫₀ᵗ e(τ) dτ,

where Kv, Ki > 0 are constant proportional gains. The robot is directed to steer towards the circle with a steering angle given by γ = Kγ θ̃, where θ̃ = θ′ − θ with θ′ = atan2(y[d] − y, x[d] − x) and the proportional gain Kγ > 0. Note that the angular difference θ̃ ∈ (−π, π]. The values of the parameters used in this experiment are Kv = 0.35, Ki = 0, and Kγ = 0.45. The performance of MAFOSS in running a simple leader–follower algorithm is demonstrated in Fig. 5.4, where the follower follows the leader robot on the circular path. The X–Y trajectories of these robots and their distance are shown in Fig. 5.5. As can be seen, the proposed MAFOSS has the ability to run multi-agent algorithms of various complexities.

5.5 Area Coverage Implementation

The aim of this section is to implement the full area coverage algorithm as described in the modelling section 2.1. We have modelled the system differently than in the simulation case simply to further test the validity of the algorithm. Here we define only two density points and have one of the density points weighted twice as heavily in the system.



    Figure 5.6: Performance of area coverage algorithm through data sent to Matlab.

Fig. 5.6 shows the data sent back to Matlab from the eduMODs while running the algorithm. The next set of figures shows the actual positions of the eduMODs in the implementation area. Fig. 5.7 shows how closely the implementation on the eduMODs themselves matched the data being sent to Matlab. Note that the white markings on the floor represent the final locations of the density points in the system, and the blue markings represent the starting positions of the robots.


    Figure 5.7: Performance of area coverage algorithm.


  • Chapter 6

    Conclusion and Future Work

In conclusion, we have demonstrated the validity of MAFOSS for implementing and simulating various multi-agent algorithms. We used this framework to implement the area coverage algorithm on our custom-built platform, the eduMOD. We saw excellent performance across all of the test cases, in both simulation and implementation. For future work, there are many different avenues a group could take to extend this project. The most pertinent next step would be to give the agents in the network some external position data. Feeding this external data into the system would allow the robots to correct the small accumulating errors created by the dead-reckoning code running on the agents, greatly improving accuracy over long runs. Another contribution would be to implement these systems with heterogeneous agents, i.e., agents with different actuators, wheelbase lengths, and wheel radii. Furthermore, a future group could even go so far as to add a non-static density source.


  • Appendix A

    ROS Network Setup

    A.1 Setting up the ROS network in general

ROS itself is a very useful tool that allows us to communicate between the agents in our system. To begin running ROS in the system, you must first have ROS installed on all of the agents in the system, as well as on a server computer. Once you have done so, you can begin setting up ROS in the system. First, you must make sure that the server computer is connected to the router in the lab. Note that the IP associated with that router is NOT static; however, it seems to change very rarely. This is accomplished by plugging a USB network adapter into the lab computer and simply connecting to the ECE-Robots-1 local area network. Now we need to check the IP of the server computer on this local area network. Again, this IP will not be static, so you may need to update it every once in a while. To check this IP, access the command prompt in Windows and type

    ipconfig

We are looking for the wireless network adapter's IPv4 address. In our case the IP was 192.168.1.95. Now that we have this IP, we can use it later to set up the individual robots' ROS IPs. Once we have made sure that our server computer is connected to the lab's router, we can move on to the following steps.

    A.2 ROS Network Simulation

In order to use ROS and V-REP together for simulation, the user must first download V-REP for Linux. Once you have done this, the V-REP ROS interface plugin will be downloaded automatically as well. Now you will need to register the ROS master node in your system. For the purposes of our project, we decided to use the ROS functionality provided by Matlab's Robotics System Toolbox. However, you are not required to use this toolbox, or even Matlab itself, as setting up the ROS master can be done with any machine running ROS. Regardless, for our simulation we ran the command


    rosinit

in the Matlab prompt to register the ROS Master in the network. If successful, the command will report that the Matlab global node was generated. In order to run V-REP, you must navigate to the vrep folder and run the following commands:

    export ROS_MASTER_URI=http://:11311

    as an example we used a server computer IP of 192.168.1.95

    export ROS_MASTER_URI=http://192.168.1.95:11311

    ./vrep.sh

You will know if the ROS interface plugin was loaded successfully by checking the terminal you used to run the previous commands. If the ROS interface plugin gives you a "load succeeded" tag, then the network has been set up correctly. If you receive an additional message regarding simROS not loading, simply disregard that message, as it is unnecessary in our system. You can further check the validity of your setup by using some ROS tools to determine whether V-REP ROS was set up correctly. Type the following command into the Matlab prompt:

    rosnode list

If the console lists v-rep as a node, then the system recognized it. From here you will be able to run a Lua script, as detailed in the ROS and V-REP interface code appendix, alongside a Matlab script to truly test communication within the system.

    A.3 ROS Network Implementation

Interfacing with agents for implementation requires setting up the server computer and Matlab similarly to the simulation case. However, now we need to specify each robot in our network. ROS needs to know the specific IP of each robot in the network in order to set up the peer-to-peer communication schema. To do so, we must first access each ROS agent. In our case we simply use PuTTY to set up an SSH connection with an eduMOD robot. Once we are inside the eduMOD robot, we begin by sourcing the correct files to let ROS know where to look when we use ROS functionality. Use the following commands:

    source ros_catkin_ws/install_isolated/setup.bash

    source catkin_tmp/devel/setup.bash

Once we have done this, we will be able to use ROS commands like rostopic list and rosnode list, among others. Now we need to make sure that the eduMOD robot is connected to the correct wireless local area network. Enter the following commands:


    connmanctl

    agent on

    enable wifi

    scan wifi

    services

After entering the services command, we will see a list of all the networks seen by this eduMOD robot. We are looking to connect to the ECE-Robots-1 network provided by the lab router. In order to connect to the network, we must type connect followed by the wifi tag associated with this network. As an example:

    connect wifi_199293_10213991120109

    After we have connected to this network we type:

    quit

to exit from the connmanctl tool. Now we are connected to the network. This can be checked by going back into connmanctl (following the steps laid out above) and looking at the services seen by the eduMOD. If the ECE-Robots-1 network has an AR* tag next to it, we know that the eduMOD is already connected. Now we want to check what IP this agent is operating as. To do this, we type in the command:

    hostname -I

and look at the third entry from the right. That will be the entry associated with the currently connected wifi network. As an example, it could be 192.168.1.74. Now that we have this and the IP from the server computer, we can tell ROS exactly what it needs to know in order to operate. Enter the following commands to set these values:

    export ROS_MASTER_URI=http://:11311

    export ROS_IP=

At this point we have all the setup required for the system to operate. You should already have the Matlab ROS master set up, but if not, make sure you enter:

    rosinit

    in the Matlab prompt to register the master in the system.


  • Appendix B

    Implementation Appendix

    B.1 SD Card Configuration

This appendix presents the steps necessary for installing ROS Jade onto an SD card from source. This guide was written for someone with an understanding of Linux. To check which version of Linux you are running, enter the following command:

lsb_release -a

This is simply to check and make sure that you are indeed running Debian Jessie, version 8.7. If this is not the returned version, please install Debian 8.7 on the SD card before continuing. Run this command before attempting contact with the keyserver:

sudo apt-get install dirmngr

    Then run:

sudo -E apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 \
    --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

sudo sh -c "echo \"deb http://packages.ros.org/ros/ubuntu $DISTRO main\" > /etc/apt/sources.list.d/ros-latest.list"

sudo apt-get update

sudo apt-get install python-rosdep python-rosinstall-generator python-wstool \
    python-rosinstall build-essential

sudo apt-get install cmake python-nose libgtest-dev


Here we are installing many different dependencies needed for ROS to install successfully. These dependencies will undoubtedly have changed since the time of writing; looking at the relevant error messages from wstool later on will be vital to troubleshooting such issues. Next run:

    sudo -E rosdep init

    rosdep update

    mkdir ~/ros_catkin_ws

    cd ~/ros_catkin_ws

Now to actually install the workspace (NOTE: these last steps will take approximately 90 minutes to complete):

rosinstall_generator ros_comm --rosdistro jade --deps --wet-only --tar > jade-ros_comm-wet.rosinstall

wstool init -j8 src jade-ros_comm-wet.rosinstall

    wstool init -j8 src jade-ros_comm-wet.rosinstall

sudo apt-get install -y libboost-all-dev python-empy libconsole-bridge-dev \
    libtinyxml-dev liblz4-dev libbz2-dev

    ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release

    Finally:

    sudo apt-get install python-netifaces

From here you will be able to run ROS within your catkin workspace!

Supplementary instructions: to make sure that your apt-get commands are indexing repositories correctly, edit sources.list and add the components "contrib" and "non-free" if they are not already there. In ros-latest.list, add deb-src https://packages.ros.org/ros/ubuntu jessie main. As an example:

    deb main contrib non-free

    deb-src main contrib non-free

Note that to edit ros-latest.list you will need to cd into sources.list.d. You can navigate to these locations by running the following command:


    cd /etc/apt/

Note that you will need to be at the machine level to run this command. Some sites may not be approved by your OS: it is possible that during the apt-get update portion of the above instructions you receive an error wherein an index was not checked because it failed certification. In order to bypass certificate verification for certain sites, enter the following command:

    apt-get update -o Acquire::https::deb.nodesource.com::Verify-Peer=false

The deb.nodesource.com portion of the command above depends on which site you want certified to your system.

    B.1.1 Non-source Installation

If you are not able to do a source installation, there are a few options available to you. You can either clone one of the images already available on one of the eduMODs and flash it onto a new eduMOD, or you can try an installation on another image. In the past, a previous group decided to install ROS on an Ubuntu image designed specifically for BeagleBones, called bone-ubuntu. ROS itself is intended for installation on the Ubuntu platform, and you will be able to simply follow the ROS wiki for installation. However, if you do decide to adopt this approach, you will need to download the librobotcontrol library onto your new platform as well as make ROS aware that you are adding an external library into the system. Both of these tasks were deemed too arduous and time-consuming, so they were not pursued.

    B.2 Building the eduMOD robot

In order to build the eduMOD robot we referenced the following video made by Mark Yoder: https://www.youtube.com/watch?v=BIMb8D5RdGA. In this video Mark does a good job of going through how to set up the eduMIP platform from the Renaissance Robotics kit. For our application we changed this into a tricycle DDMR, the eduMOD. As such, we recorded our own video of building the eduMOD robot that includes attaching the separate caster attachment alongside setting up the vertical standoffs. If you are looking to build the robot, either video should suffice.


  • Appendix C

    Simulation Appendix

C.1 Creating and Modelling the eduMOD robot in VREP-ROS

In order to begin modelling the eduMOD robot in VREP-ROS we started by looking at the following video tutorial: https://www.youtube.com/watch?v=nLKLu4Hw_mU. In this tutorial the author does an excellent job of showing each step as they complete it. For our simulation model, we did not use the eduMOD platform we designed in VREP due to problems with how this model was created. As can be seen from the video, the model is not very accurate to the eduMOD itself, and the way that ball casters are modelled in VREP is very non-ideal. In order to add a frictionless ball caster, as is required to model the eduMOD correctly, the only part the designer can use is a force sensor. This can create many problems for lateral movement, since the force sensor was never intended to be used as a ball caster, and vigorous movements can cause the model to behave erratically and clip into the ground.


  • Appendix D

    EduMOD Assembly Tutorial

    D.1 Required materials

• BeagleBone Blue microcomputer

    • eduMIP kit

    • Super glue

    • 3D printed battery clip

    • 3D printed ball caster adapter

    • 3D printed standoffs

    • Pololu ball caster with 1 plastic ball and rollers

    • Solder

    • Soldering iron

    • Phillips head screwdriver set

    D.2 Start

The parts shown in Fig. D.1 are the parts as they are shipped from the vendors. Make sure that you have all of the pieces shown in the figure before continuing.


    Figure D.1: eduMOD Parts

    D.3 Drivetrain

    Included in the eduMIP kit should be a drivetrain and two motors as seen in Fig. D.2.

    Figure D.2: eduMOD Drivetrain

To attach the motor leads it is first necessary to disassemble the drivetrain. This can be done by removing the screws highlighted above. Next, it is necessary to solder the 2-pin ZH connectors onto the motor terminals. To do this, arrange the motors as seen in Fig. D.3 with the positive terminals facing upwards and solder the ZH connectors on.

    Figure D.3: eduMOD Motors


    After this is done, the drivetrain should be reassembled as shown in Fig. D.4.

    Figure D.4: eduMOD Assembled Drivetrain

    D.4 Bulkhead

After completing the drivetrain assembly it is now time to attach the bulkhead. The first step is attaching the 4-pin ZH connectors to the encoders; then all the wires should be routed to the front of the drivetrain and the bulkhead should be placed over top of them, as can be seen in Fig. D.5.

    Figure D.5: eduMOD Assembled Bulkhead Front View

Next, the bulkhead should be screwed in using the long machine-threaded screws that are included in the eduMIP kit, as can be seen in Fig. D.6.


    Figure D.6: eduMOD Assembled Bulkhead Rear View

    D.5 Complete Assembly

All that is necessary to complete the assembly is to glue the Pololu ball caster to the adapter, glue the adapter to the battery clip, and screw the battery clip into the standoffs. The completed assembly of the eduMOD can be seen in Fig. D.7.

    Figure D.7: eduMOD Complete Assembly


  • Appendix E

    OpenSCAD Tutorial

    E.1 Introduction

OpenSCAD is an easy-to-use, open-source 3D modeling program that forgoes the common GUI-based interface styles in favor of a programming environment. While this software may not be for everyone, it has many great advantages in the field of engineering. OpenSCAD removes the tedium of learning a 3D modeling package and all its associated quirks, allows for a quick transition from programming to 3D modeling, provides as much precision as any other modeling software on the market, allows for modular and rapidly changeable 3D models, provides unique functionality such as for loops, and allows for much faster 3D modeling than would be possible with traditional software.
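The for-loop and module support mentioned above can be illustrated with a short sketch. This example is not taken from the report's own models (the file name and all dimensions here are arbitrary); it simply places four identical posts at the corners of a square:

```openscad
// forLoopExample.scad -- illustrative only; dimensions are arbitrary
spacing = 20; // distance between posts [mm]
post_r = 2;   // post radius [mm]
post_h = 10;  // post height [mm]

// a reusable module, analogous to a function in ordinary programming
module post() {
    cylinder(h = post_h, r = post_r);
}

// nested for loops place one post at each corner of the square
for (x = [0, spacing]) {
    for (y = [0, spacing]) {
        translate([x, y, 0]) post();
    }
}
```

Changing spacing or post_h regenerates the entire layout, which is exactly the kind of quick parametric edit that is tedious in a GUI-based modeler.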

    E.2 Union

Union can be used to take the sum of two shapes. In this example the sphere and the cube will be combined to make a cube with round protrusions.

    unionExample.scad

cubeSize = 15;
sphereSize = 10;

union() {
    cube(cubeSize, center=true);
    sphere(sphereSize, center=true);
}

    E.3 Intersection

Intersection can be used to take the shared volume of two shapes. In this example the shape that is produced is a combination of a cube and a sphere. The edges and corners of


the cube are cut off because the sphere does not intersect with them. Likewise, the resulting shape has six flat sides due to the cube not intersecting with the entirety of the sphere.

    intersectionExample.scad

cubeSize = 15;
sphereSize = 10;

intersection() {
    cube(cubeSize, center=true);
    sphere(sphereSize, center=true);
}

    E.4 Difference

Difference can be used to take the difference of two shapes. In this example the order of the two shapes matters. Since the cube is first, the sphere will be subtracted from it. In this case all that is left are the edges and corners of the cube.

    differenceExample.scad

cubeSize = 15;
sphereSize = 10;

difference() {
    cube(cubeSize, center=true);
    sphere(sphereSize, center=true);
}
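Because order matters in difference(), swapping the operands produces a very different shape. The following reversed sketch is not from the original report (the file name is hypothetical; it reuses the same sizes as the examples above): subtracting the cube from the sphere leaves only the six spherical caps that protrude through the cube's faces.

```openscad
// reversedDifferenceExample.scad -- illustrative only, not from the report
cubeSize = 15;
sphereSize = 10;

difference() {
    sphere(sphereSize);            // the sphere now comes first...
    cube(cubeSize, center = true); // ...so the cube is subtracted from it
}
```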

    E.5 Further Reading

Hopefully these three examples help to highlight the usefulness of OpenSCAD. More reading can be found in the Help section at the top of the window. Full documentation as well as a cheat sheet are provided with the program.


  • Appendix F

    3D Modeling Code

F.1 OpenSCAD Code for Caster

    eduMODCaster.scad

$fn = 100;

module new_bracket(){
    radius = 17;
    height = 20;
    thickness = 2;
    mounting_rad = 7;
    mountinghole_rad = 1.45;
    screw_length = 6;
    top_radius = 10.5;
    poly_height = 12;
    poly_width = 6;
    poly_depth = 3;
    poly_radius = (sqrt(pow(radius,2)-pow((poly_width/2),2)));
    poly_radius_top = (sqrt(pow(top_radius,2)-pow((poly_width/2),2)));
    poly_points = [
        [(poly_radius-poly_depth),(poly_width/2),thickness],                  //0 bottom left
        [(poly_radius-poly_depth),(-poly_width/2),thickness],                 //1 bottom right
        [poly_radius,(-poly_width/2),thickness],                              //2 bottom right
        [poly_radius,(poly_width/2),thickness],                               //3 bottom left
        [(poly_radius_top-poly_depth),(poly_width/2),poly_height+thickness],  //4 top left
        [(poly_radius_top-poly_depth),(-poly_width/2),poly_height+thickness], //5 top right
        [poly_radius_top,(-poly_width/2),poly_height+thickness],              //6 top right
        [poly_radius_top,(poly_width/2),poly_height+thickness]];              //7 top left

    poly_faces = [
        [0,1,2,3],  // bottom
        [4,5,1,0],  // front
        [7,6,5,4],  // top
        [5,6,2,1],  // right
        [6,7,3,2],  // back
        [7,4,0,3]]; // left

    difference(){
        union(){
            cylinder(h = thickness, r = radius); // base
            for(extrusion = [0:120:360]){
                rotate([0,0,extrusion]){
                    rotate([0,0,180]){
                        polyhedron(poly_points, poly_faces); // arm connections
                    }
                    translate(v = [mounting_rad, 0, thickness]){
                        cylinder(h = (screw_length-thickness), r = mountinghole_rad+1.5); // hole extrusions
                    }
                }
            }
            translate(v = [0,0,(poly_height+thickness)]){
                cylinder(h = thickness, r = top_radius); // top
            }
        }
        for(extrusion = [0:120:360]){
            rotate([0,0,extrusion]){
                translate(v = [mounting_rad, 0, -1]){
                    cylinder(h = (screw_length + 2), r = mountinghole_rad); // holes
                }
            }
        }
    }
}

new_bracket();

F.2 OpenSCAD Code for Battery Clip

    eduMODBatteryClip.scad

$fn = 50;
length = 15;
radius = 1;
offset = radius * 2;
union(){
    import("C:/Users/dakot/Documents/3dModels/EduMipClip.stl");
    minkowski(){
        union(){
            translate(v = [6 + radius,-length,6 + radius]){
                cube([4 - offset,length,17 - offset]);
            }
            translate(v = [44.7 + radius,-length,6 + radius]){
                cube([4 - offset,length,17 - offset]);
            }
            translate(v = [6 + radius,-length,21.5 + radius]){
                cube([42.7 - offset,length,2.5 - offset]);
            }
        }
        rotate(a = 90, v = [1,0,0]){
            cylinder(r = radius, h = length);
        }
    }
}

F.3 OpenSCAD Code for Standoffs

    eduMODStandoff.scad

$fn = 100;

height = 24; // mm
hole_radius = 1.15; // mm
wall_thickness = 1.75; // mm

difference(){
    cylinder(h = height, r = hole_radius + wall_thickness);
    translate(v = [0,0,-1]){
        cylinder(h = height + 2, r = hole_radius);
    }
}


  • Appendix G

    3D-Printing Tutorial

    G.1 Introduction

3D printing is typically a very easy process that requires little time investment to get right; however, it can take a lot of time investment to get perfect. This tutorial gives some beginner tips on ensuring print quality and troubleshooting problems as they arise.

    G.2 Slicing Software

There are many commonly used slicing programs on the market, the most common being Slic3r and Cura. Despite differences in interface and features, all slicers offer the same basic functionality: they take a 3D model and convert it to usable G-code for the 3D printer. As of the writing of this tutorial, Cura seems to be the most user friendly out of the box. It has many presets for a number of 3D printers and provides great print quality. Slic3r is not as user friendly and lacks the great interface that Cura has; however, it has some additional features that Cura lacks. For the purposes of this tutorial it is recommended to use Cura.

    G.3 Material Choice

There are many commonly used types of 3D printer filament. The two most common are ABS and PLA based. ABS is a tough, petroleum-based industrial plastic that provides higher temperature resistance than PLA along with greater flexibility. ABS tends to expand and contract a lot whilst printing and can be very tough to print without a heated bed. In contrast, PLA is considered to be the easiest plastic to print and provides very stiff parts at the expense of being slightly brittle. Additionally, PLA is susceptible to softening due to heat when placed in intense direct sunlight.


    G.4 Print Settings

For PLA it is recommended to use a temperature of around 185°C to 220°C and a bed temperature of about 60°C. ABS needs to be printed at 220°C to 250°C with a bed temperature of 85°C to 100°C. Cooling should generally remain off for ABS to mitigate warping; PLA prints can usually handle maximum fan speed.

It is recommended to orient parts to minimize overhangs, and if significant overhangs are still present it may be necessary to enable supports in the slicer.

Make sure to match all of the print settings to the printer that is being used. The filament diameter, nozzle size, bed size, and z-axis size will all need to be set up in the slicer that is being used.

Layer height is entirely up to the user; smaller layer heights result in much higher quality prints at the expense of dramatically increased print time. Care should be taken not to attempt to print at too large or too small of a layer height. Typically, with a 0.4 mm nozzle, layer heights should not be much smaller than 0.1 mm or much larger than 0.2 mm.

Infill is a minor consideration in contrast with the other settings. Infill has a small effect on print time unless the part being printed has a lot of volume. Increase the infill percentage for stronger parts.

There are many other settings that play a minor role in print quality. However, for the purposes of this tutorial these are the important ones.

    G.5 Printing

Before printing it is first necessary to level the print bed. This can be done fairly simply by sliding a sheet of paper underneath the hotend of the printer. The bed leveling screws can be loosened until the paper begins to be pressed against the hotend and noticeable friction is felt. If the paper cannot fit between the bed and the hotend, the bed screws need to be lowered. The best approach when leveling the bed is to check one corner at a time in a clockwise or counter-clockwise direction.

Next, the SD card with G-code can be inserted into the printer and the 3D printer can be powered on. Then select the file to print on the display. The printer will begin heating up. After the printer has finished heating, it will home all the axes and begin printing. After the printer has finished printing, the part can be taken off of the bed.


    G.6 Troubleshooting

If parts are looking saggy or melted, it is likely that the temperature is too high or the printer is overextruding. These problems are somewhat related, but the first course of action should be to reduce the print temperature and try again. If doing this still does not result in good prints, the extrusion multiplier in the slicer can be reduced.

If the printer is producing layers that are lacking material, pockmarks, poor part strength, or other underextrusion-related defects, the temperature can be increased, or the extrusion multiplier can be increased.

If a print comes off of the print bed in the middle of a print, the main suspect is poor bed adhesion. To improve this, the hotend can be moved closer to the print bed, the print bed temperature can be increased, or the hotend temperature can be increased for the first layer.

    G.7 Further Reading

More in-depth information is available online, and it may require reading different sites to troubleshoot issues effectively.


  • Appendix H

    Modelling

    H.1 Area Coverage Matlab Sim

This code was used to begin testing the area coverage algorithm itself, simulated entirely in Matlab.

    Area Coverage Matlab Sim Code

%function areaCoverageAdJo_V2()
close all
clear
clc

area = [0, 5, 5, 0;
        0, 0, 5, 5];     % [m]; vertices of area
agents = [1,   4, 3.25;
          2.5, 3, 0.75]; % [m]; agent positions

gridSize = 0.05; % [m]; grid size of the area

t = 0;      % [s] Time variable
Ts = 0.5;   % [s] Sampling time
Tf = 20;    % [s] Final simulation time
Ti = Tf/Ts; % Time used in indexing
Vk = 0.25;  % [m/s] Velocity of agents - NOTE: the speed is much higher than it
            % will be in implementation, for simulation purposes

% %%=================== 3D video of risk density function =================%%
% vid = VideoWriter('videos/3DPhi.avi');
% vid.Quality = 100;
% vid.FrameRate = 5;
% open(vid);
% %%=======================================================================
%%=========================================================================
q = 1; % Variables used to index in for loops
r = 1;
s = 1;

yy = min(area(2,:)):gridSize:max(area(2,:));
xx = min(area(1,:)):gridSize:max(area(1,:));

sigma = 0.35; % Density factor
n = 3;        % Number of agents
alpha = 5;    % Scaling variable used in sensor function
beta = 0.5;   % Scaling variable used in sensor function

qBar = [2.5, 4,    1.5;
        4,   0.75, 1];

positions(1,:) = [agents(1,1), agents(2,1)];
positions(2,:) = [agents(1,2), agents(2,2)];
positions(3,:) = [agents(1,3), agents(2,3)];

gridArray = zeros(1,2);
for i = 1:length(xx)
    gridArray = [gridArray; repmat(xx(i),length(yy),1), yy'];
end

gridArray = gridArray(2:end,:);

phi = getDensity(gridArray, qBar, sigma); % Phi function to calculate density

gridArrayDensity = [gridArray phi];

figure(1) % Plotting the coverage metric
xlabel('Time [s]','Interpreter','latex','fontsize',14);
ylabel('Coverage','Interpreter','latex','fontsize',14);
hold on

figure(2) % Plotting the agents moving to modified centroids

for t = 0:Ts:Tf % Main timing loop

    agents = [positions(1,1), positions(2,1), positions(3,1);
              positions(1,2), positions(2,2), positions(3,2)];

    %======================================================================
    for i = 1:size(gridArray,1)
        dist2Agent1(i) = sqrt(((gridArray(i,1) - agents(1,1))^2) + ((gridArray(i,2) - agents(2,1))^2));
        dist2Agent2(i) = sqrt(((gridArray(i,1) - agents(1,2))^2) + ((gridArray(i,2) - agents(2,2))^2));
        dist2Agent3(i) = sqrt(((gridArray(i,1) - agents(1,3))^2) + ((gridArray(i,2) - agents(2,3))^2));

        % For V1
        if ((dist2Agent1(i) < dist2Agent2(i)) && (dist2Agent1(i) < dist2Agent3(i)))
            index(i) = 1;
            tempPointsV1(i,:) = gridArray(i,:);
        end
        % For V2
        if ((dist2Agent2(i) < dist2Agent1(i)) && (dist2Agent2(i) < dist2Agent3(i)))
            index(i) = 2;
            tempPointsV2(i,:) = gridArray(i,:);
        end
        % For V3
        if ((dist2Agent3(i) < dist2Agent1(i)) && (dist2Agent3(i) < dist2Agent2(i)))
            index(i) = 3;
            tempPointsV3(i,:) = gridArray(i,:);
        end
    end
    %======================================================================
    for i = 1:size(tempPointsV1)
        if (index(i) == 1)
            r_i_1(q) = sqrt(((gridArrayDensity(i,1) - agents(1,1))^2) + ((gridArrayDensity(i,2) - agents(2,1))^2));
            f_r1(q) = alpha * exp(-beta*(r_i_1(q)^2));
            pointDensityV1(q,:) = gridArrayDensity(i,:);
            q = q + 1;
        end
    end
    for i = 1:size(tempPointsV2)
        if (index(i) == 2)
            r_i_2(r) = sqrt(((gridArrayDensity(i,1) - agents(1,2))^2) + ((gridArrayDensity(i,2) - agents(2,2))^2));
            f_r2(r) = alpha * exp(-beta*(r_i_2(r)^2));
            pointDensityV2(r,:) = gridArrayDensity(i,:);
            r = r + 1;
        end
    end
    for i = 1:size(tempPointsV3)
        if (index(i) == 3)
            r_i_3(s) = sqrt(((gridArrayDensity(i,1) - agents(1,3))^2) + ((gridArrayDensity(i,2) - agents(2,3))^2));
            f_r3(s) = alpha * exp(-beta*(r_i_3(s)^2));
            pointDensityV3(s,:) = gridArrayDensity(i,:);
            s = s + 1;
        end
    end
    q = 1; % Resetting looping variables
    r = 1;
    s = 1;
    %======================================================================
