Aalto University, School of Electrical Engineering
Automation and Electrical Engineering (AEE) Master's Programme
ELEC-E8004 Project work course, Year 2019

Final Report

Project #15: Autonomous crane for warehouse management

1st place in the SICK innovation competition 2019. Photo: Tero Lehto (Kauppalehti).

Date: 30.5.2019
Joakim Högnäsbacka, Mikko Lähteenmäki, Joonas Pulkkinen, Janne Salovaara


Information page

Students: Joakim Högnäsbacka, Mikko Lähteenmäki, Joonas Pulkkinen, Janne Salovaara
Project manager: Mikko Lähteenmäki
Official Instructors: Timo Oksanen, Panu Kiviluoma
Other advisors: Juuso Autiosalo, Matti Kleemola
Starting date: 10.1.2019
Completion date: 5.4.2019
Approval: The Instructor has accepted the final version of this document. Date: 30.5.2019


Abstract

The project “Autonomous crane for warehouse management” started as an entry to the SICK innovation competition 2019, in addition to being a project in the Project Work course for the AEE students and in the Mechatronics Project course for the mechanical engineering students in the project team. The team was provided with a high-end LIDAR by SICK, and in the context of the competition the ultimate goal was to use the LIDAR to create a new innovation. Given access to a programmable overhead crane and a project budget, it was possible to build a system combining the crane and the LIDAR. With the time and resources available, and with the team's desire to build something practical and challenging, the scope of the project was set to enabling the crane to operate automatically in a warehouse-type environment using the LIDAR. The desired functionality was that the crane would detect an object in a certain area, move the crane hook above it so that the object could be attached to the hook manually, and then move and place the object in a suitable position at a target location. Additionally, the crane was supposed to be able to recalculate its path to the next location and avoid unexpected obstacles and humans. In the end, the final implementation of the enhanced crane possessed most of the planned features and was able to perform a pick-and-place demonstration. The features left out due to time and complexity constraints were mainly the final object positioning and human avoidance. The system is also limited by the fact that it can currently hold only a limited map of the obstacles. Nevertheless, the autonomous crane successfully demonstrates the feasibility of such an application, and it achieved first place in the SICK innovation competition.


Table of Contents

1. Introduction
2. Objective
3. Project plan
4. Ilmatar Smart crane
5. Electronics and computers
   5.1 Stepper motor code
6. Dataflow
7. LIDAR attachment
8. Data handling and object detection
9. Crane planning and movement
   9.1 Crane control logic
10. Demonstration video
11. Overview of results
12. Reflection of the project
   12.1 Reaching objective
   12.2 Timetable
   12.3 Risk analysis
   12.4 Project meetings
   12.5 Quality management
13. Discussion and conclusions
List of appendices
References


1. Introduction
The final product of the project is an autonomous crane that can operate in a warehouse environment and evade common obstacles in its path during operation. The project team consisted of 4 Control, Robotics and Autonomous Systems students and 3 Mechatronics students. The project was also our entry in the SICK innovation competition 2019, which it won: the prize was 10 000 euros worth of sensors or training for the university and a trip to the SICK headquarters in Germany. We were given access to the Aalto University K3 laboratory, where we had at our disposal the Konecranes Ilmatar crane and the SICK MRS6000 3D LIDAR.

2. Objective
The objective of the project was to test the feasibility of an autonomous crane system that can detect obstacles in its path and use that information to re-plan, if necessary, its route from the pick-up area to the drop-off area. Such a system could be deployed in warehouse environments to evade the objects and obstacles common in this type of operating environment. The initial objective was that it could evade dynamic obstacles, such as forklifts and mobile robots operating in the same area, as well as detect humans, differentiate them from other obstacles and create a larger safety perimeter around humans than around other obstacles. The idea was to develop a functional prototype partially integrated with the crane. The crane could detect the exact position of the object to be picked up within the pick-up area and move to the centre of the object. The final design was expected to be able to perform a demonstration illustrating the key features of the autonomous crane, including sufficient object detection accuracy and adaptive path planning.

3. Project plan

The purpose of the project plan (appendix 3) was to serve as a reference for what was initially envisioned to be a satisfactory outcome of the project and to estimate what arrangements, schedule and resources would be necessary to achieve it. The most central part of the document is where the system to be built is decomposed into work packages, each representing a distinct subsystem or functionality, which are in turn split into several tasks. The work packages were ‘LIDAR data handling’, ‘LIDAR attachment and movement’, ‘Object detection’, ‘Trajectory planning’ and ‘Integration’. These work packages were used in project scheduling and in distributing the estimated workload, and they acted as milestones for measuring the progress of the project. Besides this breakdown, the project plan included rough estimates of the available weekly working hours and total costs, and stated the team’s policy on project management, meetings, communication, risks and quality management.

4. Ilmatar Smart crane
The Konecranes Ilmatar crane is an overhead crane with a maximum lifting weight of 3.2 tonnes and an embedded anti-sway function. The anti-sway effectively removes any sway that occurs during movement, which makes the operation of the crane both safer and faster. This is essential to our implementation, as we do not need to consider the effects of sway, and it makes the movement more robust and precise. The Ilmatar smart crane was one of the starting points of the project, as we were allowed to use it for the project work. The crane is located in the K3 laboratory on the Aalto University Otaniemi campus, where the group members could work with it. Another, solely manually operated crane shares the same space, and both cranes use the same rails to move across almost the full length of the room, whose size is approximately 30 x 12 meters. The smart crane slows down and eventually stops automatically if it moves too close to the walls or to the other crane.
The smart crane is semi-automatic and intelligent: certain coordinates can be saved and later driven to at the press of one button. The unmodified smart crane cannot automatically get around obstacles, but certain “off-limits” areas can be defined, and the crane stops if it moves too close to such an area. For example, the monitor room in the middle of the hall is considered off-limits. The crane can also be operated remotely with a remote controller with joysticks. The crane can move in 3 degrees of freedom: the bridge moves along the long side of the room, the trolley moves along the short side, and the hoist moves up and down from the floor to a height of 3 meters. The crane also provides an OPC UA server, which can be used to operate it remotely with minimal interaction with the remote controller. An OPC UA client can connect to the server by providing a valid access code requested from Konecranes. A watchdog value needs to be incremented constantly if the user wants to keep controlling the crane via OPC UA. It is possible to set a desired speed and direction for the bridge, trolley and hoist separately to control the movement of the crane. The crane had been used prior to this project to move automatically with a Python script that connected to the crane's OPC UA server. The script could be used to acquire the position of the crane and to give speed control commands using the OPC UA Python library from GitHub [1]. The script did not work correctly at first because the OPC UA architecture had changed, so it was modified to use the correct OPC UA addresses, and the value setting was fixed to send a proper data value instead of just the default set value.
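The following sketch outlines this control pattern with the python-opcua library [1]. The endpoint URL, node identifiers and data types are placeholders, as the real addresses require the Konecranes documentation and access code.

import time
from opcua import Client, ua

client = Client("opc.tcp://192.168.1.2:4840")  # placeholder endpoint
client.connect()

# Hypothetical node ids, for illustration only.
watchdog = client.get_node("ns=2;s=Crane.Watchdog")
bridge_speed = client.get_node("ns=2;s=Crane.Bridge.TargetSpeed")
bridge_pos = client.get_node("ns=2;s=Crane.Bridge.Position")

def write(node, value, vtype):
    # Send an explicit DataValue, matching the value-setting fix described above.
    node.set_value(ua.DataValue(ua.Variant(value, vtype)))

try:
    counter = 0
    while True:
        counter += 1
        write(watchdog, counter, ua.VariantType.Int32)   # keep control alive
        write(bridge_speed, 0.2, ua.VariantType.Float)   # speed command; sign gives direction
        print("bridge position:", bridge_pos.get_value())
        time.sleep(0.1)
finally:
    write(bridge_speed, 0.0, ua.VariantType.Float)
    client.disconnect()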

5. Electronics and computers

To control the system and collect data, a relatively small on-board computer was needed on the crane, together with a more powerful main computer that could be located nearby and act as the user interface. The on-board computer chosen was the Raspberry Pi 3 Model B: its 1.4 GHz 64-bit quad-core ARM Cortex-A53 processor is sufficiently powerful to collect the data from the LIDAR scanner, and it runs an easy-to-work-with Linux-based operating system. Its WiFi, Ethernet and USB connections are also essential to the working principles of the system: communication to the main computer is done over WiFi, communication to the LIDAR over Ethernet, and communication to the stepper motor rotating the LIDAR over USB. The Raspberry Pi is powered by a 24 V power source attached to a 230 V power socket located on top of the crane trolley. The Raspberry Pi runs a C# executable that collects the data from the LIDAR, parses it and sends it to the main computer using the TCP protocol. A Python script, launched from the C# program, controls the rotation of the stepper motor turning the LIDAR and reads its angle. The main computer used for the general processing of the system is a Windows laptop, which runs another C# executable connecting to the Raspberry Pi via WiFi and the TCP protocol. For easier use of C#, it was decided that this computer should run a Windows operating system. All of the wireless communication was done on the same wireless network, so that the crane, the Raspberry Pi and the main computer were connected to each other. The crane controller Python script also runs on this computer, launched from within the C# program. The laser scanner used in this project is the SICK MRS6000 3D LIDAR scanner. It measures the distance of objects from the time of flight and the angle of a laser beam, which is deflected by a rotating mirror that gives the scanner its 3D capability. The LIDAR has a 120 x 15 degree field of view, split into 24 layers, which it scans at a 10 Hz frequency. Its working range is from 0.5 meters to 200 meters, which is rather excessive for this project, as the scanner points downwards and can only see up to about 5 meters around it. The scanner is powered by another 24 V power supply connected to the crane trolley's 230 V power socket. Communication with the scanner is done over Ethernet using the TCP protocol.


The LIDAR was rotated by a Trinamic PD-1161 stepper motor to give it a larger field of view. The stepper motor works with a 24 V power supply, connected to the 230 V power socket on top of the crane trolley. The motor provides 1 Nm of torque, which was considered sufficient to rotate the LIDAR smoothly and to reverse the direction of rotation almost instantly when it reaches the edge of the desired 180 degree sweep. The motor also has an internal encoder, which was used to monitor the motor angle at a resolution of 25600 steps per rotation. The encoder always starts from the same position, which makes it well suited for getting the angle correct every time. The motor is connected to the Raspberry Pi by a USB cable, through which a Python script sends the control signals and reads the current angle. The motor uses Trinamic's own motion control language (TMCL), which the Python script uses to set and read the relevant motor parameters.

5.1 Stepper motor code
The Python script that controls the stepper motor is called by the C# executable running on the Raspberry Pi. The script uses a Python library from GitHub [2] called pyTMCL to send commands to the stepper motor. TMCL stands for the Trinamic Motion Control Language, which is widely used to control Trinamic stepper motors. We first tried to use the original TMCL Python library [3], but ran into a ValueError when trying to send stepper angle values to the encoder. Luckily, another person had had the same issue and had fixed it in their clone of the library, so we used their version instead and the problem was solved. The Python script opens a serial port on the Raspberry Pi through the RS-485 adapter, and a bus instance is connected using this open serial port. From that point we can issue TMCL commands according to the documentation [4] to move the motor at a desired angular velocity, acceleration and direction. We first tried to move the motor with absolute position values, but this caused too much error accumulation over time, so the amount of rotation drifted slightly with each turn. The final implementation uses very simple logic instead: keep reading the encoder values and keep issuing turn-right commands until the motor is at the right endpoint limit, then stop and turn the motor left until it reaches the left limit, and stop again. Repeating this logic inside a while-loop makes the stepper motor rotate just as desired, and it allows the right and left endpoints to be set and changed when necessary. While testing the code we found that repeatedly giving move commands 5 degrees at a time did not cause noticeable jitter, as might have been expected. Inside the while-loop, we also print the encoder value roughly every second to the standard output, which is read by the main C# program on the Raspberry Pi. This gives the program the rotation status, i.e. the angle to which the stepper motor has currently turned the LIDAR. The rotation angle is then sent to the Windows laptop in the same C# struct as the LIDAR data, so we know exactly how to rotate the point cloud obtained by the current scan of the LIDAR.
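As an illustration of this sweep logic, the sketch below talks to the motor with raw TMCL datagrams over the serial port rather than through the pyTMCL wrapper used in the project. The command numbers (ROR = 1, ROL = 2, MST = 3, GAP = 6) and the 9-byte datagram format come from the TMCL firmware manual [4]; the serial device path and the encoder axis parameter number are assumptions.

import struct
import time
import serial

PORT = "/dev/ttyACM0"   # assumed device path of the RS-485/USB adapter
ADDR, MOTOR = 1, 0      # module address and motor number
ROR, ROL, MST, GAP = 1, 2, 3, 6
ENCODER_POS = 209       # assumed axis parameter number for the encoder position
LIMIT = 12800           # 180 degrees at 25600 encoder steps per rotation
SPEED = 400             # velocity in internal units

def tmcl(ser, cmd, type_=0, value=0):
    # A TMCL datagram is 9 bytes: address, command, type, motor,
    # 32-bit big-endian value and a checksum over the first 8 bytes.
    frame = struct.pack(">BBBBi", ADDR, cmd, type_, MOTOR, value)
    ser.write(frame + bytes([sum(frame) & 0xFF]))
    reply = ser.read(9)
    assert reply[2] == 100, "TMCL error status %d" % reply[2]
    return struct.unpack(">i", reply[4:8])[0]

with serial.Serial(PORT, 9600, timeout=1) as ser:
    while True:
        tmcl(ser, ROR, value=SPEED)                    # sweep right...
        while tmcl(ser, GAP, ENCODER_POS) < LIMIT:     # ...until the right endpoint
            print(tmcl(ser, GAP, ENCODER_POS))         # report the angle on stdout
            time.sleep(1.0)
        tmcl(ser, MST)                                 # stop
        tmcl(ser, ROL, value=SPEED)                    # sweep back left...
        while tmcl(ser, GAP, ENCODER_POS) > -LIMIT:    # ...until the left endpoint
            print(tmcl(ser, GAP, ENCODER_POS))
            time.sleep(1.0)
        tmcl(ser, MST)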

6. Dataflow
Overall, the equipment relevant to the data flow of the system, shown in figure 1, includes the SICK MRS6000 LIDAR, the Raspberry Pi 3B, the Trinamic PD-1161 stepper motor, the main computer and the Ilmatar smart crane. The LIDAR and the Raspberry Pi are connected in their own subnetwork via Ethernet and communicate using the TCP protocol. Sending the “sEN LMDscandata 1” command to the scanner starts a continuous stream of the status, angle, step, distance, energy and layer data of the measured points. The C# program on the Raspberry Pi creates this TCP connection, reads the hexadecimal values that the LIDAR sends and parses them into readable values, which are stored in a C# struct as seen in figure 1. The Raspberry Pi is also connected to the Trinamic PD-1161 stepper motor via USB.
This connection is controlled by a Python script, which reads the motor's current encoder position in steps and commands the motor to turn until it is 12800 steps (180 degrees) from its initial position, then repeats the same command in the opposite direction until it reaches the initial position minus 12800 steps. The stepper motor encoder value is transformed into angle data by dividing the current step value by 25600 and multiplying by 360 degrees; this angle is stored in the C# struct as the stepper motor angle. The main computer is wirelessly connected to the Raspberry Pi and the Ilmatar smart crane via WLAN. The main computer acts as the server for the Raspberry Pi's TCP communication and reads the struct of scan data. This data is then used in the C# program that acts as the main program and controls the Python scripts. The struct data is transformed into a 3D point cloud, which is visualized using the OpenGL library. The C# program also parses the point cloud into an occupancy grid, which is stored in the ‘walls’ text file; these ‘walls’ are the obstacles that the crane must evade. The C# program launches the Python scripts as subprocesses in new threads that run in parallel with the data collection. The Python program keeps writing the current position of the crane to the C# console, which the C# program reads simultaneously. The C# program reads the current planned path from the path text file and uses it to visualize the current path in the OpenGL window. The Python scripts are used for connecting to the crane via OPC UA, planning the path and deciding the control signals for the crane. The Python script reads the current walls from the walls.txt file and prints out the crane position. Once the crane is close enough to the target position or pick-up position, the C# code gives the Python script its final coordinates, hard-coded for small adjustments in position, as subprocess arguments. The Python script creates the OPC UA client to the crane's OPC UA server at the start of the subprocess and uses this connection to set and get values such as speed control, direction control and the crane position.


Figure 1. The data flow of the system, where the main computer gets its data from the Raspberry Pi and the crane.
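A condensed Python sketch of the Raspberry Pi side of this data flow is shown below; the real program is written in C#, so this is illustrative only. SICK CoLa ASCII telegrams are framed with STX/ETX bytes; the IP addresses, the port on the main computer and the read_encoder_steps helper are placeholders.

import socket

LIDAR_ADDR = ("192.168.0.10", 2112)    # assumed scanner address and CoLa port
MAIN_PC_ADDR = ("192.168.1.20", 5005)  # assumed main computer address
STX, ETX = b"\x02", b"\x03"

def read_telegram(sock):
    # Read one STX ... ETX framed CoLa telegram and return its payload.
    buf = b""
    while not buf.endswith(ETX):
        buf += sock.recv(4096)
    return buf[buf.find(STX) + 1:-1]

def steps_to_degrees(steps):
    # 25600 encoder steps correspond to one full rotation.
    return steps / 25600 * 360.0

def read_encoder_steps():
    # Hypothetical helper; see the stepper motor sketch in section 5.1.
    return 0

lidar = socket.create_connection(LIDAR_ADDR)
main_pc = socket.create_connection(MAIN_PC_ADDR)
lidar.sendall(STX + b"sEN LMDscandata 1" + ETX)  # start the continuous scan stream

while True:
    scan = read_telegram(lidar)                      # space-separated hex fields
    angle = steps_to_degrees(read_encoder_steps())
    main_pc.sendall(b"%.2f;" % angle + scan + b"\n")  # forward angle + raw scan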

7. LIDAR attachment
The LIDAR attachment is a simple structure designed to hold the LIDAR firmly in place on the crane trolley while the crane is moving and to provide a solid point from which the LIDAR scans can be performed. The top shaft structure is made of thick aluminium to reduce the vibration of the whole attachment during normal operation. The bottom part consists of the stepper motor mount, which allows the stepper motor to rotate the LIDAR mounting part a full 360° if necessary. The motor is placed on a rectangular aluminium plate and the motor shaft is connected to a bearing and a larger shaft; the bearing was needed to minimize friction during rotation. The larger shaft is directly connected to the piece holding the LIDAR. The main design principles were that the attachment should be sturdy, to minimize vibration; light, to reduce the required motor torque and thus the cost of the motor; and geometrically simple, to keep the calculation of the transformation matrix for state estimation easy. The LIDAR attachment and the metal piece connecting the trolley to the aluminium plate on which the motor is placed needed to be long enough that the bridge would not block the view.
At the same time, the attachment could not be too long, as a longer attachment would be heavier and less stable, and it would decrease the height from the floor and thus reduce the region visible to the LIDAR. The attachment is connected to the trolley with two Allen head bolts and to the LIDAR with two M6 bolts. The weight of the entire attachment, including the LIDAR, was used to calculate the required torque for the motor. The LIDAR weighs a little over 2 kilogrammes and the attachment another 2 kilogrammes. The motor was chosen based on the required torque, its embedded encoder and the relative ease of integrating it with the Raspberry Pi. Due to the way the wires are connected to the LIDAR, we decided to limit the rotation to 180° to avoid any issues with the wires twisting excessively. The final implementation of the attachment can be seen in figure 2. Our implementation of the attachment is not optimal, but sufficient to meet the current requirements. Our initial goal was to mount the LIDAR in a place from which it could see the entire area around the crane. This proved difficult because, while the field of view of the LIDAR is fairly wide, it has a limited view in the horizontal direction. Since the LIDAR had to see the area around the crane dynamically and needed to move with the crane, the only practical solution found was to mount it on the trolley. This implementation comes with at least two small flaws: the LIDAR may have trouble seeing the area “behind” when the crane is moving backwards, as the LIDAR is mounted on only one corner of the trolley, and the LIDAR cannot see past the object being lifted, which limits the available view. Early designs attempted to avoid these issues with a configuration in which the LIDAR would be mounted on a rail loop and would circle around the hook. This idea was later scrapped as too complex to implement, and it was deemed better to return to the simpler attachment design, with a stepper motor added to give the LIDAR a better view of the room.
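As a rough, illustrative version of that torque check (the masses are from the measurements above, while the effective radius and reversal time are assumptions): treating the roughly 4 kg of rotating mass as concentrated within about 0.1 m of the rotation axis gives a moment of inertia of I ≈ m·r² = 4 kg × (0.1 m)² = 0.04 kg·m². Reversing the sweep from about +2 rad/s to −2 rad/s in 0.2 s then requires a torque of τ ≈ I·Δω/Δt = 0.04 × 4 / 0.2 ≈ 0.8 Nm, which is consistent with choosing a motor rated at 1 Nm.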


Figure 2. The LIDAR scanner is attached to the trolley of the crane and rotated by the stepper motor.

8. Data handling and object detection
The LIDAR has a 120 x 15 degree field of view and is mounted on a stepper motor that rotates it back and forth through 180 degrees, enabling detection in all directions around the crane trolley. It is connected via Ethernet TCP communication to a Raspberry Pi, which sends the data wirelessly to a computer where it is parsed. The data is parsed using the known structure of the telegram and the order in which the information is sent. The data consists of distance information in spherical coordinates, together with the angle and energy intensity of each point. The data is scaled and an offset is added. A coordinate transformation from spherical to cartesian coordinates gives spatial distances in x, y, z coordinates: the z-axis is the height of the hook, and x and y are the positions of the crane bridge and trolley, respectively. More advanced and robust filtering algorithms are freely available, but since processing power was limited (one laptop runs several threads and programs in parallel), we developed an online filtering method that is simple, fast, adds little latency and is computationally light. The data is filtered according to spatial coordinates, with upper and lower limits in the z direction: points in the point cloud less than 0.2 m above floor level, as well as points more than 2 meters above the floor, are discarded. The data is also filtered by energy intensity to minimize the effect of reflections and other noise in the point cloud, so intensities below a certain threshold are disregarded. The work area is discretized into 0.5 x 0.5 meter 2D grid cells, and the occupancy of each cell is determined by the density of points within it.
Cells with point densities below a defined limit are considered free, and the crane is allowed to move through them. Rotational transformations for the direction of the LIDAR are updated according to the angle of the stepper motor, and translational transformations account for the offset between the position of the LIDAR and the centre of the hook. The stepper motor used for rotating the LIDAR was the Trinamic PD-1161, which outputs its current angle with high resolution and low latency. The current angle of the LIDAR was transmitted via the Raspberry Pi, in Python, to the computer, as depicted in figure 1. The stepper angle was read by the computer in C# and converted to radians by taking the fraction of the current value over the value corresponding to half a rotation. This allowed accurate updating of the LIDAR direction with respect to the hook. The direction was calculated from the current angle by performing the translation with matrix multiplications in C#; defining translation and scaling matrices in C# is easily done with the OpenTK library.

Obstacles are defined during movement as grid cells containing a high enough density of points. The pick-up area is a hardcoded 2 x 2 meter area, and objects within it are not regarded as obstacles, to prevent the path planning algorithm from evading the object. The pick-up area is discretized into smaller, 0.05 x 0.05 m cells to enable higher-resolution detection of the object. The centre of the object is calculated as the average of the occupied cells of the object within the pick-up area, in x, y coordinates. The height of the object was calculated by discarding the highest 20% of the z-values, to minimize the effect of noise; the height was then determined as the arithmetic average of the remaining z-points within the pick-up area, with an extra offset of +0.3 m to allow attachment of the object to the crane hook.

The graphical view of the program is shown in figure 3. It shows how the data was visualized in C# using the OpenTK library and OpenGL. A vertex buffer object (vbo) is created to store the buffer data received from the LIDAR, and for drawing the room grid and the A* grid. The LIDAR data comes in 24 layers with 924 points per layer, so the buffer data for one layer consists of 2772 values, as each point has three dimensions. Functions for drawing the points, grids, A* path, trolley and LIDAR were developed in C#. DrawLayer, which visualizes the points, is defined as a private void function with two parameters: the frame in the buffer and the layer. It uses OpenGL functions from the library, such as BindBuffer and DrawArrays; BindBuffer takes two parameters, a target buffer and the vbo. The other functions follow a similar principle but have different parameters. DrawGrid and DrawAstarPath take no parameters. The function used to draw the squares, such as the pick-up area, the trolley and the LIDAR, has two parameters: a 4x4 matrix giving the position of the square in the room, and a color vector that defines the color of the square in the visualization. All points in the buffer are drawn with the DrawLayer function, looping over every layer and every frame in the buffer. The buffer continuously stores the 40 latest frames. The window for the visualization is created with OpenTK.
The visualization includes the point-cloud data, the cells regarded as obstacles, the trolley, the LIDAR and the pick-up area. Points with a height value above a certain threshold are depicted in red and the rest in green; the pick-up area is a blue rectangle. The trolley is a tilted green rectangle, the LIDAR is depicted as a smaller rotating rectangle, and the route calculated by the A* algorithm is visualized as a line to the pick-up area. The connection to the LIDAR is handled in C#. The A* path is calculated in Python on the computer and read by the C# program from a file.


Figure 3. The visualization made with OpenGL in C#.
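To make the filtering and grid logic above concrete, here is a small numpy sketch; the real implementation is in C#. The z-limits, cell sizes and height offset match the values given in the text, while the array layout, the intensity threshold and the density limit are illustrative.

import numpy as np

GRID = 0.5            # occupancy grid cell size (m)
PICK_GRID = 0.05      # finer grid inside the 2x2 m pick-up area (m)
MIN_INTENSITY = 100   # assumed energy-intensity threshold

def spherical_to_cartesian(r, azimuth, elevation):
    # Convert LIDAR range/angle measurements to x, y, z in meters.
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

def filter_points(points, intensity):
    # Keep points between 0.2 m and 2.0 m above the floor with enough intensity.
    keep = (points[:, 2] > 0.2) & (points[:, 2] < 2.0) & (intensity > MIN_INTENSITY)
    return points[keep]

def occupancy(points, min_density=10):
    # Count points per 0.5 x 0.5 m cell; dense cells are obstacles ("walls").
    cells, counts = np.unique((points[:, :2] // GRID).astype(int),
                              axis=0, return_counts=True)
    return {tuple(c) for c, n in zip(cells, counts) if n >= min_density}

def object_centre_and_height(points):
    # Centre = mean x, y of the occupied fine cells; height = mean z after
    # discarding the highest 20% of values, plus the 0.3 m hook offset.
    cells = np.unique((points[:, :2] // PICK_GRID).astype(int), axis=0)
    centre = cells.mean(axis=0) * PICK_GRID + PICK_GRID / 2
    z = np.sort(points[:, 2])[: int(0.8 * len(points))]
    return centre, z.mean() + 0.3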

The algorithm for obstacle detection was not optimal, as a result of the limited view of the surrounding area and because small, thin and dark obstacles were not always noticed by the LIDAR until it was close to them. Object detection within the pick-up area worked very well, and the system was able to calculate the exact position of the object with high accuracy in the x, y and z directions. This can be seen in the demonstration video in appendix 1. The video also shows that the LIDAR does not provide a long enough horizon: the crane first chooses a route thought to be optimal, and only once it is close by is the map of the surroundings updated, at which point the algorithm notices that the supposedly optimal route is not free. It then recalculates the route from there, as it sees obstacles more clearly up close, and finds a way out. The data from the LIDAR is stored in a buffer, which allows multiple frame scans from different angles to be taken into account during path planning.

The initial plan included human detection and dynamic evasion. Human detection was omitted after discovering that the passive infrared (PIR) sensor meant to be used in sensor fusion with the LIDAR was too sensitive to any kind of movement. The PIR sensor would face the same direction as the LIDAR, and if the PIR output was true while the LIDAR detected something in the area, it could be concluded that a human was present. The field of view of the PIR was, however, too wide, so it was tested by inserting the sensor inside a coffee cup with taped insides to reduce noise and limit the field of view. With this setup the sensor reacted within a substantially narrower field of view and would output true for around 5 seconds if a human came into view and remained static; the output would then change to false unless something in the environment changed. The sensor could not differentiate between movement in front of it and its own movement: in tests, the output would return to false repeatedly around 1 to 2 seconds after the sensor had been moved. The PIR was initially planned to be mounted on the crane, but as it was too sensitive to movement and there was no way to conclusively tell whether a reading was caused by internal or external movement, the idea was abandoned. The point cloud data generated in this application was also too sparse to conclusively identify humans in the vicinity, especially moving ones, which meant that segmentation of the data would have been inconclusive as well.


9. Crane planning and movement
The crane moves automatically using the laser-scanned map of the area and a path planning algorithm. The work area of the crane is split into 0.5 x 0.5 meter two-dimensional grid cells, which are either occupied or free. Discretizing the area simplifies path planning, as there are fewer positions to evaluate. The cell size was set to 0.5 meters so that the crane has enough room around it in each position, and the crane and its cargo do not hit the obstacles, while the path planning algorithm still has enough room to find a proper path through the area. The occupancy is decided from the laser scanner data, which is passed to a Python-based program via a text file in which each occupied cell is given in a list of tuples of x and y coordinates. The Python program reads these positions and marks them as cells the crane cannot move to. The program creates a 50 x 20 grid and adds the read walls to the matrix. Each grid movement costs one unit, so all moves are weighted equally. It was decided that the crane should move only in the north, south, west and east directions, so that it could not weave through small gaps; in the program this is enforced by making only the N, S, E, W neighbours accessible from each position. The A* algorithm uses a priority queue that starts from the initial position, and each new shortest possible path position is added to the queue. Using A*, the program calculates the shortest path with the heuristic of formula 1, minimizing the cost f, where g(n) is the length of the path to the current position on the grid and h(n) is the shortest Manhattan distance from the current position to the goal.

f(n) = g(n) + h(n)    (1)

The algorithm compares the last smallest cost to the neighbouring costs and adds those that are smaller. From this shortest path, the next position in the grid is transformed into the crane's own world coordinates. The crane can only move by using speed and direction control of the bridge and the trolley, so the control commands are calculated with a proportional controller, where P is the proportional gain, v is the desired crane velocity, x is the crane's current position and xt is the target position:

P = max output / ramp threshold = 0.13    (2)

v = P (x − xt)    (3)

This velocity is sent to the crane via the wireless network and the OPC UA client and is updated constantly. The target is the first step of the path; once it is reached, the algorithm recalculates the path and the controller moves the crane to the next position on the path.


Figure 4. The A* algorithm calculates the shortest path from start to goal in a discretized grid, using a heuristic function while avoiding obstacles.
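A compact Python sketch of the planner described above is given below: a 50 x 20 grid, four-connected moves of unit cost, the Manhattan heuristic of formula 1, a priority queue, and the proportional speed command of formula 3. The wall set would normally come from walls.txt; the example walls and coordinates are made up.

import heapq

WIDTH, HEIGHT = 50, 20

def astar(start, goal, walls):
    def h(p):
        # Manhattan distance to the goal, h(n) in formula 1.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # entries are (f, g, cell, parent)
    parents = {}
    best_g = {start: 0}
    while open_set:
        f, g, cell, parent = heapq.heappop(open_set)
        if cell in parents:
            continue                          # already expanded via a cheaper path
        parents[cell] = parent
        if cell == goal:                      # walk the parent chain back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x, y + 1), (x, y - 1), (x + 1, y), (x - 1, y)):  # N, S, E, W
            if 0 <= nxt[0] < WIDTH and 0 <= nxt[1] < HEIGHT and nxt not in walls:
                if g + 1 < best_g.get(nxt, float("inf")):  # each move costs one unit
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None                               # no route to the goal

def speed_command(position, target, P=0.13):
    # Formula 3: proportional speed command toward the target coordinate.
    return P * (position - target)

walls = {(10, y) for y in range(15)}          # example wall set read from walls.txt
print(astar((0, 0), (20, 5), walls))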

The path planning algorithm is biased towards the goal node, but it does not remember its previous choices and has no reinforcement learning methods implemented, meaning it may repeat the same mistakes on the next iteration. Currently, the occupancy grid is constantly updated, with some buffering of old scans of the area; in the future, obstacles that do not move could be detected and stored in memory, which would allow faster and safer movement in the warehouse.

9.1 Crane control logic

The whole operation of the crane is split into 4 phases. In the first phase the crane moves to the target pick-up area while avoiding obstacles; in the second phase it locates the pickable object and lowers and raises the hook onto it. The third phase uses the same logic as the first, but this time the target destination is a predetermined placement point; the crane still avoids obstacles along the way. The fourth phase uses the same logic as the second, but it does not scan for the pickable object, because the crane is already holding it; instead, the crane simply lowers and raises the hook to drop off the object. We decided to end the demonstration of the automatic crane at this point, so there are no further steps, but it would be easy to modify the code so that the crane keeps looping these four phases for continuous pick-and-place operation. The Python file crane_move.py contains the movement logic for the first and third phases, while crane_move_phase2.py contains the logic for lowering and raising the hook during the second and fourth phases. The actual scanning of the object inside the target pick-up area is implemented in the C# file HelperFunctions.cs, in the function PointCloudTargetCenterFinal2. The script crane_move.py contains a while-loop that keeps running until the crane reaches its target or something else interrupts it. Inside the loop, the script reads the walls.txt file to find out the grid cells where the crane cannot go.
Then, based on its current position, it calculates the shortest path to the goal using the A* algorithm in run_astar.py while avoiding the cells listed in walls.txt. This shortest path is a list of tuples containing the x and y coordinates of each step the crane will take, the final step being the goal coordinates. The path is then written to the path.txt file. The main C# program uses the path in path.txt to draw a visualization line corresponding to the route the crane is about to take, but the text file is not used for controlling the crane itself. After writing the path to the text file, the Python script determines whether the next step is the goal or not. If the next step is not the goal position, the script moves the crane to the next step with the proportional controller off; if it is the goal position, the crane moves with the proportional controller on. This ensures the crane stops smoothly at the final target coordinates; if the proportional controller were on for all steps, the crane would awkwardly slow down at every step. The proportional controller logic is implemented in crane_fix.py. The crane_move.py script moves the crane by setting the target location to the coordinates of the next step and then calling the move functions implemented in crane_fix.py for both the bridge and the trolley until the next step has been reached. Inside this inner while-loop, the script re-runs the A* logic roughly every second and makes a new path based on the current contents of walls.txt; this is how the crane updates and reroutes its path based on new information. After finishing the path, the Python script exits with a successful exit value so that the C# program knows to move to the next phase. If the Python script returns a bad exit value, the C# program runs the script again, and the crane can continue its movement from where it left off if something failed.

The other Python script, crane_move_phase2.py, contains a very simple implementation for controlling the hook during the pick-up and placement phases. The target coordinates are given as command line arguments by the C# program that calls the script; they are the x, y and z coordinates that the C# program has calculated to be the location of the pickable object. The script first sets the trolley and bridge target positions, calls the moving functions in crane_fix.py just as is done in crane_move.py, and waits until the crane is directly over the object. Then the script sets the hoist target position, which is the height the hook needs to be lowered to, calls the movehoist function and waits until the hook is at the target height. It then waits for confirmation from the operator that the object has been fastened and lifts the object to the set height, 2.9 meters by default. Once the object has been lifted, the Python program ends with a successful exit value so that the C# program knows to move from phase 2 to phase 3. The same logic is applied again in phase 4, when the object needs to be lowered and placed on the ground.
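The phase sequencing and exit-code handling could be summarized with a sketch like the one below. In the actual system this orchestration lives in the main C# program; the argument conventions and the coordinates are placeholders.

import subprocess
import sys

PYTHON = sys.executable

def run_phase(script, *args):
    # Retry a phase script until it exits successfully; a failed run leaves the
    # crane where it is, so the phase simply resumes from the current position.
    while subprocess.run([PYTHON, script, *args]).returncode != 0:
        print(script, "failed, retrying from the current crane position")

run_phase("crane_move.py", "25", "8")                    # phase 1: drive to the pick-up area
run_phase("crane_move_phase2.py", "12.5", "4.0", "0.6")  # phase 2: hook over the object (x, y, z)
run_phase("crane_move.py", "40", "12")                   # phase 3: drive to the placement point
run_phase("crane_move_phase2.py", "20.0", "6.0", "0.0")  # phase 4: lower and release the object
# Looping these four calls would give continuous pick-and-place operation.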

10. Demonstration video
From the beginning of the project, we had planned to capture a video of the completed system in order to properly demonstrate the actual functionality of the autonomous crane, especially because performing a live demonstration would require arrangements in the K3 hall and would be tied to that location in any case. For the video demonstration, the plan was to have the crane carry out a fairly simple pick-and-place task in which it would need to rely on the information provided by the LIDAR to avoid passing over any obstacles in the room. The actual video, attached here as appendix 1, was made by filming the operation of the crane with a smartphone camera while simultaneously recording the screen of the main computer with Open Broadcaster Software, and finally joining the two clips together with the Wondershare Filmora9 video editor. Additionally, because the full demonstration is somewhat slow, several versions were produced at different speeds: real time, 4 times the speed, and 8 times the speed, the last of which is the version in appendix 1.


As seen in figure 5, the actual crane operation is shown on the right side of the video, matched with the corresponding visualization window in the center, while the leftmost window is used for responding to user prompts and shows the calculations for the walls. In the visualization, the yellow diamond and the small rotating yellow diamond shape in the middle of the window correspond to the crane hoist and the LIDAR, respectively. The purple line connected to the yellow diamond is the planned movement trajectory to the next destination. The green squares are the detected obstacles, and observed points outside the squares are assumed to be noise. The blue square represents the 2 x 2 meter pick-up area, whose location is hardcoded in the system. Once the crane has moved to the pick-up area, it automatically searches for a pickable object within it, calculates its exact position and moves the hook above the center point of the object. The target position, where the object is finally to be moved, is a hardcoded point marked on the floor with an X of white tape.

Figure 5. A screenshot from the video demonstrating the capabilities of the autonomous crane.

As can be seen in the video, the crane moves automatically to the target pick-up area while avoiding obstacles. The only non-autonomous part of the demonstration is fastening the object to the hook: since the crane has only a normal hook attached, the object cannot be picked up automatically. We therefore halted the operation of the crane when the hook had been lowered to around 30 cm above the object and attached the object to the hook manually. The object in the demonstration is a simple cardboard box fastened to the hook with straps; figure 6 shows the attachment of the object to the hook. We also had to remove the object from the hook manually when the crane reached the final placement location.


Figure 6. Attaching the object to the hook.

11. Overview of results

The final result of the project is a semi-automatic intelligent crane system that can move an object from a pick-up area to a predetermined spot while avoiding obstacles. The system is intended for automated warehouse management alongside other autonomous vehicles and robots. This is accomplished using a LIDAR laser scanner, a stepper motor, a Windows PC, a Raspberry Pi, a path planning algorithm and an overhead crane with wireless control. The crane used for this project is a Konecranes intelligent overhead crane with no pre-existing obstacle avoidance systems; at the start of the project it was already usable over the OPC UA communication protocol for reading the crane position and controlling its speed. The LIDAR scanner was provided by SICK as the starting point for their innovation competition. The finished system can start from any position inside the working area of the overhead crane, and by using the scanner data and the path planning algorithm, the crane automatically avoids obstacles on its way to the preprogrammed pick-up area. Within the pick-up area, the system detects the pickable object on the floor and moves the crane hook on top of it so the hook can be attached. As there is no automatic gripping system, the object must be attached by hand. After this, the object is lifted up and transported to a preprogrammed storage position while the crane continues to avoid obstacles. When the storage position is reached, the crane lowers the object, and once the object is detached, the crane can start the process again from the beginning.

12. Reflection of the project

12.1. Reaching objective
The expected output of the project was an autonomous crane system for warehouse environments that could evade static and dynamic objects, including humans. The idea was that it could distinguish humans from other obstacles and create a larger safety perimeter around them, either using infrared sensors or machine learning algorithms. Our initial approach was to use a cheap passive infrared sensor in sensor fusion with the LIDAR data: if both sensors reacted to an obstacle in the same area, it was assumed to be a human. The infrared sensor, however, reacted to all changes in its environment and could not differentiate between changes caused by movement in front of the sensor and changes caused by its own movement.
This is why it could not be used in our project, as the initial plan was to attach it to the crane. The autonomous crane could detect static humans in its vicinity. Dynamic objects could not be detected, as a result of the narrow field of view of the LIDAR and latencies within the system that prevented it from reacting fast enough; the resolution of the LIDAR was also not high enough to identify humans, especially moving ones. The reasons for changing the objectives were both external and internal, but also a result of being overly optimistic about the time available, not realising how time-demanding implementing the dynamic evasion would be, and it requiring knowledge we did not possess. Internal reasons, such as the bottleneck from not building an initial prototype fast enough (which was needed for object detection and filtering), caused delays to the original plan. Another internal problem was that the group had to share resources such as the LIDAR, which also caused delays, as the LIDAR was needed for planning the attachment, for measuring and testing functionality, and for testing the object detection and filtering. The project was simplified to match our level of knowledge and our time schedule, a risk we had anticipated in our project plan. The final product complies with the measure of a successful project we had set in the project plan, even though it was simplified to allow us to finish on schedule. The prototype works as expected as long as the obstacles are static during scanning, and the result can be seen in appendix 1.

12.2. Timetable

The planned schedule was followed closely only during the beginning of the project. Even though the tighter schedule imposed by the competition deadline was taken into account in the project plan, several factors still contributed to eventually falling behind the defined project timeline. First of all, the available hours listed for each team member in the project plan poorly reflected how much time could actually be invested in the project, and they did not account for the team's tendency to work mostly during opportune periods when several team members could be present. Secondly, much of the system's functionality depended on the handling of the LIDAR data, which became a bottleneck for progress, since only one team member had the competency to do the data handling. Later, when the LIDAR was integrated with the rest of the subsystems, many remaining tasks could not be worked on concurrently because the integrated system had to be shared. The planned number of work hours for each task was also overestimated, because it was scaled according to the listed available hours, and in the end no such work resources could actually be spent on the tasks. Even so, the effort needed for the crane movement was the most overestimated, with many of its tasks completed in a fairly short time, whereas the LIDAR attachment and object detection likely took more time than expected, worsened by the bottleneck problems.

12.3. Risk analysis

A week prior to the mechatronics circus, the Raspberry Pi memory card was corrupted. Luckily, it was easy to replace physically, as an instructor had extra SD cards. However, this caused some delays, as we spent time investigating the problem and trying to retrieve the lost data from the corrupted card. The corrupted card also required rewriting some of the lost code for the stepper control and data handling, as well as configuring the new SD card and setting the IP addresses. This could have been avoided by adding the updated Raspberry Pi code to our shared GitHub repository and by writing down the configuration procedure after the first setup. As a precaution against the card corrupting again, in case it was a result of too high or too low supply voltage or of overclocking, we uploaded the code and the configuration documentation to GitHub.
The group was divided quite early into one subgroup responsible for the LIDAR attachment and movement and another for LIDAR data handling, path planning and crane movement. This streamlined the project in some ways, as everyone had set tasks and was mostly responsible for their own area of expertise, but it also required more communication between the subgroups to keep track of each other's status and schedule. The severity of this risk was not anticipated, and it resulted in some of the bottlenecks mentioned in subchapters 12.1 and 12.2.

12.4. Project meetings

The project meetings involving the instructors were arranged very regularly: they were held every week during the project, with most of the team members present each time. An agenda was prepared for each meeting by the project manager, shared in the project team's Google Drive and sent to the instructors beforehand. Additionally, a meeting memo was written during each meeting by one of the team members and made available in the Google Drive afterwards. As the project ran on a relatively tight schedule, holding the official meetings weekly turned out to be good practice, even if there was not always much tangible progress between meetings. The project team preferred to share information and ideas both in person and via a messaging app (WhatsApp) throughout the week, so the purpose of the official weekly meetings ended up being to update the instructors on the project's progress and future prospects, while serving as a platform for discussion and feedback.

12.5. Quality management

In general, we hardly encountered anything that felt problematic in terms of quality, at least in the final implementation. We had specified a few quality requirements in our project plan, and these were satisfied, as the natural design and building process for each feature took care of potential quality issues by itself. On the other hand, some quality requirements turned out to be irrelevant for performing the actual demonstration. The compliance of the final implementation with the quality plan can be evaluated as follows.

The data handling in the Raspberry Pi and the Windows PC does not suffer from problematic latency, despite the great number of data points from the LIDAR that have to be handled. However, the quality of object detection is closely tied to the innate properties of the LIDAR measurements, and poor detection results have been observed when trying to detect objects with a dark-colored surface or a thin shape. Furthermore, as the LIDAR attachment design had to be kept within a reasonable scope and was therefore placed on a single corner of the crane trolley, it was not possible to eliminate the blind spot caused by the load attached to the crane hook; this was not an issue when running a demonstration, however. The motorized attachment frame, on the other hand, was built to resist vibrations, so that good measurement accuracy is retained even while the crane is moving, and the frame has a sturdy design overall.

The trajectory planning works as originally envisioned in that it incorporates information about collidable objects to constantly calculate an optimal path. However, only recently detected objects are considered, so the calculated path may not reflect the true structure of the environment, and the resulting trial-and-error behaviour may be rather inefficient. On top of this, the final system has seen only fairly limited testing with different obstacles and layouts, so the full extent of its capabilities and shortcomings is not known very deeply. All in all, these subsystems, despite individual faults in some, work in good synergy as the complete implementation, and it can be said that they comply with the most essential requirement stated in the quality plan.


13. Discussion and conclusions
The hardest part of the project was integrating the different subsystems into a working whole. Having multiple programming languages and subsystems made the integration more complex, but we learned that clear communication and good documentation help in designing systems with proper interfaces for integration. It also became very clear how important it is to have clear and simple roles for each person, so that they can contribute to each of the work packages. The work packages should be split into even smaller pieces and tracked by percentage of completion, which helps to split the workload more evenly among the members. Having multiple people learn the different key techniques and systems in use also helps with sharing the workload. Saving and sharing your work regularly is important as well, as we learned from the broken Raspberry Pi SD card incident. The initial plan for the project was altered and simplified, as discussed earlier, to fit our timetable. As a result, features such as evasion of dynamic objects and human detection were omitted: the data produced by the LIDAR did not have high enough resolution for easy segmentation and identification of humans, especially moving ones. Despite this, the final prototype is functional and can effectively explore the area, detect and evade static objects in its surroundings, and search for an optimal route to the pick-up and drop-off areas. Most of the technical learning happened with the LIDAR itself, as it was a complicated device producing multiple layers of scans. Getting it to work correctly required some help from our instructor and programming knowledge, but we managed to do so fairly quickly. It was also rewarding to see graph search used in a real product, where we could apply our theoretical knowledge of AI.

List of appendices

● Appendix 1: Autonomous crane demonstration video
● Appendix 2: User manual
● Appendix 3: Project plan
● Appendix 4: Business aspects


References

[1] OPC UA Python library, GitHub. https://github.com/FreeOpcUa/python-opcua
[2] pyTMCL Python library, GitHub. https://github.com/LukeSkywalker92/pyTMCL
[3] TMCL Python library, GitHub. https://github.com/NativeDesign/python-tmcl
[4] TMCL Firmware Manual, Trinamic. https://www.trinamic.com/fileadmin/assets/Products/Modules_Documents/TMCM-1630_TMCL_Firmware_Manual_Rev2.04.pdf


Autonomous crane for warehouse management
User manual

Quick start instructions:
1. Make sure both the Raspberry Pi and the computer you are running the program on are in the same network as the Ilmatar crane (the WiFi name starts with KC and ends in 2).
2. Run LIDAR_data-Program27032019.exe on the Raspberry Pi. We used VNC Viewer to access the Raspberry Pi remotely.
3. Run CrossPlatformCSharpTest.exe on your computer to connect to the Raspberry Pi. If the connection is successful, the Raspberry Pi starts sending the data it receives from the LIDAR to your computer, and the point cloud graphics are drawn in the window that opens (a minimal sketch of this data link follows after these steps). The stepper motor should also start rotating the LIDAR around at this point.
4. Following the directions in the Python window that opens, type 'y' and press ENTER to confirm you are ready to start moving the crane (phase 1). Remember to do this every time a new phase begins (every moving phase and pick-up phase).
5. The crane can now be run with the crane controller by pushing the left side button furthest away from the operator. The crane should follow the trajectory drawn as a purple line in the graphics window (OpenTK).
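For context, the Raspberry Pi program and the desktop program talk over a plain network connection through which the Pi streams the LIDAR points. The Python sketch below only illustrates this producer side of the idea; the port number and the length-prefixed (x, y, z) framing are our own assumptions for illustration, not the protocol of the actual C# programs.

# Sketch of the Pi-to-PC point streaming idea (hypothetical port and framing,
# not the actual protocol of LIDAR_data-Program27032019 / CrossPlatformCSharpTest).
import socket
import struct

HOST, PORT = "0.0.0.0", 5005  # assumed values for illustration

def serve_points(get_scan):
    """Send length-prefixed batches of (x, y, z) floats to one client."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                points = get_scan()  # e.g. [(x, y, z), ...] from the LIDAR driver
                payload = b"".join(struct.pack("<3f", *p) for p in points)
                conn.sendall(struct.pack("<I", len(payload)) + payload)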

Required source files

On the Raspberry Pi:

LIDAR_data-Program27032019.cs: Raspberry Pi source file in C#; it also runs the stepper motor code.
stepperfinal.py: Stepper motor code in Python.

Remember to put the files in the same directory: /home/pi/Documents/. If you change the directory, make sure to change the Arguments line in ProcessStartInfo accordingly, in LIDAR_data-Program27032019.cs. The stepper motor code requires the pyTMCL library: https://github.com/LukeSkywalker92/pyTMCL.
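For orientation, driving a TMCL stepper module with the pyTMCL library follows roughly the pattern below. This is a minimal sketch rather than the contents of stepperfinal.py: the serial device path, module address and velocity are placeholder assumptions.

# Sketch of rotating the LIDAR stepper via pyTMCL (placeholder values,
# not the actual configuration used in stepperfinal.py).
from time import sleep
from serial import Serial
import pyTMCL

serial_port = Serial("/dev/ttyACM0")  # assumed serial adapter device path
bus = pyTMCL.connect(serial_port)     # open a TMCL bus on the serial port
motor = bus.get_motor(1)              # assumed TMCL module address

motor.rotate_right(500)               # start rotating at an assumed velocity
sleep(5)                              # let the LIDAR sweep for a while
motor.stop()                          # stop the rotation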

On the computer you run the main program on:

Program25022019.cs: Main source file. Calls the other Python scripts in the /src directory.
HelperFunctions.cs: Contains some functions needed by the main program.
App.config: Contains the paths where the main program looks for the Python files.

In App.config, set the pythonExePath value to the path of your version of python.exe on the computer (tested with Python 3.7, but it should work on earlier 3.x versions too). Set the pythonScriptsFolder value to the path of the /src directory where most of the Python scripts are located. For creating the graphics, the program uses OpenTK: https://opentk.net/.

Required Python files in the /src directory:

crane_fix.py: Script for connecting to the crane via OPC UA.
crane_move.py: Crane movement logic script.
crane_move_phase2.py: Crane movement script for the pick-up phase.
run_astar.py: A* algorithm script (a minimal sketch of the idea follows after this list).
path.txt: A text file where the main C# program writes the planned path of the crane.
walls.txt: A text file where the main C# program writes the grid locations the crane cannot go to.
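To illustrate what run_astar.py does conceptually, below is a minimal A* search on a 2D occupancy grid. This is a sketch, not the project's actual script: the 4-connected moves, the Manhattan heuristic and the in-memory wall set are our assumptions (the real script works with the grid cells listed in walls.txt and the path written to path.txt).

# Minimal A* on a 2D grid, illustrating the role of run_astar.py.
# 4-connected moves and the Manhattan heuristic are assumptions.
import heapq

def astar(start, goal, walls, width, height):
    """Return a list of grid cells from start to goal avoiding walls, or None."""
    def h(cell):
        # Manhattan distance, admissible for 4-connected moves
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # entries are (f, g, cell, parent)
    came_from = {}                           # expanded cell -> its parent
    best_g = {start: 0}
    while open_set:
        f, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                # already expanded via a better path
            continue
        came_from[cell] = parent
        if cell == goal:                     # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None

# Example: plan around a small wall on a 10 x 10 grid
walls = {(4, y) for y in range(0, 8)}
print(astar((0, 0), (9, 0), walls, 10, 10))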

Make sure that in crane_move.py the variables "PATHFILEPATH" and "WALLPATH" point to the correct text files, "path.txt" and "walls.txt" respectively, which are both located in the /src folder. crane_fix.py requires the opcua library: https://github.com/FreeOpcUa/python-opcua. Furthermore, it is helpful (but not required) to run crane_print.py by itself for debugging purposes and testing the connection; it prints out the location of the crane.
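Reading the crane state over OPC UA with the python-opcua library follows the pattern sketched below; the endpoint URL and the node identifier are placeholders, since the real ones are defined in crane_fix.py and by the Ilmatar crane's address space.

# Sketch of reading a crane coordinate via OPC UA with python-opcua.
# The endpoint URL and node id are placeholders, not the crane's real ones.
from opcua import Client

client = Client("opc.tcp://192.168.1.10:4840")  # assumed crane endpoint
client.connect()
try:
    # Hypothetical node id; crane_fix.py holds the real identifiers.
    x_position = client.get_node("ns=2;s=Crane.Position.X").get_value()
    print("Crane x position:", x_position)
finally:
    client.disconnect()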

Changing the default pick-up and placement areas

If you wish to change the default pick-up and placement areas, change the values of the variables "x_goalmin" and "y_goalmin", which correspond to the corner where the first pick-up area grid starts. These variables are located in the HelperFunctions.cs file. The next variable, "goalgridsize", has been set to 2, which corresponds to the size of the pick-up area. These variable values are set in meters: choose the starting corner with the "x_goalmin" and "y_goalmin" coordinate values, and the program adds the value of "goalgridsize" in the positive x and y directions to determine the size of the pick-up grid square.

Due to an unfortunate oversight, the crane will try to go to the default pick-up area unless the target area is also changed elsewhere in the code. You need to modify the function call "HelperFunctions.StartMoveCraneToPosition(25, 8, bufferData);" in Program25022019.cs on line 490. Change the values 25 and 8 to your new x and y coordinates, respectively. Note that these values are in crane coordinates, which by default are divided into 0.5 x 0.5 meter grids, meaning that 25 and 8 actually correspond to 12.5 and 4 meters. It is enough that the position set with these coordinates roughly fits inside the pick-up area so that the LIDAR can see the pickable object.

To change the coordinates of the other area, where the crane places the object it picked up, change the variables "x_targetGoal" and "y_targetGoal" to your desired x and y coordinates, respectively. These variable values are also set in meters and are found in the HelperFunctions.cs file.
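Since the crane workspace is divided into 0.5 x 0.5 meter grid cells, converting between grid indices and meters is a simple scaling. The helper below is a sketch with names of our own choosing; HelperFunctions.cs performs the equivalent conversion in C#.

# Grid <-> meter conversion for the 0.5 x 0.5 m crane grid.
# Function names are illustrative, not taken from HelperFunctions.cs.
GRID_SIZE_M = 0.5

def grid_to_meters(gx, gy):
    return gx * GRID_SIZE_M, gy * GRID_SIZE_M

def meters_to_grid(x_m, y_m):
    return round(x_m / GRID_SIZE_M), round(y_m / GRID_SIZE_M)

# Example from this manual: grid (25, 8) corresponds to (12.5, 4.0) meters.
assert grid_to_meters(25, 8) == (12.5, 4.0)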

Building the projects

The projects can be built with any C# compiler. We used Mono to compile the Raspberry Pi C# code and Microsoft Visual Studio 2017 to compile the C# code on the computer.


Aalto University ELEC-E8004 Project work course MEC-E5002 Mechatronics Project Year 2019

Project plan Project #15 Autonomous crane for warehouse management Date: 4.2.2019 Arnab Chattopadhyay Joakim Högnäsbacka Mikko Lähteenmäki Kaarle Patomäki Joonas Pulkkinen Janne Salovaara Sampo Simolin


Information page Students Arnab Chattopadhyay Joakim Högnäsbacka Mikko Lähteenmäki Kaarle Patomäki Joonas Pulkkinen Janne Salovaara Sampo Simolin Project manager Mikko Lähteenmäki Official Instructor Timo Oksanen Panu Kiviluoma Other advisors Juuso Autiosalo Matti Kleemola Starting date 10.1.2019 Approval The Instructor has accepted the final version of this document Date: 4.2.2019


1) Background

The task at hand is to build a functional, smarter crane with an integrated LIDAR provided by SICK. Our group consists of 4 CRAS students and 3 mechatronics students. The crane used is the Ilmatar overhead crane by Konecranes. The LIDAR provides distance measurements with some systematic error.

2) Expected output

The operating principle of our system is as follows: a warehouse manager wants certain objects to be moved from a designated pick-up area to a respective storage area. This is achieved by detecting an object inside the pick-up area and automatically calling the overhead crane to its location. The object is fastened to the crane hook manually. Next, the crane moves with the object towards the storage area along an automatically calculated trajectory. The crane stops or recalculates its trajectory when an obstacle, including a human, is detected in its path. After reaching the storage area, a suitable space is detected and the object is lowered to that location. The system then continues to monitor the pick-up area for other pickable objects.

At the end of the project, the system should be demonstrable in a live demonstration where the crane is shown storing a box or a similarly simple object in a specific area simulating a warehouse environment. This product is usable in a dynamic warehouse environment where there is a need for automation and safety features for overhead cranes.

3) Phases of project
M1: Preparations (gathering the team together, instruction for LIDAR and crane), DL: 23.1.2019
M2: Brainstorming, DL: 25.1.2019
M3: Developing the concept, DL: 28.1.2019
M4: Project planning, DL: 1.2.2019
M5: Getting to know the tools (LIDAR, crane) and the workspace, DL: 8.2.2019
M6: Prototyping, DL: 1.3.2019
M7: Assembling modules, DL: 15.3.2019
M8: Testing, DL: 29.3.2019
M9: Mechatronics circus, DL: 5.4.2019
M10: Final report, DL: 19.4.2019
M11: Final gala, DL: 21.5.2019


4) Work breakdown structure (WBS)

Hardware breakdown
1. Crane
2. LIDAR attachment
   2.1 Frame
   2.2 Motors
   2.3 LIDAR to crane interface (microcontroller)
3. LIDAR
4. Other sensors
   4.1 Camera
   4.2 Infrared sensor
   4.3 Sonar (if necessary)

Software breakdown
1. LIDAR data extraction
2. Object detection
3. Crane movement logic

5) Work packages, tasks and schedule

5.1) Work packages and tasks

WP0: Project initialization
Task 0.1 Project preparation - 60 hours
Task 0.2 Project planning - 100 hours

WP1: LIDAR data handling
Task 1.1 Learning how to extract data from the LIDAR - 60 hours
Task 1.2 Getting LIDAR position and angle data - 40 hours

WP2: LIDAR attachment and movement
Task 2.1 Planning the attachment - 60 hours
Task 2.2 Planning the needed movement for the LIDAR - 60 hours
Task 2.3 Crafting the attachment frame - 60 hours
Task 2.4 Crafting the motorized part - 60 hours

WP3: Object detection
Task 3.1 Pickable object detection - 60 hours
Task 3.2 Obstacle detection - 60 hours
Task 3.3 Human detection - 70 hours
Task 3.4 Obstacle avoidance - 120 hours
Task 3.5 Storage space detection - 80 hours

WP4: Trajectory planning
Task 4.1 Planning what kind of system is used for trajectory planning - 50 hours
Task 4.2 Moving the crane automatically - 50 hours
Task 4.3 Moving the crane to a pickable object - 80 hours
Task 4.4 Simple trajectory planning - 80 hours
Task 4.5 Placing the object - 80 hours
Task 4.6 Advanced trajectory: stopping the crane before hitting an obstacle - 110 hours
Task 4.7 Advanced trajectory: moving around the obstacle - 140 hours

WP5: Integration
Task 5.1 Combining the LIDAR with its attachment - 80 hours
Task 5.2 Combining the LIDAR and the crane - 80 hours

WP6: Final report
Task 6.1 Drawing electronic and physical diagrams - 40 hours
Task 6.2 Writing the final report - 95 hours

Total: 1775 hours

5.2) Detailed schedule

Figure 1. Gantt chart of the tasks


6) Work resources

6.1) Personal availability during the project

Table 1. Number of hours available for the project (excluding lectures and seminars) per week.

          Arnab  Joakim  Mikko  Kaarle  Joonas  Janne  Sampo
Week 2      15     20     16     14      21     12     21
Week 3      15     20     16     14      21     16     21
Week 4      15     20     16     14      21     16     21
Week 5      15     20     20     14      21     16     21
Week 6      15     20     20     14      21     20     21
Week 7      15     20     20     14      21     20     21
Week 8      15     17     20     21      21     20     21
Week 9      17     10     20     21      18     18     21
Week 10     17     19     20     21      18     18     21
Week 11     17     19     20     21      18     16     21
Week 12     19     19     20     21      18     20     21
Week 13     19     19     20     21      18     20     21
Week 14     19     19     20     21      18     20     21
Week 15      -      8     10      -       6      8      -
Week 16      -      8     10      -       6      8      -
Week 17      -      -      -      -       -      -      -
Week 18      -      -      -      -       -      -      -
Week 19      -      -      -      -       -      -      -
Week 20      -      -      -      -       -      -      -
Week 21      -      -      -      -       -      -      -
Week 22      -      -      -      -       -      -      -
Total      228    258    268    231     267    250    273

Total: 1775 hours


7) Cost plan and materials

The budget of this project is managed by the project group, and the orders for necessary materials are handled by Panu. Budget usage is recorded in the Google Drive in an Excel table updated by Janne. However, staying within budget is the whole group's responsibility, ultimately supervised by the project manager. SICK has donated the necessary MRS6000 LIDAR, to be used according to the competition rules. The smart overhead crane is at our disposal in the K3 lab for this project. A laptop from the school has been allotted to this project and is stored in the K3 lab with the LIDAR. Some of the materials needed to construct the attachment parts can be found for free in the K3 lab. The estimated needs for this project are electric motors for moving the LIDAR, extra parts for fastening the LIDAR frame to the crane, cabling for data transfer, a microcontroller such as a Raspberry Pi for transmitting sensor data, and a power source for the LIDAR. Extra sensors, such as an infrared camera, are also likely to be needed.

Table 2. Cost estimation

Part Estimated cost (in euros)

Electric motors 100

Microcontroller x2 100

Extra frame parts 50

Other sensors 50

Cabling 30

Power source 20

Total 350

8) Other resources

Other resources at our disposal include the working space in the K3 lab area, where we can store our materials and work with the LIDAR and the crane. The lab has tools available, but for safety reasons heavy machinery must not be used before the operator is properly acquainted with it and has been trained to use it. For accessing the K3 lab after hours, Panu has agreed to provide everyone with an access card once everyone has gone through the safety training.


9) Project management and responsibilities

Responsibilities of the project manager Mikko:
● Chairman for the official meetings
● Contact person of the group
● Coordinates the progress of the project between project meetings
● Supervises the progress of the project through the work package leaders

Responsibilities of the instructors:
● Timo: familiarizing the students with the LIDAR and providing troubleshooting help with problems that may occur with it
● Panu: providing key-cards to the workspace and handling the orders for materials
● Both are available for help when facing technical issues

Responsibilities of the work package leaders:
● Keep track of the progress within their respective work package

Table 3. Responsibility matrix of work packages (work package leader: X, some involvement: O)

         LIDAR data   Attachment +   Object      Trajectory   Integration
         handling     movement       detection   planning
Arnab        -             O             O            O             -
Joakim       O             -             X            O             -
Mikko        O             O             O            O             X
Kaarle       -             X             O            -             -
Joonas       O             -             O            O             -
Janne        O             -             O            X             O
Sampo        X             O             O            -             O

Other advisors for the project are the LIDAR and crane experts, Matti Kleemola and Juuso Autiosalo, respectively. For problems regarding the LIDAR, Matti Kleemola can be contacted through email. This is done through one person to simplify communication and to make sure all questions and information stay organised. Juuso Autiosalo can be contacted with any questions regarding the Konecranes overhead crane by email or by meeting him at the K3 lab.


10) Project Meetings

Project meetings are arranged weekly at the K3 lab at Puumiehenkuja 5 with both instructors and, if possible, all group members. For the 3rd school period all meetings are on Thursdays from 14:15 to 15:00. New meeting times will be set at the meeting on the 21st of February. The chairman for the meetings is Mikko and the memo keeper is Janne. Mikko sends the agenda for the meetings by email and adds it to the Google Drive. Memos are written such that absent members can understand what was discussed and decided during the meeting; they should be in long form, detailing each talking point of the agenda. Memos are stored in their own folder in the Google Drive. The general practice for the meetings is to go through:

● A brief rundown of the agenda
● What was accomplished after the last meeting
● Questions for the instructors
● Discussion of any issues that may have arisen
● What progress should be made before next week's meeting

11) Communication plan The group communicates internally using a WhatsApp group chat that every group member has joined. The common documents, agendas and memos are stored in a Google Drive that every group member has access to. Software code is stored in a project folder in Aalto Version Control System (Aalto GitLab), which makes it easier to share code and different versions between group members. The group communicates with the persons outside the group (instructors and advisors) using email. The project manager Mikko is the main contact person between the group and outsiders, and he is responsible for delivering the message to other group members. Official meetings are arranged regularly with the instructors and most of the group members present so that the whole group is aware of the current situation of the project.

12) Risks

Table 4. Detected risks

Risk | Severity | Likelihood | Consequences | Solution
Project too complex | severe | small | we run out of time | replan and simplify the project
No good solution for LIDAR placement | medium | medium | blind spots | simplify the working principle
Somebody gets sick | small | medium | project deadlines may get delayed | reorganize their tasks
LIDAR fails/breaks | severe | small-medium | project deadlines may get delayed | fix the LIDAR, order a new one
Crane fails/breaks | severe | small | project deadlines may get delayed | ask staff to fix
Tools/software don't work | medium | medium | project deadlines may get delayed | find new tools / find a fix for the software
Incompatibility issues with software/hardware | high | medium | project deadlines may get delayed | ask for help from instructors / online

13) Quality plan

As this project comprises both software and hardware parts, the quality of the system is evaluated quantitatively against the following criteria:

Data handling: efficient real-time functioning with minimal latency
LIDAR mounting frame: robustness and vibration resistance, along with prevention of blind spots in measurements
Trajectory planning: optimal trajectory planning in different scenarios and integration with collision avoidance
Object detection: accurate detection of objects of different colours, types and structures
Integration: functioning of all features in parallel without any considerable latency, and synergistic working of all the features together

These qualities are monitored by their respective work package leaders, who are responsible for keeping the project manager updated on their status. The instructors may intervene if they see problems with the quality of any of the work packages. Any problems that occur are discussed as soon as possible with the entire team or with the work package leaders whom the problem concerns. Actions concerning quality are discussed together, for instance in the official meetings, and decisions on how to proceed are made in a fair way so that all group members may present their opinion.


14) Changing this plan

The group has decided that changing the project plan is possible throughout the development process. If a group member thinks the project plan should be changed, they should contact the other group members and propose a change to the plan. If the majority of the group members agree on the change, the plan is changed and the change is written down in the new project plan. The change is communicated and decided in the official meetings.

15) Measures for successful project

Final outcome evaluation: the most important measure of a successful project is that the demo performs the required tasks safely and autonomously. The demo will be tested with different obstacles to see that the system reacts in the planned way. The software is tested in successively more difficult environments to verify that the behaviour is as specified, while an operator stands ready to override the control manually. The project involves many submodules and tasks and can be viewed as a whole even if the final prototype only has a subset of tasks working. However, if not all tasks work, the prototype cannot be used in collaboration with humans. The minimum requirement is that the crane can move autonomously from the designated pick-up area to the drop-off area. The set tasks are feasible and should be within our reach, and the project does not include anything that we deem unnecessary. Project progress and success are monitored with:

● Project plan
● Work package leaders keeping track of their respective packages
● Reporting of project milestones to the instructors
● Final report


Aalto University ELEC-E8004 Project work course Year 2019

Business aspects Project #15 LIDAR sensing in hoist crane Date: 15.3.2019 Lähteenmäki Mikko Högnäsbacka Joakim Pulkkinen Joonas Salovaara Janne


Information page Students Lähteenmäki Mikko Högnäsbacka Joakim Pulkkinen Joonas Salovaara Janne Project manager Lähteenmäki Mikko Official instructor Timo Oksanen Other advisors Juuso Autiosalo Matti Kleemola Starting date 10.1.2019 Approval The instructor has accepted the final version of this document Date: xx.x.2019


Summary

The need for autonomous crane systems that are dynamic in path planning, intelligent and able to handle variation will grow as robots become more common in warehouse systems. A collaborative autonomous crane system that can avoid other dynamic objects and replan its path around them is beneficial both in terms of logistics and cost. An intelligent crane system can avoid obstacles, is more versatile, does not require hard-coding of routes and, as mentioned, can handle variations in the environment. Manually operated machines, such as forklifts, trucks and manual overhead cranes, require manual labour and are inefficient and costly in comparison to autonomous systems, which can operate around the clock and do not incur high expenses after installation beyond training, maintenance and energy costs.

The biggest potential customers are industries such as metal and mining, paper and automotive. The market is based on safety and the benefit-to-cost ratio, and as newcomers in the market, we face customers who have no experience of our products and will not necessarily trust them, as they have experience and training with other companies' products. The manufacturing of cranes is very expensive, which is why our plan is to merge with or be acquired by a larger company, within which we would be responsible for developing intelligent crane systems.

The final product will consist of the crane, the software and the sensors needed for object and obstacle detection. The benefit of this approach is that the system is safer to use in collaboration with humans and other robots than a non-intelligent autonomous crane with hard-coded routes.

1) Business idea

The product is a collaborative autonomous crane system that can plan an efficient route for moving large objects while evading objects, humans and other robots. The largest industries for crane systems, where large objects need to be moved efficiently and precisely, are automotive, metal and mining, paper and shipyards. An autonomous crane system is efficient in the sense that it can work around the clock and, aside from electricity and maintenance, does not require much work or cost after installation. An autonomous crane can also replace humans working in hazardous environments; for example, in metal and mining applications the system must be precise enough not to cause collapses, and can thus take over work that would be fatal for humans. In the future, collaborative cranes and robots that can schedule tasks and cooperate with each other will become more and more common. Therefore, a crane that is dynamic and can take both humans and robots into consideration, evading them while performing its set task, will be effective and save both time and cost in logistics. The final product consists of both the software and the crane with the needed sensors. The revenue and profit also include providing services and maintenance to our customers.


2) Product/service

The product is a collaborative autonomous overhead crane system equipped with a LIDAR and the other sensors needed for path planning, object detection and obstacle avoidance. Also included in the product are training and safety classes on how to operate the product safely, as well as maintenance and service of the system. The purpose of the crane is to help businesses make the logistics of moving and storing large objects in warehouse environments as efficient and problem-free as possible, while being safe to work in cooperation with other robots and humans in the vicinity of the operating area. The benefits of the system include safety and cost efficiency, as autonomous systems that operate dynamically without hard-coded routes are easier to deploy in new areas and can thus operate in more versatile settings. The crane is capable of picking up and placing small objects in addition to large ones. The hook of the overhead crane can be switched to an array of different grippers for different types of objects, based on the needs of the customer.

The whole crane system contains the following components: the overhead crane, an emergency stop button and a manual controller for the crane, a Raspberry Pi computer mounted on the crane, a LIDAR mounted on the crane with its attachment and the stepper motors that move it, and an infrared sensor. The data from the sensors is sent to the Raspberry Pi, which forms a 3D point cloud from the laser-scanned data. The point cloud is used to detect objects and obstacles, while the infrared data is mostly used for detecting humans in the nearby area.

The main operating principle is as follows: the crane can move overhead in two directions (x and y) and lower or lift the hoist (z direction). The crane system detects the pickable object(s) in a designated pick-up area using the point cloud data, moves over the object, lowers the hoist and uses its gripper to pick up the target object. The object is then lifted to its travelling height based on its characteristics (weight, shape) and moved to a designated storage area. The crane then releases the object and moves on to pick up the next object if there is one. The crane uses its sensor data to create a map of the environment, which is updated in real time. With the real-time map data and the A* algorithm, a path plan is made and executed. The trajectory of the crane can be changed on the fly if an object is moved into its way. If a human is detected, the crane slows down and stops to maintain safe operation.

3) Market situation and competitor analysis

The product is currently quite niche, as autonomous cranes are by law (in Finland) not allowed to operate with humans in the vicinity for safety reasons, but the market for collaborative systems is growing. As robots in warehouses become more popular, a dynamic system will prove logistically more efficient than non-intelligent autonomous crane systems. Once autonomous systems have undergone rigorous testing, the law is expected to change, allowing humans and autonomous cranes to operate in the same area. The customer base for this product is warehouse management, specifically the automotive, metal and mining, and paper industries. The potential customer share is estimated at two percent of the market.

Competing products are forklifts, manually operated crane systems and autonomously operated crane systems, which are arguably cheaper in comparison. As the product is a crane with path planning software and extra sensors, we cannot compete with the existing crane giants, such as Konecranes, since the production of cranes is highly costly and we do not have the needed know-how of cranes per se. Selling the product as software and sensors to be added to existing cranes would not necessarily work either, as the existing cranes may not be compatible with our software, which would cause too many uncertainties. Therefore, our plan is to merge with or be acquired by a larger company and operate as a sister company of e.g. Konecranes, designing the specific software and sensor configuration as research and development. Being a newcomer in such a safety-critical market is difficult, as people trust companies and products they have experience of. The competitive factors of the product are its specialization and differentiation compared to other products.

4) Intellectual property

From the point of view of freedom to operate, there currently exist several international patents concerning autonomous crane solutions, as found in Espacenet, the patent search website of the European Patent Office. However, these patents describe very specific solutions, usually for stacker cranes in a warehouse environment, and also specify the intended configuration of crane placement and storage spaces. Therefore, it is unlikely that our implementation would infringe any existing or pending patent. Relevant patents:

● JP2018184226 (A) - AUTOMATED WAREHOUSE
● JP2018177490 (A) - AUTOMATED WAREHOUSE
● JP2018131319 (A) - STACKER CRANE AND AUTOMATED WAREHOUSE
● US2018297779 (A1) - AUTOMATED WAREHOUSE AND SUSPENSION-TYPE STACKER CRANE
● WO2019021708 (A1) - AUTOMATED WAREHOUSE SYSTEM AND AUTOMATED WAREHOUSE SYSTEM CONTROL METHOD

For protecting our own design, there does not appear to be a need to apply for a patent or a utility model, since our mechanical implementation is rather general. Furthermore, much of our unique contribution is in software, so the best choice is simply not to disclose the specifics of our program code.

5) Product development and technology

Currently, the crane can operate and move autonomously, and the LIDAR data can be sent to a Raspberry Pi, which forwards it to a computer. The object and obstacle detection is still under development, and the point cloud data needs to be segmented to differentiate objects. The output of the project is still far from a commercial product, as the end product of this course is just a prototype for testing the feasibility of an autonomously operating crane with sensor fusion data. A commercial product would need to be more compact and have the path planning, object and obstacle detection and control module integrated on the overhead crane; our prototype sends the data wirelessly from the Raspberry Pi to a computer that processes it, as the Raspberry Pi does not have the processing power needed for these subtasks.

Assuming we merge with or are acquired by another company, we do not necessarily need specific knowledge of crane operation and manufacturing. However, we do need the mechanical and electrical knowledge to design an automatic gripper for picking and placing objects, to make the system truly autonomous. For creating the intelligent crane, we need a crane that can be operated through program commands, and as such we need to know how the crane control can be integrated into our system. In order to succeed, we need knowledge in both business and law, as well as information from crane operators about common causes of problems and how to avoid them, to better know the market and to design a system that is practical and functional.

6) Conformance

To sell products in the European Union market, we need a CE marking. For an automated crane system to have a CE marking, it needs to fulfil European Union directives such as the Machinery Directive (2006/42/EC), the EMC Directive (2014/30/EU) and the Low Voltage Directive (2014/35/EU). The Machinery Directive specifies how the installation of the crane should be done so that it will not have adverse effects on the safety of the people using it. The EMC Directive specifies how the motors and other electromagnetic parts should be designed so that they do not become a source of electromagnetic interference and are not vulnerable to external sources of electromagnetism. The Low Voltage Directive concerns the electrical safety of our product, as it operates in the low-voltage range of 75-1500 V. By complying with these directives and the standards listed below, we can fulfil the legal requirements for a CE marking. Standards for the automated crane system:

● EN 15011:2011+A1:2014 - Cranes - Bridge and gantry cranes
● EN 60204-1 - Electrical equipment of machines
● EN 12100 - Risk assessment and risk reduction
● EN ISO 13849-1 - Safety-related parts of control systems
● EN ISO 11161 - Integrated manufacturing systems
● EN ISO 10218-1 - Robots for industrial environments

Once the product is finalised, it needs to be tested according to the requirements in the directives and standards. The product must also have technical information documentation as required by the directives, and this documentation must be maintained during the life cycle of the product. Once all of these have been completed, we need to write a legal document, the CE Declaration of Conformity, which describes our responsibility under the CE directives. Our current project output does not fulfil all of the standards for a fully automated crane system, and as such the user controls a dead man's switch, which adds extra safety to the system. We would also need to add more automated safety systems that stop the crane if a collision is unavoidable.

7) SWOT-analysis

The strengths of the product are that it is new, innovative and dynamic, meaning it can operate in collaboration with other machines. We also have funding from Aalto, and as a result we can spend time on research, development and prototype making even if it is not profitable in the short run. Aalto can also give guidance in patenting and offer expertise from experienced professors.

The weaknesses are our lack of practical experience with cranes, as well as our lack of earlier experience with LIDARs. We do not have a reputation on the market, and our potential customers may not trust our products.

Opportunities are a merger or acquisition, and changes in the law that allow cooperation between autonomous cranes and humans, which would give us a competitive edge over other products, as we will have a tested product by that point.

Threats include not finding a merger or acquisition partner and having to re-evaluate and alter our business plan, as well as changes in the law prohibiting the use of our system, a saturated market, and customers preferring manually operated systems because these are common and familiar to them.

In order to succeed in this market, we need to acquire knowledge of crane operation from people with first-hand experience in order to know the market. We also need to know the laws that may restrict our product, so that we know which markets and applications to specialize in. This should happen during the product development phase, after we have tested the feasibility of operating an autonomous crane.

Supplement: Distribution of work and learning outcomes

The most difficult tasks were conformance and intellectual property, as they required knowledge of how and where to search for standards and patents. The report gave us the opportunity to reflect on our product and the market from a business and manufacturing perspective, and made us think about the importance of patents and standards in a business.
