Evolution of AL5A Robotic Arm
Derek Pike, Mike DeSeno
Project Report No. 4 - FINAL
December 7, 2016
CNT 4104 Software Project in Computer Networks
Instructor: Dr. Janusz Zalewski
Department of Software Engineering
Florida Gulf Coast University
Ft. Myers, FL 33965
Copyright © 2016 by Florida Gulf Coast University
1. Introduction
The AL5A Robotic Arm with the SSC-32 microcontroller (Figure 1.1) incorporates four degrees of freedom: base rotation and single-plane shoulder, elbow, and wrist motion. In addition, the robotic arm provides a functional gripper as well as a wrist rotate.
Figure 1.1: AL5A Robotic Arm with SSC-32 Microcontroller [1]
In a previous project [4], the robotic arm was connected to the Internet in order to relay commands remotely. Such implementations included connecting to a NearBus server, a cloud connector that receives client commands wirelessly, and most recently to Amazon Web Services, which allowed more efficient operation. Services such as Simple Queue Service (SQS) and AWS Lambda allowed for a wireless hardware connection to the cloud as well as the ability to execute software based on signal calls. These commands were issued via a client user interface, which allowed users to manually move the robotic arm in whichever way was allowed.
Another addition the previous project integrated was the Arduino Yun microcontroller (Figure
1.2). The Arduino Yun microcontroller allows for increased processing power as well as memory
storage and an Ethernet port.
Figure 1.2: Arduino Yun Microcontroller [2]
The current project looks to build on top of the previous implementations by including a webcam on the base of the robotic arm in order to detect objects in view. OpenCV is a computer vision and machine learning software library [3] that provides this capability along with other features such as 3D model extraction and object tracking. Information such as shape, color, and distance can be analyzed (Figure 1.3). The robotic arm can then use this information to retrieve the foreign object in view via its x and y coordinates.
Figure 1.3: OpenCV multiple object detection with color [3]
Essentially, all user interaction will be avoided, and the robotic arm will be able to operate on its own, using the webcam as a reference for where the object that needs to be removed is located. Commands will be sent to the cloud (Amazon Web Services) and relayed to the Arduino Yun to perform the needed operations. The project objective is to be able to analyze objects that come into view of the robotic arm and successfully retrieve those objects with no user interaction. This project could be used as a reference for manufacturing plants that would like to automate an assembly line.
2. Software Requirements Specification
The current project aims to build on top of the previous project's cloud architecture and to
implement a system that can analyze objects via a webcam in order to retrieve them with no user
interaction. The previous project was able to successfully control the robotic arm remotely via an
iOS mobile application. This project will not use a mobile application but instead a desktop
application to implement a user interface. The system configuration can be viewed from the
physical diagram in Figure 2.1.
Figure 2.1: System’s Physical Diagram
The diagram shows a Webcam connected to a computer via a USB connection. This provides a continuous video feed that the OpenCV software constantly analyzes. After detecting an object, the computer sends commands to an Amazon server, which holds the machine instructions for moving the robotic arm; these can then be sent to the Arduino Yun for
execution. The Arduino Yun connects to the robotic arm via the SSC-32 Microcontroller that
controls all of the on-board motors. This flow of information can be better described by the
context diagram, which shows how the software will function in this project (Figure 2.2).
Figure 2.2: Context Diagram
There are three pieces of software for this system (Figure 2.2), for which respective
requirements are listed below.
2.1 Webcam and Graphical Data Software
2.1.1: The Webcam Software shall be able to receive, interpret and analyze the video data
using the OpenCV library software. [Derek Pike]
2.1.2: The Webcam Software shall connect to the AWS software when initialized. [Mike
DeSeno]
2.1.3: The Webcam Software shall relay video data to the AWS server once initialized.
[Mike DeSeno]
2.1.4: The Webcam software shall detect any object that comes into view of the Webcam.
[Derek Pike]
2.1.5: The Webcam Software shall be able to analyze the robotic arm movements needed
to retrieve the detected object. [Derek Pike]
2.1.6: The Webcam Software shall use an algorithm (see Appendix A) for analyzing
needed robotic arm movements based on detected object’s coordinates via video data.
[Derek Pike]
2.2 AWS Server Requirements
2.2.1: The AWS Server shall connect to the Microcontroller Software once the Webcam
Software is initialized. [Derek Pike]
2.2.2: The AWS Server shall disconnect from the Microcontroller Software once the
Webcam Software terminates. [Mike DeSeno]
2.2.3: The AWS Server shall disconnect from the Webcam Software once the Webcam
Software terminates. [Mike DeSeno]
2.2.4: The AWS Server shall handle POST requests sent from the Webcam Software.
[Derek Pike]
2.2.5: The AWS Server shall send POST requests to the Microcontroller Software. [Mike
DeSeno]
2.2.6: The AWS Server shall store AL5A movement commands inside Amazon’s S3
Bucket file storage system. [Derek Pike]
2.3 Arduino Yun Microcontroller Software Requirements
2.3.1. The Microcontroller Software shall handle parameters received from the AWS
Server. [Derek Pike]
2.3.2. The Microcontroller software shall send parameters as Pulse-Width Modulation
signals via Pin 11 to the SSC-32 Microcontroller onboard the Robotic Arm. [Mike
DeSeno]
2.3.3. The Microcontroller Software shall have security attributes to connect to the AWS
Server. [Mike DeSeno]
2.4 Design Constraints
2.4.1. The Microcontroller Software shall be able to connect wirelessly to the AWS
server. [Derek Pike]
3. Design Description
3.1 Software Architecture
Consistent with the context diagram, the Software Architecture is presented in Figure 3.1.
The Webcam Software includes components such as the OpenCV library which will be used to
detect objects in the video feed. Once an object is detected, the Webcam Software will compute
the needed robotic arm movements to retrieve the object based on the object’s coordinates. Also,
a connection to the Amazon Server is needed in order to send data. Inside of the AWS module,
the AWS Lambda component, which includes auto scaling features, enables code to be run on
the server. The AWS Lambda sends data received from the Webcam Software to an AWS S3
Bucket, which is a cloud storage component. In order to pass these data successfully, AWS SQS (Simple Queue Service) is used to deliver multiple messages to the S3 Bucket at a time. Once the data are stored in the Bucket, the Microcontroller Software can
retrieve these data, and then send them to the AL5A Robotic Arm for execution.
Figure 3.1: Software Architecture
3.2 Static Detailed Design
Figure 3.2 illustrates the structure of the Webcam Software, which includes the details of
the components. The Webcam Software needs to be able to retrieve video feed from the Webcam
and use OpenCV to analyze this video. Once OpenCV detects an object, it computes the needed
Robotic Arm Movements and the Webcam Software connects to the Amazon Server to send a
command for the Robotic Arm in JSON data format.
Figure 3.2: Structure Chart for Web Camera Software
Amazon Lambda is used to receive the data from the Webcam Software. Amazon
Lambda needs to be able to send data to the data storage component, AWS S3 Bucket, once
connected to AWS SQS. This description can be viewed in Figure 3.3.
Figure 3.3: Structure Chart for Amazon Server
The structure chart for the Microcontroller Software is shown in Figure 3.4. Once data
are ready to be retrieved on the Amazon Server, the Microcontroller retrieves the data in the
AWS S3 Bucket and relays them to the AL5A SSC-32 Microcontroller for execution.
Figure 3.4: Structure Chart for Microcontroller Software
3.3 Dynamic Detailed Design
A flowchart for the operation of the entire system is shown in Figure 3.5. When the
Webcam Software is started, the Amazon Server Software, Microcontroller Software, and AL5A
robot itself are also initialized. Once initialized, the Webcam Software, particularly the OpenCV
library, waits to detect an object from the video feed supplied by the webcam. If an object is
detected, the Webcam Software will compute the needed Robotic Arm movements to retrieve the
object. After the needed movements are calculated, the Webcam Software sends requests, one at
a time, to AWS for the Robotic Arm to perform a specified action. The AWS Server will then
send this command to the Microcontroller Software where it will validate that there is a
connection with the AL5A Robotic Arm. If there is a valid connection, it will run the command
on the SSC-32 microcontroller in the AL5A to execute and wait for more commands from the
AWS Server. If there is not a valid connection, an error message will appear on the console
where the Webcam Software is running, which will cause the Webcam Software to terminate and
the video data to stop being analyzed.
Figure 3.5: Flowchart illustrating behavior of the entire system
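The branching in this flowchart can be summarized in a small decision function. This is an illustrative sketch only: the nextAction() helper and its action names are ours, since the report defines this behavior graphically in Figure 3.5.

```cpp
#include <cassert>
#include <string>

// One step of the system's control loop: decide the next action given
// whether OpenCV detected an object and whether the connection to the
// AL5A arm is valid. Action names are illustrative, not from the report.
std::string nextAction(bool objectDetected, bool armConnected) {
    if (!objectDetected) return "wait";                // keep polling the video feed
    if (!armConnected)   return "error-and-terminate"; // console error, stop analyzing
    return "send-command";                             // relay the movement via AWS
}
```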
3.4 User Interface Design
A minimal user interface is shown in Figure 3.6. The user interface includes a live feed of
the video stream from the Webcam. A user will be able to start or stop the program, which in turn enables or disables the object detection analysis. Finally, the user interface includes a connection
status of the AL5A Robotic Arm, to show if the arm is turned on and available.
Figure 3.6: User Interface Design
4. Implementation and Testing
This project’s implementation includes configuring the Webcam Software, which includes the
OpenCV library, to connect and communicate with the Arduino Yun microcontroller via
Amazon Web Services.
4.0 Project Updates and Changes
Some of the requirements have changed because it was not feasible to use the webcam
software to calculate and command movements of the AL5A arm. Consequently, the Software
Architecture also changed. All changes are described in this section.
4.0.1 Requirement Changes
Requirement 2.1.5, which reads: The Webcam Software shall be able to analyze the robotic arm movements needed to retrieve the detected object, is being removed from the project, in addition to Requirement 2.1.3, which reads: The Webcam Software shall relay video data to the AWS server once initialized. Requirement 2.1.6, which reads: The Webcam Software shall use an algorithm (see Appendix A) for analyzing needed robotic arm movements based on the detected object's coordinates via video data, shall no longer be a requirement of the Webcam Software but of the Microcontroller Software; it shall be renumbered 2.3.4. In addition, Requirement 2.2.6 under the AWS Server Requirements, which reads: The AWS Server shall store AL5A movement commands inside Amazon's S3 Bucket file storage system, shall be changed to read: The AWS Server shall store the targeted object's x and y coordinates inside Amazon's SQS Queue system.
4.0.2 Architecture Changes
The software architecture for the project is now described in Figure 4.1. Inside of the AWS Software module, AWS SQS (Simple Queue Service) holds data sent by the Webcam Software, which are then ready to be read by the Arduino Software. Once the data, which consist of the x and y coordinates of the detected object, are read, the Arduino Software computes the necessary robotic arm movements and sends the commands to the AL5A Robotic Arm. This is a change from the previous Software Architecture illustrated in Figure 3.1.
Figure 4.1: Updated Software Architecture
All of the structure charts in Figures 3.2, 3.3, and 3.4 have also changed. The updated structure chart for the Webcam Software is described in Figure 4.2.
Figure 4.2: Updated Structure Chart for the Webcam Software
The updated structure chart for the AWS Software is illustrated in Figure 4.3.
Figure 4.3: Updated structure chart for the AWS Software
The updated structure chart for the Microcontroller Software is described in Figure 4.4.
Figure 4.4: Updated structure chart for the Microcontroller Software
4.1 Assembly and Coding
The AL5A robotic arm has been previously assembled [4], so this section focuses on the description of the code implementation.
4.1.1 Webcam Software
The Webcam Software's main functionality is the use of the OpenCV library to analyze and detect specific objects. For testing purposes, the software detects an object that has a distinct color, as can be seen in Figure 4.5.
Figure 4.5: Webcam Software detecting object
The OpenCV library analyzes objects by converting an image from the camera feed from
its RGB (Red Green Blue) values to HSV (Hue Saturation Value) values. After the image has
been converted into HSV values, the software computes whether any objects fall in the requested
range of HSV values determined by the slider in Figure 4.6.
Figure 4.6: Hue Saturation and Value Slider
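The conversion and thresholding steps described above can be sketched without the OpenCV library itself; the snippet below is a minimal single-pixel stand-in for cv::cvtColor followed by cv::inRange, using OpenCV's convention of hue in [0, 180) and 8-bit saturation and value.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Hsv { int h, s, v; };

// Convert one 8-bit RGB pixel to OpenCV-style HSV values.
Hsv rgbToHsv(int r, int g, int b) {
    double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
    double mx = std::max({rf, gf, bf}), mn = std::min({rf, gf, bf});
    double d = mx - mn, h = 0.0;
    if (d > 0.0) {
        if (mx == rf)      h = 60.0 * std::fmod((gf - bf) / d, 6.0);
        else if (mx == gf) h = 60.0 * ((bf - rf) / d + 2.0);
        else               h = 60.0 * ((rf - gf) / d + 4.0);
    }
    if (h < 0.0) h += 360.0;
    double s = (mx > 0.0) ? d / mx : 0.0;
    return { (int)(h / 2.0), (int)(s * 255.0), (int)(mx * 255.0) };
}

// True if the pixel falls inside the slider-selected HSV window,
// mimicking cv::inRange for a single pixel.
bool inHsvRange(const Hsv& p, const Hsv& lo, const Hsv& hi) {
    return p.h >= lo.h && p.h <= hi.h &&
           p.s >= lo.s && p.s <= hi.s &&
           p.v >= lo.v && p.v <= hi.v;
}
```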
If the object does fall in the range, the Webcam Software tracks the object and calculates the x and y coordinates of the middle of the object. The x and y coordinates of the identified
object will then be added to a queue which is implemented using Amazon’s Simple Queue
Service (SQS) Queue [5]. A code snippet of the object detection and tracking is shown in Figure
4.7.
Figure 4.7: Illustration of object detection process
Figure 4.8 describes the trackFilteredObject() method, which determines the x and y values of the object.
Figure 4.8: X and Y coordinates calculation
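The coordinate computation in trackFilteredObject() follows the standard OpenCV moments approach (x = m10/m00, y = m01/m00); a library-free sketch of that calculation over a binary detection mask might look as follows. The container types here are ours, chosen to keep the example self-contained.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Point { int x, y; };

// Compute the centre of a binary mask the way cv::moments is used for
// tracking: m00 counts the on pixels, m10/m01 sum their x/y positions.
bool maskCentroid(const std::vector<std::vector<int>>& mask, Point& out) {
    long m00 = 0, m10 = 0, m01 = 0;
    for (std::size_t y = 0; y < mask.size(); ++y)
        for (std::size_t x = 0; x < mask[y].size(); ++x)
            if (mask[y][x]) { ++m00; m10 += (long)x; m01 += (long)y; }
    if (m00 == 0) return false;  // no object detected in this frame
    out = { (int)(m10 / m00), (int)(m01 / m00) };
    return true;
}
```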
4.1.2 Amazon Simple Queue Service
In order for the Webcam Software to communicate with the AL5A Robotic Arm, a queue is
needed to pass data. Amazon SQS provides such a mechanism. The queue can be accessed via a
simple HTTP request which the Webcam Software will perform. The Arduino Yun
microcontroller will then read from the queue as messages continue to be added by the Webcam
Software. The queue can be accessed from the following URL: https://sqs.us-east-1.amazonaws.com/066132136182/AL5ARoboticArm, as shown in Figure 4.9.
Figure 4.9: URL for Simple Queue Service Queue.
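A SendMessage call against this queue boils down to an HTTP request carrying a query string such as the one built below. This is a sketch under assumptions: the "x,y" message-body format is ours, and a real request must additionally be authenticated with AWS credentials (the project handles authentication via Temboo on the Arduino side).

```cpp
#include <cassert>
#include <string>

// Build the query string for an SQS SendMessage request; the comma in
// the assumed "x,y" payload must be percent-encoded as %2C.
std::string buildSendMessageQuery(int x, int y) {
    return "Action=SendMessage&MessageBody=" +
           std::to_string(x) + "%2C" + std::to_string(y);
}
```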
4.1.3 Arduino Yun Microcontroller Software
The Arduino Yun Microcontroller Software implementation includes the Temboo Internet of Things Cloud Stack [6]. Temboo enables the Arduino Yun to connect to the Amazon Server and retrieve elements from the SQS queue AL5ARoboticArm shown in Section 4.1.2. Figure 4.10
illustrates how Temboo makes it easy to read data from Amazon Simple Queue Service.
Figure 4.10: Using Temboo to read messages from Amazon SQS
Messages inside of the queue will include x and y coordinates of the target object that the
arm needs to retrieve. If the queue has data available, Temboo will call the queue on the Amazon
Server and the result will return a message in JSON format which includes the x and y
coordinates for the target object that the Robotic Arm needs to retrieve. The code for integrating
Temboo into the Arduino Yun is included in Figure 4.11.
Figure 4.11: Implementation of Temboo into Arduino Yun Software
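Since the queue message arrives as flat JSON containing the x and y coordinates, the microcontroller side has to pull the two integers out of the message string. The helper below is a hypothetical minimal extractor (the "x"/"y" key names are our assumption about the payload), kept free of any JSON library so it would also fit an Arduino sketch.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Extract an integer field such as "x" or "y" from a flat JSON message
// like {"x":320,"y":240}; returns -1 if the key is absent.
int jsonInt(const std::string& msg, const std::string& key) {
    std::size_t k = msg.find("\"" + key + "\"");
    if (k == std::string::npos) return -1;
    std::size_t colon = msg.find(':', k);
    return std::stoi(msg.substr(colon + 1));  // stoi stops at the next non-digit
}
```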
In order for the code in Figure 4.11 to work properly, Temboo provides a header file that includes the Temboo account information used to access the Amazon server. That header file is included in Figure 4.12.
Figure 4.12: Temboo.h file that includes Temboo account information
Once the Microcontroller Software has retrieved the x and y coordinates from the Amazon SQS Queue, it calculates the robotic arm movements needed to retrieve the object, as illustrated in Appendix A. Once the needed calculations are performed, the Microcontroller sends the required servo values to the SSC-32 microcontroller, which controls the robotic arm. The code snippet for sending commands to the SSC-32 microcontroller is shown in Figure 4.13.
Figure 4.13: Executing needed robotic arm movements to retrieve object
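The SSC-32 accepts simple serial move commands of the form "#&lt;channel&gt; P&lt;pulse&gt; T&lt;time&gt;", with pulse widths in microseconds. A small formatter for such commands, with an illustrative clamp to the commonly used 500-2500 microsecond servo range, might look like this; the specific channels and timings the project uses live in the figure above.

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Format one SSC-32 servo move command, e.g. "#0 P1500 T1000\r":
// channel number, pulse width in microseconds, travel time in ms.
std::string ssc32Move(int channel, int pulseUs, int timeMs) {
    pulseUs = std::max(500, std::min(2500, pulseUs));  // keep pulses servo-safe
    return "#" + std::to_string(channel) +
           " P" + std::to_string(pulseUs) +
           " T" + std::to_string(timeMs) + "\r";
}
```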
4.2 Testing
In reference to the requirements listed in Section 2 and revised in Section 4.0.1, this
section illustrates the test cases used to verify the functionality of the project. Three test cases are
described to show how some of the listed requirements were sufficiently met. Each test case is labelled according to its corresponding requirement.
Test Case ID: 2.1.1
Description: The Webcam Software shall be able to receive, interpret and analyze the video data
using the OpenCV library software.
Input: Webcam Software initialized.
Output: Video stream starts and OpenCV begins analyzing feed. (See Figure 4.5)
Outcome: This requirement has been met by successfully starting video stream.
Test Case ID: 2.2.4
Description: The AWS Server shall handle POST requests sent from the Webcam Software.
Input: Webcam Software makes an HTTP POST request to Amazon SQS Queue.
Output: Successful response from Amazon Server illustrated in Figure 4.14.
Figure 4.14: Successful POST request response from Amazon Server
Outcome: This requirement has been met by the Webcam Software making a successful POST
request to Amazon Server.
Test Case ID: 2.1.4
Description: The Webcam software shall detect any object that comes into view of the Webcam.
Input: Object placed in view of camera.
Output: OpenCV detects and highlights the object. (See Figure 4.5)
Outcome: This requirement has been met by successfully detecting the foreign object.
Test Case ID: 2.2.6
Description: The AWS Server shall store targeted object’s x and y coordinates inside Amazon’s
SQS Queue system.
Input: Poll queue for new message.
Output: Coordinates stored in queue illustrated in Figure 4.15.
Figure 4.15: Illustration of storing object’s coordinates in Amazon SQS
Outcome: This requirement has been met by successfully storing the object's coordinates in the Amazon SQS Queue.
Test Case ID: 2.3.3
Description: The Microcontroller Software shall have security attributes requiring the AWS
Access Key ID and Secret Key ID to connect to the AWS Server.
Input: Attempt to access queue with Access Key ID and Secret Key ID. (Figure 4.16)
Figure 4.16 Input for AWS Access Attempt
Output: Attempt successful; SQS access granted. (Figure 4.17)
Figure 4.17 Permission Granted to Edit SQS
Outcome: This requirement has been met by successfully acquiring access to SQS on AWS
server.
5. Conclusion
This project's goal was to successfully detect objects that need to be retrieved based on the customer's needs (the color of the object) and then remotely send the position of the object, in x and y coordinates, to the Robotic Arm, which then calculates the movements needed to retrieve the object. The listed objectives were successfully achieved; however, at times the robotic arm is not as accurate as one would like it to be in retrieving the foreign object.
Many difficulties were encountered in this project. One of them was researching the OpenCV library used for detecting objects. Thanks to clear documentation, however, this difficulty was resolved and the library was implemented successfully in the project. Another difficulty was understanding what the previous group did, since this project built on their work [4]. Once again, because they documented their implementation very well, it proved beneficial in learning about the AL5A Robotic Arm. In addition, one of the hardest difficulties was figuring out how to design the software without implementing it first. In the end, the Software Requirements and Architecture had to be changed to reflect the final product.
Future extensions of this project could include streaming the webcam feed remotely to a website and performing the object detection on the server. This way, an executable program would not be needed to detect objects, and as long as the webcam is on, the feed would always be running. Another suggestion would be to give the user more interaction with the program instead of having the robotic arm autonomously retrieve the detected object. The Webcam Software could include a confirm button for the user to press in order to request the arm to pick up the desired object. The Webcam Software could also feature two modes: one for autonomous object retrieval, and another requiring the user to request that the arm pick up the object. One last suggestion would be to make the coordinate maps of the robotic arm and the webcam the same. This project featured different coordinate maps, which made it difficult for the arm to reach the object with precision.
6. References
[1] Lynxmotion (2016) AL5D Robotic Arm with SSC. URL: http://www.lynxmotion.com/p-1035-lynxmotion-al5d-4-degrees-of-freedom-robotic-arm-combo-kit-no-software-w-ssc-32u.aspx
[2] Arduino (2015) Arduino Yun. URL: https://www.arduino.cc/en/Main/ArduinoBoardYun
[3] OpenCV (2016) Multiple Object Detection with Color Using OpenCV. URL: https://i.ytimg.com/vi/hQ-bpfdWQh8/hqdefault.jpg
[4] M. Kress, J. Shah, D. Piro, BeagleBone Robotic Arm Controller, Florida Gulf Coast University. URL: http://itech.fgcu.edu/faculty/zalewski/CEN4935/Projects/BeagleboneRobotArmControl.pdf
[5] Amazon Simple Queue Service. URL: https://aws.amazon.com/sqs/
[6] Temboo Internet of Things Cloud Stack. URL: https://temboo.com/
7. Appendix A: Movement Algorithm
Figure 7.1 illustrates the functions necessary to retrieve the desired object.
Figure 7.1: Functions used in movement algorithm
Figure 7.2 shows the functions in Figure 7.1 being executed in the main loop() function.
Figure 7.2: Movement functions being called from loop()
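One way to picture the mapping these movement functions perform is a linear interpolation from a pixel coordinate to a servo pulse width. The sketch below is a simplification under stated assumptions, not the report's actual calibration: the 640-pixel frame width and the 500-2500 microsecond pulse range are illustrative values only.

```cpp
#include <cassert>

// Linearly map a pixel coordinate in [0, frameSize) to a servo pulse
// width in [minUs, maxUs]; frame size and pulse range are assumptions.
int pixelToPulse(int pixel, int frameSize, int minUs, int maxUs) {
    return minUs + (int)((long)pixel * (maxUs - minUs) / (frameSize - 1));
}
```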