
REDUCING COMPUTER VISUALIZATION ERRORS FOR IN-PROCESS MONITORING OF ADDITIVE MANUFACTURING SYSTEMS USING SMART LIGHTING AND COLORIZATION SYSTEM

J. Engle, R. Nguyen, K. Buah, and J. M. Weaver

Department of Manufacturing Engineering, Brigham Young University, Provo, UT 84602

Abstract

Computer vision systems that monitor additive manufacturing processes are susceptible to producing false-positive errors for defects. Two of the main sources for these errors come from uncontrolled ambient lighting and insufficient visual contrast between prints and their backgrounds. This paper presents a method for controlling ambient lighting and increasing visual contrast for an in-process monitoring system for a 3D printer, using a light-filtering camera enclosure and a smart lighting and colorization system. A single-camera in-process monitoring system was developed and used to visually inspect a series of identical test prints. Various error classes, including false-positive error rates, were tested and measured for the camera system, comparing the results of including a blackout enclosure and a smart lighting system against using the camera system alone. Recommendations for future development of lighting and colorization systems are suggested.

Introduction

In-process monitoring is a highly researched topic in additive manufacturing. Due to an increasing number of end-user products created with additive manufacturing processes [1] and a growing number of industrial applications [2]-[3], developing in-process monitoring capabilities for these processes is crucial. Considerable research has gone into in-process monitoring systems for additive manufacturing, with computer vision systems being among the most commonly used methods [4]-[7].

One area of concern with these computer vision systems is the mistaken detection of print errors that do not exist. These false-positive errors typically occur due to uncontrolled ambient lighting [5] and insufficient visual contrast for features of interest, such as when a print is a similar color to its background [6]. Other error sources have been noted, such as shadows created by prints with complex geometries [7]. These issues can result in the premature termination of otherwise errorless prints.

There are several existing methods that can mitigate issues with ambient light and insufficient visual contrast for computer vision systems, including the use of filters, enclosures, and proper lighting position [8]-[9]. However, these methods may be difficult to implement on additive manufacturing processes, since these processes can make objects in a large variety of colors, sizes, materials, and geometries. A more flexible system is needed that can consistently provide sufficient lighting control and visual contrast between a print and its background, regardless of the varying physical properties between prints.

This paper presents a method for reducing false-positive errors in computer vision systems used for in-process monitoring of additive manufacturing systems through the use of a Smart Lighting and Colorization System (SLCS). An Ender 3 Pro desktop Fused Filament Fabrication (FFF) printer was used to test the SLCS (described below) with a single-camera print failure detection system. The system controls ambient lighting and provides increased visual contrast between a print and its background. It was hypothesized that the print failure detection system would occasionally give false-positive errors, and that the use of the SLCS would decrease this error rate.

Method

In order to test this concept, we developed the Smart Lighting and Colorization System and a single-camera print failure detection system. The systems' functionality and their components are described below.

Smart Lighting and Colorization System (SLCS): The SLCS (Figure 1) works by identifying the color of the filament and shining that color into the printing space. Illuminating the object with like-colored light will cause it to brighten and stand out from other colors, which results in the desired visual contrast [10]. To detect filament color, we used a TCS3200 Programmable Color Light-to-Frequency Converter and the pigpio C library. An algorithm describing the process is given below:

1. Calibrate the sensor by measuring and recording the frequencies of black and white colors, which gives a series of threshold values.
2. Convert these threshold values into the RGB format.
3. Pass the RGB values from the sensor to the LED strip, which then illuminates the printing area with the corresponding color.
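As a concrete illustration of steps 1-2, the sketch below shows one way to map raw TCS3200 channel frequencies onto 8-bit RGB values using black/white calibration thresholds. The helper names and the read_channel callable are assumptions for illustration, not the system's actual code; the sensor reports each color channel as a square-wave frequency, and the black and white readings bound the usable range of each channel.

def calibrate(read_channel):
    """Record per-channel frequency thresholds for black and white references.

    read_channel(name) is a user-supplied callable returning the TCS3200
    output frequency (Hz) with the 'red', 'green', or 'blue' filter selected.
    """
    input("Hold the sensor over a BLACK reference, then press Enter...")
    black = {c: read_channel(c) for c in ("red", "green", "blue")}
    input("Hold the sensor over a WHITE reference, then press Enter...")
    white = {c: read_channel(c) for c in ("red", "green", "blue")}
    return black, white

def frequency_to_rgb(freqs, black, white):
    """Linearly map raw channel frequencies onto 0-255 RGB components."""
    rgb = []
    for c in ("red", "green", "blue"):
        lo, hi = black[c], white[c]
        scale = (freqs[c] - lo) / (hi - lo) if hi != lo else 0.0
        rgb.append(int(max(0.0, min(1.0, scale)) * 255))  # clamp, then scale
    return tuple(rgb)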

The SLCS starts by using the TCS3200 color sensor to detect the frequencies from the color of the filament roll and pass those frequencies on to the Raspberry Pi 3 computer. An algorithm converts these frequency values into an RGB (Red-Green-Blue) value. The RGB values are converted to HSV values and are then passed from the Pi to a WS2812B LED strip, which is driven through the Userspace Raspberry Pi PWM library. The LED strip then illuminates the build plate with the same color as the filament. The LED lights are placed inside a 3D-printed light fixture to direct the beam at the area of interest on the build plate.
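The output stage can be sketched as follows, assuming the rpi_ws281x Python bindings for the WS2812B strip; the LED count, GPIO pin, and function name are illustrative rather than the authors' actual configuration.

import colorsys
from rpi_ws281x import PixelStrip, Color

LED_COUNT = 30  # number of LEDs in the strip (assumed)
LED_PIN = 18    # GPIO pin wired to the strip's data line (assumed)

def illuminate_build_plate(rgb):
    """Light the whole strip with the filament color detected by the sensor."""
    r, g, b = rgb
    # The pipeline described above routes the value through HSV; the
    # round-trip conversion is shown here for completeness.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    r, g, b = (int(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

    strip = PixelStrip(LED_COUNT, LED_PIN)
    strip.begin()
    for i in range(strip.numPixels()):
        strip.setPixelColor(i, Color(r, g, b))
    strip.show()

illuminate_build_plate((200, 30, 30))  # e.g., a red PLA roll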

Figure 1. Smart Lighting and Colorization System (SLCS)

Single-Camera Error-Detection System: The single-camera error-detection system (Figure 2) works by using algorithms to distinguish the 3D print from its background and to check the object against a pre-specified error type. It was created using a Raspberry Pi 3 and a Raspberry Pi camera, both selected for their low cost and natural interfacing. The system resolution was set at 640 by 480 pixels at a framerate of 25 frames/sec. The algorithms were written in Python 3.5 using OpenCV 3.3.0 libraries, allowing for real-time image detection and manipulation. An algorithm describing the error-detection process is given below:

1. Foreground elimination to clearly identify the printed part.
2. Detection and tracking of a marker on the printer nozzle.
3. Taking pictures of the part and saving them.
4. Analyzing the part for errors and logging the results.
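Before detailing these steps, the camera setup stated above can be sketched as follows: a Raspberry Pi camera configured for 640 by 480 pixels at 25 frames/sec, with frames handed to OpenCV as NumPy arrays. This is a minimal assumed setup using the picamera library, not the authors' exact code.

import time
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)   # resolution used by the system
camera.framerate = 25            # framerate used by the system
raw = PiRGBArray(camera, size=(640, 480))
time.sleep(2)  # let auto-exposure and white balance settle

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array  # BGR NumPy array, ready for OpenCV processing
    # ... foreground elimination and marker tracking would run here ...
    raw.truncate(0)      # clear the stream buffer for the next frame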

The camera system starts by using foreground elimination techniques, in which HSV values are manually adjusted until the camera "filters out" all colors besides the desired filament color. Next, we track the nozzle using an algorithm that applies binary thresholding and image filtering to locate a triangular marker placed above the nozzle. The camera then takes pictures of the 3D build during the print, triggered by occasional pauses that are included in the G-code. After the camera captures the build image, the image is analyzed for printing errors by finding the Harris corner coordinates on the part and comparing them to a range of predetermined boundaries. A part is considered "defective" if any part of the 3D print is found outside the predetermined boundaries. Afterwards, the results are logged with a reference to the matching picture, indicating whether or not the part is out of bounds. The camera system continues to take pictures and analyze the part for defects during each pause in the G-code for the rest of the print.
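The sketch below illustrates the three image-processing techniques named above: HSV foreground elimination, locating the triangular nozzle marker via binary thresholding and contour filtering, and the Harris-corner boundary check. It is a minimal reconstruction under assumed parameter values (the HSV limits, binary threshold, and boundary box are illustrative), not the authors' exact implementation.

import cv2
import numpy as np

def isolate_filament(frame_bgr, hsv_lo=(0, 120, 70), hsv_hi=(10, 255, 255)):
    """Foreground elimination: mask out every color but the filament's."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

def find_marker(frame_bgr, thresh_val=127):
    """Locate the triangular nozzle marker by thresholding and contour shape."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh_val, 255, cv2.THRESH_BINARY)
    # [-2] keeps this working across OpenCV 3.x and 4.x return signatures.
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 3:                 # three vertices: a triangle
            return cv2.boundingRect(approx)  # (x, y, w, h) of the marker
    return None

def part_out_of_bounds(frame_bgr, bounds=(100, 80, 540, 400), rel_thresh=0.01):
    """Flag a defect if any strong Harris corner lies outside the boundary box."""
    gray = np.float32(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > rel_thresh * response.max())
    x1, y1, x2, y2 = bounds
    return any(not (x1 <= x <= x2 and y1 <= y <= y2) for x, y in zip(xs, ys))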

Figure 2. Single-Camera Error-Detection System

Experiment

Three tests were designed to evaluate the error-detection capabilities of the camera system by measuring the effects of natural versus artificial lighting sources and of the color contrast between the lighting source and the 3D build. The effects of these factors are measured by the false-positive error rate, the undetected error rate, and the correct defect response rate. The tests were conducted in three respective scenarios:

1. Using the camera system in typical lighting conditions
2. Using the camera system in the blackout enclosure with a white light
3. Using the camera system in the blackout enclosure with the SLCS


Figure 3. PLA builds used in the experiment. (Defect-Free and Defective)

Typical Lighting Test: The camera system was set up and used to monitor a series of identical builds (Figure 3) created by an Ender 3 Pro desktop FFF printer. 50% of the builds were printed without any defects, and 50% of the builds were printed with an intentional defect (mid-print layer shift). The blackout curtains and plywood top were removed from the enclosure, and the environment was lit by standard overhead fluorescent lighting (Figure 4). The procedure for this experiment is listed below.

1. Turn on the 3D printer and Raspberry Pi.
2. Warm up the 3D printer's hot end and insert PLA filament into the extruder.
3. Start running the OpenCV program (no SLCS) on the Raspberry Pi.
4. Place an object in front of the camera (same color as the PLA) and calibrate the vision system to detect that color only.
5. Load the program for the 'defect-free' print into the 3D printer and start it.
6. Remove the object from the print bed and begin printing.
7. Remove the print and any other PLA material on the build plate when it is finished.
8. Count the number of recorded defects and compare the number of claimed defects against the number of actual defects (see the sketch after this list).
9. Reset the program.
10. Repeat steps 3-9 for 3 more builds of the 'defective' print in the same PLA color.
11. Change the printing program to 'defect-free', then repeat steps 3-9 for 4 more builds.
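Step 8's comparison of claimed versus actual defects can be sketched with a short log parser, assuming the one-line-per-image log format shown later in Figure 9 (e.g., "followtest_img1 Not out of bounds"); the file name and the exact wording of a flagged entry are assumptions.

def count_claimed_defects(log_path):
    """Count log entries where an analyzed image was flagged out of bounds."""
    claimed = 0
    with open(log_path) as log:
        for line in log:
            line = line.strip()
            # Any non-empty entry that is not "Not out of bounds" is treated
            # as a claimed defect (the exact defect wording is assumed here).
            if line and not line.endswith("Not out of bounds"):
                claimed += 1
    return claimed

print(count_claimed_defects("RedDef02.txt"))  # hypothetical log file name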


Figure 4. Camera System in Natural Lighting (No SLCS or Blackout Enclosure)

Blackout with Neutral Lighting Test: The procedure for the camera system test (with neutral lighting) is changed by the addition of a blackout enclosure and a white LED light shining on the print bed (controlled by the SLCS), as seen in Figure 5.

1. Turn on the 3D printer and Raspberry Pi.
2. Warm up the 3D printer's hot end and insert PLA filament into the extruder.
3. Start running the OpenCV program (with SLCS) on the Raspberry Pi.
4. Use a white object to calibrate the SLCS, ensuring that the build plate is illuminated with white light.
5. Place an object in front of the camera (same color as the PLA) and calibrate the vision system to detect that color only.
6. Load the program for the 'defect-free' print into the 3D printer and start it.
7. Remove the object from the print bed and begin printing.
8. Close the enclosure curtains to ensure no outside light comes in.
9. Remove the print and any other PLA material on the build plate when it is finished.
10. Count the number of recorded defects and compare the number of claimed defects against the number of actual defects.
11. Reset the program.
12. Repeat steps 3-11 for 3 more builds of the 'defective' print in the same PLA color.
13. Change the printing program to 'defect-free', then repeat steps 3-11 for 4 more builds.


Figure 5. Printing Process using Neutral Lighting and Blackout Enclosure

Blackout with SLCS Lighting Test: The procedure for the camera system test (with SLCS lighting) is changed by the addition of a blackout enclosure and LED lighting matching the filament color shining on the print bed (controlled by the SLCS), as seen in Figure 6.

1. Turn on the 3D printer and Raspberry Pi.
2. Warm up the 3D printer's hot end and insert PLA filament into the extruder.
3. Start running the OpenCV program (with SLCS) on the Raspberry Pi.
4. Calibrate the SLCS by using the color sensor to detect the color of the filament, so that the build plate is illuminated with lighting of a similar color to the PLA to be printed.
5. Place an object in front of the camera (same color as the PLA) and calibrate the vision system to detect that color only.
6. Load the program for the 'defect-free' print into the 3D printer and start it.
7. Remove the object from the print bed and begin printing.
8. Close the enclosure curtains to ensure no outside light comes in.
9. Remove the print and any other PLA material on the build plate when it is finished.
10. Count the number of recorded defects and compare the number of claimed defects against the number of actual defects.
11. Reset the program.
12. Repeat steps 3-11 for 3 more builds of the 'defective' print in the same PLA color.
13. Change the printing program to 'defect-free', then repeat steps 3-11 for 4 more builds.


Figure 6. Camera View using SLCS and Blackout Enclosure

Results

Correct Error-Detection Rate: The camera system was used to detect errors in 4 defective builds and 4 non-defective builds for each of the three lighting situations. The builds were determined to be defective or non-defective by comparing their current shape to pre-programmed boundaries throughout the print (see Figure 7). After the experiments, the correct defect response rate was calculated by determining if the system correctly labeled prints as defective or non-defective. Results are recorded in Table 1.
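The three reported rates follow directly from comparing claimed and actual defect labels; a small sketch of this bookkeeping (with hypothetical outcome tuples) is given below.

def compute_rates(outcomes):
    """outcomes: list of (actually_defective, claimed_defective) booleans."""
    n = len(outcomes)
    false_pos = sum(1 for actual, claimed in outcomes if claimed and not actual)
    undetected = sum(1 for actual, claimed in outcomes if actual and not claimed)
    correct = sum(1 for actual, claimed in outcomes if actual == claimed)
    return (false_pos / n, correct / n, undetected / n)

# Example: 4 defective builds, 2 of which were never flagged (as in the
# SLCS "Defective" row of Table 1).
print(compute_rates([(True, True), (True, True), (True, False), (True, False)]))
# -> (0.0, 0.5, 0.5)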

Figure 7. Camera View of Defective and Non-Defective Builds and Superimposed Boundaries


The data in Table 1 show that the camera was able to accurately detect defective and non-defective builds under all lighting conditions, with the exception of using the SLCS with the blackout enclosure. This is somewhat unexpected, as other research studies have shown that false-positive and other errors occur as a result of the ambient lighting present in an unenclosed printing environment. The reasons for this are discussed in the next section of this paper.

Table 1. Error Detection Results from Different Lighting Systems

Lighting System Type                     Print Type      False-Positive Rate   Correct Defect Response Rate   Undetected Defect Rate
Natural Lighting                         Defective       0.00%                 100.00%                        0.00%
Natural Lighting                         Non-Defective   0.00%                 100.00%                        0.00%
Neutral Lighting (White Light)           Defective       0.00%                 100.00%                        0.00%
Neutral Lighting (White Light)           Non-Defective   0.00%                 100.00%                        0.00%
Smart Lighting and Colorization System   Defective       0.00%                 50.00%                         50.00%
Smart Lighting and Colorization System   Non-Defective   0.00%                 100.00%                        0.00%

False-Positive and Undetected Error Rate: It is important to note that no false-positive errors occurred during these experiments, indicating that the camera system was always capable of distinguishing the build from its background, even in natural lighting. While this was expected for both lighting systems tested in the blackout enclosure, false-positive errors were expected to occur to some degree in the natural lighting environment. There are two possible reasons why no false-positive errors occurred for any of the lighting systems.

First, the foreground elimination process took place in the same lighting conditions that the printing process would take place in. This means that the camera system had already filtered out all background colors that would occur in the printing process, leaving nothing but the build visible to the camera.

The filtering-effect explanation seemed to be especially true in the natural light experiments. However, it is notable that the printer's metal frame reflected same-colored light in the SLCS setup while the print was in progress, which could potentially cause a false-positive error. For a reason not yet discovered, the light seemed to change to a more neutral color each time the printer stopped to take a picture for analysis (see Figure 8).


Figure 8. Color Contrast between Print in Progress (Left) and Paused Print (Right) for the Smart Lighting and Colorization System

Another possible reason is that the lighting was very stable during the natural light experiments, without any light dimming or interfering shadows cast by people or moving objects. The camera system and the 3D printer were left in the enclosure with the blackout curtains up during the natural light experiments (see Figure 4). This meant that the enclosure provided a roof over the monitoring and printing area, blocking out overhead lighting while allowing light to come in from the sides.

There were two builds that had undetected errors during the SLCS experiments, which was unexpected. The experimental logs and the last images taken for these two misidentified prints (see Figures 9 and 10) show that the last images recorded were of the two defective prints near the end of their non-defective stages, with no images recorded of the defective stage of the prints. This indicates that the camera failed to take a picture during the defective stages of the print, which meant the error-detection system never got an opportunity to assess whether there was a defect in that part of the print.

This error most likely occurred because the marker was not identified by the camera system near the end of the print, which resulted in a picture not being taken. Because of this, it is very possible that the undetected errors stemmed from marker misidentification rather than from mistakenly identifying the defective prints as non-defective.


Figure 9. Experimental Log for 2 Misidentified Prints (RedDef02 and RedDef04)

Figure 10. Last Pictures for 2 Misidentified Prints (RedDef02 and RedDef04)

Conclusions

This experiment provided a practical assessment of the effectiveness of the current SLCS and of the error-detection camera system's capabilities. The conclusions and recommendations for future research are given in the following sections.

Camera System Capabilities: The camera system performed satisfactorily under most lighting conditions in these experiments. Some of the noted benefits of the camera system are given below.

● The camera system was able to consistently identify defective and non-defective test prints under both neutral and natural lighting conditions.


● The camera system was found capable of correctly identifying different-colored defective or non-defective test prints in the pre-experimentation phase of the project during machine calibration.

● The camera system had a 0% false-positive error rate, meaning that it was fully capable of discerning a 3D printed object from its background under specified conditions.

Several areas of improvement are noted below for the camera system:

● The camera system was inconsistent at recognizing when the marker was in the pause position, which meant it sometimes didn’t take pictures needed to determine if a print was defective or non-defective.

● The camera system was never tested in an environment with varying lighting conditions, which limits the validity of our data collected from our experimental natural lighting conditions.

● The camera system could only detect out-of-boundary errors, which meant it would consider a part with surface defects or within-boundary warping as non-defective.

● The camera system could only determine the presence of a defect by checking the position of the print against a superimposed boundary, often a simple shape. This would be ineffective in any application where complex part geometry needs to be analyzed.

● The camera system relied on customized G-code that included pauses in printing. In practice, this would result in longer lead times for 3D printed objects and may result in more opportunities for surface defects.

● The camera system was capable of performing foreground elimination for only one color, which meant it would not work for a multi-colored 3D print.

● The camera system used only one camera, which is only useful for analyzing one side of an object.

● The camera system, in its current configuration, would only be effective with FFF printers.

Capabilities of SLCS: This version of the SLCS did not perform as expected. The reasons for this are listed below:

● The light from the SLCS had a tendency to reflect off of the 3D printer's metal frame, with the potential to cause issues with detecting the marker or the build itself. This is in part due to utilizing directional lighting techniques, which are less effective with reflective, specular surfaces [10].


● The SLCS was not capable of consistently illuminating the build plate with the exact same color as the filament. This is possibly due to the inexpensive color sensor, and due to some ambient light present within the enclosure while the system was calibrating.

● The SLCS did not solve the issue of making an object that is the exact same color as its background discernible from that background.

● The SLCS had a tendency to change to a more neutral color when the camera checked the 3D build for errors (see Figure 8), instead of keeping the filament color. This negated the theoretical benefit of having same-colored lighting to create contrast between the build and its background.

Future Recommendations: Due to design and experimental shortcomings, the camera system and the Smart Lighting and Colorization System didn’t perform as well as expected. The design and experimental limitations stemmed primarily from the camera system trying to detect a different colored marker and 3D build, limited error-detection capabilities of the camera system, flawed lighting positioning and colorization consistency from the SLCS, and an inaccurate representation of natural lighting conditions. Some recommendations to address these issues include the following:

● Replace the marker and single-photo analysis method with a system that monitors the build’s printing progress in real-time.

● Coat the 3D printer’s frame with a non-reflective paint to minimize light reflection issues.

● Use lighting techniques such as backlighting that provide a contrasting background without directly illuminating the specular metal surfaces on the 3D printer.

● Create a more robust error-detection program that is able to detect more naturally occurring errors, such as surface defects, missing material flow, etc.

● Design a natural lighting “torture test” that would allow the camera system to more accurately undergo natural changes in lighting that may occur during the 3D printing process. This may include projected shadows from stationary or moving objects, dimming and brightening of external lighting, etc.

With these improvements, the SLCS would be more effective at interfacing with the camera system and would be able to provide consistent contrast between the build and its background. It would also be possible to more accurately compare the SLCS's effectiveness against realistic natural lighting conditions. There is also future research that may be conducted beyond the scope of this project, such as comparing the ability of the SLCS and the camera system to effectively monitor and analyze 3D builds of different shapes and HSV color values, and doing so with a multi-camera system. Due to its visual limits, methods besides an HSV color-vision camera system will also have to be pursued to create effective error-detection systems for other 3D printing processes, such as stereolithography and selective laser melting.


References

[1] H. Kim, Y. Lin, and T. L. B. Tseng, "A review on quality control in additive manufacturing," Rapid Prototyping Journal, vol. 24, no. 3, pp. 645-669, 2018.
[2] P. Kelly-Detwiler, "Why Additive Is Positive: The Benefits of 3D Printing the Power Industry," 2018. [Online]. Available: https://www.ge.com/power/transform/article.transform.articles.2018.jun.the-benefits-of-3d-printing. [Accessed 18 Dec 2018].
[3] B. Berman, "3-D printing: The new industrial revolution," Business Horizons, vol. 55, no. 2, pp. 155-162, 2012.
[4] J. Straub, "Initial Work on the Characterization of Additive Manufacturing (3D Printing) Using Software Image Analysis," Machines, vol. 3, no. 2, pp. 55-71, 2015.
[5] R. Aroca, C. E. H. Ventura, I. De Mello, and T. F. P. A. T. Pazelli, "Sequential additive manufacturing: automatic manipulation of 3D printed parts," Rapid Prototyping Journal, vol. 23, no. 4, pp. 653-659, 2017.
[6] F. Baumann and D. Roller, "Vision based error detection for 3D printing processes," 2016 International Conference on Frontiers of Sensors Technologies, Hong Kong, 2016.
[7] S. Nuchitprasitchai, J. Pearce, and M. Roggeman, "Three Hundred and Sixty Degree Real-Time Monitoring of 3-D Printing Using Computer Analysis of Two Camera Views," Journal of Manufacturing and Materials Processing, vol. 1, no. 1, pp. 1-32, 2017.
[8] D. Martin, "Basic Lighting Techniques for Machine Vision," [Online]. Available: https://www.visiononline.org/userAssets/aiaUploads/file/CVP_Beginning-Lighting-for-Machine-Vision_Daryl-Martin.pdf. [Accessed 9 Apr 2019].
[9] M. Tardiff, "Choosing a camera for inspection: Color or monochrome?" 2004. [Online]. Available: https://www.edn.com/design/test-and-measurement/4384112/Choosing-a-camera-for-inspection-Color-or-monochrome-. [Accessed 11 Apr 2019].
[10] "A Practical Guide to Machine Vision Lighting," [Online]. Available: http://www.ni.com/white-paper/6901/en/. [Accessed 9 Apr 2019].
