Group Final Report — pc/courses/432/2012/... · 2012-05-08
VIRTUAL TUNNEL
Group Final Report ECE532
Abdullah Tabassum, Ervin Sy, Shu Wand
4/9/2012
Table of Contents
1 Overview
2 Outcome
3 Project Schedule
4 Detailed Description of IP Blocks
Appendix
1. Overview
1.1 Background

Recent technological advances have transformed our entertainment experience through the emergence of CRT, LCD, LED and 3-D TVs. The ultimate goal of this rapid progress is to resemble reality as closely as possible, both to enhance the entertainment experience and to simulate real-life situations without their hazards or implications. Engineers keep pushing the line between reality and virtual reality by creating images and animations that look real.

Although there has been a tremendous amount of advancement in computer graphics technology, almost all of it has focused on the 2-D perception of images and video. 3-D TVs address this issue, but they require the user to wear 3-D glasses, which is not very practical, and they are considerably more expensive than their non-3-D counterparts.
1.2 Motivation
Our team traced the origin of the problem described above: most of this advancement has been concerned with displaying images onto a screen without considering the position of the actual viewer with respect to the screen. The foremost goal of computer graphics is that the observer should see images that please him. For this reason, we believe it is essential that the images displayed on the screen be catered to the observer; this is the motivation of our project.
1.3 Project Goal
The goal of our project is to display an object on the computer screen according to the angle and distance of the viewer, giving the viewer a 3-D feel as if the object were coming out of the computer screen.
1.4 Project Overview Description

Our project displays an image based on the distance and angle of the viewer so that it resembles a 3-D image. To achieve this, the system detects the position and size of a marker object in real time with a video camera and displays a tunnel based on the position and size of that marker. The tunnel is displayed on a VGA monitor. The marker is placed on the forehead (by means of a hat or glasses) of the person viewing the monitor. The aim of the system is to give the viewer a sense of 3-D realness for the tunnel, which is really just a two-dimensional image. To achieve this "realness" effect, the tunnel is continuously redrawn to match the perspective an observer would have as he moves. For example, as the observer moves closer to the screen, the tunnel's contours appear to converge; as the observer moves right with respect to the screen, the tunnel's contours move left. Please refer to Figure 1 for an illustration of our tunnel.
Figure 1
This image shows the depiction of the tunnel when the observer views the screen from a position to the right of and below the center of the screen.
Figure 2
This image shows the depiction of the tunnel when the observer views the screen from a position to the left of and above the center of the screen. Comparing the two pictures should also reveal the sense of depth the image can portray: in Figure 2, the observer is further away from the screen than in Figure 1.
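The inverse viewer-to-contour mapping described in the overview (viewer moves right, contours move left) can be sketched as a single signed offset computation. This is our own illustrative sketch, not code from the project; the function name and the 1:1 scale constant are assumptions.

```c
/* Hypothetical sketch of the perspective rule above: the tunnel's
 * vanishing point shifts opposite to the marker's displacement from
 * the frame centre. SCALE = -1 encodes a simple 1:1 inversion; the
 * real project may use a different gain. */
#define SCALE (-1)

int vanishing_point_offset(int marker_x, int frame_center_x) {
    return SCALE * (marker_x - frame_center_x);
}
```

With this convention, a marker 80 pixels right of centre pushes the vanishing point 80 pixels left.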
1.5 System Block Diagram

A block diagram of the system is shown in Figure 3; each component is discussed in detail in later sections.
Figure 3: System block diagram of our whole system
1.6 Module Level Description

Custom Blocks

Color and Shape Detection Hardware – The color detection hardware analyzes each pixel from the video stream (video camera). It reads each pixel from a frame buffer so that the video can continue streaming while the color detection hardware scans for matching pixels; this improves both the performance and the utilization of the system. When the specified color is detected at a location in the frame, the hardware forwards the first coordinate to the coordinate buffer, continues scanning for the last coordinate of the detected object and stores it in the coordinate buffer, then raises a flag telling the video stream that it may write into the other block of the buffer and signals the coordinate calculator that all coordinates have been found. The same process repeats, alternating between the two blocks of the double-buffered frame memory. The video stream cannot write into the previous frame block until the color detection hardware is done with it.

Co-ordinate Calculator – The coordinate calculator determines the size and location of the object from the input coordinates passed by the color detection hardware. The calculator will sometimes receive coordinates that evaluate to different diameters (for example, x1 = 10 and x2 = 15); it then takes the median of the two for easier computation in the video drawer. Once the coordinate and size calculations are done, it forwards the scale factor for drawing the image and the coordinate of the object's center, and signals the video drawer to read in the coordinates. The coordinate calculator must wait if the video drawer is not yet done with the coordinates and size from the previous evaluation.

Video Drawer – The video drawer receives the calculated size and coordinates and draws the image, which is hard-coded into the hardware (a specific pattern). It draws the image into a frame buffer block and signals the VGA drawer when it is done drawing on that block. The video drawer must wait if the VGA drawer has not finished with the previous block.

Video Device Output Drawer – The VGA drawer reads the frame memory buffer block and draws out its contents, but only once the frame is ready, as signalled by the video drawer. These values are passed to the video device output core to be displayed.

Custom Slaves

Video Device Output Drawer Slave – The interface between the video device output drawer and the processor. The processor is used for debugging.

Co-ordinate Calculator Slave – The interface between the coordinate calculator and the processor. The processor is used for debugging.

Co-ordinate Detection Slave – The interface between the coordinate detector and the processor. The processor is used for debugging.
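The producer/consumer handshakes above (video stream vs. detection hardware, calculator vs. drawer, drawer vs. VGA) all follow the same double-buffering pattern. A minimal software model, with names of our own choosing rather than from the HDL, might look like:

```c
#include <stdbool.h>

/* Hypothetical model of the double-buffering handshake described above:
 * the writer fills one frame block while the reader scans the other,
 * and neither side may reuse a block until its partner has released it
 * by raising the corresponding "done" flag. */
typedef struct {
    bool writer_done[2];   /* video stream finished filling block i  */
    bool reader_done[2];   /* detection hardware finished scanning i */
} frame_buffers;

/* The writer may refill block i only once the reader has released it. */
bool writer_may_use(const frame_buffers *fb, int i) {
    return fb->reader_done[i];
}

/* The reader may scan block i only once the writer has filled it. */
bool reader_may_use(const frame_buffers *fb, int i) {
    return fb->writer_done[i];
}
```

In hardware these flags correspond to the ready/done signals exchanged between the cores; the stall conditions in the text ("cannot write into the previous frame block if ... not done with it") are exactly the cases where these predicates return false.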
Other

MicroBlaze Processor – used to signal the slaves for enables and disables, to read specific output values, or to freeze the process at a certain state.

2 Outcome
Review of the Original Goals
All of the goals that we initially proposed have been met. A breakdown of the goals and objectives of the system is given in the list below.
1) Detect an object of a specified colour (red) robustly, such that no other object is detected. Robust detection is very important because without it, the graphical output, which is based on the detection, will be very unpleasant: it will look rigid and discontinuous rather than smooth and continuous. This detection must be done in real time through hardware.
2) Real time rendering of a tunnel scene that is based on the coordinates and size of the marker. The graphical output will be done through software.
3) Maintain an acceptable frame rate so that the visual graphics are pleasing to the observer. The graphical rendering should not be “jumpy” (i.e. the tunnel should not look like it jumps from one location to the next but rather, it should move in a continuous manner).
Final Outcome:
In the end, our project worked just as we expected it to. A tunnel was displayed on the screen with an orientation determined by the position of a marker (with respect to the camera) placed on the glasses of an observer. Some people also had a chance to try out the "virtual tunnel", and they experienced a very pleasing and realistic rendering of a tunnel. Even bystanders watching the viewer in front of the screen could clearly see that the tunnel was moving according to the position of the observer's head (which held the red marker).
To improve the system we built, we would implement the graphics unit in hardware, implement anti-aliasing, introduce 3-D objects (such as a cube) into the scene, and add lighting (shading) effects such as shadows, varying brightness and reflection. Implementing the graphics unit in hardware would allow the system to render scenes faster. Anti-aliasing would make the lines and objects in the scene seem smoother. The 3-D objects and lighting would add an extra hint of realness and make the scene seem less dull than the present tunnel.
If someone were to take over this project, we would recommend first implementing the graphics unit in hardware. Once the graphics are rendered through hardware, they could easily add other effects to their liking. They could extend this to a
game, such as a ping pong game in which a single player plays against the computer. They would then need another object detection core to detect the position of the paddle the player uses to hit the ball back to the computer. Another idea that can be built on this system is a virtual interactive scene such as a forest: the scene could be projected onto a projector screen, and a viewer (with a marker placed on his head) could walk around in front of the camera while the view of the forest changes according to his position. The ideas for expanding this project are endless.
3 Project Schedule
Throughout the course there was only one major setback, which delayed our project by three weeks: the faulty board. We were still able to finish our project on time because we were working on the other modules in parallel, such as the graphics for the tunnel and the custom hardware component. Unfortunately, because of this setback we were not able to add extra features to our project. Below is a comparison of our proposed milestones and the actual milestones. A further reason for the large differences is that, when we proposed the milestones, the team was unfamiliar with the tools and with how we would go about implementing the project.
Dates (2012) — Proposed Milestone vs. Actual Progress

February 15
Proposed: Draw a square on the VGA monitor.
Actual: We were able to draw the entire tunnel on the VGA (ahead of our current milestone, having completed later milestones as well).

February 22
Proposed: Display video frames on the VGA through the FPGA board.
Actual: Not completed, because the Xilinx board we were using was broken (we found this out two weeks later).

February 29
Proposed: Detect the marker and display the detected coordinates through XMD.
Actual: We created an FSM design of our hardware component. We were still struggling with VGA output, as we had not yet discovered that the board was faulty.

March 7
Proposed: Determine the size and coordinates of the center of the marker.
Actual: We coded the FSM in HDL and tested and verified that it worked properly. We had the video output working properly after having our board replaced.

March 14
Proposed: Draw the square on the VGA using the coordinates provided by the object detection block.
Actual: We were able to detect green pixels (but our results were not robust, as the system would detect pixels of other colours as well).

March 21
Proposed: Extend the square to multiple nested squares and perform all translations of the square such that they move together.
Actual: We integrated the FSM, the computer graphics module and the video component. The VGA was able to display the animated tunnel, but it was very jumpy due to poor timing considerations and because the marker detection was not robust.

March 28
Proposed: Draw lines from fixed points on the monitor to fixed points on the squares (refer to Figure 1 and Figure 2 for clarification).
Actual: No progress, because of the ECE496 design fair.

April 4
Proposed: System testing.
Actual: We removed the lag and glitches from the VGA output, made the VGA output correspond to the live feed from the video camera and our hardware component, and made the marker detection robust.

April 11
Proposed: Demo.
4 Detailed Description of IP Blocks
MicroBlaze
MicroBlaze is a default soft processor provided as one of Xilinx’s PCores. The processor was automatically generated using the project wizard in EDK, setting the processor clock speed to be 100MHz. The MicroBlaze processor was used for three main purposes in our system:
1. To debug the system by reading values from the DDR, e.g. which RGB color values were detected or the coordinates of the detected object.
2. To obtain values from the DDR, such as the detected object's size and location, and then draw to the TFT based on those values (code.c).
3. To set the video decoder configuration (video_decoder.c, VGA.h).
PLB Bus (plb_v46)

The PLB buses were taken from the Xilinx IP catalog library (core name plb_v46). The main purpose of the PLB bus is to provide a connection for transporting data from one module to another. We used two PLB buses because multiple buses reduce wait time, since only one module may access a given bus at a time. Connecting both buses to the MPMC also allows two processes to pass data to and from one another in larger quantities.

1. plb_video_to_ram is connected to the Multi-Port Memory Controller (MPMC), as the custom detection module needs to read from and write to DDR memory.
2. plb_uB is connected to the TFT and the MPMC. This allows the MicroBlaze processor to draw directly to the TFT controller or write directly into memory, and lets the processor read important values that the detection module writes, such as size and coordinates.
Bandwidth Calculation

Video draw speed (TFT):
- Video output frame size: 640 x 480 pixels
- Pixel size: 4 bytes/pixel
- Clock speed: 27 MHz
- Time to process a frame: 640 x 480 / 27 MHz ≈ 0.0114 s

Bus speed:
- Bus clock: 100 MHz
- Data per cycle: 4 bytes/clock cycle

Detection speed:
- Video input frame size: 858 x 525 pixels
- Clock speed: 100 MHz
- Scan rate: 1 pixel/clock cycle
- Time to detect the whole object: 858 x 525 / 100 MHz ≈ 0.0045 s

Total time for detection:
- Processor reads: 1
- Processor read time: 1 / 100 MHz = 10 ns (negligible)
- Total time: 0.0045 s + 0.0114 s ≈ 0.0159 s, i.e. about 63 frames/s
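The arithmetic above can be checked in a few lines of C. All figures come from the table (640x480 output at 27 MHz, 858x525 input at one pixel per 100 MHz cycle); microsecond integers keep the math exact, and the truncating division lands at 62 frames/s, consistent with the roughly 63 frames/s derived above.

```c
/* Reproduces the bandwidth calculation above, in microseconds. */
long frame_draw_us(void)  { return 640L * 480 / 27; }   /* 307200 cycles @ 27 MHz  */
long detect_us(void)      { return 858L * 525 / 100; }  /* 450450 cycles @ 100 MHz */
long total_us(void)       { return frame_draw_us() + detect_us(); }
long frames_per_sec(void) { return 1000000L / total_us(); }
```

The single processor read (10 ns at 100 MHz) is omitted, as in the table, since it is negligible next to the millisecond-scale draw and scan times.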
IIC
The XPS_IIC module was generated using the Xilinx IP catalog. It is connected to the PLB so that the MicroBlaze processor can access it. The purpose of the IIC module is to configure the Digilent VDEC1 video decoder board, which, being a slave, requires configuration instructions. The MicroBlaze processor sends these video configuration values to the decoder through this module (VGA.h, video_decoder.c).
TFT Controller
The XPS TFT controller is an IP core generated from the EDK library. The TFT is connected to the PLB bus and acts as both a master and a slave: as a master it reads from memory address 0x40000000 to display the attached video, and as a slave it accepts writes from the MicroBlaze processor when the processor writes to the TFT address. The TFT also bridges a clock domain, handing 8-bit R, G and B values to the VGA monitor at 27 MHz.
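For drawing from the MicroBlaze, pixels land in the frame buffer that the TFT master scans out from 0x40000000. The sketch below assumes the usual XPS TFT layout of a 1024-pixel line stride, 4 bytes per pixel, and RGB packed in the low 24 bits; this is a common convention for the core but should be confirmed against its data sheet, and the helper names are ours.

```c
#include <stdint.h>

/* Assumed XPS TFT frame-buffer layout: base 0x40000000, 1024 pixels
 * (4096 bytes) per line, 32-bit pixels with RGB in bits [23:0]. */
#define TFT_BASE 0x40000000UL

uint32_t tft_pixel_addr(int x, int y) {
    return TFT_BASE + ((uint32_t)y * 1024 + (uint32_t)x) * 4;
}

uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}
```

On the board, code.c would store `pack_rgb(...)` through a volatile pointer at `tft_pixel_addr(x, y)`; here we only model the address arithmetic.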
MPMC & DDR
The MPMC (multi-port memory controller) is a Xilinx IP core that allows multiple buses to be connected to the same memory chip. The MPMC handles read and write requests by queuing or executing requests from the multiple buses. In this case the memory the MPMC manages is DDR, and the attached buses are the plb_uB bus and the plb_video_to_ram bus.
Decoder Logic
The decoder logic module was provided to us on Piazza and was developed by another developer. Its initial purpose is to take input video from S-video, composite video or component video and convert it to RGB values. These RGB values were written to two RGB line buffers, which alternated between being read and written and were used to write to DDR memory.

The decoder logic was modified so that it does not write directly to DDR memory but instead passes RGB data from the line buffers to the custom detection core module.
Detection Core (on top of the decoder module)
Introduction
We base our design on the assumption that the object being detected is circular; therefore, scanning a frame from left to right and top to bottom, the first and last coordinates found span the object's diameter.
[Diagram: the first and last matching pixels define the marker's diameter; the two line-buffer stages (Stage 1, Stage 2) alternate between read and write.]
The object's location is calculated as the average of the first (x1, y1) and last (x2, y2) coordinates detected; the object location (X, Y) is ((x1 + x2) / 2, (y1 + y2) / 2).

The size is the diameter of the detected object: SIZE = y2 - y1.
The object is found by matching each pixel in the video stream against a specific RGB range; in our case we detect a red marker whose intensities satisfy R >= 235, G <= 144 and B <= 142. If all three conditions are met, the pixel is a color match. These RGB thresholds were found by taking a picture and reading the values in Microsoft Paint.
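The per-pixel test above is a simple conjunction of the three channel thresholds; in C it would amount to:

```c
#include <stdbool.h>

/* The red-marker test described above: a pixel matches when all three
 * channel thresholds hold (R >= 235, G <= 144, B <= 142). */
bool color_match(unsigned char r, unsigned char g, unsigned char b) {
    return r >= 235 && g <= 144 && b <= 142;
}
```

In the detection core the same comparison is done combinationally on the RGB value arriving from the line buffers each clock cycle.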
Implementation
The custom detection core was designed to remember the pixel coordinate of the first and last occurrence of the desired color. From these values we obtain the size and coordinate of the object then write these values to memory.
To control this, two FSMs run simultaneously, communicating with each other.
One FSM reads from the line buffers, using a counter (x_pixel_counter) for the x coordinate to keep track of the position being read within the line buffer. When a frame has been fully scanned (w_vga_timing_line_count is 639), the FSM checks whether a new color coordinate was detected; if so, it waits until the size and coordinate are ready, then writes them to memory. It skips the write if the coordinate and size are the same as those previously written, which reduces memory bandwidth usage.
The other FSM implements the detection algorithm. It takes incoming pixel coordinates and their associated RGB values. When it detects a color match (color_match) with the desired color for the first time, it stores that pixel coordinate in registers x1 and y1. The pixel coordinate is also stored into registers x2 and y2 on every subsequent color match, until the end of the frame is detected. The object origin and size are then calculated and stored in registers, and a DETECT_READY signal is raised to notify the first FSM that the calculated object size and coordinate can be written to memory.
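The bookkeeping done by the detection FSM can be modelled in a few lines of C: the first match latches (x1, y1), every later match updates (x2, y2), and at end of frame the centre is the average of the two coordinates and the size is y2 - y1, as derived above. The struct and function names are ours, not taken from the HDL.

```c
#include <stdbool.h>

/* Software model of the detection FSM's registers. */
typedef struct {
    bool first_seen;
    int x1, y1, x2, y2;
} detect_state;

/* Called for every pixel that passes the color-match test. */
void on_color_match(detect_state *s, int x, int y) {
    if (!s->first_seen) { s->first_seen = true; s->x1 = x; s->y1 = y; }
    s->x2 = x;                        /* always track the latest match */
    s->y2 = y;
}

/* Called at end of frame; mirrors the DETECT_READY computation. */
void on_frame_done(const detect_state *s, int *cx, int *cy, int *size) {
    *cx = (s->x1 + s->x2) / 2;        /* centre = average of first/last */
    *cy = (s->y1 + s->y2) / 2;
    *size = s->y2 - s->y1;            /* diameter of the circular marker */
}
```

In hardware the same updates happen one pixel per clock as the frame streams through the line buffers.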
FSMs

[Figure: Detection Control FSM — states detect_init, detect_start, first_detected, detect_complete and detect_wait. Detection begins once begin_detect == 1; the first color_match latches the start coordinate, and further matches are tracked until the frame is done. In detect_complete the outputs are computed (size_out = y2 - y1, x_out = (x2 - x1)/2, y_out = (y2 - y1)/2 in the diagram) and DETECT_READY is raised; the FSM then waits in detect_wait until ready_reset == 1 indicates the data has been written to memory.]
[Figure: Write Control FSM — states S_INIT, S_START_LINE, S_LINE_WAIT, S_WAIT_DATA, S_CHECK_DATA, S_WRITE_REQUEST, S_WRITE and S_WRITE_COMPLETE. After MPMC initialization (i_MPMC_Done_Init == 1) the FSM steps through the line buffers, incrementing x_pixel_count once per pixel. At the end of a frame, if DETECT_READY == 1 and the new size and coordinate differ from the old values, it requests the bus (w_M_request = 1) and writes { size_out[9:0], 1'b0, y_out[9:0], 1'b0, x_out[9:0] } to address 0x45000000; if the values are unchanged, or no color was detected in the frame, it jumps straight to the final write-complete stage. Debug write stages 2-4 store { 11'b0, y1[9:0], 1'b0, y2[9:0] } at 0x45000004, { 11'b0, x1[9:0], 1'b0, x2[9:0] } at 0x4500000C, and color_detected; Ready_reset is asserted at S_WRITE_COMPLETE4.]
Drawing Algorithm (code.c)
Introduction
When the detected marker is larger in diameter, the rendered image is drawn larger, giving the perception of virtual reality.

To convey the perception of an inanimate object fixed in space, the object is drawn to the left when the viewer moves right, and to the right when the viewer moves left; likewise for up and down.

The idea is to draw a square background and join lines to the corners of that square. Larger squares are then drawn around the background square to show depth. An object is drawn opposite the background square with a size offset to show depth, with a line attached from the midpoint of the object to the midpoint of the background square, which acts as the vanishing point.
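The nested-square construction above can be sketched as a simple geometric series: starting from the back square, each surrounding square is scaled up until it leaves the 640x480 screen. The growth factor, function name and limits below are illustrative assumptions, not values from code.c.

```c
/* Hypothetical sketch of the depth squares: generate the side lengths
 * of the nested squares, growing by 3/2 per step until a square would
 * exceed the 480-pixel screen height. Returns how many were produced. */
int nested_square_sizes(int back_size, int sizes[], int max) {
    int n = 0;
    for (int s = back_size; s <= 480 && n < max; s = s * 3 / 2)
        sizes[n++] = s;              /* squares grow toward the viewer */
    return n;
}
```

Each generated square would then be centred on the (shifted) vanishing point, with the corner lines connecting consecutive squares to complete the tunnel.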
Debug Implementation
We implemented two debug mechanisms. The first adds extra write stages to the Write Control FSM to write out the detected (x1, y1), (x2, y2) and color_detected values for the detected object.

The second uses the LEDs and switches in led_debug_mux_0 to display the current states of the machines and whether a color has been detected. The values shown on the LEDs are selected by a combination of the switches (e.g. DETECT_READY if the switch is 1, the Write Control FSM state if the switch is 0). This was helpful for finding out whether one of the FSMs was stuck in a certain state and for checking that the marker was properly detected.
We also modified the color replacement project to find out which color would be best to detect, filtering everything to black and the desired color to white. As a result we found a particular red color that stood out from the other colors, with RGB values R >= 235, G <= 144, B <= 142.
Appendix
A-‐ Timing diagram for the FSM
B-‐ Screen Shot of the working system