Complex Weld Seam Detection Using Computer Vision
Shortened Thesis Presentation

  • In today's world it is becoming more important to be able to communicate with machines in a way that is intuitive for the user, so that complex tasks can be achieved with minimal effort on behalf of the workforce. In a workplace that continually changes its tooling to manufacture different products, or even custom parts, the setup is often complex and requires specially skilled workers to be on site, taking them away from other tasks such as maintenance. The setup needed for a new part in robotic welding often takes far longer to complete than the actual welding job itself. If it were possible for the computer to identify where the weld seam needs to be completed from a simple gesture by a human, the time needed to complete the task would be drastically reduced.
  • These three current projects give an outline of what other researchers are investigating for the purpose of interaction between humans and machines. They give an insight into how computer vision can be used to bridge the gap between humans and machines. The purpose of the first article was to show how it is possible to control a robotic arm with the use of computer vision and an ordinary webcam. The second article explains how recognition of hand gestures can be achieved with the Microsoft Kinect. The third article is concerned with recognising certain sequences of arm movements as gestures. Over the next slides I will explain these three articles in greater detail.
  • This image shows that although the Kinect was designed for use in games, the hardware behind the sensor enables it to be used in complex computer vision applications.
  • I would just like to start out by saying that every step of this project has been a challenge. For the most part this was because of my limited computer programming skills. It seemed like a project within a project to learn not only the necessary C++ programming skills but also the different functions and data structures of OpenCV, so that I could write a program to satisfy the needs of this research.
  • Initialising and opening the Kinect’s data streams was a very time-consuming task. In effect, it took five weeks just to retrieve a depth image from the Kinect.
  • For this project the OpenCV libraries were used extensively. They give the programmer a base on which to build sophisticated vision applications. To enable the use of OpenCV, its data structures need to be fully understood.
  • Once the depth stream had been opened I was able to set a threshold on the depth image. This threshold was set so that only values between 1.2 and 1.6 meters would be shown in the image.
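A minimal OpenCV sketch of this depth threshold is shown below. It assumes the Kinect depth frame has already been converted to a 16-bit single-channel Mat holding millimetres; the names are illustrative, not from the original program.

```cpp
// Keep only depth pixels between 1.2 m and 1.6 m so that just the hand
// survives in the mask. Input is assumed to be CV_16UC1 depth in mm.
#include <opencv2/opencv.hpp>

cv::Mat thresholdDepth(const cv::Mat& depthMm)
{
    cv::Mat mask;
    cv::inRange(depthMm, cv::Scalar(1200), cv::Scalar(1600), mask);
    return mask;   // CV_8UC1: 255 where depth is in range, 0 elsewhere
}
```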
  • The best way to visualise the minimum perimeter polygon is to stretch an elastic band around your hand.
  • Once the hand has been detected, the next step is to use the colour camera of the Kinect to obtain a colour image. The colour image is converted to the HSV colour space, as it is not as susceptible to lighting variations. The weighted-average XY point from the depth image is transformed into the colour image and an HSV value is extracted.
  • A gesture is given to the Kinect, which takes an image from one of the two stereovision cameras mounted on the robot. The user then moves their hand into the image and, after five seconds, video from the on-board camera starts recording. The tip of the finger is then run along the joint or seam that is to be welded. Once the finger has reached the end of the seam, the user gives the same gesture to the Kinect, stopping the recording.

Presentation Transcript

  • COMPLEX WELD SEAM DETECTION USING COMPUTER VISION. Glenn Silvers, 16115327. A presentation submitted in partial fulfilment of the degree of Bachelor of Engineering (Mechatronic and Robotic) (Honours). Supervisor: Dr Gu Fang.
  • THE NEED FOR RESEARCH To allow for the effective and simple communication between humans and machines. Simplification of complex setup tasks. Reduce time needed to complete tasks. Improved repeatability.
  • KEY OBJECTIVES To use computer vision to define the user’s hand, enabling movement of a welding robot via gestures. The definition and therefore tracking of the hand must be real time to allow for adequate control over the robot’s motion. To define the region of interest where the weld seam lies to allow for seam detection.
  • CURRENT PROJECTS Imirok: Real-time imitative robotic arm control for home robot applications (Heng-Tze et al., 2011). Recognising Hand Gestures with Microsoft’s Kinect (Tang, 2011). Recognition of Arm Movements (Duric et al., 2002).
  • MICROSOFT KINECT http://www.itsagadget.com/2010/11/microsofts-kinect-sensor-is- now-officially-released.html
  • KINECT EXPLODED http://hackedgadgets.com/wp-content/uploads/2010/11/inside-the-microsoft-kinect_2.jpg
  • THE KINECT’S SENSORS The Kinect makes use of three different types of sensors:  A Depth Sensor  A Colour Camera  A Four Element Microphone Array Each sensor allows the programmer endless possibilities in terms of application invention.
  • PLAN OF ATTACK 1. Accessing Kinect Data 2. Detecting and Tracking the Hand 3.1 Gesture Recognition 3.1.1 Commands for Movement of the Robot 3.2 Extracting HSV Values from the Hand 3.2.1 Seam ROI Detection
  • 1. ACCESSING KINECT’S DATA To access the Kinect’s data streams it is first necessary to understand its data structures. Once the data streams have been opened, it is then a matter of converting the data into a usable format.
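For illustration, one possible way to open the streams and convert each frame into a usable format is OpenCV's OpenNI-backed capture. The original program worked against the Kinect's own API, so this is a sketch of the idea rather than the author's code.

```cpp
// Illustration only: open the Kinect's depth and colour streams through
// OpenCV's OpenNI-backed VideoCapture and convert frames for display.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture kinect(CV_CAP_OPENNI);
    if (!kinect.isOpened()) return 1;                     // sensor not found

    cv::Mat depthMm, colourBgr, depthShow;
    while (kinect.grab())                                 // one synchronised frame set
    {
        kinect.retrieve(depthMm,   CV_CAP_OPENNI_DEPTH_MAP);  // CV_16UC1, mm
        kinect.retrieve(colourBgr, CV_CAP_OPENNI_BGR_IMAGE);  // CV_8UC3

        depthMm.convertTo(depthShow, CV_8U, 255.0 / 4096.0);  // ~4 m range -> 0..255
        cv::imshow("depth",  depthShow);
        cv::imshow("colour", colourBgr);
        if (cv::waitKey(30) == 27) break;                 // Esc quits
    }
    return 0;
}
```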
  • 2. DETECTING AND TRACKING THE HAND OpenCV libraries  Created by Intel in 1999.  Initially designed to provide optimised computer vision code so that programmers would not need to start projects from scratch.  Has its own inbuilt data structures.  Over 500 functions that span many different areas of computer vision.
  • 2. DETECTING AND TRACKING THE HANDCONTINUED Detecting the hand:  Utilised the Kinect’s depth stream.  Applied a threshold to the depth values to create a depth region of interest where only the hand would be visible.
  • 2. DETECTING AND TRACKING THE HANDCONTINUED Detecting the hand:  Once the hand is within the frame it is necessary for the program to identify it as a hand.  First step is to find the contours (Suzuki and be, 1985) of the hand.
  • 2. DETECTING AND TRACKING THE HANDCONTINUED Detecting the hand:  The next step is to enclose the contour with the use of the minimum perimeter polygon algorithm (Sklansky, 1972).
  • 2. DETECTING AND TRACKING THE HANDCONTINUED Detecting the hand:  Looking at points where the contour and the minimum perimeter polygon meet shows the position of the fingers.
  • 2. DETECTING AND TRACKING THE HANDCONTINUED Detecting the hand:  To identify the fingers only a weighted average is performed.  This rejects all points lower than the thumb identifying the four fingers and thumb.
  • 3.1 GESTURE RECOGNITION The robot being used in this research is a Fanuc 100iC with a Lincoln Electric welding attachment. As this robot has six degrees of freedom there will need to be six different gestures to allow for complete control. At this point in time there have been three gestures coded and tested. The remaining gestures need to be coded into the program.
  • 3.1.1 COMMANDS FOR MOVEMENT OF THE ROBOT Once the gestures have been recognised by the program, the robot movement speed will be limited to 10%. The reason for this is simply to ensure the health and safety of the users. Whilst the operator continues to make a gesture to the Kinect the robot will continually move in accordance with that gesture. As a safety precaution, for all gestures to be recognised five fingers need to be identified.
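The two safety rules could be gated as in the hypothetical sketch below. Gesture and RobotLink are invented names purely for illustration, since communication with the robot was still to be implemented.

```cpp
// Hypothetical gating of robot commands: all five fingers must be seen
// before any command is issued, and speed is capped at 10%.
#include <cstddef>
#include <iostream>

enum Gesture { GESTURE_NONE, GESTURE_X_PLUS, GESTURE_X_MINUS /* six in total */ };

struct RobotLink {
    void move(Gesture g, double speedPct) {      // placeholder for robot comms
        std::cout << "gesture " << g << " at " << speedPct << "% speed\n";
    }
};

void issueCommand(RobotLink& robot, Gesture g, std::size_t fingersSeen)
{
    const double kSpeedCapPct = 10.0;            // health-and-safety speed cap
    if (fingersSeen != 5 || g == GESTURE_NONE)   // ignore uncertain detections
        return;
    robot.move(g, kSpeedCapPct);                 // repeats while the gesture is held
}
```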
  • 3.2 EXTRACTING HSV VALUES FROM THEHAND Obtain colour image from Kinect. The colour image is converted to the HSV colour space. An XY depth point from the hand is transformed into the colour image. A HSV value is extracted for the hand. Makes use a Kinect function to transform point positions between the depth image and colour image.
  • 3.2 EXTRACTING HSV VALUES FROM THE HAND CONTINUED The images below show the HSV value extraction.
  • 3.2.1 SEAM REGION OF INTERESTDETECTION A gesture causes an image to be taken from an on-board camera. The user then moves their hand into the image and after five seconds, video from the on-board camera starts recording. The tip of the finger is then run along the joint or seam that is to be welded. The same gesture is used to stop the recording.
  • 3.2.1 SEAM REGION OF INTERESTDETECTION The reason for the five second delay:  So that the user has enough time to move their finger to the starting point of the seam before the recording starts. This ensures that the correct starting point of the seam is identified.
  • 3.2.1 SEAM REGION OF INTERESTDETECTION CONTINUED To identify the tip of the finger:  The image from the on-board camera and the first image from the video are converted into the HSV colour space.  The images are thresholded using the HSV values extracted earlier.  The first image of the video is the subtracted from the original image without the finger in it.  All that is left is the hand and some background noise.
  • 3.2.1 SEAM REGION OF INTERESTDETECTION CONTINUED The below images show the image subtraction:
  • 3.2.1 SEAM REGION OF INTERESTDETECTION CONTINUED To ensure that the hand as a whole is identified the image is dilated to give a stronger response:
  • 3.2.1 SEAM REGION OF INTERESTDETECTION CONTINUED From this image the contours and minimum perimeter polygon of the hand are calculated:
  • 3.2.1 SEAM REGION OF INTERESTDETECTION CONTINUED Once again where the contour and minimum perimeter meet is the region of interest (white circles).
  • WHAT HAS BEEN ACHIEVED Definition of the hand and fingers. Tracking of the hand and fingers in real time. Successfully defined the region of interest of the seam to allow seam detection.
  • WHAT STILL NEEDS TO BE ACHIEVED Hand gesture coding needs to be finalised – 80% Communication with the robot. Complete an in-depth analysis of how repeatable the methods set out are with many different skin types.
  • THE MAJOR STEP FORWARD The major step forward from this research has been the way in which the hand is identified within the colour image. By defining the hand in the depth image and then extracting HSV values from the colour image, a hybrid skin detector is formed. No matter what race or skin colour the user has, this method will be able to segment their hand to allow for seam ROI definition.
  • REFERENCES
    HENG-TZE, C., ZHENG, S. & PEI, Z. 2011. Imirok: Real-time imitative robotic arm control for home robot applications. Pervasive Computing and Communications Workshops (PERCOM Workshops), 2011 IEEE International Conference on, 21-25 March 2011, 360-363.
    TANG, M. 2011. Recognising Hand Gestures with Microsoft’s Kinect. BEng (Electrical), Stanford University.
    DURIC, Z., LI, F. & WECHSLER, H. 2002. Recognition of arm movements. Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, 21-21 May 2002, 348-353.
    SUZUKI, S. & ABE, K. 1985. Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30, 32-46.
    SKLANSKY, J. 1972. Measuring Concavity on a Rectangular Mosaic. Computers, IEEE Transactions on, C-21, 1355-1364.
    http://www.itsagadget.com/2010/11/microsofts-kinect-sensor-is-now-officially-released.html. Accessed 16/4/2012.
    http://hackedgadgets.com/wp-content/uploads/2010/11/inside-the-microsoft-kinect_2.jpg. Accessed 16/4/2012.