This presentation describes a robot that draws portraits using force control and image processing. A KUKA KR6 robot is fitted with a force/torque sensor. Images are preprocessed with filters and Canny edge detection; the detected edges are labeled and sorted so that the robot can draw each one from its start point to its end point. Force control maintains a constant contact force, which allows drawing on flat or arbitrarily shaped surfaces: the robot orients the pen normal to the surface and draws edges tangentially while sensing contact forces, like a human artist.
1. 1 Mar. 19, 2015 @ SS24 2, 11:40-12:00 AM, Floor 1 - Room Lebrija, Meliá Sevilla
A Force-Controlled Portrait
Drawing Robot
IEEE International Conference on
Industrial Technology
March 17-19, 2015
Seville, Spain
Shubham Jain, Prashant Gupta, Vikash Kumar
Swami Keshvanand Institute of Technology (SKIT), India
Kamal Sharma
Bhabha Atomic Research Centre, India
4. 4
Introduction
A robot that draws the portrait of a human face like an artist.
All current portrait-drawing systems can draw only over a flat, pre-calibrated surface.
Motivation
Outcomes
Now the robot is equipped with force-sensing capability, which lets it make a portrait like a human artist.
A portrait-drawing robot is a very enjoyable recreational use of robotics and increases human-robot interaction.
5. 5
Experimental Setup
The image is captured by a WolfVision EYE-12 camera.
A uniform background is preferred so that significant features of the face can be extracted.
A KUKA KR6 ARC robot is used for this project.
Good illumination conditions must be present.
The robot is equipped with an ATI 6-axis force/torque sensor.
7. 7
Image Processing
Pre-Processing
The captured image is an RGB image.
The RGB image is converted into a grey-scale image.
The grey-scale image is denoised using a Gaussian filter, where B is the filtered image and A is the input image.
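A minimal sketch of this pre-processing chain in plain NumPy: grey-scale conversion followed by Gaussian filtering, with B the filtered output of input A as on the slide. The kernel size and sigma are illustrative assumptions, not values from the slides.

```python
import numpy as np

def rgb_to_gray(img):
    # Luminance-weighted average (ITU-R BT.601 coefficients).
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel G, normalized so its weights sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(a, size=5, sigma=1.0):
    # B = A convolved with G: slide the kernel over the padded input A.
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    ap = np.pad(a, pad, mode="edge")
    b = np.zeros_like(a, dtype=float)
    for i in range(size):
        for j in range(size):
            b += k[i, j] * ap[i:i + a.shape[0], j:j + a.shape[1]]
    return b
```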
8. 8
Image Processing
Edge Detection
Edges are detected from the preprocessed image using the Canny edge detection algorithm.
Why the Canny edge detector?
Good detection: it detects as many real edges as possible.
Good localization: marked edges should be as close as possible to the real edge.
Minimal response (to noise): an edge in the image should be marked only once, and, where possible, noise should not create false edges.
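Canny detection chains gradient estimation, non-maximum suppression, and hysteresis thresholding; in practice a library routine (e.g. OpenCV's `cv2.Canny`) would be used. As an illustrative sketch of just the gradient step, here is a Sobel-based gradient magnitude with a single threshold; the kernel and threshold value are assumptions, not taken from the slides.

```python
import numpy as np

def sobel_gradients(gray):
    # Sobel kernels approximate the horizontal and vertical derivatives.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = p[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)  # gradient magnitude

def edges_by_threshold(gray, thresh):
    # Mark pixels whose gradient magnitude exceeds the threshold as edges.
    return sobel_gradients(gray) > thresh
```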
12. 12
Image Processing
Post-Processing
Labeling of Edges:
The extracted edges are in the form of binary pixels, with edge pixels represented by binary '1'.
A label is assigned to each pixel using 8-connectivity.
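The 8-connectivity labeling step can be sketched as a flood fill over edge pixels (a library routine such as SciPy's `ndimage.label` with a 3x3 structuring element does the same job); this pure-NumPy version is illustrative:

```python
import numpy as np
from collections import deque

def label_edges(binary):
    """Assign a label to each connected group of edge pixels (value 1),
    treating all 8 surrounding pixels as neighbours."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    h, w = binary.shape
    for sr in range(h):
        for sc in range(w):
            if binary[sr, sc] and labels[sr, sc] == 0:
                current += 1              # start a new edge component
                labels[sr, sc] = current
                q = deque([(sr, sc)])
                while q:                  # breadth-first flood fill
                    r, c = q.popleft()
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            nr, nc = r + dr, c + dc
                            if (0 <= nr < h and 0 <= nc < w
                                    and binary[nr, nc] and labels[nr, nc] == 0):
                                labels[nr, nc] = current
                                q.append((nr, nc))
    return labels, current
```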
13. 13
Image Processing
Post-Processing
End Point Detection:
The end points of the edges are detected.
Removal of one pixel is necessary to break a cycle.
The robot draws each edge from its start point to its end point.
Sorting of Edges:
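End points of a one-pixel-wide edge can be found as edge pixels with exactly one 8-connected edge neighbour; a closed cycle has no such pixel, which is why one pixel must be removed to break it. A minimal sketch (the slides do not state the sorting criterion, so only end-point detection is shown):

```python
import numpy as np

def end_points(edge):
    """Return (row, col) of edge pixels that have exactly one
    8-connected edge neighbour, i.e. the ends of an open curve."""
    pts = []
    h, w = edge.shape
    for r in range(h):
        for c in range(w):
            if not edge[r, c]:
                continue
            n = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w:
                        n += bool(edge[r + dr, c + dc])
            if n == 1:
                pts.append((r, c))
    return pts
```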
14. 14
Force-Controlled Robot Motion
The need for calibration is removed by the force-sensing strategy.
The robot is mounted with a force/torque sensor at its wrist.
An indirect hybrid force/position control mechanism is used:
Apply force in the X direction.
Move in the Y direction.
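This hybrid scheme can be sketched as a control loop that servos one axis on the force error while the other axis advances along the drawing path. The surface stiffness, gain, and force setpoint below are illustrative assumptions, not values from the presentation:

```python
def contact_force(z, stiffness=1000.0):
    # Toy environment model (hypothetical values): reaction force grows
    # linearly with penetration below the surface at z = 0.
    return stiffness * max(0.0, -z)

def hybrid_step(z, y, f_desired, kf=0.0005, dy=0.5):
    """One cycle of indirect hybrid force/position control: servo the
    force-controlled axis on the force error while the
    position-controlled axis advances along the drawing path."""
    f = contact_force(z)
    z = z - kf * (f_desired - f)   # force-controlled axis ("apply force in X")
    y = y + dy                     # position-controlled axis ("move in Y")
    return z, y, f

# Start just above the surface; the loop converges to the desired
# contact force while the pen keeps advancing along the path.
z, y = 0.01, 0.0
for _ in range(100):
    z, y, f = hybrid_step(z, y, f_desired=2.0)
```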
15. 15
Force-Controlled Robot Motion
Draw the next edge point tangentially over the drawing surface.
Orient the drawing pen normal to the drawing surface.
Maintain a contact force between the drawing pen and the drawing surface.
18. 19
Conclusion
A portrait-drawing robot is a very enjoyable recreational use of robotics and increases human-robot interaction.
Humans also maintain a constant force while drawing portraits.
The portrait-drawing robot is now equipped with the ability to sense contact forces.
The robot can now draw a portrait over a non-calibrated flat surface, as well as over surfaces of any shape.
19. 20
References
Jean-Pierre Gazeau and Said Zeghloul, "The artist robot: a robot drawing like a human artist," IEEE International Conference on Industrial Technology, pp. 486-491, 19-21 March 2012.
P. Tresset and F. Leymarie, "Portrait drawing by Paul the robot," Computers and Graphics, pp. 348-363, 2013.
C. Y. Lin, L. W. Chuang, and T. T. Mac, "Human portrait generation system for robot arm drawing," Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (Singapore), pp. 1757-1762, IEEE, July 2009.
S. Calinon, J. Epiney, and A. Billard, "A humanoid robot drawing human portraits," Proc. IEEE-RAS International Conference on Humanoid Robots, pp. 161-166, 2005.
20. 21
References
M. Ruchanurucks, S. Kudoh, K. Ogawara, T. Shiratori, and K. Ikeuchi, "Humanoid robot painter: visual perception and high level planning," Proc. IEEE International Conference on Robotics & Automation, pp. 3028-3033, 2007.
A. Srikaew, M. E. Cambron, S. Northrup, R. A. Peters II, M. Wilkes, and K. Kawamura, "Humanoid drawing robot," IASTED International Conference on Robotics and Manufacturing, 1998.
J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Analysis and Machine Intelligence, 8:679-698, 1986.
Kamal Sharma, Varsha Shirwalkar, and Prabir K. Pal, "An intelligent indirect hybrid force/position controller for accurate and smooth tracking of unknown contours," Proceedings of Conference on Advances in Robotics, 2013.
21. 22
References
L. F. Baptista, S. P. Fernandes, and J. M. G. Sá da Costa, "Bi-dimensional control with force tracking control in non-rigid environments," Portuguese Control Conference, Controlo 2004, 7-9 June 2004, Universidade do Algarve, Portugal.
Kamal Sharma, thesis titled "Developing and optimizing strategies for robotic peg-in-hole insertion with a force/torque sensor," http://www.scribd.com/doc/106529483/Kamal-sThesis
Donald Hearn and M. Pauline Baker, Computer Graphics, C Version, Second Edition, Prentice Hall.