The document describes the wireless integration of tactile sensing on the hands of a humanoid robot named NAO. FlexiForce sensors were attached to the fingers without modifying the robot's existing hardware. The sensors allow the robot to differentiate objects based on properties like weight, stiffness, and texture. A printed circuit board was designed to amplify sensor signals. The robot was programmed to pick up objects, measure tactile properties, and identify objects by comparing measurements to a database. This enhances the robot's perception and ability to learn about its environment through touch.
NAO Programming using .NET and Webots, 01 - Introduction to NAO (Setiawan Hadi)
This work is part of the SAME 2013 results, a collaboration between the Computer Vision Laboratory, University of Padjadjaran, INDONESIA, and the Cognition and Interaction Laboratory, Informatics Research Center, University of Skövde, SWEDEN.
http://blogs.unpad.ac.id/setiawanhadi/?cat=8
http://informatika.unpad.ac.id/visilab/
http://www.his.se/en/Research/informatics/Interaction-Lab/Cognition--Interaction-Lab/
NAO is a 58 cm tall humanoid robot that can recognize objects and faces, understand speech, detect sounds, and move autonomously. It has cameras and sensors that allow it to sense its environment. NAO can protect itself when it falls and avoid collisions thanks to its reflexes and awareness of its own position. It is used for research and education due to its programmability and interaction capabilities.
This document provides specifications for the NAO humanoid robot platform produced by Aldebaran Robotics. It describes the robot's hardware components including its Intel Atom processor, cameras, sensors, and 22 degrees of freedom of movement provided by its joints. It also outlines its software features such as computer vision capabilities, speech recognition and synthesis, and programming interfaces. Its applications are described as including education, research, and entertainment.
The Kinect sensor is an input device by Microsoft that uses cameras and microphones to track body movements and recognize gestures and voices. It consists of an RGB camera, depth sensor using infrared light, and 4-microphone array. The depth sensor uses structured light to measure distances by projecting a pattern and analyzing its distortion. Kinect can track up to 20 joints of the human body in real-time using skeletal tracking. It has applications in 3D scanning, sign language translation, augmented reality, robot control, and virtual fitting rooms due to its low-cost depth sensing capabilities.
This document provides an overview of the Kinect sensor and Kinect for Windows SDK. It describes the Kinect sensor's capabilities including depth sensing, skeletal tracking, and speech recognition. It explains how the Kinect SDK allows accessing the sensor's data streams and provides APIs for tasks like skeletal tracking and speech recognition. The document also outlines the tools included in the SDK and provides code examples for initializing the sensor, accessing sensor data, and using speech recognition features.
This document provides an overview of Kinect motion technology. It describes how Kinect uses an infrared sensor and camera to track a user's full-body motion and interpret gestures and voice commands to control applications without any additional input devices. Applications discussed include gaming, healthcare, virtual pianos, and using Kinect to control robots and provide gesture-based interactions in augmented reality. Advantages are noted as not requiring additional input devices and allowing for voice and facial recognition, while disadvantages include sensitivity to infrared light sources and not detecting certain materials well.
Kinect is an electronic device that tracks body movement without contact. It was developed by Microsoft and released in 2010. There have been two main versions - Kinect for Xbox, which uses a time-of-flight camera for precise depth sensing, and Kinect for Windows, which has gone through several iterations to update its infrared sensor technology. Kinect uses infrared and depth sensing to detect joints, facial expressions, and other body metrics to enable full-body control of applications and games.
Enhanced Computer Vision with Microsoft Kinect Sensor: A Review (Abu Saleh Musa)
This document discusses enhanced computer vision capabilities using the Microsoft Kinect sensor. It covers preprocessing techniques, object tracking and recognition, human activity analysis, hand gesture analysis, indoor 3D mapping, and issues and future outlook. The document reviews Kinect's hardware, software tools, and performance, and techniques like depth data filtering, object detection and tracking, pose estimation, and sparse/dense point matching. It aims to provide an overview of research on using Kinect for applications in computer vision.
ECCV2010: feature learning for image classification, part 0 (zukun)
This document discusses feature learning for image classification. It notes that computer vision is challenging and that machine learning algorithms require good feature representations of input data rather than raw pixels. The key question is whether machine learning can automatically learn good feature representations rather than relying on hand-tuned features designed by experts. The document then outlines using unsupervised feature learning to find better representations of images than raw pixels by using machine learning algorithms on unlabeled image data.
This document is a seminar report submitted by Albert Cleetus for their dual degree MCA. It discusses Project Soli, a technology developed by Google's ATAP division that uses radar to enable touchless gestures. The report provides background on Google and ATAP, and describes how Project Soli's miniature radar sensor is able to detect hand motions and gestures without contact. It explains how the sensor works using radar technology, and discusses the algorithms and applications of Soli, such as controlling devices with gestures.
The document summarizes Project Tango, an experimental project from Google's Advanced Technology and Projects (ATAP) group. It discusses how Project Tango uses sensors and computer vision to allow mobile devices to understand their physical environment and motion in 3D without relying on external signals. The key capabilities of Project Tango devices include simultaneous localization and mapping, depth perception through infrared projection and cameras, and area learning to recognize previously mapped locations. Potential applications mentioned include indoor navigation, augmented reality games, and assisting emergency responders.
The document discusses the Microsoft Kinect technology. It provides an introduction and history, describing how Kinect was launched in 2010 and allows users to control an Xbox or Windows system without a controller. It covers the design and working of Kinect, including its ability to track 20 joints and skeletons at a distance of 0.5-6 meters. Differences between the Kinect for Xbox and Kinect for Windows are outlined. Applications and the future scope of Kinect are discussed.
This document discusses different types of input devices including biometric devices, game controllers, and adaptive technology. It describes how biometric devices like fingerprint scanners, retinal scanners, and facial recognition are used for identification and security. Game controllers are used to interact with video games and have applications in weaponry control. Adaptive technology allows disabled individuals to use computers through devices like Braille keyboards, screen readers, and eye tracking systems.
Project Tango is a smartphone project by Google that uses motion tracking and depth perception to create a 3D model of the environment. It has an infrared projector, cameras, and sensors that allow it to track its position and map its surroundings in 3D. The phone emits infrared light pulses and records reflections to build detailed depth maps. Developers are exploring uses like augmented reality applications and helping robots perform tasks autonomously. The technology could also be integrated with devices like Google Glass in the future.
The lecture covers four main topics: 1) artificial intelligence including machine learning, natural language processing, and robotics, 2) ubiquitous computing using personal area networks, wireless sensor networks, and RFID tags, 3) next-generation networking through IP convergence and cloud/grid computing, and 4) conclusions. The goal of the lecture is to provide an overview of these emerging technologies.
Project Tango is a tablet development kit from Google that adds advanced computer vision, depth sensing, and motion tracking capabilities to Android devices. It uses a specialized camera, depth sensor, and Tegra K1 processor to enable applications like area learning, localization, and 3D reconstruction. Developers can access pose, depth, and area description data through APIs to build augmented reality and mobile visual computing experiences.
Project Tango is a project by Google that aims to give mobile devices a 3D understanding of space using advanced sensors and computer vision. The Tango prototype is an Android device that tracks its own 3D motion and creates a 3D model of the surrounding environment in real-time. It uses motion tracking, depth perception, and area learning technologies. Potential applications include improved indoor navigation, more efficient shopping, emergency response, augmented reality gaming, and 3D modeling of objects.
The document discusses Sympro, a new wireless projector that will be the smallest in the world. Sympro can capture images of objects and project them wherever desired. When connected to a phone, it allows browsing the internet and making calls. When connected to a TV or dish, it allows watching TV. Sympro offers advantages like capturing and projecting object images, and changing the color, dimensions, and position of projections. It is also portable.
Project Soli is a Google initiative that uses radar sensors to track hand gestures. The Soli chip uses radar to capture sub-millimeter finger motions at 10,000 frames per second. It can accurately detect hand movements in 3D space in real-time without needing light or direct contact. The chip's radar technology allows for touchless gesture recognition through materials to enable new interactions with devices like phones and computers.
A smartphone from Google ATAP that creates a live 3D image of your nearby space, so that you can access that data anywhere and anytime.
For any queries contact me at: akhilanair94@gmail.com
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/basler/embedded-vision-training/videos/pages/may-2017-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Mark Hebbel, Head of New Business Development at Basler, presents the "Time of Flight Sensors: How Do I Choose Them and How Do I Integrate Them?" tutorial at the May 2017 Embedded Vision Summit.
3D digitalization of the world is becoming more important. This additional dimension of information allows more real-world perception challenges to be solved in a wide range of applications. Time-of-flight (ToF) sensors are one way to obtain depth information, and several time-of-flight sensors are available on the market.
In this talk, Hebbel examines the strengths and weaknesses of ToF sensors. He explains how to choose them based on your specifications, and where to get them. He also briefly discusses things you should watch out for when incorporating ToF sensors into your systems, along with the future of ToF technology.
Project Tango is a prototype smartphone developed by Google that uses computer vision to allow mobile devices to understand their position and orientation in 3D space. It contains specialized cameras and sensors that enable features like motion tracking, area mapping, and depth perception. The main challenges were implementing simultaneous localization and mapping (SLAM) algorithms typically requiring high-powered computers onto a mobile device. It works by using a combination of cameras, sensors, and custom computer vision chips to generate real-time 3D models of environments.
Google project tango - Giving mobile devices a human scale understanding of s... (Harsha Madusankha)
Project Tango is a Google initiative to develop smartphones that can understand 3D space. It uses sensors and computer vision techniques to create 3D maps of environments in real-time. The Project Tango smartphone has motion tracking cameras, an infrared projector, and processors to make over a quarter million 3D measurements per second. This allows the phone to create 3D models of spaces and understand its position within physical environments. Potential applications include indoor mapping, navigation for the visually impaired, augmented reality gaming, and autonomous robotics. Google is working with other companies and universities to develop this technology further.
Project Soli is a Google technology that uses radar sensors and machine learning to enable touchless gesture control. A small Soli chip contains radar that can detect subtle hand motions and movements. This allows devices to be controlled through gestures without touching screens. Google is developing a Soli developer kit to allow creators to explore uses for areas like health, art, smartwatches, and other interfaces. The technology provides an alternative to camera-based gesture systems by offering higher motion tracking speeds and the ability to sense movements through certain materials.
Project Soli is a gesture-based technology developed by Google's ATAP team. Project Soli works on the basis of radar. The human hand is one of the interactive mechanisms for dealing with any machine...
These slides should help you get a good idea of "Project Soli".
By,
BHAVIN.B
Bhavinbhadran7u@gmail.com
Robots Need Game Designers (C. Boudier / N. Rigaud) - Nicolas Rigaud
This document discusses Aldebaran, a company that creates humanoid robots, and their presence at a game conference to discuss designing robot interactions. Aldebaran was founded in 2005, has sold over 15,000 robots, and employs 400 people across 4 offices. They are joined by former game industry executives to discuss how robots have evolved from research tools in the 1960s to personal home assistants today, and how their development requires teams of designers, developers, and specialists in user experience, animation, linguistics, sound, and testing.
Presentation at the Human Talks Paris meetup on October 8, 2013.
NAO is a humanoid robot designed by Aldebaran Robotics in Paris. In this introduction, I will cover the following points: What is inside the robot? How do you program it? How do you become a NAO developer?
This is a two-to-three-hour workshop on programming the NAO robot, for kids aged 12 and above.
To use it, you'll need a NAO robot running NAOqi 2.1, Choregraphe 2.1, and an additional library of packaged movements (see http://goo.gl/7qm5fv).
This workshop is based on the one created by Daniel De Luca for Devoxx4Kids (www.devoxx4kids.org).
This document compares platforms and services, providing statistics on the number of users of different social platforms. It discusses key aspects of platforms like common interests, closed networks, and sense of belonging. It also outlines NHN's social app development platform, providing APIs, samples, and support for developers. The document advertises an upcoming social app developer conference to discuss platforms.
The document discusses various sensors used in robotics including motion sensors like accelerometers and gyroscopes, force/pressure sensors, position sensors, temperature and humidity sensors, light sensors, and novel sensors like the Kinect. It provides examples of applications for these sensors such as self-balancing robots, virtual reality, temperature control systems, and magnetic levitation trains. Diagrams and specifications are given for many common sensors.
This document provides an overview of the humanoid robot ASIMO created by Honda. It discusses the history and purpose of humanoid robots. ASIMO was designed to be helpful, harmless, and honest. It can recognize faces, gestures, sounds and its environment. Though not as fast or efficient as humans, ASIMO demonstrates human-like abilities such as walking, grasping objects, responding to voices, and interacting with people.
Devoxx4Kids workshop - Programming a humanoid robot - english version (Nicolas Rigaud)
This presentation is the English translation of the Devoxx4Kids workshop that was created by Daniel De Luca (@danieldeluca). The idea is to let kids (and parents) understand how easy it is to program NAO without even a single line of code, thanks to Aldebaran's GUI Choregraphe.
You can find more informations about Devoxx4Kids at http://www.devoxx.com/display/4KIDS
Thanks Stephen Chin (@steveonjava) for translating the first pages ;)
English version of the Devoxx4Kids workshop deck for teaching programming using the NAO humanoid robot. (Credit to Daniel De Luca for content creation and to Nicolas Rigaud for translation.)
PowerPoint Search Engine has a collection of slides related to specific topics. Type the required keyword into the search box and it fetches the related results.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
The proposal is for developing integrated autonomous robot software called CACI that is cross-platform and cross-task. It aims to develop capabilities for perception, behaviors, 3D mapping, and human-robot interaction. The software architecture includes layers for planning, perception-behavior, and control. It will integrate technologies for perception, learning, mapping, and interaction to allow robots to operate semi-autonomously in indoor and outdoor environments and interact with humans and other robots. The expected impact is robots that can navigate using 3D maps, detect and track humans/objects, learn from human interactions, and operate with multimodal perception integrated with behaviors.
The proposal is for a software system called CACI that would enable integrated, autonomous robot capabilities across different robot platforms and tasks. The software aims to develop perceptual abilities using vision, speech, touch and maps, as well as behaviors for navigation, manipulation and human interaction. It proposes a layered architecture with planning, perception/behavior and control layers. The software would allow robots to operate semi-autonomously in indoor and outdoor environments with reduced human oversight through cross-platform integration of techniques like developmental learning and 3D mapping using lasers and cameras. The proposal requests $1.6M for year 1 and $1.7M for year 2 for a total of $3.3M.
The proposal is for developing cross-platform and cross-task integrated autonomous robot software. It proposes integrating various machine learning, perception, and developmental techniques to allow robots to operate semi-autonomously in indoor and outdoor environments and interact with humans and other robots. The software would provide a unified architecture and "plug-and-play" capability across different robot platforms. Extensive testing would be conducted on various indoor robot platforms at Michigan State University to validate the approach and overcome barriers to true autonomous robot perception and interaction.
IRJET - Arduino based Voice Controlled Robot (IRJET Journal)
Kottadi Kannan, J. Selvakumar, "Arduino based Voice Controlled Robot", International Research Journal of Engineering and Technology (IRJET), Vol. 2, Issue 01, March 2015. p-ISSN: 2395-0056, e-ISSN: 2395-0072. www.irjet.net
Abstract
Voice Controlled Robot (VCR) is a mobile robot whose motions can be controlled by the user by giving specific voice commands. The speech is received by a microphone and processed by the voice module. When a command for the robot is recognized, the voice module sends a command message to the robot's microcontroller. The microcontroller analyzes the message and takes appropriate action. The objective is to design a walking robot that is actuated by servo motors. When commands are given on the transmitter side, the EasyVR module takes the voice commands and converts them into digital signals. These digital signals are then transmitted via a ZigBee module to the robot. On the receiver side, another ZigBee module receives the commands from the transmitter side and the robot performs the respective operations. The hardware development board used here is the ATmega 2560 development board, which provides the 15 PWM channels needed to drive the servo motors. In addition, a camera mounted on the head of the robot gives live transmission and recording of the area. The speech-recognition circuit functions independently of the robot's main intelligence (central processing unit, CPU). This is a good thing because word recognition does not take any of the robot's main CPU processing power; the CPU must merely poll the speech circuit's recognition lines occasionally to check whether a command has been issued to the robot. The software is written in the Arduino IDE using Embedded C. The hardware is implemented and software porting is done.
Implementation of a humanoid robot using the concept of a synthetic brain (eSAT Journals)
Abstract: This paper elaborates a model of a humanoid robot that interacts with human beings and performs various operations according to the commands they give. A humanoid robot with a synthetic brain is able to perform interaction, communication, object detection, information acquisition about any object, response to voice commands, and logical chatting with human beings. Object detection is performed using image processing (the Haar technique). To make the system intelligent, so that it gives proper responses, questions and answers whenever it interacts, communicates, or chats with humans, it integrates artificial intelligence, DFA/NFA automata, and Prolog language concepts for answering logically over complex and relevant strings or data. Keywords: Humanoid Robotics, Artificial Intelligence, Image Processing, Audio Filtering.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
A Research Paper on HUMAN MACHINE CONVERSATION USING CHATBOT (IRJET Journal)
The document describes a research paper on developing a human-machine conversation chatbot. It discusses using artificial intelligence, natural language processing, and machine learning techniques to create an intelligent tutoring chatbot. The proposed methodology involves two stages: knowledge modeling and representation, and conversation flow design. It defines the chatbot architecture and training process that uses Python libraries, intent data files, trained models, and a GUI interface. The goal is to demonstrate building a basic social media and command line chatbot to showcase chatbot and AI concepts.
A robot may need to use a tool to solve a complex problem. Currently, tool use must be pre-programmed by a human. However, this is a difficult task and can be helped if the robot is able to learn how to use a tool by itself. Most of the work in tool use learning by a robot is done using a feature-based representation. Despite many successful results, this representation is limited in the types of tools and tasks that can be handled. Furthermore, the complex relationship between a tool and other world objects cannot be captured easily. Relational learning methods have been proposed to overcome these weaknesses [1, 2]. However, they have only been evaluated in a sensor-less simulation to avoid the complexities and uncertainties of the real world. We present a real world implementation of a relational tool use learning system for a robot. In our experiment, a robot requires around ten examples to learn to use a hook-like tool to pull a cube from a narrow tube.
GILS: Automatic Security and Gas Detection Robot (IRJET Journal)
This document describes a proposed automatic security and gas detection robot called GILS. The robot would be controlled remotely using an android application and sensors. It would have the ability to detect hazardous gases using gas sensors and provide live video feeds through a mobile device camera to alert humans to dangers. The system architecture involves sensors communicating with a microcontroller on the robot, which is connected via Bluetooth to a mobile device. This would allow the mobile device to capture data and send it over the internet to a server. The robot is intended to provide security monitoring and hazard detection in dangerous areas that humans cannot access.
Wireless and uninstrumented communication by gestures for deaf and mute based... (IOSR Journals)
Abstract: Although technology is advancing as per Moore's law, not much high-tech attention has been paid to deaf and mute individuals. The deaf and mute have to communicate through sign language even for small things, and many people do not understand this language. Nowadays, gesture is becoming an increasingly popular means of interacting with computers. This paper sheds light on a proposed idea relying on a recent technology named Wi-See, which was developed in Washington, US. This technology uses conventional Wi-Fi signals for home automation through gesture recognition. Building on this technology, the modified application idea targets the deaf and mute, especially those who cannot speak but know the English language for communication. Since wireless signals do not require line of sight and can traverse walls, the proposed idea can be very useful for speechless people to express their views without instrumenting the human body with sensing devices. The whole idea is based on the Doppler shift in the frequency of Wi-Fi signals. Instead of controlling home appliances as Wi-See does, this idea extends to producing speech or words through installed speakers. Each successive pattern of an English letter generated by the Doppler shift of gestures in the air can be recorded and matched against a predefined pattern, which, when processed, is output through a speaker as a combined-letter word, inspired by English digital dictionaries with prediction and correction algorithms. Keywords: Wi-Fi, Wi-See, Doppler shift, Gestures, Communication
IRJET - Can Robots are Changing the World: A Review (IRJET Journal)
This document summarizes a research paper on robots and how they are changing the world. It discusses how robots are rapidly developing from industrial uses to serving as companions. It proposes a hierarchical probabilistic representation of space that would allow robots to understand their environments in a way that is compatible with human users. This conceptual representation of space would be useful for robots to become familiar with their surroundings and interact in a semantically and socially intelligent manner. The paper reviews robot capabilities, sensors, effectors, software architectures, applications, and debates around machine intelligence and thinking.
Development of Pick and Place Robot for Industrial Applications (IRJET Journal)
This document describes the development of a pick and place robot for industrial applications. It discusses designing a low-cost robot platform to perform pick and place operations using mechanical devices like a gripper and robotic arm. The robot is designed to fill liquid in bottles according to the volume occupied and then perform pick and place operations. Wireless communication is established between the mobile robot and remote base station. Serial communication is also set up between the base station and GUI application to allow wireless command and control of the robot. The robot is programmed using microcontrollers and tested to successfully achieve wireless and serial communication control.
Human-Machine Interface For Presentation Robot (Angela Williams)
This document summarizes the human-machine interface for a presentation robot. It discusses the requirements for the highest software layer, including redundancy, robustness, flexibility, and adaptability. It describes the available inputs like a touchscreen, camera, and sensors, as well as outputs like a screen, speakers, and printer. The interface is composed of independent blocks that are sequentially activated, with each block accessing robot inputs and outputs. Key blocks include motion, catching a user's attention, selecting options, and presenting maps, videos or games. The interface was tested on various users and indicators like interaction time and successful interactions were measured. The interface worked well across different types of users.
Eye(I) Still Know! – An App for the Blind Built using Web and AI (Dr. Amarjeet Singh)
This paper proposes eye(I) Still Know!, a voice-control solution for visually impaired people. The main purpose: even though the blind cannot see, they can still know where to go and what to do! Nearly 60% of the world's total blind population lives in India. In a time when no one likes to rely on anyone else, this is a small effort to make the blind independent individuals. This can be achieved using wireless communication, voice recognition and image scanning. Using object identification, the application informs the user in advance about barriers in the path.
The software uses the camera of the device and scans all obstacles along with their corresponding distances from the user. This is followed by audio instructions through the audio output of the device, which efficiently directs the user along his/her way.
This document describes the development of a smart robotic assistant that operates using both voice and gesture commands from a remote Android device. The robotic assistant has a mechanical arm that can pick up and place objects. It is controlled by an Arduino microcontroller and can perform operations like starting, stopping, moving in different directions, and picking up and placing objects. This robotic assistant has applications for helping elderly people and those with disabilities by performing tasks remotely using voice or gesture commands from a smart device.
IRJET - A Locomotive Voice-Based Assistant using Raspberry Pi (IRJET Journal)
This document describes the design of a voice-controlled robot assistant using a Raspberry Pi. It discusses implementing voice recognition and face detection capabilities to allow the robot to understand commands and detect objects. The robot is intended to assist users by moving around and performing tasks based on voice commands and visual recognition.
This paper describes the hardware system of a socially interactive robot called Quori. Quori consists of an upper humanoid body with a rear projection and two gesturing arms mounted on a holonomic mobile base. The base allows for omnidirectional movement up to 0.8 m/s. Sensors like a laser range finder and cameras are used to sense the robot's internal state and environment. The design aims to create a low-cost, modular robot platform with maximum functionality for social interaction research.
ENS4152 Project Development Proposal a.docx (karlhennesey)
ENS4152 Project Development Proposal and Risk Assessment Report
Baxter Research Robot: Solving a Rubik's Cube
Chris Dawes, Student #10282558
30 Mar 2015
Supervisor: Dr Alexander Rassau
Abstract
Robotics is currently used to perform many tasks, but many of these are simple repetitions of a predefined method. By combining AI with robotics we can greatly increase the applications of robotics. An algorithm that combines the vision and servo systems of a Baxter Research Robot with a solving solution for a Rubik's cube will demonstrate that the use of even simple AI with robotics allows complex tasks to be completed. Further integration of object recognition will allow the task to be completed in a dynamic environment, and further increase the areas robots are capable of working within.
1. Introduction
1.1. Motivation
The Baxter Research Robot by Rethink Robotics is a dual arm robot, with seven degrees of freedom per arm, released in 2012. Developed to be affordable, flexible in its purpose, and above all else safe, Baxter includes three cameras, one on each wrist and the other on its head, and a screen for displaying information relating to Baxter's current task. The robot is designed to be a versatile research platform while containing the same hardware as its industry counterpart, allowing research to translate into industrial applications (Rethink Robotics, 2015).

In general robotics, artificial intelligence (AI) has been developed separately from robotics, but is now starting to become integrated. Unfortunately, current AI is fragmented, as each application focuses on one area, as opposed to being a true AI that thinks like a human (Bogue, 2014). Current usable AI is more akin to 'smart' robotics, where decisions are made and problems solved by the robot in very specific applications. In industry, robots are expanding into areas that require more flexibility, allowing them to fill many more positions in increasingly complex areas (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). Mobile robots are even becoming more commonplace, allowing for dynamic and spread-out workspaces. These advances are all due to adding sensing and analysis to robots, allowing them to react to dynamic environments.

To further robotics in industry, multi-robot work cells have been designed that combine several robots working on the same part while cooperatively performing either one task, such as welding and the required handling, or multiple tasks at the same time (Hajduk, Jenčík, Jezný, & Vargovčík, 2013). The number of activities these work cells can perform increases dramatically, as the complexity of the task or tasks can be higher while the robots don't need to be capable of performing the whole task individually.

For performing more human tasks, dual arm robots have begun to emerge (Hajduk, Jenčík, Jezný, & Vargovčík, 2013 ...
... available humanoid robots or adding other external sensors to the NAO robot.
A. Hardware Setup
In order to avoid violation of warranty, we have the constraint of not replacing or modifying any existing hardware on the NAO robot. The hardware components we used in the system, besides the NAO robot itself, are listed in Table I, along with simple descriptions of their functionalities, their locations, and mounting methods. Fig. 2 shows how the hardware components are physically mounted on the NAO robot.
TABLE I. HARDWARE COMPONENTS

Component Name               | Functionality                                                         | Location         | Mounting Method
FlexiForce sensors           | Tactile sensing                                                       | Fingers          | Double sticky tape
Printed Circuit Board (PCB)  | Auxiliary circuit for sensors                                         | Upper arm        | Velcro strap & tape
RF Link transmitter          | Transmits sensor data                                                 | Back             | Enclosure box
Arduino Mega 2560 w. battery | Interface between PCB and RF transmitter                              | Back             | Enclosure box
RF Link receiver             | Receives sensor data                                                  | Not on the robot | N/A
Arduino Uno                  | Interface between RF receiver and computer                            | Not on the robot | N/A
Computer                     | Processes sensor data; delivers behavioral commands to the NAO robot  | Not on the robot | N/A
Figure 2. Hardware mounted on the NAO robot.
The configuration of the five FlexiForce sensors on the NAO robot's three-fingered right hand is shown in Fig. 3. On both the left and right fingers, one sensor is mounted at the tip and one at the center. The fifth one is at the tip of the thumb. The same sensor labeling as shown in Fig. 3 will be used later in Section IV.
Figure 3. Configuration of the five sensors on NAO's hand.
The data flow in the system is illustrated in Fig. 4. The tactile sensor measurements are sent to the computer wirelessly through the RF module and microcontrollers. The computer analyzes and logs the sensor measurements, and sends speech and behavior commands to the CPU on the NAO robot, also wirelessly. The connection between the NAO robot and the tactile sensors is only a physical attachment, without data flow in between.
Figure 4. Data flow chart.
B. Software Components
The software we developed for this project contains several components, as listed in Table II.
Choregraphe allows easy capture of the joint angles for the starting and ending positions of each motion we later implemented on the NAO robot for the integration of touch sensing. Fig. 5 shows how the arm angles were captured in Choregraphe.
The cross-platform Arduino IDE is used to program the microcontrollers that interface with the RF transmitter and receiver, which communicate using 434 MHz radio frequency signals. Given the restriction of a maximum 4800 bits per second (bps) data rate of the RF module, data packets containing the measurements of all five sensors are currently transmitted at a frequency of 25 Hz.
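For illustration, the computer-side capture of these packets could look like the following minimal Python sketch. This is not the paper's code: the serial port, baud rate and comma-separated packet format are assumptions.

# Hypothetical sketch: read five-sensor packets forwarded by the Arduino Uno.
import serial  # pyserial

PORT = "/dev/ttyACM0"  # placeholder port of the Arduino Uno at the receiver
BAUD = 9600            # PC-side link; the RF link itself is capped at 4800 bps

# At 25 packets/s over a 4800 bps radio link, each packet can carry at most
# 4800 / 25 = 192 bits (24 bytes) for the five readings plus framing.
with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        raw = link.readline().decode("ascii", errors="ignore").strip()
        fields = raw.split(",")
        if len(fields) != 5:
            continue  # drop incomplete or corrupted RF packets
        try:
            volts = [int(f) * 5.0 / 1023 for f in fields]  # 10-bit ADC counts -> volts
        except ValueError:
            continue
        print(volts)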
TABLE II. SOFTWARE COMPONENTS

Component Description         | Development Language | Development Stage | Location of Execution
Motion recording              | Choregraphe          | Preparation       | Computer
Wireless communication        | C                    | Integration       | Microcontrollers
Main application              | C#                   | Integration       | Computer
Speech and behavioral modules | Python               | Integration       | The NAO robot
Display of measurements       | C#                   | Testing           | Computer
Figure 5. Arm angles obtained in Choregraphe.
The main application involves a learning process for the NAO robot based on tactile information including weight, stiffness and roughness. Just like a toddler learning about objects in his/her surroundings, the NAO robot goes through the following steps during its learning process:

Step 1. Pick up an object and learn how heavy/light, how hard/soft, and how rough/smooth it is, with measurements from the tactile sensors and associated actions.

Step 2. Characteristics extracted from the measurements are compared with the corresponding features of objects in the database. Decisions are made as follows:
- If the actual features of weight, stiffness and roughness are close to the features of the current object in the database, in other words, the absolute values of the differences are below predefined thresholds, say the name of the current object.
- If the actual features do not match the features of the current object, and the current object is not the last one in the database, move on to the next object.
- If the actual features do not match the features of the current object, and the current object is the last one in the database, go to Step 3.

Step 3. Ask the name of the object and add it to the database.
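As a concrete illustration of the Step 2 matching logic, the following minimal Python sketch walks the database and applies per-feature thresholds. It is not the paper's code: the feature names, threshold values and database layout are assumptions.

# Hypothetical sketch of the Step 2 threshold matching; values are placeholders.
THRESHOLDS = {"weight": 0.05, "stiffness": 0.10, "roughness": 0.10}  # volts

def identify(measured, database):
    """measured: dict of features; database: list of (name, features) pairs."""
    for name, stored in database:
        if all(abs(measured[f] - stored[f]) < THRESHOLDS[f] for f in THRESHOLDS):
            return name   # all differences below threshold: object recognized
    return None           # no match in the whole database: fall through to Step 3

database = [("golf ball", {"weight": 0.90, "stiffness": 0.80, "roughness": 0.30})]
measured = {"weight": 0.88, "stiffness": 0.78, "roughness": 0.32}
name = identify(measured, database)
if name is None:
    # Step 3: the real system asks for the object's name and stores the features
    database.append(("new object", measured))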
The software flow of the main application is shown in Fig. 6. Although the main application was developed in C#, in order to use the NAO SDK to send speech and behavioral commands to the NAO robot, a Python script was written for each action and invoked from the C# program.
Figure 6. The Unified Modeling Language (UML) diagram of software flow in the C# application.
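For reference, a per-action Python script of the kind invoked from C# might look like the sketch below. The robot's address and the behavior name are placeholders; the paper does not list the exact NAOqi modules used beyond speech and behavior commands.

# Hypothetical per-action script using the NAO Python SDK.
from naoqi import ALProxy

NAO_IP = "192.168.1.10"  # placeholder address of the robot

tts = ALProxy("ALTextToSpeech", NAO_IP, 9559)
tts.say("Give me the ball")  # spoken prompt used in the weight experiment

behavior = ALProxy("ALBehaviorManager", NAO_IP, 9559)
behavior.runBehavior("open_right_hand")  # hypothetical installed behavior name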
A graphical user interface programmed in C# is embedded in the main application to display the weight, stiffness and roughness data during testing and demonstrations.
III. SENSOR CALIBRATION AND TESTING
A. Design of Printed Circuit Board (PCB)
The FlexiForce sensor is an ultra-thin and flexible printed circuit that uses a resistive-based technology. The application of a force to the active sensing area of the sensor results in a change in the resistance of the sensing element in inverse proportion to the force applied. A modified version of the recommended amplifier circuit in the user manual [8] is shown in Fig. 7.
Figure 7. Amplifier circuit for FlexiForce sensors [8].
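For orientation, the transfer function of this inverting amplifier topology (a standard op-amp relation, not stated explicitly in the paper) is VOUT = -VT × (RF / RS), where RS is the sensor resistance. Because RS decreases in inverse proportion to the applied force, VOUT is proportional to force, and the negative drive voltage VT yields a positive output.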
The feedback resistance RF as well as the drive voltage VT can be used to adjust the sensitivity of the sensor. A feedback resistance value of 100 kΩ and a drive voltage of -1.5 V were selected in our design. A two-step process was implemented to supply the -1.5 V to the sensors. First, a voltage regulator consisting of two 1N914 diodes connected in series and a 240 Ω resistor provides +1.5 V from a 5 V supply on the microcontroller. Next, an ADM660 switched-capacitor voltage converter was used to convert it to -1.5 V. Considering the constraint on the size of the PCB imposed by mounting it on the robot's upper arm, we chose a quad op-amp chip (MCP6004) and a dual op-amp chip (MCP6002) to provide all the op-amps needed in the circuit. The layout of the custom PCB is shown in Fig. 8.
Figure 8. PCB layout.
B. Sensor Calibration
Two different methods were used to calibrate the sensor. First, a flat load was applied to the sensor, with its weight changed by adding additional mass on top of it. Second, a plastic ball, or spherical load, was used as the test object. The weight was likewise changed by adding additional mass on top of the ball. The sensor was calibrated over a range of 0 to approximately 2 N. The results are shown in Fig. 9. There is a linear relationship for a flat load between the voltage output from the sensor and the force applied to the sensor. The linear equation fits the data with an R² value (a statistical metric that indicates how closely the curve fits the data points) of 0.9875. The values obtained for the spherical loads are higher than the values obtained for the flat loads, likely because of the smaller contact area for the spherical loads. The results indicate that at higher values of weight, the spherical load more closely approximates a flat load because of the more even distribution of the load due to compression of the spherical object.
Figure 9. Experimental results of FlexiForce sensor measurements (square:
flat loads, circle: spherical loads).
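The flat-load fit and its R² value can be reproduced with a short least-squares computation. The sketch below uses Python/NumPy with illustrative placeholder readings, not the measured data of Fig. 9.

import numpy as np

# Illustrative placeholder calibration pairs, not the measured data.
force_n   = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # applied force (N)
voltage_v = np.array([0.02, 0.55, 1.08, 1.57, 2.11])  # sensor output (V)

slope, intercept = np.polyfit(force_n, voltage_v, 1)  # linear fit V = a*F + b
residuals = voltage_v - (slope * force_n + intercept)
ss_res = np.sum(residuals ** 2)                       # residual sum of squares
ss_tot = np.sum((voltage_v - voltage_v.mean()) ** 2)  # total sum of squares
r_squared = 1.0 - ss_res / ss_tot                     # coefficient of determination
print("V = %.3f*F + %.3f, R^2 = %.4f" % (slope, intercept, r_squared))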
Next, the FlexiForce sensors were attached to the fingers
of the NAO robot and calibration was performed with the
following configuration: the robot's right arm and hand were
kept still, and a plastic ball was placed in its hand in a fixed
position. Measurements were taken from two sensors
mounted at the centers of the left and right fingers; the
sensors were then replaced by two other sensors, and so on.
The weight of the plastic ball was adjusted by adding water
through a hole on its top. The voltage for a particular mass
was obtained by averaging the voltages measured by the
sensors. Fig. 10 shows the ranges and average voltages of the
five sensors versus different weight inputs.
Figure 10. Calibration results with sensors on the NAO robot
(bar: range of voltages, square: average of voltages).
IV. EXPERIMENTAL RESULTS
In this preliminary study, three objects were chosen for
our experiments: a golf ball, a ping pong ball and a cotton
ball, which are of similar size, shape and color but differ in
weight, stiffness and roughness.
A. Comparison of Weight
The NAO robot was programmed to reach out with its
right forearm and open its right hand. At the same time, it
asked “Give me the ball.” The golf ball was then placed in its
right hand. The measurements from the two sensors mounted
at the center of the fingers are shown in Fig. 11. The sensors
at the fingertips and on the thumb were not pressed, due to
the size and shape of the objects in our experiments, so their
readings were discarded. A period of five seconds was
allotted to complete the test, and the voltage samples from
each of the two center sensors throughout the testing period
were averaged and logged in the database. A progress bar on
the display shows how much of the five-second period has
elapsed. As can be observed from Fig. 11, the center of
gravity was closer to one of the two fingers during that
particular test. Therefore, the average of the sensor #3 and
sensor #4 voltages is recorded as the indicator of object
weight.
Figure 11. Display of weight test result for a golf ball.
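A minimal sketch of this five-second averaging step follows, assuming a hypothetical read_sensors() helper standing in for the RF receiver read; the returned values are illustrative.

import time

SAMPLE_PERIOD = 1.0 / 25.0  # the RF module limits the data rate to 25 Hz
DURATION = 5.0              # five-second test window

def read_sensors():
    # Hypothetical stand-in for the RF receiver; returns {sensor_id: volts}.
    return {3: 0.08, 4: 0.24}  # illustrative values

samples = {3: [], 4: []}    # sensors #3 and #4 at the centers of the fingers
start = time.time()
while time.time() - start < DURATION:
    volts = read_sensors()
    samples[3].append(volts[3])
    samples[4].append(volts[4])
    time.sleep(SAMPLE_PERIOD)

# The average of the two center-finger sensors is the weight indicator.
weight_indicator = (sum(samples[3]) / len(samples[3]) +
                    sum(samples[4]) / len(samples[4])) / 2.0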
This process was repeated for a ping pong ball and a
cotton ball. The average voltages over a period of five
seconds for sensors #3 and #4 are shown in Table III for all
three objects. It can easily be observed from the sensor data
that the weight of the golf ball is much higher than that of
either the ping pong ball or the cotton ball.
TABLE III. WEIGHT MEASUREMENTS

Object          Sensor #3 Average Voltage (V)  Sensor #4 Average Voltage (V)
Golf Ball       0.08                           0.24
Ping Pong Ball  0.00                           0.00
Cotton Ball     0.08                           0.07
B. Comparison of Stiffness
Stiffness is an important property of an object that can be
obtained using the sense of touch. Stiffness is defined as the
extent to which an object resists deformation in response to
an applied force. Ideally, the stiffness of the object being
identified by the NAO robot would be calculated as
k = F / x,
where F is the force applied to the object and x is the amount
of deformation of the object surface at the contact point.
Unfortunately, the positions of the NAO robot's fingertips
cannot be obtained programmatically, so measuring object
deformation from the penetration of a fingertip into the
object is difficult, if not impossible, to implement. Therefore,
the sensor measurements themselves were used to
characterize stiffness.
The NAO robot's right hand remained open after the
weight test. When the stiffness test started, immediately after
the weight data were logged, the robot was commanded to
secure the ball from above with its left hand and then close
its right hand slowly. The assistance of the left hand was
necessary, especially for the ping pong ball, which could
easily have slipped out of the NAO robot's hand. The display
of sensor voltage readings and corresponding forces for a
ping pong ball is shown in Fig. 12.
Figure 12. Display of stiffness test result for a ping pong ball.
As can be observed from Fig. 12, the sensor on the thumb
(labelled sensor #5) and the one at the center of the left
finger (labelled sensor #4) showed high voltages, while the
others read zero. This is explained by how the NAO robot's
right hand grasped the ping pong ball: the ball was held
mainly between the thumb and the left finger, while the right
finger was merely resting on the surface of the ball. Because
of the size of the ball relative to the hand, the fingertips sat
slightly above the ball.
Sensor voltages from the stiffness test are shown in Table IV
for all three objects. Although both the ping pong ball and the
cotton ball are very light, as noted in Section IV-A, the
sensor voltages in Table IV clearly show their difference in
stiffness, which was later used for object identification. The
sensor voltages for the golf ball were even higher than those
for the ping pong ball, which met our expectation.
TABLE IV. STIFFNESS MEASUREMENTS

Object          Sensor Voltages (V)
                #1    #2    #3    #4    #5
Golf Ball       0.03  0.61  0.01  2.05  4.99
Ping Pong Ball  0.00  0.00  0.00  2.12  3.89
Cotton Ball     0.02  0.02  0.02  0.02  0.02
In our experiment with only three objects (a golf ball, a
ping pong ball and a cotton ball), the average of all sensor
measurements is sufficient to serve as an indicator of
stiffness. For objects whose stiffness differs less, however,
the sensor measurements should be analyzed more
selectively.
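For the three objects used here, the stiffness indicator therefore reduces to the mean of the five sensor voltages during the grasp; a short worked example in Python, using the ping pong ball row of Table IV:

def stiffness_indicator(voltages):
    # voltages: the five sensor readings (V) while the right hand is closed
    return sum(voltages) / len(voltages)

# Ping pong ball, Table IV: (0.0 + 0.0 + 0.0 + 2.12 + 3.89) / 5 = 1.202 V
print(stiffness_indicator([0.0, 0.0, 0.0, 2.12, 3.89]))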
C. Comparison of Roughness
Because currently only the right hand of the NAO robot
is equipped with the FlexiForce sensors, we programmed the
robot to transfer the ball to its left hand immediately after the
stiffness data were logged in the database. Once the left hand
grasped the ball firmly, the right hand began stroking the
surface of the ball with the tip of one finger.
Interpreting tactile sensor measurements for the
roughness of an object's surface is more challenging than for
weight and stiffness. Research on surface texture
discrimination by robots has been advanced both by the
development of tactile sensing arrays and by algorithms for
temporal or spatiotemporal analysis of the sensor data
[9][10]. A Fast Fourier Transform (FFT) was performed on
the voltage data collected from the sensor mounted on the tip
of the finger that stroked the object surface. Comparing the
spectrum of the golf ball data with that of the ping pong ball
data in Fig. 13, we noticed a significant difference at the high
end of the frequency range. The sampling rate is limited to
25 Hz by the data rate of the RF module. The magnitude at
frequencies close to 12.5 Hz (half the sampling frequency) is
noticeably higher for the golf ball, which has a rough surface,
than for the ping pong ball, which has a smooth surface.
Therefore, the magnitude of the FFT at the highest
frequency, 12.5 Hz, was logged in the database for each
object as the indicator of roughness.
Figure 13. Roughness test results: (a) golf ball; (b) ping pong ball.
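A minimal sketch of this roughness indicator, assuming the fingertip-sensor trace is available as a NumPy array sampled at 25 Hz:

import numpy as np

FS = 25.0  # sampling rate (Hz), limited by the RF module

def roughness_indicator(trace):
    # trace: fingertip sensor voltages recorded while stroking the surface
    spectrum = np.abs(np.fft.rfft(trace))            # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / FS)  # bin frequencies, 0 to 12.5 Hz
    return spectrum[np.argmin(np.abs(freqs - FS / 2.0))]  # magnitude nearest 12.5 Hz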
D. Object Identification
The NAO robot asked for the name of the object after the
weight, stiffness and roughness data were all logged in the
data base. The learning process was repeated for all three
objects. The main application continued with object
identification following the learning process. The NAO
robot was programmed to ask the user to give it a ball. After
a ball was randomly selected and placed in its right hand, it
was able to identify whether it was a golf ball, a ping pong
ball, or a cotton ball.
V. CONCLUSIONS AND FUTURE RESEARCH
A tactile sensing system with five FlexiForce sensors and
wireless communication was successfully integrated into the
NAO humanoid robot. Active exploration behaviors were
programmed on the NAO robot, and software interpretation
of the sensor voltages was implemented for weight, stiffness
and roughness respectively. The NAO robot was able to
learn these properties of a golf ball, a ping pong ball and a
cotton ball, and to identify the objects based on their
differences.
Future research includes investigation of more advanced
tactile sensing technologies, such as MEMS tactile sensor
arrays; improved hardware integration, for example using
wearable LilyPad microcontrollers; integration of tactile
sensing with the NAO robot's existing visual object
recognition capability; and, last but not least, evaluation of
the accuracy of tactile-sensing-based object identification on
a large variety of objects with different weight, stiffness and
roughness.
REFERENCES
[1] R. D. Howe, “Tactile sensing and control of robotic manipulation,”
Adv. Robot., vol. 8, no. 3, pp. 245-261, 1994.
[2] J. S. Son, “Integration of Tactile Sensing and Robot Hand Control,”
Ph.D. dissertation, School of Engineering and Applied Sciences,
Harvard Univ., Cambridge, MA, 1996.
[3] K. Suwanratchatamanee, M. Matsumoto, and S. Hashimoto, “Human-
machine interaction through object using robot arm with tactile
sensors,” in Proc. 17th IEEE Int. Symp. Robot Human Interactive
Commun., Munich, Germany, 2008, pp. 683-688.
[4] M. Ohka, H. Kobayashi, J. Takata, and Y. Mitsuya, “Sensing precision
of an optical three-axis tactile sensor for a robotic finger,” in Proc.
15th IEEE Int. Symp. Robot Human Interactive Commun., Hatfield,
U.K., 2006, pp. 214-219.
[5] R. Kageyama, S. Kagami, M. Inaba, and H. Inoue, “Development of
soft and distributed tactile sensors and the application to a humanoid
robot,” in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics,
Tokyo, Japan, 1999, pp. 981-986.
[6] P. Mittendorfer and G. Cheng, “Humanoid multimodal tactile-sensing
modules,” IEEE Trans. Robotics, vol. 27, no. 3, pp. 401-410, 2011.
[7] R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, “Tactile sensing –
from humans to humanoids,” IEEE Trans. Robotics, vol. 26, no. 1, pp.
1-20, Feb. 2010.
[8] Tekscan Inc., FlexiForce Sensors User Manual, 2008.
http://www.tekscan.com/pdf/FlexiForce-Sensors-Manual.pdf
[9] H. B. Muhammad, C. Recchiuto, C. M. Oddo, L. Beccai, C. J.
Anthony, M. J. Adams, M. C. Carrozza, and M. C. L. Ward, “A
capacitive tactile sensor array for surface texture discrimination,”
Microelectronic Engineering, vol. 88, pp. 1811–1813, Jan. 2011.
[10] C. J. Cascio and K. Sathian, “Temporal cues contribute to tactile
perception of roughness,” J. Neurosci., vol. 21, no. 14, pp. 5289-5296,
2001.