Human-Robot Interaction Based On Gesture Identification
Dept. of ECE, SJCET, Palai
1. INTRODUCTION

Robots are artificial agents with capacities of perception and action in the physical world, often referred to by researchers as the workspace. Their use has been generalized in factories, but nowadays they also tend to be found in the most technologically advanced societies in such critical domains as search and rescue, military battle, mine and bomb detection, scientific exploration, law enforcement, entertainment and hospital care.

These new domains of application imply a closer interaction with the user. The concept of closeness is to be taken in its full meaning: robots and humans share the workspace, but also share goals in terms of task achievement. This close interaction needs new theoretical models, on one hand for the robotics scientists who work to improve the robots' utility, and on the other hand to evaluate the risks and benefits of this new "friend" for our modern society.

Robots are poised to fill a growing number of roles in today's society, from factory automation to service applications to medical care and entertainment. While robots were initially used in repetitive tasks where all human direction is given a priori, they are becoming involved in increasingly more complex and less structured tasks and activities, including interaction with the people required to complete those tasks. This complexity has prompted the entirely new endeavour of Human-Robot Interaction (HRI): the study of how humans interact with robots, and how best to design and implement robot systems capable of interacting with humans. The fundamental goal of HRI is to develop the principles and algorithms for robot systems that make them capable of direct, safe and effective interaction with humans. Many facets of HRI research relate to and draw from insights and principles from psychology, communication, anthropology, philosophy, and ethics, making HRI an inherently interdisciplinary endeavour.
A robot is a mechanical or virtual intelligent agent that can perform tasks automatically or with guidance, typically by remote control. In practice, a robot is usually an electro-mechanical machine that is guided by computer and electronic programming. Robots can be autonomous, semi-autonomous or remotely controlled. The word robot first appeared in the play Rossum's Universal Robots by the Czech writer Karel Čapek in 1920.

Robots are used in an increasingly wide variety of tasks, such as vacuuming floors, mowing lawns, cleaning drains, building cars, in warfare, and in tasks that are too expensive or too dangerous to be performed by humans, such as exploring outer space or the bottom of the sea. Robots range from humanoids such as ASIMO and TOPIO to nano robots, swarm robots, industrial robots, military robots, and mobile and serving robots. The branch of technology that deals with robots is robotics.

At present there are two main types of robots, based on their use: general-purpose autonomous robots and dedicated robots. Robots can be classified by their specificity of purpose. A robot might be designed to perform one particular task extremely well, or a range of tasks less well. Of course, all robots by their nature can be re-programmed to behave differently, but some are limited by their physical form.

With the advance of artificial intelligence, research is focusing on one part towards the safest physical interaction, but also on a socially correct interaction, dependent on cultural criteria. The goal is to build an intuitive and easy communication with the robot through speech, gestures, and facial expressions. Dautenhahn refers to friendly human-robot interaction as "Robotiquette", defining it as the "social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans". The robot has to adapt itself to our way of expressing desires and orders, and not the contrary. But everyday environments such as homes have much more complex social rules than those implied by factories or even military environments.
2. HUMAN ROBOTIC INTERACTION

Human-robot interaction is the study of interactions between humans and robots; it is often referred to as HRI by researchers. Human-robot interaction is a multidisciplinary field with contributions from HCI, artificial intelligence, robotics, natural language understanding, and the social sciences.

Human-robot interaction has been a topic of both science fiction and academic speculation even before any robots existed. Because HRI depends on knowledge of (sometimes natural) human communication, many aspects of HRI are continuations of human communications topics that are much older than robotics per se.

The origin of HRI as a discrete problem was stated by the 20th-century author Isaac Asimov in 1942, in his short story "Runaround" (later collected in I, Robot). He states the Three Laws of Robotics as:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws of robotics determine the idea of safe interaction. The closer the human and the robot get, and the more intricate the relationship, the more the risk of a human being injured rises. Nowadays, in advanced societies, manufacturers employing robots solve this issue by not letting human and robot share the workspace at any time. This is achieved by the extensive use of safe zones and cages.
Thus the presence of humans is completely forbidden in the robot workspace while it is working.

With the advances of artificial intelligence, autonomous robots could eventually have more proactive behaviours, planning their motion in complex unknown environments. These new capabilities would have to keep safety as the primary issue and efficiency as the second. To allow this new generation of robots, research is being done on human detection, motion planning, scene reconstruction and intelligent behaviour through task planning.

The basic goal of HRI is to define a general human model that could lead to principles and algorithms allowing more natural and effective interaction between humans and robots. Many in the field of HRI study how humans collaborate and interact, and use those studies to motivate how robots should interact with humans.

HRI has continued to be a topic of both academic and popular-culture interest. In fact, real-world robots came into existence long after plays, novels, and movies developed them as notions and began to ask questions regarding how humans and robots would interact, and what their respective roles in society could be. While not every one of those popular culture works has affected the field of robotics research, there have been instances where ideas in the research world had their genesis in popular culture.

In I, Robot, the three laws were examined relative to commands that humans give robots, methods for humans to diagnose malfunctions, and ways in which robots can participate in society. The theoretical implications of how the three laws are designed to work have impacted the way that robot and agent systems operate today, even though the type of autonomous reasoning needed for implementing a system that obeys the three laws does not exist yet.

At the other end of HRI research, cognitive modelling of the "relationship" between humans and robots benefits both psychologists and robotics researchers, and the user studies involved are often of interest to both sides.
Philip K. Dick's novel Do Androids Dream of Electric Sheep? (1968) is set in a future world (originally in the late '90s) where robots (called replicants) mingle with humans. The replicants are humanoid robots that look and act like humans, and special tests are devised to determine whether an individual is a human or a replicant. The test is related to the Turing Test, in that both involve asking probing questions that require human experiences and capacities in order to answer correctly. As is typical, the story also features a battle between humans and replicants.

George Lucas' Star Wars movies (starting in 1977) feature two robots (C-3PO and R2-D2) as key characters, which are active, intuitive, even heroic. One of the most interesting features from a robot design point of view is that, while one of the robots is humanoid in form (C-3PO) and the other (R2-D2) is not, both interact effectively with humans through social, assistive, and service interactions. C-3PO speaks, gestures, and acts as a less-than-courageous human. R2-D2, on the other hand, interacts socially only through beeps and movement, but is understood and often preferred by the audience for its decisiveness and courage.

In the television show Star Trek: The Next Generation (1987-1994), an android named Data is a key team member with super-human intelligence but no emotions. Data's main dream is to become more human, finally mastering emotion. Data progresses to becoming an actor, a poet, a friend, and often a hero, presenting robots in a number of potentially positive roles.
Fig 2.1 An example of an HRI testbed: a humanoid torso on a mobile platform, and a simulation of the same system.

The short story and movie The Bicentennial Man features a robot who exhibits human-like creativity, carving sculptures from wood. Eventually, he strikes out on his own, on a quest to find like-minded robots. His quest turns to a desire to be recognized as a human. Through cooperation with a scientist, he develops artificial organs in order to bridge the divide between himself and other humans, benefiting both himself and humanity. Eventually, he is recognized as a human when he creates his own mortality.

These examples, among many others, serve to frame the scope of HRI research and exploration. They also provide some of the critical questions regarding robots and society that have become benchmarks for real-world robot systems.

Scholtz describes five roles that a human may have when interacting with a robot: supervisor, operator, teammate, mechanic/programmer, and bystander. One or more of these values would be assigned to the INTERACTION-ROLE classification.

A supervisory role is taken by a human when he or she needs to monitor the behaviour of a robot, but does not need to directly control it.
For example, a supervisor of an unmanned vehicle may tell the robot where it should move, and the robot then plans and carries out its task.

An operator needs to have more interaction with a robot, stepping in to teleoperate the robot or to change the robot's behaviour.

A teammate works with a robot to accomplish a task. An example of this would be a manufacturing robot that accomplishes part of an assembly while a human works on another part of the assembly of the item.

A mechanic or programmer needs to physically change the robot's hardware or software.

A bystander does not control a robot but needs to have some understanding of what the robot is doing in order to be in the same space. For example, a person who walks into a room with a robot vacuum cleaner needs to be able to avoid the robot safely.

2.1 HRI RESEARCH CHALLENGES

The study of HRI contains a wide variety of challenges, some of a basic research nature, exploring concepts general to HRI, and others of a domain-specific nature, dealing with direct uses of robot systems that interact with humans in particular contexts. In this section, we overview the following major research challenges within HRI: multimodal sensing and perception; design and human factors; developmental and epigenetic robotics; social, service and assistive robotics; and robotics for education.

Multi-Modal Perception

Real-time perception and dealing with uncertainty in sensing are some of the most enduring challenges of robotics. For HRI, the perceptual challenges are particularly complex, because of the need to perceive, understand, and react to human activity in real time.
The range of sensor inputs for human interaction is far larger than for most other robotic domains in use today. HRI inputs include vision and speech, both major open challenges for real-time data processing. Computer vision methods that can process human-oriented data such as facial expressions and gestures must be capable of handling a vast range of possible inputs and situations. Similarly, language understanding and dialogue systems between human users and robots remain an open research challenge. Tougher still is obtaining an understanding of the connection between visual and linguistic data and combining them toward improved sensing and expression.

Design And Human Factors

The design of the robot, particularly its human-factors concerns, is a key aspect of HRI. Research in these areas draws from similar research in human-computer interaction (HCI) but features a number of significant differences related to the robot's physical real-world embodiment. The robot's physical embodiment, its form and level of anthropomorphism, and the simplicity or complexity of its design are some of the key research areas being explored.

Developmental/Epigenetic Robotics

Developmental robotics, sometimes referred to as epigenetic robotics, studies robot cognitive development. Developmental roboticists are focused on creating intelligent machines by endowing them with the ability to autonomously acquire skills and information. Research into developmental/epigenetic robotics spans a broad range of approaches. One effort has studied teaching task behaviour using shaping and joint attention, a primary means used by children in observing the behaviour of others in learning tasks. Developmental work includes the design of primitives for humanoid movements, gestures, and dialogue.

Social, Service And Assistive Robotics

Service and assistive robotics include a very broad spectrum of application domains, such as office assistants, autonomous rehabilitation aids, and educational robots.
This broad area integrates basic HRI research with real-world domains that require some service or assistive function. The study of social robots (or socially interactive robots) focuses on social interaction, and so is a proper subset of the problems studied under HRI.

Educational Robotics

Robotics has been shown to be a powerful tool for learning, not only as a topic of study, but also for other more general aspects of science, technology, engineering, and math (STEM) education. A central aspect of STEM education is problem-solving, and robots serve as an excellent means for teaching problem-solving skills in group settings. Based on the mounting success of robotics courses worldwide, there is now an active movement to develop robot hardware and software in the service of education, starting from the youngest elementary school ages and up. Robotics is becoming an important tool for teaching computer science and introductory college engineering.
3. PROPOSED WORK

In this project we have established a successful interaction between a human and a robot. This interaction is made possible by hand gesture identification. The gestures (left, right, forward and backward) made by the human hand are identified and converted to electrical signals (voltages) by an accelerometer. The accelerometer captures the motion in the X, Y and Z directions, and the corresponding voltages are transmitted to the receiver via a wireless transmission method; ZigBee is used for this purpose because it is a more powerful and reliable method than the alternatives.

The receiver receives the transmitted signals and generates a control sequence to produce the corresponding motion in the autobot. The autobot is designed with three wheels, because controlling a three-wheeled autobot is easier and more power-efficient than controlling a four-wheeled one. In this autobot the front wheel is free to move in any direction, and the two back wheels are connected to the shafts of two motors. A wireless camera is provided on the receiver, so the autobot can be controlled by a human standing at a remote location. The camera feeds live video to a monitor placed in the transmitter section, so by watching the video a deaf and mute person can also control the autobot.
4. BLOCK DIAGRAMS

4.1 TRANSMITTER SECTION

Fig 4.1.2 Block Diagram Of Transmitter Section

The figure above shows the basic block diagram of the Human-Robot Interaction system. There are different ways for a human to interact with a robot, such as sound, gesture and touch. Here we use the gesture method of interaction. For identifying the gesture of the human hand we use an accelerometer. The accelerometer is followed by a processing unit, a PIC 16F876A microcontroller, which is an advanced, high-speed device.
The output of the microcontroller is transmitted over a wireless link using the ZigBee protocol, which is a more advanced, faster, and more reliable wireless communication method than other conventional wireless protocols.

The accelerometer used here is an analog accelerometer. It detects motion along the X, Y and Z directions and produces corresponding analog voltages. These voltages cannot be used in analog form, so they are converted to digital values by the analog-to-digital converter inside the microcontroller.

In the microcontroller memory, predefined ranges of values are stored for each type of motion along X, Y and Z. When a motion occurs, the controller reads the value and compares it with the predefined ranges. If the value falls within one of those ranges, the controller identifies whether the motion occurred in the X, Y or Z direction. According to the accelerometer specification, for any one motion two coordinate values change while the third remains the same. So for the left, right, forward and backward movements, values are measured experimentally and assigned ranges. If the output of the accelerometer is within a range, the controller generates a particular code corresponding to that motion, e.g. 01 for left, 02 for right, and so on. These codes appear on one of the controller's ports as per the program and are transmitted through the ZigBee transmitter.

A further function of the controller is to initialize and monitor the stop count. The stop count is a count that is started during the effective motion-detection and code-generation process.
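The compare-and-encode behaviour described above is small enough to sketch. The following is a minimal illustration in C, assuming 10-bit ADC samples; the threshold ranges, the codes beyond 01/02, and the helper functions adc_read and uart_send are hypothetical placeholders, not values taken from the report.

```c
/*
 * Minimal sketch of the transmitter's gesture-classification loop.
 * Threshold ranges and helper functions are illustrative assumptions;
 * a real device would use experimentally measured ranges, as the
 * report describes.
 */
#define CODE_LEFT     0x01
#define CODE_RIGHT    0x02
#define CODE_FORWARD  0x03   /* assumed; report only names 01 and 02 */
#define CODE_BACKWARD 0x04   /* assumed */
#define CODE_NONE     0x00

unsigned int adc_read(unsigned char channel);  /* read one ADC channel  */
void uart_send(unsigned char byte);            /* byte out via ZigBee   */

/* Return the gesture code whose predefined X/Y ranges match the sample. */
unsigned char classify_gesture(unsigned int x, unsigned int y)
{
    /* Example ranges only; measured experimentally on real hardware. */
    if (x < 400 && y > 400 && y < 600)  return CODE_LEFT;
    if (x > 600 && y > 400 && y < 600)  return CODE_RIGHT;
    if (y > 600 && x > 400 && x < 600)  return CODE_FORWARD;
    if (y < 400 && x > 400 && x < 600)  return CODE_BACKWARD;
    return CODE_NONE;                   /* no recognised motion */
}

void transmitter_loop(void)
{
    for (;;) {
        unsigned int x = adc_read(0);
        unsigned int y = adc_read(1);
        unsigned char code = classify_gesture(x, y);
        if (code != CODE_NONE)
            uart_send(code);  /* ZigBee module forwards the byte */
    }
}
```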
4.2 RECEIVER SECTION

Fig 4.2.3 Block Diagram Of Receiver Section

The figure above shows the receiver section of the Human-Robot Interaction system. When a motion occurs, the transmitter detects the type of motion and generates and transmits the corresponding codes. The ZigBee receiver receives each code and produces another set of codes; these codes determine the task to be performed for each command, i.e. a task is assigned to each code. This is the main function of the 89C2051 microcontroller, a 20-pin microcontroller with two ports. The controller then sends the codes to the main controller of the autobot in serial format. The controller in the receiver then initializes a stop count. The stop count increments automatically, and the device continuously monitors its status.
The code generation and sending process stops when the stop count reaches its maximum value or when another motion occurs.

The controller of the autobot receives the code generated by the receiver controller and generates a sequence of codes to control the motor driver. The motor driver is provided to interface the two motors with the controller and to supply more power to the motors; it also helps provide a fast response. The motor driver can control two motors at a time, and has internal ESD protection, thermal shutdown, and high noise immunity. According to the code received from the autobot controller, the motor driver rotates the motor shafts in the clockwise or anticlockwise direction to move the robot. Thus the autobot motion occurs.

The autobot is a three-wheeled device; two of the wheels are connected to the DC motors, and the motor driver IC controls their movement. The front wheel is free to move in any direction, whereas the other two wheels can move in the clockwise and anticlockwise directions only. The three-wheel design reduces the power requirement and power loss and improves response time. A four-wheeled autobot would require four motors and two driver ICs.
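A rough sketch of the relay-and-stop-count behaviour described in this section, as it might look on the receiver's 89C2051: the mapping function, the port-write helper, and the MAX_STOP_COUNT value are assumptions for illustration, since the report does not specify them.

```c
/*
 * Sketch of the receiver-side relay: each received ZigBee byte is
 * mapped to a task code on port P1 and the stop count restarts; when
 * the count expires with no fresh gesture, a stop code is emitted.
 * All names and the count limit below are illustrative assumptions.
 */
#define MAX_STOP_COUNT 50000U
#define TASK_STOP      0x00

unsigned char uart_byte_available(void);          /* ZigBee RX ready? */
unsigned char uart_read(void);                    /* read received byte */
unsigned char map_code_to_task(unsigned char c);  /* code -> task code */
void write_port_p1(unsigned char value);          /* latched to autobot */

void receiver_loop(void)
{
    unsigned int stop_count = 0;
    unsigned char task = TASK_STOP;

    for (;;) {
        if (uart_byte_available()) {      /* new motion code arrived */
            task = map_code_to_task(uart_read());
            stop_count = 0;               /* reinitialize stop count */
        } else if (++stop_count >= MAX_STOP_COUNT) {
            task = TASK_STOP;             /* no fresh gesture: stop */
        }
        write_port_p1(task);              /* read by the autobot MCU */
    }
}
```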
5. HARDWARE SECTION

5.1 CIRCUIT DIAGRAMS

5.1.1 TRANSMITTER SECTION

Fig 5.1.1.4 Circuit Diagram Of Transmitter Section
The figure shows the circuit diagram of the transmitter of the Human-Robot Interaction system. The circuit diagram of the power supply is shown at the top of the figure. A 7805 voltage regulator IC regulates the incoming supply, and a power supply indicator is also provided. The whole system works from a 5 V supply.

The transmitter mainly consists of an accelerometer, a PIC microcontroller and a ZigBee transceiver. The accelerometer used here is the ADXL335, an analog accelerometer. It detects the X, Y and Z directional motion of the human hand and produces corresponding analog voltages. The system is more compatible with digital values, so the analog values must be converted to digital format. The ADXL335 has three output pins for the X, Y and Z outputs. For the analog-to-digital conversion, the ADC in the PIC is used: the output pins of the ADXL335 are connected to three analog inputs of the PIC, i.e. pins 2, 3 and 4. The PIC converts each analog value to a digital value and compares the values with the predefined values stored in its memory.

According to the ADXL335 specification, for any motion two coordinate values change and one remains the same. For example, consider the forward motion of the hand: the X and Y coordinates produce particular values while Z remains at its previous value. For each motion, i.e. forward, backward, left and right, the X, Y and Z coordinate values are measured and stored in the microcontroller memory.

When a motion occurs, the accelerometer produces the corresponding output. The PIC compares those values with the values in its memory; if the comparison succeeds, the PIC produces a particular code and sends it to the receiver through the ZigBee transmitter. The ZigBee module is connected to the transmit and receive pins of the PIC microcontroller. A crystal oscillator is also provided to generate a clock frequency of 20 MHz.
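For reference, the analog-to-digital step described above can be sketched as follows: a minimal XC8-style routine for the PIC16F876A's 10-bit ADC, corresponding to the adc_read helper assumed earlier. The configuration values and the crude acquisition delay are illustrative choices, not taken from the report.

```c
#include <xc.h>

/*
 * Read one 10-bit sample from ADC channel 0-2 (AN0 = pin 2, AN1 = pin 3,
 * AN2 = pin 4 on the PIC16F876A). Register names follow the PIC16F87xA
 * datasheet; clock and delay settings are assumptions for a 20 MHz part.
 */
unsigned int adc_read(unsigned char channel)
{
    ADCON1 = 0x80;                        /* right-justified result, analog inputs */
    ADCON0 = (unsigned char)((channel << 3) | 0x81); /* Fosc/32, channel, ADON */

    for (volatile int i = 0; i < 100; i++) /* crude acquisition/settling delay */
        ;

    ADCON0bits.GO_nDONE = 1;              /* start conversion */
    while (ADCON0bits.GO_nDONE)           /* wait for completion */
        ;
    return ((unsigned int)ADRESH << 8) | ADRESL;
}
```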
5.1.2 RECEIVER SECTION

Fig 5.1.2.5 Circuit Diagram Of Receiver Section
The figure shows the circuit diagram of the receiver. A power supply is provided to generate the 5 V supply, using a 7805 voltage regulator IC.

The main component is the 89C2051 microcontroller, a 20-pin microcontroller with 2 ports that works at a 12 MHz clock frequency. The ZigBee receiver is connected to the receive pin of the microcontroller. The ZigBee receiver receives the codes, and for each code the controller generates another code on port P1. Port 1 is pulled up with a resistor pack, and its output is connected to a 74HC573 latch IC, whose output is applied to the main controller of the autobot. The latch is used to provide a quick response.

5.1.3 AUTOBOT

Fig 5.1.3.6 Circuit Diagram Of Autobot
The figure shows the circuit diagram of the autobot, including the circuit diagram of its power supply. There is provision to give either AC or DC supply to the device: a bridge rectifier is provided for AC supply, but commonly DC is given to the device to make it wireless. A 7805 voltage regulator provides the 5 V supply at the output.

A PIC 18F4550 is the main controller used in the autobot. It is a USB-programmable, 8-bit microcontroller with flash programming capability. The special codes generated by the receiver microcontroller are applied to pins 27 to 30. The controller receives these codes and gives commands to the motor driver IC; the commands are stored in the memory of the main controller. These commands instruct the driver IC, which controls the movement of the motors and the wheels attached to the motor shafts. For a left turn, the motor on the right side should rotate clockwise at full speed while the motor on the left remains still. For a right turn, the left motor should rotate clockwise at full speed while the right motor remains still. For forward motion both motors should rotate clockwise at full speed, and for reverse motion both motors should rotate anticlockwise.

Table 5.1.1 L293D operation modes

PIN 1 (ENABLE) | PIN 2 (INPUT 1) | PIN 7 (INPUT 2) | FUNCTION
HIGH           | LOW             | HIGH            | Turn clockwise
HIGH           | HIGH            | LOW             | Turn anticlockwise
HIGH           | LOW             | LOW             | Stop
HIGH           | HIGH            | HIGH            | Stop
LOW            | not applicable  | not applicable  | Stop
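Table 5.1.1 translates directly into a handful of motor-command helpers. The sketch below follows the table (enable high with pin 2 low and pin 7 high gives clockwise rotation); the fake port register and bit assignments are stand-ins, since the report does not give a pin map.

```c
#include <stdint.h>

/* Stand-ins for a real GPIO port and pin map; the actual wiring is an
 * assumption, as the report does not specify one. */
typedef struct { volatile uint8_t *port; uint8_t mask; } pin_t;
static volatile uint8_t PORT_SIM;                 /* fake port register */
static const pin_t L_EN  = {&PORT_SIM, 1u << 0};  /* left L293D enable  */
static const pin_t L_IN1 = {&PORT_SIM, 1u << 1};  /* left pin 2 input   */
static const pin_t L_IN2 = {&PORT_SIM, 1u << 2};  /* left pin 7 input   */
static const pin_t R_EN  = {&PORT_SIM, 1u << 3};
static const pin_t R_IN1 = {&PORT_SIM, 1u << 4};
static const pin_t R_IN2 = {&PORT_SIM, 1u << 5};

static void pin_write(pin_t p, int level)
{
    if (level) *p.port |= p.mask;
    else       *p.port &= (uint8_t)~p.mask;
}

/* Per Table 5.1.1: EN high with IN1 low / IN2 high turns clockwise,
 * IN1 high / IN2 low turns anticlockwise, and equal levels stop. */
static void motor_set(pin_t en, pin_t in1, pin_t in2, int a, int b)
{
    pin_write(en, 1);
    pin_write(in1, a);
    pin_write(in2, b);
}

void move_forward(void) { motor_set(L_EN, L_IN1, L_IN2, 0, 1);   /* both clockwise     */
                          motor_set(R_EN, R_IN1, R_IN2, 0, 1); }
void move_reverse(void) { motor_set(L_EN, L_IN1, L_IN2, 1, 0);   /* both anticlockwise */
                          motor_set(R_EN, R_IN1, R_IN2, 1, 0); }
void turn_left(void)    { motor_set(L_EN, L_IN1, L_IN2, 0, 0);   /* left stopped       */
                          motor_set(R_EN, R_IN1, R_IN2, 0, 1); } /* right clockwise    */
void turn_right(void)   { motor_set(L_EN, L_IN1, L_IN2, 0, 1);
                          motor_set(R_EN, R_IN1, R_IN2, 0, 0); }
```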
5.2 MAIN COMPONENTS

5.2.1 ACCELEROMETER

An accelerometer is a device that measures proper acceleration, i.e. the acceleration it experiences relative to free fall. For example, an accelerometer on a rocket accelerating through space will measure the rate of change of the velocity of the rocket relative to any inertial frame of reference. However, the proper acceleration measured by an accelerometer is not necessarily the coordinate acceleration (rate of change of velocity). Instead, it is the acceleration associated with the phenomenon of weight experienced by any test mass at rest in the frame of reference of the accelerometer device. For an example where these types of acceleration differ, an accelerometer will measure a value of g in the upward direction when remaining stationary on the ground, because masses on Earth have weight m·g. By contrast, an accelerometer in gravitational free fall toward the center of the Earth will measure a value of zero because, even though its speed is increasing, it is at rest in a frame of reference in which objects are weightless.

Most accelerometers do not display the value they measure, but supply it to other devices. Real accelerometers also have practical limitations in how quickly they respond to changes in acceleration, and cannot respond to changes above a certain frequency.

Single- and multi-axis models of accelerometer are available to detect the magnitude and direction of the proper acceleration (or g-force) as a vector quantity, and can be used to sense orientation (because the direction of weight changes), coordinate acceleration (so long as it produces g-force or a change in g-force), vibration, shock, and falling (a case where the proper acceleration changes, since it tends toward zero). Micromachined accelerometers are increasingly present in portable electronic devices and video game controllers, to detect the position of the device or provide game input.
Pairs of accelerometers extended over a region of space can be used to detect differences (gradients) in the proper accelerations of the frames of reference associated with those points. These devices are called gravity gradiometers, as they measure gradients in the gravitational field. Such pairs of accelerometers may in theory also be able to detect gravitational waves.

Physical Principles

An accelerometer measures proper acceleration, which is the acceleration it experiences relative to free fall and is the acceleration felt by people and objects. Put another way, at any point in spacetime the equivalence principle guarantees the existence of a local inertial frame, and an accelerometer measures the acceleration relative to that frame. Such accelerations are popularly measured in terms of g-force.

An accelerometer at rest relative to the Earth's surface will indicate approximately 1 g upwards, because any point on the Earth's surface is accelerating upwards relative to the local inertial frame (the frame of a freely falling object near the surface). To obtain the acceleration due to motion with respect to the Earth, this "gravity offset" must be subtracted, and corrections made for effects caused by the Earth's rotation relative to the inertial frame.

The reason for the appearance of the gravitational offset is Einstein's equivalence principle, which states that the effects of gravity on an object are indistinguishable from acceleration. When held fixed in a gravitational field by, for example, applying a ground reaction force or an equivalent upward thrust, the reference frame for an accelerometer (its own casing) accelerates upwards with respect to a free-falling reference frame. The effects of this acceleration are indistinguishable from any other acceleration experienced by the instrument, so an accelerometer cannot detect the difference between sitting in a rocket on the launch pad and being in the same rocket in deep space while it uses its engines to accelerate at 1 g. For similar reasons, an accelerometer will read zero during any type of free fall. This includes use in a coasting spaceship in deep space far from any mass, a spaceship orbiting the Earth, an airplane in a parabolic "zero-g" arc, or any free fall in vacuum. Another example is free fall at a sufficiently high altitude that atmospheric effects can be neglected.
However, this does not include a (non-free) fall in which air resistance produces drag forces that reduce the acceleration until constant terminal velocity is reached. At terminal velocity the accelerometer will indicate 1 g of upward acceleration. For the same reason, a skydiver, upon reaching terminal velocity, does not feel as though he or she were in "free fall", but rather experiences a feeling similar to being supported (at 1 g) on a "bed" of uprushing air.

Acceleration is quantified in the SI unit metres per second squared (m/s²), in the CGS unit gal (Gal), or popularly in terms of g-force (g).

For the practical purpose of finding the acceleration of objects with respect to the Earth, such as for use in an inertial navigation system, a knowledge of local gravity is required. This can be obtained either by calibrating the device at rest, or from a known model of gravity at the approximate current position.

APPLICATIONS

Engineering

Accelerometers can be used to measure vehicle acceleration. They allow for performance evaluation of both the engine/drivetrain and the braking systems.

Accelerometers can be used to measure vibration on cars, machines, buildings, process control systems and safety installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and speed with or without the influence of gravity. Applications for accelerometers that measure gravity, wherein an accelerometer is specifically configured for use in gravimetry, are called gravimeters.

Notebook computers equipped with accelerometers can contribute to the Quake-Catcher Network (QCN), a BOINC project aimed at the scientific study of earthquakes.
Industry

Accelerometers are also used for machinery health monitoring, to report the vibration, and its changes over time, of shafts at the bearings of rotating equipment such as turbines, pumps, fans, rollers, compressors, and cooling towers. Vibration monitoring programs are proven to warn of impending failure, save money, reduce downtime, and improve safety in plants worldwide by detecting conditions such as wear of bearings, shaft misalignment, rotor imbalance, gear failure or bearing faults which, if not attended to promptly, can lead to costly repairs. Accelerometer vibration data allows the user to monitor machines and detect these faults before the rotating equipment fails completely. Vibration monitoring programs are utilized in industries such as automotive manufacturing, machine tool applications, pharmaceutical production, power generation and power plants, pulp and paper, sugar mills, food and beverage production, water and wastewater, hydropower, petrochemical and steel manufacturing.

Building And Structural Monitoring

Accelerometers are used to measure the motion and vibration of a structure that is exposed to dynamic loads. Dynamic loads originate from a variety of sources, including:

Human activities: walking, running, dancing or skipping
Working machines: inside a building or in the surrounding area
Construction work: driving piles, demolition, drilling and excavating
Moving loads on bridges
Vehicle collisions
Impact loads: falling debris
Concussion loads: internal and external explosions
Collapse of structural elements
Wind loads and wind gusts
Air blast pressure
Loss of support because of ground failure
Earthquakes and aftershocks

Measuring and recording how a structure responds to these inputs is critical for assessing the safety and viability of the structure. This type of monitoring is called dynamic monitoring.

Consumer Electronics

Fig 5.2.1.7 Galaxy Nexus, an example of a smartphone with a built-in accelerometer

Accelerometers are increasingly being incorporated into personal electronic devices.

Motion Input

Some smartphones, digital audio players and personal digital assistants contain accelerometers for user interface control; often the accelerometer is used to present landscape or portrait views of the device's screen, based on the way the device is being held.
Automatic Collision Notification (ACN) systems also use accelerometers in a system to call for help in the event of a vehicle crash. Prominent ACN systems include OnStar's AACN service, Ford Sync's 911 Assist, Toyota's Safety Connect, Lexus Link, and BMW Assist. Many accelerometer-equipped smartphones also have ACN software available for download. ACN systems are activated by detecting crash-strength g-forces.

Nintendo's Wii video game console uses a controller called the Wii Remote, which contains a three-axis accelerometer and was designed primarily for motion input. Users also have the option of buying an additional motion-sensitive attachment, the Nunchuk, so that motion input can be recorded from both of the user's hands independently. An accelerometer is also used in the Nintendo 3DS system.

The Sony PlayStation 3 uses the DualShock 3 controller, which contains a three-axis accelerometer that can be used to make steering more realistic in racing games such as MotorStorm and Burnout Paradise.

The Nokia 5500 Sport features a 3D accelerometer that can be accessed from software. It is used for step recognition (counting) in a sport application, and for tap gesture recognition in the user interface. Tap gestures can be used for controlling the music player and the sport application, for example to change to the next song by tapping through clothing when the device is in a pocket. Other uses for the accelerometer in Nokia phones include pedometer functionality in Nokia Sports Tracker. Some other devices provide tilt sensing with a cheaper component, which is not a true accelerometer.

Sleep phase alarm clocks use accelerometric sensors to detect the movement of a sleeper, so that they can wake the person when he or she is not in the REM phase and will therefore awake more easily.

Orientation Sensing

A number of 21st-century devices use accelerometers to align the screen depending on the direction the device is held, for example switching between portrait and landscape modes.
Such devices include many tablet PCs and some smartphones and digital cameras.

For example, Apple uses an LIS302DL accelerometer in the iPhone, iPod Touch and the 4th- and 5th-generation iPod Nano, allowing the device to know when it is tilted on its side. Third-party developers have expanded its use with fanciful applications such as electronic bobbleheads. The BlackBerry Storm phone was also an early user of this orientation-sensing feature.

Fig 5.2.1.8 Orientation Detection

The Nokia N95 and Nokia N82 have accelerometers embedded inside them. The accelerometer was primarily used as a tilt sensor for tagging the orientation of photos taken with the built-in camera, and later became available to other applications through a firmware update.

As of January 2009, almost all new mobile phones and digital cameras contain at least a tilt sensor, and sometimes an accelerometer, for the purposes of auto image rotation, motion-sensitive mini-games, and shake correction when taking photographs.
5.2.1.1 ANALOG ACCELEROMETER ADXL335

Fig 5.2.1.9 ADXL 335

The ADXL335 is a small, thin, low-power, complete 3-axis accelerometer with signal-conditioned voltage outputs. The product measures acceleration with a minimum full-scale range of ±3 g. It can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion, shock, or vibration.

The user selects the bandwidth of the accelerometer using the CX, CY, and CZ capacitors at the XOUT, YOUT, and ZOUT pins. Bandwidths can be selected to suit the application, with a range of 0.5 Hz to 1600 Hz for the X and Y axes, and a range of 0.5 Hz to 550 Hz for the Z axis.

The ADXL335 is available in a small, low-profile, 4 mm × 4 mm × 1.45 mm, 16-lead, plastic lead frame chip scale package (LFCSP_LQ).
Functional Block

Fig 5.2.1.10 Functional Block Of ADXL 335

The ADXL335 is a complete 3-axis acceleration measurement system. The ADXL335 has a measurement range of ±3 g minimum. It contains a polysilicon surface-micromachined sensor and signal conditioning circuitry to implement an open-loop acceleration measurement architecture. The output signals are analog voltages that are proportional to acceleration. The accelerometer can measure the static acceleration of gravity in tilt-sensing applications as well as dynamic acceleration resulting from motion, shock, or vibration.

The sensor is a polysilicon surface-micromachined structure built on top of a silicon wafer. Polysilicon springs suspend the structure over the surface of the wafer and provide a resistance against acceleration forces. Deflection of the structure is measured using a differential capacitor that consists of independent fixed plates and plates attached to the moving mass. The fixed plates are driven by 180° out-of-phase square waves. Acceleration deflects the moving mass and unbalances the differential capacitor, resulting in a sensor output whose amplitude is proportional to acceleration.
Phase-sensitive demodulation techniques are then used to determine the magnitude and direction of the acceleration.

The demodulator output is amplified and brought off-chip through a 32 kΩ resistor. The user then sets the signal bandwidth of the device by adding a capacitor. This filtering improves measurement resolution and helps prevent aliasing.

For most applications, a single 0.1 μF capacitor, CDC, placed close to the ADXL335 supply pins adequately decouples the accelerometer from noise on the power supply. However, in applications where noise is present at the 50 kHz internal clock frequency (or any harmonic thereof), additional care in power supply bypassing is required, because this noise can cause errors in the acceleration measurement.

If additional decoupling is needed, a 100 Ω (or smaller) resistor or ferrite bead can be inserted in the supply line. Additionally, a larger bulk bypass capacitor (1 μF or greater) can be added in parallel with CDC. Ensure that the connection from the ADXL335 ground to the power supply ground is low impedance, because noise transmitted through ground has an effect similar to noise transmitted through VS.

The ADXL335 has provisions for band-limiting the XOUT, YOUT, and ZOUT pins. Capacitors must be added at these pins to implement low-pass filtering for antialiasing and noise reduction. The equation for the 3 dB bandwidth is

F−3dB = 1 / (2π × (32 kΩ) × C(X,Y,Z))

or, more simply,

F−3dB = 5 μF / C(X,Y,Z)

The tolerance of the internal resistor (RFILT) typically varies as much as ±15% of its nominal value (32 kΩ), and the bandwidth varies accordingly. A minimum capacitance of 0.0047 μF for CX, CY, and CZ is recommended in all cases.
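As a quick numeric check of the formula above, the short program below evaluates F−3dB for a 0.1 μF filter capacitor; the capacitor value is just an example, not a recommendation from the report.

```c
#include <stdio.h>

/* 3 dB bandwidth of an ADXL335 output for a given filter capacitor C,
 * using the nominal 32 kOhm internal resistor (tolerance about ±15%). */
static double f3db_hz(double c_farads)
{
    const double PI = 3.141592653589793;
    const double R_FILT = 32e3;
    return 1.0 / (2.0 * PI * R_FILT * c_farads);
}

int main(void)
{
    /* 0.1 uF gives roughly 50 Hz, matching the 5 uF / C shortcut. */
    printf("C = 0.1 uF -> F(-3dB) = %.1f Hz\n", f3db_hz(0.1e-6));
    return 0;
}
```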
The ST pin controls the self-test feature. When this pin is set to VS, an electrostatic force is exerted on the accelerometer beam. The resulting movement of the beam allows the user to test whether the accelerometer is functional. The typical change in output is −1.08 g (corresponding to −325 mV) on the X-axis, +1.08 g (or +325 mV) on the Y-axis, and +1.83 g (or +550 mV) on the Z-axis. The ST pin can be left open-circuit or connected to common (COM) in normal use.

Never expose the ST pin to voltages greater than VS + 0.3 V. If this cannot be guaranteed due to the system design (for instance, if there are multiple supply voltages), then a low-VF clamping diode between ST and VS is recommended.

The selected accelerometer bandwidth ultimately determines the measurement resolution (the smallest detectable acceleration). Filtering can be used to lower the noise floor and improve the resolution of the accelerometer. Resolution is dependent on the analog filter bandwidth at XOUT, YOUT, and ZOUT.

The output of the ADXL335 has a typical bandwidth of greater than 500 Hz. The user must filter the signal at this point to limit aliasing errors: the analog bandwidth must be no more than half the analog-to-digital sampling frequency to minimize aliasing. The analog bandwidth can be further decreased to reduce noise and improve resolution.

The ADXL335 noise has the characteristics of white Gaussian noise, which contributes equally at all frequencies and is described in terms of μg/√Hz (the noise is proportional to the square root of the accelerometer bandwidth). The user should limit bandwidth to the lowest frequency needed by the application to maximize the resolution and dynamic range of the accelerometer.

5.2.2 ZIGBEE

ZigBee is a specification for a suite of high-level communication protocols using small, low-power digital radios based on an IEEE 802 standard for personal area networks. Applications include wireless light switches, electrical meters with in-home displays, and other consumer and industrial equipment that requires short-range wireless transfer of data at relatively low rates. The technology defined by the ZigBee specification is intended to be simpler and less expensive than other WPANs, such as Bluetooth.
ZigBee is targeted at radio-frequency (RF) applications that require a low data rate, long battery life, and secure networking. ZigBee has a defined rate of 250 kbit/s, best suited for periodic or intermittent data or a single signal transmission from a sensor or input device. ZigBee-based traffic management systems have also been implemented. The name refers to the waggle dance of honey bees after their return to the beehive.

ZigBee is a low-cost, low-power, wireless mesh network standard. The low cost allows the technology to be widely deployed in wireless control and monitoring applications. Low power usage allows longer life with smaller batteries. Mesh networking provides high reliability and more extensive range. ZigBee chip vendors typically sell integrated radios and microcontrollers with between 60 KB and 256 KB of flash memory.

ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and Australia, and 2.4 GHz in most jurisdictions worldwide. Data transmission rates vary from 20 to 250 kilobits/second.

The ZigBee network layer natively supports both star and tree typical networks, and generic mesh networks. Every network must have one coordinator device, tasked with its creation, the control of its parameters and basic maintenance. Within star networks, the coordinator must be the central node. Both trees and meshes allow the use of ZigBee routers to extend communication at the network level.

ZigBee builds upon the physical layer and medium access control defined in IEEE standard 802.15.4 (2003 version) for low-rate WPANs. The specification goes on to complete the standard by adding four main components: network layer, application layer, ZigBee device objects (ZDOs) and manufacturer-defined application objects, which allow for customization and favour total integration.

Besides adding two high-level network layers to the underlying structure, the most significant improvement is the introduction of ZDOs. These are responsible for a number of tasks, which include keeping track of device roles, management of requests to join a network, device discovery and security.
Fig 5.2.2.11 ZigBee Protocol Stack

ZigBee is not intended to support powerline networking, but to interface with it, at least for smart metering and smart appliance purposes.

Because ZigBee nodes can go from sleep to active mode in 30 ms or less, latency can be low and devices can be responsive, particularly compared to Bluetooth wake-up delays, which are typically around three seconds. Because ZigBee nodes can sleep most of the time, average power consumption can be low, resulting in long battery life.
Uses

ZigBee protocols are intended for embedded applications requiring low data rates and low power consumption. The resulting network will use very small amounts of power; individual devices must have a battery life of at least two years to pass ZigBee certification.

Typical application areas include:

Home entertainment and control: home automation, smart lighting, advanced temperature control, safety and security, movies and music
Wireless sensor networks: starting with individual sensors like Telosb/Tmote and Iris from Memsic
Industrial control
Embedded sensing
Medical data collection
Smoke and intruder warning
Building automation

Device Types

There are three different types of ZigBee devices:

ZigBee Coordinator (ZC): The most capable device, the coordinator forms the root of the network tree and might bridge to other networks. There is exactly one ZigBee coordinator in each network, since it is the device that started the network originally. It is able to store information about the network, including acting as the Trust Center and repository for security keys.

ZigBee Router (ZR): As well as running an application function, a router can act as an intermediate router, passing on data from other devices.

ZigBee End Device (ZED): Contains just enough functionality to talk to the parent node (either the coordinator or a router); it cannot relay data from other devices. This relationship allows the node to be asleep a significant amount of the time, thereby giving long battery life.
A ZED requires the least amount of memory, and therefore can be less expensive to manufacture than a ZR or ZC.

Protocols

The protocols build on recent algorithmic research (Ad hoc On-demand Distance Vector, neuRFon) to automatically construct a low-speed ad hoc network of nodes. In most large network instances, the network will be a cluster of clusters. It can also form a mesh or a single cluster. The current ZigBee protocols support beacon-enabled and non-beacon-enabled networks.

In non-beacon-enabled networks, an unslotted CSMA/CA channel access mechanism is used. In this type of network, ZigBee routers typically have their receivers continuously active, requiring a more robust power supply. However, this allows for heterogeneous networks in which some devices receive continuously, while others transmit only when an external stimulus is detected. The typical example of a heterogeneous network is a wireless light switch: the ZigBee node at the lamp may receive constantly, since it is connected to the mains supply, while a battery-powered light switch would remain asleep until the switch is thrown. The switch then wakes up, sends a command to the lamp, receives an acknowledgment, and returns to sleep. In such a network the lamp node will be at least a ZigBee router, if not the ZigBee coordinator; the switch node is typically a ZigBee end device.

In beacon-enabled networks, special network nodes called ZigBee routers transmit periodic beacons to confirm their presence to other network nodes. Nodes may sleep between beacons, thus lowering their duty cycle and extending their battery life. Beacon intervals depend on the data rate; they may range from 15.36 milliseconds to 251.65824 seconds at 250 kbit/s, from 24 milliseconds to 393.216 seconds at 40 kbit/s, and from 48 milliseconds to 786.432 seconds at 20 kbit/s. However, low-duty-cycle operation with long beacon intervals requires precise timing, which can conflict with the need for low product cost.
In general, the ZigBee protocols minimize the time the radio is on, so as to reduce power use. In beaconing networks, nodes only need to be active while a beacon is being transmitted. In non-beacon-enabled networks, power consumption is decidedly asymmetrical: some devices are always active, while others spend most of their time sleeping.

Except for the Smart Energy Profile 2.0, ZigBee devices are required to conform to the IEEE 802.15.4-2003 Low-Rate Wireless Personal Area Network (LR-WPAN) standard. The standard specifies the lower protocol layers: the physical layer (PHY) and the media access control (MAC) portion of the data link layer (DLL). The basic channel access mode is "carrier sense, multiple access/collision avoidance" (CSMA/CA). That is, the nodes talk in the same way that people converse: they briefly check to see that no one is talking before they start. There are three notable exceptions to the use of CSMA. Beacons are sent on a fixed timing schedule and do not use CSMA. Message acknowledgments also do not use CSMA. Finally, devices in beacon-enabled networks that have low-latency real-time requirements may also use Guaranteed Time Slots (GTS), which by definition do not use CSMA.

XBEE

Fig 5.2.2.12 XBEE
The XBee/XBee-PRO ZNet 2.5 OEM (formerly known as Series 2 and Series 2 PRO) RF modules were engineered to operate within the ZigBee protocol and support the unique needs of low-cost, low-power wireless sensor networks. The modules require minimal power and provide reliable delivery of data between remote devices.

Serial Communication

The XBee ZNet 2.5 OEM RF modules interface to a host device through a logic-level asynchronous serial port. Through its serial port, the module can communicate with any logic- and voltage-compatible UART, or, through a level translator, with any serial device (for example, through a Digi proprietary RS-232 or USB interface board).

UART Data Flow

Devices that have a UART interface can connect directly to the pins of the RF module, as shown in the figure below.

Fig 5.2.2.13 UART Data Flow

Data enters the module UART through the DIN pin (pin 3) as an asynchronous serial signal. The signal should idle high when no data is being transmitted. Each data byte consists of a start bit (low), 8 data bits (least significant bit first) and a stop bit (high).
The following figure illustrates the serial bit pattern of data passing through the module.

Fig 5.2.2.14 UART data packet 0x1F (decimal 31) as transmitted through the RF module. Example data format: 8-N-1 (data bits, parity, number of stop bits)

The module UART performs tasks, such as timing and parity checking, that are needed for data communications. Serial communication depends on the two UARTs being configured with compatible settings (baud rate, parity, start bits, stop bits, data bits).
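Since the XBee's serial side is a plain 8-N-1 UART, the PIC16F876A's USART can talk to it directly. Below is a minimal initialization sketch assuming the 20 MHz crystal mentioned earlier and an assumed 9600 baud (the report does not state the baud rate); register names follow the PIC16F87xA datasheet, and an XC8-style compiler is assumed.

```c
#include <xc.h>

/* Configure the PIC16F876A USART for 8-N-1 toward the XBee DIN/DOUT pins. */
void uart_init(void)
{
    /* 9600 baud @ 20 MHz, high-speed mode: SPBRG = 20e6/(16*9600) - 1 ≈ 129 */
    SPBRG = 129;
    TXSTAbits.BRGH = 1;    /* high-speed baud rate generator */
    TXSTAbits.SYNC = 0;    /* asynchronous mode */
    TXSTAbits.TXEN = 1;    /* enable transmitter */
    RCSTAbits.SPEN = 1;    /* enable serial port (claims the TX/RX pins) */
    RCSTAbits.CREN = 1;    /* enable continuous receive */
}

/* Send one byte; the UART hardware adds the start and stop bits itself. */
void uart_send(unsigned char byte)
{
    while (!PIR1bits.TXIF)  /* wait until the transmit buffer is empty */
        ;
    TXREG = byte;
}
```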
5.2.3 PIC 18F4550

The PIC 18F4550 is an 8-bit microcontroller with five main feature groups:

1. Universal Serial Bus features
2. Power-managed modes
3. Flexible oscillator structure
4. Peripheral highlights
5. Special microcontroller features

Universal Serial Bus Features

USB V2.0 compliant
Low speed (1.5 Mb/s) and full speed (12 Mb/s)
Supports control, interrupt, isochronous and bulk transfers
On-chip USB transceiver with on-chip voltage regulator

Special Microcontroller Features

C compiler optimized architecture with optional extended instruction set
100,000 erase/write cycle enhanced Flash program memory (typical)
1,000,000 erase/write cycle data EEPROM memory (typical)
Flash/data EEPROM retention: > 40 years
Self-programmable under software control
Priority levels for interrupts
8 x 8 single-cycle hardware multiplier
Extended Watchdog Timer (WDT) with programmable period from 41 ms to 131 s
Programmable code protection
Single-supply 5 V In-Circuit Serial Programming (ICSP) via two pins
In-Circuit Debug (ICD) via two pins

5.2.4 PIC 16F876A

Main Features

- 8-channel analog-to-digital converter (A/D)
- Brown-out Reset (BOR)
- Analog comparator module with: two analog comparators; a programmable on-chip voltage reference (VREF) module; programmable input multiplexing from device inputs and the internal voltage reference
- Only 35 single-word instructions to learn
- All single-cycle instructions except program branches, which are two-cycle
- Operating speed: 20 MHz clock input, 200 ns instruction cycle
- 8K x 14 words of Flash program memory, 368 x 8 bytes of data memory (RAM), 256 x 8 bytes of EEPROM data memory
- Pinout compatible with other 28-pin or 40/44-pin PIC16CXXX and PIC16FXXX microcontrollers
- Low-power, high-speed Flash/EEPROM technology; fully static design; wide operating voltage range (2.0 V to 5.5 V); commercial and industrial temperature ranges; low power consumption

Special Microcontroller Features

100,000 erase/write cycle enhanced Flash program memory (typical)
1,000,000 erase/write cycle data EEPROM memory (typical)
Data EEPROM retention > 40 years
In-Circuit Serial Programming (ICSP) via two pins
Single-supply 5 V In-Circuit Serial Programming
Watchdog Timer (WDT) with its own on-chip RC oscillator for reliable operation
Programmable code protection
Power-saving Sleep mode

5.2.5 89C2051 MICROCONTROLLER

The AT89C2051 is a low-voltage, high-performance CMOS 8-bit microcomputer with 2K bytes of Flash programmable and erasable read-only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C2051 is a powerful microcomputer which provides a highly flexible and cost-effective solution to many embedded control applications. The AT89C2051 provides the following standard features: 2K bytes of Flash, 128 bytes of RAM, 15 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a full-duplex serial port, a precision analog comparator, and on-chip oscillator and clock circuitry.
5.2.5 89C2051 MICROCONTROLLER

The AT89C2051 is a low-voltage, high-performance CMOS 8-bit microcomputer with 2K bytes of Flash programmable and erasable read-only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C2051 is a powerful microcomputer that provides a highly flexible and cost-effective solution for many embedded control applications. The AT89C2051 provides the following standard features: 2K bytes of Flash, 128 bytes of RAM, 15 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a full-duplex serial port, a precision analog comparator, and on-chip oscillator and clock circuitry. In addition, the AT89C2051 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle mode stops the CPU while allowing the RAM, timer/counters, serial port and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next hardware reset.

5.2.6 L293D MOTOR DRIVER

The device is a monolithic integrated high-voltage, high-current four-channel driver designed to accept standard DTL or TTL logic levels and drive inductive loads (such as relays, solenoids, DC and stepping motors) and switching power transistors. To simplify use as two bridges, each pair of channels is equipped with an enable input. A separate supply input is provided for the logic, allowing operation at a lower voltage, and internal clamp diodes are included. This device is suitable for use in switching applications at frequencies up to 5 kHz. The L293D is assembled in a 16-lead plastic package in which the 4 center pins are connected together and used for heat sinking. The L293DD is assembled in a 20-lead surface-mount package in which the 8 center pins are connected together and used for heat sinking.

Features
1. 600 mA output current capability per channel
2. 1.2 A peak output current (non-repetitive) per channel
3. Enable facility
4. Over-temperature protection
5. Logical "0" input voltage up to 1.5 V (high noise immunity)
6. Internal clamp diodes
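The sketch below makes the bridge operation concrete by driving one DC motor through half of an L293D from the autobot's PIC18F4550. The pin assignments (IN1 on RB0, IN2 on RB1, EN1 on RB2) are hypothetical, chosen only for illustration; the actual wiring is given by the PCB layouts in Chapter 7:

    #include <xc.h>

    /* Hypothetical wiring: L293D IN1 -> RB0, IN2 -> RB1, EN1 -> RB2 */
    #define IN1 LATBbits.LATB0
    #define IN2 LATBbits.LATB1
    #define EN1 LATBbits.LATB2

    void motor_init(void)
    {
        TRISB &= ~0x07;     /* RB0-RB2 as outputs */
        EN1 = 0;            /* bridge disabled: outputs high-impedance, motor coasts */
    }

    void motor_forward(void) { IN1 = 1; IN2 = 0; EN1 = 1; }  /* current flows one way */
    void motor_reverse(void) { IN1 = 0; IN2 = 1; EN1 = 1; }  /* current reversed */
    void motor_brake(void)   { IN1 = 0; IN2 = 0; EN1 = 1; }  /* both terminals low: fast stop */
    void motor_coast(void)   { EN1 = 0; }                    /* outputs float, motor free-runs */

Driving the second bridge for the other wheel is identical, so turning left or right reduces to running the two wheels in opposite directions.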
5.2.7 SL74HC573

This device contains protection circuitry to guard against damage due to high static voltages or electric fields. However, precautions must be taken to avoid applying any voltage higher than the maximum rated voltages to this high-impedance circuit. For proper operation, VIN and VOUT should be constrained to the range GND <= (VIN or VOUT) <= VCC.

Features
- The SL74HC573 is identical in pinout to the LS/ALS573. The device inputs are compatible with standard CMOS outputs; with pull-up resistors, they are compatible with LS/ALS TTL outputs.
- These latches appear transparent to data (i.e., the outputs change asynchronously) when Latch Enable is high. When Latch Enable goes low, data meeting the setup and hold time becomes latched.
- Outputs directly interface to CMOS, NMOS and TTL
- Operating voltage range: 2.0 to 6.0 V
- Low input current: 1.0 uA
- High noise immunity characteristic of CMOS devices
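As a brief illustration of this transparent-latch timing, the following hedged sketch presents a byte to the latch and freezes it at the outputs; the wiring (D inputs on PORTD, Latch Enable on RC0 of a PIC18) is hypothetical:

    #include <xc.h>

    #define LE LATCbits.LATC0    /* hypothetical Latch Enable pin */

    void latch_init(void)
    {
        TRISD = 0x00;            /* PORTD drives the latch's D inputs */
        TRISCbits.TRISC0 = 0;    /* Latch Enable as output */
        LE = 0;                  /* start opaque: outputs hold their last value */
    }

    void latch_write(unsigned char value)
    {
        LE = 1;                  /* latch transparent: Q follows D */
        LATD = value;            /* drive the D inputs */
        LE = 0;                  /* falling edge: data meeting setup/hold is latched */
    }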
6. FLOWCHARTS

6.1 TRANSMITTER SECTION

Fig 6.1.15 Flowchart Of Transmitter Section
The figure above shows the flowchart of the transmitter section of the human-robot interaction system. The accelerometer is the device that makes human interaction with the robot possible: it detects the gestures of the human hand and converts them into electrical voltages. That is, the accelerometer senses the X, Y and Z directional motions of the hand and converts these motions into voltages. These voltages are encoded and transmitted to the robot at a remote location through Zigbee, one of the most effective wireless communication protocols. This is the basic working of the transmitter section.

First, all the devices are initialized, including the accelerometer, the Zigbee module and the code-generating section. The accelerometer is sensitive enough to detect the motions. When a motion occurs in the X, Y or Z direction, the accelerometer produces output voltages, which are applied to a code-generating circuit. The code-generating circuit is built around a PIC16F876A microcontroller, which contains an analog-to-digital converter. Beforehand, the ranges of accelerometer values corresponding to the X, Y and Z motions are specified and stored in the microcontroller memory. When a motion occurs, the controller compares the measured values against these stored ranges. For each range of values the microcontroller generates a pre-assigned code on one of its ports; for example, for an X-Y combination of values the code is 01, for X-Z it is 02, for Y-Z it is 03, and so on. The controller thus checks the incoming values from the accelerometer's output pins and generates the corresponding code whenever the values fall within a predefined range; otherwise it generates no code and waits for the next motion to occur.

After generating the code, the microcontroller sends it to the remote location through the Zigbee device and initializes a stop count. The stop count increments repeatedly until another motion occurs. At the receiver, tasks are assigned to the transmitted codes. When another motion occurs, the stop count is reinitialized for that motion's code.
If no further motion occurs after the stop count has been initialized, the controller checks the count, and once it reaches the defined value the code generation stops and the device halts.

The stop count exists because the Zigbee link has a limited range. Suppose a particular motion generates a code whose assigned task is "forward motion". Without a stop count, the device would simply keep waiting for the next motion while the robot at the receiver moved continuously, eventually driving out of Zigbee range and beyond our control; from then on, changes in the hand motion at the transmitter would no longer translate into motion of the robot. With the stop count, the device halts the operation once the counter reaches its maximum value and then waits for the next movement.
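A minimal firmware sketch of this transmit loop is given below. It is an illustration only: the threshold values, the gesture codes and the stop-count limit are assumptions (the report does not list the stored ranges), and adc_read and uart_putc are the helper routines sketched in Chapter 5:

    #include <xc.h>

    #define STOP_LIMIT 50000UL       /* assumed maximum stop count */

    void adc_init(void);                           /* see the sketch in 5.2.4 */
    void uart_init(void);                          /* see the sketch in 5.2.2 */
    unsigned int adc_read(unsigned char channel);
    void uart_putc(unsigned char c);

    /* Map raw X/Y/Z readings to a gesture code; the thresholds are hypothetical. */
    static unsigned char classify(unsigned int x, unsigned int y, unsigned int z)
    {
        if (x > 600 && y > 600) return 0x01;   /* X & Y combination */
        if (x > 600 && z > 600) return 0x02;   /* X & Z combination */
        if (y > 600 && z > 600) return 0x03;   /* Y & Z combination */
        return 0x00;                           /* no recognized gesture */
    }

    void main(void)
    {
        unsigned long stop_count = 0;

        adc_init();
        uart_init();
        while (1) {
            unsigned char code = classify(adc_read(0), adc_read(1), adc_read(2));
            if (code != 0x00) {
                uart_putc(code);          /* the Zigbee module radios the byte out */
                stop_count = 0;           /* a new motion reinitializes the stop count */
            } else if (++stop_count >= STOP_LIMIT) {
                stop_count = STOP_LIMIT;  /* limit reached: send nothing, wait for motion */
            }
        }
    }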
6.2 RECEIVER SECTION

Fig 6.2.16 Flowchart Of Receiver Section
The receiver mainly consists of a Zigbee receiver and an 89C2051 microcontroller, which is the main controller of the receiver. The receiver is mounted on an autobot that can move in any direction according to the commands. The main components of the autobot are a PIC18F4550, a motor-driver IC and two wheels driven by DC motors; the front wheel is free to turn in any direction.

In the non-operating condition, i.e. when no motion is detected, the motors are idle. When a motion occurs, the transmitter detects it and sends the corresponding code. The Zigbee receiver receives the code and passes the data to the microcontroller in the receiver circuit. This controller decodes the received codes and, as per its program, generates corresponding codes on one of its ports; these codes determine the movement of the autobot. Tasks such as left turn, right turn, forward motion, backward motion and stop are assigned in the autobot controller to the codes from the receiver controller.

On reading the code generated by the receiver controller, the main controller of the autobot produces a sequence of codes to control the motor-driver IC, and the driver IC controls the motion of the motors (forward, backward, and so on). The PIC microcontroller then initializes a stop count and waits for any further motion.

When another motion is detected, the device reinitializes the stop count, the code corresponding to that motion is generated, the main controller produces the corresponding control sequence for the motors, and the wheels of the autobot perform the corresponding movement. If the motion does not change, the device checks the stop count, and once it reaches its maximum allowable value the operation of the device stops.

The stop-count concept keeps the device within our control range. In the receiver section we use two controllers: one for the receiver and one for the autobot. Programming both the autobot and the receiver with a single controller would add considerable complexity and degrade the autobot's immediate response to gesture commands. With two controllers we can control the device more accurately, without frequently interrupting the autobot's main controller.
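A hedged sketch of the receiver controller's dispatch loop is given below for the 89C2051, in Keil C51 style. The 11.0592 MHz crystal, the 9600 baud rate and the code-to-command mapping on port 1 are assumptions that mirror the hypothetical codes in the transmitter sketch:

    #include <reg51.h>   /* the AT89C2051 implements a subset of these registers */

    void serial_init(void)
    {
        SCON = 0x50;     /* UART mode 1 (8-N-1), receiver enabled */
        TMOD = 0x20;     /* timer 1 in 8-bit auto-reload mode as baud generator */
        TH1  = 0xFD;     /* 9600 baud at an assumed 11.0592 MHz crystal */
        TR1  = 1;        /* start timer 1 */
    }

    unsigned char serial_getc(void)
    {
        while (!RI)      /* wait for a complete received byte */
            ;
        RI = 0;
        return SBUF;
    }

    void main(void)
    {
        serial_init();
        while (1) {
            switch (serial_getc()) {       /* code byte from the Zigbee receiver */
            case 0x01: P1 = 0x01; break;   /* e.g. forward */
            case 0x02: P1 = 0x02; break;   /* e.g. left    */
            case 0x03: P1 = 0x03; break;   /* e.g. right   */
            default:   P1 = 0x00; break;   /* unknown code: signal stop */
            }
        }
    }

The autobot's main controller reads these port lines and translates them into the L293D drive sequences sketched in Section 5.2.6.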
7. PCB LAYOUTS

Fig 7.17 Component Layout Of Autobot
Fig 7.18 PCB Layout Of Autobot
Fig 7.19 PCB Layout Of Receiver
Fig 7.20 PCB Layout Of Transmitter
8. RESULT AND DISCUSSION

The project "Human Robotic Interaction Based On Gesture Identification" was designed so that the autobot can move forward, backward, right and left according to the motion of the hand. The main highlight of this project is the Zigbee transceiver, which is used for the data transfer between the receiver and the transmitter. Movement of the hand is detected by the accelerometer attached to the hand. This system can be used in home automation. The system also has a camera in the receiver section, so the autobot can be used for surveillance work. With the Zigbee transceiver it is possible to control the autobot from another location.

Fig 8.21 Prototype of Robot
9. ADVANTAGES AND DISADVANTAGES

9.1 ADVANTAGES
- Ease of control.
- Movement of the autobot can be controlled by hand movements.
- Fast response.
- The module can be built in various forms to suit the area of application.
- User friendly: one need not know anything about the robot, since it is controlled by hand movement.
- Efficient and low-cost design.

9.2 DISADVANTAGES
- The camera in the receiver section uses considerable power, so the robot cannot run on battery for long.
10. APPLICATIONS

Robots provide many services in our society, ranging from industrial automation tools to medical care. Robots can be used in hazardous areas that humans cannot reach.

A deaf and mute person can also control the robot, so this system can be used in home automation. The system can be used in industrial areas for fast operation and ease of work.

Giant machinery vehicles can be controlled by body movements.

In the mining industry, robots can be sent in ahead of human workers to examine the environment. Knowing the environmental conditions inside the mines allows appropriate precautions to be taken.
11. FUTURE SCOPE

HRI is going to be an important military application in the future. By translating the whole motions of a human body to a humanoid (human-like robot), we can make a machine clone of a human being, and such robots can be used for military applications. This reduces human casualties, as there is no direct involvement of human beings; moreover, the machine parts are not as easily damaged as human organs would be.

In the medical area, doctors could treat patients at a remote location while sitting in their own cabins under normal conditions.
12. CONCLUSION

This project proposes an authoring method capable of creating and controlling motions of industrial robots based on gesture identification. The proposed method is simple, user-friendly, cost-effective and intelligent, and it facilitates motion authoring of industrial robots using the hand, which is second only to language as a means of communication. The proposed robot-motion authoring method is expected to provide user-friendly and intuitive solutions not only for various industrial robots, but also for other types of robots, including humanoids.