Transcript

  • 1. DMT Robot [The Green Team]
Ron Adolph - Team Leader
Qais Chaudry - Sensors
Moe Elzubeir - Artificial Intelligence
Shehnaz Chowdhury - Actuators
Reema Ahluwalia - HMI
Hasina Aziz - Simulation
April 26, 2001
  • 2. Contents
Overview
  0.1 Introduction
  0.2 Design Philosophy
  0.3 The DMT System
  0.4 Think (Machine Intelligence)
  0.5 Sense (Sensor Package)
  0.6 Act (Actuators)
  0.7 H.M.I. (Human-Machine Interface)
  0.8 Comm/Conn (Communications and Connectivity)
  0.9 Simulation
  0.10 Implementation of the D.M.T. System: 0.10.1 The Path; 0.10.2 Surveillance Stations; 0.10.3 Radio Repeaters; 0.10.4 Control Room
1 Sensors
  1.1 Overview
  1.2 Sensor Package Hardware
  1.3 Surveillance Cameras with Night Vision Scopes: 1.3.1 Introduction; 1.3.2 Hardware; 1.3.3 Sensor Software Package (S.S.P.)
  1.4 HeartBeat Sensor: 1.4.1 Introduction; 1.4.2 Hardware; 1.4.3 Software
  1.5 Laser and Range Finder: 1.5.1 Introduction; 1.5.2 Hardware; 1.5.3 Software
2 Artificial Intelligence
  2.1 Introduction: 2.1.1 Hardware Requirements; 2.1.2 Software Requirements
  2.2 Architecture Overview: 2.2.1 Layer 1 - Subsymbolic Level; 2.2.2 Layer 2 - Skill Level; 2.2.3 Layer 3 - Execution/Deliberation Level
3 Actuators
  3.1 Introduction
  3.2 Hardware Requirements: 3.2.1 2 Drive Wheels and 2 Steering Wheels; 3.2.2 1 Fiberglass Arm with 2 Degrees-of-Freedom (DOF); 3.2.3 A Gasoline Engine with 2 Horsepower; 3.2.4 A Single Ball-Bearing R/C Servo Motor; 3.2.5 A Load Proximity Sensor Alarm; 3.2.6 1 Basic ARGOS Pan-Tilt Head Kit; 3.2.7 A Torso to Contain Other Units
  3.3 Architecture Overview
  3.4 Decomposition Diagram: 3.4.1 Manipulator End Effector Movement; 3.4.2 Manipulator Degrees of Freedom
  3.5 Other Considerations
  3.6 Functional States of Actuator
  3.7 Conclusion
4 Human-Machine Interface
  4.1 Introduction: 4.1.1 HMI Design; 4.1.2 Hardware Requirements; 4.1.3 Software Requirements
  4.2 Architecture: 4.2.1 Input; 4.2.2 Output
5 Connectivity and Communication
  5.1 Introduction
  5.2 Decomposition Diagram: 5.2.1 Communication and Connectivity Hardware; 5.2.2 Communications and Connectivity Software
6 Simulation
  6.1 Introduction: 6.1.1 Hardware and Software Requirements
  6.2 Overview
  6.3 Modeling: 6.3.1 Form; 6.3.2 Fit; 6.3.3 Function; 6.3.4 Kinematic Modeling; 6.3.5 Dynamic Modeling; 6.3.6 Simulation; 6.3.7 Animation; 6.3.8 Visualization
  6.4 Engineering Design
  6.5 Path-Task Planning: 6.5.1 Human Controlled; 6.5.2 Autonomous
  6.6 Predictability
  6.7 Path Execution
  6.8 Developing Virtual Presence
  6.9 Conclusion
A Functional State A: A.1 Sensors; A.2 AI; A.3 Actuators; A.4 HMI
B Functional State B: B.1 Sensors; B.2 AI; B.3 Actuators; B.4 HMI; B.5 Communication and Connectivity; B.6 Simulation
C Functional State C: C.1 Sensors (C.1.1 Function 1.C1); C.2 AI
D Functional State D: D.1 Sensors; D.2 AI
E Functional State E: E.1 Sensors; E.2 AI
F Functional State F: F.1 Sensors; F.2 AI
G Functional State G
  • 6. Overview
0.1 Introduction
The Don't Mess with Texas Robot system will assist law enforcement officials in maintaining the Texas border with Mexico. The D.M.T. is an autonomous robotic system that will patrol unpopulated regions of the border. D.M.T. can provide regular surveillance of a large region with little supervision. Several units on overlapping routes will have a force-multiplying effect, allowing a single trained technician to provide the same surveillance coverage as several teams of agents operating in the field.
D.M.T. will provide agencies with several benefits. With fewer agents tasked to surveillance in the field, agencies will have more man-hours available. Since fewer agents are on patrol, fuel costs, health care costs, and even manpower costs will be reduced. Agents no longer involved in tedious surveillance duties can be tasked with other activities.
0.2 Design Philosophy
While designing the D.M.T., our team has tried to keep the concept as simple as possible. A simpler system delivers greater reliability and lower cost. In this spirit we have eschewed a 'jack-of-all-trades' approach; what is left is an efficient design. All components included in the design are commercially available, with the exception of the vehicle itself. Little or no modification has been made to the components used, so replacement parts will be readily available and relatively economical.
The D.M.T. system is assembled in a modular fashion, providing two additional benefits. Maintenance will be simpler and therefore cheaper, since most components can be easily swapped out. More importantly, with the aid of internal systems monitoring, troubleshooting of system failures will be a simple matter.
  • 7. 0.3 The DMT System
For the purpose of this overview, we will look at the D.M.T. at the first level of decomposition.
0.4 Think (Machine Intelligence)
The machine intelligence (M.I.) system is by far the most complex of the D.M.T.'s systems. The M.I. is responsible for navigation, localization, implementation of user commands, etc. Nearly every action taken by the D.M.T. will be filtered through the M.I. To reduce the processing load of the M.I., we have included some expert systems. The Intel Pentium 3 will serve as the processing architecture.
0.5 Sense (Sensor Package)
The sensor package of the D.M.T. is the whole purpose for the robot. In addition to providing data for navigation purposes, the camera, night vision, and heartbeat sensors will provide the intelligence information required by law enforcement agencies.
The most important sensors for completion of the mission will be the two surveillance cameras. The use of two cameras gives three-dimensional imaging capability, but the cameras can also be used in 2-D mode to track two separate targets. In the interest of cost savings, the software will accomplish all image processing; simple, commercially available cameras can be employed.
The heartbeat sensor's primary function is to determine if a perceived object is indeed a human being, minimizing false alarms. The heartbeat sensor can also alert the D.M.T. to previously undetected human contacts that may be hiding or standing still to avoid detection.
The laser range-finding system will primarily assist in keeping the D.M.T. on the path, allowing the surveillance camera system to remain tasked to its primary mission.
  • 8. 0.6 Act (Actuators)
The actuators of the D.M.T. can be easily divided into two functional groups: the vehicle and the sensor mast. The vehicle's actuators consist of a gasoline motor with its drive wheels, and the steering wheels. Since small modern gasoline motors are quiet and produce low emissions, the vehicle will provide little evidence of its presence.
The sensor mast unit is designed to be simple and lightweight. Only 2 degrees of freedom are required to place the sensor package in surveillance position. Small servomotors at the top of the mast will position the cameras as needed.
0.7 H.M.I. (Human-Machine Interface)
A graphical user interface is employed for the human operator. The GUI's visual presentation is as simple as possible, to allow the operator to monitor multiple units at a single station. A GUI also reduces the amount of training time needed for new technicians. All commands to the unit can be made through the GUI. A standard keyboard and mouse are used for most command inputs. A joystick is provided to control the unit for off-the-path operations and for manual control of the cameras.
0.8 Comm/Conn (Communications and Connectivity)
The D.M.T. will be connected by way of a two-way digital radio link. Encryption will be employed to assure secure communications. Use of a dual-parity radio link provides real-time error detection and will assure that the unit is never uncontrolled. Installation of repeating stations may be necessary for areas with irregular topography, to ensure a strong signal in low-lying areas.
0.9 Simulation
The simulation unit of the D.M.T. is very useful in path planning and verification. When the M.I. chooses a path, it is sent to the simulation unit for verification, and the best route is chosen. The M.I. and the simulation unit work very closely together to provide quick processing of path-planning problems.
  • 9. 0.10 Implementation of the D.M.T. System
In order to successfully implement the D.M.T. system, several environmental considerations must be made.
0.10.1 The Path
The D.M.T. requires a pre-prepared, predefined path. The use of a predefined path greatly simplifies the D.M.T.'s path-planning tasks. Way markers along the path will interact with the laser system to easily define the path for the robot.
0.10.2 Surveillance Stations
By predefining the surveillance stations, the best possible coverage of the sensor systems will be assured. By defining these points, the sensors' resources can be concentrated on problem areas, and blind spots can be minimized.
0.10.3 Radio Repeaters
To ensure full communications coverage over the entire surveillance area, repeaters for the digital radio link may be needed. Repeaters can greatly improve reception in low-lying areas.
0.10.4 Control Room
A dedicated D.M.T. control area will be needed for the control and interface equipment. A small area will also be needed for storage and maintenance of each D.M.T. unit.
  • 10. Chapter 1: Sensors
1.1 Overview
The sensor package consists mainly of three hardware components and sophisticated software to gather information and pass it on to the AI unit. The Sensor Software Package is written in C++ and gathers all information from the three hardware devices. The AI unit takes the input and makes decisions from that point.
1.2 Sensor Package Hardware
1. Surveillance cameras with night vision
2. HeartBeat Sensor
3. Laser and Range Finder
1.3 Surveillance Cameras with Night Vision Scopes
1.3.1 Introduction
We tried to make this as inexpensive as possible on the hardware side, and we are doing all the processing on the software side. Our main input devices are two very basic color surveillance cameras of the kind you will find in any store.
  • 11. We decided to use two Sony Hyper Color Cameras. We also used two night vision scopes. These scopes (PNS 4.6 Night Vision Rifle Scope) are made by Night Vision Optics and came with a lens-mounting bracket that fit our surveillance cameras quite nicely. In order for all of this to work, we are also using an Nvidia graphics accelerator card with Nvidia Detonator 5 drivers.
1.3.2 Hardware
Figure 1.1: Camera
• Rockque Solid Surveillance (Sony 1/3" Hyper HAD Color Camera SH-SSCDC14)
  - 470 lines resolution
  - 1.7 lux @ F1.2
  - 24 VAC
  - Digital processing
  - Video or DC type A/I lens
  - C/CS mounting
  - Weight 2.0 lbs.
  - Cost $250
• Night Vision Optics (PNS 4.6 Night Vision Rifle Scope)
  - Magnification 5.2
  - Field of view 6 degrees
  - Exit pupil diameter 7 mm
  - Tube Gen. 1+, Gain 30,000
  • 12. Figure 1.2: Night Vision
  - Resolution 80
  - Viewing distance: precise aiming up to 150 meters range with natural starlight illumination (down to .005 lx)
  - Power supply: 9V battery
  - Diopter +3 to -3 dptr
  - Dimensions 214x59x102 mm
  - Weight 1.36 lbs. (0.85 kg)
  - Cost $390
• Nvidia Quadro 2 Pro
  - Hardware anti-aliased line engine
  - 6.4 GB/sec bandwidth, enabling work in fully textured mode while achieving real-time frame rates
  - 64 MB unified frame buffer, providing ample room for high-resolution, 32bpp textures
  - Robust, full-featured board delivering unprecedented 3D and 2D performance
  - Cost $280
1.3.3 Sensor Software Package (S.S.P.)
We wrote our own software package (DMT S.S.P.) for the stereo vision. The software incorporates the OpenGL and Direct3D APIs, giving us a true 3D image. This is done by capturing images from both cameras and combining them into one 3D image. The processing is all done on the graphics accelerator card, and the image is then passed to the AI unit. The software runs in two modes, Intelligent and Direct. When SSP is running in Intelligent mode, the 3D images are passed to the AI for processing. In Direct mode, the images are sent directly to the transmitter and passed on to the two posts. When the robot is stationary, Intelligent mode turns on and SSP starts capturing images for processing. When the robot is in motion, SSP switches to direct feed, does no processing on the images, and sends them directly to both posts.
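The mode switch described above reduces to a simple rule keyed off the robot's motion state. The following is a minimal sketch of that dispatch logic; the class and function names (SensorSoftwarePackage, combine, sendToAI, transmitRaw) are hypothetical, since the actual DMT S.S.P. source is not included in this report.

```cpp
// Hypothetical image types standing in for the real SSP structures.
struct Frame {};        // one camera frame
struct StereoImage {};  // combined 3D image

enum class SspMode { Intelligent, Direct };

class SensorSoftwarePackage {
public:
    // Called on every pair of captured frames.
    void onFramePair(const Frame& left, const Frame& right, bool robotInMotion) {
        // Stationary -> Intelligent mode; in motion -> Direct mode.
        mode_ = robotInMotion ? SspMode::Direct : SspMode::Intelligent;

        if (mode_ == SspMode::Intelligent) {
            // Combine both camera images into one 3D image and hand it to the AI.
            StereoImage img = combine(left, right);
            sendToAI(img);
        } else {
            // No processing: forward the raw frames to the transmitter,
            // which relays them to the two surveillance posts.
            transmitRaw(left);
            transmitRaw(right);
        }
    }

private:
    SspMode mode_ = SspMode::Direct;

    StereoImage combine(const Frame&, const Frame&) { return {}; } // stub
    void sendToAI(const StereoImage&) {}                           // stub
    void transmitRaw(const Frame&) {}                              // stub
};
```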
  • 13. Figure 1.3: Nvidia Card
1.4 HeartBeat Sensor
1.4.1 Introduction
A new device developed by Ford and Volvo alerts drivers to young children or pets that have been left in the car or trapped in the trunk. This device has also been used in hunting, where it tells the shooter whether the target is animal or human. We are using the exact same device on our robot to determine whether moving objects in the distance are human or animal. This device works on the principle of detecting the minute vibrations of a heartbeat at a distance of up to 100 feet. The built-in microprocessor and our SSP program then determine whether the heartbeat is animal or human.
  • 14. Figure 1.4: Nvidia GeForce 256
  • 15. 1.4.2 Hardware
• Secure Shooters
• Range 100 feet
• Power supply 9V
• Weight 1 lb.
• Cost $300
Figure 1.5: Heart Beat Sensor
1.4.3 Software
We have enhanced the software package that came with the HeartBeat sensor and incorporated it into our SSP software. The SSP software receives the feed from the sensor's processor and determines what commands need to be sent to the AI unit. The SSP software sends a signal to the AI only if the outcome is human.
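As a sketch of the gating rule above (signal the AI only on a human classification), assuming hypothetical stand-ins for the sensor's classifier and the AI-unit hook, which are not documented in this report:

```cpp
enum class HeartbeatClass { None, Animal, Human };

// Hypothetical stand-ins for the sensor's built-in classifier and the
// AI-unit notification hook.
HeartbeatClass classifyHeartbeat(double vibrationSignature) {
    return vibrationSignature > 0.5 ? HeartbeatClass::Human
                                    : HeartbeatClass::Animal; // stub rule
}
void notifyAIHumanContact() { /* raise a contact event to the AI unit */ }

// SSP-side gate: only a human classification is escalated to the AI;
// animal or absent heartbeats are suppressed, minimizing false alarms.
void onHeartbeatSample(double vibrationSignature) {
    if (classifyHeartbeat(vibrationSignature) == HeartbeatClass::Human) {
        notifyAIHumanContact();
    }
}
```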
  • 16. 1.5 Laser and Range Finder
1.5.1 Introduction
The LH-7800 Laser and Range Finder is a hand-held instrument intended for use by infantry units. We use this instrument in two states. In the first state, the laser does the navigation for us by bouncing off ball bearings along the path; this is explained further in the Simulation chapter. The second is finding the range of distant objects. The laser is mounted on a swing arm that moves from right to left to find the correct path.
1.5.2 Hardware
• LH-7800
• Range 120 miles
• Power supply 12V
• Weight 2 lbs.
• Cost $550
1.5.3 Software
Operation is handled by the SSP software, which receives the signal from the LH-7800 and passes the output to the AI unit. The Laser and Range Finder comes in very handy when we do not have stereoscopic vision, during the Direct phase of transmission. At this point the range is sent to the AI unit and processed for any oncoming objects or obstacles.
  • 17. Figure 1.6: Decomposition Diagram
  • 18. Chapter 2: Artificial Intelligence
2.1 Introduction
The integration of vision modules into a control architecture for an autonomous mobile robot can prove to be a challenge. This is due to the large amount of unfiltered raw data coming from the sensors and the limitations of processing power on a mobile robot. With all these considerations, a careful modular design can overcome some if not all of these problems. By dividing the AI into layers and sublayers which can operate in parallel with one another, we can achieve high performance and rewarding efficiency.
The AI unit is responsible for all the decision-making aspects of the DMT Robot. Although our sensor package does include expert systems, the AI is solely responsible for the heuristic behavior of the robot.
2.1.1 Hardware Requirements
The AI unit of the DMT Robot will run on regular i386 architecture. The basic requirements estimated to provide almost real-time reactions are as follows:
• Pentium III motherboard
• Pentium III CPU (800 MHz)
• Storage: 20 GB SCSI HDD
  • 19. • 512 MB fast RAM
Special consideration must be given to keeping the processor cool through extreme heat conditions.
2.1.2 Software Requirements
The Linux operating system provides a stable, efficient, and free solution for our platform. Next-to-real-time results can be expected at the subsymbolic level (the subsymbolic level is essentially a reactive system). The implementation and algorithms will be written in C++, maintaining a modular approach in our design. This includes the image analysis, identification, recognition, and even obstacle avoidance. This modularity allows most of the functions to be kept abstract and called through virtual functions. However, this does not apply to processor-intensive functions, as the overhead would affect performance.
2.2 Architecture Overview
The AI architecture is divided into three main layers. The first layer is the subsymbolic layer, the second is the skill layer, and the third is the execution/deliberation layer. These layers work together to bring the robot to optimal performance.
2.2.1 Layer 1 - Subsymbolic Level
This level works in an almost real-time environment. Although we do not guarantee real-time response, the efficiency of the processor and algorithms provides a very close approximation. Most of the intensive computations are left out of this level, to be placed on the third. This level contains algorithms for the following items:
1. Base server - robot position
2. Laser server - provides simple algorithms for the laser to continue detecting the path boundaries as the robot moves, to aid in obstacle avoidance
  • 20. Figure 2.1: AI Modules and Layers
3. Infra-red server - activated during night-time
4. Image server - the proper channelling of images and their analysis
5. Arm server - sets/resets the timer for arm movement
6. PTU server - pan-tilt unit server
7. HMI output
Essentially, the subsymbolic level receives raw input from the sensors and commands the actuators, based on the later analysis of the execution layer.
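Given the stated design (modular servers kept abstract and called through virtual functions), the common pattern for the servers listed above might look like the following C++ sketch. This is an assumption about the code's structure, not the actual DMT source; all names are hypothetical.

```cpp
#include <memory>
#include <vector>

// Hypothetical common interface for the subsymbolic-level servers
// (base, laser, infra-red, image, arm, PTU). Each server is polled in
// the subsymbolic loop and may command actuators directly.
class SubsymbolicServer {
public:
    virtual ~SubsymbolicServer() = default;
    virtual void poll() = 0;          // read raw sensor input and react
    virtual bool healthy() const = 0; // used by the AI status checks
};

class LaserServer : public SubsymbolicServer {
public:
    void poll() override {
        // Detect path boundaries from the laser return and feed the
        // result to obstacle avoidance (details omitted).
    }
    bool healthy() const override { return true; }
};

// The subsymbolic loop treats every server uniformly via the interface.
void runSubsymbolicCycle(
    const std::vector<std::unique_ptr<SubsymbolicServer>>& servers) {
    for (const auto& s : servers) {
        if (s->healthy()) s->poll();
    }
}
```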
  • 21. 2.2.2 Layer 2 - Skill Level
The skill level is based on proper configuration and synchronization of the subsymbolic activities, selecting appropriate parameters for skills in light of the current execution context. The units which compose this level are as follows:
1. Object Recognition
2. Self-Localization
3. Mapper
4. Path Planner
5. HMI Input
6. Motion Control
7. Object Following
Object Recognition
Figure 2.2: Vision Module
  • 22. The vision module is in constant connection with the image server. Depending on the situation, it is also connected to the PTU and base servers. The above figure shows the structure of the vision module.
Figure 2.3: Vision Module Interface
The vision module interface is rather simple. The client issues a command, which involves configuration (activation, deactivation). Then, in case of an event (e.g. an object found), the module returns the list of objects found and their attributes. It also leaves an open line for requests for new searches and other behavior. When doing a search, the vision module continues to update the client with relative position information during tracking.
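The command/event contract just described could be expressed roughly as follows. This is a sketch of the interface shape inferred from the text, not the project's actual header; every name is hypothetical.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical types for the vision-module contract described above.
struct ObjectReport {
    std::string label;  // what was recognized
    double x, y, z;     // relative position of the object
};

class VisionModuleClient {
public:
    virtual ~VisionModuleClient() = default;

    // Configuration commands issued by the client.
    virtual void activate() = 0;
    virtual void deactivate() = 0;

    // Open line for new searches and other behavior.
    virtual void requestSearch(const std::string& targetLabel) = 0;

    // Event: objects found, returned with their attributes.
    std::function<void(const std::vector<ObjectReport>&)> onObjectsFound;

    // During tracking, the module keeps updating relative position.
    std::function<void(const ObjectReport&)> onTrackingUpdate;
};
```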
  • 23. 2.2.3 Layer 3 - Execution/Deliberation Level
This layer contains most of the time-consuming algorithms, which are usually carried out in parallel with the other layers. It can be said to be divided into two sub-layers, an execution layer and a deliberation layer. Both share the Knowledge Base; however, they perform two different tasks:
• Execution Layer - Agenda and Interpreter
• Deliberation Layer - Symbolic Planner
  • 24. Chapter 3: Actuators
3.1 Introduction
Motion control systems define the motion of any autonomous robot. All motion control systems have an intended or desired motion for the load. This desired motion is the basis for making a part and is often the implementation of the overall machine strategy. There are a number of parameters to consider when applying a motion control system:
• Speed: How fast does the controlled device have to move?
• Torque: How hard does the motion control device have to work to move the load?
• Accuracy: How closely does the motion control system have to conform to the ideal motion path?
The component that actually provides the force for motion is called the actuator. A motor is an example of a rotary actuator, and rotary actuators are often used in linear motion systems.
3.2 Hardware Requirements
We have aimed to produce a robot with low cost in mind. The actuator unit for the DMT Robot will use the following hardware:
  • 25. 3.2.1 2 Drive Wheels and 2 Steering Wheels
• DuBro low-bounce wheels
• 12 and 3 inches in diameter, respectively
• Absorbs shock and grips the road; 5/32" (4 mm) axle
• Cost $99.00
Figure 3.1: DMT Wheels
3.2.2 1 Fiberglass Arm with 2 Degrees-of-Freedom (DOF)
• DuBro fiberglass arm, 25 lbs.
• Cost $250.00
For the DMT we need an arm with only 2 degrees of freedom, because the arm only needs to lift the cameras to a high level to survey the surroundings.
3.2.3 A Gasoline Engine with 2 Horsepower
• Fuel capacity 1.2 gallons
• Smokeless and very quiet
• Runs at 25+ mph
• Cost $200.00
We have chosen a gasoline engine to power the robot because it has to run a path of approximately 10 miles up and back before it can reach a shed.
  • 26. Figure 3.2: Fiberglass Arm
3.2.4 A Single Ball-Bearing R/C Servo Motor
Figure 3.3: Servo Motor
• The MS492 can be used as a servo for actuating the joints of the robot arm.
  • 27. • With a maximum of 49.2 oz-in of torque, the MS492 provides plenty of muscle to drive small mechanical limbs.
• Cost $255.00
3.2.5 A Load Proximity Sensor Alarm
Figure 3.4: Sensor Alarm
• QMT 42FF2000 Series
• Minimum sensing distance: 2.0 inches
• Fixed cut-off distance: 79 inches
• Cost $169.00
  • 28. 3.2.6 1 Basic ARGOS Pan-Tilt Head Kit, 15 lbs
• 1 ARGOS head for mounting optional camera, sonar, etc.
• 1 Mezzanine platform
• 2 Connecting links from tilt-servo to head
• 2 Tilt-servo mounts (2 servos included)
• Cost $200.00
Figure 3.5: ARGOS Pan-Tilt Head Kit
The cameras can view the ground even when the camera platform is at the topmost level.
3.2.7 A Torso to Contain Other Units
• TJBody-Wd
• Body: extra-strong 5-ply plywood
• Cost $69.95
3.3 Architecture Overview
The design of the actuator involves kinematic and dynamic modeling. Dynamics is an essential criterion for the design of actuators, because we are concerned with mass, moment of inertia, force, torque, momentum, and acceleration. We will consider the low-level control of a wheeled robot.
  • 29. Figure 3.6: DMT Sideview
Figure 3.7: DMT Topview
We will use the top-down architecture. The NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) defines six levels of actuator control. We will focus on the five levels of architecture that pertain to our DMT actuator. The five levels are:
  • 30. Figure 3.8: Five-level Architecture
Intention is generated at the highest level and eventually broken down into elemental moves. The elemental moves are broken down into primitive poses. Once the target pose is defined, concept trajectories of the other mobile parts of the robot can be engineered.
  • 31. 3.4 Decomposition diagram 30 Figure 3.9: Actuators Decomposition 3.4 3.4.1 Decomposition diagram Manipulator End Effector Movement In order to know the pose of the end-effector ( here the camera platform) we have to calculate the joint angles of the arm to place the end-effector to it’s desired pose. This means that the DMT must be able to pose the end-effector by calculating the geometry of joint movement involved, given the coordinates of the desired pose in conjunction with its current pose. 3.4.2 Manipulator degrees of Freedom has only 2 dof since it is only required to stretch upwards with a maximum of 90? degree joint angle and retract its arm back to 10? degree joint angle
  • 32. 3.5 Other Considerations
When the 10-minute counter starts, the arm goes up to a 90-degree joint angle, raising the camera platform; the slave motor turns on; and the two cameras pan (rotate) 180 degrees in opposite directions and tilt up and down at an angle of 30 degrees to get a complete view of the terrain, as illustrated in the diagram below.
Figure 3.10: Movement of cameras on platform
We have two cameras mounted on the rotating motor. There are two more rotating motors, one for each camera. These rotating motors only rotate 180 degrees each. The base of the main camera mount spins 360 degrees, and by using the pan-tilt head kit we can also tilt the cameras so that they can see the ground while the arm is up.
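The surveillance sequence in 3.5 is essentially a fixed script triggered by the timer. A minimal sketch of that sequence follows; all actuator hooks are hypothetical stubs, since the real servo API is not shown in this report.

```cpp
#include <cstdio>

// Hypothetical actuator hooks (stubbed with logging).
void setArmJointAngleDeg(double deg)   { std::printf("arm -> %.0f deg\n", deg); }
void enableSlaveMotor(bool on)         { std::printf("slave motor %s\n", on ? "on" : "off"); }
void panCameraDeg(int cam, double deg) { std::printf("cam %d pan %+.0f\n", cam, deg); }
void tiltCameraDeg(int cam, double deg){ std::printf("cam %d tilt %+.0f\n", cam, deg); }

// Runs once each time the 10-minute counter fires (section 3.5).
void runSurveillanceScan() {
    setArmJointAngleDeg(90.0);          // raise the camera platform
    enableSlaveMotor(true);             // start the slave motor
    panCameraDeg(0, +180.0);            // cameras pan 180 degrees...
    panCameraDeg(1, -180.0);            // ...in opposite directions
    for (int cam = 0; cam < 2; ++cam) { // tilt up and down to sweep terrain
        tiltCameraDeg(cam, +30.0);
        tiltCameraDeg(cam, -30.0);
    }
    enableSlaveMotor(false);
    setArmJointAngleDeg(10.0);          // retract before moving again
}
```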
  • 33. 3.6 Functional States of Actuator
The actuator must constantly have feedback from the sensors for the following functions:
1. The moment the HMI powers on the robot, the actuator sends a message to the sensors to turn on the timer (stage A).
2. Once the timer is on, the laser range finder uses the reflected ray to find the correct path; it must constantly feed the actuator with the message to keep going on the track (stage B).
3.7 Conclusion
In our autonomous robot, from the actuator standpoint, we chose to implement the sense-think-act triad. The N-squared diagram shows the connectivity of the six major sub-systems. The DMT patrols a preplanned path along which ball bearings are detected by laser beams. When an obstacle must be avoided, the AI will help to simulate another path. There should be constant feedback between the actuators, the sensors, and the other components.
  • 34. Chapter 4: Human-Machine Interface
4.1 Introduction
Machines and process systems require operators, and operators need to interface with these mindless marvels. Originally labeled OI (Operator Interface), then renamed MMI (Man-Machine Interface), the present trend is to call it HMI (Human-Machine Interface). An HMI is a device used to provide interactive information and control between a human and a machine-controlled process. With the possible exception of "ON-OFF" or "START-STOP" functions, different machines have unique control functions due to variations in design, construction, and operational modes. In addition, the interactive information required to operate each machine can vary greatly in both function and range.
4.1.1 HMI Design
The goals of the DMT's optimum HMI design include:
• Increase operator effectiveness by creating a standard design to use with the HMI terminals throughout the plant.
• Enable operator empowerment by providing the required functionality and information in an easy-to-understand format, permitting the operator(s) to execute corrective and preventive actions.
• Improve the quality and acceptability of information by considering these factors in the design methodology approach.
  • 36. • Decrease training time by minimizing the training regimen for operators, supervisors, engineers, and maintenance personnel through consistency across all screens and programs.
• Incorporate diagnostic capabilities to aid operators and maintenance personnel in troubleshooting problems and in preventive maintenance procedures.
4.1.2 Hardware Requirements
The DMT has the following hardware requirements:
• Pentium III CPU (800 MHz)
• 512 MB fast RAM
• Real-time co-processor: PC-based solution with a dedicated ISA bus controller and I/O modules
• Display sizes from 22" to 30"
For the human-machine interface, DMT is using a SYSTEM200 package, a unique new open-architecture control strategy that eliminates the high expense of specialized controls for every new application requirement. Under SYSTEM200, every control function (HMI, drives, and I/O) is designed as a compact module with a uniform look and feel. These standard modules can be easily connected, like building blocks, to create a seamlessly interconnecting control system.
4.1.3 Software Requirements
The HMI is a fully graphical user interface, making it easy for operators to understand and learn the DMT. The programming languages used are C++ and Visual C++.
  • 37. 4.2 Architecture
The HMI is basically divided into two parts: input to and output from the machine.
4.2.1 Input
In order for the robot to respond to operator commands, there must be some communication device to relay that information to the robot. There are various inputs needed by the robot and various devices available to provide that input.
4.2.1.1 Various Input Devices Available
There is a variety of input devices available these days, for example: keyboards, mice, touch screens, keypads, microphones, joysticks, bar codes, sign language, graphics boards, etc.
1. Input for World Model Update
In due course of time, the robot should be able to modify its database, also known as the world model. It should be able to acquire the environment with its sensors, by entitizing the environment and by collecting enough description of each object so that it is capable of recognizing the object if encountered again.
  • 39. 2. Input in Response to Queries from the Robot
Once the robot has recognized an object, it may ask the operator to identify that object for future reference.
3. Input for Systems Testing and Maintenance
Once the robot is operational, continuous maintenance is required. Built-in test procedures should run each time the robot's power is turned on, and when failures are detected they need to be corrected by the operator through the appropriate channels (hardware/software).
4.2.1.2 Input Devices Used by DMT
DMT uses the following input devices:
• Keyboard: Still the most popular input device in use.
• Mouse: A very efficient device for clicking buttons in a graphical user interface.
• Joystick: A hand-held device, used as a forced input by the operator if a situation arises that requires operating the DMT manually.
4.2.2 Output
Output is provided by the robot through various devices or hardware. These could be a monitor, speakers, sirens, bells, LED displays, flat-panel displays, etc.
4.2.2.1 Output Devices Used by DMT
• Monitor: The standard device for computer output. Since the HMI for DMT uses three-dimensional graphics, the monitor is the most usable output device.
• Alarm Device (Siren): DMT uses a siren for alerting the operator.
  • 40. Chapter 5: Connectivity and Communication
5.1 Introduction
For communication and connectivity, the DMT will use a dual-parity digital radio link. This type of link allows for error detection in real time, by comparing identical broadcasts on two different frequencies. Encryption can be employed to assure security.
Table 5.1: Communication and Connectivity Breakdown
Sub-element            Level       Section
Comm/Conn              1st level   Section 5
Comm/Conn System       2nd level   Section 5.1
Comm/Conn Software     2nd level   Section 5.2
Antenna                3rd level   Section 5.1.1
Transmitter/Receiver   3rd level   Section 5.1.2
  • 41. 5.2 Decomposition Diagram
Figure 5.1: Decomposition Diagram
5.2.1 Communication and Connectivity Hardware
A dual-parity, two-way digital radio link. To provide a low-cost, secure solution to the Comm/Conn problem, we will employ one of the commercially available radio packages, such as the Sony DPQ-41. The DPQ-41 makes use of a serial connection to the processing unit and comes with its own software.
Subsystems:
• Section 5.1.1 Antenna
• Section 5.1.2 Transmitter/Receiver
• Cost $650
  • 42. 5.2.2 Communications and Connectivity Software
Sony includes a software package that provides error detection and will switch frequencies on its own to assure the best connection. Error-detection messages will be sent to the M.I. to assure that the unit is never uncontrolled.
Antenna
Sony provides a single powered antenna unit that can simultaneously transmit and receive on two different frequencies. The antenna unit will be mounted at the rear of the vehicle to avoid interference with the electronic components located at the front of the unit. The insulated motor compartment will assure that the motor does not interfere with radio transmissions.
Transmitter/Receiver
The transmitter/receiver unit will be located in the interior of the vehicle. Use of a powered antenna allows the transmitter/receiver unit to be placed without interference concerns, since radio and electric field emissions will be at a low level. A simple serial cable can accomplish the connection to the processing unit.
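The dual-parity scheme from 5.1 (identical broadcasts on two frequencies, compared on receipt) could be checked with logic along these lines. The frame layout and names are assumptions for illustration, not the DPQ-41's actual API.

```cpp
#include <array>
#include <cstdint>
#include <optional>

// Hypothetical frame as received on one of the two frequencies.
using Frame = std::array<std::uint8_t, 64>;

// Dual-parity check: the same payload is broadcast on two frequencies;
// a mismatch means corruption, so the frame is rejected. Rejections can
// be reported to the M.I. so the unit never acts on bad commands.
std::optional<Frame> acceptFrame(const Frame& freqA, const Frame& freqB) {
    if (freqA == freqB) {
        return freqA;    // both copies agree: accept the frame
    }
    return std::nullopt; // disagreement: drop and await retransmission
}
```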
  • 43. Chapter 6: Simulation
6.1 Introduction
• Simulation: The representation of the operation or features of one system through the use of another.
• Simulation as applied to robotic systems: Accurately modeling the robot and a good sampling of the environmental objects with which developers anticipate the robot will interact.
6.1.1 Hardware and Software Requirements
• Minimum Pentium II, 400 MHz
• 3D Studio MAX version 4.0 and 4.01 (SP-1) software
The simulation is implemented in the C++ language. The 3D environment editor allows us to customize the robotics scenarios, with efficient actuator and sensor modeling (including cameras). Once the simulation system works, we can download it to autonomous robots.
6.2 Overview
1. Robotic simulation of DMT includes:
  • 44. Figure 6.1: Decomposition Diagram
(a) Modeling
(b) Simulation
(c) Animation
(d) Visualization
2. Robotic simulation of DMT assists us in:
(a) Engineering Design
(b) Path-Task Planning
(c) Predictability
(d) Path Execution
(e) Developing Virtual Presence
6.3 Modeling
• Develop an electronic model of the design of the robot.
  • 45. • In designing a robot part by part, give additional consideration to the time factor involving parts moving with respect to each other, the environment, and environmental objects.
• We need to consider the physical integrity of each part, and also how it will interact with the other parts attached to it and jointed at their connecting points.
• Validation of the form, fit, and functionality of the robot interacting within its proposed environment.
• Includes kinematic and dynamic modeling.
6.3.1 Form
The simulation allows for computer design and on-screen visualization of the form of the proposed design in 3-D representations. If the initial form is unacceptable, design modifications can be effected before expending resources on creating the actual robot.
6.3.2 Fit
The fitting of parts, large and small, integrated or separate, mobile and stationary, is extremely important in the design of the robot and its surrounding environment. Animation and visualization help in fitting parts together and in testing their physical compatibility regarding the movement of one part in proximity to another.
6.3.3 Function
During the design, development, and implementation process, simulation allows us to test for functionality. Suitability is also tested along with functionality. The simulation needs to meet the criteria of suitability, feasibility, and acceptability before the robot can be implemented.
1. Suitability: Our DMT Robot will serve the purpose of patrolling the borders of Texas and detecting intruders.
  • 46. 2. Feasibility: The required resources (money, manpower, and will) are available.
3. Acceptability: Implies consideration of the social, political, psychological, and economic impact of implementing the robot.
6.3.4 Kinematic Modeling
Involves geometries, joints, linkages, and movement, without consideration of the forces causing the movements or the inertial response to those forces. Specifies robot link dimensions, joint limits, and constraints.
6.3.5 Dynamic Modeling
Involves all elements and properties of the robot system that will have an impact on movement, such as mass, friction, inertia, motor torque, joint compliance, and moments. The accuracy of the kinematic and dynamic modeling determines the accuracy of the simulation.
6.3.6 Simulation
• An operator can direct and control the models of the robot and the environmental objects as though they were real things.
• The model of the robot and all of its joints and links are driven by computer replicas of the actuators.
• Dynamic parameters that would be generated in the description of the real actuators are generated in the simulation.
• Positional parameters are derived as though the system were truly operating in the real world.
• Feedback data is simulated as though positioning sensors, which are also modeled, were providing feedback information to the appropriate recipients throughout the control system.
  • 47. 6.3.7 Animation
• The act, process, or result of imparting life, interest, spirit, motion, or activity.
• As related to robotic simulation: the reflection of time-related displacement of objects in a given space.
• Relates spatial relationships to time considerations.
6.3.8 Visualization
• To form a mental image of an object; to envision.
• Visualization of the results of modeling, simulation, and animation is made possible by three-dimensional representations on a computer monitor. This is done for the DMT robot by the 3D Studio MAX software.
6.4 Engineering Design
• Involves form (morphology), fit (more than one part 'fitting' together), function, and modeling (developing an electronic model of the design of the robot).
• Mapping
Figure 6.2: Simulation mapping
  • 48. 6.5 Path-Task Planning
Simulation assists us in human-controlled and autonomous path/task planning with obstacle avoidance.
6.5.1 Human Controlled
• Pendant teaching
• Programmed motion
Figure 6.3: Joint Space
Figure 6.4: Joint
  • 49. 6.5.2 Autonomous
• Kinematic simulation: The branch of mechanics that studies the motion of a body or a system of bodies without consideration given to its mass or the forces acting on it.
• Dynamic simulation (ultimate "mass motion"): The branch of mechanics that studies the motion of a body or a system of bodies with consideration given to its mass and the forces acting on it.
6.6 Predictability
• "Mind-read" the intentions of the robot.
• View the path plan the robot generated.
• Smooth the discovered path and feed it back to the operator when he requests it.
6.7 Path Execution
• Path execution: monitoring and comparing command signals (intentions) to the actual effection.
• Use simulation as we would an assisted, computerized pendant teaching.
• Use path planning to develop programmed motion: Intentions -> Commands -> Actuators.
• We run our simulation by generating the actual commands that would be delivered to the actuators in the real world.
• We stop short of moving metal by moving electrons instead.
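The "moving electrons instead of metal" idea is command-level simulation: the same command stream can drive either the real actuators or a simulated replica. A sketch of that switch follows, with all types hypothetical rather than taken from the DMT source.

```cpp
#include <vector>

// A low-level actuator command as produced by path planning.
struct Command { int actuatorId; double setpoint; };

// Common sink for commands: the real hardware and the simulator both
// implement it, so the identical command stream can be executed either
// way ("moving electrons instead of metal").
class ActuatorSink {
public:
    virtual ~ActuatorSink() = default;
    virtual void execute(const Command& c) = 0;
};

class RealActuators : public ActuatorSink {
public:
    void execute(const Command&) override { /* drive the hardware */ }
};

class SimulatedActuators : public ActuatorSink {
public:
    void execute(const Command& c) override {
        // Update the dynamic model instead of moving metal; the result
        // can then be compared against the intended path.
        lastSetpoint_ = c.setpoint;
    }
private:
    double lastSetpoint_ = 0.0;
};

void executePath(const std::vector<Command>& path, ActuatorSink& sink) {
    for (const auto& c : path) sink.execute(c);
}
```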
  • 50. 6.8 Developing Virtual Presence
• The robot is equipped with enough sensors to allow us to recreate the environment, the work pieces, and the robot itself as part of the environment.
• Simulation allows us to compare intention with execution on a real-time basis.
• We develop virtual presence from the sensor arrays that we put on the robot.
• If we return enough sensor data, we can wire up the human with that data, so the human experiences what the robot is experiencing through its sensors.
6.9 Conclusion
• In our autonomous intelligent robot system, we chose a top-level decomposition into six major sub-systems, and we show their inter-connectivity. The N-squared diagram for the simulation part is shown in the appendices.
• We are going to lay down a prepared path along the border between Texas and Mexico.
• On a real-time basis, we are going to detect obstacles that somebody intentionally planted on the path, so that the robot can traverse the path and plan a different path.
• When required to plan a path, the Artificial Intelligence sends a signal to the simulator, and the simulator simulates a path.
• The Artificial Intelligence then decides if the path is safe to follow.
• We are using 3D Studio MAX version 4.01 software to simulate the DMT, using the C++ language.
• The 3D environment editor allows us to customize the robotics scenarios, with efficient actuator and sensor modeling (including cameras).
  • 51. • Once the simulation system works, we can download it to our autonomous Don't Mess With Texas robot.
  • 52. Appendices
  • 53. Appendix A: Functional State A
A.1 Sensors
A.2 AI
Table A.1: AI - Functional State A (N-squared chart): SENSE sends its status to THINK; THINK requests status from SENSE, ACT, HMI, CONN/COMM., and SIM.
During the P.O.S.T., the AI unit requests a status feed from all parts of the robot. In the case of any malfunction, it is reported back to the HQ and on the physical robot's interface.
A.3 Actuators
A.4 HMI
During the first functional state, the human-machine interface is the key feature. The operator turns on the power for the DMT. The operator is then required to do the necessary checks and boot the system.
  • 54. The checks done by the operator include:
• Fuel check
• Battery check
• Siren check
• Motor temperature
• Functionality of the cameras
• Check of the laser devices
All these checks are done using the input/output devices of the DMT. The operator is prompted with different screens for the different checks, and the report is shown instantly. At the end of all the checks, a progress report is printed and saved in the database for future reference. After the checks are completed, a signal is sent to the transmitters and sensors to turn them on. The sensors then detect the path markers with the laser beam. All this is done in the start-up mode of the DMT.
  • 56. Appendix B: Functional State B
B.1 Sensors
B.2 AI
B.3 Actuators
B.4 HMI
During the second functional state, the DMT awaits the signals from the transmitters and sensors after being turned on and booting the system. They send their signals after performing the checks and detecting the path. After receiving this vital information, the operator gives the appropriate command to the DMT. If all the checks mentioned in Appendix A complete normally, the DMT waits for the final response from the operator. The operator gives the DMT the command "GO", and the DMT starts its activities by moving forward on its path.
B.5 Communication and Connectivity
B.6 Simulation
  • 58. Appendix C: Functional State C
C.1 Sensors
C.1.1 Function 1.C1
The laser and range sensor guides the robot along the path, the heartbeat sensor checks for animals or humans in the field, and, depending on the input, the cameras analyze it and the proper action is taken.
C.2 AI
Table C.1: AI - Functional State C: THINK receives a command.
  • 59. Appendix D: Functional State D
D.1 Sensors
D.2 AI
Table D.1: AI - Functional State D: THINK sends a task plan to SIM.; SIM. simulates the path; THINK rejects or accepts the result.
  • 60. Appendix E: Functional State E
E.1 Sensors
E.2 AI
Table E.1: AI - Functional State E: SENSE provides the data feed; THINK reports status and performs motion control; ACT reports progress.
  • 61. Appendix F: Functional State F
F.1 Sensors
F.2 AI
Table F.1: AI - Functional State F: SENSE performs motion detection and tracks the object; THINK issues follow-object and follow/track commands and reports status; ACT reports status.
  • 62. Appendix G: Functional State G
