ROBOTICS (Open Elective 2)
UNIT I Introduction
1.1 Definition and Applications of Mobile Robotics, History of Mobile Robotics
COURSE OUTCOME:
After successful completion of the course, students will be able to:
• Explain the fundamentals of mobile robotics and its history
• Illustrate the system design with its architecture
• Illustrate the Kinematics and locomotion of mobile robotics
• Elucidate the need and implementation of related Instrumentation &
control in robotics
• Gain hands-on exposure to the field of mobile robotics and the various
issues in designing and planning a robot work environment.
Syllabus
Definition:
• Mobile robotics refers to the design and creation of robots that are capable of locomotion.
• Mobile robotics is a subfield of robotics involving the development and operation of robots that
can move autonomously in various environments.
• These robots can navigate through different terrains and perform tasks without human
intervention, leveraging sensors, actuators, and control algorithms.
• Unlike stationary robots, mobile robots are equipped to move around in their environment and
perform tasks autonomously or semi-autonomously.
• They are equipped with sensors, processors, and often manipulators or tools to interact with their
surroundings.
Classifications of mobile robots
• Mobile robots can be classified in two ways:
1. Based on the environment in which they work.
2. Based on the device they use to work.
1. Based on the environment in which they work
a) Aerial Robots – Aerial Robots are also known as Unmanned Aerial Vehicles (UAVs). These
robots can fly through the air.
Fig 1. Drones Fig 2. Helicopter Robot
1. Based on the environment in which they work contd..
b) Land or Home Robots – Land Robots are also known as Unmanned Ground Vehicles (UGVs).
These robots can navigate on dry land and inside buildings such as houses and offices. They are often
used to deliver supplies and equipment, e.g., the Dirt Dog robot vacuum.
Fig 4. Unmanned Ground Vehicles (UGVs).
Fig 3 Home Robots
1. Based on the environment in which they work
c) Underwater Robots –
Underwater Robots are also known as Autonomous Underwater Vehicles (AUVs). These robots can
steer themselves and travel through water efficiently, e.g., autonomous submarines.
2. Based on the device they use to work
• Mobile Robots can be classified on the basis of devices they use to
work as follows.
• Wheeled Robots –
Wheeled Robots are also known as Autonomous Intelligent Vehicles (AIVs).
These robots use wheels for locomotion. There are different types of wheeled
robots, such as one-wheeled and two-wheeled robots.
2. Based on the device they use to work
• Humanoid Robots –
Humanoid robots are robots whose body shape resembles the human body.
They rely heavily on sensors, e.g., androids.
2. Based on the device they use to work
• Legged Robots –
Legged robots use articulated limbs, such as legs, to provide locomotion,
e.g., hexapod robots.
Features of Mobile Robots
• Mobile robots provide wireless communication.
• They contain locomotion-control software.
• They work largely on predefined algorithms.
• They have a hierarchical control structure.
• They are easy to operate and operate efficiently.
Application fields of Mobile Robots
• Education and Research
• Remote handling of explosive materials
• Military Mobile Robots
• Fire Fighting and Rescue
• Mining
• Construction
• Photography and Videography
• Planetary Exploration
• https://youtu.be/PaSFFxj-9vI?si=i-fp6bAo27BXxR0z
• https://youtu.be/OV7T0ADNMbQ?si=4ntIhtsJnzxPl3IU
• https://youtu.be/JUGTXmf3K48?si=lLrcOArDzs0hbPvn
• https://youtu.be/JRHdnkUjcZg?si=5M3LbZVqlPD1vI_L
The applications of mobile robotics
The applications of mobile robotics are diverse and span various fields including:
1. Industrial Automation:
- Autonomous Guided Vehicles (AGVs) transport materials in warehouses and factories.
- Robots on manufacturing lines perform tasks like welding, assembly, and painting.
2. Healthcare:
- Service robots assist with patient care, medication delivery, and sanitation in hospitals.
- Telepresence robots enable remote consultations and monitoring.
3. Agriculture:
- Robots perform tasks such as planting, harvesting, and monitoring crop health.
- Drones handle aerial surveying and crop spraying.
4. Military and Security:
- Unmanned Ground Vehicles (UGVs) for reconnaissance, bomb disposal, and supply transport.
- Surveillance robots for perimeter security and monitoring.
The applications of mobile robotics
5. Exploration:
- Rovers for space exploration missions, such as the Mars rovers.
- Underwater robots for oceanographic research and underwater archaeology.
6. Service and Domestic Use:
- Robotic vacuum cleaners and lawn mowers for household chores.
- Service robots in hotels and restaurants for customer service and delivery.
7. Transportation:
- Self-driving cars and buses for public and private transportation.
- Drones for parcel delivery and logistics.
History of Mobile Robotics
1. Early Beginnings:
- The concept of mobile robotics dates back to ancient times with early automatons designed by inventors like Hero of Alexandria.
- In the 20th century, the development of mobile robots began to take shape with advancements in computing and electronics.
2. 1940s-1950s:
- The first notable mobile robots, W. Grey Walter's "Elsie" and "Elmer," were developed in the late 1940s and early 1950s.
These tortoise-like robots could navigate and avoid obstacles using simple electronic sensors.
3. 1960s-1970s:
- Shakey the Robot, developed by Stanford Research Institute (SRI) in the 1960s, was one of the first mobile robots capable of
reasoning about its actions. It used a combination of cameras, sensors, and a computer to navigate.
- The 1970s saw the advent of more sophisticated mobile robots, including early Mars rover prototypes and
industrial mobile robots.
History of Mobile Robotics
4. 1980s-1990s:
- The 1980s and 1990s witnessed significant advancements in mobile robotics with the introduction of more powerful
microprocessors and sensors.
- Projects like Carnegie Mellon University's Navlab, an early autonomous vehicle, showcased the potential for
autonomous navigation in complex environments.
5. 2000s-Present:
- The 21st century has seen rapid growth in mobile robotics, driven by advances in artificial intelligence, machine
learning, and sensor technology.
- Commercial applications have expanded, with autonomous vehicles, drones, and service robots becoming
increasingly prevalent.
- Notable achievements include the successful deployment of the Mars rovers (e.g., Spirit, Opportunity, Curiosity, and
Perseverance) and the development of sophisticated robotic vacuum cleaners and delivery drones.
Mobile robotics continues to evolve, integrating advanced AI, improved sensors, and more efficient
energy sources, paving the way for even more innovative applications in the future.
UNIT 2
Design of System and Navigation Architecture
2.1 Reference control scheme of mobile robotics.
2.2 Temporal decomposition of architecture, control decomposition,
hybrid architecture, mobile architecture, perception.
2.3 Representation and the mapping process.
Design of system and Navigation architecture
• Designing a system and navigation architecture for a mobile robot involves creating a
comprehensive framework that ensures the robot can navigate its environment efficiently and
effectively.
• The design must integrate various components, including sensors, control systems, algorithms,
and user interfaces.
• A structured approach to such a design covers both hardware and software components.
System Architecture
Hardware Components
1. Sensors
• LIDAR (Light Detection and Ranging): For 360-degree environment scanning and obstacle detection.
• Cameras: For visual perception, object recognition, and navigation assistance.
• IMU (Inertial Measurement Unit): For orientation and motion tracking; an electronic device that
measures and reports acceleration, angular rate, and orientation.
https://youtu.be/H2-Yp30TGk4?si=h4ROO3CTZvSiu4n0
System Architecture
Hardware Components
1. Sensors
• Ultrasonic Sensors: For proximity detection and collision avoidance; an instrument that measures the distance
to an object using ultrasonic sound waves.
• GPS: For outdoor localization and path planning; a network of satellites and receivers used to
determine a location on Earth.
• Encoders: For measuring wheel rotation and odometry (the use of data from motion sensors to estimate change in position
over time); a sensing device that converts motion into an electrical feedback signal.
Hardware Components
2. Actuators: An actuator converts an electrical, pneumatic, or hydraulic input signal into the required form of
mechanical energy.
Motors: For driving and steering the robot.
Servos: For adjusting sensors or other movable parts.
3. Processing Unit
Microcontroller/Processor: For executing control algorithms and managing sensor data.
Computing Platform: For heavy processing tasks, such as image processing or advanced path planning
(e.g., Raspberry Pi, NVIDIA Jetson).
4. Communication Systems
Wireless Modules: For remote control and data transfer (e.g., Wi-Fi, Bluetooth).
Wired Interfaces: For reliable communication in controlled environments (e.g., CAN bus). The Controller Area
Network (CAN bus) is a message-based protocol designed to allow the Electronic Control Units (ECUs) found in today's
automobiles, as well as other devices, to communicate with each other in a reliable, priority-driven fashion.
5. Power Supply
Batteries: To power all components.
Power Management System: For efficient energy use and monitoring.
Example Architecture
Ghorbel, A., Ben Amor, N. & Jallouli, M. Design of a flexible reconfigurable mobile robot localization system using FPGA technology. SN
Appl. Sci. 2, 1183 (2020). https://doi.org/10.1007/s42452-020-2960-4
Software Components
1. Operating System
• RTOS (Real-Time Operating System) or Linux: For handling real-time tasks and managing resources.
2. Middleware
• Robot Operating System (ROS): Provides libraries and tools for building robot applications, including
drivers, sensors, and algorithms.
3. Algorithms
• SLAM (Simultaneous Localization and Mapping): For building maps and tracking the robot’s location
within an unknown environment.
• Path Planning: Algorithms such as Dijkstra's or A* for route planning and obstacle avoidance.
• Object Recognition and Detection: For identifying and reacting to objects using computer vision
techniques.(Computer vision is a field of computer science that focuses on enabling computers to identify and understand objects
and people in images and videos)
Software Components
4. Control Systems
• PID Controllers: For maintaining desired behaviors such as speed and direction.(A PID (Proportional –
Integral – Derivative) controller is an instrument used by control engineers to regulate temperature, flow, pressure,
speed, and other process variables in industrial control systems.)
• State Machines: For managing different operating modes and transitions.(A state machine reads a set of
inputs and changes to a different state based on those inputs. )
5. User Interface
• GUI: For monitoring, control, and configuration (e.g., with ROS’s Rviz or custom applications).
The Robot Operating System (ROS) is an open-source framework that helps researchers and developers build and reuse code
between robotics applications.
RVIZ is a ROS graphical interface that allows you to visualize a lot of information, using plugins for many kinds
of available topics. rviz shows the robot's perception of its world, whether real or simulated.
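The PID controller described above can be sketched as a short discrete-time loop. This is a minimal illustrative implementation; the gains, time step, and toy one-wheel "plant" below are assumptions for demonstration, not values from any real robot:

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch, not ROS code)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # I term accumulates error
        derivative = (error - self.prev_error) / self.dt  # D term reacts to change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate wheel speed toward a 1.0 m/s setpoint against a toy plant model
pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.1)
speed = 0.0
for _ in range(100):
    command = pid.update(setpoint=1.0, measurement=speed)
    speed += 0.1 * command     # assumed first-order plant response
print(round(speed, 2))         # settles near the 1.0 m/s setpoint
```

In a real robot the measurement would come from a wheel encoder and the command would go to a motor driver; the structure of the loop is the same.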
Navigation Architecture
1. Perception
• Sensor Fusion
• Combine data from multiple sensors to create a comprehensive understanding of the environment (e.g.,
fusing LIDAR and camera data).
• Feature Extraction
• Identify key features and objects in the environment that are relevant for navigation (e.g., walls, obstacles).
2. Localization
• Odometry
• Use wheel encoders and IMU data to estimate the robot’s position and orientation based on movement.
• Map-Based Localization
• Use pre-built maps and SLAM data to refine position estimates and improve accuracy.
• GPS-Based Localization
• For outdoor robots, GPS data can be used to provide global positioning.
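The odometry step above can be written in a few lines for a differential-drive robot: given the distance each wheel travelled (derived from encoder ticks), the pose is advanced. The function name and parameters are illustrative, not a specific library API:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Differential-drive odometry: advance the pose (x, y, theta) given the
    distance travelled by each wheel, as derived from encoder ticks."""
    d_center = (d_left + d_right) / 2.0        # distance moved by the robot centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate using the heading at the midpoint of the motion
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Both wheels travel 1 m: pure straight-line motion
x, y, theta = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, wheel_base=0.3)
print(x, y, theta)   # pose advances 1 m along x, heading unchanged
```

Because odometry integrates small increments, errors accumulate over time, which is why map-based or GPS-based localization is used to correct it.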
Navigation Architecture
3. Path Planning
• Global Path Planning
• Compute an optimal path from the current location to the target location using algorithms
like A* or Dijkstra’s.
• Local Path Planning
• Continuously adjust the path in response to dynamic obstacles or changes in the
environment using techniques like Dynamic Window Approach (DWA) or Rapidly-exploring
Random Trees (RRT).
• Obstacle Avoidance
• Implement strategies to detect and navigate around obstacles in real-time.
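As a sketch of global path planning, the following runs Dijkstra's algorithm on a small 4-connected occupancy grid (1 = obstacle). The grid and start/goal cells are illustrative; A* would add a heuristic term to the priority:

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = dijkstra(grid, (0, 0), (2, 0))
print(path)   # the only route detours around the blocked middle row
```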
Navigation Architecture
4. Control
• Motion Control
• Generate commands for the actuators to follow the planned path and maintain the desired trajectory.
• Feedback Control
• Use sensor data to correct deviations from the planned path and ensure accurate navigation.
5. Integration and Testing
• Simulation
• Test algorithms and system components in a simulated environment before deploying them on the
actual robot.
• Real-World Testing
• Perform iterative testing in real-world scenarios to refine navigation strategies and ensure robustness.
• Debugging and Optimization
• Continuously monitor system performance, identify issues, and optimize algorithms and hardware for improved
efficiency and reliability.
Reference control scheme of mobile robotics
• The term "Reference Control Scheme" in mobile robotics generally refers to a method
of controlling a robot by following a reference trajectory or desired path.
• Uses control algorithm to ensure that the robot's movement aligns with a predefined
path or set of commands, adjusting its behavior based on feedback from its sensors.
Reference control scheme of mobile robotics
Steps in Reference Control Scheme
1. Define the Reference Path: Establish the desired path or trajectory for the robot to follow,
which can be a set of waypoints or a continuous curve.
2. Measure Robot’s Position: Use sensors to determine the robot’s current position and orientation.
3. Calculate Errors: Determine the deviation of the robot from the reference path.
4. Compute Control Inputs: Use a control algorithm to calculate the necessary adjustments to
correct the deviation.
5. Apply Control Commands: Send commands to the robot’s actuators to adjust its movement and
bring it back on track.
6. Update: Continuously update the position, recalculate errors, and adjust controls in real-time to
maintain accurate path following.
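The six steps above can be sketched as one simulation loop for a unicycle-model robot following waypoints. All gains, speeds, tolerances, and the plant model are illustrative assumptions, not a real controller:

```python
import math

def follow_waypoints(waypoints, x, y, theta, speed=0.5, k_heading=4.0, dt=0.1):
    """Reference-control sketch: measure the pose, compute the heading error
    to the next waypoint, steer proportionally, apply the command, repeat."""
    for wx, wy in waypoints:
        for _ in range(1000):                        # guard against non-convergence
            if math.hypot(wx - x, wy - y) <= 0.1:    # step 3: error small enough
                break
            desired = math.atan2(wy - y, wx - x)     # step 2: measured pose vs. path
            err = math.atan2(math.sin(desired - theta),
                             math.cos(desired - theta))  # wrap error to [-pi, pi]
            omega = k_heading * err                  # step 4: compute control input
            x += speed * math.cos(theta) * dt        # step 5: apply commands
            y += speed * math.sin(theta) * dt
            theta += omega * dt                      # step 6: update and repeat
    return x, y

x, y = follow_waypoints([(1.0, 0.0), (1.0, 1.0)], x=0.0, y=0.0, theta=0.0)
print(round(x, 2), round(y, 2))   # ends near the last waypoint (1, 1)
```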
Applications
• Autonomous Vehicles: Following roads or predefined routes.
• Robotic Arms: Moving along a specified path for tasks like welding or painting.
• Service Robots: Navigating through indoor environments to reach specific destinations.
Temporal decomposition of architecture, control decomposition, hybrid
architecture, mobile architecture, perception.
1. Temporal Decomposition: involves breaking down a robotic system's functions into different
time scales or phases.
2. Control Decomposition refers to breaking down the control system into manageable
components or layers, each responsible for different aspects of control.
3. Hybrid Architecture combines multiple approaches or techniques to leverage their strengths
and mitigate their weaknesses.
4. Mobile Architecture refers to the overall design and structure of the robotic system that allows
it to move and interact with its environment.
5. Perception involves sensing and interpreting the robot’s environment to make informed
decisions.
Temporal Decomposition
• Short-Term Planning: Immediate actions and reactions, such as obstacle avoidance or real-time
adjustments.
• Medium-Term Planning: Intermediate goals, such as navigating through a room or following a
path.
• Long-Term Planning: Strategic goals and missions, such as exploring an area or performing a
sequence of tasks.
Control Decomposition
• Low-Level Control: Handles basic actions such as motor control, speed regulation, and immediate
responses to sensor inputs.
• Mid-Level Control: Manages higher-level tasks such as path following, obstacle avoidance, and
coordination of different low-level controllers.
• High-Level Control: Oversees complex decision-making, strategic planning, and goal-setting.
Hybrid Architecture
It Integrates:
• Reactive Systems: For real-time responses and immediate actions.
• Deliberative Systems: For planning and decision-making based on higher-level goals and
strategies.
Mobile Architecture
It typically includes:
• Platform: The physical structure and locomotion system (e.g., wheels, tracks, legs).
• Navigation System: Components for positioning and movement, such as GPS, IMUs (Inertial
Measurement Units), and odometry.
• Control System: Manages the robot's movement and behavior, integrating various sensors and
actuators.
Perception
It includes:
• Sensors: Devices that collect data about the environment, such as cameras, LIDAR, sonar, and
infrared sensors.
• Data Processing: Analyzing sensor data to detect objects, map surroundings, and identify
features.
• Recognition and Classification: Using algorithms to identify and categorize objects and features
based on sensory input.
Integrating These Concepts
These concepts often overlap and interact:
• Temporal and Control Decomposition: A system may use different control strategies at various
timescales, such as reactive control for immediate obstacles and planning-based control for
longer-term goals.
• Hybrid Architecture: Combines different control strategies and integrates perception systems to
handle a wide range of tasks effectively.
• Mobile Architecture: Incorporates both control and perception systems to enable the robot to
move and interact intelligently within its environment.
By understanding and integrating these elements, robotic systems can achieve more sophisticated
and effective performance in diverse scenarios.
Integrated Block diagram
[Block diagram: temporal decomposition feeds control decomposition.
Temporal levels – High-Level Control; Medium-Term Planning (path navigation, intermediate goals);
Short-Term Planning (immediate reactions, obstacle avoidance).
Control levels – Low-Level Control (motor control, immediate response); Mid-Level Control (path
following, obstacle avoidance); High-Level Control (strategy, decision making).
These connect to the Mobile Architecture, the Perception System, and the Hybrid Architecture.]
Representation and the mapping process.
Representation and mapping are fundamental processes that enable a robot to understand and
navigate its environment.
• Representation refers to how a robot perceives and organizes information about its environment.
This involves creating a model or a map that helps the robot understand and interact with its
surroundings.
• Mapping is the process of constructing a representation of the environment based on sensory
information. This typically involves creating a map that reflects the spatial layout and features of
the environment.
Key Aspects of Representation
This involves creating a model or a map that helps the robot understand and interact with its surroundings.
1. Environmental Models: Various ways to represent the environment, such as:
• Grid Maps: Represent the environment as a regular grid of cells.
• Occupancy Grids: Store probabilistic information about whether each cell is occupied.
• Feature Maps: Represent specific features or landmarks in the environment, like walls, obstacles, or objects.
• Topological Maps: Represent the environment based on spatial relationships and connectivity, rather than precise distances.
• Semantic Maps: Include information about the type and meaning of objects (e.g., "desk" or "door") in addition to their location.
2. Data Structures: Used to store and manipulate environmental data such as
• Maps: Store spatial information, often in a 2D or 3D grid format and
• Graphs: Represent connections between different locations or features and
• Hierarchical Structures: Organize data at different levels of detail or abstraction.
3. Sensor Data Integration: Combining information from various sensors (e.g., cameras, LIDAR, sonar) to
create a coherent representation of the environment.
Examples of Representation
• Occupancy Grid Map: A 2D grid where each cell contains a probability value indicating the
likelihood that the cell is occupied.
• Topological Map: Nodes represent important locations (e.g., rooms) and edges represent paths
or connections between them
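The occupancy grid idea can be sketched directly: each cell stores a log-odds value (0 = unknown, i.e., probability 0.5), each "occupied" or "free" observation adds an increment, and the stored value converts back to a probability. The grid size and update constants below are illustrative assumptions:

```python
import math

# 4x4 occupancy grid in log-odds form: 0.0 means unknown (p = 0.5)
grid = [[0.0 for _ in range(4)] for _ in range(4)]

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (illustrative values)

def integrate_hit(grid, cell):
    """A sensor reading says this cell is occupied: add positive log-odds."""
    r, c = cell
    grid[r][c] += L_OCC

def integrate_miss(grid, cell):
    """A beam passed through this cell: add negative log-odds."""
    r, c = cell
    grid[r][c] += L_FREE

def probability(grid, cell):
    """Convert a cell's log-odds value back to an occupancy probability."""
    r, c = cell
    return 1.0 - 1.0 / (1.0 + math.exp(grid[r][c]))

for _ in range(3):                 # three consistent "occupied" readings
    integrate_hit(grid, (1, 2))
print(round(probability(grid, (1, 2)), 2))   # 0.93 – confidently occupied
print(probability(grid, (0, 0)))             # 0.5 – still unknown
```

Log-odds is used instead of raw probabilities so that repeated sensor updates become simple additions.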
[Figures: example feature maps, semantic maps, grid maps, and other map types]
Key Steps in the Mapping Process
1. Data Collection:
• Sensors: Use sensors like LIDAR, cameras, and sonar to collect data about the environment.
• Odometry: Track the robot’s movement and position using wheel encoders or inertial measurement units (IMUs).
2. Feature Extraction:
• Detection: Identify features or landmarks in the sensor data.
• Segmentation: Group sensor readings into meaningful regions or objects.
3. Data Fusion:
• Sensor Fusion: Combine data from multiple sensors to improve accuracy and robustness.
• Temporal Fusion: Integrate data collected over time to refine the map and correct errors.
4. Map Creation:
• Incremental Mapping: Update the map continuously as the robot moves and collects new data.
• Simultaneous Localization and Mapping (SLAM): Perform localization and mapping simultaneously, updating the
map while keeping track of the robot’s position.
Key Steps in the Mapping Process
5. Map Refinement:
• Error Correction: Adjust the map to correct for errors and inconsistencies.
• Optimization: Refine the map using techniques like bundle adjustment or graph-based optimization.
6. Map Utilization:
• Navigation: Use the map for path planning and obstacle avoidance.
• Localization: Determine the robot’s position relative to the map.
• Task Execution: Perform tasks based on the information in the map (e.g., reaching a goal location).
Example Mapping Techniques
• Grid-Based Mapping: Create a grid map where each cell is updated based on sensor data, using
algorithms like the occupancy grid algorithm.
• Feature-Based Mapping: Build a map by detecting and recording distinct features or landmarks, using
methods like Visual SLAM.
UNIT 3
Locomotion
3.1 Issues for locomotion, legged mobile robots, wheeled mobile robots.
Kinematics introduction, forward and reverse kinematics, wheeled kinematics
and its constraints.
3.2 Mobile system locomotion, human biped locomotion as a rolling polygon.
3.3 Representation of robot position through the reference frame.
Locomotion
• Locomotion in mobile robotics refers to the methods and mechanisms that allow a robot to move
from one place to another.
• It encompasses the design, control, and implementation of the movement systems that enable
robots to navigate their environment.
• Locomotion is a fundamental aspect of mobile robots, influencing how they interact with their
surroundings, accomplish tasks, and adapt to different terrains.
Types of Locomotion in Mobile Robotics
• Wheeled Locomotion
• Legged Locomotion
• Tracked Locomotion
• Hybrid Locomotion
• Flying Locomotion
Wheeled Locomotion
• Description: Involves robots that move using wheels. This is one of the most common types of
locomotion in mobile robotics due to its simplicity and efficiency on flat surfaces.
• Examples:
• Differential Drive: A system with two wheels that can rotate independently, allowing the
robot to move forward, backward, and turn by varying the speed of each wheel.
• Ackermann Steering: Similar to car steering, where the front wheels are used to steer while
the rear wheels provide propulsion.
• Advantages: High speed, low energy consumption, and simplicity of control on smooth surfaces.
• Disadvantages: Limited ability to traverse uneven or rough terrain.
Legged Locomotion
• Description: Involves robots that move using articulated legs, mimicking the movement of
animals or humans. This allows for greater adaptability to uneven terrains.
• Examples:
• Bipedal Robots: Robots with two legs, such as humanoid robots.
• Quadrupedal Robots: Robots with four legs, like Boston Dynamics' Spot.
• Advantages: Ability to navigate complex and uneven terrains, flexibility in movement.
• Disadvantages: Complexity in design and control, higher energy consumption, and challenges in
maintaining balance.
Tracked Locomotion
• Description: Involves robots that move using continuous tracks, like those on a tank. This allows
for stable movement over rough terrain.
• Advantages: Good stability and ability to handle rough terrain, low ground pressure.
• Disadvantages: Slower movement compared to wheeled robots, complexity in steering.
Hybrid Locomotion
• Description: Combines multiple types of locomotion mechanisms, such as wheels and legs, to
take advantage of the strengths of each system.
• Examples: Robots that can switch between wheeled movement for speed and legged movement
for rough terrain.
Flying Locomotion
• Description: Involves robots that move through the air, such as drones and quadcopters.
• Advantages: Ability to access hard-to-reach areas, bypass obstacles on the ground.
• Disadvantages: Limited battery life, complex control systems.
Importance of Locomotion in Mobile Robotics
• Locomotion is essential for enabling robots to perform tasks in dynamic and unstructured
environments.
• It allows robots to explore, transport objects, assist humans, and carry out missions in areas that
may be dangerous or inaccessible to humans.
• The choice of locomotion mechanism directly impacts a robot’s capabilities, including its speed,
stability, energy efficiency, and ability to navigate various terrains.
Issues in Locomotion: Legged and Wheeled Mobile Robots
Legged Mobile Robots
Challenges:
• Stability: Maintaining balance, especially for bipedal robots, is a significant challenge due to the
higher center of gravity and dynamic nature of legged locomotion.
• Complexity: The control algorithms needed for legged locomotion are complex, involving real-
time adjustments based on sensory feedback.
• Energy Efficiency: Legged robots generally consume more energy compared to wheeled robots
due to the need for continuous movement and balance.
• Terrain Adaptability: While legged robots are more adaptable to rough and uneven terrain,
ensuring that they maintain stable locomotion over such surfaces requires advanced sensors and
control systems.
Wheeled Mobile Robots
Challenges:
• Terrain Limitation: Wheeled robots are generally restricted to flat and smooth surfaces, as rough
or highly uneven terrain can hinder their movement.
• Maneuverability: Depending on the type of wheels (e.g., fixed, steering, omni-directional), the
robot's ability to maneuver can be limited.
• Slippage: On certain surfaces, wheeled robots can experience slippage, which complicates
accurate position estimation and control.
• Complex Kinematics: For robots with multiple wheels, especially those with steering capabilities,
kinematic modeling becomes complex.
Kinematics Introduction
Kinematics is the study of motion without considering the forces that cause it. In robotics,
kinematics is used to describe the motion of robots and their parts.
• Forward and Inverse Kinematics
• Forward Kinematics: This involves computing the position and orientation of the robot's end-
effector (or wheels, limbs, etc.) based on given joint parameters (angles, displacements). It is
often straightforward to compute.
• Inverse Kinematics: This involves computing the necessary joint parameters to achieve a desired
position and orientation of the end-effector. Inverse kinematics is generally more complex than
forward kinematics because it may have multiple solutions or no solution at all.
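For a planar two-link arm, both directions can be written in closed form. The sketch below also shows why inverse kinematics is harder: a target can be unreachable (no solution), and the elbow-up pose gives a second solution. Link lengths and the target point are illustrative:

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(l1, l2, x, y):
    """Inverse kinematics, one of the two solutions. Raises ValueError if the
    target is out of reach, illustrating that IK may have no solution."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                                # mirror it for the other pose
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = inverse_kinematics(1.0, 1.0, 1.2, 0.8)
x, y = forward_kinematics(1.0, 1.0, t1, t2)
print(round(x, 3), round(y, 3))   # recovers the target (1.2, 0.8)
```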
Wheeled Kinematics and Constraints
• Wheeled Kinematics: Refers to the relationship between the wheels' rotational speeds and the
robot's overall movement. For example, differential drive robots (with two independently driven
wheels) use the difference in wheel speeds to turn.
• Kinematic Constraints: These are limitations on the robot's motion due to its wheel
configuration. For instance:
• Non-holonomic Constraints: These constraints mean that certain movements (like sideways
motion for traditional wheels) are not possible. For example, a standard car can’t move
directly sideways.
• Holonomic Constraints: Allow the robot to move in any direction without needing to reorient
itself. This is possible with omni-directional wheels or spherical wheels.
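The differential-drive relationship described above can be written directly: wheel speeds map to a body linear velocity and turn rate, and the absence of any lateral-velocity term is exactly the non-holonomic constraint. The wheel radius and wheel base values are illustrative:

```python
def diff_drive_velocities(omega_left, omega_right, wheel_radius, wheel_base):
    """Differential-drive forward kinematics: wheel angular speeds (rad/s)
    -> body linear speed v (m/s) and turn rate omega (rad/s). There is no
    lateral velocity output, which expresses the non-holonomic constraint:
    the robot cannot slide sideways."""
    v_left = wheel_radius * omega_left
    v_right = wheel_radius * omega_right
    v = (v_left + v_right) / 2.0            # average of the wheel rim speeds
    omega = (v_right - v_left) / wheel_base # speed difference causes rotation
    return v, omega

# Equal wheel speeds: straight-line motion, no rotation
v_straight, w_straight = diff_drive_velocities(10.0, 10.0, 0.05, 0.3)
# Opposite wheel speeds: the robot spins in place
v_spin, w_spin = diff_drive_velocities(-10.0, 10.0, 0.05, 0.3)
print(v_straight, w_straight, v_spin, w_spin)
```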
Mobile System Locomotion: Human Biped Locomotion as a Rolling
Polygon
Human Biped Locomotion
• Human Biped Locomotion: This can be modeled as a rolling polygon, where the vertices of the
polygon represent the points of contact of the feet with the ground. As humans walk, their body
can be considered as moving along a path that is defined by these contact points.
• Rolling Polygon Model: In this model, the transition from one foot to another can be seen as the
rolling of the polygon over the terrain. The model helps in understanding the dynamics of
walking, including balance, momentum, and gait.
Mobile System Locomotion
• Locomotion Control: In mobile systems, locomotion is controlled by algorithms that determine
how the system moves from one point to another while maintaining stability and efficiency.
• Gait Analysis: For legged robots, understanding and implementing different gaits (such as
walking, trotting, or galloping) is crucial for optimizing movement across different terrains.
Representation of Robot Position through the Reference Frame
Reference Frames
• Global Reference Frame: This is a fixed coordinate system (often denoted as the world frame)
relative to which the robot's position and orientation are measured.
• Local Reference Frame: This is a coordinate system fixed to the robot, often referred to as the
robot frame. The robot's movements are easier to describe relative to this frame.
Representation of Position
• Position: The robot's position is typically represented as a vector in the global reference frame.
For example, in a 2D space, the position might be given as (x, y), while in 3D, it would be
(x, y, z).
• Orientation: Orientation is represented by angles, such as the yaw (rotation around the vertical
axis), pitch (rotation around the lateral axis), and roll (rotation around the longitudinal axis) in 3D
space.
• Homogeneous Transformation Matrix: A common method to represent both position and
orientation is the homogeneous transformation matrix, which combines rotation and translation
into a single matrix. This is particularly useful for performing coordinate transformations between
different reference frames.
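A 2D homogeneous transformation combining the rotation and translation above can be sketched as a 3x3 matrix that maps a point expressed in the robot frame into the world frame. The example pose is illustrative:

```python
import math

def transform_matrix(x, y, theta):
    """2D homogeneous transform: rotation by theta plus translation (x, y),
    mapping robot-frame points into the world frame."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def apply(T, point):
    """Apply the transform to a 2D point (implicit homogeneous coordinate 1)."""
    px, py = point
    return (T[0][0] * px + T[0][1] * py + T[0][2],
            T[1][0] * px + T[1][1] * py + T[1][2])

# Robot at world position (2, 1), rotated 90 degrees
T = transform_matrix(2.0, 1.0, math.pi / 2)
# A point 1 m directly ahead of the robot (along the robot-frame x axis)
wx, wy = apply(T, (1.0, 0.0))
print(round(wx, 6), round(wy, 6))   # lies at (2, 2) in the world frame
```

Chaining such matrices by multiplication is what makes transformations between many reference frames convenient.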
Summary
• Locomotion in mobile robots, whether legged or wheeled, involves addressing several challenges
related to stability, maneuverability, and terrain adaptability.
• Understanding kinematics, both forward and inverse, is crucial for controlling robot motion.
• Wheeled kinematics introduces specific constraints that must be managed.
• The representation of a robot's position through reference frames is a fundamental concept in
robotics, allowing for accurate control and navigation.

  • 14.
    Application fields ofMobile Robots • Education and Research • Remote handling of explosive materials • Military Mobile Robots • Fire Fighting and Rescue • Mining • Construction • Photography and Videography • Planetary Exploration • https://youtu.be/PaSFFxj-9vI?si=i-fp6bAo27BXxR0z • https://youtu.be/OV7T0ADNMbQ?si=4ntIhtsJnzxPl3IU • https://youtu.be/JUGTXmf3K48?si=lLrcOArDzs0hbPvn • https://youtu.be/JRHdnkUjcZg?si=5M3LbZVqlPD1vI_L
  • 15.
    The applications ofmobile robotics The applications of mobile robotics are diverse and span various fields including: 1. Industrial Automation: - Autonomous Guided Vehicles (AGVs) transport materials in warehouses and factories. - Robots in manufacturing lines for tasks like welding, assembly, and painting. 2. Healthcare: - Service robots assist with patient care, medication delivery, and sanitation in hospitals. - Telepresence robots enable remote consultations and monitoring. 3. Agriculture: - Robots perform tasks such as planting, harvesting, and monitoring crop health. - Drones for aerial surveying and crop spraying. 4. Military and Security: - Unmanned Ground Vehicles (UGVs) for reconnaissance, bomb disposal, and supply transport. - Surveillance robots for perimeter security and monitoring.
  • 16.
    The applications ofmobile robotics 5. Exploration: - Rovers for space exploration missions, such as Mars rovers. - Underwater robots for oceanographic research and underwater archaeology. 6. Service and Domestic Use: - Robotic vacuum cleaners and lawn mowers for household chores. - Service robots in hotels and restaurants for customer service and delivery. 7. Transportation: - Self-driving cars and buses for public and private transportation. - Drones for parcel delivery and logistics.
  • 17.
    History of MobileRobotics 1. Early Beginnings: - The concept of mobile robotics dates back to ancient times with early automatons designed by inventors like Hero of Alexandria. - In the 20th century, the development of mobile robots began to take shape with advancements in computing and electronics. 2. 1940s-1950s: - The first notable mobile robot, Grey Walter’s "Elsie" and "Elmer," was developed in the late 1940s and early 1950s. These tortoise- like robots could navigate and avoid obstacles using simple electronic sensors. 3. 1960s-1970s: - Shakey the Robot, developed by Stanford Research Institute (SRI) in the 1960s, was one of the first mobile robots capable of reasoning about its actions. It used a combination of cameras, sensors, and a computer to navigate. - The 1970s saw the advent of more sophisticated mobile robots, including the development of Mars rovers prototypes and industrial mobile robots.
  • 18.
    History of MobileRobotics 4. 1980s-1990s: - The 1980s and 1990s witnessed significant advancements in mobile robotics with the introduction of more powerful microprocessors and sensors. - Projects like Carnegie Mellon University's Navlab, an early autonomous vehicle, showcased the potential for autonomous navigation in complex environments. 5. 2000s-Present: - The 21st century has seen rapid growth in mobile robotics, driven by advances in artificial intelligence, machine learning, and sensor technology. - Commercial applications have expanded, with autonomous vehicles, drones, and service robots becoming increasingly prevalent. - Notable achievements include the successful deployment of the Mars rovers (e.g., Spirit, Opportunity, Curiosity, and Perseverance) and the development of sophisticated robotic vacuum cleaners and delivery drones. **Mobile robotics continues to evolve, integrating advanced AI, improved sensors, and more efficient energy sources, paving the way for even more innovative applications in the future.***
  • 19.
    UNIT 2 Design ofsystem and Navigation architecture 2.1 Reference control scheme of mobile robotics. 2.2 Temporal decomposition of architecture, control decomposition, hybrid architecture, mobile architecture, perception. 2.3 Representation and the mapping process.
  • 20.
    Design of systemand Navigation architecture • Designing a system and navigation architecture for mobile robotics involves creating a comprehensive framework that ensures • The robot can efficiently and effectively navigate its environment. • Must integrate various components including sensors, control systems, algorithms, and user interfaces. • A structured approach to designing such a system includes hardware and software components
  • 21.
    System Architecture Hardware Components 1.Sensors • LIDAR(light detection and ranging): For 360-degree environment scanning and obstacle detection. • Cameras: For visual perception, object recognition, and navigation assistance. • IMU (Inertial Measurement Unit): For orientation and motion tracking. an electronic device that measures and reports acceleration, orientation, angular rates, and other gravitational forces. https://youtu.be/H2-Yp30TGk4?si=h4ROO3CTZvSiu4n0
  • 22.
    System Architecture Hardware Components 1.Sensors • Ultrasonic Sensors: For proximity detection and collision avoidance. an instrument that measures the distance to an object using ultrasonic sound waves. • GPS: For outdoor localization and path planning. a network of satellites and receiving devices used to determine the location of something on Earth. • Encoders: For wheel rotation and odometry ( the use of data from motion sensors to estimate change in position over time). a sensing device that provides feedback. Encoders convert motion to an electrical signal
  • 23.
    Hardware Components 2. Actuators:An actuator converts electrical, pneumatic or hydraulic input input signal into the required form of mechanical energy Motors: For driving and steering the robot. Servos: For adjusting sensors or other movable parts. 3. Processing Unit Microcontroller/Processor: For executing control algorithms and managing sensor data. Computing Platform: For heavy processing tasks, such as image processing or advanced path planning •(e.g., Raspberry Pi, NVIDIA Jetson). 4. Communication Systems Wireless Modules: For remote control and data transfer (e.g., Wi-Fi, Bluetooth). Wired Interfaces: For reliable communication in controlled environments (e.g., CAN bus). The Controller Area Network (CAN bus) is a message-based protocol designed to allow the Electronic Control Units (ECUs) found in today's automobiles, as well as other devices, to communicate with each other in a reliable, priority-driven fashion. 5. Power Supply Batteries: To power all components. Power Management System: For efficient energy use and monitoring.
  • 24.
    Example Architecture Ghorbel, A.,Ben Amor, N. & Jallouli, M. Design of a flexible reconfigurable mobile robot localization system using FPGA technology. SN Appl. Sci. 2, 1183 (2020). https://doi.org/10.1007/s42452-020-2960-4
  • 25.
    Software Components 1. OperatingSystem • RTOS (Real-Time Operating System) or Linux: For handling real-time tasks and managing resources. 2. Middleware • Robot Operating System (ROS): Provides libraries and tools for building robot applications, including drivers, sensors, and algorithms. 3. Algorithms • SLAM (Simultaneous Localization and Mapping): For building maps and tracking the robot’s location within an unknown environment. • Path Planning: Algorithms like Dijkstra’s for route planning and obstacle avoidance. • Object Recognition and Detection: For identifying and reacting to objects using computer vision techniques.(Computer vision is a field of computer science that focuses on enabling computers to identify and understand objects and people in images and videos)
  • 26.
    Software Components 4. ControlSystems • PID Controllers: For maintaining desired behaviors such as speed and direction.(A PID (Proportional – Integral – Derivative) controller is an instrument used by control engineers to regulate temperature, flow, pressure, speed, and other process variables in industrial control systems.) • State Machines: For managing different operating modes and transitions.(A state machine reads a set of inputs and changes to a different state based on those inputs. ) 5. User Interface • GUI: For monitoring, control, and configuration (e.g., with ROS’s Rviz or custom applications). The Robot Operating System (ROS) is an open-source framework that helps researchers and developers build and reuse code between robotics applications. RVIZ is a ROS graphical interface that allows you to visualize a lot of information, using plugins for many kinds of available topics. rviz shows the robot's perception of its world, whether real or simulated.
  • 27.
    Navigation Architecture 1. Perception •Sensor Fusion • Combine data from multiple sensors to create a comprehensive understanding of the environment (e.g., fusing LIDAR and camera data). • Feature Extraction • Identify key features and objects in the environment that are relevant for navigation (e.g., walls, obstacles). 2. Localization • Odometry • Use wheel encoders and IMU data to estimate the robot’s position and orientation based on movement. • Map-Based Localization • Use pre-built maps and SLAM data to refine position estimates and improve accuracy. • GPS-Based Localization • For outdoor robots, GPS data can be used to provide global positioning.
  • 28.
    Navigation Architecture 3. PathPlanning • Global Path Planning • Compute an optimal path from the current location to the target location using algorithms like A* or Dijkstra’s. • Local Path Planning • Continuously adjust the path in response to dynamic obstacles or changes in the environment using techniques like Dynamic Window Approach (DWA) or Rapidly-exploring Random Trees (RRT). • Obstacle Avoidance • Implement strategies to detect and navigate around obstacles in real-time.
  • 29.
    Navigation Architecture 4. Control •Motion Control • Generate commands for the actuators to follow the planned path and maintain the desired trajectory. • Feedback Control • Use sensor data to correct deviations from the planned path and ensure accurate navigation. 5. Integration and Testing • Simulation • Test algorithms and system components in a simulated environment before deploying them on the actual robot. • Real-World Testing • Perform iterative testing in real-world scenarios to refine navigation strategies and ensure robustness. • Debugging and Optimization • Continuously monitor system performance, identify issues, and optimize algorithms and hardware for improved efficiency and reliability.
  • 30.
    Reference control schemeof mobile robotics • The term "Reference Control Scheme" in mobile robotics generally refers to a method of controlling a robot by following a reference trajectory or desired path. • Uses control algorithm to ensure that the robot's movement aligns with a predefined path or set of commands, adjusting its behavior based on feedback from its sensors.
  • 31.
    Reference control schemeof mobile robotics
  • 32.
    Steps in ReferenceControl Scheme 1. and RepDefine the Reference Path: Establish the desired path or trajectory for the robot to follow, which can be a set of waypoints or a continuous curve. 2. Measure Robot’s Position: Use sensors to determine the robot’s current position and orientation. 3. Calculate Errors: Determine the deviation of the robot from the reference path. 4. Compute Control Inputs: Use a control algorithm to calculate the necessary adjustments to correct the deviation. 5. Apply Control Commands: Send commands to the robot’s actuators to adjust its movement and bring it back on track. 6. Update: Continuously update the position, recalculate errors, and adjust controls in real-time to maintain accurate path following.
  • 33.
    Applications • Autonomous Vehicles:Following roads or predefined routes. • Robotic Arms: Moving along a specified path for tasks like welding or painting. • Service Robots: Navigating through indoor environments to reach specific destinations.
  • 34.
    Temporal decomposition ofarchitecture, control decomposition, hybrid architecture, mobile architecture, perception. 1. Temporal Decomposition: involves breaking down a robotic system's functions into different time scales or phases. 2. Control Decomposition refers to breaking down the control system into manageable components or layers, each responsible for different aspects of control. 3. Hybrid Architecture combines multiple approaches or techniques to leverage their strengths and mitigate their weaknesses. 4. Mobile Architecture refers to the overall design and structure of the robotic system that allows it to move and interact with its environment. 5. Perception involves sensing and interpreting the robot’s environment to make informed decisions.
  • 35.
    Temporal Decomposition • Short-TermPlanning: Immediate actions and reactions, such as obstacle avoidance or real-time adjustments. • Medium-Term Planning: Intermediate goals, such as navigating through a room or following a path. • Long-Term Planning: Strategic goals and missions, such as exploring an area or performing a sequence of tasks.
  • 36.
    Control Decomposition • Low-LevelControl: Handles basic actions such as motor control, speed regulation, and immediate responses to sensor inputs. • Mid-Level Control: Manages higher-level tasks such as path following, obstacle avoidance, and coordination of different low-level controllers. • High-Level Control: Oversees complex decision-making, strategic planning, and goal-setting.
  • 37.
    Hybrid Architecture It Integrates: •Reactive Systems: For real-time responses and immediate actions. • Deliberative Systems: For planning and decision-making based on higher-level goals and strategies.
  • 38.
    Mobile Architecture It typicallyincludes: • Platform: The physical structure and locomotion system (e.g., wheels, tracks, legs). • Navigation System: Components for positioning and movement, such as GPS, IMUs (Inertial Measurement Units), and odometry. • Control System: Manages the robot's movement and behavior, integrating various sensors and actuators.
  • 39.
    Perception It includes: • Sensors:Devices that collect data about the environment, such as cameras, LIDAR, sonar, and infrared sensors. • Data Processing: Analyzing sensor data to detect objects, map surroundings, and identify features. • Recognition and Classification: Using algorithms to identify and categorize objects and features based on sensory input.
  • 40.
    Integrating These Concepts Theseconcepts often overlap and interact: • Temporal and Control Decomposition: A system may use different control strategies at various timescales, such as reactive control for immediate obstacles and planning-based control for longer-term goals. • Hybrid Architecture: Combines different control strategies and integrates perception systems to handle a wide range of tasks effectively. • Mobile Architecture: Incorporates both control and perception systems to enable the robot to move and interact intelligently within its environment. By understanding and integrating these elements, robotic systems can achieve more sophisticated and effective performance in diverse scenarios.
  • 41.
    Integrated Block diagram HighLevel Control Medium Term Planning (Path Navigation, Intermediate Goals) Short Term Planning (Immediate Reactions, Obstacle Avoidance) Control Decomposition Low Level control (Motor Control, response) Mid Level Control(Path following ,obstacle avoidance ) High Level control(Strategic ,Decision making) Mobile Architecture Perception system Hybrid Architecture
  • 42.
    Representation and themapping process. Representation and mapping are fundamental processes that enable a robot to understand and navigate its environment. • Representation refers to how a robot perceives and organizes information about its environment. This involves creating a model or a map that helps the robot understand and interact with its surroundings. • Mapping is the process of constructing a representation of the environment based on sensory information. This typically involves creating a map that reflects the spatial layout and features of the environment.
  • 43.
    Key Aspects ofRepresentation This involves creating a model or a map that helps the robot understand and interact with its surroundings. 1. Environmental Models: Various ways to represent the environment, such as: • Grid Maps: Represent the environment as a grid • Occupancy Grids: probabilistic information about whether a cell is occupied. • Feature Maps: Represent specific features or landmarks in the environment, like walls, obstacles, or objects. • Topological Maps: Represent the environment based on spatial relationships and connectivity, rather than precise distances. • Semantic Maps: Include information about the type and meaning of objects (e.g., "desk" or "door") in addition to their location. 2. Data Structures: Used to store and manipulate environmental data such as • Maps: Store spatial information, often in a 2D or 3D grid format and • Graphs: Represent connections between different locations or features and • Hierarchical Structures: Organize data at different levels of detail or abstraction. 3. Sensor Data Integration: Combining information from various sensors (e.g., cameras, LIDAR, sonar) to create a coherent representation of the environment.
  • 44.
    Examples of Representation •Occupancy Grid Map: A 2D grid where each cell contains a probability value indicating the likelihood that the cell is occupied. • Topological Map: Nodes represent important locations (e.g., rooms) and edges represent paths or connections between them
  • 45.
  • 46.
  • 47.
    Key Steps inthe Mapping Process 1. Data Collection: • Sensors: Use sensors like LIDAR, cameras, and sonar to collect data about the environment. • Odometry: Track the robot’s movement and position using wheel encoders or inertial measurement units (IMUs). 2. Feature Extraction: • Detection: Identify features or landmarks in the sensor data. • Segmentation: Group sensor readings into meaningful regions or objects. 3. Data Fusion: • Sensor Fusion: Combine data from multiple sensors to improve accuracy and robustness. • Temporal Fusion: Integrate data collected over time to refine the map and correct errors. 4. Map Creation: • Incremental Mapping: Update the map continuously as the robot moves and collects new data. • Simultaneous Localization and Mapping (SLAM): Perform localization and mapping simultaneously, updating the map while keeping track of the robot’s position.
  • 48.
    Key Steps inthe Mapping Process 5. Map Refinement: • Error Correction: Adjust the map to correct for errors and inconsistencies. • Optimization: Refine the map using techniques like bundle adjustment or graph-based optimization. 6. Map Utilization: • Navigation: Use the map for path planning and obstacle avoidance. • Localization: Determine the robot’s position relative to the map. • Task Execution: Perform tasks based on the information in the map (e.g., reaching a goal location). Example Mapping Techniques • Grid-Based Mapping: Create a grid map where each cell is updated based on sensor data, using algorithms like the occupancy grid algorithm. • Feature-Based Mapping: Build a map by detecting and recording distinct features or landmarks, using methods like Visual SLAM.
  • 49.
    UNIT 3 Locomotion 3.1 Issuesfor locomotion, legged mobile robots, wheeled mobile robots. Kinematics introduction, forward and reverse kinematics, wheeled kinematics and its constraints. 3.2 Mobile system locomotion, human biped locomotion as a rolling polygon. 3.3 Representation of robot position through the reference frame.
  • 50.
    Locomotion • Locomotion inmobile robotics refers to the methods and mechanisms that allow a robot to move from one place to another. • It encompasses the design, control, and implementation of the movement systems that enable robots to navigate their environment. • Locomotion is a fundamental aspect of mobile robots, influencing how they interact with their surroundings, accomplish tasks, and adapt to different terrains.
  • 51.
    Types of Locomotionin Mobile Robotics • Wheeled Locomotion • Legged Locomotion • Tracked Locomotion • Hybrid Locomotion • Flying Locomotion
  • 52.
    Wheeled Locomotion • Description:Involves robots that move using wheels. This is one of the most common types of locomotion in mobile robotics due to its simplicity and efficiency on flat surfaces. • Examples: • Differential Drive: A system with two wheels that can rotate independently, allowing the robot to move forward, backward, and turn by varying the speed of each wheel. • Ackermann Steering: Similar to car steering, where the front wheels are used to steer while the rear wheels provide propulsion. • Advantages: High speed, low energy consumption, and simplicity of control on smooth surfaces. • Disadvantages: Limited ability to traverse uneven or rough terrain.
  • 53.
    Legged Locomotion • Description:Involves robots that move using articulated legs, mimicking the movement of animals or humans. This allows for greater adaptability to uneven terrains. • Examples: • Bipedal Robots: Robots with two legs, such as humanoid robots. • Quadrupedal Robots: Robots with four legs, like Boston Dynamics' Spot. • Advantages: Ability to navigate complex and uneven terrains, flexibility in movement. • Disadvantages: Complexity in design and control, higher energy consumption, and challenges in maintaining balance.
  • 54.
    Tracked Locomotion • Description:Involves robots that move using continuous tracks, like those on a tank. This allows for stable movement over rough terrain. • Advantages: Good stability and ability to handle rough terrain, low ground pressure. • Disadvantages: Slower movement compared to wheeled robots, complexity in steering.
  • 55.
    Hybrid Locomotion • Description:Combines multiple types of locomotion mechanisms, such as wheels and legs, to take advantage of the strengths of each system. • Examples: Robots that can switch between wheeled movement for speed and legged movement for rough terrain.
  • 56.
    Flying Locomotion • Description:Involves robots that move through the air, such as drones and quadcopters. • Advantages: Ability to access hard-to-reach areas, bypass obstacles on the ground. • Disadvantages: Limited battery life, complex control systems.
  • 57.
    Importance of Locomotionin Mobile Robotics • Locomotion is essential for enabling robots to perform tasks in dynamic and unstructured environments. • It allows robots to explore, transport objects, assist humans, and carry out missions in areas that may be dangerous or inaccessible to humans. • The choice of locomotion mechanism directly impacts a robot’s capabilities, including its speed, stability, energy efficiency, and ability to navigate various terrains.
  • 58.
    Issues in Locomotion:Legged and Wheeled Mobile Robots Legged Mobile Robots Challenges: • Stability: Maintaining balance, especially for bipedal robots, is a significant challenge due to the higher center of gravity and dynamic nature of legged locomotion. • Complexity: The control algorithms needed for legged locomotion are complex, involving real- time adjustments based on sensory feedback. • Energy Efficiency: Legged robots generally consume more energy compared to wheeled robots due to the need for continuous movement and balance. • Terrain Adaptability: While legged robots are more adaptable to rough and uneven terrain, ensuring that they maintain stable locomotion over such surfaces requires advanced sensors and control systems.
  • 59.
    Wheeled Mobile Robots Challenges: •Terrain Limitation: Wheeled robots are generally restricted to flat and smooth surfaces, as rough or highly uneven terrain can hinder their movement. • Maneuverability: Depending on the type of wheels (e.g., fixed, steering, omni-directional), the robot's ability to maneuver can be limited. • Slippage: On certain surfaces, wheeled robots can experience slippage, which complicates accurate position estimation and control. • Complex Kinematics: For robots with multiple wheels, especially those with steering capabilities, kinematic modeling becomes complex.
  • 60.
    Kinematics Introduction Kinematics isthe study of motion without considering the forces that cause it. In robotics, kinematics is used to describe the motion of robots and their parts. • Forward and Inverse Kinematics • Forward Kinematics: This involves computing the position and orientation of the robot's end- effector (or wheels, limbs, etc.) based on given joint parameters (angles, displacements). It is often straightforward to compute. • Inverse Kinematics: This involves computing the necessary joint parameters to achieve a desired position and orientation of the end-effector. Inverse kinematics is generally more complex than forward kinematics because it may have multiple solutions or no solution at all.
  • 61.
    Wheeled Kinematics andConstraints • Wheeled Kinematics: Refers to the relationship between the wheels' rotational speeds and the robot's overall movement. For example, differential drive robots (with two independently driven wheels) use the difference in wheel speeds to turn. • Kinematic Constraints: These are limitations on the robot's motion due to its wheel configuration. For instance: • Non-holonomic Constraints: These constraints mean that certain movements (like sideways motion for traditional wheels) are not possible. For example, a standard car can’t move directly sideways. • Holonomic Constraints: Allow the robot to move in any direction without needing to reorient itself. This is possible with omni-directional wheels or spherical wheels.
  • 62.
    Mobile System Locomotion:Human Biped Locomotion as a Rolling Polygon Human Biped Locomotion • Human Biped Locomotion: This can be modeled as a rolling polygon, where the vertices of the polygon represent the points of contact of the feet with the ground. As humans walk, their body can be considered as moving along a path that is defined by these contact points. • Rolling Polygon Model: In this model, the transition from one foot to another can be seen as the rolling of the polygon over the terrain. The model helps in understanding the dynamics of walking, including balance, momentum, and gait.
  • 63.
    Mobile System Locomotion •Locomotion Control: In mobile systems, locomotion is controlled by algorithms that determine how the system moves from one point to another while maintaining stability and efficiency. • Gait Analysis: For legged robots, understanding and implementing different gaits (such as walking, trotting, or galloping) is crucial for optimizing movement across different terrains.
  • 64.
    Representation of RobotPosition through the Reference Frame Reference Frames • Global Reference Frame: This is a fixed coordinate system (often denoted as the world frame) relative to which the robot's position and orientation are measured. • Local Reference Frame: This is a coordinate system fixed to the robot, often referred to as the robot frame. The robot's movements are easier to describe relative to this frame.
  • 65.
    Representation of Position •Position: The robot's position is typically represented as a vector in the global reference frame. For example, in a 2D space, the position might be given as (x,y)(x, y)(x,y), while in 3D, it would be (x,y,z)(x, y, z)(x,y,z). • Orientation: Orientation is represented by angles, such as the yaw (rotation around the vertical axis), pitch (rotation around the lateral axis), and roll (rotation around the longitudinal axis) in 3D space. • Homogeneous Transformation Matrix: A common method to represent both position and orientation is the homogeneous transformation matrix, which combines rotation and translation into a single matrix. This is particularly useful for performing coordinate transformations between different reference frames.
  • 66.
    Summary • Locomotion inmobile robots, whether legged or wheeled, involves addressing several challenges related to stability, maneuverability, and terrain adaptability. • Understanding kinematics, both forward and inverse, is crucial for controlling robot motion. • Wheeled kinematics introduces specific constraints that must be managed. • The representation of a robot's position through reference frames is a fundamental concept in robotics, allowing for accurate control and navigation.