Imitation Learning and Direct Perception for Autonomous Driving
1. Thesis Seminar: Imitation Learning and Direct Perception for Autonomous Driving
Rocky Liang
University of Waterloo CogDrive Lab
Supervisor: Dr. Dongpu Cao
2. Table of Contents
▪ Problem Definition
▪ Conditional Imitation Learning
▪ Direct Perception
▪ Curvature-Based Dynamic Controller
3. Problem Definition
Goal: to design a policy that can operate an autonomous system in a more streamlined way than the traditional robotics approach, which is complex and difficult to scale.
[Diagram: the traditional pipeline runs Sensor Input → Environment Understanding → Decision Making → Control Action → Actuation; imitation learning replaces the intermediate stages with a single learned Policy between sensor input and actuation.]
4. Conditional Imitation Learning
1. Record expert demonstrations of (state, action) pairs
2. Initialize model weights
3. Update weights by minimizing the error between the model's predicted action and the expert action
[Diagram: the model takes a state/observation and a context as input and is trained against the expert action as ground truth.]
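The three steps above are ordinary behavioral cloning. A minimal sketch, using a linear model and synthetic data in place of the thesis's network and the CARLA dataset (all dimensions and the learning rate are illustrative):

```python
import numpy as np

# Behavioral-cloning sketch: a linear policy mapping
# [observation, one-hot context] -> action, trained by gradient
# descent on squared error against expert actions.

rng = np.random.default_rng(0)
OBS_DIM, N_CONTEXTS, ACT_DIM = 4, 4, 3   # action = (steer, throttle, brake)

def featurize(obs, context):
    """Concatenate the observation with a one-hot context vector."""
    one_hot = np.zeros(N_CONTEXTS)
    one_hot[context] = 1.0
    return np.concatenate([obs, one_hot])

# Step 1: synthetic "expert" demonstrations of (state, action) pairs.
true_W = rng.normal(size=(OBS_DIM + N_CONTEXTS, ACT_DIM))
X = np.stack([featurize(rng.normal(size=OBS_DIM), rng.integers(N_CONTEXTS))
              for _ in range(256)])
Y = X @ true_W                            # expert actions

W = np.zeros_like(true_W)                 # step 2: initialize model weights
losses = []
for _ in range(200):                      # step 3: minimize prediction error
    pred = X @ W
    grad = X.T @ (pred - Y) / len(X)      # gradient of mean squared error
    W -= 0.1 * grad
    losses.append(float(np.mean((pred - Y) ** 2)))
```

The loss falls monotonically here because the synthetic data is exactly realizable by the model class; with real demonstrations only the trend holds.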
5. Conditional Imitation Learning - Context
▪ Driving is a multimodal task: there can be multiple correct actions in any given location
▪ Without a context, predicting the appropriate action is an under-constrained problem
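One way to condition on the context, used in the branched architecture of the cited Codevilla et al. paper, is to let the command select an output head rather than feeding it in as a feature. A toy sketch (the linear heads and all dimensions are illustrative stand-ins for a shared perception stack with per-command branches):

```python
import numpy as np

# Command-branched policy sketch: the context selects which output
# head produces the action, resolving the multimodality at intersections.

CONTEXTS = ["follow_lane", "left_turn", "right_turn", "straight"]
OBS_DIM, ACT_DIM = 4, 3

rng = np.random.default_rng(1)
# One head per context; in the real model these sit atop shared features.
heads = {c: rng.normal(size=(OBS_DIM, ACT_DIM)) for c in CONTEXTS}

def act(obs, context):
    """Route the observation through the head chosen by the context."""
    return obs @ heads[context]

obs = rng.normal(size=OBS_DIM)
a_left = act(obs, "left_turn")
a_right = act(obs, "right_turn")
# Same observation, different command, different action.
```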
6. Conditional Imitation Learning - Dataset
Dataset used: CARLA Imitation Learning Dataset
Available contexts:
▪ Follow lane
▪ Left turn
▪ Right turn
▪ Straight
Anatomy of a datapoint:
▪ Input: observation (including Vx) and context
▪ Label (action): steering, throttle, brake
Scene description:
▪ Suburban scenery
▪ Single-lane roads
▪ Intersections
▪ Other vehicles & pedestrians
9. Conditional Imitation Learning - Limitations
▪ Low explainability
▪ Why did it take that specific action?
▪ Hard to debug
▪ Immutable behavior
▪ The behavior shown in the dataset is the behavior the trained model will have
11. Direct Perception - Affordances
● Affordances are states that are key to the robot's operation
● Unlike mediated perception, which seeks to build a full reconstruction of the environment, direct perception only extracts affordances from sensors
● Affordances:
○ Lane deviation
■ Heading error & crosstrack error
○ Road curvature
○ Distance to car in front
● Scales across different vehicles with minimal vehicle-specific development
● Affordances are fed to a controller to drive the car
[Figure: example roads with nonzero curvature and zero curvature]
13. Data Recorder
[Diagram: Python clients (Weather Client, NPC Client, Recorder Client) talk to the CARLA Server; the Recorder Client takes user input, renders a display window, and writes the saved data.]
Recorded contexts:
▪ Left lane change
▪ Right lane change
▪ Left turn
▪ Right turn
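Independently of the CARLA client plumbing, the recorder's per-frame output might look like the following sketch: one JSON line per frame holding the observation, the active context, and the driver's action. The field names and format are hypothetical, not the thesis's actual storage layout:

```python
import io
import json

# Per-frame logging sketch (hypothetical schema).
def record_frame(stream, observation, context, action):
    stream.write(json.dumps({
        "obs": observation,          # e.g. sensor readings, speed Vx
        "context": context,          # e.g. "left_lane_change"
        "action": action,            # (steer, throttle, brake)
    }) + "\n")

buf = io.StringIO()
record_frame(buf, {"vx": 8.3}, "left_lane_change", [0.1, 0.4, 0.0])
record_frame(buf, {"vx": 8.1}, "left_turn", [-0.3, 0.2, 0.0])
frames = [json.loads(line) for line in buf.getvalue().splitlines()]
```

A line-per-frame format keeps recording append-only, so a crashed session still leaves every completed frame readable.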
14. Curvature-Based Dynamic Controller
Lateral dynamic model (source: Vehicle System Dynamics, Khajepour)
1. Propagate the lateral model and get a curvature prediction
2. Calculate the loss between the predicted curvature and the affordance curvature
3. Minimize the loss by updating the control input using its gradient
[Diagram: lateral dynamic model → loss calculation → update delta]
15. Curvature-Based Dynamic Controller
[Diagram: vehicle states from sensors and a candidate control input feed the lateral dynamic model; the predicted states go to an error calculation, and a solver loop iteratively solves for the control input.]
17. Summary & Contributions
▪ Developed context-aware imitation learning models for vehicle control
▪ Developed a direct perception model for vehicle control
▪ Context-aware affordance predictor
▪ Curvature-based dynamic controller
▪ Learning-based but still vehicle-agnostic
▪ Data recorder
▪ Able to record contextual driving data including lane change affordances
18. Future Work
More robust context modeling:
▪ Currently, contexts are discrete categories
▪ Cannot navigate more complicated intersections
▪ Would like to represent context in a continuous way
▪ How to represent this information?
▪ How to pass it to the network?
Domain adaptation:
▪ Fine-tuning with a real-world dataset (simple, low generalizability)
▪ Learning a common scene representation across multiple domains (tough, high generalizability)
19. Key Sources
▪ End-to-end Driving via Conditional Imitation Learning, Codevilla et al.
▪ Agile Autonomous Driving using End-to-End Deep Imitation Learning, Pan et al.
▪ Imitation Learning for Vision-based Lane Keeping Assistance, Innocenti et al.