3. Introduction
The first self-driving car appeared in 1989: ALVINN (Autonomous Land Vehicle In a Neural Network). It used neural networks to detect lines, segment the environment, navigate, and drive. It worked, but it was limited by slow processing power and insufficient data.
With today's high-performance graphics cards, processors, and huge amounts of data, self-driving is more powerful than ever. If it becomes mainstream, it will reduce traffic congestion and increase road safety.
4. How do self-driving cars work?
Self-driving cars are autonomous decision-making systems. They process streams of data from different sensors such as cameras, LiDAR, RADAR, GPS, and inertial sensors. This data is then modeled using deep learning algorithms, which make decisions relevant to the environment the car is in.
5.
6. Understanding the workings of self-driving cars
• To understand the workings of self-driving cars, we need to examine the four main parts:
1. Perception
2. Localization
3. Prediction
4. Decision making
   1. High-level path planning
   2. Behaviour arbitration
   3. Motion controllers
7. Perception
• One of the most important capabilities a self-driving car must have is perception, which helps the car see the world around itself, as well as recognize and classify the things that it sees.
• In order to make good decisions, the car needs to recognize objects instantly.
• Perception is more than just seeing and classifying: it enables the system to evaluate the distance to an object and decide to either slow down or brake.
8. • To achieve such a high level of perception, a self-driving car relies on three main sensors:
• Camera
• LiDAR
• RADAR
9. Camera
• The camera provides vision to the car, enabling multiple tasks like classification, segmentation, and localization. The cameras need to be high-resolution and represent the environment accurately.
10. LiDAR
LiDAR stands for Light Detection And Ranging. It measures the distance of objects by firing a laser beam and then measuring how long it takes for it to be reflected back.
A camera alone can only provide the car with flat images of what's going on around it. Combined with the LiDAR sensor, the images gain depth – the car suddenly has a 3D perception of its surroundings.
So, LiDAR perceives spatial information. When this data is fed into deep neural networks, the car can predict the actions of the objects or vehicles close to it. This sort of technology is very useful in a complex driving scenario, like a multi-exit intersection, where the car can analyze all other cars and make the appropriate, safest decision.
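The time-of-flight principle behind LiDAR ranging fits in a few lines: distance is half the pulse's round-trip time multiplied by the speed of light. This is an illustrative sketch only; real LiDAR drivers return whole point clouds, not single ranges.

```python
# Time-of-flight ranging as used by LiDAR: fire a laser pulse, measure
# how long its reflection takes to return, and halve the round trip.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving 200 nanoseconds after the pulse was fired
# corresponds to an object roughly 30 metres away.
distance = range_from_echo(200e-9)
```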
11.
12. RADARs
Radio detection and ranging (RADAR) is a key component in many military and consumer applications. It was first used by the military to detect objects, and it calculates distance using radio wave signals. Today, it's used in many vehicles and has become a primary component of the self-driving car.
RADARs are highly effective because they use radio waves instead of lasers, so they keep working in conditions – such as fog or heavy rain – where cameras and LiDAR struggle.
13. The RADAR data should be cleaned in order to make good decisions and predictions. We need to separate weak signals from strong ones; this is called thresholding. We also use Fast Fourier Transforms (FFT) to filter and interpret the signal. Clustering algorithms such as Euclidean clustering or K-means clustering are then used to group detections into objects.
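The two post-processing steps described above – thresholding, then grouping nearby detections – can be sketched in a few lines. The detections, power values, and thresholds below are invented for illustration, and a real pipeline would run an FFT over the raw samples first.

```python
# Minimal sketch of RADAR post-processing: thresholding to separate
# strong returns from noise, then a simple Euclidean clustering pass
# that groups detections whose ranges are close together.

def threshold(returns, min_power):
    """Keep only detections whose power exceeds the noise threshold."""
    return [(rng, power) for rng, power in returns if power >= min_power]

def euclidean_cluster(detections, max_gap):
    """Group detections whose ranges lie within max_gap of a neighbour."""
    clusters = []
    for rng, power in sorted(detections):
        if clusters and rng - clusters[-1][-1][0] <= max_gap:
            clusters[-1].append((rng, power))   # extend current cluster
        else:
            clusters.append([(rng, power)])     # start a new cluster
    return clusters

# (range in metres, return power): two real objects plus weak clutter.
raw = [(10.0, 0.9), (10.4, 0.8), (35.0, 0.7), (22.0, 0.05), (35.3, 0.75)]
strong = threshold(raw, min_power=0.5)
objects = euclidean_cluster(strong, max_gap=1.0)
# Two clusters remain: one near 10 m, one near 35 m.
```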
14. Localization
• Localization algorithms in self-driving cars calculate the position and orientation of the vehicle as it navigates – a science known as Visual Odometry (VO).
• VO works by matching key points in consecutive video frames. With each frame, the key points are used as input to a mapping algorithm, such as Simultaneous Localization And Mapping (SLAM), which computes the position and orientation of each nearby object with respect to the previous frame and helps to classify roads, pedestrians, and other objects around the car.
15. Deep learning is generally used to improve the performance of VO and to classify different objects. Neural networks such as PoseNet and VLocNet++ use point data to estimate the 3D position and orientation, and these estimates can then be used to derive scene semantics.
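The key-point matching step that VO builds on can be sketched as a nearest-descriptor search between two frames. Real systems use ORB or SIFT descriptors and reject outliers with RANSAC; the tiny descriptor vectors here are invented for illustration.

```python
# Toy version of key-point matching across consecutive frames: each
# key point carries a descriptor vector, and points are matched by
# nearest descriptor distance (brute force).

def sq_dist(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_keypoints(desc_prev, desc_next):
    """For each descriptor in the previous frame, return the index of
    the closest descriptor in the next frame."""
    matches = []
    for i, d in enumerate(desc_prev):
        j = min(range(len(desc_next)), key=lambda k: sq_dist(d, desc_next[k]))
        matches.append((i, j))
    return matches

frame_a = [(0.1, 0.9), (0.8, 0.2), (0.5, 0.5)]
frame_b = [(0.78, 0.22), (0.12, 0.88), (0.51, 0.49)]  # same points, shuffled
pairs = match_keypoints(frame_a, frame_b)
# Recovers the correspondence despite the shuffle: (0,1), (1,0), (2,2).
```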
16. Prediction
• Understanding human drivers is a very complex task: their behaviour is driven by emotion and reaction as much as by logic.
• It is very uncertain what the next action of nearby drivers or pedestrians will be, so a system that can predict the actions of other road users is very important for road safety.
17. Decision-making
• In general, decision-making in self-driving cars is a hierarchical process with four components:
• Path or route planning: Essentially, route planning is the first of the four decisions the car must make. Entering the environment, the car should plan the best possible route from its current position to the requested destination – an optimal solution among all the alternatives.
• Behaviour arbitration: Once the route is planned, the car needs to navigate along it. The car knows about the static elements, like roads, intersections, and average road congestion, but it can't know exactly what the other road users are going to do throughout the journey. This uncertainty in the behaviour of other road users is handled using probabilistic planning algorithms like MDPs.
• Motion planning: Once the behaviour layer decides how to navigate a route, the motion planning system orchestrates the motion of the car. The motion must be feasible and comfortable for the passenger; it includes the speed of the vehicle, lane changes, and more, all relevant to the environment the car is in.
• Vehicle control: Vehicle control executes the reference path produced by the motion planning system.
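The probabilistic planning mentioned under behaviour arbitration can be made concrete with value iteration on a tiny MDP. The two-state "clear vs. blocked lane" model, its transition probabilities, and its rewards below are invented for illustration; they are not any production planner.

```python
# Value iteration on a toy driving MDP: the car's lane is either
# "clear" or "blocked", and it can keep its lane or change lanes.

GAMMA = 0.9
STATES = ["clear", "blocked"]
ACTIONS = ["keep_lane", "change_lane"]

# P[state][action] = list of (next_state, probability).
P = {
    "clear":   {"keep_lane":   [("clear", 1.0)],
                "change_lane": [("clear", 0.9), ("blocked", 0.1)]},
    "blocked": {"keep_lane":   [("blocked", 1.0)],
                "change_lane": [("clear", 0.7), ("blocked", 0.3)]},
}
R = {"clear": 1.0, "blocked": -1.0}  # reward for being in each state

V = {s: 0.0 for s in STATES}
for _ in range(100):  # Bellman backups to (near) convergence
    V = {s: max(R[s] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                for a in ACTIONS)
         for s in STATES}

best = {s: max(ACTIONS, key=lambda a: sum(p * V[s2] for s2, p in P[s][a]))
        for s in STATES}
# Optimal policy: stay put when the lane is clear, change when blocked.
```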
18.
19. CNNs used for self-driving cars
Convolutional neural networks (CNNs) are used to model spatial information, such as images. CNNs are very good at extracting features from images, and they're often seen as universal non-linear function approximators.
CNNs can capture different patterns as the depth of the network increases: the layers at the beginning of the network capture edges, while the deeper layers capture more complex features like the shapes of objects (leaves on trees, or tires on a vehicle). This is the reason why CNNs are the main algorithm in self-driving cars.
The key component of the CNN is the convolutional layer itself. It has a convolutional kernel, often called the filter matrix, which is convolved with a local region of the input image.
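The convolution equation on the original slide did not survive extraction; the standard form it refers to is y[i][j] = Σ_m Σ_n w[m][n] · x[i+m][j+n] – the filter matrix w slid over local regions of the image x. A minimal pure-Python version (no padding, stride, or bias), shown on an invented tiny image:

```python
# Direct implementation of a convolutional layer's core operation:
# slide the filter matrix over every local region of the image and
# sum the element-wise products.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(kernel[m][n] * image[i + m][j + n]
                 for m in range(kh) for n in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A 3x3 vertical-edge filter responding to the boundary in a tiny image.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
edges = conv2d(image, sobel_x)  # strong response along the 0->1 edge
```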
20. The three important CNN properties that make them versatile and a primary component of self-driving cars are:
• local receptive fields,
• shared weights,
• spatial subsampling.
21.
22. CNN networks used by three companies pioneering self-driving cars:
• HydraNet by Tesla
• ChauffeurNet by Google Waymo
• Nvidia's self-driving car network
24. • HydraNet is a dynamic architecture, so it can have different CNN networks, each assigned to a different task. These blocks or networks are called branches. The idea of HydraNet is to take various inputs and feed them into task-specific CNN networks.
25. ChauffeurNet by Google Waymo
• ChauffeurNet is an RNN-based neural network used by Google Waymo; however, a CNN is one of its core components, used to extract features from the perception system.
• The CNN in ChauffeurNet is described as a convolutional feature network, or FeatureNet, that extracts a contextual feature representation shared by the other networks. These representations are then fed to a recurrent agent network (AgentRNN) that iteratively yields the prediction of successive points in the driving trajectory.
26. Nvidia self-driving car: a minimalist approach towards self-driving cars
• Nvidia also uses a convolutional neural network as the primary algorithm for its self-driving car. But unlike Tesla, it uses three cameras: one on each side and one at the front.
• The network is capable of operating on roads that don't have lane markings, including parking lots. It can also learn features and representations that are necessary for detecting useful road features.
27. Reinforcement learning
• Reinforcement learning used for self-driving cars
• Reinforcement learning (RL) is a type of machine learning where an agent learns by exploring and interacting with the environment. In this case, the self-driving car is the agent.
28. Partially Observable Markov Decision Processes used for self-driving cars
• The Markov Decision Process gives us a way to sequentialize decision-making. The agent interacts with the environment sequentially over time: at each step, the environment provides a representation of its state and, given that representation, the agent selects an action to take; the environment then transitions to a new state and returns a reward.
29. Partially observable Markov decision process (POMDP)
• In a partially observable Markov decision process (POMDP), the agent senses the environment state through observations derived from the perception data, takes an action, and receives a reward.
• The POMDP has six components and can be denoted as M := (I, S, A, R, P, γ), where:
• I: observations
• S: finite set of states
• A: finite set of actions
• R: reward function
• P: transition probability function
• γ: discounting factor for future rewards
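The partial observability above can be made concrete with a belief update: since the agent can't observe the state directly, it maintains a probability distribution over S and refines it with each observation. This sketch shows only the Bayes observation step (omitting the transition model), and the states, observation model, and numbers are invented for illustration.

```python
# Belief update for a toy perception POMDP: did the camera blob come
# from a crossing pedestrian, or is the road actually clear?

S = ["pedestrian_crossing", "road_clear"]

# P_obs[o][s]: probability of observation o given true state s.
P_obs = {"blob_detected": {"pedestrian_crossing": 0.9, "road_clear": 0.2}}

def update_belief(belief, observation):
    """Bayes update of the belief state after an observation."""
    unnorm = {s: P_obs[observation][s] * belief[s] for s in S}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

belief = {"pedestrian_crossing": 0.5, "road_clear": 0.5}
belief = update_belief(belief, "blob_detected")
# Belief in a pedestrian rises from 0.5 to about 0.82.
```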
30. Q-learning used for self-driving cars
• Q-learning is one of the most commonly used DRL algorithms for self-driving cars. It comes under the category of model-free learning, in which the agent tries to approximate the optimal state-action values directly. The policy still determines which state-action pairs, or Q-values, are visited and updated. The goal is to find an optimal policy by interacting with the environment, correcting the policy when the agent makes an error.
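The update equation the original slide referred to did not survive extraction; the standard tabular form is Q(s, a) ← Q(s, a) + α · (r + γ · max_a' Q(s', a') − Q(s, a)). Below, an agent learns to drive toward a goal along a five-cell corridor; the toy environment is invented for illustration and uses a uniform random exploration policy (Q-learning is off-policy, so it still learns the greedy policy).

```python
import random

# Tabular Q-learning on a tiny corridor: states 0..4, goal at 4,
# actions move one cell left or right, reward 1 on reaching the goal.

ALPHA, GAMMA = 0.5, 0.9
GOAL, ACTIONS = 4, ["left", "right"]

def step(state, action):
    nxt = max(0, state - 1) if action == "left" else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
rng = random.Random(0)
for _ in range(300):            # episodes
    s = 0
    for _ in range(100):        # step cap per episode
        a = rng.choice(ACTIONS)               # random exploration policy
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        # The Q-learning update from the slide's equation:
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# After training, "right" dominates in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```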
31. Conclusion
• Self-driving cars aim to revolutionize car travel by making it safe and efficient. In this article, we outlined some of the key components – LiDAR, RADAR, cameras, and, most importantly, the algorithms – that make self-driving cars possible.
A few things still need work:
1. The algorithms are not yet good enough to perceive all roads and lanes, because some roads lack markings and other signs.
2. The sensing modalities for localization, mapping, and perception still lack accuracy and efficiency.
3. Vehicle-to-vehicle communication is still a dream, but work is being done in this area as well.
4. The field of human-machine interaction is underexplored, with many open, unsolved problems.
32. THANK YOU
REFERENCES
A Convolutional Neural Network Approach Towards Self-Driving Cars – IEEE Xplore
Self-Driving Cars With Convolutional Neural Networks (CNN) – neptune.ai