TECHNICAL SEMINAR
ON
GOOGLE DRIVERLESS CAR
Syed Jabir
12UC1A0462
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
Contents
 ABSTRACT
 WHY WE NEED GOOGLE CAR
 TECHNOLOGIES
 COMPONENTS
 SENSORS
 LIDAR
 RADAR
 CAMERA
 ULTRASONIC SENSORS
 CAR’S CPU
 MAJOR CHALLENGES
 CONCLUSION
 REFERENCES
ABSTRACT
The Google driverless car is a vehicle that can drive itself from one point
to another without assistance from a driver. It has neither a steering wheel
nor pedals; in other words, it runs on an autopilot system.
WHY WE NEED GOOGLE CAR?
 Driver error is the most common cause of traffic accidents.
 Cell phones, in-car entertainment systems, heavier traffic and more
complicated road systems are making accidents more frequent.
 With this improving technology, the car will do the concentrating for us.
Technologies
 Anti-lock brakes (ABS)
 Electronic stability control (ESC)
 Cruise control
 Lane Departure Warning System
 Self Parking
 Automated Guided Vehicle Systems
COMPONENTS:
Google Maps: provides the car with road information
GPS: provides the car with its real-time location
Sensors: provide the car with real-time environment conditions
Artificial Intelligence: provides the car with real-time decisions
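A schematic sketch, not Google's actual software, of how these four components could combine on each driving cycle: Google Maps supplies road information for the current GPS position, the sensors summarise the surroundings, and the artificial-intelligence layer turns both into a decision. All names and values below are illustrative stubs.

def drive_step(location, road_map, environment):
    # Google Maps stand-in: look up road information for the GPS fix.
    road_info = road_map.get(location, {"road": "unknown", "speed_limit_kmh": 30})
    # Artificial-intelligence stage, reduced to a single illustrative rule.
    if environment.get("obstacle_ahead"):
        return "brake"
    return f"follow {road_info['road']} at up to {road_info['speed_limit_kmh']} km/h"

# Stub inputs standing in for map data, a GPS fix and the fused sensor view.
road_map = {(37.422, -122.084): {"road": "Main Street", "speed_limit_kmh": 50}}
location = (37.422, -122.084)            # latitude, longitude from GPS
environment = {"obstacle_ahead": False}  # from LIDAR, radar, camera, ultrasonic
print(drive_step(location, road_map, environment))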
SENSORS
LIDAR
The heart of Google's self-driving car is the rotating rooftop Lidar, a laser
range finder. With its array of 64 laser beams, this unit creates 3D images of
objects, helping the car see hazards along the way. It calculates how far an
object is from the moving vehicle based on the time it takes for the laser
beams to hit the object and come back. These high-intensity lasers can
calculate distance and create images of objects within an impressive 200 m range.
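A minimal sketch of the time-of-flight principle behind this range measurement. The speed of light and the round-trip formula are standard; the example pulse time is illustrative only.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def range_from_round_trip(round_trip_time_s):
    # The laser pulse travels out and back, so halve the round-trip distance.
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A pulse returning after about 1.33 microseconds corresponds to roughly
# 199 m, close to the 200 m working range quoted above.
print(f"{range_from_round_trip(1.33e-6):.1f} m")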
RADAR
Four radars mounted on the car's front and rear bumpers make the car aware of
vehicles in front of it and behind it. Most of us are familiar with this
technology: radar is an object-detection system that uses radio waves to
determine the range, angle, or velocity of objects.
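A minimal sketch of how a radar can estimate a target's relative speed from the Doppler shift of the returned radio wave. The 77 GHz carrier is a common automotive radar band and is used here only as an assumption; the shift value is illustrative.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def relative_speed(doppler_shift_hz, carrier_hz=77e9):
    # Factor of 2 because the wave travels to the target and back.
    return doppler_shift_hz * SPEED_OF_LIGHT_M_S / (2.0 * carrier_hz)

# A shift of about 5.1 kHz at 77 GHz corresponds to roughly 10 m/s
# (about 36 km/h) of closing speed.
print(f"{relative_speed(5.1e3):.1f} m/s")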
CAMERA
A camera is mounted on the windshield. With the help of IMAGE PROCESSING and
ARTIFICIAL INTELLIGENCE, it correctly interprets common road behavior and
motorists' signals. For example, if a cyclist gestures that he intends to make
a turn, the driverless car interprets it correctly and slows down to allow the
cyclist to turn. Predetermined shape and motion descriptors are programmed
into the system to help the car make intelligent decisions, as in the toy
sketch below.
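A toy sketch of the idea behind predetermined descriptors: a perception stage labels what the camera sees, and a fixed mapping turns that label into a driving response. The labels and responses here are purely illustrative assumptions, not the vocabulary of Google's system.

DESCRIPTOR_TO_ACTION = {
    "cyclist_arm_extended_left": "slow down and yield for the turn",
    "stop_sign": "come to a complete stop",
    "pedestrian_at_crosswalk": "stop and wait",
}

def decide(detected_descriptor):
    # Fall back to cautious behavior when the descriptor is not recognized.
    return DESCRIPTOR_TO_ACTION.get(detected_descriptor, "reduce speed and monitor")

print(decide("cyclist_arm_extended_left"))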
ULTRASONIC SENSOR
This familiar technique uses sound propagation to detect objects. An
ultrasonic sensor on one of the rear wheels helps keep track of the car's
movements. It also counts the rotations of the wheel to pinpoint the exact
location of the car with the help of GPS and GOOGLE MAPS, and it alerts the
car to obstacles in the rear.
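A minimal sketch of the wheel-rotation (odometry) idea mentioned above, assuming an illustrative wheel diameter: counting rotations of the rear wheel gives the distance travelled, which can then refine the GPS position along the mapped route.

import math

WHEEL_DIAMETER_M = 0.65  # assumed wheel diameter, for illustration only

def distance_travelled(rotations):
    # Distance covered = number of rotations x wheel circumference.
    return rotations * math.pi * WHEEL_DIAMETER_M

# 100 rotations of a 0.65 m wheel is roughly 204 m of travel.
print(f"{distance_travelled(100):.1f} m")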
CAR’S CPU
All the data gathered by these sensors is collated and interpreted by the
car's CPU and its built-in software to create a safe driving experience.
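A highly simplified sketch of that collation step: readings from each sensor are gathered into one view of the surroundings, and the software takes the most cautious action any sensor calls for. The data structure, threshold and rule are illustrative assumptions, not the actual on-board software.

from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str                # "lidar", "radar", "camera" or "ultrasonic"
    obstacle_distance_m: float

def choose_action(readings, braking_distance_m=30.0):
    # Act on the nearest obstacle reported by any sensor.
    nearest = min(readings, key=lambda r: r.obstacle_distance_m)
    if nearest.obstacle_distance_m < braking_distance_m:
        return f"brake: obstacle {nearest.obstacle_distance_m:.0f} m away ({nearest.source})"
    return "maintain speed"

print(choose_action([SensorReading("lidar", 180.0), SensorReading("radar", 25.0)]))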
MAJOR CHALLENGES
 99% of countries have not yet allowed the Google car to be driven on their roads.
 If the vehicle relies on an internet connection with weak security, hackers
could in some cases switch the vehicle off on the road.
 Hackers could also change the route plotted in the system.
 The car has some issues driving in heavy rain.
 It has trouble differentiating between a plastic bag and a rock.
CONCLUSION
 The Google car is very useful.
 It will reduce the rate of road accidents.
 It will follow traffic rules and regulations.
 It will make efficient use of highways.
 All these issues are expected to be sorted out by the year 2020, and then
we can enjoy riding in the Google car.
REFERENCES
 http://en.wikipedia.org/wiki/driverlesscar/
 http://www.national.co.uk/tech-powers-google-car/
 http://www.techtimes.com/articles/14625/20140902/rain-or-snow-rock-or-plastic-bag-google-driverless-car-cant-tell.htm
THANK YOU
