Homes are chaotic too. Imagine how many variations there are.
What does it mean to be a robot in a human world?
Interact with people. Work in the real world. Be safe.
How can robots enable people?
Observe and interpret the environment. Make predictions. Decide what to do next. Act: move, play a sound, talk. Adapt and learn.
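The steps above form a classic sense–predict–decide–act–adapt loop. A minimal Python sketch of that loop might look like this (all class and method names here are hypothetical illustrations, not any real robot's API):

```python
# Minimal sketch of the observe -> predict -> decide -> act -> adapt loop.
# All names are hypothetical; a real robot would work over sensor streams.

class SimpleRobot:
    def __init__(self):
        self.memory = []  # past experience, used to adapt over time

    def observe(self, environment):
        # Interpret raw input (here just a dict of readings)
        return {"obstacle_ahead": environment.get("obstacle", False)}

    def predict(self, observation):
        # Guess what happens next given the current observation
        return "collision" if observation["obstacle_ahead"] else "clear"

    def decide(self, prediction):
        # Choose an action given the prediction
        return "stop" if prediction == "collision" else "move"

    def act(self, action):
        # Act: move, play a sound, or talk
        return f"robot executes: {action}"

    def adapt(self, observation, action):
        # Store experience so future decisions can improve
        self.memory.append((observation, action))

    def step(self, environment):
        obs = self.observe(environment)
        pred = self.predict(obs)
        action = self.decide(pred)
        self.adapt(obs, action)
        return self.act(action)

robot = SimpleRobot()
print(robot.step({"obstacle": True}))   # robot executes: stop
print(robot.step({"obstacle": False}))  # robot executes: move
```

The same loop structure shows up in the self-driving discussion that follows: the four questions map directly onto observe, predict, and decide.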
So how do we do it? Like any human driver, our self-driving cars have to answer four questions: where am I, what’s around me, what will happen next, and what should I do.
One of the main ways in which we answer “where we are” is by mapping—essentially we drive cars through the world first to create not just regular maps, but really complex ones that include detailed information like road profiles, curbs and sidewalks.
We take these detailed maps and combine them with live information from our sensors so we can compare what we see now to what we’ve seen in the past, giving us a much better idea of where we are exactly.
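The core idea of "compare what we see now to what we've seen in the past" can be sketched in one dimension: slide the live scan along the stored map profile and pick the offset with the smallest mismatch. This is a toy illustration with made-up numbers, not the production localization algorithm:

```python
# Hypothetical 1-D sketch of map-based localization: find the map
# offset where the live sensor scan best matches the stored profile.

def localize(map_profile, live_scan):
    """Return the offset in map_profile where live_scan fits best."""
    best_offset, best_error = 0, float("inf")
    for offset in range(len(map_profile) - len(live_scan) + 1):
        # Sum of squared differences between the scan and a map window
        error = sum((live_scan[i] - map_profile[offset + i]) ** 2
                    for i in range(len(live_scan)))
        if error < best_error:
            best_offset, best_error = offset, error
    return best_offset

# Stored map: curb/road heights along a street (made-up numbers)
stored_map = [0, 0, 1, 3, 3, 1, 0, 0, 2, 2]
scan = [3, 3, 1]                    # what the sensors see right now
print(localize(stored_map, scan))   # 3 -- we are at position 3 on the map
```

Real systems match rich 3-D sensor data against the prior map, but the principle is the same: the match location tells you where you are far more precisely than GPS alone.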
Now to answer “what’s around me”, the sensors on the car (lasers, radars, cameras) first pick up lots of data points from the environment 360 degrees around us. The software then processes all this information to classify objects into different categories.
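To make the classification step concrete, here is a toy nearest-prototype classifier over two made-up features (object length and height). Real systems use learned models over fused laser, radar, and camera data; the categories and prototype sizes below are purely illustrative:

```python
# Toy sketch: assign a detected object to the category whose
# prototype (length_m, height_m) it is closest to. All numbers are
# made up; real classifiers are learned from fused sensor data.

CATEGORY_PROTOTYPES = {
    "vehicle":    (4.5, 1.5),
    "cyclist":    (1.8, 1.7),
    "pedestrian": (0.5, 1.7),
}

def classify(length_m, height_m):
    """Return the category with the nearest prototype."""
    def dist(proto):
        pl, ph = proto
        return (length_m - pl) ** 2 + (height_m - ph) ** 2
    return min(CATEGORY_PROTOTYPES, key=lambda c: dist(CATEGORY_PROTOTYPES[c]))

print(classify(4.2, 1.4))  # vehicle
print(classify(0.4, 1.6))  # pedestrian
```

A rule this simple would fail on exactly the variation described next: pedestrians of different sizes, postures, and clothing. That is why the real software needs far richer features and learned models.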
There’s a lot of complexity to handle here. Think about how different pedestrians can look, for example: we come in different sizes and shapes, we have different postures, we may gesture, and we wear different clothing. The software needs to understand that all of these variations are still pedestrians.
So in this image, the pink boxes are vehicles, the red box is a cyclist, and you can also see the road work ahead in red. And again, remember that the car is seeing 360 degrees around it, tracking all the objects in its surroundings simultaneously.
Now that we know what’s around us, we also want to know what objects around us might do next.
Notice the bright green line at the top, across the intersection. This is us predicting what that vehicle is going to do, and what we believe it’ll do is change lanes around that construction zone, given the obstruction.
We don’t just do this for that one vehicle; we do it for every object we’re perceiving around us.
These lines represent what we think the objects around us might do next.
We make these predictions by getting lots of real-world experience driving on the road, and then using all that database of information to create probabilistic predictions of what might happen next given certain specific situations.
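The idea of turning logged experience into probabilistic predictions can be sketched with simple counting: for each situation, tally what other road users actually did, then normalize into probabilities. The event format and situation names below are hypothetical:

```python
# Sketch: build probabilistic predictions from logged driving data.
# logged_events is a hypothetical list of (situation, observed_action).

from collections import Counter, defaultdict

def build_predictor(logged_events):
    counts = defaultdict(Counter)
    for situation, action in logged_events:
        counts[situation][action] += 1

    def predict(situation):
        c = counts[situation]
        total = sum(c.values())
        # Relative frequencies serve as probability estimates
        return {action: n / total for action, n in c.items()}

    return predict

log = [
    ("lane_blocked_by_construction", "change_lanes"),
    ("lane_blocked_by_construction", "change_lanes"),
    ("lane_blocked_by_construction", "stop"),
    ("green_light", "go_straight"),
]
predict = build_predictor(log)
print(predict("lane_blocked_by_construction"))
```

In this toy log, a vehicle facing a blocked lane changed lanes two times out of three, so the predictor assigns that outcome probability 2/3, which is exactly the kind of estimate behind the bright green line at the construction zone above.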
And now that we know where we are, what’s around us, and what they might do next, we can move on to the last question.
To finally answer “what should I do”, the software takes into account all the information we just presented so we can figure out a safe and comfortable trajectory and speed for the vehicle.
So notice in this image that we’ve removed the prediction lines, but there’s now a green path right in the middle. This shows you the car’s intended route.
But notice that we’ve also added these two green fences. These fences appear because we’ve also taken into account what’s around us and what the objects are likely to do next as well.
One fence is drawn with solid lines, and it’s right behind the car (the pink box) ahead of us. This fence signifies that this is the next object ahead that we really want to pay very close attention to. The fence is green, which means the vehicle ahead is probably going to continue across the intersection; we’re pretty good to follow. But, we’ll yield if necessary — in this situation the fence would’ve turned red.
There’s another fence right below this solid green one, and its lines are dashed. This fence is located at the crosswalk, and its green color means we haven’t detected anyone in the crosswalk. But if someone were to appear, we’d yield, so we pay special attention to it.
So now, given (i) our route, (ii) our understanding of the world around us, and of course (iii) a keen focus on safe, comfortable, defensive driving, the car translates all this information into two outputs — speed (the way you’d press on the gas/brakes) and trajectory (the way you’d steer a vehicle) — to get you where you want to go safely and comfortably.
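A tiny sketch of that final step: given the fences ahead and their states, produce the two outputs described (a speed command and a steering command). The fence format, cruise speed, and slowdown rule are all made-up illustrations:

```python
# Hypothetical sketch: turn fences into the two planning outputs,
# speed (gas/brakes) and steering (trajectory). Numbers are made up.

def plan(fences, cruise_speed=10.0):
    """fences: list of (distance_m, state), state is 'green' or 'red'.
    Returns (speed_mps, steering); steering stays on-route (0.0) here."""
    speed = cruise_speed
    for distance, state in fences:
        if state == "red":
            # Must yield: slow down in proportion to how close the fence is
            speed = min(speed, max(0.0, distance / 2.0))
    return speed, 0.0

# Both fences green (lead car proceeding, crosswalk empty): keep cruising
print(plan([(15.0, "green"), (8.0, "green")]))  # (10.0, 0.0)
# Crosswalk fence turns red (a pedestrian appeared): slow to yield
print(plan([(15.0, "green"), (8.0, "red")]))    # (4.0, 0.0)
```

This mirrors the fence semantics above: green fences are watched but don’t change the plan, while a red fence forces the car to yield before it.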
Now let’s look at some examples of how this comes to life on city streets.
Tomorrow’s Robots are about People
http://www.hizook.com/blog/2009/08/10/robotic-walkers-assist-elderly
https://motherboard.vice.com/en_us/article/z43z4a/the-mit-professor-obsessed-with-building-intelligent-prosthetics
http://www1.udel.edu/research/media/babiesrobotsgallery.html
http://robokind.com/