4.
Week 2: Reading<br />What is a possible reason for the fact that nature did not evolve wheels, except for a few animals that use rolling as a means of locomotion?<br />Because rotational actuators are not part of nature's repertoire.<br />Because wheeled locomotion is not efficient on soft and/or uneven ground.<br />Not true, there are various examples of wheel-based locomotion in nature.<br />What is the difference between static and dynamic stability?<br />Dynamic stability is when a robot does not fall over even when moving.<br />Static stability considers "snapshots" of robot poses, whereas dynamic stability addresses sequences of statically stable poses.<br />Dynamic stability requires motion for the system to be stable; static stability does not.<br />What is the prime purpose of a suspension system in a mobile robot?<br />To prevent damage to equipment on the robot<br />To guarantee that the robot base is always parallel to the ground<br />To ensure that all wheels have maximum ground contact<br />
5.
Week 3: Reading<br />How do you calculate the forward kinematics of a wheeled robot?<br />I calculate the contribution of each wheel to the degrees of freedom of the robot in robot coordinates, then add them up, and finally transform them into world coordinates.<br />The world coordinates can be expressed in robot coordinates using a simple rotation matrix.<br />I calculate the 1st and 2nd moments of the rotational center of the robot and transform those using a 3x3 rotation matrix into world coordinates.<br />What is key when calculating wheel kinematic constraints?<br />The angle of the wheel plane needs to be fixed.<br />Rolling and sliding constraints should not be zero for the robot to move.<br />The rolling speed must add up to the robot motion along the direction of the wheel.<br />Which one of the following configurations has steerability degree 1 and mobility degree 2?<br />A robot that can translate along two axes and rotate its main body with a single steering wheel.<br />A robot that can steer one wheel, which leads to a change in translation along one axis and rotation of its main body.<br />A robot with two steering wheels that can independently drive the robot AND let it rotate on the spot.<br />What is a good recipe to drive a differential wheel robot to a desired position?<br />Calculate the robot speed as a function of the robot's wheel speed (forward kinematics). Use this information to predict how to change the wheel speed in order to drive down the error (expressed in polar coordinates) using the controller from Section 3.6.2.4.<br />Use the control law from Section 3.6.2.4 to calculate the desired robot speed in polar coordinates. Now transform the polar coordinates into robot coordinates (inverse kinematics) and from there into world coordinates (forward kinematic model).<br />Calculate first the relation between forward and angular speed at the robot's center and its wheel speed (forward kinematic model).
Determine how to set the wheel speed in order to achieve a desired robot speed (inverse kinematics). Calculate the error in polar coordinates, and use the control law from Section 3.6.2.4 to calculate the desired robot speed.<br />
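The recipe in the last answer can be sketched in a few lines of Python. The wheel radius, axle length, and gains below are made-up illustration values, and the control law is reduced to v = k_ρ·ρ, ω = k_α·α, dropping the goal-orientation term of the full law in Section 3.6.2.4:

```python
import math

# Assumed robot parameters (illustration values, not from the text)
WHEEL_RADIUS = 0.03   # m
AXLE_LENGTH  = 0.10   # m

def polar_error(pose, goal):
    """Error between robot pose (x, y, theta) and goal point (x, y) in polar form."""
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    rho = math.hypot(dx, dy)                              # distance to goal
    alpha = math.atan2(dy, dx) - pose[2]                  # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return rho, alpha

def control_law(rho, alpha, k_rho=0.5, k_alpha=1.5):
    """Desired robot speed (v, omega) from the polar error (gains are assumptions)."""
    return k_rho * rho, k_alpha * alpha

def inverse_kinematics(v, omega):
    """Wheel speeds (rad/s) that realize the robot speed (v, omega)."""
    phi_r = (2 * v + omega * AXLE_LENGTH) / (2 * WHEEL_RADIUS)
    phi_l = (2 * v - omega * AXLE_LENGTH) / (2 * WHEEL_RADIUS)
    return phi_l, phi_r

def forward_kinematics(phi_l, phi_r):
    """Robot speed (v, omega) resulting from the wheel speeds."""
    v = WHEEL_RADIUS * (phi_r + phi_l) / 2
    omega = WHEEL_RADIUS * (phi_r - phi_l) / AXLE_LENGTH
    return v, omega

def step(pose, goal, dt=0.05):
    """One control cycle: error -> control law -> inverse kinematics -> integrate."""
    rho, alpha = polar_error(pose, goal)
    v, omega = control_law(rho, alpha)
    phi_l, phi_r = inverse_kinematics(v, omega)
    v, omega = forward_kinematics(phi_l, phi_r)  # what the wheels actually produce
    x, y, th = pose
    return (x + v * math.cos(th) * dt, y + v * math.sin(th) * dt, th + omega * dt)
```

Iterating `step` shrinks the polar error each cycle, which is exactly the forward-kinematics / inverse-kinematics / control-law loop the correct answer describes.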
6.
Week 4: Reading<br />Your robot is facing a wall with its distance sensor, and even though the robot is not moving, its readings appear to be random. This is most likely a problem with the sensor's<br />Resolution<br />Dynamic Range<br />Bandwidth<br />Precision<br />Your robot is equipped with an infrared distance sensor that delivers very accurate readings that reflect even very small changes in distance. Unfortunately, the sensor does not work well when sunlight is penetrating the lab windows. This is a problem of<br />Sensitivity of the sensor<br />Cross-Sensitivity of the sensor<br />Accuracy of the sensor<br />Why do you require four satellites to establish your position with GPS?<br />There are four unknowns: x, y, z and orientation<br />There are four unknowns: x, y, z and clock skew<br />There are only three unknowns; a compass is required for orientation<br />How does a laser range finder work?<br />A laser beam changes its amplitude at high speed. The Doppler effect leads to a phase shift of the amplitude-modulated laser signal. This phase shift can be measured and is proportional to the distance.<br />The amplitude of the laser beam changes with a specific frequency whose wavelength is larger than the maximum range of the laser. Upon reflection the phase of this beam is shifted. This phase shift can be measured and is proportional to the distance.<br />A laser beam with a wavelength of 824 nm is reflected from a surface and its reflection is recorded on a linear camera, which is used to measure the time between the emission of the ray and its arrival.<br />
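The amplitude-modulation answer can be turned into a small worked example. Because the beam travels to the target and back (a path of 2D), the distance follows from the measured phase shift as D = λ·Δφ/(4π), and the modulation wavelength bounds the unambiguous range; the 5 MHz modulation frequency used below is an arbitrary illustration value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def max_unambiguous_range(f_mod):
    """The modulation wavelength must be larger than twice the maximum range,
    otherwise the phase shift wraps around and the reading is ambiguous."""
    wavelength = C / f_mod
    return wavelength / 2

def distance_from_phase(delta_phi, f_mod):
    """D = lambda * delta_phi / (4 * pi): the factor 4*pi (not 2*pi)
    accounts for the beam travelling to the target and back."""
    wavelength = C / f_mod
    return wavelength * delta_phi / (4 * math.pi)
```

At 5 MHz the modulation wavelength is about 60 m, so ranges up to roughly 30 m can be measured without ambiguity, and a half-cycle phase shift (Δφ = π) corresponds to about 15 m.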
7.
Week 5: Reading<br />What makes color-based object recognition using image sensors difficult?<br />Colors are expressed in terms of their red, green and blue components. The associated gains change drastically as a function of the lighting conditions, making even red and green objects hard to distinguish.<br />The way the sensor sees the image is different from that of the human eye and therefore requires careful calibration.<br />Colors are easy to distinguish, and this is therefore one of the easiest problems in vision.<br />What is not a valuable cue to detect depth from a single monocular image?<br />Blurring<br />Known size of an object<br />Disparity<br />All of the vision-based range extraction mechanisms suffer from the following problem:<br />Depth is difficult to estimate for objects that are far away<br />Changes in lighting conditions change the way color is perceived<br />Only stereo-vision range estimates fail in the far field<br />Range estimates based on stereo vision can be improved by increasing the baseline between the cameras. What are the trade-offs?<br />The sensor requires considerably more space, and the range to close objects cannot be estimated, as one of the cameras might no longer see them.<br />The sensor requires considerably more space and is more difficult to calibrate.<br />The sensor just requires more space.<br />
8.
Uncertainty<br />All sensors are noisy<br />Today<br />How to model uncertainty?<br />How does uncertainty propagate from sensors to features?<br />Example: line detection<br />
9.
The Gaussian/Normal Distribution<br />Defined by<br />Mean<br />Variance<br />Good approximation for some sensors<br />
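A minimal sketch of the distribution itself; numerically integrating the density recovers the familiar 68/95/99.7 rule for 1, 2, and 3 standard deviations:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """p(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)^2 / (2 * sigma^2))"""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def prob_within(k_sigma, mu=0.0, sigma=1.0, n=10_000):
    """Midpoint-rule integration of the pdf over [mu - k*sigma, mu + k*sigma]."""
    lo, hi = mu - k_sigma * sigma, mu + k_sigma * sigma
    dx = (hi - lo) / n
    return sum(gaussian_pdf(lo + (i + 0.5) * dx, mu, sigma) for i in range(n)) * dx
```

`prob_within(1)` comes out near 0.683, `prob_within(3)` near 0.997, which is why readings more than three standard deviations from the mean are commonly treated as outliers.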
10.
Current Stats: Week 1-4 vs. Spring 2010<br />Bi-modal distribution: Undergraduates/Graduates<br />Different performance thresholds for U-Grads / Grads<br />Spring 2010, 2 different distributions<br />[Two histograms: # students vs. points (N=27, max=48), and # students vs. overall score (%)]<br />
11.
Week 6: Reading<br />Why is a Gaussian distribution the model of choice for representing uncertainty in robotic sensing?<br />Sensor readings are subject to uncertainty, and this uncertainty behaves like a Gaussian distribution.<br />The true distribution of noise on most sensors is unknown, but the mathematical properties of the Gaussian model make it the model of choice, being applicable to most sensors.<br />Because the likelihood is very high that all the sensor readings are within 3 standard deviations.<br />What is the reasoning behind the derivation of Equations 4.73 and 4.74 (least-squares optimization)?<br />The derivative of a function is zero at its extreme values (maximum or minimum), and thus finding the value where the derivative of the least-squares error is zero minimizes it. The value for which the least-squares error is minimal is the best fit for the line.<br />Finding the angle of the line that best matches the set of points requires a double integration (double sum).<br />Finding the best-fitting line is a complex numerical optimization problem for which no analytical solution can be found.<br />In order to detect an edge in an image<br />You have to find areas in the image where the frequency, i.e., the change between neighboring pixels, is high<br />You have to find areas in the image that are brighter than others<br />You have to find areas in the image that are darker than others<br />How can you calculate the variance for the detection of a feature that relies on multiple sensors?<br />The variance for feature detection corresponds to that of the sensor with the highest variance. This is represented by the Jacobian that encodes the dependencies between all sensors' error models.<br />The variance for feature detection is the product of the variances of all sensors involved in its detection.
This is represented by the Jacobian that encodes the dependencies between all sensors' error models.<br />The variance for feature detection is a weighted sum of the individual variances of each sensor, weighted by the dependencies of the sensors on each other.<br />
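The weighted-sum answer is the scalar form of the error propagation law: for independent sensor readings x_i with variances σ_i², a feature f(x_1, ..., x_n) has variance σ_f² = Σ (∂f/∂x_i)² σ_i², where the partial derivatives play the role of the Jacobian. A tiny sketch with made-up partials and variances:

```python
def propagated_variance(partials, variances):
    """First-order error propagation for independent inputs:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2
    `partials` are the partial derivatives of the feature w.r.t. each sensor reading."""
    return sum(j * j * s for j, s in zip(partials, variances))
```

For example, a feature that averages two range sensors has partials (0.5, 0.5); with (hypothetical) sensor variances 0.04 and 0.09 m², the feature variance is 0.25·0.04 + 0.25·0.09 = 0.0325 m², lower than either sensor alone.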
12.
Example Feature: Detecting Lines<br />Camera<br />Laserscan<br />N.B. Every single point is subject to uncertainty!<br />
13.
Line Fitting<br />What is the uncertainty associated with each line feature?<br />
14.
Example: Line Fitting<br />Given:<br />Desired: r, α<br />
15.
Solution (Line Fitting)<br />Additional trick: weight each measurement by the variance expected at this distance.<br />
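One way to implement such a weighted fit, using the line's normal form x·cos α + y·sin α = r: setting the derivative of the weighted squared error to zero yields a closed form for α and r, which is exactly the least-squares reasoning the reading questions refer to. A sketch (weights default to 1; in practice each weight would be the inverse of the variance expected at that measurement's distance):

```python
import math

def fit_line(points, weights=None):
    """Weighted least-squares fit of a line x*cos(a) + y*sin(a) = r
    to a list of (x, y) points. Returns (r, alpha)."""
    n = len(points)
    if weights is None:
        weights = [1.0] * n
    sw = sum(weights)
    # Weighted centroid of the points
    xm = sum(w * x for w, (x, y) in zip(weights, points)) / sw
    ym = sum(w * y for w, (x, y) in zip(weights, points)) / sw
    # Weighted second moments about the centroid
    suv = sum(w * (x - xm) * (y - ym) for w, (x, y) in zip(weights, points))
    suu = sum(w * (x - xm) ** 2 for w, (x, y) in zip(weights, points))
    svv = sum(w * (y - ym) ** 2 for w, (x, y) in zip(weights, points))
    # Zero-derivative condition of the weighted squared error
    alpha = 0.5 * math.atan2(-2.0 * suv, svv - suu)
    r = xm * math.cos(alpha) + ym * math.sin(alpha)
    if r < 0:                       # normalize so that r is non-negative
        r, alpha = -r, alpha + math.pi
    return r, alpha
```

For the horizontal line y = 1, the fit returns r = 1 and α = π/2, as the normal form requires.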
17.
Error Propagation<br />What is the variance of α and r?<br />Error propagation law<br />Y are the output variables, X the input variables<br />CX and CY are the covariance matrices<br />F is a Jacobian matrix<br />
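The error propagation law C_Y = F C_X Fᵀ can be illustrated on a simpler case than the line fit: propagating the uncertainty of a single polar range measurement (ρ, θ) into Cartesian coordinates via (x, y) = (ρ·cos θ, ρ·sin θ). The ρ and θ errors are assumed independent (diagonal C_X), and the matrix product is written out by hand to stay dependency-free:

```python
import math

def polar_to_cartesian_cov(rho, theta, var_rho, var_theta):
    """Propagate (var_rho, var_theta) through (x, y) = (rho*cos t, rho*sin t)
    using C_Y = F C_X F^T, with Jacobian F = [[cos t, -rho*sin t],
                                              [sin t,  rho*cos t]]."""
    c, s = math.cos(theta), math.sin(theta)
    f = [[c, -rho * s],
         [s,  rho * c]]
    cx = [var_rho, var_theta]   # diagonal of C_X
    # C_Y[i][j] = sum_k F[i][k] * C_X[k][k] * F[j][k]  (since C_X is diagonal)
    return [[sum(f[i][k] * cx[k] * f[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Note how the angular variance is scaled by ρ²: the same bearing uncertainty causes a larger positional uncertainty for more distant points, which is the motivation for the distance-dependent weights in the line fit.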
19.
Summary<br />Every sensor has noise and makes reasoning uncertain<br />Sensor measurements can be combined into features<br />The uncertainty of these features can be calculated using the error propagation law<br />Knowing how uncertainty behaves helps you decide<br />
28.
Exercise 2: Locomotion<br />If you were to write a controller, what do you think would be your best bet to generate the joint value ji for joint i at time t? Hint: look at how the dog in ghostdog.wbt is controlled.<br />ji(t+1) = a ji(t) + b<br />ji(t+1) = a sin(t - b) + c<br />ji(t+1) = a(t - b)^2 + c<br />Can you implement a new gait in ghostdog.wbt that lets the robot trot? What do you need to do besides adding a case TROT to the finite state machine in ghostdog.c? Try this out before answering the question!<br />Calculate the servo speed so that both front and hind pairs are out of phase, but one front leg is in phase with one hind leg.<br />Calculate the servo speed so that the phase between front and hind legs is always shifted by 90 degrees.<br />Calculate the servo speed so that all legs are phase-shifted by 45 degrees.<br />Which of the motions in Figure 2.1 is only dynamically stable?<br />From 1->2<br />From 3->4<br />From 5->8<br />None<br />What is a straightforward way to presumably double the speed of the forward motion? To test this, edit the file with a text editor. If you don't get the desired speed-up, why is this?<br />The inertia and limited motor speed and torque hinder the robot from executing the motions twice as fast.<br />The motors are simply not fast enough.<br />Just changing the timing of the gait does not affect its actual execution.<br />
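The sinusoidal rule ji(t+1) = a sin(t - b) + c from the first question generalizes to a table of per-leg phase offsets, which is how a trot can be encoded: diagonal legs move in phase, and the two legs of each pair are half a cycle apart. A sketch (the leg names, amplitude, offset, and frequency are invented illustration values, not from ghostdog.c):

```python
import math

# Hypothetical per-leg phase offsets for a trot:
# diagonal legs in phase, contralateral legs half a cycle apart.
TROT_PHASE = {"front_left": 0.0, "front_right": math.pi,
              "hind_left": math.pi, "hind_right": 0.0}

def joint_angle(t, leg, amplitude=0.4, offset=0.0, freq=1.0):
    """j_i(t) = a * sin(2*pi*f*t - phase_i) + c, the sinusoidal rule
    from the exercise with a per-leg phase shift."""
    return amplitude * math.sin(2 * math.pi * freq * t - TROT_PHASE[leg]) + offset
```

Sampling this at each control step gives the servo setpoints; implementing a different gait then amounts to changing only the phase table, not the control loop.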
29.
Exercise 3: Control<br />What happens when you increase KR and lower KA in the controller?<br />The robot will drive curves with a larger radius.<br />The robot will drive curves with a smaller radius.<br />The robot will just drive straight; the values need to be exactly as they are.<br />How does your controller deal with the obstacles?<br />Collision avoidance subsumes navigation. Obstacles are avoided, and navigation is resumed as soon as the obstacles are cleared.<br />The robot plans around the obstacles.<br />The robot gets stuck in the obstacles.<br />Build an obstacle with a U-shape by shifting the obstacles in the arena (press the shift key and move them with the mouse) and let the robot run into this. What happens?<br />The robot follows the inner perimeter of the U-obstacle to get out of it and eventually reaches the goal.<br />The robot goes back and forth into the U-obstacle. Some kind of planning would be needed.<br />The shape of the obstacle does not matter.<br />
30.
Exercise 4: Control<br />How can you tell that the robot is lying on its front or its back?<br />I need to integrate the direction of acceleration in order to determine the direction the robot has fallen.<br />I need to identify the direction of the acceleration exerted by the gravity of the earth.<br />I use the accelerometer to detect a fall and then use the camera to detect whether the robot is facing down or not.<br />Can you use the Nao's accelerometer for integrating the robot's position? If you are not sure, try it!<br />Sure, the acceleration allows me to calculate the position, and small errors around the mean will cancel each other out.<br />It is not possible to calculate position simply from acceleration.<br />Problems are the gravity of the earth and the fact that even small errors have a fatal impact after the required double integration.<br />What is the problem with the resulting map?<br />The accuracy of the laser scanner's readings is pretty bad, leading to a rather noisy map.<br />Accumulating odometry errors render the resulting map useless very fast.<br />The environment is hard to map.<br />What problem does this robot create?<br />It continues to collide with the mapping robot.<br />Dynamic obstacles make their way into the map.<br />The other robot moves too fast to be mapped accurately.<br />
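The "fatal impact after the required double integration" can be made concrete: even a tiny constant accelerometer bias b, with no random noise at all, produces a position error that grows roughly as ½·b·t². A 1-D sketch with an invented bias value:

```python
def integrate_position(accel_samples, dt):
    """Naive double (Euler) integration of acceleration into 1-D position."""
    v = x = 0.0
    for a in accel_samples:
        v += a * dt   # acceleration -> velocity
        x += v * dt   # velocity -> position
    return x

def drift_from_bias(bias, t_total, dt):
    """Position error caused by a constant accelerometer bias alone."""
    n = int(t_total / dt)
    return integrate_position([bias] * n, dt)
```

A (hypothetical) bias of only 0.01 m/s² already produces about 0.5 m of position error after 10 s, and because the error grows quadratically, it roughly quadruples every time the elapsed time doubles, so the position estimate diverges quickly.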