268 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 62, NO. 2, FEBRUARY 2013

Multisensor Contour Following With Vision, Force, and Acceleration Sensors for an Industrial Robot

Heiko Koch, Alexander König, Alexandra Weigl-Seitz, Karl Kleinmann, and Jozef Suchý

Abstract—In robotic contour-following tasks, such as sewing or cutting, an industrial robot guides a tool along the contour of a workpiece. Manually teaching the robot is time consuming and results in a system that is unable to react to uncertainties or changes in the environment. Because the cost effectiveness of a robotic solution depends on the amount of human intervention, particularly small series production benefits from greater system autonomy through the integration of sensor systems. In this paper, we present an integrated approach for multisensor contour following. A look-ahead vision sensor steers the robot along the workpiece while force-feedback control maintains the desired contact force. Acceleration sensors are used to compensate the force measurements for inertial forces, so the arrangement of the acceleration sensors is investigated. Couplings that arise between force and vision control systems are estimated, and online measurements of contact forces between the robot and the environment are used to adjust measurement results from the vision sensor to compensate for environmental deformations. Parameters of a second-order linear model of the environment are estimated by online identification. The identification combines force and acceleration sensors in an observer-based control scheme. The system is validated by experiments that involve contour following on compliant objects.

Index Terms—Robot control, robot sensing systems, robot vision systems, sensor fusion, tactile sensors.

I. INTRODUCTION

ROBOTIC CONTOUR following can be used for a range of applications such as sewing, grinding, or applying adhesives.
The aviation industry, for example, uses robotic sewing techniques to connect carbon fiber material [1], [2]. Conventional solutions assume a stiff nonmoving environment using fixed robot paths; however, in an automated task, vision feedback is used to control the path directions while force feedback is used to control the contact force.

The performance and speed of visual servoing are limited by the sensor system and dynamics of the robot. Image processing can introduce significant time delays in the control loop, making direct feedback impossible. Koivo and Houshangi [3] circumvent time delays by estimating the current position of a moving object using model-based prediction.

Manuscript received October 30, 2011; revised May 27, 2012; accepted May 29, 2012. Date of publication September 17, 2012; date of current version December 29, 2012. This work was supported by the Federal Ministry of Education and Research, Germany. The Associate Editor coordinating the review process for this paper was Dr. John Sheppard.
H. Koch and A. König were with the University of Applied Sciences Darmstadt, 64295 Darmstadt, Germany. They are now with Chemnitz Technical University, 09107 Chemnitz, Germany (e-mail: hkoch@eit.h-da.de).
A. Weigl-Seitz and K. Kleinmann are with the University of Applied Sciences Darmstadt, 64295 Darmstadt, Germany.
J. Suchý is with Chemnitz Technical University, 09107 Chemnitz, Germany.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIM.2012.2214934
Delays in a visual servoing system can be handled by adding feedforward signals from estimations of system states to reduce tracking errors; however, generating feedforward signals for an object-tracking application that lets the tracking error converge to zero is impossible because the future position/velocity of the target to be tracked can only be estimated and is not directly measured [4], [5].

In contrast to object tracking, contour-following applications are capable of measuring "future" path characteristics using look-ahead vision [ahead of the tool center point (TCP)]. As a result, it is possible to cope directly with system delays, as we have shown in [6]. Lange et al. [7] have shown the advantages of predictive contour following using look-ahead vision in combination with an offline-generated reference path. Research by Baeten and De Schutter [8] showed the benefits of visual servoing including feedforward signals, although their research was limited to planar contours. Many approaches of constrained contour following use a vision system to perform a measurement at the contact point between the tool and workpiece, e.g., [9]. Because we do not perform a measurement at the contact point, the tool does not occlude the camera view. Performing measurements with a moving camera requires a robust sensor system and fast image acquisition [10]. A robot-guided triangulation-based laser imaging system, as shown, e.g., by Zou et al. [11], provides 3-D image data that make it possible to follow a contour at high speed using a robot-mounted camera.

In this paper, we present an algorithm to adjust the position and orientation of the tool using predictive vision-based control. Force feedback is used to control the contact force.
Our system includes feature validation for suppressing measurement errors, tool alignment with minimum angular tracking error along unknown contours, and the ability to restrict the tool velocity online, depending on the applicable angular velocities. Moreover, we compensate for inertial forces within the force measurement and compensate the vision sensor data for disturbances caused by workpiece deformation to decouple force- and vision-controlled directions.

Filtering of the acquired sensor data is necessary due to measurement noise. We use a least squares approach to fit a quadratic curve to the measured data to generate a path. We detect corners in the measured data to be able to handle the discontinuity at corners. Corner detection can be divided into two main groups: 1) from grayscale images and 2) from a list of points obtained from the contour.

0018-9456/$31.00 © 2012 IEEE

Masood and Sarfraz [12] proposed sliding a set of three rectangles along the 2-D curve to identify corners by counting contour points lying in each rectangle. Pritchard et al. [13] tried to fit triangles to the 2-D boundary. Mokhtarian and Suomela [14] proposed corner detection using curvature scale space (CSS), in which corners are defined as the local maxima of the absolute value of curvature. Corners are detected when the scale is large, which reduces the effect of noise but locates the position of the corner imprecisely. Thereafter, they refine the scale and examine the same corners to locate them precisely. Our proposed corner detection algorithm adapts the idea of the CSS to fit triangles of different scales to the 3-D contour.

To maintain the desired contact force, we apply a controlled force in the direction of the contact. We combine the advantages of vision and force sensors (global vision information/high bandwidth on local force situation). According to the classification system described by Nelson et al. [15], we use a hybrid control scheme in which force and vision sensors are only applied in the orthogonal directions. The high-level task description in the task frame formalism [16], [17] makes it possible to precisely describe each direction being controlled; however, in real applications, even if only working in orthogonal directions, one has to compensate for the couplings arising between force and vision sensing systems. Methods for compensating force sensors for the gravitational and dynamic forces within force measurements have been investigated in [18]–[20]. We show the benefits of direct acceleration measurement for compensation and investigate the importance of the sensor position, as we have shown in [21]. Wang and Yuan [22] proposed a 6-degree-of-freedom (DOF) acceleration sensor.
We focus only on the measurement of linear acceleration because of the limitations in our sensor system.

Baeten and De Schutter [8] showed improved performance during contour following by compensating the camera pose for tool deformation under the current contact force; however, when working on compliant surfaces, one has to instead compensate for deformations in the environment. In order to predict the deformations, a model of the environment is necessary. This can be obtained by finite-element methods, e.g., [23], or by online identification. The excitation of the system is important for reliable estimation of, e.g., damping parameters [24].

As we have shown in [25], we use a multisensor contour-following algorithm using compliant workpieces. We identify parameters of the environment such as stiffness, mass, and damping using recursive least squares (RLS) online identification (e.g., [26]). We divide the workspace of the robot into a grid in order to estimate position-dependent parameters, as proposed by Love and Book [27]. We are thus able to compensate the visual measurement results for environmental deformations. Moreover, we improve the identification of environmental parameters by providing a robot position signal with high dynamic range using a rigid-body observer (RBO), as proposed in [28], combining the data provided by an acceleration sensor with the robot position signal provided by the robot controller [29].

This paper is organized as follows. In Section II, we describe how we compensate for inertial forces within the force measurement results, as previously reported in [21]. Different methods to obtain the linear acceleration signal are compared.

Fig. 1. Overview of the multisensor contour-following algorithm.

In Section III, we present the look-ahead visual control scheme, as we have shown in [6]. We extend the acquisition of the feature path with validation, filtering, and path prediction. Moreover, corners are detected in the path data using a triangle-fitting approach.
Decoupling of the parallel vision-force control loop is discussed in Section IV. A position-dependent linear spring–mass–damper system is shown to estimate deformations under current contact forces for compensation, as shown in [25]. Section V deals with online identification of the environment using observer-based position estimation. Section VI shows an experiment on contour following that combines all proposed algorithms. We conclude this paper in Section VII. Fig. 1 shows an overview of the complete multisensor contour-following algorithm. Experimental results are given in each section.

A. Experimental System Setup

We use a KUKA KR60-2 industrial robot for our experiments, as shown in Fig. 2. Using the Robot Sensor Interface, the Cartesian positions and the Cartesian control input values are exchanged with the external controller PC. The controller calculates the control input values based on measurement results obtained from the attached sensors using a sampling interval of T = 4 ms. Because we are working in a compliant environment, the stiffness of the robot and of the force-torque (FT) sensor can be neglected.

We obtain 3-D vision data using a calibrated robot-mounted triangulation-based laser line scanner with an acquisition speed of approximately 50 frames/s. Fig. 3 shows a schematic representation of the sensor system.

To measure force and acceleration, we either use a 6-DOF stiff FT sensor or a compliant FT-acceleration (FTA) sensor, each with an acquisition speed of 500 samples/s. Additional acceleration sensors are used to measure acceleration directly at the tool.
Fig. 2. (a) Robot system equipped with (Acc-Sensor) an acceleration sensor, (Laser–Camera System) a camera and a laser, and (FT-Sensor) an FT sensor used for contour-following applications. (b) Camera view (grayscale camera) of the laser projection. (c) Laser light on the contour.

Fig. 3. Setup for laser–camera triangulation. (a) The camera advance length lcam depends on the mounting position and angle of the system. (b) and (c) Deformation by contact influences the visual measurement.

B. Task Description

To follow a certain visible contour (e.g., a weld seam) of an unknown free-form surface with the robot-guided tool, we acquire the contour with our vision system. We describe the motion of the robot along the contour at a desired velocity vx in the x-direction of the tool frame, as shown in Fig. 2(a). Visual control is used to control the y-direction of the tool (transverse to the path direction) and to adapt the orientation of the tool along the direction of the path, as shown in Section III. The z-direction is either controlled by force-feedback control or by visual control, depending on the desired application. An offset dy, dz can be defined to maintain a desired distance between the tool and the contour. We describe our complete task with linear directions x, y, and z and angular directions α, β, and γ around z, y, and x, respectively, as follows:

x: velocity vx ≤ 150 mm/s along the contour;
y: distance dy between the tool and the contour;
z: force Fz ≤ 10 N or distance dz ≥ 0 mm between the tool and the contour;
γ: rotation γ around the tool x-axis to align the tool y-axis with the workpiece plane;
β: rotation β around the tool y-axis to align the tool z-axis normal to the contour;
α: rotation α around the tool z-axis to align the tool x-axis tangential to the contour.

II. FORCE COMPENSATION

We implement a proportional–integral–derivative (PID) force controller in the force-controlled direction z.
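A discrete PID force law of this kind can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gains, the class structure, and the 10-N setpoint are assumptions; only the 4-ms sampling interval is taken from the text.

```python
# Minimal sketch of a discrete PID force controller acting in the
# force-controlled z-direction. Gains are illustrative assumptions.
KP, KI, KD = 0.002, 0.0005, 0.0001   # hypothetical gains (mm per N)
T = 0.004                            # 4 ms sampling interval (from the paper)

class ForcePID:
    def __init__(self, f_desired):
        self.f_desired = f_desired   # desired contact force in N
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, f_measured):
        """Return a z-position correction from the force error."""
        error = self.f_desired - f_measured
        self.integral += error * T
        derivative = (error - self.prev_error) / T
        self.prev_error = error
        return KP * error + KI * self.integral + KD * derivative

pid = ForcePID(f_desired=10.0)       # e.g., 10 N, the task's force limit
dz = pid.step(f_measured=8.0)        # too little force -> press further (+z)
```

In a real loop, the returned correction would be added to the Cartesian control input of the position-controlled robot each cycle.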
The measured force Fm from the FT sensor is composed of three parts: the static (gravitational) force Fs, the dynamic force Fa, and the external contact force F. In order to perform force-feedback control, we extract the contact force F from the measured data Fm:

F = Fm − Fs − Fa.  (1)

Parameters of the robot-guided tool are necessary to calculate Fs and Fa. The mass mtool and the center of the mass r = (rx, ry, rz)^T with respect to the sensor coordinate system are calculated from measurements in different orientations. With the measured forces Fm = (Fmx, Fmy, Fmz)^T and measured torques Mm = (Mmx, Mmy, Mmz)^T, this leads to

mtool = sqrt(Fmx^2 + Fmy^2 + Fmz^2) / g  (2)

Mm = r × Fm = (ry·Fmz − rz·Fmy, rz·Fmx − rx·Fmz, rx·Fmy − ry·Fmx)^T  (3)

rx = −Mmy/Fmz = Mmz/Fmy  for Fmx = 0
ry = −Mmz/Fmx = Mmx/Fmz  for Fmy = 0
rz = −Mmx/Fmy = Mmy/Fmx  for Fmz = 0.  (4)

A. Static Compensation

The robot controller provides Euler angles α, β, and γ for the rotations around the z-, y-, and x-axes, respectively. We determine the rotation matrix between base and tool frame from these angles. Gravity affects the forces only in the z-direction of the base frame; hence, the static forces Fs = (Fsx, Fsy, Fsz)^T are obtained by

Fs = mtool · (−sin β, cos β sin γ, cos β cos γ)^T · g.  (5)

B. Dynamic Compensation

Acceleration causes dynamic forces Fa due to inertia of the tool. With linear acceleration a = (ax, ay, az)^T, these forces are calculated using

Fa = mtool · a.  (6)

The acceleration signal a is essential to estimate Fa. We compare three methods of measuring acceleration:
1) second derivative of the robot position signal;
2) direct measurement using the compliant FTA sensor (the acceleration sensors are inside the FTA sensor on the side of the robot's wrist);
3) direct measurement by acceleration sensors on the tool.
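The compensation pipeline of (1), (5), and (6) can be sketched as follows. The function names and the example tool mass are assumptions for illustration; the formulas are the ones above.

```python
import numpy as np

# Sketch of contact-force extraction F = Fm - Fs - Fa (eq. (1)):
# gravity term from the tool orientation (eq. (5)) and inertial term
# from the measured linear acceleration (eq. (6)).
G = 9.81  # m/s^2

def static_force(m_tool, beta, gamma):
    """Gravity expressed in tool coordinates, eq. (5); angles in rad."""
    return m_tool * G * np.array([-np.sin(beta),
                                  np.cos(beta) * np.sin(gamma),
                                  np.cos(beta) * np.cos(gamma)])

def contact_force(f_measured, m_tool, beta, gamma, accel):
    """Extract the external contact force from the raw FT reading."""
    f_s = static_force(m_tool, beta, gamma)    # eq. (5)
    f_a = m_tool * np.asarray(accel)           # eq. (6)
    return np.asarray(f_measured) - f_s - f_a  # eq. (1)

# With a 2-kg tool at rest (hypothetical mass), the raw reading equals
# the gravity term, so the extracted contact force is zero.
f = contact_force(static_force(2.0, 0.0, 0.0), 2.0, 0.0, 0.0, [0, 0, 0])
```

Which signal feeds `accel` is exactly the question the three methods above address.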
Fig. 4. Acceleration measurement in the z-direction. (Upper plot) Commanded acceleration in z. (Lower plot) Commanded acceleration in x.

In Fig. 4, we compare experimental results from methods 1) and 2). The Cartesian position for method 1) is gained by calculation of the forward kinematics using the measured angles of the six motors. Dynamics that occur within the drive train cannot be measured at the motors; thus, the measurement results obtained using method 1) appear to be more damped than those obtained using direct measurements from method 2) (see Fig. 4 at t > 0.3 s). Moreover, the sampling rate in method 1) is much lower than that in method 2), which explains the noisy curves for method 1) in Fig. 4. Furthermore, physical wave propagation through the robot's arm causes a time lag.

An ideal robot would only accelerate in the direction of the commanded motion, but in reality, vibrations occur in all other directions as well. In the upper plot of Fig. 4, we show the results of commanding a motion in the z-direction of the tool frame and measure the acceleration in the same direction. The signals are not delayed because the resulting acceleration is caused by the commanded motion of the motors; thus, the calculated acceleration primarily aligns with the resulting actual acceleration. In the lower plot of Fig. 4, however, we show the results of measuring the acceleration in an orthogonal direction to the commanded motion. We only measure vibrations orthogonal to the commanded motion, which must propagate through the drive train to be measured by the motor angle. Method 1) lags method 2) by approximately 20 ms in this experiment.

Direct acceleration measurement provides a higher sampling rate and a higher signal resolution than the derivative of the robot position signal does. Moreover, direct measurement represents the actual motion of the TCP and is not affected by delays within the drive train. As a result, we use the acceleration sensors to calculate inertial forces for dynamic force compensation.

The sensor position is important, particularly when using a compliant FT sensor. Due to the compliance of the sensor itself, tool oscillation can occur. These oscillations cannot be measured by the internal sensors of the FTA sensor [method 2)].

Fig. 5. Dynamic compensation using the internal versus external acceleration sensors with the compliant FT sensor at high acceleration motion.

We therefore attached external acceleration sensors directly on the tool [method 3)] to measure the actual tool oscillation. The comparison between methods 2) and 3) is shown in Fig. 5. Tool oscillation causes an oscillating force signal. As a result, method 3) provides the best force compensation.

III. CONTOUR-FOLLOWING ALGORITHM

For the proposed contour-following algorithm, we use a look-ahead vision sensor. Features are not acquired directly at the TCP but at a look-ahead distance in front of the TCP [see Fig. 3(a)]. This means that we predict "future" control input values along the contour. As a result, we can take system time delays into account and control the velocity of the robot, depending on the upcoming contour curvature. Moreover, the tool itself does not occlude the camera because features are not acquired directly at the TCP. The proposed contour-following algorithm controls the position, orientation, and velocity of the tool along the path. It consists of data acquisition, sorting, filtering, prediction, and corner detection, as shown in Fig. 1. The control structure is shown in Fig. 6.

A. Acquisition of the Feature Path

With a frame rate of approximately 50 fps, the vision sensor provides 3-D features pj = (pjx, pjy, pjz)^T of the contour. These data are stored in the feature path P:

P = {p0, p1, . . . , pN}.  (7)

A change in the distance between the tool and workpiece or a change in the orientation of the tool frame changes the advance length lcam (see Fig. 3). Fig. 7 shows a measurement along a corner with low linear velocity vx. The rotation of the laser line lets lcam decrease, resulting in retrograde sampling (the order of sampling does not match the progression along the contour). As a result, a short path segment gets recorded multiple times; however, the calculations for the visual control scheme rely on correct geometrical order. Thus, the visual data have to be validated and sorted first.
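The core of this sorting step can be illustrated with a minimal sketch: project each new feature onto the existing path segments and decide between inserting and appending. The helper name `add_feature` is hypothetical, and the bend-angle outlier check described in the next subsection is omitted here.

```python
import numpy as np

# Sketch: project a new feature onto each path segment p[i-1] -> p[i];
# lambda in (0, 1) means it falls inside that segment and is inserted
# there; otherwise it is appended at the end of the path.
def add_feature(path, p_new):
    p_new = np.asarray(p_new, dtype=float)
    for i in range(1, len(path)):
        n = path[i] - path[i - 1]                      # segment direction
        lam = np.dot(n, p_new - path[i - 1]) / np.dot(n, n)
        if 0.0 < lam < 1.0:                            # inside segment i
            path.insert(i, p_new)
            return path
    path.append(p_new)                                 # beyond the last segment
    return path

path = [np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])]
add_feature(path, [4.0, 1.0, 0.0])     # inserted between the two features
add_feature(path, [12.0, 0.0, 0.0])    # appended at the end
```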
Fig. 6. Structure of look-ahead visual control. Control input values are generated in advance and are selected by predicting the future robot position.

Fig. 7. Visual measurement with a look-ahead vision sensor. The rotation of the laser line in the corner results in a retrograde effect on the contour data in which the order of sampling does not match the progression along the contour. An image of an actual corner is shown in Fig. 9.

Fig. 8. (a) Append a new feature pnew to the end of the path or (b) insert it between existing points.

B. Sorting and Validation of the Path Data

Because the order of measurement may not match the progression along the contour, we cannot append every acquired feature pnew directly to the end of the feature path P. Hence, we have to either append pnew to the end of P [Fig. 8(a)] or insert it between existing features [Fig. 8(b)]. We therefore calculate line segments si between neighboring features pi−1 and pi for each position along the path and calculate the normal plane Ni through the point pnew with the normal vector in the direction of si:

si: x = pi−1 + λi · ni,  with ni = pi − pi−1  (8)
Ni: ni · (x − pnew) = 0.  (9)

The intersection of si and Ni leads to λi. There exists exactly one λi along the path that satisfies one of the following cases.
1) λi > 1 at the end of P: Append pnew to P [Fig. 8(a)].
2) 0 < λi < 1: Insert pnew between pi−1 and pi [Fig. 8(b)].

Fig. 9. (Left) Contour with a corner. (Right) Measurement of the corner. Outliers are detected as measurement errors; the sharp corner is stored into the path.

To identify measurement errors, we reject points that cause sharp bends between path segments, e.g., if the angle ϕ is above a maximum angle ϕmax of 50° (see Fig. 8):

ϕ ≤ ϕmax.  (10)

The measurement results obtained from an actual corner of the contour are, however, also rejected by (10). Hence, we do not directly delete new points that violate (10) but store them temporarily. If the directions of the following acquisitions (e.g., 12 points) cause the same bend with increasing distance to the last element of P, these stored elements will be appended to the feature path, because we can then assume that these features belong to an actual corner. An experimental result of this algorithm is shown in Fig. 9. The outliers are rejected according to (10), whereas the real corner is stored into the path.

C. Orientation Measurement

Fig. 10. The orientation of the path is measured in advance by the tangent tadv. The robot path R and the feature path P are not identical due to a desired offset dy between R and P.

As Fig. 10 shows, we measure the future orientation of the path using a tangent tadv at the advance point padv. We use the tangent tadv to calculate a homogeneous transformation matrix that defines the desired orientation of the TCP, with the x-axis parallel to the tangent and the z-axis orthogonal to the workpiece. To measure the orientation around the path, our sensor provides an additional measurement p*i within the laser line on the contour to define the vector vadv = p*adv − padv in the workpiece plane. The vectors n, o, and a define the directions of the x-, y-, and z-axes of the transformation matrix padv_base T at the advance point padv as follows:

n = tadv / ||tadv||    a = (n × vadv) / ||n × vadv||    o = a × n  (11)

padv_base T =
( nx  ox  ax  0 )
( ny  oy  ay  0 )
( nz  oz  az  0 )
(  0   0   0  1 ).  (12)

We decompose (12) into a set of roll-pitch-yaw (RPY) Euler angles γadv, βadv, and αadv of the desired tool frame [30] at the advance point padv. These desired angles and positions are stored in the set-point path S = {s0, s1, . . . , sM} with

sk = (padv,x, padv,y, padv,z, γadv, βadv, αadv).  (13)

D. Prediction

To obtain the correct position and orientation of the tool frame along the contour, the path R = {r0, r1, . . . , rM} of the robot must be equal to the set-point path S; however, S is acquired in advance, so we cannot use S directly as control input. Moreover, we must take system time delays into account. Data filtering and the transfer behavior of the robot cause a system time delay of τsys = 195 ms in our experimental system. Hence, a given Cartesian control input needs the time τsys until it is actually reached by the position-controlled robot. We use the actual velocity vx of the robot to estimate the distance l̂ that the robot will move during this delay:

l̂ = vx · τsys.  (14)

Starting at the current robot position ract, we want to predict the position that the robot reaches after traveling l̂. Because the look-ahead measurement is only available for P, the position pact on the feature path corresponding to the current robot position ract needs to be specified. We define this correspondence by the shortest distance from the current robot position ract to the feature path P in the normal plane E.
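The delay-distance estimate and the correspondence search just described can be sketched as follows. The 195-ms delay is the paper's value; the function names are illustrative, and the plane normal and segment endpoints are simply passed in.

```python
import numpy as np

# Sketch: estimate the distance traveled during the system delay
# (l_hat = vx * tau_sys) and find the feature-path point that lies in
# the plane through r_act with normal n_e, on the segment p -> q.
TAU_SYS = 0.195                        # 195 ms system delay (from the paper)

def travel_during_delay(vx):
    return vx * TAU_SYS

def corresponding_feature(r_act, n_e, p, q):
    """Intersect the plane {x : n_e . (x - r_act) = 0} with segment p->q."""
    n_g = q - p
    denom = np.dot(n_e, n_g)
    if abs(denom) < 1e-12:             # segment parallel to the plane
        return None
    lam = np.dot(n_e, r_act - p) / denom
    if 0.0 <= lam <= 1.0:              # intersection lies on the segment
        return p + lam * n_g
    return None

l_hat = travel_during_delay(100.0)     # 100 mm/s -> 19.5 mm of travel
p_act = corresponding_feature(np.array([5.0, 2.0, 0.0]),
                              np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 0.0]),
                              np.array([10.0, 0.0, 0.0]))
```

In the full algorithm, the segment index is searched along the feature path until an intersection with 0 < λ < 1 is found.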
We therefore calculate the intersectionof the normal plane E to the robot path R with a line segmentgi of the feature path (see Fig. 10)E : nE · (x − ract) = 0, with nE = ract − rB (15)gi : x = pi + λ · ngi, with ngi= pi+1 − pi. (16)Fig. 11. Approximation of a tangent to the contour at padv between pQ andpR by a secant, linear regression, or the derivative of quadratic regression.Fig. 12. Angle measurement error by different approximations of a tangent toa curved path segment.rB is a point 5 mm from ract, providing the calculation ofa reliable normal vector nE. The index i is chosen so thatthe intersection of E and gi is within 0 < λ < 1, definingthe feature position pact corresponding to the current robotposition ractpact = pi + λ · ngi. (17)Using (14), it is possible to predict the position ˆp that is reachedwithin τsys starting from the current position pact. Thus, in eachcycle, we select a set point within S closest to ˆp as control inputin order to obtain a minimum alignment error.E. FilteringThe orientation of the tool frame is calculated from thetangent to the contour in (11). As shown in Fig. 11, a tangent tothe discrete data in the presence of noise is calculated by linearapproximation within a neighborhood of the tangent point padvbetween pQ and pR. We compare the following three differentapproximations:1) secant between pQ and pR;2) linear regression line between pQ and pR;3) first derivative at padv of the quadratic regression be-tween pQ and pR.The contour in Fig. 11 is a sinusoidal function with noiseand nonequidistant spacing. We determine the ideal angle ofthe tangent along the contour using the first derivative of the(ideal) sinusoidal function to calculate the error of the threeapproximations. Linear path approximation results in a largererror, as shown by the dotted and dashed lines in Fig. 
12.The approximation of the tangent by the first derivative ofthe quadratic regression produces a smaller error (solid line);hence, we prefer quadratic regression for contour following.Quadratic regression in R3is computationally expensive.To be able to calculate the regression within one interpolationcycle, we chose to approximate the regression as follows. First,
  • 274 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 62, NO. 2, FEBRUARY 2013we project the data between pQ and pR onto a best fit plane;then, we approximate a quadratic function to the projected datawithin the plane. We describe the best fit plane EF k in everytime step k dependent on the coordinates x, y, and z as follows:EF k : y = ak + bk · x + ck · z. (18)(In the remainder of this section, we omit the index k to increasereadability). The unknown parameters a, b, and c are obtainedby performing a least squares fit to the measurements pi =(pix, piy, piz)T, (i = Q, . . . , R). By projecting the features pionto EF , we get the projected features ei = (eix, eiy, eiz)T.With the normal vector n = (−b/a, 1/a, −c/a)T, we definethe axes of the plane coordinate system as follows:aE =nnoE =eR − eQeR − eQnE = oE × aEEbaseT =nE oE aE eQ0 0 0 1. (19)EbaseT transforms a point from the plane coordinate system intobase coordinates; hence, to describe ei(i = Q, . . . , R) in R2, eiis transformed to the plane coordinate systemci = EbaseT ·ei1. (20)Because the z-axis of the plane coordinate system is definedby the normal vector of E [see (19)], the z-component of theprojected point ci = (ciu, civ, 0)Twithin the plane coordinatesystem is zero. A regression in R2can be calculated from theu- and v-components of ci. A quadratic function relating u to vcan be written asv = d + e · u + f · u2. (21)The unknown parameters d, e, and f of the quadratic regressionare calculated from the measurements ci by performing a leastsquares fit. From (21), we calculate the filtered tangent pointcadv in the center of the regression data ci in R2and calculatean arbitrary point ctan of the tangent in R2using the derivativeof (21). We transform these points to base coordinates asfollows:padv = EbaseT ·⎛⎝cadv01⎞⎠ ptan = EbaseT ·⎛⎝ctan01⎞⎠ (22)tadv = ptan − padv. 
(23)As a result, (23) provides the filtered direction of the tangenttadv to the contour at the filtered advance point padv, which isused for the calculation of the transformation matrix in (12).The projection of the features onto a best fit plane is sufficientbecause one always applies the regression only to a shortsegment of the path; hence, the path segment can be describedas a function (21).Fig. 13. Visual measurement of a contour with corners.F. Detection of CornersUsing the tangent as a measurement of orientation workswell for continuous contours. A corner is a discontinuity thatmust be considered separately (see Fig. 13). Unlike continuouscontours, where the tool is guided with a desired velocityalong the contour, the tool must stop in corners and thenchange direction before continuing in the new direction. Manyapproaches have been proposed to detect corners in orderedlists of 2-D contour points. We adopt the idea of the CSS [14]to detect corners at different scales. Online corner detectionneeds to be efficient due to limited computation time withinone interpolation cycle (T = 4 ms). We combine the triangle-fitting approach with the calculations of orientation along thecontour that are already calculated in (23). Hence, with theangle ∠(tadv, tadv−1) between the tangents t of two subsequenttime steps, the change of angle ϕ per distance s is calculated asfollows:dϕds≈∠(tadv, tadv−1)padv − padv−1. (24)This is a measurement of the sharpness of the curvature alongthe contour but does not identify corner positions. A localmaximum of (24) indicates a possible corner that is inspectedby the following algorithm (see Fig. 
13).1) The corresponding feature of the local maximum of (24)defines point A of the triangle.2) The triangle A, B, C is defined by AB = AC = s(we use s = 11 mm in the first iteration).3) A is moved along the contour between B and C (withoutmoving B and C), with AB , AC > s/2, until themaximum angle φmax between AB and AC is found.4) If φmax is under a threshold of, e.g., 50◦, discard thepossible corner; otherwise, reduce the triangle side lengths by, e.g., 3 mm and continue with step 2), whereas thecorresponding position to φmax defines the next point A.If s is very small (e.g., s < 2 mm) and φmax over thethreshold, a corner is detected.Fig. 14 shows the measurement of a corner angle for threeiterations. The measurement clearly shows a corner, since thetriangle at large scale detects a significant curvature in thepath segment as well as at the small scale. If it was onlya curve with high curvature, the small triangle would havedetected a very low corner angle. Using only small trianglesfor corner detection would not be robust enough, because the
measurement of curvature on a very small scale is sensitive to noise within the measurement. The exact corner position is calculated using the smallest triangle.

Fig. 14. Measurement of the corner angle by different triangle scales s.

Fig. 15. Corner detection makes it possible to stop in the corner, rotate, and then continue in the new direction to reduce the alignment error. A desired offset dy = 2 mm between the feature path and the robot path is applied.

Fig. 15 shows the robot path at a corner (a desired offset dy = 2 mm between the feature path and the robot path is applied in this experiment). In the case indicated by the dashed line, the robot moves with constant velocity in the region of the corner. In the solid-line case, the robot stops in the corner, rotates, and continues in the new direction. As we can see, the error along the contour is significantly reduced.

G. Control of the Maximum Angular Velocity

Some applications, as well as the robot itself, allow only a certain maximum angular velocity of the tool frame. Reducing the angular velocity without reducing the linear velocity vx of the robot would cause significant misalignment along the contour. As a consequence, vx must be reduced, which automatically reduces the angular velocity along the path. Because we measure the control input values in advance, we can calculate the change in angle per unit distance. For the change in angle β per distance s, we have

    dβ_i/ds_i ≈ (β_i − β_i−1) / ‖p_adv − p_adv−1‖.    (25)

The definition of a maximum angular velocity dβ/dt leads to the maximum linear velocity vx by

    vx_max,i = ds_i/dt = (ds_i/dβ_i) · (dβ_i/dt) = (dβ_i/dt) / (dβ_i/ds_i).    (26)

Equation (13) defines the desired position of the tool along the contour, and (26) describes the velocity profile; hence, the complete trajectory for the robot is defined. In Fig. 16, the linear velocity vx is plotted for a contour on a wavy surface, where the maximum angular velocity is limited to dβ/dt = 50°/s.
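The velocity limit of (25) and (26) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the nearest-point bookkeeping are our own assumptions.

```python
import math

def max_linear_velocity(beta, p_adv, dbeta_dt_max, v_desired):
    """Limit the linear velocity so that the angular velocity along the
    path never exceeds dbeta_dt_max, cf. (25) and (26).

    beta  -- list of tool angles (rad) at the advance points
    p_adv -- list of advance points as (x, y) tuples
    """
    v_max = []
    for i in range(1, len(beta)):
        ds = math.dist(p_adv[i], p_adv[i - 1])       # distance between advance points
        dbeta_ds = abs(beta[i] - beta[i - 1]) / ds   # change of angle per distance, (25)
        if dbeta_ds > 0.0:
            v_max.append(min(v_desired, dbeta_dt_max / dbeta_ds))  # (26), capped at v_desired
        else:
            v_max.append(v_desired)                  # straight segment: no reduction needed
    return v_max
```

On straight segments the limit stays at the desired velocity; tight curvature measured in advance pulls the linear velocity down before the robot reaches it.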
Fig. 16. Linear velocity vx with the angular velocity limited to dβ/dt ≤ 50°/s.

Fig. 17. Angular velocity with and without the limit dβ/dt ≤ 50°/s.

Fig. 18. Error in the actual angle β around the y-axis of the tool frame along the workpiece.

Fig. 19. Visual measurement of a contour as the contact force is changed. This shape may lead to undesired alignments of the tool by visual control.

The desired linear velocity is defined as vx = 100 mm/s. The average velocity is approximately 70 mm/s. As shown in Fig. 17, the maximum angular velocity is not exceeded when the linear velocity is adapted to the curvature in advance. Fig. 18 shows that the alignment error is significantly reduced if the linear velocity is automatically adjusted (while the average velocity is the same; see Fig. 16).

IV. DECOUPLING OF FORCE AND VISION

Changing contact forces during contour following influence the visual controller. The vision system cannot distinguish
between shapes of the contour and deformations of the contour that are caused by the contact situation itself. Deformations are regarded as real characteristics of the shape, leading to significant misalignment of the tool along the contour. Such a measurement with changing contact force is shown in Fig. 19; therefore, the deformations must be compensated for in order to avoid misalignment of the tool along the contour.

Fig. 20. Model of force control and compensation. Compensated features p_comp are used for visual control (see Fig. 6) instead of p.

A. Compensation of Deformations

The FT sensor provides measurements of the current contact force on the workpiece. To compensate for the deformation of the path within the visual measurement, a model of the workpiece behavior is necessary. We assume the deformation Δz to be a linear function of the contact force, described by the stiffness K(l_i). The position-dependent stiffness K(l_i) is obtained by identification, as shown in the following section. Using the current contact force F_i, we can calculate the deformation in each time step i as

    Δz_i = F_i / K(l_i).    (27)

In our experiments, we divided the contour into sections with a width of Δl_i = 10 mm, in which the parameters are assumed to be constant. The compensation by (27) is valid for the static case only. In dynamic situations, we must take dynamic components, such as inertial forces and damping, into account. Using the mass M and the damping D, we obtain the following extension of (27):

    Δz_i = (F_i − M(l_i)·Δz̈_i − D(l_i)·Δż_i) / K(l_i).    (28)

Moreover, workpieces with low stiffness deform locally at the point of contact, as shown in Fig. 3(c). This effect is taken into account by modifying the stiffness K(l_i) with the local term ΔK(l_i), because the deformation at the advance length l_cam differs from the deformation at the TCP.
The deformation of the workpiece under the current contact force is then described as

    Δz_i = (F_i − M(l_i)·Δz̈_i − D(l_i)·Δż_i) / (K(l_i) + ΔK(l_i)).    (29)

Fig. 21. Compensation for deformation within the visual data reduces the alignment error of the visual controller.

B. Filter and Delay Issues of Compensation/Decoupling

The compensation (29) is shown in Fig. 20. The quality of the decoupling depends on synchronized robot data, vision sensor data, and force measurements. Synchronization is achieved by taking into account the following delays:

τ_IP — image processing delay;
τ_fil — filter delay caused by filtering of the force signal;
τ_mech — mechanical system delay between the measured position (measured on the motor shaft) and the actual position of the TCP; this delay is caused by dynamics within the drive trains of the robot.

We filter the measured force signal to reduce noise, resulting in a filter delay τ_fil. The calculated dynamic force F_dyn passes through a filter with the same time delay and is subtracted from the filtered force measurement. From this signal, we calculate the deformation Δp_comp as shown in Fig. 20, which is used for the compensation of the vision data.

Image processing calculates the 3-D position of the features along the contour based on the camera image and the actual robot position. This measurement must be delayed by (τ_fil − τ_IP) to take τ_fil into account, since image processing already introduces a delay of τ_IP. By subtracting Δp_comp from this signal, we obtain the compensated feature positions, which can be used for visual control.

Alignment errors occur when the orientation is adapted to the deformed path, as shown in Fig. 21 by the dashed line. The compensation, particularly when the dynamic parameters are taken into account, reconstructs the nondeformed path, as shown in Fig. 22. Thus, the alignment error is significantly reduced, as shown in Fig. 21.
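The deformation model of (27)–(29) can be sketched as follows. This is a simplified illustration under our own assumptions: the per-block parameter lookup and the externally supplied deformation derivatives are hypothetical, not the authors' code.

```python
def block_index(l, block_width=10.0):
    """Contour position l (mm) -> index of the constant-parameter block
    (the paper uses 10-mm sections)."""
    return int(l // block_width)

def deformation(F, l, params, dz_vel=0.0, dz_acc=0.0, local=False):
    """Deformation dz under contact force F at contour position l, cf. (27)-(29).

    params maps block index -> (M, D, K, dK); dz_vel and dz_acc are the
    first and second derivatives of the deformation (zero in the static
    case (27)); local=True adds the local stiffness term dK of (29).
    """
    M, D, K, dK = params[block_index(l)]
    stiffness = K + dK if local else K            # local softening at the contact point
    return (F - M * dz_acc - D * dz_vel) / stiffness
```

With M = D = 0 and local=False this reduces to the static compensation (27); passing the deformation derivatives yields the dynamic extension (28)/(29).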
Fig. 22. The visual measurement registers the deformation of the workpiece caused by the changing contact force. Compensation reconstructs the nondeformed path for visual control.

The following section covers the identification of the necessary dynamic parameters of the environment.

V. IDENTIFICATION OF THE WORKPIECE PARAMETERS

The mass, damping, and stiffness of the workpiece are used within a model in order to compensate for deformations. These values are either known (from offline identification) or can be obtained/updated by online identification. Thus, with every task execution, the compensation for deformations becomes more accurate. We apply block identification with a block length of 10 mm along the contour.

A. Linear Spring–Mass–Damper Model

We model the workpiece as a linear spring–mass–damper system, described by

    G(s) = Z(s)/F(s) = 1 / (M·s² + D·s + K)    (30)

where Z is the position of the TCP in the direction of the measured force F, and M, D, and K are the mass, damping, and stiffness, respectively. In the z-domain (T = 4 ms), we describe this system as follows:

    G(z) = Z(z)/F(z) = B(z⁻¹)/A(z⁻¹) = (b₁z⁻¹ + b₂z⁻²) / (1 + a₁z⁻¹ + a₂z⁻²)    (31)

where bₙ and aₙ, n = 1, 2, are the coefficients obtained by the bilinear transformation of (30), with

    M̂ = −(T²/4) · (â₂ − 1 − â₁)/(b̂₁ + b̂₂),    D̂ = −T · (â₁ − 1)/(b̂₁ + b̂₂)    (32)
    K̂ = (1 + â₁ + â₂)/(b̂₁ + b̂₂).    (33)

Fig. 23. Identification of the stiffness of the contour within 30 runs of recursive identification. After approximately 15 runs, the stiffness value K is quite stable.

Fig. 24. Rigid-body Luenberger observer (RBO), adapted from [28].

With the data vector ψ, the parameter vector θ̂, and the dead time d, the system is described as follows:

    ψᵀ(k) = [−Z_k−1, −Z_k−2, F_k−d−1, F_k−d−2]    (34)
    θ̂ = [â₁, â₂, b̂₁, b̂₂]ᵀ    (35)
    ẑ(k|k−1) = ψᵀ(k) θ̂(k − 1).    (36)

The well-known RLS identification algorithm is applied to (36). The performance of the identification depends on the excitation of the system.
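The recursive least-squares step applied to (34)–(36) follows the standard textbook form [26]. The sketch below is our own batch-style implementation for illustration, not the authors' code; the initial covariance and forgetting factor are assumptions.

```python
import numpy as np

def rls_identify(Z, F, d=0, lam=1.0):
    """Estimate theta = [a1, a2, b1, b2] of the ARX model (31) from TCP
    positions Z and contact forces F via recursive least squares, (34)-(36).

    d is the dead time; lam is an optional forgetting factor (1.0 = none).
    """
    theta = np.zeros(4)
    P = 1e6 * np.eye(4)                  # large initial covariance: no prior knowledge
    for k in range(2 + d, len(Z)):
        psi = np.array([-Z[k - 1], -Z[k - 2], F[k - d - 1], F[k - d - 2]])  # data vector (34)
        e = Z[k] - psi @ theta           # prediction error, cf. (36)
        gain = P @ psi / (lam + psi @ P @ psi)
        theta = theta + gain * e         # parameter update
        P = (P - np.outer(gain, psi @ P)) / lam
    return theta
```

Given the converged coefficient estimates, the physical parameters M, D, and K then follow from (32) and (33).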
In our experiments, we excite the system using a pseudorandom binary sequence (PRBS) with the force amplitude F_PRBS = ±1 N while moving with a constant linear velocity along the contour. With this excitation and an identification block width of 10 mm along the contour, the parameters M, D, and K are estimated within approximately 15 runs of contour following at a linear velocity of vx = 30 mm/s, as shown in Fig. 23.

The robot position signal has a lower sampling rate than the FT sensor. In particular, in dynamic situations, the robot position signal does not capture the actual motion of the tool. Oscillations that occur at the TCP cannot be completely measured by the robot controller, because not all oscillations propagate through the complete drive train into the motors, where the joint angles are measured. By improving the quality of the robot position signal with an observer, we therefore increase the quality of the identification algorithm. We applied an observer (adapted from [28]) that uses the robot position x_rob as input and an acceleration sensor signal a_sens as a feedforward signal. The model error is eliminated by a PID controller. This structure, shown in Fig. 24, can be rewritten as a Butterworth filter with the filter parameters K_OP, K_OI, and K_OD (given in the frequency domain for better readability, although applied as a digital filter):

    v_obs = (a_sens/s) · s³ / (s³ + K_OD(s² + K_OP·s + K_OI)) + x_rob·s · K_OD(s² + K_OP·s + K_OI) / (s³ + K_OD(s² + K_OP·s + K_OI)).    (37)
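The idea behind (37) — acceleration through a high-pass path, encoder position through the complementary low-pass path — can be sketched with a first-order complementary filter. This is a deliberate simplification of the third-order Butterworth structure in the paper; the crossover frequency and function name are our own assumptions.

```python
import numpy as np

def complementary_velocity(x_rob, a_sens, T=0.004, fc=5.0):
    """Fuse the low-rate robot position with the acceleration signal into a
    velocity estimate, in the spirit of (37) but only first order.

    x_rob  -- robot positions (m), sampled every T seconds
    a_sens -- tool accelerations (m/s^2) from the acceleration sensor
    fc     -- crossover frequency (Hz) between the two sensor paths
    """
    alpha = 1.0 / (1.0 + 2.0 * np.pi * fc * T)   # weight of the accelerometer path
    v_est = 0.0
    v_out = []
    for k in range(1, len(x_rob)):
        v_rob = (x_rob[k] - x_rob[k - 1]) / T    # low-frequency velocity from encoders
        # integrate the accelerometer for the high-frequency part and
        # pull the estimate toward the encoder velocity at low frequencies
        v_est = alpha * (v_est + a_sens[k] * T) + (1.0 - alpha) * v_rob
        v_out.append(v_est)
    return v_out
```

Steady-state values thus come from the encoders, while fast tool oscillations that never reach the motor-side measurement enter through the accelerometer term.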
Fig. 25. Position measurement after the robot stops abruptly. The robot controller cannot measure the remaining oscillations, whereas the observer does.

Fig. 26. Three-dimensional plot of the contour on a compliant plastic material (automobile dashboard) with high curvature and a corner along the path.

Hence, the acceleration signal is filtered by a high-pass filter, whereas the robot position is filtered by a low-pass filter. The sum of these filtered values provides a signal with reliable steady-state values originating from the robot and high-frequency components originating from the acceleration sensor. A comparison between the unfiltered robot position signal and the observed signal is shown in Fig. 25. There, the robot stops abruptly and causes the tool to oscillate, which is captured only by the observed signal. Using the observed robot position within the RLS identification algorithm, the workpiece parameters can be identified faster, as shown in Fig. 23, because the observed signal reflects the true robot position more accurately.

VI. CONCLUDING EXPERIMENT

In the previous sections, we proposed algorithms for the different capabilities of the multisensor contour-following system:
1) force compensation;
2) vision-based contour following;
3) corner detection;
4) control of the maximum angular velocity;
5) compensation of the deformation of the environment;
6) identification of environmental parameters.

We increased the performance of each of these capabilities and validated the algorithms with component-centric experiments.

Fig. 27. During contour following, the path is deformed because the desired contact force between tool and workpiece changes. The deformation of the path is compensated when all capabilities are activated; hence, the angular error along the complete path is reduced.
The angular error is higher when corner detection, the control of the angular velocity, and the compensation of deformation are deactivated.

Depending on the application, we can combine the proposed capabilities for the desired contour-following task. Fig. 26 shows a contour that contains high curvatures as well as a corner on a compliant workpiece. We follow the contour with our proposed contour-following system and compare the position error and the angular error of the TCP during contour following for the following two cases.
1) Corner detection, control of the maximum angular velocity, and the compensation of deformation are deactivated.
2) All proposed capabilities are activated simultaneously (the minimum sharpness of a curvature in order to be considered a corner is φ = 30°).

We apply changing contact forces to demonstrate the compensation for workpiece deformation. The workpiece parameters are identified as shown in Section V. The stiffness along the contour varies between 1 and 2 N/mm. We calculate the mean squared distance between the measured features and the robot path along the complete contour for both cases. By activating all capabilities, the mean squared position error of the TCP along the complete path is reduced by approximately 30%, because the tool velocity is automatically reduced online at high curvatures and the robot stops in the corner. The mean squared angular error along the complete path decreases by 35%, and the maximum angular error decreases by 8.2° [see Fig. 27 (bottom)].
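The error metric used for this comparison, a mean squared distance between the measured features and the executed robot path, can be sketched as follows. The nearest-point matching between the two point sets is our own simplifying assumption; the paper does not specify the correspondence.

```python
import math

def mean_squared_path_error(features, path):
    """Mean squared distance between measured feature points and the
    closest points of the executed robot path (both given as (x, y) tuples)."""
    sq = [min(math.dist(f, p) ** 2 for p in path) for f in features]
    return sum(sq) / len(sq)
```

Evaluating this metric once per experiment gives a single scalar for comparing the two configurations along the complete contour.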
VII. CONCLUSION

In this paper, we have presented an approach that combines vision, force, and acceleration sensor data for contour-following tasks. The experiments in Section II showed how the force measurements can be compensated for the motions of the robot. In particular, direct acceleration measurement at the tool improves the compensation for inertial forces. Section III presented the visual control scheme, which adapts the position/orientation along the contour and includes the limitation of the maximum angular velocity in Cartesian space. The error along the contour is reduced by an algorithm that detects corners. The visual controller aligns the tool to the contour with a very small alignment error when moving slowly. Larger errors occur at higher velocities in regions of high curvature along the path. This problem was solved by automatically slowing down in these areas. Moreover, we have shown the success of decoupling the vision measurement from deformations of the workpiece that are caused by the contact situation. The alignment error is drastically reduced by the compensation for deformations. By applying acceleration sensors, we improve the identification algorithm that provides the parameters required to compensate for deformations of the workpiece. In Section VI, we presented the results of an experiment that evaluates the system with all proposed capabilities activated simultaneously. The overall contour error is significantly reduced.

The presented approach, consisting of force compensation, vision-based contour following, decoupling, identification, and filtering, provides components to achieve greater autonomy of robotic systems for a variety of contour-following tasks on compliant objects. In contrast to other approaches, we compensate for the deformation of the environment under the current contact force.
The presented approach supports a modular control structure with subtasks that are decoupled from each other. It is therefore possible to develop algorithms for the force control part without affecting the visual controller, and vice versa. For example, the detection of corners/steps along the contour is then independent of any object deformation.

For an industrial application of the presented research results, the autonomy of the system could be increased by improving the performance of each of the presented subtasks. Nonlinear modeling of the environment, as well as an increased frame rate of the image processing, would increase the quality of the compensation and of the visual control. Moreover, a combination of online measurements with offline data would make the system more robust against measurement errors. Furthermore, the complete model of the robot could be included in the visual controller to predict joint velocities and singularities in advance.

REFERENCES

[1] J. Witting, "Recent development in the robotic stitching technology for textile structural composites," J. Textile Apparel Technol. Manage., vol. 2, no. 1, pp. 1–8, Fall 2001.
[2] G. Biegelbauer, M. Richtsfeld, W. Wohlkinger, M. Vincze, and M. Herkt, "Optical seam following for automated robot sewing," in Proc. IEEE Int. Conf. Robot. Autom., 2007, pp. 4758–4763.
[3] A. Koivo and N. Houshangi, "Real-time vision feedback for servoing robotic manipulator with self-tuning controller," IEEE Trans. Syst., Man, Cybern., vol. 21, no. 1, pp. 134–142, Jan./Feb. 1991.
[4] P. I. Corke, "Dynamic issues in robot visual-servo systems," in Proc. ISRR, 1995, pp. 488–498.
[5] P. Corke and M. Good, "Dynamic effects in visual closed-loop systems," IEEE Trans. Robot. Autom., vol. 12, no. 5, pp. 671–683, Oct. 1996.
[6] H. Koch, A. König, K. Kleinmann, A. Weigl-Seitz, and J. Suchý, "Predictive robotic contour following using laser–camera-triangulation," in Proc. IEEE/ASME Int. Conf. AIM, 2011, pp. 422–427.
[7] F. Lange, P. Wunsch, and G. Hirzinger, "Predictive vision based control of high speed industrial robot paths," in Proc. IEEE Int. Conf. Robot. Autom., 1998, vol. 3, pp. 2646–2651.
[8] J. Baeten and J. De Schutter, "Improving force controlled planar contour following using online eye-in-hand vision based feedforward," in Proc. IEEE/ASME Int. Conf. Adv. Intell. Mechatron., 1999, pp. 902–907.
[9] D. Xiao, B. Ghosh, N. Xi, and T. Tarn, "Sensor-based hybrid position/force control of a robot manipulator in an uncalibrated environment," IEEE Trans. Control Syst. Technol., vol. 8, no. 4, pp. 635–645, Jul. 2000.
[10] G. Lambert, S. Wienand, and E. Ersü, "Vision on the fly—by robot-mounted sensor," Robotik, pp. 245–250, 2002.
[11] Y. Zou, M. Zhao, L. Zhang, and C. Jiang, "Development of laser stripe sensor for automatic seam tracking in robotic tailored blank welding," in Proc. 7th WCICA, 2008, pp. 3062–3066.
[12] A. Masood and M. Sarfraz, "Corner detection by sliding rectangles along planar curves," Comput. Graph., vol. 31, no. 3, pp. 440–448, Jun. 2007.
[13] A. Pritchard, S. Sangwine, and R. Horne, "Corner and curve detection along a boundary using line segment triangles," in Proc. Inst. Elect. Eng.—Colloq. Hough Transforms, 1993, pp. P2/1–P2/4.
[14] F. Mokhtarian and R. Suomela, "Robust image corner detection through curvature scale space," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, pp. 1376–1381, Dec. 1998.
[15] B. J. Nelson, J. D. Morrow, and P. K. Khosla, "Robotic manipulation using high bandwidth force and vision feedback," Math. Comput. Model., vol. 24, no. 5/6, pp. 11–29, Sep. 1996.
[16] J. Baeten, H. Bruyninckx, and J. D. Schutter, "Integrated vision/force robotic servoing in the task frame formalism," Int. J. Robot. Res., vol. 22, no. 10/11, pp. 941–954, Oct. 2003.
[17] M. T. Mason, "Compliance and force control for computer controlled manipulators," IEEE Trans. Syst., Man, Cybern., vol. 11, no. 6, pp. 418–432, Jun. 1981.
[18] A. Winkler and J. Suchý, "Dynamic force/torque measurement using a 12DOF sensor," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2007, pp. 1870–1875.
[19] J. Garcia, A. Robertsson, J. Ortega, and R. Johansson, "Force and acceleration sensor fusion for compliant robot motion control," in Proc. IEEE Int. Conf. Robot. Autom., 2005, pp. 2709–2714.
[20] T. Kröger, D. Kubus, and F. Wahl, "6D force and acceleration sensor fusion for compliant manipulation control," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2006, pp. 2626–2631.
[21] H. Koch, A. König, A. Weigl-Seitz, and K. Kleinmann, "Improving force measurements in multimodal robotic contour following tasks using acceleration sensors," in Proc. 12th Mechatron. Forum Biennial Int. Conf., 2010.
[22] D. Wang and G. Yuan, "A six-degree-of-freedom acceleration sensing method based on six coplanar single-axis accelerometers," IEEE Trans. Instrum. Meas., vol. 60, no. 4, pp. 1433–1442, Apr. 2011.
[23] Y. Luo and B. J. Nelson, "Fusing force and vision feedback for manipulating deformable objects," J. Robot. Syst., vol. 18, no. 3, pp. 103–117, Mar. 2001.
[24] D. Erickson, M. Weber, and I. Sharf, "Contact stiffness and damping estimation for robotic systems," Int. J. Robot. Res., vol. 22, no. 1, pp. 41–57, Jan. 2003.
[25] H. Koch, A. König, A. Weigl-Seitz, K. Kleinmann, and J. Suchý, "Force, acceleration and vision sensor fusion for contour following tasks with an industrial robot," in Proc. IEEE Int. Symp. ROSE, 2011, pp. 1–6.
[26] L. Ljung, System Identification—Theory for the User. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[27] L. Love and W. Book, "Environment estimation for enhanced impedance control," in Proc. IEEE Int. Conf. Robot. Autom., 1995, vol. 2, pp. 1854–1859.
[28] G. Ellis and R. Lorenz, "Resonant load control methods for industrial servo drives," in Conf. Rec. IEEE Ind. Appl. Conf., 2000, vol. 3, pp. 1438–1445.
[29] W. Weber and H. Koch, "A state-space controller for movement axes with flexibilities," Automatisierungstechnische Praxis ATP Edition, vol. 12, no. 4, pp. 20–24, 2010.
[30] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control. Berlin, Germany: Springer-Verlag, 2009.
Heiko Koch received the Diploma degree in electrical engineering from the University of Applied Sciences Darmstadt, Darmstadt, Germany, in 2008. He is currently working toward the Ph.D. degree at Chemnitz Technical University, Chemnitz, Germany.

Alexander König received the Master of Electrical Engineering degree from the University of Applied Sciences Darmstadt, Darmstadt, Germany, in 2010. He is currently working toward the Ph.D. degree at Chemnitz Technical University, Chemnitz, Germany.

Alexandra Weigl-Seitz received the Ing. and Ph.D. degrees in electrical engineering from Technische Universität Darmstadt, Darmstadt, Germany, in 1992 and 1997, respectively. She is currently a Professor with the Faculty of Electrical Engineering and Information Technology, University of Applied Sciences Darmstadt, Darmstadt.

Karl Kleinmann received the Ph.D. degree in robotics from Technische Universität Darmstadt, Darmstadt, Germany, in 1996. Since 2005, he has been a Professor of automation systems with the University of Applied Sciences Darmstadt, Darmstadt.

Jozef Suchý received the Ing. and Ph.D. degrees in electrical engineering from the Slovak University of Technology, Bratislava, Slovakia, in 1973 and 1978, respectively. Until 1996, he was with the Institute of Control Theory and Robotics, Slovak Academy of Science, Bratislava. He is currently a Professor with the Department of Robotic Systems, Faculty of Electrical Engineering and Information Technology, Chemnitz Technical University, Chemnitz, Germany. His research interests lie in the fields of robotics and control. Dr. Suchý is a member of the IEEE Robotics and Automation, IEEE Control Systems, and IEEE Education Societies.