Towards Perceiving Robots as Humans: Three Handshake Models Face the Turing-like Handshake Test

Guy Avraham, Ilana Nisky, Hugo L. Fernandes, Daniel E. Acuna, Konrad P. Kording, Gerald E. Loeb, Senior Member, IEEE, and Amir Karniel, Senior Member, IEEE

Abstract—In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are indistinguishable from those of a human. We developed an analogous Turing-like handshake test to determine whether a machine can produce similarly indistinguishable movements. The test is administered through a telerobotic system in which an interrogator holds a robotic stylus and interacts with another party – artificial, or human with varying levels of noise. The interrogator is asked which party seems more human. Here we compare the human-likeness levels of three different handshake models: (1) the Tit-for-Tat model, (2) the λ model, and (3) the Machine Learning model. The Tit-for-Tat and Machine Learning models generated handshakes that were perceived as the most human-like among the three models tested. Combining the best aspects of each of the three models into a single robotic handshake algorithm might allow us to advance our understanding of the way the nervous system controls sensorimotor interactions and to further improve the human-likeness of robotic handshakes.

Index Terms—Handshake, Sensorimotor Control, Psychophysics, Teleoperation, Turing test.

1 INTRODUCTION

Machines are being used increasingly to replace humans in various tasks such as preparing food, communicating and remembering. Personal assistive robots are a promising means of dealing with the large increase in the ratio of retirees to workers in most industrialized societies. Semi- or fully automatically controlled robots may be used as coworkers with humans to perform dangerous and monotonous jobs, such as lifting heavy objects on a production line.
From a medical point of view, personal assistive robots can be most useful in helping motorically impaired patients perform various tasks, such as encouraging a stroke patient to do rehabilitation or helping a Parkinson's patient pick up a cup of coffee. When such robots interact physically with humans, they will need to be capable of human-like nonverbal communication, such as the natural tendency of cooperating humans to synchronize their movements. From a sensorimotor perspective, the handshake is one of the simplest and most common sensorimotor interactions. It is characterized by the synchronization of the participants' hand motions, which must necessarily be in phase. From a cognitive standpoint, the handshake is considered a very complex process. Humans use their observations during a handshake to make judgments about the personality of new acquaintances [1, 2]. Thus, by its bidirectional nature, in which both sides actively shake hands, this collaborative sensorimotor behavior allows the two participants to communicate and learn about each other.

Around 1950, Turing proposed his eponymous test for evaluating the intelligence of a machine, in which a human interrogator is asked to distinguish between answers provided by a person and answers provided by a computer. If the interrogator cannot reliably distinguish the machine from the human, the machine is said to have passed the test [3]. Here we present a Turing-like handshake test for sensorimotor intelligence. We expect this test to help in the design and construction of mechatronic hardware and control algorithms that will enable generating handshake movements indistinguishable from those of a human. We define the term 'sensorimotor intelligence' as the sensorimotor counterpart of the 'artificial intelligence' that Turing aimed to explore in his test.
In the original concept for this test, the interrogator generates handshake movements through a telerobotic interface and interacts either with another human, a machine, or a linear combination of the two (Fig. 1) [4-8]. We aimed to design the experiment such that the interrogator's judgments were based only on the sensorimotor interaction with the telerobotic interface (a Phantom Desktop haptic device from SensAble Technologies in the current study) and not on additional cues.

————————————————
G. Avraham and A. Karniel are with the Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel. E-mail: {guyavr, nisky, akarniel}
I. Nisky was with the Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel, and is now with the Department of Mechanical Engineering, Stanford University, Stanford, California.
H.L. Fernandes, D.E. Acuna, and K.P. Kording are with the Department of Physical Medicine and Rehabilitation, Northwestern University and the Rehabilitation Institute of Chicago, Chicago, Illinois.
G.E. Loeb is with the Department of Biomedical Engineering, University of Southern California, Los Angeles, California.
Manuscript received March 26, 2012.
Digital Object Identifier 10.1109/ToH.2012.16    1939-1412/12/$31.00 © 2012 IEEE
IEEE TRANSACTIONS ON HAPTICS. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.
————————————————

In this context, the telerobotic
interface is necessary for controlling the nature of the information available to the human interrogator, much as the teletype was necessary for hiding the physical computer from the questioning human in the original Turing test.

Obviously, an ultimate Turing-like test for sensorimotor intelligence would involve an enormous repertoire of movements. We used a reduced version of the test, based on a one-dimensional handshake test proposed and developed in previous studies [4-8]. We used the same quantitative measure for the human-likeness of a handshake model, the Model Human-Likeness Grade (MHLG). This grade quantifies human-likeness on a scale between 0 and 1 based on the answers of the human interrogator, who compares the artificial handshake to a human handshake degraded with varying amounts of noise [5, 7].

Although there is growing interest in improving the human-likeness of mechatronic systems [9-20], the development of artificial handshake systems is still in its infancy [21-28]. Research on human sensorimotor control has generated many theories about the nature of hand movements [29-42]. The proposed Turing-like handshake test can be useful for testing those theories. In general terms, we assert that a true understanding of the sensorimotor control system could be demonstrated by building a humanoid robot that is indistinguishable from a human in terms of its movements and interaction forces. A measure of how close such robots are to this goal should be useful in evaluating current scientific hypotheses and guiding future neuroscience research.
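This excerpt describes the MHLG only informally; the exact estimator is given in [5, 7], not here. As a rough illustration only, one can treat the interrogator's choices as a psychometric function of the noise level and grade a model by the noise level at which it is judged equally human-like as the degraded human handshake. The function below is a hypothetical sketch of that idea (the interpolation scheme and the mapping MHLG = 1 − crossing are assumptions, not the published method).

```python
def mhlg_sketch(noise_levels, p_model_chosen):
    """Illustrative MHLG estimate (NOT the published estimator).

    noise_levels:    increasing noise weights of the comparison handshake.
    p_model_chosen:  fraction of trials in which the interrogator judged the
                     model MORE human-like than the human+noise comparison.
    The 0.5 crossing (point of subjective equality) is found by linear
    interpolation; a model needing little noise to reach equality is graded
    as highly human-like.
    """
    pts = list(zip(noise_levels, p_model_chosen))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if (y0 - 0.5) * (y1 - 0.5) <= 0:   # sign change: 0.5 is crossed here
            cross = x0 if y1 == y0 else x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
            return 1.0 - cross
    # choices never cross 0.5: model always, or never, preferred
    return 1.0 if p_model_chosen[0] >= 0.5 else 0.0
```

For example, a model that reaches subjective equality at noise level 0.5 would receive a grade of 0.5 under this sketch.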
Here, we present the results of a Turing-like handshake test mini-tournament, in which we compared three different handshake models: (1) the Tit-for-Tat model, in which we identified active-passive roles in the interaction and generated handshakes by imitating the interrogator's previous movements; (2) the λ model, which takes into account physiological and biomechanical aspects of the handshake; and (3) the iML-Shake model, which uses a machine-learning approach to reconstruct a handshake. Each of these models is based on a different aspect of human sensorimotor control. Therefore, we did not have an a priori hypothesis as to which of the models would be more human-like, and the study was exploratory in nature. In the discussion section, we highlight the similarities and differences of the models.

2 THE TURING-LIKE HANDSHAKE TEST

Following the original concept of the classical Turing test, each experiment consisted of three entities: human, computer, and interrogator. These three entities were interconnected via two Phantom Desktop haptic devices and a teleoperation channel (see Fig. 2). Two human performers participated in each experiment: human and interrogator. Throughout the test, each participant held the stylus of one haptic device and generated handshake movements. The participants' view of each other's hand was occluded by a screen (Fig. 2a).

The computer employed a simulated handshake model that generated a force signal as a function of time, the one-dimensional position of the interrogator's haptic device, and its temporal evolution. This force was applied to the haptic device that the interrogator held. In the remainder of the paper, we use the following notation for forces: F_a is the force applied by or according to element a, and f_a is the force applied on element a.
Hence, the force applied by the model, in the most general notation, was:

    F_model(t) = Φ[x_inter(t'), t'],   0 ≤ t ≤ T_h                              (1)

where Φ[x_inter(t'), t'] stands for any causal operator, T_h is the duration of the handshake, and x_inter(t') are the positions of the interrogator's haptic device up to time t.

In each trial, both haptic devices applied forces on the human and the interrogator, according to the general architecture depicted in Fig. 2b. The human was presented with a teleoperated handshake of the interrogator. In our implementation, this was a force proportional to the difference between the positions of the interrogator and of the human. The interrogator was presented with two consecutive handshakes. In one of the handshakes – the standard – the interrogator interacted with a computer handshake model (Fig. 1a), namely:

    f_inter(t) = F_model(t)                                                     (2)

The other handshake – the comparison – was generated from a linear combination of the human handshake and noise (Fig. 1c) according to:

Fig. 1. Illustration of the handshake test. The human interrogator (right) was presented with simulated handshakes (a), natural handshakes (b), or a linear combination of a human handshake and noise (c). After the two interactions, the interrogator had to choose which of the handshakes was more human-like.
    f_inter(t) = α·F_noise(t) + (1 − α)·F_human(t)                              (3)

where F_human(t) is, ideally, the force applied by the human – in our implementation, a force calculated from the difference between the positions of the human and the interrogator; F_noise(t) is a signal intended to distort the human-likeness of the human handshake; and α was one of n_r equally distributed values in the range between 0 and 1, e.g., {0, 0.2, 0.4, 0.6, 0.8, 1}. According to (3), when α was set to 0, the force applied on the interrogator was generated fully by the human; when it was set to 1, the interrogator felt a force generated solely according to F_noise(t). F_noise(t) was chosen such that, when presented alone, it was perceived to be as little human-like as possible. This allowed effective comparison of a simulated handshake with a human handshake corrupted by different levels of noise. The idea is that if more noise is required to degrade the human handshake until it becomes indistinguishable from the model, then the model is less human-like. Such an approach was suggested in the context of assessing virtual environments according to the amount of noise required to degrade the real and virtual stimulation until the perceived environments become indistinguishable [43, 44]. In the current study, we chose the noise as a mixture of sine functions with frequencies above the natural bandwidth of the human handshake [4, 5, 45] and within the bandwidth of physiological motor noise, but the general framework is flexible, and any other function can be used instead, as long as it is not human-like.

Each trial of the Turing-like test consisted of two handshakes, and the interrogator was asked to compare the handshakes and answer which one felt more human-like.

3 THE THREE MODELS FOR HANDSHAKE

In this section, we describe each of the models for human handshake examined in the current study.
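The comparison handshake of (3) amounts to a convex mixing of the two force signals. A minimal sketch, where the signal contents and sampling rate are purely illustrative stand-ins for the recorded human force and the generated noise:

```python
import math

def comparison_force(f_human, f_noise, alpha):
    """Eq. (3): convex mix of the human force and a non-human noise force.
    alpha = 0 -> pure human handshake; alpha = 1 -> pure noise."""
    return [alpha * n + (1.0 - alpha) * h for h, n in zip(f_human, f_noise)]

# Toy stand-ins for one 5 s trial sampled at 100 Hz.
t = [i / 100.0 for i in range(500)]
f_human = [math.sin(2 * math.pi * 1.0 * ti) for ti in t]        # handshake-band force
f_noise = [0.7 * math.sin(2 * math.pi * 3.0 * ti) for ti in t]  # above handshake bandwidth
f_comparison = comparison_force(f_human, f_noise, 0.4)
```

At the endpoints the mix reduces to the pure human force (α = 0) or the pure noise force (α = 1), as the text describes.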
We will start by describing the physical interaction of the interrogator with the haptic device when he or she is presented with a model, as in (2). In this case, as far as the interrogator is concerned, the block diagram in Fig. 2b is reduced to the part encapsulated by the dashed frame by setting G_h = 0 and G_c = 1.

A similar architecture of interaction with a haptic device was used in numerous classical studies in the field of sensorimotor control [32, 34, 35, 38-42, 46]. Typically in these studies, an external perturbation is applied to the hand using a haptic device, and then a computational model of the movement is constructed and tested in simulation. In such a simulation, the computational model replaces the hand and produces movement trajectories that are similar to the experimental trajectories. Generally, this yields differential equations of the form [34]:

    D(ẍ, ẋ, x) = E(ẍ, ẋ, x) + C(ẋ, x, t)                                       (4)

where D(ẍ, ẋ, x) represents the passive system dynamics (hand and haptic device), E(ẍ, ẋ, x) represents the environment dynamics – the active forces applied by the haptic device – and C(ẋ, x, t) represents the forces that depend on the operation of the controller and are actively applied by the muscles. This equation is solved numerically and yields the simulated trajectories.

In our implementation, an equation similar to (4) can be written, but the interrogator's hand and the haptic device switch roles. The dynamic equation that describes the system is:

    D(ẍ, ẋ, x) = F_inter(t) + F_model(t)                                       (5)

where D(ẍ, ẋ, x) is the passive system dynamics (interrogator hand and haptic device), F_inter(t) is the force applied by the interrogator, and F_model(t) is the force that models the forces that would be applied by the human hand in a human handshake, and is actually applied by the haptic device. Note that F_inter(t) replaces E(ẍ, ẋ, x) from (4), and F_model(t) replaces C(ẋ, x, t) from (4). More importantly, while (4) represents a computational model, (5) describes a physical interaction, and its only computed part is F_model(t).

Fig. 2. The Turing-like handshake test for sensorimotor intelligence was administered through a telerobotic interface. (a) The human and the interrogator held the stylus of a haptic device. Position information was transmitted between the two devices, and forces were applied on each of the devices according to the particular handshake. (b) Block diagram of the experimental system architecture. The different models and the noise were substituted in the "computer" block. The human received a force proportional, by the factor K_t, to the difference between the positions of the interrogator, x_inter(t), and the human, x_human(t). G_h and G_c are the gains of the human and the computer force functions, respectively, which control the force applied to the interrogator. When an interrogator interacted with a handshake model, as far as the interrogator was concerned, the block diagram was reduced to the part encapsulated by the dashed frame by setting G_h = 0 and G_c = 1. (c) Close-up view of the hand-device interface.

3.1 Tit-for-Tat model

In the early eighties, Axelrod and Hamilton examined different strategies for optimizing success in the Prisoner's Dilemma game [47]. The proposed strategies competed against each other in a tournament, which revealed that the best strategy is for the players to imitate each other's actions from the previous iteration. The following algorithm employs a similar strategy, and thus has been given the same name, "tit-for-tat".

This model is based on the assumption that during a single handshake, the behavior of each participant can be characterized by one of two possible roles – a leader or a follower – and that the decision on which role to take requires a perception of the other party's choice [15]. In addition, we assume that this role assignment does not change during a single handshake.

When designing this model, we considered the cyclic property of the handshake movement. A cycle of a movement was defined by the zero-crossing point in the vertical axis. This zero point is set for every handshake as the starting position of the interrogator's device at the moment the handshake began. Initially, the tit-for-tat algorithm waited for 750 ms to see if the interrogator would initiate the handshake as a leader. If the interrogator made a displacement of 30 mm or more, the tit-for-tat algorithm acted as a follower and imitated the handshake of the leader.
During the first cycle of a movement (up until the second zero-position crossing point in the position trace), T_FirstCycle, no forces were applied to the handle by the algorithm. The acceleration of the interrogator was recorded during the first movement cycle of the current handshake, a_h(t). For all subsequent cycles, the algorithm generated the forces that would be required to reproduce the same trajectory in the unloaded Phantom handle. These forces were determined by the recorded acceleration during the first cycle of the handshake, a_h(t), multiplied by a constant k. We set the value of k so that the generated force would produce a movement with a position amplitude similar to that generated by interrogators when performing handshake movements through the telerobotic system. The forces began to be applied at the moment the hand returned to the position it occupied at time 0.

If |x_inter(t_i) − x_inter(0)| ≥ 30 mm for at least one t_i ≤ 750 ms:

    F_model,follower(t) = { 0,          t ≤ T_FirstCycle
                          { k·a_h(t),   t > T_FirstCycle                        (6)

where a_h(t) = ẍ_inter(mod(t, T_FirstCycle)), and "mod" stands for the modulo operation (the remainder of division). If the interrogator did not initiate a movement larger than 30 mm within the first 750 ms of the handshake, the model took over the leader role. In this case, the forces were determined by the recorded acceleration during the first cycle of the last handshake in which this model was presented, a*_h(t), multiplied by the constant k.

If |x_inter(t_i) − x_inter(0)| < 30 mm for all t_i ≤ 750 ms:

    F_model,leader(t) = { 0,                      t ≤ 750 ms
                        { k·a*_h(t − 750 ms),     t > 750 ms                    (7)

where a*_h(t) = ẍ*_inter(mod(t, T*_FirstCycle)), and "*" denotes the previous trial.

At the beginning of the experiment, we familiarized the interrogator with trials containing examples of all the possible handshakes that he or she was expected to encounter during the experiment. The answers of the interrogator in these trials were not analyzed (see Section 4).
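Under the assumptions above (30 mm displacement threshold, 750 ms observation window, cyclic replay of a recorded first-cycle acceleration), the leader/follower force rules (6)-(7) can be sketched as follows; the sampling step DT, the gain k, and the recorded traces are illustrative stand-ins, not the experimental values.

```python
DT = 0.001  # sampling step of the recorded acceleration traces (s), illustrative

def tit_for_tat_force(t, led_within_750ms, a_first, T_first,
                      a_prev_first, T_prev_first, k=1.0):
    """Sketch of Eqs. (6)-(7).

    led_within_750ms: True if the interrogator moved >= 30 mm in the first
                      750 ms (the model then follows); otherwise it leads.
    a_first:          interrogator acceleration recorded over the first cycle
                      of THIS handshake, sampled every DT seconds.
    a_prev_first:     first-cycle acceleration recorded in the PREVIOUS trial.
    """
    def replay(trace, T_cycle, tau):
        idx = int((tau % T_cycle) / DT)          # cyclic (modulo) playback
        return trace[min(idx, len(trace) - 1)]

    if led_within_750ms:                          # follower, Eq. (6)
        return 0.0 if t <= T_first else k * replay(a_first, T_first, t)
    else:                                         # leader, Eq. (7)
        return 0.0 if t <= 0.75 else k * replay(a_prev_first, T_prev_first, t - 0.75)
```

The modulo playback is what makes the imitation cyclic: after the silent first cycle (or the 750 ms wait), the recorded first-cycle acceleration is repeated for the remainder of the 5 s handshake.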
In these trials, we asked the interrogators to be active in the handshake; thus, we were able to collect data to serve as a template for the tit-for-tat algorithm in case it needed to act as the leader in its first test with the interrogator. This also creates a difference in the data used by the algorithms: the tit-for-tat model used data from previous interactions in the same performer-interrogator pairing, while the other models did not.

3.2 λ model

The idea of threshold control in the motor system, and specifically for hand movements, was proposed in the sixties by Feldman [48]. According to this hypothesis, for every intentional movement there is a virtual position of the arm that precedes its actual position, so that forces emerge in the muscles depending on the differences between the two positions and velocities. This virtual position is named "a threshold position" [49]. We designed the λ model for the human handshake by assuming that during a handshake the hand follows an alternating threshold position. The smooth movements of the handshake emerge from taking into account the physiology and biomechanics of muscle activation.

A movement of a limb involves the activation of two groups of muscles: flexors and extensors. For a movement around the elbow axis, when the elbow angle (θ) increases, the flexor group of muscles is defined here as the
ones that lengthen, and the extensor group as the ones that shorten. For simplicity, we assume that the movement is around a single joint (the elbow) and that the two muscle groups, flexor and extensor, are symmetric.

According to the λ model, muscle forces and joint torques emerge as a result of two types of motor commands. One command, R, specifies the desired angular position of the joint, at which the flexor and extensor muscle groups may both be silent. The second command, C, determines the positional range (2C) in which the two muscle groups are co-activated. These commands define an individual muscle threshold (λ) for each muscle group:

    λ_f = R − C                                                                 (8)
    λ_e = R + C                                                                 (9)

Here, and in all the equations below, the subscripts f and e indicate that the associated variable is related to the flexor and the extensor muscle groups, respectively. The model takes into account that under dynamic conditions the threshold muscle length (λ*) is a decreasing function of velocity, due to the dynamic sensitivity of afferent feedback to the motor neurons [50]:

    λ* = λ − μω                                                                 (10)

where ω is the angular velocity and μ is a constant with the value 80 ms [49]. The only difference between the dynamic thresholds of the flexor and extensor muscle groups is expressed by the individual muscle threshold (λ) differences described in (8) and (9).

The magnitude of muscle activation (A) at time t depends on the current joint angle (θ) and its distance from the desired position at the same moment (the dynamic threshold λ*), as presented in (11) and (12):

    A_f = [θ − λ*_f]^+                                                          (11)
    A_e = [λ*_e − θ]^+                                                          (12)

Following the passive properties of the muscle (elasticity and damping), the total torque developed by the muscles (M) depends on the current joint angle and the threshold position at the same moment. The following equations define these invariant torque-angle characteristics [48]:

    M_f(A_f) = a(e^(κA_f) − 1)                                                  (13)
    M_e(A_e) = −a(e^(κA_e) − 1)                                                 (14)

where a = 1.2 Nm and κ = 0.05 deg^−1 [49] are constants that determine the magnitude and shape of the joint invariant torque-angle characteristic profile, respectively [51]. The opposite signs of the expressions in (13) and (14) reflect the opposing torque directions generated by the flexor and extensor muscle groups.

In addition to its dependency on length, the muscle torque is affected by the changing velocity of the muscle fibers, as denoted by (15). In this equation, the torque T_f is presented for the flexor muscle group; thus, positive joint velocity corresponds to muscle lengthening, and vice versa. A similar formula can be written for the extensor muscle torque, T_e, by taking into account that this group of muscles shortens when the joint velocity is positive.

The torque generated by the flexor muscles is:

    T_f = { M_f · b(V_m + ω) / (bV_m − V_b·ω),   ω ≤ 0   (shortening)
          { M_f · (V_b + 1.3·ω) / (V_b + ω),     ω > 0   (lengthening)          (15)

with V_b = b + 0.3·V_m. This equation is formulated based on previous studies that analyzed the velocity-dependent force generation of muscle. When the muscle contracts, torque is generated according to the relation proposed in Hill's model [52]. While stretching, the muscle torque rises with increasing velocity until it saturates at a value exceeding the static muscle torque by a factor of 1.1–1.5 [53].

All the values of the constant parameters in the model were set to match experimental movements: M_f = 1.3, b = 80 deg/s, and V_m = 200 deg/s. The R and C commands were ramp-shaped, with ramp velocities of v = 100 deg/s and v = 10 deg/s, respectively. For the purpose of implementing the model with the Phantom haptic device, we transformed all the relevant outputs of the model from units of degrees (elbow joint coordinates) to meters (hand end-point coordinates). To perform this transformation, we calculated the average forearm length and its standard deviation (315.8 ± 27.8 mm)

Fig. 3. An example of one handshake of the interrogator with the threshold-control (λ) model: position (solid black line) following activation of the cooperative commands R (dashed-dotted red lines) and C (solid blue line). The figure presents the commands after transformation from degrees to meters, with an R command amplitude of ~5 cm (10 deg) and a C command value of ~1 cm (2 deg).
of 52 young adults (ages 18–36). Therefore, assuming small angular movements, we set x[mm] = 300 [mm/rad] · θ[rad], where x is the hand end-point displacement and θ is the joint angle displacement.

To determine the plateau boundary values of the R command (Fig. 3), which sets the desired angle trajectory during the 5-second handshake, we measured the end-point positional amplitudes (the absolute difference between sequential maxima and minima in the position trace) of subjects while they generated handshake movements through the telerobotic system. We found an average end-point positional amplitude of 100 ± 30 mm [4]. Thus, we used an angular positional amplitude of 20 deg. We assumed that when the interrogators hold the Phantom device, their initial elbow angle is R_0 = 90 deg. Thus, we set the minimum and maximum values of R to R_up = 80 deg and R_down = 100 deg, respectively. From its initial position, the value of R decreased first to the desired position R_up, so that the movement direction at the beginning of the handshake would be upward [54]. Each switch of the R command from one plateau state to another (up-down or down-up) occurred when the elbow angle of the interrogator approached the relevant extremum value of R (R_up when decreasing and R_down when increasing) to within 10%.

The forces applied by the Phantom device on the interrogator were generated according to the online-computed torque:

    F_model(t) = c(T_f + T_e)                                                   (16)

where c is a unit-conversion constant from muscle torques to simulated handshake forces.

In terms of data use, the λ model, unlike the other models, uses only the instantaneous position of the interrogator to calculate the forces in (16). However, some information about general human movements and handshakes was incorporated when the specific parameter values were chosen.

3.3 iML-Shake model

The iML-Shake model is a simple implementation of machine learning.
The underlying idea is that human behavior can be seen as an input-output function: subjects observe positions and produce forces. If enough data from human subjects were available, such machine-learning systems should be able to approximate any possible handshake behavior humans could ever produce. Here, we present a simple linear version of this machine-learning strategy.

In the iML-Shake, the output force was a linear combination of time t and each of the state variables (position x_inter(t), estimated velocity v(t), and estimated acceleration a(t)) convolved with a set of 4 truncated Gaussian temporal basis functions (Fig. 4a) [55]. Velocity and acceleration were estimated from the interrogator's position as:

    v(t) = (x_inter(t) − x_inter(t − Δt)) / Δt                                  (17)
    a(t) = (v(t) − v(t − Δt)) / Δt                                              (18)

The 14 parameters – corresponding to a baseline, time, and the 12 covariates that result from the convolution of the state variables with the temporal filters – were fitted using linear regression to 60 trials of training data from one training subject, collected before the experiments started. No additional data were used to adapt the regression while the test was performed. We then calculated F_model(t) as a linear function of the covariates with added independent, normally distributed noise:

    F_model(t) = φ(t)β + ε(t),   ε(t) ~ N(0, σ²)                                (19)

where β is the column vector of parameters and φ(t) is a 14-dimensional row vector that contains the values of the covariates at time t. The training data were interpolated to match a sampling rate of 60 Hz, and we used cross-validation to control for overfitting.

In terms of data usage, the iML-Shake model used the position of the interrogator within the specific handshake, as well as past experience. However, unlike in the tit-for-tat model, the past experience was collected before the experiments started.
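Assembled per time step, the 14-dimensional regressor of (19) contains a baseline, time, and each of the three state signals passed through the four temporal filters. A sketch of that assembly follows; the basis shapes, widths, and lag window are illustrative assumptions, and the fitted β is of course not reproduced here.

```python
import math

DT = 1.0 / 60.0  # 60 Hz model update, as in the text

def gaussian_bases(n_bases=4, n_lags=19, sigma=3.0):
    """Truncated Gaussian temporal basis functions over a short lag window
    (19 lags at 60 Hz ~ 300 ms); centers and widths are illustrative."""
    centers = [n_lags * (j + 0.5) / n_bases for j in range(n_bases)]
    return [[math.exp(-0.5 * ((lag - c) / sigma) ** 2) for lag in range(n_lags)]
            for c in centers]

def phi(x, v, a, i, bases):
    """14-dim row vector of Eq. (19) at sample i: baseline, time, and each
    state signal causally convolved with each basis function."""
    out = [1.0, i * DT]
    for sig in (x, v, a):                      # position, velocity, acceleration
        for b in bases:                        # 3 signals x 4 bases = 12 covariates
            out.append(sum(b[k] * sig[i - k] for k in range(len(b)) if i - k >= 0))
    return out

def iml_force(phi_t, beta):
    # deterministic part of Eq. (19); the N(0, sigma^2) noise term is omitted
    return sum(p * w for p, w in zip(phi_t, beta))
```

Velocity and acceleration would be obtained from the position trace by the finite differences of (17)-(18) before being fed into `phi`.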
Hence, no specific human-interrogator pairing information was used.

4 METHODS

4.1 Experimental procedure, apparatus, and architecture

Fig. 4. Basis functions of the general linear model and a handshake simulation according to the iML-Shake. (a) Truncated Gaussian temporal basis functions used in the iML-Shake algorithm, in arbitrary units (a.u.), over a 0–300 ms time lag. (b) Force data (black line) from one trial and the fit of the model (red line).

Twenty adults (ages 23–30) participated in the experiment
after signing the informed consent form as stipulated by the Institutional Helsinki Committee, Beer-Sheva, Israel. In each experiment, two naïve performers – human and interrogator – held the stylus of a Phantom® Desktop haptic device (SensAble Technologies) and generated handshake movements, as depicted in Fig. 1. During the handshake, the participants' view of each other's hand was occluded by a screen placed between them (Fig. 2a). The duration of each handshake was 5 s (T_h in Eq. 1). Throughout the entire experiment, the interrogator was asked to follow the instructions presented on a screen that both the interrogator and the human subject were able to see (Fig. 2a). The interrogator initiated the handshake by pressing a specific key on the keyboard. When the key was pressed, the word 'HANDSHAKE' appeared on the screen. In this way, the interrogator signaled the initiation of a handshake to the human. Throughout the experiment, the interrogator was requested to answer which of the two handshakes within a single trial felt more human-like by pressing the appropriate key on the keyboard. Both haptic devices were connected to a Dell Precision 450 computer with a dual-CPU Intel Xeon 2.4 GHz processor. The positions of the interrogator, x_inter(t), and of the human, x_human(t), along the vertical direction were recorded at a sampling rate of 1000 Hz. These position signals were used to calculate the forces applied by each of the devices according to the overall system architecture depicted in Fig. 2b. The human performer always felt a force proportional to the difference between the positions of the interrogator and the human himself, namely:

    f_human(t) = K_t(x_inter(t) − x_human(t))                                   (20)

where K_t = 150 N/m. The interrogator felt one of two possible forces.
The first option was a combination of the force in (20) and noise generated by the computer, namely:

$$f_\mathrm{inter} = G_h K_t \left( x_\mathrm{human}(t) - x_\mathrm{inter}(t) \right) + G_c F_\mathrm{computer}(t); \quad F_\mathrm{computer}(t) = F_\mathrm{noise}(t) \quad (21)$$

where $G_h$ and $G_c$ are the gains of the human and computer force functions. The second option was a computer-generated force according to each of the models that were described in Section 3, namely:

$$f_\mathrm{inter} = F_\mathrm{computer}(t); \quad F_\mathrm{computer}(t) = F_\mathrm{model}(t) \quad (22)$$

The calculation of $F_\mathrm{computer}(t)$ was performed at 60 Hz, and the force was interpolated and rendered at 1000 Hz. The timeline of the experiment is presented in Fig. 5. Each experiment started with 38 practice trials (76 handshakes) in which the interrogator shook hands only with the human through the telerobotic system, namely $G_h = 1$ and $G_c = 0$. The purpose of these practice trials was to allow the participants to become acquainted with a human handshake in our telerobotic system, and also to provide starting information for the tit-for-tat algorithm.

In each trial, the interrogator was presented with a pure computer handshake, namely $G_h = 0$ and $G_c = 1$, which was one of the three candidate models described in Section 3, and a human handshake combined with noise, namely $G_h = 1 - \lambda$ and $G_c = \lambda$. $\lambda$ was assigned 8 equally distributed values from 0 to 1: $\lambda = \{0, 0.142, 0.284, 0.426, 0.568, 0.710, 0.852, 1\}$. The noise function was chosen as a mixture of sine functions with frequencies above the natural bandwidth of the human handshake [4, 5, 45]:

$$f_\mathrm{noise}(t) = \sum_{i=1}^{5} 0.7 \sin(2 \pi f_i t); \quad f_i \sim U(2.5\,\mathrm{Hz}, 3.5\,\mathrm{Hz}) \quad (23)$$

where $U(a,b)$ is a continuous uniform distribution between $a$ and $b$.

Within each block, there were eight calibration trials in which the combined human-noise handshake with $G_h = 1 - \lambda$ and $G_c = \lambda$, for each of the eight values of $\lambda$, was compared to a combined human-noise handshake with $G_h = 0.5$ and $G_c = 0.5$. In these trials, one of the handshakes was always composed of a greater weight of human forces than the other handshake.
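The force conditions in Eqs. (20)-(23) can be sketched in a few lines of Python. This is a minimal illustration under our own naming conventions, not the authors' implementation; the actual system rendered forces on the haptic devices at the rates stated above.

```python
import numpy as np

K_T = 150.0  # coupling stiffness K_t, in N/m (Eq. 20)

def f_human(x_inter, x_human):
    """Force rendered to the human performer (Eq. 20)."""
    return K_T * (x_inter - x_human)

def make_noise(rng):
    """Return a noise force function f_noise(t) as in Eq. (23):
    five sinusoids with frequencies drawn once per handshake from
    U(2.5 Hz, 3.5 Hz), above the natural handshake bandwidth."""
    freqs = rng.uniform(2.5, 3.5, size=5)
    return lambda t: float(np.sum(0.7 * np.sin(2 * np.pi * freqs * t)))

def f_interrogator(x_inter, x_human, f_computer, g_h, g_c):
    """Force rendered to the interrogator (Eqs. 21-22):
    g_h = 1 - lambda, g_c = lambda mixes human and computer forces;
    g_h = 0, g_c = 1 gives a pure computer (model or noise) handshake."""
    return g_h * K_T * (x_human - x_inter) + g_c * f_computer

rng = np.random.default_rng(0)
noise = make_noise(rng)
lam = 0.426  # one of the eight noise weights
force = f_interrogator(0.01, 0.02, noise(1.0), 1 - lam, lam)
```

Drawing the noise frequencies once per handshake (rather than per sample) keeps the noise signal coherent over the 5 s interaction.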
We assume that a handshake with a larger weight of human versus noise is perceived as more human, and therefore, the participant should be able to identify the handshake that is more similar to that of a human. Thus, the purpose of these trials was to verify that the interrogators were attending to the task and that they were able to discriminate properly between handshakes with different combinations of human-generated force and noise.

Fig. 5. Experimental trial order (trial number on the horizontal axis; conditions: Human Practice, Lambda, Tit-for-Tat, iML-Shake, Noise). Each trial included a human handshake and an additional handshake as specified in the figure. Dark-gray shaded areas are human practice trials; namely, both handshakes were human. The light-gray shaded area is one training block; the answers in these blocks were not analyzed.

Overall, each experimental block consisted of 32 trials: each of the 8 linear combinations of noise and human performance was compared with each of the three models as well as with the noise combined with human performance (calibration trials). The order of the trials within
each block was random and predetermined. Each experiment consisted of 10 blocks. Thus, each of the models was presented to the interrogator in 80 handshakes. One experimental block (32 trials) was added at the beginning of the experiment for general acquaintance with the system and the task. The answers of the interrogators in this block were not analyzed. To preserve the memory of the feeling of a human handshake in the telerobotic setup, the subject was presented with twelve human handshakes after each experimental block. It is important to note that this allows interrogators not only to detect human-like handshakes but also to detect the specific aspects of their current partner. As such, a human may, on average, be technically worse than human-like. To increase the motivation of the participants, at the end of each block they received a grade that was computed based on their answers in the calibration trials. For example, an answer in a calibration trial was considered correct when the interrogator stated that a handshake that contained 0.852 human-generated force and 0.148 noise was more human-like than a handshake that contained 0.5 human-generated force and 0.5 noise.

4.2 The Model Human-Likeness Grade (MHLG)

To assess the human likeness of each model, we fitted a psychometric curve [56, 57] to the probability of the interrogator answering that a stimulus handshake is more human-like than a reference, as a function of $\lambda$ – the relative weight of noise in the comparison handshake. In accordance with our assumption that a higher weight of the noise component in the comparison handshake yields a higher probability of choosing the standard handshake as more human-like, this probability approaches one as $\lambda$ approaches one, and zero when it approaches zero.
In general, such psychometric curves are sigmoid functions [56, 57], and they can reveal both the bias and the uncertainty in discriminating between the standard and the comparison handshakes (Fig. 6a-b). The bias in perception, or more specifically, the point of subjective equality (PSE), can be extracted from the 0.5 threshold level of the psychometric curve, indicating the value of $\lambda$ for which the standard and the comparison handshakes are perceived to be equally human-like. Namely, for a model that is indistinguishable from a human, the expected PSE is 0 if humans make no systematic mistakes or "slips" in their judgments. Similarly, for a model that is as human-like as the noise function (hence, the least human-like model), the expected PSE is 1, again assuming no systematic mistakes or slips. We subtracted the value of the PSE from one to obtain the MHLG, such that MHLG = 0 means that the model is clearly non-human-like, and MHLG = 1 means that the tested model is indistinguishable from the human handshake:

$$\mathrm{MHLG} = 1 - \mathrm{PSE} \quad (24)$$

The models that are perceived as the least or most human-like possible yield MHLG values of 0 or 1, respectively. Therefore, the MHLG is cut off at 0 and 1. We used the Psignifit toolbox version 2.5.6 for Matlab to fit a logistic psychometric function [57] to the answers of the interrogators, extracted the PSE, and calculated the MHLG of each of the models according to (24).

4.3 Statistical analysis

We used a 1-way ANOVA with repeated measures to determine whether the difference between the MHLG values of the models was statistically significant. We used Tukey's honestly significant difference criterion (p<0.05) to perform the post-hoc comparisons between the individual models. We performed this statistical analysis using the Matlab® statistics toolbox.

5 RESULTS

Examples of psychometric curves that were fitted to the answers of two selected interrogators are depicted in Figures 6a and 6b.
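The fitting procedure of Section 4.2 can be illustrated with a minimal Python analogue of the Psignifit fit. The logistic parameterization and the response proportions below are our own illustrative assumptions; the study itself used the Psignifit Matlab toolbox.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(lam, pse, slope):
    """Logistic psychometric function: probability of choosing the
    standard handshake as more human-like, vs. the noise weight lambda.
    The PSE is the lambda at which this probability equals 0.5."""
    return 1.0 / (1.0 + np.exp(-(lam - pse) / slope))

# The eight noise weights used in the experiment, with hypothetical
# proportions of "standard is more human-like" answers at each weight.
lams = np.array([0.0, 0.142, 0.284, 0.426, 0.568, 0.710, 0.852, 1.0])
p_standard = np.array([0.10, 0.20, 0.30, 0.45, 0.60, 0.75, 0.90, 0.95])

# Fit the curve, extract the PSE, and apply Eq. (24) with the [0, 1] cutoff.
(pse, slope), _ = curve_fit(logistic, lams, p_standard, p0=[0.5, 0.1])
mhlg = min(max(1.0 - pse, 0.0), 1.0)
```

With real data, the answers would be binary choices aggregated per $\lambda$ value, and confidence intervals for the PSE would be obtained by bootstrapping, as Psignifit does.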
The MHLG of individual subjects for each of the Turing-like tests is presented in Figure 6c. These figures reveal substantial differences of opinion among the interrogators. For example, interrogator #2 (Fig. 6a) perceived the λ model as the least human-like (dotted curves), while the tit-for-tat model was perceived as the most human-like (dash-dotted curves), as opposed to interrogator #5 (Fig. 6b), who perceived the tit-for-tat model as the least human-like and yielded higher MHLG for the λ and iML-Shake (dashed curves) models.

Fig. 6. Human likeness of the models. Panels (a) and (b) present examples of psychometric curves that were fitted to the answers of interrogator #2 and interrogator #5, respectively (curves: Noise, Lambda, Tit-for-Tat, iML-Shake). Dots are data points, and the horizontal bars are 95% confidence intervals for the estimation of the PSE. A model with a higher MHLG yields a curve that is shifted further to the left. (c) The MHLGs that were calculated according to each of the 10 interrogators who performed the test. Symbols are estimations of the MHLG, and vertical bars are 95% confidence intervals. (d) Means with bootstrap 95% confidence intervals (error bars) that were estimated for each of the models using the MHLG of all the interrogators.
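The bootstrap confidence intervals reported in Fig. 6d can be obtained along the following lines. This is a percentile-bootstrap sketch with hypothetical MHLG values; the paper does not specify the exact resampling details, so treat the parameters as illustrative assumptions.

```python
import numpy as np

def bootstrap_mean_ci(values, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean:
    resample the per-interrogator values with replacement, take the
    mean of each resample, and report the (alpha/2, 1-alpha/2)
    percentiles of the resulting distribution of means."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    resamples = rng.choice(values, size=(n_boot, values.size), replace=True)
    means = resamples.mean(axis=1)
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return values.mean(), lo, hi

# Hypothetical MHLG values for ten interrogators under one model
mhlg_values = [0.20, 0.45, 0.50, 0.35, 0.60, 0.40, 0.55, 0.30, 0.50, 0.45]
mean, lo, hi = bootstrap_mean_ci(mhlg_values)
```

The percentile bootstrap makes no normality assumption, which is useful here since the MHLG is bounded to [0, 1] and the per-interrogator sample is small.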
The calibration curves (noise model, solid curves) indeed yielded a PSE that was not statistically significantly different from 0.5. Furthermore, for a few interrogators, we did not observe significant differences between the MHLG of all three models. Estimations of the mean MHLG of all models across interrogators are presented in Figure 6d, together with the 95% confidence intervals for these estimations. There was a statistically significant difference between the MHLG values of the three models (1-way ANOVA with repeated measures, F(2,18)=4.57, p=0.0248). The tit-for-tat and iML-Shake models yielded statistically significantly higher MHLG values than the λ model (Tukey's honestly significant difference criterion, p<0.05). There was no statistically significant difference between the MHLG of the tit-for-tat and iML-Shake models.

6 DISCUSSION

In this study, we presented three models for the human handshake: λ, based on physiological and biomechanical aspects; tit-for-tat, based on leader-follower roles and imitation; and iML-Shake, based on machine learning. The tit-for-tat and iML-Shake models were perceived as statistically significantly more human-like than the λ model, but were not significantly different from each other. However, the grading was not consistent among subjects.

6.1 Further development of the models

Each of the models that were presented in this paper has its own strengths, but each could be improved. Moreover, the best features of each could be combined into one handshake to improve human likeness.

In the passive phases of the tit-for-tat algorithm, the interrogator experiences zero forces, while in reality a "passive hand" would present inertial and perhaps some viscoelastic impedance. The algorithm could start to modify its behavior after a typical voluntary reaction time, and reduce its impedance as the interrogator's intentions became apparent.
This would require a more accurate model of the relevant musculoskeletal structures and their intrinsic mechanical properties, some of which are captured in the λ model. Such models are easily constructed in the MSMS environment (MusculoSkeletal Modeling Software [58]), but are beyond the scope of the current work. However, initial values for such properties could be taken from the literature, e.g., M=0.15 kg, B=5 Ns/m, and K=500 N/m for wrist movements [59], or M=1.52 kg, B=66 Ns/m, and K=415 N/m for elbow movements [34].

A unique aspect of the tit-for-tat model is the notion of two roles in a sensorimotor interaction: a leader and a follower. The model adjusts itself to the interrogator by taking one of these roles. However, the limitation here is that the model can take the role of a leader only if it was introduced at least once to a leading interrogator.

The λ model generates handshakes according to the threshold control hypothesis. It can be improved by changing the threshold position whenever a change in movement direction is detected. In addition, the implementation of the model included some simplifications (e.g., neglecting the inertia of the arm, and assuming one joint and two muscles), and it is not adaptive; namely, we set the movement range to specific fixed values, while different interrogators may display movements with different amplitudes. Unlike the other two models, the λ model takes into account dynamical properties that have previously been demonstrated to fit the biomechanical characteristics of the hand. These include the invariant characteristics of the passive force generated by the muscle, and the force-velocity relationship of the muscles.

The iML-Shake is based on linear regression. This approach represents a large class of stateless and time-invariant methods. We could allow smoother predictions and recovery from input failures by utilizing another class of state-space algorithms that model time variations ([55], Section 13.3).
The advantage of this approach is that by making minimal assumptions about the mapping between the inputs and outputs of the handshake "function", we are factoring in many details of human handshake biomechanics that may be unknown, difficult to quantify, or inefficient to implement.

Further improvement can be achieved by including additional knowledge about human cognitive and biomechanical characteristics in a more specific machine learning approach. For example, for the mechanics, we could include the state-space evolution of multilink serial chains as a model of the arm and hand, with contact forces and torques [60]. For behavioral strategies, we could include the evolution of beliefs about the other's motor plan and combine it with our agent's goals. We can formulate this problem as the inversion of a Partially Observable Markov Decision Process (POMDP), which has been shown to resemble how humans think about others' intentions [61].

6.2 Subjectivity vs. objectivity

The difference in grading among interrogators limits conclusions about the relative performance of the algorithms (see Fig. 6a-c). A possible explanation is that the internal representation of human-likeness is multidimensional, and each interrogator might follow a different decision path in the space of human-likeness features [7]. According to this view, all the interrogators would correctly identify the least and the most human-like possible handshakes, but may have various opinions about the salient features characterizing human-likeness. Moreover, humans themselves exhibit a great range of handshake styles, both as individuals and as members of ethnic and national cultures.
Thus, the test results could be biased by similarities between the handshake style of the interrogator and that of the robotic algorithm, or by dissimilarities between the interrogator and the human handshake to
which the robot is being compared. The original Turing test for artificial intelligence has a similar cultural problem. In future studies, a larger number of subjects could be tested in a fully factored design to make sure that all combinations of interrogators, robots, and human performers are tested against each other, so that such trends could be extracted statistically from the results. Such test results might provide quantitative insights into cultural differences among humans [62, 63].

The difference in grading might be the result of the subjective and declarative nature of the test, so it may be useful to look at objective, physiologically related measures, such as skin conductance response [64], heart rate [65], postural responses [66], or task performance [20, 67, 68]. This is of special importance in light of evidence that declarative perception is not always consistent with motor responses [69-73].

6.3 Future Turing-like handshake tests

For this preliminary investigation, we limited the handshake dynamics to those that can be generated in one axis via a Phantom handle. A great deal of information is conveyed in other dimensions of the human handshake, such as the distribution of grip forces among the fingers and off-axis motions of the wrist and forearm. The rapid improvements in anthropomorphic robotic hands and arms (e.g., Shadow Robot [74] and DLR-HIT [75]) should make it feasible to create much more realistic handshake Turing tests. However, these will require much more complex control algorithms.

The comparison of the three models may not be entirely fair, and future versions of the handshake Turing test may be adapted towards this aim. Specifically, the tit-for-tat strategy uses data about the specific interrogator-subject pairing. This problem is compounded by the fact that there are human-human reminder trials in which a subject can learn about the specific characteristics of their partner's handshakes.
Both the tit-for-tat and iML-Shake models use past data, while the λ model only uses instantaneous information about the interrogator's movements. The λ and iML-Shake models both use the information of the whole five seconds of the handshake, while the tit-for-tat model only uses the first cycle of movement. To take these differences into account, in future studies only models that use similar data should be compared to each other, and a quantitative measure of model complexity could be developed and incorporated into the MHLG.

It may also be important to let each interrogator experience sequential trials with the same algorithm interspersed randomly with the same human. This would be more akin to the extended dialogue of the original Turing test, in which the interrogator uses the results of one query to frame a subsequent query.

Ultimately, a haptically-enabled robot should itself have the ability to characterize and identify the agents with which it interacts. This ability fulfills the purpose of a human handshake. Thus, the robotic Turing test presented here could be made bidirectional, testing the ability of the robot to perceive whether it is shaking hands with a human or with another robotic algorithm – the reverse handshake hypothesis [8]. According to this hypothesis, once a model that best mimics the human handshake is constructed, it can itself be used to probe handshakes. Such an optimal discriminator might require enhancements of sensory modalities such as tactile perception of force vectors, slip, or warmth [76].

6.4 Concluding remarks

We assert that understanding the sensorimotor control system is a necessary condition for understanding brain function, and this could be demonstrated by building a humanoid robot that is indistinguishable from a human. The current study focuses on handshakes via a telerobotic system.
By ranking models that are based on the prevailing scientific hypotheses about the nature of human movement, we should be able to extract salient properties of human sensorimotor control, or at least the salient properties required to build an artificial appendage that is indistinguishable from a human arm.

Beyond the scientific insights into sensorimotor control, understanding the characteristic properties of healthy hand movements can be very helpful for clinical and industrial applications. For example, it may facilitate automatic discrimination between healthy hand movements and movements that are generated by motorically-impaired individuals, such as cerebral palsy patients [77, 78] and Parkinson's patients [79]. In future industrial and domestic applications, robotic devices are expected to work closely and interact naturally with humans. Characterizing the properties of a human-like handshake is a first step in that direction.

ACKNOWLEDGMENT

The authors thank Prof. Anatol Feldman for his assistance in constructing the λ model, and Mr. Amit Miltshtein and Mr. Ofir Nachum for their help in implementing the models and the experiments. This work was supported by the Israel Science Foundation, Grant number 1018/08. IN was supported by the Kreitman and Clore Foundations.

REFERENCES

[1] Chaplin, W.F., Phillips, J.B., Brown, J.D., Clanton, N.R., and Stein, J.L., "Handshaking, gender, personality, and first impressions," Journal of Personality and Social Psychology, vol. 79, no. 1, pp. 110, 2000.
[2] Stewart, G.L., Dustin, S.L., Barrick, M.R., and Darnold, T.C., "Exploring the handshake in employment interviews," Journal of Applied Psychology, vol. 93, no. 5, pp. 1139, 2008.
[3] Turing, A.M., "Computing Machinery and Intelligence," Mind, A Quarterly Review of Psychology and Philosophy, vol.
LIX, no. 236, 1950.
[4] Avraham, G., Levy-Tzedek, S., and Karniel, A., "Exploring the rhythmic nature of handshake movement and a Turing-like
test," Proc. The Fifth Computational Motor Control Workshop, Beer-Sheva, Israel, 2009.
[5] Karniel, A., Avraham, G., Peles, B.C., Levy-Tzedek, S., and Nisky, I., "One-Dimensional Turing-Like Handshake Test for Motor Intelligence," Journal of Visualized Experiments, vol. e2492, 2010.
[6] Avraham, G., Nisky, I., and Karniel, A., "When Robots Become Humans: a Turing-like Handshake Test," Proc. The Seventh Computational Motor Control Workshop, Beer-Sheva, Israel, 2011.
[7] Nisky, I., Avraham, G., and Karniel, A., "Three Alternatives to Measure the Human Likeness of a Handshake Model in a Turing-like Test," Presence, in press.
[8] Karniel, A., Nisky, I., Avraham, G., Peles, B.C., and Levy-Tzedek, S., "A Turing-Like Handshake Test for Motor Intelligence," in 'Haptics: Generating and Perceiving Tangible Sensations' (Springer Berlin / Heidelberg, 2010), pp. 197-204.
[9] Bailenson, J.N., Yee, N., Brave, S., Merget, D., and Koslow, D., "Virtual interpersonal touch: expressing and recognizing emotions through haptic devices," Human Computer Interaction, vol. 22, no. 3, pp. 325-353, 2007.
[10] Bailenson, J., and Yee, N., "Virtual interpersonal touch: Haptic interaction and copresence in collaborative virtual environments," Multimedia Tools and Applications, vol. 37, no. 1, pp. 5-14, 2008.
[11] Kim, J., Kim, H., Tay, B.K., Muniyandi, M., Srinivasan, M.A., Jordan, J., Mortensen, J., Oliveira, M., and Slater, M., "Transatlantic Touch: A Study of Haptic Collaboration over Long Distance," Presence: Teleoperators and Virtual Environments, vol. 13, no. 3, pp. 328-337, 2004.
[12] McLaughlin, M., Sukhatme, G., Wei, P., Weirong, Z., and Parks, J., "Performance and co-presence in heterogeneous haptic collaboration," Proc.
HAPTICS 2003, 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003.
[13] Hespanha, J.P., McLaughlin, M., Sukhatme, G.S., Akbarian, M., Garg, R., and Zhu, W., "Haptic collaboration over the internet," Proc. Fifth PHANTOM Users Group Workshop, 2000.
[14] Gentry, S., Feron, E., and Murray-Smith, R., "Human-human haptic collaboration in cyclical Fitts tasks," Proc. IROS 2005, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.
[15] Groten, R., Feth, D., Goshy, H., Peer, A., Kenny, D.A., and Buss, M., "Experimental analysis of dominance in haptic collaboration," Proc. RO-MAN 2009, the 18th IEEE International Symposium on Robot and Human Interactive Communication, 2009.
[16] Durlach, N., and Slater, M., "Presence in shared virtual environments and virtual togetherness," Presence: Teleoperators and Virtual Environments, vol. 9, no. 2, pp. 214-217, 2000.
[17] Ikeura, R., Inooka, H., and Mizutani, K., "Subjective evaluation for manoeuvrability of a robot cooperating with human," Proc. RO-MAN 1999, 8th IEEE International Workshop on Robot and Human Interaction, 1999.
[18] Rahman, M.M., Ikeura, R., and Mizutani, K., "Investigation of the impedance characteristic of human arm for development of robots to cooperate with humans," JSME International Journal Series C: Mechanical Systems, Machine Elements and Manufacturing, vol. 45, no. 2, pp. 510-518, 2002.
[19] Wang, Z., Lu, J., Peer, A., and Buss, M., "Influence of vision and haptics on plausibility of social interaction in virtual reality scenarios," in 'Haptics: Generating and Perceiving Tangible Sensations' (Springer Berlin / Heidelberg, 2010), pp. 172-177.
[20] Feth, D., Groten, R., Peer, A., and Buss, M., "Haptic human-robot collaboration: Comparison of robot partner implementations in terms of human-likeness and task performance," Presence: Teleoperators and Virtual Environments, vol. 20, no. 2, pp.
173-189, 2011.
[21] Jindai, M., Watanabe, T., Shibata, S., and Yamamoto, T., "Development of a Handshake Robot System for Embodied Interaction with Humans," Proc. The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 2006.
[22] Kasuga, T., and Hashimoto, M., "Human-Robot Handshaking using Neural Oscillators," Proc. International Conference on Robotics and Automation, Barcelona, Spain, 2005.
[23] Ouchi, K., and Hashimoto, S., "Handshake Telephone System to Communicate with Voice and Force," Proc. IEEE International Workshop on Robot and Human Communication, 1997, pp. 466-471.
[24] Bailenson, J.N., and Yee, N., "Virtual interpersonal touch and digital chameleons," Journal of Nonverbal Behavior, vol. 31, no. 4, pp. 225-242, 2007.
[25] Miyashita, T., and Ishiguro, H., "Human-like natural behavior generation based on involuntary motions for humanoid robots," Robotics and Autonomous Systems, vol. 48, no. 4, pp. 203-212, 2004.
[26] Kunii, Y., and Hashimoto, H., "Tele-handshake using handshake device," 1995.
[27] Sato, T., Hashimoto, M., and Tsukahara, M., "Synchronization-based control using online design of dynamics and its application to human-robot interaction," 2007.
[28] Wang, Z., Peer, A., and Buss, M., "An HMM approach to realistic haptic human-robot interaction," 2009.
[29] Loeb, G., Brown, I., and Cheng, E., "A hierarchical foundation for models of sensorimotor control," Experimental Brain Research, vol. 126, no. 1, pp. 1-18, 1999.
[30] Kording, K.P., and Wolpert, D.M., "Bayesian decision theory in sensorimotor control," Trends in Cognitive Sciences, vol. 10, no. 7, pp. 319-326, 2006.
[31] Feldman, A.G., and Levin, M.F., "The equilibrium point hypothesis – past, present and future," Progress in Motor Control, pp. 699-726, 2009.
[32] Shadmehr, R., and Wise, S.P., The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning (MIT Press,
2005).
[33] Flash, T., and Gurevich, I., "Models of motor adaptation and impedance control in human arm movements," in 'Self-Organization, Computational Maps, and Motor Control' (Elsevier Science, 1997), pp. 423-481.
[34] Shadmehr, R., and Mussa-Ivaldi, F.A., "Adaptive representation of dynamics during learning of a motor task," Journal of Neuroscience, vol. 14, no. 5, pp. 3208-3224, 1994.
[35] Flanagan, J.R., and Wing, A.M., "The role of internal models in motion planning and control: evidence from grip force adjustments during movements of hand-held loads," Journal of Neuroscience, vol. 17, no. 4, pp. 1519-1528, 1997.
[36] Lackner, J.R., and DiZio, P., "Rapid adaptation to Coriolis force perturbations of arm trajectories," Journal of Neurophysiology, vol. 72, pp. 299-313, 1994.
[37] Wolpert, D.M., and Ghahramani, Z., "Computational principles of movement neuroscience," Nature Neuroscience, vol. 3, pp. 1212-1217, 2000.
[38] Thoroughman, K.A., and Shadmehr, R., "Learning of action through adaptive combination of motor primitives," Nature, vol. 407, no. 6805, pp. 742-747, 2000.
[39] Karniel, A., and Mussa-Ivaldi, F.A., "Sequence, time, or state representation: how does the motor control system adapt to variable environments?," Biological Cybernetics, vol. 89, no. 1, pp. 10-21, 2003.
[40] Gomi, H., and Kawato, M., "Equilibrium point control hypothesis examined by measured arm stiffness during multijoint movement," Science, vol. 272, no. 5258, pp. 117-120, 1996.
[41] Flash, T., and Hogan, N., "The coordination of arm movements: An experimentally confirmed mathematical model," Journal of Neuroscience, vol. 5, no. 7, pp. 1688-1703, 1985.
[42] Morasso, P., "Spatial control of arm movements," Experimental Brain Research, vol. 42, no. 2, pp. 223-227, 1981.
[43] Sheridan, T.B., "Further musings on the psychophysics of presence," Proc. Humans, Information and Technology, 1994 IEEE International Conference on Systems, Man, and Cybernetics, 1994.
[44] Sheridan, T.B., "Further musings on the psychophysics of presence," Presence: Teleoperators and Virtual Environments, vol. 5, no. 2, pp. 241-246, 1996.
[45] Avraham, G., Levy-Tzedek, S., Peles, B.C., Bar-Haim, S., and Karniel, A., "Reduced frequency variability in handshake movements of individuals with Cerebral Palsy," Proc. The Sixth Computational Motor Control Workshop, Beer-Sheva, Israel, 2010.
[46] Pilon, J.F., and Feldman, A., "Threshold control of motor actions prevents destabilizing effects of proprioceptive delays," Experimental Brain Research, vol. 174, no. 2, pp. 229-239, 2006.
[47] Axelrod, R., and Hamilton, W.D., "The evolution of cooperation," Science, vol. 211, no. 4489, pp. 1390, 1981.
[48] Feldman, A., "Functional tuning of the nervous system with control of movement or maintenance of a steady posture. II. Controllable parameters of the muscle," Biophysics, vol. 11, no. 3, pp. 565-578, 1966.
[49] Pilon, J.F., and Feldman, A.G., "Threshold control of motor actions prevents destabilizing effects of proprioceptive delays," Experimental Brain Research, vol. 174, no. 2, pp. 229-239, 2006.
[50] Feldman, A.G., and Levin, M.F., "The origin and use of positional frames of reference in motor control," Behavioral and Brain Sciences, vol. 18, no. 4, pp.
723-744, 1995.
[51] Gribble, P.L., Ostry, D.J., Sanguineti, V., and Laboissière, R., "Are complex control signals required for human arm movement?," Journal of Neurophysiology, vol. 79, no. 3, pp. 1409, 1998.
[52] Hill, A.V., "The heat of shortening and the dynamic constants of muscle," Proceedings of the Royal Society of London. Series B, Biological Sciences, vol. 126, no. 843, pp. 136-195, 1938.
[53] Dudley, G.A., Harris, R.T., Duvoisin, M.R., Hather, B.M., and Buchanan, P., "Effect of voluntary vs. artificial activation on the relationship of muscle torque to speed," Journal of Applied Physiology, vol. 69, no. 6, pp. 2215, 1990.
[54] Yamato, Y., Jindai, M., and Watanabe, T., "Development of a shake-motion leading model for human-robot handshaking," 2008.
[55] Bishop, C.M., Pattern Recognition and Machine Learning (Springer, New York, 2006).
[56] Gescheider, G.A., Psychophysics: Method, Theory, and Application (Lawrence Erlbaum Associates, 1985).
[57] Wichmann, F., and Hill, N., "The psychometric function: I. Fitting, sampling, and goodness of fit," Perception & Psychophysics, vol. 63, no. 8, pp. 1293-1313, 2001.
[58] Davoodi, R., and Loeb, G.E., "Physics-based simulation for restoration and rehabilitation of movement disorders," IEEE Transactions on Biomedical Engineering, in press.
[59] Kuchenbecker, K.J., Park, J.G., and Niemeyer, G., "Characterizing the human wrist for improved haptic interaction," Proc. IMECE 2003, International Mechanical Engineering Congress and Exposition, Washington, D.C., USA, 2003.
[60] Zatsiorsky, V.M., Kinetics of Human Motion (Human Kinetics Publishers, 2002).
[61] Baker, C.L., Saxe, R., and Tenenbaum, J.B., "Action understanding as inverse planning," Cognition, vol. 113, no. 3, pp. 329-349, 2009.
[62] Everett, F., Proctor, N., and Cartmell, B., "Providing psychological services to American Indian children and families," Professional Psychology: Research and Practice, vol. 14, no. 5, pp.
588, 1983.
[63] Woo, H.S., and Prud'homme, C., "Cultural characteristics prevalent in the Chinese negotiation process," European Business Review, vol. 99, no. 5, pp. 313-322, 1999.
[64] Laine, C.M., Spitler, K.M., Mosher, C.P., and Gothard, K.M., "Behavioral triggers of skin conductance responses and their neural correlates in the primate amygdala," Journal of Neurophysiology, vol. 101, no. 4, pp. 1749-1754, 2009.
[65] Anttonen, J., and Surakka, V., "Emotions and heart rate while sitting on a chair," Proc. SIGCHI Conference on Human Factors in Computing Systems, Portland, Oregon, USA, 2005, pp. 491-499.
[66] Freeman, J., Avons, S.E., Meddis, R., Pearson, D.E., and IJsselsteijn, W., "Using behavioral realism to estimate presence: A study of the utility of postural responses to motion stimuli," Presence: Teleoperators and Virtual Environments, vol. 9, no. 2, pp. 149-164, 2000.
[67] Schloerb, D.W., "A quantitative measure of telepresence," Presence: Teleoperators and Virtual Environments, vol. 4, no. 1, pp. 64-80, 1995.
[68] Reed, K.B., and Peshkin, M.A., "Physical collaboration of human-human and human-robot teams," IEEE Transactions on Haptics, vol. 1, no. 2, pp. 108-120, 2008.
[69] Ganel, T., and Goodale, M.A., "Visual control of action but not perception requires analytical processing of object shape," Nature, vol. 426, no. 6967, pp. 664-667, 2003.
[70] Goodale, M.A., and Milner, A.D., "Separate visual pathways for perception and action," Trends in Neurosciences, vol. 15, no. 1, pp. 20-25, 1992.
[71] Aglioti, S., DeSouza, J.F.X., and Goodale, M.A., "Size-contrast illusions deceive the eye but not the hand," Current Biology, vol. 5, no. 6, pp. 679-685, 1995.
[72] Pressman, A., Nisky, I., Karniel, A., and Mussa-Ivaldi, F.A., "Probing Virtual Boundaries and the Perception of Delayed Stiffness," Advanced Robotics, vol. 22, pp.
119-140, 2008.
[73] Nisky, I., Pressman, A., Pugh, C.M., Mussa-Ivaldi, F.A., and Karniel, A., "Perception and action in teleoperated needle insertion," IEEE Transactions on Haptics, vol. 4, no. 3, pp. 155-166, 2011.
[74] Tuffield, P., and Elias, H., "The Shadow robot mimics human actions," Industrial Robot: An International Journal, vol. 30, no. 1, pp. 56-60, 2003.
[75] Liu, H., Meusel, P., Seitz, N., Willberg, B., Hirzinger, G., Jin, M., Liu, Y., Wei, R., and Xie, Z., "The modular multisensory DLR-HIT hand," Mechanism and Machine Theory, vol. 42, no. 5, pp. 612-625, 2007.
[76] Wettels, N., Santos, V.J., Johansson, R.S., and Loeb, G.E., "Biomimetic tactile sensor array," Advanced Robotics, vol. 22, no. 8, pp. 829-849, 2008.
[77] van der Heide, J.C., Fock, J.M., Otten, B., Stremmelaar, E., and Hadders-Algra, M., "Kinematic characteristics of reaching movements in preterm children with cerebral palsy," Pediatric Research, vol. 57, no. 6, pp. 883, 2005.
[78] Roennqvist, L., and Roesblad, B., "Kinematic analysis of unimanual reaching and grasping movements in children with hemiplegic cerebral palsy," Clinical Biomechanics, vol. 22, no. 2, pp. 165-175, 2007.
[79] van Den, B., "Coordination disorders in patients with Parkinson's disease: a study of paced rhythmic forearm movements," Experimental Brain Research, vol. 134, no. 2, pp. 174-186, 2000.

Guy Avraham received his B.Sc. (magna cum laude) from the Department of Life Science, The Hebrew University of Jerusalem, Israel, in 2008. Currently, he is an M.Sc. student in the Department of Biomedical Engineering, Ben-Gurion University of the Negev, Israel. Mr. Avraham received the Science Faculty Dean's Award for Excellence from the Hebrew University of Jerusalem in 2008.

Ilana Nisky received her B.Sc. (summa cum laude), M.Sc. (summa cum laude), and Ph.D. from the Department of Biomedical Engineering, Ben-Gurion University of the Negev, Israel, in 2006, 2009, and 2011, respectively. She is currently a postdoctoral research fellow in the Department of Mechanical Engineering, Stanford University. Dr. Nisky received the Wolf Foundation scholarship for undergraduate and graduate students, the Zlotowski and Ben-Amitai prizes for excellent graduate research, the Kreitman Foundation Fellowship, the Clore scholarship, and the Weizmann Institute of Science national postdoctoral award for advancing women in science. Her research interests include motor control, haptics, robotics, human and machine learning, teleoperation, and human-machine interfaces. She is a member of the Society for the Neural Control of Movement, the Society for Neuroscience, the Technical Committee on Haptics, and the EuroHaptics Society.

Hugo L. Fernandes received his B.Sc. in Mathematics in 2001 and his M.Sc. in Applied Mathematics in 2004 from the Faculty of Science of the University of Porto, Portugal. Currently, he is enrolled in the Ph.D. Program in Computational Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal.
Since 2009, he has been conducting his research at the Bayesian Behavior Lab, Northwestern University, Chicago, USA. His main research interests relate to how information, in particular uncertainty, is represented and processed in the brain.

Daniel E. Acuna (Ph.D. '11) received his Bachelor's and Master's degrees from the University of Santiago, Chile, and his Ph.D. from the University of Minnesota-Twin Cities, Minnesota. He is currently a research associate in the Sensory Motor Performance Program (SMPP) at the Rehabilitation Institute of Chicago and Northwestern University. During his Ph.D., he received an NIH Neuro-physical-computational Sciences (NPCS) Graduate Training Fellowship (2008-2010) and an International Graduate Student Fellowship from the Chilean Council of Scientific and Technological Research and the World Bank (2006-2010). His research interests include sequential optimal learning and control, brain-machine interfaces, and hierarchical motor control.

Konrad P. Kording received a Ph.D. in physics from the Federal Institute of Technology (ETH), Zurich, Switzerland, where he worked on cat electrophysiology and neural modeling. He received postdoctoral training in London, where he worked on motor control and Bayesian statistics. Another postdoc, partially funded by a German Heisenberg stipend, followed at MIT, where he worked on cognitive science and natural language processing and deepened his knowledge of Bayesian methods. Since 2006 he has worked for the Rehabilitation Institute of Chicago and Northwestern University, where he received tenure and promotion to associate professor in 2011. His group is broadly interested in uncertainty, using Bayesian approaches to model human behavior and for neural data analysis.

Gerald E. Loeb (M'98) received a B.A. ('69) and M.D.
('72) from Johns Hopkins University and did one year of surgical residency at the University of Arizona before joining the Laboratory of Neural Control at the National Institutes of Health (1973-1988). He was Professor of Physiology and Biomedical Engineering at Queen's University in Kingston, Canada (1988-1999) and is now Professor of Biomedical Engineering and Director of the Medical Device Development Facility at the University of Southern California. Dr. Loeb was one of the original developers of the cochlear implant to restore hearing to the deaf and was Chief Scientist for Advanced Bionics Corp. (1994-1999), manufacturers of the Clarion cochlear implant. He is a Fellow of the American Institute of Medical and Biological Engineers, holder of 54 issued US patents, and author of over 200 scientific papers. Most of Dr. Loeb's current research is directed toward sensorimotor control of paralyzed and prosthetic limbs. His research team developed BION™ injectable neuromuscular stimulators and has been conducting several pilot clinical trials. They are developing and commercializing a biomimetic tactile sensor for robotic and prosthetic hands through a start-up company for which Dr. Loeb is Chief Executive Officer. His lab at USC is developing computer models of musculoskeletal mechanics and the interneuronal circuitry of the spinal cord, which facilitates control and learning of voluntary motor behaviors by the brain. These projects build on Dr. Loeb's long-standing basic research into the properties and natural activities of muscles, motoneurons, proprioceptors, and spinal reflexes.

Amir Karniel received a B.Sc. degree (Cum Laude) in 1993, an M.Sc. degree in 1996, and a Ph.D. degree in 2000, all in Electrical Engineering from the Technion-Israel Institute of Technology, Haifa, Israel. Dr. Karniel received the E. I.
Jury award for excellent students in the area of systems theory, and the Wolf Scholarship award for excellent research students. He was a postdoctoral fellow at the Department of Physiology, Northwestern University Medical School, and at the Robotics Lab of the Rehabilitation Institute of Chicago. He is currently an associate professor in the Department of Biomedical Engineering at Ben-Gurion University of the Negev, where he serves as the head of the Computational Motor Control Laboratory and the organizer of the annual International Computational Motor Control Workshop. In recent years his studies have been funded by awards from the Israel Science Foundation, the Binational United States-Israel Science Foundation, and the US-AID Middle East Research Collaboration. Dr. Karniel is on the editorial boards of the IEEE Transactions on Systems, Man, and Cybernetics, Part A, and Frontiers in Neuroscience. His research interests include human-machine interfaces, haptics, brain theory, motor control, and motor learning.

IEEE TRANSACTIONS ON HAPTICS. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication.