
Touch Technology Detects Multiple Users - Disney -


Disney Research has developed a touch technology that can detect users by building unique profiles from their electrical impedance. The technology, based on Touché, was developed to detect multiple users.



Capacitive Fingerprinting: Exploring User Differentiation by Sensing Electrical Properties of the Human Body

Chris Harrison (1,2), Munehiko Sato (1,3), Ivan Poupyrev (1)
(1) Disney Research Pittsburgh, 4720 Forbes Avenue, Pittsburgh, PA 15213 USA
(2) Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 USA
(3) Graduate School of Engineering, The University of Tokyo, Hongo 7-3-1, Tokyo, 113-8656
munehiko@acm.org

ABSTRACT
At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. In this work, we consider how the electrical properties of humans and their attire can be used to support user differentiation on touchscreens. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing, which measures the impedance of a user to the environment (i.e., ground) across a range of AC frequencies. Different people have different bone densities and muscle mass, wear different footwear, and so on. This, in turn, yields different impedance profiles, which allows for touch events, including multitouch gestures, to be attributed to a particular user. This has many interesting implications for interactive design. We describe and evaluate our sensing approach, demonstrating that the technique has considerable promise. We also discuss limitations, how these might be overcome, and next steps.

ACM Classification: H.5.2 [Information interfaces and presentation]: User Interfaces - Graphical user interfaces; Input devices and strategies.

General terms: Human Factors, Design.

Keywords: User identification; ID; login; collaborative multi-user interaction; swept frequency capacitive sensing; SFCS; Touché; touchscreens; finger input; gestures.

INTRODUCTION
Touch interaction is pervasive, especially on mobile devices. In a typical touch-sensitive interface, applications receive touch coordinates for each finger, and possibly contact ellipsoids as well. However, there is usually no notion as to who is touching - a valuable contextual cue, which could enable a wide range of exciting collaborative and multiplayer interactions [12,19,25,28,30].

There are two basic classes of touch-centric computing that could be enhanced with user identification and tracking. Foremost are large touchscreens, situated on desktops, mounted on walls, or placed horizontally, such as interactive tabletops [20]. These are sufficiently large to accommodate multiple users interacting simultaneously. Second are handheld mobile devices. Their small size and weight allows for them to be easily passed around among multiple users, enabling asynchronous co-located collaboration [16]. Tablet devices occupy the middle ground: portable enough to be easily shared among multiple users, while also providing sufficient surface area for two or more people to interact simultaneously.

In this paper we consider how the electrical properties of users' bodies can be used for differentiation - the ability to tell users apart, but not necessarily uniquely identify them. The outcome of our explorations is a promising, novel sensing approach based on Swept Frequency Capacitive Sensing (SFCS) [26]. This approach, which we call Capacitive Fingerprinting, allows touchscreens, or other touch-sensitive devices, to not only report finger touch locations, but also identify to which user that finger belongs. Our technique supports single finger touches, multitouch finger gestures (e.g., two-finger pinch), bi-manual manipulations [5], and shape contacts [6], such as a palm press. Importantly, our technique requires no user instrumentation - users simply use their fingers as they would on a conventional touchscreen. Further, our technique could be made mobile and enhance a broad variety of mobile devices and applications.

Our experiments show the approach is feasible. In a controlled lab study, touches from pairs of users were differentiated with an accuracy of 96 percent. Put simply, four touches in 100 were incorrectly attributed to the other user.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. UIST '12, October 7-10, 2012, Cambridge, Massachusetts, USA. Copyright 2012 ACM 978-1-4503-1580-7/12/10...$15.00.

Figure 1. Example two-player "Whac-A-Mole" game. Red highlights indicate user one hits; green for user two. Individual score is kept and shown at top of screen.
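The per-player attribution that Figure 1 illustrates can be sketched as a small event loop. This is an illustrative sketch only: the event handler, user labels, and mole coordinates below are hypothetical, not part of the actual game implementation.

```python
from collections import defaultdict

# Each touch arrives with coordinates plus a user label produced by the
# classifier; scores are tallied per user rather than per screen region.
scores = defaultdict(int)

def on_touch(x, y, user, mole_positions):
    """Credit `user` with a hit if the touch lands on an active mole."""
    if (x, y) in mole_positions:
        scores[user] += 1
        mole_positions.discard((x, y))  # the mole returns underground

moles = {(1, 2), (3, 0)}
on_touch(1, 2, "user_one", moles)   # hit (highlighted red in Figure 1)
on_touch(3, 3, "user_two", moles)   # miss
on_touch(3, 0, "user_two", moles)   # hit (highlighted green)

assert scores == {"user_one": 1, "user_two": 1}
```

Because attribution rides on the touch event itself, neither player is tied to a fixed half of the screen, which is what allows the shared game space described above.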
RELATED APPROACHES
User identification has implications in many application domains, including security, personalization, and groupware. The fundamental goal of user identification is to understand who is controlling the input at any given moment and adjust interaction or functionality accordingly. Attempts to support co-located multi-user interaction on shared displays go back to at least the late 1980s, with systems such as Boardnoter [29] and Commune [3]. The conceptual and social underpinnings of Single Display Groupware have been extensively researched by Stewart, Bederson, Gutwin and others (see e.g., [12,30]). More recently, there have been efforts to develop toolkits to facilitate "identity enabled" applications [19,23,25,28].

There are significant technical challenges in developing and deploying technologies for user identification, particularly on touchscreens. Foremost, and perhaps most challenging, is that the best techniques avoid instrumenting the user. Further, there should be minimal or no instrumentation of the environment, as external infrastructure is costly and prohibits mobility. Furthermore, the technique must be fast and robust, identifying a large number of users both sequentially and simultaneously. Additionally, it should be inexpensive, easily deployable, sufficiently compact, and have low power requirements, thus making integration into mobile devices feasible. Currently, we are not aware of any system that satisfies all of these requirements.

In attempting to answer this challenge, there has been significant effort put forth to develop technical solutions that can support user identification on touchscreens. One approach is to not uniquely identify each user per se, but rather distinguish that there are multiple users operating at the same time. For example, in Medusa [1], the presence of multiple users can be inferred by proximity sensing around the periphery of an augmented table. Touches to the surface can be attributed to a particular user by using arm orientation sensed by an array of proximity sensors on the table bezel. If users exit the sensing zone, knowledge of the user is lost; upon returning, the user is treated as new. Similarly, Dang et al. [8] used finger orientation cues to back-project to a user, achieving a similar outcome. In both systems, occlusion and users in close proximity (i.e., side by side) are problematic.

Another approach to user identification is to capture identifying features, such that each user can be uniquely recognized. One option is to instrument the user with an identifying item, for example, fiducial markers [19] or infrared-code-emitting accessories [21,24]. Researchers have also considered biometric features, such as face [35], hand contour [27] and fingerprint analysis [15,31]. MultiToe [2] uses a series of down-facing Kinect cameras to segment users' shoes for identification purposes. DiamondTouch [9] and DT Controls [10] uniquely ground each user (e.g., through a chair or floor mat wired to the sensing electronics). Because the electrical path is unique to each user, touches on a large shared screen can be sensed quickly and reliably. All of these techniques require large, static setups and/or instrumentation of the user, increasing both cost and complexity, and not permitting truly mobile, ad-hoc interaction.

Finally, there are several systems that employ uniquely identifiable pens, such as in Wacom tablets [32], which use electromagnetic resonance sensing to identify several styli, which could be attributable to different users. TapSense [14] used pens with different tip materials, allowing for pen disambiguation through acoustic sensing. Although robust, these approaches do not support the highly popular direct touch interaction.

CONTRIBUTION
The salient distinguishing feature of our user differentiation approach is that it allows direct touch interaction without requiring instrumentation of either the user or environment - sensing electronics are fully contained within the device. This important property sets it apart from all previous techniques. Further, our technology is sufficiently compact, low-powered and inexpensive to enable integration into mobile devices and allow for truly mobile interactions. Additionally, user classification occurs in real time; the initial "first-touch" calibration takes less than a second. Finally, our approach is not sensitive to occlusion, orientation, lighting, or other factors that are problematic for computer vision driven systems.

At present, Capacitive Fingerprinting also has several drawbacks. Foremost, it can differentiate only among a small set of concurrent users. Further, users can only touch sequentially, not simultaneously. There are additional limitations regarding the persistence of identification. Finally, although our experimental results are promising, robustness needs to be improved for real world use.

Nonetheless, this work puts forward a novel and attractive approach for user differentiation that has not been proposed previously. This paper assesses and confirms the feasibility of the approach and expands the toolbox of techniques HCI researchers and practitioners can draw upon. Similar to any other user identification approach, Capacitive Fingerprinting offers a distinct set of pros and cons. Given that a sensor fusion approach might ultimately prove strongest for user identification, we believe that our approach may fill important gaps in the feature space used for classification. We hope this work will contribute to the ultimate goal of robust and unobtrusive technologies for differentiating and identifying users in a broad variety of applications.

CAPACITIVE FINGERPRINTING
Our approach is based on the fundamental observation that every human body has varying levels of bone density, muscle mass, blood volume and a plethora of other biological and anatomical factors. Furthermore, users also wear different shoes and naturally assume different postures, which alters how a user is grounded. As a consequence, the electrical properties of a particular user can be fairly unique, like a fingerprint, from which we derive our system's name. Therefore, if one can accurately measure the electrical
properties of a user, it should be possible to identify, or at least differentiate, the person.

Humans have a multitude of electrical properties that can be measured, such as vital signs (e.g., EKG). In this paper, we estimate impedance profiles of users at different frequencies, by using recently proposed SFCS techniques [26]. The advantage of SFCS is that it is trivial to instrument devices, as only a single electrode and wire are needed. Furthermore, it is inexpensive, and it does not require users to wear or hold any additional devices. We are not aware of previous attempts to explore SFCS for user identification.

Figure 2. Mean permittivity and resistivity of different tissues (from [11]).

The fundamental physical principle behind Capacitive Fingerprinting is that the path of alternating current (AC) in a human body depends on the signal frequency [11]. This is because the opposition of body tissues, blood, bones, etc., to the flow of electrical current - or body electrical impedance - is also frequency dependent. For example, at 1 kHz bone has a resistivity of approximately 45 Ω·m, but at 1 MHz its resistivity increases to ~90 Ω·m (Figure 2) [11]. Since the AC signal always flows along the path of least impedance, it is theoretically possible to direct the flow of the current through various paths inside the user's body by sweeping over a range of frequencies.

As the signal flows through the body, the signal amplitude and phase change differently at different frequencies. These changes can be measured in real time and used to build a frequency-to-impedance profile. Different people, by virtue of having unique bodies, should exhibit slightly different profiles. Although we do not specifically model this relationship, our fingerprint-based classification approach relies on it.

Importantly, Capacitive Fingerprinting is a non-invasive technique - we do not require a special purpose ground electrode be coupled to users (as in [9,10]). Instead, we use the natural environment as ground (i.e., the floor). This also means shoes influence the impedance profile. As shown in [2], users' shoes are also fairly unique and can aid classification.

We should note that impedance measurements of the human body have been used since the 1970s in medical diagnostics, such as measuring fluid composition and BMI [11,17] and electro-impedance tomography imaging [7]. Despite a long history of such measurements, the correlation between measured body impedance and properties of the human body is still not fully understood [11]. Most often, just one or two frequencies are used for such measurements and we are not aware of any attempts to apply this technique in HCI applications. Capacitive Fingerprinting should not be confused with galvanic skin response (GSR), which measures the conductivity of the skin (see e.g., [18,22] for applications in HCI).

PROTOTYPE
We created a proof-of-concept system seen in Figure 1 and schematically described in Figure 3. To capture and classify impedance profiles among a small set of users, we employ Swept Frequency Capacitive Sensing (SFCS), introduced as Touché [26]. Beyond the Touché sensor board, our system consists of a 6.7" LCD panel, a 6.4" IR touch screen, and an Indium Tin Oxide (ITO) coated transparent plastic sheet. The Touché sensor board generates a 6.6 V peak-to-peak sinusoidal wave, ranging in frequency from 1 kHz to 3.5 MHz, using 200 steps. This signal is injected into an ITO sheet situated on top of the LCD panel. When a user touches the ITO sheet, an electrical connection to the Touché sensor is created (the user is also grounded to the environment). The current of the sine wave is significantly lower than 0.5 mA, safe for humans and on par with commercially available touchscreens [33]. Impedance profiles are sent over USB to a computer approximately 33 times per second. Note that our present sensor board does not measure the true impedance of the body, but rather measures the amplitude component. Specifically, it creates a voltage-divider circuit with a resistor and samples this with an AD converter. We leave measurement of the phase component to future work.

Figure 3. A cutaway view of the touchscreen layers, which are connected to a Touché sensor board. Here, when a user touches the screen, our classifier attributes the touch event to one of the users that have previously logged in.
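The sweep-and-measure loop described above can be sketched as follows. The divider resistor value and the resistor-capacitor body model below are illustrative assumptions, not the actual Touché board parameters; only the sweep range (1 kHz to 3.5 MHz in 200 steps) and drive voltage (6.6 V peak-to-peak) come from the text.

```python
import numpy as np

FREQS = np.linspace(1e3, 3.5e6, 200)   # 200 steps, 1 kHz to 3.5 MHz
V_PP = 6.6                              # peak-to-peak drive voltage
R_DIV = 10e3                            # hypothetical divider resistor (ohms)

def body_impedance(freqs, r=50e3, c=100e-12):
    """Toy body model: a resistor in parallel with a capacitor.
    Real bodies are far more complex; this only illustrates that
    impedance magnitude falls with frequency."""
    zc = 1.0 / (2j * np.pi * freqs * c)
    return (r * zc) / (r + zc)

def amplitude_profile(freqs):
    """Amplitude at the divider midpoint; the board samples this with
    an AD converter, yielding a 200-point profile ~33 times/second."""
    z = body_impedance(freqs)
    return V_PP * np.abs(z / (z + R_DIV))

profile = amplitude_profile(FREQS)
assert profile.shape == (200,)
# Lower frequency -> higher capacitive impedance -> larger amplitude.
assert profile[0] > profile[-1]
```

The 200-point vector produced this way is the "capacitive fingerprint" fed to the classifier; only the amplitude is captured, matching the paper's note that phase measurement is left to future work.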
Figure 4. Impact on classification performance of varying classifier training data from 0.1 seconds (1 sample per participant) to 8 seconds (80 samples per participant).

Figure 5. Classification accuracies for all trial participant pairings. Classifier was trained using 0.5 seconds of finger training data (5 samples per participant).

We first attempted to build our system on top of a capacitive touch input panel. However, we found that the conductive nature of the ITO sheet interfered with touch sensing. Simultaneously, the conductive layers inside capacitive and resistive touchscreens interfered with SFCS. This necessitated the use of an electrically passive sensing technology. We selected an IR-driven touch panel, though several other technologies are applicable (e.g., Surface Acoustic Wave). It is important to note, however, that with tighter hardware integration it may be possible to use e.g., a projective capacitive touchscreen for both touch and impedance sensing.

The final component of our system is a conventional computer running a classification engine. We use a Support Vector Machine (SVM) implementation provided by the Weka Toolkit [13] (SMO, C=2.0, polynomial kernel, e=1.0). We employ the same feature set used successfully in [26]. Our setup provides 200-point impedance profiles 33 times per second. When a user first touches the screen, ten impedance profiles are captured, taking approximately 300 ms. Each profile is classified in real time; a majority-voting scheme is used to decide on the final classification. This improves accuracy, as the first 100 ms of touch can be unstable due to the user not yet making full contact. This final classification result is paired with the last touch event.
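The classification stage described above (an SVM, with majority voting over the ~10 profiles captured per touch) can be sketched as follows. The paper uses Weka's SMO implementation; scikit-learn's SVC serves here as an analogous stand-in, and the random "profiles" are synthetic placeholders for real 200-point impedance profiles.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_profiles(center, n):
    """Synthetic 200-point profiles clustered around a per-user center."""
    return center + 0.05 * rng.standard_normal((n, 200))

user_a, user_b = rng.random(200), rng.random(200)

# "Login": 5 training profiles per user (~0.5 s at 10 samples/s).
X = np.vstack([fake_profiles(user_a, 5), fake_profiles(user_b, 5)])
y = np.array([0] * 5 + [1] * 5)
clf = SVC(C=2.0, kernel="poly").fit(X, y)

def classify_touch(profiles, clf):
    """Classify each profile captured during a touch (~300 ms) and
    majority-vote the final label; voting masks the unstable first
    ~100 ms of contact."""
    votes = clf.predict(profiles)
    return Counter(votes).most_common(1)[0][0]

touch = fake_profiles(user_b, 10)       # a new touch by user B
assert classify_touch(touch, clf) == 1
```

The same train-then-test loop, repeated over every pair of participants, is how the 55-pairing simulation reported below can be reproduced in principle.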
EXPERIMENTAL EVALUATION
The first fundamental question that we aim to answer in this paper is: can we use measured electrical properties of a user's body for differentiation? To answer this question, we conducted an evaluation that included 11 participants (two female, mean age 28.1) recruited from our industrial research lab. Our evaluation consisted of two phases. First, we collected data for the purpose of training our classifier. The second phase collected independent data for the purpose of evaluating our classifiers. The experiment took approximately 20 minutes.

Procedure
During the training data collection, users were asked to touch a single point in the center of the screen for 8 seconds while rocking their finger back-and-forth and side-to-side (see Video Figure). This helped to capture variability in finger pose and pressure that might be naturally encountered through extended use, but unlikely reflected in a typical, single touch event. During the touch period, 10 samples were collected per second, yielding 80 data points per participant. We were also interested in evaluating how the technique scaled to other gestures beyond simple finger touches. The same procedure was used to collect data for a two-finger pinch (using one hand), a bi-modal two-finger touch, and resting the palm on the screen.

Note that we deliberately chose to collect data from a single touch (i.e., single point) on the touchscreen, as this best simulated a "login button" experience. Although multi-point collection would yield more data, and potentially stronger results, pilot testing suggested that this would be impractical for real world use and applications.

During testing data collection, a 4x4 grid of numbered crosshairs was provided as touch targets. Users were asked to touch, with a single finger, each crosshair in order. Two rounds were completed, yielding 32 touch events per participant. In addition, participants performed, in locations of their choosing, ten one-handed pinches, ten bi-modal two-finger touches, and ten palm touches. In total, this process provided 62 touch events, using four different gestures, distributed over the entire surface of the screen.

When investigating a new sensing technique, it is beneficial to control various experimental factors so as to best isolate the innate performance of the technique. Once this has been established, it is then interesting to relax various constraints to explore the broader feasibility. Following this mantra, during the experiment users were asked to stand with both feet on the floor, providing a relatively consistent connection to the floor.

Pairing Users
To assess our system's feasibility, we investigated user differentiation accuracy for pairs of users. Instead of running a small number of pairs live, we used train/test data from our 11 participants to simulate 55 pairings (all combinations of participants). For example, in a trial pairing participant 1 with participant 2, the system was initialized with training data from 1 and 2. The resulting classifier was then
fed unlabeled testing data from participants 1 and 2 combined and in a random order. This simulated sequential touch events as if the users were co-located. From the perspective of the classifier, this was no different than real-time operation.

Results
Looking first at single finger touches, performance using all 8 seconds of training data yielded an all-pairs average accuracy of 97.3% (SD=5.9%). Figure 4 illustrates the classification performance with different volumes of training data, varying from 0.1 seconds (1 training sample per participant) to 8 seconds (80 training samples per participant). Performance plateaus after 0.5 seconds of training data (5 training samples per participant), which achieves 96.4% accuracy (SD=9.0%). Figure 5 shows classification accuracies for all user pairings. Of note, two thirds of pairings had 100% differentiation accuracy.

These findings underscore two key strengths of our approach. Foremost, the fact that the system is able to perform at 84.5% accuracy (SD=18.9%) with a single training instance from each user suggests the feature space we selected is highly discriminative between users. Second, 0.5 seconds appears to be a sweet spot, in particular, a nice balance between classification accuracy and training duration. Whereas an 8-second login sequence would significantly interrupt interactive use, 500 ms is sufficiently quick to be of minimal distraction. Given that our current approach requires users to log in each time they want the system to differentiate them, this interaction has to be extremely lightweight if it is to be practical.

We ran a second, post-hoc simulation that included all gestures: single finger touches, one-handed pinches, bi-modal two-finger touches, and palm touches. The goal was not to distinguish between different gestures, as demonstrated in [26], but rather to distinguish between users performing a variety of gestures. Our classifier was trained on 20 samples per participant (5 samples per gesture), representing 2 seconds of training data. Our testing data consisted of all 62 touch events from our testing data collection. Again using all 55 simulated participant pairings, average accuracy was 97.8% (SD=6.9%). This is very similar to finger-touch-only performance using 2 seconds of training data (both classifiers were trained on 20 samples per participant).

We repeated the latter experiment using a single frequency in order to demonstrate the utility of employing a swept frequency approach. We used attribute selection to identify 753.5 kHz as the single best frequency at which to differentiate users. It should be noted that in a real world system this ideal frequency depends on the set of users and environmental conditions, and thus cannot be known a priori - thus our estimate is idealized. On average, user differentiation was 87.3% accurate vs. 97.8% when all frequencies were used, or roughly six times the error rate.

EXAMPLE APPLICATIONS
The second research question that we would like to answer in this paper is: what are the real-world implications of this technique? To begin to address this question, we designed three simple exemplary applications based on Capacitive Fingerprinting. These applications demonstrate different interaction possibilities if user differentiation was available on touchscreens (see also the accompanying Video Figure).

For example, there are many games, especially for tablets, that allow two users to play simultaneously. When individual scoring or control is needed, interfaces most typically have a split view. However, this limits game design possibilities and decreases the available screen real estate for each player. Using Capacitive Fingerprinting, it is possible for two players to interact in a common game space. To demonstrate this, we created a "Whac-A-Mole" game [34], seen in Figure 1. Targets appear out of random holes; players must press these with their fingers before they return underground. Each time a player successfully hits a target, he gains a point; individual scores are kept automatically and transparently for each player.

Figure 6. Example painting application. Two users can select colors from a palette on the left of the screen. The system records each user's selection, allowing users to paint in a personalized color.

Figure 7. Example sketching application. Each user has a different drawing color to attribute edits. An "undo" button is provided, allowing users to undo strokes from their personal edit history.

In another example, we created a painting application (Figure 6), where two users can paint with their own selected color. For example, User A can select red and paint in red,
  6. 6. while User B can select blue and paint in blue without af- Robustness: Our experimental results suggest a fairly robustfecting User A’s selection. This could be trivially extended system with paired-user accuracies in excess of 96%. How-to brush type, thickness, opacity, and similar features. In a ever, we caution that a controlled lab study is not a goodgeneral sense, Capacitive Fingerprinting ought to allow proxy for real-world use. Our experimental results shouldusers to operate on the touch interface with personalized be viewed as evidence that the underlying technique is validproperties [25]. Further, because applications identify the and may be a tenable way forward for supporting instru-owner of each stroke, it is possible to support individual- mentation-free, identity-enabled, mobile touchscreen inter-ized undo stacks – a feature we built into a simple sketching action – the first technique to achieve this end. This opensapplication (Figure 7). up the possibility of future work, which we discuss next.LIMITATIONS AND CHALLENGES FUTURE WORKThe experimental results suggest that measuring the imped- Combining Capacitive Fingerprinting with other sensingance of users is a promising differentiation approach. How- techniques is of great interest. Sensor fusion approachesever, our study and exemplary applications brought to light generally aim to merge multiple imperfect techniques toseveral limitations and challenges. These include: achieve a superior outcome. Our approach has a unique setPersistence of identification: Calibration seems to be sensi- of strengths and weaknesses that lend well to this approach.tive to changes in the human body over the course of the We are also interested in exploring adaptive classifiers,day. Thus, walking away and returning hours (or certainly where the system could continuously collect training datadays) later is not currently possible. 
We hypothesize that environmental factors, such as ambient temperature and humidity, as well as biological factors, such as blood pressure and hydration, cause impedance profiles to drift over time. This within-person variability can be larger than between-person variability, which suggests use scenarios where interaction is ad hoc and relatively short, or where the precision of recognition is not critical.

Because of this limitation, we instituted a login procedure for all of our example applications. Specifically, when a user wants to join a multi-user, collocated interaction, they must first press and hold a "login button" (see Video Figure). This triggers the system to capture an impedance profile (of the finger pressing the button) and retrain the classifier. As discussed in the evaluation section, this interaction can be completed in as little as 500 ms.

Ground connection: The electrical properties of the user cannot be taken separately from the electrical properties of the environment. However, since the environment is typically the same for co-located users, per-user differences are detectable. The system is sensitive to how a user is connected to ground. For example, if a user logs into the system while seated and then attempts to use it standing, recognition can be poor: changing the global impedance of a user so dramatically often obscures the per-user variations, and the user must log in again to register a new impedance profile. Future work is required to study this in more detail, as well as to develop techniques to overcome this limitation.

Sequential touch: A further limitation is that our system currently uses a single electrode covering the entire touch surface. As a consequence, our current prototype can only process a single touch at a time (i.e., two users cannot press the screen simultaneously and be differentiated). It is likely, though not tested, that a mosaic of electrodes, as seen in some forms of projected capacitive touchscreens, could be used to overcome this by sampling smaller regions that are unlikely to fit two users.

…from users and integrate changes, including natural drift. It may also be possible to identify specific situations where the difference between two users is sufficiently dramatic that login is no longer necessary and a persistent, general classifier can be used. One example application scenario is differentiation between parent and child, or teacher and young student, where games and educational experiences may be designed according to who is providing input. For example, in an educational application a teacher could draw a hint to help a student solve a math problem; the hint would then fade out after a few seconds, prompting the student to complete the problem him- or herself.

Finally, we are also interested in exploring applications of Capacitive Fingerprinting in highly controlled environments or applications where exact user identification is not necessary, so-called soft biometrics. In-car entertainment systems are a prime example. We are also curious as to how our approach could be applied to differentiating between humans and non-humans (e.g., bags, drinks) in ride systems and other applications.

CONCLUSION
In this paper we have described how sensing of humans' electrical properties can be used for interactive user differentiation. We integrated this approach into a small touchscreen device and built three simple demo applications to highlight some basic uses of user-aware interaction. The evaluation of our sensing technique demonstrated that the approach holds significant promise. We hope that the current research will encourage HCI researchers and practitioners to investigate this interesting and exciting technology direction.

ACKNOWLEDGEMENTS
We are grateful to Zhiquan Yeo, Josh Griffin, Jonas Loh and Scott Hudson for their significant contributions in investigating early prototypes of Touché, on which this work is based. We are also thankful to Disney Research and The Walt Disney Corporation for continued support of this research effort.
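To make the login-and-retrain flow described in the discussion concrete, the following is a minimal, hypothetical sketch. The class name, the synthetic swept-frequency profiles, and the nearest-profile matching are illustrative assumptions only; the actual system classifies impedance profiles with an SVM [4], for which a simple nearest-profile classifier stands in here.

```python
import math

class CapacitiveFingerprinter:
    """Illustrative sketch: per-session user differentiation from impedance profiles."""

    def __init__(self):
        # user name -> impedance profile (one magnitude per swept frequency)
        self.profiles = {}

    def login(self, user, profile):
        # Press-and-hold "login button": capture the user's impedance
        # profile and (re)train the classifier on all registered users.
        self.profiles[user] = list(profile)

    def classify(self, touch_profile):
        # Attribute a touch to the registered user whose stored profile
        # is closest (Euclidean distance) to the measured one.
        def dist(stored):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(stored, touch_profile)))
        return min(self.profiles, key=lambda u: dist(self.profiles[u]))

# Synthetic four-frequency profiles; values are illustrative only.
fp = CapacitiveFingerprinter()
fp.login("user_a", [1.0, 0.8, 0.6, 0.5])
fp.login("user_b", [1.4, 1.1, 0.9, 0.8])
print(fp.classify([1.05, 0.82, 0.58, 0.52]))  # -> user_a
```

Note that, matching the single-electrode limitation above, this sketch attributes one touch at a time; a mosaic of electrodes would require one such classification per sampled region.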
REFERENCES
1. Annett, M., Grossman, T., Wigdor, D., and Fitzmaurice, G. Medusa: a proximity-aware multi-touch tabletop. In Proc. UIST '11. 337-346.
2. Augsten, T., Kaefer, K., Meusel, R., Fetzer, C., Kanitz, D., Stoff, T., Becker, T., Holz, C., and Baudisch, P. Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input. In Proc. UIST '10. 209-218.
3. Bly, S.A. and Minneman, S.L. Commune: A Shared Drawing Surface. In SIGOIS Bulletin, Massachusetts, 1990. 184-192.
4. Burges, C.J. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2), June 1998. 121-167.
5. Buxton, W. and Myers, B. A study in two-handed input. In Proc. CHI '86. 321-326.
6. Cao, X., Wilson, A., Balakrishnan, R., Hinckley, K., and Hudson, S.E. ShapeTouch: Leveraging contact shape on interactive surfaces. In Proc. TABLETOP '08. 129-136.
7. Cheney, M., Isaacson, D., and Newell, J.C. Electrical impedance tomography. SIAM Review, 41(1), 1999. 85-101.
8. Dang, C.T., Straud, M., and Andre, E. Hand distinction for multi-touch tabletop interaction. In Proc. ITS '09. 101-108.
9. Dietz, P. and Leigh, D. DiamondTouch: a multi-user touch technology. In Proc. UIST '01. 219-226.
10. Dietz, P.H., Harsham, B., Forlines, C., Leigh, D., Yerazunis, W., Shipman, S., Schmidt-Nielsen, B., and Ryall, K. DT controls: adding identity to physical interfaces. In Proc. UIST '05. 245-252.
11. Foster, K.R. and Lukaski, H.C. Whole-body impedance: what does it measure? The American Journal of Clinical Nutrition, 64(3), 1996. 388S-396S.
12. Gutwin, C., Greenberg, S., Blum, R., and Dyck, J. Supporting Informal Collaboration in Shared Workspace Groupware. HCI Report 2005-01, U. Saskatchewan, Canada, 2005.
13. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I.H. The WEKA Data Mining Software: An Update. SIGKDD Explorations, 11(1), 2009.
14. Harrison, C., Schwarz, J., and Hudson, S.E. TapSense: enhancing finger interaction on touch surfaces. In Proc. UIST '11. 627-636.
15. Holz, C. and Baudisch, P. The generalized perceived input point model and how to double touch accuracy by extracting fingerprints. In Proc. CHI '10. 581-590.
16. Lucero, A., Holopainen, J., and Jokela, T. Pass-them-around: collaborative use of mobile phones for photo sharing. In Proc. CHI '11. 1787-1796.
17. Lukaski, H., Johnson, P., Bolonchuk, W., and Lykken, G. Assessment of fat-free mass using bioelectrical impedance measurements of the human body. The American Journal of Clinical Nutrition, 41(4), 1985. 810-817.
18. Mandryk, R.L. and Inkpen, K.M. Physiological indicators for the evaluation of co-located collaborative play. In Proc. CSCW '04. 102-111.
19. Marquardt, N., Kiemer, J., Ledo, D., Boring, S., and Greenberg, S. Designing user-, hand-, and handpart-aware tabletop interactions with the TouchID toolkit. In Proc. ITS '11. 21-30.
20. Matsushita, N. and Rekimoto, J. HoloWall: Designing a Finger, Hand, Body, and Object Sensitive Wall. In Proc. UIST '97. 209-210.
21. Meyer, T. and Schmidt, D. IdWristbands: IR-based user identification on multi-touch surfaces. In Proc. ITS '10. 277-278.
22. Moore, M.M. and Dua, U. A galvanic skin response interface for people with severe motor disabilities. In Proc. ASSETS '04. 48-54.
23. Partridge, G.A. and Irani, P.P. IdenTTop: a flexible platform for exploring identity-enabled surfaces. In CHI EA '09. 4411-4416.
24. Roth, V., Schmidt, P., and Guldenring, B. The IR ring: Authenticating users' touches on a multi-touch display. In Proc. UIST '10. 259-262.
25. Ryall, K., Esenther, A., Everitt, K., Forlines, C., Morris, M.R., Shen, C., Shipman, S., and Vernier, F. iDwidgets: Parameterizing widgets by user identity. In Proc. INTERACT '05. 1124-1128.
26. Sato, M., Poupyrev, I., and Harrison, C. Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects. In Proc. CHI '12. 483-492.
27. Schmidt, D., Chong, M.K., and Gellersen, H. HandsDown: hand-contour-based user identification for interactive surfaces. In Proc. NordiCHI '10. 432-441.
28. Shen, C., Vernier, F.D., Forlines, C., and Ringel, M. DiamondSpin: an extensible toolkit for around-the-table interaction. In Proc. CHI '04. 167-174.
29. Stefik, M., Bobrow, D.G., Foster, G., Lanning, S., and Tatar, D. WYSIWIS Revised: Early experiences with multiuser interfaces. IEEE Transactions on Office Information Systems, 5(2), 1987. 147-167.
30. Stewart, J., Bederson, B., and Druin, A. Single Display Groupware: A Model for Co-present Collaboration. In Proc. CHI '99. 286-293.
31. Sugiura, A. and Koseki, Y. A user interface using fingerprint recognition: holding commands and data objects on fingers. In Proc. UIST '98. 71-79.
32. Wacom tablet.
33. Webster, J. (Ed.) Medical instrumentation: Application and design. 4th ed. Wiley, 2008. p. 641.
34. Whac-a-mole.
35. Zhao, W., Chellappa, R., Phillips, P.J., and Rosenfeld, A. Face recognition: A literature survey. ACM Comput. Surv., 35(4), Dec. 2003. 399-458.