An Overview of Interactive Surfaces: Applications, Sensors, and Dimensions
Song Li, Kathryn Clagett, Geisa Bugs

Mobile Interaction, Institute for Geoinformatics (IFGI), University of Münster, Germany
firstname.lastname@example.org, email@example.com, firstname.lastname@example.org

Abstract. This paper summarizes several aspects of interactive surface technology by exploring its applications, sensors, and dimensions. Application areas are surveyed and subdivided into the categories of entertainment, collaboration, communication, computer interface, and customer-vendor interface. Sensing technology is likewise broken down into smaller groups: three sensor types (capacitive touch screens, optical imaging, and frustrated total internal reflection) are investigated in depth, with a general overview given to the other types. With regard to dimension, the size and complexity of different interactive surfaces are reviewed. The paper concludes by outlining the possible benefits of using interactive surface technology.

Keywords: interactive surfaces, touch screen, sensing technology.

1 Introduction

At the end of the 1980s, Mark Weiser, the author of ubiquitous computing, which addresses the role of human/computer relationships, proposed a 'third computing era' in which technology recedes into the background of our lives. In other words, in the third era computers are everywhere, with each user accessing many computers on a regular basis. Twenty years later this vision is being realized: it is now possible not only to transfer digital photographs to a wireless phone without plugging into a computer, but also to move a cursor with one's fingers to interact with interfaces, or even to enter commands by voice. We are now living in an era where people are surrounded by intelligent surfaces and where digital information is coupled with everyday physical objects and environments. Interactive surfaces are one manifestation of this new era.
In general, an interactive surface is characterized by "the transformation of ... surface[s] within architectural space (e.g., walls, desktops, ceilings, doors, windows) into an active interface between the physical and virtual worlds." Broadly speaking, interactive surfaces are those that can track
human gesture and voluntary manipulation of the object, the media, and the space involved in the interaction by a user. Consequently, interactive surfaces can be much more intuitive than platforms based on keyboard and mouse, and therefore much closer to spontaneous human behavior. This is a whole new way of interacting with information.

The way people relate to the surface depends on the technology applied. One of the most significant advances in user interface design is touch screen technology. Although research in this area started in the 1980s, the technology has gained increasing exposure recently through the launch of the iPhone, the first cellular phone that allows the user to touch the digital content directly with their fingers, a revolutionary notion. This technology will undoubtedly become more ubiquitous, with interactive surfaces likely to become common on the surface of a kitchen table or on the television screen, for example. Basically, touch screen technology allows for interaction with the surface, and the manipulation of more than one thing at a time with movements of the hands or gestures of the body.

The goal of this research is to give an overview of interactive surfaces today, focusing for the most part on touch screen technology, where the hands make the gestures to which the surface responds. The paper divides the overview into three main parts: application areas, technologies, and dimensions of interactive surfaces. First, the application areas are classified according to function/objective, along with example devices. Next, the many solutions available for particular applications, or simply the technologies behind them, are explored. Finally, the third section addresses the size and complexity dimensions of interactive surfaces.
Lastly, the conclusion will summarize the findings, emphasizing the benefits, disadvantages, and future speculation on interactive surfaces.

2 Application Areas

This section presents an effort to illustrate the broad spectrum of viable applications of interactive surfaces. The devices are classified into five areas based on objective/function: 1) entertainment, 2) collaboration, 3) communication, 4) computer interface, and 5) customer-vendor interface. Each application area is described along with examples. It is important to highlight that the categories are not mutually exclusive and some devices can fall into more than one category.

2.1 Entertainment

There are a considerable number of current and possible applications for entertainment purposes. It is quite difficult to evaluate all of the achievable applications for surfaces of any shape or size that sense the location of objects and gestures. Commercial multi-touch displays are already being employed mainly in bars, discos, and events. Through striking graphic design on tables, floors, and walls, the surfaces attract curiosity,
transforming customers of these bars or discos into active participants. For example, clients can create light 'paths' on tables by setting their glasses on the bar, or patrons may have fun 'drawing' on the floor while dancing. One can imagine a lounge full of wall projections that respond to body movement or touch, changing color and display. It is at the very least an interesting and attractive application of interactive surfaces, and may serve to draw in more customers, thereby being a worthwhile investment for business owners.

Additionally, interactive games are another entertainment area with huge potential for employing technologies that let content be controlled in the real world by the user's gestures. In order to offer more 'degrees of freedom,' the game design industry is concentrating on the concept of augmented reality, which combines real and virtual world elements in real-time interactive 3D. The most popular example may be the Nintendo Wii. Even though the players do not touch the screen, by using the Wii Remote users are able to control the game with physical gestures and also navigate the menu system, rather than using traditional button presses.

For an example where touch screen technology is used, the Invisible Train is a simple multi-player game in which players guide virtual trains, visible only through a PDA's (Personal Digital Assistant) video display, over a real wooden miniature railroad track. In the game, track switches and speed adjustments are activated in response to a tap on the PDA's touch screen. Likewise, entertainment can be linked with knowledge acquisition. The iFloor is an interactive floor whose ultimate goal is to make the physical space of libraries more attractive and appealing to its users.
The iFloor allows multiple people to post questions/answers with a cursor that is dragged around the floor via body movement (Figure 1). The purpose of this application is closely related to the collaboration area, which will be addressed next.

Fig. 1. iFloor
2.2 Collaboration

Oversized displays like whiteboards, projections, or plasma screens are typical in organizations or educational spaces with the intent to share information. The interactive whiteboard was the precursor in bringing interaction to these displays. The difference compared to a simple computer desktop projection is that users can interact with the board using a pen, finger, or other device (some are even adapting the Wii Remote).

Several projects intend to enhance these public surfaces and make them yet more interactive in an attempt to facilitate group collaboration in meeting rooms, supporting a wide range of activities. These surfaces can promote the sharing of knowledge, information, and even objects in brainstorming sessions. Public interactive surfaces are also able to save stages of the work process by employing multiple displays.

For instance, the DiamondTouch Table is one of the first multi-touch technologies that supports small group collaboration (Figure 2). This table works by running an electronic signal to each user's finger via their chair in order to differentiate the users. Another example, the Dynamo multi-user interactive surface, was designed to enable the sharing and exchange of digital media in public spaces. The surface is composed of one or more displays that can be arranged horizontally or vertically. Users can attach multiple USB mice and keyboards to the surface or access it remotely. The company Perceptive Pixel also creates large and medium-size displays adopting multi-touch technology that are claimed to enhance collaborative spaces, allowing multiple users, up to 20 fingers, to work and interact. In fact, the technology is currently being utilized in CNN news broadcasts.

Fig. 2. DiamondTouch Table
2.3 Communication

Large surfaces are often employed for the communication of useful information about products or services in public spaces like museums, airports, and shopping malls. Being mere projections, however, these surfaces usually do not allow direct public contact; it is likely that greater levels of interaction will be reached soon, since this is a topic of significant interest among researchers. Conversely, handheld, mobile, and wearable devices are more likely to accommodate user input. Large or small, these devices can equally be explored as communication tools using touch screen technology.

The Gesture Wall project, for example, first introduced at the Brain Opera in 1996, makes use of electrodes to measure the position and movement of the user's hands and body in front of a projection screen. Additionally, Sensitive Wall is a commercially available interactive surface with a large vertical touchless display able to detect the presence of hands in real time, up to 30 centimeters from the surface. Alternatively, it is easy to envision an interactive surface device that might accompany a user during an exhibition, similar to the PEACH (Personal Experience with Active Cultural Heritage) project, a suite of interactive and user-adaptive technologies for museum visitors that, among other things, presents video documentaries on mobile devices.

2.4 Computer Interface

Multi-touch technology will rapidly become commonplace in computer interfaces. This technology will likely overtake traditional user interface techniques such as the graphical user interface and desktop, in which people traditionally use a mouse on a horizontal surface to control a visual pointer on the screen. Interactive surfaces are more situation-aware and assistance-oriented than command-oriented.
Following the success of the iPhone, Apple expanded the use of multi-touch computing to the iPod Touch (a portable media player and wi-fi mobile platform), as well as to its notebook lines. Additionally, Asus has already included multi-touch functionality on the touchpad of its EEE PC, a small portable notebook announced last April.

Although multi-touch technology evidences the technical evolution of computer interfaces, still other kinds of interaction may be supported on newer types of interactive computer interfaces. This may lead to a complete change in operating systems, make USB cords obsolete, and/or enable new recognition commands. Users will be able to interact with a real world that is augmented by computer-synthesized information.

2.5 Customer-Vendor Interface

Of course, the marketplace has also embraced this new technology. Interactive surfaces are captivating and thereby an effective advertising tool, catching a consumer's
attention instantaneously, in a crowd-stopping effect. Consequently there is a significant field for customer-vendor interface applications, ranging from organizing pictures and videos to ordering and paying.

Microsoft Surface detects gestures and movements through infrared cameras, and the image is shown on the screen through a projection system. This technology has been explored by Microsoft partners since April 2008. One demonstration of the device shows how customers could use it to customize snowboards. By taking a tag containing a chip that identifies a snowboard in a shop, the user can place this tag on the interactive surface, which immediately recognizes which snowboard the customer is interested in. The customer is then able to resize, rotate, and change the colors of graphic symbols which can be added to the product. After virtually 'creating' their product design, the layout can be saved on a mobile phone or other storage device, meaning that the customer can leave the store but save their design should they wish to come back and purchase that model. One additional example of the device's utilization, from advertising videos, shows a restaurant table on which wine menus can pop up on screen, showing a list of preferences by category and information about each wine, such as city of origin, producer, the winery, and a virtual map of the region. After having chosen an appropriate wine to accompany their dinner, the customer can then pay the bill by simply placing a credit card on the surface, where the table recognizes the card and charges the bill accordingly.

In the same fashion, the system called Tap Tracker shows customers interacting with the glass windows of stores (Figure 3). Four contact microphones were glued to the inside corners of a glass window, and users could choose to watch brief video clips or engage in a game that was a ploy to get people into the store.
Results showed that during the experiment the store's sales increased, suggesting that in this instance the use of an interactive surface was an effective means of bringing in potential customers.

Fig. 3. Tap Tracker
3 Technology

Alex Pentland wrote in Scientific American: "The problem, in my opinion, is that our current computers are both deaf and blind: they experience the world only by way of a keyboard and a mouse... I believe computers must be able to see and hear what we do before they can prove truly helpful." Today, all kinds of sensing technologies are used as the eyes and ears of interactive surfaces. The signal collected by the sensors is processed by software to identify properties of the action (location, direction, speed, and others). Based on the identified properties, corresponding reactions, which can be visual, acoustic, or even olfactory, are executed by the computer.

Sensors are the first step in an interactive surface and, as such, they play a key role in the whole process. Different sensors, varying in function, resolution, installation conditions, etc., are used to meet the requirements of the different application areas illustrated above. Since it is impossible to include all technologies in this report, three different sensor types (capacitive touch screens, optical imaging touch screens, and frustrated total internal reflection touch screens) are introduced in detail, while the others are only overviewed briefly.

3.1 Capacitive touch screens

The idea of using capacitive sensing in the field of human-computer interfaces has a long history. Generally it works with an array of wires behind the screen interacting with fingers touching the screen. The interaction between the different wires (laminated in an X- and Y-axis manner) and the tip of the finger is measured and translated into an (x, y) coordinate. However, the interaction is sensitive to temperature and humidity. Smart-Skin, a capacitive sensing architecture developed by Jun Rekimoto, is an example.
It is constructed by laying a mesh of vertical transmitter and horizontal receiver electrodes on the surface (Figure 4). When one of the transmitters is excited by a wave signal (typically several hundred kilohertz), the receiver picks up this signal because each crossing point (a transmitter/receiver pair) acts as a (very weak) capacitor. The magnitude of the received signal is proportional to the frequency and voltage of the transmitted signal, as well as to the capacitance between the two electrodes. When a conductive and grounded object approaches a crossing point, it couples to the electrodes and drains the wave signal. As a result, the received signal amplitude becomes weak. By measuring this effect, it is possible to detect the proximity of a conductive object, such as a human hand.
Fig. 4. The Smart-Skin sensor configuration: a mesh-shaped sensor grid is used to determine the hand's position and shape.

When the user's hand is placed within 5-10 cm of the table, the system recognizes the effect of the capacitance change. A potential field is created when the hand is in proximity to the table surface. To accurately determine the hand position, which is the peak of the potential field, a bicubic interpolation method is used to analyze the sensed data (Figure 5). By using this interpolation, the position of the hand can be determined by finding the peak on the interpolated curve.

In the future, Smart-Skin may not be limited to a tabletop; instead, a non-flat or even flexible surface, like interactive paper, could act as a vehicle for such a surface. Smart-Skin could also be combined with tactile feedback: if it could make the surface vibrate by using a transducer or a piezo actuator, the user could "feel" as if he/she were manipulating a real object.
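As an illustration of this peak-finding step, the sketch below refines the strongest mesh cell with a quadratic (parabolic) fit rather than the full bicubic interpolation the paper describes; the mesh size, grid values, and hand position are invented for the example, so this is an approximation of the idea, not Smart-Skin's actual code.

```python
import numpy as np

def hand_position(grid):
    """Estimate the sub-cell (x, y) peak of a sensed capacitance grid.

    Smart-Skin interpolates the mesh readings to find the peak of the
    potential field; here that is approximated by fitting a parabola
    through the strongest cell and its neighbours along each axis.
    """
    iy, ix = np.unravel_index(np.argmax(grid), grid.shape)

    def vertex(a, b, c):
        # x-coordinate of the vertex of the parabola through
        # (-1, a), (0, b), (1, c).
        denom = a - 2 * b + c
        return 0.0 if denom == 0 else 0.5 * (a - c) / denom

    dx = vertex(grid[iy, ix - 1], grid[iy, ix], grid[iy, ix + 1]) if 0 < ix < grid.shape[1] - 1 else 0.0
    dy = vertex(grid[iy - 1, ix], grid[iy, ix], grid[iy + 1, ix]) if 0 < iy < grid.shape[0] - 1 else 0.0
    return ix + dx, iy + dy

# Synthetic reading: a hand hovering near mesh cell (x=5.2, y=3.4).
ys, xs = np.mgrid[0:8, 0:10]
grid = np.exp(-((xs - 5.2) ** 2 + (ys - 3.4) ** 2) / 4.0)
x, y = hand_position(grid)  # close to (5.2, 3.4)
```

The parabolic fit recovers position to a fraction of a mesh cell, which is why a coarse electrode grid can still localize a hand quite precisely.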
Fig. 5. Gestures and corresponding sensor values (top: a hand on the sensor mesh, middle: raw input values, bottom: after bicubic interpolation)

3.2 Optical imaging touch screens

Optical imaging is a relatively modern development in touch screen technology. Two or more image sensors are placed around the edges (usually the corners) of the screen. Infrared backlights are placed in the cameras' field of view on the other sides of the screen. A touch shows up as a shadow, and each pair of cameras can then triangulate to locate the touch. This technology is growing in popularity due to its scalability, versatility, and affordability, especially for larger units.

An example is the Digital Vision Touch (DViT) technology developed by SMART Technologies Inc. Four cameras, one in each corner, constantly scan the surface to detect a target object (Figure 6). When a camera detects a target, its processor identifies the affected pixels and calculates the angle at which the target occurs. Each camera detects the target and calculates its own angle. Since the distance between two cameras and their viewing angles in relation to each other are known, the technology can then triangulate the location of the contact point. Mathematically, only two cameras are needed to calculate a contact point, but to ensure a robust system, DViT technology uses four cameras and has every camera pair report its triangulated result. The location of the contact point is sent to the user's computer.
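The triangulation itself is simple plane geometry. The sketch below assumes two cameras on the top edge of the screen, each reporting the angle between that edge and its line of sight to the target; the four-camera redundancy and pixel-to-angle calibration of the real DViT hardware are omitted.

```python
import math

def triangulate(width, angle_a, angle_b):
    """Locate a contact point from two edge-mounted cameras.

    Camera A sits at (0, 0), camera B at (width, 0), and each reports
    the angle (radians) between the top edge and the target. The two
    sight lines y = x*tan(a) and y = (width - x)*tan(b) intersect at
    the touch point.
    """
    ta, tb = math.tan(angle_a), math.tan(angle_b)
    x = width * tb / (ta + tb)
    return x, x * ta

# A touch at (300, 200) on a 600-unit-wide screen:
x, y = triangulate(600.0, math.atan2(200, 300), math.atan2(200, 300))
```

With four cameras, each of the six camera pairs reports such an intersection, and disagreement between the pairs flags occlusion or noise.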
Fig. 6. Camera identification of a contact point

In addition, it is possible to detect both when an object contacts the surface and when it hovers above it. The cameras scan a layer of stacked pixels; if an object breaks into this pixel area, it registers with the cameras regardless of whether it has touched the surface. With specially designed cameras that have a high frame rate (100 FPS) and support fast image processing techniques, touch accuracy is high enough for mouse operation and writing. The technology is also scalable to almost any practical size because it resides in the corners, not the surface area. It can also support multi-touch after some improvement, and in the future it may even be possible to distinguish between different pointers.

3.3 Frustrated Total Internal Reflection touch screens

When light encounters an interface to a medium with a lower index of refraction (e.g., glass to air), the light is refracted to an extent that depends on its angle of incidence; beyond a certain critical angle, the light undergoes total internal reflection (TIR). Fiber optics, light pipes, and other optical waveguides rely on this phenomenon to transport light efficiently with very little loss. However, another material at the interface can frustrate this total internal reflection, causing light to escape the waveguide there instead. This phenomenon is well known as Frustrated Total Internal Reflection (FTIR).

The first application to touch input appears to have been disclosed in 1970, in a binary device that detects the attenuation of light through a platen waveguide caused by a finger in contact. This technique was further applied to fingerprint sensing and robotics by Mallos and Kasday, respectively. Based on the Mallos/Kasday design, Jefferson Y. Han proposed a scaled-up FTIR fingerprint sensor, or FTIR robot tactile sensor, for use as an interactive surface.
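The critical angle mentioned above follows directly from Snell's law: sin(theta_c) = n_outside / n_inside. A quick check for the kind of glass- or acrylic-to-air interface used in FTIR screens (the refractive indices below are typical textbook values, not measurements of any particular panel):

```python
import math

def critical_angle_deg(n_inside, n_outside):
    """Critical angle for total internal reflection, in degrees.

    Valid when n_inside > n_outside; rays hitting the boundary at a
    steeper angle (measured from the surface normal) are totally
    internally reflected until something, e.g. a fingertip pressed
    against the surface, frustrates the reflection.
    """
    return math.degrees(math.asin(n_outside / n_inside))

theta_c = critical_angle_deg(1.5, 1.0)  # about 41.8 degrees
```

Light injected into the edge of the sheet at shallower angles than this stays trapped, which is why a simple row of edge-mounted LEDs suffices to flood an FTIR panel with trapped light.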
This technique provides full imaging touch information without occlusion or ambiguity issues. The touch sense is zero-force and true: it accurately discriminates touch from a very slight hover. It samples at both high temporal and spatial resolutions. Pressure is not
sensed, though this is largely mitigated by the precision with which the contact area of a depressed finger can be determined. It is inexpensive to construct, and trivially scalable to much larger surfaces.

A drawback of the approach is that, being camera-based, it requires a significant amount of space behind the interaction surface, though it is primarily intended for application scenarios where rear-projection would have been employed anyway (e.g., interactive walls and tables). Also, as an optical system, it remains susceptible to harsh lighting environments.

As a more robust alternative to Frustrated Total Internal Reflection, Zachary Scott Carpenter developed the Diffused Laser Imaging (DLI) touch screen. DLI uses the same software and imaging system as FTIR, but instead of the light coming from inside the screen, it comes from diffused laser light being conducted through human flesh and onto the diffuser of the screen. A two-dimensional plane of laser light is emitted directly over a rigid screen (diffused glass). When a human finger breaks this plane of laser light, the light is diffused by the finger and thus passes from the intersection point through the finger and down to the diffuser of the rigid screen. This system works just as well as FTIR but is not vulnerable to smudges or dirt on the screen. Furthermore, most multi-touch functions can be achieved with only two laser line sources, versus the many light sources required to illuminate the interior of an FTIR screen. DLI can also be used on almost any kind of flat monitor by placing one or two position-sensing cameras in front of the display; in this case the cameras pick up the diffused fingertips as blobs of light instead of dots viewed from the back side.

3.4 Other sensing technologies

Resistive touch screens are composed of two flexible sheets coated with a resistive material and separated by an air gap or microdots.
When contact is made with the surface of the touch screen, the two sheets are pressed together, registering the precise location of the touch. Resistive touch screens typically have a relatively high resolution, providing accurate touch control, and allow one to use a finger, a stylus, or any other pointing device on the surface of the board. However, by their nature, they do not support the full function of a mouse, and interface effects such as hover/pop-ups require additional interactions or software helper functions.

Electromagnetic touch screens feature an array of wires embedded behind the board surface that interacts with a coil in the stylus tip to determine the (x, y) coordinate of the stylus. In other words, there are magnetic sensors in the board that react and send a message back to the computer when they are activated by a magnetic pen. Styli are either active (requiring a battery or a wire back to the whiteboard) or passive (altering electrical signals produced by the board, but containing no power source). These screens typically have a very high resolution and usually support the full range of mouse functions; moreover, they may accommodate pressure sensing and multiple users touching and pointing on the surface.
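The resistive read-out described earlier in this section reduces to reading a voltage divider on each axis: the pressed-together sheets tap the voltage across the driven layer, so the digitized value is roughly linear in position. The sketch below is an idealized conversion; the ADC resolution, panel dimensions, and absence of calibration offsets are simplifying assumptions.

```python
def touch_position(adc_x, adc_y, adc_max=1023, width=320, height=240):
    """Convert raw 4-wire resistive readings to screen coordinates.

    adc_x/adc_y are the voltages sampled on the undriven sheet while
    the other sheet is driven along X and then Y. A real driver also
    debounces, averages several samples, and applies a per-unit
    calibration matrix.
    """
    x = adc_x / adc_max * width
    y = adc_y / adc_max * height
    return x, y

x, y = touch_position(512, 256)  # roughly mid-screen horizontally
```

The same two-step drive-then-sample cycle is why classic resistive panels report only a single touch point: two simultaneous fingers collapse into one averaged voltage per axis.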
Surface Acoustic Wave (SAW) touch screens employ transducers on the borders of a glass surface to vibrate the glass and produce acoustic waves that ripple over the glass surface. When contact is made on the glass surface, the waves reflect back, and the contact position is determined from the signature of the reflected waves. However, SAW touch screens suffer several drawbacks: 1) they can only provide medium resolution; 2) they exhibit noticeable parallax due to the thickness of the vibrating glass, which is placed over the surface of the video or computer display; 3) hovering is not supported; 4) they cannot scale beyond a few feet diagonally.

Strain gauge touch screens use one or more, preferably symmetrically positioned, strain gauge sensors to monitor the relative distribution of force. The sensors are positioned such that the bending strain on the touch screen, engendered by a touch, is detected and accurately measured by an electronic controller connected to the strain gauge(s). The electronic controller, or associated hardware, is programmed to relate the relative bending force to a unique position on the screen; it is a charge-balancing and multiplying analog-to-digital converter, which provides accurate position determination even with very low forces and very minor differentiation in position-related forces.

An infrared touch screen panel employs one of two very different methods. One method uses thermally induced changes of the surface resistance; this method is sometimes slow and requires warm hands. The other method is an array of vertical and horizontal IR sensors that detect the interruption of a modulated light beam near the surface of the screen. IR touch screens have the most durable surfaces and are used in many military applications that require a touch panel display.
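For the strain-gauge approach described above, if the panel rests on four corner sensors and the touch load splits bilinearly between them, the touch point is exactly the force-weighted centroid of the corner coordinates. The panel size, corner layout, and force readings below are invented for illustration:

```python
def touch_from_strain(forces, corners):
    """Recover the touch point from corner force readings.

    Assumes a rigid panel on four corner strain gauges, so a load at
    (x, y) distributes bilinearly between the supports; the
    force-weighted centroid of the corner coordinates then equals the
    touch position. Real controllers additionally calibrate out the
    panel's own weight and sensor drift.
    """
    total = sum(forces)
    x = sum(f * cx for f, (cx, _) in zip(forces, corners)) / total
    y = sum(f * cy for f, (_, cy) in zip(forces, corners)) / total
    return x, y

# 400 x 300 panel; readings consistent with a touch at (100, 225).
corners = [(0, 0), (400, 0), (400, 300), (0, 300)]
x, y = touch_from_strain([0.1875, 0.0625, 0.1875, 0.5625], corners)
```

Because only the ratio of the corner forces matters, the method works at very low touch forces, which matches the low-force sensitivity claimed for this technology.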
An Acoustic Pulse Recognition (APR) touch screen comprises a glass display overlay, or other rigid substrate, with four piezoelectric transducers mounted on the back surface. The transducers are mounted on two diagonally opposite corners, outside the visible area, and connected via a flex cable to a controller card. The impact when the screen is touched, or the friction caused while dragging a finger or stylus across the glass, creates an acoustic wave. The wave radiates away from the touch point, making its way to the transducers, which produce electrical signals proportional to the acoustic waves. These signals are amplified in the controller card and then converted into a digital stream of data. The touch location is determined by comparing the data to a profile. APR is designed to reject ambient and extraneous sounds, as these do not match a stored sound profile. The touch screen itself is pure glass, giving it the optics and durability of the glass from which it is made. It works with scratches and dust on the screen, and accuracy is very good.

Dispersive signal touch screens, introduced in 2002, use sensors to detect the mechanical energy that occurs in the glass due to a touch. Complex algorithms then interpret this information and provide the actual location of the touch. The technology claims to be unaffected by dust and other outside elements, including scratches. Since there is no need for additional elements on screen, it also claims to provide excellent optical clarity. Also, since mechanical vibrations are used to detect a touch event, any object can be used to generate these events, including fingers and styli. A downside is that
after the initial touch the system cannot detect a motionless finger. More information can be found in the cited literature.

3.5 How to choose an appropriate interactive surface?

To compare all the sensing technologies and find the most suitable one for an application, many technical factors besides cost need to be considered. Scale is the first limitation: capacitive technology cannot be used for interactive floors and walls, while some optical technologies cannot be used on mobile phones. Secondly, the application requirements should be identified. Is multi-touch necessary? Is the full function of a mouse required? Is the ability to identify objects required? Is input with a finger or a stylus? Last, but not least, important factors include, for example, resistance to scratches and harsh working conditions, reaction speed, the existence of parallax, and whether an external power supply is needed. These factors, along with some others, are listed in Table 1; a full comparison of these technologies would be a very difficult task, especially considering day-to-day technology innovation. However, a complete and authoritative comparison would be helpful for potential users of interactive surfaces.

Table 1.
Factors in Choosing Proper Sensing Technologies [adapted from 25]

Performance: speed, sensitivity, resolution, accuracy, calibration stability, drag, z-axis, double touch, parallax (kind of)
Input flexibility: glove, finger, fingernail, credit card, pen, signature capture, handwriting recognition
Optics: light transmission, reflection (lack thereof), clarity, color purity
Mechanical: small sizes (<10''), large sizes (>19''), curved CRTs, ease of integration, sealability, IP 65¹/NEMA 4¹
Electrical: controller chip available, low-power battery option, operation with poor ground, ESD¹, EMI¹, RFI¹
Environmental: temperature, humidity, shock/vibration, altitude, in-vehicle, chemical resistance, scratch resistance, breakage resistance, safe break pattern, dust/dirt, rain/liquids, snow/ice, metal, ambient/UV light, fly on screen, non-glass surface possible, works through other materials, durability/wear, EMI¹/RFI¹, surroundings

¹ These are not directly referred to in the text; see the source for more information.
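Narrowing the field can begin as a mechanical filter over a capability table, before the finer factors in Table 1 are weighed. The capability values below are rough simplifications distilled from the discussion above, not vendor specifications:

```python
# Illustrative capability summary; real products vary widely.
TECHNOLOGIES = {
    "capacitive":    {"multi_touch": True,  "any_stylus": False, "wall_scale": False},
    "resistive":     {"multi_touch": False, "any_stylus": True,  "wall_scale": False},
    "optical":       {"multi_touch": True,  "any_stylus": True,  "wall_scale": True},
    "acoustic_wave": {"multi_touch": False, "any_stylus": True,  "wall_scale": False},
}

def candidates(**requirements):
    """Return the technologies whose listed capabilities meet every requirement."""
    return sorted(
        name for name, caps in TECHNOLOGIES.items()
        if all(caps.get(key) == value for key, value in requirements.items())
    )

shortlist = candidates(multi_touch=True, any_stylus=True)  # ['optical']
```

Such a hard filter only produces a shortlist; the environmental and optical factors in Table 1 still have to be weighed against cost for the surviving candidates.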
4 Dimensions of Interactive Surfaces

Interactive surfaces can come in many shapes, sizes, and complexities. While this may seem self-evident, it is important to consider, as it is this very characteristic that makes them such a valuable and viable technology; interactive surfaces are not constrained to just one form, which would limit their applicability. This section of the paper will explore some of the many manifestations that interactive surfaces can take.

4.1 Size

As the technology has advanced, a wide variety of sizes for interactive surfaces now exist. The smallest devices are mobile and hand-held. Popular examples include the iPod Touch and the iPhone, both introduced in the summer of 2007. These devices' dimensions come in at just 11.5 cm x 6.1 cm x 1.2 cm for the iPhone and 11 cm x 6.2 cm x 0.8 cm for the iPod Touch. Such small sizes make these gadgets extremely portable while maintaining sophisticated interaction capabilities. Other common small interactive surfaces include most GPS systems and PDAs, many of which have no keyboard or mouse and rely only on surface touch. In fact, these systems may be even smaller than the Apple products: GPS systems for bicycle navigation have only recently hit the market, and the Garmin model (Edge 705) comes in at only 5.1 cm x 10.9 cm x 2.5 cm, weighing slightly less than the iPod Touch.

Also to be included in this 'small' category are other common touch screens, such as those on bank machines or for buying train tickets. Although the screens in these systems are small, they are generally part of a much larger machine, which leaves this example straddling the line between smaller and larger interactive surfaces.

Moving up in scale are those surfaces that can be defined as being of 'medium' size, including all tabletop interactive surfaces.
There is, of course, some difference in dimension, but, for example, the Microsoft Surface, the first commercially available device of its kind, has a 30-inch screen while the entire unit is 56 cm x 53 cm x 107 cm. Obviously, a unit such as this is not particularly mobile, although since it is marketed as an alternative to an actual tabletop, one assumes that it is portable to some degree.

Since many of these medium-sized surfaces are created for the purpose of group interaction, it is important to consider how the dimensions of the device might affect its success at facilitating collaboration. One study explored how the extent of the display influences the speed with which a group was able to complete a certain activity, in this case the recreation of a poem based on words scattered about the surface. The research found that the different sizes of interactive surfaces used in the experiment (87 cm and 107 cm diagonally) in fact had no effect on the success of the given activity, although the authors did note that the size of the group seemed to influence the group's success. In the comparison of these two extents, size is apparently not a determining factor of success, although it seems likely that, looking at the global range of dimensions
of interactive surfaces, this medium size in general is probably the most effective for this type of group collaboration.

Moving up to the largest devices, it is possible to find surfaces that often require entire body movements for interaction. In this category live interactive floors, walls, and entire spaces. All of these can generally be as large as the designer wants them to be. Floors react to footsteps across them, while walls generally act as a large tabletop surface mounted vertically; spaces, such as the SensitiveSpaceSystem designed by iO, combine elements of both floors and walls to make entire rooms or hallways that interact with the people in them.

The size of all of these units can, to some degree, be measured either for the display or for the unit as a whole. For example, the actual surface of a tabletop display is quite thin while the entire machine is quite large, and in a ticket kiosk the screen is only a small part of the larger device. Perhaps the best example of this is projected interactive surfaces, where the technology is a projector and the actual size of the display depends on the surface being projected upon, be it the floor, a wall, or even a window.

4.2 Complexity

Beyond size, the complexity of an interactive surface can range widely. The 'complexity' of an interactive surface can be defined in many ways: according to the exact way humans can interact with it (single-touch versus multi-touch), by the type of application, and by the operating system employed in the technology.

At the simplest end of traditional interactive surfaces are single-touch interactive surfaces; included in this category are most information kiosks, bank machines, GPS devices, and most current PDAs. These surfaces function primarily with strain gauge touch screens and react only to one touch.
Many of these devices, such as some information kiosks and bank machines, are set up to look like their predecessors, where the user hits actual buttons rather than virtual ones. Some single-touch platforms combine this button model with that of typical computer interfaces, as in many museum displays where a touch often drags a pointer around the screen. While these interfaces are very simple, in the context of their purpose they are perfectly adequate and appropriate.

Multi-touch systems, where the user can use more than one point of contact to interact with a surface, have been slowly moving into more mainstream technology, particularly with the release of the iPhone and iPod Touch last summer, as well as with the MacBook models that come equipped with a multi-touch trackpad. Multi-touch functionality allows users to, for example, resize images by dragging out the corners, or scroll horizontally or vertically on a laptop touch pad by using two fingers; it also encourages collaboration, allowing many users to work in the same interface simultaneously. Besides these computing interface examples of multi-touch, there are other surfaces that respond to multiple touch points, such as interactive floors and bars with primarily aesthetic displays that respond somehow to pressure from human gestures. Clearly, relative to
single-touch surfaces, multi-touch surfaces are more interactive, and the possibilities for applications of such a technology are limitless.

Complexity can also be considered with regard to the application of the interactive surface. As mentioned in Section 2, there is an enormous range of applications for interactive surfaces. Surfaces such as dance floors and bars are interesting, but serve a primarily aesthetic end: while the technology behind the surface may still be quite sophisticated, the difficulty in using them is minimal, and the actual benefit from using the surface (other than entertainment) is equally minimal. While this type of application is fairly basic, as presented in Section 2, many interactive surfaces provide excellent, highly interactive platforms for computing, collaboration, and showcasing products, among others. Therefore, application too can be a further measure of complexity for interactive surfaces.

Lastly, in reference to complexity, one can look at the degree of involvedness of the operating system behind the interactive surface. This measure parallels the degree of complexity as determined by application. On one end are the previously mentioned dance floors and bars, where the user simply touches the surface and something of purely aesthetic interest happens at that point. On the other end of the spectrum are actual computer interfaces built as an operating system of sorts, often for collaboration on group projects, such as tabletop computing systems, and now increasingly handheld devices, such as Apple's iPhone. These systems not only react to multi-touch, but also serve a much more complex purpose than that of the previously mentioned floor or bar. These platforms allow users to do almost everything they can do on their regular computer, but in a way that eliminates the need for a mouse or touchpad.
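The resize gesture described above reduces to simple geometry: the scale factor of a two-finger 'pinch' is the ratio of the current distance between the touch points to their initial distance. The following minimal sketch illustrates this; the function names are ours and are not taken from any particular multi-touch API.

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_a, start_b, now_a, now_b):
    """Scale factor implied by two touch points moving apart or together.

    A result above 1.0 means the fingers spread (enlarge the image);
    below 1.0 means they pinched together (shrink it).
    """
    d0 = distance(start_a, start_b)
    d1 = distance(now_a, now_b)
    return d1 / d0 if d0 > 0 else 1.0

# Two fingers start 100 px apart and end 150 px apart: the image grows 1.5x.
print(pinch_scale((0, 0), (100, 0), (0, 0), (150, 0)))  # 1.5
```

A real multi-touch framework would additionally track which contact belongs to which finger across frames and combine the scale with rotation and translation, but the core of the gesture is this single ratio.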
The sophistication of the applications of interactive surfaces can vary widely, from purely entertainment-based functionality to complete operating systems.

5 Conclusion

This paper emphasizes the diversity inherent in all aspects of interactive surfaces, be it in their application, the technology that makes them work, or their dimensions. Applications run the gamut from purely aesthetic to sophisticated collaborative tools. Furthermore, a seemingly endless and ever-growing array of sensing technologies supports these applications. What is more, the shape, size, and complexity of these surfaces can vary greatly; some will fit in your pocket and can organize your life, while others decorate huge spaces providing aesthetic appeal. The versatility of interactive surfaces is what makes them so appealing to the general public and ensures that they will only become more prevalent in our lives.

While the future prevalence of interactive surfaces is all but ensured, their precise future is wide open. The success of technologies such as the iPhone would seem to suggest that the general population is willing to embrace interactive surface technology. Since technology often evolves less to fill a void than to solve a problem people never
even knew they had in the first place, it is hard to determine exactly what interactive surfaces may look like or be used for in the future. What is certain is that there are already some clear benefits from this technology. Firstly, and most basically, this technology has a profound crowd-stopping effect; people are fascinated by it, and even when it serves no practical purpose (such as with the interactive surface bar), it still manages to provoke great interest. More pragmatically, this technology facilitates communication in many forms, be it between customers and vendors or between colleagues, as many people can easily collaborate together on one surface.

Furthermore, many argue that this type of interface is more natural and therefore more intuitive for users. Perhaps interactive surfaces will democratize computing the way that virtual globes have democratized Geographic Information Systems. It remains to be seen exactly to what degree interactive surfaces will be adopted by the general public, and this will likely not be known for many years yet, as many of the surfaces described here are either only prototypes or not yet available to the general public. It is apparent, however, that the potential is there for interactive surfaces to revolutionize not only the way we perform our everyday computing tasks, but also the way we order meals at restaurants, shop for goods, and work with our peers. Interactive surfaces are a technology we are best served to fully embrace, as they will likely develop into a major presence in our everyday lives.

References

1. Ubiquitous Computing, http://www.ubiq.com/hypertext/weiser/UbiHome.html
2. Ishii, H., Ullmer, B.: Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In: CHI '97: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 234-241. ACM Press (1997)
3. Mindstorm Interactive Surface Solution, http://www.mindstorm.eu.com/
4. Valli, A.: Natural Interaction White Paper. http://www.naturalinteraction.org/images/whitepaper.pdf (2007)
5. Wagner, D., Pintaric, T., Ledermann, F., Schmalstieg, D.: Towards Massively Multi-user Augmented Reality on Handheld Devices. In: Pervasive Computing: Third International Conference, PERVASIVE 2005, Lecture Notes in Computer Science (LNCS). Springer-Verlag (2005)
6. Petersen, M. G., Krogh, P. G., Ludvigsen, M., Lykke-Olesen, A.: Floor Interaction: HCI Reaching New Ground. In: CHI '05 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, USA (2005)
7. Dietz, P., Leigh, D.: DiamondTouch: A Multi-User Touch Technology. Mitsubishi Electric Research Laboratories, http://www.merl.com/papers/docs/TR2003-125.pdf (2003)
8. Izadi, S., Brignull, H., Rodden, T., Rogers, Y., Underwood, M.: Dynamo: A Public Interactive Surface Supporting the Cooperative Sharing and Exchange of Media. In: UIST '03, Vancouver, BC, Canada (2003)
9. Perceptive Pixel, http://www.perceptivepixel.com/
10. Paradiso, J. A.: Tracking Contact and Free Gesture across Large Interactive Surfaces. Communications of the ACM, vol. 46, pp. 62-69 (2003)
11. Stock, O., Zancanaro, M., Busetta, P., Callaway, C., Krüger, A., Kruppa, M., Kuflik, T., Not, E., Rocchi, C.: Adaptive, Intelligent Presentation of Information for the Museum Visitor in PEACH. User Modeling and User-Adapted Interaction, vol. 17, pp. 257-304 (2007)
12. Microsoft Surface, http://www.microsoft.com/surface/index.html
13. Johnson, R., Fryberger, D.: Touch Actuable Data Input Panel Assembly. U.S. Patent 3,673,327 (Jun. 1972)
14. Mallos, J.: Touch Position Sensitive Surface. U.S. Patent 4,346,376 (Aug. 1982)
15. Kasday, L.: Touch Position Sensitive Surface. U.S. Patent 4,484,179 (Nov. 1984)
16. Han, J. Y.: Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection (2005)
17. Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, http://delivery.acm.org/10.1145/1100000/1095054/p115-han.pdf?key1=1095054&key2=1556913121&coll=GUIDE&dl=GUIDE&CFID=72511020&CFTOKEN=44570496
18. http://en.wikipedia.org/wiki/Analog_resistive_touchscreen. Retrieved on June 10, 2008
19. http://en.wikipedia.org/wiki/Interactive_whiteboard#Interactive_Whiteboard_Technologies. Retrieved on June 10, 2008
20. Passive Touch System and Method of Detecting User Input. http://www.freshpatents.com/Passive-touch-system-and-method-of-detecting-user-input-dt20070405ptan20070075982.php?type=description. Retrieved on June 10, 2008
21. US Patent 5708460 - Touch Screen, http://www.patentstorm.us/patents/5708460/fulltext.html
22. http://en.wikipedia.org/wiki/Touchscreen#Surface_acoustic_wave. Retrieved on June 10, 2008
23. http://media.elotouch.com/pdfs/marcom/apr_wp.pdf
24. http://multimedia.mmm.com/mws/mediawebserver.dyn?6666660Zjcf6lVs6EVs66SJAbCOrrrrQ-. Retrieved on June 10, 2008
25. http://www.elotouch.com/Technologies/compare_all.asp#notes
26. Apple Computers, http://www.apple.com
27. Garmin GPS, http://buy.garmin.com
28. Ryall, K., Forlines, C., Shen, C., Ringel Morris, M.: Exploring the Effects of Group Size and Table Size on Interactions with Tabletop Shared-Display Groupware. In: Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, Chicago, Illinois, USA (2004)
29. Butler, D.: Virtual Globes: The Web-Wide World. Nature, vol. 439, pp. 776-778 (2006)