IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 42, NO. 1, JANUARY 2012

Design and Realization of a Framework for Human–System Interaction in Smart Homes

Chao-Lin Wu, Member, IEEE, and Li-Chen Fu, Fellow, IEEE

Abstract—The current smart home is a ubiquitous computing environment consisting of multiple autonomous spaces, and its advantage is that a service interacting with home users can be set with different configurations in space, hardware, software, and quality. As well as being technologically smart, a smart home should never forget to retain the "home nature" when serving its users. In this paper, we first analyze the relationship among services, spaces, and users, and then propose a framework as well as a corresponding algorithm to model their interaction relationship. We then realize the human–system interaction framework to implement a smart home system and develop "pervasive applications" to demonstrate how to utilize our framework to fulfill the human-centric interaction requirement of a smart home. Finally, our preliminary evaluations show that the proposed work can enhance the performance of human–system interaction in a smart home environment.

Index Terms—Human centric, human–computer interaction, multiagent systems, smart homes, user-centered design.

I. INTRODUCTION

AS COMPUTER technology advances, Mark Weiser's vision [1] of ubiquitous computing (UbiComp) is gradually becoming reality: pervasively existing computing-hardware (HW)-embedded devices and a growing quantity of intelligent software (SW) interact ever more with people to improve human life in our living environments. Nevertheless, UbiComp HW/SW should cooperate before interacting with people. This notion elevates human–computer interaction to human–system interaction (HSI) [2].

Within this trend, a smart home advances into a UbiComp-based environment consisting of distributed but cooperative autonomous spaces (see Fig. 5 for an example), each of which is a service area equipped with HW/SW to provide various services managed by agents. The greatest advantage of the UbiComp-based smart home is that a service can be performed with multiple configurations. The major challenge, however, is how to create the most appropriate configuration.¹

From the viewpoint of human-centric requirements, a home is generally considered to provide services that fulfill "comfort," "convenience," and "security,"² and this makes a smart home different from other UbiComp-based environments. In detail, we define "comfort" as the quality of services (QoS) and the way services are provided, "convenience" as the relationship between the user and the space where services are provided, and "security" as the issues of information security and privacy. In addition, all of the service interaction should adapt to context changes of the environment and the involved users.

Manuscript received May 20, 2009; revised August 24, 2010; accepted February 22, 2011. Date of publication July 12, 2011; date of current version December 16, 2011. This work was supported by the National Science Council of Taiwan under Grants NSC99-2218-E-002-002 and NSC99-2221-E-002-191. This paper was presented in part at the International Conference on Human System Interaction, Krakow, Poland, May 25–27, 2008. This paper was recommended by Associate Editor M. Dorneich.

C.-L. Wu was with the National Taiwan University, Taipei 106, Taiwan. He is now with the Institute of Information Science, Academia Sinica, Taipei 115, Taiwan.

L.-C. Fu is with the Department of Computer Science and Information Engineering and the Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan.

Color versions of one or more of the figures in this paper are available online. Digital Object Identifier 10.1109/TSMCA.2011.2159584
In this paper, we study the above issues to identify the basic key elements of the context model with regard to the interaction relationship among users, services, and spaces, and we propose a solution composed of a framework with an algorithm.

When designing the algorithm for a smart home system to interact with its users, it is important to achieve mixed initiative (m.-i.) [4], [5]. A service with a graphical user interface that lets a user directly manipulate its details is user-initiative (u.-i.), a service with multiple agents in the background handling its details and helping a user focus only on his/her goal is system-initiative (s.-i.), whereas a service capable of both is m.-i.³ An example of m.-i. for robots in smart homes can be found in [6]. The s.-i. mode helps users focus more on their ongoing activities than on interacting with the system, and u.-i. gives users the ability to guide the system when necessary (e.g., when the system makes mistakes in s.-i. mode), thus enhancing the interaction between systems and users; hence, how to perform the transition between s.-i. and u.-i. is also important [7]. Therefore, after identifying the elements of the HSI framework in smart homes, the main task of the corresponding algorithm is to achieve m.-i. by deciding the existence and method of the smart home's interaction with inhabitants, such that the system can behave proactively to show its intelligence, and users can also feel that they have the authority to control the interaction.

¹ The approach proposed in this paper is toward the notion of a smart home that aims to improve the lifestyle of living, and it has no relation to smart grids, where the aim is to reduce the waste of energy.

² In this paper, "security" means information security and privacy rather than physical security.

³ Take an HVAC service for example. An HVAC service with a GUI for a user to set temperature, operation hours, fan speed, etc., is u.-i., whereas an HVAC service with agents automatically handling the above details by inferring how many users there are, how long they will stay, etc., is s.-i.

1083-4427/$26.00 © 2011 IEEE
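The u.-i./s.-i. distinction in footnote 3 can be made concrete with a small sketch. This is our own illustration, not code from the paper: the class name, the toy inference rule, and the mode labels are all assumptions.

```python
class MixedInitiativeHVAC:
    """Sketch of an m.-i. service: an s.-i. inference path plus a u.-i. override."""

    def __init__(self):
        self.temperature = None   # target temperature in deg C
        self.mode = None          # which initiative produced the last decision

    def infer(self, occupants: int) -> None:
        # s.-i.: background agents infer the setting from context (a toy rule)
        self.temperature = 26 - min(occupants, 4)
        self.mode = "s.-i."

    def set_temperature(self, value: float) -> None:
        # u.-i.: the user directly manipulates the detail, e.g., via a GUI
        self.temperature = value
        self.mode = "u.-i."
```

A mixed-initiative service simply keeps both paths open: the system may act on its inference, and the user may override it at any time.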
The rest of this paper is organized as follows. Section II surveys related works, and Section III describes the overall system with an application scenario. Sections IV and V address the proposed framework and its corresponding algorithm, respectively. Section VI analyzes and discusses the design issues, and Section VII describes the realization of a smart home system according to the proposed framework. We apply our framework to develop pervasive applications with demonstration scenarios in Section VIII, and we show some preliminary evaluations in Section IX. Finally, Section X concludes this work and discusses future development.

II. RELATED WORKS

There are several famous living home projects, such as the Aware Home project [8] from Georgia Tech or the Place Lab [9] from House_n [10] of MIT. The Aware Home project mainly focuses on developing applications that support specific scenarios for users living in a home, and the Place Lab focuses on utilizing data collected from sensors embedded in homes or personal devices to investigate human behaviors. In contrast, our work identifies the basic key elements of an HSI framework in smart homes and thus is intended for system developers and application developers designing smart home systems as well as their services.

In the UbiComp environment, a user can perform an interaction in multiple spaces, so [3] developed a system that can adapt its QoS to environmental changes and user preference. Nevertheless, it considers neither the issue of multiple spaces nor that of multiple users: every space may have different HW/SW/resources (e.g., cameras may be installed in a study room but not in a bedroom), and multiple users may perform different interactions in the same space simultaneously (e.g., two users are in the living room, and one plays video games while the other listens to music). In addition, the issues of different privileges of a user in different spaces (e.g., to avoid danger, minor children should not have privileges to use equipment in the kitchen) or different privileges for different users in the same space (e.g., parents should not touch equipment in their teenage children's room without the children's permission) have not been addressed, either. Besides that, the privacy issue and the requirements on interaction for services (e.g., Skype is an application requiring user attention and possibly user privacy) are both ignored.

To identify the interaction relationship between services and users, the authors in [11] proposed a contextual notification model by which the system adapts its interaction to "Message Priority," "Usage Level," and "Conversation." It considers the relations between user and system; nevertheless, it is for a single user only, and the issues of privacy, privilege, and multiple spaces are also ignored. To further consider the relations between services and user, the authors in [12] proposed a framework to categorize notification systems according to their design objectives in "interruption," "reaction," and "comprehension," whereas the authors in [13] extended the model from [14] to categorize the methods of interaction between user and system into two groups, namely, "foreground" and "background," depending on whether the user's attention is required. In this paper, we have adopted the framework of [12] while taking a stand similar to that of [13] to represent the requirements on interaction for services as well as the relations among services, spaces, and users.

As for the privacy issue, [15] claims that a system should provide users three modalities of control—solitude, confidentiality, and autonomy—to complete the construction of an integrated privacy. Such a viewpoint is borrowed here to address interaction privacy from three perspectives—physical entity, information access, and autonomy of interaction. Details are shown in our previous work [16]. There are also several important works about privacy in ubiquitous environments. As UbiComp makes it easy to capture and store information about people and their activities, [17] describes a framework for designers to check whether their UbiComp applications provide sufficient mechanisms for users to maintain their privacy. Reference [18] develops six principles for guiding the design of privacy-aware UbiComp applications based on a set of fair information practices common in privacy legislation, thus protecting user privacy. Reference [19] proposes three design principles for the architecture of an access control mechanism, so users can control their privacy in UbiComp environments. Reference [20] presents an infrastructure for facilitating the development of privacy-sensitive UbiComp applications to help end-users manage their privacy.
Nevertheless, the above four related works mainly deal with how to prevent personal information from being sensed or accessed illegally or unwillingly by remote users in an application, whereas our work focuses on how to prevent private interactions from being accessed accidentally by other, unsuitable home users.

III. APPLICATION SCENARIO AND SYSTEM OVERVIEW

In this section, an application scenario is presented first, with the important design issues described in italics. After that, the types of services supported by the proposed framework are described, and then the system overview is given.

A. Application Scenario

Alice has a family of three, including her husband, Bob, her son, Carl, and herself. They live in a smart home consisting of multiple autonomous spaces in which they interact with services provided by the smart home. Some days after work, Alice picks up Carl from his school, and they come home together. When they are home, HVAC services are provided. In addition, the smart home ascertains their identities and traces their locations at home to provide services. Therefore, as Alice goes to her bedroom through the home, the smart home turns the lights of each area on/off for her automatically. After that, Alice plays her favorite music in her bedroom to relax.

The aforementioned lighting service is also provided to Carl as he goes to the living room, where he uses the main screen to start the interactive tutor service for his homework. After a while, Carl's friend calls him via Skype. Rather than ringing in every space, thereby disturbing Alice, the smart home directs this telecommunication to the space Carl is focused on, which is the main screen of the living room. After talking, Carl decides to take a break, so, instead of continuing to practice
with the interactive tutor, he uses another nearby space, the PC in the corner of the living room, to surf the web. Later, when Carl's other friend calls him, rather than directing the call to the previous space, which is the main screen, the smart home directs this telecommunication to the space where Carl is currently focused, which is the PC in the corner.

After some rest, Alice enables the functionality for music to follow her. So, besides turning on lights, the smart home also plays Alice's favorite music for her wherever she goes. After Alice goes to the kitchen with her music, she starts the recipe assistant service for cooking. During her cooking, the smart home also provides Alice various information according to her interests, e.g., supermarket sales beginning tomorrow. In the meantime, her husband, Bob, dials her cell phone but fails to reach her because Alice is in the kitchen and cannot hear her cell phone ringing in her bedroom. So, Bob utilizes the smart home to send a message about arriving home late, and the smart home successfully notifies Alice by displaying this message in the space where she is currently performing interactions, which is the kitchen screen providing the recipe assistant.

After receiving Bob's message, Alice decides to slow down the pace of cooking, so she turns down the heat of the stove so the food will be ready later, goes to the restroom, and then goes to the living room to relax. Alice's favorite music follows her to the restroom but not to the main screen of the living room, where Carl has resumed the interactive tutor after his rest. As the PC in the corner of the living room is free now, Alice goes there to enjoy her music and read magazines. During this period of time, even when some information of Alice's interest arrives, the smart home does not interfere with Alice, since she does not show an intention to interact with the smart home.
When the food in the kitchen is ready and the stove needs to be turned off, however, the smart home utilizes all of the spaces Alice may notice to prompt her to accept this urgent information immediately; these spaces include both the PC in the corner and the main screen of the living room (although this may interfere with Carl). Alice logs in to interact with the smart home to receive this urgent information as well as other normal information, then returns to the kitchen with her music following her.

As Alice returns to the kitchen according to the notification, the smart home prompts her to resume her unfinished interaction, which is the recipe assistant. Alice accepts it and uses the recipe assistant to check the current progress as well as the remaining steps. The smart home continues to provide Alice information about her interests again; nevertheless, as Alice is already familiar with the remaining steps and is busy finishing her cooking, she no longer interacts with the recipe assistant, so the smart home stops the immediate and frequent provision of information, except for urgent messages, in order to prevent interference with Alice's cooking. Later, the smart home notifies Alice of an important event: her favorite TV show is about to start. So Alice interacts with the smart home again to have the TV show played in the kitchen.

After finishing the cooking, Alice goes to the living room. At this moment, Carl has finished his practice with the interactive tutor and is using the main screen to play video games. With the agreement of Carl, Alice interacts with the smart home to use the main screen to continue watching her TV show with Carl while waiting for Bob's arrival, allowing them to have dinner together.

Fig. 1. Interaction flowchart of a smart home.

After dinner, the three people enjoy time in the living room. Meanwhile, Bob's favorite TV series will begin soon, but the smart home finds that Carl is too young to watch this TV series and he is still in the living room.
Therefore, the smart home notifies Bob of this information and suggests several other places in the home to enjoy this TV series. Although Carl's room has the capability to play this TV series, the suggested places do not include Carl's room, since this family wishes to respect each member's privacy. Bob chooses to watch his TV series in his own bedroom. After watching the TV series, Bob receives a call from his colleague via Skype to discuss tomorrow's work meeting. The smart home prompts Bob with the option of manual control of this interaction since this call is important, and Bob accepts this suggestion. Bob later finds that they need video communication for further discussion, but there is no camera in his bedroom, so Bob asks the smart home to suggest places in the home suitable for this interaction. Since the study room, which has cameras, is currently being used by Alice, the smart home excludes it from the suggestions to avoid interfering with her, and instead suggests the living room, which is another space embedded with cameras that is currently unoccupied. So, Bob goes to the living room and uses the camera there for video communication to continue the discussion with his colleague.

B. Overview of Services

According to the above scenario, services in a smart home can be classified into three types, which can be matched to the three arrow-lines framed by the red rectangle in Fig. 1:

1) Unidirectional services changing the environment: lighting service, HVAC service, etc.

2) Unidirectional services involving users: music/video playing, information providing, event notification, web surfing, etc.

3) Bidirectional services requiring user feedback: interactive tutor, Skype, recipe assistant, login/resuming prompting, etc.

C. System Overview

The interaction flowchart of a smart home system can be described simply by Fig. 1. Through the backbone platform,
all of the components are connected and can communicate with one another. Based on such connections, the smart home system gathers the status of the environment and its inhabitants via intelligent sensing HW/SW, receives user input from smart human–computer interfaces, and sends all of the information to the inference mechanism to infer the most appropriate services to be provided. According to the inference results, the smart home system will perform the functions framed by the red rectangle in Fig. 1, including interaction with inhabitants via the smart interface, manipulation of devices around users, and change of the environmental status via the integrated control handled by the home automation system.

Fig. 2. Flowchart for a smart home to provide services.

Fig. 3. Human–System Interaction Framework.

The objective of this work is to utilize the UbiComp-based environment to fulfill "comfort + convenience + security" when performing the aforementioned three functions, and the descriptive flowchart, shown in Fig. 2, is detailed in the following. When a smart home is going to perform a service for a user, it will compare the requirements of the service with the status of the current environment to find out whether there are some qualified spaces (QSs) whose status and resources are ready for activation of this service (related details are provided in Sections IV and V-B). In other words, in QSs, the smart home is ready to interact with the user for the service, and vice versa. Therefore, if QSs exist, the smart home will perform the service at one of the QSs according to the requirements of the service; otherwise, the smart home will continue to check whether there are some candidate spaces (CSs) whose resources are ready, but not their statuses. In other words, in CSs, the smart home is ready to interact with the user for the service, but the user is not. Therefore, if such spaces exist, the smart home will form a CS list and inform the user that a service is ready to be performed; otherwise, the smart home will inform the user that the service cannot be provided in the current environment. In either of the two situations mentioned above, the smart home will notify the user. The idea is to survey all of the spaces in the smart home so services are provided at appropriate spaces according to the relations among user, service, and space. After the notification, if the user admits the service, he/she can choose one of the CSs and change its status so the smart home will surely perform the service, or just ignore the suggestion from the smart home and manually initiate the service at some unqualified space. The latter, however, is beyond the scope of this work.

According to the flowchart shown in Fig. 2, systems initiate interactions if every condition is satisfactory, whereas they notify users to initiate the interactions when encountering unsatisfactory conditions. It is important to note, however, that smart home systems are only for supporting users to perform interactions, so the users can actively initiate the interactions or take control of the interactions anytime they wish.

To achieve the objectives of this paper, there are several issues that need to be addressed, as shown below.
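This decision flow can be summarized in a minimal sketch. All names and the tuple layout for a space's status are our own illustration, not the paper's realization:

```python
def provide_service(svc_attention, spaces):
    """Sketch of the Fig. 2 decision flow. Each space is represented as
    (name, resources_ready, interaction_level); thresholds are illustrative.
    Returns ("perform", name), ("notify", cs_list), or ("unavailable", [])."""
    # QS: resources ready AND interaction level meets the service's requirement
    qs = [name for name, ready, level in spaces
          if ready and level >= svc_attention]
    if qs:
        return ("perform", qs[0])   # system-initiative: perform directly
    # CS: resources ready, but the space's status (interaction level) is not
    cs = [name for name, ready, level in spaces if ready]
    if cs:
        return ("notify", cs)       # user may pick a CS and change its status
    return ("unavailable", [])      # notify: service cannot be provided
```

In the real system the CS list would additionally be ranked (e.g., by distance between spaces, Section IV); the sketch simply returns the candidates.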
This will be done in Sections IV and V.

1) Define the requirements of services and the status of the environment so they can be compared.

2) Find the conditions under which a space is deemed a QS and those under which a space is deemed a CS.

3) Based on the former requirements of services and the status of the environment, determine the relationship among the level of service quality, the choice of QS, and the configuration of HW/SW.

4) Determine the criterion for ranking the CSs so as to produce the CS list.

5) Appropriately notify the user according to the status of the environment.

IV. FRAMEWORK FOR HUMAN–SYSTEM INTERACTION

Our framework is designed based on the interaction relationship among users, services, and spaces. The complete HSI framework is shown in Fig. 3.

A. Properties of Users

There are usually multiple users in a smart home environment, so every user needs an ID to be uniquely identified. Furthermore, for the privacy issue analyzed in our previous
work [16], given the specific characteristics of each user, he/she will belong to certain appropriate groups defined by the service designers. The relations between user groups and service types are specified by the service designers, the users themselves, or the administrator of the smart home. As for the security issue, this work defines the user privileges for initiating services and for utilizing spaces. The underlying philosophy is that some services can only be initiated by specific users/groups, and every space may allocate different HW/SW/resources to different users.

B. Requirements of Services

Each service has to specify its required resources and corresponding HW/SW.⁴ Moreover, these requirements are classified into a number of QoS levels. For example, a media_on_demand service may require HW/SW/resources as "speakers and screens, media player, 1 Mb/s bandwidth, and 30 MB memory" for a high QoS level, whereas it may require "speakers, simple media player, 384 kb/s bandwidth, and 10 MB memory" for a low QoS level.

Next, besides specifying whether user attention is required, each service also specifies the level of attention it requires, because foreground services may require different levels of user attention, while background services require none. For example, a lighting service is a background service, and a voice-only Skype call requires low-level user attention, whereas a video Skype call requires high user attention. Furthermore, in our proposed interaction curve, shown in Section IV-D, the index value of the (minimum) level of user attention is set to 0.2 for background services and, for foreground services, to 0.4 for low-level user attention versus 0.7 for high-level user attention.

Finally, each service sets its levels of priority and privacy requirements, respectively.
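A service's requirement specification can be sketched as plain data. The attention indices and the media_on_demand figures come from the text above; the field names and the dictionary layout are our own assumptions:

```python
from dataclasses import dataclass

# Minimum attention indices from the proposed interaction curve (Section IV-D)
BACKGROUND, LOW_ATTENTION, HIGH_ATTENTION = 0.2, 0.4, 0.7

@dataclass
class Requirement:
    """One QoS level of a service: required HW, SW, and resources."""
    hardware: tuple
    software: str
    bandwidth_kbps: int
    memory_mb: int

# The media_on_demand service specified at two QoS levels
MEDIA_ON_DEMAND = {
    "high": Requirement(("speakers", "screens"), "media player", 1000, 30),
    "low": Requirement(("speakers",), "simple media player", 384, 10),
}
```

With requirements in this form, comparing a service against a space's remaining resources reduces to field-by-field comparisons.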
The former can be set as "normal" or "high," whereas the latter can be set as "personal," "group," or "basic." The priority level is used to adjust the interaction method and the initial interaction scopes of services and notifications, and the privacy level serves as a constraint (or suggestion) for the selection of spaces to perform services. Although there may be multiple kinds of user groups in a smart home, we assume that if a service requires "group" privacy, such a level of privacy can only be assigned to one of them. For example, a video_playing service may require the "group = Adults" privacy level due to its service content, and a telecommunication service may require the "personal" privacy level according to its caller. All of the aforementioned levels should be decided at the design phase by the service developers and could possibly be modified through consultation with users after reviewing the service content at the running stage.

⁴ HW means physical devices like speakers or screens, SW means applications that manipulate HW, and resources mean the necessary CPU loading, computer memory, and network bandwidth consumed by SW.

Fig. 4. Example of the transition of privacy level in a space. Some possible situations/transitions are omitted. Alice is user1, Bob is user2, and their child Carl is user3. GroupA is JazzFans, GroupB is SmartHomeManagers, GroupC is Adults, and GroupD is Male.

Fig. 5. Example of a smart home that consists of multiple spaces with their topology. In particular, each red dot roughly represents the center of a space.

C. The Status of the Environment

The status of the environment is determined based on the status of every agent-based autonomous space introduced in Section I. The interaction level of a space is one of the most important statuses of a space, and we defer its details to the next subsection.

The privacy level of a space is determined as "personal," "group," "basic," or "none" by all involved users.
In contrast, while a service can specify the "group" privacy level only for a single group, a space can specify the "group" privacy level for multiple groups according to all the involved users. A simple example is shown in Fig. 4. Intuitively, the privacy level is used to avoid unsuitable services being performed; nevertheless, it can also be used to suggest possible services, e.g., playing jazz music when Alice and Carl are in the same space, as shown in Fig. 4.

Besides specifying its own resources and its equipped HW/SW, to further deal with the privilege issue, each space also has to specify the resources and HW/SW that can be allocated to each user according to the user's identity.

For each space, there are three other statuses not related to service requirements. The first is a unique name for each space. The other two are its service area and its topology with other spaces. The former is used to determine whether a user enters a space, and the latter is used to estimate the distance between spaces, which then enables us to rank the CSs. An example of the topology of spaces is shown in Fig. 5. Further discussion of the statuses of a space can be found in [16].
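The derivation of a space's privacy level from its involved users can be sketched as follows. The rule is inferred from the Fig. 4 example; the helper itself, its name, and its return shape are our own assumptions, not the paper's algorithm:

```python
def space_privacy(users_present, groups):
    """Sketch: derive a space's privacy level from the involved users.
    One user yields "personal"; several users yield "group" for every group
    containing all of them; otherwise the level falls back to "basic";
    an empty space is "none". Returns (level, applicable users/groups)."""
    users = set(users_present)
    if not users:
        return ("none", [])
    if len(users) == 1:
        return ("personal", sorted(users))
    shared = sorted(g for g, members in groups.items() if users <= set(members))
    return ("group", shared) if shared else ("basic", [])
```

With the Fig. 4 groups, Alice and Carl together yield a "group = JazzFans" space, which is exactly the condition under which suggesting jazz music makes sense.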
Fig. 6. (a) The Ebbinghaus Forgetting Curve (within 1 h). (b) The possible curve of usage level proposed in Nomadic Radio (system sleep time is set as 15 min).

Fig. 7. Our proposed curve of interaction level (system sleep time is set as 20 min).

D. Interaction Level of a Space

Referring to the level of attention of services previously described, we first classify the spaces into two types, background space (B.Space) and foreground space (F.Space), according to their interaction level. The interaction level of a space is defined as the degree of foreground interaction between a user and this space recently. To model the user attention decreasing as time goes by or when the user leaves the space, another space type, called inactive foreground space (IF.Space), is proposed. The design concept of IF.Space is inspired by the famous Forgetting Curve [see Fig. 6(a)], proposed by the German philosopher Hermann Ebbinghaus in 1885, which describes how people quickly forget things once they are no longer in touch with them. We apply this curve to simulate user feelings for interactions in a space. Once users stop performing foreground interactions in a space, their attention is assumed to decay swiftly; thus, the interaction level of that space soon lowers and deviates from the original attention requirement of the ongoing foreground service. More discussion of the difference between B.Space/F.Space and IF.Space is provided in Section VI-C.

According to the Forgetting Curve, we define the interaction level to be 0.3, 0.4, and 1 for B.Space, IF.Space, and F.Space, respectively. The retention of immediate recall is 100%, so the interaction level is defined as 1 for F.Space, where users pay attention to perform foreground interaction currently or recently. The retention quickly degrades to 58% after 20 min, 44% after 1 h, 36% after 9 h, and 34% after 1 day. Therefore, the basic interaction level for any space users use daily is defined as 0.3 (a simplified value of 34%) and is assigned to B.Space, where users currently are but do not need to pay attention to interact with. As for IF.Space, which models user attention decreasing as time goes by after foreground interaction stops, the interaction level is defined as 0.4, the average of 44% and 36%, since users often interact with a space at an hourly frequency in their daily life.⁵

To establish our framework for user–space interaction, we borrow the concept of usage level [see Fig. 6(b)] from [11], which is similar to the interaction level here and is defined as the elapsed time⁶ scaled by the "system sleep time" (the time threshold for an idle system to sleep), such that the resulting logarithmic curve reflects that users tend not to prefer significant change in interaction with their surroundings within a short period of time. Such a value is between 0 and 1, where a value closer to 1, indicating a high level, is assigned for usage with short-to-medium elapsed time, and a value of 0 is assigned after the sleep time.

Next, we apply the two design concepts mentioned above to design the curve of interaction level, where the value drops from 1 to 0.4, as well as the corresponding attention requirement of services. Here, we divide the attention requirement of foreground services into "high" and "low" and divide the curve of interaction into two halves accordingly. The first half of the curve is for a high interaction level, whose value descends from 1 to 0.7, whereas the second half of the curve is for a low interaction level, whose value descends from 0.7 to 0.4. The former part of the curve is applied to comply with the usage level in [11], whereas the latter part of the curve is applied to approximate the Ebbinghaus Forgetting Curve. The value of the interaction level is calculated by the following two formulas:

InteractionLevel = [log(SystemSleepTime/2 − ElapsedTime) / log(SystemSleepTime/2)] × 0.3 + 0.7, if ElapsedTime < SystemSleepTime/2

InteractionLevel = [log(ElapsedTime − SystemSleepTime/2) / log(SystemSleepTime/2)] × (−0.3) + 0.7, if ElapsedTime > SystemSleepTime/2.

As for the curve of interaction level after the system sleep time, unless users log in/out again, it maintains a fixed value according to [11], which we set as 0.4. An example of how this curve is used is provided in Section VI-B, and an example of the interaction curve before the system sleep time is shown in Fig. 7.

⁵ The retention after 20 min and that after more than 1 day are not used, since the former is still changing significantly, unlike the retention at a later time, and users usually interact with their home with daily frequency.

⁶ The value of "Elapsed Time" means the length of time starting from when people no longer interact with the system.

Next, considering the spaces that can be utilized but currently are not engaged with some specified user, we denote them as
another space type, called free space (Free.Space). Moreover, concerning the privacy issue under the circumstance of multiple users, we define another space type, called restricted space (X.Space), as follows: if a space is an F.Space for user_1 but not for user_2, then this space is an X.Space for user_2. Further, according to whether user_2 is in the X.Space or not, X.Space is classified as restricted background space (XB.Space) or restricted free space (XFree.Space).

Fig. 8. Curve of interaction level according to the described scenario.
Fig. 9. Algorithm for a smart home to initiate services.

Now, we are ready to examine our framework, in which the values of the various interaction levels are 1-0.4, 0.4, 0.3, 0.2, 0, and -1 for F.Space, IF.Space, B.Space, XB.Space, Free.Space, and XFree.Space, respectively. The value of the interaction level in B.Space is set lower than that in IF.Space since the interaction level for the former is considered to be relatively lower, a concept verified in Scenario 3 of Section IX-A. The philosophy of setting the value of the interaction level in XB.Space is that an X.Space should have a lower preference for services in order not to interfere with other users, but an XB.Space should still be suitable for certain kinds of background interaction. As for Free.Space and XFree.Space, the levels are set because users are not there and because we want to avoid interfering with users performing foreground interactions, respectively. The interaction level in a space will evolve with time as shown in Fig. 8, which describes the following scenario: In the beginning, a space (e.g., a living room) is a Free.Space. After a user (e.g., Carl) enters, it becomes a B.Space. Once the user logs in for foreground interaction (e.g., an interactive tutor), it becomes an F.Space. Then, once the user stops interacting with the service (e.g., Carl takes a break), its interaction level decays.
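The per-type interaction levels and the space-type transitions described in this scenario can be sketched as a small lookup table and state machine. The names and the event set below are our own illustrative assumptions, not the paper's implementation:

```python
# Interaction-level values per space type, from the framework above.
SPACE_LEVELS = {
    "F.Space": 1.0,        # decays toward 0.4 while the user idles
    "IF.Space": 0.4,
    "B.Space": 0.3,
    "XB.Space": 0.2,
    "Free.Space": 0.0,
    "XFree.Space": -1.0,
}

# Transitions for a single user in a single space, per the scenario:
# Free -> (enter) B -> (login) F -> (idle past sleep time) IF
#   -> (logout) B -> (leave) Free.
TRANSITIONS = {
    ("Free.Space", "enter"): "B.Space",
    ("B.Space", "login"): "F.Space",
    ("F.Space", "idle_timeout"): "IF.Space",
    ("F.Space", "logout"): "B.Space",
    ("IF.Space", "login"): "F.Space",
    ("IF.Space", "logout"): "B.Space",
    ("B.Space", "leave"): "Free.Space",
}

def next_space_type(current, event):
    """Return the space type after an event; unknown events keep the state."""
    return TRANSITIONS.get((current, event), current)
```

Driving this state machine with the scenario's event sequence (enter, login, idle_timeout, logout, leave) reproduces the Free.Space to B.Space to F.Space to IF.Space to B.Space to Free.Space cycle.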
Its interaction level returns to 1 whenever users perform foreground interaction (e.g., Carl uses the interactive tutor). If users stay idle longer than the system sleep time (e.g., Alice is busy finishing her cooking and no longer interacts with the recipe assistant), it becomes an IF.Space. Once the user logs out, it goes back to a B.Space. Finally, it becomes a Free.Space again once users leave. A simple example of the status transition of spaces can be found in [16].

V. ALGORITHM FOR HUMAN–SYSTEM INTERACTION

Under our proposed framework, for a smart home system to display "comfort + convenience + security" when interacting with its users, a corresponding algorithm was proposed in [16]. We have further improved it and present its core ideas below. Some details of the original algorithm are omitted and can be found in [16].

A. System Initiative Services

Fig. 9 shows the main algorithm, which formalizes Fig. 3; some of its sub-algorithms are described later. If the smart home initiates a service, svc_x, for a user, user_y, then each space, space_i, is examined as to whether it can be a QS or a CS by comparing the requirements of svc_x with all of its space statuses.

B. Find QS and CS From the Environment

First, the system manages to find QSs where svc_x can be performed. If space_i is a QS for svc_x for user_y, it fulfills the following conditions:
1) Its remaining resources and HW/SW for user_y are richer than the least-resource constraints of svc_x.
2) Its privacy level for user_y is not lower than the privacy required by svc_x.
3) Its interaction level for user_y is not lower than the user attention level specified by svc_x.
The idea of a CS is to deal with the situation where there is currently no QS for the s.-i. foreground service. Note that CSs are those spaces that fail to fulfill only the last condition, and all the CSs found are made into a CS list for the user. More details can be found in [16].

C. Configuration of Services

The algorithm in Fig.
10 determines the conditions under which svc_x will be performed, including which QS, what HW/SW, and what QoS level. The priority of svc_x is used here to adjust the interaction scope:
1) Services requiring high priority: All of the QSs initiate svc_x at the best QoS level they can afford with the related HW/SW specified by svc_x, and the option of manual control for svc_x is prompted.
2) Services requiring normal priority: All of the QSs calculate the best QoS level they can afford, and all the QSs with the best QoS level are chosen to initiate svc_x with the related specified HW/SW.
If multiple spaces perform svc_x at the same time, the user can decide which space to interact with, and after the
user has made the decision, the other spaces will stop svc_x. If the user does not make a decision, the situation will remain as it is.

Fig. 10. Algorithm to perform and configure a service.

D. Notification

If there is no QS, the system will notify users to handle the service initiation. For a background service, since it has no CS, the system only notifies users that "svc_x cannot be performed currently." As for a foreground service, it notifies users that "svc_x is waiting to be performed," with the CS list attached in addition.

The form of notifications depends on their developers. Nevertheless, the authors suggest that it should include a message window displaying some basic information about the contents (what information is displayed depends on the notification developers), along with a hinting sound if the notification has high priority. After users agree to accept a notification by selecting it, it will display its full contents. Note that, if users are currently in a B.Space only, the real notifications will be withheld, and users will only get a notification prompting them to log in, with a hinting sound if the real notification has high priority. If users wish, they can interact with the real notifications after they log in. More details can be found in [16].

E. Manage Spaces

In addition to providing services to inhabitants, a smart home system should also manage its spaces according to how users interact with them to comply with our proposed framework. The algorithm shown in Fig. 11 describes how to handle the interaction level and the privacy level of each space when some user enters, leaves, logs in to, or logs out of some space, or when he/she gives some foreground input or not. Note that when users enter an IF.Space, login will be prompted with a message about their unfinished interactions in the IF.Space.

Fig. 11. Algorithm for a smart home system to configure its spaces.

VI.
ANALYSIS AND DISCUSSION

A basic analysis was provided in our previous work [16] to discuss how our proposed framework allows a smart home to fulfill "comfort + convenience + security" when it interacts with users. In this section, several issues behind the basic design concept of our framework are raised for further discussion.

A. Performing Foreground Interactions in F.Space

In our previous analysis in [16], a notification is delivered readily, according to its developer, if the user is in some F.Space. However, even if a user is in an F.Space, he/she may have no intent to interact with the system at that point. In this case, the proposed framework does not perform foreground services (e.g., notifications) directly but finds a more suitable space in which foreground services can be initiated. Afterwards, how to perform services (e.g., whether to provide notifications directly or not) depends on the service developers, and users in an F.Space still have the freedom to block or shut down services they do not need via the interaction mechanism designed by the service developers. In addition, if users have no intent to interact with the system at all, they can simply either log out of the F.Space or move to some B.Space to block all foreground interactions. The evaluation results of Scenario 5 and Scenario 6 in Section IX-B also support this design concept; without the proposed framework, users may be disturbed by the system anytime and anywhere due to the nature of ubiquitous computing.

Nevertheless, people may feel frustrated when being overloaded with multiple foreground interactions occurring simultaneously. Therefore, both the level of frustration and the maximum number of services acceptable in the F.Space or
B.Space window need to be studied, although the current evaluation results of Scenario 6 in Section IX-B have shown that two simultaneous foreground interactions in the F.Space are acceptable for a user.

B. Use of "Curve of Interaction Level"

In this paper, the "curve of interaction" is used to model the possible curve of both the usage level and degrading attention. For background services, or foreground services that do not change behavior according to user attention, developers do not need to apply this curve. This curve, however, can assist foreground services that need a different interaction level to handle interaction details. For example, similar to nomadic radio [11], an information-providing service may provide the complete content of some information when users have a high interaction level and provide a brief summary when users have only a low interaction level. On the contrary, an alarm system requiring user feedback (e.g., an alarm clock) may increase its interaction energy (e.g., increase the alarm volume) as the user interaction level degrades, since it requires user attention urgently.

When applying this curve to design a foreground service, service developers first need to design the different service behaviors corresponding to the interaction level. After that, service developers need to decide whether the interaction level should be determined by the service itself or by the smart home. In the latter case, a default SystemSleepTime specified by users is used, whereas in the former case, service developers need to specify a fixed value or a dynamic value according to the characteristics of the service itself. In either case, only one parameter needs to be specified, and the remaining task is to calculate how long users have not interacted with this service.

Fig. 12. Smart home components and their interaction relationship.

C.
Notion of IF.Space

IF.Space is intuitively very much like B.Space; however, there are two reasons for providing the notion of an IF.Space. First, although users are still in the IF.Space, their attention may be low because they have already stopped foreground interactions for a period of time, and this makes IF.Space different from F.Space. Second, although users are not in the IF.Space, they have some unfinished foreground interactions there, and this makes IF.Space different from B.Space, where users do not have any foreground interactions. In addition, according to our surveys presented in Scenario 3 in Section IX-A, users prefer smart home systems to behave differently in these two kinds of spaces. Therefore, if one simply treats these two kinds of spaces as the same, users may not respond favorably.

D. Learning of User Interactions

Rather than requiring the user to interact every time, it is possible for the smart home system to learn interactions from the user and infer actions. Though not presented within this framework, the authors have proposed a mechanism [29] for this problem. With observations from a variety of multi-modal and unobtrusive wireless sensors seamlessly integrated into smart homes, this mechanism infers a user's interactions by utilizing a generalized Bayesian Network fusion engine with inputs from a set of the most informative features, collected by ranking their usefulness in estimating the interactions of interest. With the help of this work, which identifies the key elements of the context model about the interaction relationship among users, services, and spaces, the mechanism in [29] can know which elements to learn to understand user preferences and thus help make the interaction more "comfortable" and "convenient." The cooperation between the proposed framework and this learning mechanism [29] will be studied in the future.

VII.
SYSTEM DESIGN AND IMPLEMENTATION

Our research project, NTU Attentive Home, proposed a multi-agent service-oriented smart home architecture [21], where each component is designed as an agent, and a Message-Oriented Middleware (MOM) is built as the backbone platform to model the "event-driven" property, which is a natural characteristic of the UbiComp environment [22]. Our MOM creates a Home Message Bus as a "software bus," embedded with multiple topics as "logical pathways," for all the agents to communicate with one another by exchanging messages via publishing/subscribing events through these topics. Heterogeneous SW/HW, including the applications developed in the next section, is also integrated via this mechanism [22]. The remainder of this section shows how to design and realize a smart home system according to our proposed framework; the implemented smart home components are shown in Fig. 12 with their interaction relationship.7

7 A larger version of this figure and the figures of each decomposed component are available online at
A. Locator Agent With Identity Recognition

A Locator Agent is necessary for a smart home system to find the locations of its inhabitants, thus enabling location-aware services, such as turning on a lamp in a dark place whenever an inhabitant walks into that place. Identity Recognition is also an important additional function of the Locator Agent, which helps a smart home system know whom to interact with currently and provide personalized services, such as playing music according to his/her preference.

There are many ways to implement this agent, and much research and many products have addressed this issue [23]. As long as the adopted method can send formalized messages about its detection/recognition results to our platform, it is suitable for our system. In this case, we use our previous research results to realize this requirement [23]–[26]. Specifically, the work in [25], which utilizes RFID technology, is responsible for identifying users, and the basic Locator Agent is jointly supported by [23] and [24], which track users via a smart floor and cameras. Afterwards, the smart home system will use RFID to identify users again only if the user id needs to be verified or corrected [26].

B. Space Agent

According to our framework, each autonomous space is managed by a Space Agent, which defines its service area, its resources, and each inhabitant's privileges over these resources. Each Space Agent subscribes to the messages from the Locator Agent, thereby learning whether some inhabitant has entered/left its service area. Once this occurs, the Space Agent will immediately publish events to notify the Privacy Manager and the Interaction Manager, which is then followed by corresponding status updates in the smart home.

The Space Agent also manages resource allocation when some application requires resources.
Once receiving a resource request from some application, which also specifies the subject of this application, the Space Agent readily queries the Application Database (AppDB) to obtain information about the resources needed for this application and checks whether these resources can be allocated. There are three possible results:
1) Permitted: If this inhabitant has the privilege to use all the requested resources and all of them are free, the Space Agent will allocate these resources so that the application is ready to be performed at its best QoS level and will inform the application to start the corresponding interaction.
2) Reserved: If this inhabitant has the privilege to use all the requested resources, but some of them are not available at the moment, the Space Agent will put this request into a waiting queue and inform the application accordingly. According to the logic designed into the application, the application may choose to wait, to cancel the request, or to notify inhabitants to solicit their opinions/decisions.
3) Forbidden: If this inhabitant does not have the privilege to use some of the requested resources, the Space Agent will inform the application that the request is forbidden.
Once the application finishes its interaction or cancels its request in the waiting queue, it will inform the corresponding Space Agent to release the resources or to remove the request from the waiting queue. In the former situation, when the application finishes normally, the Space Agent will check whether there are other applications in the waiting queue and allocate the released resources to those waiting requests whose requirements are all fulfilled.

It is worthwhile to note that another important agent in our system, called the Find_QS_and_CS Agent, will also query the Space Agent, with details described in Section VII-G.

C. Application Database (AppDB)

Each application should specify its resource requirement when being developed.
If an application can perform its interaction at various QoS levels, multiple resource requirements should be specified, one for each corresponding QoS level. Then, when an application is installed into a smart home system, the above information will be stored in the AppDB, which will be queried by the Space Agent later when necessary, as in the situation described in the previous subsection.

D. User Interface (UI) Agent

Each autonomous space has one UI Agent, which allows inhabitants to log in/out to switch the interaction level between Foreground and Background, with the UI Agent notifying the Interaction Manager. This agent also calculates the idle time for each inhabitant who has logged in and sets his/her interaction level to "Inactive Foreground (IF)" if he/she does not perform any foreground interaction for a period of time exceeding some threshold. The UI Agent is also responsible for displaying system messages from the Notification Agent. Nevertheless, to meet the human-centric design, which is to maintain users' privacy, the UI Agent will not display a message directly if the recipient has not logged in at the corresponding space. Instead, the UI Agent will display a message asking the recipient to log in first and will display the system message after the recipient logs in.

Each active foreground pervasive application needs to notify the UI Agent that it is still active, whereby its corresponding inhabitant's interaction level for the current space will be updated. As per the discussion in Section VI-B, however, each pervasive application can calculate the interaction level by itself if it wishes.

This agent also accepts commands from the Interaction Manager and from some applications that are capable of an interaction mechanism similar to login/logout.
The latter is for inhabitants who wish to use some other interaction applications similar to this agent, while still obtaining support from our framework. In the former case, to be specific, the Interaction Manager will command the UI Agent to set the interaction level between the corresponding space and some inhabitant to IF when he/she leaves the space before logout, or command the UI Agent to log out inhabitants who have already left and perform remote logout later.
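The UI Agent's idle-time bookkeeping described above might look roughly like the following sketch. The class and method names are hypothetical; in the actual system the agents communicate over the Home Message Bus rather than by direct calls.

```python
import time

class UIAgentSketch:
    """Tracks per-inhabitant foreground activity and derives the
    Foreground ('F') / Inactive Foreground ('IF') / Background ('B') status."""

    def __init__(self, idle_threshold_s=1800.0):
        self.idle_threshold_s = idle_threshold_s
        self.last_input = {}            # user id -> last foreground input time
        self.logged_in = set()

    def login(self, user, now=None):
        self.logged_in.add(user)
        self.last_input[user] = now if now is not None else time.time()

    def logout(self, user):
        self.logged_in.discard(user)
        self.last_input.pop(user, None)

    def foreground_input(self, user, now=None):
        # Active applications report user input, resetting the idle clock.
        if user in self.logged_in:
            self.last_input[user] = now if now is not None else time.time()

    def status(self, user, now=None):
        """'F' while recently active, 'IF' after the idle threshold,
        'B' when not logged in."""
        if user not in self.logged_in:
            return "B"
        now = now if now is not None else time.time()
        idle = now - self.last_input[user]
        return "IF" if idle > self.idle_threshold_s else "F"
```

The explicit `now` parameter is only for deterministic illustration; a real agent would read the clock itself and publish status changes as events.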
E. Privacy Manager

Each privacy-aware application for a group needs to register a list with this manager, including group names and group members. Once informed by a Space Agent about some inhabitant's entering/leaving, the Privacy Manager will re-estimate the privacy level of each inhabitant in this space and publish events to notify every privacy-aware application.

The Privacy Manager maintains a table of privacy levels of users over spaces, which deals with two kinds of query: a query about the privacy level of some space for some user, or a query about a list of spaces meeting some privacy level for some user. The latter is for a privacy-aware application to find suitable spaces in which to initiate interaction.

Unlike the situation where there is one Space Agent for each autonomous space, a smart home system needs only one Privacy Manager. This is because each space has its own unique set of resources and its own privilege permissions for different inhabitants, but the privacy level in a space is related only to its current users rather than to the environment where the space is located, and its inference rule is universal over all spaces in a smart home.

F. Interaction Manager

Once informed by a Space Agent about an inhabitant's entering/leaving, the Interaction Manager will re-estimate the interaction level of each inhabitant in this space and notify every interaction-aware application. Estimation will also be invoked when the Interaction Manager is informed by some UI Agent about an inhabitant's login/logout.

The Interaction Manager maintains a table of users over spaces with their corresponding interaction levels and deals with two kinds of query: a query about the interaction level of some space for some user, or a query about a list of spaces meeting some interaction level for some user.
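The two query kinds served by the Privacy Manager and the Interaction Manager can be sketched over a simple user-by-space table. This is a minimal illustration with hypothetical names; the actual managers publish events over the message bus and apply their own inference rules.

```python
class LevelTableSketch:
    """User-over-space table of levels (privacy or interaction), answering
    the two query kinds: the level of one space for a user, and the list of
    spaces whose level for that user meets a requirement."""

    def __init__(self):
        self.table = {}                 # (user, space) -> level

    def set_level(self, user, space, level):
        self.table[(user, space)] = level

    def level_of(self, user, space, default=0.0):
        # Query kind 1: level of a given space for a given user.
        return self.table.get((user, space), default)

    def spaces_meeting(self, user, required_level):
        # Query kind 2: spaces meeting the required level, used by aware
        # applications to find where to initiate interaction.
        return sorted(space for (u, space), lvl in self.table.items()
                      if u == user and lvl >= required_level)
```

The same table shape serves both managers; only the stored values (privacy levels vs. interaction levels) and the update triggers differ.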
The latter is for an interaction-aware application to find suitable spaces in which to initiate interactions.

If an inhabitant leaves a space before logout, the Interaction Manager will change his/her level of interaction with this space to IF and command the UI Agent in this space to change his/her status. The Interaction Manager also helps inhabitants perform remote logout from some UI Agent.

A smart home system also needs only one Interaction Manager, for the same reason as for the Privacy Manager.

G. Find_QS_and_CS Agent

Each application can query the Space Agent, Privacy Manager, and Interaction Manager for interaction permission or interaction information by itself; nevertheless, our framework additionally provides the Find_QS_and_CS Agent to support an integrated mechanism for pervasive applications. Upon a request from an application for some specific inhabitant, with a specified interaction level and privacy level, this agent will query the Privacy Manager, the Interaction Manager, and every Space Agent to find QSs and CSs on behalf of the requesting application. The design of this component embeds the human-centric notion by achieving integrated interaction between the smart home system and its inhabitants via finding the best configurations of interaction with multiple autonomous spaces whenever possible. The corresponding details are described in Section V-B.

H. Notification Agent

To achieve the characteristic of m.-i. [5], when some ambiguous situation occurs, e.g., with multiple QSs and CSs, applications can notify inhabitants to decide the next step to take, e.g., which space to use to perform the interaction. In addition, applications can also notify inhabitants about some interaction exceptions, e.g., some interaction being forbidden due to an insufficient privacy level, and let the inhabitants decide whether the interaction should be performed or not.
This design also complies with the human-centric notion, since it helps inhabitants control their interaction with the smart home. Nevertheless, whether to use this component is optional for pervasive applications. Each application can design its own logic to handle ambiguous situations and/or interaction exceptions.

VIII. APPLICATION EXAMPLE

We have developed two pervasive applications based on our framework: "Media Follow Me" and "Ubiquitous Skype." However, due to the page limitation, only the latter is demonstrated below, and the former, which is simpler, is provided online.8

The basic idea of "Ubiquitous Skype" is to enable an inhabitant to use Skype wherever he/she goes. The motivation for Skype to move from space to space is to explore the possibilities of what smart home systems can support. In addition, rather than carrying mobile devices almost all the time as they do outside the home, users may put their mobile devices someplace in the home but move around to different spaces for a variety of activities, so they cannot reach their mobile devices immediately. Furthermore, when using a space rather than a mobile device as the medium for Skype, users have more freedom (e.g., two hands free to do other work) and a better experience (especially when enabling the visual function on a big screen).

To enable an inhabitant to use Skype wherever he/she goes, the simplest method is to install Skype in every autonomous space and let the inhabitant log in at each of them. Then every Skype instance will automatically ring when this inhabitant is called. Nevertheless, from the human-centric viewpoint, several points need to be considered further:
1) Someone else may already be using Skype in some space that this inhabitant is heading to.
2) This inhabitant may not be able to use Skype due to a privilege issue or a resource constraint.
3) No matter where the inhabitant is now, when he/she is called, every Skype device rings. This will definitely disturb other inhabitants.

8
4) The inhabitant being called may have company in the space where he/she is now, thus causing the privacy level of this space to be insufficient for the call.
5) The inhabitant being called may wish not to be disturbed.

Fig. 13. Control flowchart and demonstration scenario of Ubiquitous Skype.

After applying our framework, the control flowchart is illustrated in Fig. 13. Initially, when user_1 enters Location 1, the Space Agent will publish this event for the Privacy Manager and the Interaction Manager, and these two managers will reason about the context information to be published for the related applications. The flowchart of the above is shown in Fig. 13(a).

In this scenario, Skype is an application that is both privacy aware and interaction aware, and it requires the user's foreground attention. Therefore, when there is an incoming call, the Skype Admin will publish its requirements, which in this scenario are "user_1 + Skype resource + personal privacy + foreground attention," for the Find_QS_and_CS Agent to provide a list of QSs and CSs, which is the intersection of the lists provided by the Privacy Manager, the Interaction Manager, and each Space Agent. The flowchart of the above is shown in Fig. 13(b).

If there are some QSs, the Skype application will be started at one of them. Nevertheless, since user_1 has not logged in at the foreground interaction level, implying that user_1 may not have the attention required by Skype, there is no QS. Therefore, the Skype Admin asks the Notification Agent to notify user_1 to log in, and the Notification Agent asks the UI Agent to display this notification. Once user_1 receives this notification, he/she can choose whether to log in or not. User_1 can choose not to log in if he/she prefers to rest or not to be disturbed; in this scenario, however, user_1 logs in, making Location 1 a QS, and the Skype Admin starts Skype at Location 1 to interact with user_1. The flowchart of the above is shown in Fig. 13(c).

After a while, user_2 enters Location 1 and lowers the privacy level of Location 1. The Privacy Manager publishes this context, the Skype Admin receives the change and asks the Skype instance used by user_1 at Location 1 to pause. The Skype Admin then asks the Find_QS_and_CS Agent to provide lists of QSs and CSs to find where Skype can continue for user_1. The flowchart of the above is shown in Fig. 13(d).

Once receiving the list, the Skype Admin asks the Notification Agent to notify user_1 that Location 2 is a CS. The flowchart of the above is shown in Fig. 13(e). As user_1 moves to Location 2, the flowchart is shown in Fig. 13(f). After user_1 logs in at the foreground interaction level, the Skype Admin commands Skype at Location 1 to stop and Skype at Location 2 to continue interacting with user_1. The flowchart is shown in Fig. 13(g).

IX. PRELIMINARY EVALUATIONS

The goal of the evaluations in this section is to verify the design concept of our proposed work. Our proposed work was evaluated through interviews of users involved in our designed application scenarios before and after applying our framework and algorithm.

A. Framework Evaluations—Phase 1

In this part, our proposed work was evaluated by 54 people, including 24 graduate students majoring in computer science, 1 graduate student majoring in social work, 5 kindergarten teachers, and 24 parents of kindergarten children.9

9 The involved kindergarten is a public school for regular parents, rather than an elite school. Furthermore, the reason for choosing this kindergarten is that it cooperates with a social work researcher who is involved in a research project with us, so the authors took advantage of this by visiting the kindergarten to increase the variety of survey respondents.
TABLE I. APPROPRIATENESS OF SYSTEM BEHAVIORS IN SCENARIO 1.

For each system behavior in each scenario, these 54 people were asked to choose a level of appropriateness from five options: very bad, bad, fair, good, and very good. For reading simplicity, the evaluation results are simplified to only three options: bad (bad or very bad), fair, and good (good or very good). We have, however, managed to present the original results by assigning each of the original options a score to quantify them: 0 for very bad, 25 for bad, 50 for fair, 75 for good, and 100 for very good.

Scenario 1—Dealing With Paused Interactions When Users Return: When users move around the various spaces in a home environment, they may return to some space where their paused interactions reside. The survey respondents were asked to evaluate the behaviors concerning whether and how to resume the paused interactions in this situation.
• Before applying our work: Paused interactions may resume automatically or wait for manual instructions from users.
• After applying our work: The system first prompts users to log in, with a message briefly showing that there are some paused interactions. After that, the system waits for users to log in. Once users log in, the system restores and resumes the status of the previously paused interactions.
From Table I, neither "resume automatically" nor "wait for manual instructions" seems to be a preferred system behavior for users. Some kind of semi-automated behavior should be a better choice, and "providing prompts first" (system initiative) seems slightly better than "waiting for user actions first" (user initiative) in this scenario, although both get high appropriateness and low inappropriateness.
Nevertheless, our system behavior, "prompt first and wait for user instructions," is the one with the highest appropriateness.

Scenario 2—Dealing With Paused Interactions When Users Enter a New Space: When users move around in a home environment and enter a new space, they may have some paused interactions in the space they just left. The survey respondents were asked to evaluate the behaviors concerning whether and how to transfer the paused interactions from the old space to the new one.
• Before applying our work: Again, in the new space, paused interactions may resume automatically or wait for manual instructions from users.
• After applying our work: In this scenario, our system waits first until user login, since it cannot discern whether users want to continue interactions or to rest. Nevertheless, once users log in, the system initiates interactions in the new space and restores and resumes the status of the interactions paused in the previous space.

TABLE II. APPROPRIATENESS OF SYSTEM BEHAVIORS IN SCENARIO 2.
TABLE III. APPROPRIATENESS DIFFERENCES AND RANKING CHANGE (BETWEEN IF.SPACE AND B.SPACE).

From Table II, semi-automated behaviors are the better choices again, but in this scenario, "waiting for user actions first" (user initiative) seems better than "providing prompts first" (system initiative). Our system behavior, "wait until user login, then resume interactions," is again the one with the highest appropriateness.

Scenario 3—Verification of the Design of IF.Space: Our work especially designs IF.Space to represent spaces that contain paused foreground interactions, thus differentiating them from B.Space, which represents spaces that do not contain foreground interactions. Recall that Scenario 1 describes the situation of dealing with HSI in IF.Space, whereas Scenario 2 describes that in B.Space.
The comparison between Tables I and II is shown in Table III; the differences in appropriateness and the changes in ranking support our design concept, namely that IF.Space and B.Space are different for users in HSI. Therefore, without our work to differentiate system behaviors in IF.Space and B.Space, a smart home system cannot achieve the highest appropriateness in both IF.Space and B.Space.

Scenario 4—Secured Interactions in a Space: This scenario is about security. Taking video playing and telecommunication as the applications for demonstration, the survey respondents were asked how to handle a situation where video/telecommunication is or will be happening in some space, while other family members, who are not suitable for this video or not related to this telecommunication, are still in the same space or accidentally enter it.

• Before applying our work: Interactions make no privilege distinctions among users, and interactions continue even though inappropriate inhabitants join in.

• After applying our work: Every user belongs to some groups concerning all services provided in a smart home. Interactions automatically pause or continue based on the
privacy level resulting from the user groups in the same space.

TABLE IV: AVERAGED APPROPRIATENESS OF SYSTEM BEHAVIORS IN SCENARIO 4 (BEFORE AND AFTER APPLYING OUR WORK)

TABLE V: DEMAND FOR CONTROL INTERFACES IN SCENARIO 4

The appropriateness of system behaviors is shown in Table IV. Although our framework does not perform as well in the case of telecommunication as in the case of video playing, it still improves appropriateness and decreases inappropriateness.

The difference between the former demonstration application, video playing, and the latter one, telecommunication, may be due to two reasons. The first is that the conflict situation in the latter application is not as serious: it may not hurt if unrelated users are involved in a telecommunication, but it does matter if a video is played while some unsuitable users are present. The second is that telecommunication is human–human interaction via the system rather than simple HSI. When dealing with this kind of interaction, not only the needs of users at the local side but also those of users at the other side have to be considered. Therefore, since the damage in the latter application is not that critical, users may prefer to handle this conflict by themselves rather than let the system handle the situation automatically. Table V also supports this analysis: although users indicate that they demand some control interfaces over the interactions in both demonstration applications, more users, almost everyone, demand this in the latter case. Some further evaluation results are presented in Scenario 8.

B. Framework Evaluations—Phase 2

To clarify the design issues analyzed from the results in the previous phase, our proposed work was further evaluated by 24 people, who are either undergraduates or graduates majoring in computer science.
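The appropriateness percentages reported in both phases follow the quantification described earlier: each five-option answer maps to a score (0/25/50/75/100) and the scores are averaged over respondents. A small Python sketch of that aggregation, with names of our own choosing (`SCORES`, `averaged_appropriateness`, and `simplified_counts` are illustrative, not from the paper):

```python
# Map each questionnaire option to its quantified score, as described
# for Phase 1 (0 for very bad up to 100 for very good).
SCORES = {"very bad": 0, "bad": 25, "fair": 50, "good": 75, "very good": 100}

def averaged_appropriateness(answers: list[str]) -> float:
    """Average the quantified scores over all respondents (0-100 scale)."""
    return sum(SCORES[a] for a in answers) / len(answers)

def simplified_counts(answers: list[str]) -> dict[str, int]:
    """Collapse the five options into the three reported ones:
    bad (bad or very bad), fair, and good (good or very good)."""
    collapse = {"very bad": "bad", "bad": "bad", "fair": "fair",
                "good": "good", "very good": "good"}
    counts = {"bad": 0, "fair": 0, "good": 0}
    for a in answers:
        counts[collapse[a]] += 1
    return counts
```

For example, a behavior rated "good" by three respondents and "fair" by one would score (3 × 75 + 50) / 4 = 68.75 on this scale.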
The format of the questionnaires is the same as the one in the previous phase.

Scenario 5—Providing Notifications in F.Space: Users are in the intersection of Space A and Space B, which means that the users and the system can interact with each other in either of these spaces. A foreground service is provided in one of these spaces, and a notification is provided either in the same space or in the other one, for users to choose the level of appropriateness. To eliminate bias caused by the service itself, two kinds of foreground services are tested, Video Playing and Telecommunication. To consider the effects caused by the importance of the notification, two priority levels of notifications are used, Normal and High.

According to Table VI, in all eight possible cases, when a foreground service is already being performed in some space, no matter whether the priority level of the notification is Normal or High, providing notifications in the same space attains higher appropriateness (25% on average) and lower inappropriateness (13.5% on average) than providing notifications in the other space. In addition, the experiments in our previous study [16] have also shown that users respond to a foreground interaction faster when this interaction is initiated in F.Space rather than in B.Space. Together, they provide evidence of the differences between F.Space and B.Space, verifying the design of these two elements and showing that users prefer the design of “providing foreground interactions (e.g., system notifications) in F.Space rather than B.Space.”

Scenario 6—Two Foreground Services in the Same Space: As in Scenario 5, users are in the intersection of Space A and Space B, and two kinds of foreground services are used, Video Playing and Telecommunication. A foreground service is performed first in one of these spaces; then the other one is performed in the same space or the other space, for users to choose the level of appropriateness.
To eliminate bias caused by the order of performing the services, both possible orders are tested.

According to Table VII, in all of the possible cases, when a foreground service is already being performed in some space, providing another foreground service in the same space gets higher appropriateness (25% on average) and lower inappropriateness (21.9% on average).

Scenario 7—Providing Private Notifications: This scenario is similar to Scenario 5, but the notification to be provided is a private message. According to Table VI, notifications should be provided in F.Space. According to Table VIII, though, although the difference is very small, users prefer private notifications to be provided in Space B rather than in Space A, even when Space B is not an F.Space. Nevertheless, note that ignoring the factor of privacy and simply providing notifications in F.Space can still attain more than 70% appropriateness on average in the above special case.

The reason that Space B is better than Space A may be that Space A is less private in nature due to its larger and more open service area. This implies that when providing a private service, if there are multiple spaces with the same conditions, choosing the one that is more private in its natural characteristics may yield higher user satisfaction. We defer consideration of this issue to future work.

Scenario 8—Manual Control Option in Case of High Priority: According to Table V, users demand control interfaces for the interactions whose potential damage is critical. Therefore, the design of actively prompting the option of manual control when initiating a high-priority service is proposed and verified in this scenario. Under the situation of
a service being initiated at different priority levels, users were asked to choose the level of appropriateness for two kinds of behaviors:

• System-Initiative (S.-I.): The interaction details are handled by the smart home first and are handled by users only if the smart home cannot deal with them.

• User-Initiative (U.-I.): All of the interaction details are controlled by users rather than the smart home.

Two priority levels of services are used, Normal and High. To eliminate bias caused by the service itself, two kinds of foreground services are tested, Video Playing and Telecommunication.

TABLE VI: APPROPRIATENESS OF SYSTEM BEHAVIORS IN SCENARIO 5

TABLE VII: APPROPRIATENESS OF SYSTEM BEHAVIORS IN SCENARIO 6

TABLE VIII: APPROPRIATENESS OF SYSTEM BEHAVIORS IN SCENARIO 7

TABLE IX: APPROPRIATENESS OF SYSTEM BEHAVIORS IN SCENARIO 8

According to Table IX, when a high-priority foreground service is to be initiated, actively prompting the option of manual control gets higher appropriateness (43.8% on average) and lower inappropriateness (12.5% on average), whereas the prompting is unnecessary at normal priority.

X. CONCLUSION AND FUTURE WORK

In this paper, we have proposed a framework to model the interaction between a smart home and its inhabitants, so that a smart home can fulfill “comfort + convenience + security” when performing services to interact with its inhabitants. The proposed framework mainly focuses on the relationship among services, spaces, and users, and is analyzed from the perspective of human–computer interaction.
An algorithm is also proposed for a smart home to appropriately configure its services in the UbiComp-based environment, so that a smart home can behave like a smart system while maintaining all kinds of “home” functions. A smart home system and two applications are implemented to realize the proposed framework, verifying the design concept through interview questionnaires. The preliminary evaluation results show that, when performing services/interactions, the system behaviors are more appropriate after applying our work, thus supporting the hypothesis that our work can indeed improve HSI between a smart home system and its inhabitants. Note that it is important to gather user preferences during the iterative design phase; the evaluation presented here is preliminary feedback, and a detailed evaluation will be reported in the future.

As for future research, we will improve our framework by considering the roles played by robots or mobile computing devices in smart homes as “mobile spaces” and incorporate the concepts of “personal space” [27] and “space awareness” [28]. We will also study the maximum number of services acceptable in the F.Space or B.Space window and the levels of frustration when people are overloaded with multiple interactions. In addition, there could be some sub-factors under the key elements in the proposed framework. For example, user affect and the value of a new service to the user can be classified as sub-elements of the “privacy” and “priority” requirements. Moreover, the proposed framework will cooperate with our proposed learning mechanism [29] to learn user interactions, thus helping the interaction become more “comfortable” and “convenient.”

We will also extend our framework to model the interaction when a smart home system obtains context information. In this respect, the privacy issue is of growing concern in human-centric research.
Reference [30] proposed a dynamic method to estimate the privacy level according to the interaction information gathered from the inhabitant, and Altman’s theoretical privacy framework [31], a well-recognized and significant work in the field of the social sciences, proposed that a human being’s interaction space dynamically changes according to whom he/she interacts with. These two works are very similar to part of the design concept of our framework, and we plan to incorporate them into our framework in the future.

REFERENCES

[1] M. Weiser, “The computer for the twenty-first century,” Sci. Amer., vol. 265, no. 3, pp. 94–104, Sep. 1991.
[2] E. Chang, T. Dillon, and D. Calder, “Human system interaction with confident computing,” in Proc. Conf. Human Syst. Interact., Krakow, Poland, May 25–27, 2008, pp. 1–11.
[3] J. P. Sousa, V. Poladian, D. Garlan, B. Schmerl, and M. Shaw, “Task-based adaptation for ubiquitous computing,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 36, no. 3, pp. 328–340, May 2006.
[4] E. Horvitz, “Principles of mixed-initiative user interfaces,” in Proc. CHI, Pittsburgh, PA, May 15–20, 1999, pp. 159–166.
[5] N. Ramakrishnan, R. G. Capra, III, and M. A. Perez-Quinones, “Mixed-initiative interaction = mixed computation,” in Proc. ACM SIGPLAN Workshop PEPM, Jan. 2002, pp. 119–130.
[6] J.-H. Hong, Y.-S. Song, and S.-B. Cho, “Mixed-initiative human-robot interaction using hierarchical Bayesian networks,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 37, no. 6, pp. 1158–1164, Nov. 2007.
[7] P. A. Hancock, “On the process of automation transition in multitask human-machine systems,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 37, no. 4, pp. 586–598, Jul. 2007.
[8] J. A. Kientz, S. N. Patel, B. Jones, E. Price, E. D. Mynatt, and G. D. Abowd, “The Georgia Tech aware home,” in Proc. CHI, Florence, Italy, Apr. 5–10, 2008, pp. 3675–3680.
[9] House_n the PlaceLab. [Online]. Available:
[10] MIT House_n. [Online]. Available:
[11] N. Sawhney and C.
Schmandt, “Nomadic radio: Speech and audio interaction for contextual messaging in nomadic environments,” ACM Trans. Comput.-Human Interact., vol. 7, no. 3, pp. 353–383, Sep. 2000.
[12] D. S. McCrickard, C. M. Chewar, J. P. Somervell, and A. Ndiwalana, “A model for notification systems evaluation—Assessing user goals for multitasking activity,” ACM Trans. Comput.-Human Interact., vol. 10, no. 4, pp. 312–318, Dec. 2003.
[13] K. Hinckley, J. Pierce, E. Horvitz, and M. Sinclair, “Foreground and background interaction with sensor-enhanced mobile devices,” ACM Trans. Comput.-Human Interact., vol. 12, no. 1, pp. 239–246, Mar. 2005.
[14] W. Buxton, “Integrating the periphery and context: A new model of telematics,” in Proc. Graph. Interface, 1995, pp. 239–246.
[15] M. Boyle and S. Greenberg, “The language of privacy: Learning from video media space analysis and design,” ACM Trans. Comput.-Human Interact., vol. 12, no. 2, pp. 328–370, Jun. 2005.
[16] C.-L. Wu and L.-C. Fu, “A human-system interaction framework and algorithm for UbiComp-based smart home,” in Proc. Conf. Human Syst. Interact., Krakow, Poland, May 25–27, 2008, pp. 257–262.
[17] V. Bellotti and A. Sellen, “Design for privacy in ubiquitous computing environments,” in Proc. 3rd Eur. Conf. Comput.-Support. Coop. Work, Milan, Italy, Sep. 13–17, 1993, pp. 77–92.
[18] M. Langheinrich, “Privacy by design—Principles of privacy-aware ubiquitous systems,” in Proc. 3rd Int. Conf. Ubiquitous Comput., Atlanta, GA, Sep. 30–Oct. 2, 2001, pp. 273–291.
[19] U. Hengartner and P. Steenkiste, “Access control to information in pervasive computing environments,” in Proc. 9th Conf. Hot Topics Oper. Syst., Lihue, HI, May 18–21, 2003, p. 27.
[20] J. I. Hong and J. A. Landay, “An architecture for privacy-sensitive ubiquitous computing,” in Proc. 2nd Int. Conf. Mobile Syst., Appl. Serv., Boston, MA, Jun. 6–9, 2004, pp. 177–189.
[21] C.-L. Wu, C.-F. Liao, and L.-C.
Fu, “Service-oriented smart home architecture based on OSGi and mobile agent technology,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 37, no. 2, pp. 193–205, Mar. 2007.
[22] C.-F. Liao, Y.-W. Jong, and L.-C. Fu, “Toward a message-oriented application model and its middleware support in ubiquitous environments,” Int. J. Hybrid Inf. Tech., vol. 1, no. 3, pp. 1–10, Jul. 2008.
[23] W.-H. Liau, C.-L. Wu, and L.-C. Fu, “Inhabitants tracking system in a cluttered home environment via floor load sensors,” IEEE Trans. Autom. Sci. Eng., vol. 5, no. 1, pp. 10–20, Jan. 2008.
[24] C.-R. Yu, C.-L. Wu, C.-H. Lu, and L.-C. Fu, “Human localization via multi-cameras and floor sensors in smart home,” in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 8–11, 2006, vol. 5, pp. 3822–3827.
[25] C.-H. Lu, W.-H. Liao, C.-L. Wu, and L.-C. Fu, “Power-efficient extensible architecture for RFID-assisted multiple target tracking,” in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 8–11, 2006, vol. 2, pp. 1068–1073.
[26] C.-H. Lu, C.-L. Wu, and L.-C. Fu, “A reciprocal and extensible architecture for multiple-target tracking in a smart home,” IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 41, no. 1, pp. 120–129, Jan. 2011, DOI: 10.1109/TSMCC.2010.2051026.
[27] K.-L. Park, J.-K. Park, and S.-D. Kim, “An effective model and management scheme of personal space for ubiquitous computing applications,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 38, no. 6, pp. 1295–1311, Nov. 2008.
[28] S. W. Loke, S. Ling, M. Indrawan, and E. Leung, “Q-Aura: A quantitative model for managing mutual awareness of smart social artifacts,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 41, no. 1, pp. 161–168, Jan. 2011.
[29] C.-H. Lu and L.-C. Fu, “Robust location-aware activity recognition using wireless sensor networks in an attentive home,” IEEE Trans. Autom. Sci. Eng., vol. 6, no. 4, pp. 598–609, Oct. 2009.
[30] S. Moncrieff, S. Venkatesh, and G.
West, “Dynamic privacy assessment in a smart house environment using multimodal sensing,” ACM Trans. Multimedia Comput. Commun. Appl., vol. 5, no. 2, pp. 1–29, Nov. 2008.
[31] I. Altman, The Environment and Social Behavior—Privacy, Personal Space, Territory, Crowding. Pacific Grove, CA: Brooks/Cole, 1975.
Chao-Lin Wu (M’03) received the B.S. degree in industrial technology education from National Taiwan Normal University, Taipei, Taiwan, in 1996 and the Ph.D. degree in computer science and information engineering from National Taiwan University, Taipei, Taiwan, in 2009.

He is currently with the Institute of Information Science, Academia Sinica, Taipei, Taiwan. His research interests include smart homes, intelligent spaces, context-aware technology, human–computer interaction based on ubiquitous computing, human computation, and related topics.

Li-Chen Fu (M’84–SM’94–F’04) received the B.S. degree from National Taiwan University, Taipei, Taiwan, in 1981, and the M.S. and Ph.D. degrees from the University of California, Berkeley, in 1985 and 1987, respectively.

Since 1987, he has been a member of the faculty of National Taiwan University, where he is currently an Associate Dean of the College of Computer Science and Information Engineering. His research interests include robotics, smart homes, visual detection and tracking, intelligent vehicles, evolutionary optimization, virtual reality, and nonlinear control theory and applications.

Dr. Fu currently serves as Editor-in-Chief of the Asian Journal of Control and was invited to serve as a Distinguished Lecturer of the IEEE Robotics and Automation Society during 2004–2005 and in 2007. He was awarded a Lifetime Distinguished Professorship by his university in 2007. He has received numerous academic recognitions, such as Distinguished Research Awards from the National Science Council, Taiwan, and the Irving T. Ho Chair Professorship.