This document summarizes a master's thesis about using NFC technology to control services and resources in interactive spaces. The REACHeS platform allows users to start, control, and transfer content to services and devices by touching NFC tags located in their environment. Usability tests found that NFC interaction was easy to use and intuitive, though not suitable for real-time applications due to latency. While REACHeS enables creating interactive spaces, the system faces challenges with latency and scalability that require further optimization. The thesis contributes a platform for building interactive spaces and studies different NFC-based interaction methods.
Talk given at the NFC 2012 Workshop, March 13, 2012, Helsinki, Finland. The talk presents research on NFC-based user interfaces performed at the University of Oulu, Department of Computer Science and Engineering, Oulu, Finland.
3. REACHeS
• REACHeS platform:
– Enables the creation of Interactive Spaces.
– Permits a user to control services and resources (displays and speakers) through physical user interfaces.
– NFC technology is the "bridge" between the physical and digital worlds. By touching NFC tags located in their environment with an NFC-enabled mobile phone, users can (see the payload sketch after this list):
• Start a service
• Command a service
• Select a resource
• Interact with a resource
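The slides do not give the exact tag encoding, so the following is a minimal sketch under an assumption: each tag stores a URL whose path names the REACHeS action and whose query string carries the service name and parameters. The gateway host, action names, and parameter names are all hypothetical.

```python
# Minimal sketch, not the thesis's actual tag format: assume each tag
# stores a URL whose path names the action and whose query string carries
# the service/resource parameters.
from urllib.parse import urlparse, parse_qs

def parse_tag_payload(payload: str) -> dict:
    """Split a touched tag's URL into gateway, action, and parameters."""
    url = urlparse(payload)
    params = {k: v[0] for k, v in parse_qs(url.query).items()}
    return {"gateway": url.netloc,
            "action": url.path.lstrip("/"),
            "params": params}

# Example: a tag that starts a slideshow service on a given display.
tag = "http://reaches.example.org/start?service=slideshow&resource=display_42"
print(parse_tag_payload(tag))
# -> {'gateway': 'reaches.example.org', 'action': 'start',
#     'params': {'service': 'slideshow', 'resource': 'display_42'}}
```

One practical reason to encode tags as plain URLs is that a phone without the client installed can still resolve the tag in a browser, which fits the slides' later mention of client installation over the air.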
5. Smart spaces
• Environments that try to predict user intentions and needs based on sensed data.
– Implicit interaction
• Problem:
– Human communication is very complex.
– Wrong decisions by the system lead to user frustration and a bad UX.
• The user does not feel in control of the interaction.
6. Interactive Spaces
• Environments in which:
– Users control services and resources by interacting with objects in the environment (TUIs or PUIs).
– The user is in control of the interaction: the system does not initiate any task without explicit user intervention.
• Research questions:
– Is it possible to build more user-centered pervasive computing environments using Interactive Spaces? How?
– What are the best technologies and interaction methods for Interactive Spaces?
8. REACHeS
• REACHeS allows the quick creation of Interactive Spaces.
– Connects users, services, and resources (displays and speakers).
• REACHeS is a gateway between users' mobile clients and the rest of the Interactive Space (a toy gateway sketch follows this list).
– Allows multiple interaction methods.
• In this Master's Thesis the focus is on NFC.
– Provides services such as resource allocation, session control, client installation via OTA, and content management.
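The slides describe the gateway role but not its API, so this is a toy sketch of the idea only: the gateway tracks sessions, allocates a resource when a service starts, and forwards later commands within the session. Every name here (Gateway, start, command, display_42) is illustrative, not REACHeS's real interface.

```python
# Toy gateway sketch; all names are assumptions, not the platform's API.
class Gateway:
    """Routes phone requests to services and allocates resources."""

    def __init__(self):
        self.sessions = {}                       # session id -> active service
        self.resources = {"display_42": None,    # resource -> allocated service
                          "speaker_7": None}

    def start(self, session_id, service, resource):
        """Allocate a free resource and start a service session on it."""
        if self.resources.get(resource) is not None:
            raise RuntimeError(f"{resource} is already allocated")
        self.resources[resource] = service
        self.sessions[session_id] = service
        return f"{service} started on {resource}"

    def command(self, session_id, cmd, **params):
        """Forward a command from the phone to the session's service."""
        service = self.sessions[session_id]
        return f"forwarded '{cmd}' {params} to {service}"

gw = Gateway()
print(gw.start("user1", "slideshow", "display_42"))
print(gw.command("user1", "next_slide"))
```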
11. REACHeS' interaction modes using NFC
• Starting a service
– The tag contains the service name, information for the resource allocation system, and other service-specific parameters.
• Controlling a service
– The tag contains commands and parameters.
• Selecting devices
– The tag contains the device id and the device type.
• Transferring content to resources
– The tag contains the device id.
(A dispatch sketch of these four tag types follows.)
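The slides list what each tag contains but not the wire format, so the sketch below simply models the four tag types as records with a mode field and shows how a client might dispatch on them. All field names and values are hypothetical.

```python
# Illustrative only: four tag types as records, dispatched on 'mode'.
TAGS = {
    "poster":      {"mode": "start",    "service": "newsreader",
                    "allocation": "nearest-display"},
    "remote_next": {"mode": "command",  "command": "next", "params": {}},
    "display_tag": {"mode": "select",   "device_id": "display_42",
                    "device_type": "display"},
    "push_here":   {"mode": "transfer", "device_id": "display_42"},
}

def on_touch(tag: dict) -> str:
    """Decide what to do when the phone reads a tag of a given mode."""
    mode = tag["mode"]
    if mode == "start":
        return f"start {tag['service']} (allocation hint: {tag['allocation']})"
    if mode == "command":
        return f"send '{tag['command']}' with {tag['params']} to the active service"
    if mode == "select":
        return f"select {tag['device_type']} {tag['device_id']}"
    if mode == "transfer":
        return f"transfer current content to {tag['device_id']}"
    raise ValueError(f"unknown tag mode: {mode}")

for name, tag in TAGS.items():
    print(f"{name}: {on_touch(tag)}")
```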
14. Usability tests
• Touch & Control versus traditional keypad control
• Gesture recognition vs. traditional keypad control
• Speech and gesture recognition vs. Touch & Control
• Resource allocation processes
                 PHONE GUI   TOUCH & CONTROL
Reliability      8.1         8.6
Easiness         8.7         9.4
Speed            6.8         7.6
Intuitiveness    8.4         9.2
Cognitive load   8.2         8.8
Average UX       8.3         8.6
• Better learnability, speed, reliability, and intuitiveness
• Similar task execution time
• Better UX than users' initial expectations
• Much better task execution time and learnability
• Different environments => different allocation processes
• The automatic method is the worst option
15. Main test results
• NFC interaction is easy to use and very intuitive.
• Main visual feedback is given on external displays (a feedback-selection sketch follows this list).
– The phone's display is used only when no external display is available.
– The phone's haptic and audio feedback serve as complementary feedback.
• NFC cannot be used to build real-time applications.
– NFC introduces too much delay.
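A small sketch of the feedback policy from this slide, with hypothetical names: haptic and audio cues accompany every tag touch, while the visual channel is the external display when one is available and the phone's screen otherwise.

```python
# Sketch of the slide's feedback policy; function name is hypothetical.
def choose_feedback(external_display_available: bool) -> list:
    channels = ["haptic", "audio"]  # complementary cues on every tag touch
    if external_display_available:
        channels.append("external_display")  # primary visual feedback
    else:
        channels.append("phone_display")     # fallback visual feedback
    return channels

print(choose_feedback(True))   # ['haptic', 'audio', 'external_display']
print(choose_feedback(False))  # ['haptic', 'audio', 'phone_display']
```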
16. Main REACHeS problems
• Latency
• Scalability
                                            Average time (ms)   Average time (ms)
                                            Start command       Other commands
Round trip time (internal service)          3369 (σ = 2620)     1230 (σ = 705)
REACHeS execution time (internal service)   1265                10
Service execution time (internal service)   1095                2
Display update time (internal service)      1398                171
Effective time (internal service)           3082                786
18. Conclusion
• The main contributions are:
1. REACHeS, a server platform for building Interactive Spaces
2. A study of different NFC-based interaction methods
3. Example applications and usability tests
4. Theoretical background on Interactive Spaces, their interaction methods, and technologies