In the post-desktop era, a multitude of devices (laptops, smartphones, tablet PCs) has proliferated and become an inseparable part of our digital lives. However, each device comes with different input methods (multi-touch, mouse, voice), sensors (accelerometer, compass, GPS), and display sizes. Moreover, they are mostly used for separate tasks: phones for calling, tablets for reading and web browsing, and laptops for text editing. A device that is convenient for one type of application may be too limited to run other types. Why shouldn't all of our personal computing devices act together as one logical device?
In this Virtual Campfire demo, we present a framework for rich internet applications whose user interfaces are distributed over a federation of heterogeneous, multimodal commodity devices, i.e. laptops, smartphones, and tablet computers. The UI is based on web widgets running in widget containers such as iGoogle or OpenSocial. We employ the latest Web technologies, including XMPP and HTML5 WebSockets, to realize cross-platform inter-widget communication, building on the SDK of the EU project ROLE and the i5 Mobile Cloud Infrastructure. This underlying technology virtually connects the distributed UI parts (widgets) and enables real-time input fusion and output fission. We show the framework in action with a prototype for the use case of collaborative semantic video annotation (SeViAnno 2.0), which has already been pilot-tested for documentation purposes in cultural heritage management. The user gains more flexible control over the different parts of the application, e.g. navigating a digital map or previewing and annotating videos from a multi-touch smartphone or tablet, while carrying out text input tasks from a laptop.
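The inter-widget communication described above can be pictured as topic-based publish/subscribe: widgets on different devices subscribe to topics and receive JSON-encoded events relayed over XMPP or WebSocket connections. The following is a minimal in-process sketch of that pattern; the class and topic names are illustrative and do not reflect the actual ROLE SDK API.

```python
import json

class WidgetBroker:
    """Hypothetical in-process stand-in for the message bus that relays
    topic-based events between distributed widgets (over XMPP publish-
    subscribe or WebSockets in the real system)."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Serialize the event as JSON, as a WebSocket text frame would
        # carry it, then deliver it to every subscriber of the topic.
        frame = json.dumps({"topic": topic, "payload": payload})
        for callback in self.subscribers.get(topic, []):
            callback(json.loads(frame)["payload"])

# Example: a smartphone widget annotates a video; both the laptop's
# video widget and the tablet's map widget receive the event.
broker = WidgetBroker()
received = []
broker.subscribe("video.annotation", lambda p: received.append(("video", p)))
broker.subscribe("video.annotation", lambda p: received.append(("map", p)))
broker.publish("video.annotation",
               {"time": 42.0, "label": "amphitheatre", "lat": 41.9, "lon": 12.5})
```

In the demo scenario, the same event would fan out across devices, so a place annotation entered on a smartphone appears simultaneously on the map widget of a tablet and the video widget of a laptop.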