Task models are a cornerstone of user-centred design methodologies for user interface design. They therefore deserve attention so that they can be produced effectively and efficiently, while guaranteeing reproducibility: different persons should in principle obtain the same task model, or a similar one, for the same problem. To provide user interface designers with some guidance for task modelling, a list of canonical task types is proposed that offers a unified definition of frequently used task types in a consistent way. Each task type consists of a task action coupled with a task object, each of them written according to design guidelines. This list provides the following benefits: tasks are modelled in a more consistent way, their definition is more communicable and shared, and task models can be used efficiently for model-driven engineering of user interfaces.
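To make the action+object structure concrete, here is a minimal sketch in Python of how such a canonical task type could be represented. The class and field names are illustrative, not part of the paper; the example values echo the paper's case study.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CanonicalTaskType:
    """A canonical task type: a task action coupled with a task object."""
    action: str  # action verb drawn from the canonical list, e.g. "select"
    item: str    # task object being acted upon, e.g. "element"
    facet: str   # interaction facet, e.g. "input"

# Two task types from the paper's travel-agency case study:
insert_name = CanonicalTaskType(action="create", item="element", facet="input")
select_age  = CanonicalTaskType(action="select", item="element", facet="input")
```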
The University of Pisa has invested for years in network technologies and infrastructure, and can count on about 3000 km of fibre of its own, the connectivity of the GARR-X project, a 10G MPLS/BGP backbone, and 100 Mb/s access for most users. However, at the end of 2012 the organization of services and of the network inside the campus was still tied to an organizational model of the University in which every structure enjoyed autonomy in managing its own resources. Over the years this situation produced highly inhomogeneous services and an excessive fragmentation of resources, both technically and administratively. In 2013, following the need to rationalize the University's resources, the need arose to redesign connectivity and the mechanisms for delivering services. The presentation illustrates the model and the technical aspects of the new University Connectivity Service, in which automation plays a decisive role for a network as large and complex as that of the University of Pisa.
The University Connectivity Service (SCA) offers connectivity (wired/wireless) and related services to all university structures and to the University's data centers. The new organizational model is a system composed of 7 Access Providers and one Carrier: each access provider manages connectivity and services for its own user base, while the carrier delivers connectivity and provides the management and service-delivery infrastructure to all access providers. The university was divided into 7 user bases, one per Access Provider; in each area a routing domain was defined through which the traffic coming from the affiliated structures transits. The distribution devices hosting the routing domains are meshed with each other and with the two routers that constitute the backbone. Each area router is flanked by a node delegated to deliver services according to a multi-tenant logic.
The network services infrastructure (a.k.a. the belfagor project) consists of a private cloud platform whose nodes are connected by an optical-fibre ring with ERPS protection, carrying exclusively the management traffic of the infrastructure itself. Each component of the ring consists of two stacked switches and two nodes of the KVM/OpenStack-based cloud computing platform. Each computing node runs the virtual instances in which the services are configured and active in a 1:1 ratio, one instance per service. The services can be thought of as genuine Apps whose execution context is the cloud platform, which, together with the automation tools, provisions the services for a specific area.
The Apps/Services are idempotent: having no need for persistent storage, they can be recreated an infinite number of times without los…
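The one-instance-per-service provisioning described above could be scripted, for example, with the openstacksdk Python client. The sketch below is only an illustration under assumptions (the cloud, image, flavor, and network names are invented; the actual belfagor tooling is not described in the abstract):

```python
import openstack

# Connect using credentials from clouds.yaml (the cloud name is hypothetical).
conn = openstack.connect(cloud="belfagor")

def provision_service(area: str, service: str) -> None:
    """Create one virtual instance per service (1:1) for a given area.

    Services are stateless/idempotent, so a lost instance is simply
    re-created from the same image rather than repaired in place.
    """
    image = conn.compute.find_image("service-base-image")   # hypothetical name
    flavor = conn.compute.find_flavor("m1.small")           # hypothetical name
    network = conn.network.find_network(f"mgmt-{area}")     # hypothetical name
    conn.compute.create_server(
        name=f"{service}-{area}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )

provision_service(area="area1", service="dhcp")
```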
A Theoretical Survey of User Interface Description Languages: Preliminary Res... – Jean Vanderdonckt
A user interface description language (UIDL) is a specification language that describes various aspects of a user interface under development. A comparative review of selected user interface description languages is produced in order to analyze how they support the various stages of the user interface development life cycle and development goals such as multi-platform support, device independence, modality independence, and content delivery. There is a long history and tradition of attempts to capture the essence of user interfaces at various levels of abstraction for different purposes, including development. This effort has recently regained attraction with the dissemination of XML markup languages, giving birth to many proposals for user interface description languages. Consequently, an in-depth analysis of the salient features that distinguish these languages from each other is desired in order to identify when and where each is appropriate for a specific purpose. The review is conducted based on a systematic analysis grid and on some user interfaces implemented with these languages.
A Model-Based Approach for Developing Vectorial User Interfaces – Jean Vanderdonckt
This paper presents a model-based approach for developing vectorial user interfaces to an interactive application, whether it is a web or a stand-alone application. A vectorial user interface can be re-scaled in any dimension without any loss of information, while taking advantage of the screen real estate offered by the computing platform on which the interactive application is running. A model describes the vectorial user interface in order to capture its presentation and behavior in a way that is independent of any context of use. Implemented as a browser plug-in, a rendering engine parses this model at run-time so as to render the user interface bound to the domain, thus producing a running application. This facilitates platform adaptation, since the interface scales up or down depending on the screen resolution, and user adaptation, since the model can change from one session to another. The interface is then re-rendered with adaptation for the benefit of the end user. Both platform and user adaptations contribute to making the web application accessible in a ubiquitous way.
Aircraft cockpit system design is an activity with several challenges, particularly when new technologies break with previous user experience. This is the case with the design of the advanced human machine interface (AHMI), used for controlling the Advanced Flight Management System (AFMS), which has been developed by the German Aerospace Center (DLR). Studying this new User Interface (UI) requires a structured approach to evaluate and validate AHMI designs. In this paper, we introduce a model-based development process for AHMI development, based on our research in the EU's 7th Framework project "Human".
During the stage of system requirements gathering, model elicitation is aimed at identifying in textual scenarios model elements that are relevant for building a first version of models that will be further exploited in a model-driven engineering method. When multiple elements should be identified from multiple interrelated conceptual models, the complexity increases. Three method levels are successively examined to conduct model elicitation from textual scenarios for the purpose of conducting model-driven engineering of user interfaces: manual classification, dictionary-based classification, and nearly natural language understanding based on semantic tagging and chunk extraction. A model elicitation tool implementing these three levels is described and exemplified on a real-world case study for designing user interfaces to workflow information systems. The model elicitation process discussed in the case study involves several models: user, task, domain, organization, resources, and job.
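To illustrate the second level, dictionary-based classification, here is a minimal Python sketch. The dictionary entries and model names are invented for illustration and are not the tool's actual vocabulary:

```python
# Map keywords found in a textual scenario to candidate model elements.
# These entries are illustrative, not the elicitation tool's actual ones.
DICTIONARY = {
    "clerk": ("user", "role"),
    "customer": ("domain", "class"),
    "fills in": ("task", "create"),
    "selects": ("task", "select"),
    "department": ("organization", "unit"),
}

def classify(scenario: str) -> list[tuple[str, str, str]]:
    """Return (keyword, model, element) for every dictionary keyword found."""
    lowered = scenario.lower()
    return [
        (keyword, model, element)
        for keyword, (model, element) in DICTIONARY.items()
        if keyword in lowered
    ]

scenario = "The clerk fills in the customer form and selects a department."
for keyword, model, element in classify(scenario):
    print(f"'{keyword}' -> {model} model, element type: {element}")
```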
Field of Study and Research Methods for an Effect of Cognitive and Informatio... – Yury Solonitsyn
The authors show one possible approach to the experimental study of the cognitive and information load on PC users: the mental load, or amount of effort, spent on recognising the elements of a screen form and processing the data presented.
Andrei Balkanskii, Artem Smolin, Yury Solonitsyn
ITMO University, Saint Petersburg, Russia
A MDA-Compliant Environment for Developing User Interfaces of Information Sys... – Jean Vanderdonckt
This presentation, entitled "A MDA-Compliant Environment for Developing User Interfaces of Information Systems", summarizes the characteristics of the User Interface eXtensible Markup Language as a support for model-driven engineering of user interfaces. It was the keynote address of the CAiSE'2005 conference (Porto, June 16, 2005).
Paper presented at HCI International 1995.
According to human behavior studies, several disciplines (e.g., cognitive psychology, software ergonomics, visual design) have brought substantive results to improve the user friendliness of user interfaces (UI). One possible output of these disciplines comes as recommendations that can be translated into ergonomic rules (or guidelines). Guideline knowledge is often contained in five sources: recommendation papers [1], design standards (e.g., ISO 9241 [2]), style guides which are specific to a particular environment (e.g., IBM Common User Access [3]), design guides (e.g., Scapin's guide [4], Vanderdonckt's guide [5]), and algorithms for ergonomic design (e.g., automatic selection of interaction objects [6]). Studies carried out with designers show that these guidelines are difficult to apply at design time:
– the average search time for a guideline in a design guide is 15 minutes [1];
– about 58% of designers succeed in finding guidelines relevant to their problem [1];
– designers do not respect about 11% of guidelines [7];
– designers experienced interpretation problems for 30% of guidelines [7].
User interface software tools: past, present and future – Alison HONG
We consider cases of both success and failure in past user interface tools. From these cases we extract a set of themes which can serve as lessons for future work.
To the end of our possibilities with Adaptive User Interfaces – Jean Vanderdonckt
Slides of the keynote presented at the 1st International Workshop on Human-in-the-Loop Applied Machine Learning (HITLAML '23)
September 04 - 06, 2023 - Belval, Luxembourg.
This presentation summarizes the evolution of techniques used to adapt the user interfaces to the context of use, which is composed of the user, the platform, and the environment.
Engineering the Transition of Interactive Collaborative Software from Cloud C... – Jean Vanderdonckt
Paper presented at EICS '22: https://dl.acm.org/doi/10.1145/3532210
The "Software as a Service" (SaaS) model of cloud computing popularized online multiuser collaborative software. Two famous examples of this class of software are Office 365 from Microsoft and Google Workspace. Cloud technology removes the need to install and update the software on end users' computers and provides the necessary underlying infrastructure for online collaboration. However, to provide a good end-user experience, cloud services require an infrastructure able to scale up to the task and allow low-latency interactions with a variety of users worldwide. This is a limiting factor for actors that do not possess such infrastructure. Unlike cloud computing which forgets the computational and interactional capabilities of end users' devices, the edge computing paradigm promises to exploit them as much as possible. To investigate the potential of edge computing over cloud computing, this paper presents a method for engineering interactive collaborative software supported by edge devices for the replacement of cloud computing resources. Our method is able to handle user interface aspects such as connection, execution, migration, and disconnection differently depending on the available technology. We exemplify our approach by developing a distributed Pictionary game deployed in two scenarios: a nonshared scenario where each participant interacts only with their own device and a shared scenario where participants also share a common device, including a TV. After a theoretical comparative study of edge vs. cloud computing, an experiment compares the two implementations to determine their effect on the end user's perceived experience and latency vs. real latency
UsyBus: A Communication Framework among Reusable Agents integrating Eye-Track... – Jean Vanderdonckt
Presentation of ACM EICS '22 paper: https://dl.acm.org/doi/10.1145/3532207
Eye movement analysis is a popular method to evaluate whether a user interface meets the users' requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually, exhaustively and redefined each time the user interface changes. This process is also error-prone, since eye movement data must be finely synchronised with user interface changes. These issues become more serious when the user interface layout changes dynamically in response to user actions. In addition, current tools do not allow easy integration into interactive applications, and opportunistic code must be written to link these tools to user interfaces. To address these shortcomings and to leverage the capabilities of eye-tracking, we present UsyBus, a communication framework for autonomous, tight coupling among reusable agents. These agents are responsible for collecting data from eye-trackers, analyzing eye movements, and managing communication with other modules of an interactive application. UsyBus allows multiple heterogeneous eye-trackers as input, provides multiple configurable outputs depending on the data to be exploited. Modules exchange data based on the UsyBus communication framework, thus creating a customizable multi-agent architecture. UsyBus application domains range from usability evaluation to gaze interaction applications design. Two case studies, composed of reusable modules from our portfolio, exemplify the implementation of the UsyBus framework.
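The agent-and-bus idea can be pictured with a generic publish/subscribe sketch in Python. All class, topic, and field names below are invented for illustration and do not reproduce the actual UsyBus API:

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Minimal publish/subscribe bus decoupling producer and consumer agents."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

bus = Bus()

# An analysis agent consumes raw gaze samples and publishes fixations.
def gaze_analyzer(sample: dict) -> None:
    if sample["duration_ms"] > 100:          # toy fixation criterion
        bus.publish("fixations", sample)

# A UI agent reacts to fixations, e.g. to highlight an area of interest.
bus.subscribe("gaze", gaze_analyzer)
bus.subscribe("fixations", lambda f: print("fixation at", f["x"], f["y"]))

bus.publish("gaze", {"x": 120, "y": 80, "duration_ms": 150})
```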
µV: An Articulation, Rotation, Scaling, and Translation Invariant (ARST) Mult... – Jean Vanderdonckt
Paper presented at ACM EICS '22
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multistroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multistroke gesture recognizer that addresses the properties of articulation, rotation, scaling, and translation invariance by combining $P+'s cloud matching for articulation invariance with !FTL's local shape distance for RST invariance. We evaluate µV against five competitive recognizers on MMG, an existing gesture set, and on two new versions for smartphones and tablets, MMG+ and RMMG+, a randomly rotated version on both platforms. µV is significantly more accurate than its predecessors when rotation invariance is required and not significantly inferior when it is not. µV is also significantly faster than the others with many samples and not significantly slower with few samples.
RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowle... – Jean Vanderdonckt
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered in the scientific literature across different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize and identify user-defined gestures from 216 published gesture elicitation studies.
Gesture-based information systems: from DesignOps to DevOps – Jean Vanderdonckt
Keynote address for the 29th International Conference on Information Systems Development ISD'2021 (Valencia, Spain, September 8-10, 2021). See https://isd2021.webs.upv.es/program.php#keynotes
This talk promotes the Seven I's:
Implementation continuity
Inclusion of end-users
Interaction first
Integration among stakeholders
Iteration short
Incremental progress
Innovation openness
Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for the reconfiguration, concrete interaction, unit allocation, and widget selection and implement it in JavaScript. In a first experiment, we determine display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
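The state-transition organization of display configurations mentioned above can be sketched as a small transition table in Python. The configuration and event names here are hypothetical, not those reported in the E3Screen experiments:

```python
# States are display configurations; events are physical reconfigurations.
# All names are invented for illustration.
TRANSITIONS = {
    ("folded", "slide_out_left"): "left_extended",
    ("left_extended", "slide_out_right"): "fully_extended",
    ("fully_extended", "rotate_right"): "right_rotated",
    ("left_extended", "slide_in_left"): "folded",
}

def next_configuration(state: str, event: str) -> str:
    """Follow one transition; unknown events leave the configuration unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "folded"
for event in ("slide_out_left", "slide_out_right"):
    state = next_configuration(state, event)
print(state)  # fully_extended
```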
Conducting a Gesture Elicitation Study: How to Get the Best Gestures From Peo... – Jean Vanderdonckt
Lecture 3: Conducting a Gesture Elicitation Study: How to Get the Best Gestures From People?
Francqui Chair in Computer Science 2020 VUB, Jean Vanderdonckt, 27 April 2021
User-centred Development of a Clinical Decision-support System for Breast Can... – Jean Vanderdonckt
See the paper at https://www.scitepress.org/Link.aspx?doi=10.5220/0010258900600071
We conducted a user-centered design of a clinical decision-support system for breast cancer screening, diagnosis, and reporting based on stroke gestures. We combined knowledge elicitation interviews, scenario-focused questionnaires, and paper mock-ups to understand user needs. Multi-fidelity (low and high) prototypes were designed and compared, first in vitro in a usability laboratory, then in vivo in the real world. The resulting user interface provides radiologists with a platform that integrates domain-oriented tools for the visualization of mammograms and for the manual and semi-automatic annotation of breast cancer findings based on stroke gestures. The contribution of this work lies in that, to the best of our knowledge, stroke gestures have not yet been applied to the annotation of mammograms. On the one hand, although there is a substantial amount of research on stroke-based interaction, none focuses especially on the domain of breast cancer annotation. On the other hand, typical interactions in breast cancer annotation tools are performed with a keyboard and a mouse.
Simplifying the Development of Cross-Platform Web User Interfaces by Collabo... – Jean Vanderdonckt
Ensuring responsive design of web applications requires their user interfaces to be able to adapt according to different contexts of use, which subsume the end users, the devices and platforms used to carry out the interactive tasks, and also the environment in which they occur. To address the challenges posed by responsive design, aiming to simplify their development by factoring out the common parts from the specific ones, this paper presents Quill, a web-based development environment that enables various stakeholders of a web application to collaboratively adopt a model-based design of the user interface for cross-platform deployment. The paper establishes a series of requirements for collaborative model-based design of cross-platform web user interfaces motivated by the literature, observational and situational design. It then elaborates on potential solutions that satisfy these requirements and explains the solution selected for Quill. A user survey has been conducted to determine how stakeholders appreciate model-based user interface design and how they estimate the importance of the requirements that led to Quill.
Detachable user interfaces consist of graphical user interfaces whose parts or whole can be detached at run-time from their host, migrated onto another computing platform while carrying out the task, possibly adapted to the new platform, and attached to the target platform in a peer-to-peer fashion. Detaching is the property of splitting a part of a UI for transferring it onto another platform. Attaching is the reciprocal property: a part of an existing interface can be attached to the interface currently in use so as to recompose another one on demand, according to the user's needs and task requirements. Assembling interface parts by detaching and attaching allows dynamically composing, decomposing and re-composing new interfaces on demand. To support this interaction paradigm, a development infrastructure has been developed based on a series of primitives such as display, undisplay, copy, expose, return, transfer, delegate, and switch. We exemplify it with QTkDraw, a painting application with attaching and detaching based on the development infrastructure.
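The abstract names the primitives without giving code; the Python sketch below illustrates how three of them might fit together. The signatures are invented for illustration and are not the infrastructure's actual API:

```python
class MigratableUIPart:
    """Sketch of a UI part that can be detached from one platform and
    attached to another. Only display, undisplay, and transfer are shown;
    the paper also names copy, expose, return, delegate, and switch."""

    def __init__(self, name: str, platform: str) -> None:
        self.name = name
        self.platform = platform

    def display(self) -> None:
        print(f"{self.name} displayed on {self.platform}")

    def undisplay(self) -> None:
        print(f"{self.name} removed from {self.platform}")

    def transfer(self, target_platform: str) -> None:
        """Detach from the current platform and attach to the target one."""
        self.undisplay()
        self.platform = target_platform
        self.display()

toolbar = MigratableUIPart("drawing-toolbar", "laptop")
toolbar.display()
toolbar.transfer("tablet")   # detach from the laptop, attach to the tablet
```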
The Impact of Comfortable Viewing Positions on Smart TV Gestures – Jean Vanderdonckt
Whereas gesture elicitation studies for TV interaction assume that participants adopt an upright, frontal viewing position, we asked 21 participants to hold a natural, comfortable viewing position, the posture they adopt when watching TV at home. By involving a broad selection of users regarding age and profession, our study targets a higher ecological validity than existing studies. Agreement rates were lower than in existing studies using an upright, frontal viewing position. Participants experienced problems due to (1) having to use their slave hand instead of their dominant hand, (2) being in a certain head orientation that made it more difficult to perform some physical movements, and (3) being hindered in their movement by the sofa they lay on. Since each person may hold a different position inducing different gestures due to the aforementioned problems, the effect of a comfortable viewing position is analyzed by comparison to gestures for a frontal position.
Head and Shoulders Gestures: Exploring User-Defined Gestures with Upper Body – Jean Vanderdonckt
This paper presents empirical results about user-defined gestures for head and shoulders by analyzing 308 gestures elicited from 22 participants for 14 referents materializing 14 different types of tasks in an IoT context of use. We report an overall medium consensus but with medium variance (mean: .263, min: .138, max: .390 on the unit scale) between participants' gesture proposals, while their thinking times were less similar (min: 2.45 sec, max: 22.50 sec), which suggests that head and shoulders gestures are not all equally easy to imagine and to produce. We point to the challenges of deciding which head and shoulders gestures will become the consensus set based on four criteria: the agreement rate, their individual frequency, their associative frequency, and their unicity.
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213794
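The agreement rate mentioned here is commonly computed, in the gesture-elicitation literature, with the Vatavu-Wobbrock formulation. Assuming that formulation (the abstract does not spell it out), a small Python sketch:

```python
from collections import Counter

def agreement_rate(proposals: list[str]) -> float:
    """Agreement rate AR(r) for one referent (Vatavu & Wobbrock, CHI 2015):
    the probability that two distinct participants proposed the same gesture.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    # Number of ordered pairs of participants that agree, over all pairs.
    same_pairs = sum(c * (c - 1) for c in Counter(proposals).values())
    return same_pairs / (n * (n - 1))

# 5 participants: three proposed a nod, two a shoulder shrug.
print(agreement_rate(["nod", "nod", "nod", "shrug", "shrug"]))  # 0.4
```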
G-Menu: A Keyword-by-Gesture based Dynamic Menu Interface for Smartphones – Jean Vanderdonckt
Instead of relying on graphical or vocal modalities for searching for an item by keyword (called the K-Menu), this paper presents the G-Menu, exploiting gesture interaction and gesture recognition: when a user sketches a keyword by gesturing the first letters of its label, a menu with items related to the recognized letters is constructed dynamically and presented to the user for selection and auto-completion. The selection can be completed either gesturally by an appropriate gesture (the G-Menu) or by touch only (the T-Menu). This paper compares the three types of menu, i.e., by keyword, by gesture, and by touch, in a user study with twenty participants on item selection time (for measuring task efficiency), error rate (for measuring task effectiveness), and subjective satisfaction (for measuring user satisfaction).
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213790
Unistroke and multistroke gesture recognizers have always striven to reach some robustness with respect to all variations encountered when people issue gestures by hand on touch surfaces or with sensing devices. For this purpose, successful stroke recognizers rely on a gesture recognition algorithm that satisfies a series of invariance properties such as stroke-order invariance, stroke-number invariance, stroke-direction invariance, and position, scale, and rotation invariance. Before initiating any recognition activity, these algorithms ensure these properties by performing several pre-processing operations. These operations induce an additional computational cost to the recognition process, as well as a potential error bias. To cope with this problem, we introduce an algorithm that ensures all these properties analytically, instead of statistically, based on vector algebra. Instead of points, the recognition algorithm works on vectors between points. We demonstrate that this approach not only eliminates the need for these pre-processing operations but also satisfies an entire structure-preserving transformation.
Paper available at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A217006
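As an illustrative reconstruction (not the published algorithm), the vector idea can be sketched by reducing each gesture to unit vectors between consecutive points and comparing them with a cosine-based dissimilarity. This is position- and scale-invariant by construction; the rotation invariance claimed by the paper would additionally require comparing relative rather than absolute vector directions, which this sketch omits:

```python
import math

Point = tuple[float, float]

def to_unit_vectors(points: list[Point]) -> list[Point]:
    """Replace a point sequence by unit vectors between consecutive points.
    Working on vectors makes the representation position-invariant, and
    normalizing each vector makes it scale-invariant."""
    vectors = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        vectors.append((dx / norm, dy / norm))
    return vectors

def dissimilarity(a: list[Point], b: list[Point]) -> float:
    """Sum of (1 - cos angle) between corresponding unit vectors; 0 means
    identical local shape. Assumes both gestures were resampled to the
    same number of points beforehand."""
    va, vb = to_unit_vectors(a), to_unit_vectors(b)
    return sum(1.0 - (ux * vx + uy * vy) for (ux, uy), (vx, vy) in zip(va, vb))

template = [(0, 0), (1, 0), (1, 1)]
candidate = [(5, 5), (7, 5), (7, 7)]   # same shape, translated and scaled
print(dissimilarity(template, candidate))  # ~0.0
```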
Body-based gestures, such as those acquired by the Kinect sensor, today benefit from efficient tools for their recognition and development, but less so for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, based on user, body and body parts, gestures, and environment, is designed and encoded in the Web Ontology Language (OWL) according to modelling triples (subject, predicate, object). As a proof of concept and to feed this ontology, a gesture elicitation study collected 24 participants × 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
See paper at https://dl.acm.org/citation.cfm?id=3328238
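Populating such (subject, predicate, object) triples could be done, for instance, with the rdflib Python library. The ontology namespace, class, and property names below are invented for illustration; the paper's actual ontology IRIs are not given in the abstract:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace; the actual ontology IRI is not reproduced here.
GES = Namespace("http://example.org/gesture-ontology#")

g = Graph()
g.bind("ges", GES)

# One elicited gesture expressed as (subject, predicate, object) triples.
gesture = GES["gesture_042"]
g.add((gesture, RDF.type, GES.BodyBasedGesture))
g.add((gesture, GES.performedBy, GES["participant_07"]))
g.add((gesture, GES.usesBodyPart, GES.RightHand))
g.add((gesture, GES.forReferent, GES["turn_lights_on"]))
g.add((gesture, GES.agreementScore, Literal(0.31)))

print(g.serialize(format="turtle"))
```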
Towards Canonical Task Types for User Interface Design
1. Towards Canonical Task Types for User Interface Design
Juan Manuel Gonzalez-Calleros, Josefina Guerrero-García, Jean Vanderdonckt and Jaime Muñoz-Arteaga
Université catholique de Louvain (UCL), Louvain School of Management (LSM), Information Systems Unit (ISYS)
juan.m.gonzalez@uclouvain.be, jean.vanderdonckt@uclouvain.be
Sistemas de Información, Universidad Autónoma de Aguascalientes
jmunozar@correo.uaa.mx
CLIHC'09, November 9-11, 2009, Mérida, Mexico
2. Outline
1. Introduction
2. State of the Art
3. A comparative analysis of User Interface Actions
4. Practical Use of the canonical list of task types
5. Case Study
6. Conclusion
3. Introduction
• The task model is today a cornerstone of many activities carried out during the User Interface (UI) development life cycle, such as, but not limited to:
– user-centred design
– task analysis
– task modelling
– model-driven engineering of user interfaces
– human activity analysis
– safety-critical systems
– real-time systems
4. Model-driven engineering of user interfaces
[Diagram: Task and Domain, Abstract user Interface, Concrete user Interface, and Final user Interface models for a source context of use S (User S, Platform S, Environment S) and a target context of use T (User T, Platform T, Environment T), related by Reification, Abstraction, Reflexion, and Translation; UsiXML supported vs. unsupported models. http://www.plasticity.org]
5. Introduction
• The many degrees of freedom offered by task modelling should not let us forget the quality of the resulting task model.
• Labels, definitions, goals, and properties used for a task suffer from many drawbacks
– Limited:
• completeness in task modeling
• consistency in task modeling
• correctness in task modeling
6. Introduction
• Modelling a task based on well-defined semantics and using a well-understood notation are key aspects.
• A list of canonical task types is proposed that addresses the aforementioned concerns of task modelling.
• Our goal is to provide methodological means to systematically derive UIs.
• The list is just about the name and properties of a task, not its structure, thus remaining flexible for task modelling.
7. Outline
1. Introduction
2. State of the Art
3. A comparative analysis of User Interface Actions
4. Practical Use of the canonical list of task types
5. Case Study
6. Conclusion
8. State of the art
• Several MBUI development methods rely on attributes that describe the User Interface interaction ([Frank 1993], [Paternò 2002], [Puerta 1997], ...)
• The User Interface interaction is composed of two elements:
– The task type, sometimes referred to as UI action or activity
– The task item, as proposed by [Constantine 2002], that is manipulated or required in the UI interaction
9. State of the art
• Task type name spaces have been created in different research domains:
• Graphical User Interfaces
• Web Interaction
• Input Devices
10. State of the art
• Some taxonomies of task types are very much related to interaction devices [Foley 1984]

SELECTION
S1 From screen with direct pick device: S1.1 Light pen; S1.2 Touch panel
S2 Indirect with cursor match: S2.1 Tablet; S2.2 Mouse; S2.3 Joystick (absolute); S2.4 Joystick (velocity); S2.5 Trackball; S2.6 Cursor control keys
S3 With character string name: (see text input)
S4 Time scan: S4.1 Programmed function keyboard; S4.2 Alphanumeric keyboard
S5 Button push: S5.1 Programmed function keyboard; S5.2 Soft keys
S6 Sketch recognition: S6.1 Tablet and stylus; S6.2 Light pen
S7 Voice input: S7.1 Voice recognizer

POSITION
P1 Direct with locator device: P1.1 Touch panel
P2 Indirect with locator device: P2.1 Tablet; P2.2 Mouse; P2.3 Joystick (absolute); P2.4 Joystick (velocity-controlled); P2.5 Trackball; P2.6 Cursor control keys with auto-repeat
P3 Indirect with directional commands: P3.1 Up-down-left-right arrow keys (see selection)
P4 With numerical coordinates: (see text input)
P5 Direct with pick device: P5.1 Light pen tracking; P5.2 Search for light pen

ORIENT
O1 Indirect with locator device: O1.1 Joystick (absolute); O1.2 Joystick (velocity-controlled)
O2 With numerical value: (see text input)

QUANTIFY
Q1 Direct with valuator device: Q1.1 Rotary potentiometer; Q1.2 Linear potentiometer
Q2 With character string value: (see text input)
Q3 Scale drive with one axis of locator device: Q3.1 Tablet; Q3.2 Mouse; Q3.3 Joystick (absolute); Q3.4 Joystick (velocity-controlled); Q3.5 Trackball
Q4 Light handle: Q4.1 Light pen; Q4.2 Tablet with stylus
Q5 Up-down count controlled by commands: Q5.1 Programmed function keyboard; Q5.2 Alphanumeric keyboard

TEXT
T1 Keyboard: T1.1 Alphanumeric; T1.2 Chord
T2 Stroked character recognition: T2.1 Tablet with stylus
T3 Voice recognition: T3.1 Voice recognizer
T4 Direct pick from menu with locator device: T4.1 Light pen; T4.2 Touch panel
T5 Indirect pick from menu with locator device: (see positioning)
11. State of the art
• Shortcomings:
• Always a dependency between the name space and the modality of interaction
• Cognitive tasks
• Gestures
• Feedback
• System Functionalities
12. Outline
1. Introduction
2. State of the Art
3. A comparative analysis of User Interface Actions
4. Practical Use of the canonical list of task types
5. Case Study
6. Conclusion
13. A comparative analysis of User Interface Actions
• More than two hundred names were identified.
14. A comparative analysis of User Interface Actions
• Comparative analysis on the name spaces
• Comparing names
• Context of use
• Definitions
• Group task types with similar definitions but different names (choose, select, …)
• Determine which was the most abstract set of tasks, considering:
• Modality and platform independence
15. A comparative analysis of User Interface Actions
16. Outline
1. Introduction
2. State of the Art
3. A comparative analysis of User Interface Actions
4. Practical Use of the canonical list of task types
5. Case Study
6. Conclusion
17. Practical Use of the canonical list of task types
1. Help to decide how to name a task
For example, for a multimodal task
18. Practical Use of the canonical list of task types
2. Selection of task type and task item
19. Practical Use of the canonical list of task types
3. Selection of user categories
20. Practical Use of the canonical list of task types
4. User Interface Concretization of the Task
[Diagram: within the Task and Domain, Abstract user Interface, Concrete user Interface, and Final user Interface levels, a Task Type + Action Item + Facet (e.g., Select + Element + Input) is mapped to a Concrete Interaction Object (a selection widget).]
21. Practical Use of the canonical list of task types
5. User Interface Concretization of the Task based on tables for selecting widgets based on semantic properties
22. Outline
1. Introduction
2. State of the Art
3. A comparative analysis of User Interface Actions
4. Practical Use of the canonical list of task types
5. Case Study
6. Conclusion
23. Case Study
• An Information System of a Travel Agency for organizing a trip
• The scenario and the requirements of the problem are captured in a workflow using FlowiXML [Guerrero 2008]
24. Case Study
• Tasks in the process are detailed using task models
25. Case Study
• Attributes identified for the tasks

Task | Task Type | Task Item | User category | Facet
Insert Name | Create | Element | Interactive | Input
Insert Zip Code | Create | Element | Interactive | Input
Select Age category | Select | Element | Interactive | Input
Select Gender | Select | Element | Interactive | Input
26. Case Study

User Interface Action | Type + Facet Specification | Information to take into account | Possible Abstract Interaction Component
"create name" and "create zip code" | Create attribute value | Data type, domain characteristics | A text output with a text input associated to it
"select gender" and "select age category" | Select attribute value + selection values known | Data type, domain characteristics, selection values | A dropdown list, a group of radio buttons (textual or characters)
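Read as a decision procedure, the two rows of this table suggest a simple mapping. The Python sketch below illustrates it; the cut-off separating radio buttons from a dropdown list is an assumption, not stated on the slide:

```python
def abstract_component(task_type: str, values_known: bool,
                       n_values: int = 0) -> str:
    """Map a canonical task type to a possible abstract interaction
    component, following the two rules of the case-study table."""
    if task_type == "create":
        # "Create attribute value" -> text output with an associated text input
        return "label + text input"
    if task_type == "select" and values_known:
        # "Select attribute value + selection values known": small domains
        # fit radio buttons, larger ones a dropdown list (the cut-off of 4
        # is an illustrative assumption).
        return "radio button group" if n_values <= 4 else "dropdown list"
    return "unresolved: more information needed"

print(abstract_component("create", values_known=False))             # Insert Name
print(abstract_component("select", values_known=True, n_values=2))  # Select Gender
print(abstract_component("select", values_known=True, n_values=8))  # Select Age category
```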
27. Outline
1. Introduction
2. State of the Art
3. A comparative analysis of User Interface Actions
4. Practical Use of the canonical list of task types
5. Case Study
6. Conclusion
28. Conclusion
• A list of canonical UI task action types associated to task models was presented.
• This proposal overcomes the limitations of task modeling in the context of MDE UI development.
• The proposal provides methodological means to systematically derive UIs.
• This work is focused on task modeling specifications and UsiXML.
29. Conclusion
• Future Work
– Investigate task relationships
– Evaluation of this technique
– Multimodal and multi-device concretization: what if the task is no longer available?
30. For more information and downloading:
http://www.isys.ucl.ac.be/bchi
http://www.usixml.org (User Interface eXtensible Markup Language)
http://itea.defimedia.be/usixml-france (ITEA2 Call 3 project, 2008026)
Special thanks to all members of the team!
Thank you very much for your attention
Editor's Notes
Task Modelling plays an important role in the context of MDE. Briefly MDE is:
Research work on model-based design of user interfaces has sought to address the challenge of reducing the costs for developing and maintaining user interfaces through a layered architecture that separates out different concerns:
Application task models, data and meta-data
Abstract Interface (device and modality independent, e.g. select 1 from N)
Concrete Interface (device and/or modality dependent, e.g. use of radio buttons)
Implementation on specific devices (e.g. HTML, SVG or Java)
Each layer embodies a model of behavior (e.g. dialog models and rule-based event handlers) at a progressively finer level of detail. The relationships between the layers can be given in terms of transformations, for example, between objects and events in adjoining layers. XML is well suited as a basis for representing each layer, with the possible exception of the final user interface, which may be generated automatically, guided via author supplied policies.
High level development suites can be provided to shield authors from the underlying XML representations. For example, a data model could be manipulated as a diagram, while the user interface could be defined via drag and drop operations together with editing values in property sheets. The development suite is responsible for maintaining the mappings between layers and verifying their consistency. Authors can choose to provide alternative mappings as needed to address different delivery contexts.
As can be seen, wrong decisions in task modelling are to the detriment of the User Interface.
Incompleteness: labels, definitions, goals, and properties used for a task suffer from many drawbacks, such as a short name, a name without an action verb or without an object (and therefore non-compliant with the traditional interaction paradigm of action+object), a name that is incompatible with its definition, or no usage of a standard classification.
Inconsistency: labels, definitions, goals, and properties used for a task do not have unique names (e.g., a label or a goal is duplicated); there are some homonyms; there are some synonyms (e.g., tasks having the same semantics but wearing different names).
Incorrectness: labels, definitions, goals, and properties used for a task violate some of Meyer's seven sins of specification (i.e., noise, silence, overspecification, contradiction, ambiguity, forward reference, and over-optimism).