Lecture 3: Conducting a Gesture Elicitation Study: How to Get the Best Gestures From People?
Francqui Chair in Computer Science 2020, VUB, Jean Vanderdonckt, 27 April 2021
1. Lecture 3: Conducting a Gesture Elicitation Study:
How to Get the Best Gestures From People?
Jean Vanderdonckt, UCLouvain
Vrije Universiteit Brussel, 20 April 2021, on-line
2. How to get the best gestures from people?
Case study: sketching a graphical UI
(Interact’07)
https://www.youtube.com/watch?v=SBNB1O-8pGw&t=5s
6. How to determine the best gestures for widgets?
1: none
2: low
3: medium
7. How to determine the best gestures for widgets?
1: none
2: low
3: medium
4: high
8. Use the Borda count method (rank-order voting)
• Each representation receives a score equal to the number of
representations ranked below it by the participant, in
decreasing order: n-1 for the first choice, n-2 for the second, n-3, …
n = 5 representations

                 Rep1    Rep2    Rep3    Rep4    Rep5
Participant #1   0       n-3=2   n-4=1   n-1=4   n-2=3
Participant #2   n-1=4   0       n-3=2   n-2=3   n-4=1
Score            4       2       3       7       4
9. Use the Borda count method (rank-order voting)
• Each representation receives a score equal to the number of
representations ranked below it by the participant, in
decreasing order: n-1 for the first choice, n-2 for the second, n-3, …
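A minimal Python sketch of this scoring, assuming each participant's full ranking is available as a best-to-worst list; the labels Rep1–Rep5 and the two rankings below are reconstructed from the example table above, not actual study data:

```python
from collections import defaultdict

def borda_scores(rankings):
    """Borda count: in each ranking (best to worst), the item in
    position k (0-based) receives n-1-k points; scores are summed
    over all participants."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for k, item in enumerate(ranking):
            scores[item] += (n - 1) - k  # n-1 for 1st, n-2 for 2nd, ...
    return dict(scores)

# Illustrative rankings matching the n=5 example above
rankings = [
    ["Rep4", "Rep5", "Rep2", "Rep3", "Rep1"],  # Participant #1
    ["Rep1", "Rep4", "Rep3", "Rep5", "Rep2"],  # Participant #2
]
print(borda_scores(rankings))
# {'Rep4': 7, 'Rep5': 4, 'Rep2': 2, 'Rep3': 3, 'Rep1': 4}
```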
11. Further classification: complexity of the representation
ST=sketching time
DEL=delete operations
Source: Suzanne Kieffer, Adrien Coyette, Jean Vanderdonckt, User interface design by sketching: a complexity analysis of widget representations.
EICS 2010: 57-66
12. How to get the best gestures from people?
Considered so far:
• Human factors: preference, sketching time, delete
operations
• System factors: recognition rate
Other methods for eliciting preference:
• By elicitation
• By Wizard of Oz
13. Gesture Elicitation Study (GES)
• To elicit
• To call forth or draw out something, such as information or a
response (Merriam Webster)
• To evoke or draw out a reaction, answer, or fact from someone
(Dictionary.com)
• To draw forth something that is latent or potential into existence
(Dictionary.com)
• Elicitation
• Acquisition of information from a person or a group of persons in a
manner that does not disclose the intent of the interview,
discussion or conversation
• Elicitation technique
• A technique used to discreetly extract information that is
not readily available, for a specific purpose
Source: Wobbrock et al., 2009
14. Gesture Elicitation Study (GES)
• Definition
• A study in which end users are asked to elicit their own gestures
for a set of predefined functions, presented through referents, and to reach a consensus
• Aims and goals
• To let end users elicit the gestures they want for a function
• To compute agreement between end users (=participants, subjects)
• To define a vocabulary of potential gestures
• To identify a consensus set, which is a sub-vocabulary of agreed
gestures. Output = Consensus Set
• For one context of use at a time: C= (U,P,E)
• User: any person; Functions: IoT functions
• Platform/device/sensor: an armband
• Environment: smart home, usability lab, controlled room
15. Gesture Elicitation Study (GES)
• Advantages
• Free from any form of stress or burden of work
• Information comes deliberately once the favorable context is
created
• No adverse effect
• Easy and cheap to run
• Disadvantages
• Legacy bias
• Previous experience with a similar system or another device
• No experience
• Lack of creativity
16. GES Step 1: Prepare setup C=(U,P,E)
• Create a representative sampling of the user population
17. GES Step 1: Prepare setup C=(U,P,E)
• Find a representative sampling of the user population
• Example of stratified sampling
Age group   Pop. %   Sample (n=32)   Female (51%)   Male (49%)
18-24        8.9          3               2             1
25-34       15.4          5               3             2
35-44       16.5          5               3             2
45-54       18.3          6               3             3
55-64       18.7          6               3             3
65+         22.2          7               4             3
Total      100.0         32              18            14
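To double-check such a plan, the per-stratum quotas can be recomputed from the population shares. A minimal Python sketch, assuming simple rounding (which happens to sum exactly to 32 here, but in general may require adjusting one stratum by hand):

```python
def stratified_quotas(n, shares):
    """Allocate n participants to strata proportionally to
    population shares given in percent."""
    return {stratum: round(n * share / 100) for stratum, share in shares.items()}

shares = {"18-24": 8.9, "25-34": 15.4, "35-44": 16.5,
          "45-54": 18.3, "55-64": 18.7, "65+": 22.2}
print(stratified_quotas(32, shares))
# {'18-24': 3, '25-34': 5, '35-44': 5, '45-54': 6, '55-64': 6, '65+': 7}
```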
18. GES Step 1: Prepare setup C=(U,P,E)
• Find a representative sampling of the user population
• Find n=30 participants
• Ideally 50% female, 50% male
• Different ages (not all students!)
• Different backgrounds (not all in your discipline!)
• Different levels of experience (e.g., with touch or not)
• Each participant takes part in the experiment individually
• Define an experiment schedule
• Location (should be constant): place to choose
• Time: 30 min for each participant + resting time
• Schedule: organize a Doodle
• Test the whole protocol with a guinea pig
• Different from the participants!
19. GES Step 1: Prepare setup C=(U,P,E)
• Identify referents for the following tasks
• A referent is the expected result of an action (e.g., key
pressing, touch, gesture) for a function
• Distribute the definition of the 14 referents across groups
• Example: IoT functions:
(1) Turn the TV on/off (2) Start player
(3) Turn up the volume (4) Turn down the volume
(5) Go to the next item in a list (6) Go to the previous item
(7) Turn AC on/off (8) Turn lights on/off
(9) Brighten lights (10) Dim lights
(11)Turn heat on/off (12) Turn alarm on/off
(13)Answer phone call (14) End phone call
• Randomize the order of the referents before the study for each
participant (create lists with www.random.org)
[Slide annotation: some referents are pairwise symmetric, e.g., (3) turn up vs. (4) turn down the volume.]
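As an alternative to www.random.org, the randomized order can be generated with a seeded shuffle. A minimal Python sketch; the referent list is abbreviated, and seeding by participant ID is an assumption added so each order can be reproduced later:

```python
import random

REFERENTS = [
    "Turn the TV on/off", "Start player",
    "Turn up the volume", "Turn down the volume",
    # ... remaining referents (5)-(14) from the list above
]

def randomized_order(referents, participant_id):
    """Return one random presentation order per participant,
    seeded by participant ID for reproducibility."""
    order = list(referents)
    random.Random(participant_id).shuffle(order)
    return order

print(randomized_order(REFERENTS, participant_id=1))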
20. GES Step 1: Prepare setup C=(U,P,E)
• What is a referent?
Referent:
- State before action
- Action description
- State after action
21. GES Step 1: Prepare setup C=(U,P,E)
• Referent: textual, graphical, animation, video (visual priming)
[Example referent 20, "Permute files remotely": before/after pictures of My Disk (My Network) and the Partner Disk (Our Network), in which the Yellow Document and the Red Document swap locations while the Blue Document stays in place.]
23. GES Step 1: Prepare setup C=(U,P,E)
• Prepare the hardware, software
• Keep them constant throughout the experiment
• Choose software for recording gestures into log files
• Example: Python software for Leap Motion
• Example: MyO Gesture Control
24. GES Step 1: Prepare setup C=(U,P,E)
• Identify the physical location: office, room, usability lab,…
• Keep it constant throughout the experiment
• Sometimes determined by the device
• Example: Usability lab
25. GES Step 2: Pre-test (before experiment)
• Each participant
• Comes to the experiment location according to schedule
• Signs a consent form
• Fills in a demographic questionnaire
• Notoriety and experience if any for the target device
• System experience in general (e.g., for a tablet, frequency)
• Performs a motor-skill test: touching the thumb to each of
the other fingers of the same hand twice in a row
without error
• Is presented with an introduction to the device (e.g., a video)
• Experimenter
• Assigns an ID to each participant on the consent form
• Checks that all forms are completely filled in, before and
after
26. GES Step 3: Test (conduct experiment)
• Each participant
• Is presented with the list of referents (in randomized order)
• Confirms she understood the tasks
• Is asked to think of a suitable gesture
• Says “I am ready”
• Gives a rating (goodness-of-fit) between 1 and 10
• To describe how well the proposed gesture fits the referent
• 1 = very poor fit, 10 = excellent fit
• Optionally gives a value between 1 and 7 for
• Complexity: 1 = most complex, 7 = simplest gesture
• Memorability: 1 = hardest to remember, 7 = easiest to remember
• Fun: 1 = least funny, 7 = funniest
(repeated for each gesture in the random set)
27. GES Step 3: Test (conduct experiment)
• The experimenter, on the gesture sheet
• Records the thinking time = the time between the start of the task
(i.e., when the referent was presented to the participant) and the
moment when the participant knows what gesture to propose
• Fills in the gesture sheet: for each gesture collected
• Writes the referent ID
• Records the thinking time
• Writes all scores: goodness-of-fit, complexity, memorability, fun
• Videotapes the whole session for further analysis
• Keeps video files for the final report
• Takes some pictures of the room for some interesting gestures
• Records the gestures into files with the target device itself
• Example: X, Y, Z, timestamp
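A minimal Python sketch of such a recording, assuming the sensing device delivers (x, y, z) samples; the CSV column layout and file name are illustrative, not the course template:

```python
import csv
import time

def log_gesture(path, participant_id, referent_id, samples):
    """Append one elicited gesture to a CSV log: one row per
    (x, y, z) sample, stamped with the current time."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for x, y, z in samples:
            writer.writerow([participant_id, referent_id, x, y, z, time.time()])

# Illustrative call with three fake samples
log_gesture("gestures.csv", participant_id=7, referent_id=3,
            samples=[(0.1, 0.2, 0.0), (0.2, 0.2, 0.1), (0.3, 0.1, 0.1)])
```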
28. GES Step 4: Post-test (after experiment)
• The participant
• Fills in the IBM PSSUQ questionnaire
• Is asked a few open questions
• What did you like the most in the experiment?
• What did you hate the most in the experiment?
• The experimenter
• Checks that the questionnaire is properly filled in
• Asks the participant follow-up questions, if any
• Encourages the participant to answer the open questions
• All
• Encode all data into a spreadsheet
• Template file available
29. GES Step 5: Gesture classification
• Give each individual collected gesture a consistent,
structured name
• Adopt a structure (action verb)+(limb)+(parameter); see the sketch after the references below
• Examples:
• Swipe right with two fingers, swipe left with the dominant hand
• References
• Hand gestures (e.g., LeapMotion):
http://gestureml.org/doku.php/gestures/motion/gesture_index
• Arm gestures (see in body motion gestures section above)
• arm_raised_forward (left/right), arm_raised_side (left/right),
• arm_point (left/right)
• arm_wave (left/right), arm_push (left/right), arm_throw (left/right)
• arm_punch (left/right), arm_folded (left/right), arm_on_hip (left/right)
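A tiny Python sketch of this naming structure; the exact word order in the composed string is an assumption chosen to match the examples above:

```python
def gesture_name(action, parameter, limb):
    """Compose a consistent gesture name from the
    (action verb)+(limb)+(parameter) structure."""
    return f"{action} {parameter} with {limb}"

print(gesture_name("swipe", "right", "two fingers"))       # swipe right with two fingers
print(gesture_name("swipe", "left", "the dominant hand"))  # swipe left with the dominant hand
```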
30. GES Step 5: Gesture classification
• Assign each individual gesture to a category ID
• Example:
• Swipe right with two or three fingers and swipe right with the dominant hand
are all assigned to the category “1: Swipe right”, however the gesture is
performed
• Create your own classification
• You can draw inspiration from existing classifications
• You may depart from existing classifications by
• Adding, deleting, or modifying your own categories
• Refining or generalizing existing categories
• Detail enough: minimum 10 categories
• Do not overdetail (e.g., by creating a category for each
gesture): maximum 20 categories
• In the Excel file, tab “Elicited Gesture”, fill in the light grey area
• Input other data, like Thinking time, Goodness-of-fit, etc.
31. GES Step 5: Gesture classification
• Descriptive labeling: Gesture Cards
• Examples
• Touch the ring once, twice, or multiple times in a row. Tap a
rhythmic pattern on the ring’s surface
• Rotate the ring on the finger. Rotate the finger wearing the ring.
Rotate the hand wearing the ring
• Slide the ring along the finger. Pull out the ring. Place the ring
back on the finger. Change the ring to a different finger.
32. GES Step 5: Gesture classification
• Gesture type: describes the underlying meaning of a gesture. Possible values are:
  • P = Pointing gestures (= deictic gestures) indicate people, objects, or directions
  • Semaphoric gestures are hand postures and movements conveying specific meanings:
    • T = Static semaphorics are identified by a specific hand posture. Examples: thumbs-up means “okay”, a flat palm facing away from the actor means “stop”
    • D = Dynamic semaphorics convey information through their temporal aspects. Example: a circular hand motion means “rotate”
    • S = Semaphoric strokes are single, stroke-like movements such as hand flicks. Example: a left flick of the hand means “dismiss this object”
  • A = Pantomimic gestures demonstrate a specific task to be performed or imitated, mostly involving motion and particular hand postures. Example: filling an imaginary glass with water by tilting an imaginary bucket. They often consist of multiple low-level gestures: grabbing an object, moving it, and releasing it again
33. GES Step 5: Gesture classification
• Gesture type: describes the underlying meaning of a gesture. Possible values are (cont’d):
  • Iconic gestures communicate information about objects or entities, such as specific sizes, shapes, and motion paths:
    • I = Static iconics are performed with spontaneous static hand postures. Example: drawing an “O” with index finger and thumb means a “circle”
    • Y = Dynamic iconics are often used to describe paths or shapes, such as moving the hand in circles, meaning “the circle”
  • M = Manipulation gestures guide a movement in a short feedback loop. They thus feature a tight relationship between the movements of the actor and the movements of the object to be manipulated: the actor waits for the entity to “follow” before continuing
Source: https://www.microsoft.com/en-us/research/publication/understanding-mid-air-hand-gestures-a-study-of-human-preferences-in-usage-of-gesture-types-for-hci/
34. GES Step 5: Gesture classification
• Gesture form: specifies which form of gesture is elicited. Possible values are:
  • S = stroke, when the gesture only consists of taps and flicks
  • T = static, when the gesture is performed in only one location
  • M = static with motion, when the gesture is performed with a static pose while the rest of the body part is moving
  • D = dynamic, when the gesture captures change or motion
35. GES Step 5: Gesture classification
• Range of motion: relates to the distance between the position of the human body producing the gesture and the location of the gesture. Possible values are:
  • C = Close intimate, I = Intimate, P = Personal, S = Social, U = Public, R = Remote
36. GES Step 5: Gesture classification
• Laterality: characterizes how the two hands are employed to produce gestures, as done in many studies. Possible values are:
  • D = dominant unimanual, N = non-dominant unimanual, S = symmetric bimanual, A = asymmetric bimanual
Source: https://www.tandfonline.com/doi/abs/10.1080/00222895.1987.10735426
[Figure: the four laterality categories D, N, S, A illustrated for a right-handed person]
37. GES Step 5: Gesture classification
• Classify each gesture category according to the criteria above and enter the corresponding codes (see the sketch below)
• Use other classification criteria depending on the context:
  • User
  • Platform/device
  • Environment
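A minimal sketch of encoding the classification criteria from the preceding slides as a Python record; the value sets come from the slides, but the record structure itself is an assumption.

```python
from dataclasses import dataclass

# Codes taken from the classification criteria on the preceding slides.
GESTURE_TYPES = {"P", "T", "D", "S", "A", "I", "Y", "M"}  # pointing ... manipulation
GESTURE_FORMS = {"S", "T", "M", "D"}                      # stroke ... dynamic
RANGES_OF_MOTION = {"C", "I", "P", "S", "U", "R"}         # close intimate ... remote
LATERALITIES = {"D", "N", "S", "A"}                       # dominant ... asymmetric bimanual

@dataclass
class GestureCode:
    category_id: int
    gesture_type: str
    gesture_form: str
    range_of_motion: str
    laterality: str

    def __post_init__(self):
        # Reject codes outside the classification scheme.
        assert self.gesture_type in GESTURE_TYPES
        assert self.gesture_form in GESTURE_FORMS
        assert self.range_of_motion in RANGES_OF_MOTION
        assert self.laterality in LATERALITIES

# Example: "swipe right with the dominant hand" coded as a dynamic semaphoric,
# stroke-form, personal-range, dominant unimanual gesture (illustrative choice).
swipe = GestureCode(1, "D", "S", "P", "D")
```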
38. GES Step 5: Compute agreement among gestures
• Download AGaTE from http://depts.washington.edu/acelab/proj/dollar/agate.html
• Prepare a CSV file with one column per participant ID, one line per referent name, and the gesture category ID in each cell (see the sketch below)
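A hedged sketch of producing a CSV in the layout just described (participants as columns, referents as rows, category IDs as cells); check the AGaTE documentation for the exact format it expects.

```python
import csv

def write_agreement_csv(path, categories):
    """Write classifications with participants as columns and referents as rows.

    `categories[referent][participant]` holds the gesture category ID;
    all cells are assumed to be filled.
    """
    participants = sorted({p for by_p in categories.values() for p in by_p})
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["referent"] + participants)
        for referent, by_participant in categories.items():
            writer.writerow([referent] + [by_participant[p] for p in participants])

# Toy data with hypothetical category IDs
write_agreement_csv("agreement.csv", {
    "Turn Light On": {"P01": 15, "P02": 15, "P03": 27},
    "Turn TV On":    {"P01": 27, "P02": 27, "P03": 27},
})
```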
39. GES Step 5: Compute agreement among gestures
• Compute the Agreement Rate (AR) for all referents
  • Agreement Rate = the number of pairs of participants in agreement with each other divided by the total number of pairs of participants that could be in agreement:
    AR(r) = \sum_{P_i \subseteq P} \binom{|P_i|}{2} / \binom{|P|}{2}
    where P is the set of proposals for referent r and the P_i are its groups of identical proposals; the disagreement rate is 1 - AR(r) (see the source for the co-agreement rate formula)
• Compute the co-agreement rate for pairs or groups of participants (e.g., male vs. female) and for categories of referents (e.g., basic vs. advanced)
Source: https://dl.acm.org/citation.cfm?id=2669511
40. GES Step 5: Compute agreement among gestures
• Example: agreement rate for one referent with 5 participants proposing 2 gestures, A and B
• Connected pairs indicate two participants who performed the same gesture (a worked computation follows below)
Source: https://dl.acm.org/citation.cfm?id=2669511
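A worked version of this example; the slide does not give the split between A and B, so assume three participants proposed A and two proposed B:

```latex
AR(r) = \frac{\binom{3}{2} + \binom{2}{2}}{\binom{5}{2}}
      = \frac{3 + 1}{10} = 0.40
```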
41. • Ring Gestures (Example)
• 24 participants × 2 rings (Ring Zero) × 14 referents = 672 gesture proposals
Source: B. Gheran, J. Vanderdonckt, R.-D. Vatavu, Gestures for Smart Rings: Empirical Results, Insights, and Design Implications. Proc. of ACM DIS’18, 623-635.
42. • Ring Gestures: samples (Example)
https://www.youtube.com/watch?v=FHT-5aFNhsA
45. GES Step 6: Analyze agreement among gestures
• Plot all referents in decreasing order of their AR, with error bars
denoting confidence interval (95%) and gesture category
[Chart: the 19 referents plotted in decreasing order of AR, from Turn Light On (0.193) down to Turn AC Off (0.043), average AR = 0.107; bands mark low and medium agreement, and the most elicited gesture category is annotated per referent (e.g., 15: Splay, 27: Button, 3: Point, 14: Fist, 10: Swipe, 8: Rotate, 30: Phone, 28: Dimmer, 6: Flat)]
Source: https://dl.acm.org/citation.cfm?id=2702223
46. GES Step 7: Analyze Gestures
• Provide a detailed analysis of the elicited gestures
  • By criteria
  • By agreement rate
  • By observation (e.g., field notes, video notes, gesture sheet)
  • By interview
  • By IBM PSSUQ questionnaire
• Test for significant differences and correlations in the data (e.g., t-test, ANOVA); a minimal sketch follows below
• Decide on the consensus set = the set of agreed-upon gestures, based on the results
• Example: https://ieeexplore.ieee.org/document/8890919
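A minimal sketch (not the authors’ actual analysis) of testing for a difference between two referent groups with SciPy; the numbers are placeholders.

```python
from scipy import stats

# Placeholder thinking times (seconds) for two hypothetical referent groups
basic_times    = [4.2, 5.1, 3.8, 6.0, 4.9]   # basic referents
advanced_times = [7.3, 8.1, 6.5, 9.0, 7.7]   # advanced referents

# Independent two-sample t-test on the group means
t, p = stats.ttest_ind(basic_times, advanced_times)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < .05 would suggest a reliable difference
```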
47. Example of Gesture Elicitation Study
Head and Shoulders Gestures:
Exploring User-Defined Gestures with Upper Body
Source: Jean Vanderdonckt, Nathan Magrofuoco, Suzanne Kieffer, Jorge Pérez, Ysabelle Rase, Paolo Roselli, Santiago Villarreal:
Head and Shoulders Gestures: Exploring User-Defined Gestures with Upper Body. HCI (19) 2019: 192-213
48. What are head and shoulders gestures?
• A head gesture is any movement of the head leaving the rest
of the body unaffected (stationary)
• A head gesture could occur in any plane (sagittal, transverse,
frontal)
• A shoulder gesture is any movement of the shoulder joint that
leaves the rest of the arm unaffected (stationary).
• A shoulder gesture occurs in any plane of motion (sagittal,
transverse, frontal) or direction (forward, backward, or circular)
50. Why are head and shoulders gestures interesting?
• They enable hands-free interaction
• Fixed-gaze head movement is appropriate when
  • No device is needed
  • Both hands should remain free
  • There is no need to move the gaze
  • A moderate-sized set of actions must be triggered
  • Commands have a medium duration
• Accurate recognizers for head and shoulders gestures are starting to appear [Qinjie et al., 2019]
51. Why are head and shoulders gestures interesting?
• They are used in physical exercises
52. Why are head and shoulders gestures interesting?
• They exhibit some potential for a novel vocabulary
• Head translations (movement planes coded as frontal, transversal, sagittal):
  • X translation: move the head left/right (alias: face left, face right); lateral translation (v,c,c)
  • Y translation: move the head up/down (alias: face up, face down); neck elevation, depression (c,v,c)
  • Z translation: move the head forward/backward (alias: thrust, retreat); protraction, retraction (c,c,v)
53. Why are head and shoulders gestures interesting?
• They exhibit some potential for a novel vocabulary
• Head tilts (movement planes coded as frontal, transversal, sagittal):
  • Frontal tilting: tilt the head to the left/right (alias: bend left, bend right); lateral flexion (v,v,c)
  • Transversal tilting: tilt the head up/down (alias: bend up, bend down); extension, flexion (v,c,v)
  • Sagittal tilting: tilt the head forward/backward (alias: bend forward, bend backward); extension, flexion (c,v,v)
54. Why are head and shoulders gestures interesting?
• They exhibit some potential for a novel vocabulary
• Head rotations (movement planes coded as frontal, transversal, sagittal):
  • X rotation: turn the head up/down (alias: uphead, downhead); horizontal rotation (c,v,v)
  • Y rotation: turn the head left/right (alias: lefthead, righthead); vertical rotation (v,c,v)
  • Z rotation: turn the head forward/backward (alias: forehead, backhead); facial rotation (v,v,c)
55. Why are head and shoulders gestures interesting?
• They exhibit some potential for a novel vocabulary
• Shoulder translations (movement planes coded as frontal, transversal, sagittal):
  • X translation: move the shoulder horizontally left/right (alias: decontract, contract); extension, flexion (v,c,c)
  • Y translation: raise/lower the shoulder (alias: raise, lower); shoulder elevation, depression (c,v,c)
  • Z translation: move the shoulder forward/backward (alias: protract, retract); shoulder protraction, retraction (c,c,v)
56. • There are also common head and shoulders gestures
57. • Larger design space than www.gestureml.org
Source: http://gestureml.org/doku.php/gestures/motion/gesture_index
“These head-motion based gestures can be great for adding subtle context cues to
game controls and metrics or even used to directly modify the way digital content is
presented on a display.”
58. Experiment: Gesture Elicitation Study (GES)
• Participants
  • 10 females + 12 males = 22 participants
  • Aged from 18 to 62 years (M=29, SD=13)
  • Various occupations: secretary, teacher, employee, ...
• Device usage frequencies
  [Chart: average frequency of device usage: Computer 6.09, Smartphone 6.05, Tablet 2.68, Game console 1.73, MS Kinect 1.05, studied device 0.00]
• Creativity score
  [Scatter plot: creativity score vs. age, with linear fit y = 0.1467x + 52.482, R² = 0.0775]
59. Experiment: Gesture Elicitation Study (GES)
• Stimuli: 14 referents for IoT tasks:
Turn the TV On/Off, Start Player, Turn the Volume up, Turn the volume
down, Go to the next channel, Go to the previous channel, Turn Air
Conditioning On/Off, Turn Lights On/Off, Brighten Lights, Dim Lights,
Turn Heating system On/Off, Turn Alarm On/Off, Answer a phone call,
and End Phone Call.
Example referent: 3. INCREASE: Brighten lights [before/after images]
60. Experiment: Results
• 22 participants × 14 referents = 308 elicited gestures, resulting in 10 categories:
  • Head single gesture: 102
  • Concurrent compound gesture: 70
  • Sequential compound gesture: 44
  • Both shoulders single gesture: 29
  • Dominant shoulder single gesture: 19
  • Non-dominant shoulder single gesture: 14
  • Head repeated gesture: 10
  • Both shoulders repeated gesture: 9
  • Dominant shoulder repeated gesture: 4
  • Non-dominant shoulder repeated gesture: 3
  • Other: 26
62. Experiment: Results
• Evolution of aggregated measures per referent
[Chart: average thinking time (s) and Goodness-of-fit per referent, with linear trend lines for both measures]
63. Experiment: Results
• Breakdown per criterion
  • Body part: Head 50.87%, Shoulders 30.62%, Head and Shoulders 18.51%
  • Elicited gestures: Bend 31%, Left/right 12%, Raise 11%, Shrug 11%, Clog 9%, Protract 6%, Nod 4%, Rotate 4%, Lower 4%, Upface/downface 3%, Retract 3%, Thrust 1%, Backhead 0%
  • Amount of strokes: one stroke 68.09%, two strokes 22.37%, three or more strokes 9.54%
64. Experiment: 14 Consensus gestures
[Chart: agreement score (Vatavu & Wobbrock, 2015) and agreement rate (Vatavu & Wobbrock, 2016) per referent, plotted in decreasing order with high and medium agreement bands: from Go to Next Channel and Go to Previous Channel (0.390 / 0.368) down to Turn Heating System On/Off (0.138 / 0.104); overall average 0.263 / 0.232]
65. Conclusion
• Contributions
  • Design space for head and shoulders gestures
  • Corpus of 308 elicited gestures with measures
  • Classification into 10 categories
  • Consensus set of 14 head and shoulders gestures
• Design guidelines
  • Use bending gestures as first-class citizens
  • Use upface/downface for infrequent tasks
  • Use thrust only for play/pause
  • Forehead and backhead gestures should not be used, except for exceptional assignments
67. • Shortcomings
• Legacy bias (Morris et al., 2014)
Source: M. Morris et al., Reducing Legacy Bias in Gesture Elicitation Studies, Interactions, 2014. https://interactions.acm.org/archive/view/may-june-2014/reducing-legacy-bias-in-gesture-elicitation-studies
• Priming: priming users to think about the capabilities of a new form factor or sensing technology is one approach that may reduce the impact of legacy bias
• Partners: inviting users to participate in elicitation studies in groups, rather than individually, is another approach to overcoming legacy bias
68. • Shortcomings
• Manual (variable) classification of gestures
Source: Radu-Daniel Vatavu, The Dissimilarity-Consensus Approach to Agreement Analysis in Gesture Elicitation Studies. CHI 2019: 224
“It is possible to design a highly guessable symbol set by acquiring guesses from participants.”
“Participants are first recruited to propose symbols for specified referents within a given domain. The more participants, the more likely the resulting symbol set will be guessable to external users. The goal is to obtain a rich set of symbols from which to create the resultant symbol set.”
69. • Shortcomings
• Manual (variable) classification of gestures
• N=30 children, referent: “Scratch like a cat!”
  • Grouping criterion #1: same hand: 39% agreement
  • Grouping criterion #2: same hand & body pose: 19.6%
  • Grouping criterion #3: same hand & body pose & pattern of movement: 12.2%
Source: Radu-Daniel Vatavu, The Dissimilarity-Consensus Approach to Agreement Analysis in Gesture Elicitation Studies. CHI 2019: 224
70. • Shortcomings
• Automatic classification of gestures requires (see the sketch below):
  1. A dissimilarity function for gestures, Δ(g_i, g_j): for example, the Euclidean distance, the Dynamic Time Warping cost function, or any distance function that takes two gestures as input and returns a real, positive value expressing how dissimilar they are
  2. A threshold (τ, tau) for the values computed by Δ
Source: Radu-Daniel Vatavu, The Dissimilarity-Consensus Approach to Agreement Analysis in Gesture Elicitation Studies. CHI 2019: 224
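A minimal sketch of the dissimilarity-consensus idea built from these two ingredients, under stated assumptions: gestures are equal-length point sequences, Δ is a mean Euclidean distance (a DTW cost would work just as well), and two proposals count as agreeing when Δ ≤ τ. All names and the toy data are illustrative.

```python
import math

def delta(g1, g2):
    """Mean point-wise Euclidean distance between two equal-length gestures."""
    return sum(math.dist(p, q) for p, q in zip(g1, g2)) / len(g1)

def consensus_pairs(gestures, tau):
    """Count pairs of proposals whose dissimilarity is within the threshold tau."""
    n = len(gestures)
    agreeing = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if delta(gestures[i], gestures[j]) <= tau
    )
    return agreeing, n * (n - 1) // 2  # agreeing pairs, total pairs

# Toy example: three 2-point gestures in 2-D
g = [[(0, 0), (1, 0)], [(0, 0), (1.1, 0)], [(0, 0), (0, 1)]]
agree, total = consensus_pairs(g, tau=0.2)
print(agree / total)  # dissimilarity-consensus agreement at this tau
```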
71. • Shortcomings
• Automatic classification of gestures
Source: Radu-Daniel Vatavu, The Dissimilarity-Consensus Approach to Agreement Analysis in Gesture Elicitation Studies. CHI 2019: 224
73. • Shortcomings and variants
• Gesture elicitation distributed in time and space
GestMAN
Source: GestMan: a cloud-based tool for stroke-gesture datasets
75. • Shortcomings and variants
• Some limbs are privileged
Source: Villarreal et al., A Systematic Review of Gesture Elicitation Studies: What Can We Learn from 216 Studies?
• Body-part measures:
  1. The Individual Frequency of Body Parts (IFBP)
  2. The Combination Frequency of Body Parts (CFBP)
76. • Shortcomings and variants
• Eliciting other symbols than gestures, other modalities
Source: Villarreal et al., A Systematic Review of Gesture Elicitation Studies: What Can We Learn from 216 Studies?
77. • Shortcomings and variants
• Eliciting more than one symbol
Source: Villarreal et al., A Systematic Review of Gesture Elicitation Studies: What Can We Learn from 216 Studies?
Multiple symbols could be combined to trigger a function by relying on hierarchical structure and congruence
78. • Conclusion
• An elicitation study is a practical tool for eliciting (symbol) proposals from participants for commands, icons, shortcuts, gestures, vocal commands, etc.
  • Efficient, natural
• But it has limitations:
  • Subject to manual classification
  • Agreement rates are not the only relevant measure (other scores exist)
  • Subject to legacy bias
  • Continuity from elicitation to recognition is not guaranteed
  • Subject to context variability (User, Platform, Environment)
79. • Conclusion
• Many GES already exist, but there is no consolidation of this knowledge!
[Chart: frequency of gesture elicitation studies per year of publication, 2009 to 2019, growing roughly linearly (fit: y = 3.8909x - 4.1636, R² = 0.9052)]
80. How to get the best gestures from people?
Considered so far:
• Human factors: preference, sketching time, delete operations, agreement, (dis)similarity
• System factors: recognition rate
More to come and to consider:
• Human factors: hedonic value, memorability, naturalness, discoverability, consistency, congruence, ...
• System factors: recognition rate, execution time, computational complexity, more measures...