The document summarizes Jean Vanderdonckt's upcoming lecture on gestural interaction. It will cover the psychological, hardware, software, usage, social and user experience dimensions of gestural interaction. On the psychological dimension, it discusses definitions of gestures and theories of gesture types. On the hardware dimension, it outlines paradigms of contact-based and contact-less gesture interaction. On the software dimension, it provides an overview of gesture recognition algorithms such as Rubine, Siger, LVS and nearest neighbor classification.
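The nearest-neighbor classification mentioned among the recognition algorithms can be illustrated with a minimal template matcher in the style of the $1 recognizer family. This is a generic sketch, not code from the lecture: the resampling count, normalization choices, and gesture names are illustrative assumptions.

```python
# Minimal nearest-neighbor gesture classifier sketch ($1-recognizer style).
# Templates and parameters are illustrative, not taken from the lecture.
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points (standard preprocessing)."""
    dists = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    out = [points[0]]
    acc = 0.0
    pts = list(points)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate to the centroid and scale to a unit box (position/scale invariance)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def classify(candidate, templates):
    """Return the label of the template with the smallest mean point-to-point distance."""
    cand = normalize(resample(candidate))
    best, best_d = None, float("inf")
    for label, tmpl in templates.items():
        t = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(cand, t)) / len(cand)
        if d < best_d:
            best, best_d = label, d
    return best
```

Recognizers such as Rubine's add trained feature-based classification on top of this kind of preprocessing; the nearest-neighbor variant simply compares against stored templates.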
Conducting a Gesture Elicitation Study: How to Get the Best Gestures From People? Jean Vanderdonckt
Lecture 3: Conducting a Gesture Elicitation Study: How to Get the Best Gestures From People?
Francqui Chair in Computer Science 2020 VUB, Jean Vanderdonckt, 27 April 2021
A lecture on evaluating AR interfaces, from the graduate course on Augmented Reality, taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
Using Physiological Sensing and Scene Reconstruction in Remote Collaboration, University of Auckland
In this research, we explore novel Augmented Virtual Teleportation (AVT) methods based on the hybrid technologies of Augmented Reality (AR), Virtual Reality (VR), 3D live scene capturing, and multimodal interaction. Natural behavioral cues (hand gestures, eye gaze, etc.) used in face-to-face communication play an essential role in effective collaboration. In contrast, most Mixed Reality (MR) remote collaboration systems have mainly investigated computer-generated visual cues rendered as graphic objects or text for delivering instructions. We first study the natural communication cues that people use in face-to-face collaboration. We then develop a novel remote collaboration system that enables people to communicate remotely as if face-to-face. The system contains two main parts: 1) live scene capturing to enable real-time environment reconstruction and sharing of a user's location, and 2) multimodal input such as gaze, gesture, and physiological signals to enhance remote communication. So far we have conducted two experiments studying collaboration between a person with an AR interface and a remote user with a VR interface using multimodal input. We found that the remote collaboration system provided a significantly stronger sense of co-presence for both the local and remote users when combining gaze and gesture cues than when using the gaze cue alone. The combined cues were also rated significantly higher than gaze cues alone for the ease of conveying spatial actions. We plan to extend this system to study the effect of incorporating physiological signals in communication, especially on co-presence and usability. This research has many potential applications in areas such as training, tourism, entertainment, and gaming. In conclusion, this thesis aims to study the effect of incorporating multimodal input and scene capture in remote collaboration systems on presence, engagement, and task efficiency.
This research will produce many benefits, such as design guidelines for future AVT systems, software libraries making it easy to create AVT systems, sample data-sets from experiments conducted, research publications, and more.
This presentation is concerned with the development and evaluation of a redesign of the online and mobile app African Storybook initiative services that support the authoring and reading of openly licensed storybooks to support literacy development in Africa. The redesign makes use of a number of cultural-historical activity theory principles, including the object of activity, tool mediation, and shared objects that are part of the third-generation activity system.
Lecture on Advanced Human Computer Interaction given by Mark Billinghurst on July 28th 2016. This is the first lecture in the COMP 4026 Advanced HCI course.
Workshop talk by Mark Billinghurst at the AWE Asia 2015 conference on October 17th 2015. This workshop gives an overview of design guidelines and tools for designing wearable interfaces.
COMP4010 Lecture 5 taught by Bruce Thomas at University of South Australia on August 24th 2017. This class was about using Interaction Design techniques for developing effective VR interfaces. Slides by Mark Billinghurst.
Presentation given by Mark Billinghurst on research into Empathic Glasses. Combining Augmented Reality, Wearable Computers, Emotion Sensing and Remote Collaboration. Given on February 18th 2016.
Keynote speech given by Mark Billinghurst at the CHIuXiD conference in Jakarta, Indonesia on April 14th 2016. This talk describes the research area of Empathic Computing and examples from research projects in this area.
2013 Lecture 6: AR User Interface Design Guidelines Mark Billinghurst
COSC 426 Lecture 6: on AR User Interface Design Guidelines. Lecture taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury on August 16th 2013
COMP 4010 Lecture 5 on Interaction Design for Virtual Reality. Taught by Gun Lee on August 21st 2018 at the University of South Australia. Slides by Mark Billinghurst
By interaction design, we mean
"designing interactive products to support the way people communicate and interact in their everyday lives." (Sharp, Rogers & Preece, 2007)
References:
Sharp, H., Rogers, Y., and Preece, J., Interaction Design: Beyond Human-Computer Interaction, 2nd Edition, John Wiley & Sons, 2007, p. 8.
User experience (UX) strategy, a careful blend of research, analysis, and UX design, is where a successful digital product begins. It bridges the gap between vision and execution. By determining tangible objectives, benchmarks, and a roadmap up front, you create the opportunity to solve real problems for real people.
by Courtney Bradford for Circles Conference 2017
INTERACT 2019 'The Science Behind User Experience Design' Course Asad Ali Junaid
Planning and conducting User Experience (UX) activities in a structured and scientific manner has many advantages. It is important that UX professionals understand the scientific basis of UX methods and leverage it to enhance the UX of the application being designed. It is also easier for UX designers to get buy-in from stakeholders when their design recommendations are grounded in scientific logic and vetted by supporting data. In this course, UX-relevant scientific concepts and methods from the social sciences are presented in a way that is simple to understand and easy to assimilate.
Language is much more than the external expression and communication of internal thoughts formulated independently of their verbalization. In demonstrating the inadequacy and inappropriateness of such a view of language, attention has already been drawn to the ways in which one’s native language is intimately and in all sorts of details related to the rest of one’s life in a community and to smaller groups within that community. This is true of all peoples and all languages; it is a universal fact about language.
Designing with Immigrants. When emotions run high Mariana Salgado
Presentation used in the European Academy of Design (2015) for presenting the paper with the same title. Paris, France. The paper can be found in: https://www.academia.edu/12261966/Designing_with_Immigrants._When_emotions_run_high
Designing with Immigrants. When emotions run high Mariana Salgado
This was a presentation of a paper with the same title at the European Academy of Design, 21.04.15, Paris, France. This paper was written with Helena Sustar and Michail Galanakis.
Designing with Immigrants. When emotions run high Mariana Salgado
This presentation took place at the 11th European Academy of Design conference, The Value of Design Research. The paper was: Designing with Immigrants. When emotions run high. Paris, France, 2015.
This is an introduction workshop to Designing Interactions / Experiences module I’m teaching at Köln International School of Design of the Cologne University of Applied Sciences, which I’m honored to give by invitation of Professor Philipp Heidkamp.
To the end of our possibilities with Adaptive User Interfaces Jean Vanderdonckt
Slides of the keynote presented at the 1st International Workshop on Human-in-the-Loop Applied Machine Learning (HITLAML '23)
September 04 - 06, 2023 - Belval, Luxembourg.
This presentation summarizes the evolution of techniques used to adapt the user interfaces to the context of use, which is composed of the user, the platform, and the environment.
Engineering the Transition of Interactive Collaborative Software from Cloud Computing to Edge Computing Jean Vanderdonckt
Paper presented at EICS '22: https://dl.acm.org/doi/10.1145/3532210
The "Software as a Service" (SaaS) model of cloud computing popularized online multiuser collaborative software. Two famous examples of this class of software are Office 365 from Microsoft and Google Workspace. Cloud technology removes the need to install and update the software on end users' computers and provides the necessary underlying infrastructure for online collaboration. However, to provide a good end-user experience, cloud services require an infrastructure able to scale up to the task and allow low-latency interactions with a variety of users worldwide. This is a limiting factor for actors that do not possess such an infrastructure. Unlike cloud computing, which neglects the computational and interactional capabilities of end users' devices, the edge computing paradigm promises to exploit them as much as possible. To investigate the potential of edge computing over cloud computing, this paper presents a method for engineering interactive collaborative software supported by edge devices as a replacement for cloud computing resources. Our method is able to handle user interface aspects such as connection, execution, migration, and disconnection differently depending on the available technology. We exemplify our approach by developing a distributed Pictionary game deployed in two scenarios: a non-shared scenario where each participant interacts only with their own device, and a shared scenario where participants also share a common device, including a TV. After a theoretical comparative study of edge vs. cloud computing, an experiment compares the two implementations to determine their effect on the end user's perceived experience and perceived vs. real latency.
UsyBus: A Communication Framework among Reusable Agents integrating Eye-Tracking Jean Vanderdonckt
Presentation of ACM EICS '22 paper: https://dl.acm.org/doi/10.1145/3532207
Eye movement analysis is a popular method to evaluate whether a user interface meets users' requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually and exhaustively, and must be redefined each time the user interface changes. This process is also error-prone, since eye movement data must be finely synchronised with user interface changes. These issues become more serious when the user interface layout changes dynamically in response to user actions. In addition, current tools do not allow easy integration into interactive applications, and opportunistic code must be written to link these tools to user interfaces. To address these shortcomings and to leverage the capabilities of eye-tracking, we present UsyBus, a communication framework for autonomous, tight coupling among reusable agents. These agents are responsible for collecting data from eye-trackers, analyzing eye movements, and managing communication with other modules of an interactive application. UsyBus accepts multiple heterogeneous eye-trackers as input and provides multiple configurable outputs depending on the data to be exploited. Modules exchange data over the UsyBus communication framework, thus creating a customizable multi-agent architecture. UsyBus application domains range from usability evaluation to the design of gaze-interaction applications. Two case studies, composed of reusable modules from our portfolio, exemplify the implementation of the UsyBus framework.
µV: An Articulation, Rotation, Scaling, and Translation Invariant (ARST) Multistroke Gesture Recognizer Jean Vanderdonckt
Paper presented at ACM EICS '22
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multistroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multistroke gesture recognizer that addresses the properties of articulation, rotation, scaling, and translation invariance by combining $P+'s cloud matching for articulation invariance with !FTL's local shape distance for RST invariance. We evaluate µV against five competitive recognizers on MMG, an existing gesture set, and on two new versions for smartphones and tablets, MMG+ and RMMG+, a randomly rotated version on both platforms. µV is significantly more accurate than its predecessors when rotation invariance is required and not significantly inferior when it is not. µV is also significantly faster than the others with many samples and not significantly slower with few samples.
RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowledge on User-Defined Gestures Jean Vanderdonckt
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered across the scientific literature in different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations, towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize, and identify user-defined gestures from 216 published gesture elicitation studies.
Gesture-based information systems: from DesignOps to DevOps Jean Vanderdonckt
Keynote address for the 29th International Conference on Information Systems Development ISD'2021 (Valencia, Spain, September 8-10, 2021). See https://isd2021.webs.upv.es/program.php#keynotes
This talk promotes the Seven I's:
Implementation continuity
Inclusion of end-users
Interaction first
Integration among stakeholders
Iteration short
Incremental progress
Innovation openness
Intra-platform plasticity typically assumes that the display of a computing platform remains fixed and rigid during interactions with the platform, in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for reconfiguration, concrete interaction, unit allocation, and widget selection, and implement it in JavaScript. In a first experiment, we determine the display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
User-centred Development of a Clinical Decision-support System for Breast Cancer Screening Jean Vanderdonckt
See the paper at https://www.scitepress.org/Link.aspx?doi=10.5220/0010258900600071
We conducted a user-centered design of a clinical decision-support system for breast cancer screening, diagnosis, and reporting based on stroke gestures. We combined knowledge elicitation interviews, scenario-focused questionnaires, and paper mock-ups to understand user needs. Multi-fidelity (low and high) prototypes were designed and compared, first in vitro in a usability laboratory, then in vivo in the real world. The resulting user interface provides radiologists with a platform that integrates domain-oriented tools for the visualization of mammograms and the manual and semi-automatic annotation of breast cancer findings based on stroke gestures. The contribution of this work lies in that, to the best of our knowledge, stroke gestures have not yet been applied to the annotation of mammograms. On the one hand, although there is a substantial amount of research on stroke-based interaction, none focuses specifically on the domain of breast cancer annotation. On the other hand, gestures in typical breast cancer annotation tools are performed with a keyboard and a mouse.
Simplifying the Development of Cross-Platform Web User Interfaces by Collaborative Model-Based Design Jean Vanderdonckt
Ensuring the responsive design of web applications requires their user interfaces to be able to adapt to different contexts of use, which subsume the end users, the devices and platforms used to carry out the interactive tasks, and the environment in which they occur. To address the challenges posed by responsive design, and aiming to simplify development by factoring out the common parts from the specific ones, this paper presents Quill, a web-based development environment that enables the various stakeholders of a web application to collaboratively adopt a model-based design of the user interface for cross-platform deployment. The paper establishes a series of requirements for collaborative model-based design of cross-platform web user interfaces motivated by the literature, observational and situational design. It then elaborates on potential solutions that satisfy these requirements and explains the solution selected for Quill. A user survey was conducted to determine how stakeholders appreciate model-based user interface design and how they estimate the importance of the requirements that led to Quill.
Detachable user interfaces consist of graphical user interfaces whose parts, or whole, can be detached at run-time from their host, migrated onto another computing platform while the task is carried out, possibly adapted to the new platform, and attached to the target platform in a peer-to-peer fashion. Detaching is the property of splitting off a part of a UI to transfer it onto another platform. Attaching is the reciprocal property: a part of an existing interface can be attached to the interface currently in use so as to recompose another one on demand, according to the user's needs and task requirements. Assembling interface parts by detaching and attaching allows dynamically composing, decomposing, and re-composing new interfaces on demand. To support this interaction paradigm, a development infrastructure has been developed based on a series of primitives such as display, undisplay, copy, expose, return, transfer, delegate, and switch. We exemplify it with QTkDraw, a painting application with attaching and detaching based on this development infrastructure.
The Impact of Comfortable Viewing Positions on Smart TV Gestures Jean Vanderdonckt
Whereas gesture elicitation studies for TV interaction assume that participants adopt an upright, frontal viewing position, we asked 21 participants to hold a natural, comfortable viewing position: the posture they adopt when watching TV at home. By involving a broad selection of users in terms of age and profession, our study targets a higher ecological validity than existing studies. Agreement rates were lower than in existing studies using an upright, frontal viewing position. Participants experienced problems due to (1) having to use their non-dominant hand instead of their dominant hand, (2) their head being in an orientation that made some physical movements more difficult to perform, and (3) being hindered in their movement by the sofa they lay on. Since each person may adopt a different position, inducing different gestures due to the aforementioned problems, the effect of a comfortable viewing position is analyzed by comparison to gestures for a frontal position.
Head and Shoulders Gestures: Exploring User-Defined Gestures with Upper Body Jean Vanderdonckt
This paper presents empirical results about user-defined gestures for head and shoulders, analyzing 308 gestures elicited from 22 participants for 14 referents materializing 14 different types of tasks in an IoT context of use. We report an overall medium consensus, but with medium variance (mean: .263, min: .138, max: .390 on the unit scale) between participants' gesture proposals, while their thinking times were less similar (min: 2.45 sec, max: 22.50 sec), which suggests that head and shoulders gestures are not all equally easy to imagine and to produce. We point to the challenges of deciding which head and shoulders gestures will become the consensus set based on four criteria: the agreement rate, their individual frequency, their associative frequency, and their unicity.
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213794
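The agreement rates reported in elicitation studies like this one are conventionally computed with Vatavu and Wobbrock's agreement-rate (AR) measure over the groups of identical gesture proposals for a referent. A minimal sketch (the function name and data shape are my own, not from the paper):

```python
# Agreement rate AR(r) for one referent, following Vatavu & Wobbrock's formula:
# AR = (|P| / (|P| - 1)) * sum((|Pi| / |P|)**2) - 1 / (|P| - 1),
# where P is the set of proposals and Pi are groups of identical proposals.
from collections import Counter

def agreement_rate(proposals):
    """proposals: one gesture label per participant for a single referent."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals)                     # |Pi| for each gesture group
    s = sum((k / n) ** 2 for k in groups.values())
    return (n / (n - 1)) * s - 1 / (n - 1)
```

AR is 1.0 when all participants propose the same gesture and 0.0 when every proposal is unique, which is why values such as the mean of .263 above indicate medium consensus.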
G-Menu: A Keyword-by-Gesture based Dynamic Menu Interface for Smartphones Jean Vanderdonckt
Instead of relying on graphical or vocal modalities for searching an item by keyword (called the K-Menu), this paper presents the G-Menu, exploiting gesture interaction and gesture recognition: when a user sketches a keyword by gesturing the first letters of its label, a menu with items related to the recognized letters is constructed dynamically and presented to the user for selection and auto-completion. The selection can be completed either gesturally by an appropriate gesture (the G-Menu) or by touch only (the T-Menu). This paper compares the three types of menu, i.e., by keyword, by gesture, and by touch, in a user study with twenty participants on their item selection time (measuring task efficiency), their error rate (measuring task effectiveness), and their subjective satisfaction (measuring user satisfaction).
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213790
Unistroke and multistroke gesture recognizers have always striven to reach some robustness with respect to all the variations encountered when people issue gestures by hand on touch surfaces or with sensing devices. For this purpose, successful stroke recognizers rely on a gesture recognition algorithm that satisfies a series of invariance properties, such as stroke-order invariance, stroke-number invariance, stroke-direction invariance, and position, scale, and rotation invariance. Before initiating any recognition activity, these algorithms ensure these properties by performing several pre-processing operations. These operations induce an additional computational cost to the recognition process, as well as a potential error bias. To cope with this problem, we introduce an algorithm that ensures all these properties analytically, instead of statistically, based on vector algebra. Instead of points, the recognition algorithm works on vectors between points. We demonstrate that this approach not only eliminates the need for these pre-processing operations but also constitutes a structure-preserving transformation.
Paper available at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A217006
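The idea of gaining invariance analytically from between-point vectors can be illustrated with a simple sketch. This is not the paper's actual formula, only the underlying principle: ratios of successive between-point vectors, treated as complex numbers, are unchanged by any global translation, rotation, or uniform scaling of the gesture, so no normalization pre-processing is needed.

```python
# Illustrative sketch of analytic RST invariance via between-point vectors.
# NOT the exact algorithm from the paper; function names are invented.

def shape_signature(points):
    """Complex ratios of consecutive between-point vectors.
    Translation cancels in the differences; rotation and scaling cancel
    in the ratios, so the signature is RST-invariant by construction."""
    vecs = [complex(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return [b / a for a, b in zip(vecs, vecs[1:]) if a != 0]

def shape_distance(p, q):
    """Dissimilarity of two gestures sampled with equally many points."""
    sp, sq = shape_signature(p), shape_signature(q)
    return sum(abs(a - b) for a, b in zip(sp, sq)) / max(len(sp), 1)
```

A gesture and any rotated, scaled, and translated copy of it have identical signatures, so their distance is zero without any pre-processing step.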
Body-based gestures, such as those acquired by the Kinect sensor, today benefit from efficient tools for their recognition and development, but less so for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, based on the user, body and body parts, gestures, and environment, is designed and encoded in the Web Ontology Language (OWL) according to modelling triples (subject, predicate, object). As a proof-of-concept, and to feed this ontology, a gesture elicitation study collected 24 participants × 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
See paper at https://dl.acm.org/citation.cfm?id=3328238
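The (subject, predicate, object) modelling can be sketched as a small triple store. The identifiers below are invented for illustration and are not taken from the ontology described in the paper:

```python
# Hypothetical triples illustrating (subject, predicate, object) modelling of a
# body-based gesture; all names are invented, not from the paper's ontology.
triples = [
    ("gesture:wave", "rdf:type", "onto:BodyGesture"),
    ("gesture:wave", "onto:usesBodyPart", "body:RightHand"),
    ("gesture:wave", "onto:mapsToReferent", "iot:TurnLightOn"),
]

def objects_of(triples, subject, predicate):
    """Query: all objects recorded for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]
```

Encoding elicited gestures this way is what makes automated reasoning possible, e.g. querying every gesture that uses a given body part across the 456 collected proposals.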
AB4Web: An On-Line A/B Tester for Comparing User Interface Design Alternatives Jean Vanderdonckt
We introduce AB4Web, a web-based engine that implements a balanced randomized version of multivariate A/B testing, specifically designed for practitioners to readily compare end users' preferences for user interface alternatives, such as menu layouts, widgets, controls, forms, or visual input commands. AB4Web automatically generates a balanced set of randomized pairs from a pool of user interface design alternatives, presents them to participants, collects their preferences, and reports results from the perspective of four quantitative measures: the number of presentations, the preference percentage, the latent score of preference, and the matrix of preferences. In this paper, we exemplify the AB4Web tester with a user study in which N=108 participants expressed their preferences regarding the visual design of 49 distinct graphical adaptive menus, for a total of 5,400 preference votes. We compare the results obtained from our quantitative measures with four alternative methods: Condorcet, de Borda count starting at one and zero, and the Dowdall scoring system. We plan to release AB4Web as a public tool for practitioners to create their own A/B testing experiments.
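The balanced randomized pairing and the preference-percentage measure described above can be sketched as follows. Function names and the vote format are assumptions for illustration, not AB4Web's actual API:

```python
# Sketch of balanced randomized pairwise comparison, in the spirit of the
# engine described above; not AB4Web's real implementation.
import itertools
import random

def balanced_pairs(alternatives, seed=0):
    """All unordered pairs of design alternatives, each shown once,
    with randomized left/right order and a randomized sequence."""
    rng = random.Random(seed)
    pairs = [tuple(rng.sample(list(c), 2))
             for c in itertools.combinations(alternatives, 2)]
    rng.shuffle(pairs)
    return pairs

def preference_percentage(votes, alternatives):
    """votes: list of (winner, loser) pairs. Returns, per alternative,
    the share of its comparisons that it won."""
    shown = {a: 0 for a in alternatives}
    won = {a: 0 for a in alternatives}
    for w, l in votes:
        shown[w] += 1
        shown[l] += 1
        won[w] += 1
    return {a: won[a] / shown[a] if shown[a] else 0.0 for a in alternatives}
```

Balancing presentation counts across alternatives is what makes raw win percentages comparable before applying rank aggregation methods such as Condorcet or Borda.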
Gelicit: A Cloud Platform for Distributed Gesture Elicitation Studies Jean Vanderdonckt
A gesture elicitation study, as originally defined, consists of gathering a sample of participants in a room, instructing them to produce gestures they would use for a particular set of tasks, materialized through a representation called a referent, and asking them to fill in a series of tests, questionnaires, and feedback forms. Until now, this procedure has been conducted manually in a single, physical, and synchronous setup. To relax the constraints imposed by this manual procedure and to support stakeholders in defining and conducting such studies in multiple contexts of use, this paper presents Gelicit, a cloud computing platform that supports gesture elicitation studies distributed in time and space, structured into six stages: (1) define a study: a designer defines a set of tasks with their referents for eliciting gestures and specifies an experimental protocol by parameterizing its settings; (2) conduct a study: any participant receiving the invitation to join the study conducts the experiment anywhere, anytime, anyhow, by eliciting gestures and filling in forms; (3) classify gestures: an experimenter classifies elicited gestures according to selected criteria and a vocabulary; (4) measure gestures: an experimenter computes gesture measures, such as agreement and frequency, to understand their configuration; (5) discuss gestures: a designer discusses the resulting gestures with the participants to reach a consensus; (6) export gestures: the consensus set of gestures resulting from the discussion is exported to be used with a gesture recognizer. The paper discusses Gelicit's advantages and limitations with respect to three main contributions: a conceptual model for gesture management, a method for distributed gesture elicitation based on this model, and a cloud computing platform supporting this distributed elicitation. We illustrate Gelicit through a study for eliciting 2D gestures executing Internet of Things tasks on a smartphone.
MoCaDiX: Designing Cross-Device User Interfaces of an Information System base...Jean Vanderdonckt
This paper presents MoCaDiX, a method for designing cross-device graphical user interfaces of an information system based on its UML class diagram, structured as a four-step process: (1) a UML class diagram of the information system is created in a model editor, (2) how the classes, attributes, methods, and relationships of this class diagram are presented across devices is then decided based on user interface patterns with
their own parametrization, (3) based on these parameters, a Concrete User Interface model is generated in
QuiXML, a lightweight fit-to-purpose User Interface Description Language, and (4) based on this model, HTML5 cross-device user interfaces are semi-automatically generated for four configurations: single/multiple device single/multiple-display on a smartphone, a tablet, and a desktop. From the practitioners’ viewpoint, a first experiment investigates effectiveness, efficiency, and subjective satisfaction of three intermediate and
three expert designers, using MoCaDiX on a representative class diagram. From the end user’s viewpoint, a second experiment compares subjective satisfaction and preference of twenty end users assessing layout strategies for interfaces generated on two devices.
Specification of a UX process reference model towards the strategic planning ...Jean Vanderdonckt
In this conceptual paper, we present a UX process reference model (UXPRM), explain how it builds on the related work and report our experience using it. The UXPRM includes a description of primary UX lifecycle processes, and a classification of UX methods and artifacts. This work draws an accurate picture of UX base practices and allows the reader to compare and select methods for different purposes. Building on that basis, our future work consists of developing a UX Capability/Maturity Model (UXCMM) intended for UX activity planning according to the organization’s UX capabilities. Ultimately, the UXCMM aims to
facilitate the integration of UX processes in software engineering, which should contribute to reducing the gap between UX research and UX practice.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
Gestural Interaction, Is it Really Natural?
1. Francqui Chair 2020, Inaugural Lesson:
Gestural Interaction, Is it Really Natural?
Jean Vanderdonckt, UCLouvain
Vrije Universiteit Brussel, February 20, 2020, 4 pm-6 pm
Location: Room I.0.02, Pleinlaan 2, B-1050 Brussels
Presented by
Prof. Dr. Beat Signer
2.
3. 3
Jean Vanderdonckt
Université catholique de Louvain (UCLouvain)
Louvain School of Management (LSM)
Louvain Research Institute in Management and Organizations
(LouRIM)
Institute of Information and Communication Technologies,
Electronics and Applied Mathematics (ICTEAM)
Director of Louvain Interaction Lab
Place des Doyens, 1 – B-1348 Louvain-la-Neuve,
Belgium
4. Gestural Interaction (Francqui Chair, VUB, Brussels, February 20, 2020) 4
Some links
• Web site
https://wise.vub.ac.be/news/francqui-chair-2020-prof-jean-vanderdonckt
• Join me on
• SlideShare: https://www.slideshare.net/jeanvdd
• LinkedIn: https://www.linkedin.com/in/jeanvdd/
• YouTube: https://www.youtube.com/user/jeanvdd
• Amazon: https://www.amazon.com/Jean-Vanderdonckt/e/B01640UKYK
7. 7
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
Psychological dimension: What is a gesture?
Gesture
8. 8
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
• Gesture communication emerges in young children even
before development of language
Psychological dimension: What is a gesture?
Source: S.W. Goodwyn, L.P. Acredolo, C. Brown. (2000). Impact of Symbolic Gesturing on Early Language Development. Journal of
Nonverbal Behavior 24, 81-103
9. 9
• Innate gestures (natural?)
• Gestures that the user intuitively knows or that make sense,
based on the person’s understanding of the world
• Examples
• Pointing to aim at a target
• Grabbing to pick an object
(MS Kinect)
• Pushing to select something
• Learned gestures (less natural => memorability?)
• Gestures the user needs to learn beforehand
• Examples
• Waving to engage
• Making a specific pose
to cancel an action
Psychological dimension: What is a gesture?
10. 10
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
• Gesture communication emerges in young children even
before development of language
• Blind people gesture as they speak just as much as
sighted individuals do, even when they know their listener
is also blind.
Psychological dimension: What is a gesture?
11. 11
• The intersection between the 5 human senses gives rise to
many possible interaction modalities
• Gesture communication emerges in young children even
before development of language
• Blind people gesture as they speak just as much as
sighted individuals do, even when they know their listener
is also blind
• People gesture without a visual model
• Gestures therefore require neither a model nor an
observant partner
Psychological dimension: What is a gesture?
Source: J. M. Iverson, S. Goldin-Meadow. (1998). Why people gesture when they speak. Nature, 396:228
12. 12
• Kendon’s classification of gestures (1972)
Psychological dimension: What is a gesture?
Gesticulation: Spontaneous movements of the hands
and arms that accompany speech
Speech-framed gestures: Gesticulation that is integrated into a spoken
utterance, replacing a particular word
Pantomimes: Gestures that depict objects or actions, with or
without accompanying speech
Emblems: Familiar gestures accepted as a standard
Signs: A complete linguistic system
13. • McNeill’s interpretation of Kendon’s continuum (1994)
Psychological dimension: What is a gesture?
Gesticulation
Speech-framed gestures
Pantomimes
Emblems
Signs
“Adam Kendon once
distinguished gestures
of different kinds along
a continuum that I
named “Kendon's
Continuum”, in his
honor.” [McNeill, 1992]
Source: D. McNeill. Hand and Mind: What Gesture Reveals about Thought. University Chicago Press, 1992
Kendon, A., Do gestures communicate? A review. Research on Language and Social Interaction 27, 1994, 175-200
Mandatory presence
of speech
Optional presence
of speech
Mandatory presence of
speech frames
Optional absence
of speech
Mandatory absence
of speech
14. • McNeill’s interpretation of Kendon’s continuum (1994)
Psychological dimension: What is a gesture?
Gesticulation
Speech-framed gestures
Pantomimes
Emblems
Signs
Mandatory presence
of speech
Optional presence
of speech
Mandatory presence of
speech frames
Optional absence
of speech
Mandatory absence
of speech
“As one moves along Kendon’s
Continuum, two kinds of
reciprocal changes occur. First,
the degree to which speech is an
obligatory accompaniment of
gesture decreases from
gesticulation to signs. Second, the
degree to which a gesture shows
the properties of a language
increases.”
[McNeill, 1992]
15. • McNeill’s interpretation of Kendon’s continuum (1994)
Psychological dimension: What is a gesture?
Gesticulation
Speech-framed gestures
Pantomimes
Emblems
Signs
Source: D. McNeill. Hand and Mind: What Gesture Reveals about Thought. University Chicago Press, 1992.
“Gestures enhance,
complement, and
sometimes even
replace speech.”
16. • “Gestures (…) are communicative movements of the hands
and arms which express — just as language — speakers’
attitudes, ideas, feelings and intentions…” (Müller, 1998)
Psychological dimension: What is a gesture?
17. • Saffer’s definition: “a gesture (…) is any physical
movement that a digital system can sense and
respond to without the aid of a traditional pointing
device, such as a mouse or stylus”
Psychological dimension: What is a gesture?
Source: Saffer, D., Designing Gestural Interfaces, O'Reilly Media, November 2008.
[Diagram: within a context of use (user, platform/device, environment) and subject to disturbances, a gesture is sensed by a sensor, which feeds a gesture recognizer; the recognizer drives an actuator that operates on the system and produces feedback.]
18. • Turk’s definition in Human-Computer Interaction
(2002)
• “…expressive, meaningful body motions, i.e. physical
movements of the fingers, hands, arms, head, face or
body with the intent to convey information or interact
with the environment.”
Psychological dimension: What is a gesture?
Sources: Turk, M. (2002). Gesture Recognition. In K. M. Stanney (Ed.), Handbook of Virtual Environments (pp. 223–237). London:
Lawrence Erlbaum Associates, Publishers.
Isabel Benavente Rodriguez, Nicolai Marquardt, Gesture Elicitation Study on How to Opt-in & Opt-out from Interactions
with Public Displays, Proc. of ISS ‘17, pp. 32-41.
19. • Aigner et al.’s taxonomy of mid-
air gestures
• P= Pointing gestures (= deictic
gestures) indicate people,
objects, directions
Psychological dimension: What is a gesture?
Source: Aigner, R., Wigdor, D., Benko, H., Haller, M., Lindlbauer, D., Ion, A., Zhao, S., et al. (2012). Understanding Mid-Air Hand
Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI.
20. • Aigner et al.’s taxonomy of mid-
air gestures
• Semaphoric gestures are hand
postures and movements
conveying specific meanings
• T= Static semaphorics are identified by a
specific hand posture. Example: a flat palm
facing from the actor means “stop”.
• D= Dynamic semaphorics convey information
through their temporal aspects. Example: a
circular hand motion means “rotate”
• S= Semaphoric strokes are single, stroke-like
movements such as hand flicks. Example: a
left flick of the hand means “dismiss this
object”
Psychological dimension: What is a gesture?
21. • Aigner et al.’s taxonomy of mid-
air gestures
• M= Manipulation gestures
guide movement in a short
feedback loop. Thus, they feature
a tight relationship between the
movements of the actor and the
movements of the object to be
manipulated. The actor waits for
the entity to “follow” before
continuing
Psychological dimension: What is a gesture?
Source: https://www.microsoft.com/en-us/research/publication/understanding-mid-air-hand-gestures-a-study-of-human-preferences-in-usage-of-gesture-types-for-hci/
22. • A gesture is any particular type of body
movement performed in 1D, 2D, or 3D.
• e.g., Hand movement (supination, pronation, etc.)
• e.g., Head movement (lips, eyes, face, etc.)
• e.g., Full body movement (silhouette, posture, etc.)
Psychological dimension: What is a gesture?
Source: Kendon, A. Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press, 2004.
23. • A gesture is any particular type of body
movement performed in 1D, 2D, or 3D
Psychological dimension: What is a gesture?
Individual body part Combined body parts
25. 25
Hardware dimension: How to gesture?
Paradigms of gesture interaction
Contact-based interaction
(surface limitation?)
26. 26
Hardware dimension: How to gesture?
Paradigms of gesture interaction
Contact-based interaction
(surface limitation?)
Contact-less interaction
With wearable
27. 27
Hardware dimension: How to gesture?
Paradigms of gesture interaction
Contact-based interaction
(surface limitation?)
Contact-less interaction
With wearable Without wearable
Close Far
29. 29
Software dimension: Which algorithm?
In window mode / In full screen mode
Source:
https://www.usenix.org/legacy/publications/library/proceedings/usenix03/tech/freenix03/full_papers/worth/worth_html/xstroke.html
XStroke
31. 31
Software dimension: Which algorithm?
Siger (2005)
Training phase
Vector string:
Stroke Vector direction
Left L
Right R
Up U
Down D
Regular expression:
(NE|E|SE)+(NW|N|NE)+(SW|W|NW)+(SE|S|SW)+.
LU,U,U,U,U,U,RU,RU,RU,RU,RU,RU,R,R,R,R,R,R,R,R,RD,RD,RD, RD,D,D,D,D,LD,LD,LD,LD,L,L,LD,LD,LD,D,D,D,D,D,D,D,D,D,L
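The Siger idea shown above (the training phase produces a direction string; recognition matches that string against a gesture's regular expression) can be sketched as follows. This is a simplified illustration using only the eight compass tokens; the function names are illustrative, not taken from the actual Siger toolkit.

```python
import math
import re

def direction_string(points):
    """Tokenize a stroke into compass directions (E, NE, N, NW, W, SW, S, SE),
    one token per pair of consecutive points."""
    compass = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
    tokens = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 360
        tokens.append(compass[int(((angle + 22.5) % 360) // 45)])
    return "".join(tokens)

def matches(points, pattern):
    """A stroke is recognized when its whole direction string
    matches the gesture's regular expression."""
    return re.fullmatch(pattern, direction_string(points)) is not None

# A counter-clockwise square: right, then up, then left, then down (y axis up)
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(matches(square, r"(NE|E|SE)+(NW|N|NE)+(SW|W|NW)+(SE|S|SW)+"))  # True
```

Regex backtracking takes care of the tokenization ambiguity (e.g. whether "NE" in the joined string is one diagonal token or an N followed by an E), which is exactly the flexibility Siger's patterns rely on.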
33. 33
Software dimension: Which algorithm?
Source: Beat Signer, U. Kurmann, Moira C. Norrie: iGesture: A General Gesture Recognition Framework. ICDAR 2007: 954-958
35. 35
Nearest-Neighbor-Classification (NNC) for 2D strokes
[Figure: a candidate gesture (points p1…p4) is compared with reference gestures (points q1…q4, marked x) from a training set, all normalized into the [0..1] x [0..1] square; k-NN keeps the k nearest neighbors, 1-NN the single nearest neighbor, applied to gesture recognition.]
Software dimension: Which algorithm?
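The classification step itself fits in a few lines: after pre-processing, a candidate gesture is assigned the label of its nearest reference gesture. A minimal sketch, assuming both gestures were already resampled to the same number of points; the function names are illustrative.

```python
import math

def gesture_distance(candidate, reference):
    """Mean Euclidean distance between corresponding points of two
    gestures resampled to the same number of points."""
    return sum(math.dist(p, q) for p, q in zip(candidate, reference)) / len(candidate)

def classify_1nn(candidate, training_set):
    """training_set is a list of (label, points) pairs; return the label
    of the single nearest reference gesture (1-NN)."""
    best_label, _ = min(training_set,
                        key=lambda ref: gesture_distance(candidate, ref[1]))
    return best_label
```

Replacing `min` by sorting and taking the k smallest distances, then voting over their labels, gives the k-NN variant shown in the figure.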
36. 36
Nearest-Neighbor-Classification (NNC)
• Pre-processing steps to ensure invariance
• Re-sampling
• Points equally spaced along the path: isometricity
• Points equally spaced in time: isochronicity
• Same number of points: isoparameterization
• Re-Scaling
• Normalisation of the bounding box into [0..1]x[0..1] square
• Rotation to reference angle
• Rotate to 0°
• Re-rotating and distance computation
• Distance computed between candidate gesture and
reference gestures (1-NN)
Software dimension: Which algorithm?
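Two of the pre-processing steps listed above (re-sampling into an equal number of equidistant points, then normalizing the bounding box into the [0..1] x [0..1] square) can be sketched as follows. Rotation to a reference angle is omitted for brevity, and the re-sampling follows the classic $1-recognizer scheme rather than any specific implementation from the lecture.

```python
import math

def resample(points, n=64):
    """Re-sample a stroke into n points equally spaced along its path
    (isometric and isoparameterized)."""
    pts = list(points)
    step = sum(math.dist(a, b) for a, b in zip(pts, pts[1:])) / (n - 1)
    out, acc = [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            x = pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0])
            y = pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])
            out.append((x, y))
            pts.insert(i, (x, y))  # measure the next step from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def rescale(points):
    """Normalize the bounding box into the [0..1] x [0..1] square."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0  # avoid dividing by zero for flat strokes
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]
```

After these two steps, any pair of gestures can be compared point by point regardless of where, how large, and with how many raw samples they were drawn.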
37. 37
Nearest-Neighbor-Classification (NNC)
• Two families of approaches
• “Between points” distance
• $-Family recognizers: $1, $3, $N, $P, $P+,
$V, $Q,…
• Variants and optimizations: ProTractor,
Protractor3D,…
• “Vector between points” distance
• PennyPincher, JackKnife,…
[Vatavu R.-D. et al, ICMI ’12]
[Taranta E.M. et al, C&G ’16]
Software dimension: Which algorithm?
38. 38
Nearest-Neighbor-Classification (NNC)
• Two families of approaches
• “Between points” distance
• $-Family recognizers: $1, $3, $N, $P, $P+,
$V, $Q,…
• Variants and optimizations: ProTractor,
Protractor3D,…
• “Vector between points” distance
• PennyPincher, JackKnife,…
• A third new family of approaches
• “Vector between vectors” distance:
our approach
Software dimension: Which algorithm?
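The difference between the first two families can be illustrated with a sketch of the "vector between points" idea, in the spirit of PennyPincher, which compares the directions of successive between-point vectors rather than the point positions themselves. This is a simplified illustration, assuming both gestures are resampled to the same number of points, not PennyPincher's exact formulation.

```python
import math

def unit_vectors(points):
    """Unit vectors between consecutive points of a resampled stroke."""
    vecs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0  # guard against repeated points
        vecs.append((dx / norm, dy / norm))
    return vecs

def vector_similarity(candidate, reference):
    """Sum of dot products of corresponding unit vectors; the training
    gesture with the highest similarity wins."""
    return sum(ux * vx + uy * vy
               for (ux, uy), (vx, vy) in zip(unit_vectors(candidate),
                                             unit_vectors(reference)))
```

Because only directions are compared, this distance is inherently position- and scale-invariant, which is what makes the vector-based family attractive.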
39. 39
• Local Shape Distance between 2 triangles based on
similarity (Roselli’s distance)
[Figure: two triangles, one spanned by vectors a and b (with third side a + b), the other by vectors u and v (with third side u + v).]
Paolo Roselli
Università degli Studi di Roma, Italy
Software dimension: Which algorithm?
Source: Lorenzo Luzzi & Paolo Roselli, The shape of planar smooth gestures and the convergence of a gesture recognizer, Aequationes Mathematicae, 94, 219–233 (2020).
40. 40
• Step 1. Vectorization for each pair of vectors between
three consecutive points
[Figure: the training gesture is sampled as points p1…p6 and the candidate gesture as points q1…q6.]
Software dimension: Which algorithm?
Source: Jean Vanderdonckt, Paolo Roselli, Jorge Luis Pérez-Medina, !FTL, an Articulation-Invariant Stroke Gesture Recognizer with
Controllable Position, Scale, and Rotation Invariances. ICMI 2018: 125-134
41. 41
• Step 1. Vectorization for each pair of vectors between
three consecutive points
[Figure: for both the training gesture (p1…p6) and the candidate gesture (q1…q6), each run of three consecutive points yields a pair of vectors.]
Software dimension: Which algorithm?
42. 42
• Step 2. Mapping candidate’s triangles onto training
gesture’s triangles
[Figure: each candidate triangle (q1q2q3, q2q3q4, q3q4q5, q4q5q6) is mapped onto the corresponding training triangle (p1p2p3, p2p3p4, p3p4p5, p4p5p6).]
Software dimension: Which algorithm?
44. 44
• Step 3. Computation of Local Shape Distance between
pairs of triangles
[Figure: the Local Shape Distance is computed between each pair of corresponding triangles:]
(N)LSD(p1p2p3, q1q2q3) = 0.02
(N)LSD(p2p3p4, q2q3q4) = 0.04
(N)LSD(p3p4p5, q3q4q5) = 0.0001
(N)LSD(p4p5p6, q4q5q6) = 0.03
Software dimension: Which algorithm?
Software dimension: Which algorithm?
45. 45
• Step 4. Summing all individual figures into final one
• Step 5. Iterate for every training gesture
[Figure: the four local distances are summed into the final dissimilarity:]
(N)LSD(p1p2p3, q1q2q3) + (N)LSD(p2p3p4, q2q3q4) + (N)LSD(p3p4p5, q3q4q5) + (N)LSD(p4p5p6, q4q5q6)
= 0.02 + 0.04 + 0.0001 + 0.03 = 0.0901
(indicative figures)
Software dimension: Which algorithm?
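Steps 1–5 can be sketched end to end. The local_shape_distance below is a deliberately simplified triangle dissimilarity (it compares only the turning angle between the two vectors of each triangle) used as a stand-in for Roselli's Local Shape Distance; it is not the exact formula from the !FTL paper, and all names are illustrative.

```python
import math

def triangles(points):
    """Step 1: one pair of vectors per run of three consecutive points."""
    return [((p2[0] - p1[0], p2[1] - p1[1]),
             (p3[0] - p2[0], p3[1] - p2[1]))
            for p1, p2, p3 in zip(points, points[1:], points[2:])]

def local_shape_distance(t1, t2):
    """Step 3 (stand-in): compare the turning angles of two triangles,
    giving a value in [0, 1]. NOT Roselli's exact formula."""
    (a, b), (u, v) = t1, t2
    def angle(w):
        return math.atan2(w[1], w[0])
    turn1 = (angle(b) - angle(a)) % (2 * math.pi)
    turn2 = (angle(v) - angle(u)) % (2 * math.pi)
    d = abs(turn1 - turn2)
    return min(d, 2 * math.pi - d) / math.pi

def gesture_distance(candidate, training):
    """Steps 2 and 4: map triangles pairwise and sum the local distances
    (assumes both gestures have the same number of points)."""
    return sum(local_shape_distance(t, r)
               for t, r in zip(triangles(candidate), triangles(training)))

def recognize(candidate, training_set):
    """Step 5: iterate over every training gesture; the closest label wins."""
    label, _ = min(training_set,
                   key=lambda kv: gesture_distance(candidate, kv[1]))
    return label
```

Because the comparison is built from relative vector pairs, the sketch inherits the position invariance the slides emphasize; scale and rotation control would come from the choice of local distance.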
46. 3D Hand gesture recognition
Software dimension: Which algorithm?
47. Full body gesture recognition
Software dimension: Which algorithm?
See video at
https://youtu.be/RTEGMlDRDL0
50. • Smart Home: TV, fridge, coffee machine,…
• Example: Samsung Smart TV
Usage dimension: Which application domains?
51. • Ring device: gesture elicitation study
Usage dimension: Which application domains?
Source: Bogdan-Florin Gheran, Jean Vanderdonckt, Radu-Daniel Vatavu, Gestures for Smart Rings: Empirical Results, Insights, and
Design Implications. Conference on Designing Interactive Systems 2018: 623-635
52. • Ring device at Home (Family management)
Usage dimension: Which application domains?
53. • Ring device at Home (Family management)
Usage dimension: Which application domains?
55. • History: chironomia
Social dimension: critical factors
Source: Gilbert Austin, Chironomia, or a Treatise on Rhetorical Delivery (1806). Ed. Mary Margaret Robb and Lester Thonssen.
Carbondale, IL: Southern Illinois UP, 1966.
56. Social dimension: critical factors
Gestures in Movies
• 12 Angry Men
(dir.: S. Lumet)
The defense and the prosecution have rested and the
jury is filing into the jury room to decide if a young man
is guilty or innocent of murdering his father. What
begins as an open-and-shut case of murder soon
becomes a detective story that presents a succession
of clues creating doubt, and a mini-drama of each of the
jurors' prejudices and preconceptions about the trial,
the accused, and each other. Based on the play, all of
the action takes place on the stage of the jury room.
58. Range of motion
• Relates to the distance between the human body producing the
gesture and the location where the gesture is sensed
• Possible values are:
• C= Close intimate, I= Intimate, P= Personal, S=Social,
U= Public, R= Remote
Social dimension: critical factors
59. • Cultural influence and interpretation
• The gesture ”The ring” has four major meanings:
OK/Good, Orifice, Zero, Threat
Social dimension: critical factors
Source: Morris, Collett, Marsh, & O’Shaughnessy (1979)
• “Very tasty”: how to gesture that?
• “Select”: how to gesture that?
(Italy: pointing; Sweden: open hand; Turkey: two open hands)
62. • Engagement by propagation:
achieved only through careful consideration of social gestures
Social dimension: critical factors
Source: Jean-Yves Lionel Lawson, Jean Vanderdonckt, Radu-Daniel Vatavu: Mass-Computer Interaction for Thousands of Users and
Beyond. CHI Extended Abstracts 2018
www.skemmi.com
See video at https://www.youtube.com/watch?v=IZaAl59AUk8
63. • Social acceptance or reluctance?
Social dimension: critical factors
What do you think of the Itchy Nose?
See video at https://www.youtube.com/watch?v=IQ_LkPM_GHs
64. • Social acceptance or reluctance?
Social dimension: critical factors
[Bar chart: agreement rates (roughly 0.10–0.51) for nose-based gestures —
e.g., both-side double tap, center tap, left/right push, both-hands tap,
left-to-right and right-to-left flicks, continuous rubbing, top push,
top-to-bottom flick — across ten referents: Start Player, Go to Previous
Item, Turn Alarm Off, Increase Volume, Turn Light On, Decrease Volume,
Turn TV Off, Turn TV On, Dim Light, Turn Alarm On]
Source: Jorge Luis Pérez-Medina, Santiago Villarreal, Jean Vanderdonckt: A Gesture Elicitation Study of Nose-Based Gestures.
Sensors 20(24): 7118 (2020)
67. • Compatibility
• Imposition: each OS imposes its own set of gestures
• Some are natural to use
• Some others are not natural at all and remain unused
• Lack of acceptability:
• Some gestures are simple for the system to recognize,
yet hard for end users to remember and reproduce
• Some gestures are accepted by end users, but harder for the
system to recognize
• Gestures are often system-defined, sometimes
designer-defined, rarely user-defined
• Need for a Gesture Elicitation Study (GES)
• A GES asks end users to elicit their own gestures for a set of
predefined functions, presented as referents, and to reach a
consensus
User experience dimension: evaluation criteria
68. • Compatibility
• Need for a natural conceptual model: Virtual Library
User experience dimension: evaluation criteria
Source: https://www.youtube.com/watch?v=ls5kj7oVwto
69. • Consistency
• Imposition: each OS imposes its own set of gestures
• A few gestures are common (hopefully, natural)
• Most other gestures are inconsistent
User experience dimension: evaluation criteria
Source: Ryan Lee, www.gesturecons.com
71. • Consistency: standardization?
• Gesture example “Shake”: Wake up, Update, Reset, Next
track, Shuffle, Unlock, Enter a comment (SnappView)
User experience dimension: evaluation criteria
72. • Discoverability
• GUI interaction is based on
• action exploration, e.g., through menus
• recognition rather than recall (best)
• Gestures are not easy to discover
• Solutions appear: feedforward (e.g., OctoPocus)
User experience dimension: evaluation criteria
Sources: Donald A. Norman, J. Nielsen, Gestural interfaces: a step backward in usability, Interactions, V7N5, Sept. 2010
Olivier Bau, Wendy E. Mackay, OctoPocus: a dynamic guide for learning gesture-based command sets. UIST 2008: 37-46
73. • Control: explicit, mixed, not implicit
• “When users think they did one thing but actually did
something else, they lose their sense of controlling the
system because they don't understand the connection
between actions and results.”
User experience dimension: evaluation criteria
Sources: Donald A. Norman, J. Nielsen, Gestural interfaces: a step backward in usability, Interactions, V7N5, Sept. 2010.
Bert Schiettecatte, Jean Vanderdonckt, AudioCubes: a distributed cube tangible interface based on interaction range for
sound design. Tangible and Embedded Interaction 2008: 3-10
Commands → AudioCube action(s):
• DOF=0 (discrete) → grab and set
• DOF=1 (linearly correlated) → move cube in 2D, almost 3D, 3D
75. • Control: explicit, mixed, not implicit
User experience dimension: evaluation criteria
Commands → AudioCube action(s):
• DOF=1 (rotationally correlated) → rotate in 2D, 3D
• DOF=2 (freeform) → 2D, 3D gestures
76. • Physical demand depends on variables
• Gesture form: specifies which form of gesture is elicited.
Possible values are:
• S= stroke, when the gesture only consists of taps and flicks
• T= static, when the gesture is performed as a static pose in only one location
• M= static with motion, when the gesture is performed with a
static pose while the rest of the body is moving
• D= dynamic, when the gesture captures any change or motion
User experience dimension: evaluation criteria
77. • Physical demand depends on variables
• Laterality: characterizes how the two hands are
employed to produce gestures, with two categories, as
done in many studies. Possible values are:
• D= dominant unimanual, N= non-dominant unimanual,
S= symmetric bimanual, A= asymmetric bimanual
User experience dimension: evaluation criteria
Source: https://www.tandfonline.com/doi/abs/10.1080/00222895.1987.10735426
D
(right handed)
N
(right handed)
S
(right handed)
A
(right handed)
78. • Agreement among end users
• Agreement Rate = the number of pairs of participants in
agreement with each other divided by the total number
of pairs of participants that could be in agreement
• Compute co-agreement for pairs, groups (eg male vs
female), categories of referents (eg basic vs. advanced)
User experience dimension: evaluation criteria
agreement rate disagreement rate co-agreement rate
Source: Radu-Daniel Vatavu, Jacob O. Wobbrock, Between-Subjects Elicitation Studies: Formalization and Tool Support. CHI 2016:
3390-3402.
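The agreement-rate definition above can be sketched in Python. This is a minimal illustration of the pairwise formulation from Vatavu and Wobbrock: pairs of participants proposing the same gesture for a referent, divided by all possible pairs. Function and variable names are mine, and the group- and co-agreement analyses from the cited paper are not included.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate for one referent: the number of pairs of
    participants in agreement, divided by the total number of
    pairs of participants that could be in agreement."""
    n = len(proposals)
    if n < 2:
        return 0.0
    counts = Counter(proposals)  # multiplicity of each proposed gesture
    pairs_in_agreement = sum(k * (k - 1) // 2 for k in counts.values())
    total_pairs = n * (n - 1) // 2
    return pairs_in_agreement / total_pairs

# 20 participants: 10 propose "swipe", 5 "tap", 5 "circle"
print(agreement_rate(["swipe"] * 10 + ["tap"] * 5 + ["circle"] * 5))  # ≈ 0.342
```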
79. • FUN!
• In games, all gestures are permitted (body)
• In professional contexts, a gesture could be
considered awkward or inappropriate
User experience dimension: evaluation criteria
Example: MiniEurope (Alterface)
81.
• Gesture interaction is suitable for
• Natural interactions: interact directly with objects in physical way
• Less cumbersome or visible hardware
• Flexibility in hardware
• Fun
• Gesture interaction is NOT suitable for
• Heavy data input (use keyboards instead)
• Absence of visual feedback (e.g., a system without a screen or
targeting users with visual impairments)
• Unmet physical demands (e.g., swipe to receive a phone call in
winter)
• Constrained contexts of use (e.g., privacy, embarrassment)
• Suitability always depends on the context of use:
• User and task
• Platform/device
• Environment
Conclusion: is it really natural?
Kendon [12] added that an important part of ‘kinesics’ research shows that gesture phrases can be organized in relation to speech phrases. His arguments and reasoning can be paralleled with pen gestures (as natural human gestures) and the instantiated sketch-objects (as the dictated speech contents). He also stated that there is a consistent patterning in how gesture phrases are formed in relation to the phrases of speech: just as, in continuous discourse, speakers group tone units into higher-order groupings resembling a hierarchy, so gesture phrases may be similarly organized.
Gestures that are put together to form phrases of bodily actions have the characteristics that permit them to be ‘recognized’ as components of willing communicative action.
• Designed for speech-related gestures: not completely relevant for interaction design
• Gesture-based interaction without the support of speech input: tailor-made for interaction design
Chironomia is the art of using gesticulations or hand gestures to good effect in traditional rhetoric or oratory. Effective use of the hands, with or without the use of the voice, is a practice of great antiquity, which was developed and systematized by the Greeks and the Romans. Various gestures had conventionalized meanings which were commonly understood, either within certain class or professional groups, or broadly among dramatic and oratorical audiences.