This presentation contains the slides used in the ACM Distinguished Program lecture "Pen-based Gestures and Sketching", presented at the University of Suceava on November 15th, 2018.
User Experience Showcase lightning talks - University of Edinburgh (Neil Allison)
Lightning talk slide decks from a University of Edinburgh User Experience event held 13 October 2017. Topics: User needs, Web strategy, Digital Standards, Edinburgh Global Experience Language, Current student UX case study.
This presentation covers a mini project I worked on over a one-week period. It is about collecting free-text feedback from event attendees and storing it in a repository as RDF triples, which could then be used for applications such as profiling and recommendation. In essence, the full system would be very large and would need a lot of time and resources to develop; taking it from concept to realization involves many disciplines, such as the Semantic Web, recommender systems, and NLP. For simplicity and due to other constraints, this first iteration is limited to a proof of concept and a basic layout of the infrastructure I envisage, which is yet to be realized in the near future.
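The core idea of the mini project, storing free-text feedback as triples and querying them later for profiling or recommendation, can be sketched without any Semantic Web stack. A minimal Python illustration follows; all names in it (the `ex:` prefix, the predicates, the attendees) are invented for this sketch and are not from the original project:

```python
# Minimal illustration: event feedback stored as (subject, predicate, object)
# triples, as one might later serialize to RDF. The vocabulary is invented.
triples = []

def add_feedback(attendee, event, text, topics):
    """Store one piece of free-text feedback as a set of triples."""
    fb = f"ex:feedback/{len(triples)}"          # simple opaque identifier
    triples.append((fb, "ex:givenBy", attendee))
    triples.append((fb, "ex:aboutEvent", event))
    triples.append((fb, "ex:text", text))
    for topic in topics:                        # e.g. extracted by an NLP step
        triples.append((fb, "ex:mentions", topic))

def query(predicate, obj):
    """Return subjects of all triples matching (?, predicate, obj)."""
    return [s for s, p, o in triples if p == predicate and o == obj]

add_feedback("ex:alice", "ex:devconf", "Great talk on RDF!", ["ex:topic/rdf"])
add_feedback("ex:bob", "ex:devconf", "Too short.", [])

# Profiling or recommendation would then start from simple queries like:
print(query("ex:aboutEvent", "ex:devconf"))  # both feedback nodes
```

In a real system the list of tuples would be replaced by a triple store queried with SPARQL, but the data model is the same.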
How user research shaped the thinking towards developing our institutions cen... (Brendan Owers)
How user research shaped the thinking towards developing our institution's central web portal, presented at ALT-C 2019.
https://altc.alt.ac.uk/2019/sessions/a-013/
Design and education: What tools and processes are new designers adopting? (Mafalda Sequeira)
Design and Education presentation at the Edit Sketch event on March 4th in Lisbon. I talked about my view of the current state of digital design education and which tools and processes new designers are adopting.
News recommenders have the potential to help users filter the enormous amount of news that is available online, and as such may play an important role in determining what information users do and do not get to see. However, current approaches to evaluating recommender systems are often focused on measuring an increase in user clicks and short-term engagement, rather than measuring the user's and society's longer-term interest in diverse and important recommendations. In this talk we aim to bridge the gap between so-called normative notions of news diversity, as known in the social sciences and specifically democratic theory, and the quantitative metrics necessary for evaluating a recommender system. We discuss a number of democratic missions a recommender system could have, together with a set of evaluation metrics stemming from these missions, and suggest ways to implement these metrics in practice.
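One concrete instance of such a quantitative diversity metric (not necessarily one of the metrics proposed in the talk) is the Shannon entropy of the topic distribution in a recommendation list, which rewards lists spread across many topics over lists dominated by one. A minimal sketch:

```python
import math

def topic_entropy(recommended_topics):
    """Shannon entropy (in bits) of the topic distribution in a
    recommendation list; higher means a more diverse list."""
    counts = {}
    for t in recommended_topics:
        counts[t] = counts.get(t, 0) + 1
    n = len(recommended_topics)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A list dominated by one topic scores low...
print(topic_entropy(["politics"] * 9 + ["sports"]))                 # ~0.469
# ...while an evenly mixed list scores high.
print(topic_entropy(["politics", "sports", "economy", "culture"]))  # 2.0
```

Normative diversity, as the talk argues, asks for more than topical spread, but even this simple metric already captures something that click counts cannot.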
The talk will be about practical considerations that our team has had to make in order to bring a recommender system into production. I’ll cover the “default” tools with which we started (Batch processing in Spark) and follow that up with more recent tools like AWS Lambda and Spark Streaming.
In data visualization courses, students learn to present data in visual form. This involves working with data, learning new software, and applying visual design principles. However, visualizations are only as effective as the insights they reveal.
These slides are from a 45-minute webinar that covers how to teach students to tell data-driven stories with visualizations, including:
Storytelling best practices
Techniques for helping others see key takeaways from visuals
The use of software to support data presentation in visual form
Tableau’s unique annotation, animation, and interactive features will be highlighted to demonstrate how they can support data-driven stories.
To the end of our possibilities with Adaptive User Interfaces (Jean Vanderdonckt)
Slides of the keynote presented at the 1st International Workshop on Human-in-the-Loop Applied Machine Learning (HITLAML '23), September 4-6, 2023, Belval, Luxembourg.
This presentation summarizes the evolution of techniques used to adapt user interfaces to the context of use, which comprises the user, the platform, and the environment.
Engineering the Transition of Interactive Collaborative Software from Cloud C... (Jean Vanderdonckt)
Paper presented at EICS '22: https://dl.acm.org/doi/10.1145/3532210
The "Software as a Service" (SaaS) model of cloud computing popularized online multiuser collaborative software. Two famous examples of this class of software are Office 365 from Microsoft and Google Workspace. Cloud technology removes the need to install and update the software on end users' computers and provides the necessary underlying infrastructure for online collaboration. However, to provide a good end-user experience, cloud services require an infrastructure able to scale up to the task and allow low-latency interactions with a variety of users worldwide. This is a limiting factor for actors that do not possess such infrastructure. Unlike cloud computing, which ignores the computational and interactional capabilities of end users' devices, the edge computing paradigm promises to exploit them as much as possible. To investigate the potential of edge computing over cloud computing, this paper presents a method for engineering interactive collaborative software supported by edge devices as a replacement for cloud computing resources. Our method is able to handle user interface aspects such as connection, execution, migration, and disconnection differently depending on the available technology. We exemplify our approach by developing a distributed Pictionary game deployed in two scenarios: a nonshared scenario where each participant interacts only with their own device and a shared scenario where participants also share a common device, including a TV. After a theoretical comparative study of edge vs. cloud computing, an experiment compares the two implementations to determine their effect on the end user's perceived experience and on perceived vs. real latency.
More Related Content
Similar to Pen-based Gestures and Sketching User Interfaces
UsyBus: A Communication Framework among Reusable Agents integrating Eye-Track... (Jean Vanderdonckt)
Presentation of ACM EICS '22 paper: https://dl.acm.org/doi/10.1145/3532207
Eye movement analysis is a popular method to evaluate whether a user interface meets the users' requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually and exhaustively, and must be redefined each time the user interface changes. This process is also error-prone, since eye movement data must be finely synchronised with user interface changes. These issues become more serious when the user interface layout changes dynamically in response to user actions. In addition, current tools do not allow easy integration into interactive applications, and opportunistic code must be written to link these tools to user interfaces. To address these shortcomings and to leverage the capabilities of eye-tracking, we present UsyBus, a communication framework for autonomous, tight coupling among reusable agents. These agents are responsible for collecting data from eye-trackers, analyzing eye movements, and managing communication with other modules of an interactive application. UsyBus allows multiple heterogeneous eye-trackers as input and provides multiple configurable outputs depending on the data to be exploited. Modules exchange data based on the UsyBus communication framework, thus creating a customizable multi-agent architecture. UsyBus application domains range from usability evaluation to the design of gaze interaction applications. Two case studies, composed of reusable modules from our portfolio, exemplify the implementation of the UsyBus framework.
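As an illustration only, the decoupled agent communication that UsyBus enables can be approximated by a tiny publish/subscribe bus; the channel names, message fields, and fixation criterion below are invented for this sketch and are not UsyBus's actual API:

```python
class Bus:
    """Toy publish/subscribe bus: agents subscribe to named channels
    and receive every message published on them."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, channel, callback):
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        for callback in self.subscribers.get(channel, []):
            callback(message)

bus = Bus()
fixations = []

# A hypothetical "analysis agent" consumes raw gaze samples and
# republishes detected fixations, decoupled from the eye-tracker driver.
def analysis_agent(sample):
    if sample["duration_ms"] > 100:            # naive fixation criterion
        bus.publish("fixations", sample)

bus.subscribe("gaze", analysis_agent)
bus.subscribe("fixations", fixations.append)   # e.g. a logging agent

bus.publish("gaze", {"x": 10, "y": 20, "duration_ms": 150})
bus.publish("gaze", {"x": 11, "y": 21, "duration_ms": 30})
print(len(fixations))  # 1
```

The point of the bus architecture is that the eye-tracker driver, the analysis agent, and the application never reference each other directly, so any of them can be swapped for a heterogeneous alternative.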
µV: An Articulation, Rotation, Scaling, and Translation Invariant (ARST) Mult... (Jean Vanderdonckt)
Paper presented at ACM EICS '22
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multistroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multistroke gesture recognizer that addresses the properties of articulation, rotation, scaling, and translation invariance by combining $P+'s cloud matching for articulation invariance with !FTL's local shape distance for RST invariance. We evaluate µV against five competitive recognizers on MMG, an existing gesture set, and on two new versions for smartphones and tablets, MMG+ and RMMG+, a randomly rotated version on both platforms. µV is significantly more accurate than its predecessors when rotation invariance is required and not significantly inferior when it is not. µV is also significantly faster than the others with many samples and not significantly slower with few samples.
RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowle... (Jean Vanderdonckt)
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered across the scientific literature in different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address these aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations, towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize, and identify user-defined gestures from 216 published gesture elicitation studies.
Gesture-based information systems: from DesignOps to DevOps (Jean Vanderdonckt)
Keynote address for the 29th International Conference on Information Systems Development ISD'2021 (Valencia, Spain, September 8-10, 2021). See https://isd2021.webs.upv.es/program.php#keynotes
This talk promotes the Seven I's:
Implementation continuity
Inclusion of end-users
Interaction first
Integration among stakeholders
Iteration short
Incremental progress
Innovation openness
Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for the reconfiguration, concrete interaction, unit allocation, and widget selection and implement it in JavaScript. In a first experiment, we determine display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
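The state-transition diagram of preferred display configurations mentioned above can be represented as a simple transition table; the configuration and action names below are invented for illustration and are not those reported for E3Screen:

```python
# Hypothetical display configurations for a laptop with two lateral
# displays; names and allowed transitions are illustrative only.
transitions = {
    ("closed", "slide_left"):         "left_extended",
    ("closed", "slide_right"):        "right_extended",
    ("left_extended", "slide_right"): "both_extended",
    ("right_extended", "slide_left"): "both_extended",
    ("both_extended", "fold"):        "closed",
}

def next_config(state, action):
    """Return the next display configuration, or stay in the current
    one if the action is not allowed in this state."""
    return transitions.get((state, action), state)

state = "closed"
for action in ["slide_left", "slide_right", "fold"]:
    state = next_config(state, action)
print(state)  # back to "closed"
```

Reconfiguration rules of the kind the second experiment addresses can then be expressed as entries in (or constraints over) this table, which is what makes a model-based treatment of plasticity tractable.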
Conducting a Gesture Elicitation Study: How to Get the Best Gestures From Peo... (Jean Vanderdonckt)
Lecture 3: Conducting a Gesture Elicitation Study: How to Get the Best Gestures From People?
Francqui Chair in Computer Science 2020 VUB, Jean Vanderdonckt, 27 April 2021
User-centred Development of a Clinical Decision-support System for Breast Can... (Jean Vanderdonckt)
See the paper at https://www.scitepress.org/Link.aspx?doi=10.5220/0010258900600071
We conducted a user-centered design of a clinical decision-support system for breast cancer screening, diagnosis, and reporting based on stroke gestures. We combined knowledge elicitation interviews, scenario-focused questionnaires, and paper mock-ups to understand user needs. Multi-fidelity (low and high) prototypes were designed and compared, first in vitro in a usability laboratory, then in vivo in the real world. The resulting user interface provides radiologists with a platform that integrates domain-oriented tools for the visualization of mammograms and the manual and semi-automatic annotation of breast cancer findings based on stroke gestures. The contribution of this work lies in that, to the best of our knowledge, stroke gestures have not yet been applied to the annotation of mammograms. On the one hand, although there is a substantial amount of research on stroke-based interaction, none focuses specifically on the domain of breast cancer annotation. On the other hand, typical interactions in breast cancer annotation tools are performed with a keyboard and a mouse.
Simplifying the Development of Cross-Platform Web User Interfaces by Collabo... (Jean Vanderdonckt)
Ensuring responsive design of web applications requires their user interfaces to be able to adapt to different contexts of use, which subsume the end users, the devices and platforms used to carry out the interactive tasks, and the environment in which they occur. To address the challenges posed by responsive design, aiming to simplify development by factoring out the common parts from the specific ones, this paper presents Quill, a web-based development environment that enables the various stakeholders of a web application to collaboratively adopt a model-based design of the user interface for cross-platform deployment. The paper establishes a series of requirements for collaborative model-based design of cross-platform web user interfaces motivated by the literature, observational and situational design. It then elaborates on potential solutions that satisfy these requirements and explains the solution selected for Quill. A user survey was conducted to determine how stakeholders appreciate model-based user interface design and how they estimate the importance of the requirements that led to Quill.
Detachable user interfaces consist of graphical user interfaces whose parts, or whole, can be detached at run-time from their host, migrated onto another computing platform while carrying out the task, possibly adapted to the new platform, and attached to the target platform in a peer-to-peer fashion. Detaching is the property of splitting a part of a UI to transfer it onto another platform. Attaching is the reciprocal property: a part of an existing interface can be attached to the interface currently in use so as to recompose another one on demand, according to the user's needs and task requirements. Assembling interface parts by detaching and attaching allows dynamically composing, decomposing, and re-composing new interfaces on demand. To support this interaction paradigm, a development infrastructure has been developed based on a series of primitives such as display, undisplay, copy, expose, return, transfer, delegate, and switch. We exemplify it with QTkDraw, a painting application with attaching and detaching based on this development infrastructure.
The Impact of Comfortable Viewing Positions on Smart TV Gestures (Jean Vanderdonckt)
Whereas gesture elicitation studies for TV interaction assume that participants adopt an upright, frontal viewing position, we asked 21 participants to hold a natural, comfortable viewing position, the posture they adopt when watching TV at home. By involving a broad selection of users regarding age and profession, our study targets a higher ecological validity than existing studies. Agreement rates were lower than in existing studies using an upright, frontal viewing position. Participants experienced problems due to (1) having to use their non-dominant hand instead of their dominant hand, (2) being in a head orientation that made some physical movements more difficult to perform, and (3) being hindered in their movement by the sofa they lay on. Since each person may adopt a different position inducing different gestures due to the aforementioned problems, the effect of a comfortable viewing position is analyzed by comparison to gestures for a frontal position.
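The agreement rates discussed above are conventionally computed, in gesture elicitation studies, with the formula of Vatavu and Wobbrock: for a referent r whose proposals P partition into groups P_i of identical gestures, AR(r) = sum_i |P_i|(|P_i| - 1) / (|P|(|P| - 1)). A small sketch (the gesture labels are invented, not data from this study):

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent (Vatavu & Wobbrock):
    `proposals` is a list of gesture labels, one per participant;
    identical labels form the groups P_i."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# 21 participants: say 10 propose "swipe", 6 "point", 5 "grab":
print(agreement_rate(["swipe"] * 10 + ["point"] * 6 + ["grab"] * 5))  # ~0.333
```

A comfortable-position study reporting lower agreement rates than frontal-position studies means these groups are more fragmented, exactly as the problems listed above would predict.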
Head and Shoulders Gestures: Exploring User-Defined Gestures with Upper Body (Jean Vanderdonckt)
This paper presents empirical results about user-defined gestures for head and shoulders by analyzing 308 gestures elicited from 22 participants for 14 referents materializing 14 different types of tasks in an IoT context of use. We report an overall medium consensus but with medium variance (mean: .263, min: .138, max: .390 on the unit scale) between participants' gesture proposals, while their thinking times were less similar (min: 2.45 sec, max: 22.50 sec), which suggests that head and shoulders gestures are not all equally easy to imagine and produce. We point to the challenges of deciding which head and shoulders gestures will become the consensus set based on four criteria: the agreement rate, their individual frequency, their associative frequency, and their unicity.
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213794
G-Menu: A Keyword-by-Gesture based Dynamic Menu Interface for Smartphones (Jean Vanderdonckt)
Instead of relying on graphical or vocal modalities for searching an item by keyword (called the K-Menu), this paper presents the G-Menu, exploiting gesture interaction and gesture recognition: when a user sketches a keyword by gesturing the first letters of its label, a menu with items related to the recognized letters is constructed dynamically and presented to the user for selection and auto-completion. The selection can be completed either gesturally by an appropriate gesture (called the G-Menu) or by touch only (called the T-Menu). This paper compares the three types of menu, i.e., by keyword, by gesture, and by touch, in a user study with twenty participants on their item selection time (for measuring task efficiency), their error rate (for measuring task effectiveness), and their subjective satisfaction (for measuring user satisfaction).
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213790
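At its core, the dynamic menu construction described above is incremental prefix filtering: each newly recognized letter narrows the list of candidate items. A simplified sketch with an invented item set (the real G-Menu also handles recognition uncertainty, which this ignores):

```python
ITEMS = ["Calendar", "Calculator", "Camera", "Contacts", "Clock"]

def menu_for(recognized_letters):
    """Items whose label starts with the letters recognized so far,
    as a dynamic menu would present them for selection."""
    prefix = recognized_letters.lower()
    return [item for item in ITEMS if item.lower().startswith(prefix)]

# As the user gestures "C", then "a", then "l", the menu shrinks:
print(menu_for("c"))    # all five items
print(menu_for("ca"))   # ['Calendar', 'Calculator', 'Camera']
print(menu_for("cal"))  # ['Calendar', 'Calculator']
```

Auto-completion follows naturally: once the filtered list is short enough, the user selects (gesturally or by touch) instead of writing the remaining letters.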
Unistroke and multistroke gesture recognizers have always striven to reach some robustness with respect to all the variations encountered when people issue gestures by hand on touch surfaces or with sensing devices. For this purpose, successful stroke recognizers rely on a gesture recognition algorithm that satisfies a series of invariance properties, such as stroke-order invariance, stroke-number invariance, stroke-direction invariance, and position, scale, and rotation invariance. Before initiating any recognition activity, these algorithms ensure these properties by performing several pre-processing operations. These operations induce an additional computational cost to the recognition process, as well as a potential error bias. To cope with this problem, we introduce an algorithm that ensures all these properties analytically instead of statistically, based on vector algebra. Instead of points, the recognition algorithm works on vectors between points. We demonstrate that this approach not only eliminates the need for these pre-processing operations but also satisfies an entire set of structure-preserving transformations.
Paper available at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A217006
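The "vectors instead of points" idea can be illustrated with the simplest of the invariance properties: representing a stroke by the displacement vectors between consecutive points makes the representation translation-invariant by construction, with no pre-processing pass. This sketch shows only that property and is not the paper's actual algorithm:

```python
def displacement_vectors(points):
    """Represent a stroke by vectors between consecutive points;
    translating every point leaves this representation unchanged."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

stroke = [(0, 0), (3, 4), (3, 9)]
shifted = [(x + 10, y - 2) for x, y in stroke]   # same stroke, translated

print(displacement_vectors(stroke))                                   # [(3, 4), (0, 5)]
print(displacement_vectors(stroke) == displacement_vectors(shifted))  # True
```

Scale and rotation invariance require comparing these vectors through a measure insensitive to common scaling and rotation (as the paper does analytically), rather than normalizing the points beforehand.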
Body-based gestures, such as those acquired by the Kinect sensor, today benefit from efficient tools for their recognition and development, but less so for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, based on the user, body and body parts, gestures, and environment, is designed and encoded in the Web Ontology Language according to modelling triples (subject, predicate, object). As a proof of concept and to feed this ontology, a gesture elicitation study collected 24 participants x 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
See paper at https://dl.acm.org/citation.cfm?id=3328238
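To hint at the kind of automated reasoning such an ontology supports, consider a toy body-part taxonomy: a transitive subclass check answers queries like "is an index-finger gesture an upper-limb gesture?". All class names here are invented for the sketch, not taken from the actual ontology:

```python
# Toy taxonomy: child -> parent ("is-a" relation), the kind of
# hierarchy an OWL ontology encodes with rdfs:subClassOf.
subclass_of = {
    "IndexFinger": "Finger",
    "Finger": "Hand",
    "Hand": "UpperLimb",
    "UpperLimb": "Body",
    "Head": "Body",
}

def is_a(part, ancestor):
    """Transitive subclass check, a simple instance of the inference
    an OWL reasoner performs over a class hierarchy."""
    while part is not None:
        if part == ancestor:
            return True
        part = subclass_of.get(part)
    return False

print(is_a("IndexFinger", "UpperLimb"))  # True
print(is_a("Head", "Hand"))              # False
```

With the elicited gestures expressed as triples over such classes, queries like "all upper-limb gestures proposed for referent X" become mechanical.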
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients' needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Pen-based Gestures and Sketching User Interfaces
1. Jean Vanderdonckt
Louvain School of Management (LSM)
Université catholique de Louvain (UCL)
Place des Doyens, 1 – B-1348 Louvain-la-Neuve, Belgium
jean.vanderdonckt@uclouvain.be – http://www.uclouvain.be/jean.vanderdonckt
Pen-based gestures and Sketching
2. Place des Doyens, 1 – B-1348 Louvain-la-Neuve, Belgium
http://www.lilab.be, http://www.lilab.eu, http://www.lilab.info
Louvain Interaction Laboratory
(LILab)
3. Who is talking?
• Jean VANDERDONCKT
– Lic. & Agreg. Mathematics, Univ. Namur, 1987
– Lic. & Master in Computer Science, Univ. Namur, 1989
– PhD in Computer Science, Univ. Namur, 1997
– Post-doc in eLearning, Univ. Lille 1, 1998
– Post-doc in HCI, Stanford Univ., 2000
• Researcher in Human-Computer Interaction (HCI) since 1988
• Full professor at UCL, Past President of LSM Research Institute
• Involvement in various R&D projects
– local level, Walloon Region
– Federal level, Belgium
– In Europe and outside (e.g., with USA)
• Scientific coordinator of the UsiXML Consortium
• See profile at
– DBLP: http://www.informatik.uni-trier.de/~ley/pers/hd/v/Vanderdonckt:Jean
– Google Scholar: http://scholar.google.com/citations?user=U-FgGrkAAAAJ&hl=fr
– Microsoft Academic: http://academic.research.microsoft.com/Author/495619.aspx
– ArnetMiner: http://arnetminer.org/person/j-vanderdonckt-1306161.html
3
4. • LinkedIn Profile: http://www.linkedin.com/in/jeanvdd
• Academia Profile:
http://uclouvain.academia.edu/JeanVanderdonckt
• YouTube:
http://www.youtube.com/results?search_query=usixml
• Slides: http://www.slideshare.net/jeanvdd
• Books
4
Who is talking?
5. • Introduction and motivations
• What is Sketching as a Service made for?
• What can we do with a Skaas tool?
• Perspectives
• Conclusion
Overview
Ștefan cel Mare University of Suceava - Suceava, November 15th, 2018
6. • Ink-based interaction, pen-based interaction
Introduction and motivations
7. • Gesture interaction (2D, 2D½ , 3D): EdgeWrite
Introduction and motivations
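Slides like this one show gesture interaction (e.g., EdgeWrite) without code. As a hedged illustration of how unistroke pen gestures are commonly recognized, here is a minimal template matcher in the spirit of the $1 recognizer — simplified (no rotation invariance), and all function names are mine, not from the talk:

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points along its path."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    interval = sum(dists) / (n - 1)
    pts = list(points)
    out = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            # interpolate a new point at the interval boundary
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # rounding can leave us one point short
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin and scale to a unit box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = max(x for x, _ in pts) - min(x for x, _ in pts)
    h = max(y for _, y in pts) - min(y for _, y in pts)
    s = max(w, h) or 1.0
    return [(x / s, y / s) for x, y in pts]

def distance(a, b):
    """Average point-to-point distance between two equal-length paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the template name with the smallest average distance."""
    probe = normalize(resample(stroke))
    return min(templates,
               key=lambda name: distance(probe, normalize(resample(templates[name]))))

# Illustrative templates: a horizontal line and a "vee" stroke
templates = {"line": [(0, 0), (10, 0)], "vee": [(0, 0), (5, 10), (10, 0)]}
```

Real recognizers add rotation normalization and scoring, but the resample–normalize–compare pipeline above is the core idea.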
8. • Gesture interaction (2D, 2D½ , 3D): DocToon
Introduction and motivations
9. • Gesture interaction (2D, 2D½ , 3D): DocToon
– In another room, a person remotely operates an avatar with gestures on a WACOM Cintiq
Introduction and motivations
[Figure labels: Backgrounds, Room cameras, Animation]
10. • Drawing
Introduction and motivations
Source: Luca Galuzzi [CC-BY-SA-2.5] via Wikimedia Commons
11. • Sketching
Introduction and motivations
Source: http://min-linesonpaper.blogspot.be/2011_12_01_archive.html
12. • Sketching: workspace activity of design teams
Introduction and motivations
Source: [Tang88]
Action/function     List       Draw        Gesture    Total
Store information   40         19          1          60 (27%)
Convey ideas        0          22          24         46 (20%)
Represent ideas     2          41          9          52 (23%)
Engage attention    0          21          46         67 (30%)
Total               42 (19%)   103 (46%)   80 (35%)   225 (100%)
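The shares in the table can be re-derived from the raw tallies. A quick check in Python (data transcribed from the table; note the slide prints the gesture column as 35% so the column shares sum to 100, while the exact value is 35.6%):

```python
# Raw workspace-activity tallies from Tang (1988), as shown in the table
counts = {
    "Store information": {"List": 40, "Draw": 19, "Gesture": 1},
    "Convey ideas":      {"List": 0,  "Draw": 22, "Gesture": 24},
    "Represent ideas":   {"List": 2,  "Draw": 41, "Gesture": 9},
    "Engage attention":  {"List": 0,  "Draw": 21, "Gesture": 46},
}

total = sum(sum(row.values()) for row in counts.values())  # 225 actions

# Percentage of all actions per function (rows) and per medium (columns)
row_share = {fn: round(100 * sum(row.values()) / total)
             for fn, row in counts.items()}
col_share = {m: round(100 * sum(row[m] for row in counts.values()) / total)
             for m in ("List", "Draw", "Gesture")}
```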
13. • Sketching for User Interface Prototyping
Introduction and motivations
14. • Functions of Sketching
– Externalize ideas
– Interpret each other’s ideas
– Stimulate use of early ideas
Introduction and motivations
15. • UI design is a wicked problem
– There is no definitive statement of the problem
– The search for solutions is open-ended
– Constraints are constantly changing
• UI development life cycle is
– Open
– Ill-defined
– Incomplete
– Iterative (e.g., Spiral model)
• Three strategies
– Authoritative: impose a solution (maybe sub-optimal)
– Competitive: race for a solution (maybe not the best one)
– Collaborative: gather for a solution (perhaps the most accepted)
Introduction and motivations
16. • Observations
• Comments
– Recording design sessions was a problem
– Designers and users worked together
– Both sketches on paper and pixel-perfect drawings
– “Test on the device”
Introduction and motivations
Source: [Sangiorgi2014]
17. • How previous work addressed the four activities
Introduction and motivations
Source: [Sangiorgi2014]
18. • Requirements
– R1 Support iterative design of interfaces
• Wicked Nature of UI design
– R2 Support collaboration between designers and
users
• UCD, brainstorming, collaborative solution to wicked problems
Introduction and motivations
Source: [Sangiorgi2014]
19. • Sketching evolves from informal to more formal
– Different levels of fidelity: low, medium, high
Introduction and motivations
Source: [Coyette2007]
[Figure: a check box and an identification label rendered at low, medium, and final fidelity in SketchiXML]
20. • Sketching evolves from informal to more formal
– Different levels of fidelity: low, medium, high
Introduction and motivations
Source: [Coyette2007]
21. • Requirements
– R1 Support iterative design of interfaces
• Wicked Nature of UI design
– R2 Support collaboration between designers and
users
• UCD, brainstorming, collaborative solution to wicked problems
– R3 Support different levels of fidelity
• Designers used sketches and pixel-perfect UIs
Introduction and motivations
Source: [Sangiorgi2014]
22. • Multiple devices
Introduction and motivations
Source: [Google 2012. The New Multi-screen World: Understanding Cross-platform Consumer Behavior]
23. • Multiple devices
– By the end of 2015, the number of connected mobile devices was expected to
exceed the Earth’s population
– By 2018 there would be 1.4 devices per person on the planet
Introduction and motivations
Source: [CISCO Visual Networking Index
http://www.cisco.com/c/en/us/solutions/collateral/service-provider/
visual-networking-index-vni/white_paper_c11-520862.html]
24. • Multiple devices
– Design with/for multiple devices
– Responsive design = graceful degradation + progressive
enhancement
Introduction and motivations
Source: http://smartwebsiteguide.com/2013/11/responsive-web-design/
25. • Requirements
– R1 Support iterative design of interfaces
• Wicked Nature of UI design
– R2 Support collaboration between designers and
users
• UCD, brainstorming, collaborative solution to wicked problems
– R3 Support different levels of fidelity
• Designers used sketches and pixel-perfect UIs
– R4 Support the design of interfaces on multiple
platforms
• Contemporary context, Designers used multiple devices
– R5 Support the prototyping of interfaces on multiple
platforms
• Contemporary context, Test on the device
Introduction and motivations
Source: [Sangiorgi2014]
26. • Sketching as a Service (Skaas) is a delivery model where
the sketching activity is offered as a service for
– Multiple users
– Multiple platforms
– Multiple environments
– Multiple purposes
• A significant example:
GAMBIT
What is Skaas made for?
27. • GAMBIT configurations
What is Skaas made for?
Source: [Sangiorgi2014]
28. • GAMBIT configurations
– Single User, Single Device (SUSD)
What is Skaas made for?
Source: [Sangiorgi2014]
29. • GAMBIT configurations
– Single User, Multiple Devices (SUMD)
What is Skaas made for?
Source: [Sangiorgi2014]
30. • GAMBIT configurations
– Multiple Users, Single Device (MUSD)
What is Skaas made for?
Source: [Sangiorgi2014]
31. • GAMBIT configurations
– Multiple Users, Multiple Devices (MUMD)
What is Skaas made for?
Source: [Sangiorgi2014]
32. • GAMBIT: infinite workspace
What is Skaas made for?
Source: [Sangiorgi2014]
33. • GAMBIT: architecture
What is Skaas made for?
Source: [Sangiorgi2014]
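The architecture itself is only shown as a figure. Purely as an illustration of the Skaas delivery model (one shared sketching workspace served to many users and devices), here is a minimal in-memory sketch. Everything here — class, method, and workspace names — is invented for illustration and is not GAMBIT's actual API:

```python
from collections import defaultdict

class SketchService:
    """Toy central service: each workspace holds a shared stroke list,
    and every connected device session sees updates from the others."""

    def __init__(self):
        self.workspaces = defaultdict(list)   # workspace id -> strokes
        self.sessions = defaultdict(list)     # workspace id -> callbacks

    def join(self, workspace, on_stroke):
        """Register a device session; return current strokes (late join)."""
        self.sessions[workspace].append(on_stroke)
        return list(self.workspaces[workspace])

    def add_stroke(self, workspace, stroke):
        """Store a stroke and broadcast it to every session in the workspace."""
        self.workspaces[workspace].append(stroke)
        for callback in self.sessions[workspace]:
            callback(stroke)

# Usage: a tablet and a wall screen share the same workspace (MUMD-style)
service = SketchService()
tablet_view, wall_view = [], []
service.join("review", tablet_view.append)
service.join("review", wall_view.append)
service.add_stroke("review", [(0, 0), (4, 2), (9, 5)])
```

A real service would replace the in-process callbacks with network transport (e.g., WebSockets) and persistence, but the subscribe/broadcast shape is the same.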
34. • GAMBIT: Step 1= define structure
What is Skaas made for?
Source: [Sangiorgi2014]
35. • GAMBIT: Step 2= define behavior
What is Skaas made for?
Source: [Sangiorgi2014]
36. • GAMBIT: Step 3= test
What is Skaas made for?
Source: [Sangiorgi2014]
37. • GAMBIT: Step 3= test
What is Skaas made for?
Source: [Sangiorgi2014]
38. • GAMBIT: Step 4= reflect: video
What is Skaas made for?
Source: [Sangiorgi2014]
Gambit
39. • Experiment #1
– 9 UI designers
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
40. • Experiment #1: configuration
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
41. • Experiment #1: results
– Preferred devices for Sketching were: Other (Pen and paper),
Tabletop, Wall Screen, Tablet, Smartphone
– The tool did not have all the functions designers expected (CSUQ)
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
42. • Experiment #2:
– 6 UI designers
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
43. • Experiment #2: configuration
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
44. • Experiment #2: setup
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
45. • Experiment #2: results = 11,529 words
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
46. • Experiment #2: results
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
47. • Experiment #2: results
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
48. • Experiment #2: results
– Qualitative impact – designers talked about different aspects while
using different devices
– Quantitative impact – designers talked more or less according to
which device they were using
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
[Charts: number of words spoken per device (Smartphone, Tablet, Tabletop), y-axis 0–2000 words; regression r² = 0.43]
49. • Experiment #3: real-world case study
– Factory for steel galvanisation
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
50. • Experiment #3: real-world case study
– Mobile application
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
51. • Experiment #3: real-world case study
– Mobile application
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
52. • Experiment #3: configuration
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
53. • Experiment #3: brainstorming
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
54. • Experiment #3: GAMBIT prototype
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
55. • Experiment #3: setup
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
56. • Experiment #3: setup
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
57. • Experiment #3: results
– Prototyping with GAMBIT represented only 8% of the project
lifecycle and the satisfaction with the system (CSUQ) increased by
30%.
– Prototyping with multiple devices is possible in realistic contexts
What can we do with a Skaas tool?
Source: [Sangiorgi2014]
58. • Prototyping
different UI types:
Zoomable UIs
Perspectives
59. • Sketching as a Service (Skaas) is appropriate for
– Sketching any diagram
• An informal sketch: e.g., a graphical user interface (at any CRF level)
• A formal sketch: e.g., a UML model
– From any input material
• E.g., for UI design: screenshot, picture, mockup, sketch, wireframe,…
– With any level of fidelity
• Low, medium, high
• With/out recognition for transition
– On/for multiple devices
• Any interaction surface in principle
– By multiple users
• Different categories, profiles
– In various environments
• Stationary vs mobile, centralized vs distributed
• Same time vs asynchronous – Same space vs remote
Conclusion
60. About ACM
ACM, the Association for Computing Machinery (www.acm.org), is the
premier global community of computing professionals and students
with nearly 100,000 members in more than 170 countries interacting
with more than 2 million computing professionals worldwide.
OUR MISSION: We help computing professionals to be their best and most
creative. We connect them to their peers and to the latest developments,
and inspire them to advance the profession and make a positive impact on
society.
OUR VISION: We see a world where computing helps solve tomorrow’s
problems – where we use our knowledge and skills to advance the
computing profession and make a positive social impact throughout the
world.
63. 11/15/2018 25h@USV | www.eed.usv.ro/25h
Contest theme
In a world of smart devices, homes, cars, and clothes, design smart
interactive jewelry.
Stage #1: capture the movement of the body, or body parts, using the
acceleration sensors that were provided to you. This stage is a minimum
requirement for you to qualify for further evaluation and the awards!
Stage #2: design, engineer, and demonstrate an interactive application for
smart jewelry that makes use of the information collected during stage #1
for a practical, useful purpose.
Stage #3 (not mandatory): include in your design other sensors, devices,
software libraries, or online APIs to support your goal. Unleash your
creativity to create beautiful apps for smart jewelry!
• Total freedom to envision smart jewelry (e.g., ring, earrings, bracelet,
necklace, brooch, etc.) and to make your own electronics
• Be creative! Keep in mind that one criterion is how creative you were in
your idea, design, implementation, and demonstration, and how innovative
your extended design is.
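Stage #1 (motion capture) can be illustrated with a simple threshold detector on the acceleration magnitude. This is a hedged sketch only: sample acquisition is faked with lists rather than read from the actual Phidgets API, and all names and threshold values are illustrative:

```python
import math

GRAVITY = 1.0     # accelerometer output in g units; ~1 g when at rest
THRESHOLD = 0.6   # deviation from gravity (in g) that counts as vigorous motion
MIN_PEAKS = 3     # number of such deviations required to call it a shake

def is_shake(samples):
    """samples: list of (ax, ay, az) accelerometer readings in g.
    True if the magnitude leaves the gravity baseline often enough."""
    peaks = 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - GRAVITY) > THRESHOLD:
            peaks += 1
    return peaks >= MIN_PEAKS

# A device lying still reads roughly (0, 0, 1): gravity only, no shake.
still = [(0.0, 0.01, 1.0)] * 20
# A shaken device shows large magnitude swings.
shaken = [(0.0, 0.0, 1.0), (1.5, 0.2, 1.0), (-1.4, 0.1, 0.9),
          (1.6, 0.0, 1.1), (0.0, 0.0, 1.0)]
```

A contest entry would feed live sensor samples through a sliding window of this kind and map detected gestures to application actions.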
70. During your evaluation:
You will demonstrate to the jury how well your project works
The jury will try out your prototype
Communicate your solution clearly to the jury, provide strong arguments,
and answer the jury’s questions carefully
Evaluation criteria:
The first stage (capture motion) is mandatory to qualify for the awards
Criterion #1 - Functionality: your prototype works (50% of your evaluation)
Criterion #2 - Originality and creativity: your prototype brings a new
perspective on smart jewelry
Criterion #3 - Presentation: you manage to deliver a great presentation to
our jury (20%)
Prizes: 1st = 900 RON, 2nd = 450 RON, 3rd = 300 RON
73. During the contest:
Free access to your own documentation, development equipment (laptops,
tablets, phones, video projectors, etc.), and electronic devices and
prototypes that you made
Free access to the internet
You are free to use any development environment
The solution you propose must use the Phidgets HUB and sensors that you
received