Slides for the talk I gave at ICMI 2012, held in Santa Monica, CA, USA.
The full paper reference is:
El Ali, A., Kildal, J. & Lantz, V. (2012). Fishing or a Z?: Investigating the Effects of Error on Mimetic and Alphabet Device-based Gesture Interaction. In Proceedings of the 14th International Conference on Multimodal Interaction (ICMI '12), Santa Monica, California, USA.
cvpr2011: human activity recognition - part 2: overview (zukun)
The document discusses approaches for human activity analysis from videos. It describes activity classification, detection, and recognition processes that analyze videos to identify human activities. It presents a taxonomy that categorizes recognition approaches as single-layered or hierarchical. Single-layered approaches recognize actions directly from videos, while hierarchical approaches model activities as combinations of sub-events. Hierarchical approaches are suitable for recognizing complex activities and interactions between humans or humans and objects.
cvpr2011: human activity recognition - part 5: description based (zukun)
This document discusses description-based approaches for analyzing human activities. It describes representing activities semantically using definitions of their structures, and recognizing activities by matching observations to these definitions. It also discusses hierarchical representations of both simple and complex/recursive activities like interactions between multiple people. Recognition algorithms work by matching video observations to the formal syntactic representations of activities. Experiments demonstrated recognizing a variety of simple interactions between people from continuous video sequences.
cvpr2011: human activity recognition - part 6: applications (zukun)
This document discusses applications and challenges in human activity analysis using computer vision techniques. It begins by describing current applications like object recognition in images and videos for tasks like pedestrian detection. It then discusses challenges like analyzing longer and more complex activities that involve interactions between humans, objects, and environments. Real-time processing of continuous video streams and handling large, noisy video databases are also challenges. The document concludes by discussing future directions like 3D modeling of activities, incorporating context like objects and poses, interactive learning approaches, and using active learning techniques to generate training videos.
cvpr2011: human activity recognition - part 1: introduction (zukun)
This document provides an introduction to human activity analysis and recognition from video. It discusses the goals of semantic video understanding like labeling objects and events. It reviews early work on activity recognition using point light displays. The document outlines different levels of video understanding from object detection to activity recognition. It discusses applications in surveillance, intelligent environments, sports analysis, and video retrieval. It categorizes human activities based on complexity and number of participants. Finally, it discusses challenges like environment variations, various activity types, and limited training data.
Mardanbegi, D., Hansen, D.W., and Pederson, T. “Eye-based head gestures: Head gestures through eye movements”. In Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA '12), ACM Press, California, USA, 2012. (Awarded best full paper and best student paper.)
The document discusses vision-based assistive technologies for interaction using pointing gestures. It describes how computer vision and image processing can track body parts like fingers, limbs, faces, and eyes to emulate mouse functions. The AsTeRICS project is developing a wearable gaze tracking system to enable people with motor impairments to interact with computers using eye movements. Early user tests of a remote gaze tracker prototype encountered problems with tremors but had high user acceptance. Future work includes developing algorithms to reduce tremors and evaluating a head-mounted eye tracker.
This document discusses capturing human insight about the visual world through new forms of visual data annotation. It proposes asking annotators to provide rationales for their labels to provide more context. It also discusses learning from human-provided image descriptions to understand what visual cues humans find most important. Active learning approaches are presented that select the most informative examples for labels in a cost-effective and parallelizable way.
The RTFM of Usability at LUXR, June 2011 (Rick Boardman)
Testing hypotheses is best done through triangulating multiple methods, as each method has strengths and weaknesses. Some common methods include analytics, usability testing, remote testing, and focus groups. It is important to listen to users and understand why they do or do not use a product, rather than relying only on numbers or guesses.
by Robert Schumacher, Ph.D.
Presentation given on 21 May to the GCC HIMSS group in Chicago with ~50 people present.
www.usercentric.com
The point was to provide some background on usability (a gentle introduction to some of the science), some case studies, and introduce the measurement AND design components of user centered design.
Note: because of all the animations, some pages do not display properly. Please contact me if you would like more information:
bob at usercentric.com
Agile at Seapine (University of Cincinnati, 2011), by Seapine Software
The document discusses challenges in implementing Agile practices at Seapine Software. It describes how the speaker develops using Agile while others use Waterfall. Key challenges include getting cooperation from others who estimate tasks differently, adopting test-driven development and pair programming, and integrating quality assurance and documentation into the Agile process. While difficult, the speaker believes Agile is worth it for better estimates, testing, adapting to changes, and avoiding wasted work. Additional resources on Agile at Seapine are provided.
The document discusses a research project that uses a smartphone app to collect subjective travel experience data from individuals. The app will provide feedback to users about their own experiences as well as those of others. The researchers aim to see if these interventions can change travel behaviors and reduce emissions. They will draw on theories from behavioral economics, psychology, and technology acceptance. An important goal is to pilot and refine the app to make it more usable and understand its impact on travel choices over multiple trials involving both strangers and friends.
IRJET- Vision Based Sign Language by using Matlab (IRJET Journal)
This document discusses vision-based sign language translation using MATLAB. It describes a system that uses a camera to capture images of hand gestures representing letters or words in sign language. MATLAB is used to analyze the images, recognize the gestures, and translate them into spoken words that are output through a speaker. The system aims to help deaf, mute, and blind individuals communicate more easily. Several image processing and machine learning techniques for hand segmentation, feature extraction, and classification are reviewed from previous studies. The results suggest this type of system could accurately translate sign language in real-time.
This document discusses biometrics and biometric identification techniques. It provides an introduction to biometrics, which involves capturing biological characteristics to identify individuals. The document then summarizes several common biometric techniques including fingerprinting, iris scanning, facial recognition, hand geometry, retina scanning, keystroke dynamics, and signature recognition. Examples of how each technique works are given. The document also compares the accuracy of different biometric methods and discusses factors like false acceptance and rejection rates. Overall, the document provides a high-level overview of biometrics and various biometric identification systems.
After Gutenberg: The Tradition of Authenticity in a New Age (cgering)
This document discusses the changing nature of literacy from the era of print to the current digital age. It explores how literacy has evolved from individual reading and writing to include multimedia communication skills. The key aspects of modern literacy outlined in the text include consuming, producing, and communicating across various media; consuming and sharing information; and developing life skills. Digital technologies have created an environment of ubiquitous connectivity that supports new forms of collaborative knowledge-building.
This document discusses the concepts of reality, augmented reality, and virtual reality. It begins by defining augmented reality as a live view of the real world with computer-generated information added. Virtual reality is described as an artificial, computer-generated environment that users accept as real. The origins of virtual reality are traced back to Ivan Sutherland's 1965 Sword of Damocles system. Reality is explored from philosophical perspectives, and it is noted that our perception of reality through limited senses means reality is constructed in the mind. The document examines whether reality can truly be observed independently of observation.
This document discusses a class on human perspective in artificial intelligence. It provides information on class attendance verification through a QR code, the class topics of learning and language, and required reading from a book on the society of mind. It also outlines upcoming exams, homework assignments, and discusses teaching limitations when using digital media like note taking on phones. Learning is discussed in the context of altering mini modules in the brain and using reflection to better understand and retain information.
The document provides an agenda for a hands-on testing techniques lab session titled "Let's Test Together". The agenda includes an overview of benefits of testing, exercises to illustrate challenges in testing combinations of inputs, and guidance for participants to generate their own tests for an application by considering hardware/software configurations, user types, user actions, and business rules. The objectives are to introduce an effective test design method and have participants actively create tests to change how they approach software testing.
This paper examines single gaze gestures (SGGs) as a selection method for gaze-controlled interfaces. SGGs involve making a single point-to-point eye movement between two on-screen locations. The study evaluated horizontal and vertical, long and short SGGs on two eye tracking devices. It found that long SGGs took significantly longer to complete than short SGGs. Horizontal SGGs were also completed significantly faster than vertical SGGs. However, there was no significant difference in error rates between horizontal and vertical SGGs. The study provides evidence that SGGs can be an effective selection technique, with properties like selection time varying based on gesture length and direction.
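The two factors the study manipulates, gesture direction and gesture length, can be made concrete with a toy classifier for a point-to-point eye movement. This is only an illustrative sketch: the pixel threshold and coordinate convention are assumptions, not values from the paper.

```python
import numpy as np

def classify_sgg(start, end, short_thresh=300):
    # Classify a single point-to-point gaze gesture by direction and length.
    # short_thresh (in pixels) is a hypothetical cutoff, not from the study.
    dx, dy = end[0] - start[0], end[1] - start[1]
    direction = "horizontal" if abs(dx) >= abs(dy) else "vertical"
    length = "long" if np.hypot(dx, dy) >= short_thresh else "short"
    return direction, length
```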
This document provides an overview of variational autoencoders (VAEs) through summaries of three sections:
1. Yann LeCun discusses why unsupervised and predictive learning are important for developing common sense in machines. He argues that generative models allow machines to make accurate predictions by learning the structure of data.
2. Jaan Altosaar's tutorial explains that a VAE can be seen as a denoising autoencoder that learns a probabilistic model. The encoder approximates the posterior distribution and the decoder parameterizes a deep generative model.
3. Shakir Mohamed derives the VAE objective function from importance sampling, showing that it maximizes the likelihood while regularizing the approximate posterior.
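The objective described in these summaries, a reconstruction term plus a KL regularizer on the approximate posterior, has a closed form when the posterior is a diagonal Gaussian. A minimal numpy sketch (the function names and the squared-error reconstruction term are illustrative assumptions, not taken from any of the three tutorials):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def elbo(x, x_recon, mu, log_var):
    # ELBO per example = -(reconstruction error + KL regularizer).
    # Squared error stands in for a Gaussian log-likelihood up to constants.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return -(recon + kl_to_standard_normal(mu, log_var))
```

When the encoder outputs exactly the prior (mu = 0, log_var = 0), the KL term vanishes, which is the "regularizing the approximate posterior" behaviour the third summary describes.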
Sparse representation based human action recognition using an action region-a... (Wesley De Neve)
This document presents a paper on sparse representation-based human action recognition using an action region-aware dictionary. It introduces the challenges of existing action recognition methods, including the lack of a general action detection method and the varying usefulness of context information depending on the action. The paper proposes constructing a dictionary containing separate context and action region information from training videos. It then presents a method to use this dictionary to adaptively classify human actions based on whether context region information is concentrated in the true class. The paper describes experiments on the UCF Sports Action dataset to evaluate the proposed method compared to existing sparse representation approaches.
This document is a final report on gesture recognition submitted by three students. It contains an abstract, introduction, background information on gesture recognition including American Sign Language and object recognition techniques. It discusses digital image processing and neural networks. It outlines the approach, modules, flowcharts, results and conclusions of the project, which developed a method to recognize static hand gestures using a perceptron neural network trained on orientation histograms of the input images. Source code and applications are also discussed.
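The orientation-histogram features the report describes can be sketched in a few lines of numpy: gradient orientations, weighted by gradient magnitude, binned into a fixed number of bins. The bin count and the normalization step below are assumptions for illustration, not details from the student report.

```python
import numpy as np

def orientation_histogram(img, n_bins=36):
    # Histogram of gradient orientations over [0, 2*pi), weighted by magnitude.
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, then columns
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist        # normalize so lighting changes matter less
```

A feature vector like this could then be fed to a perceptron classifier, one histogram per training image, as the report outlines.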
T7 Embodied conversational agents and affective computing (EASSS 2012)
Here is an analysis of the French nominal group "le très petit bouton rouge" using the DAFT linguistic analysis tool:
- "le" is analyzed as a definite determiner (DD)
- "très" is analyzed as an intensifying adverb (INTLARGE)
- "petit" is analyzed as a small size adjective (SIZESMALL)
- "bouton" is analyzed as a button noun (BUTTON)
- "rouge" is analyzed as a red adjective (RED)
The analysis identifies the lexical form, lemma, part-of-speech and other attributes for each word in the nominal group. DAFT performs deep linguistic analysis of French text.
A Translation Device for the Vision Based Sign Language (ijsrd.com)
Sign language is vital for people with hearing and speech impairments, commonly described as deaf and mute. It is their primary mode of communication, so it is important that others can understand it. This paper proposes an algorithm for an application that recognizes the different signs of Indian Sign Language. The images show the palm side of the right and left hand and are loaded at runtime; the method was developed for a single user. Real-time images are captured and stored in a directory, and feature extraction is performed on the most recently captured image to identify which sign the user articulated, using the SIFT (scale-invariant feature transform) algorithm. Key points extracted from the input image are then matched against the images stored for each letter in the directory or database, and the result is produced accordingly; the outputs can be seen in the sections below. Indian Sign Language has 26 signs, one per alphabet letter, of which the proposed algorithm achieved 95% accurate results for 9 letters, with images captured at every possible angle and distance.
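The matching step the abstract describes, comparing key points from the input image against stored reference images, typically ends with Lowe's ratio test over descriptor distances. A minimal sketch with hypothetical descriptor arrays (real SIFT descriptors would be 128-dimensional):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    # Lowe's ratio test: keep a match only if the nearest neighbour in desc_b
    # is clearly closer than the second-nearest, rejecting ambiguous matches.
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The letter whose stored image accumulates the most such matches would then be reported as the recognized sign.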
„Emotional Flowers“ User Centered Game Design (Martin Ortner)
The document discusses the development of the game "Emotional Flowers" which allows children to create virtual plants by performing facial expressions. It describes conducting user tests and workshops with children to inform the game's design. The development used an iterative process including concept creation, prototyping, and user evaluations. The goal was to actively involve users to shape game details and eliminate problems through repeated testing.
The document discusses haptics, which is the science of touch. It defines haptics as deriving from the Greek word meaning "being able to come into contact." The document outlines different types of haptic feedback including tactile and force feedback. It discusses how haptic devices work and how they are different from other input devices in providing both input and output. Examples of commonly used haptic devices are also provided such as exoskeletons, cybergloves, and Phantom devices. Applications of haptics include virtual reality, telepresence, games, surgical simulation, and military training.
The document discusses mobile user experience design. It covers key elements of mobile UX like context, functionality, information architecture, content, visual design, usability, user input, social aspects, trustworthiness, and feedback. It also discusses common dilemmas in mobile UX like choosing between native apps vs mobile web vs responsive web vs hybrid approaches, and selecting between iOS, Android and Windows platforms. The document provides examples to illustrate these concepts and dilemmas in mobile UX.
Improving Two-Thumb Text Entry on Touchscreen Devices (Aalto University)
Presentation at ACM CHI'13 in Paris by Antti Oulasvirta (Max Planck Institute for Informatics). Work done in collaboration with Keith Vertanen (Montana Tech) and Per Ola Kristensson (University of St Andrews)
CHI 2018 - Measuring, Understanding, and Classifying News Media Sympathy on T... (Abdallah El Ali)
CHI 2018 slides for the paper: Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events
DOI: https://dl.acm.org/citation.cfm?id=3174130
Paper: https://abdoelali.com/pdfs/paper556.pdf
Similar to Fishing or a Z?: Investigating the Effects of Error on Mimetic and Alphabet Device-based Gesture Interaction (20)
CHI 2018 - Measuring, Understanding, and Classifying News Media Sympathy on T...Abdallah El Ali
CHI 2018 slides for the paper: Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events
DOI: https://dl.acm.org/citation.cfm?id=3174130
Paper: https://abdoelali.com/pdfs/paper556.pdf
My slides for the public keynote speech I gave for my doctoral thesis defense.
You can grab an e-copy of my dissertation here: http://www.abdoelali.com/pdfs/phdthesis_abdallah_elali.pdf
Photographer Paths: Sequence Alignment of Geotagged Photos for Exploration-ba...Abdallah El Ali
Slides for the talk I gave at CSCW 2013, held in San Antonio, TX, USA.
The full paper reference is:
El Ali, A., van Sas, S. & Nack, F. (2013). Photographer Paths: Sequence Alignment of Geotagged Photos for Exploration-based Route Planning. In proceedings of the 16th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '13), 2013, San Antonio, Texas.
Paper link: http://staff.science.uva.nl/~elali/pdfs/p985-el-ali.pdf
My introductory slides on interaction design and the basics of prototyping for the Intelligent Interactive Systems master's Information Science course given at the University of Amsterdam.
Understanding Contextual Factors in Location-aware Multimedia MessagingAbdallah El Ali
A talk I gave at ICMI 2010, held in Beijing, China.
The full paper reference is:
El Ali, A., Nack, F. & Hardman, L. (2010). Understanding contextual factors in location-aware multimedia messaging. In Proceedings of the 12th international conference on Multimodal Interfaces, 2010, Beijing, China.
A 1-hour introductory lecture on multimodal interaction that I gave to bachelor HCI students. Included a section on how to get started in this exciting line of research.
1) The document discusses the basics of Android including what Android is, its architecture, features, and development tools. Android is an open-source operating system led by Google that powers smartphones and tablets.
2) It provides an overview of the core components of an Android application including activities, services, content providers and broadcast receivers. It also discusses the main elements in the Android development environment like the AndroidManifest.xml file, activities, views and intents.
3) The document walks through a tutorial example to create a simple Android app with text, buttons and multiple activities. It demonstrates how to declare UI elements in the XML layout, handle button clicks from Java code, and launch a new activity using an intent defined
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Fishing or a Z?: Investigating the Effects of Error on Mimetic and Alphabet Device-based Gesture Interaction
1. Fishing or a Z?: Investigating the Effects of Error on Mimetic and Alphabet Device-based Gesture Interaction
Oct. 23, 2012
Abdallah ‘Abdo’ El Ali, Johan Kildal, Vuokko Lantz
http://staff.science.uva.nl/~elali/
5. Introduction
Device-based gestures: gesturing by moving a smartphone device in 3D space
Research settings, home environments, everyday mobile interaction
Alternative for when users are encumbered (e.g., manual multitasking)
Natural
Promising alternative to mobile touchscreen/keyboard input under situational impairments
6. Motivation
But errors are an inevitable part of interaction with technology…
Many gesture classes are available (e.g., iconic, symbolic, deictic) for use in smartphones, but which have minimum user frustration when recognition errors occur?
We investigate user error tolerance for two iconic gesture sets used in HCI: mimetic and alphabet gestures
7. Related Work
Gesture-based Interaction
Gesture-Task mapping (Kühnel et al., 2011; Ruiz et al., 2011)
Social acceptability (Rico & Brewster, 2010)
Gesture Taxonomies: deictic, symbolic, physical, mimetic/pantomimic, abstract (Rimé & Schiaratura, 1991; …)
Recognition Errors
Speech: repeat (Suhm et al., 2001) and hyperarticulate (Oviatt et al., 1998)
Multimodal: modality-switching ("spiral depth" of 6) (Oviatt & van Gent, 1996)
Touch-less vision-based gesture recognition: how many errors before switching to keyboard input? 40% user error tolerance before modality switching (Karam & Schraefel, 2006)
Little work on which gesture sets are most robust to errors during gesture-based interaction!
8. Iconic Gestures
Mimetic / Pantomimic Gestures: natural, familiar, easy to learn, varied by activities
e.g., Fishing, calling, …
Alphabet Gestures: familiar, easy to learn, varied by stroke
e.g., letter "C", letter "S", …
9. Research Question
What are the effects of unrecognized gestures on user experience, and what are the differences between mimetic and alphabet gestures (under varying error rates: 0-20%, 20-40%, 40-60%)?
Hypotheses:
Mimetic gestures → users less familiar with ideal shape → more gesture variation under high error rates → but lower subjective workload due to higher degrees of freedom
Alphabet gestures → users more familiar with ideal shape → more rigid gestures under increasing error rates → but higher subjective workload due to lower degrees of freedom
13. Study Design
Qualitative study; automated Wizard-of-Oz method (Fabbrizio et al., 2005)
24 subjects (16 male, 8 female) aged between 22-41 (M = 29.6, SD = 4.5)
Mixed between- and within-subject factorial design: 2 (gesture type: mimetic vs. alphabet) x 3 (error rate: low vs. med vs. high)
Experiment in Presentation®, Wii Remote® interaction using GlovePIE™
Random error distribution across trials (20 practice, 180 test)
Tutorial & videos given of how to 'properly' perform each gesture
Data collected:
1. Modified NASA-TLX workload [0-20 range] questionnaire data (Hart & Wickens, 1990; Brewster, 1994)
2. Experiment logs
3. Video recordings of subjects' gesture interaction
4. Post-experiment interviews
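The slides don't show how the automated Wizard-of-Oz error distribution was implemented; a minimal sketch of one way to generate such a schedule, assuming the three error-rate bands from the design (the function name, trial counts, and band boundaries used here are illustrative, not taken from the paper):

```python
import random

# Error-rate bands per condition, matching the study's 0-20% / 20-40% / 40-60% split.
ERROR_BANDS = {"low": (0.0, 0.2), "med": (0.2, 0.4), "high": (0.4, 0.6)}

def make_error_schedule(n_trials, condition, rng=random):
    """Return one boolean per trial; True means the wizard forces an
    'unrecognized gesture' response on that trial."""
    lo, hi = ERROR_BANDS[condition]
    rate = rng.uniform(lo, hi)            # draw a rate within the band
    n_errors = round(rate * n_trials)     # number of simulated failures
    schedule = [True] * n_errors + [False] * (n_trials - n_errors)
    rng.shuffle(schedule)                 # distribute errors randomly across trials
    return schedule

schedule = make_error_schedule(30, "high")
print(sum(schedule) / len(schedule))      # observed rate falls within 0.4-0.6
```

The key property is that the participant cannot predict which trials will "fail", while the per-block error rate stays inside the intended band.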
20. Workload
Mimetic gestures are better tolerated up to error rates of 40% (cf. Karam & Schraefel, 2006), compared with error rates of up to only 20% for alphabet gestures
From a usability perspective, mimetic gestures more robust to recognition failures than alphabet gestures
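The slides cite a modified NASA-TLX on a 0-20 range but don't detail the scoring; a common approach is raw (unweighted) TLX, the plain mean of the subscale ratings. A sketch under that assumption (the subscale names follow standard NASA-TLX; only the 0-20 range comes from the slides):

```python
# Standard NASA-TLX subscales; ratings assumed on the slides' 0-20 scale.
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def overall_workload(ratings):
    """Raw TLX: unweighted mean of the six subscale ratings (0-20 each)."""
    for name in SUBSCALES:
        if not 0 <= ratings[name] <= 20:
            raise ValueError(f"{name} rating out of 0-20 range")
    return sum(ratings[n] for n in SUBSCALES) / len(SUBSCALES)

print(overall_workload({"mental": 12, "physical": 6, "temporal": 10,
                        "performance": 8, "effort": 14, "frustration": 16}))  # 11.0
```

The original NASA-TLX also defines a weighted variant using pairwise subscale comparisons; raw TLX is the simpler and widely used alternative.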
21. Observations
Mimetic gestures evolve into real-world counterparts under error; alphabet gestures tend to become more rigid and well structured
Canonical Variations via positive reinforcement: survival of the fittest gesture variations
E.g., S12 exhibited real-world variations on both the Fishing and Trashing gestures
Variations develop as low as spiral depth of 2 (i.e., min. 2 recognition errors)
22. User Feedback
Perceived Canonical Variations
S9: "The shaking, that was the hardest one because you couldn't just shake freely [gestures in hand], it had to be more precise shaking [swing to the left, swing to the right] so not just any sort of shaking [shakes hand in many dimensions]"
Cultural and Individual differences
S10: "For the glass filling, there are many ways to do it. Sometimes very fast, sometimes slow like beer."
Perceived Performance
Ad hoc explanations (e.g., fatigue) given why there were more errors in some blocks
S18: "[Performance] between the first and second blocks [baseline and low error rate conditions], it was the same... 10-15%."
Social Acceptability
Alphabet gestures less socially acceptable when they fail
S16: "When it doesn't take your C, you keep doing it, and it looks ridiculous."
25. Implications for Gesture Recognition
Mimetic gestures evolve into real-world counterparts under error; alphabet gestures tend to become more rigid and well structured
One-shot recognition for mimetic gestures important!
Interesting explanations (e.g., canonical variations) and causes (e.g., fatigue) given why there were more errors in some blocks
Transparency in gesture recognition technology may better support users in error handling strategies
26. Implications for Gesture-based Interaction
40% error tolerance in line with previous work (Karam & Schraefel, 2006), which shows usability of gesture-based interaction
Mimetic gestures overall have better user experience, more use cases, and thus are more suitable for device-based gesture interaction (even under high recognition error!)
28. Future Work
Quantitative assessment of how many errors precisely before canonical variant?
Other classes of gestures (e.g., manipulative)
Influence of device form factor
30. References
Brewster, S. Providing a structured method for integrating non-speech audio into human-computer interfaces. PhD thesis, University of York, 1994.
Fabbrizio, G. D., Tur, G., and Hakkani-Tür, D. Automated wizard-of-oz for spoken dialogue systems. In Proc. INTERSPEECH 2005 (2005), 1857–1860.
Karam, M., and Schraefel, M. C. Investigating user tolerance for errors in vision-enabled gesture-based interactions. In Proc. AVI '06, ACM (NY, USA, 2006), 225–232.
Kühnel, C., Westermann, T., Hemmert, F., Kratz, S., Müller, A., and Möller, S. I'm home: Defining and evaluating a gesture set for smart-home control. International Journal of Human-Computer Studies 69, 11 (2011), 693–704.
Hart, S., and Wickens, C. Manprint: an Approach to Systems Integration. Van Nostrand Reinhold, 1990, ch. Workload Assessment and Prediction, 257–292.
Oviatt, S., MacEachern, M., and Levow, G.-A. Predicting hyperarticulate speech during human-computer error resolution. Speech Communication 24 (1998), 87–110.
Oviatt, S., and Van Gent, R. Error resolution during multimodal human-computer interaction. In Proc. ICSLP '96, vol. 1 (Oct 1996), 204–207.
Rico, J., and Brewster, S. Usable gestures for mobile interfaces: evaluating social acceptability. In Proc. CHI '10, ACM (NY, USA, 2010), 887–896.
Rimé, B., and Schiaratura, L. Fundamentals of Nonverbal Behavior. Cambridge University Press, 1991, ch. Gesture and speech, 239–281.
Ruiz, J., Li, Y., and Lank, E. User-defined motion gestures for mobile interaction. In Proc. CHI '11, ACM (NY, USA, 2011), 197–206.
Suhm, B., Myers, B., and Waibel, A. Multimodal error correction for speech user interfaces. ACM Trans. Comput.-Hum. Interact. 8 (2001), 60–98.