This presentation was on Empathic Mixed Reality, in which we applied Mixed Reality technology to Empathic Computing in our studies. We shared an overview of our research and selected findings. This talk was given at ETRI and KAIST in Daejeon, South Korea, on the 24th of May 2017.
Presentation on EEG cognitive adaptive training in VR, given by Mark Billinghurst at the IEEE VR conference in Osaka, Japan. The talk was given on March 25th, 2019.
Development of video-based emotion recognition using deep learning with Googl... (TELKOMNIKA JOURNAL)
Emotion recognition using images, videos, or speech as input has been a hot research topic for some years. The introduction of deep learning techniques, e.g., convolutional neural networks (CNNs), to emotion recognition has produced promising results. Human facial expressions are considered critical components in understanding one's emotions. This paper sheds light on recognizing emotions from videos using deep learning techniques. The methodology of the recognition process, along with its description, is provided in this paper. Some of the video-based datasets used in many scholarly works are also examined. Results obtained from different emotion recognition models are presented along with their performance parameters. An experiment was carried out on the fer2013 dataset in Google Colab for depression detection, which came out to be 97% accurate on the training set and 57.4% accurate on the testing set.
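As a rough illustration of the kind of model involved (a hand-rolled sketch, not the paper's actual architecture), the forward pass of a single convolution, ReLU, pooling, and softmax stage over a 48x48 FER2013-style grayscale image can be written in plain NumPy:

```python
import numpy as np

# Sketch of one CNN stage for facial-expression recognition.
# The kernel and classifier weights are random here; in a real model
# they would be learned from the fer2013 training set.

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((48, 48))            # stand-in for one FER2013 face
kernel = rng.standard_normal((3, 3))    # one 3x3 filter (random here)

features = max_pool(np.maximum(conv2d(image, kernel), 0))  # conv -> ReLU -> pool
W = rng.standard_normal((len(EMOTIONS), features.size)) * 0.01
probs = softmax(W @ features.ravel())   # class probabilities over 7 emotions

print(features.shape)         # (23, 23): (48 - 3 + 1) = 46, pooled by 2
print(round(probs.sum(), 6))  # 1.0
```

A real network stacks several such stages before the softmax; this sketch only shows how a 48x48 face image flows through one of them.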
Emotion Interaction with Virtual Reality Using Hybrid Emotion C... (ijcsit)
Human-computer interaction (HCI) is considered a main aspect of virtual reality (VR), especially in the context of emotion, where users can interact with virtual reality through their emotions and those emotions can be expressed in virtual reality. Over the last decade, many researchers have focused on emotion classification in order to employ emotion in interaction with virtual reality; the classification is done based on electroencephalogram (EEG) brain signals. This paper provides a new hybrid emotion classification method that combines self-assessment, the arousal-valence dimension, and the variance of brain hemisphere activity to classify users' emotions. Self-assessment is a standard technique for assessing emotion; the arousal-valence emotion dimension model classifies aroused emotions; and brain hemisphere activity classifies emotion with regard to the right and left hemispheres. The method can classify human emotions, and two basic emotions are highlighted, i.e., happy and sad. EEG brain signals are used to interpret the users' emotional state, and emotion interaction is expressed through the walking expression of a 3D model in VR. The results show that the hybrid method classifies the highlighted emotions in different circumstances, and that the 3D model changes its walking style according to the classified emotion. Finally, the outcome is believed to afford a new technique for classifying emotions with feedback through a 3D virtual model's walking expression.
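One common way to operationalize the brain-hemisphere part of such a classifier is frontal alpha asymmetry: the difference of log alpha-band power between right and left frontal channels, with relatively greater left-hemisphere activity read as positive valence. The sketch below illustrates that idea only; the channel names, thresholds, and features are assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Frontal alpha asymmetry sketch: alpha power is inversely related to
# cortical activity, so ln(right alpha) - ln(left alpha) > 0 implies
# relatively greater LEFT activity, commonly associated with positive
# valence ("happy"); < 0 with negative valence ("sad").

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean power of `signal` in the alpha band (8-13 Hz) via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].mean()

def classify_valence(left_ch, right_ch, fs):
    """Return 'happy' or 'sad' from left/right frontal channels (e.g. F3/F4)."""
    asym = np.log(band_power(right_ch, fs)) - np.log(band_power(left_ch, fs))
    return "happy" if asym > 0 else "sad"

# Synthetic demo: strong left alpha means low left activity -> "sad".
fs = 128
t = np.arange(0, 4, 1 / fs)
left = 3.0 * np.sin(2 * np.pi * 10 * t)   # strong 10 Hz alpha on the left
right = 1.0 * np.sin(2 * np.pi * 10 * t)  # weaker alpha on the right
print(classify_valence(left, right, fs))  # sad
```

A full hybrid classifier would combine this signal-derived valence with the user's self-assessment and an arousal estimate before driving the 3D model's walking style.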
Human Emotion Recognition using Machine Learning (ijtsrd)
Recognizing human emotions is an interesting problem in the field of machine learning. From a person's facial expression, one can infer their emotions or what they want to express, but recognizing emotion reliably is quite challenging at times. Facial expressions convey various human emotions such as sadness, happiness, excitement, anger, frustration, and surprise. A few years ago, natural language processing was used to detect sentiment from text, and it has since taken a step forward towards emotion detection. Sentiments can be positive, negative, or neutral, whereas emotions are more refined categories. There are many techniques used to recognize emotions. This paper provides a review of published research in the field of human emotion recognition and the various techniques used. Prof. Mrs. Dhanamma Jagli | Ms. Pooja Shetty, "Human Emotion Recognition using Machine Learning", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd25217.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/25217/human-emotion-recognition-using-machine-learning/prof-mrs-dhanamma-jagli
Emotion recognition using image processing in deep learning (vishnuv43)
The user's emotion will be detected from their facial expressions. These expressions can be derived from the live feed via the system's camera or from any pre-existing image available in memory. Human emotions can be recognized and offer a vast scope of study in the computer vision industry, in which several research efforts have already been carried out.
We propose a compact CNN model for facial expression recognition.
The work has been implemented using Python with the Open Source Computer Vision Library (OpenCV) and the NumPy, pandas, and Keras packages. The scanned image (from the testing dataset) is compared to the training dataset, and the emotion is thus predicted.
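The comparison step described above can be sketched, in a deliberately simplified form, as a nearest-neighbour match of the test image against the training images. This is an illustration only: the actual system uses a CNN trained with Keras, with faces detected via OpenCV first, and all names here are made up for the example.

```python
import numpy as np

# Minimal sketch of "compare the test image against the training dataset":
# a 1-nearest-neighbour match on flattened pixel vectors.

def predict_emotion(test_img, train_imgs, train_labels):
    """Return the label of the training image closest to `test_img` (L2 distance)."""
    dists = [np.linalg.norm(test_img.ravel() - t.ravel()) for t in train_imgs]
    return train_labels[int(np.argmin(dists))]

rng = np.random.default_rng(1)
happy = rng.random((48, 48))   # stand-in "happy" training face
sad = rng.random((48, 48))     # stand-in "sad" training face
train_imgs, train_labels = [happy, sad], ["happy", "sad"]

# A slightly noisy version of the "happy" face should still match "happy".
test = happy + rng.normal(0, 0.01, happy.shape)
print(predict_emotion(test, train_imgs, train_labels))  # happy
```

A learned CNN replaces the raw-pixel distance with features that are robust to lighting, pose, and identity, which is why it generalizes far better than this pixel-level lookup.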
Dynamic Personalization of Gameful Interactive Systems (Gustavo Tondello)
These are the slides of my Ph.D. thesis oral defence at the University of Waterloo on June 20, 2019.
Gameful design, the process of creating a system with affordances for gameful experiences, can be used to increase user engagement and enjoyment of digital interactive systems. It can also be used to create applications for behaviour change in areas such as health, wellness, education, customer loyalty, and employee management. However, existing research suggests that the qualities of users, such as their personality traits, preferences, or identification with the task, can influence gamification outcomes.
Given how user qualities shape the gameful experience, it is important to understand how to personalize gameful systems. Current evidence suggests that personalized gameful systems can lead to increased user engagement and be more effective in helping users achieve their goals than generic ones. However, to create this kind of system, designers need a specific method to guide them in personalizing the gameful experience to their target audience. To address this need, this thesis proposes a method for personalized gameful design with three steps: (1) classification of user preferences, (2) classification and selection of gameful design elements, and (3) heuristic evaluation of the design.
Furthermore, this thesis describes the design, implementation, and pilot evaluation of a software platform for the study of personalized gameful design. It integrates nine gameful design elements built around a main instrumental task, enabling researchers to observe and study the gameful experience of participants. The platform is flexible so the instrumental task can be changed, game elements can be added or removed, and the level and type of personalization or customization can be controlled. This allows researchers to generate different experimental conditions to study a broad range of research questions.
Our personalized gameful design method provides practical tools and clear guidelines to help designers effectively build personalized gameful systems.
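Steps (1) and (2) of the method, classifying user preferences and selecting matching design elements, might be sketched as follows. Both the Hexad-style user-type scores and the element table below are illustrative assumptions, not the thesis's validated mapping.

```python
# Hypothetical mapping from user-preference types to gameful design elements.
ELEMENTS = {
    "achiever":       ["challenges", "levels", "progress bars"],
    "socialiser":     ["teams", "social networks", "gifting"],
    "philanthropist": ["knowledge sharing", "gifting", "administrative roles"],
    "free_spirit":    ["exploration", "customization", "easter eggs"],
    "player":         ["points", "badges", "leaderboards"],
}

def select_elements(scores, top_n=2):
    """Step 1-2 sketch: rank the user's preference types by score and
    collect the design elements mapped to the strongest `top_n` types."""
    top_types = sorted(scores, key=scores.get, reverse=True)[:top_n]
    selected = []
    for t in top_types:
        selected.extend(ELEMENTS[t])
    return top_types, selected

# Example questionnaire scores for one user (illustrative values).
scores = {"achiever": 24, "socialiser": 12, "philanthropist": 18,
          "free_spirit": 21, "player": 9}
types, elems = select_elements(scores)
print(types)  # ['achiever', 'free_spirit']
print(elems)
```

Step (3), the heuristic evaluation, is a human review of the resulting design and is not captured by code.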
Ch5 Social interaction in individual vs. partner playing - begonapino.com (Begoña Pino)
Social interaction in individual vs. partner playing - Research study - Pino, B. (2006) "Computers as an environment for facilitating social interaction in children with autistic spectrum disorders". PhD Thesis, University of Edinburgh, UK
Emotion-oriented computing: Possible uses and applications (André Valdestilhas)
This article discusses the concepts of using affective computing and computer vision with digital television. The proposal combines several techniques, such as capturing facial expressions through a video camera and using accelerometers in a ball and touch holograms, to achieve a certain level of interactivity with the viewer. Some uses of the proposal are described, such as audio control and background content, among others. The article highlights numerous benefits of the approaches presented, which can be applied in a broad context, for example for the blind in video games.
Empathic Computing: Developing for the Whole Metaverse (Mark Billinghurst)
A keynote speech given by Mark Billinghurst at the Centre for Design and New Media at IIIT-Delhi on June 16th 2022. This presentation is about how Empathic Computing can be used to develop for the entire range of the Metaverse.
Empathic Computing: Delivering the Potential of the Metaverse (Mark Billinghurst)
Invited guest lecture given by Mark Billinghurst at the MIT Media Laboratory on November 21st 2023, as part of Professor Hiroshi Ishii's class on Tangible Media.
Empathic Computing: Designing for the Broader Metaverse (Mark Billinghurst)
Keynote talk given by Mark Billinghurst at the CHI 2023 Workshop on Towards an Inclusive and Accessible Metaverse. The talk was given on April 23rd 2023.
Beyond Buzz - Web 2.0 Expo - K. Niederhoffer & M. Smith (kategn)
A framework to measure a conversation based on approaches from social psychology and sociology. Beyond quantity of buzz, we propose measuring the context of conversation: the signal, person, role, and ecosystem.
Keynote talk given by Mark Billinghurst at the Foundations of Digital Games (FDG) 2021 conference on August 5th 2021. The talk was on how Empathic Computing techniques can be used to create new types of games.
9. Education:
2011 - 15 Ph.D. in Computer Science (University of Canterbury, New Zealand)
2006 - 08 M.Sc. in Computer Science (Asian Institute of Technology (AIT), Thailand)
2000 - 03 B.Sc. in Physics/Computer Science (University of Canterbury, New Zealand)
Research Experience:
Jun 2016-present Research Fellow (ECL, University of South Australia, Australia)
Summer 2014 Research Intern (MIC group, Microsoft Research, WA)
Summer 2013 Visiting Scholar (MxR Lab, Institute for Creative Technologies, USC, CA)
2011-14 Research assistant (HIT Lab NZ, University of Canterbury)
Additional Experience:
2015 - 16 Unity Director (QuiverVision, Japan)
2009 - 10 Computer Forensic Specialist (Royal Thai Police, Thailand)
2005 - 09 Forensic Scientist (Royal Thai Police, Thailand)
Thammathip Piumsomboon, Ph.D.
13. Trends in Technology
Contents by Prof. Mark Billinghurst
https://medium.com/@marknb00/the-coming-age-of-empathic-computing-617caefc7016
14. Contents by Prof. Mark Billinghurst
Interaction Technology
Physiological Sensing
Emotiv
Empatica
Implicit
Explicit
15. Contents by Prof. Mark Billinghurst
Content Capture
3D Image/Space Capture
Matterport
Google
Project Tango
Timeline of content capture (1850–2010): Photo → Film → Live Video → Panorama → 360 Video → 3D Space, progressing from 2D static capture toward live, immersive experiences with increasing realism.
16. Contents by Prof. Mark Billinghurst
Networking Speeds
Network innovation (toward 5G) enables progressively richer communication: text → audio → video → natural interaction.
18. “Seeing with the Eyes of another,
Listening with the Ears of another,
and Feeling with the Heart of another.”
- Alfred Adler
What is empathy?
19. What is Empathic Computing?
21. 1. Understanding: Systems that can understand your feelings and
emotions
2. Experiencing: Systems that help you better experience the world of
others
3. Sharing: Systems that help you better share the experience of others
- Prof. Mark Billinghurst
How to achieve Empathic Computing?
22. What is Mixed Reality?
Empathic Computing
Mixed Reality
23. Milgram and Kishino’s Mixed Reality on the Reality-Virtuality Continuum
P. Milgram and F. Kishino, "A taxonomy of mixed reality visual displays," IEICE Transactions on
Information and Systems, vol. 77, pp. 1321-1329, 1994.
24. F. Steinicke, G. Bruder, K. Rothaus, and K. Hinrichs, "A virtual body for augmented virtuality
by chroma-keying of egocentric videos," poster at IEEE Symposium on 3D User Interfaces (3DUI), 2009.
Microsoft HoloLens
26. 1. Understanding: Systems that can understand your feelings and
emotions
2. Experiencing: Systems that help you better experience the world of
others
3. Sharing: Systems that help you better share the experience of others
- Prof. Mark Billinghurst
Sensors
VR
AR
Why apply MR to Empathic Computing?
27. Affordances of MR interfaces align well with the requirements of
Empathic Computing
1. MR naturally supports collaboration in 3D environments (real/virtual)
2. MR is a highly personalized platform, making it easy to capture personal data
(via embedded sensors) and the user's environment (context)
3. Captured data can also be shared and experienced by a remote
person in MR, enabling them to feel as if they are there
Why apply MR to Empathic Computing?
28. 3. Through Heart and Eyes:
Sharing What You Feel and Interacting with What You See
29. 3. Through Heart and Eyes: Sharing What You Feel
and Interacting with What You See
3.1 Sharing Where You Gaze
3.2 Sharing What You Feel
3.3 Interacting with What You See
3.4 Enhancing Your Collaboration
30. 3.1 Sharing Where You Gaze
3.2 Sharing What You Feel
3.3 Interacting with What You See
3.4 Enhancing Your Collaboration
32. T. Piumsomboon, A. Dey, B. Ens, G. Lee, and M. Billinghurst, "CoVAR: Mixed-Platform Remote Collaboration
Between Augmented and Virtual Realities with Shared Collaboration Cues," under review.
By providing shared collaboration cues?
• Our collaboration cues
• CoVAR: System Overview
• Experimental Setup
• Variables
• Summary
41. 3.1 Sharing Where You Gaze
Summary
• Collaboration cues in FoV and gaze are crucial for improving collaboration.
• The head-gaze (FoV + head-ray) condition was found the most useful, since head-gaze
input was used as the default interaction method even in the eye-gaze condition (to
avoid a confounding factor). This exploits the implicit nature of gaze as a shared
interaction and collaboration/communication cue.
T. Piumsomboon, A. Dey, B. Ens, G. Lee, and M. Billinghurst, "CoVAR: Mixed-Platform Remote Collaboration
Between Augmented and Virtual Realities with Shared Collaboration Cues," under review.
42. 3.2 Sharing What You Feel
3.3 Interacting with What You See
3.4 Enhancing Your Collaboration
3.1 Sharing Where You Gaze
44. By measuring and sharing physiological cues?
We know:
• VR can trigger emotional response
• Heart-rate can be an indicator of emotional response
• Sharing physiological feedback increases positive affect
A. Dey, T. Piumsomboon, Y. Lee, and M. Billinghurst, "Effects of Sharing Physiological States of Players in a
Collaborative Virtual Reality Gameplay," presented at the Proceedings of the 2017 CHI Conference on Human
Factors in Computing Systems, Denver, Colorado, USA, 2017.
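Sharing a physiological state requires mapping the raw signal to some visible cue for the partner. As a rough illustration, a minimal sketch (hypothetical function name, thresholds, and color mapping — not the study's actual visualization) that converts a heart-rate reading into an RGB color for a shared in-VR indicator:

```python
def heart_rate_to_color(bpm, resting=60.0, max_bpm=120.0):
    """Map a heart-rate reading (bpm) to an RGB tuple for a shared cue:
    calm blue at the resting rate, aroused red at max_bpm and above.
    Thresholds are illustrative assumptions."""
    t = (bpm - resting) / (max_bpm - resting)
    t = max(0.0, min(1.0, t))  # clamp normalized arousal to [0, 1]
    return (t, 0.0, 1.0 - t)   # blend from blue (calm) to red (aroused)
```

A real system would also smooth the signal over a short window before mapping it, so momentary sensor noise does not flicker the shared cue.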
51. Data Collected
• Raw heart rate
• Positive and negative affect schedule (PANAS)
• Subjective questionnaire (four-point Likert scale)
• Relative head orientation
3.2 Sharing What You Feel
Hypotheses
When heart-rate feedback is shown:
• Observers will feel more connected to the active player
• Observers will report more positive affect
• There will be more interaction between collaborators
Scary game:
• Will trigger more subjective understanding of emotions
Participants
26 (13 in each group)
7 females
Age: m=30.5, s.d.=5.2
52. 3.2 Sharing What You Feel
Raw heart-rate
• No significant difference
• Slightly higher heart-rate in scary zombie game
53. 3.2 Sharing What You Feel
Positive and negative affect
schedule (PANAS)
• Significant effect of gaming
• The scary zombie game produced more
positive and negative affect
• No significant (p=.15) effect of
heart-rate visualization
54. Relative head orientation
• Significant effect of
gaming experiences
• Joyous game had more
aligned head orientation
than scary game
3.2 Sharing What You Feel
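The relative head orientation measure used above can be sketched as the angle between the two players' head-forward vectors, with smaller angles meaning more aligned attention. This is a plain geometric illustration; the study's exact computation may differ:

```python
import math

def head_alignment_deg(fwd_a, fwd_b):
    """Angle in degrees between two users' head-forward vectors.
    Smaller angles indicate more aligned head orientation."""
    dot = sum(a * b for a, b in zip(fwd_a, fwd_b))
    na = math.sqrt(sum(a * a for a in fwd_a))
    nb = math.sqrt(sum(b * b for b in fwd_b))
    cos = max(-1.0, min(1.0, dot / (na * nb)))  # guard acos domain
    return math.degrees(math.acos(cos))
```

Logging this angle per frame and comparing its distribution across conditions is one straightforward way to quantify the "more aligned head orientation" finding.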
55. Summary
• Game type had a significant effect on PANAS
• Heart-rate feedback showed promise of being effective
3.2 Sharing What You Feel
A. Dey, T. Piumsomboon, Y. Lee, and M. Billinghurst, "Effects of Sharing Physiological States of Players in a
Collaborative Virtual Reality Gameplay," presented at the Proceedings of the 2017 CHI Conference on Human
Factors in Computing Systems, Denver, Colorado, USA, 2017.
60. T. Piumsomboon, G. Lee, R. W. Lindeman, and M. Billinghurst, "Exploring natural eye-gaze-based interaction for
immersive virtual reality," in 2017 IEEE Symposium on 3D User Interfaces (3DUI), 2017, pp. 36-39.
a. Examples of eye-gaze + gestures
interaction for Mixed Reality
b. Examples of natural eye-gaze-based
interaction for immersive Virtual Reality
By using our natural inputs and designing around
our natural behaviour?
61. a. Examples of eye-gaze + gestures interaction
for Mixed Reality
64. b. Examples of natural eye-gaze-based interaction
for immersive Virtual Reality
65. Overview of our Eye-gaze-based interaction
• Duo-Reticles
• Radial Pursuit
• Nod and Roll
Initial Study
• Variables
• Results
3.3 Interacting with What You See
T. Piumsomboon, G. Lee, R. W. Lindeman, and M. Billinghurst, "Exploring natural eye-gaze-based interaction for
immersive virtual reality," in 2017 IEEE Symposium on 3D User Interfaces (3DUI), 2017, pp. 36-39.
66. HTC Vive + Pupil Labs Eye Tracker
3.3 Interacting with What You See
67. A laptop PC running our
software on Unity version
5.4.1f1.
HTC Vive Kit + a pair of
Pupil Labs eye trackers with
a binocular mount
An iMac running Pupil
Labs Capture software
Hardware Setup
3.3 Interacting with What You See
68. 3.3 Interacting with What You See
Overview
Type of Eye Movement            Interaction Technique
Eye saccade                     Duo-Reticles
Smooth pursuit                  Radial Pursuit
Vestibulo-ocular reflex (VOR)   Nod and Roll
Vergence                        None tested
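For the smooth-pursuit row, a common way to detect that a user is following one of several moving targets is to correlate the gaze trace with each target's trace over a short window. The sketch below (an illustration of that general idea, not the paper's actual Radial Pursuit algorithm) computes a Pearson correlation between two coordinate traces:

```python
def pursuit_correlation(gaze_xs, target_xs):
    """Pearson correlation between a gaze coordinate trace and a moving
    target's coordinate trace. A value near 1.0 suggests the user is
    smoothly pursuing that target (illustrative sketch only)."""
    n = len(gaze_xs)
    mg = sum(gaze_xs) / n
    mt = sum(target_xs) / n
    cov = sum((g - mg) * (t - mt) for g, t in zip(gaze_xs, target_xs))
    vg = sum((g - mg) ** 2 for g in gaze_xs) ** 0.5
    vt = sum((t - mt) ** 2 for t in target_xs) ** 0.5
    if vg == 0 or vt == 0:
        return 0.0  # a constant trace carries no pursuit information
    return cov / (vg * vt)
```

In practice the correlation would be computed per axis over a sliding window, and the target with the highest sustained correlation would be selected.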
71. 3.3 Interacting with What You See
Duo-Reticles (DR)
Inertial Reticle (IR)
Real-time Reticle (RR)
A-1
As RR and IR are aligned,
alignment time counts down
A-2 A-3
Selection completed
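The Duo-Reticles countdown described above can be sketched as a small state update: selection completes once the real-time reticle (RR) stays within an alignment threshold of the inertial reticle (IR) for a dwell period. The threshold and timing values here are illustrative assumptions, not the paper's parameters:

```python
class DuoReticles:
    """Sketch of Duo-Reticles selection: RR must remain aligned with IR
    for a dwell period. Threshold and dwell values are hypothetical."""

    def __init__(self, align_threshold=0.05, dwell_time=0.5):
        self.align_threshold = align_threshold  # max RR-IR distance to count as aligned
        self.dwell_time = dwell_time            # seconds of alignment to confirm selection
        self.aligned_for = 0.0

    def update(self, rr_pos, ir_pos, dt):
        """Advance by dt seconds; return True when selection completes."""
        dist = sum((r - i) ** 2 for r, i in zip(rr_pos, ir_pos)) ** 0.5
        if dist <= self.align_threshold:
            self.aligned_for += dt
        else:
            self.aligned_for = 0.0  # misalignment resets the countdown
        return self.aligned_for >= self.dwell_time
```

Resetting the timer on misalignment is what makes the technique robust to stray saccades: only a deliberate, held alignment triggers a selection.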
76. Nod and Roll – Video 1
3.3 Interacting with What You See
77. 3.3 Interacting with What You See
C-2
C-1 Head-gaze Reticle (HR)
Real-time Reticle (RR)
C-3
Nod and Roll
78. Nod and Roll – Video 2
3.3 Interacting with What You See
79. 3.3 Interacting with What You See
Study Independent Variable: Interaction Technique
Part 1 Duo-Reticles vs Gaze-Dwell 1 (GD1)
Part 2 Radial Pursuit vs Gaze-Dwell 2 (GD2)
Part 3 Explorative
Initial Study
81. 3.3 Interacting with What You See
Study
Dependent Variables
Part          Objective Measures               Subjective Measures
Part 1 (DR)   Task completion time, # errors   Usability ratings, semi-structured interview
Part 2 (RP)   Task completion time, # errors   Usability ratings, semi-structured interview
Part 3 (NR)   None                             Usability ratings, semi-structured interview
82. 3.3 Interacting with What You See
Usability Ratings (7-point Likert scale)
Statement                          p (Part 1: DR vs GD1)   p (Part 2: RP vs GD2)   NR median (Part 3)
I preferred this technique         0.02                    0.12                    –
I felt tired using it              0.09                    0.03                    4
It was frustrating to use          0.14                    0.30                    3
It was fun to use                  0.14                    0.07                    6
I need to concentrate to use it    0.33                    0.09                    5
It was easy for me to use          0.07                    0.07                    5
I felt satisfied using it          0.14                    0.03                    5
It felt natural to use             0.17                    0.07                    4
I could interact precisely         0.23                    0.17                    4
▪ No performance difference (as expected)
▪ Most participants preferred Duo-Reticles over Gaze-Dwell 1
▪ Radial Pursuit was more satisfying and less fatiguing than Gaze-Dwell 2
83. T. Piumsomboon, G. Lee, R. W. Lindeman, and M. Billinghurst, "Exploring natural eye-gaze-based interaction for
immersive virtual reality," in 2017 IEEE Symposium on 3D User Interfaces (3DUI), 2017, pp. 36-39.
3.3 Interacting with What You See
Summary
• Three novel eye-gaze-based interaction techniques inspired by natural eye
movements
• An initial study found positive results supporting our approach
▪ Similar performance to Gaze-Dwell, but a superior user experience
• We will continue applying the same principles to improve the user experience
of eye gaze in immersive VR
84. 3.4 Enhancing Your Collaboration
3.1 Sharing Where You Gaze
3.2 Sharing What You Feel
3.3 Interacting with What You See
86. T. Piumsomboon and M. Billinghurst, "CoVAR: Collaborative Virtual and Augmented Reality System for
Remote Collaboration," ongoing research.
a. Examples of VR user body scaling
b. An example of VR user snapping to AR
perspective
By utilizing the virtuality of the collaboration?
98. Project: Visualization of physiological data
HAO CHEN
PhD Student
Investigating how to visualize physiological data for players to help them
perceive the data more effectively. In particular, we are exploring
multi-sensory (visual, audio, and haptic) visualization of physiological data.
The goal of this project is to make VR experiences more empathic and
higher in presence.
Contact:
Arindam.Dey@unisa.edu.au