Human-AI-Collaboration
Studying the Effect of AI Code Generators on Supporting Novice
Learners in Introductory Programming [Kazemitabaar et al.]
• Contribution: Conduct a user study to investigate
• How novice learners write and modify code with and without an AI code
generator
• How novice learners’ performance differs
• What the effects of the AI code generator are
• Used Model: Codex
Kazemitabaar, Majeed, et al. "Studying the effect of AI Code Generators on Supporting Novice Learners in Introductory Programming." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.
Human-AI-Collaboration
“What It Wants Me To Say”: Bridging the Abstraction Gap
Between End-User Programmers and Code-Generating Large
Language Models [Liu et al.]
• Contribution: Propose grounded abstraction matching as an example
solution to the abstraction matching problem, and show an example of its usage
• Used Model: Codex
Liu, Michael Xieyang, et al. "“What It Wants Me To Say”: Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models." Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems. 2023.
Human-AI-Collaboration
Are Two Heads Better Than One in AI-Assisted Decision Making?
Comparing the Behavior and Performance of Groups and
Individuals in Human-AI Collaborative Recidivism Risk
Assessment [Chiang et al.]
• Contribution: Investigate if both the process and the outcome of
people’s interactions with AI-driven decision aids can be affected by
whether the decisions are made individually or collectively
Chiang, Chun-Wei, et al. "Are Two Heads Better Than One in AI-Assisted Decision Making? Comparing the Behavior and Performance of Groups and Individuals in Human-AI Collaborative Recidivism Risk Assessment." Proceedings of
the 2023 CHI Conference on Human Factors in Computing Systems. 2023.
Mental Healthcare
When Recommender Systems Snoop into Social Media, Users
Trust them Less for Health Advice [Sun et al.]
• Contribution: Conduct a user study of a fitness plan recommender
system
• Identify identity threat and privacy concerns as two significant
user perceptions
• Test the effects of providing user choice in mitigating the
potential negativity brought by health recommender systems
Sun, Yuan, et al. "When Recommender Systems Snoop into Social Media, Users Trust them Less for Health Advice." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.
Mental Healthcare
The Perceived Utility of Smartphone and Wearable Sensor Data
in Digital Self-tracking Technologies for Mental Health [Kruzan
et al.]
• Contribution: Conduct interviews and explore self-tracking
technologies
• Patients’ mental health self-management practices
• Their interest in and preferences for a self-tracking technology
• Their comfort with sharing data from self-tracking technology
with primary care providers
Kruzan, Kaylee Payne, et al. "The Perceived Utility of Smartphone and Wearable Sensor Data in Digital Self-tracking Technologies for Mental Health." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
2023.
Comment Visualization
Who Does Not Benefit from Fact-checking Websites?: A
Psychological Characteristic Predicts the Selective Avoidance of
Clicking Uncongenial Facts [Tanaka et al.]
• Contribution
• Propose a new index that measures selective avoidance
separately from selective exposure
• Investigate psychological characteristics associated with users’
selective avoidance of uncongenial facts
Tanaka, Yuko, et al. "Who Does Not Benefit from Fact-checking Websites? A Psychological Characteristic Predicts the Selective Avoidance of Clicking Uncongenial Facts." Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems. 2023.
Virtual Reality
Towards a Metaverse Workspace: Opportunities, Challenges,
and Design Implications [Park et al.]
• Contribution
• Conduct semi-structured in-depth interviews with workers who
have experienced working in a physical office, remotely, and/or in
a metaverse
Park, Hyanghee, Daehwan Ahn, and Joonhwan Lee. "Towards a Metaverse Workspace: Opportunities, Challenges, and Design Implications." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.
Virtual Reality
Embodying Physics-Aware Avatars in Virtual Reality [Tao et al.]
• Contribution
• Implement a physics-aware self-avatar
• Compare physics correction with one-to-one mapping of the user’s
motion
• Investigate the thresholds of user preference for physics
corrections
Tao, Yujie, et al. "Embodying Physics-Aware Avatars in Virtual Reality." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.
Virtual Reality
FingerMapper: Mapping Finger Motions onto Virtual Arms to
Enable Safe Virtual Reality Interaction in Confined Spaces
[Tseng et al.]
• Contribution
• Map finger motions onto virtual arms and hands
Tseng, Wen-Jie, et al. "FingerMapper: Mapping Finger Motions onto Virtual Arms to Enable Safe Virtual Reality Interaction in Confined Spaces." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.
Today I’m going to talk about some papers we saw and discussed at CHI ’23.
I’ll cover 3 papers on Human-AI Collaboration, 2 papers on Mental Healthcare, 1 paper on Comment Visualization, and 3 papers on Virtual Reality.
First, I’ll talk about papers on Human-AI-Collaboration.
Interest in LLMs has gone off the charts since models like GPT-3 and ChatGPT came out. Many LLM-based tools for supporting humans have appeared, and nowadays it is common to work with LLMs. However, their effects have not been thoroughly investigated.
Many researchers, including us, wanted to investigate these effects and conducted user studies. Several such papers appeared at CHI ’23.
The first paper is “Studying the Effect of AI Code Generators on Supporting Novice Learners in Introductory Programming” by Kazemitabaar et al.
The authors wanted to figure out whether an AI code generator helps novice programmers learn introductory programming. So they conducted a user study investigating how novices write and modify code with and without the generator, how their performance differs, and what the generator’s effects are.
The next paper is “What It Wants Me To Say”: Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models by Liu et al.
In this paper, the authors wanted to solve a problem with prompting: when end users write prompts for an LLM, they often struggle to express their intent, or the model misunderstands it.
This produces output that differs from the users’ intent, and this kind of problem is called the abstraction matching problem.
In detail, the authors propose grounded abstraction matching as a solution and demonstrate it with an example.
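As a toy illustration of the idea (my own sketch under assumed utterances, not the authors’ implementation): the system maps a user’s free-form request onto a fixed set of utterances it is known to handle reliably, and can then surface the matched utterance back to the user so they learn “what it wants me to say”.

```python
from typing import Optional
import difflib

# Hypothetical catalog of utterances the system is known to translate
# into correct code (the "grounded" set). These entries are made up
# for illustration; they are not from the paper.
SUPPORTED_UTTERANCES = [
    "sum column a",
    "average column a",
    "filter rows where column a is greater than a value",
    "sort rows by column a",
]

def match_to_supported(request: str) -> Optional[str]:
    """Map a free-form request onto the closest supported utterance.

    Returns None when nothing is close enough, signalling that the
    user should rephrase using the system's vocabulary.
    """
    matches = difflib.get_close_matches(
        request.lower(), SUPPORTED_UTTERANCES, n=1, cutoff=0.5
    )
    return matches[0] if matches else None

print(match_to_supported("please sum up column a"))  # -> sum column a
```

A real system would ground this set in what the code generator can actually do, rather than a hand-written list.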
The third paper compares the behavior and performance of groups and individuals.
Most papers on Human-AI Collaboration focus on the interaction between a model and individuals, but this paper examined whether the collaboration is affected by whether decisions are made by groups or by individuals.
In this user study, participants took the role of a judge and had to decide, with the help of an AI model, whether the defendant would reoffend within 2 years. The study was measured with six metrics, including:
Reliance: how appropriately users rely on the AI model
Fairness: no bias or discrimination
Confidence: users’ confidence in the AI model and the final decision
Accountability: a sense of responsibility for the final decision
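To illustrate how a metric like reliance could be operationalized (this is a generic sketch of my own, not the paper’s exact formulation), one can compute over- and under-reliance rates from a per-case decision log:

```python
def reliance_rates(ai_preds, human_decisions, ground_truth):
    """Compute over- and under-reliance from a per-case decision log.

    over-reliance:  fraction of AI-wrong cases where the human
                    followed the AI anyway.
    under-reliance: fraction of AI-right cases where the human
                    overrode the AI.
    """
    over = under = ai_wrong = ai_right = 0
    for ai, human, truth in zip(ai_preds, human_decisions, ground_truth):
        if ai == truth:
            ai_right += 1
            if human != ai:
                under += 1
        else:
            ai_wrong += 1
            if human == ai:
                over += 1
    return (over / ai_wrong if ai_wrong else 0.0,
            under / ai_right if ai_right else 0.0)

# Toy log: per case, the AI's prediction, the human's final decision,
# and the ground truth (1 = reoffended, 0 = did not).
print(reliance_rates(ai_preds=[1, 0, 1, 1],
                     human_decisions=[1, 0, 0, 1],
                     ground_truth=[1, 1, 0, 1]))  # -> (0.5, 0.0)
```

Lower values on both rates indicate more appropriate reliance.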
The result showed that
This result supports our motivation to add an LLM to the MR office.
The next topic is Mental Healthcare.
As you know, there is an ongoing project on this topic, and we are developing a mobile application to help patients and doctors. At CHI, there were a few papers worth referencing.
The first one is “When Recommender Systems Snoop into Social Media, Users Trust them Less for Health Advice” by Sun et al.
This paper is about personalized fitness programs, which is slightly different from our topic.
But the authors investigated the use of sensitive data, such as SNS data, and the concerns about it. So this might be worth referencing when we handle sensitive data.
In the user study, the authors used six personalization approaches, and each participant’s fitness program was personalized by one of them.
Half of the participants had the option to change the approach or stick with the current one.
The result showed that 67.5% changed the approach, and they preferred in-app filtering to social-media filtering.
In fact, participants whose fitness programs were personalized using social media data felt more identity threat and privacy concerns, reducing their trust in the program.
Meanwhile, providing user choice reduced the threat, increased the sense of agency, and increased trust in the program.
The next paper is “The Perceived Utility of Smartphone and Wearable Sensor Data in Digital Self-tracking Technologies for Mental Health” by Kruzan et al.
They conducted interviews.
The results showed that patients manage symptoms by doing effortful maintenance activities such as exercise, or by doing small, focused tasks that bring joy.
Yet the authors also highlight a misalignment between patient needs and current efforts to use sensors.
This paper felt quite important to me because we collect self-tracking data, and we need to motivate patients to collect their data with a smartphone. The result
The next topic is Comment Visualization.
There is a project that analyzes people’s emotions, beliefs, etc. in YouTube comment sections. The task is to compare comments on different channels discussing the same topic, and to examine how people are affected by the channel and by other comments.
They divided click behavior following this figure.
42% of the fact-exposure group were willing to make belief-examining clicks, while only 7% of the fact-avoidance group were.
This result
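To make the distinction between selective exposure and selective avoidance concrete, here is one generic way to compute separate rates from a click log. This is my own illustrative operationalization with made-up field names, not the authors’ actual index:

```python
def exposure_and_avoidance(clicks):
    """Compute selective exposure and avoidance rates from a click log.

    `clicks` is a list of dicts with hypothetical fields:
      congenial: whether the fact supports the user's prior belief
      clicked:   whether the user clicked the fact-check link

    selective exposure  = click rate on congenial facts
    selective avoidance = 1 - click rate on uncongenial facts
    Measuring the two on disjoint item sets keeps them separable.
    """
    cong = [c for c in clicks if c["congenial"]]
    uncong = [c for c in clicks if not c["congenial"]]
    exposure = sum(c["clicked"] for c in cong) / len(cong) if cong else 0.0
    avoidance = (1 - sum(c["clicked"] for c in uncong) / len(uncong)
                 if uncong else 0.0)
    return exposure, avoidance

log = [
    {"congenial": True, "clicked": True},
    {"congenial": True, "clicked": True},
    {"congenial": False, "clicked": False},
    {"congenial": False, "clicked": True},
]
print(exposure_and_avoidance(log))  # -> (1.0, 0.5)
```

The point of the paper’s new index is exactly this separation: a user can show high exposure and high avoidance at the same time.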
The next topic is Virtual Reality. The first paper is the metaverse workspace study by Park et al.
The interviews showed that working in the metaverse has strengths for management, collaboration, wellbeing, etc.
But it also has some drawbacks, such as the risk of surveillance.
One of the interesting drawbacks was a paradox: working in the metaverse still demands a physical workplace, since it is a form of remote work. I think VR/MR can minimize this problem, and finding out how to do that will be an interesting topic.
Next is “Embodying Physics-Aware Avatars in Virtual Reality” by Tao et al.
Embodiment and presence in VR are stronger when the alignment between the user and the avatar is strong. But one-to-one mapping is not always optimal, because there are mismatch cases, such as the user’s hand colliding with a virtual object.
The authors investigated whether physics feedback to the self-avatar can help in those cases, so they implemented a physics-aware self-avatar.
Here’s the footage
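The correction itself can be sketched like this (a minimal example of my own, assuming a single sphere collider in 2D, not the authors’ implementation): instead of placing the avatar hand exactly at the tracked position, the hand is projected out of any virtual object it penetrates.

```python
def correct_hand_position(tracked, sphere_center, radius):
    """Project a tracked hand position out of a virtual sphere collider.

    With one-to-one mapping the avatar hand would sit at `tracked` even
    inside the object; physics correction pushes it to the sphere
    surface instead. 2D points as (x, y) tuples for brevity.
    """
    dx = tracked[0] - sphere_center[0]
    dy = tracked[1] - sphere_center[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist >= radius or dist == 0.0:
        return tracked  # no penetration (or degenerate case): keep 1:1 mapping
    scale = radius / dist
    return (sphere_center[0] + dx * scale, sphere_center[1] + dy * scale)

# A hand tracked halfway inside a unit sphere is corrected to its surface.
print(correct_hand_position((0.5, 0.0), (0.0, 0.0), 1.0))  # -> (1.0, 0.0)
```

The threshold study then asks how large this deviation between the tracked and corrected pose can get before users start to prefer the uncorrected mapping.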
The final paper is “FingerMapper: Mapping Finger Motions onto Virtual Arms to Enable Safe Virtual Reality Interaction in Confined Spaces” by Tseng et al.
In this paper, the authors wanted to reduce the space VR interaction requires, so they tried to substitute arm motions by mapping finger motions onto them.
They mapped the virtual arm to the index finger and the virtual hand to the thumb.
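A minimal sketch of the mapping idea (my own simplification with an assumed linear transfer function and made-up ranges; FingerMapper’s actual mapping is more sophisticated): index-finger flexion drives the virtual arm’s reach, so large arm motions are replaced by small finger motions.

```python
def finger_to_arm_reach(index_flexion_deg, max_reach_m=0.7):
    """Map index-finger flexion (0-90 degrees) onto virtual-arm reach.

    0 degrees (finger extended) -> arm fully extended (max reach)
    90 degrees (finger curled)  -> arm retracted to the body
    A linear transfer function, chosen only for illustration.
    """
    flex = min(max(index_flexion_deg, 0.0), 90.0)  # clamp tracking noise
    return max_reach_m * (1.0 - flex / 90.0)

print(finger_to_arm_reach(0.0), finger_to_arm_reach(90.0))  # -> 0.7 0.0
```

A real implementation would drive full arm joint angles (and the virtual hand from the thumb) with a calibrated, likely non-linear transfer function rather than a single reach value.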