[Lin, 2022] Stephen Shiao-ru Lin, Nisal Menuka Gamage, Kithmini Herath, and Anusha Withana. 2022. MyoSpring: 3D Printing Mechanomyographic Sensors for Subtle Finger Gesture Recognition. In Sixteenth
International Conference on Tangible, Embedded, and Embodied Interaction (TEI '22). Association for Computing Machinery, New York, NY, USA, Article 15, 1–13.
[Yamashita, 2017] Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, and Yuta Sugiura. 2017. CheekInput: Turning Your Cheek into an Input Surface by Embedded Optical Sensors on a Head-mounted Display. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17). Association for Computing Machinery, New York, NY, USA, Article 19, 8 pages.
[Kikuchi, 2017] Takashi Kikuchi, Yuta Sugiura, Katsutoshi Masai, Maki Sugimoto, and Bruce H. Thomas. 2017. EarTouch: Turning the Ear into an Input Surface. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '17). Association for Computing Machinery, New York, NY, USA, Article 27, 6 pages.
[Ogata, 2015] Masa Ogata and Michita Imai. 2015. SkinWatch: skin gesture interaction for smart watch. In Proceedings of the 6th Augmented Human International Conference (AH '15). Association for
Computing Machinery, New York, NY, USA, 21–24.
[Ogawa, 2021] Ryoma Ogawa, Kyosuke Futami, and Kazuya Murao. 2021. NasalBreathInput: A Hands-Free Input Method by Nasal Breath Gestures using a Glasses Type Device. In The 23rd International
Conference on Information Integration and Web Intelligence (iiWAS2021). Association for Computing Machinery, New York, NY, USA, 620–624.
[Masai, 2020] Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, and Maki Sugimoto. 2020. Face Commands - User-Defined Facial Gestures for Smart Glasses. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 374–386.
[Hashimoto, 2018] Takuma Hashimoto, Suzanne Low, Koji Fujita, Risa Usumi, Hiroshi Yanagihara, Chihiro Takahashi, Maki Sugimoto, and Yuta Sugiura. 2018. TongueInput: Input Method by Tongue Gestures Using Optical Sensors Embedded in Mouthpiece. In Proceedings of the SICE Annual Conference 2018. IEEE, 6 pages.
[William, 2016] William Saunders and Daniel Vogel. 2016. Tap-Kick-Click: Foot Interaction for a Standing Desk. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS '16).
Association for Computing Machinery, New York, NY, USA, 323–333.
[Kunimi, 2022] Yusuke Kunimi, Masa Ogata, Hirotaka Hiraki, Motoshi Itagaki, Shusuke Kanazawa, and Masaaki Mochimaru. 2022. E-MASK: A Mask-Shaped Interface for Silent Speech Interaction with Flexible
Strain Sensors. In Augmented Humans 2022 (AHs 2022). Association for Computing Machinery, New York, NY, USA, 26–34.
[Lee, 2020] Hyein Lee, Yoonji Kim, and Andrea Bianchi. 2020. MAScreen: Augmenting Speech with Visual Cues of Lip Motions, Facial Expressions, and Text Using a Wearable Display. In SIGGRAPH Asia 2020 Emerging Technologies (SA '20). Association for Computing Machinery, New York, NY, USA, Article 2, 1–2.
[Kusabuka, 2020] Takahiro Kusabuka and Takuya Indo. 2020. IBUKI: Gesture Input Method Based on Breathing. In Adjunct Publication of the 33rd Annual ACM Symposium on User Interface Software and
Technology (UIST '20 Adjunct). Association for Computing Machinery, New York, NY, USA, 102–104.
[Suzuki, 2020] Suzuki, Y., Sekimori, K., Yamato, Y., Yamasaki, Y., Shizuki, B., and Takahashi, S. 2020. A Mouth Gesture Interface Featuring a Mutual-Capacitance Sensor Embedded in a Surgical Mask. In Human-Computer Interaction. Multimodal and Natural Interaction (HCII 2020), Kurosu, M. (Ed.). Lecture Notes in Computer Science, vol. 12182. Springer, Cham.
[Schwarz, 2010] Julia Schwarz, Chris Harrison, Scott Hudson, and Jennifer Mankoff. 2010. Cord input: an intuitive, high-accuracy, multi-degree-of-freedom input method for mobile devices. In Proceedings of
the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). Association for Computing Machinery, New York, NY, USA, 1657–1660.
[Olwal, 2018] Alex Olwal, Jon Moeller, Greg Priest-Dorman, Thad Starner, and Ben Carroll. 2018. I/O Braid: Scalable Touch-Sensitive Lighted Cords Using Spiraling, Repeating Sensing Textiles and Fiber Optics.
In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST '18). Association for Computing Machinery, New York, NY, USA, 485–497.
[Shahmiri, 2019] Fereshteh Shahmiri, Chaoyu Chen, Anandghan Waghmare, Dingtian Zhang, Shivan Mittal, Steven L. Zhang, Yi-Cheng Wang, Zhong Lin Wang, Thad E. Starner, and Gregory D. Abowd. 2019.
Serpentine: A Self-Powered Reversibly Deformable Cord Sensor for Human Input. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing
Machinery, New York, NY, USA, Paper 545, 1–14.
[Morris, 2010] Meredith Ringel Morris, Jacob O. Wobbrock, and Andrew D. Wilson. 2010. Understanding users' preferences for surface gestures. In Proceedings of Graphics Interface 2010 (GI '10). Canadian
Information Processing Society, CAN, 261–268.
[Vatavu, 2015] Radu-Daniel Vatavu and Jacob O. Wobbrock. 2015. Formalizing Agreement Analysis for Elicitation Studies: New Measures, Significance Test, and Toolkit. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems (CHI '15). Association for Computing Machinery, New York, NY, USA, 1325–1334.
[Fukahori, 2015] Koumei Fukahori, Daisuke Sakamoto, and Takeo Igarashi. 2015. Exploring Subtle Foot Plantar-based Gestures with Sock-placed Pressure Sensors. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems (CHI '15). Association for Computing Machinery, New York, NY, USA, 3019–3028.