EarAuthCam: Personal Identification and Authentication Method Using Ear Image...
With advances in wireless technology and miniaturization, earphones are now worn for longer periods than before. Their applications have also diversified, increasing the opportunities to access highly confidential information through them. We propose a method that uses a hearable device equipped with a small camera to authenticate users from ear images, improving the security of the hearable device. Ear images are first captured with the camera, and the ear regions are then extracted using a mask region-based convolutional neural network. Finally, the user is identified using histogram of oriented gradients (HOG) features and a support vector machine (SVM). The method identified 18 participants with an accuracy of 84.1%, and users were authenticated through unsupervised anomaly detection with an autoencoder at an error rate of 8.36%. This method enables hands- and eye-free operation without requiring any explicit authentication action by the user.
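The identification pipeline above (HOG features fed to a classifier) can be sketched in miniature. The snippet below computes a toy single-cell orientation histogram — real HOG pools histograms over local cells and blocks — and identifies a user by nearest enrolled centroid, a simple stand-in for the paper's SVM. The images, user names, and constants are illustrative, not the paper's data.

```python
import math

def hog_descriptor(img, n_bins=9):
    # Toy HOG: one orientation histogram over the whole image, weighted by
    # gradient magnitude. (Real HOG pools histograms over local cells/blocks.)
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang // (180 / n_bins)) % n_bins] += math.hypot(gx, gy)
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def identify(desc, centroids):
    # Nearest-centroid stand-in for the SVM: each enrolled user is represented
    # by the mean descriptor of their training ear images.
    return min(centroids,
               key=lambda u: sum((a - b) ** 2 for a, b in zip(desc, centroids[u])))

# Two synthetic "ear crops" with different dominant edge directions.
vertical_edges = [[10 * x for x in range(6)] for _ in range(6)]
horizontal_edges = [[10 * y] * 6 for y in range(6)]
users = {"alice": hog_descriptor(vertical_edges),
         "bob": hog_descriptor(horizontal_edges)}
print(identify(hog_descriptor(vertical_edges), users))  # "alice"
```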
Converting Tatamis into Touch Sensors by Measuring Capacitance
This document summarizes a research paper that proposes a method to convert tatami floor mats into touch sensors by measuring capacitance. Conductive sheets are placed under the tatami surface. When a person contacts the tatami, capacitance is measured between the sheets and their skin to detect the touch position. The system identifies 12 hand gestures with approximately 90% accuracy. Future work includes enabling multi-touch detection and using the sensors for footprint tracking and pose estimation.
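A minimal sketch of the touch-detection step: compare each conductive sheet's capacitance reading against its no-touch baseline and report the sheet with the largest rise. The sheet layout, units, and threshold below are assumptions for illustration, not the paper's calibration.

```python
def detect_touch(readings, baseline, threshold=5.0):
    # A touching hand couples capacitively with the sheet under it, raising
    # that sheet's reading above its no-touch baseline.
    deltas = [r - b for r, b in zip(readings, baseline)]
    best = max(range(len(deltas)), key=deltas.__getitem__)
    return best if deltas[best] >= threshold else None

baseline = [100.0, 100.0, 100.0]  # calibrated with nobody on the tatami
print(detect_touch([100.2, 131.0, 99.8], baseline))  # 1 (middle sheet touched)
print(detect_touch([100.2, 100.5, 99.8], baseline))  # None (below threshold)
```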
Pinch Force Measurement Using a Geomagnetic Sensor
This document proposes measuring pinch force using the geomagnetic sensor in a smartphone. A device with embedded magnets and springs is attached to the smartphone. As force is applied, the magnet's distance from the sensor changes, altering the magnetic flux density. Measurements found a strong correlation between force and magnetic flux density. Future work includes testing different smartphone models and collecting user feedback to improve usability.
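The distance-to-flux relationship can be inverted to recover force. The sketch below assumes a dipole-like falloff B ∝ 1/d³ and a linear spring F = k(d₀ − d); the paper's actual calibration may differ, and all constants are illustrative.

```python
def force_from_flux(B, B0=1.0, d0=10.0, k=2.0):
    # Invert B = B0 * (d0 / d)**3  ->  d = d0 * (B0 / B)**(1/3),
    # then apply the spring law F = k * (d0 - d) for the compression.
    d = d0 * (B0 / B) ** (1.0 / 3.0)
    return k * (d0 - d)

print(force_from_flux(1.0))  # 0.0: at rest distance, no compression
# Flux x8 => magnet at half the rest distance => F ~ k * d0 / 2 = 10.0
print(force_from_flux(8.0))
```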
Smartphone-Based Teaching System for Neonate Soothing Motions
This document describes a proposed smartphone-based teaching system to help first-time caregivers learn how to properly soothe neonates. The system uses sensors in a stuffed toy and a smartphone to capture posture angles and acceleration during cradling motions. It provides real-time feedback on the user's form compared to expert cradling motions. An experiment tested the system's effectiveness in improving users' cradling posture after training compared to just watching a video. Results showed the system helped users better match the expert's inclination angle, indicating it could help ensure neonate safety by teaching proper neck support. Future work is needed to improve measurement accuracy and further validate the system.
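The real-time comparison against expert form reduces to one step: derive an inclination angle from the stuffed toy's accelerometer and check it against the expert's. The axis convention, tolerance, and feedback strings below are assumptions for illustration.

```python
import math

def inclination_deg(ax, ay, az):
    # Tilt away from vertical, from a gravity-dominated accelerometer reading
    # (z axis pointing up when the toy is held level).
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def feedback(user_deg, expert_deg, tol=5.0):
    # Compare the user's cradling inclination to the expert's reference angle.
    diff = user_deg - expert_deg
    if abs(diff) <= tol:
        return "good"
    return "tilt less" if diff > 0 else "tilt more"

print(inclination_deg(0.0, 0.0, 9.8))  # 0.0 -- toy held perfectly level
print(feedback(33.0, 30.0))            # "good" -- within tolerance of expert
```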
Tactile Presentation of Orchestral Conductor's Motion Trajectory
This document proposes presenting a conductor's motion trajectory tactilely for visually impaired musicians using vibrators. It describes capturing conducting movements, mapping them to vibrators, and using tactile apparent movement. An experiment found trajectory presentation helped predict beat timing better than single vibrations, especially for tempo changes and start cues. Future work includes developing a universal device.
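The trajectory-to-vibrator mapping can be sketched simply: each sampled point of the conducting stroke activates its nearest vibrator, and consecutive duplicates are merged so the sequence reads as one sweep (the apparent-movement effect itself comes from overlapping vibration onsets in hardware). The vibrator layout and stroke below are made up for illustration.

```python
def vibration_sequence(trajectory, vibrators):
    # Map each 2-D trajectory sample to its nearest vibrator position and
    # merge consecutive duplicates into a single activation.
    def nearest(p):
        return min(range(len(vibrators)),
                   key=lambda i: (p[0] - vibrators[i][0]) ** 2
                                 + (p[1] - vibrators[i][1]) ** 2)
    seq = []
    for p in trajectory:
        v = nearest(p)
        if not seq or seq[-1] != v:
            seq.append(v)
    return seq

# Three vibrators in a row; a left-to-right conducting stroke (positions made up).
row = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
stroke = [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (1.9, 0.0), (2.0, 0.0)]
print(vibration_sequence(stroke, row))  # [0, 1, 2]
```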
TouchLog: Finger Micro Gesture Recognition Using Photo-Reflective Sensors
The researchers developed a fingernail-sized device using 7 photo-reflective sensors to detect finger microgestures based on fingertip skin deformation. They implemented a random forest classifier to recognize 11 gestures with an average accuracy of 91.1% for the general model and 91.5% for the individual model. Future work will focus on addressing limitations like user dependence and developing a device that can be worn comfortably for real-world use.
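The classification step can be illustrated with a tiny ensemble of one-feature threshold stumps voting by majority — a crude stand-in for the paper's random forest. The sensor vectors are synthetic (two values rather than the device's seven), and the gesture names are invented.

```python
import random

def train_stumps(X, y, n_stumps=25, seed=0):
    # Each stump splits on one randomly chosen sensor at a random threshold
    # and labels each side with its majority gesture class.
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_stumps):
        f = rng.randrange(len(X[0]))
        t = rng.uniform(min(x[f] for x in X), max(x[f] for x in X))
        left = [yi for xi, yi in zip(X, y) if xi[f] <= t]
        right = [yi for xi, yi in zip(X, y) if xi[f] > t]
        majority = lambda labels: max(set(labels), key=labels.count) if labels else y[0]
        stumps.append((f, t, majority(left), majority(right)))
    return stumps

def predict(stumps, x):
    # Majority vote over the ensemble.
    votes = [left if x[f] <= t else right for f, t, left, right in stumps]
    return max(set(votes), key=votes.count)

# Two synthetic 2-sensor readings per gesture (the real device has 7 sensors).
X = [[0.1, 1.0], [0.2, 2.0], [0.8, 8.0], [0.9, 9.0]]
y = ["tap", "tap", "swipe", "swipe"]
model = train_stumps(X, y)
```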
Seeing the Wind: An Interactive Mist Interface for Airflow Input
Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing.
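One plausible detection step: with the mist column undisturbed, the ToF reading sits near a baseline; airflow displaces mist into or out of the beam and shifts the reading. The sketch below flags frames deviating from baseline by more than a threshold; the units and constants are illustrative assumptions, not values from the paper.

```python
def airflow_frames(readings, baseline, threshold=0.05):
    # Indices of frames where the ToF distance deviates from the calm-mist
    # baseline, i.e. airflow has disturbed the mist in the sensing beam.
    return [i for i, d in enumerate(readings) if abs(d - baseline) > threshold]

calm = 0.50  # metres: reading with the humidifier's mist column undisturbed
print(airflow_frames([0.50, 0.51, 0.62, 0.38, 0.50], calm))  # [2, 3]
```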
Identification and Authentication Using Clavicles
Yohei Kawasaki, Yuta Sugiura
2023 62nd Annual Conference of the Society of Instrument and Control Engineers (SICE), Mie, Japan, 2023
Estimation of Violin Bow Pressure Using Photo-Reflective Sensors
This paper presents a method for quantitatively estimating bow pressure during violin playing using photo-reflective sensors attached to the bow. Five sensors measure the distance between the bow stick and the bow hair, which changes with the applied pressure. A random forest regression model is trained on sensor distance values paired with ground-truth pressure measurements, so that pressure can later be estimated from the sensor values alone. In experiments on data from an experienced violinist, the model estimated bow pressure with an R² of 0.84, an MAE of 0.11 N, and a MAPE of 19.1%. The goal is to provide visual feedback that supports practice by quantifying bow pressure.
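A miniature version of the calibration step: fit a mapping from sensor-measured stick-hair distance to pressure, here with a one-variable least-squares line standing in for the paper's random-forest regressor, plus the MAE metric reported above. The calibration pairs are made up.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def mae(pred, true):
    # Mean absolute error, one of the metrics reported in the paper.
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

# Synthetic calibration pairs: smaller stick-hair distance => larger pressure.
dist = [3.0, 2.5, 2.0, 1.5]        # mm, from the photo-reflective sensors
pressure = [0.2, 0.45, 0.7, 0.95]  # N, from a reference force sensor
a, b = fit_line(dist, pressure)
est = [a * d + b for d in dist]
```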
A Virtual Window Using Curtains and Image Projection
Naoharu Sawada, Takumi Yamamoto, Yuta Sugiura
In Proceedings of the 15th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR2023), IEEE, August 18-19, 2023, Taipei, Taiwan.
Augmented Sports of Badminton by Changing Opening Status of Shuttle’s Feathers
Takumi Yamamoto*, Ryohei Baba*, Yuta Sugiura (* Equal contribution)
In Proceedings of the 15th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR2023), IEEE, August 18-19, 2023, Taipei, Taiwan.
Carpal Tunnel Syndrome Estimation through Median Nerve Segmentation in Ultr...
This study aimed to improve the accuracy of estimating Carpal Tunnel Syndrome (CTS) by segmenting the median nerve in ultrasound videos and extracting features from the time-series data. The researchers used deep learning to automatically track and segment the median nerve in ultrasound videos as subjects performed finger movements. They extracted features from individual video frames as well as the time-series data and used these to estimate CTS. The estimations using features from the videos and time-series data showed higher sensitivity and specificity compared to estimations using single image features. The researchers concluded that segmenting the median nerve in ultrasound videos and using features from the time-series data can improve the accuracy of CTS estimation.
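The time-series feature extraction can be illustrated simply: once segmentation yields a per-frame nerve area, summarize how that area varies over the finger movement. The statistics below (mean, standard deviation, range) are plausible examples of such features, not necessarily the paper's feature set, and the area values are synthetic.

```python
def area_features(areas):
    # Summary statistics over the per-frame segmented median-nerve areas.
    n = len(areas)
    mean = sum(areas) / n
    std = (sum((a - mean) ** 2 for a in areas) / n) ** 0.5
    return {"mean": mean, "std": std, "range": max(areas) - min(areas)}

# Per-frame areas (mm^2) during one finger flexion-extension cycle (synthetic).
healthy = [10.0, 11.0, 12.0, 11.0, 10.0]
print(area_features(healthy)["range"])  # 2.0
```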
Virtual IMU Data Augmentation by Spring-Joint Model for Motion Exercises Reco...
This document proposes a method to augment virtual IMU data using a spring-joint model for motion exercise recognition without real data. The method aims to address the limitation of current virtual IMU data which has a limited motion length. It introduces a spring-joint virtual sensor module that can simulate different acceleration distributions and augment the virtual acceleration data spatially and temporally. An experiment tested the method on three aerobic exercises and showed the proposed data augmentation approach improved motion recognition accuracy from 45.5% to 85.3% compared to non-augmented virtual data.
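The spring-joint module can be sketched as a unit mass tied to the joint trajectory by a damped spring; different stiffness and damping pairs turn one virtual joint motion into many distinct acceleration traces, which is the augmentation idea. The integration scheme and constants below are illustrative, not the paper's model parameters.

```python
def spring_sensor_accel(joint_pos, k=50.0, c=4.0, dt=0.01):
    # Unit-mass sensor pulled toward the joint by a damped spring:
    #   a = k * (joint - sensor) - c * velocity
    # Semi-implicit Euler integration; each (k, c) pair yields a different
    # acceleration trace from the same joint trajectory.
    pos, vel, accel = joint_pos[0], 0.0, []
    for p in joint_pos:
        a = k * (p - pos) - c * vel
        accel.append(a)
        vel += a * dt
        pos += vel * dt
    return accel

still = spring_sensor_accel([1.0] * 20)  # motionless joint -> zero acceleration
step = [0.0] * 5 + [0.3] * 15            # sudden 0.3 m joint displacement
stiff = spring_sensor_accel(step, k=80.0)
soft = spring_sensor_accel(step, k=20.0)
```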