SIGGRAPH 2017 Theater: SIGGRAPH in Japanese and Japan CG Showcase, by Yoichi Ochiai
Dr. Yoichi Ochiai and his Digital Nature Group from the University of Tsukuba will be showcasing several projects at SIGGRAPH in Japanese and Japan CG Showcase. Dr. Ochiai is an Assistant Professor and Advisor to the President at the University of Tsukuba and also serves as the CEO of Pixie Dust Technologies. The Digital Nature Group, consisting of 43 researchers and students, will present projects in the technical papers, emerging technologies, studio, and poster sessions, involving holographic acoustic levitation, plasma rendering, digital fabrication, virtual reality, and more. Visitors are invited to the group's booth to learn about their collaborative work.
Tomato disease detection using deep learning convolutional neural network, by Priyanka Pradhan
1. The document presents a method for detecting and classifying diseases in tomato leaves using a convolutional neural network model.
2. The proposed CNN model achieved an overall accuracy of 96.26% on the PlantVillage tomato dataset, outperforming fine-tuned InceptionResNetV2 and InceptionV3 models.
3. The model consists of four convolutional layers, four max pooling layers, and three fully connected layers, and is able to detect different tomato diseases with good individual class accuracies.
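The summary describes a stack of four convolutional layers, four max-pooling layers, and three fully connected layers. As a rough illustration of how such a stack reduces a leaf image to class scores, the sketch below traces the feature-map shape through a hypothetical configuration; the filter counts, kernel sizes, 256x256 input, and 10 output classes are assumptions for illustration, not the paper's exact settings.

```python
# Trace the feature-map shape through a hypothetical 4-conv / 4-pool / 3-FC stack.
# All layer sizes here are illustrative assumptions, not the paper's settings.

def conv2d_shape(h, w, c, filters, kernel=3, stride=1, pad=1):
    """Output shape of a 2D convolution (square kernel, symmetric padding)."""
    return ((h + 2 * pad - kernel) // stride + 1,
            (w + 2 * pad - kernel) // stride + 1,
            filters)

def maxpool_shape(h, w, c, pool=2):
    """Output shape of non-overlapping 2x2 max pooling."""
    return h // pool, w // pool, c

def trace(input_shape=(256, 256, 3), conv_filters=(32, 64, 128, 256),
          fc_units=(512, 128, 10)):
    h, w, c = input_shape
    shapes = [("input", (h, w, c))]
    for f in conv_filters:                      # 4 conv + 4 pool layers
        h, w, c = conv2d_shape(h, w, c, f)
        shapes.append(("conv", (h, w, c)))
        h, w, c = maxpool_shape(h, w, c)
        shapes.append(("pool", (h, w, c)))
    n = h * w * c                               # flatten before the FC head
    shapes.append(("flatten", (n,)))
    for u in fc_units:                          # 3 fully connected layers
        n = u
        shapes.append(("fc", (n,)))
    return shapes

if __name__ == "__main__":
    for name, shape in trace():
        print(f"{name:8s} {shape}")
```

With these assumed settings, each pool halves the spatial resolution, so a 256x256 input reaches the FC head as a 16x16x256 volume before flattening.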
Social LSTM is an important 2016 paper that introduced a method to model human-human interactions for the purpose of trajectory prediction in crowded spaces. It uses separate LSTMs for each trajectory connected through a social pooling layer, allowing spatially close LSTMs to share information. This helps the model account for how a person's behavior may be influenced by others in their immediate area. Experiments on standard datasets showed it reduces trajectory prediction error compared to methods without social modeling. Future work could extend it to model multiple object types and include static scene inputs.
A quick report on attending ICRA 2019 (IEEE International Conference on Robotics and Automation; https://www.icra2019.org/ ).
This material covers the following topics:
・Overview of ICRA 2019
・Trends and observations at ICRA 2019
・Key technologies at ICRA
・Future directions
・Paper summaries (102 papers)
Paper introduction: EfficientDet: Scalable and Efficient Object Detection, by Toru Tamaki
Mingxing Tan, Ruoming Pang, Quoc V. Le; EfficientDet: Scalable and Efficient Object Detection, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 10781-10790
https://openaccess.thecvf.com/content_CVPR_2020/html/Tan_EfficientDet_Scalable_and_Efficient_Object_Detection_CVPR_2020_paper.html
EarAuthCam: Personal Identification and Authentication Method Using Ear Image... (sugiuralab)
With advances in wireless technology and miniaturization, earphones are now worn for longer hours than before. Their applications have also become more diverse, and opportunities to access highly confidential information through them have increased. We propose a method comprising a hearable device equipped with a small camera for user authentication from ear images, improving the security of the hearable device. Ear images are first captured with the camera. The ear regions in the images are then extracted using a mask region-based convolutional neural network. Finally, the user is identified using histogram-of-oriented-gradients features and a support vector machine (SVM). Our method was able to identify 18 participants with an accuracy of 84.1%. Users are authenticated through unsupervised anomaly detection using an autoencoder, with an error rate of 8.36%. This method enables hands-free and eyes-free operation without requiring any explicit authentication action by the user.
Converting Tatamis into Touch Sensors by Measuring Capacitance (sugiuralab)
This document summarizes a research paper that proposes a method to convert tatami floor mats into touch sensors by measuring capacitance. Conductive sheets are placed under the tatami surface. When a person contacts the tatami, capacitance is measured between the sheets and their skin to detect the touch position. The system identifies 12 hand gestures with approximately 90% accuracy. Future work includes enabling multi-touch detection and using the sensors for footprint tracking and pose estimation.
Pinch Force Measurement Using a Geomagnetic Sensor (sugiuralab)
This document proposes measuring pinch force using the geomagnetic sensor in a smartphone. A device with embedded magnets and springs is attached to the smartphone. As force is applied, the magnet's distance from the sensor changes, altering the magnetic flux density. Measurements found a strong correlation between force and magnetic flux density. Future work includes testing different smartphone models and collecting user feedback to improve usability.
Smartphone-Based Teaching System for Neonate Soothing Motions (sugiuralab)
This document describes a proposed smartphone-based teaching system to help first-time caregivers learn how to properly soothe neonates. The system uses sensors in a stuffed toy and a smartphone to capture posture angles and acceleration during cradling motions. It provides real-time feedback on the user's form compared to expert cradling motions. An experiment tested the system's effectiveness in improving users' cradling posture after training compared to just watching a video. Results showed the system helped users better match the expert's inclination angle, indicating it could help ensure neonate safety by teaching proper neck support. Future work is needed to improve measurement accuracy and further validate the system.
Tactile Presentation of Orchestral Conductor's Motion Trajectory (sugiuralab)
This document proposes presenting a conductor's motion trajectory tactilely for visually impaired musicians using vibrators. It describes capturing conducting movements, mapping them to vibrators, and using tactile apparent movement. An experiment found trajectory presentation helped predict beat timing better than single vibrations, especially for tempo changes and start cues. Future work includes developing a universal device.
TouchLog: Finger Micro Gesture Recognition Using Photo-Reflective Sensors (sugiuralab)
The researchers developed a fingernail-sized device using 7 photo-reflective sensors to detect finger microgestures based on fingertip skin deformation. They implemented a random forest classifier to recognize 11 gestures with an average accuracy of 91.1% for the general model and 91.5% for the individual model. Future work will focus on addressing limitations like user dependence and developing a device that can be worn comfortably for real-world use.
Seeing the Wind: An Interactive Mist Interface for Airflow Input (sugiuralab)
Human activities introduce variations in environmental cues such as light and sound, which can serve as inputs for interfaces. One often overlooked cue is the airflow variation caused by these activities, which is challenging to detect and use because it is intangible. In this paper, we present an approach that uses mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To demonstrate the feasibility of this concept, we built a prototype using a humidifier and showed that it can recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing.
Identification and Authentication Using Clavicles (sugiuralab)
Identification and Authentication Using Clavicles
Yohei Kawasaki, Yuta Sugiura
2023 62nd Annual Conference of the Society of Instrument and Control Engineers (SICE), Mie, Japan, 2023
Estimation of Violin Bow Pressure Using Photo-Reflective Sensors (sugiuralab)
Estimation of Violin Bow Pressure Using Photo-Reflective Sensors presents a method for quantitatively estimating bow pressure during violin playing using photo-reflective sensors attached to the bow. Five sensors measure the distance between the bow stick and hair, which changes with applied pressure. A random forest regression model is trained on sensor distance values and actual pressure measurements to estimate pressure based solely on sensor values. In experiments, the model estimated bow pressure with an R2 of 0.84, MAE of 0.11N, and MAPE of 19.1% when tested on data from an experienced violinist. The goal is to provide visual feedback to support practice by quantifying bow pressure.
A Virtual Window Using Curtains and Image Projection (sugiuralab)
A Virtual Window Using Curtains and Image Projection
Naoharu Sawada, Takumi Yamamoto, Yuta Sugiura
In Proceedings of the 15th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR 2023), IEEE, August 18-19, 2023, Taipei, Taiwan.
7. Turning Cushions into Sensors
UIST 2011
Yuta Sugiura, Gota Kakehi, Anusha Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, and Takeo Igarashi, Detecting shape deformation of soft objects using directional photoreflectivity measurement, In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11), ACM, 509-516, October 16-19, 2011, Santa Barbara, CA, USA.
11. Turning Carpets into a Canvas
UIST 2014
Yuta Sugiura, Koki Toda, Takayuki Hoshi, Youichi Kamiyama, Takeo Igarashi, and Masahiko Inami, Graffiti fur: turning your carpet into a computer display, In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST '14), ACM, 149-156, October 5-8, 2014, Hawaii, USA.
21. Detection and Diagnosis of Cervical Myelopathy
• Imaging examinations such as MRI
• Screening methods that simply check for the presence of symptoms
• The 10-second test [*]
 • A representative screening method for cervical myelopathy
 • Evaluates agility in rapid opening and closing of the fingers (grip-and-release movement)
 • If roughly 20 or fewer repetitions are completed in 10 seconds, cervical myelopathy is suspected
[Figures: illustration of the grip-and-release movement; MRI image of a compressed cervical spinal cord [**]; an actual patient's movement]
[*] Cook, C., et al.: Reliability and diagnostic accuracy of clinical special tests for myelopathy in patients seen for cervical dysfunction. The Journal of Orthopaedic and Sports Physical Therapy 39(3), 172–178 (2009).
[**] Ono, K., et al.: Myelopathy hand. New clinical signs of cervical cord damage. The Journal of Bone and Joint Surgery, British Volume 69(2), 215–219 (1987).
22. Research Overview
• Screening for cervical myelopathy using camera video
• An easily accessible screening method
 • Uses the built-in camera of an everyday smartphone
• Achieves higher classification performance than the conventional method
 • Sensitivity: 90%
 • Specificity: 93%
[Figure: flow of the proposed method, starting from filming the grip-and-release movement]
Ryota Matsui, Takafumi Koyama, Koji Fujita, Hideo Saito, Yuta Sugiura, Video-Based Hand Tracking for Screening Cervical Myelopathy, ISVC 2021.
33. Can carpal tunnel syndrome be estimated with a mobile device?
• The user moves their thumb to follow characters appearing on the screen
• Features are extracted from the on-screen coordinates of the thumb
Koji Fujita*, Takuro Watanabe*, Tomoyuki Kuroiwa, Toru Sasaki, Akimoto Nimura, and Yuta Sugiura (*Koji Fujita and Takuro Watanabe are joint first authors), A Tablet-Based App for Carpal Tunnel Syndrome Screening: Diagnostic Case-Control Study, Journal of Medical Internet Research - Mobile Health and Ubiquitous Health, Vol.7, Iss.9, e14172, 2019.
34. Classification Results
• Two-class classification with a Support Vector Machine (SVM)
• Thumb movements during gameplay make it possible to estimate carpal tunnel syndrome
• Sensitivity: 93.0%, Specificity: 73.0%
Koji Fujita*, Takuro Watanabe*, Tomoyuki Kuroiwa, Toru Sasaki, Akimoto Nimura, and Yuta Sugiura (*Koji Fujita and Takuro Watanabe are joint first authors), A Tablet-Based App for Carpal Tunnel Syndrome Screening: Diagnostic Case-Control Study, Journal of Medical Internet Research - Mobile Health and Ubiquitous Health, Vol.7, Iss.9, e14172, 2019.
35. Screening of Carpal Tunnel Syndrome Patients
• With Dr. Koji Fujita and Dr. Akimoto Nimura of Tokyo Medical and Dental University
Koyama T, Sato S, Toriumi M, Watanabe T, Nimura A, Okawa A, Sugiura Y, Fujita K, A Screening Method Using Anomaly Detection on a Smartphone for Patients With Carpal Tunnel Syndrome: Diagnostic Case-Control Study, JMIR Mhealth Uhealth 2021;9(3):e26320.
36. Positive (patient) training data is hard to obtain
• Motivation
 • In clinical settings, positive and negative samples are imbalanced
 • It may not be possible to collect enough positive data
• Dataset and feature extraction
 • Touch positions of healthy participants and carpal tunnel syndrome patients while operating a smartphone game (moving the thumb two rounds across 12 directions), expressed as distance values from the center
• Method
 • Applies anomaly detection
 • A model is trained with an autoencoder, using data from 12 healthy hands as normal values
• Evaluation
 • Healthy participants: 27 hands; patients: 36 hands
 • Sensitivity: 94%, Specificity: 67%, AUC: 0.86 (trained on all 12 directions)
• Anomaly detection enables disease estimation using only negative data, solving the positive/negative data-imbalance problem
Koyama T, Sato S, Toriumi M, Watanabe T, Nimura A, Okawa A, Sugiura Y, Fujita K, A Screening Method Using Anomaly Detection on a Smartphone for Patients With Carpal Tunnel Syndrome: Diagnostic Case-Control Study, JMIR Mhealth Uhealth 2021;9(3):e26320.
37. Which movements contribute most to estimation?
• Motivation
 • Investigate a more efficient estimation method
• Method
 • Narrow the evaluation from all 12 directions to specific directions
• Evaluation
 • Healthy participants: 28 hands; patients: 36 hands
 • Autoencoder-based anomaly detection
• Results
 • All directions: Sensitivity 92.9%, Specificity 69.4%, AUC 0.85
 • 6 directions: Sensitivity 100.0%, Specificity 85.7%, AUC 0.956
• Restricting to thumb movements in specific directions enables more efficient estimation
• This suggests screening could become an extension of ordinary smartphone use, such as text entry and web browsing
Koyama T, Sato S, Toriumi M, Watanabe T, Nimura A, Okawa A, Sugiura Y, Fujita K, A Screening Method Using Anomaly Detection on a Smartphone for Patients With Carpal Tunnel Syndrome: Diagnostic Case-Control Study, JMIR Mhealth Uhealth 2021;9(3):e26320.
38. Can disease be estimated from everyday actions?
• Motivation
 • Can the disease be estimated from writing, an everyday action?
• Dataset and feature extraction
 • Pen pressure was measured while healthy participants and carpal tunnel syndrome patients drew a spiral
 • Changes in pen pressure were converted into frequency components and learned with a Support Vector Machine
• Evaluation and results
 • Healthy group: 31 participants; patient group: 33 participants
 • Sensitivity: 82%, Specificity: 71%, AUC: 0.81
• Disease estimation is possible even from the writing motion, an everyday action
Takuro Watanabe, Takafumi Koyama, Eriku Yamada, Akimoto Nimura, Koji Fujita, and Yuta Sugiura. 2021. The Accuracy of a Screening System for Carpal Tunnel Syndrome Using Hand Drawings, Journal of Clinical Medicine 10, no. 19: 4437.