Human activities can introduce variations in environmental cues, such as light and sound, which can serve as inputs for interfaces. One often overlooked cue, however, is the airflow variation caused by these activities, which is difficult to detect and utilize because of its intangible nature. In this paper, we unveil an approach that uses mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist and smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing.
Seeing the Wind: An Interactive Mist Interface for Airflow Input
Seeing the Wind:
An Interactive Mist Interface for Airflow Input
1: Department of Information and Computer Science, Keio University, Japan
2: Guangzhou Institute of Technology, Xidian University, China
Tian Min1, Chengshuo Xia1,2, Takumi Yamamoto1 and Yuta Sugiura1
Airflow: Human Activity Recognition
In addition to naturally occurring airflow,
human activities can generate airflow through various means,
such as blowing, walking, or waving.
Mist Sensing Technique
Mist Types | Airflow Direction | Airflow Intensity
Empty
Dry Ice
Humidifier
Cigarette
Hot Water
Incense
Vaporizer
Sensor Data
[Figure: sensor readings (mm) over time (frames); particle concentration (particles per liter) for each mist type]
Higher particle concentrations result in a higher efficiency of blocking the infrared light.
Mist Sensing Technique
Mist Types | Airflow Direction | Airflow Intensity
[Figure: experimental setup with ToF sensor, anemometer, and fan; sensor data (mm) over time (frames)]
Notably, we observed a clear nonlinear relationship between airflow velocity and mist dispersal.
Implementation: Recognition Method
Sensor 1
Sensor 2
Sensor 3
Proportion of Positive Values (PPV)
Mean of Positive Values (MPV)
Mean of Indices of Positive Values (MIPV)
Longest Stretch of Positive Values (LSPV)
A Single Convolutional Output
Input Time Series → Apply Random Kernels → Generate Feature Map
Linear Classifier
Example Sensor Data Features
ROCKET Classifier [Dempster, 2020]
Evaluation
10 Tested Motions | Dataset | Results
(a) Blow Left (b) Blow Front (c) Blow Right
(d) Double Blow Left (e) Long Blow Front (f) Double Blow Right
(g) Flip Book (h) Wave with Paper
(i) Jumping Jack (j) Squat
Evaluation
10 Tested Motions | Dataset | Results
Blow Left
Blow Front
Blow Right
Flip Book
Wave with Paper
Double Blow Left
Long Blow Front
Double Blow Right
Jumping Jack
Squat
12 Participants (9 male, 3 female; average age 22; right-handed)
10 Motions, each repeated 10 times
160 frames per trial from 3 ToF sensors
Evaluation
10 Tested Motions | Dataset | Results
With all 3 sensors: 91.17% (SD = 4.22)
With the 2 side sensors: 86.43% (SD = 3.46)
With the single middle sensor: 62.61% (SD = 2.12)
Accuracies are reported with 10-fold cross-validation.
Limitations
- Unexplored Airflow Properties
- What is a good interaction distance?
- Other Mist/Smoke Types
- How does dry ice perform?
- Alternative Sensor Alignments
- What if we add more sensors?
- Qualitative Results
- Will users accept this kind of input?
Thank you for your attention
1: Department of Information and Computer Science, Keio University, Japan
2: Guangzhou Institute of Technology, Xidian University, China
Tian Min1, Chengshuo Xia1,2, Takumi Yamamoto1 and Yuta Sugiura1
Seeing the Wind: An Interactive Mist Interface for Airflow Input
Editor's Notes
Hi everyone, I’m glad to be here to present the work "Seeing the Wind", together with Dr. Chengshuo Xia, Takumi Yamamoto, and Dr. Yuta Sugiura from Keio University and the Guangzhou Institute of Technology, Xidian University.
In this paper, we introduce a human activity recognition method based on variations in airflow.
Airflow, or wind, is a prevalent phenomenon in nature and our surroundings.
We live in the Earth's atmosphere, and the actions we take can generate airflow, for example by blowing, walking, or waving.
If we can capture these variations in airflow, it becomes possible to recognize the activities behind them.
So what devices or sensors are currently available to measure airflow?
We have anemometers, mass flow sensors, or pressure sensors, which are commonly employed in integrated systems and specialized domains.
These techniques rely on the principle that airflow exerts pressure on object surfaces, so they are quite effective when installed in gas pipelines or industrial machines.
Although effective, however, these methods often prove less accessible for interaction purposes.
We came up with the idea of using mist's property of blocking light.
Mist scattered in the air can represent airflow variations, rendering them detectable by proximity-based sensors, for example, Time-of-Flight sensors.
Variations in airflow affect the shape of the mist and, in turn, the sensor readings.
Next, I'm going to give a brief review of related work on mist-based interaction and airflow interaction.
On the airflow output side, systems are mainly designed for sense augmentation, such as tactile displays or VR enhancement. On the input side, microphones are usually used to capture the wind noise generated by blowing, opening up the input technique known as "blowing gestures".
On the mist-based interaction side, because mist can serve as a mid-air display, previous research has spent considerable effort on controlling the mist's shape and optimizing display quality. However, there is hardly any research on detecting airflow variations visually, or on using mist as an input method.
So we took a humidifier and implemented the prototype shown on the left.
On the right is a demonstration of the sensor data when someone blows into the mist.
You can see the reading jump from around 100 to 3000 mm: that is when the blow disperses the mist and the ToF sensor detects the height of the ceiling.
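In essence, that jump can be caught with a simple threshold. A minimal sketch of the idea (the 1000 mm cutoff and the sample readings are assumptions for illustration, not values from the paper):

```python
import numpy as np

# Hypothetical ToF readings (mm): mist sits ~100 mm from the sensor;
# a blow disperses it and the sensor briefly sees the ceiling at ~3000 mm.
readings = np.array([100, 105, 98, 2950, 3000, 2980, 110, 102])

THRESHOLD_MM = 1000  # assumed cutoff between "mist present" and "mist dispersed"

def detect_blow(frames, threshold=THRESHOLD_MM):
    """Return the indices of frames in which the mist was dispersed by airflow."""
    return np.flatnonzero(np.asarray(frames) > threshold)

print(detect_blow(readings))  # frames 3-5 exceed the threshold
```

A real system would add smoothing and debouncing, but this captures the basic signal the prototype relies on.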
Here are some examples of motions that can affect the sensor data, such as blowing, clapping, waving, closing a book, walking by, opening a door, squatting, and jumping.
Given the limited prior reference, we designed three experiments to explore the factors and principles of the mist sensing technique.
The first concerns the impact of mist type. We tested six types of mist or smoke, shown on the right, to see whether they differ in how they obstruct or refract the infrared laser from the ToF sensor.
The experimental setup is shown on the left: we placed a ToF sensor 15 cm from a barrier and the mist-generating source 5 cm from the ToF sensor, then recorded the sensor data for a period of time.
Here are the results. The left-hand side shows the sensor data over 140 frames.
On the right-hand side, the number of particles per liter in the environment is recorded using an air monitor.
The colors of the lines and bars correspond to the legend.
We can tell that higher particle concentrations result in more effective blocking, refracting, or absorbing of the infrared light from the sensor.
The second experiment examines whether we can obtain directional information about airflow by placing sensors on different axes.
We placed two ToF sensors in close proximity to the mist nozzle, one on each side. The sensor colors correspond to the curve colors below.
As shown in the figure, when airflow disperses the mist from the right side, we observe a peak on the green curve first and then on the blue curve, indicating that detecting the airflow direction is feasible.
By increasing the number of sensors near the nozzle, it is possible to detect airflow direction in a 360-degree range.
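The peak-ordering logic above can be sketched as follows. The sample readings are hypothetical, and a real implementation would need calibration and noise handling; this only illustrates comparing which sensor's distance reading peaks first:

```python
import numpy as np

def airflow_direction(left, right):
    """Infer airflow direction from which sensor's reading peaks first.

    Airflow from the right disperses the mist over the right sensor first,
    so that sensor's distance reading peaks earlier than the left one's.
    """
    t_left = int(np.argmax(left))
    t_right = int(np.argmax(right))
    if t_right < t_left:
        return "right-to-left"
    if t_left < t_right:
        return "left-to-right"
    return "ambiguous"

# Hypothetical readings (mm): the right sensor peaks at frame 2, the left at frame 5.
right = np.array([100, 900, 3000, 800, 200, 100, 100, 100])
left = np.array([100, 100, 120, 700, 2500, 3000, 600, 150])
print(airflow_direction(left, right))  # "right-to-left"
```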
The third experiment is about the airflow intensity.
Different airflow velocities are expected to cause varying dispersion effects on mist.
To further explore this phenomenon, we placed an anemometer close to the sensor and the nozzle to measure the airflow velocity arriving at the mist’s source.
By adjusting the spinning speed and position of the fan, we recorded 1400 frames of sensor data at each specific wind speed at the nozzle. When the air is still, the mist fully covers the sensor throughout the period.
We organized the sensor data into the heat map shown on the right. The color of each grid cell reflects the magnitude of the sensor data. By analyzing the proportions of different colors within each row, we can evaluate the impact of continuous airflow on mist dispersion as the velocity changes. Notably, we observed a clear nonlinear relationship between airflow velocity and its influence on the mist.
When the airflow velocity is below 1 m/s, increasing it has little effect on the mist. Within the range of 1 m/s to 2 m/s, however, increases in airflow velocity become much more effective.
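The per-row analysis boils down to a dispersal ratio: the fraction of frames in which the reading indicates the mist was blown away. A sketch with hypothetical readings and an assumed 1000 mm threshold (neither taken from the paper):

```python
import numpy as np

THRESHOLD_MM = 1000  # assumed cutoff: readings above it mean the mist was dispersed

def dispersal_ratio(frames, threshold=THRESHOLD_MM):
    """Fraction of frames in which the mist was blown away (reading above threshold)."""
    return float((np.asarray(frames) > threshold).mean())

# Hypothetical heat-map rows at two fan speeds: a slow airflow barely moves the
# mist, while a faster one disperses it for most of the recording period.
slow = np.array([100, 120, 110, 1500, 130, 100, 115, 105])
fast = np.array([2800, 3000, 150, 2900, 3000, 2950, 140, 3000])

print(dispersal_ratio(slow))  # 0.125
print(dispersal_ratio(fast))  # 0.75
```

Plotting this ratio against wind speed is one way to make the nonlinear velocity relationship visible.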
Our proof-of-concept prototype consists of a table-top humidifier and a 3D-printed bracket for sensor support.
The humidifier is cylindrical, with a 20 mm wide nozzle that sprays mist forward at an elevation angle of around 45 degrees.
In order to maintain appropriate sensor values and quickly reset once a disturbance is over, we aimed for a spot with a stable presence of mist: in this case, the area slightly beneath the nozzle.
For the software implementation, our main concern was choosing an appropriate machine learning method.
Considering the complexity of the airflow patterns, manually selecting features would be challenging and time-consuming. We therefore adopted the ROCKET classifier, an exceptionally fast and accurate time series classification method.
The basic idea of the ROCKET classifier is to use random convolutional kernels.
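A highly simplified sketch of that idea: convolve each time series with many random kernels and keep summary features such as the Proportion of Positive Values (PPV). Real ROCKET also randomizes dilation, bias, and padding and feeds the features to a ridge classifier; the kernel counts and lengths here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernels(n_kernels, lengths=(7, 9, 11)):
    """Draw random convolutional kernels: weights ~ N(0, 1), mean-centred."""
    kernels = []
    for _ in range(n_kernels):
        length = rng.choice(lengths)
        w = rng.normal(size=length)
        kernels.append(w - w.mean())
    return kernels

def transform(series, kernels):
    """Convolve the series with every kernel; keep PPV and max as features."""
    feats = []
    for k in kernels:
        conv = np.convolve(series, k, mode="valid")
        ppv = (conv > 0).mean()  # Proportion of Positive Values
        feats.extend([ppv, conv.max()])
    return np.array(feats)

kernels = random_kernels(100)
x = rng.normal(size=160)  # one 160-frame sensor channel
features = transform(x, kernels)
print(features.shape)  # (200,): 2 features per kernel, fed to a linear classifier
```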
Next, we move on to the evaluation.
First, I'm going to introduce the 10 tested motions.
Based on related work on blowing gestures, we extended these motions by variations from different directions with respect to the device.
Additionally, we explored the use of common objects that can generate airflow, resulting in two motions: Flip Book (from the left) and Wave with Paper (in front, with a half-folded A4 sheet).
We also chose two indoor fitness exercises, Jumping Jack and Squat, bringing the total number of motions to 10.
For the data collection, we recruited 12 participants, asking each of them to perform the 10 motions 10 times each.
For each trial, we recorded 160 frames of data from the 3 ToF sensors.
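The resulting dataset can be pictured as a single array. The layout below is an assumption about how one might store it, not the authors' actual format:

```python
import numpy as np

# Dataset dimensions from the study: 12 participants x 10 motions x 10 repetitions,
# each trial being 160 frames from 3 ToF sensors.
n_participants, n_motions, n_reps, n_frames, n_sensors = 12, 10, 10, 160, 3
data = np.zeros((n_participants, n_motions, n_reps, n_frames, n_sensors))

print(data.shape)  # (12, 10, 10, 160, 3)
print(data.size)   # 576000 readings in total
```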
Here are the within-user accuracy results.
In each confusion matrix, the bottom left shows a small icon of the top view of the device; green indicates that the sensor at the corresponding position is activated.
With all three sensors, the classification reaches an accuracy of over 90 percent, and accuracy decreases as the number of sensors decreases.
We also tested different two-sensor combinations, but the results did not change much compared to using the two side sensors.
We also investigated how much training data is needed for a decent result, finding that seventy samples per motion yields an accuracy of around 90%.
However, when we hold one subject out and look at cross-user performance, the result is less satisfying: an accuracy of only 79.3%, with quite high variance.
This could be caused by individual differences in how the motions are performed. Since it is challenging to quantify blowing gestures in practical scenarios, for example the intensity of the wind, we did not give participants strict instructions on exactly how to perform the motions; they were only asked to perform them at ease.
We also made a supplementary study on confusable mid-air gestures.
Most existing blowable interface designs based on microphone input concentrate on detecting blowing.
We believe that mid-air gestures, such as waving and clapping, can also generate airflow similar to the tested motions. Since the proposed system essentially detects variations in mist shape, users may generate airflow with similar characteristics through different motions during interaction.
We therefore added four gestures, namely wave and clap from two directions, into the classifier. This brought the accuracy down to 80.28%.
However, for this prototype based on a humidifier, which usually serves as a stationary desktop device, a few motions are sufficient to control the humidifier's functionality or surrounding IoT devices. Recognizing more complex airflow patterns would require richer data features and a larger number of samples.
Finally, I'm going to point out some limitations of this work and topics that remain unexplored.
First, the airflow properties of specific motions were not tested. For example, what is the approximate wind speed caused by a waving hand?
Secondly, the prototype is based on a humidifier, so the mist is composed of tiny water droplets. The recognition results are based on this specific type of mist; we did not cover the accuracy for other types of mist or smoke in this paper.
Thirdly, the sensor alignments and configurations were not explored thoroughly. We tested up to 3 sensors, and there might be further performance improvements if we add more.
Lastly, the paper only contains a quantitative validation of the prototype's capability; the level of user acceptance was not measured through interviews.
That's all for my presentation. Thank you very much for your attention!