Skinput is a skin-based interface that lets users treat their own arms and hands as touchscreens by sensing the distinct low-frequency acoustic signals generated when tapping various parts of the skin.
This technology is a collaboration between Desney Tan and Dan Morris at Microsoft's research lab in Redmond, Washington, and Chris Harrison at Carnegie Mellon University.
This presentation is on "SKINPUT", a technology that allows you to use your body as a touch-screen interface. Wondering how that is possible? Check out the presentation.
Please try to maintain the originality - DO NOT COPY
By: Shridhar Sharma
Skinput is a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface.
It was developed by Chris Harrison, Desney Tan, and Dan Morris of Microsoft Research's Computational User Experiences Group.
Its first public appearance was at Microsoft's TechFest 2010.
Microsoft has not commented on the future of the project other than to say it is under active development; it has been reported that it may not appear in commercial devices for at least two years.
Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. When augmented with a pico-projector, the device can provide a direct-manipulation graphical user interface on the body.
1. Presented by:
Ayush Pratap Singh
3rd Year, Comp Sc & Engineering
ABES Institute of Technology
Human Body as Input
2. INTRODUCTION
Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens and buttons) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without losing the primary benefit of small size, we must consider alternative approaches that enhance interactions with small mobile systems. One option is to opportunistically appropriate surface area from the environment for interactive purposes.
3. • There is one surface that has been previously overlooked as an input canvas, and one that happens to always travel with us: our skin.
• Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso).
• Furthermore, proprioception – our sense of how our body is configured in three-dimensional space – allows us to accurately interact with our bodies in an eyes-free manner.
• For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic while providing such a large interaction area.
4. What is Skinput?
Skinput is a new skin-based interface that allows users to use their own arms and hands as touchscreens by sensing the distinct low-frequency sounds that are generated when tapping various parts of the skin.
This technology is a collaboration between Desney Tan and Dan Morris at Microsoft's research lab in Redmond, Washington, and Chris Harrison at Carnegie Mellon University.
5. Always-Available Input
The primary goal of Skinput is to provide an always-available mobile input system – that is, an input system that does not require a user to carry or pick up a device. A number of alternative approaches have been proposed that operate in this space. Techniques based on computer vision are popular; these, however, are computationally expensive and error prone in mobile settings. Speech input is a logical choice for always-available input, but it is limited in its precision in unpredictable acoustic environments, and it suffers from privacy and scalability issues in shared environments. Other approaches have taken the form of wearable computing. This typically involves a physical input device built in a form considered to be part of one's clothing. For example, glove-based input systems allow users to retain most of their natural hand movements, but they are cumbersome, uncomfortable, and disruptive to tactile sensation.
Bio-Sensing
Skinput leverages the natural acoustic conduction properties of the human body to provide an input system, similar to bone-conduction microphones, which are typically worn near the ear, where they can sense vibrations propagating from the mouth and larynx during speech.
6. Skinput is a novel input technique that allows the skin to be used as a finger input surface. In this prototype system, the arm has been chosen because it provides considerable surface area for interaction, including a contiguous and flat area for projection. Furthermore, the forearm and hands contain a complex assemblage of bones that increases the acoustic distinctiveness of different locations. To capture this acoustic information, a wearable armband has been developed that is non-invasive and easily removable.
7. Bio-Acoustics
When a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves; this energy is not captured by the Skinput system. Of the acoustic energy transmitted through the arm, the most readily visible are transverse waves, created by the displacement of the skin from a finger impact.
The amplitude of these ripples is correlated with both the tapping force and the volume and compliance of soft tissues under the impact area. In general, tapping on soft regions of the arm creates higher-amplitude transverse waves than tapping on bony areas (e.g., wrist, palm, fingers), which have negligible compliance.
8. In addition to the energy that propagates on the surface of the arm, some energy is transmitted inward, toward the skeleton. These longitudinal (compressive) waves travel through the soft tissues of the arm, exciting the bone, which is much less deformable than the soft tissue but can respond to mechanical excitation by rotating and translating as a rigid body. This excitation vibrates the soft tissues surrounding the entire length of the bone, resulting in new longitudinal waves that propagate outward to the skin.
9. Sensing
To capture this rich variety of acoustic information, we use an array of highly tuned vibration sensors. Specifically, we employ small, cantilevered piezo films (MiniSense100, Measurement Specialties, Inc.). By adding small weights to the end of the cantilever, we are able to alter the resonant frequency, allowing each sensing element to be responsive to a unique, narrow, low-frequency band of the acoustic spectrum. The cantilevered sensors are naturally insensitive to forces parallel to the skin (e.g., shearing motions caused by stretching). Thus, the skin stretch induced by many routine movements (e.g., reaching for a doorknob) tends to be attenuated. However, the sensors are highly responsive to motion perpendicular to the skin plane – perfect for capturing transverse surface waves and longitudinal waves emanating from interior structures. Finally, our sensor design is relatively inexpensive and can be manufactured in a very small form factor, rendering it suitable for inclusion in future mobile devices (e.g., an arm-mounted audio player).
The figure shows the response curve for a sensor tuned to a resonant frequency of 78 Hz. The curve shows a ~14 dB drop-off ±20 Hz away from the resonant frequency.
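The narrow-band behavior described above can be illustrated with a simple second-order resonator model of a weighted cantilever. The model and the quality factor Q below are assumptions chosen to roughly reproduce the quoted ~14 dB drop-off at ±20 Hz, not parameters from the actual sensor.

```python
import math

def resonator_gain_db(f, f0=78.0, q=10.7):
    """Magnitude response (in dB, relative to the peak) of a simple
    second-order resonator -- an idealized model of a weighted
    cantilevered piezo film tuned to resonant frequency f0 (Hz).
    The Q value is an illustrative assumption, not a measured figure."""
    x = q * (f / f0 - f0 / f)          # normalized detuning
    return -10.0 * math.log10(1.0 + x * x)

# Drop-off 20 Hz above the 78 Hz resonance, for this assumed Q:
drop = -resonator_gain_db(98.0)
print(f"{drop:.1f} dB")  # close to the ~14 dB quoted on the slide
```

Stacking several such sensors, each tuned to a different f0, yields the bank of narrow-band channels the armband relies on.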
10. Armband Prototype
The armband features two arrays of five sensing elements each. The decision to have two sensor packages was motivated by our focus on the arm for input. In particular, when the armband is placed on the upper arm (above the elbow), we hope to collect acoustic information from the fleshy bicep area in addition to the firmer area on the underside of the arm, which has better acoustic coupling to the humerus, the main bone that runs from shoulder to elbow. When the sensor was placed below the elbow, on the forearm, one package was located near the radius, the bone that runs from the lateral side of the elbow to the thumb side of the wrist, and the other near the ulna, which runs parallel to it on the medial side of the arm, closest to the body. Each location thus provided slightly different acoustic coverage and information, helpful in disambiguating input location.
11. Processing
• We employ a Mackie Onyx 1200F audio interface to digitally capture data from the ten sensors (http://mackie.com). This was connected via FireWire to a conventional desktop computer, where a thin client written in C interfaced with the device using the Audio Stream Input/Output (ASIO) protocol.
• Each channel was sampled at 5.5 kHz, a sampling rate that would be considered too low for speech or environmental audio, but which was able to represent the relevant spectrum of frequencies transmitted through the arm.
• Data was then sent from our thin client over a local socket to our primary application, written in Java. This program performed three key functions. First, it provided a live visualization of the data from our ten sensors, which was useful in identifying acoustic features (Figure 6). Second, it segmented inputs from the data stream into independent instances (taps). Third, it classified these input instances.
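The segmentation step above can be sketched as a short-time energy threshold over the ten-channel stream. The window length, threshold, and refractory period below are illustrative assumptions, not values from the actual system.

```python
import numpy as np

def segment_taps(signal, fs=5500, win_ms=20, threshold=0.1, refractory_ms=100):
    """Detect tap onsets in a (channels x samples) sensor stream by
    thresholding short-time energy summed across all channels.
    All tuning constants here are illustrative, not from the paper."""
    win = int(fs * win_ms / 1000)
    # Smoothed per-sample energy, pooled over the ten channels.
    energy = np.convolve(np.sum(signal**2, axis=0), np.ones(win) / win, mode="same")
    gap = int(fs * refractory_ms / 1000)   # ignore re-triggers right after a tap
    onsets, last = [], -gap
    for i, e in enumerate(energy):
        if e > threshold and i - last >= gap:
            onsets.append(i)
            last = i
    return onsets

# Synthetic check: sensor noise plus one decaying burst starting at sample 3000.
rng = np.random.default_rng(0)
x = rng.normal(0, 0.01, (10, 11000))
t = np.arange(500)
x[:, 3000:3500] += np.sin(2 * np.pi * 80 * t / 5500) * np.exp(-t / 150)
onsets = segment_taps(x)
print(onsets)  # a single onset near sample 3000
```

The real system would run this incrementally on the live stream; the batch form here just makes the idea concrete.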
12. • A brute-force machine learning approach is employed, computing 186 features in total, many of which are derived combinatorially. For gross information, we include the average amplitude, standard deviation, and total (absolute) energy of the waveforms in each channel (30 features). From these, we calculate all average amplitude ratios between channel pairs (45 features). We also include an average of these ratios (1 feature).
• We calculate a 256-point FFT for all ten channels, although only the lower ten values are used (representing the acoustic power from 0 Hz to 193 Hz), yielding 100 features. These are normalized by the highest-amplitude FFT value found on any channel. We also include the center of mass of the power spectrum within the same 0 Hz to 193 Hz range for each channel, a rough estimation of the fundamental frequency of the signal displacing each sensor (10 features).
• Subsequent feature selection established the all-pairs amplitude ratios and certain bands of the FFT to be the most predictive features. These 186 features are passed to a Support Vector Machine (SVM) classifier. Before the SVM can classify input instances, it must first be trained to the user and the sensor position. This stage requires the collection of several examples for each input location of interest.
• When using Skinput to recognize live input, the same 186 acoustic features are computed on the fly for each segmented input. These are fed into the trained SVM for classification.
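The 186-feature breakdown above (30 + 45 + 1 + 100 + 10) can be sketched directly. Exact windowing and amplitude definitions are assumptions; only the feature counts and groupings follow the text, and the center of mass is reported in FFT-bin units.

```python
import numpy as np
from itertools import combinations

def extract_features(tap):
    """Compute a 186-feature vector for one segmented tap, given a
    (10 channels x N samples) array, following the grouping described
    on the slide. Details beyond the counts are illustrative."""
    feats = []
    avg_amp = np.mean(np.abs(tap), axis=1)
    feats += list(avg_amp)                       # 10: average amplitude
    feats += list(np.std(tap, axis=1))           # 10: standard deviation
    feats += list(np.sum(np.abs(tap), axis=1))   # 10: total absolute energy
    ratios = [avg_amp[i] / avg_amp[j] for i, j in combinations(range(10), 2)]
    feats += ratios                              # 45: all-pairs amplitude ratios
    feats.append(np.mean(ratios))                # 1: average of those ratios
    # 256-point FFT; at 5.5 kHz the lowest ten bins span roughly 0-193 Hz.
    spec = np.abs(np.fft.rfft(tap, n=256, axis=1))[:, :10]
    feats += list((spec / spec.max()).ravel())   # 100: normalized FFT magnitudes
    bins = np.arange(10)
    power = spec**2
    feats += list(power @ bins / power.sum(axis=1))  # 10: spectral center of mass (bin units)
    return np.array(feats)

tap = np.random.default_rng(1).normal(size=(10, 512))
print(extract_features(tap).shape)  # (186,)
```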
13.
14. EXPERIMENT
Participants
To evaluate the performance of our system, 13 participants (6 male and 7 female) were recruited. These participants represented a diverse cross-section of potential ages and body types. Ages ranged from 20 to 56 (mean 38.3), and computed body mass indexes (BMIs) ranged from 20.5 (normal) to 31.9 (obese).
Experimental Conditions
Three input groupings were selected. We assumed that these groupings are of particular interest with respect to interface design and, at the same time, push the limits of our sensing capability. From these three groupings, we derived five different experimental conditions.
15. 1. Fingers (Five Locations)
One set of gestures tested had participants tapping on the tips of each of their five fingers. The fingers offer interesting affordances that make them compelling to appropriate for input. Foremost, they provide clearly discrete interaction points, which are even already well-named (e.g., ring finger). In addition to the five fingertips, there are 14 knuckles (five major, nine minor), which, taken together, could offer 19 readily identifiable input locations on the fingers alone. Second, we have exceptional finger-to-finger dexterity, as demonstrated when we count by tapping on our fingers. Finally, the fingers are linearly ordered, which is potentially useful for interfaces like number entry, magnitude control (e.g., volume), and menu selection.
16. At the same time, fingers are among the most uniform appendages on the body, with all but the thumb sharing a similar skeletal and muscular structure. This drastically reduces acoustic variation and makes differentiating among them difficult. Additionally, acoustic information must cross as many as five (finger and wrist) joints to reach the forearm, which further dampens signals. For this experimental condition, we decided to place the sensor arrays on the forearm, just below the elbow.
17. Whole Arm (Five Locations)
Another gesture set investigated the use of five input locations on the forearm and hand: arm, wrist, palm, thumb, and middle finger. We selected these locations for two important reasons. First, they are distinct and named parts of the body (e.g., "wrist"). This allowed participants to accurately tap these locations without training or markings. Additionally, these locations proved to be acoustically distinct during piloting, with the large spatial spread of input points offering further variation.
18. We used these locations in three different conditions. One condition placed the sensor above the elbow, while another placed it below. This was incorporated into the experiment to measure the accuracy loss across this significant articulation point (the elbow).
19. Forearm (Ten Locations)
Not only is this a very high density of input locations (unlike the whole-arm condition), but it also relies on an input surface (the forearm) with a high degree of physical uniformity (unlike, e.g., the hand). We expected that these factors would make acoustic sensing difficult. Nevertheless, this location is compelling due to its large and flat surface area, as well as its immediate accessibility, both visually and for finger input.
To maximize the surface area for input, we placed the sensor above the elbow, leaving the entire forearm free. Rather than naming the input locations, as was done in the previously described conditions, we employed small, colored stickers to mark input targets. This was done both to reduce confusion (since locations on the forearm do not have common names) and to increase input consistency.
20. Procedure
To train the system, participants were instructed to comfortably tap each location ten times, with a finger of their choosing. This constituted one training round. In total, three rounds of training data were collected per input location set (30 examples per location, 150 data points total).
We used the training data to build an SVM classifier. During the subsequent testing phase, we presented participants with simple text stimuli (e.g., "tap your wrist"), which instructed them where to tap. The order of stimuli was randomized, with each location appearing ten times in total. The system performed real-time segmentation and classification, and provided immediate feedback to the participant.
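The per-user training-and-testing loop above can be sketched with scikit-learn. The synthetic Gaussian "taps" and the SVM settings are illustrative stand-ins for the real 186-dimensional acoustic features, not the study's actual data or configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
locations = ["arm", "wrist", "palm", "thumb", "middle finger"]

# Training: 3 rounds x 10 taps per location (30 examples each, 150 total).
# Each synthetic "tap" is a noisy sample around a per-location feature mean.
centers = {loc: rng.normal(0, 1, 186) for loc in locations}
X_train = np.array([centers[loc] + rng.normal(0, 0.3, 186)
                    for loc in locations for _ in range(30)])
y_train = [loc for loc in locations for _ in range(30)]

clf = SVC(kernel="rbf").fit(X_train, y_train)

# Testing: randomized stimuli, each location appearing ten times in total.
stimuli = rng.permutation(locations * 10)
taps = np.array([centers[loc] + rng.normal(0, 0.3, 186) for loc in stimuli])
accuracy = np.mean(clf.predict(taps) == stimuli)
print(f"classification accuracy: {accuracy:.2f}")
```

Because the synthetic classes are well separated, this toy run classifies nearly perfectly; real accuracy depends on the acoustic distinctiveness of the chosen locations, as the experimental conditions above explore.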
22. Higher accuracies can be achieved by collapsing the ten input locations into groups. Groups A–E and G were created using a design-centric strategy; group F was created following analysis of per-location accuracy data.
26. Success so far…
Cicret is supposedly the one brand that has successfully made it to market and into the hands (or onto the wrists) of the general public, with a wearable technology designed along the lines of Skinput.
27.
28. Conclusion
In this seminar, we have presented our approach to appropriating the human body as an input surface. We have described a novel, wearable bio-acoustic sensing array built into an armband in order to detect and localize finger taps on the forearm and hand. Results from experiments have shown that the system performs very well for a series of gestures, even when the body is in motion. Additionally, initial results have been presented demonstrating other potential uses of our approach, which we hope to explore further in future work. These include single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects. We conclude with descriptions of several prototype applications that demonstrate the rich design space we believe Skinput enables.
29. Acknowledgements
It gives me great pleasure to express my gratitude to Prof. Rizwan Khan for his guidance, support, and encouragement. Without his motivation, the successful completion of this seminar would not have been possible.
30. References
• Chris Harrison, Desney Tan, and Dan Morris. "Skinput: Appropriating the Body as an Input Surface." Microsoft Research and Human-Computer Interaction Institute, Carnegie Mellon University. (www.chrisharrison.net)