This document summarizes a study on the typing performance of blind users on mobile devices. Over 8 weeks, 5 novice blind users participated in weekly 20-minute typing tasks on an Android device. Results showed that users slowly improved their words per minute over time, driven by factors such as increasing land-on accuracy and movement efficiency. Substitution errors were the most common, though users corrected most errors; those corrections, however, took significant time. The study provides insights into the challenges blind users face with typing and suggests areas for future work, such as more efficient correction methods and leveraging touch movement models.
4. previous :: alternative techniques
Bonner et al., 2010
Azenkot et al., 2012
Oliveira et al., 2011
Southern et al., 2012
Yfantidis et al., 2006
Guerreiro et al., 2015
35. summary :: major results
Improve performance at slow rate
Due to several factors
Substitutions are predominant
Correct most errors
Corrections are time consuming
Hi everyone, my name is Hugo Nicolau, and the work I’ll be presenting today, entitled “Typing Performance of Blind Users”, was done during my stay at the Rochester Institute of Technology, in collaboration with Kyle Montague, Tiago Guerreiro, André Rodrigues, and Vicki Hanson.
As most of you know, since 2008 Apple, and later Google, have included accessibility features in their smartphones that allow blind users to explore an interface by dragging their fingers on the screen while hearing what they are touching; then, by performing a double tap or a split tap, they can select the intended target.
And this simple exploration technique allows blind people to use a multitude of applications and perform multiple tasks with their device, including text-entry, which is the focus of this work.
Since 2006, many text-entry techniques have been proposed, from gestural interfaces to Braille-based techniques, all trying to improve the non-visual typing process on touchscreen devices.
However, looking back at this considerable body of work, we realized that there isn’t much knowledge about the actual typing process. Most research is limited to performance comparisons between input techniques.
And comparisons are great for establishing that differences occur in terms of speed and errors, but they fail to provide insights into why those differences occurred. Essentially, there is a lack of knowledge about users’ actual typing behaviors. And without this knowledge, it is unclear how to improve current input techniques.
With this in mind, we aim to bridge this gap and contribute new insights about blind users’ typing process on touchscreen mobile devices.
We were particularly interested in observing users’ learning experience, how performance evolved, and why participants achieved that performance level. Our ultimate goal is to identify new opportunities to reduce the learning overhead and support better non-visual input on mobile devices.
To do that, we conducted a longitudinal study of 8 weeks
With 5 novice blind users. Participants’ ages ranged from 23 to 55 years, and none had prior experience with touchscreen screen readers.
We basically gave them a new Samsung S3 mini and some basic training on how to use the device and a virtual keyboard, and asked them to use it as their main mobile phone.
We conducted controlled weekly typing sessions of 20 minutes for 8 weeks
We also collected real-world usage measures. I hope you had the chance to talk with Kyle during yesterday’s poster session about our data collection framework; if not, he’s still around.
In this paper, we used that data to control for device usage of individual participants and relate it to lab performance.
If you are interested in in-the-wild performance measures, come talk with me afterwards.
Participants entered a total of 32,764 characters over eight weeks. They spent a total of 51 hours actively entering text. However, there is high variance in usage results, both between participants and between weeks. For instance, while P2 and P3 were particularly active in the fourth week, P4 was more active in the last two weeks. P5 was the least active, with an average usage of 12.5 minutes per week, while P2 spent on average more than 2 hours typing per week.
Starting with speed: how did typing speed evolve over time?
Participants improved on average 2.4 wpm over 8 weeks, from 1.6 wpm to 4 wpm. Although all participants improved over time, the learning rate was slow, with an improvement of 0.3 wpm per week.
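These speeds follow the words-per-minute measure commonly used in text-entry studies, where a "word" is defined as five characters and the first character carries no timing information. A minimal sketch (the 90-second example is illustrative, not from the study):

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Text-entry WPM: one 'word' = 5 characters; the first
    character of the transcribed string carries no timing info."""
    if seconds <= 0:
        raise ValueError("duration must be positive")
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

# e.g. a 25-character sentence entered in 90 seconds:
print(round(words_per_minute("a" * 25, 90.0), 1))  # → 3.2
```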
We also noticed that P2 and P4 had atypical changes in performance, in weeks four and seven respectively. Without having collected real-world device usage data, it would have been very hard to understand these changes, since they were clearly influenced by usage.
For instance, for P2, we noticed that performance dropped after she installed a third-party app, WhatsApp. When debriefed, P2 mentioned perceiving the speech feedback as slower while typing. In fact, this is a known issue with this particular application. Although we were not able to confirm that speech feedback changed, we can show that both the number and duration of pauses during typing increased.
Regarding P4, the abrupt increase is most likely related to the increase in usage in week seven. He mentioned that he was finally using his phone to the fullest, particularly for sending and receiving text messages, Skype messages, and so forth.
So, overall, why did participants improve typing speed over time? Previous work has not looked into this question, and in fact it is impossible to answer if we only look at performance measures. There is a need to understand how users actually type and explore the keyboard. Did they improve because they moved their fingers faster while exploring the keyboard? How fast? Did they start to land their fingers closer to targets? Did they move their fingers less? How exactly does that change over time? These are the kinds of questions we aimed to answer.
First of all, participants significantly improved their land-on accuracy over the 8-week period: in week one, only 27% of landing positions were inside the target; by week eight, this value increased to 48%. Also, by week eight, participants landed either inside the intended key or an adjacent key 91% of the time. This shows that users gain a better spatial understanding of the keyboard and the device.
Regarding movement and exploration paths, how did they improve? First, they started to make more efficient explorations, meaning that they got closer and closer to the optimal path from their landing position to the intended target. As a result, the number of visited keys also decreased over time, from an average of 5 to 2 visited keys by week eight.
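One simple way to quantify this kind of exploration efficiency is the ratio of the straight-line distance from the landing position to the target over the distance the finger actually travelled. This is a sketch (the coordinates are hypothetical, and the exact metric used in the paper may differ):

```python
import math

def path_efficiency(trace, target_center):
    """Ratio of the optimal (straight-line) distance from the landing
    point to the target over the distance the finger actually
    travelled; 1.0 means a perfectly direct exploration.
    `trace` is the ordered list of (x, y) touch samples."""
    travelled = sum(math.dist(a, b) for a, b in zip(trace, trace[1:]))
    optimal = math.dist(trace[0], target_center)
    return optimal / travelled if travelled > 0 else 1.0

# a detour: land at (0, 0), wander upward, end on the target at (30, 0)
trace = [(0, 0), (20, 20), (30, 0)]
print(round(path_efficiency(trace, (30, 0)), 2))  # → 0.59
```

A direct movement from the landing point to the target gives exactly 1.0, so values approaching 1 over the weeks would reflect the more efficient explorations described above.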
We also looked at target re-entries, which consist of entering the same target a second time. These are particularly important because users receive audio feedback every time they enter a target. Overall, target re-entries decreased from 6.6 (SD=3.2) to 0.8 (SD=0.3), which suggests that over time keys became easier to find, with fewer errors.
(PAUSE) In summary, speed improvements are due to a combination of factors, from landing accuracy to more efficient movements.
Regarding error rates, how did they change over time?
We looked at total error rate, which represents the percentage of incorrect characters that were entered, including those that were later corrected. Overall, participants started with an average total error rate of 26% (SD=11.7%) and finished with 7.4% (SD=1.7%). The most significant drop occurred in week two.
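Counting corrected characters as errors follows the standard input-stream taxonomy for text entry, which splits characters into correct (C), incorrect-but-fixed (IF), and incorrect-not-fixed (INF). A minimal sketch of the total error rate under that taxonomy (the example counts are illustrative):

```python
def total_error_rate(C: int, INF: int, IF: int) -> float:
    """Total error rate from the standard text-entry taxonomy:
    C   = correct characters in the transcribed string
    INF = incorrect characters left in the transcribed string
    IF  = incorrect characters entered but later fixed"""
    return 100.0 * (INF + IF) / (C + INF + IF)

# e.g. 23 correct characters, 1 uncorrected error, 2 corrected errors:
print(round(total_error_rate(23, 1, 2), 1))  # → 11.5
```

Note that the "errors in the final sentence" figure mentioned next corresponds to counting only INF, which is why it is much lower than the total error rate.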
Looking at the errors left in the final sentence, we found that, when given the chance, users tend to correct most errors, resulting in high-quality transcribed sentences. This is in line with previous findings for sighted users. These error rates decreased over the 8 weeks from 9% to 1.6%.
Correction performance was fairly constant throughout the 8 weeks, and we found that, on average, 3 out of 10 deleted characters were deleted unnecessarily. This happens because participants sometimes only notice errors upon receiving word-level feedback, and then need to delete several characters, some of them correct, in order to fix the error.
Examining the time spent on those corrections, participants spent on average 13% of their time correcting text in the last week. This corresponds to nearly 14 seconds of correction on a 5-word sentence.
Going into further detail and analyzing types of errors: substitution errors, which are simply incorrect characters, were consistently higher than insertions and omissions. Although there was a significant decrease over time, from 24% (SD=12%) to 6% (SD=1%), they remained significantly higher than the other types of errors. We wanted to understand why this happens.
Overall, participants had similar error rates across all intended keys. No row, column, or side patterns emerged from the weekly data. Unlike sighted users, who show substitution patterns towards a predominant direction [5, 17], blind users’ patterns are less clear. This is most likely related to the differences between visual and auditory feedback when acquiring keys. Some keys have little or no data because those letters are less common in Portuguese.
Again, traditional performance measures do not provide any insights into why an outcome occurred or how to improve it
In the particular case of substitutions: for sighted users, these usually occur when the finger is inside the intended target but, when lifted, slips to an adjacent key.
However, we didn’t find this to be true for our user group. Overall, only 38% of substitution errors were slips (constant over time). Given that users should receive speech feedback before selecting the intended key, this can seem a counter-intuitive result. So we started analyzing whether participants’ finger paths crossed the intended target at some point during the movement.
And that happened for 64% (SD=9.8%) of substitutions: although participants’ fingers were inside the target at some point, they failed to select it. We were puzzled by this result, so we did a manual examination of the recorded videos. We noticed that most of these cases were related to a significant delay between speech feedback and input, which resulted in a mismatch between the key being heard and the one being touched at that moment.
This still leaves 36% of substitution errors unaccounted for.
In these cases, participants did not even hear the key they were aiming for, which makes them harder to explain. We believe they can be due to several reasons, including accidental touches, phonetically similar letters, users giving up during exploration, or overconfidence in aiming at the key without hearing it.
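The crossing analysis above boils down to testing whether any sample of the recorded exploration path fell inside the intended key's bounds. A minimal sketch (the key rectangle and touch coordinates are hypothetical):

```python
def crossed_target(trace, key_rect):
    """True if any touch sample of the exploration path fell inside
    the intended key. key_rect = (left, top, right, bottom) in the
    same screen coordinates as the (x, y) samples in `trace`."""
    left, top, right, bottom = key_rect
    return any(left <= x <= right and top <= y <= bottom for x, y in trace)

# hypothetical key occupying x 300..360, y 500..580
key = (300, 500, 360, 580)
trace = [(250, 520), (310, 540), (400, 560)]  # passes through the key
print(crossed_target(trace, key))  # → True
```

A substitution where this returns True but the lift-off landed on another key is exactly the "heard it but failed to select it" case discussed above.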
So, the major results from this study. I didn’t have time to go through all of them; please refer to the paper for a more complete analysis. But what I showed here today
is that users improve both entry speed and accuracy, although at a slow rate. This has been previously reported.
However, with this work we uncover why those improvements happen. They are mostly due to a combination of factors, such as landing closer to intended keys, performing more efficient keyboard explorations, fewer target re-entries, and lower movement times.
Also, the most common type of error is substitution.
Regarding correction strategies, users correct most typing errors, which consumes on average 13% of input time.
Regarding implications or lessons learned
Because corrections are still time-consuming and inefficient, we need better correction techniques. For instance, none of our participants used cursor-positioning operations throughout the study. Also, participants did not use auto-correct or auto-complete suggestions. We are not sure why: it may have been too hard, not useful, or they may simply have been unaware of these features. Future work should investigate auditory interfaces for these features. As far as we know, this is an unexplored topic.
Also, we observed that 64% of substitution errors could be due to a mismatch between speech output and touch information. As we saw in the case of one participant, this mismatch can also affect the way people type. So, future non-visual keyboards should prioritize synchronization between input and output modalities.
The majority of omission errors (68%) go undetected and therefore uncorrected. Language-based solutions such as spellcheckers seem to be the only plausible solution. Again, we still don’t know whether these features would be useful or desirable.
For those of you interested in building touch models to compensate for errors: although blind users do not show a predominant touch offset direction, most substitution errors landed on adjacent keys. Also, finger movement data can provide evidence of which key users are trying to select; however, this is more relevant for novice users. Expert users already land on the intended target most of the time (or at least very close). So, touch models that adapt to skill level and learning rate are probably a good idea.
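As a sketch of the kind of adaptive touch model suggested here: each key gets a learned per-user offset, and the model picks the key whose offset-corrected center best explains a touch under a simple Gaussian assumption. Everything below (the two-key layout, the offsets, the isotropic Gaussian, `sigma`) is illustrative, not from the paper:

```python
def most_likely_key(touch, key_centers, offsets, sigma):
    """Pick the key whose (offset-corrected) center best explains the
    touch under an isotropic 2D Gaussian. `offsets` holds a user's
    learned per-key touch bias; `sigma` could shrink as skill grows,
    so the model trusts raw touch positions more for expert users."""
    def log_likelihood(key):
        cx, cy = key_centers[key]
        ox, oy = offsets.get(key, (0.0, 0.0))
        dx, dy = touch[0] - (cx + ox), touch[1] - (cy + oy)
        return -(dx * dx + dy * dy) / (2 * sigma * sigma)
    return max(key_centers, key=log_likelihood)

# hypothetical two-key layout; this user tends to land 10 px right of 'a'
centers = {"a": (50, 100), "s": (110, 100)}
offsets = {"a": (10.0, 0.0)}
# a touch at (83, 100) is nearer the raw 's' center, but the learned
# offset attributes it to 'a':
print(most_likely_key((83, 100), centers, offsets, sigma=20.0))  # → a
```

With an empty `offsets` dictionary (a novice with no history), the same touch would resolve to 's', which is how such a model could adapt its behavior to skill level.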