Making Tech Tactile
Jody Medich, Co-Founder & Principal, Kicker Studio
Talk by Jody Medich, Co-Founder of Kicker Studio, given at Device Design Day 2011 hosted by Kicker Studio.

Published in: Design, Technology, Business
  • Hello! I hope you are all enjoying today as much as I am. Thank you so much for participating! At Kicker Studio, we're very interested in the question of physical interface. We're interested in discovering how we can take technology off the screen and make it a sort of additive that can be embedded into any product to make it more powerful, stronger, smarter. I want to talk to you today about making technology tactile. But first, I'd like to back up a bit and explain why this is so important. Our relationship to technology is rapidly evolving at a time when our mental model of the mind/body connection is changing. I'll spend some time looking at that, explain how that impacts our expectations of technology, and then tell you about a few concept projects we're working on to capitalize on this opportunity.
  • René Descartes, in the early 1600s, more than 100 years before the industrial revolution, envisioned the brain as a pump that moved "animating fluid" through the body. The mind, Descartes argued, had no body or form. Instead it was an abstract entity that interacted with the body through the pineal gland (mind like a heart). The modern computer, developed after WWII, embraced this mental model: the brain was the computer, and the mind the software that runs it.
  • This meant that we really only needed a window into the operating system and a simple manner of introducing commands. It was our minds that mattered, and these minds operated on language. We only needed to control the "mind" of the computer, not the "body", because the body was not important. Eventually, we combined something that was so decidedly mechanical (the typewriter) with television (a contained electronic fantasy land where the laws of physics do not apply) and made them both super powered. It was an easy relationship, and the mental model made sense to the populace. Almost everyone could understand the concept behind it: that world, in there, where anything is possible, is controlled through these specific language-based means.
  • In doing so, we made it other. We quarantined it. We put it in a box and used specialized tools to control it. That shit is dangerous. And the fewer interactions it has with the real world, the better.
  • Our relationship to technology is rapidly changing. We are developing a more physical relationship to technology. Now we can touch it, move it like it's air. We're letting it out of the box and creating new ways of engaging with it. Technology is now in our personal space, and we can communicate with it through touch, voice, and gesture just as we do with everything in our physical world.
  • "Embodied Cognition." The evolution of technology control points comes at a time when our understanding of how the brain functions is rapidly changing. Beginning in the 1980s, scholars began to rethink the way we "think". A growing body of new research suggests that we think not just with our brains, but with our bodies. This is called embodied cognition. This is a series of sketches from a recent study at the ad firm TBWA... Play Simon Says.
  • Aha! I got some of you. We're all familiar with this phenomenon; it's similar to monkey see, monkey do. In 1995, a team of scientists in Italy made a major discovery. They found something called "mirror neurons". These neurons respond in a similar way whether we see someone perform an action, hear it described, or do the action ourselves. Because they play a role in both acting and thinking, mirror neurons suggested that the mind and body might not be so separate after all.
  • We can see this at play in the room today. Many of you are taking notes. This not only helps you when you go back later to look at details, but it also helps you to remember. The simple act of writing while taking in information means you are much more likely to remember the information because you have engaged multiple senses. There are several recent studies, the latest published in November of 2008, that reinforce this idea of embodied cognition. One showed that children can solve math problems better if they are told to use their hands while thinking. Another suggested that stage actors remember their lines better when they are moving. And in one study published in 2007, subjects asked to move their eyes in a specific pattern while puzzling through a brainteaser were twice as likely to solve it. These studies suggest that involving the body in thought actually helps cognition. Our mental experience is more than just the brain; it is physical as well.
  • Pranav Mistry's SixthSense. There is a series of emerging technologies that capitalize on this evolving mind/body concept. Technologies like gesture, touch, haptics, and a multitude of physical sensors mean that we confront and control technology like we do everything else in our world. We've lured technology out of the incubator and into the world. We can touch it. Move it. We can finally be physical with technology. This means there are new opportunities for innate control of technology, but first we need to establish understandable mental models. Many of these emerging technologies are still in their infancy and see the most use in gaming or secondary communication. We have not yet closed the logic loop of interacting with them in this new bodily way. We have finally figured out how to capture all this stuff, but we forgot to train the technology to respond in a way that we would expect. The feedback is lacking. We are relying primarily on video or audio feedback, yet innately we expect multi-sensory feedback. It's in our world, but we're still trying to make it lame like we're afraid it will take over.
  • This is a video of people playing the Kinect driving game. Notice how they respond as if there should be a sense of mass. These are the types of moves we make when interacting with our everyday world, the instinctive moves we create in our minds while we try to innately control technology. We imagine there is a mass because that is how we relate to our world.
  • These bodily interactions change our expectations of how the technology should respond. There is an etiquette of behaviors that we anticipate when interacting on that level. When these rules are not followed, interactions feel foreign and unfamiliar, requiring too much thought. I had to deal with a very painful phone tree recently, which was clearly designed to keep me occupied while they answered other calls. Worse was the transition to the human operator. What really made me feel unwelcome was the fact that even after struggling through the complicated menus that could not understand my vocal responses, the human operator had none of the information I had provided, and I had to give it all again. This was a very unsatisfying exchange, not just because of all the bureaucratic loops, but also because the mental model of the interaction did not match my expectations. Instead it felt foreign and uncomfortable. My simple etiquette expectations were not met. So, how do people expect these different technologies to communicate? Some are obvious. Voice recognition, for example, is clearly a technology that screams for language and voice comprehension, as well as polite manners.
  • And, perhaps not surprisingly, this applies to gesture as well. While we were working on the gestural language for the Canesta entertainment center a few years back, we saw firsthand how much of a role etiquette and emotion play in the comfort and ease of particular gestures. We had a gesture... NOTE: TALK ABOUT THE INFO IN THIS ARTICLE: "What's particularly interesting to neuroscientists is the role that movement seems to play even in abstract thinking. Glenberg has done multiple studies looking at the effect of arm movements on language comprehension. In Glenberg's work, subjects were asked to determine whether a string of words on a computer screen made sense. To answer they had to reach toward themselves or away from themselves to press a button. What Glenberg has found is that subjects are quicker to answer correctly if the motion in the sentence matches the motion they must make to respond. If the sentence is, for example, "Andy delivered the pizza to you," the subject is quicker to discern the meaning of the sentence if he has to reach toward himself to respond than if he has to reach away. The results are the same if the sentence doesn't describe physical movement at all, but more metaphorical interactions, such as "Liz told you the story," or "Anne delegates the responsibilities to you." The implication, Glenberg argues, is that "we are really understanding this language, even when it's more abstract, in terms of bodily action." There is a linguistic sensory vernacular that we can build on when it comes to employing emerging technologies. Now that the technology is developed, it's time to teach it to communicate in ways we as humans understand and expect. Otherwise, it feels foreign and wrong."
  • There is a distinct vernacular of interacting with our physical world.
  • It seems to me, then, that when designing new control points for physical interaction we should consider the following: 1. Be polite and allow me to be polite back. I don't talk to strangers in 0s and 1s, and I shouldn't have to do so for you. 2. Please don't make this difficult for me. Don't make me go through unnecessary hoops because it is easier for you. 3. Hold up your end of the bargain. If I take the time and effort to engage, reward my effort.
  • So, what about touch? What is the etiquette and language of touch? The way people understand the world is through physical experience. The first sense we develop is touch. Given the importance of this sense in early development, it's easy to see how tactile associations (heaviness requires effort, roughness leads to friction, hard objects are inflexible) impact a developing mind's understanding of social situations. Those early connections between physical experience and mental understanding never disappear. As we grow up, physical experience shapes how we conceptualize our world and the way we socialize with other people.
  • So how do you like those chairs? Are they comfortable? Yeah? OK, great. Imagine, for a moment, if you were sitting in these chairs. How receptive would you be to hearing talks all day? How would you feel about me, knowing I was responsible for the decision? A recent study, published in the June 24th issue of Science, reveals that tactile feedback impacts our cognition of the world around us. To test the connection, the researchers conducted a variety of experiments simulating real-world social interactions. In one, test participants played the part of employers interviewing job applicants. When holding a heavy clipboard, they were more likely to consider candidates to be serious, and thought of their own judgments as especially important. They also tested car buyers by placing some in soft chairs and others in hard ones. People sitting in stiff chairs rather than soft ones held out for an extra $350 price cut. This study suggests that there is a very direct relation between emotion, behavior, and tactile sense. Conversely, creating different physical experiences triggers people's unconscious emotions, thoughts, and behaviors. What does this mean for touch technology? How can we use this to create more intuitive touch points using tactile feedback?
  • Touch puts technology right at our fingertips. Yet somehow, when we led the beast out, we lost the mouse clicks and tactile keys. We've removed the tangible feedback and instead rely primarily on visual or audio. It's so close we can touch it, but the only sensation touchscreens communicate is glass. This creates an even bigger divide than the traditional tools for interacting with technology. We've been playing a lot recently with high-fidelity vibrotactile feedback, also known as haptics. A very simple example of this is the vibrate function on your phone, but there are very sophisticated high-definition products about to go to market. We're really excited about what's happening. We see the possibility for experiencing different types of textures and channels of communication, and we wanted to establish a baseline that we could build upon.
  • So, what exactly is the baseline vernacular of a tactile interface? We thought about it and realized that there was someone we could ask: the blind. The blind experience much of the world through touch. Who better to articulate what it's like to communicate through the tactile sense?
  • Through the National Braille Press, we came across some very interesting studies done at MIT. Participants were monitored via MRI while reading. Some of the subjects were sighted and read printed text. The others were visually impaired and read Braille. The findings were shocking: Braille readers use many of the same brain regions to process tactile print as sighted people use to process visual print. This means that giving tactile context to information will significantly help both the sighted and the blind. Great! We decided to look at the role spatial context plays in reading in both the tactile and visual methods, to try to find some efficiencies between the two.
  • In visual reading, the eye moves around the page in a series of expected ways to digest information. The underlying grid enables reading activities including scanning, skimming, and searching.
  • Scanning is essential to reading. There are many efficiencies in written language. Most readers read only the first couple of letters and the length of a word to comprehend it.
  • Skimming is a function of spatial control. The ordered rows allow the eyes to briskly look for a particular item at a speed three to four times faster than normal reading. It is the temporal control of information. It is sometimes referred to as verbosity control because it enables the reader to control the amount of information they are taking in. The reader can move quickly, taking in only general information like the shape and size of a word, or every specific letter, depending on how fast they move their eyes.
  • A reader can search as a result of skimming and scanning, because they can rapidly sort through a large amount of information to find and focus on the important portion.
  • Braille as a method of communication may have a limited audience, but it is helpful to examine as a successful example of a tactile interface for large amounts of information. Entire libraries are printed and read through this method of encoding by thousands of people every day. We learned from our neuroscience friend, Dr. Alan Rorie, that there are several neurological factors that come together to enable someone to read Braille. The key contribution is from the Merkel cell. It is stimulated by angles and points and responds to frequencies that are low and narrow, between 5 and 15 Hz. Basically, it's the nerve in the finger that enables the reader to actually detect the raised Braille cells. Other neurological contributors are the Meissner corpuscles. These nerves sense low frequencies and are rapidly adapting; in other words, they quickly notice frequencies and in essence go numb to them. It is therefore necessary for the finger to move over the texture, rather than the texture being fed to the finger in place.
  • Each cell contains 6 dots, each of which is either raised or not.
  • The consistency of the grid enables the encoding of language.
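  • The dot-grid encoding described above can be sketched in code. This is an illustrative Python sketch, not anything from the talk; the a–j mappings follow standard Grade 1 Braille (dots 1–3 down the left column, 4–6 down the right), and the text rendering is purely for demonstration.

```python
# Illustrative sketch: a Braille cell as a set of raised dots.
# Dots 1-3 run down the left column, dots 4-6 down the right.
# Letter mappings for a-j follow standard Grade 1 Braille.
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5},
}

def render_cell(dots):
    """Render a 6-dot cell as 3 rows x 2 columns ('o' raised, '.' flat)."""
    rows = []
    for r in range(3):
        left = "o" if (r + 1) in dots else "."   # dots 1, 2, 3
        right = "o" if (r + 4) in dots else "."  # dots 4, 5, 6
        rows.append(left + right)
    return "\n".join(rows)

print(render_cell(BRAILLE["c"]))  # dots 1 and 4: the top row is raised
```

Because every character occupies the same 3x2 grid, a reader's finger (or a haptic display) can rely on a fixed spatial layout, which is exactly the consistency the note points to.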
  • The Braille "interface" is similar to printed text. In Western culture, text is printed left to right, top to bottom, in ordered rows.
  • The hands work in a method similar to the eyes in visual reading in order to develop spatial context. One hand reads specific characters while the other gathers spatial information such as word and sentence length.
  • This much is probably clear from looking at a page of Braille. We learned something very interesting from a man named Noel Runyon, who has been working on interfaces for the blind since the early days of IBM. He is, as I believe he phrased it, coincidentally also blind. He explained that the one thing sighted people always miss is the importance of the negative spaces to a Braille reader. They are just as important as the positive spaces, because it is those absences that define the edges of content. The gullies where there is no printed text help the hands keep track of the direction and location of the Braille, and also help the reader establish where the cells start and stop. The resulting grid ultimately provides direct spatial manipulation of text. The reader can then skim, scan, and search just like a sighted reader does with printed text.
  • Just as the printed lines on a piece of notebook paper let the writer know where to put the writing, the gullies let the reader know where the string of content is contained.
  • This one has 32 characters.
  • Audio tools read the screen to the user, who uses key commands to navigate around the visual interface. The user is limited to moving around the screen in relative ways, which allows them to understand the relationship between elements, but not the overall spatial arrangement of elements on the page. It also provides control over verbosity: the amount of data provided, from syllables to whole words to individual letters, depending on the amount of detail the user requires. Apple's VoiceOver 3 replaced the key commands with touchpad multi-touch gestures, so it would be possible to map these controls spatially if we had a tactile interface.
  • And this is V-Braille, an experimental assistive technology for touchscreens developed at the University of Washington. It divides the screen into 6 equal sections. As the user touches each section, they receive a haptic response indicating whether it is occupied or not. Unfortunately, it limits focus to one cell of one letter at a time. Recently Nokia added time coding to V-Braille, so that the tones are received by the user like Morse code.
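  • The core V-Braille mapping, as described in this note, can be sketched roughly as follows. The screen dimensions, the event handling, and the example dot pattern are assumptions for illustration; the actual University of Washington implementation certainly differs in its details.

```python
# Sketch of V-Braille's core idea: the screen is split into a 2x3 grid
# mirroring one Braille cell, and a touch vibrates only when the
# corresponding dot of the current character is raised.
WIDTH, HEIGHT = 480, 800  # hypothetical touchscreen size in pixels

def touched_dot(x, y):
    """Map a touch point to a Braille dot number (1-6).

    Dots 1-3 run down the left half of the screen, 4-6 down the right.
    """
    col = 0 if x < WIDTH / 2 else 1
    row = min(int(y / (HEIGHT / 3)), 2)
    return row + 1 + 3 * col

def haptic_response(x, y, raised_dots):
    """Return True (vibrate) if the touched region holds a raised dot."""
    return touched_dot(x, y) in raised_dots

# "r" is dots 1, 2, 3, 5 in standard Braille; a top-left touch hits dot 1.
print(haptic_response(100, 100, {1, 2, 3, 5}))
```

Note how the whole screen represents a single cell, which is the one-letter-at-a-time limitation the next note complains about.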
  • Imagine trying to read War and Peace one letter at a time. It's next to impossible. Now imagine the enormity of the internet. This will not work. Let's look at a different model that will enable more spatial context.
  • Here is the Kicker tactile touchscreen reader. We believe we can create a better spatial understanding of information on touchscreen devices by creating a tactile grid developed with high-fidelity, multi-channel haptics, coming soon to a mobile device very near you. Here's how it works.
  • Speed of drag would provide dynamic control of verbosity settings, with a proportional relationship between speed of drag and amount of detail provided.
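  • That proportional speed-to-verbosity mapping could be sketched like this. The thresholds and level names here are invented for illustration and were not part of the talk; the real design would tune them against actual reading behavior.

```python
# Illustrative sketch: faster drags give coarser feedback (skimming),
# slower drags give finer detail (careful reading). Threshold values
# are assumptions, not figures from the talk.
def verbosity_level(drag_speed_px_per_s):
    """Map drag speed to an amount-of-detail setting."""
    if drag_speed_px_per_s > 600:
        return "word-shape"        # only length/shape cues, for skimming
    if drag_speed_px_per_s > 200:
        return "whole-word"        # complete words read back
    return "letter-by-letter"      # full detail for careful reading

print(verbosity_level(50))
print(verbosity_level(1000))
```

This mirrors how a sighted reader's eye speed already controls verbosity: move fast and you take in word shapes, slow down and you take in individual letters.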
  • Gestural control will enable easy modality control. With a one-fingered drag, the reader will receive audio feedback; with a two-fingered drag, the reader will receive Braille-encoded feedback.
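  • A minimal sketch of that modality switch: the number of fingers in the drag selects the feedback channel. The touch-event representation is hypothetical.

```python
# Illustrative sketch: choose a feedback channel from finger count.
# One-finger drag -> audio; two-finger drag -> Braille-encoded haptics.
def feedback_mode(touch_points):
    """touch_points is a list of (x, y) contacts currently dragging."""
    if len(touch_points) == 1:
        return "audio"
    if len(touch_points) == 2:
        return "braille-haptic"
    return "none"  # other gestures are ignored in this sketch

print(feedback_mode([(10, 20)]))
print(feedback_mode([(10, 20), (30, 20)]))
```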
  • The resulting tactile interface will restore for visually impaired readers a cognitive sense of space essential to reading, unlike any available modern accessibility tool. And because it would be basic software, the entire catalog of digital content would suddenly become available to people with visual impairments.
  • But it would also enable keyboards like this one, which could easily translate into simple grids for navigating all kinds of screens and surfaces, eyes-free. Modality controls could instead control menus; perhaps a double swipe across keys could provide letters, another numbers. There are all kinds of possibilities.
  • We are in an era of physical interface. Haptics are just one of many new tactile technologies being developed in the marketplace, many of which will have very different connotations for users. As we move forward in this time of embodied cognition and technology, we need to step back and ask ourselves what these experiences actually communicate. By teaching technology to speak human, with a vernacular and etiquette that we understand, we can build products that are more powerful, stronger, and smarter, because they will feel familiar and relatable.

    1. Making Tech Tactile. Jody Medich, Kicker Studio Co-Founder & Principal
    2. Pineal Gland (aka The Third Eye)
    3. “Embodied Cognition”
    4. Monkey see, monkey do.
    5. Some rights reserved by Aaron Webb
    6. “Don’t make me be mean.”
    7. Sense of touch is developed early.
    8. Tactile feedback and cognition. Sense of touch is developed early.
    9. Braille / Printed Text
    10. The Interface of Visual Reading
    11. We do not raed ervey lteter but the wrod as a wlohe. It deosn’t mttaer waht oredr the ltteers in a wrod are, olny the frist and lsat.
    12. FOCUS POINT: 32–25% 45% 75% 100% 75% 45% 32–25%
    13. The Interface of Braille
    14. A single Type-1 Braille character. 1 2 3 4 5 6
    15. a b c d e f g h i j / k l m n o p q r s t / u v w x y z
    16. Just like visual reading, Braille relies on scanning, skimming, and searching.
    17. FOCUS / CONTEXT
    18. Accessibility Tools
    19. Mechanical Pin Reader
    20. Audio Tools: Return to previous page / Cancel current action / Cursor keys scroll around / Tab through all the links on the page
    21. V-Braille
    22. TH
    23. TH E
    24. HE Q
    25. E QU
    26. QUI
    27. UIC
    28. The
    29. The
    30. quick
    31. quick
    32. brown
    33. brown
    34. AUDIO FEEDBACK
    35. AUDIO FEEDBACK
    36. AUDIO FEEDBACK
    37. AUDIO FEEDBACK
    38. Thanks for listening. Any questions? Kicker Studio, 300 Brannan Street, Suite 207, San Francisco, CA 94107, ph. 415-796-3434, jody@kickerstudio.com, www.kickerstudio.com
