Kevin Slavin - Reality Is Plenty, Thanks
 

Usage Rights

CC Attribution-NonCommercial-ShareAlike License

  • The town was Upside-Down and Backwards Town, with a newspaper printed upside down and backwards, posters on the wall, signs on the stores, all of it constructed through a phony lens to reshape the world to what it needed to be. Norman said it took a few weeks to be able to read like that. It took a few weeks to train his brain to see the world as the Navy needed him to see it. I’m not sure how much of this story is true. But the story of that town is close enough to some well-known methods of re-training the human eye, and the town, whether it existed or not, is close enough to some of the spook desert towns of the Second World War. I picture this place when I hear the stories we tell ourselves about our future in “Augmented Reality.” Was Norman’s upside-down-and-backwards town any more or less real than New York City? The way he was trained to see: was it more or less real than the ways we see to begin with?
  • In 1929, Jean Piaget was studying the ways that children see the world. He noted that children believed in “extramission” – that the eye actually emitted images during vision, rather than taking them in. If this seems childish, consider that “Emission Theory” was the dominant model for conceiving of human vision until Ibn al-Haytham disproved it in 1021. Ptolemy, Plato, Euclid: all of their theories of optical phenomena involved light beams shooting from the eye to construct imagery in front of us. From the stars in the sky to the apple on the ground, in Emission Theory, all of it is “painted by the eye.” In a disturbing study from 2002, something like 50% of college students understood the process of seeing as some variation of Emission Theory. “Intromission theory” is the clear and modern alternative: the retina detects photons of light in the world, causing neural impulses to travel to the brain, and the brain produces meaning from that transmission. It was easier to reprogram Navy recruits to see the world upside-down and backwards than it was to get college students to understand vision as perceiving, instead of projecting. “Vision is generally thought of as directed outward, away from the self, toward specific objects.” If you understand vision like that – as contemporary college students do – it’s only natural to “augment reality” with an additional eye projecting additional meaning onto the empty world that surrounds us.
  • The phrases “AR” and “Augmented Reality,” firmly entrenched in the 2010 vernacular, are only 20 years old. They were coined by Tom Caudell, a researcher at Boeing, working with money from DARPA. Caudell was focused on a specific challenge in airplane manufacturing: building the clusters of wire and switches that form the aircraft’s neurological system. In 1990, this was done with “foamboards”: large sheets of plywood (left image) that had 1:1 mockups of the entire plane. Factory workers would wind their way across dozens of boards, pulling and fusing wire according to the markers in front of them.  Augmented Reality started with building machines to fly, with cables running up into the helmet. The photo on the right is from 1995. If you look at this picture, you see a man hooked up to a set of wires that terminate at his eyes. There is another set of wires that terminate at his fingertips. He is building the nervous system for a Boeing airplane, building the world on a blueprint visible to his eyes alone. His eyes make the blueprint real, and his hands make the airplane real.  Between one set of wires and the other is his neuromuscular response to what he alone sees, his own nervous system. This system is following a program written by Boeing.  Augmented Reality starts here.
  • In 1984, James Cameron unleashed The Terminator, starring a cyborg assassin sent through time from a post-apocalypse 2029. The assassin – played by the Governor of a post-bankruptcy California – is carefully considered. To all outward appearances, it has human traits and features. On the inside, like all computing machines, it’s just metal, wires, and a series of algorithms. There are sensing apparatuses, of course, including sophisticated optics that function like human vision, but better. In the film, we periodically see the POV of the Terminator itself (above). This is one of the first cinematic representations of how machines see the world. The Terminator sees the reality that we see, but it sees more; there’s a steady stream of text superimposed on the raw optical imagery. The machine sees the world like the worker at Boeing, looking at data in front of him to inform, guide and evaluate his actions. What’s curious about the Terminator’s vision is that it uses text, as if a computer needs to read. Learning from Los Angeles in 1984, vision is so singularly important that a microprocessor would be unable to respond without having eyes to read the text it is writing. This is the fallacy of augmented reality: it asserts that the eye (not the brain) is the unified center of perception, thought, and reality.
  • In Rules of Play: Game Design Fundamentals, Katie Salen and Eric Zimmerman outline an argument for the immersive fallacy, “… the idea that the pleasure of a media experience lies in its ability to sensually transport the participant into an illusory, simulated reality. According to the immersive fallacy, this reality is so complete that ideally the frame falls away so that the player truly believes that he or she is part of an imaginary world.” In Augmented Reality, the proposal is that immersion – reality – is most effectively transformed through mimetic representation. The image above, from an AR company called Total Immersion, makes the baseball card “come alive,” as if the 3D model of the player has more life than ten years of the player’s stats, printed on the back of the card. Salen and Zimmerman cite film studies scholar Elena Gorfinkel, who writes: “the confusion in this conversation has emerged because representational strategies are conflated with the effect of immersion. Immersion itself is not tied to a replication or mimesis of reality.” In fact, replication and mimesis often make things seem less real, a phenomenon well known to roboticists. There is an instinctively repulsive response to robots with appearance and motion between a “barely human” and “fully human” entity. This space is referred to as “the uncanny valley.” As we aspire to lenses that render the world in front of us, we are at the frontier of this valley. Not for the human face, but for the world around us: the valley between “barely real” and “fully real.” Companies like Layar and Wikitude propose to augment the streets around us with imports from the region. The uncanny valley will finally have real-world geography, with real-world citizens, further and further from home. Our cities, which we have browsed since their inception, will become searchable. Where cities have secrets, Google has facts. In the transposition of the two on the human eye, those facts will make the city feel further and further away.
  • The effects of the Tamagotchi digital pet are well documented: it has sold 70 million units since its debut in 1996. In Hawaii, some schools banned them, because some Tamagotchis could starve to death in less than half a day without care. They were too demanding. In some ways, they were most present when they couldn’t be seen. The immersion created by the Tamagotchi is created with an 8x8 grid of black-and-white pixels. It doesn’t aspire to mimetic visual representation – rather, the Tamagotchi becomes real by behaving real: by being demanding, rewarding, hungry, vulnerable. In 1996, Tamagotchi was competing for attention with the first wave of 3D videogames: Super Mario 64, Duke Nukem 3D, Tomb Raider. The obvious trend in entertainment was to increase the polygon count, on the slippery slope to the uncanny valley. Tamagotchi outsold all of those, because reality is augmented when it feels different, not when it appears different. When the senses of time, obligation, and rewards are altered, the aspiration to 3D optical Augmented Reality begins to feel like pornography: a thin veneer of the actual experience, flattened for the eye, the one sense most easily fooled to begin with.
  • No experience I’ve ever had prepared me for how real it would feel, this ghost sweeping through the room. Not because the technology made him visible, but because the technology made him real. Papa Bones would arrive unexpectedly, move things around on the table, and then move on. That is the weight of the invisible, augmented.
  • … a haptic navigational device that requires only the sense of touch to guide a user. No maps, no text, no arrows, no lights. It sits on the palm of one's hand and leans, vibrates and gravitates towards a preset location. Akin to someone pointing you in the right direction, there is no need to find your map, you simply follow as the device leans toward your destination.  In Momo, we see another alternative – and optimistic – path to augmenting reality. Google maps shows us our world from above and draws a line for the route. Conventional AR has us look through a lens to see where we are going. Momo augments the sense of location with the body, not the eye. With Momo, the user is pushed softly by an object in her hands, nudged towards her destination, rather than directed. Her eyes are restored to their original function, taking in the world around her. Not the world in front of her, but the world around her.
  • Most stories of technologies – and the dreams of technologies – either begin or end with the military. AR is no different. Some of the earliest, and arguably most useful, applications of the ideas were in Helmet-Mounted Displays (HMDs) for fighter pilots. Above left, the first HMD, used by AH-64 pilots. If Emission Theory is understood to be Hellfire missiles shooting out of the eye, instead of rays of light, then it is realized in 1984. The vision of the pilot is linked to the actions of the fighter’s arsenal. The missile goes where the pilot looks. In contemporary air combat, the pilot is the slowest and least responsive component of the aircraft. Thus, for the F-35 Joint Strike Fighter, the helmet – the pilot’s vision – is given as much attention as the rest of the plane. To look at an F-35 pilot, you see him for what he is: a giant eyeball, an eyeball with agency, strapped to a body that is strapped to the plane. This is enormously effective for pilots, because they are in the sky, travelling past Mach 1. There’s very little time to think. In many ways, there simply is no reality except what they see, no reality except for the world directly in front of them.
  • In 1994, Michael Flanagan and Andrew Harrison published their study, “The Effects of Automobile Head-up Display Location for Younger and Older Drivers.” There have been many attempts to transfer the ideas and technologies of the pilot’s augmented vision to the driver’s. Above, the Head-Up Display (HUD) installed in the Peugeot 3008. For the most part, these experiments have underperformed or failed. The driver is different from the pilot. Drivers’ sense of the world is gained not just by what they are focused on, but by what they are not focused on: what lies in their periphery, what they hear, what catches their eye. For the driver, reality is not augmented by drawing focus to a single point straight ahead. Singular focus – in which the eye looks at, rather than around – diminishes reality, closes it down. As it turns out, for the driver, as for most everyone, reality is understood to be the whole world around us, not just the world in front of us.
  • In World War II, the Navy gave Norman tools to shift the way he saw the world. These tools didn’t change what he saw, as augmented reality sets out to do, but they changed how he saw. This is the noble purpose facing designers looking at the future. The principles and goals of augmenting reality—of using technology to enhance or alter the perception of reality—may not be best expressed by designing anything to look at. The artists Chris Woebken and Kenichi Okada have designed Animal Superpowers (2008, page 134), including an ant apparatus that “allows you to feel like an ant by magnifying your vision 50x through microscope antennas in your hand. You can perceive all the tiny cracks and details of a surface through this. It allows you to ‘see’ through your hands and to dive into a secret and hidden world.” Reality is augmented, to be sure, but not by adding a layer, not by making something to look at. It’s about making something different to see with, to feel the world—the real world—in ways that we’ve never known. These are the astronauts on Earth, re-exploring the planet none of us will leave. They are inventing new ways to see rather than new things to look at. Because there’s no shortage of things to see: reality is already plenty, thanks.

Presentation Transcript

  • @slavin_fpo
  • HELLO AMSTERDAM.
    A PRESENTATION THAT DIDN’T EXIST
    ONE HOUR AGO BUT COMES FROM
    SOME STUFF I’VE BEEN WRITING ABOUT
    AND I HAVE NO IDEA WHAT’S ABOUT
    TO HAPPEN OR HOW LONG IT WILL TAKE.
    @slavin_fpo
  • REALITY IS PLENTY, THANKS.
    12 ARGUMENTS FOR KEEPING THE NAKED EYE NAKED.
  • UPSIDE DOWN AND
    BACKWARDS TOWN (1943)
  • EMISSION THEORY
    (1929 / 1021 / 2002)
  • AUGMENTING REALITY BEGINS
    WITH BUILDING MACHINES
    THAT FLY (1990 / 1995)
  • BEFORE MACHINES CAN
    THINK, THEY NEED TO LEARN
    TO READ (REALLY?) (1984/2029)
  • SETTLERS OF THE UNCANNY VALLEY
  • WHAT HAPPENS WHEN WE AREN’T LOOKING (1996)
  • IT’S EASY TO BELIEVE IN
    GHOSTS BECAUSE THEY
    ARE INVISIBLE (2006)
  • BEYOND THE EYE (2007)
  • AUGMENTING REALITY STARTS
    WITH BUILDING MACHINES
    THAT FLY (1984/2006).
  • AUGMENTING REALITY STARTS
    WITH BUILDING MACHINES
    THAT FLY (1994).
  • THE WORLD AROUND US (NOW)
  • NO SHORTAGE OF THINGS TO SEE /
    REALITY IS PLENTY, THANKS