
The UX of Tomorrow: Designing for the Unknown by Jeff Feddersen


MIT Enterprise Forum of NYC hosted The UX of Tomorrow: Designing for the Unknown on June 4th, 2015 at Shutterstock featuring Beverly May, Ryan Gossen, Jay Vidyarthi, and Jeff Feddersen. This is Jeff's presentation from the event.

Trained in computer science and music, Jeff works with software and hardware to make computers do new and unusual things. He is currently part of a team developing a sculptural reflection of energy and resource flows in what is being heralded as the world's greenest office building. His work for groups ranging from the Hayden Planetarium and the Connecticut Science Center to Sony and HBO has resulted in award-winning public interactive experiences.

Jeff teaches at NYU's graduate Interactive Telecommunications Program, where he has a residency to develop video curricula supporting physical computing and energy. His novel musical instruments and kinetic sound sculptures have been performed on and exhibited internationally, and he is the co-inventor of an electronic wind instrument based on the Japanese shakuhachi (US patent #7723605).

The next ten years of technology will see many of Ray Kurzweil's predictions come alive: embedded, invisible, unwired electricity- and internet-based interactions will drive every aspect of our lived environment. The physical and digital worlds are merging, powered by incredible changes in computing, universal connectivity, artificial intelligence (AI), and machine learning. This pending wave is certain to change every aspect of human-computer interaction.

Major technological leaps present interesting design and UX challenges and require a wholesale shift in perspective: designing for the as-yet unknown. Screens, keyboards, and mice dominated yesterday and dominate today. Tomorrow, these systems will be initiated, controlled, and tracked through location and environment, semantic context, a wave of the arm, a blink of an eye, a directed gaze, a heartbeat, a crowd-driven trend, even a brainwave.

Whole new approaches and design systems need to be considered for what the next wave of products do, what they look and feel like, and how they can be more meaningful, useful, relevant, and intuitive.

This talk discussed the UX of tomorrow for the next wave of product design, based on some of the very first products and services on the market that hint at the integration of the physical and digital worlds.


  1. The UX of Tomorrow: Designing for the Unknown. MIT Enterprise Forum of NYC, June 4, 2015. Jeff Feddersen
  2. Background: three alternate UX projects • Li Ning Sport Challenge • HBO “Superwall” • Target StyleScape Physical Computing @ NYU • What pcomp is • How it is taught • Example projects
  3. Large body-controlled interactive game (pre-Kinect) Li Ning Reactive Wall With: Ziba, AV&C Photo: Ziba
  4. Flow Dodge Stretch
  5. PrimeSense depth camera
  6. Video: Ziba
  7. Interactive touch UI integrated with large video wall, computer vision system, and SMS. HBO Superwall With: BLT, Apologue, AV&C
  8. C
  9. Video: Jamil Thompson
  10. UI UI UI UI • Video playback, texture layer, and variable compositing mask on Vista Spyder and Watchout systems • 4 independent instances of a Java-based UI, running 2160x1920 @ 60fps (separate HD UI and alpha channels) • Crowd-sensing cameras • Participant surveillance photo cameras
  11. With: Mother NYC, AV&C, Brooklyn Research Target StyleScape 120’ LED cinemagraph with mixed interactives along its entire length, combining tangible, computer-vision, mobile, and human-directed moments Photo: Mother NYC
  12. Photo: Mother NYC
  13. Fun Side Photo: Brooklyn Research
  14. Functional Side Photo: Brooklyn Research
  15. 1. Interactive Overview Gizmos Eyes Mobile Humans Simple hardware sensors strategically located throughout the space Video/depth streams processed to support interaction Guests use their devices Event staff in the mix Four broad categories of interaction tech have distinct infrastructure, execution, and cost implications. Any of the four can be mixed together, and each can be scaled from small+targeted to broad+comprehensive integration with the cinemagraph. Design document for StyleScape
  16. Gizmos Capture Board CPU Data to cinemagraph Display wall Proximity sensor Floor pad Switch Button Motion detector In this scenario, the space will have small, simple sensors custom-built into the environment or integrated with props. A single central computer reads the state of each sensor, filters the data, and sends triggers to the video system. Design document for StyleScape
  17. Eyes CPU Data to cinemagraph Display wall CPU CPU CPU CPU CPU CPU Cameras - either 2D or 3D, and used singly or in an array - watch the crowd. Computers (approximately 1 per image stream) process the data into triggers for the video system. Design document for StyleScape
  18. Mixed Capture CPU Data to cinemagraph Display wall CPU CPU The four categories can be mixed together to best support specific interactions. However, cost and effort are cumulative because there is almost no infrastructure overlap. CPU Design document for StyleScape
  19. Summary (design document for StyleScape)
      Gizmos: simple, scalable, many possibilities from the same components; needs integration into props, lots of cabling, breakable; low cost (scalable); medium effort (scalable)
      Eyes: could be cool and subtle, can cover a large space; needs lots of processing to extract smart triggers, optical cameras need lighting; high cost (1:1 CPU:camera); high effort
      Mobile: familiarity, contact beyond the event; common; all costs in software/campaign; medium-to-high effort
      Humans: flexible, open-ended, resilient; requires staffing, training, management; low cost; low effort
  20. 3. Plan IR RE CC CC RE IR IR IR Mic Custom or stock control surface CPU with mic-in ADC, 6-12 channels CC * * * * These might also be accomplished with sensors. Key: IR = infrared, motion, or similar; RE = rotary encoder or similar; CC = contact closure; Mic = microphone. Design document for StyleScape
  21. Common attributes: • Heterogeneous systems with distinct boundaries • Components joined by “network glue” • (Typically UDP/OSC in my case) • Concept precedes solution
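The “network glue” attribute above can be made concrete. Here is a minimal sketch of sending an OSC-style trigger over UDP using only the Python standard library; the address `/wall/trigger`, host, and port are made-up examples, not values from the talk:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Pack a minimal OSC message carrying a single float argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# Fire a (hypothetical) trigger at a video system listening on UDP port 9000
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/wall/trigger", 1.0), ("127.0.0.1", 9000))
```

UDP being connectionless is part of the appeal for installations with distinct component boundaries: a sensor box can emit triggers whether or not the video system is up yet.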
  22. Physical Computing
  23. PCOMP Overview Engadget
  24. From: WHAT IS PHYSICAL COMPUTING? Physical computing is an approach to computer-human interaction design that starts by considering how humans express themselves physically. Computer interface design instruction often takes the computer hardware as a given — namely, that there is a keyboard, a screen, speakers, and a mouse or trackpad or touchscreen — and concentrates on teaching the software necessary to design within those boundaries. In physical computing, we take the human body and its capabilities as the starting point, and attempt to design interfaces, both software and hardware, that can sense and respond to what humans can physically do.
  25. Requires thinking about • 1-bit: digital I/O, e.g. button, LED • Many-bits: analog I/O, e.g. knob, fading LED • Ways to transduce aspects of the physical world to varying electrical properties (typically changing resistance -> changing voltage)
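The resistance-to-voltage step in the last bullet is usually a voltage divider read by an ADC. A short sketch of the arithmetic; the component values (a hypothetical photocell against a 10k fixed resistor) are illustrative, not from the talk:

```python
def divider_vout(vin: float, r_fixed: float, r_sensor: float) -> float:
    """Voltage at the junction of a fixed resistor and a variable sensor
    wired as a divider: Vout = Vin * R_sensor / (R_fixed + R_sensor)."""
    return vin * r_sensor / (r_fixed + r_sensor)

def adc_counts(vout: float, vref: float = 5.0, bits: int = 10) -> int:
    """What a 10-bit ADC (like an Arduino's analogRead) would report."""
    return round(vout / vref * ((1 << bits) - 1))

# Hypothetical photocell: ~1k ohm in bright light, ~10k ohm in the dark
bright = adc_counts(divider_vout(5.0, 10_000, 1_000))    # low reading
dark = adc_counts(divider_vout(5.0, 10_000, 10_000))     # mid-scale reading
```

The point of the divider is exactly the bullet's "changing resistance -> changing voltage": the ADC can only measure voltage, so the fixed resistor converts the sensor's resistance swing into a voltage swing.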
  26. Handle messy “real-world” inputs. Derive meaning from input: what did the user do vs. what did the user want. Reconnect to meaningful output.
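The gap between “what did the user do” and “what did the user want” often comes down to filtering: smooth the raw stream, then fire triggers with hysteresis so noise near a threshold does not retrigger. A sketch under assumed thresholds (the 0.3/0.7 band and alpha are illustrative, not from the talk):

```python
def smooth(samples, alpha=0.2):
    """Exponential moving average: tames jitter in a noisy sensor stream."""
    level = samples[0]
    out = []
    for s in samples:
        level = alpha * s + (1 - alpha) * level
        out.append(level)
    return out

class HysteresisTrigger:
    """Fire once when the signal crosses `high`; re-arm only below `low`."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.armed = True

    def update(self, x):
        if self.armed and x >= self.high:
            self.armed = False
            return True   # the single "user did something" event
        if x <= self.low:
            self.armed = True
        return False
```

The same pattern shows up in debouncing a button: the raw input bounces, but the derived meaning is one press.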
  27. Learn communication protocols like… • Asynchronous serial • I2C • SPI …so you can connect to other “smart” components such as: • Accelerometers • GPS • Display drivers • just about anything else…
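Asynchronous serial, the first protocol above, delivers raw bytes with no message boundaries, so a small framing layer is usually the first thing you build on top of it. A minimal illustration; the 0x7E start byte and additive checksum are arbitrary choices for this sketch, not a standard:

```python
START = 0x7E  # arbitrary start-of-frame marker for this sketch

def frame(payload: bytes) -> bytes:
    """Wrap a payload as [start][length][payload][checksum] for async serial."""
    return bytes([START, len(payload)]) + payload + bytes([sum(payload) & 0xFF])

def unframe(buf: bytes):
    """Return the payload if the frame is intact, else None."""
    if len(buf) < 3 or buf[0] != START or len(buf) != buf[1] + 3:
        return None
    payload = buf[2:-1]
    return payload if (sum(payload) & 0xFF) == buf[-1] else None
```

I2C and SPI do not need this layer because the master clocks out a known number of bytes per transaction; async serial is the one where bytes can arrive torn across reads.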
  28. Example: Terminus
  29. David Cihelna, Yurika Mulase, 2014. 3D navigation
  31. Example: Equilibrium
  32. Jonathan Han and Yuhang Jedy Chen, 2014. Two-person “synchronization” game
  34. Example: Descriptive Camera
  35. Matt Richardson, 2012