What we mean by meaning: new structural properties of information architecture IAS15

Conference session: What we mean by meaning: new structural properties of information architecture. Presented at The Information Architecture Summit 2015, Minneapolis, Minnesota, 24-25 April 2015, by Marsha Haverty


  1. 1. I came to information architecture from concepts I encountered in grad school in the late 90s: Vannevar Bush and Doug Engelbart instrumenting and visualizing human associativity, hypertext theory, information visualization, information seeking behavior. That was where I first realized that information can have structure, and spatial and perceptual qualities, and behavior. After grad school, I got a job as an information architect in an agency setting and went to the first IA Summit. I’m wafting some of the concepts from Lou Rosenfeld’s talk that are still so relevant today. I went to the second Summit, then contributed a paper to the JASIST special topic issue on IA in 2002. I was very interested in the notion that IA was a new field without its own internal body of theory. What did that mean? What was that like? Eleven years flashed before my eyes. I finally went back to the IA Summit in 2013 to find IA described as, “The structural integrity of meaning across contexts,” by Jorge Arango. I was first introduced to the notion of embodied cognition from Andrew Hinton’s talk. Last year, I brought a poster on data visualization techniques. And here we are: IAS16. I recently joined Autodesk, helping mechanical designers collaborate around 3D geometry for product design.
  2. 2. If Information Architecture worries about the structural integrity of meaning across contexts, then the spirit of this talk is to zoom in and really look at the nature of information and the nature of meaning to inform our work.
  4. 4. We will see that if we look at these things through the lens of embodied cognition, we see structural properties of IA that we couldn’t see before, when looking only through the lens of traditional cognition.
  5. 5. Before we get to the new structural properties, we will first visualize the nature of meaning. To visualize the nature of meaning, we’ll build a scene. And this scene starts with the sun. Image credit: NASA/SDO
  6. 6. Let’s add to our scene a tree in nature, and a built chair.
  7. 7. The Sun gives radiant light that shines on all the things.
  8. 8. All the things reflect the light. That’s what we experience as ambient light.
  9. 9. Or what James J. Gibson, the founder of Ecological Psychology back in the 1960s, calls an ambient energy array.
  10. 10. In this ambient energy array, we detect surfaces and edges and textures. But the light shifts, we move around objects, objects move around us: it’s not these surfaces, edges, textures themselves that we pick up, but the relationships among them. These relationships are invariant structure. And it’s the invariant structure that we detect, regardless of our perspective.
  11. 11. This invariant structure is information. That’s what information *is*. And it’s already out there in the environment, we don’t need our brains to do any special processing.
  12. 12. If we think of a chair, we don’t need to have seen the chair from every possible perspective to recognize it. We pick up the invariant relationships among its edges and surfaces and textures that let us understand the seat and the back and the legs, and that it’s the same chair we’ve seen before.
  13. 13. Let’s add to our scene an actor-observer (NOTE: this actor-observer is based on the observer in J. J. Gibson’s diagrams from his book, The Ecological Approach to Visual Perception).
  14. 14. As we’ve seen, objects give information in the form of invariant structure in relationships among surfaces, edges, and textures.
  15. 15. Our actor-observer brings to this her goals...
  16. 16. her actions...
  17. 17. And it is the confluence of goals...
  18. 18. Actions...
  19. 19. And information...
  20. 20. Where meaning emerges. Meaning emerges in this confluence.
  21. 21. We’ve considered information about objects we see visually in the form of invariant relationships among surfaces, edges, and textures. There are other types of information in the ambient energy array.
  22. 22. The layout of a collection of objects gives information about wayfinding.
  23. 23. We humans are wired for mechanical information: touch, cold, warm, pain.
  24. 24. And chemical information: taste and scent.
  25. 25. And these are all perceptual information types.
  26. 26. We also have linguistic information.
  27. 27. Words on a surface; physical or digital overlays.
  28. 28. Words through the air; spoken or projected.
  29. 29. Gestures evoking concepts.
  30. 30. Even introspection in our heads. All of this makes up the ambient energy array, the *information* in our environment.
  31. 31. We’ve looked at where meaning emerges; now let’s zoom in to look at the mechanism that lets us interact with information in the environment. We’ll start with perceptual information. For perceptual information, the invariant structure, the information, is in the form of affordances: what we can do to engage with objects. When we engage with affordances for perceptual information, we form what is called a perception-action coupling.
  32. 32. As an example of a perception-action coupling, we’ll look at The Outfielder Problem. How does a baseball outfielder know where to go to catch a fly ball? Traditional cognition suggests the outfielder sees the initial trajectory, figures out where the ball is going to land, and runs to that spot. But that’s not what happens. The outfielder actually forms a perception-action coupling with the angle relationships of the ball. The outfielder problem is described in Wilson, A. D. & Golonka, S. (2013). Embodied cognition is not what you think it is. Frontiers in Psychology.
  33. 33. If the ball starts angling to one side, the outfielder shifts to remove the angle. The perception-action coupling is simply: eliminate horizontal angles.
  34. 34. In this way, outfielders make a series of course corrections to maintain the angle relationship and appear to drift to end up in the right spot to catch the ball. (There is a perception-action coupling the outfielder forms with the vertical geometry of the ball too, but for simplicity, we are looking at the horizontal coupling.)
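The outfielder’s strategy above is a feedback loop, not a prediction. As a minimal sketch (all names and numbers here are hypothetical, invented for illustration), the horizontal coupling can be modeled as repeatedly correcting to cancel the horizontal angle:

```python
def horizontal_angle(ball_x, fielder_x):
    """Signed horizontal offset of the ball from the fielder's line of sight."""
    return ball_x - fielder_x

def step(ball_x, fielder_x, gain=0.5):
    """One course correction: move so as to reduce the horizontal angle."""
    return fielder_x + gain * horizontal_angle(ball_x, fielder_x)

# The ball drifts sideways over its flight; the fielder repeatedly corrects.
ball_path = [0.0, 1.0, 2.5, 4.0, 5.0]   # horizontal ball positions over time
fielder_x = 0.0
for ball_x in ball_path:
    fielder_x = step(ball_x, fielder_x)

# As the ball comes down, a few more corrections close the remaining angle.
for _ in range(5):
    fielder_x = step(ball_path[-1], fielder_x)

print(fielder_x)  # close to the ball's final position at 5.0
```

With a gain of 1.0 the fielder would track the ball exactly; a smaller gain shows the characteristic “drift” of successive partial corrections. At no point does the loop compute a landing spot.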
  35. 35. Language is different. Light doesn’t reflect off of concepts. What is that handle, that aspect of the semantics of a concept, that allows us to engage its meaning? We don’t have a word for that. It doesn’t fit the strict definition of affordance. That’s something still under debate in the embodied cognitive psychology community. Perhaps we can contribute in that area. Sabrina Golonka, an embodied cognitive psychologist, rolls both of these up to say we form an information-behavior coupling, to cover how we engage both types of information. See http://psychsciencenotes.blogspot.com/2014/06/a-gibsonian-analysis-of-linguistic.html for Sabrina Golonka’s notion of an information-behavior coupling (and forthcoming paper).
  36. 36. An information-behavior coupling is how and where meaning emerges. But, these things aren’t frozen in place.
  37. 37. Our goal-directed actions and aspects of the environment are changing, and if we’re going to maintain this information-behavior coupling, these things must co-evolve.
  38. 38. In fact, we can say that human cognition is the state space of information-behavior couplings that form, break, and co-evolve with goal-directed action and environment dynamics.
  39. 39. But, what does all this flux and co-evolution mean about meaning? The nature of meaning is flow.
  40. 40. Flows have properties. Meaning has viscosity, or ease of flow. Meaning has texture depending on what facets of information are participating in the flow; meaning is subject to permeability in what it’s flowing through. We’ll look at each of these in turn to see how we may use designed structures to dial them up or down depending on our needs.
  41. 41. First, we’ll look at viscosity, or the ease of the flow of meaning. I’m going to introduce a new IA construct to account for viscosity in the flow of meaning.
  42. 42. To do this, I want to ask the question: is information like water? Image credit: Impact of a drop of water on a water surface, by Roger McLassus https://commons.wikimedia.org/wiki/File:2006-01-28_Drop-impact.jpg
  43. 43. If we run all the permutations on the different relative amounts of temperature and pressure, we end up with what we know as the phase-space of water. The nature of water is drastically different as a solid vs. liquid or gas. We can phase-shift a solid by melting it to a liquid, or freeze a liquid into a solid.
  44. 44. I am asking the question, what if we do the same thing for the two types of information: perceptual and linguistic? What is it like to interact with different combinations of perceptual and linguistic information? Do we find neighborhoods of combinations in which it is drastically different to engage with the meaning of information? Phase-space from Haverty, M (forthcoming). Meaning as flow: structural properties of information architecture, Journal of Information Architecture.
  45. 45. Perceptual information is tacit and reflexive. Once we’ve learned to detect it, we don’t have to think about it to engage with it. Perception flows easily. It has a low viscosity, like water.
  46. 46. Language is laden with awareness and associativity, and requires attention for us to use it. Language is much more viscous. It takes more work to flow because we have to think about it, actively attend to it.
  47. 47. Generally, the area dominated by perceptual information is reflexive. The area dominated by linguistic information is more attentive. Phase-space from Haverty, Marsha: Meaning as flow: structural properties of information architecture (forthcoming), Journal of Information Architecture.
  48. 48. But we can get more granular than that. If we have little or no language, all perceptual information, we are engaging with meaning in a very visceral manner: mechanical motions and eye movements. Similarly, if we look at the area with little perceptual information, we are operating in the world of concepts. We are thinking conceptually to engage meaning. If we have a lot of language, especially if it’s abstract, which we’ll look at later, we need intense concentration, in this highest viscosity state. If we have a lot of perceptual information to deal with, we must engage with intense coordination. And when we are faced with a lot of information of either type, an emotional response is often triggered. This isn’t the only place where emotion can occur: we can have an emotional component anywhere along the phase-space, but the presence of large amounts of information does tend to evoke emotion. Further, if we have too much information, we become overloaded and unable to engage well with meaning. Phase-space from Haverty, Marsha: Meaning as flow: structural properties of information architecture (forthcoming), Journal of Information Architecture.
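To make those neighborhoods concrete, here is a purely illustrative sketch that assigns a (perceptual, linguistic) information mix to one of the regions just described. The 0-to-1 scales and the thresholds are invented for demonstration; nothing in the phase-space model prescribes them:

```python
def phase_neighborhood(perceptual, linguistic, overload=1.5):
    """Classify a (perceptual, linguistic) information mix, each roughly 0..1.

    Thresholds are made up for illustration of the phase-space idea.
    """
    if perceptual + linguistic > overload:
        return "overloaded"              # too much of either type to engage well
    if linguistic > 0.7:
        return "intense concentration"   # heavy (often abstract) language
    if perceptual > 0.7:
        return "intense coordination"    # heavy perceptual engagement
    if linguistic < 0.2:
        return "visceral"                # little or no language: reflexive
    if perceptual < 0.2:
        return "conceptual"              # little perception: world of concepts
    return "mixed"

print(phase_neighborhood(0.5, 0.05))   # visceral
print(phase_neighborhood(0.05, 0.5))   # conceptual
print(phase_neighborhood(0.1, 0.9))    # intense concentration
```

The point of the sketch is only that neighborhoods exist and that a design choice (adding or removing language or perceptual information) moves the experience from one neighborhood to another.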
  49. 49. For Twitter, before inline image preview, the information-behavior coupling we formed to engage with meaning in this place was dominated by language. The nature of the information on Twitter put us in the mode of reading words.
  50. 50. We bring a variety of goals to our engagement with Twitter: the intrigue of coming across interesting articles or images or thoughts that we wouldn’t have encountered in another way; humor; discourse sampling (getting a sample of the latest news or conversations among our friends or peers). The nature of the information we engaged on Twitter before image preview was primarily language. We had some perceptual information in the avatars, but those were constrained to a consistent spatial position along the periphery and used for occasional glances at source information. Engaging with meaning on Twitter is not like reading a novel, where you have a thread of and-then-this-happened, and-then-this-happened… Twitter is semantic juxtaposition. It requires concentration to do all that concept hopping from tweet to unrelated tweet, even though we’re performing the same activity: reading. And there’s a lot of it. We got really good at scanning through the never-ending semantic mashup. Because of the high-concentration concept hopping, the flow of meaning had a highly viscous nature.
  51. 51. When Twitter introduced inline images, suddenly our mode of engaging meaning by scanning words with eye movements was interrupted. We now have these perceptual swaths interrupting our high-concentration scanning. We either have to skip over them, or we have to phase-switch from scanning words to glancing at images. We probably don’t even think about the images anymore, we’re so used to them, but what it’s like to meaningfully engage with Twitter, to form an information-behavior coupling with Twitter, has phase-shifted. We’re now engaging with meaning in a different phase-space neighborhood. It’s not the same.
  52. 52. This is a visual archive of 5 years of issues of a design blog called Infosthetics. Each issue is color coded by key categories, and they are ordered by time, most recent at the top, oldest at the bottom.
  53. 53. When you select a category, in this case Architecture, we get an immediate visual understanding of how this category spanned the years of issues. We see some clusters early on at the bottom, and most recently. We see other categories that co-occurred in the same issue. We could interact to get details about the concept.
  54. 54. We would locate this visual archive well down in the perceptual dominated region. Category labels are used as semantic anchors for a primarily reflexive gleaning of the distribution of topics across time.
  55. 55. In the movie HER, Theodore has a relationship with an AI that manifests solely as words projected in his ear. He builds a complete relationship made of language.
  56. 56. We would locate the movie HER way up in the intense concentration neighborhood of the phase-space with virtually no perceptual information. In a relationship, actively attending is the point. And Theodore is a writer; he loves words. This mode, this extremely high viscosity suits him, and is fulfilling for him.
  57. 57. Shortly after this movie came out, Ben Shneiderman said, “The future of computing will be more visual than verbal. Voice is important for human relationships, but can’t keep up with the human mind’s desire for information abundance and swift decisions.”
  58. 58. Our design projects likely fall in this general area: more dominated by language, but also using perceptual information to help our users make sense of text. We worry about white space and layout, especially planning for adapting to different viewport sizes. We infuse our navigation and function triggers with edges and surfaces and textures. We craft node-link structures for associative wayfinding.
  59. 59. But because our digital environment can be anywhere, with more language, more to perceive, our designs are actually phase-shifted to a different neighborhood.
  60. 60. To mitigate this attention overload, our designs are starting to make use of information at the extremes of the phase space; especially, for wearable sensor visualizations and Internet of Things displays.
  61. 61. But often, the system status is fully perceptual, and then, as soon as we go to engage with it, it phase-shifts us to language. That may be appropriate: the situation may need the level of attention required by forming information-behavior couplings in the more viscous linguistic mode. But sometimes language is just too viscous for the situation.
  62. 62. I want to consider an example of fitting the nature of the information-behavior coupling to the situation. This is a concept for a new way of designing a car dashboard control panel. Instead of a dashboard of buttons and labels, it’s just a blank screen. The driver touches the screen anywhere and the interface is summoned to that spot. Depending on the number of fingers, the driver is controlling a different aspect of the car: radio, heater, whatever. The driver simply drags fingers up or down to adjust.
  63. 63. And it’s a forgiving information-behavior coupling. The driver drags the control up or down, but doesn’t have to be exact: as long as the motion is generally up or generally down, that’s close enough to maintain the engagement. The driver is worried about wayfinding, not hitting a cyclist, talking to a passenger… This is a design and information modality that respects the high viscosity of the surrounding situation and the need to phase-shift dashboard interaction to more perceptual and forgiving coupling. This information structure starts perceptual and stays perceptual.
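The forgiving coupling in that dashboard concept can be sketched as interaction logic. The subsystem names, gesture values, and tolerance factor below are all hypothetical; the concept is a design idea, not a published API:

```python
# Hypothetical mapping from finger count to the subsystem being controlled.
SUBSYSTEMS = {1: "volume", 2: "temperature", 3: "fan"}

def interpret_drag(fingers, dx, dy, tolerance=2.0):
    """Map a touch gesture to (subsystem, adjustment), or None if unrecognized.

    dy > 0 means an upward drag. The drag counts as vertical as long as it is
    not more than `tolerance` times wider than it is tall: forgiving on purpose,
    so a distracted driver's sloppy motion still maintains the coupling.
    """
    subsystem = SUBSYSTEMS.get(fingers)
    if subsystem is None or dy == 0:
        return None
    if abs(dx) > tolerance * abs(dy):
        return None  # too sideways, even for our generous tolerance
    direction = +1 if dy > 0 else -1
    return (subsystem, direction)

print(interpret_drag(1, dx=15, dy=40))   # ('volume', 1): sloppy, but clearly up
print(interpret_drag(2, dx=5, dy=-30))   # ('temperature', -1)
```

The design choice worth noticing is that only the rough direction matters: the coupling asks for “generally up” or “generally down,” never for precision, so it stays in the perceptual, low-viscosity mode.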
  64. 64. Given the pervasive information ecosystem in which we find ourselves, we need to consider the entire phase-space when we make decisions about the information in our designs. We need to note 2 phase-space locations: the design itself, and the design in the greater information ecosystem. We need to decide when to use the higher viscosity of language and when to offload some meaning to perception.
  65. 65. The next aspect of the flow of meaning we’ll discuss is texture. The texture of meaning is to ask what facets of perceptual and linguistic information are participating in the information in our designs.
  66. 66. We’ll start with linguistic texture facets. Let’s consider some usual IA facets: controlled vocabulary, faceted classification, taxonomy, ontology, content strategy.
  67. 67. Let’s consider these along the phase-space. Controlled vocabulary is a high-viscosity state of information as we carefully select among related labels. Taxonomy infuses concepts with some perceptual qualities in the form of semantic groupings.
  68. 68. Faceted classification is a berrypicking journey (in the Marcia Bates sense) all across the language-dominated phase-space. Content strategy (taken as information that has bound with it instructions for how to shape-shift, semantically and physically, across viewports and other context factors) walks a harmonious line between language and perception. NOTE: For more on berrypicking, see Bates, Marcia (1989). The design of browsing and berrypicking techniques for the online search interface. Accessed online: http://pages.gseis.ucla.edu/faculty/bates/berrypicking.html
  69. 69. Ontology is an information-behavior coupling. Actually, it’s a set of information-behavior couplings in which the relationships among the conceptual entities serve as the invariant structure.
  70. 70. Let’s consider another facet of language: where the concept falls on the concept spectrum, concrete to abstract. A concept is concrete if it has a physical referent, is spatially constrained, and for which it is easy to visualize context. The concept “spoon” is very concrete: we may easily picture a spoon as an object located in a place; we can easily visualize multiple contexts (a spoon in a bowl, in a drawer, stirring a pot). We’d locate the concept “spoon” in the still language-dominated region of the phase-space, but in a region of lower viscosity, and more perceptual influence. The concept “calculus” though is highly abstract. We can’t picture calculus as an object located in a place; we’d have trouble thinking of multiple contexts for calculus (unless we were mathematicians). It requires highly conceptual, highly viscous concentration to engage with the meaning of calculus.
  71. 71. Vannevar Bush, in the 1930s, built a machine called the Differential Analyzer. This machine mechanically performed calculus through physical movements of levers and rods and gears. It was said that, “Those who used the Analyzer acquired what [Bush] called a ‘mechanical calculus,’ an internalized knowledge of the machine…like a combination of motor memory and mathematical skill, learned directly from the machine.” Bush described how one user “did not understand [calculus] in any formal sense, he understood the fundamentals; he had it under his skin.” From Belinda Barnet’s history, Memory Machines: The Evolution of Hypertext.
  72. 72. Bush’s machine phase-shifts the abstraction of “knowing the meaning of calculus” from a high-viscosity act of conceptual concentration to a purely visceral understanding of visual and physical relationships. One may later try to describe or summarize this new understanding using words, but the meaning itself was already gleaned in a purely perceptual manner. In that perceptual coupling is where the meaning emerged.
  73. 73. Two other methods for phase-shifting abstract concepts include using metaphors and context priming. There is a lot written already about metaphors. Part of what makes abstract concepts abstract is that it’s hard to think of contexts for them. If we provide context ahead of or along with the concept, it essentially phase-shifts that concept to a lower region on the phase-space, requiring less active concentration and lower viscosity to engage. This type of shift is likely much less extreme than the Differential Analyzer example, but it can work to dial down the viscosity.
  74. 74. Let’s switch to perceptual texture facets. We don’t all need to become information visualization specialists, but we should recognize that it is edges, surfaces, and texture relationships that we detect as information, and we should consider these aspects and tune them to suit the information objects we design. Do we need to adjust surface/edge/texture qualities to show our information objects are… fixed vs. movable? overlapping vs. fused? NOTE: here are a few excellent references for surface/edge/texture design qualities: Semiology of Graphics, Jacques Bertin; Readings in Information Visualization, Stuart Card, Jock Mackinlay, Ben Shneiderman; Information Visualization: Perception for Design, Colin Ware; Visualization Analysis & Design, Tamara Munzner.
  75. 75. With respect to the wayfinding and place element of perception, this territory is huge and well-covered in past IAS talks and all the IA books out there.
  76. 76. The other aspect of visual information from the ecological psychology point of view is events. Objects have locomotion and physical transformation, and occlusion (objects overlap, are hidden, are revealed). As designers, we can add meaning through the types of events we instrument in our information structures. David Kirsh considers the meaningful events we can bring to external representations, things like rearrangement; and if we have many instances of the same representation, we can enact different events on each to explore alternatives. Karl Fast has a framework of epistemic interactions, or meaningful interactions we can do with visual representations, things like: chunking, cloning, collecting, composing, cutting, fragmenting, probing, rearranging, repicturing. All of these facets are design dials for meaningful events. Kirsh, D. (2010). Thinking with external representations. AI & Society. Fast, K. & Sedig, K. (2010). Interaction and the epistemic potential of digital libraries. International Journal on Digital Libraries.
  77. 77. Essentially, we can take advantage of the materiality of diagrams (visualizations, images, even physical models), to enact (and maintain) meaningful events. It’s that maintain part that’s really important when we think about meaning as flow.
  78. 78. Next I want to talk about some factors that impact the permeability of the flow of meaning. In our case, permeability relates to how well the actor-observer can engage with and control the information-behavior coupling.
  79. 79. Some of these permeability factors are familiar things to us and are things we already worry about, but we need to frame them in terms of embodied cognition and meaning-as-flow.
  80. 80. We can have high permeability, where the flow of meaning is completely unobstructed, all the way down to completely obstructed, with no flow, meaning unable to emerge. Sabrina Golonka has three continuums of information that I’m framing here as permeability factors. The ability to detect structure: if the actor-observer has not yet learned to recognize invariant structure, there’s no chance to engage that structure. Coordinating behavior: if the actor-observer detects structure, but is unable to coordinate actions to engage appropriately with the structures, the flow can’t happen either. And it’s a continuum, because we can get better by degrees with experience over time. Structure persistence: if the information is needed to maintain a coupling over time, but is intermittent or vanishes, the coupling breaks and the flow of meaning stops. Permeability factors adapted from Sabrina Golonka’s Information Taxonomy: http://psychsciencenotes.blogspot.co.uk/2013/03/a-taxonomy-of-information.html
  81. 81. I would like to add another permeability factor: tolerance. How precise must our behavior be to maintain the coupling? If the coupling has a wide tolerance, it is forgiving if our actions veer a bit, or the information persistence wavers. If the coupling has a narrow tolerance, our actions and the information with which we are engaging must be much more precise, or the flow of meaning is interrupted or stopped. Tolerance is particularly helpful to consider when we think about the case of the distracted driver using a dashboard control display, or a tiny display on our arms or in our palms while we’re walking.
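The four permeability factors (detection, coordination, persistence, and tolerance) can be sketched as a simple gate on whether a coupling holds. Everything here, including the names, the 0-to-1 scales, and the way tolerance relaxes the other requirements, is an invented illustration:

```python
from dataclasses import dataclass

@dataclass
class Coupling:
    can_detect: bool     # actor has learned to pick up the invariant structure
    coordination: float  # 0..1 skill at acting on it (improves with practice)
    persistence: float   # 0..1 fraction of the time the information is present
    tolerance: float     # 0..1 how forgiving the coupling is of imprecision

def meaning_flows(c: Coupling) -> bool:
    """Meaning flows only if every factor clears its bar; a wide tolerance
    lowers the bar that coordination and persistence must clear."""
    if not c.can_detect:
        return False     # undetected structure can never be engaged
    required = 1.0 - c.tolerance   # wide tolerance -> low requirement
    return c.coordination >= required and c.persistence >= required

# The distracted driver: moderate skill and flickering attention still work,
# because the dashboard coupling is designed with a wide tolerance.
driver = Coupling(can_detect=True, coordination=0.6,
                  persistence=0.5, tolerance=0.5)
print(meaning_flows(driver))  # True
```

Narrow the tolerance and the same driver fails the gate, which is the design argument for forgiving couplings in high-distraction situations.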
  82. 82. When we look through the lens of embodied cognition, we recognize that our designed information structures participate directly in the information-behavior couplings that give rise to the flow of meaning. With our design choices, we can modify what it’s like to engage meaning with our structures. Because of pervasive digital overlays, we must consider the entire phase-space of information in our designs.
  83. 83. At core, we are tribal hunter-gatherer-poets that move in the world and understand with things. inForm image credit: MIT Media Lab http://tangible.media.mit.edu/project/inform/ meadow image credit: Nicholas Tonelli via flickr
