My Ph.D. Defence



Slides for my Ph.D. defence, November 1, 2007, IT-University of Copenhagen



  1. Trust within Technology: Risk, Existential Trust, and Reflective Designs in Human-Computer Interaction. Mads Bødker. Ph.D. defence, IT-University of Copenhagen, November 1, 2007
  2. Outline
     • Conceptual work on the concept of trust - from trust in to trust within (existential trust)
     • “Backdrop”: the risk society and the pervasiveness/ubiquity of computers
     • Ties these concepts to HCI as a discipline that not only enables faster and more efficient interaction with machines, but also (increasingly) makes the world available to us
     • HCI increasingly defines the ways in which we encounter the world and the ways in which we think about our relations to technology
     • HCI can and should supplement its focus on the design of efficiency and transparency with a focus on designing for openness, engagement, and interpretation (Reflective HCI)
  3. • Contribution: making a different perspective on the concepts of trust and trusting in relation to technology possible
     • Opening a new door towards an increased sensitivity to links between design and trust
     • Partial, tentative conclusions and results...
     • Not the final word on the matter, nor a complete theoretical rework of trust and trusting, but an attempt to create new figurations of trust, enabling new possible ways to talk about trust and trusting in HCI
  4. Introduction
     • Big issues in HCI?
     • Critical HCI: how to reappraise some central assumptions in HCI: transparency, efficiency, experience, users, meaning, etc.
     • Asserting the importance of HCI
     • The cultural meaning of “the risky computer”
     • The technological sublime of a world pervaded with computer technologies: we are not spectators to an amazing world of technologies, we are intimately involved in it
     • Elusiveness, the invisible
     • Ineffable space, the unspeakable
  5. Risk society/culture
     • Risk society thesis applied to a world where computers are increasingly everywhere
     • Matters-of-fact / matters-of-concern
     • Computers and computer networks are implied in more and more aspects of our daily activities
     • Risk not as a problem of the social (hierarchical, rule-bound, Gesellschaft) but a perspective within culture (vertical, symbolic, value-based, Gemeinschaft): Risk Culture (Lash)
     • Reflective actors, knowledgeable agency - cognitive reflexivity
     • Aesthetic reflexivity: signifiers of technological risk (ranging from the annoying to the cataclysmic) abound...
  6. • Technological culture: technologies, to be effective, must withdraw from revealing themselves - they must take the form of obviousness
     • Modernist ideal of control - by having technology disappear into the fabric of the everyday, we gain increased control of our world
     • Disappearance as domestication
     • Technological risks contradict the “instrumental fidelity” of modernist assumptions about technology
  7. Engagements in Risk
     • How do we approach trust when we cannot rely solely on an unproblematic consumption of institutionally sanctioned security and expertise?
     • Attempts: democratization of risk assessment (in various forms by e.g. Wynne, Sclove, Fischer, Beck, etc.)
     • Inspired by Habermasian discourse ethics
     • Finding a communal basis for “reason” and for decisions
     • Consensus conferences, Lay/Participatory Technology Assessment (PTA)
  8. • Democratizing risk engagement does not necessarily confront the kinds of “excess” or cultural risk tropes (narratives, stories, images, sounds)
     • It does engage with well-representable sources of risk - “we, the lay public, are concerned over an increasing amount of surveillance carried out in the workplace, at home, on the move, etc...”
     • People such as Wynne and Fischer have problematized the way “lay perspectives” are moderated, tamed by ultimately being grounded in scientific rationality - decisions are made more with respect to expert arguments than to the biased “reason” of an anxious, distrusting public
  9. • Another way to understand risks: the problem of agency and identity risk (Brian Wynne, Timothy Melley, Mary Douglas)
     • Identity risks: rather than provable threats, risks can also be seen in the ways that the risk “subject” is performed, the way the subject is allowed to voice and utter concern and anxiety
     • “Agency panic”, the technological sublime
     • Risk as the pollution of categories (where does the technological end, where does the human begin...) - “matter out of place”
     • People’s stories, mass-mediated or popular depictions of risk in computers are not carriers of affective/aesthetic biases, but ways in which to make sense of an ineffable, computer-pervaded everyday...
     • How to engage with these forms of risk and distrust?
  10. HCI and trust
     • Not a mainstream concept in HCI
     • It has been proposed (e.g. by Shneiderman, Friedman et al.) that in order to foster a trusting relationship between humans and their technologies, we need to make them “open for inspection”, allowing users to supervise and oversee that technologies and the institutions that support them function correctly and consistently
     • Or, design technologies to be domesticated, to become tacit infrastructures
  11. • The notion of trust that is traditionally applied within HCI (and also CMC) is what we could call a human-social/relational prototypical form of trust
     • How can we trust in e-commerce sites, what cues do we need?
     • How can we trust the integrity of our technologies?
     • How can we trust that our computers perform correctly?
     • ...etc.
     • Security design within HCI proposes: design interfaces to maximize prudent user behaviour, design for inspection that still allows for optimal efficiency
     • Key question: how do we design interfaces that allow us to trust in computers?
  12. • Mimics the directional trust that we all know in some form or another from our daily undertakings
     • Works from a prototypical form of trust which argues that trust is a relation between two or more actors who have some form of bounded agency to act in a relatively unpredictable way (i.e. the other can potentially act inconsistently or in ways that are destructive to our goals)
     • That trust has a behavioural correlate - a causal relation between use and trust: “if a user is using, he or she is also trusting”
  13. • I argue that trust in HCI (in the broadest sense) can be about more than a unidirectional relation that begins with the human and is directed towards the technology. The ways in which we design our computer-technological environments can also be seen to potentially influence our experience of ourselves
     • HCI’s relevance can be grounded in certain cultural experiences
     • How are users (people) supposed to relate to technology, how are they supposed to understand themselves and their activities as actors in a technological world?
     • Key question: how can HCI perhaps begin to address the kinds of aesthetic, “excessive” forms of risk? How can the design of interactions provide people with the feeling that their own voices, their own concerns, are made possible?
  14. Existential Trust
     • Such a dialogue with computer-pervaded environments, so I argue, potentially makes possible another kind of trust:
     • Existential trust: trust as an epistemological category - a certain aspect of knowing
     • The trust we have in our own ability to know the world
     • Self-trust: how to trust our own ways of making meaning, our own sense-making, if scientific explanations and rationalities will often dominate or disregard “subjective” modes of interpretation
     • Trust within technology
  15. Reflective HCI/design
     • Critical Technical Practice: question the fundamental assumptions about the nature of interaction between people and technology and the role of designers in mediating that interaction
     • Interactions that invite reflection on the attitudes that underpin our ideas of technology and humanity
     • The interface and interaction as interesting sites of cultural expression and dialogue around social and cultural issues
     • For example: how can technologies begin to “appreciate” users’ interpretations of risk and the rich imaginaries that have grown up around them, rather than discount them as flawed forms of reasoning?
     • Engagement with the aesthetic, excessive aspects of risk culture, not the designing-out of risks
  16. HALT
     Fig. 1: Ghost graphics on PDA, image by DELCA project, IT-University of Copenhagen (DELCA, IT-University 2005)
     In the following I will not be concerned with the dynamic ecology of the project, but primarily with the narrative construction of the system and one of the DELCA ghosts in particular. Many of the ghosts in the DELCA project were “type cast” for the project, including the Butler, a conventional if somewhat arrogant way-finding assistant, Physical Joe, a grumpy “sarge” of a workout ghost who urged people to use the stairs rather than the escalators, or Printer Jan, who could be...
  17. • The relation to existential forms of trust, self-trust, and the ability to “dwell” in a world where computer technologies permeate most aspects of our lives
     • Possibility of voicing concerns over technology in subjective/aesthetic registers
     • Reflective interactions that encourage/demand participation in a dialogue around designs - they are open to interpretation and differing perspectives, subverting traditional hierarchies of user and designer/expert
     • Designs that focus attention on the effects and implications of technology
     • Interactions that take cultural concerns seriously and allow for “inspection” of these
  18. • An attempt to foresee and develop some conceptual insight into the moralizing and performative aspects of technology
     • A material ethics? Achterhuis argues that we should consider these aspects, even design for them sensibly
     • What Reflective HCI could potentially contribute to such a material ethics is the design of interactions that enable a disclosure of these efforts to perform users in specific ways
     • Enabling users to hold technologies accountable for the ways in which technologies narrate them...
  19. ...?