My Ph.D. Defence

Trust within Technology: Risk, Existential Trust, and Reflective Designs in Human-Computer Interaction
Mads Bødker
Ph.D. defence, IT-University of Copenhagen, November 1, 2007


DESCRIPTION

Slides for my Ph.D. defence, November 1, 2007, IT-University of Copenhagen

TRANSCRIPT

Page 1: My Ph.D. Defence

Trust within Technology: Risk, Existential Trust, and Reflective Designs in Human-Computer Interaction

Mads Bødker
Ph.D. defence, IT-University of Copenhagen
November 1, 2007

Page 2

Outline

• Conceptual work on the concept of trust - from trust in to trust within (existential trust)

• “Backdrop”: the risk society and the pervasiveness/ubiquity of computers

• Ties these concepts to HCI as a discipline that not only enables faster and more efficient interaction with machines, but also (increasingly) makes the world available to us

• HCI increasingly defines the ways in which we encounter the world and the ways in which we think about our relations to technology

• HCI can and should supplement its focus on the design of efficiency and transparency with a focus on designing for openness, engagement, and interpretation (Reflective HCI)

Page 3

• Contribution: making possible a different perspective on the concepts of trust and trusting in relation to technology

• Opening a new door towards an increased sensitivity to links between design and trust

• Partial, tentative conclusions and results...

• Not the final word on the matter, nor a complete theoretical rework of trust and trusting, but an attempt to create new figurations of trust, enabling new possible ways to talk about trust and trusting in HCI

Page 4

Introduction

• Big issues in HCI?

• Critical HCI: how to reappraise some central assumptions in HCI: transparency, efficiency, experience, users, meaning, etc.

• Asserting the importance of HCI

• The cultural meaning of “the risky computer”

• The technological sublime of a world pervaded with computer technologies: we are not spectators to an amazing world of technologies; we are intimately involved in it

• Elusiveness, the invisible

• Ineffable space, the unspeakable

Page 5

Risk society/culture

• Risk society thesis applied to a world where computers are increasingly everywhere

• Matters-of-fact / Matters-of-concern

• Computers and computer networks are implied in more and more aspects of our daily activities

• Risk not as a problem of the social (hierarchical, rule-bound, Gesellschaft) but as a perspective within culture (horizontal, symbolic, value-based, Gemeinschaft): Risk Culture (Lash)

• Reflective actors, knowledgeable agency - cognitive reflexivity

• Aesthetic reflexivity: Signifiers of technological risk (ranging from the annoying to the cataclysmic) abound...

Page 6
Page 7
Page 8
Page 9

• Technological culture: Technologies, to be effective, must withdraw from revealing themselves - they must take the form of obviousness.

• Modernist ideal of control - by having technology disappear into the fabric of the everyday, we gain an increased control of our world

• Disappearance as domestication

• Technological risks contradict the “instrumental fidelity” of modernist assumptions about technology

Page 10

Engagements in Risk

• How do we approach trust when we cannot rely solely on an unproblematic consumption of institutionally sanctioned security and expertise?

• Attempts: democratization of risk assessment (in various forms by e.g. Wynne, Sclove, Fischer, Beck, etc.)

• Inspired by Habermasian discourse ethics

• Finding a communal basis for “reason” and for decisions

• Consensus conferences, Lay/Participatory Technology Assessment (PTA)

Page 11

• Democratizing risk engagement does not necessarily confront the kinds of “excess” or cultural risk tropes (narratives, stories, images, sounds)

• It does engage with well-representable sources of risk - “we, the lay public, are concerned over an increasing amount of surveillance carried out in the work place, at home, on the move etc...”

• People such as Wynne and Fischer have problematized the way “lay perspectives” are moderated, tamed by ultimately being grounded in scientific rationality - decisions are made more with respect to expert arguments than to the biased “reason” of an anxious, distrusting public

Page 12

• Another way to understand risks: The problem of agency and identity risk: Brian Wynne, Timothy Melley, Mary Douglas

• Identity risks: rather than provable threats, risks can also be seen in the ways that the risk “subject” is performed, the way the subject is allowed to voice and utter concern and anxiety

• “Agency panic”, the technological sublime

• Risk as the pollution of categories (where does the technological end, where does the human begin...) - “matter out of place”

• People’s stories and mass-mediated or popular depictions of risk in computers are not carriers of affective/aesthetic biases, but ways in which to make sense of an ineffable, computer-pervaded everyday...

• How to engage in these forms of risk and distrust?

Page 13

HCI and trust

• Not a mainstream concept in HCI

• It has been proposed (e.g. by Shneiderman, Friedman et al.) that in order to foster a trusting relationship between humans and their technologies, we need to make them “open for inspection”, allowing users to supervise and oversee that technologies and the institutions that support them function correctly and consistently

• Or, design technologies to be domesticated, to become tacit infrastructures

Page 14

• The notion of trust that is traditionally applied within HCI (and also CMC) is what we could call a human-social/relational prototypical form of trust

• How can we trust in e-commerce sites, what cues do we need?

• How can we trust the integrity of our technologies?

• How can we trust that our computers perform correctly?

• ...etc.

• Security design within HCI proposes: design interfaces to maximize prudent user behaviour, design for inspection that still allows for optimal efficiency

• Key question: How do we design interfaces that allow us to trust in computers?

Page 15

• Mimics the directional trust that we all know in some form or another from our daily undertakings

• Works from a prototypical form of trust that argues that trust is a relation between two or more actors who have some form of bounded agency to act in a relatively unpredictable way (i.e. the other can potentially act inconsistently or in ways that are destructive to our goals)

• That trust has a behavioral correlate - a causal relation between use and trust: “if a user is using, he or she is also trusting”

Page 16

• I argue that trust in HCI (in the broadest sense) can be about more than a unidirectional relation that begins with the human and is directed towards the technology. The ways in which we design our computer technological environments can also be seen to potentially influence our experience of ourselves

• HCI’s relevance can be grounded in certain cultural experience

• How are users (people) supposed to relate to technology, how are they supposed to understand themselves and their activities as actors in a technological world?

• Key question: How can HCI perhaps begin to address the kinds of aesthetic, “excessive” forms of risk? How can the design of interactions provide people with the feeling that their own voices, their own concerns are made possible?

Page 17

Existential Trust

• Such a dialogue with computer-pervaded environments, so I argue, potentially makes possible another kind of trust:

• Existential trust, trust as an epistemological category - a certain aspect of knowing

• The trust we have in our own ability to know the world

• Self-trust: how to trust our own ways of making meaning, our own sense-making, when scientific explanations and rationalities often dominate or disregard “subjective” modes of interpretation

• Trust within technology

Page 18

Reflective HCI/design

• Critical Technical Practice: question the fundamental assumptions about the nature of interaction between people and technology and the role of designers in mediating that interaction.

• Interactions that invite reflection on the attitudes that underpin our ideas of technology and humanity

• The interface and interaction as interesting sites of cultural expression and dialogue around social and cultural issues

• For example: How can technologies begin to “appreciate” users’ interpretations of risk and the rich imaginaries that have grown up around them rather than discount them as flawed forms of reasoning?

• Engagement with the aesthetic, excessive aspects of risk culture, not the designing out of risks

Page 19


Fig. 1: Ghost graphics on PDA, image by DELCA project, IT-University of Copenhagen

In the following I will not be concerned with the dynamic ecology of the project, but will focus primarily on the narrative construction of the system and on one of the DELCA ghosts in particular. Many of the ghosts in the DELCA project were “type cast” for the project, including the Butler, a conventional if somewhat arrogant way-finding assistant; Physical Joe, a grumpy “sarge” of a workout ghost who urged people to use the stairs rather than the escalators; and Printer Jan, who could be persuaded to shuffle the print queue, given a reasonable amount of encouragement. However, among these ghosts with a relatively well-defined functionality, there were also ghosts that primarily served as part of the narrativization of the system. One of these was HALT. HALT is cast as a long-discarded AI of unknown origin who, after having been disconnected due to a rather worrying incident, now haunts the IT-University network. HALT was, of course, developed with reference to the malfunctioning HAL 9000 of Stanley Kubrick’s “2001: A Space Odyssey” fame. Its graphical representation is consistent with this reference, and its voice is a paraphrase of the original HAL 9000 voice by Douglas Rain, in a somewhat slower and slightly demented-sounding version. The primary reason for introducing a ghost like HALT into the system is, as mentioned, not to have it perform tasks or to be functionally assistive in any way. HALT has two functions. One is to advance the overall narrative experience of the system, and the other is to provide an incentive for user reflection on key concerns that would be likely to occur in an environment as closely surveilled as the one proposed in the DELCA project. Given the location tracking system and the navigational and person-finding features of the system, a good amount of

DELCA, IT-University 2005

HALT
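The thesis contains no code; as a purely hypothetical sketch, the kind of “type cast” functionality described above (Printer Jan, who shuffles the print queue only after a reasonable amount of encouragement) might be modelled as a simple agent with a persuasion threshold. The class name, method names, and threshold value are all invented for illustration and do not come from the DELCA project:

```python
class Ghost:
    """A minimal, hypothetical model of a DELCA-style 'ghost' agent.

    Persuasion is modelled as a counter compared against a threshold:
    the ghost only cooperates once it has been encouraged enough times.
    """

    def __init__(self, name, persuasion_threshold=3):
        self.name = name
        self.persuasion_threshold = persuasion_threshold
        self.encouragement = 0

    def encourage(self):
        # Each friendly nudge from the user raises the counter by one.
        self.encouragement += 1

    def will_shuffle_queue(self):
        # The ghost agrees to reorder the print queue only when
        # sufficiently encouraged.
        return self.encouragement >= self.persuasion_threshold


jan = Ghost("Printer Jan")
jan.encourage()
print(jan.will_shuffle_queue())  # one nudge is not enough
jan.encourage()
jan.encourage()
print(jan.will_shuffle_queue())  # three nudges cross the threshold
```

The point of the sketch is only that functional ghosts like Printer Jan embody a small, legible behavioural rule, whereas a ghost like HALT has no such rule and exists purely for narrative and reflective purposes.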

Page 20

• The relation to existential forms of trust, self-trust, and the ability to “dwell” in a world where computer technologies permeate most aspects of our lives

• Possibility of voicing concerns over technology in subjective/aesthetic registers

• Reflective interactions that encourage/demand participation in a dialogue around designs; they are open to interpretation and differing perspectives, subverting traditional hierarchies of user and designer/expert

• Designs that focus attention on the effects and implications of technology

• Interactions that take cultural concerns seriously and allow for “inspection” of these

Page 21

• An attempt to foresee and develop some conceptual insight into the moralizing and performative aspects of technology

• A material ethics? Achterhuis argues that we should consider these aspects, even design for them sensibly

• What Reflective HCI could potentially contribute to such a material ethics is the design of interactions that enable a disclosure of these efforts to perform users in specific ways

• Enabling users to hold technologies accountable for the ways in which technologies narrate them...

Page 22

...?