

Case Studies in Design Informatics 1 and Case Studies in Design Informatics 2

Jon Oberlander

Lecture 9: Affective Computing: Input recognition

http://www.inf.ed.ac.uk/teaching/courses/cdi1/


Course Timetable

Week | Topic | Mon | Wed | Thu | Submit 16:00 Thu
1 | SUI | Intro (JO) | | Wired for speech (JO) |
2 | SUI | Dialogue systems (JO) | Tutorial | Dialogue and error (CM) |
3 | SUI | Speech synthesis (MA) | Tutorial | Mymyradio (MA) |
4 | SUI | Talk, things & animals (JO) | Tutorial | <No class> | A1
5 | ADI | Student case 1 (TBC) | Tutorial | Student case 2 (TBC) |
6 | ADI | Student case 3 (TBC) | Tutorial | Student case 4 (TBC) | A2-draft
7 | ADI | Student case 5 (TBC) | Tutorial | <No class> |
8 | AC | Affective computing (JO) | Tutorial | Affective input (JO) | A2
9 | AC | Affective output (JO) | Tutorial | Affect in text (CL) |
10 | AC | Affect in eyes (RH) | Tutorial | Affective agents (CM) |
11 | | Reflection (JO) | (Tutorial) | | A3


Structure of lecture

1.  Affective recognition: an overview
–  Zeng et al. 2009
2.  Examples in human-computer interaction:
a)  Pollick et al. 2001
b)  Nasoz et al. 2004
c)  Kapoor et al. 2007
d)  Bailenson et al. 2008
3.  Social signal processing
–  Curhan and Pentland 2007


A survey

!  Zhihong Zeng, Maja Pantic, Glenn I. Roisman, and Thomas S. Huang. (2009)

!  A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions.

!  IEEE Transactions on Pattern Analysis and Machine Intelligence, 31.

!  Further reading:
–  Alessandro Vinciarelli, Maja Pantic, Hervé Bourlard (2009) Social signal processing: Survey of an emerging domain. Image and Vision Computing 27, 1743–1759.



The need for affect recognition

!  Ubiquitous computing needs to move beyond current HCI designs:
–  keyboard + mouse
–  transmission of explicit messages

!  “ignoring implicit information about the user, such as changes in the affective state.

!  Yet, a change in the user’s affective state is a fundamental component of human-human communication.

!  Some affective states motivate human actions, and others enrich the meaning of human communication.”

!  A system should be able to:
–  detect subtle changes in user (affective) behaviour
–  initiate interactions based on this (rather than waiting for commands)

Quoting Zeng et al. 2009

Applications

!  Customer services
!  Call centers
–  make an appropriate response or pass control over to human operators
!  Intelligent automobile systems
–  monitor the vigilance of the driver and apply an appropriate action to avoid accidents
!  Game and entertainment industries
!  Affect-related research
–  improve the quality of the research by improving the reliability of measurements and speeding up the currently tedious manual task of processing data
!  Personal wellness and assistive technologies
–  automated detectors of affective states and moods, including fatigue, depression, and anxiety

Quoting Zeng et al. 2009

State of the Art: a problem

!  Approaches that are trained and tested on a deliberately displayed series of exaggerated affective expressions,

!  Approaches that are aimed at recognition of a small number of prototypical (basic) expressions of emotion (i.e., happiness, sadness, anger, fear, surprise, and disgust), and

!  Single-modal approaches, where information processed by the computer system is limited to either face images or the speech signals.

Quoting Zeng et al. 2009

A problem because …

!  “deliberate behavior differs in visual appearance, audio profile, and timing from spontaneously occurring behavior.”

!  “spontaneous smiles are longer in the total duration, can have multiple apexes (multiple rises of the mouth corners), appear before or simultaneously with other facial actions such as the rise of the cheeks, and are slower in the onset and offset times than the posed smiles (e.g., a polite smile)”

!  “integrating the information from audio and video leads to an improved performance of affective behavior recognition”

!  “Current techniques for the detection and tracking of facial expressions are sensitive to head pose, clutter, and variations in lighting conditions, while current techniques for speech processing are sensitive to auditory noise.”

Quoting Zeng et al. 2009


Associating affect and signals

!  Message judgment: “what underlies a displayed behavior such as affect or personality”
–  e.g. anger
!  Sign judgment: “describe the appearance, rather than the meaning, of the shown behavior such as facial signal, body gesture, or speech rate”
–  e.g. FACS Action Units; speech prosody; non-linguistic vocalisations (laughs, cries, sighs, yawns)
!  Context-dependence:
–  is a smile polite, ironic, joyful, a greeting?

Quoting Zeng et al. 2009

Databases of spontaneous affect

!  Human-human conversation
–  interviews, phone conversations, meetings
!  HCI
–  Wizard of Oz, dialogue systems
!  Video kiosk
–  affective video reactions

Quoting Zeng et al. 2009

Labelling

!  Basic categories, dimensions, … and
!  application-specific categories: interest, boredom, confusion, frustration, fatigue, empathy, stress, irony, annoyance, amusement, helplessness, panic, shame, reprehension, and rebelliousness

Quoting Zeng et al. 2009

Vision-based techniques

!  Such as:
–  M.F. Valstar, H. Gunes, and M. Pantic, “How to Distinguish Posed from Spontaneous Smiles Using Geometric Features,” Proc. Ninth ACM Int’l Conf. Multimodal Interfaces (ICMI ’07), pp. 38-45, 2007.
–  M. Valstar, M. Pantic, Z. Ambadar, and J.F. Cohn, “Spontaneous versus Posed Facial Behavior: Automatic Analysis of Brow Actions,” Proc. Eighth Int’l Conf. Multimodal Interfaces (ICMI ’06), pp. 162-170, 2006.

!  … characterize temporal dynamics of facial actions and employ parameters like speed, intensity, duration, and the co-occurrence of facial muscle activations (a minimal sketch follows below)

!  geometry (shapes, locations) vs appearance (texture, such as wrinkles, furrows)

Quoting Zeng et al. 2009
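
To make the temporal-dynamics idea concrete, here is a minimal sketch, assuming a per-frame intensity track for a single facial action (say, AU12, the lip-corner puller of a smile) has already been extracted by a face tracker. The function name, activation threshold, and frame rate are illustrative assumptions, not taken from the papers above.

```python
import numpy as np

def smile_dynamics(intensity, fps=25.0, active=0.2):
    """Temporal-dynamics features for one facial-action track:
    total duration, apex count, and onset/offset speeds -- the kinds
    of parameters used to separate posed from spontaneous smiles."""
    x = np.asarray(intensity, dtype=float)
    on = np.flatnonzero(x > active)
    if on.size == 0:
        return None                      # action never activated
    start, end = on[0], on[-1]
    peak_idx = int(np.argmax(x))
    peak = x[peak_idx]
    # Apexes: local maxima near the global peak (spontaneous smiles
    # often show multiple rises of the mouth corners).
    apexes = np.flatnonzero(
        (x[1:-1] >= x[:-2]) & (x[1:-1] > x[2:]) & (x[1:-1] > 0.8 * peak)
    ) + 1
    onset_s = max((peak_idx - start) / fps, 1.0 / fps)
    offset_s = max((end - peak_idx) / fps, 1.0 / fps)
    return {"duration_s": (end - start + 1) / fps,
            "apex_count": int(apexes.size),
            "onset_speed": peak / onset_s,    # intensity units per second
            "offset_speed": peak / offset_s}
```

On features like these, spontaneous smiles should show longer total duration, more apexes, and slower onset and offset than posed ones.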


Vision-based techniques: State of the art

!  Methods have been proposed to detect attitudinal and nonbasic affective states such as confusion, boredom, agreement, fatigue, frustration, and pain from facial expressions.

!  Initial efforts were conducted to analyze and automatically discern posed (deliberate) facial displays from genuine (spontaneous) displays

!  First attempts are reported toward the vision-based analysis of spontaneous human behavior based on 3D face models, based on fusing the information from facial expressions and head gestures, and based on fusing the information from facial expressions and body gestures

Quoting Zeng et al. 2009

Vision-based techniques: State of the art

!  Few attempts have also been made toward the context-dependent interpretation of the observed facial behavior. Advanced techniques in feature extraction and classification have been applied and extended in this field.

!  A few real-time robust systems have been built thanks to the advance of relevant techniques such as real-time face detection and object tracking.

!  Most studies still “focus only on the analysis of facial gestures without taking into consideration other visual cues like head movements, gaze direction, and body gestures.”

Quoting Zeng et al. 2009

Audio-based techniques

!  Methods have been proposed to detect nonbasic affective states, including coarse [need less data] affective states such as negative and nonnegative states, application-dependent affective states, and nonlinguistic vocalizations like laughter and cry

!  A few efforts have been made to integrate para-linguistic features and linguistic features such as lexical, dialogic, and discourse features

!  Few investigations have been conducted to make use of contextual information to improve the affect recognition performance

!  Few reported studies have analyzed the affective states across languages

Quoting Zeng et al. 2009

Audio-based techniques: State of the art

!  Some studies have investigated the influence of ambiguity of human labeling on recognition performance and proposed measures of comparing human labelers and machine classifiers

!  Advanced techniques in feature extraction, classification, and natural language processing have been applied and extended in this field. Some studies have been tested on commercial call data

!  “we have to consider the multiple functions of prosody that include information about the expressed affect and a variety of linguistic functions”

Quoting Zeng et al. 2009


Multimodal techniques

!  Efforts have been reported to detect and interpret nonbasic genuine (spontaneous) affective displays in terms of coarse affective states such as positive and negative affective states, quadrants in the evaluation-activation space, and application-dependent states

!  Few studies have been reported on efforts to integrate other affective cues aside from the face and the prosody such as body and lexical features

!  Few attempts have been made to recognize affective displays in specific naturalistic settings (e.g., in a car) and in multiple languages

!  Various multimodal data fusion methods have been investigated. In particular, some advanced data fusion methods have been proposed, such as HMM-based fusion, NN-based fusion, and BN-based fusion

Quoting Zeng et al. 2009
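
The HMM-, NN-, and BN-based fusion models do not fit on a slide, but the decision-level (late) fusion baseline they build on does. A minimal sketch, assuming each modality-specific classifier already outputs a posterior over the same affect classes; the log-linear pooling rule and equal weights are illustrative choices, not the survey's prescription.

```python
import numpy as np

def late_fusion(posteriors, weights=None):
    """Decision-level fusion: combine per-modality class posteriors
    (e.g. face, prosody, lexical) with a weighted log-linear pool."""
    p = np.vstack(posteriors)                 # one row per modality
    w = np.ones(len(p)) if weights is None else np.asarray(weights)
    fused = np.exp(w @ np.log(p + 1e-12))     # weighted product of posteriors
    return fused / fused.sum()

# Toy usage with invented posteriors over (negative, positive):
face = np.array([0.3, 0.7])
prosody = np.array([0.6, 0.4])
print(late_fusion([face, prosody]))
```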

Four Examples

!  Articles describing aspects of affect recognition:
a)  Pollick et al. 2001
b)  Nasoz et al. 2004
c)  Kapoor et al. 2007
d)  Bailenson et al. 2008

Affect recognition from movement

!  Frank E. Pollick, Helena M. Paterson, Armin Bruderlin, Anthony J. Sanford (2001) Perceiving affect from arm movement. Cognition.

!  Visual perception of affect from point-light displays of arm movements

!  Two actors were instructed to perform drinking and knocking movements with ten different affects while the three-dimensional positions of their arms were recorded.

!  Point-light animations of these natural movements and phase-scrambled, upside-down versions of the same knocking movements were shown to participants who were asked to categorize the affect of the display.

Quoting Pollick et al. 2001

Pollick et al. results

!  For the natural movements the resulting two-dimensional psychological space was similar to a circumplex with the first dimension appearing as activation and the second dimension as pleasantness.

!  Dimension 1 of the psychological space was highly correlated to the kinematics of the movement.

!  These results suggest that the corresponding activation of perceived affect is a formless cue that relates directly to the movement kinematics while the pleasantness of the movement appears to be carried in the phase relations between the different limb segments.

Quoting Pollick et al. 2001


Pollick et al. results

Quoting Pollick et al. 2001

Pollick et al. results

Quoting Pollick et al. 2001

Affect recognition from physiology

!  Fatma Nasoz, Kaye Alvarez, Christine L. Lisetti, Neal Finkelstein (2004)

!  Emotion recognition from physiological signals using wireless sensors for presence technologies. Cogn Tech Work 6: 4–14

!  Developing computer systems that can recognise their users’ emotional states and adapt to them accordingly in order to enhance social presence.

!  Currently, we are working on recognising users’ emotions with non-invasive technologies measuring physiological signals of autonomic nervous system arousal (skin temperature, heart rate, and GSR), which are then mapped to their corresponding emotions

Quoting Nasoz et al. 2004

Nasoz et al. scenario

!  Showed film snippets, gathered user data using SenseWear armband (GSR, heart rate, and temperature; 29 participants)

!  “our algorithms recognised sadness with 67% (KNN), 78% (DFA), and 92% (MBP) accuracy rates, although only 56% of the participants reported that they experienced sadness.

!  Similar results were obtained for surprise with KNN, DFA, and MBP; and for anger and frustration with MBP.”

Quoting Nasoz et al. 2004
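
Of the three classifiers, KNN is simple enough to sketch. A hedged illustration using scikit-learn rather than the authors' implementation; the feature layout (one row of GSR, heart-rate, and skin-temperature summaries per film clip) and all the numbers are invented for the example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: per-clip summaries of (GSR, heart rate, skin temperature);
# labels are the emotions the clips were chosen to elicit.
X = np.array([[2.1, 72.0, 33.5],
              [4.8, 95.0, 31.2],
              [1.9, 70.0, 33.9],
              [5.1, 99.0, 30.8]])
y = ["sadness", "anger", "sadness", "anger"]

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.predict([[4.5, 93.0, 31.0]]))    # -> ['anger']
```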


Nasoz et al. results (larger is better)

Quoting Nasoz et al. 2004

Affect recognition from physiology and posture

!  Ashish Kapoor, Winslow Burleson, Rosalind W. Picard (2007) Automatic prediction of frustration. Int. J. Human-Computer Studies 65, 724–736.

!  Predicting when a person might be frustrated can provide an intelligent system with important information about when to initiate interaction.

!  For example, an automated Learning Companion or Intelligent Tutoring System might use this information to intervene, providing support to the learner who is likely to otherwise quit, while leaving engaged learners free to discover things without interruption.

!  This paper presents the first automated method that assesses, using multiple channels of affect-related information, whether a learner is about to click on a button saying “I’m frustrated.”

Quoting Kapoor et al. 2007

Kapoor et al. scenario

!  The user sits in front of a wide screen plasma display.
!  On the display appears an agent and 3D environment.
!  The user can interact with the agent and can attend to and manipulate objects and tasks in the environment.
!  The chair that the user sits in is instrumented with a high-density pressure sensor array.
!  The mouse detects applied pressure throughout its usage.
!  The user also wears a wireless skin conductance sensor on a wristband with two adhesive electrode patches on the hand.
!  Two cameras are in the system, a video camera for offline coding and the Blue-Eyes camera to record elements of facial expressions.

Quoting Kapoor et al. 2007


Kapoor et al. processing

Quoting Kapoor et al. 2007

Kapoor et al. processing

Quoting Kapoor et al. 2007

Kapoor et al. results

!  The new method was tested on data gathered from 24 participants using an automated Learning Companion.

!  Their indication of frustration was automatically predicted from the collected data with 79% accuracy (chance 58%).

!  The new assessment method is based on Gaussian process classification and Bayesian inference.

!  Its performance suggests that non-verbal channels carrying affective cues can help provide important information to a system for formulating a more intelligent response.

Quoting Kapoor et al. 2007
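
A minimal sketch of Gaussian process classification on multimodal summary features, using scikit-learn's generic GP classifier as a stand-in for the authors' own formulation (which also handled fusion across the channels); the feature layout and values are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Rows: per-episode summaries of the channels above (chair-pressure
# activity, mean mouse pressure, skin conductance, facial features).
X = np.array([[0.2, 1.1, 0.5, 0.9],
              [0.8, 3.2, 1.4, 0.1],
              [0.3, 1.0, 0.6, 0.8],
              [0.9, 3.5, 1.6, 0.2]])
y = np.array([0, 1, 0, 1])   # 1 = learner clicked "I'm frustrated"

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
gpc.fit(X, y)
print(gpc.predict_proba([[0.7, 3.0, 1.3, 0.3]]))  # [P(0), P(1)]
```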

Kapoor et al. results: best predictors (smaller is better)

Quoting Kapoor et al. 2007


Kapoor et al. results (larger is better)

Quoting Kapoor et al. 2007

Humanity recognition from head movements

!  Jeremy N. Bailenson, Nick Yee, Kayur Patel, Andrew C. Beall (2008) Detecting digital chameleons. Computers in Human Behavior 24, 66–87.

!  Conversations are characterized by an interactional synchrony between verbal and nonverbal behaviors … people who mimic the verbal … and nonverbal behaviors … gain social advantage

!  we examined how effective people were at explicitly detecting mimicking computer agents and the consequences of mimic detection in terms of social influence and interactional synchrony

!  “how will people react to an interactant when they know that interactant’s behavior is a direct mimic of their own?”

!  “convergence behavior, designed to increase social integration, actually backfires when it is detected.”

Quoting Bailenson et al. 2008

Bailenson et al. scenarios

1.  Pairs of interactants communicated with one another by pushing a button and seeing two indicators, one of which lit up when the other participant hit his or her button, and the other of which lit up according to a computer algorithm.
–  Participants attempted to determine which of the two buttons represented the actual human and rated their confidence in that decision.
2.  Participants entered an immersive virtual environment and listened to a persuasive message administered by an embodied agent.
–  We varied the veridicality of the mimic behavior.
–  As it verbally delivered a persuasive message, the agent either mirror-mimicked the participants’ head movements (left was left), congruently-mimicked their head movements (left was right), or axis-shift-mimicked their head movements, transformed onto a separate axis (left was up). A sketch of the three transforms follows below.

Quoting Bailenson et al. 2008
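
The three conditions differ only in how the participant's head rotation is mapped onto the agent's. A minimal sketch of that mapping, assuming head pose arrives as (yaw, pitch, roll) angles with agent and participant facing each other; the delay length and the sign conventions are assumptions, not values from the paper.

```python
from collections import deque

def make_mimic(condition, delay_frames=60):
    """Return a per-frame function mapping the participant's head
    rotation to the agent's, applied at a short delay so the copying
    is not instantaneous (delay length is an assumption)."""
    buf = deque([(0.0, 0.0, 0.0)] * delay_frames, maxlen=delay_frames)

    def step(yaw, pitch, roll):
        d_yaw, d_pitch, d_roll = buf[0]      # delayed pose
        buf.append((yaw, pitch, roll))
        if condition == "mirror":            # left was left, as in a mirror
            return (-d_yaw, d_pitch, -d_roll)
        if condition == "congruent":         # copied directly: left was right
            return (d_yaw, d_pitch, d_roll)
        if condition == "axis_shift":        # yaw moved to the pitch axis:
            return (d_pitch, d_yaw, d_roll)  # left was up
        raise ValueError(condition)

    return step
```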

Bailenson et al. scenarios

Quoting Bailenson et al. 2008


Bailenson et al. results 1

!  When the computer agent mimicked them, users were significantly worse than chance at identifying the other human. …

!  Mimickers were more likely to pass the Turing Test than actual humans!

Quoting Bailenson et al. 2008

Bailenson et al. results 1

Quoting Bailenson et al. 2008

Bailenson et al. results 2

!  Participants were more likely to detect mimicry in an agent that mirror-mimicked their head movements (three degrees of freedom) than agents that either congruently mimicked their behaviors or mimicked those movements on another rotational axis.

!  Participants were more likely to detect the mirror-mimic condition (80%) than either the congruent-mimic (44%) condition or the axis-switch condition (38%).

!  Explicit detection of the mimic caused the agent to be evaluated poorly in terms of trustworthiness and the warmth factors

Quoting Bailenson et al. 2008

Upshot

!  Sophisticated algorithmic mimicry may provide one means of achieving this personality adaptation automatically without having to parse and understand personality.

!  For example, mimicry of verbosity, greeting phrases, speech patterns, expressiveness, and intonation may provide a very good approximation for apparent personality for computer agents.

Quoting Bailenson et al. 2008


Social signal processing: an example

!  Jared R. Curhan and Alex Pentland. (2007) Thin Slices of Negotiation: Predicting Outcomes From Conversational Dynamics Within the First 5 Minutes. Journal of Applied Psychology, 92, 802–811.

!  “Across a wide range of studies, Ambady and Rosenthal (1992) found that observations lasting up to 5 minutes in duration predicted their criterion for accuracy with an average effect size (r) of .39.

!  This effect size corresponds to 70% accuracy in a binary decision task (by the binomial effect size display, r = .39 shifts a 50/50 split to .50 + .39/2, roughly .70).

!  It is astounding that observation of such a thin slice of behavior can predict important behavioral outcomes such as professional competence, criminal conviction, and divorce, when the predicted outcome is sometimes months or years in the future.”

Quoting Curhan and Pentland 2007

Measuring (conversational) social signals

!  Pentland (2004) constructed four measures of vocal quality and conversational interaction that could possibly serve as predictive social signals.

!  These four measures, … are designated activity, engagement, emphasis, and mirroring

Quoting Curhan and Pentland 2007

Activity

!  Our simplest measure is activity, which is the fraction of time a person is speaking.

!  In studies involving competitive settings, such as a negotiation, speaking time was positively correlated with dominance over the outcome

!  Hypothesis 1: An individual’s activity level during the first 5 minutes of the negotiation will be positively correlated with his or her own individual outcome.

Quoting Curhan and Pentland 2007
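
A minimal sketch, assuming a per-frame voice-activity decision is already available for the speaker:

```python
import numpy as np

def activity(speaking):
    """Fraction of time a person is speaking, given one boolean
    voice-activity decision per frame."""
    return float(np.asarray(speaking, dtype=bool).mean())

print(activity([True, True, False, False]))  # 0.5
```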

Engagement

!  Engagement is measured by the influence that one person has on the other’s conversational turn taking.

!  By quantifying the conditional probability of Person A’s current state (speaking vs. not speaking) given Person B’s previous state, we obtain a measure of Person B’s engagement (i.e., Person B’s influence over the turn-taking behavior).

!  Influence over conversational turn taking is popularly associated with good social skills or higher social status

!  Hypothesis 2: An individual’s level of engagement during the first 5 minutes of the negotiation will be positively correlated with his or her own individual outcome.

Quoting Curhan and Pentland 2007
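
A minimal sketch of that conditional probability, again over per-frame speaking tracks; summarising influence as the departure from A's base rate is an illustrative choice, as the estimator in Pentland's system is more involved.

```python
import numpy as np

def engagement(a_speaking, b_speaking):
    """Person B's influence over A's turn taking:
    P(A speaks at t | B spoke at t-1) minus A's base rate."""
    a = np.asarray(a_speaking, dtype=bool)[1:]        # A at time t
    b_prev = np.asarray(b_speaking, dtype=bool)[:-1]  # B at time t-1
    p_cond = a[b_prev].mean() if b_prev.any() else a.mean()
    return float(p_cond - a.mean())
```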


Emphasis

!  Emphasis is measured by variation in speech prosody, specifically variation in pitch and volume.

!  Hypothesis 3: An individual’s level of emphasis during the first 5 minutes of the negotiation will be negatively correlated with his or her own individual outcome but positively correlated with the counterpart’s individual outcome.

Quoting Curhan and Pentland 2007
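
A minimal sketch, assuming per-frame pitch (F0 in Hz, NaN on unvoiced frames) and energy tracks have been extracted; using the standard deviation as the summary of "variation" is an assumption.

```python
import numpy as np

def emphasis(f0, energy):
    """Prosodic variation: spread of pitch and of loudness
    across a speaker's frames."""
    return {"pitch_sd": float(np.nanstd(np.asarray(f0, dtype=float))),
            "volume_sd": float(np.nanstd(np.asarray(energy, dtype=float)))}
```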

Mirroring

!  When the observable behavior of one individual is mimicked or “mirrored” by another, this could signal empathy, which has been shown to positively influence the smoothness of an interaction as well as mutual liking

!  e.g. when waitresses mimicked the speech of their customers, they received higher tips than when they did not mimic their customers’ speech

!  We treated the occurrence of short back-and-forth exchanges (i.e., reciprocated short utterances) as a proxy for vocal mimicry, which we call mirroring

!  Hypothesis 4: An individual’s frequency of mirroring during the first 5 minutes of the negotiation will be positively correlated with his or her own individual outcome.

Quoting Curhan and Pentland 2007
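
A minimal sketch of the proxy over per-frame speaking tracks; the one-second cutoffs for a "short" utterance and for counting a reply are assumptions, since the paper defines mirroring only as reciprocated short utterances.

```python
import numpy as np

def mirroring(a_speaking, b_speaking, fps=100.0, max_utt_s=1.0):
    """Count short utterances by A that B answers with a short
    utterance within one second (one direction shown for brevity)."""
    def short_utts(track):
        t = np.asarray(track, dtype=bool).astype(int)
        edges = np.flatnonzero(np.diff(np.r_[0, t, 0]))  # starts and ends
        return [(s, e) for s, e in zip(edges[::2], edges[1::2])
                if (e - s) / fps <= max_utt_s]

    b_starts = [s for s, _ in short_utts(b_speaking)]
    return sum(any(e <= bs <= e + fps for bs in b_starts)  # reply <= 1 s
               for _, e in short_utts(a_speaking))
```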

Curhan and Pentland Study

!  The task was an employment negotiation between a candidate (middle manager) and a recruiter (vice president) concerning the candidate’s compensation package.

!  Participants were randomly formed into 56 same-sex dyads, with one member of each dyad randomly assigned the role of middle manager and the other assigned the role of vice president.

Quoting Curhan and Pentland 2007

Curhan and Pentland Results 1

!  Four conversational dynamics (or speech features) occurring within the first 5 minutes of a negotiation were highly predictive of subsequent individual outcomes.

!  In fact, the overall effect sizes demonstrated in this study (r = .60 for middle managers and r = .48 for vice presidents) were considerably higher than the average effect size from past thin slices research (r = .39, Ambady & Rosenthal, 1992).

Quoting Curhan and Pentland 2007


Curhan and Pentland Results 2

1.  Middle managers who spoke more tended to have vice president counterparts who earned better individual outcomes, as illustrated by the vice president partner effect (beta = .36, p < .05). However, the activity level of vice presidents was not associated with middle manager individual outcomes (beta = –.11, ns).
2.  No effect.
3.  As predicted by Hypothesis 3, prosodic emphasis during the first 5 minutes was negatively correlated with a participant’s own individual outcomes (beta = –.28, p < .05) and positively correlated with the individual outcomes of that participant’s counterpart (beta = .42, p < .01).
4.  Middle managers earned better individual outcomes when vocal mirroring was high (at the dyad level) in the first 5 minutes (beta = .30, p < .05). However, vice presidents’ individual outcomes were not related to mirroring (beta = –.08, ns).

Quoting Curhan and Pentland 2007

Curhan and Pentland Summary

!  Proportion of speaking time was associated with individual outcomes for vice presidents but not for middle managers.

!  Conversely, vocal mirroring during the first 5 minutes benefited middle managers but not vice presidents.

!  The use of prosodic emphasis during the first 5 minutes appears to be a liability in negotiation, as it was associated with worse outcomes for oneself and better outcomes for one’s counterpart.

Quoting Curhan and Pentland 2007

Summary

!  Current work follows Pantic et al. in focus:
–  spontaneous, interactive, multimodal affect

!  But not all of it is about helping machines recognise and respond to human affect.

!  The area of social signal processing has as much emphasis on diagnosing human-human interaction as on human-computer interaction design.
–  Cf. “Reality Mining”
