Developing the 2015 Boots Christmas TV Advert Using Manual Facial Coding




By using experienced facial coders to pretest two spoken narratives for the Boots 2015 Christmas TV ad in a second-by-second study, and by applying System 1 and System 2 responses to the content, those elements that could be combined to speak directly to viewers’ emotions were identified.

Developing the Boots Christmas TV ad using facial coding

By John Habershon, Momentum Research

How can research help to ensure a TV ad will provoke the desired emotional response from viewers? We carried out facial coding to test two spoken narratives for the Boots 2015 Christmas TV ad.

We wanted to know which scenes described in the narratives provoked an emotional response. We asked the respondents to listen to two ‘stories’ describing a Christmas scene, each lasting just over four minutes.

When analysing the video of respondents listening to the narratives, we recorded, second by second, which emotions each person was experiencing, and could pinpoint the specific scene (or even the single word) that triggered each emotion.
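To make the shape of that record concrete, here is a minimal sketch in Python of the kind of second-by-second log such coding produces. Everything in it (the names, the data layout, the word-timing lookup) is hypothetical; the article does not describe the team’s actual tooling.

```python
from dataclasses import dataclass

# The six emotion labels used in this project, per the article.
EMOTIONS = {
    "pleasure", "humour", "active_engagement",
    "puzzlement", "displeasure", "disengagement",
}

@dataclass
class Observation:
    """One coder judgement: what a respondent showed at a given second."""
    respondent_id: str
    second: int   # offset into the four-minute-plus narrative
    emotion: str  # one of EMOTIONS

    def __post_init__(self):
        if self.emotion not in EMOTIONS:
            raise ValueError(f"unknown emotion label: {self.emotion}")

def triggers(observations, words_by_second):
    """Pair each observation with the word on air at that moment.

    `words_by_second` is a hypothetical dict of second -> word derived
    from the narration audio. Returns (word, emotion) pairs, i.e. which
    word was being spoken when each emotion appeared.
    """
    return [
        (words_by_second.get(obs.second, "<silence>"), obs.emotion)
        for obs in observations
    ]
```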

The two audio files, descriptions of Christmas scenes narrated by an actor, were provided by Boots’ ad agency Mother.

The method used was adapted from our work with TV ads and trailers, where we capture responses each second as the audience views ads and programmes in an auditorium. When undertaken by experienced facial coders, it creates a forensic map of emotions, second by second, for the duration of the ad.

We created an animated graphic showing a moving highlight on each word as it was spoken. The emotions we measured were: pleasure; humour; active engagement; puzzlement; displeasure and disengagement. These emotions were represented by moving lines in the graphic and identified at precise moments in the ad.
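As an illustration of how such moving lines could be derived, the sketch below counts, for each second, how many respondents showed each emotion; one count series per emotion could then drive one line in the graphic. The input layout is assumed, not taken from the article.

```python
from collections import defaultdict

def emotion_traces(observations, duration_seconds):
    """Build one per-second count series per emotion.

    `observations` is an iterable of (respondent_id, second, emotion)
    records, mirroring the hypothetical log sketched earlier. In the
    result, traces[emotion][t] is the number of respondents showing
    that emotion at second t of the narration.
    """
    traces = defaultdict(lambda: [0] * duration_seconds)
    for _respondent, second, emotion in observations:
        if 0 <= second < duration_seconds:
            traces[emotion][second] += 1
    return dict(traces)
```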

Capturing these non-verbal responses was part of a comprehensive project led by Discovery Research to help develop the Boots Christmas TV ad using in-depth interviews, telephone interviews and group discussions.

In the discussions, we explored what is immediately attractive and what appeals after more careful consideration. In the visual mythology and traditions of Christmas, what captures the imagination of consumers in the target market? What are the images and words which evoke these meanings of Christmas?

To give some context to the facial coding responses, we wanted to know how our respondents felt about Christmas and how they defined it – a time of family, celebration, personal indulgence, or giving? Perhaps also a time of magic, fantasy and nostalgia? This data provided the client and the agency with detailed and rich context for the creative ideas. It also helped to explain the emotions identified by facial coding.

The narrative scripts contained a number of scenes, each described in the form of a word picture by the narrator. We had three objectives: Which words spark an emotional response? Which scenes carry the most emotional force? And which of the two routes is the most emotionally powerful? The results, together with the feedback on tastes and preferences from the discursive qualitative research, enabled Boots and its agency to make informed decisions on the content of the TV ad.
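To show how the second of those objectives might be answered from such data, here is a hypothetical scoring sketch: given per-second counts like those above and assumed scene boundaries, it totals the positive responses inside each scene’s time window. The article does not specify the actual analysis, so treat this only as an illustration.

```python
def rank_scenes(traces, scene_boundaries,
                positive=("pleasure", "humour", "active_engagement")):
    """Rank scenes by total positive response across respondents.

    `scene_boundaries` is a hypothetical list of (scene_name, start_s,
    end_s) tuples marking where each word picture begins and ends.
    A scene that moves many respondents for many seconds scores highest.
    """
    scores = []
    for name, start, end in scene_boundaries:
        score = sum(
            traces[e][t]
            for e in positive if e in traces
            for t in range(start, end)
        )
        scores.append((name, score))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```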

Facial coding is now firmly established in market research; however, it is usually done automatically, using webcam footage analysed by a computer which recognises facial expressions. Interpreting facial expressions by human coders has a number of advantages over computers. Broadly, these fall into four main categories.

Computer algorithms are usually programmed to recognise the seven ‘basic’ emotions specified by Paul Ekman: happiness, sadness, anger, surprise, disgust, fear and contempt. This list is restrictive and not very helpful when we want to measure consumers’ responses to advertising. For this project we targeted six subtle and sometimes mixed emotions, directly relevant to the aims of the client: pleasure, humour, active engagement, puzzlement, displeasure and disengagement. The list can vary according to the content; for some TV commercials, for example, we might include sadness or discomfort.

Within these specific emotional labels we can include subtle and mixed emotions which human coding is sensitive enough to pick up; a feeling of wonder or awe, for example, would come under ‘pleasure’. The category ‘active engagement’ is not, strictly speaking, a single emotion, but an important sign that the listener or viewer feels involved with what they are experiencing, as shown in body movement.

With high-quality footage at a high frame rate (unlike webcam footage) we can capture micro-expressions lasting as little as a fifth of a second. At 50 frames per second, for example, a fifth-of-a-second expression spans around ten frames; low-frame-rate webcam footage might capture only two or three. It is important to capture these involuntary, fleeting emotions from the System 1 brain. Observers in the course of market research interviews necessarily focus on the slower, ‘socially directed’ expressions from the System 2 brain (‘I like this and I’m showing it’).

Training in this type of facial coding involves experience gained through many hours of observing consumers and their non-verbal behaviour on video. Coders train their brains to focus on the areas of the face which give clues to the emotion being shown. Recognition can involve slow motion and careful analysis to confirm an impression, but mainly it relies on the coder’s own System 1 brain identifying emotions in real time.

In this case, recognising expressions of emotion while respondents were only listening, not viewing, presented a bigger challenge. As the respondents were not looking at a screen, determining disengagement simply by recording when someone looked away was not an option. Instead, we recorded whether the listener was showing signs of distraction.

Since we asked respondents to listen to narratives of more than four minutes (rather than, for example, 30-second radio ads), a greater concentration span was required of them. And unlike a 30-second radio ad, which might feature statements designed to amuse or grab attention, these scripts were subtle, aiming to evoke emotions by painting a word picture.

At some points we could see a respondent listening with greater intensity; this we term active engagement. It is a broad category, covering signs that the listener is energised by the narrative. When a person was particularly involved with a section, we could see a range of body movements: leaning towards the sound source, tilting the head to one side to listen, nodding in agreement. Under active engagement we also included emotions such as surprise (raised eyebrows).

We cannot know exactly how much attention a respondent is paying to the narrative. However, it is evident when the listener is distracted: focusing on another object, or actively looking around the room. We categorised this as disengagement. There were relatively few instances of it, so it was not a significant factor. More significant were the positive emotions the respondents showed. For pleasure we look for slightly upturned lips together with a widening of the eyes; pleasure also shows in a relaxed mouth (not compressed) and an open, relaxed body posture. Under pleasure we included specific subtle positive emotions such as wonder, inspiration and satisfaction.

When a section provokes a humorous response, we can see the smile on the lips combined with movement in the body (particularly the shoulders) and crinkled eyes, sometimes accompanied by an exhalation of air. It can be difficult to distinguish a smile of pleasure from a gentle chuckle when humour is involved. Energy in the body is the key, but we also rely on context: when a joke has been made and bodily energy is expended, together with a smile, then humour is present. When an ad is particularly successful, the humour is often followed by several seconds of pleasure or active engagement, as the person’s mood is lifted.

Puzzlement is easy to see, with the characteristic lowered and furrowed brow and a narrowing of the eyes. It is an important negative sign that a scene is not adequately described or does not make sense, though it can of course be a positive if the mystery is intentional and later resolved. When displeasure is prompted, we also see a lowered brow, but with the mouth more clearly turned down, along with a range of other actions such as a nose wrinkle.
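The coding cues described above can be restated as a rough lookup table, as in the sketch below. The human coders did not work mechanically like this (they judged expressions holistically, in context and in real time), so this is only an aide-memoire, with hypothetical cue names.

```python
# Each rule pairs a set of observable cues (hypothetical identifiers,
# paraphrasing the article's descriptions) with one of the six labels.
CUE_RULES = [
    ({"upturned_lips", "widened_eyes", "relaxed_open_posture"}, "pleasure"),
    ({"smile", "shoulder_movement", "crinkled_eyes", "exhalation"}, "humour"),
    ({"leaning_in", "head_tilt", "nodding", "raised_eyebrows"}, "active_engagement"),
    ({"lowered_brow", "furrowed_brow", "narrowed_eyes"}, "puzzlement"),
    ({"lowered_brow", "downturned_mouth", "nose_wrinkle"}, "displeasure"),
    ({"looking_away", "focus_on_other_object"}, "disengagement"),
]

def best_match(observed_cues):
    """Return the label whose cue set overlaps most with a set of
    observed cues, or None when nothing matches at all."""
    label, overlap = max(
        ((lbl, len(cues & observed_cues)) for cues, lbl in CUE_RULES),
        key=lambda pair: pair[1],
    )
    return label if overlap else None
```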

System 1 ad development frees us from the requirement of creating more finished concepts to test. The unconscious brain will respond to an image or a collection of words in a microsecond, whether it is in a carefully crafted mock-up, or simply shown on a piece of paper. When our respondents were listening to the story, they weren’t consciously evaluating the description, but responding almost instantly to the sound picture.

In this project we combined System 1 facial coding with the verbal element of the research to gain System 2 responses.

First, once we had compiled the System 1 data and revealed the pattern of emotional responses, the discursive qualitative research provided an account of why particular words sparked immediate responses. Second, through discussion and qualitative probing we were able to assess the power of the storyline.

This project demonstrates that identifying emotional responses by facial coding alone is not sufficient. The success of the ad also depends upon how meaningful these emotional fragments are within the story. A good example is the John Lewis 2011 ad ‘The Long Wait’, which features a number of emotional moments, but contains them within a compelling story and a satisfying ending.

By applying combined System 1 and 2 responses to the content, we aimed to gain a clearer understanding of what will make the story and the visual elements work for the target audience. We can thus help to identify those elements which can be combined to speak directly to viewers’ emotions.


Admap propagates thought leadership in brand communications and is published monthly in print. To subscribe visit www.warc.com/myadmap

This article was first published in Admap magazine January 2016 ©Warc www.warc.com/admap