TRANSMEDIA ANALYTICS User Engagement and Interaction Analytics in Transmedia Narratives
Transmedia Analytics Team - MediaLAB Amsterdam, the Netherlands Anne van Egmond, Sieta van Horck, Yannick Diezenberg, Geert Hagelaar,
Until now, producers of interactive documentaries have made use of the free tool Google Analytics to identify the levels of engagement of users with their creations. But this existing analytics tool is mainly useful for traditional websites. The data it captures is not fully suitable for the improvement of interactive documentaries and does not answer questions about how users navigate through the interface and make use of features like intro videos, help menus or autopilot narrative options. Or, more importantly, what story they construct; in other words, which narrative paths they choose through the database-structured narrative content. As a case study, the interactive documentary The Last Hijack, produced by Submarine, was analyzed. By recombining existing data (derived from Google Analytics) with custom captured data, and visualizing it in a readable manner, it becomes possible to answer the questions documentary makers would like to see answered. Subsequently, by translating these data into a visual language, interactive documentary makers are given concrete and actionable handles for improving their projects.
TRANSFORMING DATA INTO VISUAL LANGUAGE Data Visualizations as Knowledge Machines
Up until now, producers of interactive documentaries have made use of the free analytics tool Google Analytics to identify the levels of ‘engagement’ of users with their projects. But the information generated by this tool is not fully suitable for the improvement of interactive documentaries. The structure of a regular website is usually hierarchical and based on a homepage from which the user can navigate to multiple pages, while the structure of transmedia narratives is distributed differently. This makes it hard to apply similar measurements and concepts. Because of its focus on traditional websites, Google Analytics benchmarks success by (commercial) goals and the way user interaction contributes towards those goals. This is not sufficient for the specific content of interactive documentaries and the things producers want to know.
They would like, for example, to detect technical problems with watching the documentary: are there certain browsers, screen resolutions or devices that cause an imperfect interaction with the documentary, so they can specifically dive into this when it comes to improving their documentary? Knowing which videos are watched most, and in which order, is another example; it shows whether the placement of these elements in the interface can be improved. In the area of user acquisition, information about which referral sources drive people to the site could consequently lead to investing in certain business relationships. In other words: engagement (or ‘successful interaction’) is different for this specific kind of interactive documentary content. Producers do not need a measurement of commercial success, but instead would like to know how people interact with their creations in order to be able to improve them.
The specific kind of content of interactive documentaries calls for a redefinition of terms and a clear idea of what it is desirable that users do with this content. The project described in this paper aimed to overcome these limitations by building an analytics tool, in the form of an interactive dashboard, that is specifically designed for this type of database-structured media. It both captures and presents data in a way that is comprehensive and actionable for the different professionals and artists that are typically involved in a transmedia production. It is essential that the visualizations within the dashboard highlight relevant information and make problems with the documentary stand out, in order to overcome a ubiquitous limitation of Google Analytics: you need to spend a lot of time with it to understand what you need to look for. Bringing this together, the main question of this research paper focuses on how existing data from the Google Analytics API can be combined with new kinds of (custom captured) data and interrogated so that it generates actionable information for interactive documentary producers in the improvement of their creations. And, consequently, how these correlations can be visualized in such a way that they highlight specific problems.
The first section of this paper investigates the social and cultural context in which interactive documentary is expressed. We discuss how interactive documentary opens up a new range of possibilities for both users and documentary makers. Subsequently, we argue why it is necessary to develop new tools for the analysis of these new media objects.
Thereafter, we elaborate on the limitations of an already existing analysis tool, Google Analytics. Here we argue that Google Analytics is not fully suitable for the improvement of interactive documentaries, as this tool focuses on traditional websites with a hierarchical structure. Therefore, a redefinition of the term engagement in light of the interactive documentary is coined, for only then does it become possible to interrogate what engagement actually means for this kind of content.
Finally, using user data from Submarine's interactive documentary The Last Hijack, we show how we attempt to overcome these limitations. By combining already existing data from the Google Analytics API with custom captured data, we show what kind of new information this can produce. Thereafter, we discuss two visualizations (out of the twenty-one featured in the dashboard) in order to illustrate the design and thinking processes behind them.
1. Related Work
1.1 Database and Narrative
Hardy states that the narrative is ‘a primary act of mind transferred to art from life’ (12). Stories determine how we think and understand our lives and are fundamental to the way we arrange and organize human experience into a structured whole. The narrative is so deeply rooted and so familiar that it has become naturalised. We are no longer aware of the influence it has on the way we live our lives; storytelling engages with everything we do. According to Manovich this is grounded in the immense influence of the novel and cinema. He poses the narrative as the dominant paradigm for cultural expression in the era of modernism. But the current digital age strongly challenges this:
Many new media objects do not tell stories; they do not have a beginning or end; in fact, they do not
have any development, thematically, formally, or otherwise that would organise their elements into a
sequence. Instead, they are collections of individual items, with every item possessing the same
significance as any other (218).
Databases share the property of being structured in such a way that information is extremely searchable and can be navigated in different manners. Consequently, this provides a different experience than reading a book or watching a film. The database structure presents multiple perspectives on culture and is open to different interpretations and different paths for constructing a story. It does not necessarily show causalities and a chronological order on an explicit level, as a narrative does (Manovich 66). This brings about a changing role for both the user and (digital) storytellers. Interactive documentaries, as database-structured digital storyspaces, are treated as an expression of this new paradigm. Evidently, both users and makers of interactive documentary have new possibilities and difficulties to deal with.
1.2 Interactive documentary
Interactive documentary as a digital storyspace can be explained in terms of database-structured culture and consequently stands for big changes in how we experience stories. Compared to the traditional documentary as a (linear) narrative, interactive documentaries in their different forms have one important thing in common: the way they position the user. Interactive documentary effectuates an engaged and active audience that is able to choose how to receive the content (Nash 2012; Marles 2012; Gaudenzi 2013). One does not necessarily have to watch a story in a fixed linear order, but is instead able to choose one's own path of navigation, and in some cases even add content. Users of interactive documentaries are not passive viewers but active participants that have ‘control over content’ (Nash 200). They have the power to actually ‘do something’ “in order to fulfill the desire to know how the story will end, or to explore alternative storylines” (Gaudenzi 10).
Although Marles agrees with this active role for users, she challenges Manovich's ideas about the differences between narrative and database. Neither the database nor the narrative can give an adequate circumscription of digital media as a phenomenon (80). They should not be regarded as each other's opposites. Databases are characteristic of the contemporary digital age, but they should be considered as a narrative that leaves room for “known-knowns, known-unknowns, and unknown-unknowns” (82) that can exist within the same content space. The narrative has a more flexible framework when it is told through digital media. Marles defines interactive documentaries as stories with a ‘greater possibility space’ (82-84). They have flexible narrative spaces in which the user can navigate through the content and accordingly create meaning in a way that is simply not possible within a linear order (85).
Not only the role of the user has changed; the role of documentary producers has also been altered in a time when stories are told on digital platforms. The question for documentary makers is no longer what a documentary should mean, but how to organize the digital (database) storyspace in such a way that it can create different meanings through multiple paths. Makers no longer guide the users through a story but instead create spaces where they set the circumstances in which the individual can evolve into an experimenting, active user. With the lack of a dominant narrative voice, the meaning of the story strongly depends on both the organization of the digital storyspace by the maker and the way users navigate through this space (Gaudenzi 140). This also opens up a new range of possibilities for documentary makers in terms of feedback. The online distribution of interactive documentaries makes it possible for documentary makers to capture data about how users interact with their creations, so they can accordingly improve them.
1.3 Visualizing data
Data visualizations offer concrete handles for the transmission of knowledge. In order to transform the captured datasets into meaningful and actionable information for interactive documentary makers, the data is transformed into interactive visualizations. Various studies demonstrate the power of visualizations when it comes to transferring knowledge: a human's input channel capacity is greater when visual abilities are used (Koffka 1935; Kosslyn 1980; Bertin 1983; Tufte 1983; Miller 1994; Burkhard 2005). When transferring knowledge, interactive visualizations help to fascinate people, allow complex data to be represented and explored, and trigger new insights. Significant here is that not only is already existing data visualized in a new way, but these data points are complemented by custom captured data based upon user data derived from The Last Hijack Interactive. Thus data from the Google Analytics API are combined with custom captured data and, consequently, translated into a visual language. However, although aesthetics help draw the eye closer, a data visualization should not only be pretty to look at, but also contain valuable information. Owen claims: “the primary objective in data visualization is to gain insight into an information space” (23). Consequently, functionality comes first. By highlighting problem areas and possible best scenarios, the data visualizations become more readable and easier to understand. Fry also argues how this is helpful: “Whenever we analyse data, our goal is to highlight its features in order of their importance, reveal patterns, and simultaneously show features that exist across multiple dimensions” (1).
2. Limitations of Google Analytics
Before understanding why a new analytics tool is needed for analysing user data, it is important to comprehend why already existing analytics tools do not provide the preferred insights for interactive documentary producers. First, Google Analytics is mainly useful for traditional websites. These websites often have a hierarchical structure, in contrast to the structure of interactive documentaries, which often consist of one page featuring various videos through which the user can freely navigate. Therefore, the structure of an interactive documentary is better approached as a network structure. Furthermore, Google Analytics suggests that a website is successful when it generates a lot of revenue: visitors are considered ‘engaged’ when they generate actual revenue. Google Analytics is thus focused on commercial content. However, this is not the right model for analyzing user data from interactive documentaries. Evidently, the definition of engagement is highly content specific.
Overcoming limitations: correlating existing data
By combining this existing data it does become possible to provide actionable handles for the improvement of interactive documentaries. Google Analytics does capture, for example, that people drop off and after what time they do so, but it remains unclear what caused this. Correlating this information with, for example, the type of browser could indicate a possible cause: when a great number of people using a specific browser leave your documentary very quickly, this could indicate technical problems with playing the documentary in that browser. This is just one specific example, but many more correlations can be helpful, such as data about user behavior, but also technical aspects and audience acquisition.
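As a minimal sketch of the browser correlation described above, the snippet below computes, per browser, the share of sessions that end within the first seconds of a visit. The session records and the 30-second threshold are illustrative assumptions, not data or values from the project:

```python
from collections import defaultdict

# Hypothetical session records, as they might come back from the
# Google Analytics API: browser used, and seconds on site before leaving.
sessions = [
    {"browser": "Chrome",  "seconds_on_site": 340},
    {"browser": "Chrome",  "seconds_on_site": 510},
    {"browser": "Safari",  "seconds_on_site": 12},
    {"browser": "Safari",  "seconds_on_site": 8},
    {"browser": "Safari",  "seconds_on_site": 420},
    {"browser": "Firefox", "seconds_on_site": 290},
]

EARLY_EXIT = 30  # sessions shorter than 30 s count as a quick drop-off

def early_exit_rate_by_browser(sessions):
    """Share of sessions per browser that end within EARLY_EXIT seconds."""
    totals, early = defaultdict(int), defaultdict(int)
    for s in sessions:
        totals[s["browser"]] += 1
        if s["seconds_on_site"] < EARLY_EXIT:
            early[s["browser"]] += 1
    return {b: early[b] / totals[b] for b in totals}

rates = early_exit_rate_by_browser(sessions)
```

A browser whose early-exit rate is markedly higher than the others (here Safari, with 2 out of 3 sessions) is exactly the kind of signal that could point at playback problems in that browser.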
Overcoming limitations: custom capturing
But these handles cannot be provided by just recombining data within the Google API. Therefore customized data capturing was implemented and correlated with existing data, making it possible to answer the specific questions documentary makers have. In order to distinguish itself from the standard Google Analytics tool and to put the data into context, thus making it actionable, the prototype makes use of custom reports, user segmentation and filtering, which are detailed below. These custom reports were developed in a way that allows for correlations and the identification of problems. Here we distinguish between two types of data, dimensions and metrics, which we elaborate on in the next section. The custom dimensions and metrics are described below. The custom capturing is based upon The Last Hijack Interactive, so some dimensions and metrics are very specific to this project. However, some dimensions and metrics are also generalizable to other interactive documentaries.
First, possible combinations were thought through and written down, to afterwards test them in Google's Query Explorer: an interactive tool to execute Core Reporting API queries without actually coding them. It allows one to play with the Core Reporting API by building queries, to test whether it is possible to extract the demanded data from Google Analytics.
The Query Explorer includes different parameters. The ‘metrics’ parameter can be identified as the aggregated statistics for user activity on your site, such as clicks or pageviews, which are always returned as numbers. The ‘dimensions’ parameter breaks down metrics by common criteria. A dimension should be considered as a characteristic of something you measure. This could be ‘ga:browser’ or ‘ga:city’, in order to break down, for example, the pageviews of your site, which is more interesting than just seeing numbers. When dimensions are requested, values are segmented by dimension value. If a query has no dimensions parameter, the returned metrics provide aggregate values for the requested date range, such as overall pageviews.
Any request must supply at least one metric (with a maximum of ten metrics), and a request cannot consist only of dimensions. The difficulty here is that a metric can be used in combination with other dimensions or metrics, but only where valid combinations apply for that metric. With the ‘segment’ parameter you can specify a subset of visits; the subset of visits is matched before dimensions and metrics are calculated. With ‘filters’ you specify a subset of all data matched in Analytics, for example ‘ga:country==Canada’. Finally, the ‘sort’ parameter determines the order and direction in which you want to retrieve the results, based on multiple dimensions and metrics.
The Google Query Explorer also allows the custom captured data to be correlated with already existing data from Google Analytics. It thereby becomes possible to explore new and meaningful correlations between already existing data and custom captured data.
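For illustration, the parameters described above can be assembled into a Core Reporting API (v3) request URL. The profile id, dates and filter values below are placeholders, and a real request would additionally need OAuth credentials:

```python
from urllib.parse import urlencode

# Sketch of a Core Reporting API (v3) query, mirroring the Query
# Explorer parameters discussed above. All values are placeholders.
params = {
    "ids": "ga:XXXXXXXX",                   # placeholder profile (view) id
    "start-date": "2014-01-01",
    "end-date": "2014-06-30",
    "metrics": "ga:pageviews,ga:sessions",  # at least 1, at most 10 metrics
    "dimensions": "ga:browser",             # break the metrics down per browser
    "filters": "ga:country==Canada",        # optional subset of all data
    "sort": "-ga:pageviews",                # descending by pageviews
}

url = "https://www.googleapis.com/analytics/v3/data/ga?" + urlencode(params)
```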
All working correlations (and, accordingly, the visualizations created for this prototype) are categorized into four specific areas for investigating interactive documentaries. Namely: ‘Content Navigation’, where the main question is how users are exploring the content. Within ‘Interface’, the focus is on whether elements in the interface are placed in such a way that users can find the basic navigational features they need. ‘User Acquisition’ aims at providing insights into where users are coming from and how this affects the way they engage with the content. And lastly, ‘Technical’ is centered around the question whether one can identify technical problems with watching the documentary.
3. Case study ‘The Last Hijack Interactive’ (2014)
A documentary about Somali piracy, The Last Hijack, was also the case study within the Transmedia Analytics project. The Last Hijack Interactive is an online transmedia experience in which film, footage and animation are combined. The central story is about a ship that underwent a hijacking that lasted for one and a half months. The documentary highlights the experiences of both the pirates and the crew (Submarine). The documentary always starts with an introduction narrative. The active role of the users immediately becomes clear; they can choose to either fully watch or skip this intro. The main page consists of a timeline with videos, where every dot represents a video. The user can freely navigate through these videos, read additional information and switch between different perspectives on the story, but can also choose to follow the story within a predefined path, the so-called autopilot (Submarine). The full URL to the documentary can be found in the bibliography.
3.1 Redefining engagement
We take as a reference Gaudenzi's taxonomy of interactive documentary genres (2009). She distinguishes four types of interactive documentaries based on their ‘modes of interaction’, which indicate different levels of and possibilities for interaction: the conversational mode, the experiential mode, the participatory mode and the hyperlink mode. We will not go into a detailed description of all of them, but instead dive into the specific mode that concerns The Last Hijack, namely the ‘hyperlink mode’.
The hyperlink mode can best be described as a closed video database. The user has an explorative role in the sense that they can navigate through the database by clicking on the available options. This kind of structure encourages the user not to watch the film in a linear, fixed way, but to choose their own path through the story. The closed nature makes it a convenient form for the author to retain control, while leaving the user to decide how he wants to receive the story. This control on both sides makes this form of interactive documentary the most commonly used.
The Last Hijack consists of one page, but there is the possibility to navigate through a timeline below the film and read additional information while watching it. In essence, however, interactivity is quite limited; the user can navigate through the structure but cannot create it. The user constructs an individual 'story' that consists of the segments selected during the navigation process. The larger the database, the greater the chance that a unique experience of the story is given, or in other words that a unique navigational path through the story has been chosen (Lister et al 22). Based on the aim of hyperlinked content, namely navigating through it, the indicators for an engaged user are the following:
1. A high number of switches between different videos indicates that a user is actively exploring the content, which corresponds to the aim of the content within the ‘hyperlink mode’. Therefore we define the first level of engagement as a high number of different videos watched.
2. The second level indicates a stronger engagement, namely that the user has watched a high number of different videos AND has watched a relatively high percentage of the content of those videos (the difference between leaving a video halfway and watching it until the end).
3. The most intensive level of engagement for the kind of content of The Last Hijack is the combination of watching a high number of different videos, watching a relatively high percentage of the total content of those videos AND switching a lot between perspectives.
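These three levels can be sketched as a simple classifier. The thresholds below are illustrative assumptions chosen for demonstration only; the project does not specify concrete cut-off values:

```python
def engagement_level(videos_watched, avg_pct_watched, perspective_switches,
                     min_videos=8, min_pct=60, min_switches=3):
    """Map a session onto the three engagement levels defined above.

    The threshold defaults are hypothetical; in practice they would
    be tuned per documentary.
    """
    level = 0
    if videos_watched >= min_videos:
        level = 1                      # level 1: many different videos watched
        if avg_pct_watched >= min_pct:
            level = 2                  # level 2: ...also watched largely to the end
            if perspective_switches >= min_switches:
                level = 3              # level 3: ...also switching perspectives a lot
    return level
```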
3.2 Custom dimensions
Dimension 1 - intro state Measures whether visitors fully watch the intro, skip the intro, or leave the site while watching the intro.
Dimension 2 - time spent on intro (string, in seconds and milliseconds)
Measures how much time is spent on the intro before deciding to either skip it or leave the website.
This could indicate that a certain moment in the intro encourages or discourages people from further exploring the content. It could also indicate a certain tension curve: the moment users get ‘bored’ after a certain number of seconds, regardless of the content.
Dimension 3 - first video watched Captures which video is the first video that is watched when entering the website. This dimension is disabled in the capturing because the first video watched is always the same for The Last Hijack; it wouldn't produce new information.
Dimension 4 - last video watched Captures which video is the last video watched before leaving the website. This could indicate whether certain videos are causing people to leave. Combining this with, for example, the average loading time and browser could suggest that people leave after this video because of loading problems in a certain browser, which documentary makers should then dive into.
Dimension 5 - first clicked element Captures which element is first clicked on when entering the website.
Dimension 6 - seconds before first click
Captures how many seconds pass before the first click.
Dimension 7 - viewport width Captures which viewport width (in pixels) is being used when watching the documentary. This helps understand how much space the user has available (in width) for watching the documentary, without toolbars or clutter.
Dimension 8 - viewport height Captures which viewport height (in pixels) is being used when watching the documentary. This helps understand how much space the user has available (in height) for watching the documentary, without toolbars or clutter.
Dimensions 9 to 12 are used as metrics
Dimension 9 - "ga('event', 'hover', 'metric', 'totalinvisit', 0)"
Captures how many hovers are triggered in the total visit. Note that this dimension is not used within the final prototype because it was disabled: capturing this kind of data is very intensive and slowed the interactive documentary down.
Dimension 10 - "ga('event', 'hover', 'metric', 'persecondinvisit', 0)"
Captures how many hovers are triggered per second. This dimension was disabled, and is not used within the final prototype, for the same reason.
Dimension 11 - "ga('event', 'mouse', 'metric', 'distanceinvisit', 67)"
Captures mouse distance in visit, calculated as the number of pixels travelled with the mouse pointer within a session. High mouse nervousness combined with a shorter session duration could indicate a confused or impatient user.
Dimension 12 - "ga('event', 'mouse', 'metric', 'nervousness', 6)"
Captures pixels travelled per second in visit. The user interacts with the content through the mouse; therefore a low mouse distance and a higher session duration could indicate low engagement with the interactive function of the project. Few pixels travelled combined with a long session duration might indicate a user that ‘leans back’ and uses the autopilot, while a high number of pixels travelled combined with a very short session duration could possibly indicate a confused user. A high number of pixels travelled combined with a long session duration might indicate a very engaged user.
Besides the custom dimensions, custom events are also captured and analyzed. Here we used three custom events.
Videopath Captures the videopath that people choose while visiting the website, per user. The captured data features, per video, how many seconds that video was watched. Example: videopathintro:57:1-S-A.mp4:100:2-W-A.mp4:46:3-S-A.mp4:4:6-1-S-A.mp4:53:6-1-W-A.mp4:6:6-3-W-A.mp4:5:6- Videos with an S in the name indicate a Somali perspective on the story, while a W indicates a Western perspective. This makes it possible to follow users individually through the narrative. Also, considering that we capture how much time is spent on which video per user, it now becomes possible to calculate the percentage of each video watched.
Elements clicked Captures how many times users clicked on the ‘help’ button and infographics.
Watched videos Captures how many times each video was watched.
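As an illustration, a videopath string can be parsed into (video, seconds watched) pairs, from which perspective switches can also be counted. The alternating name:seconds layout is inferred from the example above, not taken from the project's capturing code:

```python
def parse_videopath(path):
    """Split a captured videopath string into (video, seconds) pairs.

    Assumes the format illustrated in the text: alternating video name
    and seconds watched, separated by colons.
    """
    parts = path.split(":")
    return [(name, int(secs)) for name, secs in zip(parts[0::2], parts[1::2])]

# A short, hypothetical path in the assumed format.
path = "intro:57:1-S-A.mp4:100:2-W-A.mp4:46"
steps = parse_videopath(path)

# "-S-" marks the Somali perspective, "-W-" the Western one, so
# perspective switches can be counted from the sequence of videos.
perspectives = ["S" if "-S-" in v else "W"
                for v, _ in steps if "-S-" in v or "-W-" in v]
switches = sum(1 for a, b in zip(perspectives, perspectives[1:]) if a != b)
```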
4. Discussions and results
To illustrate how engagement is measured with regard to interactive documentary and how this specific definition is extrapolated, the visualization based upon video popularity is discussed. The second visualization reviewed gives insight into the way visualizations should be thought through when designing them as actionable pieces of information.
Visualization: popularity of a video
One of the things producers would like to know is which videos are most and least watched. There are several reasons for this. In the case of The Last Hijack the content gives two different perspectives on the story: the Somali and the Western perspective. In combination with additional information about the rise of Somali piracy, this specific way of structuring aims to provide ‘a unique view on a sinister world’ (Submarine). Therefore it is good to know how people navigate through the content, to see if this aim is achieved. As we discussed while redefining engagement, the goal of hyperlinked content is to set the circumstances for users to become active and exploring. Combining a list of all videos with the number of times each has been watched is useful for this. But this alone does not indicate whether a video is popular. It is positive when certain videos are chosen relatively often, as long as it is in balance with other videos. Dominant peaks and declines could indicate that one video is, for example, better placed in the interface than another. Combining this information with, for example, loadingTime or browser could also explain these peaks and declines.
Producers would also like to see that users watch many different videos in order to construct as complete a story as possible. Therefore it is necessary to define the popularity of a video not only by how many times it is watched and by how many different people, but also by how much of its content is watched. Without this, a video that is watched many times is automatically marked as popular, even when it is watched for only a few seconds on average. This could actually tell the opposite: the video is well placed and therefore clicked on a lot, but the actual content does not encourage further engagement with the video. Therefore a video is considered popular when it is both watched many times and a great part of its content is watched.
These considerations are translated into the data visualization shown in figure I. This chart can be used to indicate the engagement or popularity per video. The graph shows what your top content (video) is in terms of how many times it was played and what your top content is in terms of how much of the video is watched. The latter is expressed as a percentage of the content watched. The rationale for this is that if it were expressed in minutes, effective comparison would not be possible. A video with a total duration of two minutes that is watched for one minute on average would automatically appear more popular than a video with a total duration of one minute that is also watched for a minute on average, while this is not the case: the first video is watched for 50% on average, while the latter is watched for 100%. Because videos have different durations, their popularity cannot be compared in terms of time watched; therefore content watched is expressed in percentages. The specific correlation used here is: eventCategory= browser | eventAction= watched | sessions | % watched (event action=videoPath). Correlating data from the Google API with custom data effectively results in new and actionable information.
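The argument for percentages rather than minutes can be made concrete with a small worked example (the two videos are hypothetical):

```python
# video: (total duration in seconds, average seconds watched)
videos = {
    "long.mp4":  (120, 60),  # 2-minute video, watched 1 minute on average
    "short.mp4": (60, 60),   # 1-minute video, watched in full on average
}

# In minutes both videos look equally watched; as a percentage of
# their own duration, the shorter video is clearly the more engaging one.
pct_watched = {v: watched / total * 100
               for v, (total, watched) in videos.items()}
```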
The size of the ball indicates how many times a video is watched, while the position of the ball on the horizontal axis indicates what percentage of the video is watched on average. A multiselect option is added to filter out the engagement data of the videos that are played in autoplay, since autoplay induces an inflated number of video plays. With the autoplay plays filtered out, only the data that tells us about the level of engagement remains in the visualization. Circles within the more saturated red area could indicate something worth looking at, since this signifies videos that are watched relatively less fully.
Figure I: Popularity of a video
Visualization: last video watched
The process of visualizing the ‘last video watched’ graph raised questions about representing correlations in such a way that they provide actionable information. The visualization can be found in figure II. This graph shows which video is most often watched as the last video before leaving the site. This custom captured data adds new information to the existing data, yet it should be carefully thought through when visualizing, in order to yield valid information. Two important discussions were of relevance here.
The correlation used for this visualization is: lastVideo | sessions or event action==
The videos are visualized in the same manner as they are organised within the documentary: within a
categorised timeline where every dot represents a video. The more often a video ‘caused’ a ‘drop off’,
that is, was the last video watched before leaving the site, the more saturated red its dot is.
Figure II: Last video watched
However, it became apparent that when visitors choose an autoplay path, the last video in that path will
be marked as ‘bad’, as the cause of people leaving, while this does not actually indicate a problem. On
the contrary, this is one of your most successful videos: it stands for people watching the whole story
and leaving after the last video in the autoplay. Therefore it is possible to filter the information about
‘the last video watched’ before leaving on visit length. The question then becomes which videos most
likely cause people to leave in the first five minutes. Those videos are ‘problematic’ and are worth
diving into.
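A minimal sketch of this visit-length filter, using hypothetical session records and an assumed five-minute (300-second) threshold:

```python
# Sketch (hypothetical session records): count each video's drop-offs only
# for short visits, so viewers who finished a whole autoplay path are not
# counted against their final video.
sessions = [
    {"last_video": "intro",  "visit_seconds": 90},
    {"last_video": "intro",  "visit_seconds": 200},
    {"last_video": "finale", "visit_seconds": 2400},  # watched the whole story
]

def early_drop_offs(sessions, max_seconds=300):
    """Drop-off counts per last video, restricted to visits under max_seconds."""
    counts = {}
    for s in sessions:
        if s["visit_seconds"] < max_seconds:
            counts[s["last_video"]] = counts.get(s["last_video"], 0) + 1
    return counts

print(early_drop_offs(sessions))  # {'intro': 2}
```

Here the long session ending on the final video is excluded, so only genuinely early exits colour the timeline.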
Another discussion occurred: basing the saturation of red on absolute numbers is not valid. Stating
that a video is fifty times related to a drop off (in the sense that it was the last video watched before
leaving) automatically makes it appear more concerning than a video with ten drop offs. But this is not
always appropriate. When a video is watched fifty times as the last video before a drop off while it has
been watched five hundred times in total (a 10% drop-off rate), this is less of a problem than a video
that is watched ten times before a drop off while it is also watched only ten times in total (a 100%
drop-off rate). Therefore, numbers should be depicted as relative rather than absolute. Through this
thinking process the visualization has become an actionable piece of information.
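The relative measure described above reduces to a simple ratio; the worked example below reproduces the two cases from the text:

```python
# Sketch: saturate the timeline dots by relative drop-off rate (drop-offs
# divided by total plays) rather than by absolute drop-off counts.
def drop_off_rate(drop_offs, total_plays):
    """Share of plays that ended the visit, 0.0-1.0."""
    return drop_offs / total_plays if total_plays else 0.0

# Fifty drop-offs out of five hundred plays is less alarming than
# ten drop-offs out of ten plays:
print(drop_off_rate(50, 500))  # 0.1
print(drop_off_rate(10, 10))   # 1.0
```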
Interactive documentaries, as database-structured digital storyspaces, have presented both users
and makers of interactive documentaries with new possibilities and difficulties. The user has become
active: one does not necessarily have to watch the story in a fixed linear order, but is instead able to
choose one's own path of navigation. Furthermore, the online distribution of interactive
documentaries makes it possible for documentary makers to capture data about how users interact
with their creations. However, Google Analytics does not provide the preferred insights into these
questions, because it focuses on traditional websites. Moreover, because of this focus on traditional
websites, success is benchmarked by (commercial) goals and the way user interaction contributes
towards those goals. Therefore the term engagement is redefined for the specific kind of content that
interactive documentaries provide. Three levels of engagement are formed for analyzing user
interaction with content. These levels function as the foundation for recombining data from the
Google Analytics API with custom captured data. Although meaningful correlations between data are
now made, in order for them to provide insights into the questions documentary makers have, they
should be translated into a visual language. However, although aesthetics help draw the eye closer,
data visualization should not only be pretty to look at, but also contain valuable information.
Therefore, we focused on highlighting problem areas and best case scenarios. In this manner, it
becomes possible for interactive documentary producers to gain the preferred insights into their
questions and improve their documentary accordingly.
DISCUSSIONS & FURTHER RESEARCH
Although this research already gave meaningful insights into questions surrounding user engagement
within interactive documentaries, it is important to be aware that this project was based on a single
case study: The Last Hijack Interactive. Even though some correlations and visualizations are
generalizable to other interactive projects, some are very content specific. Therefore, in order for
these results to be fully generalizable, further research should be done.
BIBLIOGRAPHY
Bertin, Jacques. "Semiology of graphics: diagrams, networks, maps." (1983).
Burkhard, Remo Aslak. "Towards a framework and a model for knowledge visualization: Synergies
between information and knowledge visualization." Knowledge and Information Visualization.
Springer Berlin Heidelberg, 2005. 238-255.
Card, S.K., Mackinlay, J.D., & Shneiderman, B. (1999). Readings in Information Visualization: Using
Vision to Think. Los Altos, CA: Morgan Kaufmann.
Gaudenzi, Sandra. The Living Documentary: from representing reality to co-creating reality
in digital interactive documentary. Diss. 2013.
Koffka, K. (1935). The Principles of Gestalt Psychology. New York: Harcourt, Brace & World.
Kosslyn, S.M. (1980). Images and Mind. Cambridge, MA: Harvard University Press.
Lister, Martin, et al. New Media: A Critical Introduction. Routledge, 2008.
Marles, Janet Elizabeth. "Database Narratives, Possibility Spaces: Shape-shifting and interactivity in
documentary." (2012): 77-92.
Manovich, Lev. “Principles of New Media”. The language of new media. MIT press, 2001.
Manovich, L. (2011) ‘Trending: the promises and the challenges of big social data’, in Debates in the
Digital Humanities, ed. M. K. Gold, The University of Minnesota Press, Minneapolis, MN.
Miller, George A. "The magical number seven, plus or minus two: Some limits on our capacity for
processing information." Psychological review 101.2 (1994): 343.
Nash, Kate. "Modes of interactivity: analysing the webdoc." Media,
Culture & Society 34.2 (2012): 195-210.
Owen et al., “Definitions and Rationale for Visualization,” Apr. 2008;
Submarine. 2014. Web. 4 May 2014. <http://www.submarine.nl>.
The Last Hijack Interactive. Dir. Femke Wolting. Submarine. 2014.
Tufte, Edward R., and P. R. Graves-Morris. The visual display of quantitative information. Vol. 2.
Cheshire, CT: Graphics press, 1983.