panel: social tagging and folksonomies: indexing, retrieving... and beyond? - searching and...
TRANSCRIPT
Jacek Gwizdka
Department of Library and Information Science
Rutgers University
Sunday, Oct 09, 2011
Panel: Social Tagging and Folksonomies: Indexing, Retrieving…and Beyond?
Searching and browsing via tag clouds
CONTACT:
www.jsg.tel
Process of Tagging
• Users associate tags with web resources
• Tags serve social, structural, and semantic roles (a minimal data-structure sketch follows below)
– structural role: starting points for navigation; helping users to orient themselves
– semantic role: description of a set of associated resources
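As an illustration (not from the original slides), here is a minimal Python sketch of a folksonomy stored as user–tag–resource triples; the data and the 10–24 pt font-size mapping are invented for the example. The set of resources per tag reflects the semantic role; the frequency-weighted cloud reflects the structural role.

```python
from collections import Counter

# A folksonomy as (user, tag, resource) triples -- hypothetical example data.
triples = [
    ("alice", "retrieval", "http://example.org/paper1"),
    ("alice", "tagging",   "http://example.org/paper1"),
    ("bob",   "retrieval", "http://example.org/paper2"),
    ("bob",   "phylogeny", "http://example.org/paper3"),
]

# Semantic role: each tag describes the set of resources it is attached to.
resources_by_tag = {}
for user, tag, resource in triples:
    resources_by_tag.setdefault(tag, set()).add(resource)

# Structural role: tag frequencies can drive font sizes in a tag cloud,
# giving users visual starting points for navigation.
tag_counts = Counter(tag for _, tag, _ in triples)
max_count = max(tag_counts.values())
font_sizes = {tag: 10 + round(14 * count / max_count)   # 10-24 pt range
              for tag, count in tag_counts.items()}

print(resources_by_tag["retrieval"])  # resources described by the tag
print(font_sizes)                     # relative tag weights for the cloud
```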
Tag Clouds
My Claims
• Tag Clouds help in information search
– by saving searchers’ effort
• Tag Clouds do not support browsing tasks
– do not show relationships and do not show history
Not just claims…
Research Question
• Do tag clouds benefit users in search tasks?
User Interface with Overview Tag Cloud
Our retrieval system populated with data from delicious
[Screenshots: the List UI with a search result list, and the Overview Tag Cloud UI with a search result list plus a tag cloud]
User Actions in Two Interfaces
[Diagram: user action flow in the two interfaces (1. List, 2. Overview Tag Cloud) – from Start, the user adds or deletes tags to issue queries, views the search result list, clicks a result URL to view one result page, clicks the “Back” button to return, and clicks “Done” and enters an answer at End]
Experiment Design
• 37 participants
– Working memory assessed using a memory span task (Francis & Neath, 2003)
• Within-subject design with 2 factors: task and user interface
• Tasks
– everyday information search (e.g., travel, shopping) at two levels of task complexity
– Four task rotations for each of two user interfaces
Measures
• Task completion time
• Cognitive effort:
– from mouse clicks: user decisions expressed as selection of search terms (= number of queries) and opening documents to view
– from eye-tracking: reading effort measures (based on the intermediate reading model) – scanning vs. reading; length of reading sequences; reading fixation duration; number of regression fixations in a reading sequence; spacing of fixations in a reading sequence
• Task outcome = relevance × completeness (a minimal scoring sketch follows below)
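As a hedged illustration of how these measures could be derived from an interaction log (the event names, the log format, and the 0–1 scales below are assumptions, not the study’s actual instrumentation), a minimal Python sketch:

```python
# Minimal sketch: deriving the behavioral measures from a hypothetical
# interaction log. Event names and scales are illustrative assumptions.
events = [
    {"t": 0.0,   "type": "new_tag"},       # search term added -> new query
    {"t": 12.4,  "type": "click_result"},  # document opened for viewing
    {"t": 55.0,  "type": "new_tag"},
    {"t": 190.7, "type": "done"},          # answer entered
]

task_time = events[-1]["t"] - events[0]["t"]                  # completion time (s)
n_queries = sum(e["type"] in ("new_tag", "delete_tag") for e in events)
n_docs_viewed = sum(e["type"] == "click_result" for e in events)

# Task outcome = relevance * completeness (each assumed judged on a 0-1 scale).
relevance, completeness = 1.0, 0.5
task_outcome = relevance * completeness

print(task_time, n_queries, n_docs_viewed, task_outcome)
```

The eye-tracking measures (scanning vs. reading, regressions) are not reproduced here; a simplified sketch of the reading-model classification appears with the extra slides.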
Results
Results : Time and User Behavior
• Overview Tag Cloud + List made users faster and more efficient
– less time on task: 191s in Overview+List vs. 261s in List UI
– fewer queries: 7 in Overview+List vs. 8.3 in List UI
– no significant differences in task outcomes
Overview Tag Cloud facilitated formulation of more effective queries
Results: Cognitive Effort
• Overview Tag Cloud + List required less effort and higher efficiency
– fewer fixations (total and mean reading sequence length) – more efficient
– fewer regressions – less difficulty in reading
• Comparing only the results list region in the two UI conditions
– less effort invested in the results list in Overview Tag Cloud + List
Overview Tag Cloud helped to lower cognitive demands
[Charts: eye-tracking measures for the List vs. Overview Tag Cloud + List conditions]
Did Tag Cloud Help All Users?
No – there are individual differences
Two users, same UI and same task
Is Tag Cloud Helpful?
Yes! Overview Tag Cloud + List UI made people faster and required less effort
also reflected in a number of eye-tracking measures
Browsing large sets of tagged documents
An Example of Browsing (CiteULike)
• A typical model of browsing with tag clouds:
Pivot browsing: a lightweight navigation mechanism
Example tag path: 1. information → 2. retrieval → 3. algorithms → 4. phylogeny (a minimal sketch of this pivoting follows below)
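A minimal Python sketch of pivot browsing over an invented tagged collection: each tag click re-orients the view to the documents carrying that tag and rebuilds the tag cloud from that subset, with nothing carried over from the previous step.

```python
from collections import Counter

# Hypothetical tagged collection: document -> set of tags.
docs = {
    "d1": {"information", "retrieval", "algorithms"},
    "d2": {"information", "retrieval"},
    "d3": {"algorithms", "phylogeny"},
    "d4": {"phylogeny", "retrieval"},
}

def pivot(tag):
    """One pivot-browsing step: re-orient the view around a single tag."""
    subset = {d: tags for d, tags in docs.items() if tag in tags}
    cloud = Counter(t for tags in subset.values() for t in tags)
    return subset, cloud

# The example path: each click replaces the previous context entirely.
for step, tag in enumerate(["information", "retrieval", "algorithms", "phylogeny"], 1):
    subset, cloud = pivot(tag)
    print(step, tag, sorted(subset), dict(cloud))
```

Each iteration discards the previous subset and cloud, which is exactly the lack of continuity discussed next.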
Is There a Problem?
Users’ Conceptualizations (18 participants)
• The labyrinth … being lost
• The journey … switching direction and being stuck
• The space … increasing distance, and continuity
What’s the Problem?
• Users
– feel lost
– experience “switching”, yet expect some continuity
• In pivot browsing each step is treated as a separate move
– the view is “re-oriented”: a new list of documents along with their tags
– at each step the context is switched
– relationships between steps are not shown, e.g., overlap between tag clouds is not indicated
• Pivot browsing seems not to be lightweight
– conceptualizing multiple tags assigned in different quantities to different documents is difficult
Research Questions
• How can we support continuity in “tag-space” browsing?
• How can we promote better understanding of tag-document relationships (sensemaking)?
Recall: Example of Navigation (CiteULike)
1. information → 2. retrieval → 3. algorithms → 4. phylogeny
User Interface with “History tag clouds” (Tag Trails)
Supporting continuity in tag-space navigation by providing history
[Screenshot: history tag clouds – one cloud for each visited tag: information, retrieval, algorithms, phylogeny (a minimal sketch follows below)]
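A hedged sketch of the idea (reusing the pivot() function and docs collection from the earlier sketch; the structure of the trail is an assumption): keep the clouds of previously visited tags available alongside the current one.

```python
# Sketch of "history tag clouds": retain the cloud of every visited tag
# so the UI can show the whole trail, not only the latest re-oriented view.
history = []                      # list of (tag, cloud) pairs, oldest first

def pivot_with_history(tag):
    subset, cloud = pivot(tag)    # same re-orientation as plain pivot browsing
    history.append((tag, cloud))  # ...but the trail of past clouds is kept
    return subset

for tag in ["information", "retrieval", "algorithms", "phylogeny"]:
    pivot_with_history(tag)

for tag, cloud in history:        # render all clouds in the trail
    print(tag, dict(cloud))
```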
User Interface with Heat map (Tag Trails 2)
Supporting continuity in tag-space navigation by providing history and making (some) relationships (more) explicit
[Screenshot: UI with a tag cloud, a results list, and a heat map]
– column tags: most recently visited tags, from left to right
– row tags: a selection of the most frequent tags
– cells color-coded according to the tag’s document frequency (df)
(a minimal sketch of the heat-map matrix follows below)
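One plausible reading of the heat-map construction, as a minimal Python sketch (reusing the docs collection from the earlier sketch; how exactly the slide computes the per-cell df is an assumption): columns are the recently visited tags, rows are the most frequent tags, and each cell holds the row tag’s document frequency within the column tag’s document set, which a renderer would map to a color.

```python
from collections import Counter

visited = ["information", "retrieval", "algorithms", "phylogeny"]   # column tags
overall = Counter(t for tags in docs.values() for t in tags)
rows = [t for t, _ in overall.most_common(3)]                        # row tags

matrix = {}
for col in visited:
    col_docs = [tags for tags in docs.values() if col in tags]
    for row in rows:
        # df of the row tag within the column tag's result set
        matrix[(row, col)] = sum(row in tags for tags in col_docs)

for row in rows:
    print(row, [matrix[(row, col)] for col in visited])
```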
Summary & Conclusions
• Tagging – “metadata for free”: does the effort pay off?
• Yes, but not for all tasks
• Tag clouds
– helpful in search tasks
– but to support browsing, new presentations of tags are needed
Thank you! Questions?
Jacek Gwizdka | contact: http://jsg.tel
Related publications:
Gwizdka, J. (2009a). What a difference a tag cloud makes: Effects of tasks and cognitive abilities on search results interface use. Information Research, 14(4), paper 414. Available online at <http://informationr.net/ir/14-4/paper414.html>
Gwizdka, J. (2010c). Of kings, traffic signs and flowers: Exploring navigation of tagged documents. In Proceedings of Hypertext’2010 (pp. 167-172). ACM Press.
Gwizdka, J. & Bakelaar, P. (2009a). Tag trails: Navigating with context and history. CHI ’09 extended abstracts (pp. 4579-4584). ACM Press.
Gwizdka, J. & Bakelaar, P. (2009b). Navigating one million tags. Short paper and poster presented at ASIS&T’2009, Vancouver, BC, Canada.
Cole, M.J. & Gwizdka, J. (2008). Tagging semantics: Investigations with WordNet. Proceedings of JCDL’2008. ACM Press.
Gwizdka, J. & Cole, M.J. (2007). Finding it on Google, finding it on del.icio.us. In L. Kovács, N. Fuhr, & C. Meghini (Eds.), Lecture notes in computer science (LNCS): Vol. 4765. Research and advanced technology for digital libraries, ECDL’2007 (pp. 559-562). Springer-Verlag.
Extra Slides
• Intro to Reading model
• Tag cloud examples
Introducing Reading Model
• Scanning fixations provide some semantic information
– limited to the foveal visual field (1° visual acuity) (Rayner & Fischer, 1996)
• Reading fixation sequences provide more information than isolated “scanning” fixations
– information is gained from the larger parafoveal region (5° beyond the foveal focus; asymmetrical, in the direction of reading) (Rayner et al., 2003)
– some types of semantic information are available only through reading sequences
• We implemented the E-Z Reader reading model (Reichle et al., 2006)
– Lexical fixations: duration > 113 ms (Reingold & Rayner, 2006)
– Each lexical fixation is classified as Scanning or Reading (S, R)
– These sequences are used to create a two-state model (a simplified classification sketch follows below)
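A simplified Python sketch of the two-state classification; this is not the E-Z Reader implementation from the slides, and the “short forward jump” rule used to label a fixation as Reading is an illustrative stand-in for the actual model.

```python
from collections import Counter

fixations = [  # (duration_ms, x_position_px) -- invented example data
    (95, 100), (140, 130), (150, 160), (160, 190), (120, 520), (180, 545),
]

LEXICAL_MS = 113   # minimum duration of a lexical fixation (Reingold & Rayner, 2006)

states, prev_x = [], None
for dur, x in fixations:
    if dur <= LEXICAL_MS:
        continue                      # sub-lexical fixations are ignored here
    # Assumed rule: a short forward jump from the previous lexical fixation
    # counts as Reading ("R"); any other move counts as Scanning ("S").
    states.append("R" if prev_x is not None and 0 < x - prev_x <= 60 else "S")
    prev_x = x

# Transition probabilities of the two-state (S, R) model.
pairs = Counter(zip(states, states[1:]))
totals = Counter(states[:-1])
probs = {pair: n / totals[pair[0]] for pair, n in pairs.items()}
print(states, probs)
```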
Reading Model – States and Characteristics
• Two states: transition probabilities
• Number of lexical fixations and duration
Example Reading Sequence
Tag Clouds Everywhere!