Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration


Upload: upmc-sorbonne-universities

Post on 24-Jan-2017


Page 1: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

1. Context and motivations 2. Related Work 3. The CRAQ Model 4. Experimental Evaluation 5. Conclusion

Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

Laure Soulier, Pierre and Marie Curie University, LIP6, Paris, France

Lynda Tamine, Paul Sabatier University, IRIT, Toulouse, France

Gia-Hung Nguyen, Paul Sabatier University, IRIT, Toulouse, France

October 25, 2016

1 / 30

Page 2

PLAN

1. Context and motivations
   Social media-based information access
   Collaboration and social media-based information access

2. Related Work

3. The CRAQ Model

4. Experimental Evaluation

5. Conclusion

2 / 30

Page 3

CONTEXT AND MOTIVATIONS
SOCIAL MEDIA-BASED INFORMATION ACCESS

• Activity on social platforms

• Social networks: communication tool for the general public

3 / 30

Page 4

CONTEXT AND MOTIVATIONS
SOCIAL MEDIA-BASED INFORMATION ACCESS

• Why choose social platforms for asking questions?

  - Large audience and wide range of topics [Harper et al., 2008, Jeong et al., 2013, Tamine et al., 2016]
  - Specific audience and expertise → trust, personalisation, and contextualisation [Morris et al., 2010]
  - Friendsourcing through people addressing ("@", forward) [Liu and Jansen, 2013, Teevan et al., 2011, Fuchs and Groh, 2015]
  - Communication, exchange, sensemaking [Morris, 2013, Evans and Chi, 2010, Tamine et al., 2016]

• Limitations of social platforms

  - A majority of questions get no response [Jeong et al., 2013, Paul et al., 2011]
  - Answers are mostly provided by members of the immediate follower network [Morris et al., 2010, Rzeszotarski et al., 2014]
  - Social and cognitive cost of friendsourcing (e.g., time spent and effort deployed) [Horowitz and Kamvar, 2010, Morris, 2013]

Design implications

• Enhancement of social awareness (creating social ties to active/relevant users)
• Recommendation of collaborators (asking questions to the crowd instead of followers)

4 / 30


Page 9

CONTEXT AND MOTIVATIONS
COLLABORATION AND SOCIAL MEDIA-BASED INFORMATION ACCESS: TWO SIDES OF THE SAME COIN?

• Social media-based information access
  - Seeking, answering, sharing, bookmarking, and spreading information
  - Improving the search outcomes through social interactions

• Collaboration
  - Identifying and solving a shared complex problem
  - Creating and sharing knowledge within a work team

• Social media-based collaboration
  - Leveraging the "wisdom of the crowd"
  - Implicit or explicit intents (sharing, questioning, and/or answering)
  - Tasks: social question-answering, social search, real-time search

Our contribution

• Identifying a group of socially authoritative users with complementary skills to go beyond the local social network
• Gathering diverse pieces of information

→ Recommending a group of collaborators

5 / 30


Page 12

PLAN

1. Context and motivations

2. Related Work
   Pioneering work
   Comparison of previous work

3. The CRAQ Model

4. Experimental Evaluation

5. Conclusion

6 / 30

Page 13

RELATED WORK
PIONEERING WORK: AARDVARK [HOROWITZ AND KAMVAR, 2010]

Aardvark [Horowitz and Kamvar, 2010]

• The village paradigm: towards a social dissemination of knowledge
  - Information is passed from person to person
  - Finding the right person rather than the right document

7 / 30

Page 14

RELATED WORK
PIONEERING WORK: SEARCHBUDDIES [HECHT ET AL., 2012]

SearchBuddies [Hecht et al., 2012]

• A crowd-powered socially embedded search engine

• Leveraging users’ personal network to reach the right people/information

• Soshul Butterflie: recommending people
• Investigaetore: recommending URLs

8 / 30

Page 15

RELATED WORK
COMPARISON OF PREVIOUS APPROACHES

Comparison criteria: expertise/interest, responsiveness, social activity, users' connectedness, compatibility, optimization of the outcome, complementarity of users' skills.

Recommendation of users:
- Expert finding [Balog et al., 2012] (covers 1 of the criteria)
- Authoritative users/influencers [Pal and Counts, 2011] (covers 2 of the criteria)
- Aardvark [Horowitz and Kamvar, 2010] (covers 4 of the criteria)
- SearchBuddies [Hecht et al., 2012] (covers 1 of the criteria)
- Mentioning users/spreaders [Wang et al., 2013, Gong et al., 2015] (covers 2 of the criteria)

Recommendation of a group of users:
- CrowdStar [Nushi et al., 2015] (covers 4 of the criteria)
- Question routing for collaborative Q&A [Chang and Pal, 2013] (covers 4 of the criteria)
- Recommended targeted strangers [Mahmud et al., 2013] (covers 4 of the criteria)
- Crowdworker [Ranganath et al., 2015] (covers 3 of the criteria)
- Our work (covers 5 of the criteria)

9 / 30

Page 16

PLAN

1. Context and motivations

2. Related Work

3. The CRAQ Model
   Overview
   Learning the pairwise collaboration likelihood
   Building the collaborative group of users

4. Experimental Evaluation

5. Conclusion

10 / 30

Page 17

THE CRAQ MODEL: ANSWERING TWITTER QUESTIONS THROUGH A COLLABORATIVE GROUP RECOMMENDATION ALGORITHM
OVERVIEW

• Identifying a group of complementary answerers who could provide the questioner with a cohesive and relevant answer

• Gathering diverse pieces of information posted by users

• Maximization of the group entropy

11 / 30

Page 18

THE CRAQ MODEL: ANSWERING TWITTER QUESTIONS THROUGH A COLLABORATIVE GROUP RECOMMENDATION ALGORITHM
STAGE A: LEARNING THE PAIRWISE COLLABORATION LIKELIHOOD

Objective

Estimating the potential of collaboration between a pair of users

• Hypotheses:
  - On Twitter, collaboration between users is signalled by the @ symbol [Ehrlich and Shami, 2010, Honey and Herring, 2009]
  - Trust and authority help improve the effectiveness of the collaboration [McNally et al., 2013]
  - Collaboration is a structured search process in which users might or might not be complementary [Sonnenwald et al., 2004, Soulier et al., 2014]
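The slides do not specify the learning algorithm behind the pairwise collaboration likelihood; a minimal sketch under the assumption that it is a binary classifier (here plain logistic regression, trained by gradient descent) over pairwise feature vectors, with labels derived from observed @-mention collaborations. All names are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_collaboration_model(pairs, labels, lr=0.1, epochs=200):
    # Logistic regression by stochastic gradient descent.
    # pairs:  list of pairwise feature vectors (authority + complementarity)
    # labels: 1 if the two users collaborated (@-mention), else 0
    n = len(pairs[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(pairs, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def collaboration_likelihood(w, b, x):
    # Estimated probability that the pair described by x would collaborate.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

Any monotone probabilistic classifier would serve the same role; the point is only that stage A turns pairwise features into a collaboration probability.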

12 / 30


Page 21

THE CRAQ MODEL: ANSWERING TWITTER QUESTIONS THROUGH A COLLABORATIVE GROUP RECOMMENDATION ALGORITHM
STAGE B: BUILDING THE COLLABORATIVE GROUP OF USERS

Objective

Building the smallest group of collaborators maximizing the cohesiveness and relevance of the collaborative response

• Identifying candidate collaborators through a temporal ranking model [Berberich and Bedathur, 2013]

13 / 30

Page 22

THE CRAQ MODEL: ANSWERING TWITTER QUESTIONS THROUGH A COLLABORATIVE GROUP RECOMMENDATION ALGORITHM
STAGE B: BUILDING THE COLLABORATIVE GROUP OF USERS

Objective

Building the smallest group of collaborators maximizing the cohesiveness and relevance of the collaborative response

• Extracting the collaborator group

  - Maximizing the group entropy is equivalent to minimizing the information gain [Quinlan, 1986]:

    IG(g, u_k) = H(g) - H(g | u_k)    (1)

    with H(g) \propto -\sum_{u_j \in g} P(u_j | q) \cdot \log P(u_j | q)

    and H(g | u_k) = p(u_k) \cdot \Big[ -\sum_{u_j \in g,\, u_j \neq u_k} P(u_j | u_k) \cdot \log P(u_j | u_k) \Big]

  - Recursive removal of candidate collaborators driven by the information gain metric:

    t^* = \arg\max_{t \in [0, \dots, |U|-1]} \frac{\partial^2 IG_r(g_t, u)}{\partial u^2} \Big|_{u = u_t}    (2)

    given u_t = \arg\min_{u_{j'} \in g_t} IG_r(g_t, u_{j'}) and g_{t+1} = g_t \setminus u_t

14 / 30

Page 23

PLAN

1. Context and motivations

2. Related Work

3. The CRAQ Model

4. Experimental Evaluation
   Evaluation Protocol
   Results

5. Conclusion

15 / 30

Page 24

EXPERIMENTAL EVALUATION
EVALUATION PROTOCOL

Evaluation objectives

• RQ1: Do the tweets posted by the collaborative group members recommended by the CRAQ allow the building of an answer?

• RQ2: Are the recommended group-based answers relevant?

• RQ3: What is the synergistic effect of the CRAQ-based collaborative answering methodology?

• Datasets

  1. Hurricane #Sandy (October 2012)
  2. #Ebola virus epidemic (2013-2014)

  Collection      Sandy       Ebola
  Tweets          2,119,854   2,872,890
  Microbloggers   1,258,473   750,829
  Retweets        963,631     1,157,826
  Mentions        1,473,498   1,826,059
  Replies         63,596      69,773
  URLs            596,393     1,309,919
  Pictures        107,263     310,581

16 / 30

Page 25

EXPERIMENTAL EVALUATION
EVALUATION PROTOCOL

• Question identification [Jeong et al., 2013]
  - Filtering tweets ending with a question mark
  - Excluding mention tweets and tweets including URLs
  - Filtering tweets with question-oriented hashtags [Jeffrey M. Rzeszotarski, 2014] (e.g., #help, #askquestion, ...)
  - Excluding rhetorical questions (CrowdFlower)

  Sandy (41 questions): "Would love to #help to clear up the mess #Sandy made. Any way people can help? Voluntery groups?"

  Ebola (21 questions): "How do you get infected by this Ebola virus though?? #Twoogle"
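The filtering heuristics above can be sketched as a small predicate. This is an illustrative reading of the protocol, not the authors' implementation: the hashtag list is only the slide's two examples, the regexes are assumptions, and the rhetorical-question step (done via crowdsourcing) is omitted:

```python
import re

# Example question-oriented hashtags from the slide; the real list is larger.
QUESTION_HASHTAGS = {"#help", "#askquestion"}

def is_candidate_question(tweet: str) -> bool:
    text = tweet.strip()
    has_q_hashtag = any(h in text.lower() for h in QUESTION_HASHTAGS)
    # Keep tweets ending with a question mark or carrying a question hashtag.
    if not (text.endswith("?") or has_q_hashtag):
        return False
    # Exclude mention tweets and tweets including URLs.
    if re.search(r"(^|\s)@\w+", text) or re.search(r"https?://\S+", text):
        return False
    return True
```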

• Collaboration likelihood features

  Authority
  - Importance: number of followers, number of followings, number of favorites
  - Engagement: number of tweets
  - Activity within the topic: number of topically-related tweets, in-degree in the topic, out-degree in the topic

  Complementarity
  - Topic: Jensen-Shannon distance between topic-based representations of users' interests obtained through the LDA algorithm
  - Multimedia: number of tweets with video, with images, with links, with hashtags, and with only text
  - Opinion polarity: number of tweets with positive, neutral, and negative opinion

  - Authority-based features (trust and expertise of each user):

    X_{jj'} = \log\Big(\frac{\mu(X_j, X_{j'})}{\sigma(X_j, X_{j'})}\Big)    (3)

  - Complementarity-based features (complementarity of collaborators):

    X_{jj'} = \frac{|X_j - X_{j'}|}{X_j + X_{j'}}    (4)
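Equations (3) and (4), together with a Jensen-Shannon measure for the topical-complementarity feature, are straightforward to compute. A sketch; note that Eq. (3) is undefined when the two raw values coincide (σ = 0), and whether the slide's "distance" means the JS divergence or its square root is an assumption here:

```python
import math
import statistics

def authority_feature(x_j, x_jp):
    # Eq. (3): log of the mean over the standard deviation of the two
    # users' raw feature values; undefined when x_j == x_jp (sigma = 0).
    mu = statistics.mean([x_j, x_jp])
    sigma = statistics.stdev([x_j, x_jp])
    return math.log(mu / sigma)

def complementarity_feature(x_j, x_jp):
    # Eq. (4): normalized absolute difference of the two raw values.
    return abs(x_j - x_jp) / (x_j + x_jp)

def kl_divergence(p, q):
    # Kullback-Leibler divergence, skipping zero-probability terms.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    # Symmetric, finite divergence between two LDA topic distributions.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```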

17 / 30


Page 27

EXPERIMENTAL EVALUATION
EVALUATION PROTOCOL

• Baselines
  - MMR: diversity-based ranking model of tweets [Carbonell and Goldstein, 1998] (tweet level)
  - U: best user of the temporal ranking model [Berberich and Bedathur, 2013] (individual)
  - CRAQ-TR: CRAQ without group entropy maximization
  - SM: community detection algorithm based on the graph structure [Cao et al., 2015]
  - STM: Topic-Sensitive PageRank applied to users [Haveliwala, 2002]
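For reference, the MMR baseline [Carbonell and Goldstein, 1998] greedily balances relevance to the query against redundancy with already-selected items. A generic sketch with the similarity lookups passed in as dictionaries (names and the λ = 0.5 default are illustrative):

```python
def mmr_rank(query_sim, pair_sim, candidates, k, lam=0.5):
    # Maximal Marginal Relevance: at each step pick the candidate with the
    # best trade-off between query relevance and novelty w.r.t. the
    # already-selected set.
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(d):
            redundancy = max((pair_sim[(d, s)] for s in selected), default=0.0)
            return lam * query_sim[d] - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected
```

With λ = 1 this degenerates to plain relevance ranking; lowering λ trades relevance for diversity, which is exactly what makes it a natural tweet-level baseline here.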

• Evaluation workflow: given a question and the tweets of the recommended users, crowd assessors evaluate the tweets and build an answer; the relevance of both the users' tweets and the built answers is then assessed against a ground truth.

  Question: "Would love to #help to clear up the mess #Sandy made. Any way people can help? Voluntery groups?"

  Top-ranked tweets of recommended group members:
  - "My prayers go out to those people out there that have been affected by the storm. #Sandy"
  - "Makes me want to volunteer myself and help the Red Cross and rescue groups. #Sandy"
  - "Rescue groups are organized and dispatched to help animals in Sandy's aftermath. You can help by donating. #SandyPets"
  - "ASPCA, HSUS, American Humane are among groups on the ground helping animals in Sandy's aftermath. Help them with a donation. #SandyPets #wlf"

  Answer built by the crowd: "Rescue groups are organized and dispatched: ASPCA, HSUS, American Humane. Donate to @RedCross, @HumaneSociety, @ASPCA."

18 / 30


Page 30

EXPERIMENTAL EVALUATION
RESULTS

RQ1: Do the tweets posted by the collaborative group members recommended by the CRAQ allow the building of an answer?

• Testing whether the CRAQ is effective in providing useful tweets in terms of relatedness to the question topic and complementarity.

[Bar charts: distribution of tweet usefulness ratings (0-3) on the Sandy and Ebola datasets; compared methods: MMR, U, CRAQ-TR, SM, STM, CRAQ]

• Lowest rate for the Not related category (0)
• Highest proportion of ratings 2+3 (Related and helpful)
• Complementarity of tweets is not satisfying w.r.t. baselines
  - Negative regression estimate of complementarity-based features in the collaboration likelihood model (stage A)

19 / 30


Page 33

EXPERIMENTAL EVALUATION
RESULTS

RQ1: Do the tweets posted by the collaborative group members recommended by the CRAQ allow the building of an answer?

• Testing whether the CRAQ is effective in providing useful tweets in terms of relatedness to the question topic and complementarity.

• Testing whether those provided tweets allow building a cohesive answer.

[Bar charts: average percentage of selected tweets and number of built answers on the Sandy and Ebola datasets; compared methods: MMR, U, CRAQ-TR, SM, STM, CRAQ]

• U: highest rate of selected tweets but lowest number of built answers
• CRAQ: the lack of tweet complementarity does not impact the ability of the recommended group to answer the question

20 / 30


Page 35: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

EXPERIMENTAL EVALUATION: RESULTS

RQ2: Are the recommended group-based answers relevant?

• Testing the relevance of the built answers

(ba = number of built answers; 1, 2, 3 = increasing relevance levels; 2+3 = Partly relevant + Relevant; cells: count (percentage))

            MMR          U            CRAQ-TR      SM           STM          CRAQ
Sandy
  ba        43           29           75           74           67           77
  1         11 (25.58%)  9 (31.03%)   23 (30.67%)  24 (32.43%)  21 (31.34%)  17 (22.08%)
  2         20 (46.51%)  14 (48.28%)  33 (44.00%)  34 (45.95%)  24 (35.82%)  39 (50.65%)
  3         12 (27.91%)  6 (20.69%)   19 (25.33%)  16 (21.62%)  22 (32.84%)  21 (27.27%)
  2+3       32 (74.42%)  20 (68.97%)  52 (69.33%)  50 (67.57%)  46 (68.66%)  60 (77.92%)
Ebola
  ba        22           11           39           30           37           41
  1         4 (21.05%)   3 (27.27%)   15 (38.46%)  8 (26.67%)   15 (40.54%)  10 (24.39%)
  2         6 (31.58%)   4 (36.36%)   18 (46.15%)  15 (50.00%)  16 (43.24%)  22 (53.66%)
  3         9 (47.37%)   4 (36.36%)   6 (15.38%)   7 (23.33%)   6 (16.22%)   9 (21.95%)
  2+3       15 (78.95%)  8 (72.72%)   24 (61.53%)  22 (73.33%)  22 (59.46%)  31 (75.61%)

• CRAQ builds a higher number of answers, with a higher proportion of Partly relevant and Relevant answers

▸ U: reinforces our intuition that a single user might have insufficient knowledge (even if related) to solve a tweeted question.

▸ MMR: highlights the benefit of building answers from the users' perspective rather than from tweets considered regardless of their context.

▸ CRAQ-TR (best baseline): building a group by gathering individual users identified as relevant through their skills (tweet topical similarity with the question) is not always appropriate.
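The MMR baseline above is Carbonell and Goldstein's maximal marginal relevance reranking, which selects items one by one by trading off relevance to the question against redundancy with already selected items. A minimal sketch; the similarity function, the candidate tweets, and the value λ = 0.7 are illustrative placeholders, not the experimental setup used here:

```python
def mmr_rerank(question, candidates, sim, k=5, lam=0.7):
    """Greedy MMR: at each step pick the candidate maximizing
    lam * sim(question, c) - (1 - lam) * max_{s in selected} sim(c, s)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr_score(c):
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            return lam * sim(question, c) - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected


def jaccard(a, b):
    """Toy word-overlap similarity between two short texts."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
```

With the toy `jaccard` similarity, the first tweet selected is the most question-like one; later picks are penalized for overlapping with it, which is what makes MMR a tweet-level (rather than user-level) diversification baseline.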

21 / 30

Page 38: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

EXPERIMENTAL EVALUATION: RESULTS

RQ3: What is the synergic effect of the CRAQ-based collaborative answering methodology?

• Measuring the synergic effect of simulated collaboration within the recommended groups of users, with respect to the search effectiveness based on the tweets published by those group members.

(%Chg = relative change of CRAQ w.r.t. the baseline; *, **, *** = increasing levels of statistical significance)

               MMR               U                CRAQ-TR          SM                STM               CRAQ
               Value  %Chg       Value  %Chg      Value  %Chg      Value  %Chg       Value  %Chg       Value
Sandy
  Precision    0.24   +92.93**   0.46   +2.32     0.33   +42.01*   0.21   +124.71*** 0.49   -4.1       0.47
  Recall       0.09   +95.19*    0.10   +81.63*   0.15   +16.18    0.09   +105.96*   0.10   +80.09*    0.18
  F-measure    0.12   +78.22*    0.15   +41.79    0.19   +10.59    0.12   +84.42*    0.15   +40.48     0.21
Ebola
  Precision    0.22   +153.65*** 0.64   -12.12    0.50   +12.22    0.30   +89.59**   0.45   +24.5      0.57
  Recall       0.07   +155.59*** 0.11   +69.96*   0.22   -18.08    0.12   +46.80     0.06   +216.56*** 0.18
  F-measure    0.09   +164.17*** 0.21   +17.46    0.28   -11.64    0.17   +50.07     0.10   +159.07*** 0.25
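For reference, F-measure is the harmonic mean of precision and recall, and the %Chg columns appear to report the relative change of CRAQ with respect to each baseline (e.g. Sandy/STM precision 0.49 vs. CRAQ 0.47 gives -4.1%). A minimal sketch of both formulas; note that, since the reported values are averaged over questions, the averaged F-measure need not equal the harmonic mean of the averaged precision and recall:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (F1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def pct_change(baseline, value):
    """Relative change of `value` w.r.t. `baseline`, in percent."""
    return 100.0 * (value - baseline) / baseline
```

For instance, `round(pct_change(0.49, 0.47), 1)` reproduces the -4.1 of the Sandy/STM precision cell.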

• MMR: sustains the previous observations on the lack of user context

• U: consistent with previous work highlighting the synergic effect of a collaborative group

• CRAQ-TR: no significant differences in effectiveness / lower ratio of relevant answers. Benefit of the group entropy maximization based on the collaboration likelihood.

• SM: benefit of going beyond strong ties (users' local network) to select relevant strangers

• STM: topically relevant tweets issued from the most socially authoritative users are not obviously relevant to answer the tweeted question.
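The group entropy maximization mentioned above is not detailed on this slide; purely as an illustration of the general idea, the sketch below greedily grows a group so that the pooled distribution of its members' features (here, hypothetical per-topic tweet counts) has maximal Shannon entropy, i.e. the group covers complementary skills rather than redundant ones. All names and data are assumptions, not the CRAQ formulation:

```python
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a discrete distribution; zero terms ignored."""
    return -sum(p * log2(p) for p in dist if p > 0)

def greedy_entropy_group(candidates, topic_counts, k):
    """Greedily add the user whose tweets make the group's pooled topic
    distribution as high-entropy (i.e. as diverse) as possible.
    topic_counts[u] is a hypothetical per-topic tweet-count vector."""
    n_topics = len(next(iter(topic_counts.values())))
    group, pooled = [], [0] * n_topics
    for _ in range(min(k, len(candidates))):
        def entropy_if_added(u):
            merged = [a + b for a, b in zip(pooled, topic_counts[u])]
            total = sum(merged)
            return entropy([c / total for c in merged])
        best = max((u for u in candidates if u not in group), key=entropy_if_added)
        group.append(best)
        pooled = [a + b for a, b in zip(pooled, topic_counts[best])]
    return group
```

With three hypothetical candidates, the user with the most balanced topic profile is picked first, then the user covering the topic the group still lacks.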

22 / 30

Page 43: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

PLAN

1. Context and motivations

2. Related Work

3. The CRAQ Model

4. Experimental Evaluation

5. Conclusion

23 / 30

Page 44: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

CONCLUSION AND PERSPECTIVES

Discussion

• Novel method for answering questions on social networks: recommending a group ofsocially active and complementary collaborators.

• Relevant factors:
  ▸ Information gain provided by a user to the group
  ▸ Complementarity and topical relevance of the related tweets
  ▸ Trust and authority of the group members

• Method applicable to other social platforms (Facebook, community Q&A sites, ...)

Future Directions

• Limitation: the predictive model of collaboration likelihood relies on basic assumptions about collaboration (mentions, replies, retweets).

▸ Deeper analysis of collaboration behavior on social networks to identify collaboration patterns.

• Automatic summarization of candidate answers to build a collaborative answer.

24 / 30

Page 46: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

THANK YOU!

@LaureSoulier @LyndaTamine @ngiahung

25 / 30

Page 47: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

REFERENCES I

Balog, K., Fang, Y., de Rijke, M., Serdyukov, P., and Si, L. (2012). Expertise retrieval. Foundations and Trends in Information Retrieval, 6(2-3):127–256.

Berberich, K. and Bedathur, S. (2013). Temporal diversification of search results. In SIGIR #TAIA workshop. ACM.

Cao, C., Caverlee, J., Lee, K., Ge, H., and Chung, J. (2015). Organic or organized?: Exploring URL sharing behavior. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pages 513–522.

Carbonell, J. and Goldstein, J. (1998). The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the Annual International SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98, pages 335–336. ACM.

Chang, S. and Pal, A. (2013). Routing questions for collaborative answering in community question answering. In ASONAM '13, pages 494–501. ACM.

Ehrlich, K. and Shami, N. S. (2010). Microblogging inside and outside the workplace. In Proceedings of the Fourth International Conference on Weblogs and Social Media, ICWSM 2010.

Evans, B. M. and Chi, E. H. (2010). An elaborated model of social search. Information Processing & Management (IP&M), 46(6):656–678.

26 / 30

Page 48: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

REFERENCES II

Fuchs, C. and Groh, G. (2015). Appropriateness of search engines, social networks, and directly approaching friends to satisfy information needs. In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2015, pages 1248–1253.

Gong, Y., Zhang, Q., Sun, X., and Huang, X. (2015). Who will you "@"? pages 533–542. ACM.

Harper, F. M., Raban, D. R., Rafaeli, S., and Konstan, J. A. (2008). Predictors of answer quality in online Q&A sites. In Proceedings of the 2008 Conference on Human Factors in Computing Systems, CHI 2008, Florence, Italy, April 5-10, 2008, pages 865–874.

Haveliwala, T. H. (2002). Topic-sensitive PageRank. In Proceedings of the International Conference on World Wide Web, WWW '02, pages 517–526. ACM.

Hecht, B., Teevan, J., Morris, M. R., and Liebling, D. J. (2012). Searchbuddies: Bringing search engines into the conversation. In WSDM '14.

Honey, C. and Herring, S. (2009). Beyond microblogging: Conversation and collaboration via Twitter. In HICSS, pages 1–10.

Horowitz, D. and Kamvar, S. D. (2010). The anatomy of a large-scale social search engine. In WWW '10, pages 431–440. ACM.

27 / 30

Page 49: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

REFERENCES III

Rzeszotarski, J. M. and Morris, M. R. (2014). Estimating the social costs of friendsourcing. In Proceedings of CHI 2014. ACM.

Jeong, J.-W., Morris, M. R., Teevan, J., and Liebling, D. (2013). A crowd-powered socially embedded search engine. In ICWSM '13. AAAI.

Liu, Z. and Jansen, B. J. (2013). Factors influencing the response rate in social question and answering behavior. In Computer Supported Cooperative Work, CSCW 2013, pages 1263–1274.

Mahmud, J., Zhou, M. X., Megiddo, N., Nichols, J., and Drews, C. (2013). Recommending targeted strangers from whom to solicit information on social media. In IUI '13, pages 37–48. ACM.

McNally, K., O'Mahony, M. P., and Smyth, B. (2013). A model of collaboration-based reputation for the social web. In ICWSM.

Morris, M. R. (2013). Collaborative search revisited. In Proceedings of the Conference on Computer Supported Cooperative Work, CSCW '13, pages 1181–1192. ACM.

Morris, M. R., Teevan, J., and Panovich, K. (2010). What do people ask their social networks, and why?: A survey study of status message Q&A behavior. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, CHI 2010, pages 1739–1748.

28 / 30

Page 50: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

REFERENCES IV

Nushi, B., Alonso, O., Hentschel, M., and Kandylas, V. (2015). CrowdSTAR: A social task routing framework for online communities. In ICWE '15, pages 219–230.

Pal, A. and Counts, S. (2011). Identifying topical authorities in microblogs. In WSDM '11, pages 45–54. ACM.

Paul, S. A., Hong, L., and Chi, E. H. (2011). Is Twitter a good place for asking questions? A characterization study. In Proceedings of the Fifth International Conference on Weblogs and Social Media.

Quinlan, J. (1986). Induction of decision trees. Machine Learning, 1(1):81–106.

Ranganath, S., Wang, S., Hu, X., Tang, J., and Liu, H. (2015). Finding time-critical responses for information seeking in social media. In 2015 IEEE International Conference on Data Mining, ICDM 2015, pages 961–966.

Rzeszotarski, J. M., Spiro, E. S., Matias, J. N., Monroy-Hernandez, A., and Morris, M. R. (2014). Is anyone out there?: Unpacking Q&A hashtags on Twitter. In CHI Conference on Human Factors in Computing Systems, CHI '14, pages 2755–2758.

Sonnenwald, D. H., Maglaughlin, K. L., and Whitton, M. C. (2004). Designing to support situation awareness across distances: An example from a scientific collaboratory. Information Processing & Management (IP&M), 40(6):989–1011.

29 / 30

Page 51: Answering Twitter Questions: a Model for Recommending Answerers through Social Collaboration

REFERENCES V

Soulier, L., Shah, C., and Tamine, L. (2014). User-driven system-mediated collaborative information retrieval. In SIGIR '14, pages 485–494. ACM.

Tamine, L., Soulier, L., Jabeur, L. B., Amblard, F., Hanachi, C., Hubert, G., and Roth, C. (2016). Social media-based collaborative information access: Analysis of online crisis-related Twitter conversations. In HT '16.

Teevan, J., Ramage, D., and Morris, M. R. (2011). #TwitterSearch: A comparison of microblog search and web search. In Proceedings of the Fourth International Conference on Web Search and Web Data Mining, WSDM 2011, pages 35–44.

Wang, B., Wang, C., Bu, J., Chen, C., Zhang, W. V., Cai, D., and He, X. (2013). Whom to mention: Expand the diffusion of tweets by @ recommendation on micro-blogging systems. In Proceedings of the 22nd International Conference on World Wide Web, WWW '13, pages 1331–1340. ACM.

30 / 30