Monitoring Conceptual Development with Meaningful Interaction Analysis

Posted on 03-Nov-2014


DESCRIPTION

Invited presentation given at the 2nd workshop on Natural Language Processing in Support of Learning: Metrics, Feedback and Connectivity (NLPSL'10) in Bucharest, Romania, September 15, 2010.

TRANSCRIPT

Monitoring Conceptual Development

Fridolin Wild, KMi, The Open University

Concepts: things we can (easily) learn from, or express in language

• Tying shoelaces
• Douglas Adams’ ‘meaning of liff’:
– Epping: The futile movements of forefingers and eyebrows used when failing to attract the attention of waiters and barmen.
– Shoeburyness: The vague uncomfortable feeling you get when sitting on a seat which is still warm from somebody else's bottom

I have been convincingly Sapir-Whorfed by this book.

Semantic = Meaning = …

Meaningful Interaction Analysis

• Two-mode factor analysis of the co-occurrences in the terminology
• Results in a latent-semantic vector space
• Which can be analysed with network analysis
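The pipeline above can be sketched with a toy example. The slides do not give MIA's actual weighting scheme, corpus, or number of dimensions, so the matrix, the rank k, and the cosine closeness measure below are illustrative assumptions, not the published implementation:

```python
import numpy as np

# Toy term-by-document co-occurrence matrix (5 terms x 4 documents).
# The real MIA corpus and weighting are not given in the slides.
X = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 1, 0],
    [0, 0, 1, 2],
    [0, 0, 2, 1],
], dtype=float)

# Two-mode factor analysis via truncated SVD: both terms (rows) and
# documents (columns) are projected into one shared k-dimensional
# latent-semantic vector space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # illustrative choice of dimensionality
term_vectors = U[:, :k] * s[:k]    # term coordinates in the space
doc_vectors = Vt[:k, :].T * s[:k]  # document coordinates in the space

# Associative closeness between two terms = cosine similarity
# of their latent-space vectors.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(term_vectors[0], term_vectors[1]))
```

The resulting vectors can then be turned into a similarity graph and handed to network analysis, as the slide suggests.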

The mathemagics behind Meaningful Interaction Analysis

• disambiguation with context
• heterogeneous corpus

The mathemagics behind Meaningful Interaction Analysis

• associative closeness
• meaning space

The mathemagics behind Meaningful Interaction Analysis

• network analysis is used to identify communities of related understanding
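The slides do not name the community-detection method used, so as a minimal stand-in, one can threshold the pairwise similarities from the latent space and take connected components of the resulting graph; the similarity values below are invented for illustration:

```python
from collections import deque

# Toy cosine-similarity matrix between five concept terms; real values
# would come from the MIA latent-semantic space.
sim = [
    [1.0, 0.8, 0.7, 0.1, 0.0],
    [0.8, 1.0, 0.6, 0.0, 0.1],
    [0.7, 0.6, 1.0, 0.2, 0.1],
    [0.1, 0.0, 0.2, 1.0, 0.9],
    [0.0, 0.1, 0.1, 0.9, 1.0],
]
threshold = 0.5  # keep edges only between sufficiently close terms

n = len(sim)
adj = {i: [j for j in range(n) if j != i and sim[i][j] >= threshold]
       for i in range(n)}

def communities(adj):
    """Connected components of the thresholded graph, found via BFS."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v])
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(communities(adj))  # → [[0, 1, 2], [3, 4]]: two groups of related terms
```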

Usage Example: Reflecting on Conceptual Change

Reflection is an interactive process of creative sense-making of the past

Capturing traces in text

Internal latent-semantic graph structure (MIA output)

Software Support: Conceptual Inspection Analytics

Emergent Reference Models

Evaluation

• Evaluating effectiveness: a measure of the accuracy in representing conceptual development
• Can be measured with two complementary methods, by assessing the external validity of:
– Concept Annotation: effectiveness in selecting accurate conceptual descriptors (with ratings)
– Concept Proximity: effectiveness in representing proximity (with card-sorts)
– By comparing against human ratings of 18 first-year medical students of the University of Manchester Medical School, aged 19-21

Concept Annotation

• Annotation of 5 authentic postings, again on ‘safe prescribing’
• Selection of the 10 top-loading concepts
• Adding of 5 random distracters
• Participants rated on a Likert scale of 1 to 5 how well the concept described the posting
• Human inter-rater agreement was measured with free-marginal kappa (Randolph, 2005)
• Conflated categories (1+2, 3, 4+5)
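Randolph's free-marginal kappa has a simple closed form: expected agreement is 1/k for k categories, and observed agreement is the average proportion of agreeing rater pairs per item. A minimal sketch, with invented ratings (the study's actual data is not reproduced here):

```python
def free_marginal_kappa(ratings, n_categories):
    """Randolph's (2005) free-marginal multirater kappa.

    ratings: list of items, each a list of category labels (one per rater).
    Assumes every item is rated by the same number of raters.
    """
    n_raters = len(ratings[0])
    # Observed agreement: proportion of agreeing rater pairs per item,
    # averaged over all items.
    p_o = 0.0
    for item in ratings:
        counts = {}
        for label in item:
            counts[label] = counts.get(label, 0) + 1
        agreeing_pairs = sum(c * (c - 1) for c in counts.values())
        p_o += agreeing_pairs / (n_raters * (n_raters - 1))
    p_o /= len(ratings)
    # Expected agreement under free marginals: 1 / number of categories.
    p_e = 1.0 / n_categories
    return (p_o - p_e) / (1.0 - p_e)

# Invented example with the conflated categories (1+2, 3, 4+5),
# i.e. 3 categories and 3 hypothetical raters.
example = [
    ["low", "low", "low"],
    ["mid", "mid", "high"],
    ["high", "high", "high"],
]
print(round(free_marginal_kappa(example, 3), 3))  # → 0.667
```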

Concept Proximity (1)

• Four authentic learner blog postings about ‘safe prescribing’ generated ~50 top-loading concepts each
• Printed on cards
• Participants grouped them into piles
• Comparison of participant clustering with k-means-based clustering in the MIA space
• 1% of term pairs were put into the same cluster by more than 12 participants
• 7% by between 7 and 12
• For the 1% of term pairs: Spearman’s rho as inter-rater correlation
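Spearman's rho is the Pearson correlation of the rank-transformed scores, with ties assigned average ranks. A self-contained sketch, using invented pair-proximity scores for two hypothetical raters:

```python
def average_ranks(xs):
    """1-based ranks; tied values get the mean of the ranks they span."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranked data."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented scores: how often two hypothetical raters put each of five
# term pairs into the same pile.
rater_a = [12, 10, 7, 3, 1]
rater_b = [11, 9, 8, 2, 2]
print(round(spearman_rho(rater_a, rater_b), 3))  # → 0.975
```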

Proximity (2)

Proximity (3)

• Silhouette width in the MIA space

Silhouette plots depict, for each observation, how well its average distance to the other members of its own cluster is balanced against its average distance to the next-closest cluster (Rousseeuw, 1986).
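Following Rousseeuw's definition, the silhouette width of a point is s(i) = (b − a) / max(a, b), where a is the mean distance to the other members of its own cluster and b is the mean distance to the nearest other cluster. A minimal sketch on toy 2-D data (not the MIA space itself):

```python
def silhouette_widths(points, labels):
    """Per-point silhouette widths s(i) = (b - a) / max(a, b)."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    clusters = {}
    for idx, lab in enumerate(labels):
        clusters.setdefault(lab, []).append(idx)
    widths = []
    for i, p in enumerate(points):
        own = clusters[labels[i]]
        if len(own) == 1:
            widths.append(0.0)  # common convention for singleton clusters
            continue
        # a: mean distance to the other members of the point's own cluster
        a = sum(dist(p, points[j]) for j in own if j != i) / (len(own) - 1)
        # b: smallest mean distance to any other cluster
        b = min(
            sum(dist(p, points[j]) for j in members) / len(members)
            for lab, members in clusters.items() if lab != labels[i]
        )
        widths.append((b - a) / max(a, b))
    return widths

# Two well-separated toy clusters: all silhouette widths close to 1.
pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
labs = [0, 0, 1, 1]
print(silhouette_widths(pts, labs))
```

Widths near 1 mean an observation sits firmly inside its cluster; widths near 0 or below mean it lies between clusters, which is what the upper- and lower-range results in the conclusion distinguish.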

Conclusion

• 1st-year students do not show much agreement in rating the annotations; this could be a sign of heterogeneous frames of reference
• Activation strength of the 10 concepts has not been taken into account (would be interesting!)
• Still: pretty good clustering results in the upper range
• Lower range: could be an artefact of the clustering (clustering of a folded-in posting, not clustering in the space)
• All in all: points towards rigorous use of thresholds
• Near-human results (at the human overlap)
• Near-human results (clearly better results than chance, but no perfect agreement)

The End
