Beautiful Research Data (Structured Data and OpenRefine)


DESCRIPTION

http://serai.utsc.utoronto.ca/rrsi2014 "Unlike traditional academic conferences, the Roots & Routes Summer Institute features a combination of informal presentations, seminar-style discussions of shared materials, hands-on workshops on a variety of digital tools, and small-group project development sessions. The institute welcomes participants from a range of disciplines with an interest in engaging with digital scholarship; technical experience is not a requirement. Graduate students (MA and PhD), postdoctoral fellows and faculty are all encouraged to apply."

TRANSCRIPT

Beautiful Research Data

Kirsta Stapelfeldt, Coordinator, UTSC Library’s Digital Scholarship Unit

In this presentation

● Part One: Preparing to create machine-readable data at the outset of a research endeavour

● Part Two: Working with “messy” datasets

Benefits of machine-readable data

● Easier to query for new insights
● Easier to mount in a computing environment
● Easier to share with others

Just a .csv + Fusion Tables

● Fusion Tables is an experimental, web-based Chrome app

● We took a spreadsheet that Natalie had been working on and loaded it into the app

● Results have not been massaged at all
● We can expect additional benefits from having structured data in the future

Part One: In which you have no research data... yet

Best Case Scenario

You start by following some best practices.

Four pieces of low-hanging fruit...

1. No Word documents

● Use a database (even a spreadsheet), not .doc files
● Avoid a lot of style information in your research documents (such as bolding and italicizing text, or moving things to other areas of the page using the tab key or spacebar)

● Why?

Look beyond the surface.

&nbsp; &nbsp; &nbsp; &nbsp; no thank you!

http://www.bartleby.com/103/33.html

Beauty is more than browser deep

http://www.gutenberg.org/ebooks/18827

2. Use consistent formats for elements such as date & language

● e.g. dates recorded consistently where possible (05/25/2014), as sketched below
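For instance, a short script can normalize mixed date strings into one consistent form. A minimal Python sketch, assuming the input formats and the ISO 8601 target are your own choices (they are not from the slides):

from datetime import datetime

# Input formats we expect to encounter (an assumption for this sketch).
KNOWN_FORMATS = ["%m/%d/%Y", "%d-%m-%Y", "%B %d, %Y"]

def normalize_date(raw):
    """Return the date as ISO 8601 (YYYY-MM-DD), or None if no format matches."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None

print(normalize_date("05/25/2014"))    # -> 2014-05-25
print(normalize_date("May 25, 2014"))  # -> 2014-05-25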

3. Taxonomies & Standards

● Use controlled vocabularies for keywords, place names, and person names of relevance
o Using an open format for a place name can make geocoding much easier
o Stay consistent in a given language

4. Text Encoding

● Ensure you are using Unicode (UTF-8)

● How do you know?
o Notepad can be your friend
o Test a sample between systems

http://www.string-functions.com/encodingerror.aspx
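One rough way to test a sample between systems is to try decoding it as UTF-8 and see whether it fails. A minimal Python sketch with invented sample strings:

# Two byte strings: one saved as UTF-8, one as Latin-1 (a common mismatch).
samples = {
    "utf8_sample": "café déjà vu".encode("utf-8"),
    "latin1_sample": "café déjà vu".encode("latin-1"),
}

for name, raw in samples.items():
    try:
        raw.decode("utf-8")
        print(f"{name}: decodes cleanly as UTF-8")
    except UnicodeDecodeError as err:
        print(f"{name}: not valid UTF-8 ({err})")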

Changing the way you think about your research process

Draw a picture

1. Think small.

Atomistic information (what is the smallest meaningful unit of information you are collecting?)

For example:
● A person’s name, religion, and DOB
● Mention of a location or name
● Repeated occurrence

2. Connect the dots.

What are the relationships between your data elements?

Useful tool: The Entity Relationship Diagram
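As a rough illustration of what an ERD captures (an invented example, not the Dragomans content model): one Person can be linked to many Mentions, and each Mention points back to exactly one Person.

from dataclasses import dataclass

@dataclass
class Person:
    person_id: int
    name: str
    religion: str = ""
    date_of_birth: str = ""   # kept as ISO text for simplicity

@dataclass
class Mention:
    mention_id: int
    person_id: int            # "foreign key" back to Person (the crow's-foot side)
    source_document: str
    location: str = ""

people = [Person(1, "Example Name", "Unknown", "1700-01-01")]
mentions = [Mention(10, 1, "Letter 42", "Istanbul"), Mention(11, 1, "Ledger 7")]

# Resolve each mention to the person it references (a one-to-many relationship).
for m in mentions:
    owner = next(p for p in people if p.person_id == m.person_id)
    print(f"{owner.name} is mentioned in {m.source_document}")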

Draft Dragomans Content Model

Crow’s Foot Notation

Exercise - Building an ERD

Part Two: Your data is a mess

Tools for dealing with messy data

● Regular Expressions
● OpenRefine

Regular Expressions: Find & Replace on Steroids

● Available in most productivity suites (iWork, Microsoft Word, LibreOffice/OpenOffice)

● Often syntax is a little different across systems

“The regular expression (?<=\.) {2,}(?=[A-Z]) matches at least two spaces occurring after a period (.) and before an upper case letter, as highlighted in the text above.”
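The same pattern can be tried in any regex-aware tool. A small Python sketch (the sample sentence is invented for illustration):

import re

text = "First sentence.   Second sentence.  third one starts in lower case."

# (?<=\.)   lookbehind: a literal period must come immediately before
#  {2,}     two or more spaces
# (?=[A-Z]) lookahead: an upper-case letter must come immediately after
pattern = re.compile(r"(?<=\.) {2,}(?=[A-Z])")

# Collapse the matched runs of spaces to a single space; the run before
# "third" is left alone because the lookahead requires an upper-case letter.
print(pattern.sub(" ", text))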

OpenRefine

● Similar to spreadsheet software

● Installed on your computer, but used through your browser

● “Power Tool” for messy data

The following will draw heavily on this lesson - http://programminghistorian.org/lessons/cleaning-data-with-openrefine (Thanks to Seth van Hooland, Ruben Verborgh, and Max De Wilde)

Base Assumption of OpenRefine

● You have “structured data”
● Some consistent and machine-readable logic has been applied to your data
o Excel, .csv, XML
● You may have structured data and not know it
o Check export options from any software you regularly use

1. Remove duplicates
2. Remove blanks
3. Make data atomistic (smallest meaningful unit)
4. Keep terms/formats consistent

Set appropriate options and “Create Project”

Project is created with 75,814 rows.

1. Look for Blank Records

See if any RecordIDs are blank by using a numeric facet

“Non-numeric” rows are blank.

Hovering over the cell makes an “edit” link visible

The “blank” fields actually contained a single whitespace. You can delete the whitespace and then select “Apply to All Identical Cells”.

A confirmation message will always show up noting what you’ve done, and giving you a chance to “undo”
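Outside Refine, the same idea (trim stray whitespace, then treat empty Record IDs as blank) looks roughly like this in Python; the rows are toy values invented for illustration:

# Toy rows standing in for the loaded spreadsheet.
rows = [
    {"Record ID": "101", "Category": "Maps"},
    {"Record ID": " ",   "Category": "Posters"},        # a single whitespace, not truly empty
    {"Record ID": "",    "Category": "Lantern slides"},
]

# Trim whitespace so IDs containing only a space count as blank.
for row in rows:
    row["Record ID"] = row["Record ID"].strip()

blank_rows = [r for r in rows if not r["Record ID"]]
print(f"{len(blank_rows)} rows have a blank Record ID")  # -> 2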

2. Look for Duplicate Records using Record ID (since it should be unique)

Sorting is a visual tool only unless you “Reorder rows permanently”

“Blank down” will delete the second instance of a duplicated “Record ID”

Then, we can facet the “Record ID” column by blank records.

The “true” facet contains all the blank records.

Clicking the “true” link will narrow to the blank records, which can then be removed.
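Conceptually, “sort, blank down, facet by blank, remove” keeps only the first row seen for each Record ID. A rough Python sketch with invented rows:

# Toy rows, already sorted by Record ID.
rows = [
    {"Record ID": "101", "Title": "Poster A"},
    {"Record ID": "101", "Title": "Poster A"},   # duplicate to be dropped
    {"Record ID": "102", "Title": "Map B"},
]

seen, deduped = set(), []
for row in rows:
    if row["Record ID"] not in seen:      # first occurrence wins
        seen.add(row["Record ID"])
        deduped.append(row)

print(f"{len(rows) - len(deduped)} duplicate rows removed")  # -> 1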

3. Make data atomistic

“Category” contains numerous categories separated by the “|” character

You can tell the system to split the cells using this character.

Now only single categories appear.
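The same split is just a string split on the “|” character; a small Python sketch (the category values are invented):

# A multi-valued Category cell, pipe-separated.
row = {"Record ID": "101", "Category": "Maps|Posters|Trade cards"}

# One output row per category, so every cell holds a single, atomic value.
atomic_rows = [
    {**row, "Category": category.strip()}
    for category in row["Category"].split("|")
    if category.strip()
]

for r in atomic_rows:
    print(r)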

Creating a text facet on “Categories” brings up all the options in this column.

We can “cluster” to detect similar terms that might have variances in spelling or capitalization

4. Make terms consistent

This interface allows you to select which term is authoritative. You can then merge terms together.
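Under the hood, clustering of this kind reduces each value to a key and groups values that share one. A simplified sketch of that “fingerprint” idea in Python (not Refine’s exact implementation; the variant spellings are invented):

import string

def fingerprint(value):
    """Lower-case, drop punctuation, and sort the unique tokens."""
    cleaned = value.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(sorted(set(cleaned.split())))

variants = ["Trade cards", "trade cards", "Trade Cards."]
clusters = {}
for v in variants:
    clusters.setdefault(fingerprint(v), []).append(v)

for key, members in clusters.items():
    # Pick one authoritative spelling per cluster and merge the rest into it.
    print(key, "->", members)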

A couple of additional features...

The “Undo/Redo” tab allows you to back up in steps to the creation of your project, if you make a mistake.

A “text filter” can allow you to search in a column (by regular expression too!)

Refine has its own set of regular expressions that can be used to perform functions on data.

https://github.com/OpenRefine/OpenRefine/wiki/GREL-Functions

A full list of these is available on GitHub.

Finally, projects can be exported as Refine projects, but also in a number of additional structured formats.

Do this frequently.

Structured data is beautiful data. Make a plan to create structured data during your research.

Clean legacy data or data you inherit by becoming a regular expression (regex) expert and/or using a tool like OpenRefine.

Go to your library or ITS department to see if you can get support. Thanks for listening to me!
