
2006.04.20 - SLIDE 1 IS 240 – Spring 2006

Prof. Ray Larson

University of California, Berkeley

School of Information Management & Systems

Tuesday and Thursday 10:30 am - 12:00 pm

Spring 2006
http://www.sims.berkeley.edu/academics/courses/is240/s06/

Principles of Information Retrieval

Lecture 26: Web Searching

2006.04.20 - SLIDE 2 IS 240 – Spring 2006

Mini-TREC

• Proposed Schedule
  – February 14-16 – Database and previous queries
  – March 2 – Report on system acquisition and setup
  – March 2 – New queries for testing…
  – April 20 – Results due
  – April 25 – Results and system rankings
  – May 9 – Group reports and discussion

2006.04.20 - SLIDE 3 IS 240 – Spring 2006

Term Paper

• Should be about 10-15 pages on:
  – Some area of IR research (or practice) that you are interested in and want to study further
  – Experimental tests of systems or IR algorithms
  – Build an IR system, test it, and describe the system and its performance

• Due May 9th (last day of class)

2006.04.20 - SLIDE 4 IS 240 – Spring 2006

Today

• Review
  – Web Crawling and Search Issues
  – Web Search Engines and Algorithms

• Web Search Processing
  – Parallel Architectures (Inktomi – Eric Brewer)
  – Cheshire III Design

Credit for some of the slides in this lecture goes to Marti Hearst and Eric Brewer

2006.04.20 - SLIDE 5 IS 240 – Spring 2006

Web Crawlers

• How do the web search engines get all of the items they index?

• More precisely:
  – Put a set of known sites on a queue
  – Repeat the following until the queue is empty:
    • Take the first page off of the queue
    • If this page has not yet been processed:
      – Record the information found on this page
        » Positions of words, links going out, etc.
      – Add each link on the current page to the queue
      – Record that this page has been processed

• In what order should the links be followed?
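A minimal sketch of this crawl loop in Python. fetch_links is a hypothetical helper that would download a page, record its contents (word positions, outgoing links, etc.), and return the outgoing links; a real crawler must also honor robots.txt, throttle requests, and survive fetch errors.

from collections import deque

def crawl(seed_urls, fetch_links, max_pages=1000):
    queue = deque(seed_urls)           # put the known sites on a queue
    processed = set()                  # pages already processed
    while queue and len(processed) < max_pages:
        url = queue.popleft()          # take the first page off the queue
        if url in processed:           # skip pages we have already seen
            continue
        for link in fetch_links(url):  # record page info, get its links
            queue.append(link)         # add each link to the queue
        processed.add(url)             # record that this page is done
    return processed

With popleft() the queue gives a breadth-first visit order; changing it to pop() makes the same loop depth-first, which is exactly the visit-order question above (and the topic of the next slide).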

2006.04.20 - SLIDE 6 IS 240 – Spring 2006

Page Visit Order

• Animated examples of breadth-first vs. depth-first search on trees: http://www.rci.rutgers.edu/~cfs/472_html/AI_SEARCH/ExhaustiveSearch.html

2006.04.20 - SLIDE 7 IS 240 – Spring 2006

Sites Are Complex Graphs, Not Just Trees

[Figure: six example sites (Site 1 – Site 6), each containing several pages; links run both within and between sites, so the web forms a graph with cycles rather than a tree.]

2006.04.20 - SLIDE 8 IS 240 – Spring 2006

Web Crawling Issues

• Keep out signs
  – A file called robots.txt tells the crawler which directories are off limits

• Freshness
  – Figure out which pages change often
  – Recrawl these often

• Duplicates, virtual hosts, etc.
  – Convert page contents with a hash function
  – Compare new pages to the hash table (see the hashing sketch after this list)

• Lots of problems
  – Server unavailable
  – Incorrect HTML
  – Missing links
  – Infinite loops

• Web crawling is difficult to do robustly!
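A minimal sketch of the duplicate check by content hashing described above. This catches exact duplicates only; near-duplicate detection (e.g. shingling) needs more machinery.

import hashlib

seen_hashes = set()

def is_duplicate(page_content: bytes) -> bool:
    digest = hashlib.sha1(page_content).hexdigest()  # fingerprint the page
    if digest in seen_hashes:
        return True          # same content seen before (e.g. a virtual host)
    seen_hashes.add(digest)  # remember it in the hash table
    return False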

2006.04.20 - SLIDE 9 IS 240 – Spring 2006

Search Engines

• Crawling

• Indexing

• Querying

2006.04.20 - SLIDE 10 IS 240 – Spring 2006

Web Search Engine Layers

From description of the FAST search engine, by Knut Risvik: http://www.infonortics.com/searchengines/sh00/risvik_files/frame.htm

2006.04.20 - SLIDE 11 IS 240 – Spring 2006

Standard Web Search Engine Architecture

[Figure: crawl the web → check for duplicates, store the documents → create an inverted index. The inverted index feeds the search engine servers, which take the user query, look up DocIds, and show results to the user.]

2006.04.20 - SLIDE 12 IS 240 – Spring 2006

More detailed architecture, from Brin & Page '98. Only covers the preprocessing in detail, not the query serving.

2006.04.20 - SLIDE 13 IS 240 – Spring 2006

Indexes for Web Search Engines

• Inverted indexes are still used, even though the web is so huge

• Most current web search systems partition the indexes across different machines
  – Each machine handles different parts of the data (Google uses thousands of PC-class processors and keeps most things in main memory)

• Other systems duplicate the data across many machines
  – Queries are distributed among the machines

• Most do a combination of these
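A minimal sketch of a document-partitioned inverted index, where each partition stands in for one machine and a query fans out to all of them. This illustrates the partitioning idea only; it is not Google's or FAST's actual design.

from collections import defaultdict

class IndexPartition:
    def __init__(self):
        self.postings = defaultdict(set)    # term -> local doc ids

    def add(self, doc_id, terms):
        for term in terms:
            self.postings[term].add(doc_id)

    def search(self, term):
        return self.postings.get(term, set())

partitions = [IndexPartition() for _ in range(4)]

def index_doc(doc_id, terms):
    # shard documents across partitions by doc id
    partitions[doc_id % len(partitions)].add(doc_id, terms)

def search(term):
    # fan the query out to every partition and union the hits
    hits = set()
    for p in partitions:
        hits |= p.search(term)
    return hits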

2006.04.20 - SLIDE 14 IS 240 – Spring 2006

Search Engine Querying

In this example, the data for the pages is partitioned across machines. Additionally, each partition is allocated multiple machines to handle the queries.

Each row can handle 120 queries per second

Each column can handle 7M pages

To handle more queries, add another row.

From description of the FAST search engine, by Knut Risvik: http://www.infonortics.com/searchengines/sh00/risvik_files/frame.htm

2006.04.20 - SLIDE 15 IS 240 – Spring 2006

Querying: Cascading Allocation of CPUs

• A variation on this that produces a cost savings:
  – Put high-quality/common pages on many machines
  – Put lower-quality/less common pages on fewer machines
  – Query goes to the high-quality machines first
  – If no hits are found there, go to the other machines

2006.04.20 - SLIDE 16 IS 240 – Spring 2006

Google

• Google maintains (probably) the world's largest Linux cluster (over 15,000 servers)

• These are partitioned between index servers and page servers
  – Index servers resolve the queries (massively parallel processing)
  – Page servers deliver the results of the queries

• Over 8 billion web pages are indexed and served by Google

2006.04.20 - SLIDE 17 IS 240 – Spring 2006

Search Engine Indexes

• Starting points for users include:

• Manually compiled lists
  – Directories

• Page “popularity”
  – Frequently visited pages (in general)
  – Frequently visited pages as a result of a query

• Link “co-citation”
  – Which sites are linked to by other sites?

2006.04.20 - SLIDE 18 IS 240 – Spring 2006

Starting Points: What is Really Being Used?

• Today's search engines combine these methods in various ways
  – Integration of directories
    • Today most web search engines integrate categories into the results listings
    • Lycos, MSN, Google
  – Link analysis
    • Google uses it; others are also using it
    • Words on the links seem to be especially useful
  – Page popularity
    • Many use DirectHit's popularity rankings

2006.04.20 - SLIDE 19 IS 240 – Spring 2006

Web Page Ranking

• Varies by search engine
  – Pretty messy in many cases
  – Details usually proprietary and fluctuating

• Combining subsets of:
  – Term frequencies
  – Term proximities
  – Term position (title, top of page, etc.)
  – Term characteristics (boldface, capitalized, etc.)
  – Link analysis information
  – Category information
  – Popularity information
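A minimal sketch of combining such feature subsets as a weighted sum. The feature names and weights are invented for illustration; as noted above, the real formulas are proprietary.

WEIGHTS = {
    "term_frequency": 1.0,   # raw term-frequency contribution
    "term_proximity": 0.5,   # terms near each other score higher
    "in_title": 2.0,         # term position: title hit
    "link_score": 1.5,       # link analysis information
    "popularity": 0.8,       # popularity information
}

def score(features: dict) -> float:
    # weighted sum over whichever features this engine chooses to use
    return sum(WEIGHTS[name] * value
               for name, value in features.items()
               if name in WEIGHTS)

# e.g. score({"term_frequency": 3, "in_title": 1, "link_score": 0.2})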

2006.04.20 - SLIDE 20 IS 240 – Spring 2006

Ranking: Hearst ‘96

• Proximity search can help get high-precision results if >1 term
  – Combine Boolean and passage-level proximity
  – Proves significant improvements when retrieving top 5, 10, 20, 30 documents
  – Results reproduced by Mitra et al. '98
  – Google uses something similar

2006.04.20 - SLIDE 21 IS 240 – Spring 2006

Ranking: Link Analysis

• Assumptions:
  – If the pages pointing to this page are good, then this is also a good page
  – The words on the links pointing to this page are useful indicators of what this page is about
  – References: Page et al. '98, Kleinberg '98

2006.04.20 - SLIDE 22 IS 240 – Spring 2006

Ranking: Link Analysis

• Why does this work?
  – The official Toyota site will be linked to by lots of other official (or high-quality) sites
  – The best Toyota fan-club site probably also has many links pointing to it
  – Less high-quality sites do not have as many high-quality sites linking to them

2006.04.20 - SLIDE 23 IS 240 – Spring 2006

Ranking: PageRank

• Google uses the PageRank algorithm

• We assume page A has pages T1...Tn which point to it (i.e., are citations). The parameter d is a damping factor which can be set between 0 and 1; d is usually set to 0.85. C(A) is defined as the number of links going out of page A. The PageRank of a page A is given as follows:

• PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

• Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one
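A minimal sketch of computing PageRank by iterating this formula to a fixed point. graph maps each page to the pages it links to; every page starts at rank 1, as in the toy example on the next slide, and every page is assumed to have at least one outgoing link.

def pagerank(graph, d=0.85, iterations=50):
    ranks = {page: 1.0 for page in graph}
    out_degree = {page: len(links) for page, links in graph.items()}
    for _ in range(iterations):
        new_ranks = {}
        for page in graph:
            # sum PR(Ti)/C(Ti) over the pages Ti that point to this page
            incoming = sum(ranks[t] / out_degree[t]
                           for t, links in graph.items() if page in links)
            new_ranks[page] = (1 - d) + d * incoming
        ranks = new_ranks
    return ranks

# e.g. pagerank({"A": ["B"], "B": ["A", "C"], "C": ["A"]})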

2006.04.20 - SLIDE 24 IS 240 – Spring 2006

PageRank

[Figure: a toy PageRank example. Pages T1 (PR = .725) and T2–T7 (PR = 1 each) point to page A, and T8 (PR = 2.46625) contributes via pages X1 and X2; page A ends up with PR = 4.2544375.]

Note: these are not real PageRanks, since they include values >= 1

2006.04.20 - SLIDE 25 IS 240 – Spring 2006

PageRank

• Similar to calculations used in scientific citation analysis (e.g., Garfield et al.) and social network analysis (e.g., Wasserman et al.)

• Similar to other work on ranking (e.g., the hubs and authorities of Kleinberg et al.)

• How is Amazon similar to Google in terms of the basic insights and techniques of PageRank?

• How could PageRank be applied to other problems and domains?

2006.04.20 - SLIDE 26 IS 240 – Spring 2006

Today

• Review
  – Web Crawling and Search Issues
  – Web Search Engines and Algorithms

• Web Search Processing
  – Parallel Architectures (Inktomi – Eric Brewer)
  – Cheshire III Design

Credit for some of the slides in this lecture goes to Marti Hearst and Eric Brewer

[Slides 27–46: figures only (parallel search architecture, Inktomi / Eric Brewer); no text was captured in this transcript.]

2006.04.20 - SLIDE 47 IS 240 – Spring 2006

Digital Library Grid Initiatives: Cheshire3 and the Grid

Ray R. Larson
University of California, Berkeley
School of Information Management and Systems

Rob Sanderson
University of Liverpool
Dept. of Computer Science

Thanks to Dr. Eric Yen and Prof. Michael Buckland for parts of this presentation

Presentation from DLF Forum, April 2005

2006.04.20 - SLIDE 48 IS 240 – Spring 2006

Overview

• The Grid, Text Mining and Digital Libraries
  – Grid Architecture
  – Grid IR Issues

• Cheshire3: Bringing Search to Grid-Based Digital Libraries
  – Overview
  – Grid Experiments
  – Cheshire3 Architecture
  – Distributed Workflows

2006.04.20 - SLIDE 49 IS 240 – Spring 2006

[Figure: layered Grid architecture (Dr. Eric Yen, Academia Sinica, Taiwan). From top to bottom:
Applications – Chemical Engineering, Climate, High-energy physics, Cosmology, Astrophysics, Combustion, …
Application Toolkits – Data Grid, Remote Computing, Remote Visualization, Collaboratories, Portals, Remote sensors, …
Grid Services – protocols, authentication, policy, instrumentation, resource management, discovery, events, etc.
Grid Fabric – storage, networks, computers, display devices, etc. and their associated local services
Grid middleware spans the Services and Fabric layers.]

2006.04.20 - SLIDE 50 IS 240 – Spring 2006

[Figure: the same layered Grid architecture (ECAI/AS Grid Digital Library Workshop), with digital library applications added:
Applications – Chemical Engineering, Climate, High-energy physics, Cosmology, Astrophysics, Combustion, Humanities computing, Digital Libraries, Bio-Medical, …
Application Toolkits – Data Grid, Remote Computing, Remote Visualization, Collaboratories, Portals, Remote sensors, Text Mining, Metadata management, Search & Retrieval, …
Grid Services – protocols, authentication, policy, instrumentation, resource management, discovery, events, etc.
Grid Fabric – storage, networks, computers, display devices, etc. and their associated local services]

2006.04.20 - SLIDE 51 IS 240 – Spring 2006

Grid-Based Digital Libraries

• Large-scale distributed storage requirements and technologies
• Organizing distributed digital collections
• Shared metadata – standards and requirements
• Managing distributed digital collections
• Security and access control
• Collection replication and backup
• Distributed information retrieval issues and algorithms

2006.04.20 - SLIDE 52 IS 240 – Spring 2006

Grid IR Issues

• Want to preserve the same retrieval performance (precision/recall) while hopefully increasing efficiency (i.e., speed)

• Very large-scale distribution of resources is a challenge for sub-second retrieval

• Different from most other typical Grid processes: IR is potentially less computing-intensive and more data-intensive

• In many ways Grid IR replicates the process (and problems) of metasearch or distributed search
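A minimal sketch of that shared pattern: broadcast the query to every node, then merge the ranked result lists. search_node is a hypothetical per-node search function returning (score, doc_id) pairs; the data-intensive part is moving postings and results around, not the computation.

import heapq

def grid_search(query, nodes, search_node, k=10):
    all_hits = []
    for node in nodes:                            # fan the query out
        all_hits.extend(search_node(node, query)) # [(score, doc_id), ...]
    return heapq.nlargest(k, all_hits)            # merge: global top k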

2006.04.20 - SLIDE 53 IS 240 – Spring 2006

Cheshire3 Overview

• XML Information Retrieval Engine
  – 3rd generation of the UC Berkeley Cheshire system, as co-developed at the University of Liverpool
  – Uses Python for flexibility and extensibility, but imports C/C++ based libraries for processing speed
  – Standards based: XML, XSLT, CQL, SRW/U, Z39.50, OAI, to name a few
  – Grid capable: uses distributed configuration files, workflow definitions and PVM (currently) to scale from one machine to thousands of parallel nodes
  – Free and Open Source Software (GPL licence)
  – http://www.cheshire3.org/ (under development!)

2006.04.20 - SLIDE 54 IS 240 – Spring 2006

Cheshire3 Server Overview

[Figure: Cheshire3 server overview. The Cheshire3 SERVER ties together an API, indexing (transformers and normalization), search, scan, result sets, XML config & metadata info, indexes, a local DB, user info, access info, config & control, authentication, clustering, and a staff UI. A protocol handler and DB API connect remote systems (any protocol) and user clients over Z39.50, SOAP, OAI, SRW, OpenURL, UDDI/WSRP, JDBC and OGIS, through an Apache interface with server control; native calls fetch and put records by ID over the network.]

2006.04.20 - SLIDE 55 IS 240 – Spring 2006

Cheshire3 Grid Tests

• Running on a 30-processor cluster in Liverpool using PVM (Parallel Virtual Machine)

• Using 16 processors with one “master” and 22 “slave” processes, we were able to parse and index MARC data at about 13,000 records per second

• On a similar setup, 610 MB of TEI data can be parsed and indexed in seconds

2006.04.20 - SLIDE 56 IS 240 – Spring 2006

SRB and SDSC Experiments

• We are working with SDSC to include SRB support

• We are planning to continue working with SDSC and to run further evaluations using the TeraGrid server(s) through a “small” grant for 30,000 CPU hours
  – SDSC's TeraGrid cluster currently consists of 256 IBM cluster nodes, each with dual 1.5 GHz Intel® Itanium® 2 processors, for a peak performance of 3.1 teraflops. The nodes are equipped with four gigabytes (GB) of physical memory per node. The cluster is running SuSE Linux and is using Myricom's Myrinet cluster interconnect network.

• Planned large-scale test collections include NSDL, the NARA repository, CiteSeer, and the “million books” collections of the Internet Archive

2006.04.20 - SLIDE 57 IS 240 – Spring 2006

Cheshire3 Object Model

[Figure: Cheshire3 object model. A Server holds Databases; a Database ties together ConfigStore, UserStore/User, ProtocolHandler, Query, ResultSet, Index/IndexStore, RecordStore and DocumentStore objects. In the ingest process, a DocumentGroup yields Documents, which pass through PreParsers and a Parser to become Records; Extracters and Normalisers produce Terms for Indexes, and Transformers turn Records back into Documents.]

2006.04.20 - SLIDE 58 IS 240 – Spring 2006

Cheshire3 Data Objects

• DocumentGroup:
  – A collection of Document objects (e.g. from a file, directory, or external search)

• Document:
  – A single item, in any format (e.g. PDF file, raw XML string, relational table)

• Record:
  – A single item, represented as parsed XML

• Query:
  – A search query, in the form of CQL (an abstract query language for Information Retrieval)

• ResultSet:
  – An ordered list of pointers to records

• Index:
  – An ordered list of terms extracted from Records

2006.04.20 - SLIDE 59 IS 240 – Spring 2006

Cheshire3 Process Objects

• PreParser:
  – Given a Document, transform it into another Document (e.g. PDF to Text, Text to XML)

• Parser:
  – Given a Document as a raw XML string, return a parsed Record for the item

• Transformer:
  – Given a Record, transform it into a Document (e.g. via XSLT, from XML to PDF, or XML to relational table)

• Extracter:
  – Extract terms of a given type from an XML sub-tree (e.g. extract Dates, Keywords, Exact string value)

• Normaliser:
  – Given the results of an extracter, transform the terms, maintaining the data structure (e.g. CaseNormaliser)
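A rough Python sketch of how these process objects chain together during ingest. The class names and the single process method are illustrative assumptions for this lecture, not the actual Cheshire3 API.

import xml.etree.ElementTree as ET

class TextToXmlPreParser:
    def process(self, document):
        # Document -> Document: wrap plain text in a minimal XML envelope
        return "<doc>%s</doc>" % document

class Parser:
    def process(self, document):
        # Document (raw XML string) -> Record (parsed XML)
        return ET.fromstring(document)

class KeywordExtracter:
    def process(self, record):
        # Record -> terms of a given type (here: whitespace-split keywords)
        return (record.text or "").split()

class CaseNormaliser:
    def process(self, terms):
        # transform the terms while keeping the data structure
        return [t.lower() for t in terms]

pipeline = [TextToXmlPreParser(), Parser(), KeywordExtracter(), CaseNormaliser()]
data = "Grid IR with Cheshire3"
for stage in pipeline:
    data = stage.process(data)
print(data)   # ['grid', 'ir', 'with', 'cheshire3']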

2006.04.20 - SLIDE 60 IS 240 – Spring 2006

Cheshire3 Abstract Objects

• Server:
  – A logical collection of databases

• Database:
  – A logical collection of Documents, their Record representations, and Indexes of extracted terms

• Workflow:
  – A 'meta-process' object that takes a workflow definition in XML and converts it into executable code

2006.04.20 - SLIDE 61 IS 240 – Spring 2006

Workflow Objects

• Workflows are first class objects in Cheshire3 (though not represented in the model diagram)

• All Process and Abstract objects have individual XML configurations with a common base schema with extensions

• We can treat configurations as Records and store them in regular RecordStores, allowing access via regular IR protocols.

2006.04.20 - SLIDE 62 IS 240 – Spring 2006

Workflow References

• Workflows contain a series of instructions to perform, with reference to other Cheshire3 objects

• Reference is via pseudo-unique identifiers … Pseudo because they are unique within the current context (Server vs Database)

• Workflows are objects, so this enables server level workflows to call database specific workflows with the same identifier

2006.04.20 - SLIDE 63 IS 240 – Spring 2006

Distributed Processing

• Each node in the cluster instantiates the configured architecture, potentially through a single ConfigStore.

• Master nodes then run a high level workflow to distribute the processing amongst Slave nodes by reference to a subsidiary workflow

• As object interaction is well defined in the model, the result of a workflow is equally well defined. This allows for the easy chaining of workflows, either locally or spread throughout the cluster.
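A rough sketch of this master/slave pattern, using Python's multiprocessing as a stand-in for PVM; the function names and round-robin batching are illustrative, not Cheshire3's actual distribution code.

from multiprocessing import Pool

def slave_workflow(doc_batch):
    # stand-in for the subsidiary workflow each slave node runs
    return [doc.upper() for doc in doc_batch]

def master_workflow(documents, n_slaves=4):
    # the master splits the work and distributes it among the slaves
    batches = [documents[i::n_slaves] for i in range(n_slaves)]
    with Pool(n_slaves) as pool:
        results = pool.map(slave_workflow, batches)
    return [record for batch in results for record in batch]

# call master_workflow(...) under an if __name__ == "__main__": guard
# on platforms that spawn rather than fork worker processes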

2006.04.20 - SLIDE 64 IS 240 – Spring 2006

Workflow Example1

<subConfig id="buildWorkflow">
  <objectType>workflow.SimpleWorkflow</objectType>
  <workflow>
    <log>Starting Load</log>
    <object type="recordStore" function="begin_storing"/>
    <object type="database" function="begin_indexing"/>
    <for-each>
      <object type="workflow" ref="buildSingleWorkflow"/>
    </for-each>
    <object type="recordStore" function="commit_storing"/>
    <object type="database" function="commit_indexing"/>
    <object type="database" function="commit_metadata"/>
  </workflow>
</subConfig>

2006.04.20 - SLIDE 65 IS 240 – Spring 2006

Workflow Example2

<subConfig id="buildSingleWorkflow">
  <objectType>workflow.SimpleWorkflow</objectType>
  <workflow>
    <object type="workflow" ref="PreParserWorkflow"/>
    <try>
      <object type="parser" ref="NsSaxParser"/>
    </try>
    <except>
      <log>Unparsable Record</log>
      <raise/>
    </except>
    <object type="recordStore" function="create_record"/>
    <object type="database" function="add_record"/>
    <object type="database" function="index_record"/>
    <log>Loaded Record</log>
  </workflow>
</subConfig>
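A toy sketch of how a definition like the XML above could be converted into executable code, illustrating the 'meta-process' idea of the Workflow object. This is not the real Cheshire3 engine: only log steps and simple object steps are handled, and for-each, try/except and ref= dispatch are omitted for brevity.

import xml.etree.ElementTree as ET

def run_workflow(xml_text, objects, data=None):
    # objects maps a type name (e.g. "recordStore") to a live object
    root = ET.fromstring(xml_text)
    for step in root.find("workflow"):
        if step.tag == "log":
            print("LOG:", step.text)
        elif step.tag == "object" and step.get("function"):
            # look up the named object and call the named function on it
            target = objects[step.get("type")]
            data = getattr(target, step.get("function"))(data)
    return data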

2006.04.20 - SLIDE 66 IS 240 – Spring 2006

Workflow Standards

• Cheshire3 workflows do not conform to any standard schema

• Intentional:
  – Workflows are specific to and dependent on the Cheshire3 architecture
  – Replaces the distribution of lines of code for distributed processing
  – Replaces many lines of code in general

• Needs to be easy to understand and create

• GUI workflow builder coming (web and standalone)

2006.04.20 - SLIDE 67 IS 240 – Spring 2006

External Integration

• Looking at integration with existing cross-service workflow systems, in particular Kepler/Ptolemy

• Possible integration at two levels:
  – Cheshire3 as a service (black box) … identify a workflow to call
  – Cheshire3 object as a service (duplicate existing workflow function) … but recall the access speed issue

2006.04.20 - SLIDE 68 IS 240 – Spring 2006

Conclusions

• Scalable Grid-Based digital library services can be created and provide support for very large collections with improved efficiency

• The Cheshire3 IR and DL architecture can provide Grid (or single processor) services for next-generation DLs

• Available as open source via http://cheshire3.sourceforge.net or http://www.cheshire3.org/