
Usability and impact of digital libraries: a review

Sudatta Chowdhury, Monica Landoni and Forbes Gibb
Department of Computer and Information Sciences, University of Strathclyde,

Glasgow, UK

Abstract

Purpose – The main objective of this paper is to review work on the usability and impact of digital libraries.

Design/methodology/approach – Work on the usability and impact of digital libraries is reviewed. Specific studies on the usability and impact of digital libraries in specific domains are also discussed in order to identify general and specific usability and impact measures.

Findings – The usability studies reviewed in this paper show that a number of approaches have been used to assess usability. In addition to the technical aspects of digital library design (e.g. architecture, interfaces and search tools), there are a number of usability issues such as globalisation, localisation, language, culture issues, content and human information behaviour. Digital libraries should, however, be evaluated primarily with respect to their target users, applications and contexts.

Research limitations/implications – Although a digital library evaluation study may have several objectives, ranging from the evaluation of its design and architecture to the evaluation of its usability and its impact on the target users, this paper focuses on usability and impact.

Originality/value – This paper provides insights into the state of the art in relation to the usability and impact of digital libraries.

Keywords Digital libraries, User studies

Paper type Research paper

Introduction

Research and development in the field of digital libraries has grown significantly over the last decade, and a large number of operational digital libraries are now in existence. These include hybrid libraries through which users can get access to digital information resources alongside traditional print-based information resources. In the light of the extensive resources, effort and enthusiasm that have gone into building digital libraries, it is appropriate to look at how researchers have tackled the problem of evaluating their effectiveness. An early review of digital libraries by Chowdhury and Chowdhury (1999) identified a number of areas of ongoing research. However, that paper did not include work on the usability and evaluation of digital libraries as, at the time, only limited research had taken place in these areas. This situation has now changed and both system- and user-centred evaluations of digital libraries have taken place.

Borgman (2000) notes that early digital library research was focused mainly on the development of digital library systems or on the development of services, while Saracevic (2000) comments that practical applications have outpaced the emergence of methods for evaluating them. There are several reasons for the lack of interest in the evaluation of digital libraries during the early period of digital library development.


Refereed article received 25 April 2006. Approved for publication 28 June 2006.

Online Information Review, Vol. 30 No. 6, 2006, pp. 656-680. © Emerald Group Publishing Limited, 1468-4527. DOI 10.1108/14684520610716153

Saracevic (2004) concludes that evaluation of digital libraries is not impossible but is a very difficult task to accomplish because of:

• complexity – digital libraries are highly complex and therefore are hard to evaluate;

• pre-maturity – digital libraries are in an early stage of development;

• interest levels – interest in evaluation has been limited;

• funding – limited funds have been made available for evaluation;

• culture – evaluation is not a part of the culture of operating digital libraries; and

• cynicism – who wants to know about or demonstrate performance?

In the same tone, Borgman et al. (2000) state that digital libraries are difficult to evaluate due to their richness, complexity, and variety of uses and users. Chowdhury and Chowdhury (1999, 2003) conclude that a new set of parameters is required for the evaluation of digital libraries in order to reflect this diversity. A digital library is a complex construct and its success, if that is what is to be measured through an evaluation study, will depend on a number of factors. They suggest that, in addition to content, information retrieval and usability, several other factors such as hardware, software and networking, data formats, access and transfer times, failure rates, and development and maintenance costs should also be used.

According to Saracevic (2000) the ultimate goal of digital library evaluation is to study how digital libraries transform research, education, learning and life. This goal leads to a set of associated questions such as: what is to be evaluated, by which criteria, within which boundaries, from which perspectives, etc.? These questions are rooted in the nature and development of digital libraries as well as in the perspectives of different categories of digital library researchers. Marchionini (2000) advocates that the ultimate goal of digital library evaluation is to assess the impact of digital libraries on their patrons' lives, and the larger social milieu. He suggests that in order to assess how good a digital library is, one has to study how it influences the day-to-day activities of the target users, and thus society as a whole.

While the initial period of digital library research paid relatively little attention to evaluation, over the last five years or so a number of researchers have attempted to evaluate different aspects of digital libraries. This is reflected in bibliographies such as: the DELOS bibliography (http://dilib.ionio.gr/wp7/literature.html), Giersch et al. (2003), Zhang (2004) and Neuhaus (2005). There are also regular international workshops on digital library evaluation under the DELOS programme and conferences such as JCDL (www.jcdl2005.org), ECDL (www.ecdl2005.org/) and ICADL (www.icadl2005.ait.ac.th/). These studies have focused on two broad areas: usability and impact studies. Work in these two areas has been largely influenced by human information behaviour studies. A number of guidelines and toolkits for digital library evaluation have also been developed. For example, Saracevic (2000) proposed a framework for digital library evaluation along with a set of criteria and guidelines, and the eValued (www.evalued.uce.ac.uk/) and EQUINOX (http://equinox.dcu.ie/) projects have also developed evaluation frameworks. However, to date, there is no standard model for digital library evaluation, nor is there a comprehensive set of models and toolkits that can be used by digital library evaluators.


In the words of Saracevic (2005), although many evaluation concepts, approaches, and models are available, “it seems that they had little or no visible impact on those actually doing evaluation”.

Based on an analysis of over 80 studies on digital library evaluation, Saracevic (2004) concludes that evaluation of digital libraries is still in its formative years and that: “evaluation is not a wide or even growing activity in digital libraries. As a matter of fact, evaluation is more conspicuous by its absence or minimal presence in the vast majority of works on digital libraries; in both research and practice, evaluation seems to be an exception rather than a rule” (Saracevic, 2005). This paper provides a review of the state of the art of studies into the usability and user impact of digital libraries, with a focus on user-centred evaluation. It begins with a general discussion of usability, impact and human information behaviour (HIB) studies and then moves on to more specific methods and criteria for usability and evaluation.

Usability, impact and HIB studies

Usability studies

Identification of the factors that contribute to usability is the first step in digital library evaluation (Borgman and Rasmussen, 2005). However, usability has different meanings depending on the discipline from which the evaluation originates. For instance, librarians perceive the usability of an information service in terms of efficient and effective access to information (Chowdhury, 2004b). The HCI community, on the other hand, generally define usability with respect to the user interface (Hansen, 1998; Nielsen and Levy, 1994), and especially in terms of the effectiveness, efficiency and/or satisfaction of the user with a particular interface (Choudhury et al., 2002; Nielsen, 1993a, b; Marchionini and Komlodi, 1998; Norlin, 2000).

In the context of digital libraries, usability has been defined as how easily and effectively users can find information from a digital library, with an increasing emphasis being placed on the user (Dillon, 1994). Van House et al. (1996) suggest that the usability of digital libraries depends on three key components: content, functionality and the user interface. However, Park (2000) comments that most early studies on the usability of multiple online databases focused on technical and performance issues rather than on interaction issues. Borgman (2000) comments that the objectives of usability studies have shifted substantially: initially the purpose was to shape human beings to adapt to the technology, while now the objective is to shape the technology to suit human needs and capabilities.

Chowdhury and Chowdhury (2003) suggest that usability is a relative concept and must be judged on the basis of a digital library's intended goals. Emphasising the importance of cultural issues on the usability of information services, Duncker et al. (2000) comment that misinterpretation of the importance of colours, forms, symbols, metaphors and language for users coming from different cultural backgrounds can significantly affect the usability and user friendliness of digital libraries. In other words, culture influences “who we are, how we think, how we behave, and how we respond to our environment. Above all, it determines how we learn” (Dunn and Marinetti, n.d.). Recent studies on usability testing (discussed later in this paper) with specific references to digital libraries include those of Blandford and Buchanan (2003), Allen (2002), Dickstein and Mills (2000), and Mitchell (1999).


Impact studies

The problems of assessing the impact of information and information services are well recognised, although it is difficult to formulate direct measures of evaluation (see for instance, Feeney and Grieves, 1994; Lancaster, 1993). Therefore, several indirect measures for evaluation of library and information services have been proposed by researchers (see for example, Baker and Lancaster, 1991; Hill, 1999; Lancaster, 1993). The problems of measuring the impact of information services in the digital library age have increased due to factors such as the location, nature and interests of the users and their contexts, the variety and complexity of information resources, the nature of the information infrastructure which forms the backbone of digital library systems, and so on. A set of parameters, and standard and universal benchmarks, for measuring the impact of digital libraries has not yet appeared. However, recent studies have assessed the impact of digital libraries on specific groups of users and their activities; for example, the Alexandria Digital Library Prototype (ADEPT, www.alexandria.ucsb.edu/), Perseus Digital Library (PDL, www.perseus.tufts.edu/) and National Science Digital Library (NSDL, http://nsdl.org/). These have attempted to assess the use and impact of digital libraries on the activities of the target users. These and other studies dealing with the impact of digital libraries are reviewed later in this paper.

HIB studies

Studies dealing with information behaviour and information seeking are generally called HIB studies. The field of HIB is related to the cognitive approach to interactive information retrieval and seeks to investigate the broader issues related to information seeking and use (Spink et al., 2002; Wilson et al., 2002). In this context there are a number of models for interactive information search and retrieval that place users at the centre of an information retrieval system. Two major research papers reviewing user-centred information retrieval models are those of Wilson (1999) and Ingwersen (2001).

Wilson's (1999) generalised model of information seeking identifies five major sets of variables that may affect access to, and use of, information: psychological, demographic, role-related or interpersonal, environmental, and source characteristics. Later researchers have expanded this list as well as identifying the challenges to information access. Chowdhury (2004a) and Chowdhury and Chowdhury (2003) discuss the barriers to information access in the modern digital library environment, while the psychological dimension has been subject to recent investigation by Heinstrom (2003). Recent developments in the fields of internet, web and digital libraries have significantly influenced the ways in which users access and use electronic information, and the issues have been explored recently by a number of researchers. For example, Kim (2002) has explored the impact of cognitive style on users' web search behaviour, concluding that online search experience influences navigation style, and that cognitive style influences search time.

Since users are at the centre of a digital library evaluation it is reasonable to conclude that information seeking and HIB have a significant influence on digital library usability and impact studies. In other words, any digital library evaluation should also consider the information behaviour of the target users.


Why evaluate?

In the context of digital library evaluation, an obvious question is: “Why is evaluation of the digital library important?”. In the context of general web site evaluation, Wilson (2002) suggests that developers should find ways to evaluate the usability and usefulness of their sites, even though they believe that their sites are intuitive to use or easy to learn. Reeves et al. (2003) suggest there are a number of reasons to evaluate: to meet a requirement established by a funding agency; for political reasons; or because people believe it is the right thing to do. They identify several types of evaluation including: service evaluation, usability evaluation, information retrieval evaluation, biometrics evaluation, transaction log analysis, survey methods, interviews and focus groups, observations, experiments, and evaluation reporting. To this we may add benchmarking, i.e. establishing the relative strengths and weaknesses of an information service.

The eValued (Evidence Base, 2004) project toolkit suggests that evaluation of an electronic information service (EIS) is carried out for five main reasons:

(1) for strategic planning with respect to services;

(2) for day-to-day management of a service;

(3) for investigating uses and impact of a service;

(4) for improving services; and

(5) for justifying services.

Finally, the recommendations of a DELOS workshop (Department of Information Engineering, 2004) were that an evaluation is conducted in order to determine the usefulness, usability and economics of digital libraries.

The evaluation process

Evaluation – general guidelines

Saracevic (2000) suggests that an evaluation project should, at the outset, select the elements to be evaluated and should clearly indicate what is included, and what is excluded. He proposes seven general classes or levels of evaluation of digital libraries:

(1) Social level – at this level the major objective is to assess how well a digital library supports the needs, roles and practices of a society or a community.

(2) Institutional level – at this level the objective is to assess how well a digital library supports the organisational mission and objectives.

(3) Individual level – at this level the objective is to assess how well a digital library supports the information needs, tasks, and activities of individuals or group(s) of users.

(4) The interface level – at this level the objective is to assess how well a digital library interface supports access, searching, navigation, browsing, and interactions with a digital library.

(5) Engineering level – at this level the objective is to assess how well the hardware, networks and related technologies work.


(6) Processing level – at this level the objective is to assess how well the various procedures, techniques, algorithms, operations, etc., perform.

(7) Content level – at this level the objective is to assess how well the information resources are selected, represented, organised, structured and managed.

These levels of evaluation are not mutually exclusive, and indeed in many cases the results of one level may need to be considered in the context of the results from another level. Nevertheless, digital library evaluation should be based on both the user- and the system-centred approaches (as discussed later), and the main challenge is how to make both the user- and system-centred approaches work together.

The HyLife (2002) project recommends that an evaluation usually involves the following stages:

(1) design of the evaluation;

(2) drawing up an evaluation plan;

(3) data gathering and recording;

(4) data analysis and interpretation of results; and

(5) presentation of findings.

Each of these stages may involve a number of tasks and may be quite complex depending on what is being evaluated.

Before undertaking an evaluation it is important to prepare a proper plan. To do so, one will have to identify the decisions that an evaluation should inform. Identifying the decisions may not be an easy task, but is an essential step towards establishing the aspects to be evaluated (e.g. user interface or collection quality), the criteria for evaluation (e.g. accuracy or relevance), and the most appropriate methods (e.g. usability testing or online surveys). Reeves et al. (2003) propose a five-point guideline for the evaluation of digital libraries:

(1) identify the decisions that the digital library must inform;

(2) identify the questions that need to be addressed to inform the pending decisions;

(3) identify the evaluation methods and instruments that will be used to collect the information needed to address the above questions;

(4) carry out the evaluation in a manner that is effective and efficient; and

(5) report the evaluation results in an accurate and timely manner so that they can provide the information needed to make the best possible decisions.

Saracevic (2000, 2004) proposes a similar set of guidelines for evaluation of digital libraries:

• Construct for evaluation. What is to be evaluated; what is actually meant by a digital library; what is encompassed and what elements (components, parts, processes) are to be involved in the evaluation?

• Context of evaluation. What is the goal, framework, viewpoint, or level(s) of evaluation; what is critical for a selected level of evaluation; what are the objective(s) for that level?


• Criteria reflecting performance as related to selected objectives. What are the parameters for performance; what dimensions or characteristics should be evaluated?

• Measures reflecting selected criteria to record the performance. What measures and metrics should be used?

• Methodology for doing evaluation. What measuring instruments, samples and procedures should be used for data collection and for data analysis?

However, Saracevic (2000, 2004) warns that many of these concepts are not yet fully developed and that establishing clear definitions and scope for these concepts is a fundamental challenge for any digital library evaluation.

Larsen and Borgman (2003) list a number of issues that arose from the ECDL 2003 workshop on digital library evaluation. The list includes issues related to information users, research methods and parameters for evaluation such as digital library contents and characteristics, information search patterns, transaction data, etc. The eVALUEd (www.evalued.uce.ac.uk/about.htm), EQUINOX (http://equinox.dcu.ie/) and JUBILEE (http://informationr.net/ir/9-2/paper167.html) projects have also produced a set of methods and guidelines that may be appropriate for digital library evaluation.

Although a number of general methods and guidelines have been developed and proposed by researchers, Saracevic (2004) warns that there is no one standard or agreed best method for evaluating digital libraries, and one has to choose the best possible approach given a particular set of circumstances. Consequently, digital library evaluation projects have used a number of methods including questionnaire surveys, interviews, observation, think aloud, focus groups, task performance, log analysis, usage analysis, record analysis, experiments, economic analysis, ethnographic analysis, and case studies.

Bertot (2004) identifies four evaluation strategies to assess digital libraries:

(1) Outputs assessment – which involves identifying the number of activities that patrons engage in, such as the number of databases used, to determine the usage of resources and services.

(2) Performance measures – which evaluate specific resources or services in terms of efficiency and effectiveness, such as cost per item downloaded (see the sketch after this list).

(3) Service quality – which determines the overall quality of resources and services against a quality standard.

(4) Outcomes assessment – which determines the effects on patrons in terms of the benefits they derive.
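The Python sketch below illustrates how the first two of Bertot's strategies reduce to simple computations over usage statistics; the figures, field names and cost model are hypothetical assumptions, not data from any study cited here.

```python
# Output assessment and a performance measure (cost per item downloaded),
# computed from hypothetical usage figures for an electronic service.

usage = {
    "database_sessions": 12500,      # outputs: counts of patron activities
    "items_downloaded": 8400,
    "annual_service_cost": 21000.0,  # total cost in some currency unit
}

# Outputs assessment: report the raw activity counts.
outputs = {k: v for k, v in usage.items() if k != "annual_service_cost"}

# Performance measure: efficiency expressed as cost per item downloaded.
cost_per_download = usage["annual_service_cost"] / usage["items_downloaded"]

print(outputs)
print(f"Cost per item downloaded: {cost_per_download:.2f}")
```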

Snead et al. (2005) report that it is possible to create a rich and robust evaluation methodology that can meet the needs of diverse user populations by combining functionality, usability, and accessibility.

Evaluation models

Different models for evaluating digital libraries have been proposed. However, as Borgman et al. (2000) comment, very few of these models are proven or universally accepted. Saracevic (2000) comments that the evaluation of digital libraries should “. . . be looking at and contributing to the gaining of uniformity for access and use across a number of digital libraries and not only single efforts.”

Marchionini (2000) lists a number of approaches that can be used for digital libraries:

• User-centred evaluation, which focuses on the cognitive, interactive, and contextual aspects of IR and considers users, use, situations, context, and interactions with the system.

• System-centred evaluation, which is based on the traditional IR model and ignores users and their interactions with the system.

• Formative evaluation, which is used to evaluate a product, tool or service before and during its development in order to refine and improve it.

• Summative evaluation, which is undertaken when a product, tool or service is ready for marketing.

Formative and summative evaluations are usually conducted within a controlled laboratory environment, where subjects are observed performing specific tasks.

According to Borgman (2004) and Borgman and Rasmussen (2005) four types of evaluation are relevant to digital libraries:

(1) formative evaluation, which is used at the initial stage of design in order to establish goals and outcomes;

(2) summative evaluation, which is used at the end of a project to assess how far goals were met;

(3) iterative evaluation, which takes place throughout a project with a view to improving the system incrementally; and

(4) comparative evaluation, which uses standard or common measures to compare similar systems.

An evaluation can be qualitative or quantitative in nature, or a combination of both. Data may be collected in a number of ways as follows (Marchionini, 2000; Reeves et al., 2003):

(1) Observations, such as:

• baseline observations, designed to get first-hand data using observers in classrooms or laboratories to take notes of activities in a semi-structured form; the purpose is to help the evaluators become situated within the setting and to get familiar with the subjects;

• structured observations, where a specific protocol is used systematically to observe and record the behaviour of a sample of individuals;

• participant observations, where the evaluator is allowed to interact, for example ask or answer questions, with the subjects being observed in a semi-structured way;

• think-aloud observations, which aim to determine what cognitive activity underlies behaviour; here the subjects may be asked to think aloud while they are working on specific tasks; and

• transaction log analysis, where user actions are automatically captured and analysed (see the sketch after this list).


(2) Interviews, which may be structured, semi-structured or unstructured. While structured interviews are relatively quick and easy to conduct and the data easy to analyse, semi- and unstructured interviews allow the evaluator to probe and gather more detailed information.

(3) Document analysis, which involves critical analysis of the documents produced by the users with a view to assessing the resources used; a simple example may involve citation analysis of the papers, theses or dissertations produced by the users to find out the references used in the documents and whether they have been accessed using a specific digital library.

(4) Task/job analysis, with a view to identifying the various tasks performed by the users and the information resources used to accomplish each task.
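As a concrete illustration of the transaction log analysis mentioned above, the Python sketch below counts sessions, action types and frequent queries. The tab-delimited log format, field names and sample records are hypothetical assumptions; real digital library logs vary widely in structure.

```python
# Minimal transaction log analysis over a hypothetical log of the form:
# timestamp <TAB> session_id <TAB> action <TAB> detail
from collections import Counter

sample_log = """2006-04-25T10:02:11\ts1\tQUERY\tdigital libraries usability
2006-04-25T10:02:45\ts1\tVIEW\trecord-1041
2006-04-25T10:03:02\ts1\tQUERY\tusability evaluation methods
2006-04-25T10:05:19\ts2\tQUERY\tdigital libraries usability
2006-04-25T10:05:40\ts2\tDOWNLOAD\trecord-0007"""

actions = Counter()   # how often each action type occurs
queries = Counter()   # how often each query string is issued
sessions = set()      # distinct user sessions seen in the log

for line in sample_log.splitlines():
    timestamp, session_id, action, detail = line.split("\t")
    sessions.add(session_id)
    actions[action] += 1
    if action == "QUERY":
        queries[detail] += 1

print(f"Sessions: {len(sessions)}")
print(f"Actions by type: {dict(actions)}")
print(f"Most frequent queries: {queries.most_common(2)}")
```

Even such simple counts feed the output measures discussed earlier; richer analyses (session length, query reformulation chains) follow the same pattern.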

Another evaluation model, the Instructional Architect (IA), was developed to enable users to discover, select, re-use, sequence, and annotate digital library learning objects stored in the SMETE Open Federation of digital libraries (www.smete.org). This was intended to increase the utility of NSDL resources for the classroom teacher (Dorward et al., 2002). Evaluation techniques used in this model included:

• a needs assessment (pilot survey of audience);

• expert review of the interface;

• prototype testing by the target audience;

• teachers' use of newly released Open Source software; and

• case analysis of experienced teacher use.

Finally, Jeng (2004, 2005) proposes methods and instruments to assess the usability of digital libraries with particular reference to academic libraries, and shows that there is a close relationship between the effectiveness, efficiency, satisfaction, and learnability of a digital library.
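To make these measures concrete, the Python sketch below computes effectiveness, efficiency and satisfaction from a handful of usability test sessions; the session records, the 5-point satisfaction scale and the specific formulas are illustrative assumptions rather than Jeng's own instrument.

```python
# Common usability measures computed from hypothetical test sessions:
# effectiveness (task completion rate), efficiency (mean time on completed
# tasks) and satisfaction (mean questionnaire rating). Learnability could
# be assessed similarly, e.g. by comparing times across repeated trials.

sessions = [
    # (task_completed, seconds_taken, satisfaction_on_5_point_scale)
    (True, 95, 4),
    (True, 140, 4),
    (False, 300, 2),
    (True, 80, 5),
]

completed = [s for s in sessions if s[0]]
effectiveness = len(completed) / len(sessions)
efficiency = sum(s[1] for s in completed) / len(completed)
satisfaction = sum(s[2] for s in sessions) / len(sessions)

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency: {efficiency:.0f} s per completed task")
print(f"Satisfaction: {satisfaction:.1f} / 5")
```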

However, to date there are no standard evaluation models for digital libraries. Although researchers are working to develop a common evaluation framework, Saracevic (2005) emphasises that “there are no more or less standardised criteria for digital library evaluation . . . Thus evaluators have chosen their own evaluation criteria as they went along. As a result, criteria for digital library evaluation fluctuate widely from effort to effort”.

Usability issues

Usability studies

Usability of a digital library primarily relates to its accessibility, i.e. how easily users can interact with the interface of the digital library, how easily they can find useful information, how easily they can use the retrieved information, etc. In general, if information can be accessed easily then the digital library will be used frequently. By implication, therefore, a well-designed digital library should have good usability features. Sumner (2005) emphasises that understanding user needs and their work, together with innovative user interfaces and interaction mechanisms, can promote better use of digital library resources, collections, and services.

The usability of digital libraries has inevitably been studied from a number of different perspectives. Some of these studies have highlighted the general usability preferences of users while others have identified the most desired usability features of digital libraries. Usability of a digital library depends on a number of factors, such as the effectiveness and efficiency of the information access system, the ease of use and friendliness of the user interface, users' needs, usage patterns, etc. The heterogeneity and distribution of information resources will also be a factor (Park, 2000). Bishop et al. (2000) point out that a digital library is a space where users engage with the information infrastructure, and hence usability problems, user attitudes, specific use situations and work practices are important points for studies.

Theng et al. (2000) conducted a study with 45 users using the ACM Digital Library, the Networked Computer Science Technical Reference Library (NCSTRL), and the New Zealand Digital Library (NZDL). They found that:

• many users preferred digital libraries because of their fast search facilities; most users preferred a guided tour of the digital library and some preferred recommendations for selection of resources while they browse;

• powerful search facilities and better and clearer display of results can make digital libraries more efficient;

• users get lost when using digital libraries; and

• most users found that there was insufficient information to understand the structure of the digital libraries.

Adams and Blandford (2005) identify the changing information requirements, which they define as an “information journey”, of users in two domains: health and academia. They propose that the process involves the following stages:

(1) initiation, which is driven actively by a specific task or condition, or passively by friends, family, information intermediaries and the press;

(2) facilitation, which is driven by the effectiveness of tools for retrieving information; and

(3) interpretation, in which the user makes sense of retrieved information based on their needs.

However, they note that awareness of the resources is a major problem, and they therefore propose press alerts for recent articles on a particular subject, with related current research and professional articles.

Usability factors

Several researchers have noted that the HCI community places emphasis on the usability of user interfaces, especially in assessing their effectiveness, efficiency and/or user satisfaction (Choudhury et al., 2002; Nielsen, 1993a, b; Marchionini and Komlodi, 1998; Norlin, 2000). The best known and widely used usability guidelines are those of Shneiderman (1998):

• strive for consistency in terminology, layout, instructions, fonts and colour;

• provide shortcuts for skilled users;

• provide appropriate and informative feedback about the sources and what is being searched for;


• design for closure so that users know when they have completed searching the entire collection or have viewed every item in a browse list;

• permit reversal of actions so that users can undo or modify actions; for example, they should be able to modify their queries or go back to the previous state in a search session;

• support user control, allowing users to monitor the progress of a search and to be able to specify the parameters to control a search;

• reduce short-term memory load: the system should keep track of important actions performed by the users and allow them to jump easily to a previous action, for example, a former query or a specific result set (see the sketch after this list);

• provide simple error-handling facilities to allow users to rectify errors easily; all error messages should be clear and specific;

• provide plenty of space for entering text in search boxes; and

• provide alternative interfaces for expert and novice users.
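A minimal Python sketch of the two guidelines flagged above (reversal of actions and reduced short-term memory load) follows; the class and method names are illustrative assumptions, not any particular digital library's API.

```python
# A search session that keeps a visible query history, so users can undo
# the last query or jump back to an earlier one.

class SearchSession:
    def __init__(self):
        self.history = []  # past queries, newest last; shown to the user

    def search(self, query):
        """Run a query and remember it in the history."""
        self.history.append(query)
        # ... run the query against the collection here ...
        return f"results for '{query}'"

    def undo(self):
        """Reverse the last action by returning to the previous query."""
        if len(self.history) > 1:
            self.history.pop()
            return f"results for '{self.history[-1]}'"
        return None

session = SearchSession()
session.search("digital library usability")
session.search("usability heuristics")
print(session.history)  # the user can see and revisit earlier queries
print(session.undo())   # back to "digital library usability"
```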

Blandford and Buchanan (2003) list the following criteria for assessing the usability of digital libraries:

• achieving goals – how effectively and efficiently the user can achieve their goals with a system;

• learnability – how easily the user can learn to use the system;

• help and error recovery – how well the system helps the user avoid making errors, or recover from errors;

• user experience – how much the user enjoys working with the system; and

• context – how well the system fits within the context in which it is used.

User requirements change from one search session to another, or even within a given search session. As Bollen and Luce (2002) point out, usability factors such as user preferences and satisfaction therefore tend to be highly transient and specific; for example, the user's search focus can shift from one scientific domain to another between, or even within, retrieval sessions. They therefore recommend that research on these issues needs to focus on more stable characteristics of a given user community, such as “the community's perspective on general document impact and the relationships between documents in a collection”.

Chowdhury (2004b) provides a detailed checklist of usability features for digital libraries:

(1) Interface features:

• types of interface (e.g. simple vs expert search interface);

• language(s) of the interface;

• navigation options, shortcuts, and system information;

• screen features, i.e. use of colours, typography, layout and graphics; and

• personalisation of the interface, e.g. permanent choice of interface language and/or retrieval level, number of records on one page, sort options.

(2) Search process. Database/resource selection:

• options for selection; and

• cross-database search facilities.

(3) Query formulation.

(4) Search options for text, multimedia, specific collection, etc. Access points: search fields.

(5) Search operators.

(6) Results manipulation:

• format(s) for display of records;

• number of records that can be displayed;

• navigation in the list of records;

• marking of the records;

• sort options; and

• printing, exporting and e-mailing of records.

(7) Help:

• appropriateness;

• usability;

• consistency of terminology, design and layout; and

• linguistic correctness.

Models for usability studies

Researchers have used two main approaches to evaluate the usability of a system: empirical techniques (which involve testing systems with users) and analytical techniques (which involve usability specialists assessing systems using established theories and methods). Evaluation of user interfaces has been particularly popular in usability studies of digital libraries. Dillon (2005) points out that “the human factors of interface design for information usage were best thought in terms of the following levels: physical, perceptual, and socio-cognitive”.

Blandford (2004) notes that scenarios can be useful for understanding new situations. For example, imagine that a user is conducting a literature search on a new topic and is searching Ingenta to find relevant articles. If the user has little knowledge of sophisticated information seeking techniques, then initially the search will be exploratory, but it will gradually become more focused as the user becomes familiar with the contents and structure of the digital library. This model, therefore, is particularly suitable for studying information seeking and access in digital libraries (for further discussion see the subsection “Major findings – information seeking and access”).

Kim (2002) reports on an empirical model called the Digital Library Information Seeking Process (DLISP) which has been developed to evaluate the usability of a digital library. This model establishes the events that occur in chronological order when users interact with a digital library and users' perceived difficulties while interacting with the system. The model can be particularly useful for the evaluation and re-design of a digital library. Blandford et al. (2004) report on a set of four different techniques: heuristic evaluation, cognitive walkthrough, claims analysis, and Concept-based Analysis of Surface and Structural Misfits (CASSM), which were applied to different digital libraries. They comment that none of the techniques is exactly suitable for assessing the usability of any particular digital library.

Dillon (2005) proposes a qualitative framework for evaluation called TIME that can be used as “an advanced organizer for design, as a guide for heuristic and expert usability evaluation, and as a means of generating scientific conjectures about the usability of any electronic text”. TIME focuses on four elements:

(1) Task – what users want to do.

(2) Information model – what structures aid use.

(3) Manipulation of materials – how users access the components of the document.

(4) Ergonomics of visual displays – how they affect human perception of information.

Dillon (1999) further claims that “TIME offers a simple framework for evaluators to employ, either alone or in conjunction with existing methods, which will enhance the process and communicability of usability evaluations”.

Sandusky (2002) describes a framework for evaluating the usability of different types of digital libraries; the attributes included in the framework are audience, institution, access, content, services, and design and development.

Pan et al. (2004) evaluate the Kinematic Models for Design Digital Library (KMODDL), based on the Context, Interaction, Attitude, and Outcomes (CIAO!) framework. The aim of this evaluation is to identify and assess the integration of KMODDL resources into middle school and university undergraduate classes to facilitate better learning and understanding.

Major findings

User interfaces. The patterns of interactions between a user and a system can be studied through usability research. One may look for answers to such questions as what the users are looking for, how easily they can find the required information, what inhibits them from using a digital library, and so on. Users sometimes find that the interface itself is a barrier to accessing a digital library. User interfaces should be designed to be simple to understand and easy to use, and moreover, should be interactive. Peng et al. (2004) conducted a usability study of the interface of the hybrid library of Nanyang Technological University in Singapore using Nielsen and Levy's (1994) user interface heuristics. The general conclusion of the study was that an attractive and graphical user interface is not necessarily simple and easy to use.

Cultural issues are often very important considerations and they play a vital role in the use and impact of a user interface. Liew (2005) discusses the usability and user interface features of a digital library of Maori cultural heritage. Emphasising the importance of cultural issues on the usability of information services, Duncker et al. (2000) comment that misinterpretation of the importance of colours, forms, symbols, metaphors and language for users from different cultural backgrounds can significantly affect the usability and friendliness of digital libraries.

Information seeking and access. Stelmaszewska and Blandford (2002) studied how the user interface can support users in developing different search strategies. They focused on the difficulties users experience when search results fail to meet their expectations. They identified seven stages, based on Kuhlthau's (1988), Marchionini's (1995), and Sutcliffe and Ennis's (1998) models, which are: initiation, selection, exploration, query formulation, results examination, document collection, and results presentation. They conclude that “patterns identified in this study hold crucial knowledge about users' behaviour” that can provide a basis for understanding what users do when looking for specific information.

Keith et al. (2002) report on an investigation involving three linked case studies in which the claims analysis technique was used to study the information seeking behaviour of users. The longitudinal studies of academic users' information seeking in the JUBILEE project (Banwell and Coulson, 2004) were judged to be extremely useful for designing digital libraries with improved usability features.

Some digital library evaluation studies have focused on usability from the perspectives of the organisation of, and access to, information resources. These studies are often classed under personalised digital libraries. For example, Meyyappan et al. (2001, 2004) have reported on the usability of a prototype task-based digital library where they have studied the task-based information organisation and access system.

Impact studies. As mentioned earlier in this paper, one of the most important objectives of digital library evaluation is to assess the impact on target users. Many researchers have conducted evaluation studies to find out how digital libraries are used by target users to accomplish their tasks or to meet their information needs. Probably one of the earliest and most widely known impact studies was conducted by Borgman and colleagues in the context of the Alexandria digital library.

Over the past ten years several research projects have studied the impact of digital libraries on education. Borgman et al. (2004) report on two case studies – the Alexandria Digital Earth Prototype (ADEPT) and the Center for Embedded Networked Sensing (CENS) – which show that scientific digital libraries can support both research and teaching applications. ADEPT is a digital library of geo-spatial information resources and allows instructors and students to discover, manipulate and display dynamic geographical processes in undergraduate classrooms. ADEPT offers an opportunity to evaluate learning activities and focuses on assessing learning outcomes as a result of the implementation of successive ADEPT prototypes in undergraduate classrooms (Borgman et al., 2000). It also provides an exciting opportunity to assess the usefulness of personal digital libraries in instruction and learning. Borgman et al. (2000) provide observations of undergraduate students' use of ADEPT in the classroom. Leazer et al. (2000a, b) investigated the impact of ADEPT on teaching and learning in undergraduate classes. Borgman et al. (2004, 2005) also discussed how academics used the geographical information for preparing their lecture notes as compared with research activities.

Waters (2001) mentions four key principles for the development of digital libraries in higher education:

(1) to create scholarly value by exploiting the distinctive features of the technology;

(2) to create collections of coherence and integrity;

(3) to protect and foster an intellectual commons for scholarly and educational use; and

(4) to be realistic about costs, especially the costs of distributing content and sustaining ongoing operations.


Champeny et al. (2004) discuss the development of a digital learning environment (DLE) and its implementation in undergraduate geography courses. They also report some interesting findings, such as: usability for the instructor increased between two terms; classroom presentations seemed to be useful for understanding concepts; and web access to presentations was useful for study and review.

The Perseus Digital Library (PDL) was evaluated to assess its impact on users in the field of humanities (Marchionini, 2000). Data were collected through observations (baseline, structured, participant, think aloud, transaction log analysis), interviews, document analysis, and learning analysis. The findings of the evaluation shed some light on the following points:

• physical infrastructure – issues related to hardware and software reliability, networking, technical support and training, etc.;

• conceptual infrastructure – issues related to usage, especially user support and training, etc.;

• mechanical advantage – access to a large amount and variety of information easily, with a few mouse clicks;

• augmentation – how PDL augments teaching and learning; and

• community development/systematic change – how PDL brings changes at organisational levels such as academic departments and schools, and thereby, in turn, facilitates the advancement of PDL itself.

The National Science, Technology, Engineering and Mathematics (STEM) Digital Library (NSDL) was designed to be a comprehensive source in science, technology, engineering and mathematics education. The NSDL consists of over 100 projects including the development of collections, library services, core integration services, and library research (Jones and Sumner, 2002). The core services have a central portal that provides access to distributed educational resources. Different methods, such as web server logs, survey instruments, collection assessment techniques and interviews, were used to examine library usage, collection growth and library governance processes.

Maly et al. (n.d.) proposed two alternative frameworks for digital library evaluation. In the first approach, the evaluator identifies the necessary, desirable and undesirable characteristics of digital libraries for undergraduate science education, and the project is then assessed against these characteristics. In the second approach the evaluator first proposes some hypotheses for evaluating the impact, identifies the characteristics for these hypotheses, and then identifies and implements measures to validate the hypotheses.

Madle et al. (2003) propose a methodology for evaluating the impact of the National Electronic Library for Communicable Disease (NeLCD) on users' knowledge, attitude and behaviour. This methodology combines transaction log analysis – which provides information about the sites a user is searching – with pre- and post-use questionnaires – which help assess the impact of the library on users and their work, as well as evaluating the usability of the digital library. In this context, Nicholas et al. (2003) note that when evaluating users' knowledge, attitude and behaviour, a small number of users always gives a clearer picture of individual users' behaviour.
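The core of such a pre- and post-use design is a paired comparison of questionnaire scores. The Python sketch below shows the arithmetic; the users, scores and scale are hypothetical assumptions, and a real study would also apply a significance test.

```python
# Paired pre-/post-use comparison of knowledge questionnaire scores,
# the basic impact measure in a pre/post evaluation design.

pre_scores = {"u1": 4, "u2": 6, "u3": 5}   # score before using the library
post_scores = {"u1": 7, "u2": 6, "u3": 8}  # score after using the library

changes = {u: post_scores[u] - pre_scores[u] for u in pre_scores}
mean_change = sum(changes.values()) / len(changes)

print(f"Per-user change: {changes}")
print(f"Mean knowledge gain: {mean_change:+.1f} points")
```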


Sumner and Marlino (2004) propose three approaches for educational digital libraries – cognitive tools, component repositories, and knowledge networks – with specific examples drawn from the Digital Library for Earth System Education (DLESE) and the NSDL. They conclude that the three approaches can help to deconstruct the digital library metaphor to generate better understandings about the impact of a library on educational practice. They also claim that these three models can reflect the complex interactions between humans, technology, and context in educational digital libraries.

Summary and conclusion

Although there were relatively few evaluation studies during the first period of development of digital libraries, this area of research has attracted significant attention over the past few years. The last five or so years may be considered as the first era of digital library evaluation, in which researchers have used different methods and techniques with the specific objectives of assessing usability and the impact on the target users. To date only a few digital library evaluation studies are based on full-fledged operational digital libraries, as compared with small-scale prototypes of specialised digital libraries with limited content and scope. Nevertheless, these studies show a significant way forward for digital library evaluation research.

A standard set of tools and benchmarks for digital library design and evaluation has yet to be developed, but one can notice some significant developments. Ways in which standard design and evaluation techniques can be modified to better serve the specific needs of digital libraries, and of other information technologies such as hypertext/hypermedia for education, have been identified (Coleman and Sumner, 2004). Digital libraries differ significantly from one another in terms of their nature, content, target users, access mechanisms, etc., and consequently it is difficult to measure the usability of such diverse digital libraries through one set of universally accepted tools and benchmarks. This has been evident through the various usability studies reported in this paper, where different approaches have been used to assess usability. Digital libraries are designed for specific users and to provide support for specific activities. Hence digital libraries should be evaluated in the context of their target users and specific applications and contexts. Users should be at the centre of any digital library evaluation and their characteristics, information needs and information behaviour should be given priority when designing any usability study. The generic guidelines and evaluation parameters suggested by researchers such as Marchionini (2000), Saracevic (2004), Blandford and Buchanan (2003), Chowdhury (2004a) and Borgman (2004) should form the basis for a usability study. It is also important that the findings of user information behaviour studies, as reported by Banwell and Coulson (2004), Spink et al. (2002), and Wilson et al. (2002), should be taken into consideration when designing an evaluation study.

The usability studies discussed in this paper have reported different kinds of usability features and characteristics of specific digital libraries and information resources. They reveal that, in addition to the technical issues of digital library design (architecture, information retrieval tools and user interfaces), there are a number of usability issues related to digital libraries, such as:

• globalisation and localisation;

• language, in terms of specific language materials and multilingual information access and management;

• culture, traditions and cross- and multicultural aspects;

• content, in terms of homogeneous and heterogeneous information resources, mono- and multimedia resources, local and distributed information resources, etc.; and

• human information behaviour issues, especially when the users are remote and distributed.

Usability studies have also noted several specific usability issues. For example, Jones et al. (2004) point out that disciplinary and subject differences have a significant influence on the usability of digital libraries. Borgman et al. (2005) report that searching by concept is essential but difficult due to the different ways in which data and images can be interpreted and used in a digital library environment. Chowdhury (2004b) highlights a number of usability problems currently facing digital library users. Although not explicitly mentioned in many digital library usability studies, there is a major problem that still remains to be resolved in the digital and hybrid library world: the onus is still on the users to decide where to look for a specific item of information, and also to consolidate the results retrieved from heterogeneous systems. This often stands as the biggest problem in information access. A number of researchers (see, for example, Borgman et al., 2005; Chowdhury, 2004b; Meyyappan et al., 2001, 2004) have highlighted the importance of personal digital libraries and suggested that a personal digital library could help users. When reporting on academics' use of ADEPT, Borgman et al. (2005) suggest that:

Each user needs his or her own personal space in which to manage digital objects. Some of the personal digital library content may be selected from a shared space; other content will be imported from personal collections of research and teaching resources. Once gathered, the faculty often wishes to manipulate, annotate, select, and augment resources for their own purposes.

Chowdhury (2004b) recommends a task-based information access system and a one-stop window for accessing digital libraries, where the user will not have to spend time deciding where and how to search for the required information; instead, the user may just specify a task at hand, and the system would then do the search and recommend a set of information resources suitable for accomplishing the chosen task. Progress towards this can be seen in projects such as MIND, which developed tools for resource selection and data fusion (Wu et al., 2004; Gibb, 2002). While such a system may be quite useful, digital library designers may adapt systems that are prevalent in the web environment. There are many web aggregator services that, given a set of user requirements, gather, filter and present information relevant to the user's needs. Examples of web aggregator services in the news world include Google News, Awasu, Headlinepost, etc. Aggregator services have been in existence in the information world for quite some time, but they do only part of the job as compared with web aggregator services. For example, online services such as Dialog are aggregators that provide access to a range of online databases through one search interface, and consolidate the results based on system- or user-defined criteria. Similarly, services like Ingenta provide their users with access to a range of electronic resources drawn from several e-journals and databases.


While they are good services, in the digital and hybrid library world users often find it difficult to decide which source to select and search, and the problem is compounded by the different search features and facilities provided by the services.
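The Python sketch below illustrates the kind of cross-source merging such a one-stop window needs. The fusion rule shown is CombSUM (summing normalised scores), one well-known data fusion method; it is an illustrative assumption, not the specific technique developed in MIND or used by Dialog or Ingenta, and the sources and scores are invented.

```python
# Federated search with simple score-based data fusion (CombSUM):
# results from several sources are normalised and merged into one ranking.

def normalise(results):
    """Scale the scores in a list of (doc_id, score) pairs into [0, 1]."""
    top = max(score for _, score in results)
    return {doc: score / top for doc, score in results}

def combsum(*result_lists):
    """Merge ranked lists by summing each document's normalised scores."""
    merged = {}
    for results in result_lists:
        for doc, score in normalise(results).items():
            merged[doc] = merged.get(doc, 0.0) + score
    return sorted(merged.items(), key=lambda item: item[1], reverse=True)

# Hypothetical ranked results for the same query from two sources,
# each with its own incomparable scoring scale.
source_a = [("doc1", 12.0), ("doc2", 9.5), ("doc3", 4.0)]
source_b = [("doc2", 0.9), ("doc4", 0.7)]

print(combsum(source_a, source_b))
# doc2 appears in both sources, so it rises to the top of the merged list.
```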

Over the past few years, a number of personalisation and recommendation systems and services have emerged which aim to improve the usability of information services by retrieving information to meet specific user needs, and filtering out unwanted information (see for example Frias-Martinez et al., 2006; Renda and Straccia, 2005; Fan et al., 2005; Hicks, 2003; Anand and Mobasher, 2005). However, such systems are not without flaws (Shahabi and Chen, 2003). As Anand and Mobasher (2005) comment, “most personalization systems tend to use a static profile of the user; however, user interests are not static, changing with time and context,” and “only a number of systems attempted to handle the dynamics within the user profile” (Billsus and Pazzani, 2000; McCallum and Nigam, 1998; Koychev and Schwab, 2000).
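One simple way to keep a profile dynamic, sketched below in Python, is to let interest weights decay exponentially with time so that recent topics dominate. The decay model, half-life and profile contents are illustrative assumptions, not a method taken from the systems cited above.

```python
# A time-decayed user interest profile: each topic's weight halves every
# HALF_LIFE_DAYS days, so stale interests fade without being deleted.
import math

HALF_LIFE_DAYS = 30.0

def decayed_weight(weight, age_days):
    """Exponential decay with the configured half-life."""
    return weight * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)

# Hypothetical profile: topic -> (accumulated weight, days since last access).
profile = {
    "geospatial data": (5.0, 90),    # once strong, but three months old
    "usability testing": (2.0, 3),   # weaker, but very recent
}

current = {t: decayed_weight(w, age) for t, (w, age) in profile.items()}
for topic, weight in sorted(current.items(), key=lambda x: -x[1]):
    print(f"{topic}: {weight:.2f}")
# The recent interest now outranks the older, once-stronger one.
```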

A better solution would be to adopt a service like those provided by the web aggregators, where users would only specify their information need, or more specifically a task or a profile, and the system would take care of the rest of the process. Such services are available on payment, for example from Dialog: Dialog Profound, Dialog Newsroom and Dialog LiveNews (www.dialog.com/products/productline/profound.shtml). Arguing in favour of fee-based information aggregator services, Stewart (2004) comments that “when it comes to complex searches on complex issues, with increased accuracy in search results, information aggregators have proven that their products provide a service that delivers real savings in time and money”. From the usability point of view, if such services can be set up to search, filter and display the information required by a user automatically, they will greatly enhance the use of digital libraries.
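
Combining the two sketches above indicates, in outline, the kind of automated service envisaged here. The hypothetical routine below (it reuses the federated_search function and InterestProfile class from the earlier sketches) derives a query from the user’s stored profile, runs the search across all sources, and presents only the top recommendations, so the user specifies nothing beyond their interests:

def run_alert_service(profile: InterestProfile, limit: int = 5) -> None:
    """Hypothetical alerting loop: build a query from the stored profile,
    search all sources via federated_search, and present only the best
    matches -- the user never selects or searches a source directly."""
    query = " ".join(topic for topic, _ in profile.top_interests(3))
    for doc_id, score in federated_search(query, limit):
        print(f"recommended ({score:.2f}): {doc_id}")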

In conclusion, it may be stated that several research activities aimed at improving the usability of digital information services have taken place in the recent past or are ongoing, and, given the pace of these changes, digital libraries look set to become a ubiquitous tool in our everyday lives and activities.

References

Adams, A. and Blandford, A. (2005), “Digital libraries support for the user’s ‘Information Journey’”, 5th ACM/IEEE Joint Conference on Digital Libraries (JCDL 2005), Denver, CO, June 7-11, pp. 160-9, available at: www.uclic.ucl.ac.uk/annb/docs/aaabJCDL05preprint.pdf

Allen, M. (2002), “A case study of the usability testing of the University of South Florida’s virtual library interface design”, Online Information Review, Vol. 26 No. 1, pp. 40-53.

Anand, S.S. and Mobasher, B. (2005), “Intelligent techniques for web personalization”, in Mobasher, B. and Anand, S.S. (Eds), Intelligent Techniques for Web Personalization, Lecture Notes in Artificial Intelligence (LNAI), Vol. 3169, Springer-Verlag, Berlin, pp. 1-36, available at: http://maya.cs.depaul.edu/~mobasher/papers/am-itwp-springer05.pdf

Baker, S.L. and Lancaster, F.W. (1991), Measurement and Evaluation of Library Services, 2nd ed., Information Resources Press, Arlington, VA.

Banwell, L. and Coulson, G. (2004), “Users and use study methodology: the JUBILEE project”, paper 167, Information Research, Vol. 9 No. 2, available at: http://InformationR.net/ir/9-2/paper167.html

Bertot, J.C. (2004), “Assessing digital libraries: approaches, issues and considerations”, available at: www.kc.tsukuba.ac.jp/dlkc/e-proceedings/papers/dlkc04pp72.pdf

Billsus, D. and Pazzani, M.J. (2000), “User modelling for adaptive news access”, User Modelling and User-adapted Interaction, Vol. 10, pp. 147-80.

Bishop, A.P., Peterson, L.J., Neumann, S.L., Star, M.C., Ignacio, E. and Sandusky, R.J. (2000), “Digital libraries: situating use in changing information infrastructure”, Journal of the American Society for Information Science, Vol. 51 No. 4, pp. 394-413.

Blandford, A. (2004), “Understanding user’s experiences: evaluation of digital libraries”, paper presented at the DELOS Workshop on Evaluation of Digital Libraries, Padova, Italy, available at: www.delos.info/eventlist/wp7_ws_2004/Blandford.pdf

Blandford, A. and Buchanan, G. (2003), “Usability of digital libraries: a source of creative tensions with technical developments”, TCDL Bulletin, available at: www.ieee-tcdl.org/Bulletin/current/blandford/blandford.html

Blandford, A., Keith, S., Connell, I. and Edwards, H. (2004), “Analytical usability evaluation for digital libraries: a case study”, Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries, available at: http://portal.acm.org

Bollen, J. and Luce, R. (2002), “Evaluation of digital library impact and user communities by analysis of usage patterns”, D-Lib Magazine, Vol. 8 No. 6, available at: www.dlib.org/dlib/june02/bollen/06bollen.html

Borgman, C.L. (2000), From Gutenberg to the Global Information Infrastructure: Access to Information in the Networked World, MIT Press, Cambridge, MA.

Borgman, C.L. (2004), “Evaluating the uses of digital libraries”, paper presented at the DELOS Workshop on Evaluation of Digital Libraries, Padova, Italy, available at: http://dlib.ionio.gr/wp7/WS2004_Borgman.pdf

Borgman, C. and Rasmussen, E. (2005), “Usability of digital libraries in a multicultural environment”, in Theng, Y.-L. and Foo, S. (Eds), Design and Usability of Digital Libraries: Case Studies in the Asia-Pacific, Information Science Publishing, London, pp. 270-84.

Borgman, C.L., Gilliland-Swetland, A.J., Leazer, G.L., Mayer, R., Gwynn, D., Gazan, R. and Mautone, P. (2000), “Evaluating digital libraries for teaching and learning in undergraduate education: a case study of the Alexandria Digital Earth Prototype (ADEPT)”, Library Trends, Vol. 49 No. 2, special issue: Assessing Digital Library Services, pp. 228-50.

Borgman, C.L., Leazer, G.H., Gilliland-Swetland, A., Millwood, K., Champeny, L., Finley, J. and Smart, L.J. (2004), “How geography professors select materials for classroom lectures: implications for the design of digital libraries”, Proceedings of the 4th ACM/IEEE-CS Joint Conference on Digital Libraries, Tucson, Arizona, pp. 179-85.

Borgman, C.L., Smart, L.J., Millwood, K.A., Finley, J.R., Champeny, L., Gilliland, A.J. and Leazer, G.H. (2005), “Comparing faculty information seeking in teaching and research: implications for the design of digital libraries”, Journal of the American Society for Information Science, Vol. 56 No. 6, pp. 636-57.

Champeny, L., Borgman, C.L., Leazer, G.H., Gilliland-Swetland, A.J., Millwood, K.A., D’Avolio, L., Finley, J.R., Smart, L.J., Mautone, P.D., Mayer, R.E., Johnson, R.A., Chen, H., Christel, M. and Lim, E.-P. (2004), “Developing a digital learning environment: an evaluation of design and implementation processes”, Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries, Tucson, Arizona, ACM, New York, NY, pp. 37-46.

Choudhury, S., Hobbs, B., Lone, M. and Flores, N. (2002), “A framework for evaluating digital library services”, D-Lib Magazine, Vol. 8 Nos 7/8, available at: http://dlib.org/dlib/july02/choudhury/07choudhury.html

Chowdhury, G.G. (2004a), Introduction to Modern Information Retrieval, Facet Publishing, London.

Chowdhury, G.G. (2004b), “Access and usability issues of scholarly electronic publications”, in Gorman, G.E. and Rowland, F. (Eds), Scholarly Publishing in an Electronic Era. International Yearbook of Library and Information Management, 2004/2005, Facet Publishing, London, pp. 77-98.

Chowdhury, G.G. and Chowdhury, S. (1999), “Digital library research: major issues and trends”, Journal of Documentation, Vol. 55 No. 4, pp. 409-48.

Chowdhury, G.G. and Chowdhury, S. (2003), Introduction to Digital Libraries, Facet Publishing, London.

Coleman, A. and Sumner, T. (2004), “Digital libraries and user needs: negotiating the future”, Journal of Digital Information, Vol. 5 No. 3, special issue, available at: http://jodi.tamu.edu/Articles/v05/i03/editorial/

Department of Information Engineering (2004), “DELOS workshop on the evaluation of digital libraries”, Department of Information Engineering, University of Padua, available at: http://dlib.ionio.gr/wp7/workshop2004.html

Dickstein, R. and Mills, V. (2000), “Usability testing at the University of Arizona Library: how to let the users in on the design”, Information Technology and Libraries, Vol. 19 No. 3, pp. 144-51.

Dillon, A. (1994), Designing Usable Electronic Text: Ergonomic Aspects of Human Information Usage, Taylor and Francis, Bristol.

Dillon, A. (1999), “Evaluating on TIME: a framework for the expert evaluation of digital interface usability”, draft of a paper published in the International Journal on Digital Libraries, Vol. 2 Nos 2/3, available at: www.ischool.utexas.edu/~adillon/publications/evaluating.html

Dillon, A. (2005), “Evaluating on time: a framework for the expert evaluation of digital library interface usability”, available at: www.ischool.utexas.edu/~adillon/publications/evaluating.html

Dorward, J., Reinke, D. and Recker, M. (2002), “An evaluation model for a digital library services tool”, Proceedings of the 2002 Joint ACM/IEEE Conference on Digital Libraries, Portland, Oregon, ACM, New York, NY, pp. 322-3.

Duncker, E., Theng, Y.L. and Mohd-Nasir, N. (2000), “Cultural usability in digital libraries”, Bulletin of the American Society for Information Science, Vol. 26 No. 4, pp. 21-2, available at: www.asis.org/Bulletin/May-00/duncker__et_al.html

Dunn, P. and Marinetti, A. (n.d.), “Cultural adaptation: necessity for global elearning”, available at: www.linezine.com/7.2/articles/pdamca.htm

Evidence Base (2004), “eValued: an evaluation toolkit for e-library developments”, available at: www.evalued.uce.ac.uk/tutorial/index.htm

Fan, W., Gordon, M.D. and Pathak, P. (2005), “Effective profiling of consumer information retrieval needs: a unified framework and empirical comparison”, Decision Support Systems, Vol. 40 No. 2, pp. 213-33.

Feeney, M. and Grieves, M. (Eds) (1994), The Value and Impact of Information, Bowker-Saur, London.

Frias-Martinez, E., Magoulas, G.D., Chen, S.Y. and Macredie, R.D. (2006), “Automated user modeling for personalized digital libraries”, International Journal of Information Management, Vol. 26 No. 3, pp. 234-48.

Gibb, F. (2002), “Resource selection and data fusion for multimedia international digital libraries: an overview of the MIND project”, Proceedings of the EU/NSF All Projects Meeting, Rome, 25-26 March, 2002, ERCIM-02-W02, ERCIM, Sophia-Antipolis, pp. 51-6.

Giersch, S., Butcher, K. and Reeves, T. (2003), “Annotated bibliography of evaluating the educational impact of digital libraries”, available at: http://eduimpact.comm.nsdl.org/evalworkshop/eval_ann-bib_09-29-03.doc

Hansen, P. (1998), “Evaluations of IR user interface: implications for user interface design”, Human IT: Tidskrift för Studier av IT ur ett Humanvetenskapligt Perspektiv, Vol. 2, available at: www.hb.se/bhs/ith/2-98/ph.htm

Heinstrom, J. (2003), “Five personality dimensions and their influence on information behaviour”, Information Research, Vol. 9 No. 1, paper 165, available at: http://InformationR.net/ir/9-1/paper165.html

Hicks, D. (2003), “Supporting personalization and customization”, Computers in Industry, Vol. 52 No. 1, pp. 71-9.

Hill, M.W. (1999), The Impact of Information on Society: An Examination of Its Nature, Value and Usage, Bowker Saur, London, available at: http://wotan.liu.edu/dois/data/Articles/aibbolaibv:40:y:2000:i:1:p:112.html

HyLiFe (2002), “The HyLiFe hybrid library toolkit: information landscapes”, available at: http://hylife.unn.ac.uk/toolkit/infoland2.html

Ingwersen, P. (2001), “Cognitive information retrieval”, in Williams, M.E. (Ed.), Annual Review of Information Science and Technology, Vol. 34, Information Today for the American Society for Information Science and Technology, Medford, NJ, pp. 3-52.

Jeng, J. (2004), “Usability of digital libraries: an evaluation model”, Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries (JCDL2004), Tucson, Arizona, ACM, New York, NY.

Jeng, J. (2005), “What is usability in the context of the digital library and how can it be measured?”, Information Technology & Libraries, Vol. 24 No. 2, pp. 47-57.

Jones, C. and Sumner, T. (2002), “Evaluation of the National Science Digital Library”, in Blandford, A. and Buchanan, G. (Eds), JCDL’02 Workshop on Usability of Digital Libraries, pp. 5-7, available at: www.uclic.ucl.ac.uk/annb/DLUsability/JCDL02.html

Jones, C., Zenios, M. and Griffiths, J. (2004), “Academic use of digital resources: disciplinary differences and the issue of progression”, Proceedings of the Networked Learning Conference, available at: www.shef.ac.uk/nlc2004/Proceedings/symposia/

Keith, S., Blandford, A., Fields, B. and Theng, Y.L. (2002), “An investigation into the application of claims analysis to evaluate usability of a digital library interface”, in Blandford, A. and Buchanan, G. (Eds), JCDL’02 Workshop on Usability of Digital Libraries, available at: www.uclic.ucl.ac.uk/annb/DLUsability/JCDL02.html

Kim, K. (2002), “A model-based approach to usability evaluation for digital libraries”, in Blandford, A. and Buchanan, G. (Eds), JCDL’02 Workshop on Usability of Digital Libraries, available at: www.uclic.ucl.ac.uk/annb/DLUsability/JCDL02.html

Koychev, I. and Schwab, L. (2000), “Adapting to drifting user’s interests”, available at: www.cs.ubc.ca/~mike/papers/KoySch00.pdf

Kuhlthau, C. (1988), “Longitudinal case studies of the information search process of users in libraries”, Library and Information Science Research, Vol. 10 No. 3, pp. 257-304.

Lancaster, F.W. (1993), If You Want to Evaluate Your Library, 2nd ed., University of Illinois, Graduate School of Library and Information Science, Champaign, IL.

Larsen, R. and Borgman, C. (2003), “Digital library evaluation – metrics, testbeds, and processes”, notes from the Workshop at ECDL 2003, Trondheim, Norway, available at: www.sis.pitt.edu/~ecdl2003/result_1.htm

Leazer, G.H., Gilliland-Swetland, A.J. and Borgman, C.L. (2000a), “Evaluating the use of a geographic digital library in undergraduate classrooms: ADEPT”, Proceedings of the 5th ACM Conference on Digital Libraries, San Antonio, Texas, ACM, New York, NY, pp. 248-9.

Leazer, G.H., Gilliland-Swetland, A.J., Borgman, C.L. and Mayer, R.E. (2000b), “Classroom evaluation of the Alexandria Digital Earth Prototype (ADEPT)”, Proceedings of the American Society for Information Science 2000 Annual Conference, ASIS, Chicago, IL, pp. 334-40, available at: http://is.gseis.ucla.edu/adept/pubs/asisadept.htm

Liew, C.L. (2005), “Cross-cultural design and usability of a digital library supporting access to Maori cultural heritage resources”, in Theng, Y.-L. and Foo, S. (Eds), Design and Usability of Digital Libraries: Case Studies in the Asia-Pacific, Information Science Publishing, London, pp. 284-97.

McCallum, A. and Nigam, K. (1998), “A comparison of event models for naïve Bayes text classification”, available at: www.cs.cmu.edu/~knigam/papers/multinomial-aaaiws98.pdf

Madle, G., Kostkova, P., Mani-Saada, J. and Weinberg, J. (2003), “Development of a methodology to evaluate the impact of a medical digital library on user knowledge, attitude and behaviour”, available at: www.hon.ch/Mednet2003/abstracts/491753869.html

Maly, K., Nelson, M., Shen, S., Zeil, S. and Zubair, M. (n.d.), “Digital library for undergraduate science education evaluation framework”, available at: http://dlib.cs.odu.edu/completed-projects/ndif/nsf/dlib2/udifplan/evaluationframework/evaluation.doc

Marchionini, G. (1995), Information Seeking in Electronic Environments, Cambridge University Press, Cambridge.

Marchionini, G. (2000), “Evaluating digital libraries: a longitudinal and multifaceted view”, Library Trends, Vol. 49 No. 2, pp. 304-33.

Marchionini, G. and Komlodi, A. (1998), “Design of interfaces for information seeking”, Annual Review of Information Science and Technology, Vol. 33, pp. 89-130.

Meyyappan, N., Chowdhury, G.G. and Foo, S. (2001), “Use of a digital work environment (DWE) prototype to create a user-centred university digital library”, Journal of Information Science, Vol. 27 No. 4, pp. 249-64.

Meyyappan, N., Foo, S. and Chowdhury, G.G. (2004), “Design and evaluation of a task-based digital library for the academic community”, Journal of Documentation, Vol. 60 No. 4, pp. 449-75.

Mitchell, S. (1999), “Interface design considerations in libraries”, in Stern, D. (Ed.), Digital Libraries: Philosophies, Technical Design Considerations, and Example Scenarios, Haworth Press, Binghamton, NY, pp. 131-81.

Neuhaus, C. (2005), “Digital library: evaluation and assessment bibliography”, available at: www.uni.edu/neuhaus/digitalbibeval.html

Nicholas, D., Huntington, P. and Williams, P. (2003), “Micro-mining log files: a method for enriching the data yield from internet log files”, Journal of Information Science, Vol. 29, Summer, pp. 391-404.

Nielsen, J. (1993a), “Iterative user interface design”, IEEE Computer, Vol. 26 No. 11, pp. 32-41.

Nielsen, J. (1993b), Usability Engineering, Academic Press, Boston, MA.

Nielsen, J. and Levy, J. (1994), “Measuring usability: preference vs performance”, Communications of the ACM, Vol. 37 No. 4, pp. 66-76.

Norlin, E. (2000), “Reference evaluation: a three-step approach – surveys, unobtrusive observations, and focus groups”, College and Research Libraries, Vol. 61 No. 6, pp. 546-53.

Pan, B., Gay, G., Saylor, J., Hembrooke, H. and Henderson, D. (2004), “Usability, learning, and subjective experience: user evaluation of K-MODDL in an undergraduate class”, Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries, Tucson, Arizona, ACM, New York, NY, pp. 188-9.

Park, S. (2000), “Usability, user preferences, effectiveness, and user behaviours when searching individual and integrated full-text databases: implications for digital libraries”, Journal of the American Society for Information Science, Vol. 51 No. 5, pp. 456-68.

Peng, L.K., Ramaiah, C.K. and Foo, S. (2004), “Heuristic-based user interface evaluation at Nanyang Technological University in Singapore”, Program, Vol. 38 No. 1, pp. 42-59.

Reeves, T.C., Apedoe, X. and Woo, Y.H. (2003), Evaluating Digital Libraries: A User-friendly Guide, NSDL.ORG, University of Georgia, Athens, GA.

Renda, M.E. and Straccia, U. (2005), “A personalized collaborative digital library environment: a model and an application”, Information Processing & Management, special issue: An Asian Digital Libraries Perspective, Vol. 41 No. 1, pp. 5-21.

Sandusky, R.J. (2002), “Digital library attributes: framing usability research”, in Blandford, A. and Buchanan, G. (Eds), JCDL’02 Workshop on Usability of Digital Libraries, pp. 5-7, available at: www.uclic.ucl.ac.uk/annb/DLUsability/JCDL02.html

Saracevic, T. (2000), “Digital library evaluation: toward evolution of concepts – evaluation criteria for design and management of digital libraries”, Library Trends, Vol. 49 No. 2, special issue: Assessing Digital Library Services, pp. 350-69, available at: www.scils.rutgers.edu/~tefko/LibraryTrends2000.pdf

Saracevic, T. (2004), “Evaluation of digital libraries: an overview”, paper presented at the DELOS Workshop on the Evaluation of Digital Libraries, available at: http://dlib.ionio.gr/wp7/ws2004_Saracevic.pdf

Saracevic, T. (2005), “How were digital libraries evaluated?”, presentation at the course and conference Libraries in the Digital Age (LIDA 2005), Dubrovnik, Croatia, 30 May-3 June, available at: www.scils.rutgers.edu/~tefko/DL_evaluation_LIDA.pdf

Shahabi, C. and Chen, Y.-S. (2003), “Web information personalization: challenges and approaches”, Proceedings of Databases in Networked Information Systems: 3rd International Workshop, DNIS 2003, Aizu, Japan, September 22-24, Lecture Notes in Computer Science, Springer, Berlin, pp. 5-15.

Shneiderman, B. (1998), Designing the User Interface: Strategies for Effective Human-Computer Interaction, 3rd ed., Addison-Wesley, Reading, MA.

Snead, J.T., Bertot, J.C., Jaeger, P.T. and McClure, C.R. (2005), “Developing multi-method, iterative, and user-centered evaluation strategies for digital libraries: functionality, usability, and accessibility”, available at: www.ii.fsu.edu/presentations/digilib_asist2005.pdf

Spink, A., Wilson, T., Ford, N., Foster, A. and Ellis, D. (2002), “Information-seeking and mediated searching, part 1, theoretical framework and research design”, Journal of the American Society for Information Science and Technology, Vol. 53 No. 9, pp. 695-703.

Stelmaszewska, H. and Blandford, A. (2002), “Patterns of interactions: user behaviour in response to search results”, available at: www.uclic.ucl.ac.uk/annb/DLUsability/Stelmaszewska29.pdf

Stewart, B. (2004), “No free lunch: the web and information aggregators”, SCIP, Vol. 7 No. 3, available at: www.dialog.com/products/productline/profound.shtml

Sumner, T. (2005), “Report on the Fifth ACM/IEEE Joint Conference on Digital Libraries – cyberinfrastructure for research and education, June 7-11, 2005, Denver, Colorado”, D-Lib Magazine, Vol. 11 No. 7/8, July/August, available at: www.dlib.org/dlib/july05/sumner/07sumner.html

Sumner, T. and Marlino, M. (2004), “Digital libraries and educational practice: a case for new models”, Proceedings of JCDL’04, Tucson, Arizona, June 7-11, pp. 170-8.

Sutcliffe, A. and Ennis, M. (1998), “Towards a cognitive theory of information retrieval”, Interacting with Computers, Vol. 10 No. 3, pp. 321-51.

Theng, Y.L., Mohd-Nasir, N. and Thimbleby, H. (2000), “Purpose and usability of digital libraries”, Proceedings of the 5th ACM Conference on Digital Libraries, ACM, New York, NY, pp. 238-9.

Van House, N.A., Butler, M.H., Ogle, V. and Schiff, L. (1996), “User-centred iterative design for digital libraries”, D-Lib Magazine, available at: www.dlib.org/dlib/february96/02vanhouse.html

Waters, D. (2001), “Developing digital libraries: four principles for higher education”, EDUCAUSE Review, September/October, pp. 58-9.

Wilson, B. (2002), “Usability and evaluation”, editorial, D-Lib Magazine, Vol. 8 No. 10.

Wilson, T. (1999), “Models of information behaviour research”, Journal of Documentation, Vol. 55 No. 3, pp. 249-70.

Wilson, T., Ford, N., Ellis, D., Foster, A. and Spink, A. (2002), “Information seeking and mediated searching, part 2, uncertainty and its correlates”, Journal of the American Society for Information Science and Technology, Vol. 53 No. 9, pp. 704-15.

Wu, S., Crestani, F. and Gibb, F. (2004), “New methods for results merging in distributed information retrieval”, Proceedings of the ACM SIGIR 2003 Workshop on Distributed Information Retrieval, Toronto, Canada, August, Lecture Notes in Computer Science, 2924, Springer-Verlag, Heidelberg, pp. 84-100.

Zhang, Y. (2004), “Digital library evaluation”, available at: www.scils.rutgers.edu/~miceval/research/DL_eval.html

Corresponding author
Sudatta Chowdhury can be contacted at: [email protected]
