2 Qualitative Research Methods for Evaluating Computer Information Systems

Bonnie Kaplan and Joseph A. Maxwell




Introduction

Qualitative research methods are being used increasingly in evaluation studies, including evaluations of computer systems and information technology. This chapter provides an overview of the nature and appropriate uses of qualitative methods and of key considerations in conducting qualitative research.

The goal of qualitative research is understanding issues or particular situations by investigating the perspectives and behavior of the people in these situations and the context within which they act. To accomplish this, qualitative research is conducted in natural settings and uses data in the form of words rather than numbers. Qualitative data are gathered primarily from observations, interviews, and documents, and are analyzed by a variety of systematic techniques. This approach is useful in understanding causal processes, and in facilitating action based on the research results.

Qualitative methods are primarily inductive. Hypotheses are developed during the study so as to take into account what is being learned about the setting and the people in it. Qualitative methods may be combined with quantitative methods in conducting a study. Validity threats are addressed primarily during data collection and analysis.

The chapter discusses these points and uses an evaluation of a clinical laboratory information system to illustrate them.

Computer information systems can significantly improve patient care, hospital management and administration, research, and health and medical education. However, many systems do not achieve these goals. Dowling estimated that 45% of computer-based medical information systems failed due to user resistance, even though these systems were sound technologically. Thus, the stakes in developing, implementing, and evaluating such systems are high [1].

Different evaluation objectives require different methodological approaches. Many evaluations of medical computer information systems focus on impacts such as costs and benefits, timeliness, completeness, error rates, retrievability, usage rates, user satisfaction, and clinician behavior changes [2,3]. Quantitative methods are excellent for studying these kinds of evaluation questions, in which selected features of the information technology, the organization, the user, and the information needs generally are treated as independent, objective, and discrete entities, and as unchanging over the course of the study [4].

When a researcher or evaluator wishes to study issues that are not easily partitioned into discrete entities, or to examine the dynamics of a process rather than its static characteristics, qualitative methods are more useful than solely quantitative ones. The strengths of qualitative research methods lie in their usefulness for understanding the meaning and context of the phenomena studied, and the particular events and processes that make up these phenomena over time, in real-life, natural settings [5]. When evaluating computer information systems, these contextual issues include social, cultural, organizational, and political concerns surrounding an information technology; the processes of information systems development, installation, and use (or lack of use); and how all these are conceptualized and perceived by the participants in the setting where the study is being conducted [6]. Thus, qualitative methods are particularly helpful for any of the following:

• To determine what might be important to measure, why measured results are as they are, or if the subject of study cannot be measured easily

• To understand not only what happened, or what people are responding to, but why; to understand how people think or feel about something and why they think that way, what their perspectives and situations are and how those influence what is happening; to understand and explore what a technology (such as a newborn nursery telemonitoring system) or practice (such as using a computer to access health information) means to people

• To investigate the influence of social, organizational, and cultural context on the area of study, and vice versa

• To examine causal processes, and not simply what causal relationships exist

• To study processes as they develop and emerge, rather than outcomes or impacts; for example, to investigate the development process for the application under study in parallel with that process so that you can improve the application development as it progresses.

Qualitative research methods have undergone significant development in recent years [7–10] and are being increasingly used in evaluation research both within and outside health care [6,10–13]. There also is a growing literature on combining qualitative and quantitative methods [14–22]. The purpose of this chapter is to explain what qualitative approaches can contribute to medical computer systems evaluations. We begin by describing the nature and goals of qualitative research and evaluation, and illustrate these with an example of the use of qualitative methods in computer information systems evaluation. We then discuss some key considerations in qualitative evaluation research and present the most important methods used in this approach.

The Nature of Qualitative Research

“Qualitative research” refers to a range of approaches that differ significantly among themselves, but that share some defining characteristics and purposes. These approaches are known by a variety of terms, some broad and some quite specific. The more general terms, which are more or less equivalent to “qualitative research,” are “field research” and “naturalistic research.” More specific terms, denoting particular types of qualitative research, are “interpretive research,” “ethnographic research,” “phenomenological research,” “hermeneutic research,” “humanistic research,” and some kinds of case studies or action research [23]. We use “qualitative research” to refer to all of these.

Qualitative research typically involves systematic and detailed study of individuals in natural settings, instead of in settings contrived by the researcher, often using open-ended interviews intended to elicit detailed, in-depth accounts of the interviewee’s experiences and perspectives on specific issues, situations, or events. Qualitative methods employ data in the form of words: transcripts of open-ended interviews, written observational descriptions of activities and conversations, and documents and other artifacts of people’s actions. Such data are analyzed in ways that retain their inherent textual nature. This is because the goals of qualitative research typically involve understanding a phenomenon from the points of view of the participants, and in its particular social and institutional context. These goals largely are lost when textual data are quantified and aggregated [5].

Reasons for Qualitative Research

There are five main reasons for using qualitative methods in evaluating computer information systems:

1. Understanding how a system’s users perceive and evaluate that system and what meanings the system has for them

Users’ perspectives generally are not known in advance. It is difficult to ascertain or understand these through purely quantitative approaches. By allowing researchers to investigate users’ perspectives in depth, qualitative methods can contribute to explaining users’ behavior with respect to the system, and thus the system’s successes and failures, and even what is considered a “success” or “failure” [6].


2. Understanding the influence of social and organizational context on systems use

Computer information systems do not exist in a vacuum; their implementation, use, and success or failure occur in a social and organizational context that shapes what happens when that system is introduced. Some researchers consider this so important as to treat “context” as intrinsically part of the object of study rather than as external to the information system. Because of “context,” in important respects, a system is not the same system when it is introduced into different settings [24,25]. As is true for users’ perspectives, the researcher usually does not know in advance what all the important contextual influences are. Qualitative methods are useful for discovering and understanding these influences, and also for developing testable hypotheses and theories.

3. Investigating causal processes

Although experimental interventions can demonstrate that causal relationships exist, they are less useful in showing how causal processes work [26–28]. Qualitative methods often allow the researcher to get inside the black box of experimental and survey designs and to discover the actual processes involved in producing the results of such studies. Qualitative research is particularly useful for developing explanations of the actual events and processes that led to specific outcomes [7], or when causality is multidirectional and there is no clear effect or impact of one factor on some specific outcome [6]. In this way, qualitative methods can yield theories and explanations of how and why processes, events, and outcomes occur [29].

4. Providing formative evaluation that is aimed at improving a program under development, rather than assessing an existing one

Although quantitative and experimental designs often are valuable in assessing outcomes, they are less helpful in giving those responsible for systems design and implementation timely feedback on their actions. Qualitative evaluation can help both in system design and in studies of system use [6]. Using qualitative methods can help in identifying potential problems as they are forming, thereby providing opportunities to improve the system as it develops. These evaluations also allow for varying and changing project definitions, and for the ways in which the system and organization are mutually transformative, thereby enabling learning by monitoring the many experiments that occur spontaneously as part of the processes of implementation and use [6].

5. Increasing the utilization of evaluation results

Administrators, policy makers, systems designers, and practitioners often find purely quantitative studies of little use because these studies do not seem related to their own understanding of the situation and the problems they are encountering. Qualitative methods, by providing evaluation findings that connect more directly with these individuals’ perspectives, can increase the credibility and usefulness of evaluations for such decision makers [10].

An Example: Evaluating a Clinical Laboratory Computer Information System

These attributes of qualitative research are illustrated by a study of a clinical laboratory computer information system used by different laboratories within one department of an academic medical center [16,30–32]. This system was evaluated by combining both quantitative and qualitative methods. A survey questionnaire was designed to assess the impact of the computer system on work in the laboratories. Qualitative data gathered from interviews, observations, and open-ended questionnaire questions were used to determine what changes were attributed to the computer system. Statistical analysis of the survey data initially revealed no differences among laboratory technologists’ responses. Qualitative data analysis of their answers to open-ended questions indicated that laboratory technologists within each laboratory differed in their reactions to the system, as did laboratories as a whole. Some focused on work increases, whereas others emphasized improved laboratory results reporting and service.

Although the quantitative survey data provided no apparent reason for these differences, the qualitative data did, leading to further investigation. This investigation revealed that different technologists had different views of their jobs, and these different views affected their attitudes toward the computer system. For some technologists, the system enhanced their jobs, while for others, it interfered with their jobs, even though they ostensibly had the same jobs and were using the same system. Neither the researchers nor the laboratory personnel expected this finding, though the finding rang true. Further analysis of the quantitative data supported this explanation for the differences among laboratories and among technologists. In the original quantitative analysis, few differences were discernible among technologists or among laboratories because standard quantitative measures of job characteristics assumed a uniformity of job situations and perceptions. However, this uniformity did not exist, as revealed in qualitative data that identified technologists’ own views of their jobs and of the system.
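The masking effect at work here can be sketched with a small, entirely invented numerical example (the scores below are illustrative only, not data from the study): two groups whose mean satisfaction scores are identical even though their members’ responses diverge sharply. A comparison of aggregate means would report “no difference,” while case-by-case reading, as with open-ended answers, would immediately distinguish them.

```python
# Hypothetical illustration (all numbers invented): identical group
# means can mask sharply divergent individual responses, which is why
# aggregate survey statistics alone showed "no differences."
from statistics import mean, stdev

lab_a = [3, 3, 3, 3, 3]  # uniformly lukewarm satisfaction scores (1-5 scale)
lab_b = [1, 1, 3, 5, 5]  # polarized scores: strong objectors and enthusiasts

print(mean(lab_a), mean(lab_b))    # both means are 3.0
print(stdev(lab_a), stdev(lab_b))  # 0.0 versus 2.0
```

A test of group means would find the laboratories indistinguishable; only attention to the distribution of individual views reveals the polarization the qualitative analysis uncovered.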

This example illustrates several features of qualitative research. First, it was not possible to design, in advance, a quantitative study that would have tested the right hypotheses, because appropriate hypotheses could not be known in advance. A qualitative approach enabled the researchers to see how individuals construed the information technology, their jobs, and the interaction between the laboratory computer information system and their jobs. Thus, the researchers were able to generate productive hypotheses and theory.


Second, the qualitative data enabled the researchers to make sense of their quantitative findings. The qualitative data helped to explain why the quantitative results were as they were. This is one example of the point made above—that qualitative research often can uncover the causal processes that explain quantitative results.

Third, the qualitative data were able to serve these purposes because they helped the researchers understand the system from the points of view of those involved with it. These points of view are crucial to studying issues such as computer systems acceptance or rejection, or the changes that occur when a new system is introduced.

Fourth, a variety of human, contextual, and cultural factors affect system acceptance in actual use. Qualitative data enabled the researchers to understand the contexts in which the system was developed, installed, and used, and thus to understand differences among laboratories.

Finally, the results had face validity. They were believable to the laboratory director in the hospital where the study was done, to laboratory personnel in other hospitals, and even outside of hospitals where workers showed characteristics similar to the laboratory technologists’. Because the results were credible and the description of the laboratory technologists was recognizable, they were useful for others. This is the primary means by which qualitative studies can be generalized or results made transferable: not by statistical inference to some defined population, but through the development of a theory that has applicability beyond the setting studied [33], as was done in this study [16].

In the remainder of this chapter, we discuss some important considerations in designing and conducting evaluations that use qualitative methods.

Getting Started

The most important initial question for an evaluator is whether qualitative methods are appropriate for conducting the study. For this, it is important to consider what qualitative methods can add to an evaluation: what kinds of questions they are capable of answering, and what value they have.

Research Questions and Evaluation Goals

Qualitative methods typically are used to understand the perception of an information system by its users, the context within which the system is implemented or developed, and the processes by which changes occur or outcomes are generated. They usually focus on the description, interpretation, and explanation of events, situations, processes, and outcomes, rather than the correlation of variables, and tend to be used for understanding a particular case or for comparison of a small number of cases, rather than for generalization to a specified population. They are useful for systematically collecting so-called “anecdotal” evidence and turning the experiences they describe into data that can be rigorously collected and analyzed.


Thus, the questions posed in a qualitative study are initially framed as “what,” “how,” and “why” queries, rather than as whether a particular hypothesis is true or false. The fundamental question is “What is going on here?” This question is progressively narrowed, focused, and made more detailed as the evaluation proceeds. Qualitative studies may begin with specific concerns or even suppositions about what is going on, but major strengths of qualitative methods are avoiding tunnel vision, seeing the unexpected, disconfirming one’s assumptions, and discovering new ways of making sense of what is going on. Qualitative evaluators typically begin with questions such as:

• What is happening here?
• Why is it happening?
• How has it come to happen in this particular way?
• What do the people involved think is happening?
• How are these people responding to what is happening?
• Why are these people responding that way?

To answer these questions, qualitative evaluators attempt to understand the way others construe, conceptualize, and make sense of what is happening in a particular situation. In doing this, they must become familiar with the everyday behaviors, habits, work routines, and attitudes of the people involved as these people go about their daily business. It also is important for evaluators to become familiar with the language or specialized jargon used by people involved with the study. Knowledge of behaviors, habits, routines, attitudes, and language provides a way of identifying key concepts and values. This knowledge enables the evaluator not only to better understand what is going on, but also to present findings in terms meaningful to the participants. Policy makers, department administrators, systems designers, and others will be able to recognize the situations being reported and, therefore, know better how to address them. In addition, individuals outside the organization where the evaluation is conducted will have sufficient context to develop a good understanding of it.

Further, qualitative methods can be used throughout the entire systems development and implementation process. They treat a computer information system project as a process, rather than as an object or event. By doing this, the evaluator can play an active role in the project, offering evaluations as the project progresses (formative evaluations) instead of having to wait until the project is completed (summative evaluations). In this way, evaluators can serve as a bridge between the interests of systems developers and systems users [6].

Recognizing diversity of perceptions also is important; “various individuals . . . may perceive it [an innovation] in light of many possible sets of values” [34]. For example, in Lundsgaarde and colleagues’ [35,36] evaluation of PROMIS, individuals who thought their professional status was enhanced by the system were more positive than those who felt their status was lowered. Other values also can play a role; Hirschheim and Klein [37] illustrate this for systems developers, Kaplan [38–40] for physician and developer values in systems development and use, while Sicotte and colleagues discuss a system failure in terms that contrast differences in values and goals among nurses and system developers [41,42]. A major strength of qualitative approaches is their sensitivity to this diversity and to unique events and outcomes.

Role of Theory

Theory is useful for guiding a study. Familiarity with the subject of study or with a wide range of theories and situations, for example, can help the researcher make sense of occurrences in the particular study being conducted. It can help the evaluator to not overlook important issues and help provide a set of constructs to be investigated. In this way, theory can shape research questions and focus. Theory also will guide a researcher’s interpretation and focus. Theories of knowledge and epistemologies underlying research approaches influence how the project is conceived, how the research is carried out, and how it is reported. For example, Kaplan describes how three different theoretical perspectives would lead to different interpretations of Lundsgaarde, Fischer, and Steele’s findings in their evaluation of PROMIS [43].

Theory may play different roles in qualitative research. Two different approaches may be taken, or combined. In the first, the evaluator works within an explicit theoretical frame. For example, postmodern and constructivist theories are becoming increasingly popular in studies of information systems [6,44]. In the second approach, the evaluator attempts to avoid prior commitment to theoretical constructs or to hypotheses formulated before gathering any data. Nevertheless, in each approach, qualitative researchers develop categories and hypotheses from the data. The two approaches may be combined. For example, an evaluator may start with research questions and constructs based on theory, but instead of being limited to or constrained by prior theory, also attempt to develop theory, hypotheses, and categories through a strategy known as “grounded theory” [45,46].

Regardless of which approach is used, an evaluator cannot avoid a prior theoretical orientation that affects research and evaluation questions, as well as the methods chosen for investigating those questions. The difference between the two approaches is not whether the evaluator has some prior theoretical bent—that is unavoidable—but whether the evaluator deliberately works within it or tries to work outside it.

Gaining Entry

An evaluation begins with the process of the researcher gaining access to the setting and being granted permission to conduct the evaluation. How this is done shapes the researcher’s relationship with the participants in the study, and, consequently, affects the nature of the entire study [12,18]. Some of these effects bear on validity issues, as discussed below. In addition to practical and scientific issues, negotiating a research or evaluation study raises ethical ones [6,10,47]. To help address some of the ethical concerns, we believe that, to the extent possible, all participants in the setting being evaluated should be brought into the negotiation process. Furthermore, doing interviews or observations may intrude into people’s private lives, work spaces, and homes, and probe their feelings and thoughts. Personal issues may easily arise, so the researcher needs sensitivity to respect people’s privacy and sensibilities. Often confidentiality is promised, which may require significant steps to protect people’s identities.

Qualitative Research Design

Qualitative research primarily is inductive in its procedures. Qualitative researchers assume that they do not know enough about the perspectives and situations of participants in the setting studied to be able to formulate meaningful hypotheses in advance, and instead develop and test hypotheses during the process of data collection and analysis. For the same reasons, qualitative evaluations tend to be in-depth case studies of particular systems.

Qualitative research design involves considerable flexibility [5,7], for two reasons. First, many aspects of the project change over time, including the processes being studied, evaluation goals, definitions of “success,” and who the stakeholders may be [6]. As they change, the study itself also may need changing. Second, qualitative inquiry is inductive and often iterative in that the evaluator may go through repeated cycles of data collection and analysis to generate hypotheses inductively from the data. These hypotheses, in turn, need to be tested by further data collection and analysis. The researcher starts with a broad research question, such as “What effects will information systems engendered by reforms in the UK’s National Health Service have on relative power and status among clinical and administrative staff in a teaching hospital?” [48]. The researcher narrows the study by continually posing increasingly specific questions and attempting to answer them through data already collected and through new data collected for that purpose. These questions cannot all be anticipated in advance. As the evaluator starts to see patterns, or discovers behavior that seems difficult to understand, new questions arise. The process is one of generating hypotheses and explanations from the data, testing them, and modifying them accordingly. New hypotheses may require new data, and, consequently, potential changes in the research design.

Data Collection

The most important principle of qualitative data collection is that everything is potential data. The evaluator does not rigidly restrict the scope of data collection in advance, nor use formal rules to decide that some data are inadmissible or irrelevant. However, this approach creates two potential problems: validity and data overload.

Validity issues are addressed below. The problem of data overload is in some ways more intractable. The evaluator must continually make decisions about what data are relevant and may change these decisions over the course of the project. The evaluator must work to focus the data collection process, but not to focus it so narrowly as to miss or ignore data that would contribute important insights or evidence.

Qualitative evaluators use three main sources for data: (1) observation, (2) open-ended interviews and survey questions, and (3) documents and texts. Qualitative studies generally collect data by using several of these methods to give a wider range of coverage [49]. Data collection almost always involves the researcher’s direct engagement in the setting studied, what often is called “fieldwork.” Thus, the researcher is the instrument for collecting and analyzing data; the researcher’s impressions, observations, thoughts, and ideas also are data sources. The researcher incorporates these when recording qualitative data in detailed, often verbatim form as field notes or interview transcripts. Such detail is essential for the types of analysis that are used in qualitative research. We discuss each of these data sources in turn, drawing again on Kaplan and Duchon’s study and several other studies for examples.
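One practical way to keep these three data sources usable side by side is to record every item with its source type, so that what people say can later be compared with what they do and with the written record. The sketch below is purely illustrative (the record structure, field names, and example texts are invented, not from any study described here), but shows one minimal shape such a corpus might take.

```python
# Hypothetical sketch of a uniformly structured corpus holding all
# three qualitative data sources (all names and texts are invented).
from dataclasses import dataclass, field

@dataclass
class Record:
    source: str   # "observation" | "interview" | "document"
    setting: str  # where the data were collected
    text: str     # verbatim field note, transcript excerpt, or document excerpt
    codes: list[str] = field(default_factory=list)  # analyst-assigned codes

corpus = [
    Record("observation", "chemistry lab",
           "Technologist re-keys results already printed by the analyzer."),
    Record("interview", "chemistry lab",
           "It doubles my paperwork, honestly."),
    Record("document", "lab office",
           "Memo: turnaround-time targets revised after go-live."),
]

# Keeping the source with each record preserves the ability to compare
# accounts across observation, interview, and documentary evidence.
by_source = {s: [r for r in corpus if r.source == s]
             for s in ("observation", "interview", "document")}
print({s: len(rs) for s, rs in by_source.items()})
```

The point of the structure is triangulation: each claim can be traced back to the kind of evidence that supports it.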

Observation

Observation in qualitative studies typically involves the observer’s active involvement in the setting studied; it is usually called “participant observation” to distinguish it from passive or non-interactive observation. Participant observation allows the observer to ask questions for clarification of what is taking place and to engage in informal discussion with system users, as well as to record ongoing activities and descriptions of the setting. It produces detailed descriptive accounts of what was going on (including verbal interaction), as well as eliciting the system users’ own explanations, evaluations, and perspectives in the immediate context of use, rather than retrospectively. Such observation often is crucial to the assessment of a system. For example, Kaplan and Duchon went to the laboratories to observe what technologists actually did, rather than simply depend on verbal reports or job descriptions; Forsythe [50,51], in her studies of physicians’ information needs, attended hospital rounds and recorded each request for information.

Observation also can be conducted when evaluating the potential uses of a proposed computer information system. Kaplan, for example, observed how the flowsheets in the patient record were used in an intensive care unit when it was suggested that flowsheets be replaced by a computer terminal that would display laboratory data in graphic form. She observed that only the pharmacist consulted the flowsheets. When a physician came to see the patient, or a new nurse came on duty, he or she assessed the patient’s condition by talking to the nurse who had been caring for that patient, rather than by consulting the patient’s record. These observations raised a number of issues that would need addressing if a computer display of flowsheet information were to be implemented successfully. In another study preparatory to developing a system to make clinical images part of an online patient record, physician use of images was studied [52].

Open-Ended Interviews and Survey Questions

Open-ended interviewing requires a skillful and systematic approach to questioning participants. This can range from informal and conversational interviews to ones with a specific agenda. There are two distinctive features of open-ended interviewing. First, the goal is to elicit the respondent’s views and experiences in his or her own terms, rather than to collect data that are simply a choice among preestablished response categories. Second, the interviewer is not bound to a rigid interview format or set of questions, but should elaborate on what is being asked if a question is not understood, follow up on unanticipated and potentially valuable information with additional questions, and probe for further explanation.

For example, Kaplan and Duchon interviewed laboratory directors and chief supervisory personnel to determine what they expected the potential effects of the computer system would be on patient care, laboratory operations, and hospital operations. They asked such questions as "What effects do you expect the computer system to have?" so as not to constrain what the interviewees would answer. They also asked "What do you think this study should focus on?" so as to explore issues they had not anticipated.

A close analogue to open-ended interviewing, for large groups of respondents, is using open-ended survey questions. Kaplan and Duchon included in their survey such open-ended questions as "What important changes do you think the computer system has caused?" The final question on the survey was a request for any additional comments. Such questions are important to include in interviews and questionnaires to insure that unanticipated issues are explored.

Another way to investigate the views of groups of respondents is through focus groups. This involves interviewing several people together, and adds an opportunity for those present to react and respond to each other's remarks [53,54].

Documents and Texts

Documents, texts, pictures or photographs, and artifacts also can be valuable sources of qualitative data. For example, Nyce and Graves [55] analyzed published texts, case memoirs, and novels written by physicians in their study of the implications of knowledge construction in developing visualization systems in neurology. In Kaplan's studies of the acceptance and diffusion of medical information systems [38–40,56], she did close readings of original source documents: published research papers; popularizations in medical magazines, newsletters, and books; conference reports; memoirs of individuals who developed the systems; and books commissioned by federal agencies.

Data Analysis

The basic goal of qualitative data analysis is understanding: the search for coherence and order. The purpose of data analysis is to develop an understanding or interpretation that answers the basic question of what is going on here. This is done through an iterative process that starts by developing an initial understanding of the setting and perspectives of the people being studied. That understanding then is tested and modified through cycles of additional data collection and analysis until an adequately coherent interpretation is reached [7,10].

Thus, in qualitative research, data analysis is an ongoing activity that should start as soon as the project begins and continue through the entire course of the research [5]. The processes of data collection, data analysis, interpretation, and even research design are intertwined and depend on each other.

As with data collection, data analysis methods usually cannot be precisely specified in advance. As noted previously, qualitative data collection and analysis have an inductive, cyclic character. As Agar describes it:

You learn something ("collect some data"), then you try to make sense out of it ("analysis"), then you go back and see if the interpretation makes sense in light of new experience ("collect more data"), then you refine your interpretation ("more analysis"), and so on. The process is dialectic, not linear [57].

All forms of qualitative data analysis presuppose the existence of detailed textual data, such as observational field notes, interview transcripts, or documents. There also is a tendency to treat as "textual" other nonnumeric forms of data, such as diagrams or photographs. A necessary first step in data analysis, prior to all of the subsequent techniques, consists of reading the data. This reading is done to gain familiarity with what is going on and what people are saying or doing, and to develop initial ideas about the meaning of these statements and events and their relationships to other statements and events. Even at later stages in data analysis, it often is valuable to go back and reread the original data in order to see if the developing hypotheses make sense. All of the analysis techniques described below depend on this prior reading; they require the ongoing judgment and interpretation of the researcher.

There are four basic techniques of qualitative data analysis: (1) coding, (2) analytical memos, (3) displays, and (4) contextual and narrative analysis. They are used, separately and in combination, to help identify themes; develop categories; and explore similarities and differences in the data, and relationships among them. None of these methods is an algorithm that can be applied mechanically to the data to produce "results." We briefly discuss each of the four techniques.

Coding

The purpose of coding in qualitative research is different from that in experimental or survey research or content analysis. Instead of applying a preestablished set of categories to the data according to explicit, unambiguous rules, with the primary goal being to generate frequency counts of the items in each category, qualitative coding involves selecting particular segments of data and sorting these into categories that facilitate insight, comparison, and the development of theory [46]. While some coding categories may be drawn from the evaluation questions, existing theory, or prior knowledge of the setting and system, others are developed inductively by the evaluator during the analysis, and still others are taken from the language and conceptual structure of the people studied. The key feature of most qualitative coding is that it is grounded in the data [45] (i.e., it is developed in interaction with, and is tailored to the understanding of, the particular data being analyzed).

Analytical Memos

An analytical memo is anything that a researcher writes in relationship to the research, other than direct field notes or transcription. It can range from a brief marginal comment on a transcript, or a theoretical idea incorporated into field notes, to a full-fledged analytical essay. All of these are ways of getting ideas down on paper, and of using writing as a way to facilitate reflection and analytical insight. Memos are a way to convert the researcher's perceptions and thoughts into a visible form that allows reflection and further manipulation [7,46]. Writing memos is an important analysis technique, as well as being valuable for many other purposes in the research [5], and should begin early in the study, perhaps even before starting the study [58].

Displays

Displays, such as matrices, flowcharts, and concept maps, are similar to memos in that they make ideas, data, and analysis visible and permanent. They also serve two other key functions: data reduction, and the presentation of data or analysis in a form that allows it to be grasped as a whole. These analytical tools have been given their most detailed elaboration by Miles and Huberman [7], but are employed less self-consciously by many other researchers. Such displays can be primarily conceptual, as a way of developing theory, or they can be primarily data oriented. Data-oriented displays, such as matrices, can be used as an elaboration of coding; the coding categories are presented in a single display in conjunction with a reduced subset of the data in each category. Other types of displays, such as concept maps, flowcharts, causal networks, and organizational diagrams, display connections among categories.
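A data-oriented matrix display of this kind can be sketched in a few lines of code. The categories, cases, and excerpts below are invented for illustration only; they are not data from the Kaplan and Duchon study or any other actual evaluation:

```python
# Hypothetical sketch of a data-oriented matrix display: coding
# categories as rows, cases (here, two laboratories) as columns,
# with one reduced excerpt per cell. All labels and excerpts are
# invented for illustration.
matrix = {
    ("job perception", "Lab A"): "system won't change my job",
    ("job perception", "Lab B"): "I'm a data manager now",
    ("system benefit", "Lab A"): "results go out faster",
    ("system benefit", "Lab B"): "too many screens to check",
}

categories = ["job perception", "system benefit"]
cases = ["Lab A", "Lab B"]

# Print the matrix so the reduced data can be grasped as a whole.
header = "".join(f"{case:<30}" for case in cases)
print(f"{'':<16}{header}")
for category in categories:
    row = "".join(f"{matrix.get((category, case), ''):<30}" for case in cases)
    print(f"{category:<16}{row}")
```

Scanning across a row compares cases within one category; scanning down a column characterizes one case across categories.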

Contextual and Narrative Analysis

Contextual and narrative analysis has developed mainly as an alternative to coding (e.g., [59]). Instead of segmenting the data into discrete elements and resorting these into categories, these approaches to analysis seek to understand the relationships between elements in a particular text, situation, or sequence of events. Methods such as discourse analysis [60], narrative analysis [59,61], conversation analysis [62], profiles [63], or ethnographic microanalysis [64] identify the relationships among the different elements in that particular interview or situation, and their meanings for the persons involved, rather than aggregating data across contexts. Coffey and Atkinson [65] review a number of these strategies.

Software

Qualitative methods produce large amounts of data that may not be readily amenable to manipulation, analysis, or data reduction by hand. Computer software is available that can facilitate the process of qualitative analysis [66,67]. Such programs perform some of the mechanical tasks of storing and coding data, retrieving and aggregating previously coded data, and making connections among coding categories, but do not "analyze" the data in the sense that statistical software does. All of the conceptual and analytical work of making sense of the data still needs to be done by the evaluator. There are different types of programs, some developed specifically for data analysis, and others (including word processors, textbase managers, and network builders) that can be used for some of the tasks of analysis. For relatively small-scale projects, some qualitative researchers advocate not using any software besides a good word processor. A very sophisticated and powerful program may be difficult to use if it has unneeded features, so it is advisable to carefully consider what the program needs to do before committing to its use. Weitzman and Miles [66] and Weitzman [67] provide a useful list of questions to consider in choosing software.
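The mechanical tasks such programs perform (storing coded segments, then retrieving and aggregating them by category) can be illustrated with a minimal sketch. This is not the interface of any actual qualitative-analysis package, and the excerpts are invented:

```python
from collections import defaultdict

# Minimal sketch of the mechanical tasks qualitative-analysis software
# performs. Class name, methods, and excerpts are all hypothetical.
class CodeBook:
    def __init__(self):
        self._by_category = defaultdict(list)  # category -> coded segments

    def code(self, segment, *categories):
        """Store one data segment under one or more coding categories."""
        for category in categories:
            self._by_category[category].append(segment)

    def retrieve(self, category):
        """Retrieve and aggregate all segments coded with a category."""
        return list(self._by_category[category])

    def co_occurring(self, cat_a, cat_b):
        """Make connections among categories: segments coded with both."""
        return [s for s in self._by_category[cat_a] if s in self._by_category[cat_b]]

# Usage: the conceptual work of choosing and applying categories
# remains with the evaluator; the software only files and fetches.
book = CodeBook()
book.code("The computer won't change my job.", "job perception")
book.code("I spend my day verifying results on screen.", "job perception", "work practice")
print(book.retrieve("job perception"))
```

Even this toy example shows why such software does not "analyze" anything: every category name and every coding decision comes from the evaluator.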

Validity

Validity in qualitative research addresses the necessarily "subjective" nature of data collection and analysis. Because the researcher is the instrument for collecting and analyzing data, the study is subjective in the sense of being different for different researchers. Different researchers may approach the same research question by collecting different data or by interpreting the same data differently.


Qualitative researchers acknowledge their role as research instruments by making it an explicit part of data collection, analysis, and reporting. As in collecting and analyzing any data, what the evaluator brings to the task—his or her biases, interests, perceptions, observations, knowledge, and critical faculties—all play a role in the study.

Qualitative researchers include in their studies specific ways to understand and control the effects of their background and role. They recognize that the relationships they develop with those studied have a major effect on the data that can be gathered and the interpretations that can be developed [8,12]. The researcher's relationships and rapport with study participants significantly influence what people will reveal in interviews and the extent to which they alter their behavior in response to an observer's presence. Similarly, researchers recognize that their personal experiences and theoretical bents influence their choice of evaluation questions, data, and interpretation. Qualitative researchers consider it their responsibility to carefully articulate previous beliefs and constantly question every observation and every interpretation so as to help avoid being blinded or misdirected by what they bring to the study [68]. They also report their backgrounds to study participants and the audience for the evaluation, including the research community, so that others may consider the potential influence on study results.

The product of any qualitative analysis is an interpretation, rather than a purely "objective" account. It often is valuable for several researchers to analyze the same data and compare results, but discrepancies between different researchers' interpretations do not automatically invalidate the results. Because of the flexibility and individual judgment inherent in qualitative methods, reliability generally is weaker than in quantitative designs, but validity often is stronger; qualitative researchers' close attention to meaning, context, and process makes them less likely to ask the wrong questions or to overlook or exclude important data [69]. Thus, the loss of reliability is counterbalanced by the greater validity that results from the researcher's flexibility, insight, and ability to use his or her tacit knowledge.

To further insure validity, qualitative researchers typically assess specific validity threats during data collection and analysis by testing these threats against existing data or against data collected specifically for this purpose [5,7,69–72]. Particular strategies include: (1) collecting rich data, (2) paying attention to puzzles, (3) triangulation, (4) feedback or member checking, and (5) searching for discrepant evidence and negative cases. We discuss each of these in turn.

Rich Data

Rich data are data that are detailed and varied enough that they provide a full and revealing picture of what is going on, and of the processes involved [73]. Collecting rich data makes it difficult for the researcher to see only what supports his or her prejudices and expectations, and thus provides a test of one's developing theories, as well as a basis for generating, developing, and supporting such theories.

Puzzles

One underlying assumption of qualitative methods is that things make sense [74]. They make sense to the people involved in the setting, who understand the situation in ways the research must discover or determine. Moreover, the evaluator must make sense of things. If the evaluator has not understood how sense is to be made of a situation, the evaluator has not yet achieved an adequate interpretation, perhaps because not enough data have been collected, or because the problem is being approached from the wrong perspective or theoretical framework. In particular, the evaluator must pay careful attention to resolving surprises, puzzles, and confusions, which are important in developing a valid interpretation [75].

Triangulation

Qualitative researchers typically collect data from a range of individuals and settings. Multiple sources and methods increase the robustness of results. Using more than one source of data and more than one method of data collection allows findings to be strengthened by cross-validating them. This process generally is known as "triangulation" [15].

When data of different kinds and sources converge and are found congruent, the results have greater credibility than when they are based on only one method or source [15,33,49,76]. However, when the data seem to diverge, then, in line with the assumption that things make sense and the importance of focusing on puzzles or discrepancies, an explanation must be sought to account for all of them [77].

Feedback or Member Checking

Feedback, or member checking, is the single most important way of ruling out the possibility of misinterpreting the meaning of what participants say and do, what the researcher observed, and the perspective the participants have on what is going on. It involves systematically gathering feedback about one's conclusions from participants in the setting studied [47] and from others familiar with the setting. The researcher checks that the interpretation makes sense to those who know the setting especially well. In addition, this is an important way of identifying the researcher's biases [5] and affords the possibility of collecting additional important data.

Searching for Discrepant Evidence and Negative Cases

Identifying and analyzing discrepant data and negative cases is a key part of the logic of validity testing in qualitative research. Instances that cannot be accounted for by a particular interpretation or explanation can point up important defects in that account. There are strong pressures to ignore data that do not fit prior theories or conclusions, and it is important to rigorously examine both supporting and discrepant data. In particularly difficult cases, the only solution may be to report the discrepant evidence and allow readers to draw their own conclusions [23].

Example

We illustrate how issues of reliability and validity can be addressed by drawing on Kaplan and Duchon's study.

In the clinical laboratory information system evaluation, Kaplan had a systems designer's working knowledge of computer hardware and software, of terminology in clinical settings, and, in particular, of order entry and results reporting systems for a clinical laboratory. She was aware that this background influenced her study. As the primary field researcher, she could listen to, and participate in, discussions among laboratory staff and have a better understanding of them. In designing the study, Kaplan, an information systems specialist, sought colleagues with backgrounds different from hers. Duchon, a specialist in organizational behavior, was unfamiliar with clinical laboratories and with information systems. Each of these two researchers had to be convinced of the other's interpretations. Further, the study's sponsors and participants were aware of the researchers' backgrounds, which also were reported in publications so that others would be able to consider for themselves what effects the researchers' backgrounds might have.

Kaplan and Duchon collected data from multiple sources using several different methods. This provided them with rich data that led to puzzles and discrepancies that required resolution. Resolving these resulted in significant insights. For example, Kaplan and Duchon explored the puzzle presented by interviewees repeatedly saying the computer system would not change laboratory technologists' jobs but that it would change what technologists did. Kaplan and Duchon developed hypotheses and tentative theories to explain how the interviewees might not see a contradiction in their statements.

They also cross-validated their results by comparing their data. Qualitative and quantitative data at first seemed not to agree. The quantitative data initially indicated no differences among laboratories in their response to the computer system, yet differences were evident in the qualitative data. Discrepancies also occurred within the qualitative data alone, because technologists in the same laboratory disagreed over whether the computer system was a benefit. Rather than assuming that some technologists simply were wrong, or that either the qualitative or quantitative data were in error, an explanation was needed to allow for all these responses.

Resolving these puzzles and reconciling all the data contributed to a much richer final interpretation that resulted in a theory of how views of one's job and views of a computer system are related. Study results were made available to laboratory managers for comment, and presented to laboratory directors for discussion, thus creating opportunities for feedback and member checking of the researchers' interpretations. Further feedback was obtained by presenting the theory to staff from the laboratories studied as well as to knowledgeable individuals from other, related settings.

Units and Levels of Analysis

Often qualitative evaluation research focuses on individuals and then groups individuals in familiar ways, for example, by occupation or location. Important differences among the individuals may be obscured by grouping them together in this way. For example, in the Kaplan and Duchon study, individual technologists could be categorized based on how they conceptualized their jobs, and also the individual laboratories within the institution could be so categorized. Simply considering the category "laboratory technologist" would have lost these findings and revealed little of interest in how laboratory technologists responded to the new laboratory information system. Further, there are alternatives to taking individuals as units of analysis. Researchers can study how communities pursue their goals through using information technology [78] or conduct evaluations that cross organizational, geographic, or political boundaries through virtual health care [79]. Research designs might employ different degrees of granularity and different units and levels of analysis, and investigate how changes ripple across them [6].

Conclusion

We have presented an overview of qualitative research and how it can be used for evaluating computer information systems. This chapter has covered techniques for data collection and analysis, and discussed how and why such methods may be used. We have suggested research designs and data collection and analysis approaches that meet methodological guidelines useful when developing an evaluation plan: (1) focus on a variety of technical, economic, people, organizational, and social concerns; (2) use multiple methods; (3) be modifiable; (4) be longitudinal; and (5) be formative as well as summative [80–82].

We believe that qualitative methods are useful because they provide means of answering questions that cannot be answered solely by other methods. The strengths of qualitative methods relate primarily to the understanding of a system's specific context of development and use, the ways developers and users perceive the system, and the processes by which the system is accepted, rejected, or adapted to a particular setting. We believe that these are crucial issues for the development, implementation, and evaluation of computer information systems. Consequently, qualitative methods can make an important contribution to research and evaluation of computer information systems.

Additional Reading

Qualitative Methods

Patton [10] is an excellent introduction to qualitative research methods. It also is one of the best works on qualitative approaches to evaluation. More advanced discussion of theory and methods of qualitative research can be found in Hammersley and Atkinson [8] and in Denzin and Lincoln [9].

Specific techniques for qualitative data analysis are presented in Miles and Huberman [7], Coffey and Atkinson [65], and Strauss and Corbin [46]. A useful guide to both data analysis and writing of qualitative research is Wolcott [58].

Rogers's [24] work on the adoption of innovations is relevant to the introduction of computer information systems.

Information Systems Research Theory and Methodological Frameworks

Useful discussions of theoretical perspectives in information systems research can be found in several papers. Kling [83], Kling and Scacchi [4], Lyytinen [84], and Markus and Robey [29] present theoretical frameworks that are relevant to studies of the social aspects of computing. The paradigms of information systems development that Hirschheim and Klein [37] discuss also are applicable to research approaches and, in fact, were derived from such a framework. Kaplan [43] illustrates the influences of theoretical stance using a medical information system as an example. Mumford, Fitzgerald, Hirschheim, and Wood-Harper [85]; Nissen, Klein, and Hirschheim [86]; Lee, Liebenau, and DeGross [11]; and Kaplan, Truex, Wastell, Wood-Harper, and DeGross [13] reflect trends in information systems research methods, including the development of qualitative research methods in this area.

Evaluation Studies of Computing Systems

Lundsgaarde, Fischer, and Steele [35] conducted an exemplary evaluation of a medical information system that combines both qualitative and quantitative methods. The study's primary results are summarized in Fischer, Stratman, and Lundsgaarde [36]. Kaplan and Duchon [16] give a detailed account of how a medical system evaluation actually progressed, including issues pertaining to combining qualitative and quantitative methods. Kaplan [30,31] reports qualitative methods and findings of the study, and Kaplan and Duchon [32] include quantitative results.

Both Kaplan [2] and Kaplan and Shaw [6] cite a number of excellent qualitative studies. Kaplan explains the advantages of using qualitative methods for evaluating computer applications, while Kaplan and Shaw provide a comprehensive critical review of evaluation in medical informatics.


Turkle [87,88] and Zuboff [89], though not concerned with applications of computers in medicine, each superbly illustrate the kind of observations and analysis possible by using qualitative methods. Walsham [90] provides discussion and examples of an interpretive approach to studying information systems.

Glossary

Analytical memo (or memo, for short): Broadly defined, any reflective writing the researcher does about the research, ranging from a marginal comment on a transcript, or a theoretical idea incorporated into field notes, to a full-fledged analytical essay.

Case study: An empirical inquiry that investigates a phenomenon within a specific natural setting and uses multiple sources of evidence.

Coding: Segmenting the data into units and rearranging them into categories that facilitate insight, comparison, and the development of theory.

Context: The cultural, social, and organizational setting in which a study is conducted, together with the history of and influences on the project and the participants in it. Context also includes the relationships between the evaluation sponsor, the researchers, and those who work in or influence the setting.

Contextual analysis or narrative analysis: Analyzing the relationships between elements in a particular text, situation, or sequence of events.

Display: Any systematic visual presentation of data or theory; elaborated as a method of qualitative data analysis by Miles and Huberman [7].

Ethnography: A form of qualitative research that involves the researcher's relatively long-term and intensive involvement in the setting studied, that employs participant observation and/or open-ended interviewing as major strategies, and that attempts to understand both the cultural perspective of the participants and the influence of the physical and social context in which they operate.

Field notes: Detailed, descriptive records of observations.

Field research: See fieldwork.

Fieldwork or field research: The researcher's direct engagement in the setting studied.

Formative evaluation: Evaluation of a developing or ongoing program or activity. The evaluation is aimed at improving the program or activity while it is being developed or implemented. See summative evaluation.

Grounded theory: A theory that is inductively derived from, and tested against, qualitative data during the course of the research; also, an approach to qualitative research that emphasizes this method of theory development [45,46].


Induction: A process by which generalizations are made from many particular instances found in the data.

Iteration: Repetition of a series of steps, as in a repeating cycle of data collection, hypothesis formulation, hypothesis testing by more data collection, additional hypothesis formulation, etc.

Member checking: Getting feedback from participants in the study to check the researchers' interpretation.

Narrative analysis: See contextual analysis.

Open-ended interviewing: A form of interviewing that does not employ a fixed interview schedule, but allows the researcher to follow the respondent's lead by exploring topics in greater depth and also by pursuing unanticipated topics.

Open-ended questions: Interview or survey questions that are to be answered in the respondent's own words, rather than by selecting preformulated responses.

Participant observation: A form of observation in which the researcher participates in the activities going on in a natural setting and interacts with people in that setting, rather than simply recording their behavior as an outside observer.

Qualitative research: A strategy for empirical research that is conducted in natural settings, that uses data in the form of words (generally, though pictures, artifacts, and other non-quantitative data may be used) rather than numbers, that inductively develops categories and hypotheses, and that seeks to understand the perspectives of the participants in the setting studied, the context of that setting, and the events and processes that are taking place there.

Rich data: Data that are detailed, comprehensive, and holistic.

Robustness: Interpretations, results, or data that can withstand a variety of validity threats because they hold up even if some of the underpinnings are removed or prove incorrect.

Summative evaluation: Evaluation that is aimed at assessing the value of a developed program for the purpose of administrative or policy decisions. This evaluation often is done by testing the impact of the program after it has been implemented. See formative evaluation.

Triangulation: The cross-checking of inferences by using multiple methods, sources, or forms of data for drawing conclusions.

Validity: The truth or correctness of one's descriptions, interpretations, or conclusions.

Validity threat: A way in which one's description, interpretation, or conclusion might be invalid; also known as "rival hypothesis" or "alternative explanation."

References

[1] A.F. Dowling, Jr., Do hospital staff interfere with computer system implementation? Health Care Management Review 5 (1980) 23–32.


[2] B. Kaplan, Evaluating informatics applications—some alternative approaches:Theory, social interactionism, and call for methodological pluralism,International Journal of Medical Informatics 64 (2001) 39–56.

[3] B. Kaplan, Evaluating informatics applications—clinical decision supportsystems literature review, International Journal of Medical Informatics 64(2001) 15–37.

[4] R. Kling and W. Scacchi, The web of computing: Computer technology as social organization, in: M.C. Yovitz, editor Advances in Computers, Vol. 21(Academic Press, New York, 1982), pp. 2–90.

[5] J.A. Maxwell, Qualitative Research Design:An Interactive Approach (Sage Pub-lications, Thousand Oaks, CA, 1996).

[6] B. Kaplan and N.T. Shaw, People, organizational, and social issues: Future direc-tions in evaluation research, Methods of Information in Medicine 43 (2004)215–231.

[7] M.B. Miles and A.M. Huberman, Qualitative Data Analysis: An ExpandedSourcebook (Sage Publications, Beverly Hills, CA, 1994).

[8] M. Hammersley and P. Atkinson, Ethnography: Principles in Practice(Routledge, London, 1995).

[9] N. Denzin and Y. Lincoln, Handbook of Qualitative Research (Sage Publica-tions, Thousand Oaks, CA, 2000).

[10] M.Q. Patton, Qualitative Research and Evaluation Methods (Sage Publications,Thousand Oaks, CA, 2001).

[11] A.S. Lee, J. Liebenau, and J.I. DeGross, Information Systems and QualitativeResearch: IFIP Transactions (Chapman and Hall, London, 1997).

[12] J.A. Maxwell, Realism and the role of the researcher in qualitative psychology,in: M. Kiegelmann, editor, The Role of the Researcher in Qualitative Psychol-ogy (Verlag Ingeborg Huber, Tuebingen, Germany, 2002), pp. 11–30.

[13] B. Kaplan, D.P. Truex III, D. Wastell, A.T. Wood-Harper, and J.I. DeGross,Relevant Theory and Informed Practice: Looking Forward from a 20 Year Per-spective on IS Research (Kluwer Academic Publishers, London, 2004).

[14] T.D. Cook and C.S. Reichardt, Qualitative and Quantitative Methods in Evalu-ation Research (Sage Publications, Beverly Hills, 1979).

[15] T.D. Jick, Mixing qualitative and quantitative methods: Triangulation in action,in: J.V. Maanen, editor, Qualitative Methodology (Sage Publications, BeverlyHills, CA, 1983), pp. 135–148.

[16] B. Kaplan and D. Duchon, Combining qualitative and quantitative approaches in information systems research: A case study, Management Information Systems Quarterly 12 (1988) 571–586.

[17] L.H. Kidder and M. Fine, Qualitative and quantitative methods: When stories converge, in: M.M. Mark and R.L. Shotland, editors, Multiple Methods in Program Evaluation (Jossey-Bass, San Francisco, 1987), pp. 57–75.

[18] J.A. Maxwell, P.G. Bashook, and L.J. Sandlow, Combining ethnographic and experimental methods in educational research: A case study, in: D.M. Fetterman and M.A. Pitman, editors, Educational Evaluation: Ethnography in Theory, Practice, and Politics (Sage Publications, Beverly Hills, CA, 1986), pp. 121–143.

[19] J.C. Greene and V.J. Caracelli, Advances in Mixed-Method Evaluation: The Challenges and Benefits of Integrating Diverse Paradigms. New Directions for Evaluation, Vol. 74 (Summer 1997) (Jossey-Bass, San Francisco, 1997).

[20] A. Tashakkori and C. Teddlie, Mixed Methodology: Combining Qualitative and Quantitative Approaches (Sage Publications, Thousand Oaks, CA, 1998).

[21] J.A. Maxwell and D. Loomis, Mixed methods design: An alternative approach, in: A. Tashakkori and C. Teddlie, editors, Handbook of Mixed Methods in Social and Behavioral Research (Sage Publications, Thousand Oaks, CA, 2002), pp. 241–271.

[22] A. Tashakkori and C. Teddlie, Handbook of Mixed Methods in Social and Behavioral Research (Sage Publications, Thousand Oaks, CA, 2002).

[23] H. Wolcott, Writing Up Qualitative Research (Sage Publications, Thousand Oaks, CA, 1990).

[24] E.M. Rogers, Diffusion of Innovations (The Free Press, New York, 2003).

[25] B. Kaplan, Computer Rorschach test: What do you see when you look at a computer? Physicians & Computing 18 (2001) 12–13.

[26] T. Cook, Randomized experiments in education: A critical examination of the reasons the educational evaluation community has offered for not doing them, Educational Evaluation and Policy Analysis 24 (2002) 175–199.

[27] W.R. Shadish, T.D. Cook, and D.T. Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Houghton Mifflin, Boston, 2002).

[28] J.A. Maxwell, Causal explanation, qualitative research, and scientific inquiry in education, Educational Researcher 33 (2004) 3–11.

[29] M.L. Markus and D. Robey, Information technology and organizational change: Causal structure in theory and research, Management Science 34 (1988) 583–598.

[30] B. Kaplan, Impact of a clinical laboratory computer system: Users’ perceptions, in: R. Salamon, B.I. Blum, and J.J. Jørgensen, editors, Medinfo 86: Fifth Congress on Medical Informatics, North-Holland, Amsterdam (1986), pp. 1057–1061.

[31] B. Kaplan, Initial impact of a clinical laboratory computer system: Themes common to expectations and actualities, Journal of Medical Systems 11 (1987) 137–147.

[32] B. Kaplan and D. Duchon, A job orientation model of impact on work seven months post-implementation, in: B. Barber, D. Cao, D. Qin, and G. Wagner, editors, Medinfo 89: Sixth Conference on Medical Informatics, North-Holland, Amsterdam (1989), pp. 1051–1055.

[33] R.K. Yin, Case Study Research: Design and Methods (Sage Publications, Thousand Oaks, CA, 1984).

[34] E.M. Rogers, Diffusion of Innovations (The Free Press, New York, 1983).

[35] H.P. Lundsgaarde, P.J. Fischer, and D.J. Steele, Human Problems in Computerized Medicine (The University of Kansas, Lawrence, KS, 1981).

[36] P.J. Fischer, W.C. Stratman, and H.P. Lundsgaarde, User reaction to PROMIS: Issues related to acceptability of medical innovations, in: J.G. Anderson and S.J. Jay, editors, Use and Impact of Computers in Clinical Medicine (Springer, New York, 1987), pp. 284–301.

[37] R. Hirschheim and H.K. Klein, Four paradigms of information systems development, Communications of the ACM 32 (1989) 1199–1216.

[38] B. Kaplan, User acceptance of medical computer applications: A diffusion approach, in: B.I. Blum, editor, Proceedings of the Symposium on Computing Applications in Medical Care, Silver Spring, IEEE Computer Society Press (1982), pp. 398–402.

[39] B. Kaplan, The computer as Rorschach: Implications for management and user acceptance, in: R.E. Dayhoff, editor, Proceedings Symposium Computing Application Medical Care, Silver Spring, IEEE Computer Society Press (1983), pp. 664–667.

[40] B. Kaplan, The influence of medical values and practices on medical computer applications, in: J.G. Anderson and S.J. Jay, editors, Use and Impact of Computers in Clinical Medicine (Springer, New York, 1987), pp. 39–50.

[41] C. Sicotte, J. Denis, and P. Lehoux, The computer-based patient record: A strategic issue in process innovation, Journal of Medical Systems 22 (1998) 431–443.

[42] C. Sicotte, J. Denis, P. Lehoux, and F. Champagne, The computer-based patient record: Challenges toward timeless and spaceless medical practice, Journal of Medical Systems 22 (1998) 237–256.

[43] B. Kaplan, Models of change and information systems research, in: H.-E. Nissen, H.K. Klein, and R. Hirschheim, editors, Information Systems Research: Contemporary Approaches and Emergent Traditions (North Holland, Amsterdam, 1991), pp. 593–611.

[44] B. Kaplan, D.P. Truex III, D. Wastell, and A.T. Wood-Harper, Young Turks, Old Guardsmen, and the conundrum of the broken mold: A progress report on twenty years of IS research, in: B. Kaplan, D.P. Truex III, D. Wastell, A.T. Wood-Harper, and J.I. DeGross, editors, Information Systems Research: Relevant Theory and Informed Practice (Kluwer Academic Publishers, Boston, Dordrecht, London, 2004), pp. 1–18.

[45] B.G. Glaser and A.L. Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research (Aldine, New York, 1967).

[46] A. Strauss and J.M. Corbin, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (Sage Publications, Thousand Oaks, CA, 1998).

[47] E.G. Guba and Y.S. Lincoln, Fourth Generation Evaluation (Sage Publications, Newbury Park, CA, 1989).

[48] B. Kaplan, National Health Service reforms: Opportunities for medical informatics research, in: K.C. Lun, P. Degoulet, T.E. Piemme, and O. Rienhoff, editors, Medinfo 92: Seventh Conference on Medical Informatics, Amsterdam, Elsevier Science Publishers (1992), pp. 1166–1171.

[49] T.V. Bonoma, Case research in marketing: Opportunities, problems, and a process, Journal of Marketing Research 22 (1985) 199–208.

[50] D.E. Forsythe, B. Buchanan, J. Osheroff, and R. Miller, Expanding the concept of medical information: An observational study of physicians’ information needs, Computers and Biomedical Research 25 (1992) 181–200.

[51] J. Osheroff, D. Forsythe, B. Buchanan, R. Bankowitz, B. Blumenfeld, and R. Miller, Physicians’ information needs: Analysis of clinical questions posed during patient care activity, Annals of Internal Medicine 114 (1991) 576–581.

[52] B. Kaplan, Objectification and negotiation in interpreting clinical images: Implications for computer-based patient records, Artificial Intelligence in Medicine 7 (1995) 439–454.

[53] D. Fafchamps, C.Y. Young, and P.C. Tang, Modelling work practices: Input to the design of a physician’s workstation, in: P.D. Clayton, editor, Proceedings Symposium Computing Application Medical Care, New York, McGraw Hill (1991), pp. 788–792.

[54] R.A. Krueger and M.A. Casey, Focus Groups: A Practical Guide for Applied Research (Sage Publications, Thousand Oaks, CA, 2000).

[55] J.M. Nyce and W.I. Graves, The construction of knowledge in neurology: Implications for hypermedia system development, Artificial Intelligence in Medicine 2 (1990) 315–322.

[56] B. Kaplan, Development and acceptance of medical information systems: An historical overview, Journal of Health and Human Resources Administration 11 (1988) 9–29.

[57] M.H. Agar, The Professional Stranger: An Informal Introduction to Ethnography (Academic Press, New York, 1980).

[58] H. Wolcott, Writing Up Qualitative Research (Sage Publications, Thousand Oaks, CA, 2001).

[59] E. Mishler, Research Interviewing: Context and Narrative (Harvard University Press, Cambridge, MA, 1986).

[60] J.P. Gee, S. Michaels, and M.C. O’Connor, Discourse analysis, in: M.D. LeCompte, W.L. Millroy, and J. Preissle, editors, The Handbook of Qualitative Research in Education (Academic Press, San Diego, 1992), pp. 227–291.

[61] C.K. Riessman, Narrative Analysis (Sage Publications, Thousand Oaks, CA, 1993).

[62] G. Psathas, Conversation Analysis: The Study of Talk-in-Interaction (Sage Publications, Thousand Oaks, CA, 1995).

[63] I.E. Seidman, Interviewing as Qualitative Research: A Guide for Researchers in Education and the Social Sciences (Teachers College Press, New York, 1998).

[64] F. Erickson, Ethnographic microanalysis of interaction, in: M.D. LeCompte, W.L. Millroy, and J. Preissle, editors, The Handbook of Qualitative Research in Education (Academic Press, San Diego, 1992), pp. 201–225.

[65] A. Coffey and P. Atkinson, Making Sense of Qualitative Data (Sage Publications, Thousand Oaks, CA, 1996).

[66] E. Weitzman and M. Miles, Computer Programs for Qualitative Data Analysis (Sage Publications, Thousand Oaks, CA, 1995).

[67] E. Weitzman, Software and qualitative research, in: N. Denzin and Y. Lincoln, editors, Handbook of Qualitative Research (Sage Publications, Thousand Oaks, CA, 2000), pp. 803–820.

[68] P. Eckert, Jocks and Burnouts: Social Categories and Identity in the High School (Teachers College Press, New York, 1989).

[69] J. Kirk and M.L. Miller, Reliability and Validity in Qualitative Research (Sage Publications, Thousand Oaks, CA, 1986).

[70] M.A. Eisenhart and K.R. Howe, Validity in educational research, in: M.D. LeCompte, W.L. Millroy, and J. Preissle, editors, The Handbook of Qualitative Research in Education (Academic Press, San Diego, 1992), pp. 643–680.

[71] J.A. Maxwell, Understanding and validity in qualitative research, Harvard Educational Review 62 (1992) 279–300.

[72] J.A. Maxwell, Using qualitative methods for causal explanation, Field Methods 16 (2004) 243–264.

[73] H.S. Becker, Field work evidence, in: H.S. Becker, editor, Sociological Work: Method and Substance (Aldine, Chicago, 1970), pp. 39–62.

[74] E. Bredo and W. Feinberg, Part two: The interpretive approach to social and educational research, in: E. Bredo and W. Feinberg, editors, Knowledge and Values in Social and Educational Research (Temple University Press, Philadelphia, 1982), pp. 115–128.

[75] M.H. Agar, Speaking of Ethnography (Sage Publications, Beverly Hills, CA, 1986).

[76] I. Benbasat, D.K. Goldstein, and M. Mead, The case research strategy in studies of information systems, MIS Quarterly 11 (1987) 369–386.

[77] M.G. Trend, On the reconciliation of qualitative and quantitative analyses: A case study, in: T.D. Cook and C.S. Reichardt, editors, Qualitative and Quantitative Methods in Evaluation Research (Sage Publications, Beverly Hills, CA, 1979), pp. 68–86.

[78] B. Kaplan, L. Kvasny, S. Sawyer, and E.M. Trauth, New words and old books: Challenging conventional discourses about domain and theory in information systems research, in: M.D. Myers, E.A. Whitley, E. Wynn, and J.I. DeGross, editors, Global and Organizational Discourse About Information Technology (Kluwer Academic Publishers, London, 2002), pp. 539–545.

[79] B. Kaplan, P.F. Brennan, A.F. Dowling, C.P. Friedman, and V. Peel, Towards an informatics research agenda: Key people and organizational issues, Journal of the American Medical Informatics Association 8 (2001) 234–241.

[80] B. Kaplan, A model comprehensive evaluation plan for complex information systems: Clinical imaging systems as an example, in: A. Brown and D. Remenyi, editors, Proceedings 2nd European Conference on Information Technology Investment Evaluation, Henley on Thames, Birmingham, England, Operational Research Society (1995), pp. 14–181.

[81] B. Kaplan, Organizational evaluation of medical information systems, in: C.P. Friedman and J.C. Wyatt, editors, Evaluation Methods in Medical Informatics (Springer, New York, 1997), pp. 255–280.

[82] B. Kaplan, Addressing organizational issues into the evaluation of medical systems, Journal of the American Medical Informatics Association 4 (1997) 94–101.

[83] R. Kling, Social analyses of computing: Theoretical perspectives in recent empirical research, ACM Computing Surveys 12 (1980) 61–110.

[84] K. Lyytinen, Different perspectives on information systems: Problems and solutions, ACM Computing Surveys 19 (1987) 5–46.

[85] E. Mumford, G. Fitzgerald, R. Hirschheim, and A.T. Wood-Harper, Research Methods in Information Systems (North Holland, Amsterdam, 1985).

[86] H.-E. Nissen, H.K. Klein, and R. Hirschheim, Information Systems Research: Contemporary Approaches and Emergent Traditions (North Holland, Amsterdam, 1991).

[87] S. Turkle, The Second Self: Computers and the Human Spirit (Simon & Schuster, New York, 1984).

[88] S. Turkle, Life on the Screen: Identity in the Age of the Internet (Simon & Schuster, New York, 1995).

[89] S. Zuboff, In the Age of the Smart Machine: The Future of Work and Power (Basic Books, New York, 1988).

[90] G. Walsham, Interpreting Information Systems in Organizations (Wiley, Chichester, 1993).
