TRANSCRIPT
Methods for SE Research
Case Study
Tomi Männistö
This material is licensed under the Creative Commons BY-NC-SA License
Overview of the lecture
■ Case study
  ■ What's it about
  ■ Unit of analysis
  ■ Case selection
■ Qualitative data collection
  ■ Sources of evidence
  ■ Principles of data collection
  ■ Interviews
■ Pointers to get started with case study
Case Study (1) (Yin 2003, 2014)
■ A case study is an empirical inquiry that
  ■ investigates a contemporary phenomenon within its real-life context, especially when
  ■ the boundaries between phenomenon and context are not clearly defined
■ You use a case study to deliberately cover contextual conditions, believing that they might be highly relevant to the phenomenon under study
■ Phenomenon and context are not always distinguishable in real-life situations
(As a reference, consider: Gray DE. Doing Research in the Real World, 2nd ed., Sage, 2009.)
Five misunderstandings or oversimplifications about the nature of case studies (Flyvbjerg 2006)
Misunderstanding 1: General, theoretical (context-independent) knowledge is more valuable than concrete, practical (context-dependent) knowledge.
Misunderstanding 2: One cannot generalize on the basis of an individual case; therefore, the case study cannot contribute to scientific development.
Misunderstanding 3: The case study is most useful for generating hypotheses, that is, in the first stage of a total research process, whereas other methods are more suitable for hypothesis testing and theory building.
Misunderstanding 4: The case study contains a bias toward verification, that is, a tendency to confirm the researcher's preconceived notions.
Misunderstanding 5: It is often difficult to summarize and develop general propositions and theories on the basis of specific case studies.
Possible roles for qualitative data collection
■ Major approach for the entire study
  ■ E.g., understanding product family practices in companies
■ For gaining understanding of the problem
  ■ E.g., practical problems in acceptance testing
■ A means for validation
  ■ E.g., collecting data on what actually happened, feelings, improvement opportunities etc. after taking into use / piloting a requirements management tool
[Figure: overview of a research process. A problem in its context (industry, ...) and scientific knowledge (previous work) feed understanding and conceptualisation into research questions; the study, drawing on a study design, a research approach, and scientific knowledge (related work, methodological literature), produces results; the evaluation assesses the results for novelty and relevance against existing knowledge and for applicability and utility in the context, yielding new knowledge and a publication.]
Important components of case study (Yin 2003, 2014)
(1) research questions (study questions) ≠ data collection questions
(2) propositions, if any
(3) unit(s) of analysis
  ■ what the "case" is
(4) logic linking data with propositions
(5) criteria for interpreting findings
Examples of units of analysis (adapted from Patton 2002; not mutually exclusive)
■ People focused
  ■ Individuals
  ■ Teams
■ Structure focused
  ■ Projects
  ■ Programmes
  ■ Organisations
  ■ Units of organisations
■ Geography focused
  ■ Countries
  ■ Market areas
■ Activity focused
  ■ Critical incidents
  ■ Time periods
  ■ Crises
  ■ Quality assurance violations
  ■ Releases
■ Time based
  ■ Particular days, months
  ■ Iterations
  ■ Last quarters
  ■ Full moons
Examples of research topics / questions (pair programming)
■ What differences in code quality exist between students who program individually and those using pair programming
■ How does pair programming fit for developing safety-critical software
■ How widely do Finnish software companies use pair programming in their product development
■ What kind of backlog management method and tool would help optimise the amount of pair programming within a team
■ How do developers feel about using pair programming
■ How does pair programming contribute to the team culture
■ How to take pair programming into use in industrial product development
Qualitative data collection

Data collection: Unit of analysis
■ Examples
  ■ Finnish software product companies
  ■ Companies utilising agile development
  ■ Agile product development projects
  ■ Agile development (practices in) teams
  ■ Developer (experience on pair programming)
■ May be holistic or embedded (i.e., a single unit of analysis or multiple units of analysis, respectively)
  ■ For example, you may pose the main research question at the level of companies, but investigate some projects in detail
The deductive logic of research (Creswell 2009)
(1) Researcher tests or verifies a theory
(2) Researcher tests hypotheses or research questions from the theory
(3) Researcher defines and operationalises variables derived from the theory
(4) Researcher measures or observes variables using an instrument to obtain scores
The inductive logic of research (Creswell 2009)
(1) Researcher gathers information
(2) Researcher asks open-ended questions of participants or records field notes
(3) Researcher analyses data to form themes and categories
(4) Researcher looks for broad patterns, generalisations, or theories from themes or categories
(5) Researcher poses generalisations or theories from past experiences and literature
Data collection: Purposeful selection* of cases (Patton 2002)
■ Selection / sampling logic captures well the difference between quantitative and qualitative methods
■ Quantitative
  ■ Larger samples, random selection
  ■ Logic and power in selecting a statistically representative (probability) sample lies in its purpose: generalisation
■ Qualitative
  ■ Focus on relatively small samples, even 1, selected purposefully
  ■ Logic and power of purposeful selection derives from the emphasis on in-depth understanding
* Patton uses the term sampling
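The two selection logics can be contrasted in a short sketch. The candidate pool and its information-richness scores are hypothetical, purely for illustration:

```python
import random

random.seed(42)

# Hypothetical pool of candidate cases, each with an (assumed) information-
# richness score known from pre-study screening. Illustration only.
cases = {f"company_{i}": random.random() for i in range(50)}

# Quantitative logic: a random sample, whose power lies in statistical
# representativeness and generalisation to the population.
random_sample = random.sample(sorted(cases), k=10)

# Qualitative logic: purposeful selection of a few information-rich cases
# (here: the extreme cases with the highest richness scores).
purposeful = sorted(cases, key=cases.get, reverse=True)[:3]

print(random_sample)
print(purposeful)
```

The purposeful list is tiny but chosen for what it can teach; the random sample is larger but indifferent to information content.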
Sample size in qualitative inquiry (Patton 2002)
■ There are no rules for sample size in qualitative inquiry
■ Sample size depends on what you want to know, what will be useful, what will have credibility, and what can be done with available time and resources
■ Does the sampling strategy support the study's purpose?
■ The validity, meaningfulness and insight gained have more to do with the information richness of the cases selected and the observational / analytical characteristics of the researcher than with the sample size
■ Theoretical saturation
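Theoretical saturation, the usual stopping criterion, can be sketched as a toy simulation. The theme space, the number of themes surfaced per interview, and the stopping streak are all assumed values, for illustration only:

```python
import random

random.seed(1)

# Toy model of theoretical saturation: each interview surfaces a few themes
# from a finite (hypothetical) theme space; stop interviewing once several
# consecutive interviews add nothing new.
themes = [f"theme_{i}" for i in range(12)]
seen, streak, interviews = set(), 0, 0

while streak < 3:  # stop after 3 interviews in a row with no new themes
    interviews += 1
    surfaced = set(random.sample(themes, k=4))  # themes raised this interview
    if surfaced - seen:
        seen |= surfaced
        streak = 0
    else:
        streak += 1

print(interviews, len(seen))
```

The point of the sketch is the stopping rule itself: sample size is not fixed in advance but emerges from when the data stops yielding new categories.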
Excerpt from Flyvbjerg, B., 2006. Five Misunderstandings About Case-Study Research. Qualitative Inquiry, 12(2), pp. 219–245:

"… people working with organic solvents suffered brain damage. Instead of choosing a representative sample among all those enterprises in the clinic's area that used organic solvents, the clinic strategically located a single workplace where all safety regulations on cleanliness, air quality, and the like had been fulfilled. This model enterprise became a critical case: If brain damage related to organic solvents could be found at this particular facility, then it was likely that the same problem would exist at other enterprises that were less careful with safety regulations for organic solvents. Via this type of strategic choice, one can save both time and money in researching a given problem. Another example of critical case selection is the above-mentioned strategic selection of lead and feather for the test of whether different objects fall with equal velocity. The selection of materials provided the possibility to formulate a generalization characteristic of critical cases, a generalization of the sort, 'If it is valid for this case, it is valid for all (or many) cases.' In its negative form, the generalization would be, 'If it is not valid for this case, then it is not valid for any (or only few) cases.'

How does one identify critical cases? This question is more difficult to answer than the question of what constitutes a critical case. …"
Table 1. Strategies for the Selection of Samples and Cases (Flyvbjerg 2006)

A. Random selection: To avoid systematic biases in the sample. The sample's size is decisive for generalization.
  1. Random sample: To achieve a representative sample that allows for generalization for the entire population.
  2. Stratified sample: To generalize for specially selected subgroups within the population.
B. Information-oriented selection: To maximize the utility of information from small samples and single cases. Cases are selected on the basis of expectations about their information content.
  1. Extreme/deviant cases: To obtain information on unusual cases, which can be especially problematic or especially good in a more closely defined sense.
  2. Maximum variation cases: To obtain information about the significance of various circumstances for case process and outcome (e.g., three to four cases that are very different on one dimension: size, form of organization, location, budget).
  3. Critical cases: To achieve information that permits logical deductions of the type, "If this is (not) valid for this case, then it applies to all (no) cases."
  4. Paradigmatic cases: To develop a metaphor or establish a school for the domain that the case concerns.
Examples of information-rich selection
■ Extreme cases
  ■ E.g., badly failed agile projects (but try to understand why the case was extreme), or an agile project with tough quality requirements (e.g., safety or performance)
■ Intensive
  ■ E.g., companies with many agile projects
■ Max variation / stratified
  ■ E.g., a failed, an OK and a successful agile project
■ Typical case
  ■ (However, a typical or average case may not be the richest in information)
■ Critical case
  ■ E.g., product development project for the lead product
■ Practically relevant
  ■ E.g., the best performing (best pilot or example); forerunner (first one for something new)
■ Opportunistic
  ■ E.g., next new project, a friend's project
■ Convenience
  ■ Easy to get data from (e.g., partner companies in a research project; related to your work)
success = talent + luck
Regression to the mean
(Kahneman D. Thinking fast and slow, 2011)
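The slide's formula, success = talent + luck, is what drives regression to the mean: cases selected for extreme observed success were on average also lucky, and luck does not repeat. A minimal simulation (Gaussian talent and luck are assumed distributions, for illustration only):

```python
import random

random.seed(0)

# Model from the slide: observed success = talent + luck.
# Talent is stable per person; luck is redrawn on every attempt.
talents = [random.gauss(0, 1) for _ in range(10_000)]

def attempt(talent):
    return talent + random.gauss(0, 1)  # success = talent + luck

round1 = [attempt(t) for t in talents]
round2 = [attempt(t) for t in talents]

# Select the top 100 performers of round 1 ...
top = sorted(range(len(talents)), key=lambda i: round1[i], reverse=True)[:100]

# ... and compare their scores across rounds: the round-2 mean falls back
# toward the population mean, because round-1 luck is not repeated.
mean1 = sum(round1[i] for i in top) / len(top)
mean2 = sum(round2[i] for i in top) / len(top)
print(mean1, mean2)
```

This is also a caution for extreme-case selection: part of what made a case extreme may be luck that will not be there next time.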
Sources of qualitative data
■ Six sources of evidence (Yin 2003)
  ■ Documentation
  ■ Archival records
  ■ Interviews
  ■ Direct observation
  ■ Participant observation
  ■ Physical artefacts
■ Three kinds of qualitative data (Patton 2002)
  ■ Interviews
  ■ Observation, participant observation
  ■ Documents, archival research
Three principles of data collection (Yin 2003)
(1) Use multiple sources of evidence
  • Triangulation
    • of data sources (data triangulation)
    • among investigators (investigator triangulation)
    • of perspectives on the same data set (theory triangulation)
    • of methods (methodological triangulation)
(2) Create a case study database
  • Case study notes, documentation, ...
(3) Maintain a chain of evidence
  • The principle is to allow an external observer (reader) to follow the derivation of any evidence, from the initial research questions to the ultimate case study conclusions
Data collection: Interviews
■ Planning
  ■ Link the data collection to research questions
  ■ Goals for the interview
  ■ Maintaining focus on the topic, without forgetting sensitivity
  ■ Formulating the questions
    ■ Neutrality
    ■ No fishing
  ■ Testing / piloting the questions
■ Doing it
  ■ Recording (audio, video, photos)
  ■ Note taking (roles: interviewer, note taker, recorder, ...)
Data collection: Interviews
■ Interviewees
  ■ Individual, group, focus group
■ Format of questions
  ■ Survey, structured
  ■ Semi-structured, thematic
  ■ Open
■ Order of questions
■ Answer types
  ■ Selection from predefined choices, quantity, frequency
  ■ Answer in own words, opinion, explanation, ...
  ■ Reactions to a question or theme: pause, hesitation, laughter, defensive excuse, ...
■ Open discussion
Data analysis
■ Transcribing, listening to recordings
  ■ Do not rely on your memory
  ■ Use all data sources
■ Iterative
  ■ Start as soon as possible
■ Systematic
  ■ E.g., use of grounded theory analysis
■ Tools
  ■ Pens, markers (different colours), post-its
  ■ ATLAS.ti
  ■ NVivo
Additional material to get started
■ Easterbrook S, Singer J, Storey M-A and Damian D. Selecting empirical methods for Software Engineering research. In Shull F, Singer J and Sjøberg DIK (eds.) Guide to Advanced Empirical Software Engineering, Springer, 2008.
■ Gray DE. Doing Research in the Real World. 2nd ed., Sage Publications, 2009.
■ Hirsjärvi S & Hurme H. Tutkimushaastattelu – Teemahaastattelun teoria ja käytäntö [The research interview – Theory and practice of the thematic interview]. Yliopistopaino, 2001.
■ Runeson P & Höst M. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering, 14(2), 2009:131–164.
■ References from http://www.uk.sagepub.com/gray/journal.htm in relation to the book by Gray (2009).
■ Tellis W. Introduction to Case Study. The Qualitative Report, 3(2), July 1997 (http://www.nova.edu/ssss/QR/QR3-2/tellis1.html).
■ Tellis W. Application of a Case Study Methodology. The Qualitative Report, 3(3), September 1997 (http://www.nova.edu/ssss/QR/QR3-3/tellis2.html).
References
■ Creswell JW. Research Design: Qualitative, Quantitative and Mixed Methods Approaches. 3rd ed., Sage Publications, 2009.
■ Flyvbjerg B. Five Misunderstandings About Case-Study Research. Qualitative Inquiry, 12(2), 2006:219–245.
■ Kahneman D. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
■ Patton MQ. Qualitative Evaluation and Research Methods. 3rd ed., Sage Publications, 2002.
■ Ritchie J and Lewis J. Qualitative Research Practice. Sage Publications, 2003.
■ Yin RK. Case Study Research: Design and Methods. 5th ed., Sage Publications, 2014. (3rd ed., 2003)