

STUDY GUIDE & GLOSSARY OF TERMS IN RESEARCH

NURSING RESEARCH

The Power of Inquiry, Examination, Exploration and Investigation

Braham & Williams Generated 2015


Area of concern where there is a gap in the knowledge base needed for practice. Research problem

Concise, clear statement of the specific goal or aim of the study that is generated from the research problem Purpose statement

A set of highly abstract, related constructs that broadly explains the phenomena of interest, expresses assumptions, and reflects a philosophical stance Conceptual framework

Concise interrogative statements that are worded in the present tense and include one or more variables Research question

Formal statement of the expected relationship(s) between two or more variables in a specified population Hypothesis

An intervention or activity that is manipulated by the researcher to create an effect on the dependent variable also referred to as the “cause” Independent variable

The response, behavior, effect or outcome that is predicted and measured in research Dependent variable

A group of individuals who meet the sampling criteria and to which the study findings will be generalized Target population

Description of how variables or concepts will be measured or manipulated in a study Operational definition

Statements that are taken for granted or are considered true, even though they have not been scientifically tested Assumption


Overall plan for collecting and analyzing data Research design

Randomizations, comparison/control groups, manipulation of the treatment are present Experimental design

Designs with limited control that were developed to provide alternative means for examining causality (cause & effect) in situations not conducive to experimental controls Quasi-experimental design

The strength of a design to produce accurate results; threats to validity are classified as internal and external Validity (study)

The extent to which the effects (response) detected in the study are a true reflection of reality, rather than the result of extraneous variables (influences outside of the study) Internal validity

Concerned with the extent to which study findings can be generalized beyond the sample used in the study External validity

Changes in subjects' responses that result from being tested repeatedly with the same test, owing to familiarity with previously missed items and the experience gained from earlier testing Testing effect

Number of subjects who drop out of a study before its completion Mortality (attrition)

The sample growing older, wiser, stronger, hungrier, or more tired during the study, and the influence of these changes on the results Maturation


Change in instruments between the pretest and posttest rather than an actual effect of the treatment Instrumentation (threat to validity)

Movement or regression of extreme scores toward the mean in studies with a pretest/posttest design Statistical regression

An event unrelated to the study that occurs during the study History

Individual differences that exist in the subjects before they are chosen to participate in a study Selection bias

The capacity of the study to detect differences; minimum acceptable power is 80% Power

Represents the consistency of the measure obtained Reliability

Focused on comparing two versions of the same instrument equivalence

Addresses the correlation of various items within the instrument or internal consistency; determined by split-half reliability homogeneity

The degree to which an instrument measures what it is supposed to measure validity (instrument)

Tests the relationship between two or more concepts, making the study more specific Theoretical framework

Conducted to reduce, organize, and give meaning to data data analysis


Precise, systematic gathering of information relevant to the research purpose on specific objectives, questions, or hypotheses of a study. data collection

Subjects may behave in a particular manner because they are aware of their participation in a study Hawthorne Effect

Subjects may alter their behavior because the treatment is new, novelty effect

Performance of the subjects may be affected by characteristics of the researcher experimenter effect

Method of decreasing the risk of a type II error (accepting a false null hypothesis); minimum acceptable power is 0.80 Power analysis

Alpha: an index of how probable it is that the findings are reliable; an alpha of 0.05 means that 5 times out of 100 the researcher risks making a type I error (rejecting a true null hypothesis) Level of significance

Concerned with the magnitude of the relationship between the variables; 0.50 is considered medium Effect size
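To see how level of significance, power, and effect size fit together, the sketch below runs a power analysis in Python. It is illustrative only and assumes the statsmodels package; the medium effect size (0.50), alpha of 0.05, and power of 0.80 are the conventional values from the entries above, not figures from any particular study.

```python
# Hypothetical power-analysis sketch (assumes the statsmodels package is installed)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the sample size per group needed to detect a medium effect
# (effect size 0.50) with alpha = 0.05 and power = 0.80.
n_per_group = analysis.solve_power(effect_size=0.50, alpha=0.05, power=0.80)
print(f"Subjects needed per group: {n_per_group:.0f}")  # roughly 64 per group
```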

Random sampling techniques probability sampling

Occurs when not every member of the population has an opportunity to be selected such as in convenience sampling Nonprobability sampling

Must be identified as requirements for an element or subject to be included in a sample Inclusion criteria

Criteria that disqualify an element or subject from being included in a sample Exclusion criteria

Device that is used to collect data in a structured manner, e.g., self-report, observation, biophysiologic measurement What is instrumentation?

Exists in all studies and can affect the measurement of the study variables and the relationship among these variables extraneous variable

Any influence or action in a study that distorts the findings or slants them away from the true or expected bias

Data assigned to categories of an attribute that can be ranked; categories must be exclusive and exhaustive, but it cannot be demonstrated that the intervals between categories are equal; analyzed with the Spearman rank-order correlation coefficient Ordinal scale measurement
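As a quick illustration of the Spearman rank-order correlation mentioned above, here is a small Python sketch. It assumes SciPy is installed, and the ranked pain and mobility scores are invented purely for demonstration.

```python
# Illustrative only: ordinal (ranked) data analyzed with Spearman's rho (assumes scipy)
from scipy.stats import spearmanr

# Hypothetical ordinal ratings for ten patients (1 = lowest rank, 4 = highest)
pain_rank     = [1, 2, 2, 3, 4, 4, 3, 2, 1, 4]
mobility_rank = [4, 3, 3, 2, 1, 1, 2, 3, 4, 1]

rho, p_value = spearmanr(pain_rank, mobility_rank)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # a perfect negative rank correlation in this toy data
```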

Principles of respect for persons, beneficence, and justice relevant to the conduct of research ethical principles

Qualities, properties, or characteristics of persons, things, or situations that change or vary and are manipulated, measured, or controlled in research Study variables

Portion of the population that represents the entire population sample

Examines the extent to which the method of measurement includes all the major elements relevant to the construct being measured content related validity

Test for ordinal data rank sum test

Used to compare a control group with an experimental group to determine whether the difference between them is statistically significant t-test

Qualitative Research Definition The investigation of phenomena, typically in an in-depth and holistic fashion, through the collection of rich narrative (feelings, thoughts, perceptions, points of view) materials using a flexible research design.

Characteristics of Qualitative Research Design Requires the researcher to become the research instrument (participant observer); Requires ongoing analysis of the data to formulate subsequent strategies (emergent

design) and to determine when field work (data collection) is done.

Qualitative Research Traditions Ethnography, Phenomenology, Grounded Theory, Narrative Research, Participatory Action Research, and Mixing Worlds - Mixed Methods Research

Ethnography Qualitative research methodology for investigating cultures - to 'learn from'

Underlying assumption of ethnography Every human group eventually evolves a culture that guides the member's view of the

world

Culture with Ethnography Culture is not tangible, it is inferred from the words, actions, and products of

members of a group

Emic vs etic Emic= insider perspective Etic=outsider perspective

Goal of Ethnography To uncover tacit (unclear, unwritten, unspoken) knowledge


Phenomenological Research Seeks to understand people's everyday life experiences and perceptions. Why? Because truth about reality is found within these experiences Very small participant group (~10) and in-depth conversations Useful for poorly defined or understood human experiences

What does phenomenological research ask? What is the essence (meaning) of this experience or phenomenon and what does it

mean? What is essence in phenomenological research? Essence is what makes a phenomenon what it is, the essential aspects

Grounded Theory Inductive theory-building research. Seeks to understand the 'why' of people's actions by asking the people themselves, then 'asking' the data:
• First: What is the primary problem?
• Second: What behaviors do people use to resolve the problem?

What does grounded mean? 'Grounded' means the problem must be discovered from within the data Simultaneous sampling, analysis, and data collection in an iterative process

Narrative research Researcher studies the lives of individuals through the use of 'story' Underlying premise is that stories are the mode for communicating and making

sense of life events Findings are usually a ’re-telling' of the overarching story

What do Narrative analysts ask? "WHY was the story told this way?"

Participatory Action Research Collaborative efforts in all aspects of the research design and process Goal is empowerment, move to action overtly stated intention (bias) to produce positive change within the community

where study occurs Increasing popularity

Mixed Method Research


The Benefits
• Able to get the best of both worlds
• The two major traditions dovetail/progress well
• Improves confidence in validity
• Some questions are best answered by a combination of methods

Common Applications of Mixed Methods
Developing instruments (measurement tools)
o Usually not within one study, but a series of studies will use mixed methods
Explication/explanation
o Quantitative can identify relationships
o Qualitative can fill in the 'why'
Intervention Development
o What is likely to be effective?
o What might the problems be?

Population The entire set of individuals (or 'cases' or 'elements') in which a researcher is

interested. (e.g., study of American nurses with doctoral degrees) Whole pie

Sampling The process of selecting a portion of the population (subset) to represent the entire

population Representative sample is one whose main characteristics most closely match those of

the population A slice of the pie

Sampling Bias Distortions that arise when a sample is not representative of the population from which it was drawn. Human attributes are not homogeneous, so we need representatives of all the variety that exists. There is a risk of sampling bias whenever the sample is not characteristic of the whole population, so always think about who did NOT participate in any given study.

Inclusion/exclusion Criteria


The criteria used by a researcher to designate the specific attributes of the target population, and by which participants are selected (or not selected) for participation in a study.

Exclusion example: the researcher wants only English-speaking participants. Inclusion: what is necessary to qualify for the study?

NONPROBABILITY SAMPLING Nonrandom - less likely to produce a representative sample. Methods:
• Convenience
• Snowball
• Quota
• Consecutive
• Purposive

Convenience Sampling Selection of the most readily available people as participants in a study Very common method High risk of bias/weakest form Why? The sample may be atypical Example: Nurse distributing questionnaires about vitamin use to first 100 contacts

Quota Sampling The nonrandom selection of participants in which the researcher pre-specifies characteristics of the sample's subpopulations (or strata) to increase its representativeness

Convenience sampling methods are used, ensuring an appropriate number of cases from each stratum

An improvement over strict convenience sampling, but still prone to bias

Consecutive sampling 'Rolling enrollment' of ALL people over a period of time (the longer period the

better) Reduces risk of bias, but is not always practical or relevant to the study question

Purposive Sampling Can be used for both qualitative and quantitative designs, but preferably qualitative. Hand-picking the sample: hand-pick professionals who have knowledge of what you are sampling. Also called judgment sampling


Ask questions such as who will be most knowledgeable, most typical? Limited use in quantitative research Valuable in qualitative research

Snowball Sampling Has greater affinity for qualitative designs The selection of participants by means of referrals from earlier participants; also

referred to as network sampling Helps identify difficult to find participants Has limitations... as do the others

Theoretical Sampling Qualitative sampling method Members are selected based on emerging findings/theory Aim to discover untapped categories and properties, and to ensure adequate

representation and full variation of important themes Non probability

PROBABILITY SAMPLING Random selection of participants/elements. Different from random assignment into groups. Each element has an equal and independent chance of being selected into the study. Methods:
o Simple random sampling
o Stratified
o Cluster
o Systematic

Random Sampling The selection of a sample such that each member of a population has an equal probability of being included. Free from researcher bias. Rarely feasible, especially when you have a large population

Stratified Random Sampling The random selection of study participants from two or more strata of the population

independently Improves representativeness Sometimes not possible, when the stratifying information is unavailable


Similar to quota sampling, but with random selection

Cluster Sampling Successive random sampling of units Large groupings ("clusters") are randomly selected first (e.g., nursing schools), then

smaller elements are randomly selected (e.g., nursing students) Practical for large or widely dispersed populations Used by census surveyors

Systematic Random Sampling The selection of study participants such that every kth (e.g., every tenth) person (or element) is selected. Essentially the same as simple random sampling
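The contrast between simple random and systematic sampling can be made concrete with a few lines of Python using only the standard library; the sampling frame of 100 patient IDs below is hypothetical.

```python
# Toy sampling sketch (standard library only); the sampling frame is hypothetical
import random

sampling_frame = [f"patient_{i:03d}" for i in range(1, 101)]  # 100 eligible subjects

# Simple random sampling: every element has an equal, independent chance of selection
simple_random = random.sample(sampling_frame, k=10)

# Systematic sampling: pick a random start, then take every kth (here every 10th) element
k = len(sampling_frame) // 10
start = random.randrange(k)
systematic = sampling_frame[start::k]

print(simple_random)
print(systematic)
```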

Sample Size & Power Analysis Sampling error = the difference between population values (average responses) and sample values (average responses); what could have been vs. what actually was. The larger the sample, the smaller the sampling error (because the sample average gets closer to the population average); increasing the sample size overcomes this error, and the larger the sample, the more representative it is likely to be. POWER ANALYSIS: a procedure for estimating how large a sample should be; the smaller the predicted differences between groups, the greater the sample size will need to be. Done pre-sampling. The sampling plan is an important area to critique and evaluate

Data Collection Techniques

Qualitative

Interviews
o Unstructured
o Semi-structured

Focus groups


Life histories/diaries Records or other forms of documents Observation

Quantitative

Use of an instrument or scale Oral interview Written questionnaire Observation Bio-physiologic measurement

Primary question types Open-ended Question

o Useful for capturing what the participant prioritizes
o Takes more time
o Participants may be unwilling to thoroughly address the question
o Possibility of collecting rich responses

Close-ended Question
o More difficult to develop
o Require participants to 'box themselves in'
o Easier to analyze and compare responses
o More efficient
o May lack depth
o May miss important responses (can't ask about what you don't know about)
o Have to pick from a certain group of responses; may not accurately capture how they feel, may tweak the validity of the test

Scales (Quantitative)

Scales provide a way to capture gradations in the strength or intensity of individual characteristics in numerical form

Voted "Most likely to be seen": Likert Scale Watch for RESPONSE SET BIASES:

o Social desirability response set bias - answering in a manner that is consistent with the 'norm' (answering the way the researcher might want me to answer)
o Extreme response set bias - tendency to consistently mark extremes of response
o Acquiescence response set bias - the 'yea-sayers' and the 'nay-sayers' = agreeing or disagreeing with the statements regardless of the content

Can seek to reduce the risk of these biases: sensitively worded questions, facilitating an open atmosphere, guaranteeing confidentiality, alternating + and - worded statements


Issue of validity Is the topic at hand likely to tempt respondents to present themselves in the best

light? Are they being asked to reveal potentially undesirable traits? Should we trust that they actually feel/act the way they say they do?

Observation Sometimes fits better than self-report, depending on the question and population

(e.g., behavior of autistic children) Again, biases and other issues come into play:

o Observer bias leading to faulty inference or description (intra- and inter-rater reliability)
o Validity questions - am I just seeing what I want to see or what I thought beforehand I would see?
o Hasty decisions may result in incorrect classifications or ratings
o Observer 'drift'

When possible, these issues are mitigated through thorough observer training, breaks, re-training, well-planned timing of observations

WHAT IS MEASUREMENT? Measurement involves rules for assigning numbers to qualities of objects in order to

designate the quantity of the attribute We're familiar with the 'rules' for measuring temp, weight, etc. Rules are also developed for measuring variables/attributes for nursing studies

Advantage of Measurement Enhances objectivity = that which can be independently verified (2 nurses weighing

the same baby using the same scale) Fosters precision ("rather tall" versus 6'2") and allows for making fine distinctions

between people Measurement is a language of communication that carries no connotations

Levels of measurement: Nominal-scale The lowest level of measurement, which involves the assignment of characteristics into categories
o Females - category 1
o Males - category 2

The number assigned to the category has no inherent meaning (the numbers are interchangeable)

Useful for collecting frequencies


Levels of Measurement: Ordinal-scale A level of measurement that ranks, in 'order' (1, 2, 3, 4), the intensity or quality of a variable along some dimension.
1 = is completely dependent
2 = needs another person's assistance
3 = needs mechanical assistance
4 = is completely independent

Does not define how much greater one rank is than another (no relative value given)

Level of Measurement: Interval-scale A level of measurement in which an attribute of a variable is rank ordered on a scale that has equal distances between points on the scale. Exam scores: 100, 90, 80

Likert scales and most other questionnaires fall here The differences between scores are meaningful Amenable to sophisticated statistics

Ratio-scale measurement A level of measurement in which there are equal distances between score units, and that has a true, meaningful zero point.
o Weight - 200 lbs is twice as much as 100 lbs.
o Visual analog scale - 'No pain' is a true zero

Higher levels of measurement are preferred because more powerful statistics can be used to analyze the information

Errors of Measurement Values and scores from even the best measuring instruments have a certain amount of error, which is random and varied. Obtained score = True score + Error
o "Obtained score" is the score for one participant on the scale/questionnaire
o "True score" is what the score would be IF the measure/instrument could be infallible
o "Error" can be both random/varied (we just have to deal with this) and systematic (bad)


Factors related to Errors of Measurement = THESE ARE BIASES: situational contaminants; response set biases; transitory (changing) personal factors; administration variations (different persons collecting information); item sampling (which items are on the test, and do they capture the attribute of interest?)

Reliability of Measuring Instruments Reliability: The consistency and accuracy with which an instrument measures the

attribute it is designed to measure A reliable instrument is close to the true score; it minimizes error

Test-Retest Reliability: Assesses the stability of an instrument by giving the same test to the same sample

twice, then comparing the scores Gets at the question of time-related factors that may introduce error. Only appropriate for those characteristics that don't change much over time

Internal Consistency: The degree to which the subparts (each item) of an instrument are all measuring the same attribute or trait. Cronbach's alpha is a reliability index that estimates the internal consistency or homogeneity of an instrument (the closer to +1.00, the more internally consistent the instrument)

Best means of assessing the sampling of items
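Because Cronbach's alpha has a simple closed form, a short function makes the idea concrete. This is an illustrative sketch assuming NumPy; the five-respondent, four-item score matrix is fabricated.

```python
# Illustrative Cronbach's alpha calculation (assumes numpy); data are made up
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each individual item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

scores = [[4, 5, 4, 5],
          [2, 3, 2, 2],
          [5, 5, 4, 5],
          [1, 2, 1, 2],
          [3, 3, 3, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # closer to +1.00 = more internally consistent
```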

Interrater Reliability: The degree to which two raters or observers, operating independently, assign the

same values for an attribute being measured or observed. The more congruence, the more accurate/reliable the instrument

Validity - The degree to which an instrument measures what it is intended to measure.

o You can have reliability without validity, but you can't have validity without reliability

Content Validity


The degree to which the items in an instrument adequately cover the whole of the content of interest

Usually evaluated by a panel of experts in the content area

Criterion-Related Validity The degree to which scores on an instrument are correlated with an external

criterion Is there a clearly established criterion? Simulation = accurate reflection of nursing skill. If a written test attempts to

capture the info in a simulation, the simulation becomes the criterion by which validity can be tested

Construct Validity The degree to which an instrument measures the construct under investigation.

What exactly is being measured? Could it be something other than what it looks like?

DESCRIPTIVE STATISTICS Synthesize and describe the data set (information) Example - what is the average weight loss of patients with cancer? Provide foundational information and theory for inferential statistics Helps you assess the representativeness of the sample These are valuable

Inferential Statistics Provide a means for drawing conclusions about a population, given the data from

a sample Based on the laws of probability Allows objective criteria for hypothesis testing

Research hypothesis: Patients exposed to movie on breastfeeding will breastfeed longer than those who do

not see the movie A tentative conclusion about the relationship between two variables

Null hypothesis: There is no difference in breastfeeding length between the two groups: 1) seeing

movie 2) not seeing movie


Our goal: rejection of the null hypothesis, because we cannot directly demonstrate that the research hypothesis is correct

p value (significance of a relationship) p values tell you whether the results are likely to be real; significance simply means that the results are not likely to be attributed to a chance occurrence. In a study, 'significance' refers to an investigator's hypothesis being supported

Effect size analysis: Conveys the estimated magnitude of a relationship without making any statement

about whether the apparent relationship in the data reflects a true relationship

DISCUSSION SECTION

What's here? Interpretation of study findings Requires making multiple inferences Inference: use of logical reasoning to draw conclusions based on limited

information Can we really make valid inferences based on 'stand-ins'? Yes, if we use rigorous

design Investigators are often indirect (at best) in addressing issues of validity - you must

be the judge

Assessing good research design - the main question: To what degree did the investigators provide reliable and valid evidence? The investigator's primary design goal is to control confounding variables

Confounding variable: An extraneous, often unknown variable, that correlates (positively or negatively)

with both the dependent variable and the independent variable. • Confounding is a major threat to the validity of inferences made about cause and effect (internal validity)

Intrinsic (internal) Come with the research subjects These are factors that are simply characteristics of the individual subject Example: Physical activity intervention to improve left sided movement in pts

with CVA


Age/Gender Smoking HX/Physical activity HX All are extraneous variables, and all likely related to the outcome variable

(dependent) Associated with research subject

Extrinsic (external) Are part of the research situation Result in 'situational contaminants' If not sufficiently addressed, these factors raise question about whether something

in the study context influenced the results Associated with research situation or context, situational contaminants, instead of

the variable alone

Controlling extrinsic factors - Goal: create study and data collection conditions that don't change from participant to participant

What does this look like in a study?
o All data collected in the same setting
o All data collected at the same time of day
o Data collectors use a formal script (interviews, observations) and are trained in delivery of any verbal communication
o Intervention protocols are very specific

Controlling Intrinsic Factors 1. Random assignment into groups:

goal is to have groups that are equal with respect to ALL confounding variables, not just the ones we know about

Controlling Intrinsic Factors 2. Homogeneity:

limits confounders by including only people who are 'the same' on the confounding variable

Shows up in exclusion/inclusion criteria Limits generalizability

Example: If gender is a confounder, sample only men

Controlling Intrinsic Factors 3. Matching:


Researcher uses information about each individual and attempt to match them with a corresponding individual, creating comparable groups. Has practical limitations...

Controlling Intrinsic Factors 4. Statistical controls: Use of statistical tests to control for confounding variables (e.g., ANCOVA)

Construct Validity The degree to which the particulars of the study (the sample, the settings, the

treatments, the instruments) are accurate representations of the higher-order constructs they are meant to represent

If there are errors here, results can be misleading Most practically comes up in terms of the validity of tools (measurement

instruments)

Statistical Conclusion Validity The degree to which the selected statistical tests for any given study accurately

detect true relationships. Statistical power: the ability of the research design to detect true relationships. Achieved primarily through sample size, based on a power analysis Not all reports will tell you what the necessary sample size was determined to

be...

Internal Validity How possible it is to make an inference that the independent variable is truly the

causal factor? Any 'competing explanation' for the cause = threat to internal validity

Internal Validity 1. Selection Any pre-existing, often unknown, differences between groups (is a risk for any

non-randomly assigned groups)

Internal Validity 2. History Events occurring at the same time as the study that may impact outcomes (flu

shot example)

Internal Validity 3. Maturation


Those effects that occur simply because time has passed (wound healing, growth, post op recovery)

Internal Validity 4. Mortality/Attrition Who dropped out? Who stayed in? Are previously equivalent groups no longer equivalent? A dropout rate of 20% or more is of concern: look at who is dropping out, who is staying in, and at the groups at the end, and determine whether the study is still valid; attrition greater than 20% can ruin the study or make it invalid.

That's why pilot studies are done

External Validity Addresses how well the relationships that are observed in a study hold up in the real world. Tied to generalizability. Two main design pieces:
o How representative is the sample?
o Replication prospects (answered through multi-site studies or systematic reviews)

Do the same findings hold true in a variety of settings or in diverse sample groups?

Recognize that we are always looking at a sample and trying to apply it to society; with a good sample group you can generalize

EBP and Critical thinking Requires a questioning approach, a 'why?' approach Demands a willingness to challenge the status quo Asking - what is the evidence that suggests that what I'm doing is the best

thing? Investigator carrying out the research must do a similar query: What evidence is

there that the results are true Ok to maintain a skeptic's attitude until the accuracy of evidence is evaluated

MANAGING QUALITATIVE DATA Reductionist in Nature

Example: a 4-hour interview means looking through hours of data, so it needs to be transcribed: reducing it to meaningful information (reductionist = reduce)

When transcribing - is it accurate and valid?
o Researchers do their own transcribing; it is the researcher's responsibility to do the transcription and make sure it is accurate
o Overall goal is to get to know your data


"Immersing" oneself in the data getting to know own data "drowning in data" Maintaining Files: Computers vs. Manual cutting out and putting into files and

developing systems doing hard copy manor

Managing Qualitative Data - Developing a Category Scheme (template)

Look at the data and then prioritize after you look at it, not before; you see themes or categories that begin to develop

You can't make the data fit predetermined categories; they need to develop as you go through the data
o Participation in a walking program - concrete/descriptive
o Asking questions of the data through 'constant comparison': What's going on? What is this person representing? What does this stand for? What does it mean? What else is like this? Who else is like this? What is this distinct from? What is different?
o Anniversary of birth trauma - abstract/conceptual
o How moms coped with it... people get PTSD symptoms related to a traumatic birth
o The developing themes that began to emerge were the prologue to the anniversary, the day of, the day after, and looking at a future anniversary date
o Then seeing where that category fit in

Coding Qualitative Data An opportunity for refining the category scheme. You can relook at the data and may see a new category; some categories may be removed or renamed, and then you need to go back to step one and see how the transcripts fit into the new category scheme (refining). You go back and forth and relook at the data a lot; it is not a linear process

A complex process: it is all complex, and the risk of error and sloppiness is high because of the complexity and analysis fatigue

ANALYTIC PROCEDURES - CONCEPTUAL PROCESS (QUALITATIVE) Constructionist in Nature

Taking the codes and piecing them back together, making an innovative whole

Search for 'themes': Identify commonalities within the data that bring meaning to the experience under study
o You need to ask what the relationships within and among the identified themes are


Iterative process: Initial themes are identified, then the analyst returns to the data with those themes in mind, asking, "does this fit?" A refining and clarifying process... a circular process of looking at the data, stepping back to identify themes, going back to previously determined themes, checking against other participants' themes, then going back again

Seeing if it fits, if it is accurate, and whether others find similar themes and connections

ANALYTIC PROCEDURES - Validation of findings: the aim is to minimize the bias associated with analysis by only one researcher. Think about validity by rechecking the work and seeing whether it is actually about the data and not about biases. Researchers are good at identifying their own biases; they acknowledge biases and let them work with them instead of against them. It is the consumer's responsibility to recognize this and see whether the biases get in the way or not

There is a risk of getting only the perspective of that one person; you need to work to mitigate that risk

ANALYTIC PROCEDURES - Integration: Developing an overall structure - either a theory, a conceptual map, or an overarching description. This integration piece is the 'so what?' piece: look at the end of the research and think "so what?" The researcher needs to do a good job of linking the study to practice, and the 'so what' needs to be focused so people can see how to utilize the research; if they can't, you need to be asking a different question or reword it...

It is difficult to get funded for qualitative research, and that has a bearing on the write-up and on why the researchers think it is important and relevant

The goal is to develop an overall perspective or idea that is useful at the bedside, found in a conceptual map, a concept analysis, or an overarching description, moving in an inductive way and seeing what these participants experience.

ANOVA Analysis of variance tests for differences in the means for three or more groups. Compares how much members of a group differ or vary among one another with how much the members of the group differ or vary from the members of other groups. In other words, the test analyzes variance, comparing the variance within a group with the variance between groups. For example, an ANOVA test of respiratory complications in three groups of patients categorized by smoking status calculates how much variation there is in respiratory complications within the patient group that smokes, the patient group that never smoked, and the patient group that formerly smoked. It then calculates the amount of variation in respiratory complications between the smoking patients, the patients who never smoked, and the former smokers
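A one-way ANOVA like the smoking example above can be run in a single call. The sketch below assumes SciPy and uses invented complication counts for the three groups.

```python
# One-way ANOVA sketch (assumes scipy); complication counts are invented for illustration
from scipy.stats import f_oneway

current_smokers = [5, 7, 6, 8, 9, 7]
never_smoked    = [2, 1, 3, 2, 2, 1]
former_smokers  = [4, 3, 5, 4, 3, 4]

f_stat, p_value = f_oneway(current_smokers, never_smoked, former_smokers)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests the group means differ
```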

T-TEST Computes a statistic that reflects the differences in the means of a variable for two different groups, or at two different times for one group. The two groups being tested might consist of anything of interest to nursing, such as men and women, single-parent families and two-parent families, those who quit smoking and those who did not, or hospitals with level-one trauma centers and hospitals without them
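An independent-samples t-test comparing two such groups might look like the following sketch, which assumes SciPy; the scores are fabricated for illustration.

```python
# Independent-samples t-test sketch (assumes scipy); scores are fabricated
from scipy.stats import ttest_ind

quit_smoking = [12, 15, 14, 10, 13, 16, 12]
kept_smoking = [9, 8, 11, 7, 10, 9, 8]

t_stat, p_value = ttest_ind(quit_smoking, kept_smoking)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # compares the means of the two groups
```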

REGRESSION The statistical procedure that we use to look at connections among three or more variables. Measures how much two or more independent variables explain the variation in a dependent variable. The regression procedure allows us to predict future values for the dependent variable based on values of the independent variables. A regression analysis gives the information needed to know how much different factors independently contribute or connect to a dependent variable.
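A regression with two independent variables can be sketched as follows, assuming the statsmodels package; the predictor and outcome values are invented.

```python
# Multiple regression sketch (assumes statsmodels); all numbers are invented
import numpy as np
import statsmodels.api as sm

age      = np.array([34, 45, 52, 61, 29, 48, 55, 40])
exercise = np.array([5, 3, 2, 1, 6, 2, 1, 4])          # hours per week
recovery = np.array([10, 14, 18, 22, 8, 17, 21, 12])   # dependent variable (days)

X = sm.add_constant(np.column_stack([age, exercise]))  # two independent variables + intercept
model = sm.OLS(recovery, X).fit()

print(model.params)                  # how much each factor independently contributes
print(model.rsquared)                # proportion of variation in recovery explained
print(model.predict([[1, 50, 3]]))   # predicted recovery days for a hypothetical new patient
```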

NULL HYPOTHESIS

Always predicts that there will be no relationship or difference in selected variables.

The researcher must then find enough evidence to reject that prediction, a statistically significant test result being the evidence that is required.

HYPOTHESIS

Is stated in the positive and predicts the nature and strength of a relationship or difference among variables.

It is the researcher’s hope that the results of a study support the prediction.

BLINDING The process of preventing those involved in a study (participants, intervention agents, or data collectors) from having information that could lead to a bias, e.g., knowledge of which treatment group a participant is in; also called masking.

ANONYMITY Protection of participants’ confidentiality such that even the researcher cannot

link individuals with the data they provided.

CHI SQUARED Is used to test hypotheses about differences in proportions.
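A chi-squared test of proportions is usually run on a contingency table. The sketch below assumes SciPy, and the 2x2 table is made up for illustration.

```python
# Chi-squared test of proportions (assumes scipy); the 2x2 table is made up
from scipy.stats import chi2_contingency

#                    improved  not improved
contingency_table = [[30,        20],         # intervention group
                     [18,        32]]         # control group

chi2, p_value, dof, expected = chi2_contingency(contingency_table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```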

PEARSON Establishes a linear relationship. The most often used correlation index is Pearson's r (the product-moment correlation coefficient), which is computed with interval or ratio measures. There are no fixed guidelines on what should be interpreted as strong or weak relationships, because it depends on the variables. If we measured patients' body temperature orally and rectally, an r of .70 between the two measurements would be low. For most psychosocial variables (e.g., stress and depression), however, an r of .70 would be high. Perfect correlations (+1.00 and −1.00) are rare.

Correlation coefficients describe the direction and magnitude of a relationship between two variables, and range from −1.00 (perfect negative correlation) through .00 to +1.00 (perfect positive correlation). Pearson's r is used with interval- or ratio-level variables.
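Pearson's r for two interval-level measures can be computed directly; the sketch assumes SciPy, and the oral and rectal temperatures are fabricated.

```python
# Pearson's r sketch (assumes scipy); temperatures are fabricated
from scipy.stats import pearsonr

oral_temp   = [36.8, 37.1, 37.5, 38.0, 36.9, 37.3]
rectal_temp = [37.2, 37.5, 37.9, 38.5, 37.3, 37.8]

r, p_value = pearsonr(oral_temp, rectal_temp)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # direction and magnitude of the linear relationship
```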

INFERENTIAL STATISTICS Based on laws of probability, allow researchers to make inferences about a

population based on data from a sample. The sampling distribution of the mean is a theoretical distribution of the means

of an infinite number of same-sized samples drawn from a population. Sampling distributions are the basis for inferential statistics.

COVARY/COVARIANCE When two variables are connected in some way, they are said to covary. Two variables covary when changes in one are connected to consistent changes

in the other. For example, height and weight covary in healthy growing children. As the height of a child increases, the weight usually increases as well.

PARAMETRIC STATISTICS These are numbers that meet two key criteria: (1) the numbers must generally be normally distributed, that is, the frequency distribution of the numbers is roughly bell shaped; and (2) the numbers must be interval or ratio numbers, such as age or intelligence score, that is, the numbers must have an order, and there must be an equal distance between each value

NONPARAMETRIC STATISTICS Used for numbers that do not have a bell-shaped distribution and are categoric

or ordinal. Categoric or ordinal numbers represent variables for which there is no

established equal distance between each category, such as numbers used to represent gender or rating of preference for car color. In the predictors of life satisfaction study, gender would be a nonparametric statistic, whereas life satisfaction scores would be a parametric statistic.

CONFIDENTIALITY Protection of study participants so that data provided are never publicly

divulged.

BASIC RESEARCH Research designed to extend the base of knowledge in a discipline for the sake

of knowledge production or theory construction, rather than for solving an immediate problem

APPLIED RESEARCH Research designed to find a solution to an immediate practical or clinical

problem.

ATTRITION The loss of participants over the course of a study, which can create bias by

changing the composition of the sample initially drawn.

CODING The process of transforming raw data into standardized form for data processing

and analysis; in quantitative research, the process of attaching numbers to categories; in qualitative research, the process of identifying recurring words, themes, or concepts within the data.

COEFFICIENT ALPHA (CRONBACH’S ALPHA) A reliability index that estimates the internal consistency of a measure

comprised of several items or subparts.

THE VALIDITY DEBATE IN QUALITATIVE RESEARCH A controversial term in qualitative circles. Does the term 'validity', defined as "the quality of being sound and well-founded", apply to qualitative research? It works like a parallel


Current conclusion on the debate is to agree to approach validity from a 'parallel perspective'

The terms TRUSTWORTHINESS and INTEGRITY are parallel to reliability and validity in quantitative research

Overall, there is agreement about the need for high-quality research and standards by which to determine what is 'high-quality'

CRITERIA FOR TRUSTWORTHINESS credibility, dependability, confirmability, transferability, authenticity

1. Credibility: having confidence in the truth of the data and the interpretations (parallel to validity)

2. Dependability: the interpretations hold true over time and over conditions (parallel to reliability)

3. Confirmability: interpretations can be independently agreed upon (parallel to objectivity or neutrality)

4. Transferability: the degree to which the interpretations have application in other settings (parallel to generalizability). This is not the job of the researcher, but of the consumer

5. Authenticity: the degree to which researchers show the range of experiences within their data so that readers develop a sensitivity through rich contextual descriptions

STRATEGIES TO ENHANCE QUALITY (and reduce threats to integrity/trustworthiness)

Prolonged Engagement (scope) Persistent Observation (depth) Triangulation - a means of avoiding single-method, single-observer biases
o Data: time, space, person
o Method: data sources
o Investigator: collaborative analysis

Audit (decision) trails and Inquiry Audits Peer Review and Debriefing Thick description - providing sufficient detail to allow the reader to make judgments about the study's credibility


Evaluating Qualitative Findings Credibility (believability) - have you been convinced of the fit between the data

and the interpretations presented? Close scrutiny - is there evidence that alternative interpretations were considered?

Are limitations and their effects discussed? Importance - is there new understanding, new insight that is presented or does it

seem common sense?

Systematic Review: Systematic reviews are the foundation of evidence-based practice. A rigorous synthesis of research findings on a specific research question. Can be:
Narrative - AKA "Literature Review"
Statistical - AKA "Meta-Analysis"

Meta-Analysis A quantitative and statistical approach to combining findings across studies that are reasonably similar. Every meta-analysis is a systematic review, but not every systematic review is a meta-analysis. The main goal is to objectively integrate (synthesize) all the individual study findings on a specific topic. Essentially uses all the activities in an individual study EXCEPT what?
• Collecting original data

Qualitative Guided by research questions; data are collected from a small number of subjects, allowing an in-depth study of the phenomenon

Quantitative Describes phenomena; seeks to test hypotheses / answer research questions using statistical methods

Meta-Synthesis A synthesis of a number of qualitative articles on a focused topic using specific

qualitative methodology.

Integrative review Focused review and synthesis of the literature on a specific area that follows

specific steps of literature integration and synthesis without statistical analysis.

Meta-analysis


Summarizes a number of studies focused on a topic using a specific statistical methodology to synthesize the findings in order to draw conclusions about the area of focus.

4 strategies for critical reading:
1. Preliminary
2. Comprehensive
3. Analysis
4. Synthesis

Preliminary Familiarizing yourself with the content - skimming the content.

Comprehensive Understanding the researcher's purpose or intent.

Analysis Understanding the parts of the study.

Synthesis Understanding the whole article and each step of the research process in a study.

Levels of evidence: 1-7 (greatest to least)

Level 1

Systematic review or meta-analysis of randomized controlled trials (RCTs)

Level 2

A well-designed RCT

Level 3

Quasi-experimental study-Controlled trial WITHOUT randomization

Level 4

Single non-experimental study (case-control, correlational, cohort studies).


Level 5

Systematic reviews of descriptive and QUALITATIVE studies

Level 6

Single descriptive or QUALITATIVE studies

Level 7

Opinion of authorities and/or reports of expert committees.

THE RESEARCH QUESTION

What presents the idea that is to be examined in the study and is the foundation of the research study

THE HYPOTHESIS

What attempts to answer the research question

What are the 4 components of clinical questions?

1. Population
2. Interventions
3. Comparison
4. Outcome

THE PURPOSE

The aims or objectives the investigator hopes to achieve with the research, not the question to be answered.

RESEARCH OR SCIENTIFIC HYPOTHESIS

A statement about the expected relationship of the variables

NULL HYPOTHESIS

States there is no relationship between the independent and dependent variables.

3 Characteristics of a research question:

1. Clearly identifies variables
2. Specifies population
3. Implies possibility of empirical testing

INDEPENDENT VARIABLE


Which variable has the presumed effect on the other variable?

DEPENDENT VARIABLE

Is often referred to as the consequence or the presumed effect that varies.

Correlational research question

Is there a relationship between X (independent variable) and y (dependent variable) in the specified population?

Comparative research question

Is there a difference in Y (dependent variable) between people who have X characteristic and those who do not have X characteristic?

Experimental research question

Is there a difference in Y (dependent variable) between group A who received X (independent variable) and group B who did not receive X?

LITERATURE REVIEW

Systematic and critical appraisal of the most important literature on a topic, is a key step in the research process that provides the basis of a research study.

CONCEPT

An image or symbolic representation of an abstract idea.

THEORY

Set of interrelated concepts, definitions, and propositions that present a systematic view of phenomena for the purpose of explaining and making predictions about those phenomena.

CONCEPTUAL DEFINITION

This type of definition includes general meaning of a concept. Ex: To walk from place-to-place

OPERATIONAL DEFINITION

This type of definition includes the method used to measure the concept. Ex: Taking 4 steps without assistance


PICO format to generate research questions for EBP

P: Problem/patient population; specifically defined group
I: Intervention; what interventions or event will be studied?
C: Comparison of intervention; with what will the intervention be compared?
O: Outcome; what is the effect of the intervention?

When looking for systematic reviews what search engine should you use?

Cochrane review

Where can you find individual original (RCTs) studies?

Medline and CINAHL (lower level of the information resource pyramid) - randomized clinical trials

What should be your first choice when looking for theoretical, clinical or research articles?

Print resources - refereed or peer-reviewed journals. This means that manuscripts submitted to the journal are reviewed by a panel of internal/external experts on the topic for possible publication.

Structure of concepts and/or theories that provides the basis for development of research questions or hypotheses

Conceptual or theoretical framework
- For example, in a study investigating the effect of a psychoeducational telephone counseling intervention on the quality of life (QOL) of breast cancer survivors, QOL was the conceptual framework used to guide the identification and development of the study.
- QOL was conceptually defined as "a multidimensional construct consisting of four domains: physical, psychosocial, social and spiritual well-being."
- Each domain contributes to overall quality of life and was operationally defined as the score on the Quality of Life-Breast Cancer Survivors measurement instrument.

What is the overall purpose of the literature review?

The overall purpose of the literature review is to present a strong knowledge base for the conduct of the research study.

As students what should be our first choice when looking for theoretical, clinical, or research articles?

Refereed or peer-reviewed journals


Refereed or peer-reviewed journals

Refereed or peer-reviewed journals have a panel of internal and external reviewers who review submitted manuscripts for possible publication. The external reviewers are drawn from a pool of nurse scholars, and possibly scholars from other related disciplines who are experts in various specialties. In most cases, the reviews are "blind"; that is, the manuscript to be reviewed does not include the name of the author(s).

In contrast to quantitative studies, the literature reviews of qualitative studies are usually handled in a different manner. How is this so?

There is often little known about the topic under study. In qualitative studies the literature review may be conducted at the beginning of the study or after the data analysis is completed.

Why is nursing research valuable to the consumers of health care?

Research provides evidence that nursing care makes a difference.

What is the purpose of the World Health Organization (WHO)'s designated Collaborating Centers throughout the United States?

To provide research and clinical training in nursing to colleagues around the globe

How is evidence-based practice derived?

From research on patient outcomes

4 ways that qualitative findings can be used in EBP (4 modes of clinical application from Kearney)

1. Insight or empathy
2. Assessment of status or progress
3. Anticipatory guidance
4. Coaching

Kearney's Categories of Qualitative Findings

1. Descriptive categories
2. Shared pathway or meaning
3. Depiction of experiential variation
4. Dense explanatory description
5. Restricted by a priori (existing theory) frameworks

DESCRIPTIVE CATEGORIES


Phenomenon is vividly portrayed from a new perspective; provides a map into previously uncharted territory in the human experience of health and illness.

Steps in the qualitative research process (8 steps)

1. Literature review
2. Study Design
3. Sample
4. Setting/recruitment
5. Data collection
6. Data analysis
7. Findings
8. Conclusions

Philosophical beliefs, a world view

Paradigm - all research is based on a paradigm.

Auditability

The researcher should include enough information in the report to allow the reader to understand how the raw data lead to the interpretation.

Fittingness

The researcher provides enough detail in a qualitative research report for the reader to evaluate the relevance of the data to nursing practice.

Define research design

A framework that the researcher creates.

Means of control:

Homogenous sample Consistent data-collection procedures Manipulation of IV Randomization

Homogenous sample

Participants in the study are homogenous or have similar extraneous variables that might affect the dependent variable.

Homogeneity of the sample limits generalizability or the potential to apply the results of a study to other populations.


Constancy

Data collection procedures are the same for each subject; data collected in the same manner and under the same conditions.

Manipulation of the Independent variable

Experimental and control groups The independent variable is: the variable that the researcher hypothesizes will have an

effect on the dependent variable. Usually manipulated (experimental study) but sometimes cannot be manipulated (non-experimental)

Randomization

Each subject in a population has an equal chance of being selected. Each subject in the study has an equal chance of being assigned to the control group or the experimental group.

Internal validity

Asks whether the IV really made the difference or the change in the dependent variable. Established by ruling out other factors or threats as rival explanations. Must be established before external validity can be established.

Inter-rater or Inter-observer reliability

This is a way stability between raters is documented. A reliability assessment is indicated when multiple raters observe and record a variable. Interrater reliability quantifies the stability of a measure across raters. Ex: the degree of agreement between two or more nurses who are staging a pressure ulcer should be documented.

A simple percent agreement can be documented, but a kappa statistic is even better. Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon
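Percent agreement and the kappa statistic for two raters can both be computed in a few lines. This sketch assumes NumPy and scikit-learn; the pressure ulcer stagings are invented.

```python
# Interrater agreement sketch (assumes numpy and scikit-learn); stagings are invented
import numpy as np
from sklearn.metrics import cohen_kappa_score

nurse_a = [1, 2, 2, 3, 1, 4, 2, 3, 1, 2]  # pressure ulcer stage assigned by rater A
nurse_b = [1, 2, 3, 3, 1, 4, 2, 2, 1, 2]  # stage assigned by rater B

percent_agreement = np.mean(np.array(nurse_a) == np.array(nurse_b))
kappa = cohen_kappa_score(nurse_a, nurse_b)
print(f"Percent agreement = {percent_agreement:.2f}, kappa = {kappa:.2f}")
```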

VALIDITY

The extent to which an instrument measures what it was intended to measure

INTERNAL VALIDITY

The confidence that an experimental treatment or condition made a difference and that rival explanations were systematically ruled out through study design and control

EXTERNAL VALIDITY


The ability to generalize the findings from research study to other populations, places, and situations

TRIANGULATION

Combined use of two or more theories, methods, data sources, investigators, or analytical methods to study the phenomenon. The researcher's use of multiple sources to confirm a finding; this can increase the credibility of the results

Cross-checking conclusions using multiple data sources, methods or researchers to study the phenomenon

BRACKETING

The process of explicitly reflecting on and documenting the researcher's bias. A strategy to control bias in which the researcher, aware of her own biases, makes them explicit by putting them in brackets (setting them aside). By making the biases known, the researcher is less likely to succumb to them. In qualitative research we are less concerned with bias, but it should still be acknowledged.

Examples of descriptive design

Simple descriptive
Survey
Cross-sectional
Longitudinal
Case study
Correlation
Predictive

DESCRIPTIVE RESEARCH

The study of phenomena as they naturally occur. The purpose of descriptive research is the exploration and description of phenomena in real-life situations. In nursing, the descriptive design can be used to develop a theory, identify problems, make decisions, or determine what others are doing so that nurses can design effective nursing interventions.

DESCRIPTIVE STUDIES

May include 1 participant or many May collect data at one time or multiple times


No variables are manipulated in the study. When data are collected in numbers it is considered a quantitative study; when data are collected in words it is a qualitative study. "What is nurses' knowledge about best practices related to enteral nutrition? Is the knowledge level consistent with practice? Is practice consistent with best available evidence?"

CROSS SECTIONAL STUDY

Looks at a single phenomenon across multiple populations at a single point in time
• Economical
• No waiting for an outcome to occur
• Large samples possible
• No loss due to attrition

"What are the differences in job satisfaction among nurses working on different types of units? Is the job satisfaction associated with experience as a nurse?"

LONGITUDINAL STUDY

Follows subjects over a period of time "What is the effect of urinary incontinence on the quality of life of long-term care

residents over time"?

Benefits of a longitudinal study

• Historical trends/causal associations
• Cost effective
• Efficient
• Do not rely on recall

Limitations of longitudinal study

• Attrition
• Dependent on accurate documentation
• Once begun, cannot change
• Expensive
• Large samples are expensive

CORRELATION DESIGN

• Involves the analysis of two variables and seeks to determine the strength of their relationship. Asks: "What is the relationship between patient satisfaction and the timeliness and effectiveness of pain relief in a fast-track emergency unit?"

Benefits of correlation design


• Uncomplicated
• Flexibility in exploring relationships
• Practical applications
• No data manipulations

Limitations of correlation design

• Lack control
• Lack randomization
• Suppressor variable may be cause
• Spurious relationships

Prediction (Regression) design

Designed to look at variables at one point in time in order to predict or forecast an outcome measured at a different point in time

"Can feeding performance in neonates be predicted by indicators of feeding readiness?"

Advantages of Prediction (Regression) Design

Much info from a single data set
Results can be applied to a group or to individuals

Disadvantages of Predictive (Regression) Design

No assurance of causality
Requires a relatively large sample size

CASE STUDIES

Descriptive exploration of a single unit. The meticulous descriptive exploration of a single unit of study such as a person, family, group, community, or other entity. It is purely descriptive and relies on depth of detail to reveal the characteristics and responses in a single case. "What are the appropriate assessments and interventions for a patient experiencing paraplegia after heart surgery? What was the course of the condition and the expected responses from the single patient's perspective?"

Advantages of case study

Provides in-depth information
Captures changes over time

Disadvantages of case studies

No insight provided
No baseline measurement
Causation cannot be inferred
Results cannot be generalized

RANGE

• Distance between the two most extreme values in a data set

MEAN

• Average

MEDIAN

• Middle number

MODE

• Most common number
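These four descriptive statistics can be computed directly with Python's standard library; the scores below are a hypothetical data set used only for illustration.

    # Central tendency and range for a small, hypothetical data set.
    import statistics

    scores = [70, 72, 75, 75, 80, 88, 95]

    print("range  =", max(scores) - min(scores))   # distance between the two extremes
    print("mean   =", statistics.mean(scores))     # average
    print("median =", statistics.median(scores))   # middle number
    print("mode   =", statistics.mode(scores))     # most common number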

What is the benefit of using inferential statistics?

• They allow the researcher to determine the probability that random error is responsible for the outcome, and they give the reader information about the size of the effect

Inferential statistics

Inferring something about a population: a step beyond descriptive analysis. Enables the researcher to draw conclusions about a population from a sample. Because these are calculations, they focus on numerical representations of reality. Inferential analysis is the most common type of quantitative analysis used in research for evidence.
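A minimal sketch of drawing an inference from a sample, assuming scipy is available; the sample values and the hypothesized population mean of 50 are hypothetical.

    # One-sample t-test: does this sample suggest the population mean differs from 50?
    from scipy import stats

    sample = [52, 55, 49, 61, 58, 47, 60, 54, 57, 53]
    t_stat, p_value = stats.ttest_1samp(sample, popmean=50)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # a small p suggests random error alone is unlikely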

Quasi experimental designs

Similar to experimental designs but using convenience samples or existing groups to test interventions

Ex Post Facto

Both independent and dependent variables have already occurred

How does NON-experimental design differ from experimental design?

Uses comparison groups rather than a control group


It cannot be assumed that the sample represents the population
Likely a convenience sample
More feasible than an experimental design
Groups may not be equivalent
Rival explanations may exist

Case Study

Only descriptive
Only one person/unit/community

Paired T Test

A test that may be used to compare pre-test/post-test change; compares two sets of paired (related) measurements and their differences
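A minimal paired t-test sketch in Python; the pre/post scores are hypothetical, each pre value belonging to the same subject as the post value beside it (scipy assumed).

    # Paired t-test on hypothetical pre/post scores from the same subjects.
    from scipy import stats

    pre  = [4, 6, 5, 7, 3, 8, 5]
    post = [6, 7, 7, 9, 5, 9, 6]

    t_stat, p_value = stats.ttest_rel(pre, post)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")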

ANOVA

Requires a nominal or ordinal independent variable and interval or ratio data for the dependent variable. Has a single dependent variable and may have multiple independent levels (levels, not variables).
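A one-way ANOVA sketch, assuming scipy; the three groups represent hypothetical unit types (the levels of one categorical independent variable) and the numbers are invented job-satisfaction scores.

    # One-way ANOVA: one interval/ratio dependent variable, one factor with three levels.
    from scipy import stats

    icu      = [72, 75, 78, 70, 74]
    med_surg = [65, 68, 70, 66, 69]
    er       = [80, 82, 79, 85, 81]

    f_stat, p_value = stats.f_oneway(icu, med_surg, er)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")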

Quasi Experimental

Uses a comparison group whose individuals are studied at the same time as the experimental group

Retrospective

A research design that looks back to determine cause of an outcome

Cross-sectional

One point in time, practical and economical

Longitudinal

Follows individuals over time

Prospective

A research design that looks forward to determine if a cause results in an outcome

Case Study


A meticulous exploration of a single unit of study, such as a person, family, or community

Predictive

These studies tend to forecast an outcome

Mean

The average; a measure of central tendency

Mode

The most frequent number in a distribution

Standard Deviation

This unit of measurement expresses the variability of the data in reference to the mean. It provides a numerical estimate of how far, on average, the separate observations are from the mean.
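The idea of "average distance from the mean" can be made concrete with a short sketch; the data values are hypothetical.

    # Standard deviation: library call and the underlying arithmetic side by side.
    import math
    import statistics

    data = [4, 8, 6, 5, 3, 7]
    mean = statistics.mean(data)

    # sample variance: average squared distance from the mean (divided by n - 1)
    variance = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
    print(math.sqrt(variance))       # manual calculation
    print(statistics.stdev(data))    # same result from the standard library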

Descriptive Analysis

Answers "What is going on RIGHT NOW" "How are you doing RIGHT NOW"

A priori

Knowing ahead of time what it is you're going to ask

Quantitative Research

Numbers; organized and systematic. Data are represented numerically. Many patients in the study. Answers questions with yes/no or other discrete responses.

Research Design

A plan that outlines the overall approach to a study, grounded in a set of beliefs about knowledge and linked to the nature of the research question

Focused on answering the research question with the greatest credibility
Based on the purpose of the study
Outlines one's study


Provides the details
Macro view: the overall approach comes from a specific paradigm or belief system, which determines which questions can be answered best from which perspective

Basis for design selection

Research question
Researcher expertise
Purpose of the study
Resources
Previous research and use of instruments
Requirements for control (keep the extraneous variables in check)
Issues about internal and external validity (generalizability)

Phases of research

Identify assumptions about gaps in knowledge (literature review)
Select an overall approach that serves the purpose
Specify the design
Develop the details

Researcher’s expertise

Educational preparedness
Novice - expert
Previous experiences in related research

Resources

Time
How much money will this cost?
What resources do you require (e.g., statistician, documents, computer, materials, instruments)?
Do you require or have personnel resources?
Do you have support from the institution?

Previous research

Has this study been done before? (Use their research.)
Can you replicate it?
Are there instruments that you can borrow or modify for your study?
Does previous research support the need for this new study?
Is there a gap in knowledge? Did previous research leave an opening for continuing the study ("recommendations for future research")?

Requirements for control

What does this mean?
What is control? Removing superfluous variables
Why do we want control?
What are extraneous or confounding variables?
What does this mean in terms of external validity?

External validity

To what extent can you generalize to the larger population? You address this by controlling the sample population.
How do you control this? Control who is in your sample; make sure it can be generalized to the population.

Internal validity

Trustworthiness of findings
Assign subjects to groups equitably
Document equivalence of the study group
Control elements of the environment
Ensure treatment is applied reliably (consistency)
Provide consistent and accurate measurements
Control variables extraneous (not relevant) to the study

Types of variables

Dependent
Independent - causes something to happen
Extraneous or confounding - not part of the study but can still distort your results
All must be operationalized so that you can measure them

Design decisions

Assumptions about information to be gained
Purpose of the study


Details of the study

Purpose of the study

Exploratory - may be quantitative or qualitative
Confirmatory - typically quantitative
What variables will be studied

Descriptive research

Research questions begin with what or why
Details a process, event, or outcome
Describes the sample, the event, or the situation
Independent/dependent variables - e.g., level of education

Common descriptive designs

Survey designs
Cross-sectional - across cohorts
Longitudinal - over time
Case studies
Single-subject design (response to an event)
Phenomenology
Ethnography - e.g., Jane Goodall, the woman who lived with apes to learn how the apes lived

Cross sectional

• Multiple cohorts at a single point in time

Longitudinal

• Follows study participants over time

Case study

• In depth analysis

Phenomenology

• Qualitative; looks at the lived experience of one subject

Ethnography

• A deep study of one culture

Designs that describe relationships


Correlation - is one variable related to the other?
Predictive
Grounded theory - qualitative; takes a situation and derives a theory from it, e.g., what causes postpartum depression: interview the women and derive a theory

Random selection

Everyone in the population has an equal opportunity to be in the study

Which experimental design is most often used in nursing research

Quasi experimental

Three conditions that must be met to establish causality

Cause must precede the effect
Probability (a relationship between cause and effect) must be established
Rival explanations must be ruled out

Bias

The distortion of true findings by factors other than those being studied

Bias may be caused by

Researcher
Measurements
Subjects
Sampling procedures
Data
Statistical analyses
Extraneous variables

Sampling Theory

Sampling theory was developed to mathematically determine the most effective way to acquire a sample that would accurately reflect the population of study.

Why do we take samples instead of studying an entire population? What happens if we test an infinite number of random samples?

Central limit theorem

A mathematical theorem that is the basis for the conclusion that larger samples will represent a population more accurately than small ones
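A small simulation makes the theorem visible: means of larger random samples cluster more tightly around the true population mean. The population below is artificial and exists only for this sketch.

    # Central limit theorem sketch: sample means spread less as sample size grows.
    import random
    import statistics

    random.seed(1)
    population = [random.uniform(0, 100) for _ in range(100_000)]
    true_mean = statistics.mean(population)

    for n in (5, 50, 500):
        sample_means = [statistics.mean(random.sample(population, n)) for _ in range(200)]
        spread = statistics.stdev(sample_means)
        print(f"n = {n:4d}: means vary around {true_mean:.1f} with SD {spread:.2f}")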


Essentials in sampling theory

Population
Sample
Elements
Sampling criteria - characteristics of the participants
Sampling frame - list of possible participants in the sample
Sampling plans
Representativeness - sampling criteria determine this
Randomization
Sampling factors

How do you get from your population of interest to your study?

Sampling criteria
Sampling frame
Sampling plan

What about bias

Is a particular group over-represented?
Does the sample reflect the characteristics of the population of interest?
Was the sample selected using flawed methods?

Inferential statistics

Understanding a small part (the sample) in order to infer or predict the truth about the whole (the population)

Probability Sampling

A sample where the elements are drawn by chance procedures
Every member of the accessible population has an equal opportunity (a known, nonzero chance) of being selected into the study
Random sampling
Advantages: generalizability and the ability to draw inferences
Most likely analyzed using parametric tests

P value

The probability that your result is due to chance, i.e., that you would be making a mistake or wrongly interpreting your findings by claiming a real effect

Assumed alpha level

.05
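The p value and the assumed alpha come together in a simple decision rule; the p value below is a hypothetical result, not output from any particular test.

    # Decision rule: compare the test's p value with the assumed alpha of .05.
    alpha = 0.05
    p_value = 0.03   # hypothetical result from a statistical test

    if p_value < alpha:
        print("Reject the null hypothesis (result unlikely to be due to chance alone).")
    else:
        print("Fail to reject the null hypothesis.")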

Cronbach's alpha


Measures the internal consistency of an instrument - how well a set of items agrees in measuring a single construct. You always want it to be high; a perfect score = 1.
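For readers who want to see the calculation, here is a minimal sketch of Cronbach's alpha using only the standard library; the 5 respondents and 3 items are hypothetical.

    # Cronbach's alpha for a hypothetical 3-item instrument (rows = respondents).
    import statistics

    responses = [
        [4, 5, 4],
        [3, 3, 4],
        [5, 5, 5],
        [2, 3, 2],
        [4, 4, 5],
    ]

    k = len(responses[0])                               # number of items
    items = list(zip(*responses))                       # one tuple of scores per item
    item_variances = [statistics.variance(item) for item in items]
    total_variance = statistics.variance([sum(row) for row in responses])

    alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
    print(f"Cronbach's alpha = {alpha:.2f}")            # closer to 1 = more internally consistent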

Nonprobability sampling

A sample where the elements are not drawn by chance procedures; not every member of the accessible population has an opportunity of being selected into the study
Strongly dependent upon the experience and judgment of the researcher
Used when probability sampling is not feasible
Advantages are convenience and economy
Most likely analyzed using non-parametric tests

Simple random sample

Truly random, equal opportunity

Stratified random sampling

Equal representation from the population
Based on percentages
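The difference between these two probability methods is easy to see in code; the sampling frame of 100 nurses and the 70/30 shift split are hypothetical.

    # Simple random vs. proportional stratified sampling (hypothetical frame).
    import random

    frame = [f"nurse_{i}" for i in range(1, 101)]       # sampling frame of 100 nurses

    # Simple random sample: every element has an equal chance of selection.
    simple = random.sample(frame, 10)

    # Stratified random sample: sample each stratum in proportion to its size
    # (assume the first 70 are day shift and the last 30 are night shift).
    day_shift, night_shift = frame[:70], frame[70:]
    stratified = random.sample(day_shift, 7) + random.sample(night_shift, 3)

    print(simple)
    print(stratified)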

Cluster Sampling

Sampling from progressively smaller groupings (e.g., regions, then institutions, then individuals)

Systematic Sampling

Typically utilizes a list
The first participant is randomly selected, and then you count down a given number (the sampling interval) to select the next participant
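A systematic sampling sketch, assuming a hypothetical list of 200 patients and a desired sample of 10 (interval k = 20).

    # Systematic sampling: random start, then every k-th name on the list.
    import random

    frame = [f"patient_{i}" for i in range(1, 201)]   # hypothetical list of 200 patients
    k = 20                                            # interval = 200 names / 10 desired subjects

    start = random.randrange(k)                       # the first subject is drawn randomly
    sample = frame[start::k]                          # then count down k to each next subject
    print(sample)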

Convenience sample

The most easily accessed

Quota sampling

Desire to have a set number or equal representation

Purposive Sample

Used in qualitative studies; variations include:
Extreme case
Expert case
Heterogeneity

Networking

Friends and family sampling

Snowballing

Like characteristics

Sampling error

The difference between your sample and the findings expected from an infinite number of samples; this is a mathematical calculation

Types of sampling error

Random sampling error - the expected difference between the sample and the population; if it is too large, it is reduced by an increase in sample size

MEASUREMENT STRATEGIES

Validity

Measuring what you intend to measure

Reliability

Consistency

Key elements in quantitative research

Numbers are key elements because they are:
Objective
Standardized
Consistent
Precise
Statistically testable
An accurate representation of attributes

Measurement strategy

Determine relevant attributes that demonstrate an answer to the question
Operationalization of the attributes


Selecting a reliable instrument
Determining the validity and reliability
Development of a protocol
Quality checks that ensure the process results in an accurate and complete data set

Primary Data

Collected from subjects
Calibrated instruments
Equipment such as tape recorders
Paper-and-pencil tests
Online surveys
Observations
Frequencies

Secondary data

Less reliable than primary data
Easier to collect
Records
Government sources
Second-hand information

Assumptions about secondary data

Data are accurately recorded
Data are consistent
Recorded so that a common interpretation is possible

Level of measurement

Nominal
Ordinal
Interval
Ratio

Nominal

The name 'Nominal' comes from the Latin nomen, meaning 'name' and nominal data are items which are differentiated by a simple naming system.

The only thing a nominal scale does is to say that items being measured have something in common, although this may not be described.

Nominal items may have numbers assigned to them. This may appear ordinal but is not -- these are used to simplify capture and referencing.


Nominal items are usually categorical, in that they belong to a definable category, such as 'employees'.

Examples: the number pinned on a sportsperson; a set of countries.

Nominal

The lowest level of measurement; uses numbers to classify two or more categories
The number does not provide a numerical value
Categorical and discrete

Ordinal

Specifies the order of items being measured
Does not tell anything about how much greater one level is than another
Intervals between the ranks are not equal
The data have no numerical value; only frequencies can be calculated

Ordinal

Items on an ordinal scale are set into some kind of order by their position on the scale. This may indicate, for example, temporal position or superiority.

The order of items is often defined by assigning numbers to them to show their relative position. Letters or other sequential symbols may also be used as appropriate.

Ordinal items are usually categorical, in that they belong to a definable category, such as '1956 marathon runners'.

You cannot do arithmetic with ordinal numbers -- they show sequence only.
Examples: the first, third, and fifth person in a race; pay bands in an organization, as denoted by A, B, C, and D.

Interval Measurement

Interval data (also sometimes called integer data) are measured along a scale in which each position is equidistant from the others. This allows the distance between two pairs to be equivalent in some way. Interval scales are often used in psychological experiments that measure attributes along an arbitrary scale between two extremes. Interval data cannot be multiplied or divided.


Examples: my level of happiness, rated from 1 to 10; temperature, in degrees Fahrenheit.

Ratio Measurement

In a ratio scale, numbers can be compared as multiples of one another; thus one person can be twice as tall as another person. Importantly, the number zero has meaning. The difference between a person aged 35 and a person aged 38 is the same as the difference between people aged 12 and 15, and a person can also have an age of zero. Ratio data can be multiplied and divided because not only is the difference between 1 and 2 the same as between 3 and 4, but 4 is also twice as much as 2. Interval and ratio data measure quantities and hence are quantitative. Because they can be measured on a scale, they are also called scale data.
Examples: a person's weight; the number of pizzas I can eat before fainting.

Random error

Does not affect the mean score but does affect the variability and standard deviation. Can be corrected by increasing the sample size

Systematic error

Is biased: consistent but inaccurate

Reasons for systematic errors

Inappropriate sampling
Errors in measurement and procedures
Missing data

Threats to validity

History (extraneous historical event that occurs during the study)
Maturation (subjects grow up)
Testing (pretest vs. posttest)
Instrumentation (the way questions are asked; interval, nominal)
Mortality (subjects die or leave the study)
Selection bias (how the researcher chooses subjects)
Hawthorne effect


Hawthorne effect

The extent to which subjects change their behavior simply because they know that their behavior is being studied

External validity

The extent to which the findings can be generalized beyond the population in the study

Extraneous variables

Variables that have an irrelevant association with the dependent variable but can affect the study
Influencing factors that you will need to include or disclose in your study
Factors that you will need to control; otherwise they will affect the outcome of your study

Population validity

Population chosen is representative of the population at large

Population

the entire set of subjects that the researcher is interested in

Sample

carefully selected subset of the population

External Validity

the ability to generalize the findings from a study to other populations, places and situations

Network sampling -

friends and family sampling

Non-probability sampling -

Convenience sampling - those most easily accessed
Quota sampling - desire to have a set number or equal representation
Purposive sampling - extreme case, expert case, heterogeneity
Networking (friends and family sampling)
Snowballing (like characteristics)


Random selection

a method of choosing a random sample using mathematical probability to ensure the selection of subjects is completely objective

Random assignment

subjects are randomly assigned to an experimental or the control group
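A minimal random-assignment sketch; the 20 enrolled subjects are hypothetical placeholders.

    # Random assignment: shuffle enrolled subjects, then split into two groups.
    import random

    subjects = [f"subject_{i}" for i in range(1, 21)]
    random.shuffle(subjects)

    half = len(subjects) // 2
    experimental, control = subjects[:half], subjects[half:]
    print("experimental:", experimental)
    print("control:     ", control)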

Sampling frame

the available population / the potential participants who meet the definition of the population and are accessible to the researcher

Snowball sampling

Referral - violates randomness and independence; each subject is asked to recruit other subjects.

Systematic sampling

the first subject is drawn randomly, and the remaining subjects are selected at predetermined intervals

Target population

the whole set of people to whom the results will be generalized

Data collection must be

Clear
Unbiased
Reliable
Valid
Designed to answer the research question

Physiological measures

Require calibration
Examples include:
Vital signs
Weight, height, BMI
Scales, sphygmomanometers, otoscopes, thermometers, stethoscopes, etc.

Psychometric measures

Subjective
Usually self-report


Many have already been validated and shown to be reliable.
Examples:
Pain scales
Visual analog scales
Depression scales

Qualitative collection methods

Interviews
One-on-one
Researcher acts as the instrument
Focus groups
Written narratives
Journals
Observation
Participative

Surveys

May be qualitative or quantitative
Written or interview based
Pen-and-pencil or online
Questionnaires - self-administered with multiple options

Questionnaires

Open-ended questions (narrative)
Dichotomous (nominal) - 2 choices, yes or no
Ranking style (ordinal) - 1-10 in order
Multiple-choice questions (nominal)
Forced-answer questions
Visual analog (ratio or interval) - must be measured (bad to excellent)
Semantic differential scales (interval) - attribute or character, e.g., good teacher - bad teacher
Likert scales (ordinal, often treated as interval) - strongly agree/agree/don't agree

Likert Scales

Not questions but items to which the individual agrees or disagrees . . . Example: This is the absolute best class I have ever taken

Options: Strongly Agree / Agree / Unsure / Disagree / Strongly Disagree

Semantic Differential Scales

Different from a Likert scale in two ways: only the two extremes are labeled, and the continuum is based not on agree/disagree but on opposite adjectives that express the respondent's feelings, e.g., happy - sad.

Forced Response

I would like to take this class again.
Options:
Oh, yes, I love this class. I would rather do nothing more than take this class again.
Ms. Braham makes this class so fun, who wouldn't want to take it 2, 3, 4 times?
I would encourage all my friends to take this class twice if for no other reason than to get the Mexican Wedding Cookies!

Other

Useful for measuring subjective phenomena (e.g., pain, fatigue, anxiety)
Unidimensional, quantifying intensity only
Daniel's hair represents which level of stress? Agonizing, horrible, dreadful, uncomfortable, annoying, none...

Review: Reliability

What is instrument reliability?
Testing (correlation coefficient): multiple items testing the same variable or construct - what is the correlation?
Cronbach's alpha (important!): measures how well a set of items (or variables) measures a single construct. The higher the alpha level, the better; highest = 1.0.

Types of Reliability - consistency

Inter-rater (inter-observer) reliability - used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon.
Stability over time: test-retest reliability - used to assess the consistency of a measure from one time to another.
Equivalence: parallel-forms reliability - used to assess the consistency of the results of two tests constructed in the same way from the same content domain.
Internal consistency reliability - used to assess the consistency of results across items within a test.

What is validity?

According to the APA (5th edition), validity addresses the appropriateness, meaningfulness, and usefulness of the specific inferences made from an instrument's scores - or, the extent to which an instrument measures what it was intended to measure.

Validity in Measures

Instrument sensitivity (more options provide for greater sensitivity):
Yes / No; SA / A / DN / D / SD; open ended
Ounces vs. pounds
Visual analog scale
Dichotomous vs. visual analog

Validity in Qualitative Research

Researcher effects
Triangulation - combined use of two or more theories, methods, data sources, investigators, or analytical methods (may include corroborating evidence from different sources)
Weighing evidence (sifting)
Contrasts and comparisons (between previous findings and reviewers' analyses)
Ruling out spurious findings - those that are not genuine/true
Replicating
Checking rival explanations
Negative evidence
Feedback from informants

Review: threats to internal validity in quantitative research

Historical effects - events occur during the study that influence its outcome; control by random sampling to distribute effects across all groups.
Maturation effects - effects of the passage of time; control by matching subjects by age, or use ANCOVA to measure the effects of time.


Testing effects - subjects' reactions (to testing or treatment) that are due to the effect of being observed; control by using unobtrusive measures and placebos.
Instrumentation effects - influence on the outcome from the measurement itself, not the intervention; control by calibrating instruments and documenting reliability.
Placebo effects - subjects perform differently because they are aware they are in a study or as a reaction to being treated.
Multiple treatment effects
Mortality - subject attrition due to drop-outs, loss of contact, or death; control by projecting expected attrition, over-sampling, and carefully screening subjects.
Selection effects - subjects are assigned to groups in a way that does not distribute characteristics evenly across both groups; control by random selection, random assignment, matching subjects, or stratified samples.
John Henry effect - compensatory rivalry.
Hawthorne effect - subjects behave differently not because of the intervention but because they are in a study; affects both the treated subjects and the untreated ones.
Demoralization

Threats to external validity

Population validity - generalizability
Selection effects - randomization
Time and historical effects
Novelty effects
Experimenter effects

Cause and Effect

Changes in the presumed cause MUST be related to the changes in the presumed effect
The presumed cause MUST occur before the presumed effect
There are no other plausible alternative explanations

Statistically minimizing threats to internal validity

Test null hypotheses
Determine the probability of Type I and Type II error and your willingness to make a mistake
Calculate and report effect size (see the sketch below)
Ensure data meet the fundamental assumptions of the statistical tests
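One common effect-size measure is Cohen's d; this sketch uses two hypothetical groups and the conventional pooled-standard-deviation formula.

    # Cohen's d for two hypothetical group means.
    import math
    import statistics

    group_a = [78, 82, 75, 80, 85, 79]
    group_b = [70, 72, 68, 74, 71, 69]

    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = math.sqrt(((n_a - 1) * statistics.variance(group_a) +
                           (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2))

    d = (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
    print(f"Cohen's d = {d:.2f}")   # roughly: 0.2 small, 0.5 medium, 0.8 large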

What is important?

If your methods are flawed your results are not trustworthy.


Validity and reliability are the foundation of your study and can be ensured by:
Eliminating threats - i.e., bias
Controlling for the threat - i.e., design
Accounting for the threat - i.e., discussing the limitation

UNSTRUCTURED INTERVIEW
An interview in which the researcher asks respondents questions without having a predetermined plan regarding the content or flow of information to be gathered.

UNSTRUCTURED OBSERVATION
The collection of descriptive data through direct observation that is not guided by a formal, pre-specified plan for observing or recording the information.

SEMISTRUCTURED INTERVIEW
An open-ended interview in which the researcher is guided by a list of specific topics to cover.

SCIENTIFIC METHOD
A set of orderly, systematic, controlled procedures for acquiring dependable, empirical—and typically quantitative—information; the methodologic approach associated with the positivist paradigm.

RETROSPECTIVE DESIGN
A study design that begins with the manifestation of the outcome variable in the present (e.g., lung cancer), followed by a search for a presumed cause occurring in the past (e.g., cigarette smoking).

STRUCTURED DATA COLLECTION
An approach to collecting data from participants, either through self-report or observations, in which categories of information (e.g., response options) are specified in advance.

PROPOSAL
A document communicating a research problem, proposed procedures for solving the problem, and, when funding is sought, how much the study will cost.

PROSPECTIVE DESIGN
A study design that begins with an examination of a presumed cause (e.g., cigarette smoking) and then goes forward in time to observe presumed effects (e.g., lung cancer); also called a cohort design.

POSITIVELY SKEWED DISTRIBUTION
An asymmetric distribution of values with a disproportionately high number of cases at the lower end; when displayed graphically, the tail points to the right.


PRETEST–POSTTEST DESIGN
An experimental design in which data are collected from research subjects both before and after introducing an intervention; also called a before-after design.

PRIMARY SOURCE
First-hand reports of facts or findings; in research, the original report prepared by the investigator who conducted the study.

OBJECTIVITY
The extent to which two independent researchers would arrive at similar judgments or conclusions (i.e., judgments not biased by personal values or beliefs).

CODING
The process of transforming raw data into standardized form for data processing and analysis; in quantitative research, the process of attaching numbers to categories; in qualitative research, the process of identifying recurring words, themes, or concepts within the data.
