
Introduction to Psychology

In the name of Allah, the Most Merciful and Beneficent


Submitted to: Ms. Sadia Adan
Submitted by: Malik Allah Razi
Program: BBA (Hons)
Section: B
Batch: 36
I.D.: 073605-087


Scientific Method of Psychology

Topics:
Validity and its Types
Reliability and its Types
Reliability & Validity
Generalization
Hypernym and Hyponym
Experiment and its Types
Independent & Dependent Variables
Confounding Variable
Correlation


Validity: Validity is the strength of our conclusions, inferences or propositions. More formally, it is defined as the "best available approximation to the truth or falsity of a given inference, proposition or conclusion."

Example:Let's look at a simple example. Say we are studying the effect of strict attendance policies on class participation. In our case, we saw that class participation did increase after the policy was established. Each type of validity would highlight a different aspect of the relationship between our treatment (strict attendance policy) and our observed outcome (increased class participation).

Types of Validity:There are four types of validity commonly examined in social research.

1. Conclusion validity: It asks whether there is a relationship between the program and the observed outcome. Or, in our example, is there a connection between the attendance policy and the increased participation we saw?

2. Internal Validity: It asks: if there is a relationship between the program and the outcome we saw, is it a causal relationship? For example, did the attendance policy cause class participation to increase?

3. Construct validity: It is the hardest to understand, in my opinion. It asks whether there is a relationship between how I operationalized my concepts in this study and the actual causal relationship I'm trying to study. Or, in our example, did our treatment (attendance policy) reflect the construct of attendance, and did our measured outcome - increased class participation - reflect the construct of participation? Overall, we are trying to generalize our conceptualized treatment and outcomes to broader constructs of the same concepts.

4. External validity: It refers to our ability to generalize the results of our study to other settings. In our example, could we generalize our results to other classrooms?

Threats to Internal Validity: Confounding is a major threat to the validity of inferences made about cause and effect, i.e. internal validity. There are three main types of threats to internal validity - single group, multiple group and social interaction threats.

Single Group Threats apply when you are studying a single group receiving a program or treatment. Thus, all of these threats can be greatly reduced by adding a control group that is comparable to your program group to your study. A Testing Threat to internal validity is simply when the act of taking a pretest affects how that group does on the post-test. For example, if in your study of class participation you measured class participation prior to implementing your new attendance policy, and students became forewarned that there was about to be an emphasis on participation, they may increase it simply as a result of involvement in the pretest measure - and thus, your outcome could be a result of a testing threat, not your treatment.

Multiple Group Threats to internal validity involve the comparability of the two groups in your study, and whether or not any factor other than your treatment causes the outcome. They also (conveniently) mirror the single group threats to internal validity.


Reliability:Reliability is the consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same condition with the same subjects.

Explanation:In short, it is the repeatability of your measurement. A measure is considered reliable if a person's score on the same test given twice is similar. It is important to remember that reliability is not measured, it is estimated. There are two ways that reliability is usually estimated: test/retest and internal consistency.

Test/Retest: Test/retest is the more conservative method to estimate reliability. Simply put, the idea behind test/retest is that you should get the same score on test 1 as you do on test 2. The three main components of this method are as follows:
1. Implement your measurement instrument at two separate times for each subject.
2. Compute the correlation between the two separate measurements.
3. Assume there is no change in the underlying condition (or trait you are trying to measure) between test 1 and test 2.
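As a minimal sketch of step 2, assuming hypothetical scores for the same ten subjects measured on two occasions, the test/retest reliability estimate is simply the correlation between the two administrations:

```python
import numpy as np

# Hypothetical scores for the same 10 subjects on two administrations of one test
test1 = [12, 15, 9, 18, 11, 14, 16, 10, 13, 17]
test2 = [13, 14, 10, 17, 12, 15, 15, 9, 14, 18]

# Test/retest reliability estimate = correlation between the two measurements
reliability = np.corrcoef(test1, test2)[0, 1]
print(f"test/retest reliability estimate: {reliability:.2f}")
```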

Internal Consistency:Internal consistency estimates reliability by grouping questions in a questionnaire that measure the same concept. For example, you could write two sets of three questions that measure the same concept (say class participation) and after collecting the responses, run a correlation between those two groups of three questions to determine if your instrument is reliably measuring that concept.


The primary difference between test/retest and internal consistency estimates of reliability is that test/retest involves two administrations of the measurement instrument, whereas the internal consistency method involves only one administration of that instrument.

Split-Half Reliability: In split-half reliability we randomly divide all items that purport to measure the same construct into two sets. We administer the entire instrument to a sample of people and calculate the total score for each randomly divided half. The split-half reliability estimate is simply the correlation between these two total scores.
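A sketch of that procedure with simulated data (the item scores, sample size, and noise level below are all assumptions, and the Spearman-Brown correction at the end is a standard adjustment not mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated item scores: 30 respondents answering 10 items that all reflect one trait
trait = rng.normal(size=(30, 1))
scores = trait + rng.normal(scale=0.8, size=(30, 10))

# Randomly divide the 10 items into two halves
items = rng.permutation(10)
half_a, half_b = items[:5], items[5:]

# Total score on each half for every respondent, then correlate the totals
total_a = scores[:, half_a].sum(axis=1)
total_b = scores[:, half_b].sum(axis=1)
r_half = np.corrcoef(total_a, total_b)[0, 1]

# Spearman-Brown correction estimates the reliability of the full-length instrument
r_full = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, corrected full-test estimate = {r_full:.2f}")
```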

Example: The NIMH Diagnostic Interview Schedule for Children, a highly structured interview covering a broad range of clinically relevant symptoms and behaviors, was administered to 242 disturbed children and their parents. Parent and child were interviewed separately and were assessed twice at a median interval of 9 days. Interclass correlations between symptom scores derived from the interviews indicated that parents were generally more reliable than children in reporting child symptoms. However, test-retest reliabilities showed an opposite age pattern for parent and child. The reliability of the child's report increased with age and was lower for children aged 6-9 than those aged 10-13 and 14-18. Conversely, the reliability of the parent's report decreased with the age of the child and was slightly higher for children aged 6-9 than those aged 10-13 and 14-18. These findings were interpreted in terms of children's cognitive development and age-related shifts in parents' perceptions and awareness of their children's behavior.

Types of Reliability: It's not possible to calculate reliability exactly. Instead, we have to estimate reliability, and this is always an imperfect endeavor. Here, I want to introduce the major reliability estimators and talk about their strengths and weaknesses. There are four general classes of reliability estimates, each of which estimates reliability in a different way. They are:

Inter-Rater or Inter-Observer Reliability:Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon.

Test-Retest Reliability:Used to assess the consistency of a measure from one time to another.

Parallel-Forms Reliability:Used to assess the consistency of the results of two tests constructed in the same way from the same content domain.

Internal Consistency Reliability:Used to assess the consistency of results across items within a test.


Explanation: Inter-Rater or Inter-Observer Reliability:People are notorious for their inconsistency. We are easily distractible. We misinterpret. So how do we determine whether two observers are being consistent in their observations? You probably should establish inter-rater reliability outside of the context of the measurement in your study. There are two major ways to actually estimate inter-rater reliability. If your measurement consists of categories -- the raters are checking off which category each observation falls in -- you can calculate the percent of agreement between the raters. The other major way to estimate inter-rater reliability is appropriate when the measure is a continuous one. There, all you need to do is calculate the correlation between the ratings of the two observers. For instance, they might be rating the overall level of activity in a classroom on a 1-to-7 scale. You could have them give their rating at regular time intervals (e.g., every 30 seconds). The correlation between these ratings would give you an estimate of the reliability or consistency between the raters.
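A sketch of the two estimates just described, using made-up ratings: percent agreement when the measure is categorical, and a correlation when two observers rate classroom activity on a 1-to-7 scale at regular intervals:

```python
import numpy as np

# Categorical measure: two raters assign each of 10 observations to a category
rater1 = ["on-task", "off-task", "on-task", "on-task", "off-task",
          "on-task", "on-task", "off-task", "on-task", "on-task"]
rater2 = ["on-task", "off-task", "on-task", "off-task", "off-task",
          "on-task", "on-task", "on-task", "on-task", "on-task"]
agreement = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
print(f"percent agreement: {agreement:.0%}")

# Continuous measure: activity level rated 1-7 every 30 seconds by two observers
obs1 = [4, 5, 3, 6, 7, 2, 4, 5, 6, 3]
obs2 = [4, 6, 3, 5, 7, 3, 4, 5, 6, 2]
print(f"inter-observer correlation: {np.corrcoef(obs1, obs2)[0, 1]:.2f}")
```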

Test-Retest Reliability: We estimate test-retest reliability when we administer the same test to the same sample on two different occasions. This approach assumes that there is no substantial change in the construct being measured between the two occasions. The amount of time allowed between measures is critical. We know that if we measure the same thing twice, the correlation between the two observations will depend in part on how much time elapses between the two measurement occasions. The shorter the time gap, the higher the correlation; the longer the time gap, the lower the correlation. This is because the two observations are related over time -- the closer in time we get, the more similar the factors that contribute to error. Since this correlation is the test-retest estimate of reliability, you can obtain considerably different estimates depending on the interval.


Parallel-Forms Reliability:In parallel forms reliability you first have to create two parallel forms. One way to accomplish this is to create a large set of questions that address the same construct and then randomly divide the questions into two sets. You administer both instruments to the same sample of people. The correlation between the two parallel forms is the estimate of reliability.

Internal Consistency Reliability: In internal consistency reliability estimation we use our single measurement instrument, administered to a group of people on one occasion, to estimate reliability. In effect, we judge the reliability of the instrument by estimating how well the items that reflect the same construct yield similar results. We are looking at how consistent the results are for different items for the same construct within the measure.
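The passage above does not name a particular statistic; one widely used internal-consistency estimate is Cronbach's alpha, sketched here with simulated item responses (all numbers are assumptions):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items measuring one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
trait = rng.normal(size=(50, 1))                         # latent trait per respondent
responses = trait + rng.normal(scale=0.7, size=(50, 6))  # 6 items reflecting that trait
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```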


Reliability & Validity: We often think of reliability and validity as separate ideas but, in fact, they're related to each other. Here, I want to show you two ways you can think about their relationship. One of my favorite metaphors for the relationship between reliability and validity is that of the target. Think of the center of the target as the concept that you are trying to measure. Imagine that for each person you are measuring, you are taking a shot at the target. If you measure the concept perfectly for a person, you are hitting the center of the target. If you don't, you are missing the center. The more you are off for that person, the further you are from the center.

There are four possible situations. In the first one, you are hitting the target consistently, but you are missing the center of the target. That is, you are consistently and systematically measuring the wrong value for all respondents. This measure is reliable, but not valid (that is, it's consistent but wrong). The second shows hits that are randomly spread across the target. You seldom hit the center of the target but, on average, you are getting the right answer for the group (but not very well for individuals). In this case, you get a valid group estimate, but you are inconsistent. Here, you can clearly see that reliability is directly related to the variability of your measure. The third scenario shows a case where your hits are spread across the target and you are consistently missing the center. Your measure in this case is neither reliable nor valid. Finally, in the fourth scenario, you consistently hit the center of the target. Your measure is both reliable and valid.


Another way we can think about the relationship between reliability and validity is with a 2x2 table. The columns of the table indicate whether you are trying to measure the same or different concepts. The rows show whether you are using the same or different methods of measurement. Imagine that we have two concepts we would like to measure: student verbal and math ability. Furthermore, imagine that we can measure each of these in two ways. First, we can use a written, paper-and-pencil exam. Second, we can ask the student's classroom teacher to give us a rating of the student's ability based on their own classroom observation.

The first cell on the upper left shows the comparison of the verbal written test score with the verbal written test score. But how can we compare the same measure with itself? We could do this by estimating the reliability of the written test through a test-retest correlation, parallel forms, or an internal consistency measure. What we are estimating in this cell is the reliability of the measure. The cell on the lower left shows a comparison of the verbal written measure with the verbal teacher observation rating. Because we are trying to measure the same concept, we are looking at convergent validity. The cell on the upper right shows the comparison of the verbal written exam with the math written exam. Here, we are comparing two different concepts (verbal versus math) and so we would expect the relationship to be lower than a comparison of the same concept with itself (e.g., verbal versus verbal or math versus math). Thus, we are trying to discriminate between two concepts and we would consider this discriminant validity. Finally, we have the cell on the lower right. Here, we are comparing the verbal written exam with the math teacher observation rating. Like the cell on the upper right, we are also trying to compare two different concepts (verbal versus math) and so this is a discriminant validity estimate. But here, we are also trying to compare two different methods of measurement (written exam versus teacher observation rating). So, we'll call this very discriminant to indicate that we would expect the relationship in this cell to be even lower than in the one above it. When we look at reliability and validity in this way, we see that, rather than being distinct, they actually form a continuum. On one end is the situation where the concepts and methods of measurement are the same (reliability) and on the other is the situation where concepts and methods of measurement are different (very discriminant validity).


Generalization:The phenomenon of an organism's responding to all situations similar to one in which it has been conditioned.

Example:Over the course of the first year of life, infants develop from being generalized listeners, capable of discriminating both native and non-native speech contrasts, into specialized listeners whose discrimination patterns closely reflect the phonetic system of the native language(s). Recent work by Maye, Werker and Gerken (2002) has proposed a statistical account for this phenomenon, showing that infants may lose the ability to discriminate some foreign language contrasts on the basis of their sensitivity to the statistical distribution of sounds in the input language.

Explanation: Generalization is a foundational element of logic and human reasoning. Generalization posits the existence of a domain or set of elements, as well as one or more common characteristics shared by those elements. As such, it is the essential basis of all valid deductive inference. The process of verification is necessary to determine whether a generalization holds true for any given situation. The concept of generalization has broad application in many related disciplines, sometimes having a specialized context-specific meaning. For any two related concepts A and B, A is considered a generalization of concept B if and only if: every instance of concept B is also an instance of concept A; and there are instances of concept A which are not instances of concept B.

Example:For instance, animal is a generalization of bird because every bird is an animal, and there are animals which are not birds (dogs, for instance).
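Both conditions in the definition can be checked mechanically if each concept is represented by a toy, assumed set of its instances, as in this sketch:

```python
# Toy sets of instances standing in for the concepts "bird" and "animal"
bird = {"sparrow", "eagle", "penguin"}
animal = {"sparrow", "eagle", "penguin", "dog", "cat"}

def is_generalization(a, b):
    """True if concept A generalizes concept B: every instance of B is an
    instance of A, and A has at least one instance that B lacks."""
    return b <= a and bool(a - b)

print(is_generalization(animal, bird))   # True: animal generalizes bird
print(is_generalization(bird, animal))   # False: bird does not generalize animal
```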


Hypernym and Hyponym: This kind of generalization versus specialization (or particularization) is reflected in the contrasting words of the word pair hypernym and hyponym. A hypernym, as a generic, stands for a class or group of equally ranked items, such as tree does for beech and oak, or ship for cruiser and steamer. A hyponym, on the other hand, is one of the items included in the generic: lily and daisy are included in flower, and bird and fish in animal. A hypernym is superordinate to a hyponym, and a hyponym is subordinate to a hypernym.


Experiment: An experiment is a set of observations performed in the context of solving a particular problem or question, with the purpose of supporting or falsifying a hypothesis about the phenomena under study.

Explanation:The experiment is a cornerstone in the empirical approach to acquiring deeper knowledge about the physical world.

Types:In a controlled experiment, two virtually identical experiments are conducted. In one of them, the treatment, the factor being tested is applied. In the other, the control, the factor being tested is not applied.

Positive Control:A positive control is a procedure that is very similar to the actual experimental test but which is known from previous experience to give a positive result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produce a positive result.

Negative Control:A negative control is known to give a negative result. The negative control demonstrates the base-line result obtained when a test does not produce a measurable positive result; often the value of the negative control is treated as a "background" value to be subtracted from the test sample results.
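A sketch of how the two controls are used in practice, with an entirely made-up assay: the positive control confirms the procedure can produce a positive result at all, and the negative control supplies the background value subtracted from each test reading.

```python
# Hypothetical raw readings from a single assay run (all values invented)
readings = {
    "positive_control": 0.92,   # known from experience to respond; confirms the procedure works
    "negative_control": 0.08,   # known blank; defines the base-line "background" level
    "sample_A": 0.45,
    "sample_B": 0.10,
}

background = readings["negative_control"]
assert readings["positive_control"] - background > 0.5, "positive control failed: run is invalid"

# Report each test sample after subtracting the background value
for name in ("sample_A", "sample_B"):
    print(f"{name}: background-corrected signal = {readings[name] - background:.2f}")
```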

Example: For example, in testing a drug, it is important to carefully verify that the supposed effects of the drug are produced only by the drug itself. Doctors achieve this with a double-blind study in a clinical trial: two (statistically) identical groups of patients are compared, one of which receives the drug and one of which receives a placebo. Neither the patients nor the doctor knows which group receives the real drug, which serves both to curb researchers' bias and to isolate the effects of the drug.
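A sketch of the comparison behind such a trial, using simulated outcomes (the group sizes, effect size, and noise level are assumptions; the blinding itself is a procedural matter that code cannot show):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated symptom-improvement scores for two statistically identical groups
placebo = rng.normal(loc=2.0, scale=1.5, size=50)   # improvement without the drug
drug = rng.normal(loc=3.0, scale=1.5, size=50)      # assumes a true drug effect of +1.0

# Compare the two groups: is the difference larger than chance would explain?
t_stat, p_value = stats.ttest_ind(drug, placebo)
print(f"mean difference = {drug.mean() - placebo.mean():.2f}, p = {p_value:.3f}")
```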

Independent Variable: The independent variables are the experimental factors you manipulate: the treatment itself. In other words, they are the variables that are deliberately manipulated to invoke a change in the dependent variables.

Dependent Variable: The dependent variables are those that are observed to change in response to the independent variables. The dependent variable is also known as the response variable, the regressand, the measured variable, the responding variable, the explained variable, or the outcome variable. It is the behavior observed. In short, "if x is given, then y occurs", where x represents the independent variables and y represents the dependent variables. Dependent and independent variables refer to values that change in relationship to each other.

Examples:
1. If one were to measure the influence of different quantities of fertilizer on plant growth, the independent variable would be the amount of fertilizer used (the changing factor of the experiment). The dependent variables would be the growth in height and/or mass of the plant (the factors that are influenced in the experiment), and the controlled variables would be the type of plant, the type of fertilizer, the amount of sunlight the plant gets, the size of the pots, etc. (the factors that would otherwise influence the dependent variable if they were not controlled).
2. In a study of how different doses of a drug affect the severity of symptoms, a researcher could compare the frequency and intensity of symptoms (the dependent variables) when different doses (the independent variable) are administered, and attempt to draw a conclusion.
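A sketch of the fertilizer example with invented measurements: the manipulated fertilizer amount is the independent variable, the observed growth is the dependent variable, and a least-squares slope summarizes how the one responds to the other.

```python
import numpy as np

fertilizer_g = np.array([0, 5, 10, 15, 20, 25])        # independent variable (manipulated)
growth_cm = np.array([3.1, 4.0, 5.2, 5.9, 7.1, 7.8])   # dependent variable (observed)

# Fit a straight line: how much extra growth per gram of fertilizer?
slope, intercept = np.polyfit(fertilizer_g, growth_cm, 1)
print(f"growth ~= {intercept:.2f} cm + {slope:.2f} cm per gram of fertilizer")
```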

Confounding variable: A confounding variable is an extraneous variable in a statistical model that correlates (positively or negatively) with both the dependent variable and the independent variable. The methodologies of scientific studies therefore need to control for these factors to avoid what is known as a Type I error: a 'false positive' conclusion that the dependent variables are in a causal relationship with the independent variable. Such a relation between two observed variables is termed a spurious relationship. Thus, confounding is a major threat to the validity of inferences made about cause and effect, i.e. internal validity, as the observed effects may be attributable to the confounder rather than the independent variable.

Example:For example, assume that a child's weight and a country's gross domestic product (GDP) rise with time. A person carrying out an experiment could measure weight and GDP, and conclude that a higher GDP causes children to gain weight, or that children's weight gain boosts the GDP. However, the confounding variable, time, was not accounted for, and is the real cause of both rises.
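A sketch of the weight/GDP illustration with simulated yearly series (all numbers invented): both rise with time, so their raw correlation is high, but one simple way to control for the confounder is to correlate the residuals left after removing each series' time trend.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(2000, 2015)

# Both series trend upward with time (the confounder), plus independent noise
child_weight = 10 + 2.0 * (years - 2000) + rng.normal(scale=1.0, size=years.size)
gdp = 500 + 40.0 * (years - 2000) + rng.normal(scale=20.0, size=years.size)

print("raw correlation:", round(np.corrcoef(child_weight, gdp)[0, 1], 2))

# Control for time: correlate the residuals after removing each linear trend
resid_weight = child_weight - np.polyval(np.polyfit(years, child_weight, 1), years)
resid_gdp = gdp - np.polyval(np.polyfit(years, gdp, 1), years)
print("correlation after controlling for time:",
      round(np.corrcoef(resid_weight, resid_gdp)[0, 1], 2))
```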

Correlation: It is a statistical measure of relationship; it reveals how closely two things vary together and thus how well either one predicts the other.

Purpose: The correlation is a way to measure how associated or related two variables are. The researcher looks at things that already exist and determines if and in what way those things are related to each other. The purpose of doing correlations is to allow us to make a prediction about one variable based on what we know about another variable. For example, there is a correlation between income and education. We find that people with higher income have more years of education. (You can also phrase it that people with more years of education have higher income.) When we know there is a correlation between two variables, we can make a prediction. If we know a group's income, we can predict their years of education.

Direction:There are two types or directions of correlation. In other words, there are two patterns that correlations can follow. These are called positive correlation and negative correlation.

Positive correlation:In a positive correlation, as the values of one of the variables increase, the values of the second variable also increase. Likewise, as the value of one of the variables decreases, the value of the other variable also decreases.

Example:

Participant    Income ($)    Years of Education
#1             125,000       19
#2             100,000       20
#3              40,000       16
#4              35,000       16
#5              41,000       18
#6              29,000       12
#7              35,000       14
#8              24,000       12
#9              50,000       16
#10             60,000       17


The example above of income and education is a positive correlation. People with higher incomes also tend to have more years of education, and people with fewer years of education tend to have lower income. The table above shows some sample data: each person reported income and years of education. We can make a graph, called a scatter plot, in which each point represents one person's answers to questions about income and education, and the line through the points is the best fit to them. All positive correlations have a scatter plot like this: the best-fit line slopes upward whenever the correlation is positive.
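The correlation and the prediction described above can be computed directly from the ten responses in the table; here is a sketch (the $70,000 query value is just an arbitrary example):

```python
import numpy as np

income = np.array([125_000, 100_000, 40_000, 35_000, 41_000,
                   29_000, 35_000, 24_000, 50_000, 60_000])
education_years = np.array([19, 20, 16, 16, 18, 12, 14, 12, 16, 17])

# Positive correlation: both variables rise together
r = np.corrcoef(income, education_years)[0, 1]
print(f"correlation: r = {r:.2f}")

# The best-fit line lets us predict years of education from income
slope, intercept = np.polyfit(income, education_years, 1)
print(f"predicted years of education at $70,000: {slope * 70_000 + intercept:.1f}")
```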

Negative correlation:In a negative correlation, as the values of one of the variables increase, the values of the second variable decrease. Likewise, as the value of one of the variables decreases, the value of the other variable increases.


This is still a correlation. It is like an inverse correlation. The word negative is a label that shows the direction of the correlation.

Example: There is a negative correlation between TV viewing and class grades: students who spend more time watching TV tend to have lower grades (or, phrased the other way, students with higher grades tend to spend less time watching TV). The table below shows sample grades and TV-viewing data. Plotted as a scatter plot, the best-fit line through these points slopes downward; any negative correlation will have a line in that direction.

Participant    GPA    TV (hours per week)
#1             3.1    14
#2             2.4    10
#3             2.0    20
#4             3.8     7
#5             2.2    25
#6             3.4     9
#7             2.9    15
#8             3.2    13
#9             3.7     4
#10            3.5    21


Disadvantage:
1. The problem that most students have with the correlation method is remembering that correlation does not measure cause.
2. A correlation tells us that the two variables are related, but we cannot say anything about whether one caused the other. This method does not allow us to come to any conclusions about cause and effect.

Advantage: An advantage of the correlation method is that we can make predictions about things when we know about correlations. If two variables are correlated, we can predict one based on the other. For example, we know that SAT scores and college achievement are positively correlated. So when college admission officials want to predict who is likely to succeed at their schools, they will choose students with high SAT scores.