
Nurhayati, M.Pd
Indraprasta University Jakarta

Validity: Does it measure what it is supposed to measure?

Reliability: How representative is the measurement?

Practicality: Is it easy to construct, administer, score and interpret?

Objectivity: Do independent scorers agree?

Validity refers to whether or not a test measures what it intends to measure.

A test with high validity has items closely linked to the test's intended focus. A test with poor validity does not measure the content and competencies it ought to.

Content: related to objectives and their sampling.

Construct: referring to the theory underlying the target.

Criterion: related to concrete criteria in the real world. It can be concurrent or predictive.

Face: related to the test's overall appearance.

Content validity refers to the connection between the test items and the subject-related tasks. The test should evaluate only the content related to the field of study, in a manner that is sufficiently representative, relevant, and comprehensible.

It implies using the construct (concept, idea, notion) in accordance with the state of the art in the field: construct validity seeks agreement between up-to-date subject-matter theories and the specific measuring components of the test.

Example: a test of intelligence nowadays must include measures of multiple intelligences, rather than just measures of logical-mathematical and linguistic ability.

Criterion validity is also referred to as instrumental validity. It is used to demonstrate the accuracy of a measure or procedure by comparing it with another process or method that has been demonstrated to be valid.

Example: imagine a hands-on driving test that has been proven to be an accurate test of driving skill. A written test can then be validated by using a criterion-related strategy in which it is compared to the hands-on driving test.

Concurrent validity uses statistical methods of correlation with other measures.

Examinees who are known to be either masters or non-masters of the content measured are identified before the test is administered. Once the tests have been scored, the relationship between the examinees' status as masters or non-masters and their performance on the test (pass or fail) is estimated.
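
A minimal sketch of this idea, with entirely hypothetical examinees: known master/non-master status and pass/fail results are coded as 1/0 and their relationship is estimated with a simple correlation (the phi coefficient for two binary variables).

```python
import numpy as np

# Hypothetical data: 1 = master / pass, 0 = non-master / fail
known_status = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])  # identified before testing
test_result  = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 0])  # pass/fail on the new test

# Pearson correlation of two 0/1 variables is the phi coefficient
phi = np.corrcoef(known_status, test_result)[0, 1]
print(f"concurrent validity estimate: phi = {phi:.2f}")
```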

Predictive validity estimates the relationship of test scores to an examinee's future performance as a master or non-master. It considers the question, "How well does the test predict examinees' future status as masters or non-masters?"

For this type of validity, the correlation that is computed is based on the test results and the examinees' later performance. This type of validity is especially useful for test purposes such as selection or admissions.
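
As a rough illustration (the scores below are made up, not from the slides), predictive validity can be estimated by correlating admission-test scores with the same examinees' performance measured later.

```python
from scipy.stats import pearsonr

admission_scores  = [55, 62, 70, 48, 81, 74, 59, 66]  # scores at selection time
later_performance = [60, 65, 72, 50, 85, 70, 58, 69]  # the same examinees, later

r, p = pearsonr(admission_scores, later_performance)
print(f"predictive validity estimate: r = {r:.2f}")
```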

Face validity is determined by a review of the items and not through the use of statistical analyses. Unlike content validity, face validity is not investigated through formal procedures. Instead, anyone who looks over the test, including examinees, may develop an informal opinion as to whether or not the test is measuring what it is supposed to measure.

Reliability is the extent to which an experiment, test, or any measuring procedure shows the same result on repeated trials.

Equivalency: related to the co-occurrence of two items.

Stability: related to time consistency.

Internal: related to the instruments.

Interrater: related to the examiners' criteria.

Equivalency reliability is the extent to which two items measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association.
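
A small sketch of this procedure, using made-up scores: the same examinees take two parallel forms of a test, and the two sets of scores are correlated.

```python
from scipy.stats import pearsonr

# Hypothetical scores of the same eight examinees on two parallel forms
form_a = [72, 65, 88, 54, 91, 77, 60, 83]
form_b = [70, 68, 85, 57, 93, 74, 63, 80]

r, p = pearsonr(form_a, form_b)
print(f"equivalency reliability estimate: r = {r:.2f}")
```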

Stability reliability (sometimes called retest reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date. The results are compared and correlated with the initial test to give a measure of stability. Instruments with high stability reliability are thermometers, compasses, and measuring cups.
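
A minimal test-retest sketch with invented scores: the same test is repeated with the same subjects at a later date and the two administrations are correlated.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from the same eight subjects on two occasions
first_testing  = [34, 28, 41, 22, 37, 30, 45, 26]
second_testing = [33, 30, 40, 24, 36, 31, 44, 27]

print(f"stability (test-retest) estimate: r = {correlation(first_testing, second_testing):.2f}")
```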

Internal consistency is the extent to which tests or procedures assess the same characteristic, skill, or quality. It is a measure of the precision between the measuring instruments used in a study. This type of reliability often helps researchers interpret data and predict the value of scores and the limits of the relationship among variables.
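
One common internal-consistency statistic (not named on the slide) is Cronbach's alpha; the sketch below computes it for a small made-up item-score matrix, assuming rows are examinees and columns are items.

```python
import numpy as np

# Hypothetical item scores: 5 examinees (rows) x 4 items (columns)
scores = np.array([
    [4, 5, 4, 3],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # number of items
item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```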

Interrater reliability is the extent to which two or more individuals agree. Example: two or more teachers use a rating scale to rate students' oral responses in an interview (1 being most negative, 5 being most positive). If one rater gives a "1" to a student response while another rater gives a "5", the interrater reliability would obviously be inconsistent.
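
As a rough sketch of how that agreement might be checked, the hypothetical ratings below compare two teachers scoring the same ten oral responses on the 1-5 scale, using the exact-agreement rate and the correlation between raters as two simple indicators.

```python
import numpy as np

# Hypothetical ratings of the same ten responses by two teachers (1-5 scale)
teacher_a = np.array([5, 4, 3, 5, 2, 4, 1, 3, 4, 5])
teacher_b = np.array([5, 4, 2, 5, 2, 3, 1, 3, 4, 4])

agreement = (teacher_a == teacher_b).mean()          # proportion of identical ratings
r = np.corrcoef(teacher_a, teacher_b)[0, 1]          # correlation between the raters
print(f"exact agreement: {agreement:.0%}, correlation: {r:.2f}")
```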

Validity and reliability are closely related. A test cannot be considered valid unless the measurements resulting from it are reliable. Likewise, results from a test can be reliable but not necessarily valid.

The backwash (also known as washback) effect relates to the potentially positive and negative effects of test design and content on the form and content of English language training courseware.
