MB0050 Solved Assignment 2013

MB0050 (Research Methodology - DE)

ASSIGNMENT

Q1: Explain the steps involved in a research process.

Ans.:

Steps of the Research Process

Scientific research involves a systematic process that focuses on being objective and gathering a multitude of information for analysis so that the researcher can come to a conclusion. This process is used in all research and evaluation projects, regardless of the research method (scientific method of inquiry, evaluation research, or action research). The process focuses on testing hunches or ideas in a park and recreation setting through a systematic process. In this process, the study is documented in such a way that another individual can conduct the same study again; this is referred to as replicating the study. Any research done without documenting the study so that others can review the process and results is not an investigation using the scientific research process. The scientific research process is a multiple-step process in which the steps are interlinked. If changes are made in one step, the researcher must review all the other steps to ensure that the changes are reflected throughout the process. Parks and recreation professionals are often involved in conducting research or evaluation projects within the agency, so they need to understand the eight steps of the research process as they apply to conducting a study. Table 2.4 lists the steps of the research process and provides an example of each step for a sample research study.

Step 1: Identify the Problem

The first step in the process is to identify a problem or develop a research question. The research problem may be something the agency identifies as a problem, some knowledge or information that is needed by the agency, or the desire to identify a recreation trend nationally. In the example in table 2.4, the problem that the agency has identified is childhood obesity, which is a local problem and concern within the community. This serves as the focus of the study.

Step 2: Review the Literature

Now that the problem has been identified, the researcher must learn more about the topic under investigation. To do this, the researcher must review the literature related to the research problem. This step provides foundational knowledge about the problem area. The review of literature also educates the researcher about what studies have been conducted in the past, how these studies were conducted, and the conclusions in the problem area. In the obesity study, the review of literature enables the programmer to discover horrifying statistics related to the long-term effects of childhood obesity in terms of health issues, death rates, and projected medical costs. In addition, the programmer finds several articles and information from the Centers for Disease Control and Prevention that describe the benefits of walking 10,000 steps a day. The information discovered during this step helps the programmer fully understand the magnitude of the problem, recognize the future consequences of obesity, and identify a strategy to combat obesity (i.e., walking).

Step 3: Clarify the Problem

Many times the initial problem identified in the first step of the process is too large or broad in scope. In step 3 of the process, the researcher clarifies the problem and narrows the scope of the study. This can only be done after the literature has been reviewed. The knowledge gained through the review of literature guides the researcher in clarifying and narrowing the research project. In the example, the programmer has identified childhood obesity as the problem and the purpose of the study. This topic is very broad and could be studied based on genetics, family environment, diet, exercise, self-confidence, leisure activities, or health issues. All of these areas cannot be investigated in a single study; therefore, the problem and purpose of the study must be more clearly defined. The programmer has decided that the purpose of the study is to determine if walking 10,000 steps a day for three days a week will improve the individual’s health. This purpose is more narrowly focused and researchable than the original problem.

Step 4: Clearly Define Terms and Concepts

Terms and concepts are words or phrases used in the purpose statement of the study or the description of the study. These items need to be specifically defined as they apply to the study. Terms or concepts often have different definitions depending on who is reading the study. To minimize confusion about what the terms and phrases mean, the researcher must specifically define them for the study. In the obesity study, the concept of “individual’s health” can be defined in hundreds of ways, such as physical, mental, emotional, or spiritual health. For this study, the individual’s health is defined as physical health. The concept of physical health may also be defined and measured in many ways. In this case, the programmer decides to more narrowly define “individual health” to refer to the areas of weight, percentage of body fat, and cholesterol. By defining the terms or concepts more narrowly, the scope of the study is more manageable for the programmer, making it easier to collect the necessary data for the study. This also makes the concepts more understandable to the reader.

Step 5: Define the Population

Research projects can focus on a specific group of people, facilities, park development, employee evaluations, programs, financial status, marketing efforts, or the integration of technology into the operations. For example, if a researcher wants to examine a specific group of people in the community, the study could examine a specific age group, males or females, people living in a specific geographic area, or a specific ethnic group. Literally thousands of options are available to the researcher to specifically identify the group to study. The research problem and the purpose of the study assist the researcher in identifying the group to involve in the study. In research terms, the group to involve in the study is always called the population. Defining the population assists the researcher in several ways. First, it narrows the scope of the study from a very large population to one that is manageable. Second, the population identifies the group that the researcher’s efforts will be focused on within the study. This helps ensure that the researcher stays on the right path during the study. Finally, by defining the population, the researcher identifies the group that the results will apply to at the conclusion of the study. In the example in table 2.4, the programmer has identified the population of the study as children ages 10 to 12 years. This narrower population makes the study more manageable in terms of time and resources.

Step 6: Develop the Instrumentation Plan

The plan for the study is referred to as the instrumentation plan. The instrumentation plan serves as the road map for the entire study, specifying who will participate in the study; how, when, and where data will be collected; and the content of the program. This plan is composed of numerous decisions and considerations that are addressed in chapter 8 of this text. In the obesity study, the researcher has decided to have the children participate in a walking program for six months. The group of participants is called the sample, which is a smaller group selected from the population specified for the study. The study cannot possibly include every 10- to 12-year-old child in the community, so a smaller group is used to represent the population. The researcher develops the plan for the walking program, indicating what data will be collected, when and how the data will be collected, who will collect the data, and how the data will be analyzed. The instrumentation plan specifies all the steps that must be completed for the study. This ensures that the programmer has carefully thought through all these decisions and that she provides a step-by-step plan to be followed in the study.
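The idea of drawing a smaller sample from the defined population can be sketched as follows. This is a minimal illustration, not part of the original text; the roster names and sizes are hypothetical, and a simple random sample is used only as one of several possible selection methods:

```python
import random

def draw_sample(population, sample_size, seed=None):
    """Draw a simple random sample (without replacement) from a population."""
    rng = random.Random(seed)  # seeded for a reproducible selection
    return rng.sample(population, sample_size)

# Hypothetical roster of eligible 10- to 12-year-old children in the community.
population = [f"child_{i}" for i in range(1, 501)]   # 500 eligible children
sample = draw_sample(population, 30, seed=42)        # study sample of 30

print(len(sample))       # 30
print(len(set(sample)))  # 30 (no duplicates: sampling is without replacement)
```

Because the sample is drawn without replacement, no child can be selected twice, and the seed makes the selection repeatable, which supports the replication requirement discussed earlier.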

Step 7: Collect Data

Once the instrumentation plan is completed, the actual study begins with the collection of data. The collection of data is a critical step in providing the information needed to answer the research question. Every study includes the collection of some type of data—whether it is from the literature or from subjects—to answer the research question. Data can be collected in the form of words on a survey, with a questionnaire, through observations, or from the literature. In the obesity study, the programmers will be collecting data on the defined variables: weight, percentage of body fat, cholesterol levels, and the number of days the person walked a total of 10,000 steps during the class.

The researcher collects these data at the first session and at the last session of the program. These two sets of data are necessary to determine the effect of the walking program on weight, body fat, and cholesterol level. Once the data are collected on the variables, the researcher is ready to move to the final step of the process, which is the data analysis.

Step 8: Analyze the Data

All the time, effort, and resources dedicated to steps 1 through 7 of the research process culminate in this final step. The researcher finally has data to analyze so that the research question can be answered. In the instrumentation plan, the researcher specified how the data will be analyzed. The researcher now analyzes the data according to the plan. The results of this analysis are then reviewed and summarized in a manner directly related to the research questions. In the obesity study, the researcher compares the measurements of weight, percentage of body fat, and cholesterol that were taken at the first meeting of the subjects to the measurements of the same variables at the final program session. These two sets of data will be analyzed to determine if there was a difference between the first measurement and the second measurement for each individual in the program. Then, the data will be analyzed to determine if the differences are statistically significant. If the differences are statistically significant, the study validates the theory that was the focus of the study. The results of the study also provide valuable information about one strategy to combat childhood obesity in the community.
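The pre/post comparison described above is commonly tested with a paired t-test on each subject's first and last measurements. A minimal sketch using only the Python standard library; the weights are hypothetical illustrative data, and a full analysis would also compare the t statistic against a critical value for the chosen significance level:

```python
import math
import statistics

def paired_t_statistic(before, after):
    """Paired t-test statistic for pre/post measurements on the same subjects."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of differences
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical weights (kg) at the first and last program sessions.
weight_before = [62.0, 58.5, 71.2, 66.8, 60.3, 69.9, 64.1, 57.7]
weight_after  = [60.1, 57.0, 68.9, 65.0, 59.1, 67.5, 62.8, 56.4]

t = paired_t_statistic(weight_before, weight_after)
print(round(t, 2))  # a large positive t suggests a real pre-to-post decrease
```

The same calculation would be repeated for percentage of body fat and cholesterol, as each variable is analyzed separately.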

As you have probably concluded, conducting studies using the eight steps of the scientific research process requires you to dedicate time and effort to the planning process. You cannot conduct a study using the scientific research process when time is limited or the study is done at the last minute. Researchers who do this conduct studies that result in either false conclusions or conclusions that are not of any value to the organization.

This is an excerpt from Applied Research and Evaluation Methods in Recreation.

Q2: What are descriptive research designs? Explain the different kinds of descriptive research designs.

Ans:

The main goal of this type of research is to describe the data and characteristics of what is being studied. The idea behind this type of research is to study frequencies, averages, and other statistical calculations. Although this research is highly accurate, it does not gather the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a specific topic. Descriptive research is the exploration of existing phenomena whose details and facts are not yet known to the researcher.
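The frequencies and averages mentioned above are straightforward to compute. A minimal sketch with hypothetical survey responses (the ages and activity labels are invented for illustration):

```python
import statistics
from collections import Counter

# Hypothetical data from a descriptive study: respondent ages and
# each respondent's stated favourite recreation activity.
ages = [10, 11, 11, 12, 10, 11, 12, 12, 11, 10]
activities = ["walking", "cycling", "walking", "swimming", "walking",
              "cycling", "walking", "swimming", "walking", "cycling"]

print(statistics.mean(ages))              # average age: 11
print(statistics.median(ages))            # median age: 11
print(Counter(activities).most_common())  # frequency of each activity
```

Note that these summaries describe the sample only; as the text says, they do not by themselves explain the causes behind the pattern they reveal.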

The Descriptive Method of Research

Shane Hall is a writer and research analyst with more than 20 years of experience. His work has appeared in "Brookings Papers on Education Policy," "Population and Development" and various Texas newspapers. Hall has a Doctor of Philosophy in political economy and is a former college instructor of economics and political science.

The descriptive method of research design helps researchers plan and carry out descriptive studies, designed to provide rich descriptive details about people, places and other phenomena. This type of research is often associated with anthropology, sociology and psychology, but researchers in other fields, such as education, use it. The descriptive method often involves extensive observation and note-taking, as well as in-depth narrative. Because it does not lend itself to in-depth analysis or hypothesis testing, some researchers regard the descriptive method as unscientific. However, a descriptive research design can serve as a first step that identifies important factors, laying a foundation for more-rigorous research.

Preparing a Descriptive Research Study

Identify the subject or phenomenon you wish to study and make sure it is appropriate for a descriptive design. Descriptive research design aims to observe and describe a subject without affecting its normal actions. Examples include an anthropologist who wants to study members of a tribe in another culture without affecting their normal behavior or an education researcher who wants to describe a new instructional method for teaching math to students at risk of dropping out.

Decide on the type of descriptive research design that will be most appropriate for your study. The most basic type is the single-subject case study, an in-depth narrative that contains extensive details and description of the subject observed. Another type of descriptive design is a comparative study, in which a researcher describes two or more sets of subjects. For example, an education researcher may want to study the implementation of a new instructional method by comparing its use in three different classrooms or three separate campuses.

Articulate your key research questions. For example, you may want to describe how an instructional method's delivery differs across campuses or how members of a particular tribe interact with one another. Knowing the research questions you want to answer will help focus your observation and other field research. Without a clear research question, descriptive research runs the risk of becoming unfocused, with the researcher taking notes or making recordings of everything being observed without knowing how to use the information collected.

Design any data collection instruments you will use, keeping in mind your key research questions. You may design observation forms, interview forms or questionnaires, depending on the type of information you will collect for your study. Then conduct your field research, using your data collection instruments.

Conducting Descriptive Research

Read over your field notes, interview transcripts, survey information and other data collected for your study. Keep your research questions in mind as you read the material, looking for patterns and trends in the material collected. Some researchers organize their field research by using index cards organized by specific themes or patterns that emerge in the analysis.

Condense the data by drafting memos or summaries that organize extensive field notes and other information into a more concise form, which will help you write your report. Include specific examples of incidents or events you observed that help illustrate important points you want to make.

Write a report that states your research questions, describes the methods used to collect and analyze the data and reports the findings. Write in clear, concise language that conveys rich detail while keeping scientific jargon to a minimum. In the conclusions of your report, identify areas and issues for future, more rigorous scientific research.

Q3: Explain the concepts of reliability, validity and sensitivity.

Ans :

Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects and policies, particularly about their effectiveness and efficiency. In both the public and private sectors, stakeholders will want to know if the programs they are funding, implementing, voting for, receiving or objecting to are actually having the intended effect (and at what cost). This definition focuses on the question of whether the program, policy or project has, as indicated, the intended effect. However, equally important are questions such as how the program could be improved, whether the program is worthwhile, whether there are better alternatives, whether there are unintended outcomes, and whether the program goals are appropriate and useful. Evaluators help to answer these questions, but the best way to answer them is for the evaluation to be a joint project between evaluators and stakeholders.

The process of evaluation is considered to be a relatively recent phenomenon. However, planned social evaluation has been documented as dating as far back as 2200 BC (Shadish, Cook & Leviton, 1991). Evaluation became particularly relevant in the U.S. in the 1960s during the period of the Great Society social programs associated with the Kennedy and Johnson administrations. Extraordinary sums were invested in social programs, but the impacts of these investments were largely unknown.

Program evaluations can involve both quantitative and qualitative methods of social research. People who do program evaluation come from many different backgrounds, such as sociology, psychology, economics, and social work. Some graduate schools also have specific training programs for program evaluation.

Reliability, validity and sensitivity in program evaluation

It is important to ensure that the instruments (for example, tests, questionnaires, etc.) used in program evaluation are as reliable, valid and sensitive as possible. According to Rossi et al. (2004, p. 222), 'a measure that is poorly chosen or poorly conceived can completely undermine the worth of an impact assessment by producing misleading estimates. Only if outcome measures are valid, reliable and appropriately sensitive can impact assessments be regarded as credible'.

Reliability

The reliability of a measurement instrument is the 'extent to which the measure produces the same results when used repeatedly to measure the same thing' (Rossi et al., 2004, p. 218). The more reliable a measure is, the greater its statistical power and the more credible its findings. If a measuring instrument is unreliable, it may dilute and obscure the real effects of a program, and the program will 'appear to be less effective than it actually is' (Rossi et al., 2004, p. 219). Hence, it is important to ensure the evaluation is as reliable as possible.
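One common way to quantify this kind of reliability is test–retest reliability: the correlation between two administrations of the same instrument to the same subjects. A minimal standard-library sketch, with hypothetical scores (a real study would also consider the time interval and other reliability coefficients):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two paired lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from administering the same test twice, two weeks apart.
first_administration  = [14, 18, 22, 9, 16, 20, 12, 17]
second_administration = [15, 17, 21, 10, 15, 21, 11, 18]

r = pearson_r(first_administration, second_administration)
print(round(r, 3))  # values near 1.0 indicate high test-retest reliability
```

A low correlation here would signal exactly the problem the text describes: an unreliable instrument whose noise can mask the real effects of the program.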

Validity

The validity of a measurement instrument is 'the extent to which it measures what it is intended to measure' (Rossi et al., 2004, p. 219). This concept can be difficult to accurately measure: in general use in evaluations, an instrument may be deemed valid if accepted as valid by the stakeholders (stakeholders may include, for example, funders, program administrators, et cetera).

Sensitivity

The principal purpose of the evaluation process is to measure whether the program has an effect on the social problem it seeks to redress; hence, the measurement instrument must be sensitive enough to discern these potential changes (Rossi et al., 2004). A measurement instrument may be insensitive if it contains items measuring outcomes which the program could not possibly affect, or if the instrument was originally developed for application to individuals (for example, standardized psychological measures) rather than to a group setting (Rossi et al., 2004). These factors may result in 'noise' which may obscure any effect the program may have had.

Only measures which adequately achieve the benchmarks of reliability, validity and sensitivity can be said to be credible evaluations. It is the duty of evaluators to produce credible evaluations, as their findings may have far-reaching effects. A non-credible evaluation that is unable to show that a program is achieving its purpose, when it is in fact creating positive change, may cause the program to lose its funding undeservedly.

Steps to Program Evaluation Framework

According to the Centers for Disease Control and Prevention (CDC), there are six steps to a complete program evaluation. The steps described are: engage stakeholders, describe the program, focus the evaluation design, gather credible evidence, justify conclusions, and ensure use and share lessons learned. These steps can happen in a cycle framework to represent the continuing process of evaluation.

Methodological constraints and challenges

The shoestring approach

The “shoestring evaluation approach” is designed to assist evaluators operating under limited budget, limited access or availability of data, and limited turnaround time, to conduct effective evaluations that are methodologically rigorous (Bamberger, Rugh, Church & Fort, 2004). This approach has responded to the continued greater need for evaluation processes that are more rapid and economical under difficult circumstances of budget, time constraints and limited availability of data. However, it is not always possible to design an evaluation to achieve the highest standards available. Many programs do not build an evaluation procedure into their design or budget. Hence, many evaluation processes do not begin until the program is already underway, which can result in time, budget or data constraints for the evaluators, which in turn can affect the reliability, validity or sensitivity of the evaluation. The shoestring approach helps to ensure that the maximum possible methodological rigor is achieved under these constraints.

Budget constraints

Frequently, programs are faced with budget constraints because most original projects do not include a budget to conduct an evaluation (Bamberger et al., 2004). This automatically results in evaluations being allocated smaller budgets that are inadequate for a rigorous evaluation. Due to the budget constraints it might be difficult to effectively apply the most appropriate methodological instruments. These constraints may consequently affect the time available in which to do the evaluation (Bamberger et al., 2004). Budget constraints may be addressed by simplifying the evaluation design, revising the sample size, exploring economical data collection methods (such as using volunteers to collect data, shortening surveys, or using focus groups and key informants) or looking for reliable secondary data (Bamberger et al., 2004).

Time constraints

The most common time constraint faced by an evaluator arises when the evaluator is summoned to conduct an evaluation when a project is already underway, when they are given limited time to do the evaluation compared to the life of the study, or when they are not given enough time for adequate planning. Time constraints are particularly problematic when the evaluator is not familiar with the area or country in which the program is situated (Bamberger et al., 2004). Time constraints can be addressed by the methods listed under budget constraints above, and also by careful planning to ensure effective data collection and analysis within the limited time available.

Data constraints

If the evaluation is initiated late in the program, there may be no baseline data on the conditions of the target group before the intervention began (Bamberger et al., 2004). Another possible cause of data constraints is that the data have been collected by program staff and contain systematic reporting biases or poor record-keeping standards, and are subsequently of little use (Bamberger et al., 2004). A further source of data constraints may arise if the target group is difficult to reach to collect data from - for example, homeless people, drug addicts, migrant workers, et cetera (Bamberger et al., 2004). Data constraints can be addressed by reconstructing baseline data from secondary data or through the use of multiple methods. Multiple methods, such as the combination of qualitative and quantitative data, can increase validity through triangulation and save time and money. Additionally, these constraints may be dealt with through careful planning and consultation with program stakeholders. By clearly identifying and understanding client needs ahead of the evaluation, costs and time of the evaluative process can be streamlined and reduced, while still maintaining credibility.

All in all, time, monetary and data constraints can have negative implications on the validity, reliability and transferability of the evaluation. The shoestring approach has been created to assist evaluators to correct the limitations identified above by identifying ways to reduce costs and time, reconstruct baseline data and to ensure maximum quality under existing constraints (Bamberger et al., 2004).

Five-tiered approach

The five-tiered approach to evaluation further develops the strategies that the shoestring approach to evaluation is based upon. It was originally developed by Jacobs (1988) as an alternative way to evaluate community-based programs and as such was applied to a statewide child and family program in Massachusetts, U.S.A. The five-tiered approach is offered as a conceptual framework for matching evaluations more precisely to the characteristics of the programs themselves, and to the particular resources and constraints inherent in each evaluation context. In other words, the five-tiered approach seeks to tailor the evaluation to the specific needs of each evaluation context.

For each tier, purpose(s) are identified, along with corresponding tasks that enable the identified purpose of the tier to be achieved. For example, the purpose of the first tier, Needs assessment, would be to document a need for a program in a community. The task for that tier would be to assess the community's needs and assets by working with all relevant stakeholders.

While the tiers are structured for consecutive use, meaning that information gathered in the earlier tiers is required for tasks on higher tiers, it acknowledges the fluid nature of evaluation. Therefore, it is possible to move from later tiers back to preceding ones, or even to work in two tiers at the same time. It is important for program evaluators to note, however, that a program must be evaluated at the appropriate level.

The five-tiered approach is said to be useful for family support programs which emphasise community and participant empowerment. This is because it encourages a participatory approach involving all stakeholders and it is through this process of reflection that empowerment is achieved.

Methodological challenges presented by language and culture

The purpose of this section is to draw attention to some of the methodological challenges and dilemmas evaluators are potentially faced with when conducting a program evaluation in a developing country. In many developing countries the major sponsors of evaluation are donor agencies from the developed world, and these agencies require regular evaluation reports in order to maintain accountability and control of resources, as well as generate evidence for the program’s success or failure (Bamberger, 2000). However, there are many hurdles and challenges which evaluators face when attempting to implement an evaluation program which attempts to make use of techniques and systems which are not developed within the context to which they are applied (Smith, 1990). Some of the issues include differences in culture, attitudes, language and political process (Ebbutt, 1998, Smith, 1990).

Culture is defined by Ebbutt (1998, p. 416) as a “constellation of both written and unwritten expectations, values, norms, rules, laws, artifacts, rituals and behaviors that permeate a society and influence how people behave socially”. Culture can influence many facets of the evaluation process, including data collection, evaluation program implementation and the analysis and understanding of the results of the evaluation (Ebbutt, 1998). In particular, instruments which are traditionally used to collect data such as questionnaires and semi-structured interviews need to be sensitive to differences in culture, if they were originally developed in a different cultural context (Bulmer & Warwick, 1993). The understanding and meaning of constructs which the evaluator is attempting to measure may not be shared between the evaluator and the sample population and thus the transference of concepts is an important notion, as this will influence the quality of the data collection carried out by evaluators as well as the analysis and results generated by the data (ibid).

Language also plays an important part in the evaluation process, as language is tied closely to culture (ibid). Language can be a major barrier to communicating concepts which the evaluator is trying to access, and translation is often required (Ebbutt, 1998). There are a multitude of problems with translation, including the loss of meaning as well as the exaggeration or enhancement of meaning by translators (ibid). For example, terms which are contextually specific may not translate into another language with the same weight or meaning. In particular, data collection instruments need to take meaning into account, as subject matter that may not be considered sensitive in one context might prove to be sensitive in the context in which the evaluation is taking place (Bulmer & Warwick, 1993). Thus, evaluators need to take into account two important concepts when administering data collection tools: lexical equivalence and conceptual equivalence (ibid). Lexical equivalence asks the question: how does one phrase a question in two languages using the same words? This is a difficult task to accomplish, and the use of techniques such as back-translation may aid the evaluator but may not result in perfect transference of meaning (ibid). This leads to the next point, conceptual equivalence. It is not a common occurrence for concepts to transfer unambiguously from one culture to another (ibid). Data collection instruments which have not undergone adequate testing and piloting may therefore render results which are not useful, as the concepts measured by the instrument may have taken on a different meaning and thus rendered the instrument unreliable and invalid (ibid).

Thus, it can be seen that evaluators need to take into account the methodological challenges created by differences in culture and language when attempting to conduct a program

evaluation in a developing country.

Q4:- Explain the questionnaire design process.

Ans:-

A good questionnaire should not be too lengthy. Simple English should be used and the question shouldn’t be difficult to answer. A good questionnaire requires sensible language, editing, assessment, and redrafting.

Questionnaire Design Process

State the information required- This will depend upon the nature of the problem, the purpose of the study and the hypothesis framed. The target audience must be kept in focus.

State the kind of interviewing technique- the interviewing method can be telephone, mail, personal interview or electronic interview. Telephone interviews can be computer assisted. Personal interviews can be conducted at the respondent’s place or at a mall or shopping place. Mail interviews can take the form of a mail panel. Electronic interviews take place either through e-mail or through the internet.

Decide the matter/content of individual questions- There are two deciding factors for this-

Is the question significant? - Examine the contribution of each question. Does the question contribute to the objective of the study?

Is there a need for several questions or a single question? - Several questions are asked in the following cases:

When there is a need for cross-checking

When the answers are ambiguous

When people are hesitant to give correct information.

Overcome the respondents’ inability and unwillingness to answer- The respondents may be unable to answer the questions for the following reasons-

The respondent may not be fully informed

The respondent may not remember

The respondent may be unable to express or articulate the answer

The respondent may be unwilling to answer due to-

There may be sensitive information which may cause embarrassment or harm the respondent’s image.

The respondent may not be familiar with the genuine purpose

The question may appear to be irrelevant to the respondent

The respondent may not be willing to reveal traits like aggressiveness (for instance, if asked “Do you hit your wife or sister?”)

To overcome the respondent’s unwillingness to answer:

Place the sensitive topics at the end of the questionnaire

Preface the question with a statement

Use the third person technique (For example - Mark needed a job badly and he used wrong means to get it - Is it right? Different people will have different opinions depending upon the situation)

Categorize the responses rather than asking for a specific figure (For example - group income levels as below 25,000; 25,000-49,999; 50,000 and above)
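The bracketing technique can be sketched in code. This is an illustrative sketch, not part of the original text; the bracket labels and boundaries are assumptions, chosen as half-open intervals so that a boundary income falls into exactly one category:

```python
def income_bracket(income):
    """Map an exact income figure to a coded response category.

    Half-open brackets are assumed so that a boundary value such as
    25000 falls into exactly one group.
    """
    if income < 25000:
        return "below 25,000"
    elif income < 50000:
        return "25,000-49,999"
    else:
        return "50,000 and above"

# The respondent reports only the category, never the exact figure.
print(income_bracket(32000))  # -> 25,000-49,999
```

Because the respondent discloses only a range, the sensitive exact figure is never requested, which reduces refusals.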

Decide on the structure of the question- Questions can be of two types:

Structured questions- These specify the set of response alternatives and the response format. These can be classified into multiple choice questions (having various response categories), dichotomous questions (having only 2 response categories such as “Yes” or “No”) and scales (discussed already).

Unstructured questions- These are also known as open-ended questions. No alternatives are suggested and the respondents are free to answer these questions in any way they like.
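The distinction between structured question types can be made concrete with a small sketch. This example is illustrative only; the class and method names are hypothetical, and the numeric coding of response alternatives anticipates the pre-coding of questions mentioned in the layout step:

```python
from dataclasses import dataclass

@dataclass
class StructuredQuestion:
    """A structured question with a fixed set of response alternatives."""
    text: str
    choices: list  # response alternatives in display order

    def is_dichotomous(self):
        # Dichotomous questions have exactly two response categories.
        return len(self.choices) == 2

    def code_for(self, answer):
        # Pre-coding: record the 1-based position of the chosen alternative.
        return self.choices.index(answer) + 1

q1 = StructuredQuestion("Do you use brand X of shampoo?", ["Yes", "No"])
q2 = StructuredQuestion("Which brand do you use most often?",
                        ["Brand X", "Brand Y", "Brand Z", "Other"])
```

An unstructured (open-ended) question, by contrast, would carry no `choices` list at all; the answer is recorded verbatim and coded only at the analysis stage.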

Determine the question language/phrasing- If the questions are poorly worded, then either the respondents will refuse to answer the question or they may give incorrect answers. Thus, the words of the question should be carefully chosen. Ordinary and unambiguous words should be used. Avoid implicit assumptions, generalizations and implicit alternatives. Avoid biased questions. Define the issue in terms of who the questionnaire is being addressed to, what information is required, when the information is required, why the question is being asked, etc.

Properly arrange the questions- To determine the order of the question, take decisions on aspects like opening questions (simple, interesting questions should be used as opening questions to gain co-operation and confidence of respondents), type of information (Basic information relates to the research issue, classification information relates to social and demographic characteristics, and identification information relates to personal information such as name, address, contact number of respondents), difficult questions (complex, embarrassing, dull and sensitive questions could be difficult), effect on subsequent questions, logical sequence, etc.

Recognize the form and layout of the questionnaire- This is very essential for a self-administered questionnaire. The questions should be numbered and pre-coded. The layout should be neat and orderly, not cluttered.

Reproduce the questionnaire- Paper quality should be good. Questionnaire should appear to be professional. The required space for the answers to the question should be sufficient. The font type and size should be appropriate. Vertical response questions should be used, for example:

Do you use brand X of shampoo?

Yes

No

Pre-test the questionnaire- The questionnaire should be pre-tested on a small number of respondents to identify the likely problems and to eliminate them. Each and every dimension of the questionnaire should be pre-tested. The sample respondents should be similar to the target respondents of the survey.

Finalize the questionnaire- Check the final draft questionnaire. Ask yourself how much the information obtained from each question will contribute to the study. Make sure that irrelevant questions are not asked. Obtain feedback from the respondents on the questionnaire.

Q5:- The procedure of testing a hypothesis requires a researcher to adopt several steps. Describe all such steps in brief.

Ans:-

Introduction

Hypothesis testing is generally used when you are comparing two or more groups.

For example, you might implement protocols for performing intubation on pediatric patients in the pre-hospital setting. To evaluate whether these protocols were successful in improving intubation rates, you could measure the intubation rate over time in one group randomly assigned to training in the new protocols, and compare this to the intubation rate over time in another control group that did not receive training in the new protocols.

When you are evaluating a hypothesis, you need to account for both the variability in your sample and how large your sample is. Based on this information, you'd like to make an assessment of whether any differences you see are meaningful, or if they are likely just due to chance. This is formally done through a process called hypothesis testing.

Five Steps in Hypothesis Testing:

Specify the Null Hypothesis

Specify the Alternative Hypothesis

Set the Significance Level (α)

Calculate the Test Statistic and Corresponding P-Value

Draw a Conclusion

Step 1: Specify the Null Hypothesis

The null hypothesis (H0) is a statement of no effect, relationship, or difference between two or more groups or factors. In research studies, a researcher is usually interested in disproving the null hypothesis.

Examples:

There is no difference in intubation rates across ages 0 to 5 years.

The intervention and control groups have the same survival rate (or, the intervention does not improve the survival rate).

There is no association between injury type and whether or not the patient received an IV in the prehospital setting.

Step 2: Specify the Alternative Hypothesis

The alternative hypothesis (H1) is the statement that there is an effect or difference. This is usually the hypothesis the researcher is interested in proving. The alternative hypothesis can be one-sided (only provides one direction, e.g., lower) or two-sided. We often use two-sided tests even when our true hypothesis is one-sided because it requires more evidence against the null hypothesis to accept the alternative hypothesis.

Examples:

The intubation success rate differs with the age of the patient being treated (two-sided).

The time to resuscitation from cardiac arrest is lower for the intervention group than for the control (one-sided).

There is an association between injury type and whether or not the patient received an IV in the prehospital setting (two-sided).

Step 3: Set the Significance Level (α)

The significance level (denoted by the Greek letter alpha, α) is generally set at 0.05. This means that there is a 5% chance that you will reject the null hypothesis when it is actually true. The smaller the significance level, the greater the burden of proof needed to reject the null hypothesis, or in other words, to support the alternative hypothesis.
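The five steps can be sketched end to end in code. This is an illustrative example only: it uses a large-sample z approximation (not prescribed by the text) so that it needs no statistics library, and the data are made up:

```python
import math
from statistics import mean, stdev

def two_sample_test(a, b, alpha=0.05):
    """Five-step hypothesis test for a difference in group means.

    H0: the two groups have the same mean (Step 1).
    H1: the means differ, two-sided (Step 2).
    alpha is fixed in advance of seeing the data (Step 3).
    """
    # Step 4: compute the test statistic (difference in means over its
    # standard error) and the corresponding two-sided p-value, using a
    # normal approximation for simplicity.
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    # Step 5: draw a conclusion -- reject H0 when p < alpha.
    return z, p, p < alpha

# Made-up intubation success percentages for a trained and a control group.
trained = [72, 75, 71, 78, 74, 73, 76, 70, 77, 74]
control = [68, 70, 67, 69, 71, 66, 70, 68, 69, 72]
z, p, reject = two_sample_test(trained, control)
```

With these made-up figures the test rejects H0 at the 0.05 level. For samples this small, a t-distribution would be more appropriate than the normal approximation; the structure of the five steps is unchanged.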

Q6 a) What are the different types of research reports available to the researcher?

b) What should be the ideal structure of a research report ?

Ans:-

There are many ways to categorize the different types of research. For example, research in different fields can be called different types of research, such as scientific research, social research, medical research, environmental research and so forth. The research methods that are used and purposes of the research also can be used to categorize the different types of research. A few of these types of research include quantitative and qualitative research; observational and experimental research; and basic, applied and developmental research.

Quantitative and Qualitative

Quantitative research is the collecting of objective numerical data. Features are classified and counted, and statistical models are constructed to analyze and explain the information that has been gathered. Some of the tools used for this type of research include questionnaires that are given to test subjects, equipment that is used to measure something and databases of existing information. The goal of quantitative research is to compile statistical evidence, so the questionnaires used in this method typically include yes-or-no questions or multiple-choice questions rather than open-ended questions such as essay questions.

Unlike quantitative research, qualitative research is subjective and seeks to describe or interpret whatever is being researched. Instead of numbers, this type of research provides information in the form of words or visual representations. It relies on the researcher to observe, record and interpret what happens, such as participants' answers to open-ended questions, subjects' behavior or the results of experiments. Case studies are common examples of qualitative research.

Observational and Experimental

Observational research is the collection of information without interference or input from the researcher. It is the examination of things as they naturally or inherently are. The researcher simply observes, measures or records what occurs. That information is then analyzed and used to draw conclusions.

This is in contrast with experimental research, in which the researcher sets the parameters or conditions and is able to change them to determine their effects. Experimental research often occurs in laboratories but can occur anywhere. It merely requires the researcher to be able to control one or more conditions of the experiment. This method helps researchers understand how certain variables — the different aspects or conditions that can change — can affect whatever it is they are studying.

Basic, Applied and Developmental

When the purpose of research is simply to reveal or discover what is true, it can be called basic research. This type of research involves exploring that which is not known or understood. Applied research is taking what is already known and looking for ways to use it, such as to solve problems. Developmental research is similar to applied research but focuses on using what is already known to improve products or existing technology or to create something new.

b) The ideal structure of a research report :-

Most research projects share the same general structure. You might think of this structure as following the shape of an hourglass. The research process usually starts with a broad area of interest, the initial problem that the researcher wishes to study. For instance, the researcher could be interested in how to use computers to improve the performance of students in mathematics. But this initial interest is far too broad to study in any single research project (it might not even be addressable in a lifetime of research). The researcher has to narrow the question down to one that can reasonably be studied in a research project. This might involve formulating a hypothesis or a focus question. For instance, the researcher might hypothesize that a particular method of computer instruction in math will improve the ability of elementary school students in a specific district. At the narrowest point of the research hourglass, the researcher is engaged in direct measurement or observation of the question of interest.

Once the basic data is collected, the researcher begins to try to understand it, usually by analyzing it in a variety of ways. Even for a single hypothesis there are a number of analyses a researcher might typically conduct. At this point, the researcher begins to formulate some initial conclusions about what happened as a result of the computerized math program. Finally, the researcher often will attempt to address the original broad question of interest by generalizing from the results of this specific study to other related situations. For instance, on the basis of strong results indicating that the math program had a positive effect on student performance, the researcher might conclude that other school districts similar to the one in the study might expect similar results.

Components of a Study

What are the basic components or parts of a research study? Here, we'll describe the basic components involved in a causal study. Because causal studies presuppose descriptive and relational questions, many of the components of causal studies will also be found in those others.

Most social research originates from some general problem or question. You might, for instance, be interested in what programs enable the unemployed to get jobs. Usually, the problem is broad enough that you could not hope to address it adequately in a single research study. Consequently, we typically narrow the problem down to a more specific research question that we can hope to address. The research question is often stated in the context of some theory that has been advanced to address the problem. For instance, we might have the theory that ongoing support services are needed to assure that the newly employed remain employed. The research question is the central issue being addressed in the study and is often phrased in the language of theory. For instance, a research question might be:

Is a program of supported employment more effective (than no program at all) at keeping newly employed persons on the job?

The problem with such a question is that it is still too general to be studied directly. Consequently, in most research we develop an even more specific statement, called a hypothesis, that describes in operational terms exactly what we think will happen in the study. For instance, the hypothesis for our employment study might be something like:

The Metropolitan Supported Employment Program will significantly increase rates of employment after six months for persons who are newly employed (after being out of work for at least one year) compared with persons who receive no comparable program. Notice that this hypothesis is specific enough that a reader can understand quite well what the study is trying to assess.

In causal studies, we have at least two major variables of interest, the cause and the effect. Usually the cause is some type of event, program, or treatment. We make a distinction between causes that the researcher can control (such as a program) versus causes that occur naturally or outside the researcher's influence (such as a change in interest rates, or the occurrence of an earthquake). The effect is the outcome that you wish to study. For both the cause and effect we make a distinction between our idea of them (the construct) and how they are actually manifested in reality. For instance, when we think about what a program of support services for the newly employed might be, we are thinking of the "construct." On the other hand, the real world is not always what we think it is. In research, we remind ourselves of this by distinguishing our view of an entity (the construct) from the entity as it exists (the operationalization). Ideally, we would like the two to agree.

Social research is always conducted in a social context. We ask people questions, or observe families interacting, or measure the opinions of people in a city. An important component of a research project is the units that participate in the project. Units are directly related to the question of sampling. In most projects we cannot involve all of the people we might like to involve. For instance, in studying a program of support services for the newly employed we can't possibly include in our study everyone in the world, or even in the country, who is newly employed. Instead, we have to try to obtain a representative sample of such people. When sampling, we make a distinction between the theoretical population of interest to our study and the final sample that we actually measure in our study. Usually the term "units" refers to the people that we sample and from whom we gather information. But for some projects the units are organizations, groups, or geographical entities like cities or towns. Sometimes our sampling strategy is multi-level: we sample a number of cities and within them sample families.

In causal studies, we are interested in the effects of some cause on one or more outcomes. The outcomes are directly related to the research problem -- we are usually most interested in outcomes that are most reflective of the problem. In our hypothetical supported employment study, we would probably be most interested in measures of employment -- is the person currently employed, or what is their rate of absenteeism?

Finally, in a causal study we usually are comparing the effects of our cause of interest (e.g., the program) relative to other conditions (e.g., another program or no program at all). Thus, a key component in a causal study concerns how we decide what units (e.g., people) receive our program and which are placed in an alternative condition. This issue is directly related to the research design that we use in the study. One of the central questions in research design is determining how people wind up in or are placed in various programs or treatments that we are comparing.

These, then, are the major components in a causal study:

The Research Problem

The Research Question

The Program (Cause)

The Units

The Outcomes (Effect)

The Design