
Observing Teaching Systematically

JOHN CHEFFERS

Everybody knows something about teaching. For too long, however, expertise has been self-styled, dogma has gone unchallenged, and individual style has been the excuse for a plethora of dull, ineffective, and inadequate teaching behaviors. In 1963, Gage declared that all research into the act of teaching prior to 1948 was suspect, due to inadequate descriptions of the precise nature of the teacher behavior variables. Teachers have rejected systematic description, preferring to huddle under the obliging umbrella of art. "Teaching is an art," they have said, "dependent upon personality, flair, and personal integrity." They may, of course, be correct. The problem is that there is no way of verifying such sweeping statements, and certainly no way, under such aegis, of distinguishing genuine artists from bogus pretenders.

About the Author

John Cheffers is Associate Professor in the Department of Movement and Health at Boston University.

The scientific analysis of the teaching act has been slow in developing. Plato recorded a form of teaching based on convergent questioning when he described the celebrated case of Socrates teaching the slave boy "Meno" the fundamentals of Pythagoras' theorem. This classic effort led to the term "Socratic method." It is possible to analyze and critique Socrates' teaching behaviors only because Plato recorded the verbal behaviors in detail. The provision of such data is imperative if teaching is to be analyzed, critiqued, and refined, and it is only under these circumstances that scientific study is possible. In his analysis of new developments in the profession of Health, Physical Education and Recreation during the 1975 National Convention in Atlantic City, New Jersey, Larry Locke found little to rejoice about. The one direction he embraced enthusiastically was the development of systematic observation for gathering data about which a science of teaching might be developed. While it is true that a few individuals have used systematic observation for many years to analyze skill performances, especially in the areas of college and professional sport, most have ignored the objective route. Far too much guess work has been used. Student teachers have been angered by the high degree of subjectivity shown by supervisors; practising teachers have complained at the prejudicial nature of their promotion evaluations; students have been frustrated in their endeavors to seek helpful means by which they can work for self change. In general, the act of teaching has lacked scientific inquiry.

The advent of serious observer systems in the late 1940's and early 50's through the agency of John Withall (1949), Ned Flanders (1970), Frederick Bales (1950), Mary Aschner (1961) and Marie Hughes (1962) provided the initial stimulus for the development of systematic observation. An article written by Medley and Mitzel (1963) resulted in a multitude of observation instruments which by the mid-1970's had reached into the thousands. In physical education, however, the development of systematic instrumentation was sparse until the early 1970's, when serious developmental research began with Siedentop (1972) at Ohio State, Anderson (1971) at Teachers College, Columbia, Nygaard (1975) at Montana University, Cheffers (1974) at Boston University, and Mancini at Ithaca College (1975).

Observer systems are tools to study "dynamic, on-going interaction between people" (Simon & Boyer, 1967). They provide a language for isolating behavioral phenomena. Batchelder and Cheffers (1976) state an observation instrument can be used to: (1) describe current classroom practices, (2) modify teacher behavior, (3) provide a tool for the analysis of teaching, (4) give feedback about one's own teaching, (5) train student teachers, (6) discriminate between patterns of teaching, (7) determine the relationships between various classroom behaviors and student growth, (8) help in projecting future teaching patterns. C. Kenneth Murray (1970) contends the systematic observation movement provides the preservice and inservice teacher with techniques for identifying, observing, classifying and quantifying specific classroom behaviors which can help "the capacity of the neophyte to modify instructional behaviors in accordance with necessary requirements to facilitate learning." Flanders (1975) believes that analysis of one's own teaching behavior permits a greater variety of teaching techniques to be developed, which in turn will produce more efficient learning. Amidon (1976) suggests that systematic observation provides such an efficient picture of interactive patterns that process adjustments are possible, enhancing and enriching the entire learning atmosphere.

The proliferation of new observer systems during the late 1960's and the 1970's is an indication of the complex nature of teaching. Each system was searching to identify different parameters or variables. Certainly, many different words have been used to identify teaching (teaching, counseling, supervising, helping, intervening, change assisting), which accounts for the great variety of approaches used in the instruments developed. Whatever the variables, however, systematic observation provided a formula whereby the teaching act could be placed under microscopic scrutiny for analysis, critique and refinement.

What Systems Exist and What Do They Measure?

Current systems can be characterized under two broad headings: (1) inductive, where the systems materialize and attain form only after a series of observations have been made, and (2) deductive, where a formula preexists and interpretations are made through that formula. Most deductive systems develop as a result of the inductive process.

Inductive systems

By the nature of the definition of the inductive system, classical forms are absent. However, several methods by which inductive observation takes place have achieved recognition in the scientific community.

Anecdotal recording (Thorndike & Hagen, 1977). An observer describes pupil behaviors in single word, sentence, paragraph or short story form. Endeavors are made to describe the behaviors without evaluation, interpretation, or generalized opinion. Such assessment comes later when all the information is recorded. Sometimes value judgments are absent from the entire procedure. The value of this technique lies in its capacity to describe specific events in detail and in their natural setting.

Critical incident study. Systems are developed around observations of vital behaviors and interactions in the clinical setting. A basketball coach, for instance, may develop insight from one or two critical incidents which occur during a game.

All inclusive observations. The most acceptable instrument arising from general observation techniques is Participant Observation (PO). Data gathered from such methodology is considered valid for it registers the sense of an activity and records fleeting, relevant comments, behaviors, and critical interactions. The case for Participant Observation centers around the argument that when observers take part in the same experiences as the group under observation, they experience and record group feelings and behaviors much more accurately. They are not detached, hence they get the feeling of the group. It is possible, of course, during Participant Observation to use pre-determined formulae or deductive systems.

Dialogue analysis. Sometimes observers will set out with the sole purpose of recording dialogue. Their intentions are to analyze psychiatric and social-psychological perspectives eked from the dialogues. Marie Hughes (1962) considers that verbatim records are "indispensable data" for the description and analysis of teaching. The main problem with such data collection is that often unmanageable amounts of verbiage are collected, yielding disproportionately unrewarding results.

Problems with Observations Formed Inductively

Scientists who are dissatisfied with inductive forms of data gathering center their objections around the factors of objectivity and completeness. They maintain it is impossible to factor out interesting and personal reactions during observation periods and complain that methods by which validity and reliability can be gained are difficult to conceive and well nigh impossible to construct. Pseudo judgments and premature interpretations occur too frequently, and the volume of busy work leads to illegitimate short-cut procedures. Scientists are also unhappy with the incompleteness of such data. They maintain that selective notice of dramatic behaviors excludes many meaningful behaviors.

The combined problem of lack of objectivity and incompleteness has led researchers to prefer deductive systems of observation. It must be noted, however, that certain groups of movement professionals have consistently preferred experiential data, pointing out the dangers of preconceived formulae and detached observation. Dance educators, movement educators, coaches, and Outward Bound leaders number prominently amongst their ranks.

Deductive Systems

Deductive systems are defined as systems where a formula exists requiring the observers to deduce what behaviors have occurred and under which categorizations they may be placed according to that formula. Scientists regard deductive systems as valid tools for collecting data on human behavior. We will divide deductive systems into seven (7) categories: (a) Process Systems, (b) Terminal Process Systems, (c) Product Systems, (d) Content Analysis Systems, (e) Physical Environment Systems, (f) Non-human Subject Systems, (g) Systems which combine two or more of the above classifications.

Process systems. Such systems provide formulae for observing on-going processes at the instant of occurrence. Methods of data gathering require the observer to be present, alert, and thoroughly familiar with the respective categories. Examples of such systems include the original Flanders' ten category system, modifications to Flanders' made by Cheffers (1974), Melograno (1971) and Dougherty (1971), group systems like Bales (1950), the feedback system of Fishman (1970), the teachers' role identification system (Trilasp) by Hurwitz (1972), the play check methodology (Hall, 1970), and the Ohio State University Teacher Behavior Scale by Siedentop and Hughley (1975). The Individual Response Gestalt of Cheffers (Miller et al., 1974) and the movement analysis and effort analysis systems developed by Laban (1950) are further examples of Process Systems.

On-going process systems require frequent reliability checks due to the tenuous nature of the situations under observation, but in return, provide the legitimacy of instantaneous recording. The advent of the replay video tape machine has permitted relatively sophisticated instruments to be developed. Researchers, for instance, have subscripted and postscripted CAFIAS (Cheffers Adaptation of Flanders Interaction Analysis System) to describe:

1. Interaction patterns of students and teachers in different decision making situations (Mancini, et al., 1976)

2. Comparisons of predictive estimates of classroom process behaviors in math, English, and physical education classes (Batchelder, 1975)

3. Comparisons of open and traditional classrooms (Evaul, 1976)

4. Comparisons of teaching patterns of liberal arts tutors in different experimental circumstances (Travis, 1977)

5. Relationships between volunteer high school students and moderately retarded high school students in physical education programs (Bechtold, 1976)

6. Interaction measures between therapists and clients (Cohen, 1976)

7. Effects of varying teacher models on the development of motor skills and self concepts (Martinek, 1977; Chertok, 1975)

8. Elementary children as modifiers of teacher behavior (Pratt & Owen, 1973)

9. The effects of movement on improving ethnic relationships (Cheffers, et al., 1976)


10. High risk programs for teacher training in psychological education (Cheffers & Mancini, 1975)

11. The effect of interaction analysis on the preparation of student teachers (Kielty, 1975; Hendrickson, et al., 1976; Rochester, et al., 1977)

12. Comparisons of sex differences in teacher leadership styles (Keane, 1976)

Siedentop, Rife & Boehm (1974), Hughley (1973) and Dodds (1975) have used variations of the OSU Teacher Behavior Scale to change the teaching behavior of student teachers.

The results of current research with process instrumentation give us cause to believe that they will eventually provide legitimate scientific theories from which valid assumptions about the teaching-learning process can be made.
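
To make the raw material of such process systems concrete, the sketch below (Python, not drawn from the article) shows one way a Flanders-style record might be stored: a single category code tallied every few seconds. The ten verbal categories are the classic Flanders set; the nonverbal subscripts that CAFIAS adds are omitted, and the example tallies are invented for illustration only.

```python
# A minimal sketch of a Flanders-style time-sampled coding record.
# The ten category labels are the classic Flanders verbal categories;
# CAFIAS adds nonverbal subscripts, which are omitted here for brevity.

CATEGORIES = {
    1: "accepts feeling",
    2: "praises or encourages",
    3: "accepts or uses student ideas",
    4: "asks questions",
    5: "lectures",
    6: "gives directions",
    7: "criticizes or justifies authority",
    8: "student talk - response",
    9: "student talk - initiation",
    10: "silence or confusion",
}

def code_lesson(codes):
    """Validate a sequence of category codes tallied every few seconds
    (Flanders recommended roughly one tally every three seconds)."""
    for c in codes:
        if c not in CATEGORIES:
            raise ValueError(f"unknown category code: {c}")
    return list(codes)

# Example: a two-minute fragment coded at 3-second intervals (40 tallies).
lesson = code_lesson([5, 5, 5, 4, 8, 8, 2, 5, 5, 6, 10, 8, 9, 3, 4, 8, 8, 2,
                      5, 5, 5, 5, 4, 8, 3, 5, 6, 6, 10, 10, 8, 8, 9, 9, 3, 2,
                      5, 5, 4, 8])
print(len(lesson), "tallies recorded")
```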

Terminal Process Systems. Terminal process systems assess what processes went on, not at the instant of occurrence, but after the episode has finished. These systems require the observer to make a considered judgment, post hoc. They have the advantage of being unhurried, yet the disadvantage of requiring generalized opinion. Examples of such systems include: Mosston's Form for Distinguishing Teaching Styles (1966), the Teacher Performance Criteria Questionnaire (TPCQ) of Kielty and Cheffers (Miller, et al., 1974), and the Individual Response Scale of Cheffers (Miller, et al., 1974). Most skill analysis check lists and play analysis charts maintained by coaches and teachers in recording games and tournaments fall in the category of terminal process systems.

Product Systems

Product systems indicate relatively permanent changes in student behavior, sometimes referred to as the outcomes of teaching or student learning. Readers are familiar with such systems: examinations, tests, quizzes, and rankings are some examples. Some researchers believe that through combining process and product systems, a true estimate of the educational process can be made (Rosenshine & Furst, 1973; Flanders, 1970; Locke, 1975).

One of the interesting forms of product systems is the rating scale. Here the observer is required to place a judgment on the quality of learning. It is much easier, of course, to simply record the quantity of learning. Although qualitative discrimination is required by society, many scientists have questioned the validity of rating scales, expressing concerns over incompleteness and unfairness.

Content Analysis Systems

Content analysis systems refer to those instruments that tease out and categorize the substance of the behavioral episodes. Much of the work classifying the concepts developed by Bloom (1956) in cognitive behavior, Krathwohl (1964) in affective behavior, and Harrow (1972) in psycho-motor behavior can be classified as content in nature. The Florida Taxonomy of Cognitive Behavior developed by Brown, Ober, Soar and Webb (Webb, 1970) and the Content Analysis System developed by Hill (1969), based on the earlier work of Duncan and Hough (1966), are examples of content analysis systems. Movement and health professionals use similar systems when they categorize content aspects of the various curricula.

Physical Environment Systems

Physical Environment Systems refer to systems which describe locations, physical plant, geographic habitat, and building structures and materiel. Researchers are beginning to see the importance of systematically describing such phenomena, although they have always realized the limitations of physical plant. The provision of excellent facilities unfortunately does not guarantee the same excellence of behavior or interaction patterns. Horn (1914), who is generally considered to be the father of modern systematic observation, developed an observational system which used a classroom seating chart; Puckett (1928) added a pupil functioning component. Indeed, earlier systems were predominantly concerned with describing the individual functioning in a physical environment (Puckett, 1928; Wrightstone, 1934). Coaches, when concerned with position play, and mountaineers, when concerned with scaling, frequently use physical environment systems. Mapping can be regarded as a systematic description of the physical environment.

Non-human Subject Systems

Systems which set out to describe the behavior of animals, birds, and the vicissitudes of the natural elements, like wind, fire and water, are sometimes referred to as non-human subject systems. Like other deductive systems, they work from tangible, recognizable data developed in a pre-conceived formula.

Combinations of Systems Outlined Above

Many systems have endeavored to give comprehensive information based on multi-dimensional scales. The categories advocated by Siedentop (1976) include assessment of both student and teacher behavior. Bellack et al. (1963) were concerned with content, process, and behavioral outcomes when they developed the Bellack system. Wright and Proctor (1961) combined process, content and attitude for their system which classified verbal behaviors in mathematics classrooms. Barrett (1971) identified the interrelationship of six components (movement task, student response, content, teacher, learner, guidance) in her quest to describe the structure of movement tasks. Most researchers recognize the value of describing behaviors through multi-dimensional data.

Problems with Observations Formed Deductively

Most of the criticisms of observations formed deductively center around the feasibility of the formula, validity of the categories developed, the reliability and validity of the data collected, and the dangers associated with generalizations arising from statistical analyses. Many of the categories within the systems are not discrete, and the vast majority of systems recorded have never been validated. Technology will undoubtedly assist the development of deductive systems to a point where many of these problems will disappear. Flanders, for instance, foresees the day when a computer will use a 100-category system in a classroom with greater accuracy and sensitivity than a human being can use a 10-category system. The use of the computer to analyze the reams of data produced in daily observation has made deductive systematic observation feasible.

Validity and Reliability in Systematic Observation

Each time an instrument is developed, it should be tested for validity and inter- and intra-rater reliability. In simple language,

Do the raters using the systems accurately record the categories the systems seek to describe, and will they do it again under identical circumstances?

The instrument itself has to be consistent and feasible. Most tests require validity and reliability coefficients to be expressed in the range between 0 and 1.0. Sometimes other measures (variance, for instance) can be used, but all tests and instruments, if they are to be credible, must present objective measures of validity and reliability. It is recognized, of course, that initial measures need to be supplemented by demonstrated utility in clinical settings before ultimate legitimacy can be claimed. But the failure of instrument developers to provide accurate scientific coefficients, prior to field utilization, is a continuing problem in education.

Validity

Measures of face, content, and construct validity are necessary. Face validity refers to the need to show that the instrument is somewhere on target with its goals and objectives when compared with non-relevant instrumentation. Content validity is concerned with the relevance of the categories to the content area addressed. It is confirmed through literature research and interaction amongst specialists in the relevant field. Construct validity is the most important validation process. It is concerned with whether the instrument adequately measures and/or predicts the traits or meanings when compared with other similar instruments measuring the same general constructs.

Honnigman (1966) and Cheffers (1972) used a "blind-live" method to validate their respective instruments based on the assumption that the encoded and decoded data arrays were sufficient to rival "live" or "on the spot" observations. Both discovered that although their data descriptions were more accurate than other compared instruments, they did not achieve the same sensitivity that live observance attained.

Reliability

For an instrument to be reliable, it must produce the same data in replicated situations. Internal consistency is important. Such measures are today confirmed through correlational and variance techniques (Pearson Product, and the Hoyt Coefficient). Of even greater significance, however, is the ability of the observer to recognize the same categories under replication, and to be in reasonable agreement with another observer independently administering the same instrument. Flanders (1970), whose systematic treatment of data is perhaps the most sophisticated yet to be developed, maintains that 75% agreement is permissible with 85% agreement desirable in research. Certainly, the legitimacy of an instrument which cannot be administered by two independent observers with an 80% reliability is seriously in question.
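
A minimal sketch of the simplest such check, assuming two observers' tally-by-tally records of the same episode are available (the data below are invented): percent agreement compared against the thresholds quoted above. Chance-corrected coefficients such as Scott's pi are a common refinement of this figure.

```python
# Sketch: tally-by-tally agreement between two observers coding the same episode.
# Interpretation thresholds follow the figures quoted above (75% permissible,
# 85% desirable); chance-corrected coefficients would be a further refinement.

def percent_agreement(obs_a, obs_b):
    """Proportion of tallies on which two observers assigned the same category."""
    if len(obs_a) != len(obs_b) or not obs_a:
        raise ValueError("observers must supply equal-length, non-empty records")
    matches = sum(1 for a, b in zip(obs_a, obs_b) if a == b)
    return matches / len(obs_a)

observer_1 = [5, 5, 4, 8, 8, 2, 5, 6, 10, 8, 9, 3]
observer_2 = [5, 5, 4, 8, 9, 2, 5, 6, 10, 8, 9, 3]

agreement = percent_agreement(observer_1, observer_2)
print(f"agreement = {agreement:.0%}")          # 92% for this invented pair
print("permissible (>= 75%):", agreement >= 0.75)
print("desirable  (>= 85%):", agreement >= 0.85)
```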

Treatment of the Data

Observer systems produce prolific data. How these data are treated determines the efficacy of the system. Data analysis techniques may take the form of interaction matrices (Flanders), flow charts (IRG), profiles (OSUTBA), descriptive passages (Bellack) or scores. One of the problems facing researchers using systematic observation has been the nature of the data produced. Most data are presented in the form of frequency counts, ratios and percentages. Such distributions should be treated with non-parametric statistical techniques. Although many statisticians today are not so perturbed at treating these data parametrically, the purists are resistant. A logarithmic or similar transformation of such data is needed before parametric statistics can be used. This overdue development, recommended as far back as 1966 by Ned Flanders, is now beginning to be incorporated into computer programs. The use of the more powerful statistical techniques logically enables more powerful inferences to be made.
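
As a sketch of the treatments just described, assuming a record of Flanders-style category codes like the one sketched earlier (the tallies below are invented), successive tallies can be paired into an interaction (transition) matrix and reduced to frequency counts, percentages, and a teacher-talk/student-talk ratio; a logarithmic transformation of the counts is included as the kind of adjustment mentioned above before parametric techniques are applied. Treating categories 1-7 as teacher talk and 8-9 as student talk follows the usual Flanders convention.

```python
import math
from collections import Counter

# Sketch: turning a sequence of category tallies into a Flanders-style
# interaction matrix (rows = preceding tally, columns = following tally),
# plus the frequency counts, percentages, and ratios typically reported.

def interaction_matrix(codes, n_categories=10):
    """Count transitions between successive tallies in an n x n matrix."""
    matrix = [[0] * n_categories for _ in range(n_categories)]
    for prev, nxt in zip(codes, codes[1:]):
        matrix[prev - 1][nxt - 1] += 1
    return matrix

def summary(codes):
    counts = Counter(codes)
    total = len(codes)
    percentages = {cat: 100 * n / total for cat, n in counts.items()}
    teacher_talk = sum(counts[c] for c in range(1, 8))   # categories 1-7
    student_talk = counts[8] + counts[9]                  # categories 8-9
    ratio = teacher_talk / student_talk if student_talk else float("inf")
    return counts, percentages, ratio

codes = [5, 5, 5, 4, 8, 8, 2, 5, 5, 6, 10, 8, 9, 3, 4, 8, 8, 2, 5, 5]
matrix = interaction_matrix(codes)
counts, percentages, teacher_student_ratio = summary(codes)

print("transitions from category 5:", matrix[4])
print("percent of tallies in category 5: %.1f%%" % percentages[5])
print("teacher-talk / student-talk ratio: %.2f" % teacher_student_ratio)

# A logarithmic transformation of the raw counts, of the kind recommended
# before applying parametric statistics to frequency data.
log_counts = {cat: math.log(n + 1) for cat, n in counts.items()}
print("log-transformed counts:", log_counts)
```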

Use of Observation Instruments as Dependent and Independent Variables

Whenever an instrument is used to record data in a situation where it is not free to vary, we refer to its use as "dependent." Researchers wishing to manipulate other variables and see if change has taken place (from a pre to a post test situation, for instance) are using the instrument as a dependent variable. In this way, systematic observational instruments have made invaluable contributions. Martinek (1977) used CAFIAS interaction patterns to verify the authenticity of two specific treatments in elementary physical education teaching.

Whenever data from observation instruments are permitted to influence treatment effects, we refer to these data variables as "independent." Many researchers are anxious to use such techniques to improve teaching performance. Siedentop (1972), Kielty (1974), Rochester et al. (1977), and Cheffers (1974) have engaged in research where observation instruments have been used as the sole independent variable (i.e., students in treatment group X have received feedback information from the observation tool during the treatment period, and students from treatment group Y or a control group have not received such information). The exciting aspect of the use of observer tools as independent variables is centered in the potential for change. When, of their own conviction, human beings elect to behave differently and more effectively, greater overall changes can be expected. If data from observation instruments can bring about a self change situation, then systematic observation has more than earned its "keep."
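
The distinction can be sketched with wholly invented data (none of the numbers below come from the studies cited): the student-talk percentage computed from coded records serves as the dependent measure, while whether a teacher received feedback from the instrument during the treatment period is the independent variable.

```python
# Sketch (hypothetical data): a coded-observation measure used two ways.
# As a dependent variable, the student-talk percentage is compared pre vs. post;
# as an independent variable, the same instrument's output is the feedback
# given to group X during the treatment period and withheld from group Y.

def student_talk_pct(codes):
    """Percentage of tallies falling in the student-talk categories (8 and 9)."""
    return 100 * sum(1 for c in codes if c in (8, 9)) / len(codes)

# Pre- and post-treatment coded records for two hypothetical teachers per group.
group_x = {  # received feedback from the observation instrument
    "teacher_a": ([5, 5, 4, 8, 5, 6, 5, 5, 10, 5], [5, 4, 8, 9, 3, 8, 5, 4, 8, 9]),
    "teacher_b": ([5, 5, 5, 6, 4, 8, 5, 5, 5, 10], [4, 8, 8, 3, 5, 4, 9, 8, 2, 8]),
}
group_y = {  # no feedback
    "teacher_c": ([5, 5, 4, 8, 5, 5, 6, 5, 10, 5], [5, 5, 4, 8, 5, 6, 5, 5, 5, 10]),
    "teacher_d": ([5, 4, 5, 5, 8, 5, 6, 5, 5, 10], [5, 5, 4, 5, 8, 5, 5, 6, 10, 5]),
}

def mean_change(group):
    changes = [student_talk_pct(post) - student_talk_pct(pre)
               for pre, post in group.values()]
    return sum(changes) / len(changes)

print("mean change, feedback group X:    %+.1f points" % mean_change(group_x))
print("mean change, no-feedback group Y: %+.1f points" % mean_change(group_y))
```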

Directions

Locke (1975) has predicted that value to physical education will accrue from greater emphasis on systematic observation instrument development, and a wider use of the data provided. But the slow development of this emphasis is causing concern. In most cases, the clinical needs of field personnel have far preceded the refining and validating adjustments of the researcher. This has led to a proliferation of unsubstantiated measurement tools and sloppy instrumentation procedures. An immediate need is to close the gap between clinical needs and research activity. Much greater sophistication in the development of instrumentation, methodologies, and analytical techniques is required if observer systems are to contribute fully.

As curriculum needs intensify across disciplines and professions, and as the knowledge explosion continues, accurate data are needed if teaching techniques are to keep abreast. We have not nearly begun to investigate, for instance, the effect of process in education by comparison with products of education. Our total preoccupation with measurement of factual learning has led to a Machiavellian and unbalanced assessment of performance.

Models of diagnostic prescriptive teaching cannot be developed effectively without the presence of valid and sensitive descriptive analytic tools. Educators from all ends of the continuum maintain this is the direction in which we must go.

The case for the development of systematic observation is well documented. It is obvious that this writer considers it to be imperative. If physical education teaching is to be dragged from the mire of myth and egocentric dramatization, a greater scientific effort is needed. There is little doubt, too, that even if teachers are born, their performances are greatly enhanced through critical self analysis. The art of teaching is substantiated by what one might refer to as the science of teaching. And while it is essential for us to place teaching under the scrutiny of microscopic analysis, leading to improved techniques and greater effect, we need to heed the words of philosopher-educator John Dewey (1933), who, as ever, tends to keep our efforts in perspective:

The final test is whether the stimulus thus given to wider aims succeeds in transforming itself into power; that is to say, into the attention to detail that ensures mastery over means of execution . . . to nurture inspiring aim and executive means into harmony with each other is at once the difficulty and the reward of the teacher.

BIBLIOGRAPHY

Amidon, E. Conversations on interaction analysis: Temple University, Philadelphia, 1976-77.

Anderson, W. Descriptive analytic research on teaching, Quest, 15 (Jan. 1971), 1-8.

Aschner, M. J. The language of teaching, in Smith, B. O. and Ennis, R. H. (eds.) Language and concepts in education, Chicago: Rand McNally, 1961.

Bales, R. F. Interaction process analysis: A method for the study of small groups, Addison-Wesley, Reading, Massachusetts, 1950.

Barrett, K. M. The structure of movement tasks: A means for gaining insight into the nature of problem solving techniques, Quest, 15, January, 1971.

Batchelder, A. S. and Cheffers, J. T. F. CAFIAS: an interaction analysis instrument for describing student-teacher behaviors in different learning settings. Paper presented at International Conference, Quebec, July, 1976.

Batchelder, S. Process objectives and their implementation in elementary math, English, and physical education classes. Unpublished Doctoral Dissertation, Boston University, 1975.

Bechtold, W. A study of the effect of a tutorial relationship between volunteer high school students and moderately retarded peer aged students participating in individualized programs. Unpublished Doctoral Dissertation, Boston University, 1976.

Bellack, A. et al. The language of the classroom: meanings communicated in high school teaching, Part I, H.E.W. Cooperative Research Project No. 1497, Teachers College, Columbia University, 1963.

Bloom, B. S. (ed.) Taxonomy of educational objectives (Cognitive Domain), New York: David McKay Co., 1956.

Cheffers, J., Amidon, Edmund & Rodgers, Kenneth. Interaction analysis: an application to nonverbal activity, St. Paul, Minnesota: Paul S. Amidon & Associates, Inc., 1974.

Cheffers, J. and Mancini, V. Teacher training in psychological education, Final Report. Title III Grant (ESEA) No. 31-73-0004-1, Publication 93-10. Report available from Project Director, Fall River School System, Fall River, Massachusetts, or from the authors.

Cheffers, J., Batchelder, A. S., and Zaichkowsky, Linda. Racial integration through movement oriented programs, R.I.E., 1976.

Cheffers, J., Lowe, B., and Harrold, R. D. Sports spectator behavior assessment by techniques of behavior analysis, International Journal of Sport Psychology, Vol. 7, No. 1, 1976, pp. 1-13.

Cheffers, J. The validation of an instrument designed to expand the Flanders system of interaction analysis to describe non-verbal interaction, different varieties of teacher behavior and pupil responses. Unpublished Doctoral Dissertation, Temple University, 1972.

Chertok, H. A comparison of two methods of teaching ball handling skills to third grade students. Master's Thesis, Ithaca College, 1975.

Cohen, S. Therapist-client similarity and compatibility: their relationship to interpersonal interaction and speech patterns. Unpublished Doctoral Dissertation, Boston University, 1976.

Dewey, J. How we think, D. C. Heath & Co., 1933.

Dodds, P. Behavioral competency based peer assessment model for student teacher supervision. Unpublished Doctoral Dissertation, The Ohio State University, 1975.

Dougherty, N. J., IV. A plan for the analysis of teacher pupil interaction in physical education classes, Quest, 15, January 1971.

Duncan, J. K. and Hough, J. B. A content classification system. Unpublished Paper, 1966.

Evaul, T. A comparison of open and closed classrooms, Official Report to Maryland Department of Curriculum Studies, 1976.

Fishman, S. E. A procedure for recording augmented feedback in physical education classes. Unpublished Doctoral Dissertation, Teachers College, Columbia University, 1970.

Flanders, N. Analyzing teaching behavior, Reading, Massachusetts: Addison-Wesley Publishing Co., Inc., 1970.

Gage, N. L. (ed.) Handbook of research on teaching, Chicago: Rand McNally, 1963.

Hall, V. R. Managing behavior, Part I, Merriam, Kansas: H & H Enterprises, 1970.

Harrow, A. J. A taxonomy of the psycho-motor domain, New York: David McKay Co., 1972.

Hendrickson, C., Mancini, V., Morris, H. and Fisher, C. Interaction analysis of preservice physical educators teaching behavior. Paper presented at AAHPER Convention, Milwaukee, 1976.

Hill, J. C. The analysis of content development in classroom communication. Unpublished Paper, 1969.

Honnigman, F. K. Testing a three dimensional system for analyzing teacher influence. Unpublished Doctoral Dissertation, Temple University, 1966.

Horn, E. Distribution of opportunity for participation among the various pupils in classroom recitations. Teachers' College Contr. Educ., No. 67, 1914.

Hughes, M. M. What is teaching? One viewpoint, Educational Leadership, 19:251-259, 1962.

Hughley, C. Modification of teaching behaviors in physical education. Unpublished Doctoral Dissertation, The Ohio State University, 1973.

Hurwitz, R. The Trilasp system. Paper presented at Mini-Convention in Teaching-E.D.A., Northeastern University, 1975. (Paper available from author, Brockport State College.)


Keane, F. J. The relationship of sex, teacher leadership style, and teacher leader behavior in teacher-student interaction. Unpublished Doctoral Dissertation, Boston University, 1976.

Kielty, G. C. Differential multi-professional stimulus through interaction analysis. Unpublished Doctoral Dissertation, Boston University, 1975.

Krathwohl, D. R. (ed.) Taxonomy of educational objectives (Affective Domain), New York: David McKay Co., 1964.

Laban, R. and Lawrence, F. C. Effort, London: Macdonald & Evans, 1950.

Locke, L. Research & instruction in physical education. Concentape C. 60, available from AAHPER. Speech given at National Convention, AAHPER, Atlantic City, New Jersey, March 1975.

Mancini, V., Cheffers, J. T. F. and Zaichkowsky, L. D. Decision making in elementary children: effects on attitudes and interaction patterns, Research Quarterly, Vol. 47, No. 1, March 1976.

Martinek, T., Zaichkowsky, L., Cheffers, J. T. F. Decision making in elementary children: effects on self concept and motor skills, Research Quarterly, 1977.

Medley, D. M. and Mitzel, H. E. Measuring classroom behavior by systematic observation, in N. L. Gage (ed.) Handbook of Research on Teaching, Chicago: Rand McNally, 1963, pp. 247-328.

Melograno, V. Effect of teacher personality, teacher choice of educational objectives, and teacher behavior on student achievement. Unpublished Doctoral Dissertation, Temple University, 1971.

Miller, A. G., Cheffers, J. T. F., and Whitcomb, V. Physical education: teaching human movement in the elementary schools, Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1974.

Mosston, M. Teaching physical education, Columbus, Ohio: Charles E. Merrill Publishing Co., 1966.

Murray, C. K. The systematic observation movement, Journal of Research and Development in Education, Vol. 4, No. 1, Fall 1970, Athens, Georgia, pp. 3-7.

Nygaard, G. Interaction analysis of physical education classes, Research Quarterly, Vol. 46, No. 3, October 1975, pp. 351-357.

Plato. The dialogues of Plato: The Meno (translated by Benjamin Jowett), Oxford: The Clarendon Press, 4th edition (Vol. I), 1953.

Pratt, R. E. and Owen, S. An experimental study of student influence on teacher behavior. Paper presented at North Eastern Research Association, New York, 1973.

Puckett, R. C. Making supervision objective, School Review, 36, 1928.

Rochester, D., Mancini, V. and Morris, H. The effects of supervision and instruction in use of interaction analysis on teaching behavior and effectiveness of pre-service teachers. Paper presented at AAHPER Seattle Convention, 1977.

Rosenshine, B. and Furst, N. The use of direct observation to describe teaching, Chapt. 5 in R. M. W. Travers (ed.), Second Handbook of Research on Teaching, New York: Rand McNally, 1973.

Siedentop, D. Behavior analysis and teacher training, Quest, 18 (May 1972), pp. 26-32.

Siedentop, D. Developing teaching skills in physical education, Boston: Houghton Mifflin, 1976.

Siedentop, D., Rife, F. and Boehm, J. Modifying the managerial effectiveness of student teachers in physical education. Paper presented at Midwest Applied Behavior Analysis Convention, January, 1976.

Siedentop, D. and Hughley, C. Ohio State University teacher behavior scale, JOPER, Vol. 46, No. 2, February 1975, p. 45.

Simon, A. and Boyer, G. (eds.) Mirrors for behavior: an anthology of classroom observation instruments, Research for Better Schools, Inc., Broad Street, Philadelphia, 1967.

Thorndike, R. L. and Hagen, E. Measurement and evaluation in psychology and education (4th edition), New York: John Wiley & Sons, 1977.

Travis, W. The effect of selected affective and cognitive teaching skills on the teaching performance of doctoral fellows teaching social sciences. Unpublished Doctoral Dissertation, Boston University, 1977.

Webb, J. N. Taxonomy of cognitive behavior in systematic observation, Journal of Research and Development in Education, Vol. 4, No. 1, Fall 1970.

Withall, J. The development of a technique for the measurement of social-emotional climate in classrooms, Journal of Experimental Education, 17:357-361, March 1949.

Wright, E. M. and Proctor, V. H. Systematic observation of verbal interaction as a method of comparing mathematics lessons. H.E.W. Cooperative Research Projects, 1961.

Wrightstone, J. W. Measuring teacher conduct of class discussion, Elementary School Journal, 34, 1934.