
Vital Signs: An Assessment Model for a University EAP

Terri Bieszka and Eva Copija

Center for English Language and Culture (CELCIS)

WESTERN MICHIGAN UNIVERSITY

History - Beginnings

• An Assessment Committee reviewed CELCIS's assessment procedure in 1996.

• The committee recommended that Language Specialists incorporate more alternative forms of assessment (journals, portfolios, etc.) into their student evaluations.

• In 1999, the CELCIS Director asked the Language Specialists to create an alternative form of assessment.

History – Beginnings Cont’d

Language Specialists:

• Attended presentations at TESOL Conferences related to assessment procedures

• Gave their own presentations about the ongoing CELCIS Assessment Project

• Discussed how to improve student assessment with other IEP professionals.

• Worked toward being consistent with established practice.

• e.g., the procedure for evaluating student work was modeled on ETS’s procedure for evaluating the TWE (Test of Written English) essay.

History – Development

• The Assessment Project began with an examination of the program’s mission statement and curriculum.

• Academic tasks were then selected for their ability to indicate student language proficiency and readiness to enter the university.

• Criteria for Recommendation for Entrance to the University were established.

History – Development Cont’d

• Five specific tasks (Study Guides, Research Paper, Writing Sample, Oral Presentation, and Lecture Notes) were chosen for evaluation.

• Students were asked to provide self-evaluations of their writing skills (in the 30-minute writing sample) and of their discussion skills.

History – Development Cont’d

• Specific rubrics for each task and class level were written.

• Samples of student work were collected to be used as Benchmarks for a given task

• A “Benchmark” was considered to be work that was Satisfactory, but neither stellar nor marginal (such as work that would receive a “B” grade, but not an “A” or “C”).

History – Development Cont’d

• Discussions on what exactly constituted “Satisfactory” work took place.

• Consensus emerged, resulting in more consistent and reliable evaluation.

• Language Specialists gained a deeper understanding of the curriculum.

History – Development Cont’d

Formation of links between:

• End-of-semester assessment and the Curriculum

• Mission Statement and Criteria for University Recommendation

• Language tasks and lessons in the classroom

• Rubrics and the final evaluations.

History – Development Cont’d

Results:

• More classroom time spent preparing students for these tasks, so that they can complete them successfully.

• More consistency with the goals of the program.

• More effective preparation for university study.

• Language Specialists were trained to use the Curriculum, Rubrics, and Benchmark samples to prepare the students and to evaluate the students’ resulting work.

History – Development Cont’d

Improvement in overall evaluation validity and reliability:

• Language Specialists were well-trained in the evaluation methods.

• Student work was scored objectively, with ID numbers used instead of names.

• Tasks themselves were chosen due to their relevance to the program’s mission and curricular goals.

History – Development Cont’d

Training for the Assessment Project:

• Included blind evaluations of previously graded student samples and discussion of the subsequent evaluation scores.

• If significant score discrepancies occurred between teachers, the reasons for and against assigning those scores were discussed.

• Over time, all of the above methods did indeed help to improve evaluation reliability.

History – Development Cont’d

• The portfolio-based assessment system was first used in the Fall 2003 semester.

• It has since been used during every Fall and Spring semester, and from 2010 also in Summer.

• Since the procedure was implemented, several modifications have been made (both to the products and to the evaluation procedure).

Current Products

• Oral presentations (all levels except pre-elementary; in Speaking/Listening classes)

• Summaries (all levels except pre-elementary; in Reading/Writing classes)

• Timed Writing Samples (all levels except pre-elementary; in Grammar/Communication classes)

• Research paper* (pre-advanced and advanced levels; in Reading/Writing)

• *program assessment only

Modifications in Products

• Study guides and lecture notes were eliminated

• Student self-evaluations were discontinued

• Summary replaced the research paper as an assessment product

• Oral presentations follow CELCIS Style Guide with recommended rhetorical formats for each level (from personal to argumentative)

• Research paper is now used only as a part of program assessment (not the student evaluation)

Modifications - Evaluation

• Products are rated as “Satisfactory” or “Unsatisfactory” and this rating is reported on progress reports

• Originally, the rating had no direct effect on the grade or promotion

• Since 2010, “Satisfactory” and “Unsatisfactory” ratings affect students’ final class grades (sketched below):

• A Satisfactory rating raises the grade

• e.g., B → BA

• An Unsatisfactory rating lowers the grade

• e.g., B → CB
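As a rough illustration, here is a minimal Python sketch of this one-step adjustment. The scale below assumes WMU’s letter-grade ladder (E up to A, with intermediate grades such as CB and BA), and the function name is hypothetical; this is not CELCIS’s actual grading code.

```python
# Illustrative sketch of the one-step grade adjustment described above.
# The scale ordering is an assumption, not CELCIS's actual system.
SCALE = ["E", "D", "DC", "C", "CB", "B", "BA", "A"]  # low to high

def adjust_grade(class_grade, rating):
    """Raise the grade one step for Satisfactory, lower it one step
    for Unsatisfactory, clamping at both ends of the scale."""
    i = SCALE.index(class_grade)
    step = 1 if rating == "Satisfactory" else -1
    return SCALE[max(0, min(len(SCALE) - 1, i + step))]

print(adjust_grade("B", "Satisfactory"))    # BA
print(adjust_grade("B", "Unsatisfactory"))  # CB
```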

Procedure - Preparation

CELCIS Curriculum Coordinator designates:

• Writing topic for the timed writing assessment (based on the compiled chart and schedule)

• Articles to be used for summary assessment (based on the compiled chart and schedule)

• Oral presentation topics used for assessment in Speaking/Listening classes are selected by the teachers themselves based on the Oral Presentation Style Guide

Procedure – Product Collection

Collecting student work:

• All samples are collected by teachers in the last two weeks of the class.

• Presentations are recorded by the instructors, who also collect copies of the PowerPoint slides

• On the written products, students’ names are replaced by their ID numbers (a “blind” evaluation scenario).

• Sample Collection Forms for each type of student work are filled out by the Language Specialist for that class.

Procedure – End-of-Semester

• Student samples, etc. are brought to two evaluation meetings at the end of the semester.

• Student products are organized by the Curriculum Coordinator and compiled into folders

• Curriculum Coordinator assigns teachers to committees. Each committee evaluates products from a specific level.

• Committees are assigned to work in specific rooms

Procedure - Calibration

• All members familiarize themselves with the appropriate rubrics for the task and proficiency level

• Then, they look at benchmark samples (which have previously been assigned a passing score) to calibrate their evaluations.

Procedure - Ratings

• The actual products are then read and scored as either “Satisfactory” or “Unsatisfactory.”

o Two readers must evaluate a sample as Satisfactory for that score to be final;

o Three readers must score a sample as Unsatisfactory for that score to take effect (see the sketch below).

o Unsatisfactory scores are accompanied by a written explanation (the Student Feedback Form) of the reasons for the score.
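One plausible reading of this consensus rule, sketched in Python for illustration; the helper name and the idea of tallying however many reads have come in are assumptions, not the committees’ actual procedure.

```python
# Illustrative sketch (not the actual CELCIS procedure): two
# Satisfactory reads finalize a pass, while three Unsatisfactory
# reads are required before that rating takes effect.
def final_rating(reads):
    if reads.count("Satisfactory") >= 2:
        return "Satisfactory"
    if reads.count("Unsatisfactory") >= 3:
        return "Unsatisfactory"
    return None  # no consensus yet; another reader is needed

print(final_rating(["Satisfactory", "Satisfactory"]))      # Satisfactory
print(final_rating(["Unsatisfactory", "Unsatisfactory"]))  # None
print(final_rating(["Unsatisfactory"] * 3))                # Unsatisfactory
```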

Procedure - Recording

• Scores are recorded in the Assessment Grid (Excel)

• Unsatisfactory ratings are accompanied by suggestions for improvement

• Assessment results are reported on the end-of-semester Progress Report and given to the student

• Assessment scores from the assessment grid are merged into progress report worksheets (a sketch of this step follows this list).

• Assessment scores affect the classroom grades, just like final exams, and may result in raising or lowering the grade.
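A minimal pandas sketch of the merge step, assuming hypothetical file names and columns (student_id, task, rating); the actual assessment grid and progress report worksheets are Excel files whose layout is not documented here.

```python
# Hypothetical sketch of merging assessment-grid scores into the
# progress report worksheets; file and column names are assumptions.
import pandas as pd

grid = pd.read_excel("assessment_grid.xlsx")      # columns: student_id, task, rating
reports = pd.read_excel("progress_reports.xlsx")  # columns: student_id, class_grade, ...

# Reshape to one row per student, one column per assessed task.
ratings = grid.pivot(index="student_id", columns="task", values="rating").reset_index()
merged = reports.merge(ratings, on="student_id", how="left")
merged.to_excel("progress_reports_with_ratings.xlsx", index=False)
```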

Procedure - Effects

• In borderline cases, assessment scores may affect a student’s advancement to the next level.

For example:

• A student with a class grade of “C” may fail the class if the assessment rating is Unsatisfactory

• A student with a class grade of “DC” may pass the class if the assessment result is Satisfactory

• They are also used to provide more information to the student and the Language Specialists about the learning that has taken place during the semester.

Assessment Review Meetings

• At the end of each end-of-semester evaluation meeting, time is set aside for feedback on the Language Tasks, Assessment Methods, and student results.

• At the end of the second assessment day, the Curriculum Coordinator leads a review meeting for the members of all the assessment committees

• When procedural problems are identified, suggestions are collected for making the path smoother next time.

• Minutes for these discussions are typed and distributed, in the hope that future semesters will yield more positive results.

Assessment Procedure Review

• Student results are examined for discrepancies,

• e.g., a set of samples for a given task with a large number of Unsatisfactory scores.

• A discussion on how to improve teaching or evaluation methods follows.

• Suggestions are made to the Curriculum Coordinator and the Curriculum Committee.

Classroom Grades vs. Assessment Grades

• The Assessment grade is based on a blind “snapshot” of one example of a student’s work for each task

• The Assessment is structured to eliminate the possible interference of a student’s personality or special circumstances on the grade decision.

• The Language Specialist’s classroom grade may have the advantage of reflecting a student’s work over time on the basis of more data.

Classroom Grades vs. Assessment Grades

• Combining the advantages of both classroom grades and assessment grades gives the program a more complete understanding of a student’s abilities.

• Finally, the classroom grades determine promotion to the next level; the Assessment grades have an effect on the promotion decision only in borderline cases.

Advantages/Rationale

• The Assessment Project helps to show CELCIS’s effectiveness in using the curriculum

• Students’ results can be compared with class grades, rubrics and curricular goals.

• Assessment system can provide more feedback to students and Language Specialists about student progress as well as areas of the curriculum that are not being successfully taught.

Advantages/Rationale

• Students’ results, the effectiveness of the assessment procedures, and the curriculum are all discussed by the Language Specialists and the Director.

• Ideally, students should have mostly “Satisfactory” scores on their Language Proficiency Profiles if they are receiving passing classroom grades.

• If several students in a given class received passing scores from the teacher but received “Unsatisfactory” evaluations in a blind review of their work, the discrepancy is noted, and methods for improving classroom teaching of the curriculum are discussed.
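One way such a discrepancy check could be automated is sketched below in pandas, assuming a combined table with hypothetical column names; the threshold of three students is an arbitrary illustration, not a CELCIS policy.

```python
# Hypothetical sketch of flagging classes where several students
# passed with the teacher but were rated Unsatisfactory in the blind
# review. Column names and the threshold are assumptions.
import pandas as pd

df = pd.read_excel("assessment_grid.xlsx")  # columns: class_id, class_grade, rating
passing = {"A", "BA", "B", "CB", "C"}

mismatch = df[df["class_grade"].isin(passing) & (df["rating"] == "Unsatisfactory")]
per_class = mismatch.groupby("class_id").size()
print(per_class[per_class >= 3])  # classes with three or more such students
```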

Advantages/Rationale

• Subsequent meetings attempt to find ways to integrate those improved methods, procedures, etc. into the classroom during the following semesters.

• Evaluating student work has also led to the evaluation of the effectiveness of the assessment tools and curriculum, creating a periodic review of both.

Advantages/Rationale

• Assessment results can help resolve challenges to a student’s classroom grades.

• Assessment grades provide additional information about a student’s work over the course of the semester.

• They make grade decisions more reliable.

• They provide objective feedback from more than one Language Specialist, which further strengthens the validity of the final decision.

Future Concerns

Current assessment issues under discussion include:

• Consistency of the rubrics with the feedback forms

• Improving inter-rater reliability through better calibration procedures

• Improving the program’s record-keeping methods to facilitate statistical analysis and to find out how assessment grades correlate with course grades (see the sketch below)
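A sketch of two such analyses in Python, using invented data for illustration: Cohen’s kappa for inter-rater agreement and a Spearman correlation between assessment ratings and course grades. The scikit-learn and SciPy calls are standard; the encodings and sample values are assumptions.

```python
# Illustrative sketch with invented data: inter-rater agreement and
# the assessment-rating vs. course-grade correlation.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Two readers' ratings of the same six samples (S/U), invented.
reader_a = ["S", "U", "S", "S", "U", "S"]
reader_b = ["S", "S", "S", "U", "U", "S"]
print("kappa:", cohen_kappa_score(reader_a, reader_b))

# Course grades encoded E=0 ... A=7, ratings encoded U=0 / S=1.
course = [5, 3, 6, 4, 2, 7]   # B, C, BA, CB, DC, A
rating = [1, 0, 1, 1, 0, 1]
rho, p = spearmanr(course, rating)
print(f"spearman rho={rho:.2f}, p={p:.3f}")
```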

Future Concerns Cont’d

• More clearly showing “level of proficiency” achieved at each level of the program

• Establishing learning outcomes for each course at each level that would facilitate interpretation of progress towards goals.

• Rethinking the class grade component of the assessment (resolving grade inflation issues)

Conclusion

• The Assessment Project has served not only as a source of additional feedback about the quality of student work, but also as a quality-control mechanism for classroom instruction and assessment.

• In conjunction with the Curriculum Review, it creates a broad review of CELCIS’s curriculum, methods of instruction and assessment, and their overall effectiveness.

Conclusion

• It has helped to make classroom instruction at CELCIS more consistent, more reliable, and more closely tied to the curriculum.

• It has provided more information about student progress to the Director, the Language Specialists, and to the students themselves.


Online Survey

To evaluate our presentation, please go to:

https://www.surveymonkey.com/r/HVHYD3P