DOCUMENT RESUME

ED 445 647    HE 033 337

AUTHOR       Ronco, Sharron L.; Brown, Stephanie G.
TITLE        Finding the "Start Line" with an Institutional Effectiveness Inventory. AIR 2000 Annual Forum Paper.
PUB DATE     2000-05-00
NOTE         29p.; Paper presented at the Annual Forum of the Association for Institutional Research (40th, Cincinnati, OH, May 21-23, 2000).
PUB TYPE     Reports - Evaluative (142); Speeches/Meeting Papers (150)
EDRS PRICE   MF01/PC02 Plus Postage.
DESCRIPTORS  College Outcomes Assessment; Course Evaluation; *Evaluation Methods; Faculty College Relationship; Higher Education; *Institutional Evaluation; Needs Assessment; Organizational Effectiveness; *Outcomes of Education; *Participative Decision Making; *Performance Based Assessment; Performance Factors; Portfolio Assessment; Program Evaluation; Questionnaires; *Self Evaluation (Groups); Student Evaluation; Student Evaluation of Teacher Performance
IDENTIFIERS  *AIR Forum

ABSTRACT
This paper describes a program to measure institutional effectiveness that involved development of an inventory checklist designed to stimulate faculty input on how student performance-related outcomes are measured and to evaluate current program effectiveness. The checklist, designed to be administered in an interview format, focused on three particular areas: (1) departmental mission, intended educational outcomes, and written methods of assessment for evaluating program effectiveness; (2) direct indicators for assessing students' knowledge and skills, including capstone courses, portfolio assessments, and licensure, certification, and professional exams, and indirect indicators for assessing students' and others' opinions of their learning, including student course evaluations, employer surveys, student exit interviews, and alumni surveys; and (3) using inventory results to identify needed program improvements and additional resources such as training, personnel, and technology. The paper concludes that using the inventory checklist provides a non-threatening way to assess student outcomes, and offers an excellent opportunity for initiating a systematic and continuing process for institutional effectiveness and educational improvement. The checklist is appended. (Contains 12 references.) (CH)
Finding the 'Start Line' with an Institutional Effectiveness Inventory
Sharron L. Ronco, Ph.D.
Assistant Provost, Institutional Effectiveness & Analysis

Stephanie G. Brown
Doctoral Candidate, Educational Leadership

Florida Atlantic University
777 Glades Rd.
Boca Raton, FL 33431
(561) 297-2665
(561) 297-2590 fax
sronco@fau.edu
sbrown@wolffpack.com
Paper presented at the 40th Annual Forum of the Association for Institutional Research, Cincinnati, Ohio, May 2000.
Finding the 'Start Line' with an Institutional
Effectiveness Inventory
Although the assessment movement is approaching the quarter-century mark, there are still outposts where the use of student performance-related outcomes has not entered the mainstream of institutional life. In order to open the conversation on assessment, we designed a checklist to inventory current campus practices. The checklist, designed to be administered in an interview format, gathered information on intended educational outcomes and on existing and potential assessment techniques and instruments. Results from the inventories were assembled into a diagnostic profile that identified resources, needs, and areas where centrally directed assessment activities would benefit all departments and programs.
It has been nearly a quarter of a century since assessment began as a way for a
college or university to measure its impact on students. Instead of focusing on inputs and
resources, assessment looks critically at student learning, articulating program goals,
setting standards for accomplishment, and measuring the extent to which standards are
met. The question, "What should our graduates know, be able to do, and value?" is the
focal point for beginning an assessment effort where outcomes matter most.
The mandate for assessment developed as various national commissions studying
higher education concluded that a "disturbing and dangerous mismatch exists between
what American society needs from higher education and what it is receiving"
(Wingspread Group on Higher Education, 1993). What began as an experimental
response to those concerns on the part of a handful of institutions has developed into a
national agenda. For the past 15 years, there has been relentless pressure for assessment
and the expectations associated with it (Gray, 1997).
Influences on student assessment have been exerted by national-level efforts,
state-level initiatives, accreditation agencies, the private sector and professional
associations. All six regional accrediting agencies now require evidence of student
assessment. According to a recent survey, all but four of fifty states reported some type
of student assessment, and the requirement to demonstrate effectiveness via student
outcomes has become part of performance funding in a number of states (Ewell, 1996;
Cole, Nettles & Sharp, 1997; Peterson and Einarson, 2000). Regardless of who guides
the effort to strengthen institutional accountability, it appears inevitable that student
outcomes assessment will endure as a topic of institutional, regional and national concern
into the 21st century.
Conceptual Framework
As at many institutions, our impetus for instituting a program of institutional
effectiveness was an impending regional accreditation visit. Undertaking assessment in
an atmosphere where many faculty are unfamiliar with the concept is a challenge,
particularly at a large, diffuse university where colleges are independent and faculty in a
single discipline may be scattered across several campuses. In order to lay the
groundwork for an initiative that is certain to claim a fair amount of time and resources
over the next several years, we decided to begin by polling departments on their existing
assessment activities. We chose to develop an inventory checklist to identify current
campus practices for assessing student performance-related outcomes.
The development of an inventory checklist was not a novel idea for such an
undertaking. The Southern Association of Colleges and Schools encourages its member
institutions to begin their evaluation of institutional effectiveness by conducting an
assessment of existing practices within the institution, and even suggests a format for the
inventory (Commission on Colleges of the Southern Association of Colleges and
Schools, 1996). Nichols (1995) discusses the importance and added value of conducting
an inventory of assessment activities. Many of his case study institutions reported the use
of an inventory as a means of collecting and reporting information (Nichols, 1993).
Palomba and Banta (1999) point out that completing an inventory of current data
collection methods may uncover several activities that, although not specifically designed
for program assessment, can help accomplish its purpose. Institutions using inventories
reported benefits ranging from the identification of disparities in existing operations and
programs to using inventory results as benchmarks to measure future assessment efforts
(Nichols, 1995; Hodge, 1997).
Administering the Inventory Checklist
The purpose of the inventory checklist was to stimulate conversation and provoke
thought among department chairs and their units about ways in which student
performance-related outcomes are measured and to evaluate current methods for overall
program effectiveness. The inventory checklist identified strengths, weaknesses, and
needed resources in individual programs. It highlighted areas in which programs had
proven methods of assessment in place, and uncovered methods of assessment not in
place but potentially useful. It allowed respondents to determine which methods were not
desirable or applicable for program assessment.
The inventory checklist was designed to be administered in an interview setting
and with three areas of interest in mind. The first area focused on departmental mission,
intended educational outcomes, and written methods of assessment for evaluating
program effectiveness. The second area focused on direct and indirect indicators of
assessment. Direct indicators assess knowledge and skills demonstrated by the student,
such as capstone courses, portfolio assessments, licensure, certification, or professional
exams, and video and audiotape evaluations. Indirect indicators assess students' and
others' opinions of their learning, such as student course evaluations, employer surveys
and questionnaires, student exit interviews, and alumni surveys. The third area focused
on the use of inventory results for overall program improvement, and identification of
needed resources such as training, personnel, and technology, for improving student
outcomes and program effectiveness.
Prior to the interview, a web search was conducted to identify educational
outcomes plans from other institutions that had completed, or were in the process of
completing, the SACS reaffirmation Self-Study. At least one educational outcomes plan
was printed for each discipline, so that chairs could have a tangible example of
assessment work done by their colleagues in other institutions. The plan, along with a
copy of the inventory checklist, was provided to each department chair a few days prior
to the interview.
The interview consisted of a one-on-one discussion with each department chair.
The focus of the interview was to complete the inventory checklist, gather any supporting
documentation offered, and ask as many questions as possible about specific outcomes
assessment methods used by each department or program. The inventory checklist
proved beneficial in gathering information in several ways. First, it stimulated an
exchange of information between the department chair and the interviewer regarding
specific outcomes assessment methods used by departments and programs. Second, it
delineated examples of many different types of outcomes assessment methods, thus
encouraging departments and programs to consider assessment in broader, and less
strictly quantitative, terms. Third, it revealed that some departments and programs
were already well on their way toward formal assessment activities.
Likewise, departments or programs lagging in formal assessment activities were
identified. Finally, departments and programs were able to identify and prioritize the
areas where programs, services, and operations needed to be improved.
The inventory checklist, coupled with individual interviews, proved to be an
effective method for gathering information about individual departments and programs.
The interview format was non-threatening, thereby allowing for an open and honest
dialogue between the interviewer and the department chair. Department chairs candidly
voiced their concerns, fears, and support regarding the reaffirmation project. Two-way
communication between the interviewer and the department chair was not only
stimulated but also encouraged. Each party was given the opportunity to ask questions,
not only concerning the department or program, but the reaffirmation project as well.
Administering the checklist to the administrative and academic support units took
a slightly different route. Believing that it was impractical and unnecessary to interview
each unit director separately, we instead attended scheduled staff meetings in each area
and explained the institutional effectiveness cycle, walking the participants through the
checklist. We asked for the inventories to be completed and returned to our office within
a month. The response rate was good, but the self-report aspect of this collection effort
made the results a little less credible, particularly where documentation was not attached
to the checklist.
Using the Checklist Results
Information from the completed checklists was first organized onto a spreadsheet
to simplify analysis of results (Figure 1). Tallied results indicated that 80% of the
departments reported having written mission statements, 43% said that they had
Figure 1. Sample spreadsheet data for compiling checklist results.

College                          Business        Business        Business        Business
Department                       Marketing       School of Acct. Economics       Finance
Program                          Marketing       Accounting      Economics       Finance
Program level                    BS              MACC            MAT             BS
Written mission statement?       Yes             Yes             Yes             Yes
Intended educational outcomes?   Yes             Yes             No              Yes
Written methods of assessment?   Yes             Yes             No              Yes
Separate accreditation?          AACSB           No              AACSB           AACSB

Direct indicators of assessment:
Comprehensive exam               Currently use   N/A             N/A             Would use
Writing proficiency exam         N/A             Currently use   N/A             N/A
National exam                    Would use       N/A             Would use       N/A
GRE subject test                 N/A             N/A             N/A             N/A
Certification exam               N/A             Currently use   N/A             N/A
Licensure exam                   N/A             Currently use   N/A             N/A
Local pretest-posttest           Would use       N/A             Would use       N/A
Performance assessment           N/A             N/A             N/A             N/A
Video/audio tape evaluation      N/A             N/A             N/A             N/A
Senior thesis/major project      N/A             Currently use   N/A             Currently use
Portfolio evaluation             N/A             N/A             N/A             N/A
Capstone courses                 Would use       N/A             N/A             Currently use
articulated intended educational outcomes, and 37% had defined methods of assessment
for evaluating program effectiveness in terms of measurable student outcomes. Almost
all reported that they had used assessment, however informally obtained, to improve
programs.
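
As an aside, the tallying step is simple enough to automate. What follows is a minimal sketch, not the authors' actual tooling: it assumes the compiled spreadsheet (Figure 1) has been exported to a hypothetical CSV file with one row per program and the Part I questions as column headers.

import csv
from collections import Counter

# Hypothetical export of the Figure 1 spreadsheet: one row per program,
# with "Yes"/"No" responses under each Part I question.
QUESTIONS = [
    "Written mission statement?",
    "Intended educational outcomes?",
    "Written methods of assessment?",
]

with open("checklist_results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for question in QUESTIONS:
    tally = Counter(row[question] for row in rows)
    pct_yes = 100.0 * tally["Yes"] / len(rows)
    print(f"{question:35s} {pct_yes:5.1f}% Yes")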
The ordering of direct measures shown in Figure 2 clarified which indicators were
most frequently used across departments. Direct measures are those that require students
to display their knowledge and skills as they respond to the measurement itself. Senior
projects topped the list of most frequently used direct indicators, followed by the GRE
subject tests, capstone courses and comprehensive exams. The "white space" between
shaded bars was of special interest, since it points to indicators departments would be
willing to consider making a part of their program assessments. For the most part, white
space was scarce for direct indicators; departments were either using the indicator or
were not interested in using it. Measures requiring performance rather than testing were
slightly more popular. Since fewer than half of the departments were either using or
planning to use most direct measures, it became obvious that these measures were going
to be a hard sell.
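
The ordering itself, and the "white space," can be computed from the same compiled file. Again a sketch under the same hypothetical CSV layout, with one column per indicator holding the checklist codes spelled out as "Currently use," "Would use," or "Not interested."

import csv
from collections import Counter

INDICATORS = [
    "Comprehensive exam", "Writing proficiency exam", "National exam",
    "GRE subject test", "Certification exam", "Licensure exam",
    "Local pretest-posttest", "Performance assessment",
    "Video/audio tape evaluation", "Senior thesis/major project",
    "Portfolio evaluation", "Capstone courses",
]

with open("checklist_results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

counts = {ind: Counter(row[ind] for row in rows) for ind in INDICATORS}

# Sort by current use, descending, as in Figure 2. The "Would use" count
# is the white space between the shaded bars: departments willing to
# consider adopting the indicator.
for ind in sorted(INDICATORS, key=lambda i: -counts[i]["Currently use"]):
    c = counts[ind]
    print(f"{ind:30s} currently={c['Currently use']:3d} "
          f"would={c['Would use']:3d} not={c['Not interested']:3d}")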
The assessment picture for indirect indicators looked more hopeful (Figure 3).
Indirect measures ask students, or those connected with them, to reflect on their learning
rather than demonstrate it. Aside from course evaluations, which most departments
engage in regularly, there was low participation but high interest in using most of the
indirect indicators. These generally fell into two categories: those involving analysis of
existing data ("Analysis of grade distribution," "Examination of department data,"
"Comparison with peer institutions") and those requiring collection of outcomes data on
graduates. Having an empirical identification of these departmental needs gave our
office the mandate we felt was necessary to proceed with centrally directed assessment
activities.
Arraying the indicators by college provided a more detailed diagnosis of
assessment activities and needs. For example, Figure 4 illustrates that most departments
in the College of Arts and Letters had little interest in direct measures. Although
licensure and certification exams are not used in these disciplines, the rejection of almost
any kind of direct display of knowledge and skills was disconcerting. More favorable
was the interest displayed in using indirect indicators. The patterns within as well as
across the columns quickly profiled those departments likely to be cooperative in
assessment (Communications) vs. the probable resisters (Political Science). The contrast
of Arts and Letters with the profiled College of Education (Figure 5) is revealing.
Education has separate discipline accreditations and is accustomed to demonstrating
student outcomes. Their profile identifies them as potential cheerleaders or advocates
whose experiences might be used to energize their colleagues in other colleges.

[Figure 2. Direct indicators of assessment in academic departments. Bar chart showing, for each direct indicator (senior thesis/major project, GRE subject test, capstone courses, comprehensive exam, portfolio evaluation, video/audio tape evaluation, performance assessment, writing proficiency exam, certification exam, national exam, local pretest-posttest, licensure exam), the number of departments (0-60) responding "Currently use," "Would use," or "Not interested."]

[Figure 3. Indirect indicators of assessment in academic departments. The same display for student course evaluation, curriculum/syllabus analysis, other evaluation of course instruction, identification/assessment of at-risk students, internship evaluation, community perception, graduation/retention rates, analysis of student grade distribution, community service participation, focus group discussion, examination of department data, job placement, alumni survey, student satisfaction survey, comparison with peer institutions, exit interviews, employer survey, graduate school acceptance rates, tracking alumni honors/awards, and performance in graduate school.]
A parallel ordering of assessment indicators for administrative and academic
support units showed a number of indicators already in use (Figure 6). The ones
generating most interest for future use were measures that could be implemented using a
centrally administered client satisfaction survey, and comparisons with peer institutions.
When asked what resources were needed to develop better methods for assessing
outcomes, department chairs consistently mentioned financial and human support,
training and technology. Assistance with development of instruments and tracking of
program graduates was frequently requested. Department chairs also expressed concern
over the time required to identify, gather and produce documentation and information
supporting departmental efforts for improving student performance-related outcomes.
They urged establishment of a timeline delineating tasks, responsible individuals,
resources and due dates to ensure that the process stayed on track. They believed that
this type of coordinated effort would best be achieved with institution-wide leadership
and support from the Office of Institutional Effectiveness and Analysis.
The inventory checklist analysis led us to the sobering realization that we had far
to go before accreditation time. We promptly arranged for outside consultants to conduct
a two-day workshop to further acquaint administrators, faculty and staff with the
concepts of institutional effectiveness and provide expert guidance for those willing to
get started with their plans. A decision was made to form an Academic Programs
Assessment Committee with representation from all colleges to oversee the development
of program assessment plans at all degree levels. Since research has shown that the
biggest obstacle to implementing student outcomes assessment is faculty resistance
(Maki, 1999), there was a strong effort to extend Committee membership to senior
faculty with few or no administrative responsibilities. The Committee will recommend
approval or modification of the plans to the Provost. The expressed need for technical
support to assist departments in instrument identification and data analysis resulted in the
funding of a new position for an assessment coordinator. Meanwhile, the SACS Self-Study
Committee charged with Institutional Effectiveness reviewed the inventories and
prepared an initial report for the administration on the compliance status with Section III
of the Criteria. As the process of developing assessment plans proceeded, the checklists
served as a baseline for measuring progress.

[Figure 4. Profile of indicators, College of Arts and Letters. For each department (Sociology, English, Anthropology, Communication, History, Art, Political Science, Music), each direct and indirect indicator is coded A = currently used, B = would use, or C = not interested.]

[Figure 5. Profile of indicators, College of Education. The same coding for departments including Educational Leadership, Educational Technology & Research, Exceptional Student Education, Teacher Education, and Counselor Education.]

[Figure 6. Assessment indicators in administrative and academic support units. Bar chart showing, for each indicator (staff discussions/evaluations, measures of volume of activity, methods to get client feedback, review of existing data, measures of efficiency, client satisfaction survey, measures of service quality, standards set by federal, state, county, city or FAU regulations, benchmarks/comparisons with peer institutions, standards/guidelines by professional associations, external evaluators/auditors), the number of units (0-50) responding "Currently use," "Would use," or "Not interested."]
Conclusion
The assessment of student performance-related outcomes offers an excellent
opportunity for initiating a systematic and continuing process for gathering, interpreting,
and using information that will result in increasing institutional effectiveness and
educational improvement. Although a systematic process for assessing student
performance-related outcomes is mandated by accrediting agencies, the initiative can be a
catalyst for many positive changes in improving educational outcomes and institutional
effectiveness.
For us, the inventory checklist was a way to introduce the concept of student
outcomes assessment in a nonthreatening, low-risk, collegial context. It allowed us to
answer questions, allay fears, and confront resistance on an individual basis. Talking in
person is particularly useful for those who are new to the process. By reassuring faculty
and chairs that many of their existing efforts were already moving in the right direction,
we bought goodwill for the project. Reluctant faculty were often pleased to discover that
assessment did not necessarily involve only quantitative measurements and did not
require standardized testing. Finally, the inventory results got the attention of campus
administrators, whose leadership was critical for ensuring that assessment efforts stayed
on track.
Two minor limitations of the checklist surfaced during its use. There was some
initial confusion over the role this information would play in the Self-Study process.
Some chairs were concerned that the information submitted via the checklist would
represent their department's final, rather than initial, status with respect to institutional
effectiveness and that they would not have an opportunity to update their progress.
Second, the examples of direct and indirect measures on the checklist, while
comprehensive, were not intended to preclude other kinds of measures that might be
devised for program assessment. There was some risk that faculty and administrators
would not look beyond these indicators for others that might be more appropriate for their
programs.
Successful assessment is more than a collection of techniques, instruments and
outcomes; it is a cultural issue that affects how a community of scholars defines its
responsibilities to its students (Magruder, McManis & Young, 1997). Now that we've
identified the 'Start Line', the goal of transforming the information collected via the
inventories into a successful assessment culture seems more attainable.
References
Cole, J.J.K., Nettles, M.T., & Sharp, S. (1997). Assessment of teaching and learning for improvement and accountability: State governing, coordinating board and regional accreditation association policies and practices. University of Michigan: National Center for Postsecondary Improvement.

Commission on Colleges of the Southern Association of Colleges and Schools (1996). Resource Manual on Institutional Effectiveness, Third Edition.

Ewell, P.T. (1996). The current pattern of state-level assessment: Results of a national inventory. Assessment Update, 8(3), 1-2, 12-13, 15.

Gray, P.J. (1997). Viewing assessment as an innovation: Leadership and the change process. In T.W. Banta & P.J. Gray (Eds.), The Campus-Level Impact of Assessment: Progress, Problems and Possibilities. New Directions for Higher Education, 100 (Winter), 5-15.

Hodge, V. (1997). Student learning outcomes assessment and institutional effectiveness at Bellevue Community College: Report of the 1996-97 assessment inventory. ERIC ED 411 009.

Magruder, J., McManis, M.A., & Young, C.C. (1997). The right idea at the right time: Development of a transformational assessment culture. New Directions for Higher Education, 100 (Winter), 17-29.

Maki, P.L. (1999). A regional accrediting commission's survey on student outcomes assessment and its response. Assessment Update, 11(3).

Nichols, J.O. (1993). Assessment Case Studies: Common Issues in Implementation with Various Campus Approaches to Resolution. New York: Agathon Press.

Nichols, J.O. (1995). A Practitioner's Handbook for Institutional Effectiveness and Student Outcomes Assessment Implementation. New York: Agathon Press.

Palomba, C.A., & Banta, T.W. (1999). Assessment Essentials: Planning, Implementing and Improving Assessment in Higher Education. San Francisco: Jossey-Bass.

Peterson, M.W., & Einarson, M.K. (2000). Analytic framework of institutional support for student assessment. In J.C. Smart (Ed.), Higher Education: Handbook of Theory and Research, Volume XV. Bronx, NY: Agathon Press.

Wingspread Group on Higher Education (1993). An American Imperative: Higher Expectations for Higher Education. Milwaukee: Johnson Foundation.
INSTITUTIONAL EFFECTIVENESS CHECKLIST
For Academic Departments
College:
Department:
Program (Major):
Level:
Part I. Does the program listed above have:
1. A written mission or statement of purpose? Yes   No
If yes, please attach a copy or reference a website and/or catalog for retrieval of this information.
2. Statements of intended educational outcomes?
(This term describes what the departmental faculty intend for a student to be able to think, know, or do when they've completed a given educational program.)
Yes No
3. Written methods of assessment for evaluating program effectiveness in terms of measurable student outcomes?
Yes No
4. A separate accreditation agency or process? Yes   No
If yes, please list all accreditation agencies.
Part II. Assessment of Outcomes:
During the past year, has your program used any of the following for assessment of outcomes?
Indicate "A" if currently being used; "B" if not currently being used but interested in using; and "C" if not applicable.
Direct indicators of assessment:
1. Comprehensive exams
2. Writing proficiency exams
3. National exams assessing subject matter knowledge (e.g., Major Field Achievement Test)
4. Graduate Record Examination (GRE) subject test
5. Certification exams
6. Licensure exams
7. Locally developed pre-test or post-test for mastery of knowledge
8. Performance assessment for graduating seniors (i.e., recitals, art exhibits, science projects, etc.)
9. Video and audio tape evaluations (i.e., music, art, student teaching, etc.)
10. Senior thesis/major project
11. Portfolio evaluation containing representative examples of student's work (i.e., written, creative, or scientific papers or projects)
12. Capstone courses which are designed to measure student mastery of essential theoretical and methodological issues associated with a discipline (e.g., senior level seminars)
Indirect indicators of assessment:
1. Comparison of outcomes with peer institutions
2. Job placement of graduating students
3. Employer surveys and questionnaires
4. Graduate school acceptance rates
5. Performance in graduate school
6. Student graduation/retention rates
7. Exit interviews
8. Student satisfaction surveys
9. Student course evaluations
10. Internship evaluation
11. Focus group discussions
12. Alumni surveys reporting satisfaction with degree program and career success
13. Tracking of alumni honors, awards, and achievements at local, state, and national levels
14. Identification and assessment of at-risk students
15. Analysis of student grade distributions
16. Examination of information contained in department's own database
17. Other evaluations of course instruction (e.g., chair or peer review)
18. Curriculum/syllabus analysis (e.g., analysis of transfer student preparation)
19. Community perception of program effectiveness
20. Community service/volunteerism participation
21. Other:
Part III. Other Information
1. Has your department used any of the indicators listed above to improve departmental programs, services, and operations? Yes   No
If yes, please identify some examples.

2. What resources (i.e., training, personnel, technology, etc.) does your department need to develop better methods for assessing student outcomes and improving program effectiveness?
3. Please list any additional comments or concerns.
Completed by: Date:
INSTITUTIONAL EFFECTIVENESS CHECKLIST
For Administrative and Academic Support Units
Unit: Campus:
Part I. Does your unit have:
1. A formal statement of purpose which supports FAU's mission and goals?
Yes (please attach a copy)   No
2. Explicit goals which support this unit's purpose?
Yes   No
3. Procedures to evaluate the extent to which goals are being achieved?
Yes   No
Part II. Evaluation Measures
During the past year, has your unit used any of the following for assessment of outcomes?
Indicate "A" if currently being used; "B" if not currently being used but interested in using; and "C" if not applicable.
1. Measures of volume of activity
Examples: Number of clients served, circulation data, gross sales.
Specify:
2. Measures of efficiency
Examples: Average turnaround time for filling requests, timely service/prompt response, budget information.
Specify:
3. Measures of service quality
Examples: Error rates, accuracy of the information provided.
Specify:
4. Client satisfaction surveys
Examples: Student satisfaction survey, alumni survey, employer survey, customer survey.
Specify:
5. Other methods to obtain client feedback
Examples: Focus groups, comments via email, evaluation forms, suggestion box, hotline.
Specify:
Part II. Evaluation Measures - continued
6. Staff discussions/evaluations of services to clients
7. Review of existing data
Examples: Departmental routine records/reports, institutional data, audits.
Specify:
8. Standards/guidelines provided by professional associations such as SCUP, NACUBO
9. Standards set by federal, state, county, city or FAU regulations
10. External evaluators/auditors
11. Benchmarks/comparisons with peer institutions
12. Other:
Part III. Other information
1. Have you used the results of any of the evaluation measures listed above to improve administrative and academic support services and operations? Yes   No
If so, please identify some examples.
2. What resources (i.e., training, personnel, technology, etc.) does your unit need to develop better methods for assessing service outcomes and improving service quality and effectiveness?
3. Please list any additional comments or concerns.
Completed by:                Date:
Please return your completed form to:
Institutional Effectiveness and Analysis
SO 303