TRANSCRIPT
MODULE EVALUATION AT OXFORD BROOKES
Ian Scott and Roger Grew
MODULE EVALUATION IS CONTENTIOUS
Tutors often perceive it as being assessed by those not in the best position to judge, using tools that are not fit for purpose.
REASONS TO DO IT
The process of module evaluation is part of the quality enhancement cycle and gives voice to learners' experiences. It has become an expectation of students and of other stakeholders.
To make summative judgements on lecturers’ (instructors’) performance
SOME (COMMON) ISSUES
It has become an industry: around 1.5 million data items per year at a moderate-sized university.
Evidence of sustained impact for institutionalised systems is lacking, particularly in the UK context.
SOME (COMMON) ISSUES
The ability (motivation and capability) to use the data generated well is very patchy.
SOME (COMMON) ISSUES - VALIDITY
Most module evaluation questionnaires used in the UK make no attempt to demonstrate construct validity; we simply do not understand how the questions we ask are interpreted.
Many tutors believe that module evaluations are a beauty contest, a view supported by Shevlin et al. (2000) but not by the much more extensive work of Marsh (2007).
SOME MORE (COMMON) ISSUES - VALIDITY
Unless every student answers the questions, the sample used in module evaluation is always biased. How it is biased is not well understood, but negative response bias was not found in studies by Kherfi (2001) or Benton et al. (2010).
Issues of subject bias are not acknowledged: students of some subjects are, on average, simply more unhappy than those of other subjects.
In the UK there are very few published reliability studies; in the United States, Marsh has produced very reliable SET questionnaires. Sample size is the most commonly discussed reliability issue.
‘Liberal conditions’: 10% sampling error; 80% confidence level; 70:30 split (responses 4 or 5 compared with 1, 2, 3)
‘Stringent conditions’: 3% sampling error; 95% confidence level; 70:30 split (responses 4 or 5 compared with 1, 2, 3)

Total no. of students   Liberal: required      Liberal: response     Stringent: required    Stringent: response
on the course           no. of respondents     rate required (%)     no. of respondents     rate required (%)
10                       7                      75                    10                     100
20                      12                      58                    19                      97
30                      14                      48                    29                      96
40                      16                      40                    38                      95
50                      17                      35                    47                      93
60                      18                      31                    55                      92
70                      19                      28                    64                      91
80                      20                      25                    72                      90
90                      21                      23                    80                      88
100                     21                      21                    87                      87
150                     23                      15                   123                      82
200                     23                      12                   155                      77
250                     24                      10                   183                      73
300                     24                       8                   209                      70

(Table reproduced from Nulty, 2008, based on a formula by Dillman, 2000)
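As a rough companion to the table, here is a minimal sketch of the finite-population sample-size formula from Dillman (2000) on which it is based. The z-values for the confidence levels and the rounding convention are assumptions for illustration, so the output may differ slightly from Nulty's published figures.

```python
import math

def dillman_sample_size(population, proportion=0.7, error=0.10, z=1.28):
    """Dillman (2000) finite-population sample size.

    population -- total students on the course (N)
    proportion -- expected response split (0.7 for a 70:30 split)
    error      -- acceptable sampling error (B), e.g. 0.10 or 0.03
    z          -- z-score for the confidence level (C); ~1.28 for 80%,
                  1.96 for 95% are assumed values here
    """
    pq = proportion * (1 - proportion)
    n = (population * pq) / ((population - 1) * (error / z) ** 2 + pq)
    return math.ceil(n)

# 'Liberal' vs 'stringent' conditions for a 100-student module
liberal = dillman_sample_size(100, error=0.10, z=1.28)
stringent = dillman_sample_size(100, error=0.03, z=1.96)
print(liberal, stringent)  # required respondents under each condition
```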
BROOKES’ MODULE EVALUATION STORY
Prior to Sept 2011: Module evaluation required, but the module leader determined how
Sept 2011: Uniform questionnaire tool and evaluation template introduced
Sept 2012: Questionnaire goes electronic
Nov 2012: Full integration with Moodle and APTT; open reporting; auto-populated module evaluation form
Sept 2013: In-class mobile device implementation
NSS Scores
85, 82, 85, 85, 87; now (2015) = 90
MODULE EVALUATION AND BUSINESS INTELLIGENCE AT BROOKES
Module evaluation is part of the bigger Brookes BI offering
Strategic decision to include Module Evaluation as an integral part of our BI reporting capabilities
• Increase the usage of existing BI facilities by academics
• Add further data sets to a growing academic database
• A convenient way of incorporating Module Evaluation data into composite Programme Review reports using the BI tools
MODULE EVALUATION AND BUSINESS INTELLIGENCE AT BROOKES
Brookes’ wider Business Intelligence story
A 6-year project so far, built on:
• Tried and tested technology
• An initially small-scale, pilot-based approach
• A journey aiming to build to an enterprise-wide solution
• Led by a small, dedicated "business side" team
The APTT architecture
[Architecture diagram: sources (the in-house student system; text and Excel files; HR and Finance planned) feed a source-transform engine into an Oracle data warehouse and a prototype data mart, with QlikView on top delivering KPIs, programme reviews, and student numbers.]
MODULE EVALUATION AND BUSINESS INTELLIGENCE AT BROOKES
Combining Module Evaluation and BI Reporting
1. Questionnaire run in classroom (Moodle)
2. Response data auto-downloaded every night
3. File saved as a simple CSV to an APTT external source location
4. Integrated pick-up and processing in APTT
5. Next day, appears in the Module Review Dashboard
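A minimal sketch of what the nightly pick-up and processing step could look like, assuming a simple staging table; the paths, file layout, and column names are illustrative assumptions, not the actual APTT implementation.

```python
import csv
import sqlite3
from pathlib import Path

# Hypothetical locations; the real APTT external source location will differ.
SOURCE_DIR = Path("/data/aptt/external/module_eval")
DB_PATH = "warehouse.db"

def load_nightly_responses():
    """Pick up each nightly CSV of module evaluation responses,
    load it into a staging table, then mark the file as processed."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS stg_module_eval
           (module_code TEXT, question_id TEXT, score INTEGER)"""
    )
    for csv_file in SOURCE_DIR.glob("*.csv"):
        with open(csv_file, newline="") as f:
            rows = [(r["module_code"], r["question_id"], int(r["score"]))
                    for r in csv.DictReader(f)]
        conn.executemany("INSERT INTO stg_module_eval VALUES (?, ?, ?)", rows)
        csv_file.rename(csv_file.with_suffix(".done"))  # mark as processed
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load_nightly_responses()  # e.g. run from a nightly cron job
```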
It is also possible to combine other data warehouse components with Module Evaluation data into dashboards (e.g. grades, domicile, and other characteristics), as in the sketch below.
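As an illustration of that kind of combination, a sketch joining evaluation scores to grades and domicile; the frames and column names here are hypothetical stand-ins for warehouse extracts.

```python
import pandas as pd

# Hypothetical extracts from the warehouse; real schemas will differ.
evals = pd.DataFrame({
    "student_id": [1, 2, 3],
    "module_code": ["U101", "U101", "U102"],
    "score": [4, 5, 2],
})
students = pd.DataFrame({
    "student_id": [1, 2, 3],
    "grade": [68, 72, 55],
    "domicile": ["UK", "EU", "Overseas"],
})

# Combine evaluation scores with other student characteristics,
# then summarise by domicile for a dashboard view.
combined = evals.merge(students, on="student_id")
summary = combined.groupby("domicile")[["score", "grade"]].mean()
print(summary)
```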
MODULE EVALUATION AND BUSINESS INTELLIGENCE AT BROOKES
Features and considerations for dashboard users:
• Single location to review own and institution-wide responses (benchmarks and history)
• Combines useful "at a glance" views with other data sets
• Convenient for end-of-semester review and report by the Module Lead
• Standard Review template incorporating Module Evaluation data, auto-generated by APTT (download button)
• Evolving process: some user adoption issues; not all academics fully engaged with report generation
LINK BETWEEN ENGAGERS AND NSS?
[Scatter plot: questionnaire response score (x-axis, 0.5 to 4.5) against NSS overall satisfaction, % (y-axis, 50 to 100)]
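Behind a plot like this sits a simple correlation between mean questionnaire response score and NSS overall satisfaction; below is a minimal sketch with illustrative numbers, not the Brookes data.

```python
from statistics import correlation  # Python 3.10+

# Illustrative data only: mean module response score vs NSS overall % satisfaction.
response_score = [3.8, 4.1, 3.2, 4.4, 3.9, 2.9]
nss_satisfaction = [82, 90, 75, 88, 70, 85]

r = correlation(response_score, nss_satisfaction)
print(f"Pearson r = {r:.2f}")  # a value near zero would suggest no obvious link
```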
AND
Marsh has consistently demonstrated that well-developed, validated SET questionnaires are reliable, both between instructors and over time.
The power of module evaluation questionnaire tools is at the individual module level and in the development of the individual instructor.
Our (preliminary) data suggest that there is no obvious link between overall student satisfaction and the act of collecting feedback from students via questionnaires.
Using technology, it is possible to create an open system that is effective but administratively lean, which is what we have done.
SO……