STUDENT FEEDBACK
A report to the Higher Education Funding Council for
England
Lee Harvey
October 2001
Centre for Research into Quality
The University of Central England in Birmingham,
Perry Barr, Birmingham, B42 2SU
Lee Harvey asserts his moral rights to be identified as the author of this work under the
Copyright, Designs and Patents Act 1988.
© Lee Harvey, 2001
STUDENT FEEDBACK
Lee Harvey
In the 1980s, feedback from students about their experience in higher education was a
rarity. With the expansion of the university sector, the concern with quality and the growing ‘consumerism’ of higher education, there has been significant growth of, and sophistication in, the processes designed to collect views from students.
Most higher education institutions, around the world, collect some type of feedback from
students about their experience of higher education. ‘Feedback’ in this sense refers to the
expressed opinions of students about the service they receive as students. This may
include perceptions about the learning and teaching, the learning support facilities (such as libraries and computing facilities), the learning environment (lecture rooms, laboratories,
social space and university buildings), support facilities (refectories, student
accommodation, health facilities, student services) and external aspects of being a student
(such as finance, transport infrastructure).
Student views are usually collected in the form of ‘satisfaction’ feedback. Sometimes there
are specific attempts to obtain student views on how to improve specific aspects of
provision or on their views about potential or intended future developments but this is
less usual.
Ironically, although feedback from students is assiduously collected in many institutions,
it is less clear that it is used to its full potential. Feedback from students has two main
functions:
internal information to guide improvement;
external information for potential students and other stakeholders.
Improvement
It is not always clear how views collected from students fit into institutional quality
improvement policies and processes. To be effective in quality improvement, data
collected from surveys and peer reviews must be transformed into information that can be
used within an institution to effect change.
To make an effective contribution to internal improvement processes, views of students
need to be integrated into a regular and continuous cycle of analysis, reporting, action and
feedback (Figure 1).
In many cases it is not always clear that there is a means to close the loop between data
collection and effective action, let alone feedback to students on action taken.
For this to happen, the institution needs to have in place a system for:
identifying and delegating responsibility for action;
encouraging ownership of plans of action;
accountability for action taken or not taken;
feedback to generators of the data;
committing appropriate resources.
Establishing this is not an easy task, which is why so much data on student views is not
used to effect change, irrespective of the good intentions of those who initiate the
enquiries.
It is more important to ensure an appropriate action cycle than it is to have in place
mechanisms for collecting data. At UCE, for example, there is a clear mechanism for
dealing with and acting on the data (see Appendix 1). At Edinburgh University, the focus is on reporting action taken in response to the feedback obtained.
There is no fixed approach, rather there is encouragement for the collection of feedback
using a variety of sources and methods, as seems appropriate at the course, department or
school level. This is backed up by procedures such as faculty-based audits of annual
course monitoring returns.
External information
In an era where there is an enormous choice available to potential students the views of
current students offer a useful information resource. Yet very few institutions make the
outcomes of student feedback available externally. UCE, for example, is unusual in
publishing its institution-wide student feedback survey (which reports to the level of faculty and major programmes). It is available on a public website and is published as a hard-copy document with an ISBN, which has been the case since its inception in
the late 1980s.
Institutions abroad that have implemented the UCE Student Satisfaction approach, including Auckland University of Technology and Lund University, have published the results. However, the norm in Britain is to consider that student views are confidential to
the University.
If the data is to be useful as an information resource, it is important that it is seen to be
collected professionally and impartially, preferably by a unit outside the faculty structure.
Types
The predominant ‘satisfaction’ survey takes five forms:
institution-level satisfaction with the total student experience or a specified sub-set;
faculty-level satisfaction with provision;
programme-level satisfaction with the learning and teaching and related aspects of a particular programme of study (for example, BA Business Studies) [1];
module-level feedback on the operation of a specific module or unit of study (for example, Introduction to Statistics);
teacher-appraisal by students.

[1] In some institutions programmes of study are referred to as ‘courses’ or ‘pathways’. However, ‘course’ is a term used in some institutions to mean ‘module’ or ‘unit’ of study, that is, a sub-element of a programme of study. Due to the ambiguity of ‘course’, the terms ‘programme of study’ and ‘module’ will be used in this paper.
Institution-level satisfaction
Systematic, institution-wide student feedback about the quality of their total educational
experience is an area of growing activity in UK higher education institutions. It is also a
growing concern in other countries around the world.
Institution-level satisfaction surveys are almost always based on questionnaires, which
mainly consist of questions with pre-coded answers augmented by one or two open
questions. In the main, these institution-wide surveys are undertaken by a dedicated unit
with expertise in undertaking surveys and producing results to schedule.
Institution-wide surveys tend to encompass most of the services provided by the
university and are not to be confused with standardised institutional forms seeking
feedback at the programme or module level (discussed below). In the main, institution-
wide surveys seek to collect data that provide:
management information designed to encourage action for improvement;
a descriptive overview of student opinion, which can be reported as part of
appropriate accountability procedures.
The derivation of questions used in institution-wide surveys varies. The UCE approach
uses student-determined questions, usually via focus groups. In other institutions,
management or committees decide on the questions. Sometimes, institutions use or adapt
questionnaires developed at other institutions.
The way the results are used also varies. In some cases there is a clear reporting and
action mechanism. In others, it is unclear how the data helps inform decisions. In some
cases the process has the direct involvement of the senior management, while in other
universities action is realised through the committee structure.
Feedback to students of outcomes of surveys is recognised as an important element but is
not always carried out effectively, nor does it always produce the awareness intended. Some institutions utilise existing lines of communication between tutors and students, or go through the student unions and student representatives. All of these forms depend upon the
effectiveness of these lines of communication. Other forms of feedback used include articles in university magazines, posters and summaries aimed at students.
There are many variants, as the following brief review of a small sample illustrates.
Examples
The satisfaction approach developed at UCE is the market leader and has been adopted
by many institutions in the UK and abroad. As implemented at UCE it provides a basis
for internal improvement in a top-down/bottom-up process that involves the vice-
chancellor and deans and directors of services. There is a well-developed analysis,
reporting, action and feedback cycle. The results of the student feedback questionnaire
are reported to the level of faculty and programme. The report is written in an easily
accessible style, combining satisfaction and importance ratings, which clearly shows areas of excellence and areas for improvement. The report is published (with an ISBN) and
is available in hard copy and on a public website. The action that follows the survey is
reported back to the students through an annual publication. (See Appendix 1 for details.)
Institutions that have used the UCE model include Sheffield Hallam University,
Glamorgan University, Cardiff Institute of Higher Education, Buckingham College of
Higher Education, University of Greenwich as well as overseas institutions such as
Auckland University of Technology (New Zealand), Lund University (Sweden), City
University (Hong Kong) and Jagiellonian University (Poland). All of these institutions
have a similar approach, collecting student views to input into management decision
making. Where they vary, to date, is in the degree to which they make the findings public
and produce reports for students outlining actions that have resulted from the survey.
At the University of Greenwich, the institution-wide survey is a census of students on
half the campuses each year. The 16-page questionnaire covers all aspects of the student
experience and is based on items determined by students. The printed report and
executive summary (which commends excellence and details the areas for improvement)
go through the committee structure; the learning and quality office is responsible for
liaising with heads of department and schools to produce an action report and then an end-
of-year implementation report, which is also circulated to committees. All results, action
reports and implementation reports are available on the website to all staff and to all
students and a mid-year newsletter is produced. Staff and students are kept informed of
action via e-mails and via articles in the student newspaper and staff magazine. There are
many outcomes as a result of the survey. The schools and campus management groups
have increasingly taken these results seriously. For instance, between 1998 and 2000, the
School of Education made numerous changes, which resulted in considerable improvement
in the four unsatisfactory ratings for teaching and learning. These changes were noted by
the external reviewers.
Southampton Institute has run an annual questionnaire since 1993, which endeavours to capture the ‘total’ student experience. This has evolved over the years to include wider
issues and, in the last two years, has addressed the question of importance. The Institute
currently has two main student experience surveys, one for undergraduate students and
another for postgraduates. They are annual and the questionnaire items are selected by an
Institute working group whose members are drawn from across the institution. The
survey is reported at faculty and programme-level and the individual results are sent to
course leaders. Faculty summaries are sent to the deans for their response and then the
Institute-level summary response is sent to the senior management group for theirs. A full
report is then published with summary extracts in the Student Union’s newsletter. Many outcomes can be linked to the survey, such as more IT resources, improved catering facilities and streamlined enrolment processes. The annual course monitoring exercise and trend analysis contained in the annual reports ensure that action follows the surveys. Each course leader must include the feedback from the survey in their annual
course report. Action that ensues is reported back to students via the Student Union’s
newsletter and the student representatives.
UEL had an annual institution-wide student feedback questionnaire during the 1990s and, after a break, will reintroduce it. The five-page questionnaire had many standard questions, repeated from year to year, about the overall quality of the student experience of the
‘would you recommend UEL to a friend?’ type. Each year specific issues were identified
for inclusion on the questionnaire. These could arise from discussions at Quality
Assurance & Enhancement Committee (QAEC), or at its Services Quality Sub-
Committee (SQSC). The overall process was overseen by the UEL Quality Assurance
Department. The analysed responses from the questionnaire were widely circulated
within the University to inform the Annual Quality Improvement/Annual Monitoring
process in academic schools and central service departments. It also went to QAEC, and issues raised requiring an institutional response were incorporated into QAEC’s annual report to Academic Board. This typically resulted in a reference to a senior postholder or head of department, with a requirement for a response on actions taken or to be taken to come back to Academic Board. The summary analysis from the questionnaire also went
to the Student/Staff consultative committee as did responses from senior management.
In 1996, the University of Portsmouth first produced a student satisfaction survey, which
covered most aspects of the student experience, and reported it to the level of faculty.
However, reporting was in the form of statistical tables that did not make it easy to identify where action might be required for improvement. The report was regarded as a ‘significant input to quality improvement’ and the intention was that ‘action for improvement follows from discussions’.
The University of Nottingham’s four-page ‘Omnibus Survey’ has, since 1994, been used to survey all student cohorts on their experiences as students, measuring their
satisfaction with the range of services offered. Although relating to many services, the
survey is dominated by questions relating to choice of Nottingham as an institution at
which to study, its reputation, the induction process and the general facilities, including
accommodation and computing. There is nothing about the learning and teaching
situation or course organisation, which are consigned to module and optional course
surveys. Each student cohort is surveyed biennially. The items on the questionnaire have
evolved since 1994 through a consultation process with the Students’ Union, the
academic secretary’s department, the registrar and the pro-vice-chancellor for student
affairs. There are fixed questions that are retained for longitudinal analysis, and these are important for tracking trends over time. However, some space on the questionnaires is used for focusing on different
subjects as appropriate. Reports are produced and distributed to staff and managers. The
results are also put on the web site. The reports are broken down by service provision, and service providers are asked to respond to the findings. Where findings are negative, providers are expected to outline the problem and strategies for tackling the issue. There have been
outcomes that could be linked to the omnibus survey results, for example, the abolition
of single sex halls and changes in residential hall meal times. The survey office does not
always know what changes are made as a result of the omnibus, but it does know that the findings are taken very seriously and that changes do occur as a result.
The University of Limerick undertook an institution-wide survey in 2000, including
satisfaction with course organisation, teaching, learning resources and self-development.
The final report was only at the institution level although the research objectives were to
evaluate by department and course as well.
A number of universities have undertaken institution-wide surveys, often on a census
basis, exploring a limited number of areas of student opinion. These are often only
reported internally and are not explicitly tied to a process of feedback and action. The
surveys undertaken at Liverpool John Moores and Leeds Metropolitan appear to be of
this type. Leeds Metropolitan University (LMU) used academic committees at different
levels to interpret the results and indicate future action. At Liverpool John Moores (JMU), breakdowns were provided at school and programme levels and were intended to feed into action planning. A number of these institutions report that it is difficult to pinpoint specific
action resulting from the survey findings.
The University of Plymouth has been running an institution-wide undergraduate Student Perception Questionnaire (SPQ) since 1995. This was extended to include partner colleges from
1998, and a pilot postgraduate questionnaire was run in 2000–01. Topics regularly
covered include various aspects of the programme of study, support for learning
(including library and computing services), student union services, policy awareness (for
example, equal opportunities, disability) and awareness of other services included in the
University’s Student Charter commitment (for example, medical centre, childcare
services). Approximately 20,000 forms are distributed each year, with an average return rate of 46% last year. Each year an institutional report is given to the University’s Quality
and Standards committee (QSC), which includes a summary of prioritised areas and an
action plan. Faculties and partner colleges then respond to QSC on the action taken, as
part of the faculty monitoring of programmes. Programme-level reports are also
distributed, with a faculty summary for comparison. The annual programme monitoring
report requires a response to the SPQ, which is discussed at programme committees
where student representatives are present. The institutional reports and action plans are
available on the University’s intranet, where they can be accessed by staff and students.
Summaries of findings and actions taken are also disseminated to staff and students via
presentations and poster displays.
Recommendations
Institution-wide surveys should provide both data for internal improvement and
information for external stakeholders.
If the improvement function is to be effective it is first necessary to establish an
action cycle that clearly identifies lines of responsibility and feedback.
Surveys need to be tailored to fit the improvement needs of the institution. Making
use of stakeholder inputs (especially those of students) in the design of questionnaires
is a useful process in making the survey relevant.
Importance as well as satisfaction ratings are recommended, as these provide key
indicators of what students regard as crucial in their experience and thus enables a
clear action focus.
For improvement purposes, reporting needs to be to the level at which effective action
can be implemented. So, for example, programme organisation needs to be reported
to the level of programmes, computing facilities to the level of faculties, learning
resources to the level of libraries and resource centres, and so on.
Reports need to be written in an accessible style. It is recommended that, rather than
tables densely packed with statistics, data should be converted to a simple grading
(that incorporates satisfaction and importance scores where the latter are used). This
makes it easy for readers to identify areas of excellence and areas for improvement.
If the survey is to provide information for external stakeholders then surveys need to
include a generic set of questions to enable some comparison of student perceptions.
Experience of many surveys in the UK and abroad shows that questionnaires derived
via consultations with students contain a core set of questions. (There is a set for taught students, covering both undergraduates and postgraduates (Appendix 2), and another set for research postgraduates.)
Reports need to be published, or at least the responses to the generic questions need to
be made available, either on the institution’s web site or in a central location (or both).
For external information purposes, reporting of the responses to generic questions
needs to be to the level of programmes or subject areas.
Faculty-level satisfaction with provision
Faculty-level surveys (based on pre-coded questionnaires) are similar to those undertaken
at institution level. They tend to focus only on those aspects of the experience that the
faculty controls or can directly influence. They often tend to be an unsatisfactory
combination of general satisfaction with facilities and an attempt to gather information on
satisfaction with specific learning situations.
In most cases, these surveys are an additional task for faculty administrators; they are often based on an idiosyncratic set of questions and tend not to be well analysed, if at all.
They are rarely linked into a meaningful improvement action cycle.
Where there is an institution-wide survey, disaggregated and reported to faculty level,
faculty-based surveys tend to be redundant. Where faculty surveys overlap with
institutional ones, there is often dissonance that affects response rates.
Examples
Edinburgh University, for example, has no institution-wide survey but one or two
faculties have their own tailor-made feedback questionnaires, available to course
directors as an option.
Recommendations
Faculty-level surveys are not really necessary if well-structured institution-wide
surveys are in place.
If faculty-level surveys are undertaken they should not clash with institution-wide surveys. Where both coexist, it is probably better to attempt to collect faculty data
through qualitative means, focusing on faculty-specific issues untouched by
institution-wide surveys.
If faculty-level surveys are undertaken they must be properly analysed and linked into
a faculty-level action and feedback cycle, otherwise cynicism will rapidly manifest
itself and undermine the credibility of the whole process.
Programme-level satisfaction with the learning and teaching
Programme-level surveys are not always based on questionnaires although most tend to
be. In some cases, feedback on programmes is solicited through qualitative discussion
sessions, which are minuted. These may make use of focus groups. Informal feedback on
programmes is a continuous part of the dialogue between students and lecturers. This
should not be overlooked as it is an important source of information at this level for
improvement.
Programme-level surveys tend to focus on the teaching and learning, course organisation
and programme-specific learning resources. However, in a modularised environment,
programme-level analysis of the learning situation tends to be ‘averaged’ and does not
necessarily provide clear indicators of potential improvement of the programme without
further enquiry at the module level.
The link into any action is far from apparent in many cases. Where a faculty undertakes a
survey of all its programmes of this type, there may be mechanisms, in theory, to
encourage action but, in practice, the time-lag involved in processing the questionnaires
by hard-pressed faculty administrators tends to result in little timely improvement
following the feedback.
In a modularised environment, where modular-level feedback is encouraged (see below),
there is less need for programme-level questionnaire surveys.
Where the institution-wide survey is comprehensive and disaggregates to the level of
programmes, there is also a degree of redundancy in programme-level surveys. Again, if
programme-level and institutional-level run in parallel there is a danger of dissonance.
Examples
The satisfaction survey used at the Open University (OU) is primarily aimed at the
programme level, although it does cover some wider, university, issues. In many respects,
OU students are more firmly focused on their course than students in conventional
universities. The OU aims to be able to give information on student views at the
programme level and to encourage action amongst programme teams. Programmes are
re-surveyed, following action, to see if student satisfaction increases in response to any
changes made.
The standardised programme evaluation approach reached its apogee in Australia with the
development of the Course Evaluation Questionnaire (CEQ) based on Paul Ramsden’s
well-known work. This was a national survey aimed at graduates of Australian higher
education institutions. It has been claimed that the CEQ provides some useful, although
limited, information about teaching and learning across Australia.
At the University of East London the ‘University Policy on Student Feedback on
Teaching & Learning’ requires each school to publish a policy on the evaluation of
courses, units and pathways. UEL policy specifies that ‘at pathway, or subject area level,
an anonymous questionnaire should normally be administered at intervals not exceeding
two years’. The reference is to pathway or subject areas, rather than programme, to
reflect the fact that much of the undergraduate teaching in the university is operated
through the University Degree Scheme. Pathway or subject area questionnaires are
particularly common for final-year students coming to the end of their studies, who are asked to reflect back on the whole degree.
The University of Nottingham makes available a ‘Course Experience Evaluation’
questionnaire, which can be used to evaluate programmes, although such evaluation is
optional.
Recommendations
Programme-level questionnaire surveys are probably not necessary if the institution
has both a well-structured institution-wide survey, reporting to programme level, and
structured module-level feedback.
If programme-level surveys are undertaken they should not clash with institution-
wide surveys or module-level feedback.
If specific programme-level information is needed for improvement purposes, it is
probably better to obtain qualitative feedback on particular issues through discussion
sessions or focus groups.
If programme-level surveys are undertaken they must be properly analysed and linked
into a programme-level action and feedback cycle. This tends to be a rarity in most
institutions.
At a national level, programme-level questionnaires provide insufficient information
to assist potential students in selecting appropriate universities and programmes of
study.
Module-level feedback
Feedback on specific modules or units of study provides an important element of
continuous improvement. The feedback tends to focus on the specific learning and
teaching associated with the module, along with some indication of the problems of
accessing module-specific learning resources. Module-level feedback, both formal and
informal, involves direct or mediated feedback from students to teachers about the
learning situation within the module or unit of study.
The primary form of feedback at this level is direct informal feedback via dialogue.
However, although this feedback may often be acted upon it is rarely evident in any
accounts of improvements based on student feedback.
In most institutions, there is a requirement for some type of formal collection and
reporting of module-level feedback, usually to be included in programme annual reports.
In the main, institutions do not specify a particular data collection process. The lecturer(s)
decide on the appropriate method for the formal collection of feedback. Often, though,
institutions provide guidance and formal questionnaire templates, should the module
leader(s) wish to use them.
There is a tendency to use ‘feedback questionnaires’ at this level: sometimes standardised
questionnaires across the institution, sometimes faculty-wide and sometimes constructed
locally. Module-level questionnaire feedback is usually superficial, results in little
information on what would improve the learning situation and, because of questionnaire-
processing delays, rarely benefits the students who provide the feedback. The use of
questionnaires tends to inhibit qualitative discussion at the unit level.
Direct, qualitative feedback is far more useful in improving the learning situation within a
module of study. Qualitative discussion between staff (or facilitators) and students about
the content and approach in particular course units or modules provides a rapid and in-
depth appreciation of positive and negative aspects of taught modules. Direct feedback
might take the form of an open, formally-minuted discussion between students and
teacher(s), informal feedback over coffee, or a focus-group session, possibly facilitated
by an independent outsider. If written feedback is required, open questions are used that
encourage students to say what would constitute an improvement for them, rather than
rating items on a schedule drawn up by a teacher or, worse, an administrator.
However, qualitative feedback is sometimes seen as more time-consuming to arrange and
analyse and, therefore, as constituting a less popular choice than handing out
questionnaires. Where compliance overshadows motivated improvement, recourse to
questionnaires is likely.
In many instances, questionnaires used for module-level feedback are not analysed
properly or in a timely fashion. Although most institutions insist on the collection of
module-level data, the full cycle of analysis, reporting, action and feedback to originators
of the data rarely occurs.
Examples
Southampton Institute has an institutional form for module feedback but it is optional and
some subject groups design and use their own. Module-level feedback is used to evaluate
and review the unit of study.
Similarly, at the University of Nottingham, module-level feedback is managed by the
schools themselves. There is a university module questionnaire for schools to use if they
wish but they can use their own design. The module questionnaire focuses on
organisation and student learning rather than teacher performance. Module and course
evaluation is for curriculum development. Module evaluation is left up to individual
schools to manage and, in line with the University quality manual, this data must be fed
back to students.
There is a formal requirement at UEL for each module co-ordinator, supported by the
subject area co-ordinator, to produce module-level feedback. There is no standard
questionnaire although standard templates are available and support for analysis is
provided by the quality assurance department. UEL policy recognises that students get
overloaded with too many questionnaires and thus it is recommended that a variety of
feedback mechanisms be used. It is the responsibility of the subject area co-ordinator or course tutor to ensure that students are informed of the results of any data collected as
part of a feedback exercise and of any action taken as a result of feedback. This is done
through an oral report within a teaching session, a subject area newsletter, a posted
notice, a discussion at course committee or by e-mail or the web page. The policy
requires that summary data from unit level feedback is made available at the annual
quality improvement process. Issues requiring action at school level can thus be
identified, as can issues requiring action at institutional level which can be transmitted via
QAEC to Academic Board.
The module feedback form used by Loughborough University’s Business School is a
mixture of module evaluation and teacher performance appraisal (see below).
Recommendations
Module-level feedback is vital for the ongoing evolution of modules and the teaching
team need to be responsive to both formal and informal feedback.
Both formal and informal feedback should be included when reporting at the module-
level.
Module-level feedback is necessary to complement institution-wide surveys, which
cannot realistically report to module-level.
Module-level feedback should be tailored to the improvement and development needs
of the module. There is no need for standardised, institution-wide, module-level
questionnaires.
As with any other feedback, module-level feedback of all types must be properly
analysed and linked into a module-level action and feedback cycle.
Module-level feedback does not need to be reported externally but should form part
of internal programme reviews.
Appraisal of teacher performance by students
As a result of government pressure in the 1990s, institutions went through a period of
collecting student views on the performance of particular teachers, known as ‘teacher
assessment’. Many institutions use standardised programme- or module-based surveys of
student appraisal of teaching.
The use of student evaluations of teacher performance is sometimes part of a broader
peer and self-assessment approach to teaching quality. In some cases, they are used as
part of the individual review of staff and can be taken into account in promotion and
tenure situations (as at Wellington and Otago Universities in New Zealand and in many
institutions in the United States).
Teacher-appraisal surveys may provide some inter-programme comparison of teacher
performance. However, standardised teacher-appraisal questionnaires tend, in practice, to
focus on a limited range of areas and rarely address the development of student learning.
Often, the standardised form is a bland compromise, designed by managers or a
committee, that serves nobody’s purposes. Such forms are often referred to by the derogatory label of ‘happy forms’, as they usually comprise a set of questions about the reliability,
enthusiasm, knowledge, encouragement and communication skills of named lecturers.
In some institutions this appraisal has been undertaken institution wide, in others
delegated to faculties. In some institutions, or parts of them, appraisal questionnaires
persist.
Student appraisal of teachers tends to be a blunt instrument. Depending on the questions
and the analysis it has the potential to identify very poor teaching but, in the main, the
results give little indication of how things can be improved. Appraisal forms are rarely of
much use for incremental and continuous improvement.
In the vast majority of cases, there is no feedback at all to students about outcomes. The
views on individual teacher performance are usually deemed confidential and subject to
closed performance-review or development interviews with a senior manager. At
Auckland University, for example, the process was managed by the lecturers themselves
and the results were only passed on to managers in staff development interviews if the lecturer wished. Copenhagen Business School is a rare example of an institution that, in the
1990s, published the results within the institution.
Student appraisal of teacher performance has a limited function, which, in practice, is
ritualistic rather than improvement-oriented. Any severe problems are usually identified
quickly via this mechanism. Repeated use leads to annoyance and cynicism on the part of
students and teachers. Students become disenchanted because they rarely receive any
feedback on the views they have offered. Lecturers become cynical and annoyed because
they see student appraisal of teaching as a controlling rather than improvement-oriented
tool.
Examples
Student Evaluation of Courses and Teaching (SECAT), initially developed at Auckland
University, is a somewhat more sophisticated approach to standardised evaluation of
teachers and modules. The system aims to identify student perceptions of the quality of
teaching and to improve teaching through staff development linked to the issues
identified by students. An institution-wide database of 100 questions relating to teacher
performance and module organisation and structure was devised. Each lecturer was
required to select any 30 items to construct a questionnaire suited to his or her
circumstances. Responses were analysed centrally and a report provided to the lecturer.
The report of findings was also sent to the tutor’s line manager, as the basis for staff
development discussions if the lecturer so wished.
The University of Nottingham has compulsory teacher evaluation, using standardised
forms that have five fixed questions on teacher performance, used to evaluate all
teachers. They are collated centrally and the results fed back to teachers and heads of
schools confidentially. Heads of schools also receive a school mean. Teaching evaluation
is for career development.
The London School of Economics has teacher appraisal questionnaires to ‘assess
students’ opinions of course teaching’ and a separate one to ‘assess students’ opinions of
teaching of part-time class teachers’. They contain specific questions on performance of
teachers prefaced by a few questions on library provision and lectures.
Recommendations
Use of student appraisal of teaching should be sparing.
If used, avoid endlessly repeating the process.
If used, ask questions about the student learning as well as the teacher performance.
Ensure that action is taken, and seen to be taken, to resolve and monitor the problems
that such appraisals identify.
Only report outcomes as necessary to ensure improvement.
Multiple surveys: cosmetic or inclusive
Institutions often have a mixture of the different types of student feedback, to which might
be added graduate and employer surveys. The information gathered is, far too often,
simply that — information. There are many circumstances when nothing is done with the
information. It is not used to effect changes. Often it is not even collected with a use in
mind. Perhaps, far too often, it is a cosmetic exercise.
There is more to student feedback than collecting data. In general,
If collecting student views, only collect what can be made use of.
It is counterproductive to ask students for information then not use it; students become
cynical and uncooperative if they think no one really cares about what they think.
It is important to heed, examine and make use of student views.
If data from surveys of students is going to be useful then it needs to be transformed
into meaningful information.
The information needs to be clearly reported, fed into systems of accountability and
linked to a process of continuous quality improvement. The whole process must be
accountable and part of a culture of improvement.
It is important to ensure that action takes place on the basis of student views and that
action is seen to take place.
This requires clear lines of communication, so that the impact of student views is fed
back to students. In short, there needs to be a line of accountability back to the
students to close the circle. It is not sufficient that students find out indirectly, if at all,
that they have had a role in institutional policy.
Conclusion
Students are important stakeholders in the quality monitoring and assessment processes
and it is important to obtain their views.
Institution-wide surveys, reported to programme level, can be very useful aids to
improvement.
However, more needs to be done to ensure that the results are communicated to potential
students.
It is quite feasible, for comparative purposes, to identify a set of generic questions that
can be used to gauge satisfaction with institutional provision and programmes of study
(see Appendix 2), which could form the basis of information to external stakeholders.
Figure 1: Satisfaction Cycle (diagram): stakeholder-determined questions → questionnaire distribution → analysis of results → report identifying areas for action → consultation process → feedback to stakeholders.
Figure 2: Satisfaction and importance grid

                    Very            Unsatis-   OK     Satisfactory   Very
                    unsatisfactory  factory                          satisfactory
Very important       E               D          C        B             A
Important            e               d          c        b             a
Not so important    (e)             (d)        (c)      (b)           (a)
Figure 3: Consultation Process at UCE (diagram): the annual satisfaction report, produced by the Centre for Research into Quality, is considered by the Board of Governors, the Vice-Chancellor and Senate, by deans and directors of services and heads, and by faculty boards, programme boards and all staff.
Appendix 1: The Student Satisfaction Approach
The Satisfaction Approach has been developed at the Centre for Research into Quality
over the last 15 years. The approach was designed to be an effective tool with which to
obtain, analyse and report students’ views of their total university experience in order to
effect change and improvement.
In its original ‘mission statement’, UCE’s short Statement of Purpose had three central
sections, one of which was headed ‘Quality and Satisfaction’. It asserted:
The University of Central England recognises that students are its customers and
that they are free to choose where they study. The University of Central England
is committed to enhancing further its reputation for teaching quality as assessed
by its students. This will be measured against increasing satisfaction of students
and subject to external testing by academic and professional peers.
Student feedback was, and continues to be, an important element of UCE’s quality
monitoring and the University pioneered an institution-wide process of feedback that
links directly into management strategic decision-making at the most senior level. The
Student Satisfaction research undertaken by CRQ goes beyond the Statement of Purpose
by focussing on the total learning experience of students covering the complete range of
student activities across all aspects of the institution rather than just student satisfaction
with teaching.
The Student Satisfaction approach is clearly the market leader and has been emulated and
adapted by a number of higher and further education institutions both in Britain and
overseas (including New Zealand, Sweden, Australia, South Africa and Poland). The
methodology is summarised briefly below. Further information and reports can be found
on the Centre for Research into Quality (CRQ) website at www.uce.ac.uk/crq. In
addition, the methodology has been published, in self-help form, through the Open
University Press as the Student Satisfaction Manual (Harvey et al., 1997).
The methodology continues to evolve and allows the surveys to be flexible to address the
pressing concerns of students. The methodology can be easily adapted to different
situations. It has been used to explore the views of a variety of stakeholders: staff,
employers, placement supervisors and even football supporters. The Centre undertakes
the following satisfaction surveys on an annual or periodic basis for UCE: taught
students; research postgraduates; UCE students in partner colleges; staff; non-medical
education placement supervisors.
Designed from the outset as a management information tool, the approach integrates
student views into management strategic decision-making. At UCE, the student and staff
satisfaction surveys are used by decision makers as a management tool, shaping policy at
an institutional level.
The Student Satisfaction approach is unique in combining the following four elements:
Student-determined questions: the Student Satisfaction research focuses on the total
learning experience as defined by students.
Satisfaction and importance ratings: the research examines student satisfaction with
aspects of provision and then identifies which of those areas are important for
students.
Management information for action: those areas that are important to students but where students are dissatisfied are priority areas for management intervention.
A clear feedback and action cycle.
Student-determined questions
The areas of concern, about which students are asked to rate their satisfaction and
importance, derive from prior consultations with students. Students, in effect, determine
the questions in the questionnaire on the basis of feedback from focus-group sessions,
telephone interviews and from comments provided on the previous years’ questionnaires.
The usual approach with taught students is to convene focus groups of students to
identify those elements of their experience they regard as important, which are then used
as a basis for drawing up the questionnaire. The groups are selected to reflect the variety
of provision within the institution. They include groups from each of the faculties,
ensuring that a representative number of full and part-time courses are selected and that,
where appropriate, undergraduate and taught postgraduate provision is covered.
Satisfaction and importance
Three kinds of questions are asked about each main topic:
satisfaction ratings — students are asked to rate, on a seven-point scale, their
satisfaction with a range of sub-topics under each main heading;
importance ratings — students are asked to rate, again on a seven-point scale, how
important the sub-topics are to their learning experience;
patterns of use of facilities — students are asked to indicate the extent of their usage
of various facilities (for example, which computer operating system they use or
frequency of library use, and so on). Usage questions are only asked where they can
provide a basis for analysing the adequacy of service provision. Satisfaction surveys
are not market research surveys.
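As a purely illustrative sketch of how such responses might be aggregated (the item names, response data and the coding of ‘NA’ as 0 below are hypothetical, not drawn from the UCE surveys), mean satisfaction and importance ratings per item can be computed along the following lines:

    # Illustrative aggregation of seven-point satisfaction and importance ratings.
    # Item names and responses are hypothetical; 0 stands for an 'NA' response.
    from statistics import mean

    responses = {
        "Course organisation": {
            "satisfaction": [6, 5, 7, 4, 0],
            "importance": [7, 6, 7, 6, 7],
        },
        "Promptness of feedback on assignments": {
            "satisfaction": [3, 2, 4, 3, 2],
            "importance": [7, 7, 6, 7, 6],
        },
    }

    def mean_rating(scores):
        """Mean of the valid 1-7 ratings, ignoring NA (coded here as 0)."""
        valid = [s for s in scores if 1 <= s <= 7]
        return round(mean(valid), 2) if valid else None

    for item, r in responses.items():
        print(item,
              "satisfaction:", mean_rating(r["satisfaction"]),
              "importance:", mean_rating(r["importance"]))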
The inclusion of importance ratings provides a clear picture of where to focus effort to
ensure maximum improvement. For external stakeholders, importance ratings indicate
clearly what students on a programme consider to be important elements of their learning
experience.
Management information
The statistical data collected through the survey research is transformed into management
information designed to identify clear areas for action. It does this by identifying student
satisfaction with a wide range of aspects of provision and then identifying which of those
areas are important for students. The outcomes are mapped on a satisfaction and
importance grid (Figure 2). Those areas that fall into Sectors E and D, high importance to
students but low satisfaction, are priority areas for management intervention.
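To make the mapping concrete, the sketch below shows one way mean ratings might be placed on the Figure 2 grid. The banding thresholds and the item figures are assumptions made for illustration, not values taken from the UCE methodology; items graded D or E are flagged as priorities in line with the grid.

    # Illustrative mapping of mean ratings onto the satisfaction/importance grid
    # of Figure 2. Thresholds and item figures are assumed for this sketch only.
    SATISFACTION_LETTERS = ["E", "D", "C", "B", "A"]  # very unsatisfactory ... very satisfactory

    def satisfaction_column(mean_satisfaction):
        """Place a mean 1-7 satisfaction rating into one of the five grid columns."""
        if mean_satisfaction < 2.5:
            return 0  # very unsatisfactory
        if mean_satisfaction < 3.5:
            return 1  # unsatisfactory
        if mean_satisfaction < 4.5:
            return 2  # OK
        if mean_satisfaction < 5.5:
            return 3  # satisfactory
        return 4      # very satisfactory

    def grade(mean_satisfaction, mean_importance):
        """Return the Figure 2 grid letter for an item."""
        letter = SATISFACTION_LETTERS[satisfaction_column(mean_satisfaction)]
        if mean_importance >= 5.5:         # very important: capital letters
            return letter
        if mean_importance >= 3.5:         # important: lower case
            return letter.lower()
        return "(" + letter.lower() + ")"  # not so important: bracketed

    # Hypothetical item means (satisfaction, importance); grades D and E are priorities.
    items = {
        "Promptness of feedback on assignments": (2.8, 6.6),
        "Course organisation": (5.7, 6.4),
        "Range of Union clubs/societies": (5.0, 2.8),
    }
    for name, (sat, imp) in items.items():
        g = grade(sat, imp)
        note = "  <- priority for management action" if g in ("D", "E") else ""
        print(name + ": " + g + note)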
Production of a report
In most cases the outcomes are reported to the Vice-Chancellor (or Pro-Vice-Chancellor),
who usually makes them available to Senate and the Board of Governors/University
Council. They are subsequently published in an annual report (in some cases with an
ISBN number). Usually, all management, academic and senior administrative staff
receive a copy. In most cases, such reports are also published on the University Intranet,
although to date few have made them available on the Internet for public consumption.
A central feature of the report is the set of composite rating tables and trend graphs. These are
accompanied by a commentary, which identifies the main issues. Although the survey is
based on student-determined questions, many issues recur over time which permits
monitoring of trends.
Benchmarking
The longitudinal data collected through the survey allows for benchmarking of
improvement in student satisfaction year by year. Although the methodology ensures that
current concerns of students primarily determine the nature of the questionnaire, there are
invariably items that recur, sometimes with minor modifications of wording.
Over time, more longitudinal data allows for clear indications of trends in student
perceptions of the service provided by the University. The inclusion of significant
numbers of trend graphs allows for detailed analysis of the changes in student perceptions
over time, in many cases items can be tracked back to the early 1990s. This decade-long,
detailed longitudinal data is unique to UCE.
Action and feedback
At the centre of the process is the action and feedback cycle (Figure 1). The intention is
that there is a process that identifies responsibility for action and subsequent follow-
up to ensure action takes place. The outcomes of action are intended to be reported back
to the originators of the data — the students.
Each institution develops its own procedures for ensuring this happens. At UCE, which has
been doing it longest, there is an internal consultation process that reviews action from
previous years and prioritises action based on student views, which is linked to budget
allocation letters (Figure 3).
The Vice-Chancellor and the Pro-Vice-Chancellor (Academic) interview all the deans
and heads of services about the outcomes of the report. The deans and heads of services
are expected to account for any areas that students regard as important but with which they are dissatisfied, and to find ways of overcoming them. Deans are required to indicate what action
they are intending to take and what has happened as a result of the previous year’s
agenda. The replies are made available to Senate for discussion and, as Senate papers, are
semi-public documents. Before responding to the Vice-Chancellor, the faculties make use
of the detailed data available from CRQ to look more closely at any Ds or Es. They also
undertake local analyses.
At UCE, feeding back information on action to students is important as students need to
be aware that the process includes action and that the information is collected for a purpose.
Details of the action taken are collated by CRQ and a feedback report is produced. This is
sent to students who are asked to respond to the survey (50% of all students at UCE) and
made available via libraries and resource centres for other students. Direct feedback is
also made available to student representatives on Senate, faculty boards and course
boards. In addition, short articles are usually published in the University newsletter and
Students’ Union magazine. (Copies of reports and feedback flyers are available).
Quality Culture
The Student Satisfaction approach, along with the other forms of quality monitoring at
UCE, goes hand-in-hand with the development of a culture of continuous quality
improvement (CQI). The Student Satisfaction report does not herald an annual upheaval.
Rather, it identifies areas for potential action and contributes to incremental improvement
— from the point of view of students.
To be effective, staff must be convinced that the satisfaction survey is part of the CQI
process and not a vehicle for recrimination. Distrust can be minimised if everybody
knows what is going on and that something actually happens, to improve the institution,
as a result of the survey. In summary, to gain support and trust:
the process must be transparent;
senior management must be committed to the approach;
action should result — resources must be made available;
the agenda for change must be forward-looking (not recriminatory).
An effective approach involves encouraging a bottom-up quality improvement process
alongside a top-down accountability requirement. Management, in this approach, has six
strategic functions in respect of quality improvement:
setting the parameters within which the quality improvement process takes place;
establishing a non-exploitative, suspicion-free context in which a culture of quality
improvement can flourish;
establishing and ensuring a process of internal quality monitoring;
disseminating good practice through an effective and open system of communication;
encouraging and facilitating teamworking amongst academic and academic-related
colleagues;
delegating responsibility for quality improvement to the effective units that are going
to deliver continuous improvement at the staff-student interface.
Reference
Harvey, L. et al., 1997, Student Satisfaction Manual (Buckingham, Open University
Press).
APPENDIX 2
Centre for Research into Quality
INDICATIVE GENERIC STUDENT
SATISFACTION SURVEY
2002
The Student Satisfaction Survey is designed to provide an opportunity for you to
comment on your whole experience of your university
Data protection statement...
We welcome all the information that you are able to provide and assure you
that it is treated confidentially.
Please return your questionnaire as soon as possible
using the pre-paid label.
The results of the survey will be acted upon to improve
the student experience at your university
COURSE DETAILS
Type of course:
Foundation / HND / HNC Undergraduate degree (e.g., BA, BSc, BMus, BEd)
Taught postgraduate (e.g., PGD, MSc, MA) Professional development course/modules
Other Please specify
Year of course:
1st 2nd 3rd 4th 5th+
Mode of study:
Full-time at a UCE site Sandwich (thin or thick) at a UCE site
Part-time at a UCE site Work-based or distance learner
Other Please specify
COURSE ORGANISATION AND ASSESSMENT
'Course' includes the programme or set of modules that you are studying this year.
Please rate the extent to which you are satisfied with the following aspects of your course,
and then rate how important they are to your experience as a student.
SATISFACTION IMPORTANCE
Very Very Not at all Very
dissatisfied satisfied important important
Course organisation 1234567 1234567 NA
Knowing what you can expect from your
course and your tutors
Knowing what is expected of you as a student
Prior notification of changes to course arrangements
The way your timetable is spread over the day/week
Range of topics covered in your syllabus
Workload and assessment
Time-tabling of assignments
Availability of information about assessment dates
Clarity of information about assessment criteria
Consistency of application of assessment criteria
Usefulness of tutors’/lecturers’ feedback
Promptness of feedback on assignments
UNIVERSITY FACILITIES AND STUDENTS' UNION
Please rate how satisfied you are with the following aspects of the Union of Students and
social life at university, and then rate how important they are to your experience as a
student.
SATISFACTION IMPORTANCE
Very Very Not at all Very
dissatisfied satisfied important important
1234567 1234567 NA
The general appearance of your campus
Security measures at your campus
Range of Union clubs/societies
The availability of sports facilities
Value for money in the Union shop
Appearance of Union bars
University residential accommodation (students in
university accommodation only)
LEARNING AND TEACHING
Please rate the extent to which you are satisfied with the following aspects of learning and
teaching, and then rate how important they are to your experience as a student.
SATISFACTION IMPORTANCE
Very Very Not at all Very
dissatisfied satisfied important important
1234567 1234567 NA
Learning
The course has developed your subject knowledge
You are learning what you hoped to learn
Your confidence to learn has been enhanced
There are sufficient opportunities to learn with
others (peers) on your course
Aspects of your course that prepare you for
employment
The opportunities to go on work experience
The opportunities to make links with professionals
The suitability of work experience
The organisation of work experience
The development of skills and abilities required
for your future employment
The development of your problem-solving skills
The development of your interpersonal skills
The development of your team-working skills
The development of your communication skills
The development of your practical skills
The development of your analytical ability
The development of your critical ability
Teaching
Opportunities for informal discussion with staff
The extent to which teaching staff are sympathetic
and supportive to the needs of students
The extent to which teaching staff treat
students as mature individuals
The general reliability of teaching staff
i.e., keep time/don’t cancel classes
STUDENT SERVICES
Please indicate whether you have used the following services at UCE. If you have used the
services please also rate how satisfied you are with the service you received.
USED SATISFACTION
SERVICE Very Very
dissatisfied satisfied
YES NO 1234567 NA
Careers
Chaplaincy
Child care
Counselling
Disability services
Financial services
Health care
Availability of information about Student Services
COMPUTING
Please rate how satisfied you are with the following aspects of the computing facilities you
mainly use at UCE, and then rate how important they are to your experience as a student.
SATISFACTION IMPORTANCE
Very Very Not at all Very
dissatisfied satisfied important important
1234567 1234567 NA
Opening hours of computer rooms
User-friendliness of computing facilities for
students with a disability
Availability of computers
Maintenance of computers
'Up-to-dateness’ of computers
'Up-to-dateness’ of software
Access to the Internet/e-mail
Reliability of the network
Training in the use of computers
Helpfulness of support staff/technicians
Availability of printers
Quality of printing
Maintenance of printers
LIBRARY
Please rate how satisfied you are with the following aspects of the university library you
mainly use, and then rate how important they are to your experience as a student.
SATISFACTION IMPORTANCE
Very Very Not at all Very
dissatisfied satisfied important important
1234567 1234567 NA
Range of books
'Up-to-dateness’ of books
Multiple copies of core books
Range of journals/periodicals
Helpfulness of library staff
Opening hours
Noise levels
Availability of photocopying facilities
Access and range of services for students with
disabilities
Usefulness of CD ROMs
Adequacy of individual workspace
Adequacy of group study space
YOUR EVALUATION
Please write in the boxes below an estimate of your overall satisfaction with the following
aspects of your university education.
A rating of 0% means you are totally dissatisfied and a rating of 100% means you are totally satisfied.
University as a whole % Your Department or School %
University management % Your course %
Your Faculty % Potential career prospects %