Interdisciplinary Learning and Teaching
in Higher Education
Theory and Practice
Balasubramanyam Chandramohan, Stephen Fallows
ISBN: 9780415341318. Format: Hardback. Publisher: Taylor & Francis Ltd
Chapter 9: Student feedback on interdisciplinary programmes: Lee Harvey
Final pre-proof draft
Student feedback on interdisciplinary programmes
As student feedback in the UK is about to shift into its fourth phase, it is noticeable how little attention has been paid to the collection and analysis of the views of students on interdisciplinary, cross-disciplinary or combined studies courses.
The first phase, up to the 1990s, saw few or no formal feedback mechanisms in higher education institutions designed to obtain student views. Where student feedback impacted on the student experience it came, usually, as a result of direct conversations, or even action, on the part of individuals or groups of students. In the pre-mass higher education era, this ad hoc engagement worked up to a point but did not engender a sense of student-centredness or responsibility, on the part of the teacher, to address the concerns of students. Students had to sink or swim in the system as it was.
The 1990s saw the flourishing of formal student feedback, overwhelmingly through
the medium of questionnaires. Much data was collected, some of it was analysed,
little of it was reported back to students and very little of it resulted in any meaningful
action. Change that occurred tended to bypass questionnaires and was the result of
direct feedback, via conversations over coffee or in corridors, course committees or
by dint of students voting with their feet. The main reason for the relative impotence of the feedback in phase two was that questions were framed from the point of view of the teacher (or worse, the manager), often limited to a standardised set of module-based items relating to teacher performance. These failed to address student learning, failed to address student concerns, and became tokenistic accountability rituals, the outcomes of which were usually deemed confidential and thus inaccessible. At root, the impotence was compounded by the complete absence of any structure designed to act upon the
results. Data was collected and mostly shelved; eventually it became obvious to all
concerned that the ritual was alienating students and becoming counterproductive.
The third phase, which continues today, overlapped the second and saw some
pioneering institutions start to develop a structured mechanism for dealing with, as well as collecting, student feedback. Lines of responsibility for action were developed
with appropriate, although not bureaucratically burdensome, reporting and sanctions.
In the main, these developments operated at the level of whole institutional feedback
surveys: student satisfaction surveys that acted as a barometer of the total student
experience. In some places, the development of these centrally-run surveys was also
co-ordinated with locally (unit or module) owned surveys of teacher capability and
with service departments' own 'customer' feedback. However, this co-ordination continues to be a logistical and organisational problem for larger institutions. Alongside this tendency to
develop total student experience surveys, some institutions stuck to module-level data
collection but developed a more co-ordinated approach to acting on the outcomes.
However, this is still relatively rare, and the assumption that instituting an institution-wide module survey, asking everyone the same set of questions, constitutes a co-
ordinated approach remains prevalent. The key to any student feedback is not the
collection of data but the creation of mechanisms for using it to implement
improvements.
The fourth phase, which is upon us in the UK, undermines the concerted improvement
approach of the third phase. The National Student Survey, with its trivial set of questions, not only takes us back to phase two but shifts the emphasis from internal
quality improvement to external profile, from substance to image and from clearly
useful data to superficial indicators designed for spurious comparative purposes rather
than as valuable management information. As the struggle for student feedback
unfolds, the concerns of interdisciplinary students continue to be ignored.
Students on non-standard programmes are usually perceived as a problem when it comes to collecting, analysing and reporting their views. They do not, of course, fit standard categories; they have to be slotted into categories of their own and generally make the whole reporting process messy. However, the tendency to bulge out of
pre-set categories is but the least of the issues. Much more important is that the views
expressed by interdisciplinary students are frequently ignored because it is not clear
who is responsible for doing anything about them. Even worse, no one asks questions that are germane to interdisciplinary students in the first place. Student feedback
questionnaires, explored below in more detail, tend towards a generic set of issues
that are premised on the single subject model.
Student feedback processes
Most higher education institutions, around the world, collect some type of feedback
from students about their experience of higher education, particularly the service they
receive. This may include perceptions about the learning and teaching, the learning
support facilities (such as libraries and computing facilities), the learning environment (lecture rooms, laboratories, social space and university buildings), support facilities
(refectories, student accommodation, health facilities, student services) and external
aspects of being a student (such as finance, transport infrastructure).
Student views are usually collected in the form of ‘satisfaction’ feedback. Sometimes
there are attempts to obtain student views on how to improve specific aspects of
provision or on their views about potential or intended future developments but this is
less usual. Indeed, it is not always clear how views collected from students fit into
institutional quality improvement policies and processes. To be effective in quality
improvement, data collected from surveys and peer reviews needs to be transformed
into information that can be used within an institution to effect change. Experience
going back to the late 1980s shows that to make an effective contribution to internal
improvement processes, views of students need to be integrated into a regular and
continuous cycle of analysis, reporting, action and feedback (Figure 1).
In many cases it is not always clear that there is a means to close the loop between
data collection and effective action, let alone feedback to students on action taken. For
this to happen, the institution needs to have in place a system for:
identifying and delegating responsibility for action;
encouraging ownership of plans of action;
ensuring accountability for action taken or not taken;
providing feedback to the generators of the data;
committing appropriate resources.
Establishing this is not an easy task, which is why so much data on student views is
not used to effect change, irrespective of the good intentions of those who initiate the
enquiries. It is, thus, more important to ensure an appropriate action cycle than it is to
have in place mechanisms for collecting data.
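Purely as an illustration of what such a system might need to track, the sketch below models a single feedback item moving through an action cycle: an identified owner, a record of actions taken, and a flag for whether the outcome was reported back to the students who raised the issue. It is a hypothetical sketch in Python, not a description of any institution's actual process; all field names and example data are invented.

```python
# Hypothetical sketch: tracking one student-feedback item through an action
# cycle (responsibility, action, feedback to students). Field names and the
# example data are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedbackItem:
    issue: str                                        # concern raised by students
    source: str                                       # e.g. survey, focus group, course committee
    owner: str = "unassigned"                         # who is responsible for acting on it
    actions: List[str] = field(default_factory=list)  # actions taken (or decisions recorded)
    reported_back: bool = False                       # has the outcome been fed back to students?

    def loop_closed(self) -> bool:
        """The loop is closed only when the issue has an owner, something has
        been done (or a decision recorded), and students have been told."""
        return self.owner != "unassigned" and bool(self.actions) and self.reported_back

item = FeedbackItem(
    issue="No clear contact point for combined honours timetable clashes",
    source="interdisciplinary student focus group",
)
item.owner = "Combined Studies programme leader"
item.actions.append("Named administrator allocated and publicised in the programme handbook")
item.reported_back = True
print("Loop closed?", item.loop_closed())
```

The point of the sketch is simply that closing the loop is a property of the whole record, not of data collection alone: without an owner, an action and feedback to the originators, the item remains open however much data has been gathered.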
External information
In an era where there is an enormous choice available to potential students, the views of current students offer a useful information resource. Yet very few institutions make
the outcomes of student feedback available externally. UCE, Sheffield Hallam and a
few other institutions are unusual in publishing their institution-wide student feedback
survey (which reports to the level of faculty and major programmes). The results are available on a public website as well as published in hard copy with an ISBN, as has been the case at UCE since the survey's inception in the late 1980s.
The National Student Survey (NSS) will provide information but unfortunately the
wrong information presented in the wrong way. In comparison with the sophisticated
and relevant analyses of institutional student surveys, the NSS items are trivial,
imposed irrespective of relevance, designed for comparative purposes based on
standardised subject codes and providing no sensible information on what is
necessary to improve the situation. It operates at a distance from the very feedback
and action cycles so important in ensuring effective outcomes and, of course, totally
ignores the situation of interdisciplinary students.
Types
Feedback can take various forms, including formal classroom discussions, informal
discussions over coffee, facilitated focus groups, web discussion boards, course
committees as well as the invidious questionnaire. While all the abovementioned
forms of feedback operate in most settings, they tend to attract less official weight
than the formal survey of student views; although, ironically, in most cases change is
more likely to occur as a result of direct discussion than from the analysis of
questionnaire responses. The latter, in many cases, serves only to legitimate the status
quo.
We are, however, in an era of student feedback surveys and, if handled appropriately,
they can be effective given the appropriate support infrastructure. There are, broadly
speaking, five forms:
institution-level satisfaction with the total student experience or a specified sub-set
of that experience;
faculty-level satisfaction with provision;
programme-level satisfaction with the learning and teaching and related aspects of a particular programme of study (for example, BA Business Studies)¹;
module-level feedback on the operation of a specific module or unit of study (for example, Introduction to Statistics);
teacher-appraisal by students.
Institution-level satisfaction
Systematic, institution-wide student feedback about the quality of their total
educational experience is an area of growing activity. Such surveys are almost always
based on questionnaires, which mainly consist of questions with pre-coded answers
augmented by one or two open questions. In the main, these institution-wide surveys
are undertaken by a dedicated unit (either internal or external) with expertise in
undertaking surveys and producing results to schedule.
Institution-wide surveys tend to encompass most of the services provided by the
university and are not to be confused with standardised institutional forms seeking
feedback at the programme or module level (discussed below). In the main,
institution-wide surveys seek to collect data that provide:
management information designed to encourage action for improvement;
a descriptive overview of student opinion, which can be reported as part of
appropriate accountability procedures.
The derivation of questions used in institution-wide surveys varies. The Student
Satisfaction Approach developed at UCE and adopted at Sheffield Hallam, UEL,
Oxford Brookes and Buckingham Chilterns University College, among others, uses
student-determined questions, usually via focus groups. In other institutions,
management or committees decide on the questions. Sometimes, institutions use or
adapt questionnaires developed at other institutions. It is, though, very important to
include the student voice in the determination of questions. This is particularly the
case if one wants to capture and include the concerns of interdisciplinary students. It
is, therefore, vital to include at least one interdisciplinary student focus group.
The way the results are used also varies. In some cases there is a clear reporting and
action mechanism. In others, it is unclear how the data helps inform decisions. In
some cases the process has the direct involvement of senior management, while in other universities action is realised through the committee structure. Again, there is a danger of interdisciplinary students falling through the cracks. Reporting of the views of
students who do not fall into simple subject groupings often results in their views
being sidelined or ignored altogether. Even when they are reported, it is not always
clear who is then responsible for taking up the concerns of such students. This
compounds the problem of a general lack of questions specific to the interdisciplinary
experience — not least the issues of interconnectedness, progression and coherence.
1
In some institutions programmes of study are referred to as ‘courses’ or ‘pathways’.
However, ‘course’ is a term used in some institutions to mean ‘module’ or ‘unit’ of
study, that is, a sub-element of a programme of study. Due to the ambiguity of
‘course’, the terms ‘programme of study’ and ‘module’ will be used in this paper.
Feedback to students of the outcomes of surveys is an important element of institution-wide surveys but is not always carried out effectively, nor does it always produce the awareness intended. Some institutions make use of existing lines of communication between tutors and students, or work through the students' unions and student representatives. All of these forms depend upon the effectiveness of those lines of communication, which, as interdisciplinary students will be aware, are strewn with hazards. Other forms of feedback used include articles in university magazines, posters and summaries aimed at students, but these tend to be very generalised and liable to focus on the spectacular and newsworthy rather than the local but important concerns of specific student groups.
Good practice in institutional surveys suggests that if the improvement function is to
be effective it is first necessary to establish an action cycle that clearly identifies lines
of responsibility and feedback. Furthermore, surveys need to be tailored to fit the
improvement needs of the institution. Making use of stakeholder inputs (especially
those of students) in the design of questionnaires is a useful process in making the
survey relevant. Importance as well as satisfaction ratings are recommended, as these provide key indicators of what students regard as crucial in their experience and thus enable a clear action focus.
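As a brief illustration of how importance and satisfaction ratings can be combined into an action focus, the sketch below flags items that students rate as highly important but with which they are relatively dissatisfied. It is a minimal, hypothetical example in Python; the item names, scores, scale and thresholds are invented for illustration rather than drawn from any actual survey.

```python
# Hypothetical sketch: combining importance and satisfaction ratings to
# highlight action priorities. Items, scores and thresholds are invented.

# Mean ratings on an illustrative 1 (low) to 7 (high) scale.
items = {
    # item: (mean importance, mean satisfaction)
    "Promptness of feedback on assignments": (6.3, 4.1),
    "Availability of personal tutors": (5.8, 4.4),
    "Range of books in the learning centre": (5.2, 5.9),
    "Social space on campus": (3.9, 5.1),
}

IMPORTANCE_THRESHOLD = 5.5    # treat items at or above this as 'very important'
SATISFACTION_THRESHOLD = 4.5  # treat items at or below this as 'unsatisfactory'

def action_priorities(ratings):
    """Return highly important but poorly rated items, ordered by the gap
    between importance and satisfaction (largest gap first)."""
    flagged = [
        (item, importance - satisfaction)
        for item, (importance, satisfaction) in ratings.items()
        if importance >= IMPORTANCE_THRESHOLD and satisfaction <= SATISFACTION_THRESHOLD
    ]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

for item, gap in action_priorities(items):
    print(f"Priority for action: {item} (importance-satisfaction gap {gap:.1f})")
```

The same logic applies whatever scale or instrument is used: satisfaction alone shows where students are unhappy, while the importance rating indicates which of those items are worth acting on first.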
Faculty-level satisfaction with provision
Faculty-level surveys (based on pre-coded questionnaires) are similar to those
undertaken at institution level. They tend to focus only on those aspects of the
experience that the faculty controls or can directly influence. They often tend to be an
unsatisfactory combination of general satisfaction with facilities and an attempt to
gather information on satisfaction with specific learning situations.
In most cases, these surveys are an additional task for faculty administrators; they are often based on an idiosyncratic set of questions and tend not to be well analysed, if analysed at all. They are rarely linked into a meaningful improvement action cycle.
Where there is an institution-wide survey, disaggregated and reported to faculty level,
faculty-based surveys tend to be redundant. Where faculty surveys overlap with
institutional ones, there is often dissonance that affects response rates. If faculty-level surveys are undertaken they should not clash with institution-wide surveys; where both coexist, it is probably better to attempt to collect faculty data through qualitative
means, focusing on faculty-specific issues untouched by institution-wide surveys.
If faculty-level surveys are undertaken they must be properly analysed and linked into
a faculty-level action and feedback cycle, otherwise cynicism will rapidly manifest
itself and undermine the credibility of the whole process.
Programme-level satisfaction with the learning and teaching
Programme-level surveys are not always based on questionnaires although most tend
to be. In some cases, feedback on programmes is solicited through qualitative
discussion sessions, which are minuted. These may make use of focus groups.
Informal feedback on programmes is a continuous part of the dialogue between
students and lecturers. This should not be overlooked as it is an important source of
information for improvement at this level.
Programme-level surveys tend to focus on the teaching and learning, course
organisation and programme-specific learning resources. However, in a modularised
environment, programme-level analysis of the learning situation tends to be
‘averaged’ and does not necessarily provide clear indicators of potential improvement
of the programme without further enquiry at the module level.
The link into any action is far from apparent in many cases. Where a faculty
undertakes a survey of all its programmes of this type, there may be mechanisms, in
theory, to encourage action but, in practice, the time lag involved in processing the
questionnaires by hard-pressed faculty administrators tends to result in little timely
improvement following the feedback.
In a modularised environment, where modular-level feedback is encouraged (see
below), there is less need for programme-level questionnaire surveys.
Where the institution-wide survey is comprehensive and disaggregates to the level of
programmes, there is also a degree of redundancy in programme-level surveys. Again,
if programme-level and institution-level surveys run in parallel, there is a danger of dissonance. Programme-level questionnaire surveys are probably not necessary if the
institution has both a well-structured institution-wide survey, reporting to programme
level, and structured module-level feedback. However, if there are interdisciplinary
programmes, specific programme feedback could be an effective way of
complementing more generic survey results.
If specific programme-level information is needed for improvement purposes, it is
probably better to obtain qualitative feedback on particular issues through discussion
sessions or focus groups. If programme-level surveys are undertaken they must be
properly analysed and linked into a programme-level action and feedback cycle. This
tends to be a rarity in most institutions.
Module-level feedback
Feedback on specific modules or units of study provides an important element of
continuous improvement. The feedback tends to focus on the specific learning and
teaching associated with the module, along with some indication of the problems of
accessing module-specific learning resources. Module-level feedback, both formal
and informal, involves direct or mediated feedback from students to teachers about
the learning situation within the module or unit of study.
The primary form of feedback at this level is direct informal feedback via dialogue.
However, although this feedback may often be acted upon, it is rarely evident in any
accounts of improvements based on student feedback.
In most institutions, there is a requirement for some type of formal collection and
reporting of module-level feedback, usually to be included in programme annual
reports. In the main, institutions do not specify a particular data collection process.
The lecturer(s) decide on the appropriate method for the formal collection of
feedback. Often, though, institutions provide guidance and formal questionnaire
templates, should the module leader(s) wish to use them.
There is a tendency to use ‘feedback questionnaires’ at this level: sometimes
standardised questionnaires across the institution, sometimes faculty-wide and
sometimes constructed locally. Module-level questionnaire feedback is usually
superficial, results in little information on what would improve the learning situation
and, because of questionnaire-processing delays, rarely benefits the students who
provide the feedback. The use of questionnaires tends to inhibit qualitative discussion
at the unit level.
Direct, qualitative feedback is far more useful in improving the learning situation
within a module of study. Qualitative discussion between staff (or facilitators) and
students about the content and approach in particular course units or modules provides
a rapid and in-depth appreciation of positive and negative aspects of taught modules.
Direct feedback might take the form of an open, formally-minuted discussion between
students and teacher(s), informal feedback over coffee, or a focus-group session,
possibly facilitated by an independent outsider. If written feedback is required, open
questions are used that encourage students to say what would constitute an
improvement for them, rather than rating items on a schedule drawn up by a teacher
or, worse, an administrator.
However, qualitative feedback is sometimes seen as more time-consuming to arrange
and analyse and, therefore, as constituting a less popular choice than handing out
questionnaires. Where compliance overshadows motivated improvement, recourse to
questionnaires is likely.
In many instances, questionnaires used for module-level feedback are not analysed
properly or in a timely fashion. Although most institutions insist on the collection of
module-level data, the full cycle of analysis, reporting, action and feedback to the originators of the data rarely occurs. There is, of course, considerable potential, at module level, for exploring issues pertinent to interdisciplinary students. However, this requires an imaginative and creative approach. Using standardised questionnaires is unlikely to be of much use as they will probably not include relevant questions. Most useful for interdisciplinary students are likely to be the open-ended questions that are often appended to module tick-box surveys. Indeed, it might be argued that, in
general, this is the most useful feature of module feedback, although sadly under-
analysed in many cases.
Module-level feedback is vital for the ongoing evolution of modules and the teaching
team need to be responsive to both formal and informal feedback, and both should be included when reporting at module level. Module-
level feedback is necessary to complement institution-wide surveys, which cannot
realistically report to module-level. Module-level feedback should be tailored to the
improvement and development needs of the module. There is no need for
standardised, institution-wide, module-level questionnaires. Making comparisons
between modules is trivial and far less effective than year-on-year monitoring of
trends in student views about the module. As with any other feedback, module-level
feedback of all types must be properly analysed and linked into a module-level action
and feedback cycle.
Appraisal of teacher performance by students
As a result of government pressure in the 1990s, institutions went through a period of
collecting student views on the performance of particular teachers, known as ‘teacher
assessment’. Many institutions use standardised programme- or module-based surveys
of student appraisal of teaching. The use of student evaluations of teacher performance is sometimes part of a broader peer and self-assessment approach to
teaching quality. In some cases, they are used as part of the individual review of staff
and can be taken into account in promotion and tenure situations (although this is, as
yet, rare in the UK).
Teacher-appraisal surveys may provide some inter-programme comparison of teacher
performance. However, standardised teacher-appraisal questionnaires tend, in
practice, to focus on a limited range of areas and rarely address the development of
student learning. Often, the standardised form is a bland compromise, designed by
managers or a committee, which serves nobody’s purposes. Such forms are often referred to by the derogatory label of ‘happy forms’ as they are usually a set of questions about
the reliability, enthusiasm, knowledge, encouragement and communication skills of
named lecturers.
Student appraisal of teachers tends to be a blunt instrument. Depending on the
questions and the analysis it has the potential to identify very poor teaching but, in the
main, the results give little indication of how things can be improved. Appraisal forms
are rarely of much use for incremental and continuous improvement.
In the vast majority of cases, there is no feedback at all to students about outcomes.
The views on individual teacher performance are usually deemed confidential and
subject to closed performance-review or development interviews with a senior
manager. Copenhagen Business School is a rare example of an institution that, in the
1990s, published the results within the institution.
Students’ appraisal of teacher performance has a limited function, which, in practice,
is ritualistic rather than improvement-oriented. Any severe problems are usually
identified quickly via this mechanism. Repeated use leads to annoyance and cynicism
on the part of students and teachers. Students become disenchanted because they
rarely receive any feedback on the views they have offered. Lecturers become cynical
and annoyed because they see student appraisal of teaching as a controlling rather
than improvement-oriented tool.
Good practice suggests that surveys of student appraisal of teaching should be used
sparingly, without continually repeating the process. It also helps to ask about the
student learning as well as the teacher performance. Ensuring that action is taken, and
seen to be taken, to resolve and monitor the problems that such appraisals identify is
important. However, this focus is usually not helpful in exploring the subtleties of
student learning issues, such as those experienced by interdisciplinary students.
Multiple surveys: cosmetic or inclusive
Institutions often have a mixture of the different types of student feedback, to which
might be added graduate and employer surveys. The information gathered is, far too
often, simply that — information. There are many circumstances when nothing is
done with the information. It is not used to effect changes. Often it is not even
collected with a use in mind. Perhaps, far too often, it is a cosmetic exercise.
There is more to student feedback than collecting data. In general,
if collecting student views, only collect what can be made use of;
it is counterproductive to ask students for information then not use it; students
become cynical and uncooperative if they think no one really cares about what they
think;
it is important to heed, examine and make use of student views;
if data from surveys of students is going to be useful then it needs to be
transformed into meaningful information;
the information needs to be clearly reported, fed into systems of accountability and
linked to a process of continuous quality improvement: the whole process must be
accountable and part of a culture of improvement;
it is important to ensure that action takes place on the basis of student views and
that action is seen to take place;
this requires clear lines of communication, so that the impact of student views is fed back to students: in short, there needs to be a line of accountability back to the
students to close the circle; it is not sufficient that students find out indirectly, if at
all, that they have had a role in institutional policy;
data from different sources needs to be co-ordinated and triangulated.
Students are important stakeholders in the quality monitoring and assessment
processes and it is useful to obtain their views. In doing so, it is important not to
inadvertently sideline the views of specific groups, of which interdisciplinary students are one often rendered invisible.
Significant differences
There is, as was noted at the start of this chapter, little research on the differences in
perspectives between interdisciplinary and single-subject students. Despite all the
surveying, reported results ignore this dimension in most cases and thus hard evidence
of differences in perception is in short supply.
One London university that undertook an institution-wide survey in 2004 separated
out the responses of 90 combined honours students. Their views, on issues designed
for all students, were very similar to the university average on most of the 100 or so
items. The areas where they diverged hint at some underlying issues that might be
germane to any collection of interdisciplinary student views. Results were reported
on an A to E scale (very satisfactory (A) to very unsatisfactory (E)), with very important items represented in upper case and less important ones in lower case.
Availability of, and support from, course representatives (b and c respectively) were less important than for the university overall (B and C respectively).
Promptness of feedback on assignments was more satisfactory (B) than for
university overall (C).
Development of analytical (A) and of critical skills (A) were more satisfactory
compared to the university overall (B in both cases).
Aspects of the course that prepare you for employment were less satisfactory (C) compared to the university mean (B).
Opportunities to go on work experience (D) and opportunities to network with
professionals (D) were both unsatisfactory compared to adequate (C) for the
university overall.
The extent to which classes run as scheduled was very satisfactory (A)
compared to the university mean (B).
Availability of personal tutors (C), support from personal tutors (C), and ready
access to academic and pastoral advice (C) were less satisfactory than for the
university as a whole (B for all three items).
Combined honours students were much more regular users of the learning
resource centre than most other groups of students. However, these students
were marginally less satisfied with the facility than students on average. They
were less satisfied with the range of books (C) and multiple copies of core
books (D) and with noise levels (C), compared with university averages (of B,
C and B respectively).
Combined honours students were also more satisfied with opening hours of
the computer rooms (A) and reliability of computers (B) than the university
mean (B and C, respectively).
Combined honours students were less satisfied with procedures for enrolment
(C) than their peers (B).
Combined honours students' overall ratings showed a more positive view of the university as a whole (66.1%), higher than students from other schools and considerably above the mean (62.9%) for the university overall. Their rating for their course (69.3%) was above the mean (67.7%) but eclipsed by the means in five of the nine other schools. However, they rated their potential career prospects poorly (62.7%) compared to the university mean of 64.9%. Asked whether they would still choose their course, respondents were positive (5.0 on a 7-point scale) but below the university mean of 5.3.
Another northern university reported the results for a small group of combined
honours students separately in 2002.
Opportunities for work-related placements (d) and quality of workplace
experience (c) were less satisfactory (but less important) than for the
university as a whole (B in both cases). Similarly, there was less satisfaction with ‘the course prepares you for the work place’ (C compared to the university mean of B).
Ease with which teaching staff could be contacted (B) was slightly less satisfactory than the university mean (A). However, they were more satisfied (B) than any other group (B) with the manageability of their workload. They also regarded the opportunity to present work to peers/staff as more important than other groups of students.
They were more satisfied with noise levels (A) and the availability of quiet work space in the learning centre than the university mean (B in both cases).
They were also more satisfied with the range of media materials (A compared to B for the university as a whole).
Combined studies students regarded the efficiency of the enrolment procedure
(B) and induction to the university (B) as more important than students on
average (b and b) and were more satisfied with notification of timetable/room
alterations (B compared to C).
They were slightly less satisfied with the range of software available (B
compared to A) but more satisfied with the helpfulness of technical support
staff (A compared to B).
Combined studies students were alone in being dissatisfied with the value for
money of their course (D) (compared to other schools).
These two sets of independent results hint at some issues for combined studies
students around workplace learning and belonging. There is a suggestion that they are
to some extent better served administratively but that they are less satisfied with
central learning support, partly because they spend a lot of time using it. Much more needs to be done to explore these initial, limited results.