Why Seek Student Feedback to Contribute to Learning and Teaching Enhancement?
One of the most effective ways of enhancing the learning experience and teaching offer at universities is to ask the students for feedback. UK higher education (HE) providers use a range of formal and informal mechanisms to hear the ‘student voice’ and understand the student experience. Most typically, HE providers survey their students at the end of each teaching period, and increasingly mid-module. The data and information obtained from this feedback are embedded within institutional quality enhancement processes.
Over the past decade, the move towards seeking student views has been driven by policy, regulatory and market conditions. The June 2011 Higher Education White Paper ‘Students at the Heart of the System’ set out the Government’s expectation that student evaluation at module level should be used in an ‘open and transparent’ way to inform ‘a continuous process of improving teaching quality’:
allowing students and lecturers within a university to see this feedback at an individual module level will help students to choose the best course for them and to drive an improvement in the quality of teaching.
[BIS (2011)]
The UK Quality Code for Higher Education (2018) includes this underlying practice for all higher education providers:
The provider engages students individually and collectively in the development, assurance and enhancement of the quality of their educational experience.
Additionally, the Quality Code contains the following advice and guidance:
- Providers agree strategic principles for monitoring and evaluation to ensure processes are applied systematically and operated consistently.
- Providers normalise monitoring and evaluation as well as undertaking routine formal activities.
- Providers evaluate, analyse and use the information generated from monitoring to learn and improve.
- Student engagement through partnership working is integral to the culture of higher education, however and wherever provision is delivered – student engagement is led strategically, but widely owned.
- Higher education providers, in partnership with their student body, define, promote, monitor and evaluate the range of opportunities to enable all students to engage in quality assurance and enhancement processes.
- Effective student engagement supports enhancements, innovation and transformation in the community within and outside the provider, driving improvements to the experience of students.
- Providers work in partnership with the student body to close the feedback loop.
[QAA (2018)]
Approach to Survey Delivery
Student module and course evaluation surveys are now almost universally administered and managed online. A review of how HE providers approach course and module surveys found that the most significant developments in implementing an online evaluation system were:
The introduction of institution-wide common questions in course and module evaluation
Standardisation in the timing and reporting of these surveys
Greater consistency in practices across different departments or schools
Institution-wide, comparable course and/or module data for strategic analysis and coherent, institutional responses to student feedback
[evasys (2016)]
The review also found that questions typically asked students about teaching, assessment and feedback, academic support, learning resources and their overall satisfaction (evasys, 2016).
Dommeyer et al (2004) identified some common features of online surveying practice. Typically, online evaluation involves the following:
Providing students with a link, usually in an email, to access the survey
Assuring students that their responses will be anonymised (confidential)
Asking students to respond numerically to closed (Likert-scale) items and to type answers to open questions
Providing students with a receipt confirming that they have completed the evaluation
Giving students a window of time in which to respond, usually near the end of term/semester
Making aggregate reports available to students only after the final grades are determined
The many benefits of using online student module evaluation are recognised in the published literature (Dommeyer et al., 2004; Salmon et al., 2004; Watt et al., 2002). Watt et al. (2002) state that ‘using web-based evaluation questionnaires can bypass many of the bottlenecks in the evaluation system (e.g. data entry and administration)’. Another benefit of online evaluation is that it removes the need to administer surveys in class (Dommeyer et al., 2004), thus creating efficiencies (e.g. staff time and paper).
However, Nulty (2008) identified that one of the most significant and pervasive challenges in an online student module evaluation system is low response rates. Research shows that response rates are generally lower when an online instrument is used than when an in-class, paper-based instrument is used (Nulty, 2008; Anderson, Cain, & Bird, 2005; Ernst, 2006; Kulik, 2009; Benton et al., 2010). This has raised concerns about how far valid and reliable conclusions about teaching effectiveness can be drawn from the data (Dommeyer, Baum et al., 2002). Online evaluation depends on student cooperation, unlike paper-based evaluations, where surveys can be administered to a captive audience. Throughout the literature, low response rates are cited as the key disadvantage of an online evaluation system.
This paper considers a variety of issues surrounding response rates in online course evaluation. What are the implications of low response rates in online course or module evaluation? What is an adequate response rate, that is, what response rate can be considered large enough for the survey data to provide meaningful evidence for assurance and enhancement purposes? And what practical strategies and advice are available to help boost response rates?
Implications of low response rates
So what are the reasons for non-response, and what are the implications of low response rates? Non-response can occur for several reasons. One is a lack of motivation: students complete the surveys in their own time rather than in class. Because these evaluations are most commonly administered at the end of term, students do not necessarily feel they benefit from any improvements that may be introduced to the module as a result of their feedback. At other times, students may believe that only the teacher will see their feedback or that their views will not be taken seriously (Chapman and Joines, 2017). Students may also choose not to respond due to survey fatigue, that is, they feel they are asked to complete too many surveys.
A study by Anderson et al (2006) found that most students did not respond to an online evaluation for four main reasons: they were disengaged (that is, they forgot or were too busy); they had technology problems; they perceived no benefit; or, lastly, ‘other’ reasons.
The central concern with low response rates is whether those who have participated in a survey are representative of the entire population. In other words, if respondents and non-respondents have very different views, then the results from the survey will not reflect the opinion of the population as a whole. For this reason, higher response rates are generally more desirable in order to minimise the potential effect of non-response bias.
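One way to see this, using a standard textbook decomposition rather than a formula drawn from the sources cited above, is to write the bias in the respondents’ mean rating in terms of the non-response rate and the gap between respondents and non-respondents:

$$\text{Bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y} \;=\; \frac{N_{nr}}{N}\left(\bar{y}_r - \bar{y}_{nr}\right)$$

where $N$ is the class size, $N_{nr}$ the number of non-respondents, $\bar{y}_r$ the mean rating among respondents and $\bar{y}_{nr}$ the mean among non-respondents. The bias is small only when non-respondents are few (so $N_{nr}/N$ is small) or when they hold much the same views as respondents.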
In course and module evaluation, low response rates may affect the accuracy of the data. Data from these surveys are regularly used within a programme’s quality management process. If respondents hold different views from non-respondents, the feedback provided could lead academic staff to respond in a way that might differ from how they would respond had they received feedback from all students. At the department or school level, summative judgements may be made on a teacher’s performance based on unrepresentative data. Low response rates therefore undermine the credibility of the data and may have real implications for decision-making.
Liu and Armatas (2016) assert that, without adequate response rates, the benefits of implementing online surveys, including efficiencies in survey administration (distribution, collection and analysis), better data management and rich open-text comments, cannot be realised.
Achieving a higher response rate means that the results collected are likely to be more representative, giving greater confidence that the student feedback is meaningful enough to drive improvements in teaching and learning (Brennan & Williams, 2004).
In Raising response rates, the HEA (2016) identified that:
The more students that take part in the survey, the more meaningful the data. The purpose of raising response rates is to make the survey more effective for enhancement across the institution. A high response gives greater confidence to results and makes it possible to deliver results at levels relevant to staff delivering teaching and learning.
However, the publication does not offer ‘explicit targets on what response rates institutions need to achieve’ but offers the following ‘general guidance’:
- the response rate to UKES 2015 was 15%;
- 15% is a low response rate for an online survey;
- 25% is an average response rate for an online survey;
- the response rate to PTES 2015 was 29%;
- 35% is a good response rate for an online survey;
- the response rate to PRES 2015 was 41%;
- 45% is an excellent response rate for an online survey.
[HEA (2016)]
According to Nulty (2008), the best reported response rates for online surveys (47%) are only adequate for class sizes above 750 students. So what happens if the class size is smaller than that? Using Dillman’s (2000) formula, Nulty calculated how many respondents are required for a given class size and, from that, the response rate required (Table 1).
| Total number of students on a module | Required number of respondents | Response rate required |
|---|---|---|
| 10 | 7 | 70% |
| 20 | 12 | 58% |
| 50 | 17 | 35% |
| 100 | 21 | 21% |
| 250 | 24 | 10% |
| 300 | 24 | 8% |
Table 1: Required response rates by class size, summarised from Nulty (2008), based on a formula by Dillman (2000)
Nulty, however, was insistent that his recommended response rates are only a guide to what, ‘in a theoretically ideal world’, would be considered adequate. He stressed that even if the response rates are achieved, ‘great care is needed to be sure that results for a survey are representative of the whole group of students enrolled’.
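For readers who want to explore the calculation behind Table 1, Dillman’s (2000) completed-sample-size formula, in its commonly stated general form, is:

$$n = \frac{N\,p(1-p)}{(N-1)\left(\frac{B}{C}\right)^2 + p(1-p)}$$

where $N$ is the class size, $p$ the expected proportion answering a given way (0.5 is the most conservative choice), $B$ the acceptable sampling error and $C$ the z-score for the chosen confidence level. Nulty (2008) reports results under both ‘liberal’ and ‘stringent’ error and confidence settings; the sketch below uses illustrative default values of our own, so it is an assumption-laden approximation rather than a reproduction of his exact calculation.

```python
import math

def required_respondents(class_size: int,
                         sampling_error: float = 0.10,
                         z: float = 1.28,
                         p: float = 0.5) -> int:
    """Dillman's (2000) completed-sample-size formula.

    class_size     -- number of students enrolled on the module (N)
    sampling_error -- acceptable margin of error, as a proportion (B)
    z              -- z-score for the chosen confidence level (C)
    p              -- expected proportion answering a given way (0.5 is most conservative)

    The default error/confidence values here are illustrative assumptions only.
    """
    numerator = class_size * p * (1 - p)
    denominator = (class_size - 1) * (sampling_error / z) ** 2 + p * (1 - p)
    return math.ceil(numerator / denominator)

# Example: required respondents and response rate for a range of class sizes
for n in (10, 20, 50, 100, 250, 300):
    needed = required_respondents(n)
    print(f"{n:>4} students -> {needed:>3} respondents ({needed / n:.0%} response rate)")
```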
Boosting online response rates
So what are the strategies for raising online response rates? These are well documented in the published literature (Chapman and Joines, 2017; HEA, 2016; Naidu et al., 2014). Drawing on over 20 years’ experience of providing online evaluation solutions, the following sections summarise the best of what we know and what we have found. But before that, let’s go back to basics with survey design. Rutherford (2016) has advised that:
Research has shown that surveys should take 5 minutes or less to complete. Although 6 – 10 minutes is acceptable, those that take longer than 11 minutes will likely result in lower response rates. On average, respondents can complete 5 closed-ended questions per minute and 2 open-ended questions per minute.
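As an illustrative calculation of our own (not Rutherford’s), a module survey with 15 closed questions and 2 open questions would take roughly

$$\frac{15}{5} + \frac{2}{2} = 3 + 1 = 4 \text{ minutes,}$$

comfortably within the recommended five-minute window.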
Academic Staff engagement
Staff engagement in the process is a key element in achieving higher student engagement. Students are more likely to complete the surveys when their lecturers and tutors encourage them to. It is therefore important that staff also recognise the benefits of online evaluation and take the time and trouble to promote the surveys to their students. Ways to increase staff engagement include:
Emails from departmental staff such as heads of department and course leaders, explaining the surveys’ importance and what tutors can do to support them
An email from the strategic institutional learning and teaching lead (e.g. PVC Learning and Teaching) outlining the benefits of online evaluation to the university and department
Where available, giving teaching staff the opportunity to tailor their surveys by selecting questions from a central ‘bank’ of questions
Sending an automated email to the Module Leader when the survey opens
Providing PowerPoint slides for staff to incorporate at the end of a seminar
Setting response rate targets based on previous years’ participation rates, tracking and monitoring these at university and department level, and automating emails to warn Module Leaders when targets are not being met (a simple sketch of this kind of check follows this list)
Getting lecturers to show real-time response rates (for instance, using the evasys Instructor Portal) in the final lecture/seminar to encourage students to respond
Publicising departmental and faculty response rates to create a competition to maximise response rates
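The target-tracking point above can be made concrete with a minimal sketch. Everything in it is hypothetical: the module codes, targets and the ‘send reminder’ step are placeholders, not a description of evasys or any other survey platform.

```python
# Minimal sketch of response-rate target tracking (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class ModuleSurvey:
    code: str           # module code (hypothetical)
    enrolled: int       # students enrolled on the module
    responses: int      # completed survey responses so far
    target_rate: float  # target response rate, e.g. last year's rate

    @property
    def response_rate(self) -> float:
        return self.responses / self.enrolled if self.enrolled else 0.0

def modules_below_target(surveys: list[ModuleSurvey]) -> list[ModuleSurvey]:
    """Return modules whose live response rate is below target, i.e. the cases
    where an automated reminder to the Module Leader might be triggered."""
    return [s for s in surveys if s.response_rate < s.target_rate]

if __name__ == "__main__":
    surveys = [
        ModuleSurvey("ABC101", enrolled=120, responses=30, target_rate=0.35),
        ModuleSurvey("ABC205", enrolled=45, responses=20, target_rate=0.40),
    ]
    for s in modules_below_target(surveys):
        print(f"{s.code}: {s.response_rate:.0%} vs target {s.target_rate:.0%}: send reminder")
```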
Communicate, communicate, communicate
Linked to engagement is communication. Promoting online evaluation throughout the university in the lead-up to and during the live survey period will raise awareness and encourage responses. This can include:
Publicising the survey, and how past feedback has been acted on, on university web pages, including departmental pages and the Student Portal
Using the university’s internal PR and social media channels such as Snapchat, Twitter and Facebook to promote the survey (with a link to the Student Survey Portal)
Providing incentives for all who take part (e.g. printer credits, cafe giveaways)
Entering all those who take part into prize draws (e.g. for an iPad, a Kindle Fire or graduation costs paid for)
Getting the Students’ Union to publicise and promote the surveys through their web pages, social media channels and email campaigns (e.g. Presidents sending out the final reminder)
Displaying promotional materials around the Students’ Union
Using the university’s communications and/or marketing teams to develop a tailored campaign with your students during the survey period
Closing the Feedback Loop
Students are more likely to complete the survey if they know their feedback is important to their tutors, department and university, and that it is acted upon. Closing the feedback loop makes students feel part of an effective, value-added process, and this in turn drives higher response rates in future evaluations. This can be done by:
- sharing the results of previous years and how feedback has influenced decisions and actions e.g. through a link to a web page with more information; and
- using the evasys Closing the Loop function (through e.g. the evasys Instructor Portal), where the Module Leader can record reflections on students’ responses and generate a Student Report for students (to which they can then respond), sent via email or embedded in the VLE.
This ‘you said, we did’ feedback at the granular module level creates a virtuous circle, improving responses to future surveys, as students can see that their voice is being listened to and acted on.
Concluding Remarks
So where does this leave us with response rates in online evaluation? No feedback mechanism is perfect, but it is better to obtain feedback than none at all. The strong emphasis placed on the student voice and continual quality management in recent years, as a result of both higher education policy and marketisation, means that student surveys are probably here to stay for the foreseeable future. The benefits of online evaluation far outweigh its disadvantages. Beyond the savings in class time, administrative burden and paper, online evaluation offers students the opportunity to access the surveys when and where they want, and hopefully this also gives them the space to provide more considered responses.
As for what counts as an adequate response rate, the short answer is: it depends.
- Contextualisation is important: large classes will require a lower response rate than small classes (see Table 1).
- Triangulate with other data sources: making high-level decisions on the basis of a single response, or a small number of responses, would be problematic, particularly in the absence of ‘local’ module-level knowledge. Ensure that decision-making is informed by multiple sources of evidence and metrics, including survey outcomes over time, NSS ratings and other module information such as progression rates and grade outcomes.
- Data literacy: underlying the first two points is the need for dedicated staff who are able to make sense of the data and feed it back into the system so that it can support decision-making.
High survey response rates are always sought and valued because they indicate more engaged students and yield more credible and insightful data. We hope we have demonstrated in this paper that they can be achieved through a combination of cultural and technological interventions.

Dr Helena Lim, Teaching and Learning Specialist and Head of Opportunities, evasys.
With more than twenty-five years of experience working in UK higher education, Helena has held senior roles at Southampton Solent University and the Higher Education Academy (now Advance HE), and is the founder of the UK and Ireland Higher Education Institutional Research (HEIR) Network. She has held Honorary Fellowships with the University of Liverpool and Aberystwyth University, and has lectured at the University of Bath, Southampton Solent University and the Open University.