Why Seek Student Feedback to Contribute to Learning and Teaching Enhancement?

One of the most effective ways of enhancing the learning experience and teaching offer at universities is to ask the students for feedback. UK higher education (HE) providers use a range of formal and informal mechanisms to hear the ‘student voice’ and understand the student experience. Most typically, HE providers survey their students at the end of each teaching period, and increasingly mid-module. The data and information obtained from this feedback are embedded within institutional quality enhancement processes.

Over the past decade, the move towards seeking student views has been driven by policy, regulatory and market conditions. The June 2011 Higher Education White Paper ‘Students at the Heart of the System’ set out the Government’s expectation that student evaluation at module level should be used in an ‘open and transparent’ way to inform ‘a continuous process of improving teaching quality’:

allowing students and lecturers within a university to see this feedback at an individual module level will help students to choose the best course for them and to drive an improvement in the quality of teaching.

[BIS (2011)]

The UK Quality Code for Higher Education (2018) includes this underlying practice for all higher education providers:

The provider engages students individually and collectively in the development, assurance and enhancement of the quality of their educational experience.

Additionally, the Quality Code contains the following advice and guidance:

  1. Providers agree strategic principles for monitoring and evaluation to ensure processes are applied systematically and operated consistently.
  2. Providers normalise monitoring and evaluation as well as undertaking routine formal activities.
  3. Providers evaluate, analyse and use the information generated from monitoring to learn and improve.
  4. Student engagement through partnership working is integral to the culture of higher education, however and wherever provision is delivered – student engagement is led strategically, but widely owned.
  5. Higher education providers, in partnership with their student body, define, promote, monitor and evaluate the range of opportunities to enable all students to engage in quality assurance and enhancement processes.
  6. Effective student engagement supports enhancements, innovation and transformation in the community within and outside the provider, driving improvements to the experience of students.
  7. Providers work in partnership with the student body to close the feedback loop.
[QAA (2018)]

Approach to Survey Delivery

Student module and course evaluation surveys are now almost universally administered and managed online. A review of how HE providers approached course and module surveys found that the most significant developments in implementing an online evaluation system were:

  • The introduction of institution-wide common questions in course and module evaluation

  • Standardisation in the timing and reporting of these surveys

  • Greater consistency in practices across different departments or schools

  • Institution-wide, comparable course and/or module data for strategic analysis and coherent, institutional responses to student feedback

[evasys (2016)]

The review also found that questions typically asked students about teaching, assessment and feedback, academic support, learning resources and their overall satisfaction (evasys, 2016). 

Dommeyer et al. (2004) identified some common features of online surveying practice. Typically, online evaluation involves the following:

  • Providing students with a link, usually in an email, to access the survey

  • Assuring students that their responses will be anonymised and treated confidentially

  • Asking students to respond numerically to multiple-response (Likert scale) items and to type answers to open questions

  • Providing students with a receipt confirming that they have completed the evaluation

  • Giving students a window of time to respond, usually near the end of term/semester

  • Making aggregate reports available to students only after the final grades are determined

The many benefits of using online student module evaluation are recognised in the published literature (Dommeyer et al., 2004; Salmon et al., 2004; Watt et al., 2002). Watt et al. (2002) state that ‘using web-based evaluation questionnaires can bypass many of the bottlenecks in the evaluation system (e.g. data entry and administration)’. Another benefit of online evaluation is that it removes the need to administer surveys in class (Dommeyer et al., 2004), thus creating efficiencies (e.g. in staff time and paper).

However, Nulty (2008) identified that one of the most significant and pervasive challenges in an online student module evaluation system is that response rates are low. Research shows that response rates are generally lower when an online instrument is used than when an in-class, paper-based instrument is used (Nulty, 2008; Anderson, Cain, & Bird, 2005; Ernst, 2006; Kulik, 2009; Benton et al., 2010). This has raised concerns about how far valid and reliable conclusions about teaching effectiveness can be drawn from the data (Dommeyer et al., 2002). By its nature, online evaluation depends on student cooperation, unlike paper-based evaluation, which can be administered to a captive audience in class. Throughout the literature, low response rates are cited as the key disadvantage of an online evaluation system.

This paper considers a variety of issues surrounding response rates in online course evaluation. What are the implications of low response rates in online course or module evaluation? What is an adequate response rate, that is, what response rate can be considered large enough for the survey data to provide meaningful evidence for assurance and enhancement purposes? And what practical strategies and advice are available to help boost response rates?

Implications of low response rates

So what are the reasons for non-response, and what are the implications of low response rates? Non-response can arise for several reasons. It may reflect a lack of motivation, as students complete the surveys in their own time rather than in class. Because these evaluations are most commonly administered at the end of term, students do not necessarily feel they will benefit from any improvements introduced to the module as a result of their feedback. At other times, students may believe that only the teacher will see their feedback or that their views will not be taken seriously (Chapman and Joines, 2017). Students may also choose not to respond because of survey fatigue, that is, they feel they are asked to complete too many surveys.

A study by Anderson et al. (2006) found that most students who did not respond to an online evaluation gave one of four main reasons: they were disengaged (that is, they forgot or were too busy); they had technology problems; they perceived no benefit; or they cited ‘other’ reasons.

The central concern with low response rates is whether those who have participated in a survey are representative of the entire population. In other words, if respondents and non-respondents hold very different views, then the results from the survey will not reflect the opinion of the population as a whole. For this reason, higher response rates are generally more desirable in order to minimise the potential effect of non-response bias.

In course and module evaluation, low response rates may affect the accuracy of the data. Data from these surveys are regularly used within a programme’s quality management process, so if respondents hold different views from non-respondents, the feedback provided could lead academic staff to respond in a way that differs from how they would have responded had they received feedback from all students. At the department or school level, summative judgements may be made on a teacher’s performance based on erroneous data. Low response rates therefore undermine the credibility of the data and may have real implications for decision-making.

Liu and Armatas (2016) assert that without adequate response rates, the benefits of implementing online surveys, including efficiencies in survey administration (distribution, collection and analysis), better data management and rich open-text comments, cannot be realised.

Achieving a higher response rate means the results collected are likely to be more representative, and gives greater confidence that the student feedback is meaningful enough to drive improvements in teaching and learning (Brennan & Williams, 2004).

In Raising response rates, the HEA (2016) identified that:

The more students that take part in the survey, the more meaningful the data. The purpose of raising response rates is to make the survey more effective for enhancement across the institution. A high response gives greater confidence to results and makes it possible to deliver results at levels relevant to staff delivering teaching and learning.

However, the publication does not offer ‘explicit targets on what response rates institutions need to achieve’ but offers the following ‘general guidance’:

  • the response rate to UKES 2015 was 15%;
  • 15% is a low response rate for an online survey;
  • 25% is an average response rate for an online survey;
  • the response rate to PTES 2015 was 29%;
  • 35% is a good response rate for an online survey;
  • the response rate to PRES 2015 was 41%;
  • 45% is an excellent response rate for an online survey.
[(HEA, 2016)]

According to Nulty (2008), the best reported response rates for online surveys (47%) are only adequate for class sizes above 750 students. So what happens if the class size is smaller than 750? Using Dillman’s (2000) formula, Nulty calculated how many respondents are required, and therefore the response rate needed, for a range of class sizes (see Table 1).

Total number of students on a module | Required number of respondents | Response rate required (%)
10  | 7  | 70
20  | 12 | 58
50  | 17 | 35
100 | 21 | 21
250 | 24 | 10
300 | 24 | 8

Table 1: Required response rates by class size, summarised from Nulty (2008), based on a formula by Dillman (2000)

Nulty, however, was insistent that his recommended response rates are only a guide to what ‘in a theoretically ideal world’ would be considered adequate. He stressed that even when these response rates are achieved, ‘great care is needed to be sure that results for a survey are representative of the whole group of students enrolled’.
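For readers who want to experiment with the underlying calculation, the short Python sketch below implements Dillman’s (2000) sample-size formula directly. It is not taken from Nulty’s paper: the sampling error and confidence level shown are illustrative assumptions, and because Nulty explored several parameter combinations the output will not exactly reproduce Table 1, but it does show how steeply the required response rate rises as class size falls.

```python
# A minimal sketch (not from Nulty's paper) of Dillman's (2000) sample-size formula:
# n = N*p*(1-p) / ((N-1)*(B/C)^2 + p*(1-p)), where N is the class size, p the expected
# proportion (0.5 is the most conservative choice), B the acceptable sampling error and
# C the z-score for the chosen confidence level. The 10% error / 80% confidence values
# below are illustrative assumptions, so the output is indicative only.
import math
from statistics import NormalDist


def required_respondents(class_size: int, sampling_error: float = 0.10,
                         confidence: float = 0.80, p: float = 0.5) -> int:
    """Completed responses needed for a class of `class_size` students."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed z-score
    numerator = class_size * p * (1 - p)
    denominator = (class_size - 1) * (sampling_error / z) ** 2 + p * (1 - p)
    return math.ceil(numerator / denominator)


if __name__ == "__main__":
    for n in (10, 20, 50, 100, 250, 300):
        r = required_respondents(n)
        print(f"{n:>3} students -> {r:>3} respondents ({100 * r / n:.0f}% response rate)")
```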

Boosting online response rates

So what are the strategies for raising online response rates? These are well documented in the published literature (Chapman and Joines, 2017; HEA, 2016; Naidu et al., 2014). Drawing on over 20 years’ experience of providing online evaluation solutions, the following sections summarise the best of what we know and what we have found. But before that, let’s go back to basics with survey design. Rutherford (2016) advises that:

Research has shown that surveys should take 5 minutes or less to complete. Although 6 – 10 minutes is acceptable, those that take longer than 11 minutes will likely result in lower response rates. On average, respondents can complete 5 closed-ended questions per minute and 2 open-ended questions per minute.
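As a back-of-the-envelope illustration of Rutherford’s rule of thumb, the short sketch below estimates completion time from the number of closed-ended and open-ended questions. The per-minute rates come from the quotation above; the example question counts are hypothetical.

```python
# A rough sketch of Rutherford's (2016) rule of thumb; example question counts are hypothetical.
CLOSED_PER_MINUTE = 5  # closed-ended (e.g. Likert) items answered per minute
OPEN_PER_MINUTE = 2    # open-ended questions answered per minute


def estimated_minutes(closed_questions: int, open_questions: int) -> float:
    """Estimate how long a survey will take to complete, in minutes."""
    return closed_questions / CLOSED_PER_MINUTE + open_questions / OPEN_PER_MINUTE


# Example: 12 Likert items plus 2 open questions ~= 3.4 minutes, inside the 5-minute target
print(f"{estimated_minutes(12, 2):.1f} minutes")
```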

Academic staff engagement

Staff engagement in the process is a key element in achieving higher student engagement. Students are more likely to complete the surveys when their lecturers and tutors encourage them to do so. It is therefore important that staff also recognise the benefits of online evaluation and take the time to promote the surveys to their students. Ways to increase staff engagement include:

  • Emails from departmental staff such as heads of department and course leaders, explaining the surveys’ importance and what tutors can do to support them

  • An email from the strategic institutional learning and teaching lead (e.g. PVC Learning and Teaching) outlining the benefits of online evaluation to the university and department

  • Where available, giving teaching staff the opportunity to tailor their surveys by selecting questions from a central ‘bank’ of questions

  • Automated email to Module Leader on survey opening

  • Providing PowerPoint slides for staff to incorporate at the end of a seminar

  • Setting response rate targets based on previous years’ participation rates, tracking and monitoring these at university and department level, and automating emails to warn Module Leaders when targets are not being met (see the sketch after this list)

  • Getting lecturers to show real-time response rates (for instance, using the evasys Instructor Portal) in the final lecture/seminar to encourage students to respond

  • Publicising departmental and faculty response rates to create a competition to maximise response rates
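As an illustration of the target-setting and monitoring point above, the sketch below shows one way such tracking could be scripted. The module codes, targets and data structures are hypothetical and do not represent an evasys feature or API; in a live system, the flagged modules would trigger the automated reminder emails described above.

```python
# A hypothetical sketch of response-rate monitoring; module codes, targets and the
# reminder step are illustrative and do not represent an evasys feature or API.
from dataclasses import dataclass


@dataclass
class ModuleSurvey:
    module_code: str
    enrolled: int
    responses: int
    last_year_rate: float  # previous year's response rate, e.g. 0.40 for 40%

    @property
    def response_rate(self) -> float:
        return self.responses / self.enrolled if self.enrolled else 0.0


def below_target(surveys: list[ModuleSurvey], uplift: float = 0.05) -> list[ModuleSurvey]:
    """Flag modules whose live response rate is below last year's rate plus a target uplift."""
    return [s for s in surveys if s.response_rate < s.last_year_rate + uplift]


if __name__ == "__main__":
    surveys = [
        ModuleSurvey("ENG101", enrolled=120, responses=30, last_year_rate=0.35),
        ModuleSurvey("HIS205", enrolled=40, responses=22, last_year_rate=0.40),
    ]
    for s in below_target(surveys):
        # In a live system, this is where the automated reminder to the Module Leader would go.
        print(f"{s.module_code}: {s.response_rate:.0%} vs target {s.last_year_rate + 0.05:.0%}")
```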

Communicate, communicate, communicate

Linked to engagement is communication. Promoting online evaluation throughout the university in the lead up to and during the live survey period will raise awareness and encourage responses. This can include:

  • Publicising the survey, and how past feedback has been acted upon, on university web pages, including departmental pages and the Student Portal

  • Using the university’s internal PR and social media channels such as Snapchat, Twitter and Facebook to promote the survey (with a link to the Student Survey Portal)

  • Providing incentives for all who take part (e.g. printer credits, café giveaways)

  • Entering all those who take part into prize draws (e.g. an iPad, a Kindle Fire, graduation costs paid for)

  • Getting the Students’ Union to publicise and promote the surveys using their web pages, social media channels and email campaigns (e.g. Presidents sending out the final reminder)

  • Displaying promotional materials around the Students’ Union

  • Using the university’s communications and/or marketing teams to develop a tailored campaign with your students during the survey period

Closing the Feedback Loop

Students are more likely to complete the survey if they know their feedback is important to their tutors, department and university, and that their feedback is acted upon. Closing the feedback loop will make students feel a part of an effective, value-added process and this in turn will drive higher response rates for future evaluations. This can be done by:

  1. sharing the results of previous years and how feedback has influenced decisions and actions e.g. through a link to a web page with more information; and
  2. using the evasys Closing the Loop function (through, for example, the evasys Instructor Portal), where the Module Leader can record reflections on students’ responses and generate a Student Report for students (to which they can then respond), sent either via email or embedded in the VLE.

This ‘you said we did’ feedback at the granular module level creates a virtuous loop of subsequently improved responses to future surveys, as students can see that their voice is being listened to and acted on.

Concluding Remarks

So where does this leave us with response rates in online evaluation? No feedback mechanism is perfect, but it is better to obtain feedback than none at all. The strong emphasis placed on the student voice and on continual quality management in recent years, as a result of both higher education policy and marketisation, means that student surveys are probably here to stay for the foreseeable future. The benefits of online evaluation far outweigh its disadvantages. Beyond the savings in class time, administrative burden and paper, online evaluation offers students the opportunity to access the surveys when and where they want, and this should also give them the space to provide more considered responses.

When it comes to what counts as an adequate response rate, the short answer is: it depends.

  1. Contextualisation is important: large class sizes require a lower response rate than small classes (see Table 1).
  2. Triangulate with other data sources: making high-level decisions on the outcome of a single response, or a small number of responses, would be problematic, particularly in the absence of ‘local’ module-level knowledge. Ensure that decision-making is informed by multiple sources of evidence and metrics, including survey outcomes over time, NSS ratings and other module information such as progression rates and grade outcomes.
  3. Data literacy: underlying the first two points is the need for dedicated staff who are able to make sense of the data and feed it back into the system so that it can support decision-making.

High survey response rates are always sought and valued, as they indicate more engaged students and yield more credible and insightful data. We hope that we have demonstrated in this paper that these can be achieved through a combination of cultural and technological interventions.

Anderson, J., Brown, G. & Spaeth, S. (2006) ‘Online Student Evaluations and Response Rates Reconsidered’ Innovate: Journal of Online Education 2: 6, article 5

Anderson, H.M., Cain, J. & Bird, E. (2005) ‘Online Student Course Evaluations: Review of Literature and a Pilot Study’ American Journal of Pharmaceutical Education 69:1, article 5

Benton, S., Webster, R., Gross, A. & Pallett, W. (2010) ‘An analysis of IDEA student ratings of instruction using paper versus online survey methods 2002-2008 data’ IDEA Technical Report No. 16, The IDEA Center, Kansas State University, accessed on 26 October 2020 at https://www.ideaedu.org/Portals/0/Uploads/Documents/Technical-Reports/An-Analysis-of-IDEA-Student-Ratings-of-Instruction-Using-Paper-versus-Online-Survey-Methods-2002-2008-Data_techreport-16.pdf

Brown, G. & Peterson, N. (June, 2008) Online course evaluations and response rate considerations. Centre for Teach

BIS (2011), Higher Education: Students at the Heart of the System accessed on 26 October 2020 at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/31384/11-944-higher-education-students-at-heart-of-system.pdf

Brennan, J., & Williams, R. (2004). Collecting and using student feedback. A guide to good practice. Learning and Teaching support network

Chapman, D.D. & Joines, J.A. (2017) ‘Strategies for Increasing Response Rates for Online End-of-Course Evaluations’ International Journal of Teaching and Learning in Higher Education 2017, Volume 29, Number 1, pp. 47-60

Dommeyer, C.J., Baum, P. & Hanna, R.W. (2002) ‘College Students’ Attitudes Toward Methods of Collecting Teaching Evaluations: In-Class Versus On-Line’ Journal of Education for Business, 78:1, 11-15

Dommeyer, C.J., Baum, P., Hanna, R.W & Chapman, S. (2004) ‘Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations’ Assessment & Evaluation in Higher Education, 29: 5, 611-623

Electric Paper. (2011). Effective Course Evaluation: The Future for Quality and Standards in Higher Education London: Electric Paper

Ernst, D. (2006) ‘Student evaluations: a comparison of online vs. paper data collection’ Paper presented at the EDUCAUSE Annual Conference, Dallas,  9-12 October 2006 accessed on 26 October 2020 at https://events.educause.edu/annual-conference/2006/proceedings/student-evaluations-a-comparison-of-online-vs-paper-data-collection

evasys (2016) The devil is in the data How HE providers can benchmark their course and module performance London: Electric Paper

HEA. (2016). HEA Surveys 2016: Raising Response Rates. Accessed on 26 October 2020 at https://www.heacademy.ac.uk/system/files/downloads/guides-raising-response-rates.pdf

Kulik, J.A. (2009) Response Rates in Online Teaching Evaluation Systems, Office of Evaluations and Examinations, The University of Michigan, accessed on 26 October 2020 at https://www.wku.edu/senate/archives/archives_2015/e-4-l-response-rates-research.pdf

Liu, D. C. S., & Armatas, C. (2016). Response rate and ratings for student evaluation of teaching: Does online administration matter? Asian Journal of Educational Research, Vol. 4, No. 5, pp. 1-13.

Nulty, D. D. (2008) ‘The adequacy of response rates to online and paper surveys: what can be done?’ Assessment & Evaluation in Higher Education, Vol. 33, No. 3, pp. 301–314

QAA (2018), The UK Quality Code for Higher Education accessed on 26 October 2020 at https://www.qaa.ac.uk/en/quality-code/advice-and-guidance

Rutherford, C. (2016) Survey Research and Acceptable Response Rates accessed on 26 October 2020 at http://www.drcamillerutherford.com/2016/05/survey-research-and-acceptable-response.html

Salmon, P., Deasy, T., and B. Garrigan (2004)  ‘What escapes the Net? A statistical comparison of responses from paper and web surveys’. Paper presented at the Evaluation Forum: Communicating Evaluation Outcomes: Issues and Approaches, Melbourne, Australia, 24–25 November 2004

Watt, S., Simpson, C., McKillop, C. & Nunn, V. (2002) ‘Electronic course surveys: does automating feedback and reporting give better results?’ Assessment & Evaluation in Higher Education 27:4, 325–337

www.gov.uk (2020) Making your service accessible: an introduction accessed on 25 November 2020 at https://www.gov.uk/service-manual/helping-people-to-use-your-service/making-your-service-accessible-an-introduction

Dr Helena Lim

Dr Helena Lim is a teaching and learning specialist and Head of Opportunities at evasys.

With more than twenty-five years of experience working in UK higher education, Helena has held senior roles at Southampton Solent University and the Higher Education Academy (now Advance HE), and is the founder of the UK and Ireland Higher Education Institutional Research (HEIR) Network. She has held Honorary Fellowships with the University of Liverpool and Aberystwyth University, and has lectured at the University of Bath, Southampton Solent University and the Open University.
