Guest Post by Dr Christine Couper 

Student surveys: a valuable tool?

In recent years, surveys have become a highly valued tool for gathering student feedback.

In fact, when developing its approach to regulation, the Office for Students (OfS) made clear its intention to use surveys routinely as a way of collecting student opinion.

Politics and controversy

However, the uses made of survey data can be controversial. Responses may be politicised or influenced by the cult of personality, and there is a growing body of evidence of bias in the feedback, with, for example, men receiving higher scores than women. Survey results have also become a potentially unreliable, yet key, ingredient in quality judgements, not only for league table compilers but also for government, as evidenced by the Teaching Excellence and Student Outcomes Framework (TEF).

All of this has led me to ponder a key question: what are student surveys for?

Who do student surveys benefit?

There are lots of possible answers to the question of what student surveys are for, such as “highlighting the strengths and weaknesses in an approach to teaching”, “evidencing the need for change”, or “giving students the opportunity to share their opinions”.

These, in turn, led to some more questions: Who are the key stakeholders when running a survey? Do they all benefit? Do some receive very little benefit? Can we make surveys more useful?

From a university perspective, students who participate in surveys for altruistic reasons provide a great service. Surveys may be the most efficient way to give a voice to a whole cohort of students, such that everyone gets a similar hearing. The feedback should directly benefit the teaching and learning of the next cohort. But can students benefit from a two-way, structured conversation?


A digital-only approach to student surveys

A few years ago, I was responsible for the implementation of EvaSys as our university-wide module evaluation survey system.

This was done when most institutions either had their own, often widely devolved, internal survey processes or were using paper surveys that were handed out in class, completed by students, then collected and run through specialist scanners to consolidate the results.

We decided from the outset that we would be digital-only, accepting that we would take a hit in terms of response rates, but believing that the benefits of speed and participant confidentiality at faculty level – and the avoidance of the complex logistics of using paper and scanners – outweighed what we expected to be a temporary dip.

Opening up a dialogue at module level

Why were we so keen on speed and confidentiality?

Well, we had discovered that EvaSys had the capability to provide survey outcomes directly to participants when a survey closed.

So, participants could get access to clearly presented results when they were relevant, enabling them to judge their own experience against that of the rest of the cohort. They could see whether their views were typical or not. We closed the student feedback loop at module level.

But that was not the only way that we provided feedback to students. Module leaders were asked to read and digest their module outcomes and then use EvaSys to provide their own commentary and reflections back to their students, which could be anything from thanks for positive comments to a short explanation of how key issues would be resolved.

By making the conversation two-way, we aimed to give students much greater ownership of the whole process. And because the features were built into the software, it really did happen at the click of a button.

Building momentum and consistency for the future

The data review and feedback did not stop there. A key aim of implementing a university-wide survey system was to allow comparative analysis of outcomes across modules.

While the analytics were managed centrally, the reviews were done within faculties, first by programme teams and then by the student experience committees. Both of these groups had student members, so it was possible to provide additional layers of feedback to our students as the module outcomes were reviewed within our annual teaching and learning review processes.
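
For readers who handle the data side of such reviews, a minimal sketch of the kind of comparative analysis described above might look like the following. This is an illustration only, assuming a simple export of per-response scores; the column names and the flagging rule are hypothetical, not EvaSys outputs.

```python
import pandas as pd

# Hypothetical module evaluation export: one row per student response.
# Column names are illustrative only; a real EvaSys export will differ.
responses = pd.DataFrame({
    "module": ["MATH101", "MATH101", "HIST201", "HIST201", "COMP301", "COMP301"],
    "faculty": ["Science", "Science", "Humanities", "Humanities", "Science", "Science"],
    "overall_score": [4.2, 3.8, 4.6, 4.4, 3.1, 2.9],  # e.g. a 1-5 Likert scale
})

# Mean score and response count per module, grouped by faculty.
by_module = (
    responses.groupby(["faculty", "module"])["overall_score"]
    .agg(mean_score="mean", n_responses="count")
)

# Flag modules scoring more than one standard deviation below the
# institution-wide mean as candidates for programme-team review.
overall_mean = responses["overall_score"].mean()
overall_std = responses["overall_score"].std()
flagged = by_module[by_module["mean_score"] < overall_mean - overall_std]

print(by_module)
print("Candidates for review:")
print(flagged)
```

In practice, of course, programme teams would review any flagged modules alongside the qualitative comments, rather than relying on scores alone.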

By developing the conversations with all our students, we hoped to provide evidence that there was direct benefit to them in continuing to complete the module evaluations, thus reducing the risk of survey fatigue and increasing future response rates as students saw that their voice was being heard. We were also building positive behaviours that would carry over to other surveys that staff and students are involved with, such as the NSS.

Alleviating concerns associated with a new survey system

When we first started on this journey, there were academics who worried that the outcomes could be used within a disciplinary structure and others who were convinced that they would get poor scores because they were teaching a demanding subject.

We also had to decide what we would do if students posted abusive comments that had a negative effect on staff welfare.

In relation to the first concern, our goal has been to encourage personal reflection rather than disciplinary action. For the second, comparisons are largely made within a discipline, so the demands of a subject matter less than staff feared. For the third, after much consideration, we agreed that students would be asked to be constructive and advised that abusive comments would be dealt with through existing processes for managing unacceptable behaviour.

I’m pleased to report that this is something we have never had to action (and EvaSys now offers language redaction capability). The personal qualities of teaching staff may influence students’ scores, but we have always had popular teachers who are well prepared and good at engaging, encouraging and supporting our students. Being recognised for that within a well-regulated EvaSys framework must be a good thing, especially as the recognition comes from the students.

Structured for the win

Whatever your aims for your survey, make sure you have a structured conversation with your students by providing feedback in a timely fashion. You’ll develop a virtuous cycle and a win-win outcome for all.


About Dr Christine Couper

Dr Christine Couper is the former Director of Strategic Planning at the University of Greenwich, where she led the implementation and ongoing direction of EvaSys. Christine is now a director-level consultant at CouperJones Ltd specialising in strategic planning, statutory regulation and funding, data analysis and insight, project and change management, qualitative research and higher education policy.
