The National Student Survey: Review, Remodel or Remove?

Since the UK Government’s address to Universities UK members on 10th September, which covered, amongst other things, news of a “wholesale review” of the National Student Survey (NSS), I’ve been keeping a keen eye on the commentary, as the subject of student feedback is very close to my heart.

One piece in particular really resonated: the article by Camille Kandiko Howson of Imperial College London, published by Wonkhe on 14th September, which asks “Can we live without the NSS?”

Camille makes several points about the problems embedded in the NSS’s role as a ranking factor in how institutions are perceived and chosen by students, as well as the costs involved in keeping this particular ship afloat, costs borne by both governments and institutions.

Has the National Student Survey (NSS) made a difference?

There is little doubt that the NSS has been demanding, and costly, for universities to manage, with potentially counterproductive impacts on students through constant chase-ups to respond. There is also a distinct possibility of gaming and of attempts to inappropriately influence outcomes. However, throughout my time in HE, it has seemed clear to me that its existence, and of course its inclusion in league tables, has focussed attention on the student experience and on teaching and learning delivery. Some of this may not have been wholly productive but, taken at a macro (political?) level, the NSS can be seen to have made a difference.


The NSS is, of course, a “top-down” programme-level survey, reflecting students’ perceptions across a whole programme of teaching, which can amount to 20+ modules delivered by even more lecturers. Are students always able to make a reasonably balanced judgement when asked to respond to a single statement such as “Staff are good at explaining things”? Can one or two poorer teaching experiences disproportionately affect that response (an unequivocal ‘yes’ from a student at one University in which I worked)?

Irrespective of any demise of the NSS (and the league table compilers will have their say), I believe that “bottom-up” module evaluation, potentially including individual lecturer evaluation (contentious, I know) with effective “closing the loop” feedback to students, is a key element in the mix of listening to the student voice. It gives Module Leaders the information they need to take action, and gives students feedback on the actions that have been taken, letting them know that their voice is actually being heard. Done correctly, module evaluation opens a dialogue with students which, in turn, can improve student engagement and enhance partnership in the collaborative venture of learning.

In Conclusion…

The critical takeaway here is that, no matter the vehicle used to gather the data, feedback from students provides actionable insight: insight that can help to improve teaching and learning within an individual module. Show students that they have been heard: summarise the positives, tell them how you are going to improve, and explain what action is being taken as a direct result of their feedback, and positive student engagement will follow organically (not to mention ever-increasing response rates to online surveys).

Closing the feedback loop between students and universities is a topic about which I am extremely passionate. For further thoughts on the benefits that can be achieved, take a look at Why Universities need to Close the Student Feedback Loop at Module and Course Level on the EvaSys blog.


Bruce Johnson is the Managing Director at EvaSys and has 20+ years of experience working within both large and small UK universities, including 14 years heading Student Systems at a Russell Group University. 
