Unlike Australia, unlike the UK, unlike Ireland, unlike many other countries, New Zealand has no national tertiary student survey. How come? What is the cost to our system?
In my view, the lack of a national survey of student engagement is a problem. Read on.
The survey scene
Why survey students
Surveys of students enable TEOs, student organisations, government agencies and the public to hear and respond to students’ views and values – concerning their studies, their wellbeing, their experiences of their education, their engagement with their TEOs and their instructors. They can provide a window to the value learners get from their studies. Student surveys can be used to explore topics like student engagement, how well teachers help student learning and how students support and are supported by their peers. Research has shown that student evaluations of teaching are a reliable indicator of the quality both of individual teaching and of teaching across a whole programme[1].
For TEOs and agencies, survey data complements quantitative data on retention and success. It helps explain trends in student achievement and attainment. It supports the accountability of TEOs to their students. Capturing the voice of students is an important means of enhancing the quality of teaching, learning and the student experience; it enables TEOs to identify and respond to students’ concerns. It allows institutions to identify areas of weak and strong performance and to feed that into their quality improvement work[2].
A national student survey allows TEOs to benchmark their work against other similar organisations. Reports on the results of a national student survey can give prospective students and their advisors a clearer idea of the experience of tertiary study. It has the potential to be a research resource, a source of information that can give a view of learning across the system.
There are lots of institutional surveys …
Most New Zealand TEOs – all the universities, many polytechnics and many private training establishments – have surveys of students; many survey their students each year. Many also survey their graduates. Most TEOs recognise the value of survey data as a means of capturing student views of their learning that can then inform their quality improvement work.
But:
- TEOs conduct surveys in isolation from their peers, so they can’t benchmark their results against those of other TEOs[3].
- When TEOs do run surveys, external reporting of survey results is rare, and any reports on survey data that do get published are analytically light and hence of little value to prospective students or to stakeholders[4].
So TEOs are, overall, keen to survey students, but they are not keen to communicate survey results or to explain how they use survey data.
… but national surveys?
The closest we came to a national survey was more than a decade ago. The Australasian Survey of Student Engagement (AUSSE), developed by the Australian Council for Educational Research (ACER) for use in Australian higher education, was made available to New Zealand TEOs between 2007 and 2012. All of the universities tried it out at least once (but never all in the same year). Ten polytechnics participated in a 2010 pilot, while in 2011, ten PTEs took part in an AUSSE pilot[5].
But before AUSSE could become embedded in New Zealand tertiary education, the Australian government changed the rules, introducing a new national portfolio of higher education student and graduate surveys called the Quality Indicators for Learning and Teaching (QILT). QILT eclipsed AUSSE and the momentum was lost; in New Zealand, many TEOs have maintained their focus on surveys, but each has its own survey and its own approach.
What do other countries do
Institutions in most Anglophone countries and many other countries participate in large-scale inter-institutional surveys of students.
NSSE
Many draw methods, material and questions from Indiana University’s National Survey of Student Engagement (NSSE), now 25 years old, which, over that time, has been taken in 1,700 institutions and by 6.5 million students across the US and Canada[6]. It is taken annually by 500-plus institutions and more than a quarter of a million students. NSSE’s questions focus on themes like: academic challenge, learning with peers, experiences with instructors, and campus environment. Its validity and reliability have been researched intensively[7]. Participating institutions receive detailed reports analysing their students’ responses.
NSSE is not a “national” survey in the strict sense – it is open to institutions in 63 different jurisdictions[8] but there is no requirement on institutions to participate. But it is very important because so many institutions, so many students, do take part and because many other countries’ surveys use questions developed and road-tested by NSSE; this enables a level of international benchmarking at system level – see this paper for data that compares results over time from NSSE and equivalent surveys from Australia, Ireland, UK, China, Korea and some other countries[9].
Ireland
The Irish student survey, StudentSurvey.ie, was created by a consortium comprising the Higher Education Authority[10] (HEA), the two peak bodies representing higher education institutions and the Union of Students in Ireland. The survey has been run since 2013[11], managed by an independent commercial research company.
The emphasis of the survey is on improving the quality of teaching. The survey aims to help institutions identify areas of strength in how students engage (so that they can continue and reinforce these practices) and to identify areas requiring further development or improvement in how students engage (so they can respond). The emphasis is on continual enhancement of institutions’ teaching and learning and student engagement.
Although participation is voluntary, all of Ireland’s higher education institutions agreed to participate, because they saw the value to themselves of the survey data. While basic data on each institution is made available publicly through the survey website, the published national reports are rich and detailed. Each institution is provided with the full raw data from its own students and with benchmarks.
The survey has a focus on student engagement, using internationally standard items drawn from the NSSE, enabling a measure of international benchmarking as well as creating a stable time series. The 10 indicators developed from the survey questions are shown in the figure below[12]:
Figure 1: Irish student survey indicators

Australia
The Australian Quality Indicators for Learning and Teaching (QILT) system is a suite of four surveys:
- a student experience survey (that incorporates some student engagement items as well as more general items)
- two graduate outcomes surveys – one four to six months after completion, the other three years after completion
- and an employer satisfaction survey.
Australian survey results are published with confidence intervals so that readers can assess how significant differences between institutions are.
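To see why those confidence intervals matter when comparing institutions, here is a minimal sketch in Python. The percentages and respondent counts are invented for illustration (they are not drawn from any published QILT results); it simply computes a normal-approximation confidence interval for each institution’s “positive experience” percentage:

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for a survey proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Invented figures for illustration only: two hypothetical institutions'
# "positive experience" rates and respondent counts.
inst_a = (0.77, 1200)   # 77% positive from 1,200 respondents
inst_b = (0.80, 400)    # 80% positive from 400 respondents

for name, (p, n) in {"Institution A": inst_a, "Institution B": inst_b}.items():
    lo, hi = proportion_ci(p, n)
    print(f"{name}: {p:.0%} positive, 95% CI {lo:.1%} to {hi:.1%}")

# With intervals that overlap this much, a two- or three-point gap between
# institutions may simply reflect sampling error rather than a real difference.
```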
Much of the focus of the QILT suite is on institutional accountability. Participation in QILT is used in the quality assurance system and contributes to the government’s monitoring and management of institutions and of the system. QILT data also informs student choice by allowing students to compare the survey results of different institutions through a website called ComparED. Institutions are able to analyse their own data, compare their students’ responses with national norms and use that analysis to inform their teaching quality development work; institutions are provided with their full raw data plus dashboards that provide rapid access to standard analyses and benchmarks.
While participation in QILT is not compulsory, all 42 universities and around 90 non-university higher education providers take part – likely because the regulator and quality assurance agency, TEQSA, uses student experience survey data as part of its institutional risk assessment.
QILT is run under a contract to the Australian federal Department of Education by a professional research company (owned by the Australian National University).
The United Kingdom
The UK government sponsors an annual student survey, administered on behalf of the government and the institutions by a commercial market research company. There is a common core of questions (that have their origin in the NSSE engagement approach) but the national governments in each of the four countries of the UK include questions specific to the issues that arise in their system.
The student survey data is used as part of the UK government’s teaching excellence framework (TEF) assessment.
The English government has sometimes included topical questions, questions du jour. Since 2023, for example, they have asked questions about the issue of free speech in class. While I am not sure it’s a good idea to cloud the survey – such an important barometer of system performance – with that sort of topical question, analysis of the survey data by the higher education think-tank Wonkhe.com was able to establish the scale of the problem – the extent to which the English higher education system does (or did), in fact, have a free speech problem[13].
Why should we have a national student survey?
We already have reasonable data on students’ outcomes via the IDI. But, on teaching and learning, very little … Learning is the heart, the purpose of tertiary education. Learning occurs through thousands of interactions between learners, instructors, institutional processes, learning resources and facilities. Those interactions are mostly invisible; only the individual learner sees the whole complex process. So, what drives effectiveness, what leads to success, that’s a black box. We have thousands and thousands of anecdotes about teaching and learning effectiveness, dozens of experts, small scale research, international research. But no comprehensive, objective source of data on what drives learning performance in our system.
So our priority has to be the experience of teaching and learning, engagement with teaching, rather than an outcomes survey. We need a national survey of student experience.
The rationale for a mandatory national student experience/student engagement survey comes down to four essential points:
- All TEOs – even small TEOs – need to get robust data on how their students are experiencing and engaging with their studies, using well-tested, robust, internationally recognised measures. They need to use that data to inform their quality enhancement work – to identify areas, processes, facilities and approaches where they need to target their improvement efforts. They need to maintain a time-series that allows them to trace trends and to see the effects of their improvement measures.
- And they need to be able to compare their headline results with similar TEOs, confident that the data is collected and administered in a comparable way – that the survey in each TEO is seeking the same information in the same way. That allows TEOs to see whether an emerging trend is particular to their own students or part of a broader sector-wide trend – for instance, if my students’ results are down, is that due to my work with my students, or are there factors in the wider environment that are contributing?
- Plus … we know that teaching and learning is the black box that we need to open if we are to improve practice. So, the system needs a comprehensive research resource that can provide for deeper investigation into teaching and learning, that can create information on what works in teaching and learning, for our types of students, for the different types of tertiary education we offer.
- We need information on teaching and learning that is internationally benchmarked so we can assess our system performance against a group of peer countries.
A mandatory national survey would place institutional focus on the quality of teaching and learning and encourage TEOs to power up their teaching and learning professional development work.
Get ready for the pushback from institutions
Some TEOs, especially the larger TEOs, will argue that investigating learning and managing quality is their business. Investigating and shaping teaching and learning is something they do and, as autonomous institutions, as a matter of principle, it’s something they should deal with. Alone. They may argue that they need questions that reflect their priorities. They already do their own surveys; they have invested time and money in developing surveys that are tailored to their circumstances and mission, and they have long data time-series from their own surveys – what’s the point of a national survey that will overturn that investment? Is this just yet another attempt to rank and compare institutions? Like the Australian ComparED site? Another government take-over?
Mostly, not unreasonable points (even if a national survey wouldn’t preclude a TEO from doing the same sort of analysis the organisation does now on its own home-grown survey).
If there was a national survey, institutions would be no worse off than now, but they would get the benefit of benchmark data that they can’t get now, benchmarks that help them understand and assess their own results. Prospective students and government agencies – who see almost nothing at present – would gain insights, insights they are now blind to.
So most of that pushback is less about whether a national survey is desirable; rather, the concerns are about how a national survey might be put together, how the data is collected and managed, how it is analysed, who sees the reports…
So, if we are to have a national survey, we need one that is well designed, that addresses reasonable concerns. Is that possible?
Designing a survey that would address institutions’ anxiety …
Let’s look at how others have dealt with these tricky issues.
Survey governance …
Institutions will be concerned at loss of control. Loss of control over an evaluative assessment of their core business. Who will decide what’s in the survey? What data will be published? Who will decide data access and security policies? Questions will arise about how the survey is targeted, its content, its timing, its emphasis ….
Those are legitimate concerns. Institutions will be key to enabling the survey to work well. They need to have confidence in the integrity of the survey and in its design.
It is important for governance to be cooperative, with sector peak bodies and student representatives having a major role in governance, policy, design, process and oversight, alongside government agencies. That’s how the Irish manage their student survey – with the two institutional peak bodies and the national student association equal partners with the government agency. The Irish model provides a template New Zealand could follow.
Managing a survey …
The Irish, the Australian and the UK surveys are managed by professional research companies, independent of government and of the sector, with well-established protocols for serving the needs of survey owners: managing survey data, protecting its confidentiality, delivering results with confidence intervals and margins of error, and guarding the privacy of respondents and institutions. They supply a very detailed report to each institution on its own students’ responses, and they provide a summary report on the whole survey to enable institutions to benchmark their performance. But what gets published is summary data; respondents’ privacy is protected, and what the public, government agencies and other institutions see is summarised.
That’s how the system is managed in Ireland, Australia and the UK. It’s also how many TEOs manage their home-grown surveys now[14].
Types of students, types of TEOs, types of programmes …
Questions appropriate for beginning students, final-year students and postgraduate students may differ. Other countries provide a common core with variations for each type of student.
In New Zealand’s case, where we have a unified sector, universities, wānanga, polytechnics and PTEs will all want to take a different slant. That suggests tailoring surveys to different TEO types as well as to different student types, with some common questions … It may even be possible to allow individual TEOs to add their own questions around a common base.
Reporting issues: ranking or benchmarking …
A lot of the pushback comes down to the nature of reports on a TEO’s survey results. Who will prepare the reports? How much detail? How and where will the reports be published? Will survey data become a feature of quality assurance?
In particular, TEOs are always going to be worried about the risk of ranking. A benchmark displays their weaknesses as well as their strengths. They worry that readers may take an overly simplistic view of engagement and satisfaction data, ignoring confidence intervals and margins of error and hence reading too much into apparent differences that are not statistically significant. TEOs worry for their reputation. They worry that providing information on their weaknesses may harm their enrolments.
They may also be concerned about the risk that the government may latch onto survey data to create something akin to the Australian ComparED website where prospective students are encouraged to compare institutions’ performance on multiple factors. They may be concerned about the use of survey data in quality assurance (as occurs in Australia and the UK) – will this be treated in simplistic ways?
These policy questions are matters to be settled at governance level, so TEOs, agencies and students will all have their say. The solution here is careful design of the reports on the survey results, both the national reporting and the reporting by TEOs. One of the principles to underlie the policy might be that institutions should be responsible for publishing (as a possible minimum) their summary scores on each main indicator with some trend data, and with explicit margins of error, against the norm for student/programme groups across the system (rather than at the institution next door).
The Irish model might be a starting point.
Anxiety about student flight is likely unjustified …
The Australian experience suggests that alarm at surveys causing student flight is overstated. Selecting where to study is a complex matter. Differences between institutions in survey results have to be seen in the context of other sources of difference between institutions – location, programme structure, fields of specialisation, perceived status, perceived institutional culture, availability and quality of accommodation, …
In Australia, institutions with positive Student Experience Survey results tend to promote their education experience in their marketing, while those with lower satisfaction may emphasise other aspects of their performance, alongside their teaching[15].
Analysis of Australian student experience scores by university grouping from 2017 to 2023 shows that the elite Group of 8 (G8) research-intensive universities have, on average, slightly lower scores in the QILT student experience survey than other university groupings (the Australian Technology Network, the Regional Universities Network and Innovative Research Universities). Those groupings all outpoint the G8[16]. That has definitely not led to a collapse in demand for places at G8 institutions!
And the margins between institutions may be mixed and are often relatively slight. For instance, trend analysis of the Australian QILT survey of student experience shows the differences between the Australian university groupings are not large, especially allowing for margins of error; in 2023, the G8 universities averaged around 77% across the Student Experience Survey domains with the other groupings averaging between 78% and 80%[17].
Finally, the eternal question: who will pay …
The eternal question … It’s anyone’s guess whether the government will want to pay for a national student survey. But even if government does pay (as is the case in Australia, Ireland and the UK), a national survey will still place costs on institutions, who will have to compile the survey population, promote participation, distribute materials, send follow-ups and reminders …. That’s how the national survey works in other countries.
But for TEOs that currently run their own surveys, there are offsetting savings. If the government can be persuaded to pay, perhaps it might even represent a saving for TEOs. Who knows …
Better research on teaching and learning
One of the really important benefits of a national survey is that it expands opportunities for deeper research into the student experience and, ultimately, student success.
Nationally, student experience survey data provides opportunities for detailed trend analysis that can help explain how the system is delivering for students. National survey data will have narrower margins than institutional results, making analysis more robust.
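The gain in precision from pooling data nationally is easy to illustrate: under simple random sampling, the margin of error on a percentage shrinks roughly with the square root of the number of respondents. A quick sketch follows, with respondent counts invented purely for illustration:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error (in percentage points) for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n) * 100

# Illustrative respondent counts only: a small institutional survey versus
# a pooled national collection reporting on the same indicator.
for label, n in [("small TEO", 300), ("large TEO", 3_000), ("national pool", 60_000)]:
    print(f"{label:>13}: n={n:>6}, margin of error ≈ ±{margin_of_error(0.75, n):.1f} points")
```

On these illustrative numbers, a small institution is working with a margin of roughly five percentage points, while a pooled national collection gets that down to a fraction of a point – which is what allows finer-grained analysis by programme type or student group.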
In Ireland, the managers of the survey encourage applications from researchers to analyse the national data[18].
Among the dozens of research papers from other countries that explore survey data are:
- Siobhán Nic Fhlannchadha’s analysis of the trends in the Irish student experience survey between 2016 and 2021, finding, perhaps surprisingly, that experience was still positive in 2020 as the pandemic descended, but, unsurprisingly, fell away sharply in 2021. It also compares the Irish results with those in other countries.
- Anja Pabel and Mahsood Shah analyse the results of the Australian student experience survey – part of the QILT suite of surveys – tracing trends over time and disaggregating by university grouping.
- Jonathan Neves and Jason Leman looked at the relationship between the engagement with learning of UK students and self-reported development of their skills, finding that interacting with staff is an important factor in developing skills[19].
Analyses of the AUSSE survey pilots held in New Zealand in 2010-2012 disaggregate by demographic and study-related variables – ethnicity, gender, full- and part-time study and field of study – and explore those aspects in detail[20], finding, for instance, that among university students both Māori and Pacific students had marginally more positive responses on student engagement than the average, despite evidence that pass rates among those two groups are generally lower[21].
But, given that student experience surveys are typically run during a course (rather than at the end), there is no discussion of the relationship between survey responses and student success or grades[22].
Plus, there is nothing that relates survey responses to other, deeper, background variables such as SES, parental education, migration status etc.
Presumably, at present, some TEOs use the results of their institutional surveys to relate survey responses to a number of demographic and study-related variables and also to grades. But it’s hard to tell how deep their analysis goes and how effective their use of the analysis is in changing practice, given that TEOs don’t usually make anything other than a very high-level summary available to outsiders.
The opportunity – deeper analysis of national data
Here is one of the great opportunities thrown open by a national survey.
Suppose that survey results were to be added to the IDI, confidentialised and matched to the respondents’ other IDI data – measures of student success (pass/fail, regrettably, not grades) and myriad potential explanatory variables, such as school achievement, parental education, socio-economic status, post-study outcomes …. That would be a powerful means of exploring the predisposition to satisfaction and engagement and the relationship of satisfaction and engagement to achievement and outcomes. It would create a mechanism for identifying what is effective in improving student engagement in ways that lift achievement, and for disseminating those findings.
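To make the idea concrete, here is a purely illustrative sketch of the kind of analysis such a linkage could support – a logistic regression of pass/fail on an engagement indicator plus background variables. Everything here is hypothetical: the variable names are invented, the data is synthetic, and any real analysis would run inside Stats NZ’s secure IDI environment under its confidentiality rules.

```python
# Purely illustrative, with synthetic data: variable names and the linkage are
# hypothetical stand-ins for confidentialised survey responses matched to IDI
# records on an anonymised identifier.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Stand-in for a matched, confidentialised dataset.
df = pd.DataFrame({
    "engagement_score": rng.normal(50, 10, n),    # survey-derived indicator
    "school_achievement": rng.normal(0, 1, n),    # prior attainment (standardised)
    "parental_education": rng.integers(0, 3, n),  # 0 = none post-school, 2 = degree
})
# Synthetic outcome: pass probability rises with engagement and prior attainment.
logit = (0.04 * (df["engagement_score"] - 50)
         + 0.8 * df["school_achievement"]
         + 0.2 * df["parental_education"])
df["passed_all_courses"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Does engagement predict passing, after allowing for background variables?
X = df[["engagement_score", "school_achievement", "parental_education"]]
y = df["passed_all_courses"]

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name:>20}: {coef:+.3f}")
```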
That, if nothing else, makes a well-managed, well-designed national student survey an important priority for our system.
Bibliography
Ako Aotearoa and NZUSA (2013) The student voice in tertiary education settings: quality systems in practice Ako Aotearoa
Cheong K and Ong B (2016) An evaluation of the relationship between student engagement, academic achievement, and satisfaction pp 409–416 in: Assessment for Learning Within and Beyond the Classroom, Taylor’s 8th Teaching and Learning Conference 2015 Proceedings, Springer
Coates H (2005) The value of student engagement for higher education quality assurance Quality in Higher Education Vol 11, Issue 1
Gray J and Di Loreto M (2016) The effects of student engagement, student satisfaction, and perceived learning in online learning environments NCPEA International Journal of Educational Leadership Preparation, Vol. 11, No. 1
Hackett S and Nic Fhlannchadha S (nd) Evidencing value through impact StudentSurvey.ie
Johnson D, Shoulders C, Edgar L, Graham D and Ruckers K J (2016) Relationship between academic engagement, self-reported grades, and student satisfaction NACTA Journal Vol 60(3)
Kanwar A and Sanjeeva M (2022) Student satisfaction survey: a key for quality improvement in the higher education institution Journal of Innovation and Entrepreneurship 11:27
Neves J and Leman J (2022) Engagement with learning and the development of skills in United Kingdom students in: Coates H, Gao X, Guo F and Shi J (2022) Global student engagement: policy insights and international research perspectives, Routledge
Nic Fhlannchadha S (2022) The results of the StudentSurvey.ie trends over time research, 2016-2019 All Ireland Journal of Higher Education Vol 14 No 1
Pabel A and Shah M (2025) Student experience survey: trends and insights in Australian higher education from 2017 to 2023 Perspectives: Policy and Practice in Higher Education
Pelletier C, Rose J, Russell M, Guberman D, Das K, Bland J, Bonner H and Chambers C (2016) Connecting student engagement to student satisfaction: a case study at East Carolina University Journal of Assessment and Institutional Effectiveness Vol. 6, No. 2
Radloff A and Coates H (2009) Australasian Student Engagement Survey 2009 institution report Australian Council for Educational Research
Radloff A (2012) Student engagement at New Zealand Private Training Establishments (PTEs): Key results from the 2011 pilot of the AUSSE Australian Council for Educational Research
Radloff A (ed) (2011a) Student engagement in New Zealand’s universities Australian Council for Educational Research
Radloff A (2011b) Student engagement at New Zealand Institutes of Technology and Polytechnics: Key results from the 2010 pilot Australian Council for Educational Research
Sofroniou A, Premnath B and Poutos K (2020) Capturing student satisfaction: a case study on the National Student Survey results to identify the needs of students in STEM-related courses for a better learning experience Education Sciences 10, 378
StudentSurvey.ie (2020) International comparators factsheet StudentSurvey.ie
van der Meer (2011) Māori and Pasifika students’ academic engagement: what can institutions learn from the AUSSE data in: Radloff (2011a)
Endnotes
[1] See Pabel and Shah (2025) for a summary of the evidence.
[2] Kanwar and Sanjeeva 2022, Sofroniou et al 2020
[3] The sole existing example of a national survey is Education New Zealand’s biennial surveys of international students which, obviously, are focused on only one segment of the student population.
[4] See these examples….
[5] See Radloff (2011a, 2011b) and Radloff (2012)
[6] See this account of the history of the NSSE
[7] See for instance, Pelletier et al (2016) and Johnson et al (2016).
[8] 50 states in the US, 10 Canadian provinces and 3 Canadian territories.
[9] Nic Fhlannchadha (2022) and StudentSurvey.ie (2020)
[10] The HEA has a role in Irish higher education similar to that of the TEC in New Zealand
[11] See Hackett and Nic Fhlannchadha (nd)
[12] Sourced from the website StudentSurvey.ie
[13] See this story. Note that Australia has also had a free speech question, with similar results to the English one.
[14] For instance, the University of Canterbury uses the Qualtrics Survey Platform to run all their surveying.
[15] See Pabel and Shah (2025).
[16] ibid
[17] Pabel and Shah (2025) show that, in 2023, there was a measurable margin between the G8 and others on quality of teaching and overall educational experience. On learner engagement, the G8 was indistinguishable from the ATN and IRU groupings and above the RUN grouping.
[19] See the bibliography for links to the articles.
[20] See Radloff (2011a, 2011b and 2012)
[21] van der Meer (2011) in Radloff (2011a)
[22] Johnson et al (2016) get around this issue to an extent by relating survey responses to (self-reported) anticipated grades, a good indication, but not quite the same thing. This was, however, a small-scale study relating to one programme area in one institution. Interestingly, the study found no statistically significant relationship between satisfaction and self-reported anticipated grades. Neves and Leman (2022) also use self-reports to analyse the relationship between engagement and skills.