This article explores the value of garnering student feedback on teaching during a module or programme and the learning technologies that Durham provides to help make this happen.
What are the key ideas?
It has been shown that students believe the primary value of completing module evaluations is to improve teaching (Chen and Hoshower, 2003). However, improvements to teaching do not benefit students who have long since finished the module, or indeed the programme (Spencer and Schmelkin, 2002). It is thus argued that in-sessional (also known as midterm, midcourse or informal) evaluation of teaching closes the feedback loop: it gives the lecturer the opportunity to improve teaching for the very students who took the time to provide the evaluation (Edström, 2008; Veeck et al., 2015).
How can learning technologies help?
Paper and pens
These rather more traditional technologies can be used to garner quick feedback at the end of a teaching session. Whether writing comments under headings on a flipchart or sticking post-it notes to the wall, students are encouraged to provide feedback while the session is still fresh in their minds.
TurningPoint
TurningPoint can be used to gauge student understanding and immediate reactions (e.g. ‘Am I going too fast, too slow or just right?’), but also to collect anonymous feedback on the module itself. This can be done in real time, or via a survey that remains open for an extended period. Students can respond on laptops and mobile devices, and results can be downloaded for analysis.
duo surveys
Surveys can be delivered via duo sites, and once a survey has been created it can be copied to other module or programme sites. Anonymous results are available in duo and as downloadable spreadsheets.
Jisc Online Surveys
Jisc Online Surveys (formerly Bristol Online Surveys) is available to all Durham staff. Direct links to surveys can be supplied to students, with the option of anonymity, and results can be viewed online or downloaded in several formats for analysis. Staff need to register to use Online Surveys.
duo discussion boards and wikis
For a more social experience, students can provide their feedback via a discussion board or wiki on the duo module or programme site. Results may be more difficult to collate, and would not necessarily be anonymous, but the informality and social dimensions might encourage student engagement.
Office 365
Using Office 365, pairs or small groups of students can collaborate on their evaluations. This encourages students to take others’ views into consideration and negotiate their responses (Veeck et al., 2015).
Learning journals
While an ongoing learning journal is more involved than the suggestions above, where appropriate to a module or programme it would give staff deeper, extended insights into students’ progress and challenges. (It would, of course, also help students to reflect on their own learning journeys.)
As student evaluations of teaching usually involve free-text responses, which might include personal or sensitive information (think GDPR), we do not recommend that tools external to the University be used for evaluation.
If you would like to discuss any of these options further, please contact the Learning Technologist for your faculty.
What kinds of questions might I ask?
This will depend greatly on your context, but a popular way to get students thinking is the Stop, Start, Continue method, i.e. ‘What should I stop doing in my teaching? What should I start doing? What should I continue doing?’ (Hoon et al., 2015).
To find out which questions are asked of students in your department in the University’s annual Module Evaluation Questionnaires, contact your department’s MEQ lead.
If you would like to mirror some of the National Student Survey questions, details are available on the NSS website.
What should I do with the results?
On a practical level, any data (analogue or digital) should be held securely and only shared with those who need to know.
Pedagogically, quick feedback (like that garnered in-session) should be acted on straight away, where appropriate and possible. Ideally, the lecturer would take some time to reflect on their own perceptions as well as the student feedback, and perhaps discuss potential improvements with a colleague (Butcher, Davies and Highton, 2006).
More extensive feedback, such as survey results at module or programme level, may need further analysis. Simple quantitative analysis can be done in Excel, and SPSS, R and NVivo are available on the app hub. Wherever possible, it is also good practice to discuss feedback, and potential responses to it, with student representatives (Williams and Brennan, 2004).
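As an illustration of the kind of simple quantitative analysis mentioned above, the sketch below summarises a set of hypothetical 1–5 Likert-scale responses using only Python’s standard library. The question wording and response values are invented for the example; the same summary (mean, median, percentage agreeing) could equally be produced in Excel, SPSS or R.

```python
# A minimal sketch: summarising hypothetical Likert-scale survey responses.
from collections import Counter
from statistics import mean, median

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree)
# to a question such as 'The pace of the lectures is about right'.
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

counts = Counter(responses)  # frequency of each rating

summary = {
    "n": len(responses),
    "counts": dict(sorted(counts.items())),
    "mean": round(mean(responses), 2),
    "median": median(responses),
    # proportion of students agreeing (rating 4 or 5)
    "percent_agree": round(100 * sum(1 for r in responses if r >= 4) / len(responses)),
}
print(summary)
```

A summary like this is easy to share with student representatives when discussing potential responses to the feedback.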
References and further reading
Butcher, C., Davies, C. and Highton, M. (2006) Designing Learning: From Module Outline to Effective Teaching. London: Routledge, pp. 194-9.
Chen, Y. and Hoshower, L.B. (2003) ‘Student Evaluation of Teaching Effectiveness: An assessment of student perception and motivation’, Assessment & Evaluation in Higher Education, 28(1), pp. 71-88.
Edström, K. (2008) ‘Doing course evaluation as if learning matters most’, Higher Education Research & Development, 27(2), pp. 95-106.
Hoon, A.E., Oliver, E., Szpakowska, K. and Newton, P.M. (2015) ‘Use of the “Stop, Start, Continue” Method is Associated with the Production of Constructive Qualitative Feedback by Students in Higher Education’, Assessment & Evaluation in Higher Education, 40(5), pp. 755-767.
Spencer, K.J. and Schmelkin, L.P. (2002) ‘Student Perspectives on Teaching and its Evaluation’, Assessment & Evaluation in Higher Education, 27(5), pp. 397-409.
Veeck, A. et al. (2015) ‘The Use of Collaborative Midterm Student Evaluations to Provide Actionable Results’, Journal of Marketing Education, 38(3), pp. 157-69.
Williams, R. and Brennan, J. (2004) ‘Collecting and Using Student Feedback Data: A Guide to Good Practice’, Higher Education Academy, pp. 17-8.
REVIEW DATE: October 2018