
An Application of Peer Feedback to Undergraduates' Writing of Critical Literature Reviews

 

Lorna I. Morrow

Department of Psychology,

University of Glasgow

l.morrow@psy.gla.ac.uk

Tel: 0141 330 5089

 

 

Abstract

 

It has been suggested that peer assessment is beneficial to student learning, both to the student whose work is being assessed and to the student who is assessing.  A peer feedback procedure was applied to a specific coursework assignment, the writing of a Critical Review, for a small group of students as part of the Level 3 Psychology course at the University of Glasgow.  Peer feedback was provisionally introduced for the purposes of addressing specific issues raised by the students about the assignment, and also to encourage autonomous and self-regulated learning.  An initial evaluation of the peer feedback procedure indicated that students felt they benefited from the opportunity to engage in peer feedback.  Ideas for amendments to the procedure and evaluation of different aspects of the experience are discussed.

 

Keywords:   peer assessment, peer feedback

 

 

Introduction

 

Assessment serves different purposes in education.  While summative assessment provides educational establishments (and prospective employers, etc.) with some index of a student’s attainment of the learning objectives, the purpose of formative assessment is to provide the student with an indication of how well they are performing and how they might be able to improve.  Thus, formative assessment can be an invaluable part of the learning process if the student is able to understand the feedback and act upon it.  Traditionally, the assessment of a piece of work is carried out by the teacher (tutor assessment).  However, more recently, some courses in HE have utilised peer assessment, usually in addition to tutor assessment. 

 

Peer assessment is generally defined as students evaluating the standard of the work of their peers.  Falchikov (2001) distinguishes further between peer assessment and peer feedback, whereby peer assessment involves a grade being given, and peer feedback involves the provision of comments.  It has been suggested that peer assessment (in its widest definition, including peer feedback) is not only beneficial to the student receiving the feedback (Van den Berg, Admiraal and Pilot, 2006a), but also actually helps the student assessor.  In particular, the assessor may be able to develop a better understanding of what makes a piece of work good or bad; be better able to apply the same objectivity in evaluating their own work, and thus engage in more effective self-regulation (or self-assessment); and be encouraged towards autonomous learning (e.g. Nicol and Macfarlane-Dick, 2006; Van den Berg, Admiraal and Pilot, 2006a).  Other potential benefits include:

 

  • The provision of (additional) feedback to the student authors;

  • The process of giving feedback would encourage the assessors to engage with the content of the work at a deeper level than they might otherwise (Fallows and Chandramohan, 2001), which may improve their knowledge of the topic;

  • Peer feedback does not necessarily increase the workload of the tutor.

 

Critical Review – Background and Areas for Improvement

 

Honours (Levels 3 and 4) Psychology students at the University of Glasgow are required to write Critical Reviews (CRs) as part of their coursework.  This is the first time during the course that students are asked to write CRs, as it is a progression from the more typical essays that they write in the first two years of the course.  The purpose of the CR is to assess the students' attainment of two of the intended learning outcomes of the honours course: to evaluate theory and experiments, and to write critical reports.  Students are also given the opportunity to carry out independent research during the writing of the CR, a skill which is also required for other aspects of the course, and meets the British Psychological Society's requirement that independent research skills are demonstrated on accredited courses. 

 

In a web document that students are encouraged to refer to, CRs are defined as follows: "Critical Reviews are essays based on scholarship i.e. on finding and reading the literature on a topic, and adding your own considered arguments and judgements about it. CRs thus involve both reviewing an area, and exercising critical thought and judgement" (Draper, 2005).  Students submit three CRs, the first of these being formative in nature, and the second and third contributing towards their final degree mark.  Tutor feedback is provided for the formative CR, both on a pre-submission draft and on the final, graded CR.  However, no such feedback is provided for the summative CRs, due to departmental policy and university regulations, respectively.  The students are encouraged to work on their CRs over a semester (a period of approximately 2-3 months).  Supervision is provided in the form of small tutorial groups (maximum of 6 students) for the first two CRs, and on an individual basis for the third.

 

A questionnaire was designed to examine to what extent the first (formative) CR experience helped students to understand what is required in a CR, and to prepare and write subsequent summative reviews.  The feedback from one tutorial group highlighted that the students wanted to see examples of CRs (4 out of 5 students), in order to obtain more of an idea about the layout and structure, and how to approach writing a review.

 

Related to the above, a second issue that warrants consideration is that of there only being one formative CR before the summative CRs.  Students have often asked for feedback on the second (summative) CR so as to be able to apply this to the third CR, which contributes a greater percentage towards their final degree mark.  This perhaps highlighted that, regardless of whether students received a high or low mark for the CR, they had not yet been able to identify what makes a good CR. 

 

Finally, a third objective of improving the CR process was to take the opportunity to further encourage students towards independence (or co-dependence on peers, rather than dependence upon the tutor), and autonomous and active learning.  This would include encouraging students to reflect on what the goals should be for their work, to what extent their work meets these goals, and how to bring their current work to the level of the desired goals (Nicol and Macfarlane-Dick, 2006; Sadler, 1989).  Such active and autonomous learning would surely benefit students towards the specific goal of obtaining a good degree, but would also be a valuable and transferable life skill.

 

Intervention – Encouraging Peer Feedback

 

A parsimonious solution to the three issues discussed above was to introduce a system of peer feedback.  Firstly, encouraging students to swap CRs in advance of the deadline would provide them with the opportunity to read other CRs (as requested), and so either affirm that their own CR was approaching the desired standard, or else allow time to modify their CR before submission. 

 

Secondly, having students provide feedback on each other's CRs was introduced to provide peer feedback in the absence of tutor feedback, and also to help the students gain experience of thinking about and understanding the assessment criteria, and of evaluating work with regard to these criteria.  Thus, for the summative CRs the students would hopefully have developed their skills for evaluating both their own and others' work against objective criteria.  It was hoped that students would thus be able to obtain good grades for their summative CRs, or at the very least feel more confident that the work they were submitting was of a good standard.

 

Thirdly, as mentioned, peer assessment has been implicated in encouraging autonomous and self-regulated learning (Nicol and Macfarlane-Dick, 2006).  It was also hoped that the students would be encouraged to become more co-dependent learners, thus realising the importance of group work towards a mutual understanding of the task.

 

Thus, to summarise, the three objectives of the intervention were: to provide greater support for the writing of the CRs by facilitating the reading of other CRs; to provide a useful alternative to tutor feedback; and to encourage self-regulation and autonomous learning, and the development of the necessary skills.

 

Method

 

Participants

 

Six CR groups (32 students, supervised by one of three tutors: one tutor supervised four groups and the other two tutors supervised one group each) were encouraged to participate in peer feedback.  Two of these groups were embarking on the CR process for the first time (i.e. writing their first, formative CR), and four for the second time (a summative CR).

 

Intervention Procedure

 

The procedure for the peer feedback process was as follows.  At the first CR meeting peer feedback was mentioned to the students.  Participation was strongly recommended by the supervisors, but was not compulsory.  Since the students indicated an interest in participating, deadlines were negotiated.

 

The number of reviews that each student commented on was determined by the students themselves.  Students were encouraged to swap their feedback well in advance of the submission deadline.

 

Use of Structured Feedback

 

The students were supplied with a structured feedback sheet (see Appendix 1) designed in accordance with the three CR assessment criteria (set by the department).  These are:

 

  • The quality of the research carried out, i.e. did they find the best papers, are they recent?

  • The quality of the write-up, i.e. is the material they found well presented and clearly structured?

  • The quality of the critical analysis – have they gone beyond an essay style, moving beyond description to encompass interesting and challenging evaluation?


    The assessment criteria were used in an attempt to ensure that the feedback given to the students would be clearly useful in terms of helping them obtain a better grade.  Prompts were included under each of the three criteria, to encourage assessors to say "what was good" and "what could be improved".  These prompts were included to encourage peer assessors to provide a balance of positive and negative comments: to consider and affirm what was good about the CR, and to provide negative feedback in a constructive way (i.e. "what could be improved" rather than "what was not good").  It was also hoped that such constructive criticism would be motivating for the student receiving the feedback, by reminding them that the not-so-good points could be improved if appropriate changes were made.

     

    It was also intended that the provision of the assessment criteria to the students in advance of the deadline would benefit them during the writing process, in terms of developing a better idea of which aspects were important enough to devote their attention to, and also of considering how best to meet these criteria.  Further, since the criteria emphasise content more than structure, this may have helped to discourage students from thinking that one particular format is best.

     

    Evaluation of the Intervention

     

    The evaluation questionnaire (see Table 1) was designed to assess to what extent peer feedback had been successful in addressing the three objectives for change.  In particular, to gain some idea of how helpful it was for the students to read other CRs, question 1c explicitly asked this.  The majority of questions asked how helpful both the specific feedback and the process in general were towards improving their CR and gaining confidence in the process (questions 1a, b, d, e, 2a-c).  Question 3 asked to what extent students might engage in peer feedback spontaneously for future assignments, thus examining if students were moving towards autonomous learning.  Students were asked to circle the appropriate number on a 5-point Likert scale.  For questions 1 and 2, 1 on the scale represented "very unhelpful" and 5 represented "very helpful"; for question 3, 1 corresponded to "very unlikely" and 5 to "very likely".

     

    Questions 1a-d were asked of all students, whether writing their first or second CR.  However, questions 1e, 2 and 3 were only asked of the students completing their second questionnaire.  These questions were not asked of the students writing their first CR in order to minimise the length of the questionnaire, which also asked many other questions about the experience of writing the first CR, unrelated to peer feedback.

     

    Results

     

    Of the 32 students asked to complete the evaluation, only 17 consented (6 who had just written their first CR and 11 who had written their second).  Of these, two students had not participated in peer feedback due to lack of time (in addition, one student said she had underestimated how useful it might have been).  Another student had provided feedback on others' CRs, but had not been able to distribute her own CR for comments due to illness.

     

    Table 1   Students' evaluation of the peer feedback process: percentage (and number) of students who selected each value on the Likert scale.

                                                                  N    1    2      3       4       5        m
    1   To what extent did you find the following helpful
        towards improving your CR:
    1a  Feedback from your peer(s) on what was good               14   -    -      29 (4)  64 (9)  7 (1)    4
    1b  Feedback from your peer(s) on what could be improved      14   -    7 (1)  14 (2)  43 (6)  36 (5)   4
    1c  The opportunity to read and evaluate someone else's CR    15   -    -      7 (1)   13 (2)  80 (12)  5
    1d  The process of dialogue between yourself and your
        peer(s) about the CR process                              15   -    -      27 (4)  60 (9)  13 (2)   4
    1e  Using the marking criteria to evaluate work               11   -    9 (1)  27 (3)  36 (4)  27 (3)   4
    2   To what extent do you think the peer review process was
        helpful for your confidence in the following areas:
    2a  Your evaluation and assessment of CRs                     11   -    9 (1)  9 (1)   73 (8)  9 (1)    4
    2b  Knowing what makes a good CR                              11   -    -      36 (4)  18 (2)  45 (5)   4
    2c  Your critical thinking abilities                          11   -    -      18 (2)  64 (7)  18 (2)   4
    3   How likely is it that you will engage in peer review
        for future course work assignments (e.g. Level 4 CR)?    11   -    -      9 (1)   27 (3)  64 (7)   5

    N = total number of students who answered each question, m = median of the group responses.

     

    The results of the evaluation from the students who participated in peer feedback are presented in Table 1.  For each question, the median was always above the neutral response of 3 on the scale.  In order to establish if these differences were significant, one-sample sign tests (one-tailed) were performed.  These demonstrated that for each question the group responses differed significantly from 3 (for all sign tests, p < .05).  
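    The one-sample sign test used here can be sketched as follows.  This is a minimal illustration, not the analysis script used in the study; the response counts are taken from Table 1, question 1c, and ties with the neutral value are dropped, as is standard for the sign test.

```python
from math import comb

def sign_test_one_tailed(responses, mu=3):
    """One-sample sign test (one-tailed): are responses above mu?

    Drops ties with mu, then computes P(X >= #above) under
    Binomial(n, 0.5), where n is the number of non-tied responses.
    """
    above = sum(1 for r in responses if r > mu)
    below = sum(1 for r in responses if r < mu)
    n = above + below  # ties with mu are excluded
    return sum(comb(n, k) for k in range(above, n + 1)) / 2 ** n

# Question 1c from Table 1: one "3", two "4"s, twelve "5"s (N = 15)
q1c = [3] + [4] * 2 + [5] * 12
p = sign_test_one_tailed(q1c)
# All 14 non-tied responses lie above 3, so p = 0.5 ** 14, well below .05
```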

     

    The responses to question 1c indicated that students found it very helpful to read and evaluate someone else's CR, with the majority selecting the highest point on the scale.

     

    Also evaluated favourably by the students were the feedback given (questions 1a and b), the process of dialogue between peers and the use of the assessment criteria (questions 1d and e).  Students also indicated an increase in confidence in assessing CRs, knowing what makes a good CR, and their critical thinking abilities (questions 2a-c).  This would suggest that the peer feedback process was useful both to the author receiving feedback and also the assessor.

     

    Students indicated that they were very likely to engage in peer feedback for subsequent course work assignments (question 3), indicating a predisposition for autonomous learning.  The extent to which students' intentions became reality was examined in a follow-up mini-questionnaire at the end of Level 3.  Students were asked to indicate for how many of the remaining assignments they had engaged in peer feedback, and if they had not then to indicate a reason for this.  Out of 13 responses, 6 students did not engage in peer feedback for any assignment, while 7 did utilise peer feedback: 5 for at least one assignment, and 2 for all of the remaining assignments.  The most common reason given for why students did not utilise peer feedback was lack of time.  Other reasons given tended to be practical constraints (e.g. meeting with peers was difficult for assignments due in after a university vacation).  Only two students (for one particular assignment) thought that peer feedback would not be useful.

     

    Finally, students were also asked "to what extent do you think that the peer review process should be encouraged by the supervisor?"  Out of 11 students, 3 selected "compulsory", 7 selected "encouraged but not compulsory", and 1 selected both of these responses, mentioning that it depended upon the group.  No students selected "not encouraged".

     

    Conclusions and Further Issues for Consideration

     

    Overall, the students who participated in the implementation of peer feedback reported positive views towards it, as has been demonstrated elsewhere (e.g. Wen and Tsai, 2006).  Cursory evaluation suggests that the process was beneficial in encouraging students to read other CRs, provide useful peer feedback and actively engage with the task.  Thus, future CR groups should also benefit from the process.  However, due to the fairly low response rate, it is possible that the students who did not like or benefit from the process, or were indifferent to it, may not have returned their evaluation questionnaires.  An improvement therefore would be to consider ways of increasing the student response rate.

                            

    Sadler (1989) suggested that for students to make the best use of feedback, it is important that they have an understanding of the desired performance goals, of the extent to which their work currently meets those goals, and of what action they can take to reduce the gap between the desired and the actual standard.  The current peer feedback procedure supports students at each of these levels: firstly, by providing the marking criteria in advance of writing, in an attempt to aid better understanding of the desired level.  Secondly, the evaluation of the standard of current work should be helped by encouraging consideration of their own and others' work in relation to the desired standard, and by the provision of feedback from peers.  Thirdly, allowing opportunities for resubmission of the work in light of the constructive peer feedback (Nicol & Macfarlane-Dick, 2006; Sadler, 1989) should aid understanding of how to close the gap between the actual and desired standard (more than would be the case if the feedback were simply read but not acted upon).  Subsequent studies could more formally investigate the extent to which peer feedback aids the developing understanding of these aspects, for example by asking students to indicate at various times throughout the CR process (e.g. before and after evaluating another's CR, or receiving feedback) to what extent they feel they have understood what the assessment criteria are and how to meet them.

     

    The procedure for the implementation of peer feedback could also be improved with regard to suggestions in the literature.  For example, one improvement would be to encourage more dialogue between the assessor and the author regarding the feedback, to be sure that the author understands the feedback and has the opportunity to ask questions or reply to the comments (Nicol & Macfarlane-Dick, 2006).  Further, dialogue has been shown to increase the assessors' explanations of the reasons for their evaluation and their recommendations for change (Van den Berg, Admiraal and Pilot, 2006b), which would be useful for the student authors to know.  Another improvement would be to examine the feedback provided, to explore what students have understood about the process.  However, perhaps the actual quality of the feedback is not as important (at least initially) as the opportunity for learning, since the students are trying to understand the meaning of the criteria and assess their work accordingly (Van den Berg, Admiraal and Pilot, 2006b).  Hopefully, with such modifications to the procedure, the implementation of peer review in the CR process will be even more beneficial to the students, and the process of the students' developing understanding will be better understood.

     

    Acknowledgements

     

    Many thanks to all the students who filled out the rather lengthy evaluation questionnaires, to Jason Bohan and Steve Draper for implementing the peer feedback process as described and collecting some of the evaluation data, to Mitchum Bock for stats advice, to Paddy O'Donnell for reading an earlier version of this paper, and to two anonymous reviewers for helpful comments.

     

    References

     

    Draper, S. W. (2005). Critical Reviews. http://www.psy.gla.ac.uk/~steve/resources/crs.html [accessed 15/08/06].

     

    Falchikov, N. (2001). Learning together: Peer tutoring in higher education. London: Routledge Falmer.

     

    Fallows, S. & Chandramohan, B. (2001). Multiple approaches to assessment: reflections on use of tutor, peer and self assessment. Teaching in Higher Education, 6 (2), 229-246.

     

    Nicol, D. J. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199-218.

     

    Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.

     

    Van den Berg, I., Admiraal, W. & Pilot, A. (2006a). Design principles and outcomes of peer assessment in higher education. Studies in Higher Education, 31 (3), 341-356.

     

    Van den Berg, I., Admiraal, W. & Pilot, A. (2006b). Designing student peer assessment in higher education: analysis of written and oral peer feedback. Teaching in Higher Education, 11(2), 135-147.

     

    Wen, M. L. & Tsai, C.-C. (2006). University students' perceptions of and attitudes toward (online) peer assessment. Higher Education, 51, 27-44.

     


    Appendix 1: Structured Feedback Sheet

    Critical Review Peer Feedback Sheet

    Student Author:

    Student Assessor:

     

     

    Quality of the research carried out, i.e. did they find the best papers, are they recent?

    What’s good

     

     

     

    What could be improved

     

     

     

    Quality of the write-up, i.e. is the material they found well presented and clearly structured?

    What’s good

     

     

     

    What could be improved

     

     

     

    Quality of the critical analysis – have they gone beyond an essay style, moving beyond description to encompass interesting and challenging evaluation?

    What’s good

     

     

     

    What could be improved

     

     

     

    Any other comments

     

     

     ISSN 1750-8428 (online) www.pestlhe.org.uk

    © PESTLHE

     
