Professor Margaret Price, Oxford Brookes University
The keynote speaker addressed student feedback in the context of a higher education environment that is dynamic and subject to numerous external and internal pressures. Within that context, there appears to be general consensus that feedback is a crucial component of learning and assessment, and that assessment itself is a key driver of learning. Professor Price pointed out that the discourse of assessment is relatively simple for such a complex subject and suggested a need for new, more descriptive terminology than exists at present. She also noted the added pressure of higher student expectations and a louder student voice from ‘customers’ paying high fees.
Oxford Brookes and Cardiff have collaborated on a new piece of research examining what students perceive as good feedback and which domains influence their perceptions of ‘good’ and ‘bad’ in this context. The project used student researchers and a cross-discipline approach. Students taking part were asked to bring one piece of ‘good’ feedback and one piece of ‘bad’ feedback. They were interviewed about these pieces and the feedback was then analysed. The domains used in the analysis covered quality, context, student development and expectations, and were further divided into areas such as technical factors, particularity (i.e. personal versus impersonal feedback), recognition of effort, assessment design (crucial), student resilience (can they accept criticism?) and student desires (to learn or to achieve a high grade?). The full report is available at http://www.brookes.ac.uk/aske/.
The research found that the domains overlapped and compensated for one another, so that feedback that was poor in one domain might be good in another, and vice versa. Three important messages emerged for those giving feedback:
• Give it plenty of time
• Train, develop and support staff in giving feedback
• Limit anonymous feedback (personalised feedback scored highly)
Professor Price suggested that students need to develop their assessment literacy if they are to gain the most from assessment and feedback. This can be done by developing their technical understanding of marking and grading, through self and peer assessment, and through an appreciation of what grading criteria actually mean. She pointed out that academics see hundreds of pieces of work and have a tacit understanding of what, for example, a 2:1 looks like, but how are students to know this? The answer was to give them good examples and put them through stages of self-assessment, peer review, drafting and re-drafting, and perhaps peer-assisted learning, in which more experienced students support beginners and help them to develop their assessment literacy.
An overarching message resulting from the research was that ‘You don’t need to get it right all the time’: students are very forgiving of delayed feedback or low grades when they can see that there has been a real effort to engage with their individual piece of work. Examples of good practice are available at https://www.plymouth.ac.uk/whats-on/inclusive-assessment-in-practice-conference.
Finally, Professor Price concluded that this should go beyond university: students should leave having developed these valuable self-evaluation skills to take with them into the future.
Report by Celia Cozens, e-Learning Content Manager, Centre for Academic Practice Enhancement (CAPE)