What Are The Key Characteristics Of Better Learning Feedback?
by Grant Wiggins, Authentic Education
On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts.
Whether the feedback is simply ‘there’ to be grasped or is offered by another person, all the examples highlight seven key characteristics of helpful feedback.
Helpful feedback is –
- Goal-referenced
- Transparent
- Actionable
- User-friendly
- Timely
- Ongoing
- Consistent
Though some of these traits have been noted by various researchers [for example, Marzano, Pickering & Pollock (2001) identify some of #3, #5, #1 and #4 in describing feedback as corrective, timely, and specific to a criterion], it is only when we clearly distinguish the two meanings of ‘corrective’ (i.e. feedback vs. advice) and use all seven that we get the most robust improvements and sort out Hattie’s puzzle as to why some ‘feedback’ works and other ‘feedback’ doesn’t. Let’s look at each criterion in turn.
7 Key Characteristics Of Better Learning Feedback
1. Quality learning feedback is goal-referenced
There is only feedback if a person has a goal, takes actions to achieve the goal, and gets goal-related information fed back. To talk of feedback, in other words, is to refer to some notable consequence of one’s actions, in light of an intent. I told a joke – why? To make people laugh; I wrote the story – why? To paint vivid pictures and capture revealing feelings and dialogue for the reader to feel and see; I went up to bat – why? To get a hit. Any and all feedback refers to a purpose I am presumed to have. (Alas, too often, I am not clear on or attentive to my own goals – and I accordingly often get feedback on that disconnect.)
Given a desired outcome, feedback is what tells me if I should continue on or change course. If some joke or aspect of the writing isn’t working – a revealing, non-judgmental phrase – I need to know. To that end, a former colleague of mine when I was a young teacher asked students every Friday to fill out a big index card with “what worked for you and didn’t work for you this week.” I once heard an exasperated NFL coach say in a post-game interview on the radio: “What do you think we do out here, wind a playbook up and pray all season? Coaching is about quick and effective adjustment in light of results!”
Note that goals (and the criteria for them) are often implicit in everyday situations. I don’t typically announce when telling the joke that my aim is to make you laugh or that I wrote that short story as an exercise in irony. In adult professional and personal life, alas, the goals and criteria for which we are accountable are sometimes unstated or unclear as well – leading to needlessly sub-par performance and confusing feedback. It can be extra challenging for students: many teachers do not routinely make the long-term goals of lessons and activities sufficiently clear. Better student achievement may thus depend not on more ‘teaching’ or feedback alone but on constant reminders by teachers of the goal against which feedback is given: e.g. “Guys, the point here is to show, not tell, in your writing: make the characters come alive in great detail! That’s the key thing we’ll be looking for in peer review and my feedback to you.” (That’s arguably the value of rubrics, but far too many rubrics are too vague to be of help.)
2. Quality learning feedback is transparent and tangible, value-neutral information about what happened
Therefore, any useful feedback system involves not only a clear goal, but transparent and tangible results related to the goal. Feedback to students (and teachers!) needs to be as concrete and obvious as the laughter or its absence is to the comedian and the hit or miss is to the Little League batter. If your goal as a teacher is to “engage” learners, then you must look for the most obvious signs of attention or inattention; if your goal as a student is to figure out the conditions under which plants best grow, then you must look closely at the results of a controlled experiment. We need to know the tangible consequences of our attempts, in the most concrete detail possible – goal-related facts from which we can learn. That’s why samples or models of work are so useful to both students and teachers – more so than the (somewhat abstract) rubrics by themselves.
Even as little pre-school children, we learn from such results and models without adult intervention. That’s how we learned to walk; that’s how we finally learned to hold a spoon effectively; that’s how we learned that certain words magically yield food, drink, or a change of clothes from big people. Thus, the best feedback is so tangible that anyone who has a goal can learn from it. Video games are the purest example of such tangible feedback systems: for every action we take there is a tangible effect. We use that information to either stay on the same course or adjust course. The more information “fed back” to us, the more we can self-regulate, and self-adjust as needed. No “teaching” and no “advice” – just feedback! That’s what the best concrete feedback does: it permits optimal self-regulation in a system with clear goals.
Far too much educational feedback is opaque, alas, as revealed in a true story told to me years ago by a young teacher. A student came up to her at year’s end and said, “Miss Jones, you kept writing this same word on my English papers all year, and I still don’t know what it means.” “What’s the word?” she asked. “Vag-oo,” he said. (The word was ‘vague’!) Sad to say, too much teacher feedback is ‘vagoo’ – think of the English teacher code written in margins (AWK, Sent. Frag, etc.). Rarely does the student get information about how they are currently doing, in light of a future goal, that is as tangible as what they get in video games. The notable exceptions: art, music, athletics, mock trial – in short, areas outside of core academics!
This transparency of feedback becomes notably paradoxical under a key circumstance: when the information is available to be obtained, but the performers do not obtain it – either because they don’t look for it or because they are too busy performing to see it. We have all seen how new teachers are sometimes so busy concentrating on “teaching” that they fail to notice that few students are attending or learning. Similarly in sports: the tennis player or batter takes their ‘eye off the ball’ (i.e. pulling their head out instead of keeping it still as they swing), yet few novice players ‘see’ that they are not really ‘seeing the ball.’ They often protest, in fact, when the feedback is given. The same thing happens with some domineering students in class discussion: they are so busy “discussing” that they fail to see their unhelpful effects on the discussion and on others who give up trying to participate.
That’s why it is vital, at even the highest levels of performance, to get feedback from coaches (or other able observers) and/or video to help us perceive what we may not perceive as we perform; and by extension, to learn to look for what is difficult but vital to perceive. That’s why I would recommend that all teachers video their own classes at least once per month and do some walk-throughs and learning walks, to more fully appreciate how we sometimes have blind spots about what is and isn’t happening as we teach.
It was a transformative experience for me when I did it 40 years ago (using a big Sony reel-to-reel deck before there were VHS cassettes!). What was clear to me as the teacher of the lesson in real-time seemed downright confusing on tape – visible also in some quizzical looks of my students that I had missed in the moment. And, in terms of improving discussion or Socratic Seminar, video can be transformative: when students see snippets of tape of their prior discussions they are fascinated to study it and surprised by how much gets missed in the fast flow of conversation. (Coaches of all sports have done this for decades; why is it still so rare in classrooms?)
3. Quality learning feedback provides actionable information
Thus, feedback is actionable information – data or facts that you can use to improve on your own since you likely missed something in the heat of the moment. No praise, no blame, no value judgment – helpful facts. I hear when they laugh and when they don’t; I adjust my jokes accordingly. I see now that 8 students are off task as I teach, and take action immediately. I see my classmates roll their eyes as I speak – clearly signaling that they are unhappy with something I said or the way I said it. Feedback is that concrete, specific, useful. That is not to say that I know what the feedback means, i.e. why the effect happened or what I should do next (as in the eye-rolling), merely that the feedback is clear and concrete. (That’s why great performers aggressively look for and go after the meaning of feedback.)
Thus, “good job!” and “You did that wrong” and “B+” on a paper are not feedback at all. In no case do I know what you saw or what exactly I did or didn’t do to warrant the comments. The responses contain no actionable information. To see this, imagine the question learners would ask themselves in response: Huh? What specifically should I do more of and less of next time, based on this information? No idea. The students don’t know what was “good” or “wrong” about what they did.
Some readers may object that feedback is not so black and white, i.e. that we may disagree about what is there to be seen and/or that feedback carries with it a value judgment about good and bad performance. But the language in question is usually not about feedback (what happened) but about an (arguable) inference about what happened. Arguments are rarely about the results, in other words; they are typically about what the results mean.
For example, a supervisor of a teacher may make an unfortunate but common mistake of stating that “many students were bored” in class. No, that’s a judgment, not a goal-based specific fact. It would have been far more useful and less debated had the supervisor said something like: “I counted inattentive behaviors lasting more than 5-10 seconds in 12 of the 25 students once the lecture was well underway. The behaviors included 2 students texting under desks, 2 passing notes, and 7-8 students at any one time making eye contact with other students, etc. However, when you moved to the small-group exercise using the ‘mystery text’, I saw such off-task behavior in only 1 student.” These are goal-related factual statements, not judgments. Again, it doesn’t mean that the supervisor is correct in the facts and it certainly doesn’t mean they are anti-lecture; it only means that the supervisor tries to stick to facts and not jump to glib inferences about what is working and what isn’t.
Such care in offering neutral goal-related facts is the whole point of the clinical supervision of teaching and of good coaching more generally. Effective supervisors and coaches work hard to carefully observe and comment on what was perceived, in reference to shared goals. That’s why I always ask when visiting a class: Given your goals for the class, what would you like me to look for and perhaps count or code?
In my years of experience as a teacher of teachers, as an athletic coach, and as a teacher of adolescents, I have always found such “pure” feedback to be accepted, not debated, and to be welcomed (or at least not resisted). Performers are on the whole grateful for a 2nd pair of eyes and ears, given our blind spots as we perform. But the legacy of so much heavy-handed inferencing and gratuitous advice by past coaches/teachers/supervisors has made many performers – including teachers – naturally wary or defensive.
What effective coaches also know is that actionable feedback about what went right is as important as feedback about what didn’t work in complex performance situations. (That’s why the phrase ‘corrective information’ is not strictly-speaking accurate in describing all feedback.) Performers need feedback about what they did correctly because they don’t always know what they did, particularly as novices. It is not uncommon in coaching, when the coach describes what a performer successfully did (e.g. “THAT time you kept your head still and followed all the way through!”), to hear the performer respond quizzically, “I did??”
Similarly, the writer or teacher is sometimes surprised to learn that what she thought was unimportant in her presentation was key to audience understanding. Comedians, teachers, and artists don’t often accurately predict which aspects of their work will achieve the best results, but they learn from the ones that do. That’s why feedback can be called a reinforcement system: I learn by learning to do more of (and understand) what works and less of what doesn’t.
4. Quality learning feedback is user-friendly
Feedback is thus not of much value if the user cannot understand it or is overwhelmed by it, even if it is accurate in the eyes of experts or bystanders. Highly-technical feedback to a novice will seem odd, confusing, hard to decipher: describing the swing in baseball in terms of torque and other physics concepts to a 6-year-old will not likely yield a better hitter. On the other hand, generic ‘vagoo’ feedback is a contradiction in terms: I need to perceive the actionable, tangible details of what I did.
When I have watched expert coaches, they uniformly avoid either error of too much overly-technical information or of unspecific observations: they tell the performers one or two important things they noticed that, if they can be changed, will likely yield immediate and noticeable improvement (“I noticed you were moving this way…”), and they don’t offer advice until they are convinced the performer sees what they saw (or at least grasps the importance of what they saw).
5. Quality learning feedback is timely
The sooner I get feedback, then, the better (in most cases). I don’t want to wait hours or days to find out which jokes they laughed at or didn’t, whether my students were attentive, or which part of my paper works and doesn’t. My caveat – “in most cases” – is meant to cover situations such as playing a piano piece in recital: I don’t want either my teacher or the audience to be barking out feedback as I perform. That’s why it is more precise to say that good feedback is “timely” rather than “immediate.”
A great problem in education, however, is the opposite. Vital feedback on key performances often comes days, weeks, or even months after the performance – think of writing and handing in papers and getting back results on standardized tests. If we truly realize how vital feedback is, we should be working overtime as educators to figure out ways to ensure that students get more timely feedback and opportunities to use it in class while the attempt and effects are still fresh in their minds. (Before you say that this is impossible, keep in mind: as we have said, feedback does not need to come only from the students’ teachers, or even from people at all. This is a high-priority and solvable problem to address locally.)
6. Quality learning feedback is ongoing
It follows that the more I can get such timely feedback, in real-time, before it is too late, the better my ultimate performance will be – especially on complex performances that can never be mastered in a short amount of time or in a few attempts. That’s why we talk about powerful feedback loops in a sound learning system.
All adjustment en route depends upon feedback and multiple opportunities to use it. This is really what makes any assessment truly “formative” in education. The feedback is “formative” not merely because it precedes ‘summative’ assessments but because the performer has many opportunities – if results are less than optimal – to adjust the performance to better achieve the goal. Many so-called formative assessments do not build in such feedback use.
If we truly understood how feedback works, we would make the student’s use of feedback part of the assessment! It is telling that in the adult world I am often judged as a performer on my ability to adjust in light of feedback since no one can be perfect.
This is how all highly-successful computer games work, of course. If you play Angry Birds, Halo, Guitar Hero, or Tetris you know that the key to the substantial improvement possible is that the feedback is not only timely but ongoing. When you fail, you can immediately start over – even from just where you left off – to give you another opportunity to get, receive, and learn from the feedback before all is lost to forgetfulness. (Note, then, this additional aspect of user-friendly feedback: it suits our need, pace, and ability to process information; games are built to reflect and adapt to our changing ability to assimilate information.)
Do you see a vital but counter-intuitive implication from the power of many ‘loops’ of feedback? We can teach less, provide more feedback, and cause greater learning than if we just teach. Educational research supports this view even if as ‘teachers’ we flinch instinctively at the idea. That is why the typical lecture-driven course is so ineffective: the more we keep talking, the less we know what is being grasped and attended to. That is why the work of Eric Mazur at Harvard – in which he hardly lectures at all to his 200 students but instead gives them problems to solve and discuss, and then shows their results on screen before and after discussion using LRS ‘clickers’ – is so noteworthy. His students get “less” lecturing but outperform their peers not only on typical tests of physics but especially on tests of misconceptions in physics. [Mazur (1996)]
7. Quality learning feedback is consistent
For feedback to be useful it has to be consistent. Clearly, I can only monitor and adjust successfully if the information fed back to me is stable, unvarying in its accuracy, and trustworthy. In education this has a clear consequence: teachers have to be on the same page about what is quality work and what to say when the work is and is not up to standard. That can only come from teachers constantly looking at student work together, becoming more consistent (i.e. achieving inter-rater reliability) over time, and formalizing their judgments in highly-descriptive rubrics supported by anchor products and performances. By extension, if we want student-to-student feedback to be more helpful, students have to be trained the same way we train teachers to be consistent, using the same exemplars and rubrics.
References
Bransford et al (2001) How People Learn. National Academy Press.
Clarke, Shirley (2001) Unlocking Formative Assessment: Practical Strategies for Enhancing Pupils’ Learning in the Primary Classroom. Hodder Murray.
Dweck, Carol (2007) Mindset: The New Psychology of Success. Ballantine.
Gilbert, Thomas (1978) Human Competence. McGraw Hill.
Harvard Business School Press (2006) Giving Feedback.
Hattie, John (2008) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Routledge.
James, William (1899/1958) Talks to Teachers. W. W. Norton.
Marzano, R., Pickering, D. & Pollock, J. (2001) Classroom Instruction That Works: Research-Based Strategies for Increasing Student Achievement. ASCD.
Mazur, Eric (1996) Peer Instruction: A User’s Manual. Benjamin Cummings.
Nater, Sven & Gallimore, R. (2005) You Haven’t Taught Until They Have Learned: John Wooden’s Teaching Principles and Practices. Fitness Info Tech.
Pollock, Jane (2012) Feedback: The Hinge That Joins Teaching and Learning. Corwin Press.
Wiggins, Grant (2010) “Time to Stop Bashing the Tests,” Educational Leadership, March 2010, Volume 67, Number 6.
Wiggins, Grant (1998) Educative Assessment. Jossey-Bass.
This article was excerpted from a post that first appeared on Grant’s personal blog.