Making EFL Matter Pt. 2: Data and Feedback

Image of a filing cabinet

 

In the last few years, I’ve found myself increasingly reaching for books that are not on my SLA or Applied Linguistics shelves. Instead, books on general teaching methodology have engaged me more. It started a few years ago when I came across John Hattie’s Visible Learning for Teachers, a book in which various approaches in education are ranked by their proven effectiveness in research. This led me to further explore formative assessment, one of the approaches that Hattie identifies as particularly effective, citing Yeh (2011). I was impressed and intrigued and began searching for more–and man is there a lot out there! Dylan Wiliam’s Embedded Formative Assessment is great (and led to more blogging); Laura Greenstein’s What Teachers Really Need to Know about Formative Assessment is very practical and comprehensive; Visible Learning and the Science of How We Learn by Hattie and Yates has a solid section on feedback; and Leaders of Their Own Learning by Ron Berger et al., the inspiration for this series of blog posts, places formative assessment at the heart of curricular organization. There is, as far as I know, nothing like this in TESOL/SLA. I would like to emphasize, however, that I’m not suggesting we throw out any babies or bathwater. I see the content of these books as completely additive to good EFL pedagogy. So let’s go back to that for a moment.

One of my favorite lists in TESOL comes from a 2006 article by Kumaravadivelu in which he puts forth his list of macro-strategies, basic principles that should guide the micro-strategies of day-to-day language classroom teaching, as well as curriculum and syllabus design. This list never became the Pinterest-worthy ten commandments that I always thought it deserved to be. Aside from me and this teacher in Iran, it didn’t seem to catch on so much, though I’m sure you’ll agree it is a good, general set of directives. Hard to disagree with anything, right?

  1. Maximize learning opportunities
  2. Facilitate negotiated interaction
  3. Minimize perceptual mismatches
  4. Activate intuitive heuristics
  5. Foster language awareness
  6. Contextualize linguistic input
  7. Integrate language skills
  8. Promote learner autonomy
  9. Raise cultural awareness
  10. Ensure social relevance

But I can tell you now what is missing from the list (if we really want to take it up to 11): Provide adequate formative feedback. One of the great failings of communicative language teaching (CLT) is that it has been so concerned with just getting students talking that it has mostly ignored one of the fundamental aspects of human learning: it is future-oriented. People want to know how to improve their work so that they can do better next time (Hattie and Yates, 2014). “For many students, classrooms are akin to video games without the feedback, without knowing what success looks like, or knowing when you attain success in key tasks” (pg. 67). Feedback helps when it shows students what success looks like, when they can clearly see the gap between where they are now and where they need to be, and when it provides actionable suggestions about what is being done right now and what learners should do or change next to improve. It should also be timely, and learners should be given ample time to incorporate it and try again (Wiliam, 2011; Greenstein, 2010).

One of the most conceptually difficult things to get used to in the world of formative feedback is the notion of data. We language teachers are not used to thinking of students’ utterances and performances as data, yet they are–data that can help us and them learn and improve. I mean, scores on certain norm-referenced tests can be seen as data, final test scores can be seen as data, and attendance can be seen as data, but we tend, I think, to look at what students do in our classes through a more communicative, qualitative, meaning-focused set of lenses. We may be comfortable giving immediate formative feedback on pronunciation, but for almost anything else, we hesitate and generalize with our feedback. Ms. Greenstein, focusing on occasionally enigmatic 21st century skills, offers this:

“Formative assessment requires a systematic and planned approach that illuminates learning and displays what students know, understand, and do. It is used by both teachers and students to inform learning. Evidence is gathered through a variety of strategies throughout the instructional process, and teaching should be responsive to that evidence. There are numerous strategies for making use of evidence before, during, and after instruction” (Greenstein, 2012, pg. 45).

Ms. Greenstein and others are teachers who look for data–evidence–of student learning, and who look for ways of involving learners in the process of seeing and acting on that data. Their point is that we have a lot of data (and we can potentially collect more) and that we should be using it with students as part of a system of formative feedback. Berger, Rugen, and Woodfin (2014) put it thus:

“The most powerful determinants of student growth are the mindsets and learning strategies that students themselves bring to their work–how much they care about working hard and learning, how convinced they are that hard work leads to growth, and how capably they have built strategies to focus, organize, remember, and navigate challenges. When students themselves identify, analyze, and use data from their learning, they become active agents in their own growth” (Berger, Rugen & Woodfin, 2014, pg. 97).

They suggest, therefore, that students be trained to collect, analyze, and share their own learning data. This sounds rather radical, but it is only the logical extension of the rationale for having students track performance on regular tests, making leaderboards, or even giving students report cards. It just does so more comprehensively. Of course, this will require training, scaffolding, and time in class. The reasons for doing it are worth that time and effort, they stress. Data has an authoritative power that a teacher’s “good job” or “try harder” simply lacks. It is honest, unemotional, and specific, and therefore can have powerful effects. The authors describe transformations in student mindsets, statistics literacy, and grading transparency, all of which help make students more responsible and accountable for their own learning. Among the practices they suggest that could be deployed in an EFL classroom are tracking results on weekly quizzes, standardized tests, or school exams; using error analysis forms for writing or speaking assignments; and using goal-setting worksheets for regular planning and reflection.

You can see the use of data for formative assessment in action in a 6th grade classroom here.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 3 looks at the challenges and benefits of academic discussions.

 

References

Yeh, S. S. (2011). The cost-effectiveness of 22 approaches for raising student achievement. Charlotte, North Carolina: Information Age Publishing.

Making EFL Matter Pt. 1: Goals

As I write this, high school teachers across Japan are busy writing or polishing up “Can do” targets for the students at their school, in compliance with requests from MEXT (the Ministry of Education, Culture, Sports, Science and Technology). The purpose is to get schools and teachers to set language performance goals for the four skills, goals they can use in creating curriculum and in evaluating progress. It is, from what I have heard, not going smoothly. There are many reasons for this, not least of which is the novelty of the task. The challenge is to take the current practice of basically teaching and explaining textbook content and expand on it by setting more specific targets for reading, listening, writing, and speaking. Most schools did not have, nor attempt to teach, specific writing, listening, and speaking skills. In fact, I think it is fair to say that most schools and teachers viewed English more as a body of knowledge to be memorized than as sets of sub-skills or competencies that can be taught and tested. Even reading, by far the skill area that receives the most attention in senior high, is rarely broken into sub-skills or strategies that are taught/developed and tested. So I see MEXT’s request as an attempt to break schools and teachers out of their present mindset: to get them to approach language teaching as a skill-developing undertaking, and to focus on all four skills in a more balanced manner.

Illustration of an opinion wheel featuring sections from “strongly agree” to “strongly disagree”

As you can imagine, responses to the “Can do” list requirement have been varied. Of course schools are complying, but interpretations of the concept (a four-skill rubric spanning three years, with can-do statements for each skill in each year) don’t seem to be uniform. Some see the can-do statements as goals–impossible goals, for at least some of the boxes of the rubric. So one problem is that many of the boxes in the rubric (especially for listening, speaking, and writing) will be filled in ad hoc, never to be really dealt with by the program. Another problem is that can-do statements as they are used in the CEFR are not goals or targets, but merely descriptors. That is, they are meant to make general statements about proficiency. So even when schools do decide on their can-do statements for the various boxes of the rubric, that will not be enough to make a difference. Can-do statements can help inform the setting of more specific targets at an institution, but they should not function as goals or targets themselves. A lot of people don’t get that, apparently.

That is to say, an important step is missing. In order to design curricula, very specific sets of sub-skills or competencies must be explicitly drawn up. These are informed by the guiding goals (the can-do statements), but they are detailed and linked to classroom activities that can develop them. Let’s say we are dealing with listening. Can-do statements for a second-year group might include something like this: Can understand short utterances by proficient speakers on familiar topics. This statement then needs to be broken into more detailed competencies–linked speech, ellipsis, common formulaic expressions, different accents, top-down strategies, and so on–that then need to be taught and tested regularly in classes. This is something that is not happening now in most public schools.

At the crux of the problem is the fact that many (actually, I think we can safely say most here) teachers do not have a clear idea of the exact competencies they are aiming for in each skill area. Instead, most teachers tend to think of a few key skills that they develop with certain textbooks or activities. The MEXT assignment to write a can-do rubric could nudge schools and teachers in a certain direction, but it is a rather hopeful nudge at a rather complex problem. It could potentially be a game changer: if every teacher had to sit down together and come up with can-do statements and specific competencies to develop in each skill area, the impact on English education could be huge. But that is unlikely to happen. The process of going from here to there is rather complicated, and long hours of collaboration and re-conceptualizing are necessary. Instead, schools are mostly assigning one unfortunate soul from the English department to do the whole thing him/herself. It is probably unrealistic to expect much change.

Actually, even if schools were to make good can-do rubrics and set specific target competencies, the real battle would only be beginning. Making those targets clear to students and creating a system in which learners are moving toward mastery is a tremendous challenge. According to Leaders of Their Own Learning, programs need to set knowledge, skills, and reasoning targets for students. These targets will necessarily come in clusters of micro-skills or micro-competencies. These are then reworded into can-do statements given to students at the beginning of each lesson, so that students know what they will be learning and how they are expected to perform. That’s a lot of writing, and it will require a lot of agreement to produce. And that can only come about after much discussion, conceptual shifting, and decision-making on the part of the teachers. But that’s not all. For these targets to work, everyone–teachers, students, parents, and administrators–must be on board. It is hard to imagine such vision and collegiality at public high schools in Japan.

Student-engaged assessment process diagram

From Leaders of Their Own Learning

The diagram above shows the process that Ron Berger and the other authors of Leaders of Their Own Learning recommend to improve student performance. Goal-setting is only one part of this, but it is an essential part. Without goals, none of the other activities are possible. In subsequent posts, I’ll be looking at some of the other parts, and some of the other ways that teachers and administrators can improve education.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 2 looks at the use of data and feedback.

Part 3 looks at the challenges and benefits of academic discussions.