What can Data do for EFL?


In the US, an interesting set of research conditions has emerged as an indirect result of standards-based testing and the establishment of charter schools. Charter schools are themselves often experiments, established in part to be “…R&D engines for traditional public schools” (Dobbie & Fryer, 2012). As such, they often eschew many elements of traditional education practice and instead attempt to put into practice the results of the last several decades of research into educational effectiveness. For many charter schools, this research-informed pedagogy assumes a unifying or rallying role, and the ideas are woven into their mission statements and school cultures as they try to take underprivileged kids and put them on the road to college through grit training, artistic expression, or higher expectations. Even a brief visit to the websites of charter systems such as Uncommon Schools, MATCH, or KIPP shows what they are intending to do and why. Some of these charter schools have become extremely successful, in terms of academic achievement and in terms of popularity, and both kinds of success have led to new research opportunities. You see, the best charter schools now have to resort to lotteries to choose students, and those lotteries create randomly assigned groups: those who got in and received an experimental treatment in education, and those who didn’t and ended up going to more traditional schools. That gives researchers a way to compare programs by looking at what happens to the students in each group. And some of the results may surprise you.

Let’s play one of our favorite games again: Guess the Effect Size!! It’s simple. Just look at the list of interventions below and decide whether each intervention has a large (significant, important, you-should-be-doing-this) impact, or a small (minimal, puny, low-priority) impact. Ready? Let’s go!

  1. Make small classes
  2. Spend more money per student
  3. Make sure all teachers are certified
  4. Deploy as many teachers with advanced degrees as possible
  5. Have teachers give frequent feedback
  6. Make use of data to guide instruction
  7. Create a system of high-dosage tutoring
  8. Increase the amount of time for instruction
  9. Have high expectations for students

Ready to hear the answers? Well, according to Dobbie & Fryer (2012), the first four on the list are not correlated with school effectiveness, while the last five explain a whopping 45% of the variation in school effectiveness. Looking at the list, this is not surprising, especially if you are aware of the power of formative feedback.
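For readers who want to see what an “effect size” actually is, here is a minimal sketch computing Cohen’s d, one standard effect size measure: the difference in group means divided by the pooled standard deviation. All the scores below are invented for illustration and have nothing to do with Dobbie & Fryer’s data.

```python
# Cohen's d on invented test scores for two lottery groups.
# Conventionally, d around 0.2 is "small" and 0.8 or above is "large".
import statistics

def cohens_d(treatment, control):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

lottery_winners = [72, 75, 70, 78, 74]   # invented scores, treatment group
lottery_losers  = [68, 71, 66, 73, 70]   # invented scores, control group
d = cohens_d(lottery_winners, lottery_losers)
# d is about 1.46 here, which would count as a large effect
```

The lottery setup described above matters because random assignment lets a difference like this be attributed to the schooling rather than to which families applied.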

Some people might be a little skeptical still. Fine. Go and look for studies that prove Dobbie and Fryer wrong. You might find some. Then look at where the research was done. Is the setting like yours? Just going through this process means we are putting data to work. And that is much, much better than just going with our own instincts, which are of course based on our own experiences. I work teaching English in Japan, and I know that is a far cry from the hard-knocks neighborhoods where Dobbie and Fryer looked into the effects of interventions in Match schools. But I think there are enough similarities to warrant giving credence to these results and even giving them a try at schools in Tokyo. I have several reasons. First, the research on formative assessment, high expectations, classroom time, and pinpointed direct instruction is extensive and very robust. Nothing in their list is surprising. Second, in Japan, English is often as foreign from the daily lives of most students as physics or math are from the lives of many American teens. The motivation for learning it is likewise unlikely to be very strong at the beginning. Many of the students in the Match system are less than confident in their ability with many subjects, and less than confident about aiming at college, a world that is often quite foreign to their lives. Many English learners in Japan similarly see English as foreign and unrelated to their lives, and the notion that they can become proficient at it and make it a part of their future social and/or professional lives requires a great leap of faith.

But through the Match program, students do gain in confidence, they do gain in ability, and they do get prepared for college. Given the demographic, the success of Match and the other “No Excuses” systems mentioned above is stunning. It also seems to be long-lasting. Davis & Heller (2015) found that students who attended “No Excuses” schools were 10.0 percentage points more likely to attend college and 9.5 percentage points more likely to enroll for at least four semesters. Clearly the kids are getting more than fleeting bumps in test scores. And clearly the approach of these schools, putting proven interventions to work, is having a positive effect, although not everyone seems to be happy.

And it’s not just that they are making use of research results. These schools are putting data to use in a variety of ways. Paul Bambrick-Santoyo of Uncommon Schools has published a book, Driven by Data, that outlines their approach very nicely. In it we can find this:

Data-driven instruction is the philosophy that schools should constantly focus on one simple question: are our students learning? Using data-based methods, these schools break from the traditional emphasis on what teachers ostensibly taught in favor of a clear-eyed, fact-based focus on what students actually learned (pg. xxv).

Driven By Data book cover

They do this by adhering to four basic principles. Schools must create serious interim assessments that provide meaningful data. That data must then be carefully analyzed to produce actionable findings. Those findings must be tied to classroom practices that build on strengths and eliminate shortcomings. And finally, all of this must occur in an environment where the culture of data-driven instruction is valued, practiced, and able to thrive. Mr. Bambrick-Santoyo goes through a list of mistakes that most schools make, challenges that are important to meet if data is to be used to “…make student learning the ultimate test of teaching.” The list reads like a checklist of standard operating procedures at almost every EFL program I have ever worked in. Inferior, infrequent, or secretive assessments? Check, check, check. Curriculum-assessment disconnect? Almost always. Separation of teaching and analysis? Usually, no analysis whatsoever. Ineffective follow-up? Har har har. I don’t believe I have ever experienced or even heard of any kind of follow-up at the program level. Well, you get the point. What is happening in EFL programs in Japan now is very far removed from a system where data is put to work to maximize learning.

But let’s not stop at the program level. Doug Lemov has been building up a fine collection of techniques that teachers can use to improve learning outcomes. He is now up to 62 techniques that “put students on the path to college,” after starting with 49 in the earlier edition. And how does he decide on these techniques? Through a combination of videoing teachers and tracking the performance of their classes. Simple, yet revolutionary. The group I belonged to until this past April was trying to do something similar with EFL at public high schools in Japan, but the lack of standardized test-taking makes it difficult to compare outcomes. But there is no doubt in my mind that this is exactly the right direction in which we should be going. Find out what works, tease out exactly what it is that is leading to improvement (identify the micro-skills), and then train people through micro-teaching to do these things and do them well and do them better still. Teaching is art, Mr. Lemov says in the introduction, but “…great art relies on the mastery and application of foundational skills” (pg. 1). Mr. Lemov has done a great service to us by videoing and analyzing thousands of hours of classes, and then triangulating that with test results. And then further trialing and tweaking those results. If you don’t have a copy of the book, I encourage you to get one. It’s just a shame that it isn’t all about language teaching and learning.

Interest in using data in EFL/ESL is also growing. A recent issue of Language Testing focused on diagnostic assessment. This is something that has grown out of the same standards-based testing that allowed the charter schools in the US to thrive. You can download a complimentary article (“Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles”, by Luke Harding, J. Charles Alderson, and Tineke Brunfaut). You can also listen to a podcast interview with Glen Fulcher and Eunice Eunhee Jang, one of the contributors to the special issue. It seems likely that this is an area of EFL that will continue to grow in the future.

Lessons from Training


I came across another interesting article on the BBC website today. It was temptingly titled Can you win at anything if you practise hard enough? It told the story of a young table-tennis coach from the UK named Ben Larcombe, who attempted to take his lumpy and “unsporty” friend and turn him, over the course of a year, into a top competitive player. As part of the process of writing a book on the topic of training and improvement, the pair documented Sam Priestley’s transformation from a sort-of player into an impressively good one, at least to my eyes. You can watch the whole thing unfold before your eyes in this video. The article includes, however, a rather bubble-bursting comment from an English table tennis coach and expert named Rory Scott:

“He is nowhere near the standard of the top under-11 player in the UK.”

So the BBC writer goes on to ask this question: “Why did the project fail?” What? Just because Sam didn’t meet his goal of getting into the top 250 table tennis players in the UK after one year of practicing every day doesn’t mean the project was a failure at all. It shows the potential for people to learn when they are persistent and work hard regularly with good strategies and feedback. The rest of the article goes on to explore exactly that potential, largely from a cultural viewpoint of differing attitudes to natural ability and the need to persist versus instantaneous gratification.

I’ve been seeing similar sorts of studies a lot lately. The last few years have seen a plethora of books and talks on the same topic: how far does practice take you? You can read about Josh Kaufman’s attempt to learn something in 20 hours, or watch him tell about it at TED. Or you can read about Joshua Foer attempting to get better at memorizing things in his book, Moonwalking With Einstein. Or if you are really serious about the source of greatness, whether it comes from genes or training, try The Sports Gene by David Epstein. And don’t forget Doug Lemov’s Practice Perfect, a book with a particular focus on how teachers learn and improve through practice.

Practice, I’m convinced, is important. But so are attitudes to practice, and so are the kind of practice you do and the kind of feedback you get. If we can get these right, our learners will learn better and faster, which will lead to other benefits. Practice is a touchy issue in language teaching, a field still trying to come to terms with the “drill and kill” of the audio-lingual approach. But intense, focused practice with constructive feedback and repeated opportunities to incorporate that feedback and improve is something very important to the learning process. It takes a lot of time, to be sure, maybe even 10,000 hours (though see Mr. Epstein’s book for a good discussion on amounts of time), but impressive results are possible. That is something I want my learners to understand and buy into.

Mr. Larcombe has a website with more detailed info about the process of teaching table tennis. He is also currently preparing a book.

Making EFL Matter Pt. 6: Prepared to Learn


The present series of posts is looking at how EFL courses and classes in Japan might be improved by considering some of the techniques and activities emerging in ESL and L1 mainstream education settings. Previous posts have looked at goals, data and feedback, discussion, portfolios, and prolepsis and debate. The basic idea is to structure courses in accordance with the principles of formative assessment, so students know where they are going and how to get there, and then train them to think formatively and interact with their classmates formatively in English. All of the ideas presented in this series came not from TESOL methodology books, but rather from more general education methodology books I read with my EFL teacher lens. I realise that this might put some EFL teachers off. I don’t think it should, since many of the widely-accepted theories and practices in TESOL first appeared in mainstream classes (journals, extensive reading, portfolio assessment, etc.); also, the last few years have seen an explosion in data-informed pedagogy, and we would be wise not to ignore it. In this post, however, I’d like to go back to TESOL research for a moment and look at how some of it might be problematic. Actually, “problematic” may be too strong a word. I’m not pointing out flaws in research methodology, but I would like to suggest that there may be a danger in drawing conclusions for pedagogy from experiments that simply observe what students tend to do or not do without their awareness raised and without training.

I’ve been reading Peer Interaction and Second Language Learning by Jenefer Philp, Rebecca Adams, and Noriko Iwashita. It is a wonderful book, very nicely conceived and organized, and I plan to write a review on this blog soon. But today I’d just like to make a point connected with the notion of getting learners more engaged in formative assessment in EFL classes. As I read, it struck me that many of the studies cited simply looked at what learners do as they go about completing tasks (very often picture difference tasks, for some reason). That is, the studies set learners up with a task, watch what they do as they interact, and count how many language-related episodes (LREs) of noticing and negotiation occur, or how often learners manage to produce correct target structures. Many of the studies just seem to have set learners to a task and then videoed them. That would be fine if we were examining chimpanzees in the wild or ants on a hill; but I strongly believe it is our job to actively improve the quality of the interactions between learners and to push their learning, not just to observe what they do. None of the studies in the book seem to measure organized, systematic training-based interventions for teaching how to interact and respond to peers. In one study that sort of did, Kim and McDonough (2011), teachers just showed a video of students modelling certain interaction and engagement strategies as part of a two-week study. But even with that little bit of formative assessment/training, we find better results, better learning. The authors of the book are cool-headed researchers, and they organize and report the findings of the various studies duly. But my jaw dropped open a number of times, if only in suspicion of what seemed to be (not) happening; my formative assessment instincts were stunned.
How can we expect learners to do something if they are not explicitly instructed and trained to do so? And why would we really want to see what they do if they are not trained to do so? Just a little modelling is not bad, but there is so much more that can be done. Right Mr. Wiliam? Right Ms. Greenstein?

Philp et al. acknowledge this in places. In the section on peer dynamics, they stress the importance of developing both cognitive and social skills. “Neither can be taken for granted,” they state clearly (pg. 100). And just after that, they express the need for more training and more research on how to scaffold/train learners to interact with each other for maximum learning:

“Part of the effectiveness of peer interaction…relates to how well learners listen to and engage with one another…In task-based language teaching research, a primary agenda has been the creation of effective tasks that promote maximum opportunities for L2 learning, but an important area for research, largely ignored, is the training of interpersonal skills essential to make these tasks work as intended” (pg. 101).

But not once in their book do they mention formative assessment or rubrics. Without an understanding of the rationale for providing each other with feedback, without models, without rubrics, without being shown how to give feedback or provide scaffolding to peers, how can we expect learners to do so, or to do so in a way that drives learning? Many studies discussed in the book show that learners do not really trust peer feedback and do not feel confident giving it. Sure, if it’s just kids with nothing to back themselves up, that’s natural. But if we have a good system of formative feedback in place (clear goals, rubrics, checklists, etc.), everyone knows what to do and how to get better. Everyone has an understanding of the targets. The targets are detailed and actionable. And it becomes much easier to speak up and help someone improve.

Teachers need to make goals clear and provide rubrics detailing micro-skills or competencies that learners need to demonstrate. They also need to train learners in how to give and receive feedback. That is a fundamental objective of a learning community. The study I want to see will track learners as they enter and progress in such a community.


Kim, Y., & McDonough, K. (2011). Using pretask modeling to encourage collaborative learning opportunities. Language Teaching Research, 15(2), 1-17.


Making EFL Matter Pt. 5: Prolepsis, Debate, and Benny Lewis

As a young man, I was part of a legion of English teachers working in Japan. A large number of us “teachers” working day in and day out at language schools and colleges were actually travelers trying to save money for our next trek through Nepal, or to live on a beach on Boracay or Koh Samui (very different in 1986) for as many months as possible before we had to work again. At least some of these people, in order to be able to stay in Japan and teach/work, pretended to be in the country for the purpose of studying something–flower arrangement, karate, or Japanese language, for example. One guy, ostensibly studying Japanese, dutifully went to the immigration office each year to renew his visa. And each time, he struggled greatly with the rudimentary questions the officer asked him in Japanese. At the end of the conversation, the immigration officer would kindly offer him encouragement because “Japanese was a hard language” to learn.

That same sentiment–that you are just studying the language and can’t really use it yet–is still surprisingly common in many institutional programs for learners of many languages. I have often heard college students say that they want to go to the US “after my English is good enough.” The opposite of this “not yet” concept is prolepsis, “the representation or assumption of a future act as if presently existing or accomplished” (from Merriam-Webster). It is a lovely little term I came across in Walqui and van Lier (2010). They recommend treating students proleptically, “as if they already possess the abilities you are seeking to develop” (pg. 84). In other words, throw them in at the deep end, and both support and expect their success. High school and college in Japan are perfect places for putting this approach into practice. Why? Because learners have already had somewhere between 4 and 10 years of English exposure and learning. It’s time to stop pretending that they can’t use it. Right Benny?

People like Benny Lewis are not usually taken seriously in the TESOL world, but they should be. Watch the video and see how many things he gets right. Polyglots learn languages successfully, he says at one point, because they are motivated to “use it with people” and they go about doing so. That is some good sociocultural theory there. He also dismisses five of the barriers that people so often accept to explain their own lack of success with language learning, and addresses the growth mindset and the time and resource management that he and his friends have found a way to make work for themselves. But what I find most amazing about Mr. Lewis and others like him is that they are living examples of acting proleptically with language learning. They learn it, use it, love it, and repeat. They don’t stop to worry about whether they are “ready.” They don’t let things like having few resources around, or no interlocutors nearby, interfere. They challenge themselves to learn what they can and then actively seek out opportunities to use it, monitoring their progress by continually testing it out. I admire their passion. I borrow strategies and techniques from them to pass on to my students. If we are not helping our students make use of Skype or Memrise or Quizlet or any of the many other tools available, we are doing a great disservice to our young charges.

But not only should we be introducing websites, we should be expecting our learners to use them and to push their learning. You can do it. No excuses. Of course you can handle basic conversations in the language. I expect nothing less than that. And let’s see what you can really do when you push yourself. I expect success. I assume it and design my activities around it. Prolepsis. We sometimes hear the word rigor used to describe education. We can also talk about holding higher expectations for our learners. Without a curriculum designed with the idea of prolepsis, however, it is likely empty talk. It sounds good, but is not actionable. Van Lier and Walqui list these three directives if we are serious about really making our curriculum, well, serious:

  • Engage learners in tasks that provide high challenge and high support;
  • engage students (and teacher) in the development of their own expertise;
  • make criteria for quality work clear for all.

We can see immediately that some of the things Mr. Lewis suggests get learners to do exactly these things. I’ve talked before about rubrics and portfolios and making the criteria for success clear in other blog posts, but today I’d like to finish up by talking about an activity that does all of these things, and that gets students to perform proleptically: debate.

Now debate has a bad reputation in Japan. Many teachers think it is too difficult for students. Some teachers think it focuses too much on competition. These points may have some validity, but they should not prevent you from doing debate. We do debate, as JFK said we should go to the moon, because it is difficult. And if we have students debate both sides of issues, what begins to emerge is a keen sense of examining any issue: looking at what is important and how important, and questioning and explaining that. Debaters behave proleptically, because they have to. Debating adds critical thinking structure to discussions about plans. Debaters learn to consider the status quo. They learn to evaluate plans in terms of their effects and importance. They learn to write speeches describing these things, and they learn to listen for them and consider them critically. Because there is a set structure, we can support and scaffold our learners. But we cannot hold their hands all the way. Debate forces them to go off script at times, while never going off topic. There is also time pressure, and the debate takes place in front of other people, an on-stage performance that is intimidating for everyone, and thus spurs learners to try harder. Yet, like scrimmaging with feedback, there are multiple opportunities to fine-tune performance (and get repeated input). Every time I read about techniques to promote high standards, rigor, etc., I always think to myself: That sounds an awful lot like debate, or Yup, debate can do that.
To me, it seems that debate is one technique that should not be left out, especially policy debate where learners research topics to come up with arguments for both sides in advance. Not only do we get four-skills language development, but we also get research skills, organization skills, and critical thinking skills development.

Show me another activity that does that.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals. Part 2 looked at using data and feedback. Part 3 looked at the challenges and benefits of academic discussions. Part 4 looked at portfolios and assessment.

Making EFL Matter Pt. 4: Portfolios and Assessment


In principle, a portfolio is an easy-to-understand and intuitively attractive concept: students keep the work they produce. The real challenge of a portfolio is what you do with it. Without a clear vision of how the tool will be used, it can easily end up like a child’s box of works produced in art class over the years: just a repository of things we hold on to for no specific reason other than sentimental attachment. We might pull these works out to look at them from time to time, but they are not a clear record of achievement, nor can they help inform future learning decisions. The central function of a quality portfolio is to clearly provide evidence of growth and to “…engage students in assessing their growth and learning” (Berger, Rugen & Woodfin, 2014, pg. 261). What that growth looks like depends on the goals of the course or program. When a course or program has clear goals, a portfolio can play a formative or summative role in demonstrating a learner’s achievement or progress toward achieving those goals. There are also practical/logistical constraints on portfolio deployment. What artifacts should be included, how many should be included, where the artifacts should be stored, and how and by whom the portfolio will be assessed are all important decisions. The results of these decisions can greatly impact the success of a portfolio as a learning tool.


Conceptualizing a portfolio

A portfolio is not simply a repository file. It must serve as a record of progress that is used to assess learning, by the learner him/herself or by others. All decisions on its structure and deployment must start from this basic understanding. The design of the portfolio itself, and its integration into the syllabus (i.e., how it will be used on a regular basis), must aim to make it as easy as possible to record progress/achievement and to make evidence or patterns of progress/achievement visible in the collected data. For this reason, a portfolio should hold not only student-produced academic work (essays, presentations, tests), but also documents that make progress and achievement salient. Such documents may include introductory statements, target-setting plans, records of time on tasks, assignment rubrics, progress charts, and reflection reports.


The importance of goals

In order to be effective, the portfolio must be closely aligned with the goals of the course or program and be able to show progress toward, or achievement of, those goals. In other words, it must provide specific evidence of progress toward the target competencies in a way that is clear and actionable. It must also do so in a way that makes the most effective and efficient use of time. These goals can include knowledge goals, skill goals, or learning goals for constructs such as responsibility, autonomy, revision, collaboration, service, and stewardship (to name a few). Without clear goals (usually arranged in a clear sequence), effective use of a portfolio is not possible, and the formative and reflective functions of a portfolio cannot be leveraged in a clear and actionable way. If, however, students know what they are aiming for and can compare their work against the target competencies (using the descriptions and rubrics that define those goals/competencies), portfolios can be a powerful tool for reflection and formative feedback.
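To make the idea of comparing work against target competencies concrete, here is a minimal sketch of a rubric as structured data. The criteria and band descriptors are invented for illustration; a real rubric would come from the course’s own goals.

```python
# Hypothetical rubric: each criterion maps to band descriptors ordered
# from weakest (index 0) to strongest. Invented content for illustration.
RUBRIC = {
    "organization": ["no clear structure", "some structure", "clear intro/body/conclusion"],
    "evidence":     ["no support", "some examples", "relevant, cited examples"],
    "delivery":     ["reads from script", "occasional eye contact", "engages audience"],
}

def score_work(self_ratings):
    """Map each criterion's self-rating (0-2) to its band descriptor,
    so feedback is concrete and actionable rather than a bare number.
    Unrated criteria default to the lowest band."""
    return {
        criterion: bands[self_ratings.get(criterion, 0)]
        for criterion, bands in RUBRIC.items()
    }

feedback = score_work({"organization": 2, "evidence": 1, "delivery": 0})
# feedback["evidence"] == "some examples"
```

The design point is that a rating always resolves to a descriptor the student can act on, which is what makes the goals “clear and actionable” rather than a grade in isolation.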


The importance of regular portfolio conversations

“In order for portfolios to be a tool for student-engaged assessment, including formative and summative assessments, they must be a regular part of the classroom conversation, not a static collection of student work” (Berger, Rugen & Woodfin, 2014, pg. 268). The portfolio is a tool of measurement, like a bathroom scale, and it can only be effective if it is used regularly. Students must regularly enter data into it (more on what kinds of data in the next section), and they must use it to look for patterns of success and gaps in learning/performance and strategy use. For this reason, providing clear guidelines and time for entering data into portfolios, facilitating the noticing of patterns and gaps, and giving students opportunities to discuss their progress in groups are all necessary. This will require classroom time, but also some scaffolding so students can understand how to work with data. Student-led conferences (mini presentations on progress done in groups in class) can be a useful tool. In groups, students can practice talking about learning, and also compare their progress and efforts with those of their classmates. Counselor conferences can also make use of portfolios, and if students have practiced beforehand in groups, time with counselors can be economized. Finally, to truly leverage the power of portfolios, passage presentations (public presentations where students explain and defend their learning accomplishments to groups of teachers, parents, or other concerned parties) can be particularly powerful, since they are public and official. If a passage presentation system is in place, it will make the portfolios more meaningful, greatly enhancing the effort students put into entering and analyzing data and the time they spend practicing explaining their learning. Passage presentations and counselor conferences turn student-led conferences into practice for “the big games.”


Portfolio contents Pt. 1: What are we trying to develop?

Let us review our key points so far. It must be easy to enter meaningful data into the portfolio and to notice trends or gaps. Noticing trends and gaps in performance requires an understanding of the goals of the course/program, so those goals must be clear. The portfolio should be used regularly: students should use it to monitor their learning, and students should be able to refer to it when explaining their learning to others (groups, counselors, or others). These points are all concerned with usability: making the experience of using a portfolio as simple, smooth, and effective as possible. What we actually put into the portfolio must be driven by our learning targets. As mentioned earlier, any program or course will have multiple targets for knowledge and skill acquisition, but also for constructs such as digital literacy, critical thinking, problem solving, responsibility, autonomy, revision, collaboration, service and stewardship, and possibly others. Therefore, it is important for portfolios to contain finished work and evidence of the process of improving work through working with others, checking and revising work responsibly, and helping others to do so, too. Portfolios should also contain records of learning activities and time on task as evidence of autonomy and tenacity.


Portfolio contents Pt. 2: Portfolios for language learners

As part of English language courses, there are usually weekly classroom assignments for writing and presentation. There may also be other writing assignments, or other speaking assignments. As for other constructs, the following have been shown to be important for successful language learning and therefore should be part of the curriculum:

  • Time on task
  • Time management (efficient use of time)
  • Commitment to improvement/quality (accountable for learning)
  • Critical evaluation of learning strategies
  • Collaboration (accountable to others)
  • Seeking feedback and incorporating feedback (revision)


If we try to build these into our portfolio system along with our language and culture target competencies while still managing the volume of the content, I believe that we must include the following elements, in addition to a general goal statement:

  1. Drafts and final products for a limited number of assignments, including a reflection sheet with information about the goals of the assignment (and a copy of the rubric for the assignment), time spent on the assignment, attempts at getting feedback and comments on how that feedback was included;
  2. Weekly reflection sheets (including a schedule planner) in which students can plan out the study plan for their week before it happens, and then reflect upon the results afterward. There could also be sections where students can reflect upon strategy use and explain their attempts to reach certain goals;
  3. Self-access tracking charts in which students list the reading, listening, or other self-access activities they engage in. Several of these charts can be made available to students (extensive reading charts, extensive listening charts, TOEFL/TOEIC test training, online conversation time, etc.) and students can include the charts relevant to their personal goals (though extensive reading will be required for all students).


As you can see, there is much to be decided: which assignments, and how many, will be included; how the various forms should be designed and created; and, for the English classes, how far we want to scaffold learners' ability to complete the portfolio and discuss their learning (something that I personally think is very important).
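To make the shape of such a portfolio concrete, the three elements above could be modeled as simple records. This is only a sketch in Python; every class name and field is my own assumption for illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the three portfolio elements described above.

@dataclass
class AssignmentRecord:          # element 1: drafts, final product, reflection
    title: str
    minutes_spent: int
    feedback_sought: list = field(default_factory=list)  # who gave feedback
    revisions: int = 0

@dataclass
class WeeklyReflection:          # element 2: plan first, reflect afterward
    week: int
    planned_minutes: int
    actual_minutes: int
    strategy_notes: str = ""

@dataclass
class TrackingChart:             # element 3: self-access activity logs
    activity: str                # e.g. "extensive reading"
    sessions: list = field(default_factory=list)  # minutes per session

def total_time_on_task(charts):
    """Sum self-access minutes across all charts (evidence of autonomy)."""
    return sum(sum(c.sessions) for c in charts)

charts = [TrackingChart("extensive reading", [30, 45, 20]),
          TrackingChart("online conversation", [25, 25])]
print(total_time_on_task(charts))  # 145
```

Even this toy version shows why usability matters: if entering a session takes more than a few seconds, the chart goes stale, and no trend or gap can be noticed.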


This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 2 looked at using data and feedback.

Part 3 looked at the challenges and benefits of academic discussions.



Berger, R., Rugen, L., & Woodfin, L. (2014). Leaders of their own learning: Transforming schools through student-engaged assessment. San Francisco: Jossey-Bass.

Greenstein, L. (2012). Assessing 21st century skills: a guide to evaluating mastery and authentic learning. Thousand Oaks, CA: Corwin.

Making EFL Matter Pt. 2: Data and Feedback

image of a filing cabinet


In the last few years, I’ve found myself increasingly reaching for books that are not on my SLA or Applied Linguistics shelves. Instead, books on general teaching methodology have engaged me more. It started a few years ago when I came across John Hattie’s Visible Learning for Teachers, a book in which various approaches in education are ranked by their proven effectiveness in research. This led me to further explore formative assessment, one of the approaches that Hattie identifies as particularly effective, citing Yeh (2011). I was impressed and intrigued and began searching for more–and man is there a lot out there! Dylan Wiliam’s Embedded Formative Assessment is great (leading to more blogging); Laura Greenstein’s What Teachers Really Need to Know about Formative Assessment is very practical and comprehensive; Visible Learning and the Science of How We Learn by Hattie and Yates has a solid section; and Leaders of Their Own Learning by Ron Berger et al., the inspiration for this series of blog posts, places formative assessment at the heart of curricular organization. There is, as far as I know, nothing like this in TESOL/SLA. I’m not suggesting, I would like to emphasize, throwing out bathwater or babies, however. I see the content of these books as completely additive to good EFL pedagogy. So let’s go back to that for a moment.

One of my favorite lists in TESOL comes from a 2006 article by Kumaravadivelu in which he puts forth his list of macro-strategies, basic principles that should guide the micro-strategies of day-to-day language classroom teaching, as well as curriculum and syllabus design. This list never became the Pinterest-worthy ten commandments that I always thought it deserved to be. Aside from me and this teacher in Iran, it didn’t seem to catch on so much, though I’m sure you’ll agree it is a good, general set of directives. Hard to disagree with anything, right?

  1. Maximize learning opportunities
  2. Facilitate negotiated interaction
  3. Minimize perceptual mismatches
  4. Activate intuitive heuristics
  5. Foster language awareness
  6. Contextualize linguistic input
  7. Integrate language skills
  8. Promote learner autonomy
  9. Raise cultural awareness
  10. Ensure social relevance

But what is missing from the list (if we really want to take it up to 11), I can tell you now, is Provide adequate formative feedback. One of the great failings of communicative language teaching (CLT) is that it has been so concerned with just getting students talking that it has mostly ignored one of the fundamental aspects of human learning: it is future-oriented. People want to know how to improve their work so that they can do better next time (Hattie and Yates, 2014). “For many students, classrooms are akin to video games without the feedback, without knowing what success looks like, or knowing when you attain success in key tasks” (pg. 67). Feedback helps when it shows students what success looks like, when they can clearly see the gap between where they are now and where they need to be, and when the feedback provides actionable suggestions on what is being done right now and what learners should do/change next to improve. It should be timely and actionable, and learners should be given ample time to incorporate it and try again (Wiliam, 2011; Greenstein, 2010).

One of the most conceptually difficult things to get used to in the world of formative feedback is the notion of data. We language teachers are not used to thinking of students’ utterances and performances as data, yet they are–data that can help us and them learn and improve. I mean, scores on certain norm-referenced tests can be seen as data, final test scores can be seen as data, and attendance can be seen as data, but we tend, I think, to look at what students do in our classes with a more communicative, qualitative, meaning-focused set of lenses. We may be comfortable giving immediate formative performance feedback on pronunciation, but for almost anything else, we hesitate and generalize with our feedback.  Ms. Greenstein, focusing on occasionally enigmatic 21st century skills, offers this:

“Formative assessment requires a systematic and planned approach that illuminates learning and displays what students know, understand, and do. It is used by both teachers and students to inform learning. Evidence is gathered through a variety of strategies throughout the instructional process, and teaching should be responsive to that evidence. There are numerous strategies for making use of evidence before, during, and after instruction” (Greenstein, 2012, pg. 45).

Ms. Greenstein and others are teachers who look for data–evidence–of student learning, and look for ways of involving learners in the process of seeing and acting on that data. Their point is that we have a lot of data (and we can potentially collect more) and we should be using it with students as part of a system of formative feedback. Berger, Rugen and Woodfin (2014) put it thus:

“The most powerful determinants of student growth are the mindsets and learning strategies that students themselves bring to their work–how much they care about working hard and learning, how convinced they are that hard work leads to growth, and how capably they have built strategies to focus, organize, remember, and navigate challenges. When students themselves identify, analyze, and use data from their learning, they become active agents in their own growth” (Berger, Rugen & Woodfin, 2014, pg. 97).

They suggest, therefore, that students be trained to collect, analyze, and share their own learning data. This sounds rather radical, but it is only the logical extension of the rationale for having students track performance on regular tests, or making leader boards, or even giving students report cards. It just does so more comprehensively. Of course, this will require training/scaffolding and time in class to do. The reasons for doing so are worth that time and effort, they stress. Data has an authoritative power that a teacher’s “good job” or “try harder” just doesn’t. It is honest, unemotional, and specific, and therefore can have powerful effects. There are transformations in student mindsets, statistics literacy, and grading transparency, all of which help make students more responsible and accountable for their own learning. Among the practices they suggest that could be deployed in an EFL classroom are tracking weekly quiz results, standardized tests, or school exams, using error analysis forms for writing or speaking assignments, and using goal-setting worksheets for regular planning and reflection.
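As a small illustration of what tracking weekly quiz results can mean as data work, a few lines of Python turn a score history into the kind of trend-and-gap summary a student might bring to a conference. The scores, window, and target here are invented for the example:

```python
# A hypothetical weekly-quiz tracker: the kind of simple analysis students
# could run (or do by hand on a chart) to see trends and gaps in their data.

def trend(scores, window=3):
    """Moving average of the last `window` scores, as a simple trend line."""
    recent = scores[-window:]
    return sum(recent) / len(recent)

def gap_to_target(scores, target):
    """How far the current trend sits below (positive) or above the goal."""
    return target - trend(scores)

weekly_quiz = [55, 60, 58, 65, 70, 72]   # six weeks of quiz percentages
print(round(trend(weekly_quiz), 1))      # 69.0 (average of 65, 70, 72)
print(gap_to_target(weekly_quiz, 80))    # 11.0 points still to close
```

The numbers are trivial to compute, but that is rather the point: the authority comes from the data being the student's own, visible, and specific.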

You can see the use of data for formative assessment in action in a 6th grade classroom here.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 3 looks at the challenges and benefits of academic discussions.



Yeh, S. S. (2011). The cost-effectiveness of 22 approaches for raising student achievement. Charlotte, North Carolina: Information Age Publishing.

Language on Stage: Debate and Musicals

In his 2011 book Creative Thinkering, Michael Michalko explains the idea of conceptual blending. What you do is take dissimilar objects or subjects and then blend them–that is, force a conceptual connection between them by comparing and contrasting features. It’s an enlightening little mental activity that can help you to come up with creative ideas or insights as you think about how the features of one thing could possibly be manifested in another. In the past few days, I’ve tried blending two activities that I’ve seen push EFL improvement more than any other: performance in club-produced musicals, and competitive policy debating. I’ve compared them with each other and with regular classroom settings.

musical image and debating image

I chose these two because over the last few years I have seen drama and debate produce language improvements that go off the charts. This improvement can be explained partly in terms of the hours on task that both of these activities require and the fact that students elect to take part voluntarily, but I don’t think that explains everything. There are certainly other possible factors: both require playing roles; both are team activities; both have performance pressure; both reward accomplishment; both require multi-modal language use and genre transformation; both require attention to meaning and form; both are complex skills that require repeated, intensive practice to achieve, and that practice is strictly monitored by everyone involved, who then give repeated formative feedback. Not complete, but not a bad list, I thought.

But then while reading Leaders of Their Own Learning, a wonderful new book by Ron Berger, Leah Rugen, and Libby Woodfin, I came across this quote by a principal of a middle school in the US:

“Anytime you make the work public, set the bar high, and are transparent about the steps to make a high-quality product, kids will deliver.”

I think the speaker hit the nail on the head as to why activities like debate and drama work: public, high expectations, and clear steps. Aha: public! The dominant feature of debate and drama is that there is a public performance element to them. Students prepare, keenly aware that they will be onstage at some point; they will be in the spotlight and they will be evaluated. Of course students need support and scaffolding and lots of practice before they can get on stage, but unless there is a stage, everything else won’t matter as much. It is the driver of drivers. Does this message suggest pressure? Yup, but also purpose. I have seen kids transformed by the experiences of competitive debating or performing in a musical. I refuse to believe that the mediocrity I see in so many language courses and programs is the way it has to be.

So, to get back to the whole reason for this little thought experiment: how can we take these best features of debate and drama and apply them to language programs? The key, I hope you will agree, is introducing a public performance element. There needs to be some kind of public element that encompasses a broad range of knowledge, skills, and micro-skills, and then there needs to be sufficient teaching, scaffolding, and practice to ensure public success. But how…?

Over the next few blog posts, I’ll be exploring these issues, drawing from ideas that are being developed in K-12 education in the US, particularly in approaches that have been working in high-challenge schools with English language learners and other at-risk learners (for example, by Expeditionary Learning, WestEd, and Uncommon Schools). In particular, I’ll be looking at ideas in Leaders of Their Own Learning (mentioned and linked above), Scaffolding the Academic Success of English Language Learners, by Walqui and Van Lier, and the soon-to-be-released second edition of Teach Like a Champion 2.0 by Doug Lemov. I’ll be looking at all of these through the lens of an EFL teacher in Japan. Many things in these books won’t be applicable in my context, but I suspect many may help inform ways of improvement here.


Repeat After Me: It’s the Feedback


The other day I observed a few lessons by a very good Japanese English teacher at a junior high school. At one point in the lesson, while the students were reading the textbook passage out loud, she walked around the classroom with two pads of post-it sticky notes, one green and one pink. As she listened to the students, she gave them pink notes for parts they did especially well, and green notes for parts they were having trouble pronouncing. On each note she wrote a specific phrase, word, or part of the word the students were either doing well or needed to improve. And at the end of the class, she made some general comments  and engaged the students in a little extra practice of specific pronunciation errors that many students were making. Each student who received a green note, however, was responsible for coming up to the teacher after class and demonstrating that they could produce the sound correctly.

I was deeply impressed for a couple of reasons. First, because this is the first time in three years and dozens of observed EFL lessons that I have seen a teacher do this. It was great to see a class that dealt with pronunciation at all; and it was especially impressive to see a Japanese teacher deal with pronunciation in this manner. Too many JTEs ignore the fact that English, like any language, is first and foremost a system of sounds. The written form dominates language lessons in Japan, where learning English has traditionally meant essentially learning to read English. Listening activities usually consist of listening to the blocks of audio that just verbalize the text content. And I think it’s fair to say that most JTEs won’t go near pronunciation in a class without an ALT or a CD ready to model the “correct” pronunciation. It takes confidence, and it takes an acceptance of the view that a JTE is a valid example of language use in the classroom–language as sounds, language as culture, language as a means of communication, all of which the teacher displayed nicely. And second, this teacher demonstrated something that is incredibly important in pronunciation learning (and indeed in all of language learning): formative feedback.

For any type of learning, it is essential that people can see what they need to do (a model), can give it a try (practice), receive feedback on their performance or learning (formative feedback), and then get a chance to do it again to correct problems. Of particular importance is the feedback loop of performing a skill or demonstrating knowledge and then receiving quick, actionable, formative feedback that can immediately be used to make improvements. Yet this simple procedure seems to be a rare thing in many language classrooms, even when the subject is as clear a skill as pronunciation. That it is effective seems to be beyond question. Hattie (2012) stresses the importance of feedback, particularly disconfirmation feedback (Hey, you’re doing that wrong!), and Wiliam (2011) makes the case for embedded formative assessment that I found so compelling I did a series of posts on the book last year. Both of these authors are concerned with general learning and teaching. In the last year or so, however, I have increasingly come across papers and books that make the case for feedback in language learning, like this one from Derwing and Munro in Pronunciation Myths:

“Ample studies have shown that improved pronunciation can be achieved through classroom instruction…However, it is becoming increasingly clear that a key factor in the success of instruction is the provision of explicit corrective feedback” (pg. 47).

Not only is explicit corrective feedback important, the claim is made that it is essential. Without it (that is, under conditions of exposure alone), learning (improvement of pronunciation) does not seem to happen at all! Derwing and Munro mention two studies to back this up. The first is Saito and Lyster (2012), who managed to get Japanese students to improve with only four hours of training on the dreaded /r/ and /l/ sounds. The other is Dlaska and Krekeler (2013), who found that explicit corrective feedback was much more effective than just providing models. After years of don’t-disturb-the-learners-while-they’re-engaged directions, it seems that the role of explicit correction is finally being recognized.

You might argue that what the teacher I observed was doing was not that efficient or important. In cases like EFL courses in Japan, where time is so limited, it may seem unreasonable to spend time on pronunciation, especially with the high importance of entrance exams and other high-stakes tests. Indeed, many teachers argue that pronunciation is something they just don’t have time for. But actually, the teacher wasn’t spending much time on it at all. Most of the correction happened while the students were doing a reading fluency task (reading the text content multiple times). The teacher’s general comments and whole-class feedback/practice took less than two minutes. Several years of similar feedback will undoubtedly have a positive effect on student pronunciation, student confidence, and student attitudes toward the importance of making the sounds of English reasonably accurately. In addition to the teacher’s feedback, student-to-student (peer) feedback could also be put to use. That will also help with sound discrimination training and meta-linguistic skill training.


Dlaska, A. & Krekeler, C. (2013). The short-term effects of individual corrective feedback on L2 pronunciation. System, 41, 25-37.

Saito, K. & Lyster, R. (2012). Effects of form-focused instruction and corrective feedback on L2 pronunciation development: the case of English /r/ by Japanese learners of English. Language Learning, 62, 595-633.



Does this Shirt Make me Look Fat? Motivation and Vocabulary

I love the topic of motivation in language learning (past posts here, here, and here, for example). In the world of TESOL, however, it’s a little like that old joke about the weather–everyone seems to talk about it but nobody does anything about it. In Japan, I often hear students voicing out loud how they wish they could speak English (even though they are students and even though they are enrolled in an English course at present). They sound a lot like the people I know who talk about losing weight or exercising more: vague, dreamy, and not usually likely to succeed. TESOL research and literature talks a lot about integrative and instrumental motivation, ideal selves, willingness to communicate, etc., concepts that just seem so far from the practical reality teachers and those dreamy-eyed students really need. So in this post I would like to focus on the positive and practical and provide a list of things to do that improve the chances of success, drawing on formative assessment ideas and general psychological ideas for motivation. The idea is to approach motivating learners the same way one would go about motivating oneself to lose weight or start and stick to an exercise program. Instead of talking about fuzzy motivations, let’s focus on just doing it. The enemy in my sights is much the same enemy that faces the would-be dieter or exerciser: procrastination, a powerful slayer of great intentions.

First of all, let’s get one thing straight: you can’t do much about the motivation kids bring with them to your class on Day 1, but after that, you certainly can. What you and your students do together affects how they think and feel about language learning and themselves. That is, teachers can change attitudes by changing behaviors. And as a teacher, you have a lot of power to change behaviors. As BJ Fogg says, you shouldn’t be trying to motivate behavior change, you should be trying to facilitate behavior change.

Vocabulary learning is the perfect place to try out techniques for motivation success and overcoming procrastination because it is in many ways the most autonomy-friendly part of language learning. It can easily be divided into manageable lists, and success/failure/progress can be fairly easily observed by everyone. It is also a topic I have to run a training session on this summer, and I need some practical ideas for teachers to try out with their students.

OK, here we go. In addition to using teaching techniques that make the vocabulary as easy to understand and remember as possible, try the following:

  1. Make a detailed plan with clear sub-goals that are measurable and time-based. Break the vocabulary list into specific groups and set a specific schedule for learning them. This provides a clear final target and clear actionable and incremental steps, important tenets of formative assessment. Create a complete list and  unit-by-unit or week-by-week lists. Be very clear on performance criteria for success (spelling, pronunciation, collocations, translation, etc.). Make the plans as explicit as possible, and put as much in writing as possible.
  2. Provide lots of opportunities for learners to meet and interact with the vocabulary. Learners need to actively meet target items more than 10 times each (and more than 20 times in passive meetings) if they are expected to learn them. Recycle vocabulary as much as possible.
  3. Create a system that requires regular out-of-class study (preview/review). Out-of-class HW assignments should start ridiculously small (tiny habits–see below), such as writing out two sentences one time each. Grow and share and celebrate from there.
  4. Ensure success experiences. Success is empowering. The teacher’s job is to ensure that learners can learn and can see the results of their learning. Do practice tests before the “real” test, and generally provide sufficient learning opportunities to ensure success (“over-teach” at first if you need to). Lots of practice testing is a proven technique to drive learning, and students need to do it in class and in groups, and learn how to do it on their own.
  5. Leverage social learning and pressure. Have learners learn vocabulary together, teach and help each other sometimes, encourage each other, and just generally be aware of how everyone else is succeeding. Real magic can happen if a learning community puts its mind to something.
  6. Have learners share their goals and progress, publicly in class  and with friends, family and significant others. Post results on progress boards, challenge and results charts, etc. At a very minimum, the teacher and the student herself should always know where they are and what they need to do to improve.
  7. Remind learners of the benefits of success. Provide encouragement, especially supportive, positive oral feedback at times when it is not necessarily expected.
  8. Make sure that sub-goal success is properly recognized and rewarded. This provides a stronger sense of achievement.
  9. Make 1-8 as pleasant (fun, energetic, meaningful) as possible.
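Step 1 above, breaking a vocabulary list into measurable, time-based chunks, is mechanical enough to sketch in code. The word list, chunk size, and dates below are purely illustrative, not taken from any actual course:

```python
import datetime

# Illustrative sketch of step 1: splitting a vocabulary list into
# measurable, time-based sub-goals, one manageable chunk per week.

def weekly_plan(words, per_week, start):
    """Yield (deadline, sublist) pairs: one chunk of words per week."""
    for i in range(0, len(words), per_week):
        deadline = start + datetime.timedelta(weeks=i // per_week + 1)
        yield deadline, words[i:i + per_week]

vocab = [f"word{n}" for n in range(1, 21)]          # a 20-item unit list
plan = list(weekly_plan(vocab, 5, datetime.date(2015, 4, 6)))
print(len(plan))       # 4 weekly sub-goals
print(plan[0][0])      # 2015-04-13: deadline for the first five words
```

Whether this lives in a spreadsheet, a printed chart, or a teacher's script, the point is the same: a clear final target decomposed into clear, dated, incremental steps.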

You may already be doing these things and still not getting the progress you hope for because the students just aren’t studying enough outside of class. Products of their age, they are driven by distractions–the need to check their Twitter feeds, for example, and the pressing issue of  incoming LINE comments, or whatever. But they also suffer from the oppression of the same procrastination monster that we all suffer from. Oliver Emberton has a nice post on dealing with procrastination. For teachers, I would like to call attention to the last two items on his list of recommendations: Force a start, and Bias your environment. “The most important thing you can do is start,” Mr. Emberton writes. This is certainly true.


You can counsel them on the need to turn off their devices and “study more.” But unless you give them clear, doable, and manageable tasks, start them in class, and require and celebrate their use, it is unlikely they will get done. BJ Fogg recommends that you facilitate behavioral change by promoting tiny habits. His work makes the establishment of positive habits seem so much easier. You can watch an earlier overview of his method here, or a fun TED talk here. Much of what he describes can only be done by the individual learner, but as a teacher you can set the target habit behavior and you can help learners see the fruits of their newly established habits. Just choose a vocabulary learning strategy, reduce it to its simplest form, and provide a place to celebrate success. Then try to grow and celebrate the continued use of these positive habits. This modern world is a hard one to study in. There are really too many distractions too close at hand. It takes real strength, real grit, to resist them and start or keep at something new. Helping students to develop this strength and grit is now part of any teacher’s job description, I think.

If you are looking for more on how to teach vocabulary, including a nice section on web and mobile app tools that can help, Adam Simpson’s blog has a nice post on vocabulary. If you are looking for something that combines the latest in TESOL theory on motivation with practical techniques for the items I listed above, Motivating Learning by Hadfield and Dornyei is the best thing I’ve seen. It has 99 activities to choose from.

Learning Styles


One idea that comes up again and again is that of learning styles. Of course you have heard about it: some people are visual learners, some people have to hear things to learn them, and some kinesthetic people need to get their whole body into the learning process or nothing sticks. Google these terms–visual, auditory, and kinesthetic–and you’ll get enough hits to make you think that learning styles constitute an established theory in learning.



Unfortunately, that assurance would be misplaced. Daniel Willingham, in When Can You Trust the Experts?, has this to say:

…there is no support for the learning styles idea. Not for visual, auditory, or kinesthetic learners…The main cost of learning styles seems to be wasted time and money, and some worry on the part of teachers who feel they ought to be paying more attention to [them]…(page 13).

This comes as quite a shock for many people because the idea is so entrenched. “Experts” talk about it often. It is mentioned in countless books and articles. I have heard it many times and repeated it myself. But, nope, it just ain’t so. There is no such thing as a “visual learner.” At least, there is no demonstrated effect in any scientific study. Mariele Hardiman summarizes the myth and the reality nicely in The Brain-Targeted Teaching Model. She cites Pashler et al. (2008), which you can read yourself if you are still numb in disbelief (citation below). Hattie and Yates have a unit devoted to this myth if you are still not convinced. Great book, by the way.

But your intuition tells you that there are differences between learners. There most certainly are. Every brain is wired differently because of the individual’s experience and their age of development (for children). These developmental differences and experience differences are real and have very real consequences for how we should teach and the best sizes for classes, if we take differences seriously.

Essentially, differences take the form of preferences, preparation, motivation, and pace. According to David Andrews of Johns Hopkins University in the wonderful MOOC named University Teaching 101, students have preferences for the modality (yes, here we can talk about print or video or audio), groupings, and types of assessment and feedback. Students also vary in how prepared they are to learn. All learning involves connecting new knowledge to knowledge already held. If your students lack certain schema or factual knowledge, they will need more time to gain that and the target knowledge. In any given class, motivation differences (often because of prior experience) and time commitments can produce huge differences in the amount of attention and effort students will exert and sustain. Lastly, processing speeds (again because of experience and practice) in reading and auditory processing can make content more or less challenging than the instructor may think it is. Watch any class taking a test to see pacing differences in action. Students finish at very different times, and this is often unrelated to proficiency with the target content.

So, what should an EFL teacher do? Well, smaller classes are a start, but only if you are really planning to do something about it. If you are going to teach to the same middle as always, smaller classes will not necessarily give your students any benefits. Small groupings ranks only #52 in Hattie’s list of effective interventions, probably for this reason. Mr. Andrews suggests personalizing content and delivery as much as possible. He suggests getting to know your students as much as possible, and giving them as much choice as possible in how they learn. This is a delicate balancing act, in my experience. Students can be notoriously bad at understanding themselves, their strengths and weaknesses, and choosing better strategies. The teacher must push and pull them carefully up to better performance, offering them choices and checking that they are choosing wisely and making sufficient effort to see results. Technology can help a lot here. Recording short lectures/lessons and making them available with transcripts to students can allow slower/less-experienced/different-preference students options for learning and reviewing that can allow them to customize the education experience for themselves. And research has shown that repeated viewing/reading and multi-modal presentation are significantly correlated with better learning; and variety and choice will keep attention better and improve motivation. One crucial part of personalization is personalizing formative feedback (a series of posts on formative assessment starts here). The power of formative feedback in driving learning should not be underestimated, but you need to be close to your students to either do that yourself or teach them or their classmates to do it. This also involves making goals salient to students with clear rubrics, so that they can see where they are going and how they are progressing, and what they need to do to get better.
A recent study in math classes at an American university illustrates this. For homework assignments, some students were given formative feedback and follow-up problems based on performance, while the spacing of content repetition was controlled for maximum effectiveness. This small change resulted in a 7% improvement on the short answer section of the final exams! Personalization seems to have that kind of power, if done right.

As Mr. Andrews says, “personalization has become a standard for learning in every part of our lives except school. And it will become a standard in school.” Get ready to hear more and more about it.


Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

Learning styles image fragment from https://www.home-school.com/news/discover-your-learning-style.php