Classroom debate resources

Getting students to debate in EFL classrooms is not easy. It takes time, patience, and practice, of course, but knowing how, and how much, to support students is probably most important.

In the world of debate in Japan, there are two main styles: policy and parliamentary. The former requires research and more preparation, and places great emphasis on argument support, while the latter requires students to think logically on their feet. I tend to favor policy debates for reasons of academic skill development and content exploration. The following materials can be used for policy debating in EFL classrooms. This activity will require at least 6-10 hours of instruction and practice if everyone is unfamiliar with debate. Because of classroom time constraints, a short debate format is necessary. The following materials are for a short (45-minute) policy debate. Please adjust the times to match your situation. In general, allowing students more preparation time before speeches and questions leads to better results.


Affirmative Constructive: DebateTemplate_AffirmativeConstructive

Negative Constructive: DebateTemplate_NegativeConstructive

Attack (for both): DebateTemplate_Attack Template

Defense-Summary (for both): DebateTemplate_DefenseSummary

Other Resources

Debate Management Sheet and Chairperson Script

Debate Management Sheet and Chairperson Script (with more prep time): Debate Mgmnt Sht and Chair Script (more prep time)

Debate Flowsheet (for note taking): DebateFlowsheet

What can Data do for EFL?

image of glasses

In the US, something very interesting is happening as an indirect result of standards-based testing and the establishment of charter schools: a novel set of research conditions has emerged. Charter schools are themselves often experiments, established in part to be “…R&D engines for traditional public schools” (Dobbie & Fryer, 2012). As such, they often eschew many elements of traditional education practice, and instead attempt to institute the results of research into educational effectiveness from the last several decades. For many charter schools, this research-informed pedagogy assumes a unifying or rallying role, and the ideas are woven into their mission statements and school cultures as they try to take underprivileged kids and put them on the road to college, through grit training, artistic expression, or higher expectations. Even from a brief visit to the websites of charter systems such as Uncommon Schools, MATCH, or KIPP, you can see what they intend to do and why. And some of these charter schools have become extremely successful—in terms of academic achievement by students, and in terms of popularity. Both of these have led to new research opportunities. You see, the best charter schools now have to resort to lotteries to choose students, lotteries that create randomly assigned groups: those who got in and received an experimental treatment in education, and those who didn’t and ended up going to more traditional schools. That provides researchers with a way to compare programs by looking at what happens to the students in these groups. And some of the results may surprise you.
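The lottery comparison above is essentially a natural randomized experiment: because winning is random, a simple difference in mean outcomes between winners and losers estimates the effect of attending. Here is a minimal sketch of that logic, with entirely made-up numbers (the group sizes, means, and spreads are assumptions, not real charter-school data):

```python
import random
import statistics

# Hypothetical lottery: applicants are randomly assigned to a charter
# school ("treatment") or a traditional school ("control"). Scores are
# synthetic, drawn from normal distributions chosen for illustration.
random.seed(42)
treatment = [random.gauss(72, 10) for _ in range(200)]  # lottery winners
control = [random.gauss(65, 10) for _ in range(200)]    # lottery losers

# Because assignment is random, the difference in means is an unbiased
# estimate of the effect of attending, with a standard error from the
# two sample variances.
diff = statistics.mean(treatment) - statistics.mean(control)
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"estimated effect: {diff:.1f} points (95% CI about ±{1.96 * se:.1f})")
```

The design does the heavy lifting here: no statistical adjustment is needed precisely because the lottery, not the families or the schools, decides who gets the treatment.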

Let’s play one of our favorite games again: Guess the Effect Size!! It’s simple. Just look at the list of interventions below and decide whether each intervention has a large (significant, important, you-should-be-doing-this) impact, or a small (minimal, puny, low-priority) impact. Ready? Let’s go!

  1. Make small classes
  2. Spend more money per student
  3. Make sure all teachers are certified
  4. Deploy as many teachers with advanced degrees as possible
  5. Have teachers give frequent feedback
  6. Make use of data to guide instruction
  7. Create a system of high-dosage tutoring
  8. Increase the amount of time for instruction
  9. Have high expectations for students

Ready to hear the answers? Well, according to Dobbie & Fryer (2012), the first four on the list are not correlated with school effectiveness, while the next five account for a whopping 45% of the variation in school effectiveness. Looking at the list, this is not surprising, especially if you are aware of the power of formative feedback.
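For readers who want a concrete handle on what "effect size" means in games like this: the most common measure is Cohen's d, the difference between two group means divided by their pooled standard deviation (conventionally, d around 0.8 is "large" and 0.2 is "small"). A minimal sketch, using invented scores rather than anything from Dobbie & Fryer:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two groups,
    using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Illustrative (made-up) test scores for tutored vs. untutored students:
tutored = [78, 82, 75, 90, 85, 80, 77, 88]
untutored = [70, 74, 68, 80, 72, 75, 69, 73]

print(f"Cohen's d = {cohens_d(tutored, untutored):.2f}")
```

The point of standardizing is that interventions measured on different scales (reading scores, math scores, attendance) can be compared on one yardstick, which is what makes a ranking exercise like the one above possible at all.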

Some people might still be a little skeptical. Fine. Go and look for studies that prove Dobbie and Fryer wrong. You might find some. Then look at where the research was done. Is the setting like yours? Just going through this process means we are putting data to work. And that is much, much better than just going with our own instincts, which are of course based on our own experiences. I teach English in Japan, and I know that is a far cry from the hard-knocks neighborhoods where Dobbie and Fryer looked into the effects of interventions in MATCH schools. But I think there are enough similarities to warrant giving credence to these results and even giving them a try at schools in Tokyo. I have several reasons. First, the research on formative assessment, high expectations, classroom time, and pinpointed direct instruction is extensive and very robust. Nothing in their list is surprising. Second, in Japan, English is often as foreign from the daily lives of most students as physics or math is from the lives of many American teens. The motivation for learning it is likewise unlikely to be very strong at the beginning. Many of the students in the MATCH system are less than confident in their ability with many subjects, and less than confident about aiming at college, a world that is often quite foreign to their lives. Many English learners in Japan similarly see English as foreign and unrelated to their lives, and the notion that they can become proficient at it and make it a part of their future social and/or professional lives requires a great leap of faith.

But through the MATCH program, students do gain in confidence, and they do gain in ability, and they do get prepared for college. Given the demographic, the success of MATCH and the other “No Excuses” systems mentioned above is stunning. It also seems to be long-lasting. Davis & Heller (2015) found that students who attended “No Excuses” schools were 10.0 percentage points more likely to attend college and 9.5 percentage points more likely to enroll for at least four semesters. Clearly the kids are getting more than fleeting bumps in test scores. And clearly the approach of these schools—putting proven interventions to work—is having a positive effect, although not everyone seems to be happy.

And it’s not just that they are making use of research results. These schools are putting data to use in a variety of ways. Paul Bambrick-Santoyo of Uncommon Schools has published a book, Driven by Data, that outlines their approach very nicely. In it we can find this:

Data-driven instruction is the philosophy that schools should constantly focus on one simple question: are our students learning? Using data-based methods, these schools break from the traditional emphasis on what teachers ostensibly taught in favor of a clear-eyed, fact-based focus on what students actually learned (pg. xxv).

Driven By Data book cover

They do this by adhering to four basic principles. Schools must create serious interim assessments that provide meaningful data. This data must then be carefully analyzed so that it produces actionable findings. These findings must be tied to classroom practices that build on strengths and eliminate shortcomings. And finally, all of this must occur in an environment where the culture of data-driven instruction is valued, practiced, and able to thrive. Mr. Bambrick-Santoyo goes through a list of mistakes that most schools make, challenges that are important to meet if data is to be used to “…make student learning the ultimate test of teaching.” The list reads like a checklist of standard operating procedures at almost every EFL program I have ever worked in. Inferior, infrequent, or secretive assessments? Check, check, check. Curriculum-assessment disconnect? Almost always. Separation of teaching and analysis? Usually, no analysis whatsoever. Ineffective follow-up? Har har har. I don’t believe I have ever experienced or even heard of any kind of follow-up at the program level. Well, you get the point. What is happening in EFL programs in Japan now is very far removed from a system where data is put to work to maximize learning.
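The analysis step of that cycle, turning interim-assessment results into actionable findings, can be sketched in a few lines. This is a hypothetical illustration only: the student names, skill labels, and mastery threshold are all invented, not taken from the book:

```python
from collections import defaultdict

# Hypothetical interim-assessment results: (student, skill the item
# tested, whether the student answered correctly).
results = [
    ("Aki", "inference", True), ("Aki", "main idea", True),
    ("Ben", "inference", False), ("Ben", "main idea", True),
    ("Yui", "inference", False), ("Yui", "main idea", True),
]

# Group item results by the skill each item targeted.
by_skill = defaultdict(list)
for _student, skill, correct in results:
    by_skill[skill].append(correct)

MASTERY_THRESHOLD = 0.7  # arbitrary cutoff for flagging reteaching

# The "actionable finding": which skills the class needs retaught.
for skill, answers in sorted(by_skill.items()):
    rate = sum(answers) / len(answers)
    flag = "reteach" if rate < MASTERY_THRESHOLD else "on track"
    print(f"{skill}: {rate:.0%} correct -> {flag}")
```

The design point is that the output is tied to classroom action (reteach a specific micro-skill) rather than to a single aggregate test score, which is precisely the break with "what teachers ostensibly taught" that the quote above describes.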

But let’s not stop at the program level. Doug Lemov has been building up a fine collection of techniques that teachers can use to improve learning outcomes. He is now up to 62 techniques that “put students on the path to college,” after starting with 49 in the earlier edition. And how does he decide on these techniques? Through a combination of videoing teachers and tracking the performance of their classes. Simple, yet revolutionary. The group I belonged to until this past April was trying to do something similar with EFL at public high schools in Japan, but the lack of standardized test taking makes it difficult to compare outcomes. But there is no doubt in my mind that this is exactly the right direction in which we should be going. Find out what works, tease out exactly what it is that is leading to improvement (identify the micro-skills), and then train people through micro-teaching to do these things and do them well and do them better still. Teaching is art, Mr. Lemov says in the introduction, but “…great art relies on the mastery and application of foundational skills” (pg. 1). Mr. Lemov has done a great service to us by videoing and analyzing thousands of hours of classes, and then triangulating that with test results. And then further trialing and tweaking those results. If you don’t have a copy of the book, I encourage you to get one. It’s just a shame that it isn’t all about language teaching and learning.

Interest in using data in EFL/ESL is also growing. A recent issue of Language Testing focused on diagnostic assessment. This is something that has grown out of the same standards-based testing that allowed the charter schools in the US to thrive. You can download a complimentary article (“Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles”, by Luke Harding, J. Charles Alderson, and Tineke Brunfaut). You can also listen to a podcast interview with Glen Fulcher and Eunice Eunhee Jang, one of the contributors to the special issue. It seems likely that this is an area of EFL that will continue to grow in the future.

The Slow Drive to Data in Japanese EFL

highway image

Japanese public school education, as a whole, is remarkably cost-efficient, or so it seems on paper. Japan spends right around the OECD average per child for both primary and secondary education, and much less than the U.S., the U.K., the Scandinavian countries, or indeed most European countries. Yet Japan continually scores high on international tests of achievement in reading, math, and science. On the most recent PISA test (2012), for example, Japan was 4th in reading and science, and 7th in math. This is a stunning achievement, one that most countries in the world would love to emulate.

No doubt some of these impressive results are at least partially due to factors outside the schools and classrooms of public or government-mandated education, however. We really can’t underestimate the effects of high expenditures by parents on supplementary education, expensive cram schools, or juku, in particular. There is an industry built up around these school-companies that boggles the minds of the uninitiated. They come in many flavors, but generally speaking do only one thing—prepare kids to take tests, especially entrance exams. They do this through a combination of tracking entrance exams and demographics, and providing intensive preparation for taking those tests. They are data collecting and processing machines, making extensive use of data for all parts of their operations–from advertising, to information gathering, to student performance tracking. They do this all in a way that is extremely impressive. There is in Japan both a strong cultural emphasis on the importance of education and a climate where frequent test taking is considered both normal and important. The jukus have leveraged that to create an industry that is huge, ubiquitous, and, because parents are paying 35,000-50,000 yen per month per child to these businesses, economically very significant. Combined with the general education system, the result is an education system that, although expensive and requiring serious commitments of time (evenings, holidays), is effective for teaching reading, math, and science.

But somehow not for English. PISA does not test English, but comparisons of norm-referenced proficiency scores across countries reveal Japan to be a poor performer. TOEFL iBT scores from 2013 show that Japan is punching below its weight. If we look only at overall scores, Japan (70) is woefully behind China (77), South Korea (85), and Taiwan (79), but remarkably similar to Mongolia, Cambodia, and Laos. And if we look only at the scores for reading, the skill that receives by far the greatest amount of attention in the school system, the results are not really any better: Japan (18), China (20), South Korea (22), and Taiwan (20). The scores on the IELTS tests show a similar, though less pronounced, pattern. On the Academic version, Japan again scores lower than its Asian neighbors: Japan (5.7), South Korea (5.9), and Taiwan (6.0). Now I know some people have validity issues with comparing countries using test data, and certainly that is true for TOEIC scores by country, because that test is so widely applied and misused. But the TOEFL iBT and the IELTS are high-stakes tests that are taken by a fairly specific, highly-motivated, and well-heeled demographic. The scores say nothing about average students in those countries, not to mention the less proficient students, to be sure, but I do think they are fair to compare. And I know that students and programs are much more than the sum of the ability of students to take tests, but come on. It is not totally wrong to say that almost the entire purpose of English education in junior and senior high school, and the accompanying jukus, is to get students ready for tests, and yet the results are still pretty poor.


Percentages of students who go to juku (and how often per week) from elementary school to high school

So what explains the problem? Well, this has been the subject of endless debate, from what should be taught to how it should be taught. Lots of people blame the entrance exams, but let’s be careful with that. It is probably more accurate to say that the type and quality of the entrance exams prevent the power of the juku machine from helping to improve the situation. What I mean is that the types of tests jukus and most schools focus on are different from tests like the TOEFL iBT or IELTS. The TOEFL iBT and IELTS assess all four skills (reading, listening, writing, and speaking), and they do so in a way that judges whether the test taker can use the skills communicatively to understand and express ideas and information. Entrance exams in Japan, however, very often have an abundance of contextless sentences and an abnormally large number of grammar-focused questions. Simply put: the preparation students engage in to pass high school or college entrance exams will not help all that much when students sit down to take the TOEFL iBT or the IELTS tests.

If entrance exams tested four skills and the quality of written and spoken expression, you can bet that the jukus would find a way to prepare students for that (and a very large number of parents would be willing to pay them handsomely to do so), instead of the (mostly) discrete vocabulary and grammar items they can get away with focusing on now. You can be sure that they would find ways to bring data collection and analysis to bear if they had to deal with this new reality. The fact that their system works so well for multiple choice items, and the fact that the productive skills of English are not well-suited to multiple choice assessment, is probably one of the biggest problems for Japanese English education.

But it’s not only the tests that are a problem. The current official policy for public school classrooms favors a better balance of the four skills, using the L2 more predominantly in the classroom for procedural and communicative interaction between the teacher and the students and between the students themselves (communicative language teaching, or CLT). However, what the Course of Study pushes for and what the teachers in classrooms are able to manage is not always the same. Of the recent policy mandates, it is the Teach-English-(mostly)-in-English directive that is causing the most consternation among teachers, probably because it is so obvious and measurable. Teachers are mostly, if often tepidly, complying with this policy, and in many cases are trying hard to make it happen, according to statistics I’ve seen. These statistics on use of English are tracked regularly using questionnaires and self-reporting by teachers. And the numbers show that about 50% of teachers are now using English at least 50% of the time they are in classrooms—although there is great variation between teachers at the school level, district level, and prefectural level. Almost no one is recording classes regularly and counting the minutes, however, with this group the only exception I know of. The case of CLT use is fuzzier and less reliable still, partly because interpretations of what is and is not a CLT activity vary. Compliance with CLT directives is happening, but its deployment is certainly not systematic, it is not widespread, and it is not receiving a lot of classroom time.

Even these modest changes (inroads?), however, have taken tremendous effort to achieve, both in terms of government resources and in terms of effort on the part of individual English teachers who, in most cases, never experienced lessons taught in English (or using a CLT approach) themselves as students, were not trained to conduct lessons that way in pre-service education courses, and received very little in-service guidance or training as they attempted to comply with government directives. It’s a lot of effort and resources going toward something that might not work; whether it does is debatable, because not enough clear evidence exists to prove it works. Not yet, at least. Neither the public school system nor the Education Ministry has the resources, expertise, or systems for gathering English subject performance data effectively and efficiently. In classrooms, teachers rarely track performance. At the school or program level, there is no concept of tracking micro-skill development over months or years, at least none that I’ve seen. Maybe it’s happening at some private schools, but my guess is that all anyone is tracking is multiple choice test-taking performance, with maybe some vocabulary size and reading speed in programs that have their act together a little.

The reason I bring this up, however, is to make you think about what is driving this policy, and why people have the faith they have in approaches, methods, or materials, without really knowing if, or to what degree, they work. In the world of EFL in Japan, a lot of faith drives a lot of programs—more specifically, a lot of faith and a lot of tradition. Walk into any mid-level high school and you can find students in English classes being prepared for multiple choice tests they will never take, for example. Within existing lessons, there is a lot of tweaking to make interventions “work” better, no doubt. And while sometimes that means more effective, it could also mean more time-efficient, or easier for students to do. The honest truth is that “effective” is often hard to determine. By definition, effective interventions (even those that have been carefully researched) must be sustained for rather long periods of time—months at least. Micro-skills are hard to identify, hard to set goals for, and hard to track. But the potential payoff is great.

Adding more English to classrooms might make students a little better at listening (though Eiken scores comparing prefectures that differ greatly in the amount of classroom English used seem to show no correlation). And I haven’t seen any data that suggests that students in Japan are doing better at anything English-wise because their teachers have tacked a little bit of “communicative” writing or speaking onto the end of regular explanation-heavy lessons. I’ve made this point before: a little bit of CLT dabbling is unlikely to have much effect (though this should not be interpreted as criticism of introducing more CLT or any CLT activities into a classroom—you gotta start somewhere, you know). I have spoken to more than a few high school and university teachers who express great alarm at the state of grammar knowledge of the students they see regularly. The suggestion I hear is that all of this CLT stuff is coming at the expense of good old grammar teaching. While this may well be impacting the ability of students to tackle entrance exam questions, my own experience and my own opinion is that students these days are indeed more able to use at least a little of the knowledge of English they build up over the years in schools, something that was really not the case years ago. And, by the way, if you have ever sat through grammar lessons in high schools in Japan, you probably won’t think that more of that could be better for anything.

But that brings me to my point. We are all slaves to our own experience and our own perspective; as Daniel Kahneman calls it, what you see is all there is. All we seem to have is anecdotal evidence when it comes to program-level decisions. If only there were a way to take all that data generated by all that testing in Japan and make it work better for us. In closing, I’d like to leave you with a quote from John Hattie’s wonderful book Visible Learning for Teachers:

“The major message, however, is that rather than recommending a particular teaching method, teachers need to be evaluators of the effect of the methods that they choose” (pg. 84)

Lessons from Training


I came across another interesting article on the BBC website today. It was temptingly titled Can you win at anything if you practise hard enough? It told the story of a young table-tennis coach from the UK, named Ben Larcombe, who attempted to take his lumpy and “unsporty” friend and turn him, over the course of a year, into a top competitive player. As part of the process of writing a book on the topic of training and improvement, the pair documented Sam Priestley’s transformation from a sort-of player into an impressively good one, at least to my eyes. You can watch the whole thing unfold before your eyes in this video. The article includes, however, a rather bubble-bursting comment from an English table-tennis coach and expert named Rory Scott:

“He is nowhere near the standard of the top under-11 player in the UK.”

So the BBC writer goes on to ask this question: “Why did the project fail?” What? Just because Sam didn’t meet his goal of getting into the top 250 table tennis players in the UK in one year of practicing every day, doesn’t mean it was a failure at all. It shows the potential for people to learn when they are persistent and work hard regularly with good strategies and feedback. The rest of the article goes on to explore exactly that potential, largely from a cultural viewpoint of different attitudes to natural ability and the need to persist versus instantaneous gratification.

I’ve been seeing similar sorts of studies a lot lately. The last few years have seen a plethora of books and talks on similar topics: how far does practice take you? You can read about Josh Kaufman’s attempt to learn something in 20 hours, or watch him tell about it at TED. Or you can read about Joshua Foer attempting to get better at memorizing things in his book, Moonwalking With Einstein. Or if you are really serious about the source of greatness, whether it comes from genes or training, try The Sports Gene by David Epstein. And don’t forget Doug Lemov’s Practice Perfect, a book with a focus on learning teaching.

Practice, I’m convinced, is important. But so are attitudes to practice, and so are the kind of practice you do and the kind of feedback you get. If we can get these right, our learners will learn better and faster, which will lead to other benefits. Practice is a touchy issue in language teaching, a field still trying to come to terms with the “drill and kill” of the audio-lingual approach. But intense, focused practice with constructive feedback and repeated opportunities to incorporate that feedback and improve is something very important to the learning process. It takes a lot of time, to be sure, maybe even 10,000 hours (though see Mr. Epstein’s book for a good discussion on amounts of time), but impressive results are possible. That is something I want my learners to understand and buy into.

Mr. Larcombe has a website with more detailed info about the process of teaching table tennis. He is also currently preparing a book.

Making EFL Matter Pt. 6: Prepared to Learn

image of students

The present series of posts is looking at how EFL courses and classes in Japan might be improved by considering some of the techniques and activities emerging in ESL and L1 mainstream education settings. Previous posts have looked at goals, data and feedback, discussion, portfolios, and prolepsis and debate. The basic idea is to structure courses in accordance with the principles of formative assessment, so students know where they are going and how to get there, and then train them to think formatively and interact with their classmates formatively in English. All of the ideas presented in this series came not from TESOL methodology books, but rather more general education methodology books I read with my EFL teacher lens. I realise that this might put some EFL teachers off. I don’t think it should, since many of the widely-accepted theories and practices in TESOL first appeared in mainstream classes (journals, extensive reading, portfolio assessment, etc.); also, the last few years have seen an explosion in data-informed pedagogy, and we would be wise not to ignore it. In this post, however, I’d like to go back to TESOL research for a moment and look at how some of it might be problematic. Actually, “problematic” may be too strong a word. I’m not pointing out flaws in research methodology, but I would like to suggest that there may be a danger in drawing conclusions for pedagogy from experiments that simply observe what students tend to do or not do without their awareness raised and without training.

I’ve been reading Peer Interaction and Second Language Learning by Jenefer Philp, Rebecca Adams, and Noriko Iwashita. It is a wonderful book, very nicely conceived and organized, and I plan to write a review on this blog soon. But today I’d just like to make a point connected with the notion of getting learners more engaged in formative assessment in EFL classes. As I was reading the book, it struck me that many of the studies cited simply looked at what learners do as they go about completing tasks (very often picture difference tasks, for some reason). That is, the studies set learners up with a task and then watch what they do as they interact, counting how many language-related episodes (LREs), incidences of noticing and negotiation of language, occur, or how often learners manage to produce correct target structures. Many of the studies seem to have simply set learners about doing a task and then videoed them. That would be fine if we were examining chimpanzees in the wild or ants on a hill; but I strongly believe it is our job to actively improve the quality of the interactions between learners and to push their learning, not just to observe what they do. None of the studies in the book seems to measure organized, systematic, training-based interventions for teaching learners how to interact with and respond to peers. In one of the studies that sort of did, Kim and McDonough (2011), teachers showed a video of students modelling certain interaction and engagement strategies as part of a two-week study. But even with that little bit of formative assessment/training, we find better results, better learning. The authors of the book are cool-headed researchers, and they organize and report the findings of various studies duly. But my jaw dropped open a number of times, if only in suspicion of what seemed to be (not) happening; my formative assessment instincts were stunned.

How can we expect learners to do something if they are not explicitly instructed and trained to do so? And why would we really want to see what they do if they are not trained to do so? Just a little modelling is not bad, but there is so much more that can be done. Right, Mr. Wiliam? Right, Ms. Greenstein?

Philp et al. acknowledge this in places. In the section on peer dynamics, they stress the importance of developing both cognitive and social skills. “Neither can be taken for granted,” they state clearly (pg. 100). And just after that, they express the need for more training and more research on how to scaffold/train learners to interact with each other for maximum learning:

“Part of the effectiveness of peer interaction…relates to how well learners listen to and engage with one another…In task-based language teaching research, a primary agenda has been the creation of effective tasks that promote maximum opportunities for L2 learning, but an important area for research, largely ignored, is the training of interpersonal skills essential to make these tasks work as intended” (pg. 101).

But not once in their book do they mention formative assessment or rubrics. Without an understanding of the rationale for providing each other with feedback, without models, without rubrics, without being shown how to give feedback or provide scaffolding to peers, how can we expect learners to do so, or to do so in a way that drives learning? Many studies discussed in the book show that learners do not really trust peer feedback, and do not feel confident giving it. Sure, if it’s just kids with nothing to back themselves up, that’s natural. But if we have a good system of formative feedback in place (clear goals, rubrics, checklists, etc.), everyone knows what to do and what to do to get better. Everyone has an understanding of the targets. They are detailed and they are actionable. And it becomes much easier to speak up and help someone improve.

Teachers need to make goals clear and provide rubrics detailing micro-skills or competencies that learners need to demonstrate. They also need to train learners in how to give and receive feedback. That is a fundamental objective of a learning community. The study I want to see will track learners as they enter and progress in such a community.


Kim, Y., & McDonough, K. (2011). Using pretask modeling to encourage collaborative learning opportunities. Language Teaching Research, 15(2), 1-17.


Making EFL Matter Pt. 5: Prolepsis, Debate, and Benny Lewis

image of man reading a book

As a young man, I was part of a legion of English teachers working in Japan. A large number of us “teachers” working day in and day out at language schools and colleges were actually travelers trying to save money for their next trek through Nepal, or to live on a beach on Boracay or Koh Samui (very different in 1986) for as many months as possible before they had to work again. At least some of these people, in order to be able to stay in Japan and teach/work, pretended to be in the country for the purpose of studying something–flower arrangement, karate, or Japanese language, for example. One guy, ostensibly studying Japanese, dutifully went to the immigration office each year to renew his visa. And each time, he struggled greatly with the rudimentary questions the officer asked him in Japanese. At the end of the conversation, the immigration officer would kindly offer him encouragement because “Japanese was a hard language” to learn.

That same sentiment–that you are just studying the language and can’t really use it yet–is still surprisingly common in many institutional programs for learners of many languages. I have often heard college students say that they want to go to the US “after my English is good enough.” The opposite of this “not yet” concept is prolepsis, “the representation or assumption of a future act as if presently existing or accomplished” (from Merriam-Webster). It is a lovely little term I came across in Walqui and van Lier (2010). They recommend treating students proleptically, “as if they already possess the abilities you are seeking to develop” (pg. 84). In other words, throw them in at the deep end, and both support and expect their success. High school and college in Japan are perfect places for putting this approach into practice. Why? Because learners have already had somewhere between 4 and 10 previous years of English exposure and learning. It’s time to stop pretending that they can’t use it. Right, Benny?

People like Benny Lewis are not usually taken seriously in the TESOL world, but they should be. Watch the video and see how many things he gets right. Polyglots learn languages successfully, he says at one point, because they are motivated to “use it with people” and they go about doing so. That is some good sociocultural theory there. He also dismisses five of the barriers that people so often accept to explain their own lack of success with language learning, and addresses the growth mindset and the time and resource management that he and his friends have found a way to make work for themselves. But what I find most amazing about Mr. Lewis and others like him is that they are living examples of acting proleptically with language learning. They learn it, use it, love it, and repeat. They don’t stop to worry about whether they are “ready.” They don’t let things like having few resources around, or no interlocutors nearby, interfere. They challenge themselves to learn what they can and then actively seek out opportunities to use it, monitoring their progress by continually testing it out. I admire their passion. I borrow strategies and techniques from them to pass on to my students. If we are not helping our students make use of Skype or Memrise or Quizlet or any of the many other tools available, we are doing a great disservice to our young charges.

But not only should we be introducing websites, we should also be expecting our learners to use them and to push their learning. You can do it. No excuses. Of course you can handle basic conversations in the language. I expect nothing less than that. And let’s see what you can really do when you push yourself. I expect success. I assume it and design my activities around it. Prolepsis. We sometimes hear the word rigor used to describe education. We can also talk about holding higher expectations for our learners. Without a curriculum designed with the idea of prolepsis, however, it is likely empty talk. It sounds good, but it is not actionable. Van Lier and Walqui list these three directives if we are serious about really making our curriculum, well, serious:

  • Engage learners in tasks that provide high challenge and high support;
  • engage students (and teacher) in the development of their own expertise;
  • make criteria for quality work clear for all.

We can see immediately that some of the things Mr. Lewis suggests get learners to do these things. I’ve talked before about rubrics and portfolios and making the criteria for success clear in other blog posts, but today I’d like to finish this post by talking about an activity that does all of these things and gets students to perform proleptically: debate. Now, debate has a bad reputation in Japan. Many teachers think it is too difficult for students. Some teachers think it focuses too much on competition. These points may have some validity, but they should not prevent you from doing debate. We do debate, as JFK said of going to the moon, because it is difficult. And if we have students debate both sides of issues, what begins to emerge is a keen sense for examining any issue–for looking at what is important and how important it is, and for questioning and explaining that. Debaters behave proleptically, because they have to. Debating adds critical thinking structure to discussions about plans. Debaters learn to consider the status quo. They learn to evaluate plans in terms of their effects and importance. They learn to write speeches describing these things, and they learn to listen for them and consider them critically. Because there is a set structure, we can support and scaffold our learners. But we cannot hold their hands all the way. Debate forces them to go off script at times, while never going off topic. There is also time pressure, and the debate takes place in front of other people, an on-stage performance that is intimidating for everyone, and thus spurs learners to try harder. Yet, like scrimmaging with feedback, there are multiple opportunities to fine-tune performance (and get repeated input). Every time I read about techniques to promote high standards, rigor, etc., I always think to myself: That sounds an awful lot like debate, or Yup, debate can do that.
To me, it seems that debate is one technique that should not be left out, especially policy debate where learners research topics to come up with arguments for both sides in advance. Not only do we get four-skills language development, but we also get research skills, organization skills, and critical thinking skills development.

Show me another activity that does that.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 2 looked at using data and feedback.

Part 3 looked at the challenges and benefits of academic discussions.

Part 4 looked at portfolios and assessment.

Making EFL Matter Pt. 4: Portfolios and Assessment

desktop image

In principle, a portfolio is an easy-to-understand and intuitively attractive concept: students keep the work they produce. The real challenge of a portfolio is what you do with it. Without a clear vision of how the tool will be used, it can easily end up being a little like a child’s art box of works produced in art class over the years: just a repository of things we hold on to for no specific reason other than sentimental attachment. We might pull these works out to look at them from time to time, but they are not a clear record of achievement, nor can they help inform future learning decisions. The central function of a quality portfolio is to clearly provide evidence of growth and to “…engage students in assessing their growth and learning” (Berger, Rugen & Woodfin, 2014, pg. 261). What growth, specifically, depends on the goals of the course or program. When a course or program has clear goals, a portfolio can play a formative or summative role in demonstrating a learner’s achievement or progress toward those goals. There are also practical and logistical constraints on portfolio deployment. What artifacts should be included, how many, where they should be stored, and how and by whom the portfolio will be assessed are all important decisions, and their outcomes can greatly affect the success of a portfolio as a learning tool.


Conceptualizing a portfolio

A portfolio is not simply a repository file. It must serve as a record of progress that is used to assess learning, whether by learners themselves or by others. All decisions on its structure and deployment must start from this basic understanding. The design of the portfolio itself, and its integration into the syllabus (i.e., how it will be used on a regular basis), must aim to make it as easy as possible to record progress and achievement, and to make evidence or patterns of progress and achievement visible in the collected data. For this reason, a portfolio should contain not only student-produced academic work (essays, presentations, tests), but also documents that make progress and achievement salient. Such documents may include introductory statements, target-setting plans, records of time on task, assignment rubrics, progress charts, and reflection reports.


The importance of goals

In order to be effective, the portfolio must be closely aligned to the goals of the course or program and be able to show progress toward, or achievement of, those goals. In other words, it must be able to provide specific evidence of progress in achieving the target competencies in a way that is clear and actionable. It must also do so in a way that makes the most effective and efficient use of time. These goals can include knowledge goals, skill goals, or learning goals for constructs such as responsibility, autonomy, revision, collaboration, service, and stewardship (to name a few). Without clear goals (usually arranged in a clear sequence), effective use of a portfolio is not possible, and the formative and reflective functions of a portfolio cannot be leveraged in a clear and actionable way. However, if students know what they are aiming for and can compare their work against the target competencies (using the descriptions and rubrics that define those goals/competencies), portfolios can be a powerful tool for reflection and formative feedback.


The importance of regular portfolio conversations

“In order for portfolios to be a tool for student-engaged assessment, including formative and summative assessments, they must be a regular part of the classroom conversation, not a static collection of student work” (Berger, Rugen & Woodfin, 2014, pg. 268). The portfolio must be a tool of measurement, like a bathroom scale, and can only be effective if it is used regularly. Students must regularly enter data into it (more on what kinds of data in the next section), and they must use it to look for patterns of success and gaps in learning, performance, and strategy use. For this reason, providing clear guidelines and time for entering data into portfolios, facilitating the noticing of patterns and gaps, and giving students opportunities to discuss their progress in groups are all necessary. This will require classroom time, but also some scaffolding so students understand how to work with data. Student-led conferences (mini presentations on progress done in groups in class) can be a useful tool: in groups, students can practice talking about learning, and also compare their progress and efforts with those of their classmates. Counselor conferences can also make use of portfolios, and if students have practiced beforehand in groups, time with counselors can be economized. Finally, to truly leverage the power of portfolios, passage presentations (public presentations in which students explain and defend their learning accomplishments to groups of teachers, parents, or other concerned parties) can be particularly powerful, since they are public and official. If a passage presentation system is in place, it will make the portfolios more meaningful, greatly enhancing the effort students put into entering and analyzing data and the amount of time they spend analyzing and practicing explaining their learning. Passage presentations and counselor conferences can turn student-led conferences into practice for “the big games.”


Portfolio contents Pt. 1: What are we trying to develop?

Let us review our key points so far. It must be easy to enter meaningful data into the portfolio and to notice trends or gaps. Noticing the trends and gaps in performance requires an understanding of the goals of the course/program, so those goals must be clear. The portfolio should be used regularly: students should use it to monitor their learning, and they should be able to refer to it when explaining their learning to others (groups, counselors, or other parties). These points are all concerned with usability: making the experience of using a portfolio as simple, smooth, and effective as possible. What we actually put into the portfolio must be driven by our learning targets. As mentioned earlier, any program or course will have multiple targets for knowledge and skill acquisition, but also for constructs such as digital literacy, critical thinking, problem solving, responsibility, autonomy, revision, collaboration, service and stewardship, and possibly others. Therefore, it is important for portfolios to contain finished work and evidence of the process of improving work: working with others, checking and revising work responsibly, and helping others do the same. Portfolios should also contain records of learning activities and time on task as evidence of autonomy and tenacity.


Portfolio contents Pt. 2: Portfolios for language learners

As part of English language courses, there are usually weekly classroom assignments for writing and presentation. There may also be other writing assignments, or other speaking assignments. As for other constructs, the following have been shown to be important for successful language learning and therefore should be part of the curriculum:

  • Time on task
  • Time management (efficient use of time)
  • Commitment to improvement/quality (accountable for learning)
  • Critical evaluation of learning strategies
  • Collaboration (accountable to others)
  • Seeking feedback and incorporating feedback (revision)


If we try to build these into our portfolio system along with our language and culture target competencies while still managing the volume of the content, I believe that we must include the following elements, in addition to a general goal statement:

  1. Drafts and final products for a limited number of assignments, including a reflection sheet with information about the goals of the assignment (and a copy of the rubric for the assignment), time spent on the assignment, attempts at getting feedback and comments on how that feedback was included;
  2. Weekly reflection sheets (including a schedule planner) in which students can plan out the study plan for their week before it happens, and then reflect upon the results afterward. There could also be sections where students can reflect upon strategy use and explain their attempts to reach certain goals;
  3. Self-access tracking charts in which students list the reading, listening, or other self-access activities they engage in. Several of these charts can be made available (extensive reading charts, extensive listening charts, TOEFL/TOEIC test training, online conversation time, etc.), and students can include the charts relevant to their personal goals (though extensive reading will be required for all students).


As you can see, there is much to be decided: specifically which assignments, and how many, will be included; how the various forms will be designed and created; and, for the English classes, whether completing the portfolio and discussing learning is something that we want to scaffold learners to be able to do (something that I personally think is very important).


This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 2 looked at using data and feedback.

Part 3 looked at the challenges and benefits of academic discussions.



Berger, R., Rugen, L., & Woodfin, L. (2014). Leaders of their own learning: Transforming schools through student-engaged assessment. San Francisco: Jossey-Bass.

Greenstein, L. (2012). Assessing 21st century skills: A guide to evaluating mastery and authentic learning. Thousand Oaks, CA: Corwin.

Making EFL Matter Pt. 3: The Challenges and Benefits of Discussion

image of balls in a tray


Well, what do you think? This question, and the answer it invites, form a basic opinion exchange that is sometimes called a discussion. And it is, sort of. But just as a single decontextualized sentence is of limited use in understanding grammar, so too a brief opinion exchange does not have enough context–with all its intentions, personalities, and sociolinguistic depth–to really be called a discussion. A discussion is more complex, and ultimately more powerful, because it has a goal and requires input and interaction from multiple members, which should allow them to collectively generate better ideas (solutions, plans, etc.) than any one of the participants could have alone.

This sounds good in theory, but it is difficult to achieve in classrooms–especially EFL classrooms, where learners have a layer of linguistic difficulty on top of the conceptual and procedural challenges inherent in establishing a system of rich academic discussions. The first thing we must acknowledge is that academic discussion skills, like Rome, are not built in a day. As I mentioned earlier, they need to be developed incrementally, starting with basic conversation and interaction skills. Without basic conversation skills, discussion is not attainable. Students who are used to pairwork, are able to use the basic greetings, openings, and closings of common conversation “scripts” (Hi. How’re you doing? So, what did you do on the weekend? Well, nice talking to you!), and can react to each other’s utterances (Uh-huh. Really?! Oh, I love that! Really? How was it? etc.) will find discussions accessible. Absolute speaking beginners will struggle and likely fail. Speaking must be taught, skills must be developed, and regular opportunities for fluency development given, or else activities like academic discussions, and the opportunities to flex critical thinking muscles that go with them, won’t be achieved. A little bit of speaking tacked on to the end of a lesson won’t get you there (as high school programs in Japan are slowly waking up to).

So now we know it’s difficult and requires a program of incremental skill development starting with a foundation in basic interactive conversation skills. One question we might ask is: is it worth the trouble? Given a limited amount of time, why should so much be devoted to conversation and discussion skills development? Well, the answer comes from sociocultural learning theory. As Daniel Siegel puts it in his foreword to the wonderful Social Neuroscience of Education: “We evolved in tribes, we grow in families, and we learn in groups.” Walqui and van Lier (2010), in listing the tenets of sociocultural learning theory for their QTEL approach, focus on some of the key points: “Participation in activity is central to the development of knowledge; participation in activity progresses from apprenticeship to appropriation, or from the social to the individual plane; and learning can be observed as changes in participation over time” (pg. 6). That is to say, we learn through active participation (engagement and collaboration) with others. “Language is primarily social”…and “…learning…is essentially social in nature” (pg. 4-5). This learning does not happen by chance, however. The really, really hard thing is to get students into that sweet spot where they are developmentally ready and linguistically scaffolded up to the point where they can function and learn. Development becomes possible when “…teachers plan lessons beyond the students’ ability to carry them out independently” (pg. 7), but create the proper community and provide the proper scaffolding to allow for success with such lessons. To answer the question that opened this paragraph: the potential benefits of learning in groups are great enough to warrant the approach. Students can learn content, language, and collaboration skills–essential skills for the 21st century according to Laura Greenstein (who also helpfully provides a rubric and suggestions for assessing collaboration, as well as other skills).

One more potential objection to focusing on academic discussion comes from Doug Lemov. Actually, it’s not so much an objection as a request to rethink and balance your choices. In Teach Like a Champion 2.0, he suggests that both writing and discussion can be strong tools for “causing all students to do lots of the most rigorous work… but if I had to choose just one, which admittedly I do not, I would choose writing. Hammering an idea into precise words and syntax and then linking it to evidence and situating it within a broader argument are, for me, the most rigorous work in schooling” (pg. 314). Writing is great, and cognitively more “precise” perhaps, and definitely needs to be part of the syllabus. Discussion is by nature more social. You can do both and you should do both; finding the time to do so is challenging, however.

So what do we teach? What are the discussion skills that students need to learn and develop? The best list can be found in Academic Conversations by Jeff Zwiers and Marie Crawford. It needs to be said, however, that this book was written for L1 learners, so you will need to adapt as you adopt. But the basic list and framework make it easy and intuitive to do so. Here is the list of skills:

  1. Elaborate and Clarify: Make your thinking as detailed and clear as you can, carefully explaining the rationale behind your thinking.
  2. Support Ideas with Examples: Use examples to illustrate thinking. It is a particular and powerful way of elaborating and clarifying ideas.
  3. Build On or Challenge Partners’ Ideas: Actively respond to and develop the ideas that arise, either by expanding on them or tweaking them, or pruning out the bad ones through well-considered disagreement.
  4. Paraphrase Ideas: As ideas arise, paraphrase them to both show your understanding and create a springboard for idea development and improvement.
  5. Synthesize the Discussion Points: Bring all the ideas you’ve been discussing to a conclusion. Produce a group decision or plan.

These skills need to be introduced incrementally. Some of them are more difficult to teach and practice than others, particularly with students who lack proficiency or fluency, but also for cultural reasons. Some students in Japan find it challenging to disagree, and may have trouble ranking ideas or pruning some of them from their synthesis. These skills can be trained and taught. In my experience, students are not used to making their thinking so explicit and considering ideas so carefully. Once they get the hang of it, however, they clearly become better listeners, better collaborators, and better thinkers. And after gaining some fluency with the formulaic expressions required for academic discussions, they sound considerably more proficient.

Assessment is not as difficult as you may imagine. Ms. Greenstein’s rubric provides a nice overview of expected performance and can be used together with teacher observations, and peer or individual reflection and feedback. Performance tests are easy to organize because we can check several students at once in their group. Group success, in terms of both process and outcome, is actually not that hard to measure.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 2 looked at using data and feedback.



Making EFL Matter Pt. 2: Data and Feedback

image of a filing cabinet


In the last few years, I’ve found myself increasingly reaching for books that are not on my SLA or Applied Linguistics shelves. Instead, books on general teaching methodology have engaged me more. It started a few years ago when I came across John Hattie’s Visible Learning for Teachers, a book in which various approaches in education are ranked by their proven effectiveness in research. This led me to further explore formative assessment, one of the approaches that Hattie identifies as particularly effective, citing Yeh (2011). I was impressed and intrigued and began searching for more–and man, is there a lot out there! Dylan Wiliam’s Embedded Formative Assessment is great (leading to more blogging); Laura Greenstein’s What Teachers Really Need to Know about Formative Assessment is very practical and comprehensive; Visible Learning and the Science of How We Learn by Hattie and Yates has a solid section; and Leaders of Their Own Learning by Ron Berger et al., the inspiration for this series of blog posts, places formative assessment at the heart of curricular organization. There is, as far as I know, nothing like this in TESOL/SLA. I’m not suggesting, I would like to emphasize, throwing out bathwater or babies, however. I see the content of these books as completely additive to good EFL pedagogy. So let’s go back to that for a moment.

One of my favorite lists in TESOL comes from a 2006 article by Kumaravadivelu in which he puts forth his list of macro-strategies, basic principles that should guide the micro-strategies of day-to-day language classroom teaching, as well as curriculum and syllabus design. This list never became the Pinterest-worthy ten commandments that I always thought it deserved to be. Aside from me and this teacher in Iran, it didn’t seem to catch on so much, though I’m sure you’ll agree it is a good, general set of directives. Hard to disagree with anything, right?

  1. Maximize learning opportunities
  2. Facilitate negotiated interaction
  3. Minimize perceptual mismatches
  4. Activate intuitive heuristics
  5. Foster language awareness
  6. Contextualize linguistic input
  7. Integrate language skills
  8. Promote learner autonomy
  9. Raise cultural awareness
  10. Ensure social relevance

But what is missing from the list (if we really want to take it up to 11), I can tell you now, is Provide adequate formative feedback. One of the great failings of communicative language teaching (CLT) is that it has been so concerned with just getting students talking that it has mostly ignored one of the fundamental aspects of human learning: it is future-oriented. People want to know how to improve their work so that they can do better next time (Hattie and Yates, 2014). “For many students, classrooms are akin to video games without the feedback, without knowing what success looks like, or knowing when you attain success in key tasks” (pg. 67). Feedback helps when it shows students what success looks like, when they can clearly see the gap between where they are now and where they need to be, and when it provides actionable suggestions on what is being done right now and what learners should do or change next to improve. It should be timely and actionable, and learners should be given ample time to incorporate it and try again (Wiliam, 2011; Greenstein, 2010).

One of the most conceptually difficult things to get used to in the world of formative feedback is the notion of data. We language teachers are not used to thinking of students’ utterances and performances as data, yet they are–data that can help us and them learn and improve. Scores on norm-referenced tests can be seen as data, final test scores can be seen as data, and attendance can be seen as data, but we tend, I think, to look at what students do in our classes through a more communicative, qualitative, meaning-focused set of lenses. We may be comfortable giving immediate formative performance feedback on pronunciation, but for almost anything else, we hesitate and generalize with our feedback. Ms. Greenstein, focusing on the occasionally enigmatic 21st century skills, offers this:

“Formative assessment requires a systematic and planned approach that illuminates learning and displays what students know, understand, and do. It is used by both teachers and students to inform learning. Evidence is gathered through a variety of strategies throughout the instructional process, and teaching should be responsive to that evidence. There are numerous strategies for making use of evidence before, during, and after instruction” (Greenstein, 2012, pg. 45).

Ms. Greenstein and others are teachers who look for data–evidence–of student learning, and look for ways of involving learners in the process of seeing and acting on that data. Their point is that we have a lot of data (and can potentially collect more), and we should be using it with students as part of a system of formative feedback. Berger, Rugen & Woodfin (2014) put it thus:

“The most powerful determinants of student growth are the mindsets and learning strategies that students themselves bring to their work–how much they care about working hard and learning, how convinced they are that hard work leads to growth, and how capably they have built strategies to focus, organize, remember, and navigate challenges. When students themselves identify, analyze, and use data from their learning, they become active agents in their own growth” (Berger, Rugen & Woodfin, 2014, pg. 97).

They suggest, therefore, that students be trained to collect, analyze, and share their own learning data. This sounds rather radical, but it is only the logical extension of the rationale for having students track performance on regular tests, making leader boards, or even giving students report cards. It just does so more comprehensively. Of course, this will require training/scaffolding and time in class. The reasons for doing so are worth that time and effort, they stress. Data has an authoritative power that a teacher’s “good job” or “try harder” just doesn’t. It is honest, unemotional, and specific, and therefore can have powerful effects: transformations in student mindsets, statistics literacy, and grading transparency, all of which help make students more responsible and accountable for their own learning. Among the practices they suggest that could be deployed in an EFL classroom are tracking weekly quiz results, standardized tests, or school exams; using error analysis forms for writing or speaking assignments; and using goal-setting worksheets for regular planning and reflection.

You can see the use of data for formative assessment in action in a 6th grade classroom here.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 3 looks at the challenges and benefits of academic discussions.



Yeh, S. S. (2011). The cost-effectiveness of 22 approaches for raising student achievement. Charlotte, North Carolina: Information Age Publishing.