What can Data do for EFL?


In the US, something very interesting has happened as an indirect result of standards-based testing and the establishment of charter schools: a valuable set of research conditions has emerged. Charter schools are themselves often experiments, established in part to be “…R&D engines for traditional public schools” (Dobbie & Fryer, 2012). As such, they often eschew many elements of traditional education practice and instead attempt to institute the results of the last several decades of research into educational effectiveness. For many charter schools, this research-informed pedagogy assumes a unifying or rallying role, and the ideas are woven into their mission statements and school cultures as they try to take underprivileged kids and put them on the road to college, through grit training, artistic expression, or higher expectations. Even a brief visit to the websites of charter systems such as Uncommon Schools, MATCH, or KIPP shows what they are intending to do and why. Some of these charter schools have become extremely successful, both in terms of academic achievement by students and in terms of popularity. And that success has created new research opportunities. You see, the best charter schools now have to resort to lotteries to choose students, and those lotteries create two comparable, randomly assigned groups: those who got in and received an experimental treatment in education, and those who didn’t and ended up going to more traditional schools. That gives researchers a clean way to compare programs by looking at what happens to the students in each group. And some of the results may surprise you.

Let’s play one of our favorite games again: Guess the Effect Size!! It’s simple. Just look at the list of interventions below and decide whether each intervention has a large (significant, important, you-should-be-doing-this) impact, or a small (minimal, puny, low-priority) impact. Ready? Let’s go!

  1. Make small classes
  2. Spend more money per student
  3. Make sure all teachers are certified
  4. Deploy as many teachers with advanced degrees as possible
  5. Have teachers give frequent feedback
  6. Make use of data to guide instruction
  7. Create a system of high-dosage tutoring
  8. Increase the amount of time for instruction
  9. Have high expectations for students

Ready to hear the answers? Well, according to Dobbie & Fryer (2012), the first four on the list are not correlated with school effectiveness, while the next five explain a whopping 45% of the variation in school effectiveness. Looking at the list, this is not surprising, especially if you are aware of the power of formative feedback.
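
Out of curiosity, here is roughly what that lottery comparison looks like as arithmetic. This is a minimal sketch in Python, not Dobbie and Fryer’s actual method, and all the scores are invented; it just shows how a standardized effect size (Cohen’s d) falls out of comparing lottery winners with lottery losers.

```python
# A minimal sketch of the lottery logic: compare outcomes for applicants
# who won a charter-school lottery against those who lost it, and report
# a standardized effect size (Cohen's d). All numbers here are invented.

from statistics import mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference between two groups."""
    n1, n2 = len(treatment), len(control)
    # Pooled standard deviation of the two groups
    pooled_sd = (((n1 - 1) * stdev(treatment) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores: lottery winners (attended) vs. losers (did not)
lottery_in  = [72.0, 81.0, 68.0, 90.0, 77.0, 85.0]
lottery_out = [65.0, 74.0, 61.0, 80.0, 70.0, 72.0]

print(f"Effect size d = {cohens_d(lottery_in, lottery_out):.2f}")
```

The point of the random lottery is that nothing else distinguishes the two groups, so a difference like this can be read as the effect of the school rather than of the students who chose it.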

Some people might still be a little skeptical. Fine. Go and look for studies that prove Dobbie and Fryer wrong. You might find some. Then look at where the research was done. Is the setting like yours? Just going through this process means we are putting data to work. And that is much, much better than just going with our own instincts, which are of course based on our own experiences. I teach English in Japan, and I know that is a far cry from the hard-knocks neighborhoods where Dobbie and Fryer looked into the effects of interventions in Match schools. But I think there are enough similarities to warrant giving credence to these results and even giving them a try at schools in Tokyo. I have several reasons. First, the research on formative assessment, high expectations, classroom time, and pinpointed direct instruction is extensive and robust. Nothing in their list is surprising. Second, in Japan, English is often as foreign to the daily lives of most students as physics or math are to the lives of many American teens, and the motivation for learning it is likewise unlikely to be very strong at the beginning. Many of the students in the Match system are less than confident in their ability with many subjects, and less than confident about aiming at college, a world that is often quite foreign to their lives. Many English learners in Japan similarly see English as foreign and unrelated to their lives, and the notion that they can become proficient at it and make it a part of their future social and/or professional lives requires a great leap of faith.

But through the Match program, students do gain in confidence, they do gain in ability, and they do get prepared for college. Given the demographic, the success of Match and the other “No Excuses” systems mentioned above is stunning. It also seems to be long-lasting. Davis & Heller (2015) found that students who attended “No Excuses” schools were 10.0 percentage points more likely to attend college and 9.5 percentage points more likely to enroll for at least four semesters. Clearly the kids are getting more than fleeting bumps in test scores. And clearly the approach of these schools, putting proven interventions to work, is having a positive effect, although not everyone seems to be happy.

And it’s not just that they are making use of research results. These schools are putting data to use in a variety of ways. Paul Bambrick-Santoyo of Uncommon Schools has published a book, Driven by Data, that outlines their approach very nicely. In it we can find this:

Data-driven instruction is the philosophy that schools should constantly focus on one simple question: are our students learning? Using data-based methods, these schools break from the traditional emphasis on what teachers ostensibly taught in favor of a clear-eyed, fact-based focus on what students actually learned (pg. xxv).


They do this by adhering to four basic principles. Schools must create serious interim assessments that provide meaningful data. This data must then be carefully analyzed so that it produces actionable findings. These findings must be tied to classroom practices that build on strengths and eliminate shortcomings. And finally, all of this must occur in an environment where the culture of data-driven instruction is valued, practiced, and able to thrive. Mr. Bambrick-Santoyo goes through a list of mistakes that most schools make, challenges that are important to meet if data is to be used to “…make student learning the ultimate test of teaching.” The list reads like a checklist of standard operating procedures at almost every EFL program I have ever worked in. Inferior, infrequent, or secretive assessments? Check, check, check. Curriculum-assessment disconnect? Almost always. Separation of teaching and analysis? Usually, no analysis whatsoever. Ineffective follow-up? Har har har. I don’t believe I have ever experienced or even heard of any kind of follow-up at the program level. Well, you get the point. What is happening in EFL programs in Japan now is very far removed from a system where data is put to work to maximize learning.
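
To make the “analysis” step concrete, here is a minimal sketch of what turning interim assessment results into actionable findings might look like. It is not Bambrick-Santoyo’s system, just an illustration; the students, items, objectives, and the 60% reteach threshold below are all invented.

```python
# A toy version of the "analysis" step: item-level results from an interim
# assessment, grouped by the objective each item tests, so the finding is
# actionable ("reteach past tense questions"), not just a percentage.
# The students, items, and objectives below are invented for illustration.

from collections import defaultdict

# (student, item_id, correct); each item is tagged with an objective
results = [
    ("Aiko", 1, True), ("Aiko", 2, False), ("Aiko", 3, True),
    ("Ben",  1, True), ("Ben",  2, False), ("Ben",  3, False),
    ("Chie", 1, False), ("Chie", 2, False), ("Chie", 3, True),
]
item_objective = {1: "past tense: statements",
                  2: "past tense: questions",
                  3: "vocabulary: unit 4"}

totals = defaultdict(lambda: [0, 0])          # objective -> [correct, attempts]
for _student, item, correct in results:
    obj = item_objective[item]
    totals[obj][0] += int(correct)
    totals[obj][1] += 1

for obj, (right, n) in sorted(totals.items()):
    pct = 100 * right / n
    flag = "  <- reteach" if pct < 60 else ""
    print(f"{obj}: {pct:.0f}% mastery{flag}")
```

The output names the objective that collapsed (here, past tense questions at 0%), which is exactly the kind of finding that can be tied back to next week’s classroom practice.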

But let’s not stop at the program level. Doug Lemov has been building up a fine collection of techniques in Teach Like a Champion that teachers can use to improve learning outcomes. He is now up to 62 techniques that “put students on the path to college,” after starting with 49 in the earlier edition. And how does he decide on these techniques? Through a combination of videoing teachers and tracking the performance of their classes. Simple, yet revolutionary. The group I belonged to until this past April was trying to do something similar with EFL at public high schools in Japan, but the lack of standardized test taking makes it difficult to compare outcomes. There is no doubt in my mind, though, that this is exactly the right direction in which we should be going. Find out what works, tease out exactly what it is that is leading to improvement (identify the micro-skills), and then train people through micro-teaching to do these things, do them well, and do them better still. Teaching is art, Mr. Lemov says in the introduction, but “…great art relies on the mastery and application of foundational skills” (pg. 1). Mr. Lemov has done us a great service by videoing and analyzing thousands of hours of classes, triangulating that with test results, and then further trialing and tweaking the techniques. If you don’t have a copy of the book, I encourage you to get one. It’s just a shame that it isn’t all about language teaching and learning.

Interest in using data in EFL/ESL is also growing. A recent issue of Language Testing focused on diagnostic assessment, something that has grown out of the same standards-based testing that allowed the charter schools in the US to thrive. You can download a complimentary article (“Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles,” by Luke Harding, J. Charles Alderson, and Tineke Brunfaut). You can also listen to a podcast interview with Glenn Fulcher and Eunice Eunhee Jang, one of the contributors to the special issue. It seems likely that this is an area of EFL that will continue to grow in the future.

Making EFL Matter Pt. 6: Prepared to Learn


The present series of posts is looking at how EFL courses and classes in Japan might be improved by considering some of the techniques and activities emerging in ESL and L1 mainstream education settings. Previous posts have looked at goals, data and feedback, discussion, portfolios, and prolepsis and debate. The basic idea is to structure courses in accordance with the principles of formative assessment, so students know where they are going and how to get there, and then to train them to think formatively and interact with their classmates formatively in English. All of the ideas presented in this series came not from TESOL methodology books, but from more general education methodology books I read with my EFL teacher lens. I realise that this might put some EFL teachers off. I don’t think it should, since many of the widely-accepted theories and practices in TESOL first appeared in mainstream classes (journals, extensive reading, portfolio assessment, etc.); also, the last few years have seen an explosion in data-informed pedagogy, and we would be wise not to ignore it. In this post, however, I’d like to go back to TESOL research for a moment and look at how some of it might be problematic. Actually, “problematic” may be too strong a word. I’m not pointing out flaws in research methodology, but I would like to suggest that there may be a danger in drawing pedagogical conclusions from experiments that simply observe what students tend to do or not do, without their awareness raised and without training.

I’ve been reading Peer Interaction and Second Language Learning by Jenefer Philp, Rebecca Adams, and Noriko Iwashita. It is a wonderful book, very nicely conceived and organized, and I plan to write a review on this blog before long. But today I’d just like to make a point connected with the notion of getting learners more engaged in formative assessment in EFL classes. As I was reading the book, it struck me that many of the studies cited simply look at what learners do as they go about completing tasks (very often picture difference tasks, for some reason). That is, the studies set learners up with a task and then watch what they do as they interact, counting how many language-related episodes (LREs) of noticing and negotiation occur, or how often learners manage to produce correct target structures. Many of the studies seem to have just set learners about doing a task and then videoed them. That would be fine if we were examining chimpanzees in the wild or ants on a hill; but I strongly believe it is our job to actively improve the quality of the interactions between learners and to push their learning, not just to observe what they do. None of the studies in the book seems to measure organized, systematic, training-based interventions for teaching learners how to interact and respond to peers. In one of the studies that sort of did, Kim and McDonough (2011), teachers just showed a video of students modelling certain interaction and engagement strategies as part of a two-week study. But even with that little bit of formative assessment/training, we find better results, better learning. The authors of the book are cool-headed researchers, and they organize and report the findings of various studies duly. But my jaw dropped open a number of times, if only in suspicion of what seemed to be (not) happening; my formative assessment instincts were stunned. How can we expect learners to do something if they are not explicitly instructed and trained to do so? And why would we really want to see what they do if they are not trained to do it? Just a little modelling is not bad, but there is so much more that can be done. Right, Mr. Wiliam? Right, Ms. Greenstein?

Philp et al. acknowledge this in places. In the section on peer dynamics, they stress the importance of developing both cognitive and social skills. “Neither can be taken for granted,” they state clearly (pg. 100). And just after that, they express the need for more training and more research on how to scaffold/train learners to interact with each other for maximum learning:

“Part of the effectiveness of peer interaction…relates to how well learners listen to and engage with one another…In task-based language teaching research, a primary agenda has been the creation of effective tasks that promote maximum opportunities for L2 learning, but an important area for research, largely ignored, is the training of interpersonal skills essential to make these tasks work as intended” (pg. 101).

But not once in their book do they mention formative assessment or rubrics. Without an understanding of the rationale for providing each other with feedback, without models, without rubrics, without being shown how to give feedback or provide scaffolding to peers, how can we expect learners to do so, or to do so in a way that drives learning? Many studies discussed in the book show that learners do not really trust peer feedback, and do not feel confident in giving it. Sure, if it’s just kids with nothing to back themselves up, that’s natural. But if we have a good system of formative feedback in place (clear goals, rubrics, checklists, etc.), everyone knows what to do and how to get better. Everyone has an understanding of the targets. They are detailed and they are actionable. And it becomes much easier to speak up and help someone improve.

Teachers need to make goals clear and provide rubrics detailing micro-skills or competencies that learners need to demonstrate. They also need to train learners in how to give and receive feedback. That is a fundamental objective of a learning community. The study I want to see will track learners as they enter and progress in such a community.
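
For the curious, here is one minimal way such a micro-skill rubric could be represented so that peer feedback comes out specific and actionable. This is only a sketch; the micro-skills, descriptors, and the peer_feedback helper are my own invented examples, not anything from Philp et al. or from any formative assessment text.

```python
# A minimal sketch of a peer-feedback rubric: each micro-skill gets concrete,
# observable descriptors ordered from lowest to highest level, so a peer can
# name the level they saw and point to the next step up.
# The micro-skills and descriptors here are invented examples.

RUBRIC = {
    "follow-up questions": [
        "no follow-up questions asked",
        "yes/no follow-ups only",
        "open follow-ups that extend the topic",
    ],
    "responding to a partner's ideas": [
        "unrelated replies",
        "acknowledgement ('I see') without building on the idea",
        "building on the idea before adding a new one",
    ],
}

def peer_feedback(skill: str, observed_level: int) -> str:
    """Turn a rubric observation into an actionable comment."""
    levels = RUBRIC[skill]
    comment = f"{skill}. Observed: {levels[observed_level]}."
    if observed_level + 1 < len(levels):
        comment += f" Next step: {levels[observed_level + 1]}."
    return comment

print(peer_feedback("follow-up questions", 1))
# follow-up questions. Observed: yes/no follow-ups only.
# Next step: open follow-ups that extend the topic.
```

Even this crude structure gives a nervous peer something concrete to say: not “good job,” but the observed level and the next rung on the ladder.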

 

Kim, Y., & McDonough, K. (2011). Using pretask modeling to encourage collaborative learning opportunities. Language Teaching Research, 15(2), 1-17.

 

Making EFL Matter Pt. 2: Data and Feedback


 

In the last few years, I’ve found myself increasingly reaching for books that are not on my SLA or Applied Linguistics shelves. Instead, books on general teaching methodology have engaged me more. It started a few years ago when I came across John Hattie’s Visible Learning for Teachers, a book in which various approaches in education are ranked by their proven effectiveness in research. This led me to further explore formative assessment, one of the approaches that Hattie identifies as particularly effective, citing Yeh (2011). I was impressed and intrigued and began searching for more–and man is there a lot out there! Dylan Wiliam’s Embedded Formative Assessment is great (leading to more blogging); Laura Greenstein’s What Teachers Really Need to Know about Formative Assessment is very practical and comprehensive; Visible Learning and the Science of How We Learn by Hattie and Yates has a solid section on the topic; and Leaders of Their Own Learning by Ron Berger et al., the inspiration for this series of blog posts, places formative assessment at the heart of curricular organization. There is, as far as I know, nothing like this in TESOL/SLA. I would like to emphasize, however, that I’m not suggesting we throw out any babies with the bathwater. I see the content of these books as completely additive to good EFL pedagogy. So let’s go back to that for a moment.

One of my favorite lists in TESOL comes from a 2006 article by Kumaravadivelu in which he puts forth his list of macro-strategies, basic principles that should guide the micro-strategies of day-to-day language classroom teaching, as well as curriculum and syllabus design. This list never became the Pinterest-worthy ten commandments that I always thought it deserved to be. Aside from me and this teacher in Iran, it didn’t seem to catch on so much, though I’m sure you’ll agree it is a good, general set of directives. Hard to disagree with anything, right?

  1. Maximize learning opportunities
  2. Facilitate negotiated interaction
  3. Minimize perceptual mismatches
  4. Activate intuitive heuristics
  5. Foster language awareness
  6. Contextualize linguistic input
  7. Integrate language skills
  8. Promote learner autonomy
  9. Raise cultural awareness
  10. Ensure social relevance

But I can tell you now what is missing from the list (if we really want to take it up to 11): Provide adequate formative feedback. One of the great failings of communicative language teaching (CLT) is that it has been so concerned with just getting students talking that it has mostly ignored one of the fundamental aspects of human learning: it is future-oriented. People want to know how to improve their work so that they can do better next time (Hattie and Yates, 2014). “For many students, classrooms are akin to video games without the feedback, without knowing what success looks like, or knowing when you attain success in key tasks” (pg. 67). Feedback helps when it shows students what success looks like, when they can clearly see the gap between where they are now and where they need to be, and when it provides actionable suggestions on what is being done right now and what learners should do or change next to improve. It should be timely and actionable, and learners should be given ample time to incorporate it and try again (Wiliam, 2011; Greenstein, 2010).

One of the most conceptually difficult things to get used to in the world of formative feedback is the notion of data. We language teachers are not used to thinking of students’ utterances and performances as data, yet they are–data that can help us and them learn and improve. I mean, scores on certain norm-referenced tests can be seen as data, final test scores can be seen as data, and attendance can be seen as data, but we tend, I think, to look at what students do in our classes through a more communicative, qualitative, meaning-focused set of lenses. We may be comfortable giving immediate formative performance feedback on pronunciation, but for almost anything else, we hesitate and generalize with our feedback. Ms. Greenstein, focusing on occasionally enigmatic 21st century skills, offers this:

“Formative assessment requires a systematic and planned approach that illuminates learning and displays what students know, understand, and do. It is used by both teachers and students to inform learning. Evidence is gathered through a variety of strategies throughout the instructional process, and teaching should be responsive to that evidence. There are numerous strategies for making use of evidence before, during, and after instruction” (Greenstein, 2012, pg. 45).

Ms. Greenstein and others are teachers who look for data–evidence–of student learning, and look for ways of involving learners in the process of seeing and acting on that data. Their point is that we have a lot of data (and we can potentially collect more) and we should be using it with students as part of a system of formative feedback. Berger, Rugen, and Woodfin (2014) put it thus:

“The most powerful determinants of student growth are the mindsets and learning strategies that students themselves bring to their work–how much they care about working hard and learning, how convinced they are that hard work leads to growth, and how capably they have built strategies to focus, organize, remember, and navigate challenges. When students themselves identify, analyze, and use data from their learning, they become active agents in their own growth” (Berger, Rugen & Woodfin, 2014, pg. 97).

They suggest, therefore, that students be trained to collect, analyze, and share their own learning data. This sounds rather radical, but it is only the logical extension of the rationale for having students track performance on regular tests, or making leader boards, or even giving students report cards. It just does so more comprehensively. Of course, this will require training/scaffolding and time in class. The reasons for doing so are worth that time and effort, they stress. Data has an authoritative power that a teacher’s “good job” or “try harder” just doesn’t. It is honest, unemotional, and specific, and therefore can have powerful effects. There are transformations in student mindsets, statistics literacy, and grading transparency, all of which help make students more responsible and accountable for their own learning. Among the practices they suggest that could be deployed in an EFL classroom are tracking results on weekly quizzes, standardized tests, or school exams; using error analysis forms for writing or speaking assignments; and using goal-setting worksheets for regular planning and reflection.
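
To make that concrete, here is a bare-bones sketch of the kind of tracker a student might keep for weekly quiz results. The scores and the goal are invented, and this is just one way to show a learner the gap and the trend, not anything prescribed by Berger, Rugen, and Woodfin.

```python
# A bare-bones student-facing quiz tracker: weekly scores, a goal, and a
# plain-text bar chart the learner can read at a glance.
# The scores and goal below are invented for illustration.

weekly_scores = [55, 60, 58, 66, 71]   # one vocabulary quiz per week, out of 100
goal = 80

for week, score in enumerate(weekly_scores, start=1):
    bar = "#" * (score // 5)           # one '#' per 5 points
    print(f"Week {week}: {score:3d} {bar}")

gap = goal - weekly_scores[-1]
change = weekly_scores[-1] - weekly_scores[0]
print(f"Goal: {goal}. Current gap: {gap} points. "
      f"Change since week 1: {change:+d} points.")
```

Even a printout this simple tells the student three honest, unemotional things: where they are, where the goal is, and which direction they are moving.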

You can see the use of data for formative assessment in action in a 6th grade classroom here.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 3 looks at the challenges and benefits of academic discussions.

 

References

Yeh, S. S. (2011). The cost-effectiveness of 22 approaches for raising student achievement. Charlotte, North Carolina: Information Age Publishing.

Does this Shirt Make me Look Fat? Motivation and Vocabulary

I love the topic of motivation in language learning (past posts here, here, and here, for example). In the world of TESOL, however, it’s a little like that old joke about the weather–everyone seems to talk about it but nobody does anything about it. In Japan, I often hear students voicing out loud how they wish they could speak English (even though they are students and even though they are currently enrolled in an English course). They sound a lot like the people I know who talk about losing weight or exercising more: vague, dreamy, and not usually likely to succeed. TESOL research and literature talks a lot about integrative and instrumental motivation, ideal selves, willingness to communicate, and so on, concepts that just seem so far from the practical reality that teachers and those dreamy-eyed students face. So in this post I would like to focus on the positive and practical and provide a list of things to do that improve the chances of success, drawing on formative assessment ideas and general psychological ideas for motivation. The idea is to approach motivating learners the same way one would go about motivating oneself to lose weight or start and stick to an exercise program. Instead of talking about fuzzy motivations, let’s focus on just doing it. The enemy in my sights is much the same enemy that faces the would-be dieter or exerciser: procrastination, a powerful slayer of great intentions.

First of all, let’s get one thing straight: you can’t do much about the motivation kids bring with them to your class on Day 1, but after that, you certainly can. What you and your students do together affects how they think and feel about language learning and themselves. That is, teachers can change attitudes by changing behaviors. And as a teacher, you have a lot of power to change behaviors. As BJ Fogg says, you shouldn’t be trying to motivate behavior change; you should be trying to facilitate it.

Vocabulary learning is the perfect place to try out techniques for motivational success and overcoming procrastination, because it is in many ways the most autonomy-friendly part of language learning. It can easily be divided into manageable lists, and success, failure, and progress are fairly easy for everyone to observe. It is also a topic I have to run a training session on this summer, and I need some practical ideas for teachers to try out with their students.

OK, here we go. In addition to using teaching techniques that make the vocabulary as easy to understand and remember as possible, try the following:

  1. Make a detailed plan with clear sub-goals that are measurable and time-based. Break the vocabulary list into specific groups and set a specific schedule for learning them (see the sketch after this list). This provides a clear final target and clear, actionable, incremental steps, important tenets of formative assessment. Create a complete list and unit-by-unit or week-by-week lists. Be very clear on performance criteria for success (spelling, pronunciation, collocations, translation, etc.). Make the plans as explicit as possible, and put as much in writing as possible.
  2. Provide lots of opportunities for learners to meet and interact with the vocabulary. Learners need to actively meet target items more than 10 times each (and more than 20 times in passive meetings) if they are expected to learn them. Recycle vocabulary as much as possible.
  3. Create a system that requires regular out-of-class study (preview/review). Out-of-class HW assignments should be ridiculously small at first (tiny habits–see below), such as writing out two sentences once each. Grow and share and celebrate from there.
  4. Ensure success experiences. Success is empowering. The teacher’s job is to ensure that learners can learn and can see the results of their learning. Do practice tests before the “real” test, and generally provide sufficient learning opportunities to ensure success (“over-teach” at first if you need to). Lots of practice testing is a proven technique to drive learning, and students need to do it in class and in groups, and learn how to do it on their own.
  5. Leverage social learning and pressure. Have learners learn vocabulary together, teach and help each other sometimes, encourage each other, and just generally be aware of how everyone else is succeeding. Real magic can happen if a learning community puts its mind to something.
  6. Have learners share their goals and progress, publicly in class  and with friends, family and significant others. Post results on progress boards, challenge and results charts, etc. At a very minimum, the teacher and the student herself should always know where they are and what they need to do to improve.
  7. Remind learners of the benefits of success. Provide encouragement, especially supportive, positive oral feedback at times when it is not necessarily expected.
  8. Make sure that sub-goal success is properly recognized and rewarded. This provides a stronger sense of achievement.
  9. Make 1-8 as pleasant (fun, energetic, meaningful) as possible.
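
Since items 1 and 2 are really a scheduling problem, here is a minimal sketch of what the chunking and recycling might look like. The word list, chunk size, and review offsets are invented for illustration; the point is only that a spaced schedule can be generated mechanically once the plan is made explicit.

```python
# A minimal sketch of step 1 (and the recycling in step 2): split a word
# list into weekly chunks, then schedule each chunk for spaced re-meetings
# so items come around again rather than appearing once.
# The word list, chunk size, and review offsets are invented.

from itertools import islice

def chunked(words: list[str], size: int):
    """Yield successive fixed-size chunks from a word list."""
    it = iter(words)
    while chunk := list(islice(it, size)):
        yield chunk

word_list = ["rely", "assume", "decline", "emerge", "obtain",
             "pursue", "retain", "undergo", "convey", "derive"]
chunk_size = 5
review_offsets = [1, 2, 4]   # revisit a chunk 1, 2, and 4 weeks later

schedule: dict[int, list[str]] = {}
for i, chunk in enumerate(chunked(word_list, chunk_size), start=1):
    schedule.setdefault(i, []).append(f"learn: {', '.join(chunk)}")
    for offset in review_offsets:
        schedule.setdefault(i + offset, []).append(f"review week {i} words")

for week in sorted(schedule):
    for task in schedule[week]:
        print(f"Week {week}: {task}")
```

Hand a printout like this to students and the plan stops being “study the vocabulary” and becomes a week-by-week checklist with built-in recycling.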

You may already be doing these things and still not getting the progress you hope for, because the students just aren’t studying enough outside of class. Products of their age, they are driven by distractions–the need to check their Twitter feeds, for example, or the pressing issue of incoming LINE comments, or whatever. But they also suffer from the oppression of the same procrastination monster that we all suffer from. Oliver Emberton has a nice post on dealing with procrastination. For teachers, I would like to call attention to the last two items on his list of recommendations: Force a start, and Bias your environment. “The most important thing you can do is start,” Mr. Emberton writes. This is certainly true.


You can counsel them on the need to turn off their devices and “study more.” But unless you give them clear, doable, manageable tasks, start those tasks in class, and require and celebrate their completion, it is unlikely anything will get done. BJ Fogg recommends that you facilitate behavioral change by promoting tiny habits. His work makes the establishment of positive habits seem so much easier. You can watch an earlier overview of his method here, or a fun TED talk here. Much of what he describes can only be done by the individual learner, but as a teacher you can set the target habit behavior and help learners see the fruits of their newly established habits. Just choose a vocabulary learning strategy, reduce it to its simplest form, and provide a place to celebrate success. Then try to grow and celebrate the continued use of these positive habits. This modern world is a hard one to study in. There are really too many distractions too close at hand. It takes real strength, real grit, to resist them and start or keep at something new. Helping students develop this strength and grit is now part of any teacher’s job description, I think.

If you are looking for more on how to teach vocabulary, including a nice section on web and mobile app tools that can help, Adam Simpson’s blog has a nice post on vocabulary. If you are looking for something that combines the latest in TESOL theory on motivation with practical techniques for the items I listed above, Motivating Learning by Hadfield and Dornyei is the best thing I’ve seen. It has 99 activities to choose from.