What can Data do for EFL?


In the US, something very interesting is happening as an indirect result of standards-based testing and the establishment of charter schools: a valuable set of research conditions has emerged. Charter schools are themselves often experiments, established in part to be “…R&D engines for traditional public schools” (Dobbie & Fryer, 2012). As such, they often eschew many elements of traditional education practice, and instead attempt to institute the results of research into educational effectiveness from the last several decades. For many charter schools, this research-informed pedagogy assumes a unifying or rallying role, and the ideas are woven into their mission statements and school cultures as they try to take underprivileged kids and put them on the road to college, through grit training, artistic expression, or higher expectations. Even from a brief visit to the websites of charter systems such as Uncommon Schools, MATCH, or KIPP, you can see what they are intending to do and why. And some of these charter schools have become extremely successful—in terms of academic achievement by students, and in terms of popularity. Both of these have led to new research opportunities. You see, the best charter schools now have to resort to lotteries to choose students, lotteries that create randomly assigned groups: those who got in and received an experimental educational treatment, and those who didn’t and ended up going to more traditional schools. That provides researchers with a way to compare programs by looking at what happens to the students in these groups. And some of the results may surprise you.

Let’s play one of our favorite games again: Guess the Effect Size!! It’s simple. Just look at the list of interventions below and decide whether each intervention has a large (significant, important, you-should-be-doing-this) impact, or a small (minimal, puny, low-priority) impact. Ready? Let’s go!

  1. Make small classes
  2. Spend more money per student
  3. Make sure all teachers are certified
  4. Deploy as many teachers with advanced degrees as possible
  5. Have teachers give frequent feedback
  6. Make use of data to guide instruction
  7. Create a system of high-dosage tutoring
  8. Increase the amount of time for instruction
  9. Have high expectations for students

Ready to hear the answers? Well, according to Dobbie & Fryer (2012), the first four on the list are not correlated with school effectiveness, while the next five account for a whopping 45% of the variation in school effectiveness. Looking at the list, this is not surprising, especially if you are aware of the power of formative feedback.

Some people might be a little skeptical still. Fine. Go and look for studies that prove Dobbie and Fryer wrong. You might find some. Then look at where the research was done. Is the setting like yours? Just going through this process means we are putting data to work. And that is much, much better than just going with our own instincts, which are of course based on our own experiences. I teach English in Japan, and I know that is a far cry from the hard-knocks neighborhoods where Dobbie and Fryer looked into the effects of interventions in Match schools. But I think there are enough similarities to warrant giving credence to these results and even giving them a try at schools in Tokyo. I have several reasons. First, the extensive research on formative assessment, high expectations, classroom time, and pinpointed direct instruction is very robust. Nothing in their list is surprising. Second, in Japan, English is often as removed from the daily lives of most students as physics or math are from the lives of many American teens. The motivation for learning it is likewise unlikely to be very strong at the beginning. Many of the students in the Match system are less than confident in their ability in many subjects, and in aiming at college, a world that is often quite foreign to their lives. Many English learners in Japan similarly see English as foreign and unrelated to their lives, and the notion that they can become proficient at it and make it a part of their future social and/or professional lives requires a great leap of faith.

But through the Match program, students do gain in confidence, and they do gain in ability, and they do get prepared for college. Given the demographic, the success of Match and the other “No Excuses” systems mentioned above is stunning. It also seems to be long lasting. Davis & Heller (2015) found that students who attended “No Excuses” schools were 10.0 percentage points more likely to attend college and 9.5 percentage points more likely to enroll for at least four semesters. Clearly the kids are getting more than fleeting bumps in scores on tests. And clearly the approach of these schools—in putting to work proven interventions—is having a positive effect, although not everyone seems to be happy.

And it’s not just that they are making use of research results. These schools are putting data to use in a variety of ways. Paul Bambrick-Santoyo of Uncommon Schools has published a book, Driven by Data, that outlines their approach very nicely. In it we can find this:

Data-driven instruction is the philosophy that schools should constantly focus on one simple question: are our students learning? Using data-based methods, these schools break from the traditional emphasis on what teachers ostensibly taught in favor of a clear-eyed, fact-based focus on what students actually learned (pg. xxv).


They do this by adhering to four basic principles. Schools must create serious interim assessments that provide meaningful data. This data then must be carefully analyzed so that it produces actionable findings. These findings must be tied to classroom practices that build on strengths and eliminate shortcomings. And finally, all of this must occur in an environment where the culture of data-driven instruction is valued and practiced and can thrive. Mr. Bambrick-Santoyo goes through a list of mistakes that most schools make, challenges that are important to meet if data is to be used to “…make student learning the ultimate test of teaching.” The list feels more like a checklist of standard operating procedures at almost every EFL program I have ever worked in. Inferior, infrequent or secretive assessments? Check, check, check. Curriculum-assessment disconnect? Almost always. Separation of teaching and analysis? Usually, no analysis whatsoever. Ineffective follow-up? Har har har. I don’t believe I have ever experienced or even heard of any kind of follow-up at the program level. Well, you get the point. What is happening in EFL programs in Japan now is very far removed from a system where data is put to work to maximize learning.

But let’s not stop at the program level. Doug Lemov has been building up a fine collection of techniques that teachers can use to improve learning outcomes. He is now up to 62 techniques that “put students on the path to college,” after starting with 49 in the earlier edition. And how does he decide on these techniques? Through a combination of videoing teachers and tracking the performance of their classes. Simple, yet revolutionary. The group I belonged to until this past April was trying to do something similar with EFL at public high schools in Japan, but the lack of standardized test taking makes it difficult to compare outcomes. But there is no doubt in my mind that this is exactly the right direction in which we should be going. Find out what works, tease out exactly what it is that is leading to improvement (identify the micro-skills), and then train people through micro-teaching to do these things and do them well and do them better still. Teaching is art, Mr. Lemov says in the introduction, but “…great art relies on the mastery and application of foundational skills” (pg. 1). Mr. Lemov has done a great service to us by videoing and analyzing thousands of hours of classes, triangulating that with test results, and then further trialing and tweaking those findings. If you don’t have a copy of the book, I encourage you to get one. It’s just a shame that it isn’t all about language teaching and learning.

Interest in using data in EFL/ESL is also growing. A recent issue of Language Testing focused on diagnostic assessment. This is something that has grown out of the same standards-based testing that allowed the charter schools in the US to thrive. You can download a complimentary article (“Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles”, by Luke Harding, J. Charles Alderson, and Tineke Brunfaut). You can also listen to a podcast interview with Glenn Fulcher and Eunice Eunhee Jang, one of the contributors to the special issue. It seems likely that this is an area of EFL that will continue to grow in the future.

Making EFL Matter Pt. 5: Prolepsis, Debate, and Benny Lewis

As a young man, I was part of a legion of English teachers working in Japan. A large number of us “teachers” working day in and day out at language schools and colleges were actually travelers trying to save money for our next trek through Nepal or to live on a beach on Boracay or Koh Samui (very different in 1986) for as many months as possible before we had to work again. At least some of these people, in order to be able to stay in Japan and teach/work, pretended to be in the country for the purpose of studying something–flower arrangement, karate, or Japanese language, for example. One guy, ostensibly studying Japanese, dutifully went to the immigration office each year to renew his visa. And each time, he struggled greatly with the rudimentary questions the officer asked him in Japanese. At the end of the conversation, the immigration officer would kindly offer him encouragement because “Japanese was a hard language” to learn.

That same sentiment–that you are just studying the language and can’t really use it yet–is still surprisingly common in many institutional programs for learners of many languages. I have often heard college students say that they want to go to the US “after my English is good enough.” The opposite of this “not yet” concept is prolepsis, “the representation or assumption of a future act as if presently existing or accomplished” (from Merriam-Webster). It is a lovely little term I came across in Walqui and van Lier (2010). They recommend treating students proleptically, “as if they already possess the abilities you are seeking to develop” (pg. 84). In other words, throw them in at the deep end, and both support and expect their success. High school and college in Japan are perfect places for putting this approach into practice. Why? Because learners have already had somewhere between 4 and 10 previous years of English exposure and learning. It’s time to stop pretending that they can’t use it. Right, Benny?

People like Benny Lewis are not usually taken seriously in the TESOL world, but they should be. Watch the video and see how many things he gets right. Polyglots learn languages successfully, he says at one point, because they are motivated to “use it with people” and they go about doing so. That is some good sociocultural theory there. He also dismisses five of the barriers that people so often accept to explain their own lack of success with language learning, and describes the growth mindset and the time and resource management that he and his friends have made work for themselves. But what I find most amazing about Mr. Lewis and others like him is that they are living examples of acting proleptically with language learning. They learn it, use it, love it, and repeat. They don’t stop to worry about whether they are “ready.” They don’t let things like having few resources around, or no interlocutors nearby, interfere. They challenge themselves to learn what they can and then actively seek out opportunities to use that, monitoring their progress by continually testing it out. I admire their passion. I borrow strategies and techniques from them to pass on to my students. If we are not helping our students make use of Skype or Memrise or Quizlet or any of the many other tools available, we are doing a great disservice to our young charges.

But not only should we be introducing websites, we should be expecting our learners to use them and to push their learning. You can do it. No excuses. Of course you can handle basic conversations in the language. I expect nothing less than that. And let’s see what you can really do when you push yourself. I expect success. I assume it and design my activities around it. Prolepsis. We sometimes hear the word rigor used to describe education. We can also talk about holding higher expectations for our learners. Without a curriculum designed with the idea of prolepsis, however, it is likely empty talk. It sounds good, but is not actionable. Van Lier and Walqui list these three directives if we are serious about really making our curriculum, well, serious:

  • Engage learners in tasks that provide high challenge and high support;
  • engage students (and teacher) in the development of their own expertise;
  • make criteria for quality work clear for all.

We can see immediately that some of the things Mr. Lewis is suggesting get learners to do these things. I’ve talked before about rubrics and portfolios and making the criteria for success clear in other blog posts, but today I’d like to finish up this post by talking about an activity that does all these things, and gets students to perform proleptically: debate. Now debate has a bad reputation in Japan. Many teachers think it is too difficult for students. Some teachers think it focuses too much on competition. These points may have some validity, but they should not prevent you from doing debate. We do debate, as JFK said of going to the moon, because it is hard. And if we have students debate both sides of issues, what begins to emerge is a keen sense for examining any issue: for weighing what is important and how important it is, and for questioning and explaining that. Debaters behave proleptically, because they have to. Debating adds critical thinking structure to discussions about plans. Debaters learn to consider the status quo. They learn to evaluate plans in terms of their effect and importance. They learn to write speeches describing these things, and they learn to listen for them and consider them critically. Because there is a set structure, we can support and scaffold our learners. But we cannot hold their hands all the way. Debate forces them to go off script at times, while never going off topic. There is also time pressure, and the debate takes place with other people, an on-stage performance that is intimidating for everyone, and thus spurs learners to try harder. Yet, like scrimmaging with feedback, there are multiple opportunities to fine-tune performance (and get repeated input). Every time I read about techniques to promote high standards, rigor, etc., I always think to myself: That sounds an awful lot like debate, or Yup, debate can do that. To me, it seems that debate is one technique that should not be left out, especially policy debate, where learners research topics to come up with arguments for both sides in advance. Not only do we get four-skills language development, but we also get research skills, organization skills, and critical thinking skills development.

Show me another activity that does that.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals. Part 2 looked at using data and feedback. Part 3 looked at the challenges and benefits of academic discussions. Part 4 looked at portfolios and assessment.

Making EFL Matter Pt. 4: Portfolios and Assessment


In principle, a portfolio is an easy-to-understand and intuitively attractive concept: students keep the work they produce. The real challenge of a portfolio is what you do with it. Without a clear vision of how the tool will be used, it can easily end up being a little like a child’s art box of works produced in art class over the years—just a repository of things we hold on to for no specific reason other than sentimental attachment. We might pull these works out to look at them from time to time, but they are not a clear record of achievement, nor can they help inform future learning decisions. The central function of a quality portfolio is to clearly provide evidence of growth and to “…engage students in assessing their growth and learning” (Berger, Rugen & Woodfin, 2014, pg. 261). Exactly what growth depends on the goals of the course or program. When a course or program has clear goals, a portfolio can have a formative or summative role in demonstrating a learner’s achievement or progress toward achieving those goals. There are also practical/logistical constraints on portfolio deployment. What artifacts to include, how many to include, where to store them, and how and by whom the portfolio will be assessed are all important decisions. The results of these decisions can greatly impact the success of a portfolio as a learning tool.

 

Conceptualizing a portfolio

A portfolio is not simply a repository of files. It must serve as a record of progress that is used to assess learning, by the learner him/herself or by others. All decisions on its structure and deployment must start with this basic understanding. The design of the portfolio itself, and its integration into the syllabus (i.e., how it will be used on a regular basis), must aim to make it as easy as possible to record progress/achievement and to make evidence and patterns of progress/achievement visible in the collected data. For this reason, not only student-produced academic work (essays, presentations, tests), but also documents that make progress and achievement salient should be kept in a portfolio. Such documents may include introductory statements, target-setting plans, records of times on tasks, assignment rubrics, progress charts, and reflection reports.

 

The importance of goals

In order to be effective, the portfolio must be closely aligned to the goals of the course or program and be able to show progress toward or achievement of those goals. In other words, it must be able to provide specific evidence of progress in achieving the target competencies in a way that is clear and actionable. It must also do so in a way that makes the most effective or efficient use of time. These goals can include knowledge goals, skill goals, or learning goals for constructs such as responsibility, autonomy, revision, collaboration, service and stewardship (to name a few). Without clear goals (usually arranged in a clear sequence), effective use of a portfolio is not possible, and its formative and reflective functions cannot be leveraged in a clear and actionable way. However, if students know what they are aiming for and can compare their work against the target competencies (using the descriptions and rubrics that define the goals/competencies), portfolios can be a powerful tool for reflection and formative feedback.

 

The importance of regular portfolio conversations

“In order for portfolios to be a tool for student-engaged assessment, including formative and summative assessments, they must be a regular part of the classroom conversation, not a static collection of student work” (Berger, Rugen & Woodfin, 2014, pg. 268). The portfolio must be a tool of measurement, like a bathroom scale, and can only be effective if it is used regularly. Students must regularly enter data into it (more on what kinds of data in the next section), and they must use it to look for patterns of success and gaps in learning/performance and strategy use. For this reason, providing clear guidelines and time to enter data into portfolios, facilitating the noticing of patterns and gaps, and giving opportunities for students to discuss their progress in groups are all necessary. This will require classroom time, but also some scaffolding so students can understand how to work with data. Student-led conferences (mini presentations on progress done in groups in class) can be a useful tool. In groups, students can practice talking about learning, but also compare their progress and efforts with those of their classmates. Counselor conferences can also make use of portfolios, and if students have practiced beforehand in groups, time with counselors can be economized. Finally, to truly leverage the power of portfolios, passage presentations (public presentations where students explain and defend their learning accomplishments to groups of teachers, parents, or other concerned parties) can be particularly powerful since they are public and official. If a passage presentation system is in place, it will make the portfolios more meaningful, greatly enhancing the effort students put into entering and analyzing data and the amount of time they spend analyzing and practicing explaining their learning. Passage presentations and counselor conferences can then turn student-led conferences into practice for “the big games.”

 

Portfolio contents Pt. 1: What are we trying to develop?

Let us review our key points so far. It must be easy to enter meaningful data into the portfolio and notice trends or gaps. Noticing the trends and gaps in performance requires an understanding of the goals of the course/program, so they must be clear. The portfolio should be used regularly: students should use it to monitor their learning; and students should be able to refer to it when explaining their learning to others (groups, counselors, or others). These points are all concerned with usability, making the experience of using a portfolio as simple and smooth and effective as possible. What we actually put into the portfolio must be concerned with our learning targets. As mentioned earlier, any program or course will have multiple targets for knowledge and skill acquisition, but also for constructs such as digital literacy, critical thinking, problem solving, responsibility, autonomy, revision, collaboration, service and stewardship, and possibly others. Therefore, it is important for portfolios to contain finished work and evidence of the process of improving work through working with others, checking and revising work responsibly, and helping others to do so, too. Portfolios should also contain records of learning activities and times on tasks as evidence of autonomy and tenacity.

 

Portfolio contents Pt. 2: Portfolios for language learners

As part of English language courses, there are usually weekly classroom assignments for writing and presentation. There may also be other writing assignments, or other speaking assignments. As for other constructs, the following have been shown to be important for successful language learning and therefore should be part of the curriculum:

  • Time on task
  • Time management (efficient use of time)
  • Commitment to improvement/quality (accountable for learning)
  • Critical evaluation of learning strategies
  • Collaboration (accountable to others)
  • Seeking feedback and incorporating feedback (revision)

 

If we try to build these into our portfolio system along with our language and culture target competencies while still managing the volume of the content, I believe that we must include the following elements, in addition to a general goal statement:

  1. Drafts and final products for a limited number of assignments, including a reflection sheet with information about the goals of the assignment (and a copy of the rubric for the assignment), time spent on the assignment, attempts at getting feedback and comments on how that feedback was included;
  2. Weekly reflection sheets (including a schedule planner) in which students plan out their study for the week ahead and then reflect upon the results afterward. There could also be sections where students can reflect upon strategy use and explain their attempts to reach certain goals;
  3. Self-access tracking charts in which students list the reading, listening, or other self-access activities they engage in. Several of these charts can be made available to students (extensive reading charts, extensive listening charts, TOEFL/TOEIC test training, online conversation time, etc.) and students can include the charts relevant to their personal goals (though extensive reading will be required for all students).

Finally

As you can see, there is much to be decided: specifically which assignments and how many will be included; also the various forms need to be designed and created; and, for the English classes, whether completing the portfolio and discussing learning is something that we want to scaffold learners to be able to do (something that I personally think is very important).

 

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 2 looked at using data and feedback.

Part 3 looked at the challenges and benefits of academic discussions.

 

References

Berger, R., Rugen, L., & Woodfin, L. (2014). Leaders of their own learning: Transforming schools through student-engaged assessment. San Francisco: Jossey-Bass.

Greenstein, L. (2012). Assessing 21st century skills: a guide to evaluating mastery and authentic learning. Thousand Oaks, CA: Corwin.

Making EFL Matter Pt. 2: Data and Feedback


 

In the last few years, I’ve found myself increasingly reaching for books that are not on my SLA or Applied Linguistics shelves. Instead, books on general teaching methodology have engaged me more. It started a few years ago when I came across John Hattie’s Visible Learning for Teachers, a book in which various approaches in education are ranked by their proven effectiveness in research. This led me to further explore formative assessment, one of the approaches that Hattie identifies as particularly effective, citing Yeh (2011). I was impressed and intrigued and began searching for more–and man is there a lot out there! Dylan Wiliam’s Embedded Formative Assessment is great (leading to more blogging);  Laura Greenstein’s What Teachers Really Need to Know about Formative Assessment is very practical and comprehensive;  Visible Learning and the Science of How We Learn by Hattie and Yates has a solid  section; and Leaders of Their Own Learning by Ron Berger et al., the inspiration for this series of blog posts, places formative assessment at the heart of curricular organization. There is, as far as I know, nothing like this in TESOL/SLA. I’m not suggesting, I would like to emphasize, throwing out bathwater or babies, however. I see the content of these books as completely additive to good EFL pedagogy. So let’s go back to that for a moment.

One of my favorite lists in TESOL comes from a 2006 article by Kumaravadivelu in which he puts forth his list of macro-strategies, basic principles that should guide the micro-strategies of day-to-day language classroom teaching, as well as curriculum and syllabus design. This list never became the Pinterest-worthy ten commandments that I always thought it deserved to be. Aside from me and this teacher in Iran, it didn’t seem to catch on so much, though I’m sure you’ll agree it is a good, general set of directives. Hard to disagree with anything, right?

  1. Maximize learning opportunities
  2. Facilitate negotiated interaction
  3. Minimize perceptual mismatches
  4. Activate intuitive heuristics
  5. Foster language awareness
  6. Contextualize linguistic input
  7. Integrate language skills
  8. Promote learner autonomy
  9. Raise cultural awareness
  10. Ensure social relevance

But I can tell you now what is missing from the list (if we really want to take it up to 11): Provide adequate formative feedback. One of the great failings of communicative language teaching (CLT) is that it has been so concerned with just getting students talking that it has mostly ignored one of the fundamental aspects of human learning: it is future-oriented. People want to know how to improve their work so that they can do better next time (Hattie and Yates, 2014). “For many students, classrooms are akin to video games without the feedback, without knowing what success looks like, or knowing when you attain success in key tasks” (pg. 67). Feedback helps when it shows students what success looks like, when they can clearly see the gap between where they are now and where they need to be, and when the feedback provides actionable suggestions on what is being done right now and what learners should do/change next to improve. It should be timely and actionable, and learners should be given ample time to incorporate it and try again (Wiliam, 2011; Greenstein, 2010).

One of the most conceptually difficult things to get used to in the world of formative feedback is the notion of data. We language teachers are not used to thinking of students’ utterances and performances as data, yet they are–data that can help us and them learn and improve. I mean, scores on certain norm-referenced tests can be seen as data, final test scores can be seen as data, and attendance can be seen as data, but we tend, I think, to look at what students do in our classes with a more communicative, qualitative, meaning-focused set of lenses. We may be comfortable giving immediate formative performance feedback on pronunciation, but for almost anything else, we hesitate and generalize with our feedback.  Ms. Greenstein, focusing on occasionally enigmatic 21st century skills, offers this:

“Formative assessment requires a systematic and planned approach that illuminates learning and displays what students know, understand, and do. It is used by both teachers and students to inform learning. Evidence is gathered through a variety of strategies throughout the instructional process, and teaching should be responsive to that evidence. There are numerous strategies for making use of evidence before, during, and after instruction” (Greenstein, 2012, pg. 45).

Ms. Greenstein and others are teachers who look for data–evidence–of student learning, and look for ways of involving learners in the process of seeing and acting on that data. Their point is that we have a lot of data (and we can potentially collect more) and we should be using it with students as part of a system of formative feedback. Berger, Rugen and Woodfin (2014) put it thus:

“The most powerful determinants of student growth are the mindsets and learning strategies that students themselves bring to their work–how much they care about working hard and learning, how convinced they are that hard work leads to growth, and how capably they have built strategies to focus, organize, remember, and navigate challenges. When students themselves identify, analyze, and use data from their learning, they become active agents in their own growth” (Berger, Rugen & Woodfin, 2014, pg. 97).

They suggest, therefore, that students be trained to collect, analyze, and share their own learning data. This sounds rather radical, but it is only the logical extension of the rationale for having students track performance on regular tests, or making leaderboards, or even giving students report cards. It just does so more comprehensively. Of course, this will require training/scaffolding and time in class to do. Doing so is worth that time and effort, they stress. Data has an authoritative power that a teacher’s “good job” or “try harder” just doesn’t. It is honest, unemotional, and specific, and therefore can have powerful effects. There are transformations in student mindsets, statistics literacy, and grading transparency, all of which help make students more responsible and accountable for their own learning. Among the practices they suggest that could be deployed in an EFL classroom are tracking weekly quiz results, standardized tests, or school exams, using error analysis forms for writing or speaking assignments, and using goal-setting worksheets for regular planning and reflection.
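To make the quiz-tracking idea concrete, here is a minimal sketch of what a student-kept tracker might look like if a class wanted to move from a paper chart to something learners could analyze together. Everything in it (the weeks, the scores, the 20-point scale) is invented for illustration and is not taken from Berger, Rugen and Woodfin.

```python
# A hypothetical weekly quiz tracker kept by one student.
# Scores and scale are invented for illustration only.
weekly_quizzes = [
    ("Week 1", 12),
    ("Week 2", 14),
    ("Week 3", 13),
    ("Week 4", 17),
    ("Week 5", 18),
]

scores = [score for _, score in weekly_quizzes]
average = sum(scores) / len(scores)
change = scores[-1] - scores[0]  # simple first-to-last difference

# A bar-style view the student can read at a glance, like a paper tracking chart.
for week, score in weekly_quizzes:
    print(f"{week}: {'#' * score} ({score}/20)")

print(f"Average: {average:.1f}/20, change since Week 1: {change:+d} points")
```

Whether the tracker lives on paper or on a screen matters far less than who is doing the looking: the point of the exercise is that the student, not the teacher, is the one searching the data for patterns.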

You can see the use of data for formative assessment in action in a 6th grade classroom here.

This post is part of a series considering ways to add more focus and learning to EFL classrooms by drawing on ideas and best practices from L1 classrooms.

Part 1 looked at the importance of goals.

Part 3 looks at the challenges and benefits of academic discussions.

 

References

Yeh, S. S. (2011). The cost-effectiveness of 22 approaches for raising student achievement. Charlotte, North Carolina: Information Age Publishing.

Learning Styles


One idea that comes up again and again is that of learning styles. Of course you have heard about it: some people are visual learners, some people have to hear things to learn them, and some kinesthetic people need to get their whole body into the learning process or nothing sticks. Google these terms–visual, auditory, and kinesthetic–and you’ll get enough hits to make you think that learning styles constitute an established theory in learning.


 

Unfortunately, that assurance would be misplaced. Daniel Willingham, in When Can You Trust the Experts?, has this to say:

…there is no support for the learning styles idea. Not for visual, auditory, or kinesthetic learners…The main cost of learning styles seems to be wasted time and money, and some worry on the part of teachers who feel they ought to be paying more attention to [them]…(page 13).

This comes as quite a shock for many people because the idea is so entrenched. “Experts” talk about it often. It is mentioned in countless books and articles. I have heard it many times and repeated it myself. But, nope, it just ain’t so. There is no such thing as a “visual learner.” At least, there is no demonstrated effect in any scientific study. Mariale Hardiman summarizes the myth and the reality nicely in The Brain-Targeted Teaching Model. She cites Pashler et al. (2008), which you can read yourself if you are still numb with disbelief (citation below). Hattie and Yates have a unit devoted to this myth if you are still not convinced. Great book, by the way.

But your intuition tells you that there are differences between learners. There most certainly are. Every brain is wired differently because of the individual’s experience and, for children, their stage of development. These developmental and experiential differences are real and have very real consequences for how we should teach and for the best class sizes, if we take differences seriously.

Essentially, differences take the form of preferences, preparation, motivation, and pace. According to David Andrews of Johns Hopkins University in the wonderful MOOC named University Teaching 101, students have preferences for the modality (yes, here we can talk about print or video or audio), groupings, and types of assessment and feedback. Students also vary in how prepared they are to learn. All learning involves connecting new knowledge to knowledge already held. If your students lack certain schema or factual knowledge, they will need more time to gain that and the target knowledge. In any given class, motivation differences (often because of prior experience) and time commitments can produce huge differences in the amount of attention and effort students will exert and sustain. Lastly, processing speeds in reading and listening (again a function of experience and practice) can make content more or less challenging than the instructor may think it is. Watch any class taking a test to see pacing differences in action. Students finish at very different times, and this is often unrelated to proficiency with the target content.

So, what should an EFL teacher do? Well, smaller classes are a start, but only if you are really planning to do something about it. If you are going to teach to the same middle as always, smaller classes will not necessarily give your students any benefits. Small groupings rank only #52 on Hattie’s list of effective interventions, probably for this reason. Mr. Andrews suggests personalizing content and delivery as much as possible. He suggests getting to know your students as much as possible, and giving them as much choice as possible in how they learn. This is a delicate balancing act, in my experience. Students can be notoriously bad at understanding themselves, their strengths and weaknesses, and choosing better strategies. The teacher must push and pull them carefully up to better performance, offering them choices and checking that they are choosing wisely and making sufficient effort to see results. Technology can help a lot here. Recording short lectures/lessons and making them available with transcripts gives slower, less experienced, or different-preference students options for learning and reviewing that let them customize the educational experience for themselves. And research has shown that repeated viewing/reading and multi-modal presentation are significantly correlated with better learning; variety and choice will also hold attention better and improve motivation. One crucial part of personalization is personalizing formative feedback (a series of posts on formative assessment starts here). The power of formative feedback in driving learning should not be underestimated, but you need to be close to your students to either do that yourself or teach them or their classmates to do it. This also involves making goals salient to students with clear rubrics, so that they can see where they are going, how they are progressing, and what they need to do to get better. A recent study in math classes at an American university illustrates this. For homework assignments, some students were given formative feedback and follow-up problems based on performance, while the spacing of content repetition was controlled for maximum effectiveness. This small change resulted in a 7% improvement on the short answer section of the final exams! Personalization seems to have that kind of power, if done right.

As Mr. Andrews says, “personalization has become a standard for learning in every part of our lives except school. And it will become a standard in school.” Get ready to hear more and more about it.

 

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 109-115.


 

Formative Assessment Pt. 5: Owning Learning

This is the final post considering the implementation of Dylan Wiliam’s ideas on formative assessment in EFL classrooms in Japan. The ideas come mostly from his wonderful 2011 book titled Embedded Formative Assessment. You can learn more about Mr. Wiliam from his website or from a BBC documentary titled The Classroom Experiment (available on YouTube: Part 1 and Part 2). Earlier posts covered these topics: 1) Raising Awareness of Learning Intentions, 2) Eliciting Evidence of Achievement, 3) Feedback from the Teacher, and 4) Peer Feedback and Cooperative Learning. This post will look at how to make learners more autonomous. This does not necessarily mean getting learners to do more outside of the classroom (though it can). Often it means just having the learners clearly signal when learning is not happening, and reaching out for resources that can rectify that.

As a parent, I know that the most efficient way of getting my kids to clean up the family room is to do it myself. It’s faster, I can be sure the job is done well, and best of all, no time or energy has to be used for the messy business of requesting, cajoling, demanding, and inevitably, getting angry and/or disappointed. But, as my wife so correctly points out, that’s just bad parenting. It’s also a bad way of going about teaching. Ultimately, it is the learners who have to learn. And they need to learn how they learn and how they can improve that. But it ain’t easy.

One of the themes of Embedded Formative Assessment is to focus on the cognitive, the task, the goal, and learning, and take the focus off the emotional, the immediate sense of well-being, and “get the egos of the students out of the learning situation.” For improved autonomy, this entails dealing with issues of metacognition and motivation, and Mr. Wiliam does another nice job of laying out the research context. What goes on in a learner’s head is tricky because so many things are at play–and the cognitive/motivational landscape is, like any good offer you manage to find, subject to change without notice. As Mr. Wiliam summarizes it, it depends on the learner’s “perception of the task and its context, [the learner’s] knowledge about the task and what it will take to be successful, and [the learner’s] motivational beliefs, including their interest and whether they think they know enough to succeed.” Greatly affecting this is the notion of self-efficacy, the belief that success comes from effort and is attainable. That needs to be nurtured, as I have mentioned before. But Mr. Wiliam is suggesting a wider approach. Everything he has talked about in the previous chapters helps to promote autonomy, by encouraging the learner to focus on growth. Encouraging the transfer of executive control from the teacher to the learner is just another part of it, and if the focus of the course and of the class is on learning, on giving and getting feedback that moves the learning forward, then it is easier for the learner to see that as one of her own jobs, too. There are, of course, no “quick fixes.” But Mr. Wiliam provides a few practical ideas for encouraging learners to move toward owning learning. One of the keys is establishing channels of communication and then promoting their use.

Colored cards, disks, or cups for reacting to difficult topics. Mr. Wiliam suggests a few ways to use traffic-light cards, cups, disks, or marks to signal to the teacher whether learning or understanding has happened or not. Creating an environment where learners can use the signals is essential for their success. Part of that involves responding in a way that takes the teacher’s ego out of the equation as well. The teacher needs to react responsibly when learners signal a failure of learning. One idea Mr. Wiliam suggests illustrates how this can be done to promote autonomy. Let’s say the teacher has just finished explaining a particularly gnarly grammar item like the difference between perfect and simple past tenses. The students signal their understanding using cups/cards/disks. After students hold up red, yellow, or green cards, the teacher instructs the green card students to help the yellow card students while she gathers the red card students together for further teaching or help. Everyone wins here. The students feel the teacher is responsive. The good learners get a chance to cognitively elaborate, the weak learners get some rephrasing/reinforcement, and the weakest learners get the second chance they need.

Learning Portfolios. For productive skills, have the learners keep samples of their writing or presentations or conversations in a portfolio. This can be done digitally, though that will require some tech savvy on the part of the teacher. The idea is to have a few samples that can allow the learner to see the trajectory of his or her improvement. That sends two very important messages: change is happening (and possible); and you make it happen. I’ve done something similar effectively in writing class. I had learners keep their first assignments after asking them how long it took to complete them. Then at the end of term, I asked them to compare their first assignment to their most recent assignment in terms of sentence variety, sentence length, and time for completion. The results always showed improvement and caused learners to feel proud of their achievement. I used to look forward to that class every year.

“Reflecting critically on one’s own learning is emotionally charged, which is why developing such skills takes time, especially with students who are accustomed to failure,” Mr. Wiliam says. He is talking about all but a few of the English language learners at high schools in Japan, I’m afraid, where average exam scores are in the 40s or 50s or 60s. Success usually means passing a test, not accomplishing a communicative act or understanding or performing with confidence. Mr. Wiliam’s techniques can help make classrooms less of a teacher-controlled, one-way info barrage. That would be a step in the right direction.

 

 

Formative Assessment Pt. 4: Getting Other Learners Involved

This is the fourth post considering the implementation of Dylan Wiliam’s ideas on formative assessment in EFL classrooms in Japan. The ideas come mostly from his wonderful 2011 book titled Embedded Formative Assessment. You can learn more about Mr. Wiliam from his website or from a BBC documentary titled The Classroom Experiment (available on YouTube: Part 1 and Part 2). The first posting in this series looked at learning intentions. The second looked at eliciting evidence of achievement. The third looked at how and when teachers can best provide feedback to learners. This post will look at cooperative learning and peer involvement in learning. Mr. Wiliam’s point is that when learners are working together and helping each other, they are naturally giving and getting formative feedback.

Real cooperative learning is a little like real communism. It’s a nice idea, but in actual practice too many people just game the system for their own benefit to get maximum reward for minimum effort. Teachers have serious–and well-founded–concerns about the amount and quality of participation that is brought to the group table by all members. Mr. Wiliam’s comparatively short chapter on activating students as instructional resources for one another approaches the topic with a tone that makes you think he shares at least some of that sense of trepidation. The research is clearly positive, and Mr. Wiliam presents the profound effects that have been found for cooperative learning, if it is done right (which it usually isn’t). Mr. Wiliam explains how it works (motivation, social cohesion, personalization, and cognitive elaboration) and what two elements are crucial (group goals and individual accountability) before ending the first part of the chapter with a discussion of how many teachers have a problem with pure, uncut cooperative learning (holding everyone accountable by giving everyone in a group the same score as the lowest-scoring member) and then citing stats that show how few teachers are actually making use of real cooperative learning in their classrooms (very, very few). And on that mixed note of confidence, he begins listing his techniques. I’ll get to the techniques I think might work in Japanese high school EFL classes in a moment, but first an educational culture point needs to be addressed.

There seems to be a strong sense that Japanese classrooms are naturally more cooperative because, well, Japanese group culture makes it easier. Mr. Wiliam states the same thing in his book, listing as “proof” the contrasting proverbs of the squeaky wheel gets the grease (US) and the nail that sticks out will get hammered down (Japan). In addition to the book containing a mistake with the Japanese version of the proverb, I think this generalization is more than a little stereotypical. Anyone who has seen Japanese students “unmotivated” in regular classes come together in a club activity or festival project knows that group power and individual accountability are impressive but cannot be taken for granted; and anyone who has seen PTA mothers–dedicated, concerned parents all–trying to avoid being elected for positions that require work knows that Japanese people, like anyone else I imagine, can go to pretty great lengths to remain uninvolved, despite being members of a nation known for being responsible and group-oriented. But I don’t want to get on Mr. Wiliam’s back because his main point is sound: we want to get everyone more involved with helping each other because there are great benefits when that happens; and it really matters how you do it.

One idea that any school can use is the “Secret Student.” You can see it in practice in the BBC video. It is a devious bit of peer pressure judo teachers can use to promote better behavior in the classroom and I think it would work brilliantly in Japan. Each day one student is chosen at random as the secret student and his/her behavior is monitored by the teacher(s). If the student’s behavior/participation is good, his/her identity is revealed to the class at the end of the lesson or day. And the whole class gets a point that goes toward some reward (a trip to an amusement park in the video!). If the behavior/participation of the student is unsatisfactory, the identity is not revealed and the class is informed that they did not get a point for that day. This would almost certainly help to improve participation and reduce disruptive behavior (two really big problems in most high school English classrooms). The only problem is what reward can be offered. It would have to be something possible yet motivating.

One technique to get started with cooperative learning is “Two Stars and a Wish.” Students give feedback on other students’ work by stating two things they like and one thing that they think could be improved. Mr. Wiliam suggests using sticky notes for this feedback. He also suggests picking up some of the feedback comments from time to time to teach students how to give better feedback. This last point is important because it is precisely the generally poor quality of student or peer feedback that makes many teachers so unenthusiastic about peer feedback. There are many times in a language course when students are just out and out unable to provide good feedback. But learning how to give feedback well when it is possible to do so is a real learning opportunity that can benefit the giver and (eventually) the receiver. This technique could be used for anything students write, translate, or present–any time students produce anything in the target language.

One activity that he suggests, “Error Classification,” probably wouldn’t work in a language classroom as described. This activity requires learners to pore over writing examples to classify the errors made. It sounds nice, but it is unlikely the learners would be able to do this in all but the most proficient classes. And even if they could, spending so much time on superficial mechanical errors may not be a good idea. Another activity, “Preflight Checklist,” might be much better for student writing assignments. For this activity, students are given a list of requirements for the writing assignment (things like proper format, clear topic sentence, logical organization, subject-verb agreement, or whatever the teacher is focusing on at the time). Another student is responsible for checking the writing and signing off, meaning certifying that the first student’s writing meets all the requirements.

And a final activity that I think would work well in EFL classes is providing a little time at the end of a lesson or section for pairs or groups to discuss and report on what they have learned. This can be a nice student-led review, and a chance for teachers to see what has and has not been grasped well.

To really get the benefit of cooperative learning, teachers need to get learners to have group goals and accept the idea of shared responsibility and accountability. This may be problematic in many situations for many reasons, depending on the year, the course, and the proficiency and motivation differences of learners. I have recently observed a class where the teacher was making extensive use of group cooperative learning. Out of six groups, it was working for three but not really working for the other three. For it to work, it seems that some training, some acceptance of the approach, some accountability, and a fair bit of time are all necessary. When it comes to cooperative learning in Japan, perhaps introducing more chances for learners to see each other’s work, formatively assess it, and then communicate that assessment might be the best way to start. Real cooperative learning is hard, takes a serious commitment, and can all be for naught if not done (and embraced) well.

Next: Part 5, Encouraging greater autonomy and ownership of learning.

 

Formative Assessment Pt. 3: Moving Right Along

This is the third post considering the implementation of Dylan Wiliam’s ideas on formative assessment in EFL classrooms in Japan. The ideas were gleaned from his wonderful 2011 book titled Embedded Formative Assessment. The first post in this series looked at learning intentions. The second looked at eliciting evidence of achievement. This post will consider how and when teachers can best provide feedback to learners. This part of the book takes up the theoretical rationale for giving feedback.

Let’s start with a question: is a grade feedback? That is, is it information–meaningful, understandable, actionable information–that contributes to the learning process? Mr. Wiliam says usually it is not. In the language of assessment, there is summative assessment and formative assessment. And grades are not formative assessment. And in Mr. Wiliam’s view, formative assessment is really all that matters.

If we think carefully about it, and Mr. Wiliam has, we can see that there are four possible responses to feedback: the learner can change his behavior (make more or less effort); the learner can change his goal (either increase or reduce his aspiration); the learner can abandon his goal altogether (decide that it is too hard or too easy); or the learner can just reject or ignore the feedback. As teachers, we know which of these actions we want learners to take, but what the learner actually does depends on how he sees the goal, the feedback, the feedback giver, and a host of other factors. Feedback seems straightforward in the teaching/learning culture we grew up with. But it is not. In fact, getting it right is really hard. But before we even try to get it right, a more fundamental mindset change is necessary. We have to understand that much of the “feedback-giving” we have traditionally done as teachers has been a waste of time–our time mostly–and has not contributed to learning. Much of the “feedback-giving” we thought was so important turns out to have either a negligible or even a negative effect. Yup, negative.

Feedback needs to “cause a cognitive rather than emotional reaction in learners”. It must “make learners think”, and it is only effective “if the information fed back to the learner is used by the learner in improving performance.” And this is why just giving grades is problematic. Students first look at their grades, then they look at the grades of other students, and they generally don’t even read those elaborate comments you spent all that time writing. Providing good feedback is difficult. It requires breaking down each learning intention into micro-skills, or micro-concepts, or significant units, and then being able to identify exactly what the learner is not doing right and how he can improve. The timing is also important. Performance must be fresh in a learner’s mind and there must be time to make use of that feedback on subsequent performance. The amount is important. It must be focused enough to be understandable and actionable. And learners need to believe they have the power to make changes that lead to improvement. They have to trust the teacher and believe in themselves. These are not givens. Teacher praise of effort (see Carol Dweck, who Mr. Wiliam cites often in this chapter) affects this, but so do task perception and the social atmosphere of the classroom.

For language classes, with their combination of knowledge learning and skill building, this is a challenge that will require at least two distinct approaches. For skill building, the teacher must act more like a coach. Speaking, writing, listening, and reading must be broken down into micro-skills, and learners need to be given feedback on each one so that they and the teacher get a picture of how they are doing and what they need to improve. Let’s take listening as an example. Mr. Wiliam suggests a chart of micro-skills based on the rubric of learning intentions for the course and a score of 0, 1, or 2 for each. 0 means no evidence of mastery; 1 means some evidence of mastery; and 2 means strong evidence of mastery. Both the learner and the teacher get a good idea of what is being done well and not so well, and the rubric (provided earlier) clearly states the conditions of mastery performance. The teacher can then concentrate on giving advice for improving performance. Let’s say the micro-skills include genre identification, understanding reduced speech, identifying transition signals, and keeping up with native-speed speech. The teacher has ways of checking all of these and knows ways of improving each of them.
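To make that concrete, here is a minimal sketch of what such a chart might look like for one learner. The micro-skills are the ones listed above, but the scores and the summary step are invented for illustration and are not taken from Mr. Wiliam’s book.

```python
# A hypothetical micro-skills chart for one learner's listening.
# 0 = no evidence of mastery, 1 = some evidence, 2 = strong evidence.
listening_chart = {
    "genre identification": 2,
    "understanding reduced speech": 1,
    "identifying transition signals": 1,
    "keeping up with native-speed speech": 0,
}

labels = {0: "no evidence yet", 1: "some evidence", 2: "strong evidence"}

for skill, score in listening_chart.items():
    print(f"{skill}: {score} ({labels[score]})")

# The lowest-scoring micro-skills tell the learner and the teacher
# where feedback and practice should go next.
focus = [skill for skill, score in listening_chart.items() if score < 2]
print("Next focus:", "; ".join(focus))
```

A paper chart with the same rows would serve just as well; the point is that each micro-skill gets its own score, so the feedback that follows can be specific rather than general.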

For productive skills like conversation or presentation, the same can be done. In addition, video can be used to give feedback and to provide a marker against which future performance can be judged (though Mr. Wiliam doesn’t specifically suggest this in the book). I tried this back in the days of VHS analog video and it worked really well, though it was very difficult to get learners to watch critically, reflect on their performance, and think about how to improve it. The original idea came out of work done at Nanzan University in the 1990s by Tim Murphey, Linda Woo, and Tom Kenny (here is a later article explaining how it is done). Recently, with digital video and with every student sporting a smartphone or a tablet, this can be done much more easily. TechSmith has a brilliant app available for exactly this purpose, called Coach’s Eye. It allows you to shoot and annotate a video and then share it.

For any kind of written work (translations, example sentences, paragraphs, essays, culture notes, etc.), something Mr. Wiliam suggests is providing feedback without the grade. This can be done individually or in groups. For groups of four, for example, the comments on four essays can be handed out separately on four sheets of paper. The four corresponding essays are also handed out, and the learners in the group must work to match the comments to the papers. This forces them to consider the comments and gives them a way to compare their performance on specific criteria against that of others. After that–and this is a critical step–the learners are given a chance to make adjustments to their papers and resubmit them for actual grading.

Mr. Wiliam quotes Alfie Kohn in the chapter: “never grade a student while they are still learning.” This is good advice. It can help a teacher to get into the best mindset to move learning along. Mr. Wiliam provides a strong case for doing this. The differences in learning outcomes between classes that employ formative assessment and those that do not are stunning. Teachers should be coaches, encouraging, developing, and training essential skills for performance. Formative assessment is the key, I believe, to getting teachers to assume a more effective role in the classroom and to building a community of learning. More on that last point when we look at what Mr. Wiliam has to say about leveraging peer feedback in the next post.

Next: Part 4, Getting other learners involved.

Formative Assessment Pt. 2: Eliciting Evidence of Achievement

This is the second post of a series considering Dylan Wiliam’s ideas on embedded formative assessment in EFL classes at high schools in Japan. Mr. Wiliam is a proponent of assessment for learning, a system where teachers work closely with learners to guide them to better learning. In the previous post, I looked at learning intentions, picking up some of his recommendations and describing how they might fit in EFL classes. To learn more about Dylan Wiliam, you can visit his website, or read this article from The Guardian, or read his latest book about why and how to make greater use of formative assessment, Embedded Formative Assessment. A BBC documentary of his initiatives called The Classroom Experiment is also available on YouTube (Part 1 and Part 2). Some of the ideas in this post are observable in the TV program and I encourage you to watch it. The book is much richer than the program, though, so I encourage you to get and read it as well.

In Chapter 4 of his book, Dylan Wiliam illustrates a problem with a nice story. Ask small children what causes wind and they might say it’s trees. They are not being stupid; they are using their observations and creatively making sense of what they see. But it’s a classic confusion of correlation and causation. Your own students would never do such a thing, would they? Oh, yes, they would. They misunderstand, misinterpret, overgeneralize, and oversimplify probably more often than you think, and it’s your job as a teacher to catch them when they do. By eliciting evidence of learning (or lack of learning), we make it easier for learners to stay on the path of learning. It is important–crucial–that learners and teachers know if learners are on that path, or are veering into the trees (as it were). Very often students manage to arrive at the correct answer without really understanding why. But by cleverly using questions and other techniques and attentively listening to learners, we (and they) can get a better idea of how they are progressing. Teachers, aware of some of the common problems learners regularly experience, should give learners the opportunity to make those common errors. This garden-path/trap technique is unfair on a test, but it is a useful tool in assessment, where the goal is to make errors salient to the learner, his peers, and the teacher.

At present, too few of the questions teachers ask in class help to do this. In research cited by Mr. Wiliam from an elementary school class, 57% of teacher questions were related to classroom management, 35% were used for rehearsing things students already knew, and only 8% required students to analyze, make inferences, or generalize–in a word, to really think. For Mr. Wiliam, this represents a good place to make some changes, and he suggests that we do two things at the same time: promote thinking (and check that it is happening), and increase engagement in that thinking by a larger percentage of the class. These things, unsurprisingly, have “a significant impact on student achievement.”

So how can this be done in language classrooms? Looking at the chapter, it seems that most of this applies to science, history, and math courses, the ponder-and-wonder courses. Language courses, especially if they have a strong skill-building focus, as I pushed for in the last post, seem to require a different kind of learning. But all disciplines are a combination of skills and knowledge, and the techniques Mr. Wiliam describes can be adjusted for different parts of different courses. I will use the terms question and answer when illustrating the techniques, but they could just as well be knowledge questions about usage, applications of strategies, or skill demonstrations of listening comprehension, reading comprehension, pronunciation, and so on. So let’s get to them. Once again, I am selecting the techniques I think best match high school EFL courses. This is not a comprehensive list, and the examples mostly illustrate how I imagine they could be used here in Japan. Here we go.

1) The No Questions By Roll Rule (my variation of the No Hands Up Rule). Fortunately, in Japan we are not oppressed by that small clique of students in every class who seem to raise their hands to venture answers or provide extra comments for almost every question and statement that comes out of a teacher’s mouth. I’ve probably had fewer hands up to answer questions in my 25 years of teaching here than the average North American teacher experiences on a Monday morning. In order to prevent that small clique from monopolizing class (and learning) time, Mr. Wiliam came up with his No Hands Up rule. In Japan, teachers typically go down the class roll or go up and down the rows picking the students to answer questions. It’s more democratic, yes, but there are similar problems. I’ve regularly observed students counting ahead through the questions and the students so they can work out which question will be theirs and focus only on getting that one right, completely disregarding every other question. What we need is for all students to be engaged in answering all questions. So instead of the roll or the hands, Mr. Wiliam recommends a pot of popsicle sticks, each with the name of one of the students written on it. The teacher asks a question and then pulls a stick from the pot and asks that student to answer it. This forces all students to pay attention and try to come up with an answer, since they don’t know when their name will be called. Of course, variations on this can make it better for your class (see the video for some of these). Adding more than one stick for certain students is one way, and putting a student in charge of pulling names is another. Another technique is Pose-Pause-Pounce-Bounce, which can be used along with the sticks: ask the question and give everyone a bit of time to think; then choose a student to answer; then ask another student to evaluate the first student’s answer.
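For teachers who would rather use a phone or laptop than an actual pot of sticks, the same idea can be sketched in a few lines of code. This is only my own illustration of the random-selection idea, not anything from the book; the student names and the decision to give one student an extra “stick” are invented examples.

```python
import random

# A digital "pot of popsicle sticks" (all names invented).
# Each entry is (student, number_of_sticks); an extra stick makes that student
# a little more likely to be called on, as in the variation mentioned above.
class_list = [("Aiko", 1), ("Haruto", 1), ("Mei", 2), ("Ren", 1)]

# Build the pot: one entry per stick.
pot = [name for name, sticks in class_list for _ in range(sticks)]

def draw_stick():
    """Pull one stick at random; names stay in the pot, so no one can relax
    after answering once."""
    return random.choice(pot)

# Pose-Pause-Pounce-Bounce: one student answers, a different student evaluates.
answerer = draw_stick()
evaluator = draw_stick()
while evaluator == answerer:
    evaluator = draw_stick()

print(f"{answerer}, what do you think?")
print(f"{evaluator}, do you agree with {answerer}'s answer?")
```

Any random name-picker app does the same job; the point is simply that nobody knows whose name comes next.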

Some of you are already shaking your heads. Too many students will answer with “I don’t know.” It’ll be stick after stick after stick of “I don’t know.” What’ll you do then, huh? Well, Mr. Wiliam offers a few suggestions for that, too, because it is essential that all students be brought into the ring of engagement. Get more students to answer and then ask the I-don’t-knower to choose the best answer. Or gamify things a little. Use game techniques like “Phone a Friend”, or give the student two choices and let him or her gamble on the answer. The key point is: keep them engaged and thinking, no matter what it takes, for as long as it takes. Don’t let them slip into drowsy disengagement in the warm sunlight in the back corner of the class. Sleeping students are a real problem in Japan. Sleeping should not be allowed. A policy of zero tolerance for disengagement should be embraced. It’s not easy and it might negatively impact the brighter, more motivated learners for a while, but in the end it is a better approach, Mr. Wiliam argues. Watch the video of The Classroom Experiment to see some blowback, though.

2) Hot Seat and Waiting Time. In the Hot Seat technique, one student is chosen to answer several teacher questions. Another student is then chosen to summarize or report on what the first student answered. The teacher then gives his or her evaluation. The reason for this is to give learners enough waiting time to process and evaluate in their own heads the answers of their peers before the teacher provides the “correct” answer. Without that waiting time, learners just listen and wait for the right answer from the teacher rather than develop the habit of evaluating ideas themselves. This, of course, can be done with any questions. Be sure to allow enough time for everyone to hear, process, and assess an answer before you, as the teacher, pronounce judgement on it.

3) Multiple Choice Questions For Thinking. Give the learners a set of three, four, or five sentences and ask them to answer a question about them. Which are grammatically correct? Which are academic and which are more casual? Which grammar rules are true? How are the items related? Which one doesn’t belong in the set? Etc. These are all questions that can stimulate pair, group, or whole-class discussions.

4) Variations on No. 3. The book mentions many ways of making use of questions, multiple-choice items, or statements for evaluation. Giving learners cards (A, B, C, D, for example) that everyone can hold up to display their choices can be a nice way of getting the whole class involved in answering. Exit passes are another variation. Each student must write and submit an answer or an opinion on a slip of paper before leaving the class. This forces all students to participate and gives the teacher something concrete in his/her hands.

Many of these techniques are second nature to many teachers, but it is amazing how many never dip their toes in the waters of achievement checking until they are slashing strokes on the final tests. Making thinking visible has been a buzzy concept for the last few years. One book promotes many ideas for doing so, one of which fits right in with what Mr. Wiliam is suggesting. When eliciting student responses, ask a follow-up question: What makes you say that? This is something I’ve tried successfully in my own classes. It makes students think more deeply and justify their ideas more. Ideas like this are not only effective in L1 content courses. Ideas and approaches like Mr. Wiliam’s could work very nicely in EFL classrooms in Japan. At present there is a strong tendency for the teacher to just teach, imparting (or so he/she believes) knowledge to learners. Learners are asked to “study.” But aside from memorizing words or memorizing the text, they usually don’t know what to do. Next year, the Ministry of Education is pushing for all teachers to teach English in English. Mr. Wiliam’s techniques can fit really nicely with that. The techniques listed above could allow for more meaningful use of English in the classroom, more engagement by all learners, and very possibly more learning.

Next: Part 3, Moving right along.

Formative Assessment Pt. 1: Learning Intentions

This is the first post of a series considering embedded formative assessment in EFL classes at high schools in Japan. In previous posts (here and here), I mentioned some of the potentially powerful reasons for making use of this type of formative assessment. Dylan Wiliam, a teacher/administrator/researcher/teacher trainer from the UK, believes that the single most effective (and cost-effective!) way of improving learning is for teachers (and learners) to provide assessment for learning, not assessment of learning. This requires a rethinking of the purposes, timing, and techniques of assessment. In Japanese EFL classes, it will likely involve more than this… In this series, I will look at the possible application of Dylan Wiliam’s stages of formative assessment here in Japan. To learn more about Dylan Wiliam, you can visit his website, or read this article from The Guardian, or read his latest book about why and how to make greater use of formative assessment, Embedded Formative Assessment. A BBC documentary of his initiatives called The Classroom Experiment is also available on YouTube (Part 1 and Part 2). But before we go on, it is important to clear one thing up: assessment for learning is perhaps not the assessment you are thinking of if you are thinking about grading. It has very little to do with grading and everything to do with informing the teacher and the students (and possibly others, including peers and parents) about how to learn. So the topic of testing for grading will not be addressed here.

Where are we going? Or more precisely, where am I going? This is the question that should be on the minds of all learners as they select a course or arrive for the first lesson. It is a question that needs to be kept in mind as learners proceed through courses as active monitors and agents in their own learning. But often in institutionalized settings, it is not. Instead, the learners do not voice any expectations they may have and just flip through the textbook for a hint of the things they will learn. It’s frustrating for some, but years of similar starts to courses have made it unquestionably normal.

Too often in high schools in Japan, the teachers actually have a fairly similar experience. They flip through the textbook to see what it is they are going to teach in the upcoming year. That is, many schools fail to create a curriculum with specific skill targets for each year and instead let the textbooks (OK, Ministry-approved, so they must be appropriate, no?) decide what they are going to teach. It is the content of the textbook that becomes the de facto syllabus for the course. Having students learn–usually meaning “memorize”–the content of the textbook becomes everyone’s purpose. And it is at this point, the decision not to make a syllabus with specific skill targets and instead just to teach the textbook from the start to however far we get, that the first obstacle to deploying embedded formative assessment emerges. For once the textbook becomes the object of learning, it changes the course content into a body of knowledge or information. It shifts the goals of the course from the learner’s skills and ability to something outside the learner. The starting and ending point of learning is no longer the learner, but the percentage of the textbook that the learner can “master.”

That is not to say that the textbook content cannot be a good part of a syllabus for a course. Used flexibly by a dedicated teacher, a good textbook contains enough interesting activities and content to provide structure for a course and facilitate learning. But there’s an expression in English that we need to keep in mind: when the only tool you have is a hammer, every problem begins to look like a nail. For HS teachers in Japan, the textbook becomes the hammer with which they address the needs of every unique learner in the class. It is not the most effective way to teach, and it doesn’t have to be this way. With clear skill targets, the teacher and the learners get a way of talking about learning. The teacher gets something she can show, demonstrate, and measure progress against. The learners get a model and a yardstick. Of course, all language courses feature a combination of knowledge content and skill content. But a greater emphasis on skills by everyone in the classroom is necessary to prevent the course from focusing completely on knowledge and understanding, things that will not actually matter that much when learners try to make use of the target language in the real world.

“It is important that students know where they are going in their learning and what counts as quality work, but there cannot be a simple formula for doing this,” says Mr. Wiliam. Look at that first part again: “know where they are going in their learning and what counts as quality work.” The learners need to have a better idea of what they can do now and what they will be expected to be able to do and know by the end. They need to see it. They need to see themselves, the target, and the gap. This is, at present, not a common way that schools, English departments, or individual teachers approach the kids who come to them to learn. The focus of Mr. Wiliam’s book and assessment for learning (AFL) is entirely the classroom and the learners in it. He does not spend any time discussing placement tests or proficiency tests. Instead, the learners are asked to consider learning intentions for every unit, topic, or module the class will encounter. And he provides several concrete suggestions as examples of how this can be done. Many of them are collaborative in nature. I went through them and pulled out the ones that I thought could be adapted for use in English language classes in Japan. In most cases, the example is described as I would imagine using the technique in HS English classes. If you want the complete list of original examples, you’ll just have to get the book, something I recommend anyway.

First up is passing out 4-5 examples of student work from the previous year. In the book it is done with lab reports, but it could be done with any kind of student writing (or if you have recorded examples of presentations or student speaking, that would work, too). In groups, the learners rank the works and report on how they assessed them. This lets the criteria for better performance become salient through comparison and discussion. Teachers may want to provide some topics or questions to guide the learners’ attention to specific aspects.

A variation of the above involves the work of the present class. After the writing assignment is completed, the teacher collects all the papers and reads them, selecting what he thinks are the three best examples of student writing. No other feedback is on the paper at this point–no grade and no comments. The teacher hands out copies of the best student writing. The learners are asked to read them for homework and then discuss why the teacher thought these were the best. Then–and here’s the important step–all the students (including the authors of the best papers) get their papers back and are given time to redraft their writing. They then, finally, submit them for a grade.

In “Choose-Swap-Choose,” learners choose a good example of their own work from several they have made (a short recorded speech, for example). They then submit these to a partner, who chooses the one he/she thinks is best. The two students discuss their choices if there is disagreement.

One good idea for reading-aloud or pronunciation classes has learners in groups practicing the recitation of a short passage in the target language. Each group then chooses the learner who they think has the best accent, and the whole class listens in turn to the representatives of each group. The teacher comments on the strengths and weaknesses of each one.

And finally, have the learners try to design their own test or test items for mid-term or end-of-term tests. Of course, this should be done while there is still time to make use of the feedback that emerges from this activity. But in making test items, students clearly show what they think they have learned and what they think is important.

The main thing to point out from all of the above techniques is that they provide feedback to both the learners and the teacher. The learners can use that information to make adjustments to their learning. And the teacher can use it to see what has been learned and how well in order to make adjustments to teaching. All of these techniques promote meta-cognitive skills. They also contribute to the creation of a community of learners. According to Mr. Wiliam, they also definitely lead to better learning. But would this approach work at high schools in Japan? The answer is a great big “it depends.” It depends on the levels of motivation and trust in the classroom. It depends on whether the teacher can afford the time it takes to allow learners to examine and discuss the work of others. And it depends on the mindset of the teachers. They need to be willing to try out a more learner-centered approach to teaching and learning, one with a greater emphasis on skills. Many–too many–teachers prefer to teach content at the students and leave the learning up to them. Too many have their syllabus strapped to the ankles of the syllabuses of the other teachers teaching the course in a given year. There is nothing to do but move along in lockstep. But I think that some of these ideas could be put into practice in almost any school in the prefecture where I work.

In the next post, we’ll look at what Dylan Wiliam says about how to elicit evidence of learning. Part 2: Eliciting evidence.