And there are no letters in the mailbox
And there are no grapes upon the vine
And there are no chocolates in the boxes anymore
And there are no diamonds in the mine
Leonard Cohen
Being clear about evidence-based reform
It’s been a privilege to work on evidence-based reform over the last ten years. To watch as the landscape has changed, more robust research has been carried out, and interest in using that research has grown. I think we’re at an important stage in the movement, and perhaps as much at risk of falling back as of moving forward. The risks come mostly, I think, from the danger that evidence-based reform won’t deliver the results that people expect. More on this later, but first, let’s go back ten years.
When I first began working on evidence-based reform in education, it seemed that here was a place where I could make a difference. If we could get the research evidence down from the library shelves, simplify it into words of fewer than six syllables, and get it out to teachers and schools, we could improve outcomes for children. It seemed like a substantial prize.
There are many building blocks of evidence-based reform, and as I have explored each in turn, I have come to see that there are challenges and limitations to each of them. Taken together, they are so substantial that I think we need to be clearer about what evidence-based reform can and can’t deliver, so that everyone involved in education knows what they are getting themselves into. I’m still a believer in evidence-based reform, because the alternatives are worse, but we need a better understanding of the challenges.
So what are the challenges?
The state of the evidence
There is an awful lot of research evidence out there… but there is also a lot of awful research evidence out there. The poor research evidence tends to exaggerate the potential impact of new approaches and innovations. As a general rule of thumb, the better the research, the smaller the impact. When we were looking for research to include in our Best Evidence in Brief newsletter (http://www.beib.org.uk/), finding, for example, a meta-analysis that showed a large effect size raised a red flag.
The large effect has usually come about because the meta-analysis includes poor-quality studies (e.g., of a short duration, using researcher-designed measures, etc.). These kinds of studies are unhelpful not just because they’re not very good, but because they raise expectations about the degree of change that we might see in the classroom. John Hattie’s notion that only interventions with an effect size over +0.40 are educationally important is similarly unhelpful.
To set my stall out early, I think it is reasonable to expect that, if you implemented the currently available best evidence across all aspects of school practice, you would see an average improvement of +0.30, maybe +0.40 if you’re lucky. This is well worth having, but it doesn’t, for example, quite close the achievement gap between disadvantaged students and their peers.
We don’t have enough good evidence upon which to rely. For example, the ratings for around 40% of the strategies listed in the EEF toolkit are based on limited evidence or worse (including important issues such as performance pay, and setting or streaming). So we can’t yet be confident about the conclusions of this research. It may be that, in the future, new studies overturn the current recommendations. In a way this is great: it reflects the progression of science and learning. But it is an uncomfortable situation to be in when we want to influence practice. Practitioners want a “right answer” – should we stream or not? So equivocation on the evidence isn’t very helpful, even if it’s true.
Most interventions are no better than business-as-usual. Of 100 randomised controlled trials conducted by the Education Endowment Foundation, around one in five has evidence of a positive impact (i). This is something of a shock to the system. One of the objections to randomised controlled trials (RCTs) in education has been that it is unethical to deny the children in the control group the “treatment”, because there is an expectation that this shiny new treatment will work. Now we find that almost the opposite is the case.
It is much more likely that the treatment will not work (although thankfully it also usually causes no harm). In addition, although the EEF have conducted more than 100 RCTs, that still isn’t enough useful evidence. Evidence 4 Impact (https://www.evidence4impact.org.uk/) collects evidence on the interventions available to schools in England. The idea is that, if schools are considering an intervention, they should look for those that have evidence of effectiveness. Of the 171 interventions on the site, 16% have evidence of impact, 11% have been evaluated but showed no evidence of impact, and 73% have not been evaluated.
What does the fact that most interventions have no positive impact mean for the new ideas and initiatives that teachers and schools are trying every day in classrooms up and down the country? Not as part of a research project, but as part of everyday practice. Most new ideas come, not from research, but from colleagues in your own and nearby schools. Are they having any positive impact? The short answer is that we don’t know. My view is that good teachers can make just about anything “work” if they think it’s a good idea, and trying to improve your individual practice is a good thing. At the very least, it keeps you interested, enthusiastic and engaged, and it probably does no harm.
But there isn’t great evidence that teachers are continuously improving on their own. So small changes that are being introduced probably aren’t making much difference, and are unlikely to be a source of undiscovered diamonds. But I’ll come back to this later.
The impact of teachers and schools
The greatest influence on a student’s achievement is the student themselves (i.e. their own attributes – prior achievement, motivation, socio-economic status, etc.). Next in importance are their peers in the school. The impact of the school itself comes a distant third (ii, iii). Within a school, in each class, the effectiveness and therefore impact of an individual teacher is important, but it is proving difficult to identify effective teachers and improve less effective ones (the Gates Foundation spent $575 million in the US trying to do this, with no impact) (iv).
This noise in the system makes it difficult for teachers and schools to identify the impact of changes in school practice.
Changes in the composition of a school (its classes and cohorts) can shift outcomes far more dramatically than improvements in teaching and learning. This noise also makes it difficult to identify “better” schools. Schools may be “better” simply because their current student body is “better” (regardless of attempts to level the demographic playing field). Visiting better schools to identify their successful strategies isn’t a bad idea, but it can be difficult to ascertain whether their success is actually due to those strategies, what those strategies are, and whether you can implement them in your own school.
Within a school, or a small group of schools, it can be difficult to identify the impact of new approaches. The Institute for Effective Education (IEE) supported nearly 30 small-scale evaluations run by schools. The main learning point for the schools running these evaluations has probably been how difficult it is to organise a “fair test” within a school. The constraints of school organisation (timetabling, teacher-class allocation, curriculum structure, etc.) make it difficult for schools to isolate and evaluate the impact of a single change in practice. Yet such evaluation is essential. Although it is not generalisable, and may exaggerate the impact of the intervention as small-scale studies tend to do, it is an important first indicator of whether the intervention should be scaled up or scaled back. Often, too, it provides useful information on the implementation of change across the school (for example, the extent to which other teachers can take up “your” idea effectively).
Securing evidence-based reform at scale
What do these challenges mean for the hopes of delivering evidence-based reform across the school system?
The evidence-based interventions available to schools are mostly built around practices with evidence of effectiveness – interventions to improve students’ reading or maths skills, social-emotional learning, classroom talk, and so on. And yet, as we have seen, only a small proportion of those interventions have themselves been shown to have a positive impact.
Schools or practitioners can, of course, implement evidence-supported approaches by themselves, though all the evidence suggests that this is challenging. Simply defining what the approach is can be surprisingly difficult. The EEF toolkit has feedback as its top-rated strategy, but using feedback well is hard. Dylan Wiliam identifies that, of eight ways to deliver feedback, only two result in a positive outcome (v).
The EEF’s implementation guide advises that schools should “Specify the active ingredients of the intervention clearly: know where to be ‘tight’ and where to be ‘loose’” and “Make thoughtful adaptations only when the active ingredients are securely understood and implemented” (vi). The problem with this advice is that the active ingredients or core components of an intervention are rarely, if ever, available to schools and practitioners (vii).
It’s also difficult for practitioners and schools to go directly to the research and try to implement research-proven strategies from there. Numerous EEF-supported pilots and trials have attempted this, with no impact (e.g., viii, ix).
There are a few schools and school organisations trying to do this, and they make for interesting case studies – such as the Aspirer Teaching Alliance, where teachers are “drenched in opportunities to engage with the evidence” (Megan Dixon, presentation at Chester ResearchED, March 2017). But it has yet to be shown robustly (no matter how much I love them) how far these schools can improve outcomes for themselves and, perhaps more importantly, for others.
Before lockdown, I interviewed 30 practitioners in schools such as Aspirer – “research-sensitive schools” trying to make the most of research evidence. What came across was the need for leadership that emphasised the moral purpose of the project, a focus on teaching and learning, maximising the opportunities (formal and informal) for professional development, and a culture that supported trust and openness. It was also clear that this needed to be a school-wide approach: though practitioners are the gatekeepers to what happens in the classroom, it is difficult for them to make meaningful change on their own.
Whole-school improvement programmes that are based on the best evidence do exist. One such, Success for All, has been evaluated in more than twenty trials and has shown an average effect size of +0.29. This gives an idea of the amount of improvement that evidence-based reform can deliver. It would be enough to close perhaps 50-75% of the gap between disadvantaged students and their peers (although the peers would also improve).
Put another way, it would ensure that many more disadvantaged and previously lower-attaining students had the reading and maths skills to help them access the future curriculum and then succeed beyond school. A future whole-school improvement programme where the school uses research evidence itself might achieve more or less than this, but it is the scale of impact we can expect.
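To make the arithmetic behind that estimate explicit (the size of the gap here is an illustrative assumption on my part, not a figure from the Success for All trials): if the gap between disadvantaged students and their peers is somewhere in the region of 0.4 to 0.6 standard deviations, then an improvement of +0.29 standard deviations for disadvantaged students would close roughly 0.29 ÷ 0.6 ≈ 48% to 0.29 ÷ 0.4 ≈ 73% of that gap – broadly the 50-75% range suggested above.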
This level of impact is only possible with intensive, whole-school implementation of evidence-based reform. Less intensive implementation will clearly deliver much less. So, for example, introducing metacognitive strategies across a school, or using retrieval practice starters, is likely to have a far smaller impact – one that may be almost undetectable. What kind of impact are teachers and schools expecting from introducing such approaches, and how will they know if they have achieved it?
The evaluation projects that we have supported have shown that this evaluation can be done, but it is not easy, nor is it commonly carried out at the moment. It is more likely that teachers and schools will rely on the existing approaches they have to evaluate change. This may be fine, but, as with the implementation of many new approaches in schools (the social and emotional aspects of learning (SEAL) programme comes to mind (x)), it is likely to result in a mix of beliefs about effectiveness, from enthusiastic evangelist to disappointed sceptic. And here lies the risk: that evidence-based reform might end up being treated like any other fashion within schools – embraced or dismissed based on belief rather than science.
I think there are a number of issues that need further discussion and communication in order to ameliorate this risk:
- The potential impact that evidence-based reform can have within the school system. Let us all be clear about how much change can actually be achieved. How is this then presented to the wider community of parents and public? Can we agree a culture of realistic expectations?
- The effort that is required, from teachers and schools, to achieve this impact. To achieve the most impact requires “relentless” effort, but how should we balance this with the workload, mental health, recruitment and retention challenges that schools face?
- More and better evaluation of the impact that changes to practice are having, from small-scale, in-school evaluations all the way up to continuing randomised controlled trials, particularly of the interventions that schools are actually using.
- And, as always in any research-based article, more research, particularly on issues that are important to schools but under-researched.
I still believe that evidence-based reform is the best way of achieving significant, worthwhile improvement in outcomes for children, but I think it is vital that we have a shared understanding of how difficult these gains are to achieve, in order to avoid disappointment and disillusion with the evidence-based movement.
Top tips for practitioners
It seems likely that sustained, school-wide engagement with research evidence is required to make a significant difference to pupil outcomes.
For individual practitioners, though, there is still benefit to be had from engaging with the research evidence, which offers a rich source of ideas and challenge. Where it exists, research evidence provides a secure foundation and framework for practice.
Interest in research engagement has never been higher, and there is a vibrant, supportive community out there from whom you can benefit – distilling, reflecting, and applying research in practice.
Jonathan Haslam was until recently the Director of the Institute for Effective Education and has been working for the last twelve years to help practitioners and policy makers use research evidence in practice. He has been involved in a wide range of projects supporting evidence use, including the Research Schools Network and Evidence for the Frontline.
References
- (i) Lortie-Forgues H, Inglis M, Rigorous Large-Scale Educational RCTs Are Often Uninformative: Should We Be Concerned? Educational Researcher 48(3) 158-66 (2019)
- (ii) Coleman J, Campbell E, Hobson C, McPartland J, Mood A, Weinfeld F, & York R. Equality of Educational Opportunity. Washington, DC: US Department of Health, Education, and Welfare. (1966).
- (iii) Gorard S & Siddiqui N. How trajectories of disadvantage help explain school attainment. SAGE Open 9(1): 1-14 (2019)
- (iv) Stecher BM et al, Improving Teaching Effectiveness: Final Report: The Intensive Partnerships for Effective Teaching Through 2015–2016. Santa Monica, CA: RAND Corporation (2018).
- (v) Wiliam D, Seminar for district leaders: Accelerate learning with formative assessment (2013) https://www.slideshare.net/NWEA/dylan-wiliam-seminar-for-district-leaders-accelerate-learning-with-formative-assessment-2013-19027603 (accessed November 2019)
- (vi) Education Endowment Foundation, Putting Evidence To Work: A School’s Guide To Implementation (2018)
- (vii) Haslam J, What are active ingredients?, IEE blog, (2020) https://the-iee.org.uk/2020/10/22/what-are-active-ingredients/
- (viii) Murphy R, Weinhardt F, Wyness G and Rolfe H, Lesson Study, Evaluation report and executive summary, Education Endowment Foundation (2017)
- (ix) Rose J, Thomas S, Zhang L, Edwards A, Augero A, Roney P, Research Learning Communities, Evaluation report and executive summary, Education Endowment Foundation (2017)
- (x) Humphrey N, Lendrum A, & Wigelsworth M. Social and emotional aspects of learning (SEAL) programme in secondary schools: National evaluation. London: Department for Education. (2010)