Who judges the evidence?
Robust research evidence, adjusted to a local school or area context, and combined with professional judgements about priorities, should lead to a better education system. This, in turn, should help each new cohort of students to gain more from their schooling. However, while it is reasonable to expect teachers and school leaders to be aware of their context, and to be able to judge their improvement priorities, who judges which evidence is robust and relevant?
Multiple systematic reviews of education research, in which all of the published and unpublished work on any topic is sought and then synthesised, have revealed that much of what is described by its authors as ‘research’ is nothing of the sort. In addition, many reports of apparent research are indecipherable even to other professional researchers, and most of the remaining clearer reports portray research that is fundamentally flawed and should not be trusted or acted upon. There are good studies and, once found among the rest, they can be aggregated to begin to provide a basis for evidence-led teaching. However, this raises the question of who makes the quite complex judgements about which studies can be trusted, how this evidence is aggregated fairly, and how the synthesised results are best conveyed to their intended real-life users.
The double standard in evidence use research
Over the last 30 years, governments and funders worldwide have sought to improve the quality of evidence produced by publicly-funded research. Understanding of effective interventions to inform education policy and practice has improved since the creation of the US Institute of Education Sciences, the Education Endowment Foundation (EEF) in England, and other such initiatives. There has also been considerable progress in methods of summarising and synthesising research results, through the work of Evidence Centres and others. Evidence of what works, or does not, is increasingly available for the first time.
However, good research evidence is still underused, and poorer research is perpetuated. Education policy-makers generally say that they want, and use, good evidence, but do not always act correspondingly. In terms of practice, only a minority of teachers incorporate evidence-led practices into their planning, relying instead on personal experience, advice from other teachers, and CPD that is not underpinned by research evidence. Initial teacher training often has little input from those with relevant knowledge of current evidence.
The suggested barriers to greater use of good evidence include the time needed to engage with the often volatile research on any topic; users’ lack of skill in finding, interpreting, and implementing evidence; individual, team, or system attitudes and behaviour; rapid staff turnover; and administrative changes. Potential users may also be unaware of the evidence that is available, feel unable to act in accordance with evidence because they lack the authority or resources to change existing practice, or have other priorities and pressures. This matters because the use of flawed research is neither cost-effective nor fair to those most in need of evidence-led improvement, such as disadvantaged students.