Celebration of a seminal work
This article is a celebration of the publication of the 5th Edition of Andrew Pollard’s Reflective Teaching in Schools (Pollard et al., 2018). The updated version includes adapted summaries of the research evidence in the Education Endowment Foundation’s Teaching and Learning Toolkit, an indication of how the relationship between research and practice has changed over the last twenty years. As a new lecturer in initial teacher education, I was an early adopter of the first edition of Reflective Teaching in Primary Schools (Pollard & Tann, 1997) and wished it had been available when I did my PGCE. Since the publication of the EEF Toolkit I have thought a lot about the relationship between research evidence and reflective teaching, and I develop some of those ideas in this article.
Research and reflection: neither is sufficient on its own
Researchers and practitioners are necessarily interested in different things. Researchers want answers to general questions about what is effective and seek to develop theories or models which can be applied across contexts. Practitioners are interested in how to meet the needs of their learners, and in the influence of the contexts and relationships which researchers have often sought to efface. This does not mean that these perspectives are incompatible. We just need to understand what each can contribute to effective teaching and learning.
In terms of research, single studies are not enough in education. There is too much variation between contexts and settings, between schools, teachers and pupils, as well as in the application of educational concepts and ideas, to be confident of the findings from a single study, no matter how robustly designed, implemented and analysed. A single study can be interesting, but never conclusive. A cumulative and comparative approach is therefore an essential tool in making progress in education research and in preventing the pendulum of policy changes or public opinion from swinging backwards and forwards each decade. Meta-analysis, such as the kinds of studies summarised in the EEF Toolkit, offers us the best way to get an overview of research findings in a specific area of educational practice, such as phonics. Such research can also inform our understanding of literacy more broadly, by looking across the meta-analyses of phonics and reading comprehension or other areas of intervention research in reading. Understanding the relative value of different teaching and learning approaches, such as collaborative learning or the contribution of digital technologies, can help set findings from different areas of research in perspective. This kind of synthesis provides a map of the field. It may not provide a route map or a set of directions for a particular journey, because these depend on the precise starting point and destination we have in mind. However, it can help orient us as we focus on a particular educational goal.
To pursue this analogy, it seems to me that the current state of knowledge derived from meta-analysis in education is a bit like a medieval map of the world, a mappa mundi, where some areas, such as learning to read, are better known and more accurately charted. In other areas, such as collaborative learning or small group teaching, the evidence is less secure, but still coherent and positive. There are also other areas of research, like the “here be dragons” section of a mappa mundi, where you can find the mythical tales of learning styles, multiple intelligences and coloured lenses which cure all kinds of dyslexia, all of which appear to offer an educational panacea (see Higgins, 2018, for a fuller account of this argument).
The distortions produced by the aggregation of individual studies, with their varying designs, populations and measures, first to the level of meta-analysis and then again to the level of meta-synthesis, mean that this picture is not yet as accurate or precise as we would like. It tells us what has worked in these studies “on average”, but carries all of the statistical risks of averaging averages. I think of this evidence as providing practitioners with a “good bet” for what is likely to be successful or unsuccessful, based on how large the average effect is as well as the extent of the spread of effects. We also have to remember that the effects in these studies are based on a comparison or “counterfactual” condition. In averaging the effects, we also “average” the comparison conditions. We become more certain that something is likely to be effective, but less certain about what it is better than. This matters because an already highly effective school is likely to be better than the “average” comparison or control school, so any typical gains found in research will be harder to achieve in an already successful classroom. The larger the effect and the narrower the spread of effects, the more likely it is to be useful in other contexts.
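To make the risk of averaging averages concrete, here is a minimal numerical sketch. The effect sizes and sample sizes are invented purely for illustration, and real meta-analysis weights studies more carefully (typically by the precision of each estimate), but it shows how an unweighted average of study-level results, a weighted average, and the spread of effects can tell quite different stories about the same handful of studies.

```python
from statistics import mean, stdev

# Hypothetical standardised effect sizes from five invented studies,
# each measured against its own comparison condition.
effects = [0.60, 0.45, 0.10, -0.05, 0.40]
sample_sizes = [40, 60, 400, 350, 80]   # pupils per study (also invented)

# A simple "average of averages": every study counts equally,
# however large or small it was.
unweighted = mean(effects)

# Weighting by sample size gives a different answer here, because the
# two largest studies happen to show the smallest effects.
weighted = sum(e * n for e, n in zip(effects, sample_sizes)) / sum(sample_sizes)

print(f"Unweighted average effect:   {unweighted:.2f}")     # 0.30
print(f"Sample-size weighted effect: {weighted:.2f}")       # 0.11
print(f"Spread of effects (SD):      {stdev(effects):.2f}")  # 0.27
```

On these made-up numbers the headline “average effect” of 0.30 looks like a solid bet, yet the weighted figure is much smaller and the spread is almost as large as the average itself, which is precisely why both the size and the spread of effects matter when reading a synthesis.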
This also suggests that we need to be clear about what we should stop doing. Whenever schools adopt something new, they must stop doing something else: there is no spare time in schools. We rarely reflect on this, so it can be hard to tell what gets squeezed out. Research can help us think about this too, by providing information about things that haven’t worked, or that tend not to work so well, on average. Research has clear limitations in its specific applicability: it is about what has worked on average, not what works (or what will work) here. It is only once we understand this that we can use it appropriately.