· School effectiveness and improvement, including the methodology of school effectiveness research; the use and effects of feedback; the use of performance monitoring information systems.
· Evidence-based education, the nature of evidence and its role in influencing practice and policy; the involvement of practitioners in research.
· Evaluation designs, including the use of randomised controlled trials.
· Research methods.
· Assessment and comparability.
I wrote this in 2005 for a brochure that aimed to persuade undergraduates to consider studying Education.
My first degree was in mathematics. I enjoyed maths at school and was good at it. I liked the power of abstraction in representing and solving complex problems. I became a maths teacher (it's a long story ...) and got interested in the power of education as a social and personal force. I believed that education had the potential to transform the ways people think about things, to open their eyes to new ideas, to challenge what they thought they knew, to provide them with opportunities for advancement and fulfilment, and to redress inequalities and injustices. In other words, it was a pretty important - and very satisfying - thing to be doing. But it was also an important thing to understand better. If we have specific aims for an education system (such as to equalise opportunities, or to promote a particular kind of learning), then some ways of trying to achieve them must be more effective than others. Which approaches are best, and how do we know?
These were the kinds of questions I was interested in. I left teaching and studied full-time for a PhD in education. When I finished, I worked as a researcher in the CEM Centre (www.cemcentre.org) at Durham University and then also as a lecturer in the School of Education. My CEM Centre research now involves working with nearly 3000 secondary schools to help them assess and monitor (and so improve) their own performance. Our work supports these schools in achieving their aims in a very pragmatic way, so it combines rigorous research with practical applications. We also work to promote the use of good evidence to inform policy. Unfortunately, most educational innovations are not really 'evidence-based', and in many cases the kinds of evidence we would need to find out which approaches are best do not exist. So the answer to my question is that we mostly don't know - yet. This is both a daunting challenge and an exciting opportunity.
My research has covered a range of areas: evaluations of numerous programmes, school effectiveness research, the impact of feedback, assessment (including issues of comparability of standards), and school selection. None of my research has been specifically about mathematics education, but most of it uses statistical methods of some kind to analyse data - along with a range of other methods. In this sense it draws on my mathematical background, though I have never been formally taught any statistics (not since I was 16, anyway). In my research, as in my teaching (I teach the research methods module for the Ed.D. course), I believe statistics should be used as a tool to help us understand and approach a problem, rather than as something we need to know about for its own sake. Understanding when a particular statistical method is appropriate, and what conclusions it can - or cannot - support, is the key thing. All methods require some assumptions, and these are often problematic, but sometimes the conclusions are not too sensitive to normal violations of those assumptions. Understanding these kinds of issues is vital for anyone who wants to be able to do or read research, and it is far from easy. However, I believe it does not need to depend on any advanced knowledge of mathematics or statistics.
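The point about conclusions sometimes being insensitive to violated assumptions can be made concrete with a small simulation. The sketch below (my illustration, not part of the original text; the function names and parameter values are hypothetical) checks the Type I error rate of a two-sample t-test when the data are strongly skewed (exponential rather than normal): with moderate sample sizes, the test still rejects a true null hypothesis at roughly the nominal 5% rate.

```python
import random
import statistics

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    standard_error = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / standard_error

def rejection_rate(n=50, trials=2000, critical=1.96, seed=1):
    """Proportion of trials in which |t| exceeds the (approximate)
    5% critical value, when both groups are drawn from the same
    skewed (exponential) distribution - so the null hypothesis of
    equal population means is true by construction."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        a = [rng.expovariate(1.0) for _ in range(n)]
        b = [rng.expovariate(1.0) for _ in range(n)]
        if abs(t_statistic(a, b)) > critical:
            rejections += 1
    return rejections / trials

# Despite the clear violation of the normality assumption, the
# empirical rejection rate should come out close to the nominal 0.05.
print(rejection_rate())
```

The choice of the exponential distribution is arbitrary; the same experiment with smaller samples, or with a distribution with much heavier tails, would show where the robustness starts to break down - which is exactly the kind of judgement the paragraph above is describing.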