The Skeptics Society & Skeptic magazine


Why Education Policy and Practice Have Become Research-Free Zones

When you drive past any American school, you’ll see signs telling you to reduce your speed and declaring the area to be a “drug-free zone,” with draconian penalties for violators. While we can all agree on keeping drugs away from school children, drugs are not the only thing we keep out of schools. Unfortunately, when it comes to educational policy and practice, research findings have also found themselves banned from schools. Why is that?

The State of Education Research

Getting your measurements and calculations right matters immensely when building an airplane that is unlikely to crash or a building unlikely to collapse. When a magnitude 7.8 earthquake struck Turkey and Syria in 2023, outdated building methods contributed greatly to the death toll.1 Engineers and builders need to make sure that the evidence they bring to the table is factually correct. Once you leave the concrete world where accurate facts are prized—or at least clearly have consequences you can detect—things get a lot fuzzier. In the realm of social science, particularly education and policy research, it isn’t always clear to a policymaker, practitioner, or parent what constitutes good evidence, especially when experts disagree.

Does that mean that researchers in the social sciences and education don’t think they have accumulated important evidence? No. So, from the perspective of those who recognize the value of accumulated knowledge and research evidence, it is puzzling that those in education policy and practice don’t appear to listen to researchers or to use what is considered the “best evidence” to date on a particular topic. When he realized that most research doesn’t impact policy or practice, educational psychologist David Berliner lamented: “Once upon a time, early in my career, when the world seemed quite a bit simpler than it really is, I believed that my research, and that done by my fellow educational psychologists, would influence what happens in America’s classrooms and in teacher education. I believed in the model of research that famous researchers often espoused.”2 That is the belief many graduate students in the social sciences initially hold, and one that many distinguished scholars in their subfields still hold. Why?

Education is filled with fads and myths. Hot topics such as learning styles,3 multiple intelligences,4 grit,5 and mindset6 have, at best, only weak support, even though they continue to be trumpeted by the media and have become a part of the popular conversation. Though these are well-recognized examples, the history of education shows7, 8 that they are by no means exceptions.

The replication crisis, in which many published research results have proven difficult or even impossible to reproduce, has sent shock waves across all areas of science,9 especially social science,10 including the oftentimes policy-influential domain of economics.11 A paper published in Science that estimated the reproducibility of psychological science research was downloaded over 40,000 times and covered in over 231 news outlets,12 under headlines such as “Over half of psychology studies fail reproducibility test” (Nature13) and “Psychology’s replication crisis is running out of excuses” (The Atlantic14). Not only are social scientists themselves justifiably skeptical about whether some seemingly established findings will stand the test of time, but the broader public has become cynical regarding the value of expert opinion in general.

Within the social sciences, different fields employ different theoretical, empirical, and tool-based approaches, shaped by niche-specific incentive structures (pay, promotion, awards, recognition) that are typically linked to publishing in particular field-valued journals. Generally, the more publications you have in the more prestigious journals, the greater your chance of receiving pay raises, promotions, prizes, and other perks. Since this translates into the need to write for the handful of peers in one’s field, the disciplines are largely siloed: publications and information get stacked up in specialist journals, encased in technical language, equations, and symbols. Only rarely, and at some risk, do scholars dare build on the work of those outside their own discipline, or in some cases even within it. The unfortunate reality is that the use of research-based evidence in formulating education policy is quite limited because politics and personal values dominate. For example:

Ron Haskins, a respected former Republican committee staffer in Congress and now a Brookings Institution scholar, was asked several years ago about the role research played in what was, at the time, a contentious congressional debate about welfare reform. Without missing a beat, he responded that, based on his personal experience, the best research might exert five percent of the total influence on the policy debate, with an upside potential of 10 percent. Personal values and political power, Haskins went on to say to his now silent and disappointed audience, were what really mattered in Congress.15

Why Research Carries Little Weight in Policymaking

Policymakers16, 17 have explained that research use is not really linear in the way that most researchers hope.18 On the playing field of hardball politics, research is more often used to: (a) support and justify a favored, long-held ideological point of view, or (b) help inform the thinking around a decision-making process in a way that is quite specific, context-dependent, and disconnected from the findings in a journal article. Simply stated, research results usually just sit on the bench during the policy-making process.

Moreover, the rigor of the methods employed is rarely the primary concern of those using the research. In making policy, what counts is whether a given piece of research provides support for a predetermined decision, in a particular on-the-ground context. Bill Knudsen, former Deputy Assistant Secretary in the U.S. Department of Education, noted in a personal communication that, based on what he saw in working with legislators, perhaps at most 10 percent of decision making in education policy is evidence-based, and the definition of what counts as evidence is quite loose, with little distinction made between mere qualitative evidence and the ascending levels of scientific rigor, such as evidence from randomized controlled trials (RCTs) and above. (See Figure 1.) Of course, the unfortunate fact that evidence often fails to impact practice is also true in health care19 and numerous other fields: “Yet even today, health care experts maintain that 80 percent to 90 percent of daily medical practice is not anchored in such evidence because the specific, detailed information practitioners need still does not exist.”20

Figure 1. Research Design & Evidence Chart, redrawn based on a chart by CFCF
[CC BY-SA 4.0] (See https://en.wikipedia.org/wiki/Evidence-based_education)


One reason for this lacuna is that in U.S. education policy a small set of individuals, often dominated by education economists or graduates of certain education policy or reform programs, tend to cite each other while ignoring much of the broader social science evidence that bears importantly on particular topics.21 And this is probably not intentional. When you are trained to think in a certain way and exposed largely to others who think the same way and value similar research methods and approaches, groupthink tends to take hold. While this problem is inherent to all academic disciplines, not just education policy, some are better than others at being truly multidisciplinary.

Academics who produce research evidence across the social sciences and in education believe their subfields have much to offer those in education policy and practice, so they are often frustrated that decision makers don’t usually read their publications. The public doesn’t read research publications either. A.K. Biswas and Julian Kirchherr, who measured the impact of academic conferences and publications on real-world practice, noted, “Practitioners very rarely read articles published in peer-reviewed journals. We know of no senior policymaker or senior business leader who ever read regularly any peer-reviewed papers in well recognized journals like Nature, Science or Lancet.”22

The history of education reform shows that most efforts have not proven successful.7 This is largely because top-down education reform efforts tend to evaporate at the point of impact, namely, the classroom.8 This is true even for efforts such as the Common Core, which enjoyed wide bipartisan support.23 The disconnect between research and policy/practice is the rule, not the exception. Nor is such lack of success confined to education: reform efforts in criminal justice and welfare policy have often gone awry.15 The education research and policy community often avoids discussing failure,25 perhaps in part because many look to education as the solution to those and, increasingly, to most perceived problems in society. Thus, though educational research has accumulated and, in some ways, become more rigorous, the disconnect between research on the one hand and policy and practice on the other has remained quite consistent over time.2 Realistic policy scholars argue that this doesn’t mean policy reform should be abandoned, only that incremental change is more likely to prove effective than any quick-fix “silver bullets.”24

Solutions

Some simply accept the verdict of history—the disconnect between research and practice is to be expected as the default condition in any field. This is especially true given that experts in education policy are unclear as to what a genuine solution might look like. Nonetheless, there continue to be important efforts to join the two. Both history and common sense suggest that gradual steps, monitored, measured, and revised, hold greater promise than one massive attempt to bridge the chasm. Here are some suggestions.

Improve the quality of evidence.

A necessary first step is to improve the research evidence base in education policy. Sadly, replications in the field of education are not currently standard practice. When researchers examined the top 100 education journals, they found that only 0.13 percent of education articles were replications.26 And though economists are highly influential in education policy research, as are political scientists and, to some extent, sociologists, the work of psychologists and other social and behavioral scientists is notable mainly for its scarcity. Economic thinking and approaches are influential in all areas of policy, including education,27 though they provide only one of the toolsets available for researching social science, education, and policy issues.28

Additionally, a small group of education policy scholars and think tank influencers dominate the field, serving as gatekeepers who determine which ideas shape the research, which topics get discussed, and which never receive a fair hearing.21 A truly multidisciplinary approach that integrates evidence from every discipline relevant to education research and policy would be another important step in improving the evidence base. New research is not always necessary—just integrating research evidence that has accumulated in as yet unincorporated fields would be productive.

Multidisciplinarity, of course, faces the incentive structure constraints in academia that arise from the silo effect described earlier, but hopefully that too can change incrementally over time.29 One way forward would be to get the broader public, especially policymakers and practitioners, to read research.22 Even when the most rigorous relevant research is collected in an educational repository such as the What Works Clearinghouse, policymakers and practitioners often don’t take the time to read it. They have different incentives and interests than researchers, who are trained and rewarded for designing experiments and evaluating scientific evidence.30

Engage the public.

Academics should publicly engage and teach scholars in other fields, as well as the broader public, about their research, whether through writing for newspapers and magazines, going on podcasts and giving interviews, writing popular books and articles, or using social media.31 Some go so far as to advocate giving research away to the public by making clear accounts of research methods freely available on the Internet, along with the data and results, so that studies can be easily replicated. However, this must be done responsibly, given that the replication crisis has made it unclear in some areas whether the cumulative evidence is strong enough to communicate or to guide policy and practice.32 The challenge is to explain how research results published in an academic journal are actually relevant to the average person’s everyday life.

Build better relationships.

If they want to influence policy, researchers should get to know state-level policymakers and form mutually beneficial relationships with them. Policy making at the national level is usually out of reach, so there is a greater chance at the state or more local level.15 Doing so, however, is a two-way street. Building relationships with state or local-level policymakers and politicians requires learning how the political process works and being available to help solve real, on-the-ground problems within short time frames.

Communicate in plain language.

Academics also need to be able to communicate their findings in plain language so that those outside academia—or even their particular discipline—can understand and use that knowledge.32 History professor Patricia Limerick33 made this case forcefully in her poignant article titled, “Dancing with Professors: The Trouble with Academic Prose.” Other scholars argue: “If academics want to have an impact on policymakers and practitioners, they must consider popular media, which has been ignored by them.”22


The technical jargon used in each academic subfield often prevents the integration of knowledge across subfields and hinders those outside academia from using relevant research findings. Writing policy briefs and other publications in plain language is not incentivized in traditional academic positions, where a Darwinian calculus rules in the form of “publish and get grants or perish.”34 Yet it is precisely the mass media that can transmit knowledge so that research findings can better find their way into policy decision-making. Learning why, when, and how to enter the public arena should be integrated into graduate training programs across the physical, biological, and behavioral sciences if researchers hope to impact policy and practice.

Publishing for the general public needs to be incentivized.

An additional challenge is that academia rewards producing research that those in policy and practice just don’t think addresses their needs.16 One solution would be for academia to reward scholars in the tenure and promotion process for communicating and publishing the relevance of their research findings to a broader audience. Doing so could be included as part of the service component of the usual research-teaching-service pay-and-promotion criteria.

Work with practitioners in research-practice partnerships.

Some education researchers are embracing Research-Practice Partnerships (RPPs),16, 35 for example, between researchers at a university and practitioners in local schools. (Full disclosure: I’m involved in an RPP in Northwest Arkansas,36 and it is well worth it.) RPPs work because practitioners are part of the research process, so the research addresses their needs. While academic research questions are often disconnected from practice, in some cases not only the answers but also properly framed questions can be useful to practitioners. And for some questions, the results are publishable in an academic journal and so also reward the researchers.

Since RPPs are partnerships, the researcher’s typical independence for purposes of evaluation is absent, and a clear conflict of interest could arise. Still, because policy changes are often constrained by many moving parts, gradual yet positive change can move the needle at implementation time, provided that practitioners understand why the research is useful and so are eager to use it to help kids in their schools.

Top-down solutions often won’t work.

Eric Kalenze,37 a leading authority in the field of curriculum and content development, argues that top-down reform efforts don’t work largely because schools lack either the infrastructure or the buy-in necessary to make them happen. He describes how, in the school where he taught, there was a period when a supportive principal and a group of dedicated teachers could truly make education reform effective, but the confluence of such positive factors is hard to scale. Despite this, he argues that bottom-up efforts are worth pursuing to help kids, including efforts that build on evidence use.30

Involve the teachers.

Educational psychologist David Berliner argues:

It is the tinkering by teachers and researchers, and the study of their craft by the teachers themselves, that seems to me the most likely to pay off in improved education. If those in the research community can learn to do more design experiments in real-world settings, and join teacher-researchers to produce knowledge about how things work in real-world classrooms, the great disconnect might become a much smaller disconnect. Educational research would end up being less a field of traditional scientific research, and much more a field of engineering, invention, and design.38

This perspective aligns with the focus of RPPs on securing teacher buy-in and developing mutual respect to bridge the disconnect.

* * *

In their book Gradual: The Case for Incremental Change in a Radical Age, Greg Berman and Aubrey Fox explain that there have been successful policy reform efforts, such as Social Security, where a large number of positive changes fortuitously came together.24 Perhaps pursuing all of these possible solutions, while thinking of novel ways to end the separation between research and policy/practice, can lead to gradual but positive change. Avoiding unintended consequences will require a clear-headed understanding of all of the relevant research and facts that influence education.39

However, given the current climate of political polarization and the culture wars in education policy, the use of evidence in policy and practice remains an ongoing challenge, but one that might be overcome gradually, if more people better understood both the history of failures and the fact that radical changes are much less likely than small positive ones to help educate children, whether inside or outside of schools.40

About the Author

Jonathan Wai is Associate Professor and the Endowed Chair in Education Policy in the Department of Education Reform at the University of Arkansas, with a joint appointment in the Department of Psychology. He is also Affiliate Faculty in Educational Psychology at the University of Alabama. He studies education policy through the lens of psychology. His fields of expertise include gifted education, talent development, intelligence, individual differences, higher education, educational psychology, expertise, and education policy.

References
  1. https://rb.gy/hqw07
  2. Berliner D.C. (2008). Research, Policy, and Practice: The Great Disconnect. In Lapan S.D., Quartaroli M.T. (Eds.), Research Essentials: An Introduction to Designs and Practices, 295–326. Jossey-Bass, 296.
  3. Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2009). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9(3), 105–119.
  4. Lubinski, D., & Benbow, C.P. (1995). An Opportunity for Empiricism [Review of the book Multiple intelligences: The Theory in Practice, by H. Gardner]. Contemporary Psychology, 40(10), 935–938.
  5. Credé, M., Tynan, M.C., & Harms, P.D. (2017). Much Ado About Grit: A Meta-Analytic Synthesis of the Grit Literature. Journal of Personality and Social Psychology, 113(3), 492–511.
  6. Macnamara, B.N., & Burgoyne, A.P. (2022). Do Growth Mindset Interventions Impact Students’ Academic Achievement? A Systematic Review and Meta-Analysis With Recommendations for Best Practices. Psychological Bulletin. Advance online publication.
  7. Ravitch D. (2000). Left Back: A Century of Failed School Reforms. Simon & Schuster.
  8. Tyack D., & Cuban L. (1995). Tinkering Toward Utopia: A Century of Public School Reform. Harvard University Press.
  9. Ritchie, S. (2020). Science fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. Metropolitan Books.
  10. Nosek et al. (2022). Replicability, Robustness, and Reproducibility in Psychological Science. Annual Review of Psychology, 73, 719–748.
  11. Ankel-Peters, J., Fiala, N., & Neubauer, F. (2023). Do Economists Replicate? Journal of Economic Behavior & Organization, 212, 219–232.
  12. Open Science Collaboration (2015). Estimating the Reproducibility of Psychological Science. Science, 349(6251), aac4716.
  13. https://rb.gy/468hv
  14. https://rb.gy/gc40k
  15. Bogenschneider K., Corbett T. (2021). Evidence-Based Policymaking: Envisioning a New Era of Theory, Research, and Practice (2nd ed.). Routledge. (p. 3).
  16. Conaway, C. (2020). Maximizing Research Use in the World We Actually Live in: Relationships, Organizations, and Interpretation. Education Finance and Policy, 15(1), 1–10.
  17. Tseng V. (2012). The Uses of Research in Policy and Practice and Commentaries. Social Policy Report, 26(2), 1–24.
  18. Weiss C.H. (1977). Research for Policy’s Sake: The Enlightenment Function of Social Research. Policy Analysis, 3, 531–545.
  19. Bryk, A.S. (2015). 2014 AERA Distinguished Lecture: Accelerating How We Learn to Improve. Educational Researcher, 44(9), 467–477. (p. 468).
  20. Institute of Medicine, Committee on Quality of Health Care in America. (2012). Best Care at Lower Costs: The Path to Continuously Learning Health Care in America. National Academies Press.
  21. Phelps, R.P. (2023). The Malfunction of U.S. Education Policy: Elite Misinformation, Disinformation, and Selfishness. Rowman & Littlefield.
  22. https://rb.gy/w750p
  23. Loveless T. (2021). Between the State and the Schoolhouse: Understanding the Failure of Common Core. Harvard Education Press.
  24. Berman, G., & Fox, A. (2023). Gradual: The Case for Incremental Change in a Radical Age. Oxford University Press.
  25. Greene J.P., McShane M.Q. (2018). Failure Up Close: What Happens, Why It Happens, and What We Can Learn From It. Rowman & Littlefield.
  26. Makel, M.C., & Plucker, J.A. (2014). Facts Are More Important Than Novelty: Replication in the Education Sciences. Educational Researcher, 43(6), 304–316.
  27. Berman E.P. (2022). Thinking Like an Economist: How Efficiency Replaced Equality in U.S. Public Policy. Princeton University Press.
  28. Singer, J.D. (2019). Reshaping the Arc of Quantitative Educational Research: It’s Time to Broaden Our Paradigm. Journal of Research on Educational Effectiveness, 12(4), 570–593.
  29. https://rb.gy/sr9zh
  30. Kalenze, E. (2020). What It Will Take to Improve Evidence-Informed Decision-Making in Schools. American Enterprise Institute.
  31. https://rb.gy/vub63
  32. Lewis, N.A., Jr., & Wai, J. (2021). Communicating What We Know and What Isn’t So: Science Communication in Psychology. Perspectives on Psychological Science, 16(6), 1242–1254.
  33. https://rb.gy/erm32
  34. Lilienfeld, S.O. (2017). Psychology’s Replication Crisis and the Grant Culture: Righting the Ship. Perspectives on Psychological Science, 12(4), 660–664.
  35. Booker L., Conaway, C., Schwartz N. (2019). Five Ways RPPs Can Fail and How to Avoid Them: Applying Conceptual Frameworks to Improve RPPs. William T. Grant Foundation.
  36. Tran, B.T.N. (2022). Expanding Gifted Identification to Capture Academically Advanced, Low-Income, or Other Disadvantaged Students. Journal for the Education of the Gifted, 45(1), 64–83.
  37. Kalenze E. (2019). What the Academy Taught Us: Improving Schools From the Bottom Up in a Top-Down Transformation Era. John Catt.
  38. Berliner D.C. (2008). Research, Policy, and Practice: The Great Disconnect. In Lapan S.D., Quartaroli M. T. (Eds.), Research Essentials: An Introduction to Designs and Practices, 295–326. Jossey-Bass, 311.
  39. Harden, K.P. (2021). The Genetic Lottery: Why DNA Matters for Social Equality. Princeton University Press.
  40. Maton, K.I. (2016). Influencing Social Policy: Applied Psychology Serving the Public Interest. Oxford University Press.

This article was published on January 4, 2024.

 