
PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, POLITICS (© Oxford University Press USA, 2016. All Rights Reserved. Personal use only; commercial use is strictly prohibited. For details see the applicable Privacy Policy and Legal Notice).

date: 16 August 2017

The Politics of Evidence-Based Policy Making

Summary and Keywords

“Evidence-based policy making” (EBPM) has become a popular term to describe the need for more scientific and less ideological policy making. Some compare it to “evidence-based medicine,” which describes moves to produce evidence, using commonly held scientific principles regarding a hierarchy of evidence, that can directly inform practice. Policy making is different: there is less agreement on what counts as good evidence, and more things to consider when responding to evidence.

Our awareness of these differences between science and policy is not new. Current debates resemble a postwar policy science agenda, to produce more scientific and “rational” policy analysis, which faced major empirical and normative obstacles: the world is not that simple, and an overly technocratic approach to policy undermines much-needed political debate. To understand modern discussions of EBPM, key insights from previous discussions must be considered: policy making is both “rational” and “irrational”; it takes place in complex policy environments or systems, whose properties should be understood in some depth; and it cannot and should not be driven by “the evidence” alone.

Keywords: evidence-based policy making, evidence-based medicine, evidence and policy, bounded rationality, policy-maker psychology, technocratic policy making, policy networks, institutions, ideas


Everyone likes evidence-based policy making (EBPM) until they consider the implications. EBPM has become one of many “valence” terms that seem difficult to oppose because they are so vague: who would not want policy to be evidence based? It appears to be the most recent incarnation of a focus on “rational” policy making, in which the same question can be posed in a more classic way: who would not want policy making to be based on reason and collecting all of the facts necessary to make good decisions?

As classic discussions have revealed, there are three main issues with such an optimistic starting point. The first is definitional: such terms only seem so appealing because they are vague. When key terms are defined to produce one definition at the expense of others, differences of approach and unresolved issues emerge. The second is descriptive: “rational” policy making does not exist in the real world. Instead, “comprehensive” or “synoptic” rationality is treated as an ideal type by which to frame the consequences of “bounded rationality” (Simon, 1976). Most contemporary policy theories have bounded rationality as a key starting point for explanation (Cairney & Heikkila, 2014). The third is prescriptive. Like EBPM, comprehensive rationality initially seems to be unequivocally good. When its necessary conditions or requirements for investigation are identified, however, EBPM and comprehensive rationality as an ideal scenario come under scrutiny.

What is “Evidence-Based Policy Making?” Much Like “What is Policy?” But More So!

Trying to define EBPM is like magnifying the problem of defining policy. As the articles in this encyclopedia suggest, it is difficult to say what policy is and to measure how much it has changed. Rather than provide something definitive, the working definition, “the sum total of government action, from signals of intent to the final outcomes” (Cairney, 2012, p. 5), is used to raise important qualifications: (a) there is a difference between what people say they will do, what they actually do, and the outcome, and (b) policy making is also about the power not to do something.

The idea of a “sum total” of policy sounds intuitively appealing, but it masks the difficulty of identifying the many policy instruments that make up “policy” (and the absence of others), including the following: the level of spending; the use of economic incentives/penalties; regulations and laws; the use of voluntary agreements and codes of conduct; the provision of public services; education campaigns; funding for scientific studies or advocacy; organizational change; and, the levels of resources/methods dedicated to policy implementation and evaluation (2012, p. 26). In that context, the process in which actors make and deliver “policy” continuously is more important than identifying a single event providing a single opportunity to use a piece of scientific evidence to prompt a policy maker’s response.

Similarly, for the sake of simplicity, the term “policy makers” is used, but in the knowledge that it requires further qualifications and distinctions: (a) between elected and unelected participants, because people such as civil servants also make important decisions, and (b) between people and organizations, with the latter used as shorthand to refer to a group of people making decisions collectively and subject to rules of collective engagement (also see “*Institutions*”). Blurry lines divide the people who make and influence policy, and decisions are made by a collection of people with formal responsibility and informal influence (also see “*Networks*”). Consequently, what is meant by “policy makers” must be clarified when the ways the evidence is to be used are identified.

A reference to EBPM raises two further definitional problems (Cairney, 2016, pp. 3–4). The first is to define evidence beyond the vague idea of an argument backed by information. Advocates of EBPM are often talking about scientific evidence, which describes information produced in a particular way. Some define “scientific” broadly, to refer to information gathered systematically using recognized methods, while others refer to a specific hierarchy of methods. The latter has an important reference point, evidence-based medicine (EBM), in which the aim is to generate the best evidence of the best interventions and exhort clinicians to use it. At the top of the methodological hierarchy are randomized controlled trials (RCTs) to determine the evidence, and the systematic review of RCTs to demonstrate the replicated success of interventions in multiple contexts, published in the top scientific journals (Oliver et al., 2014a, 2014b).

This specific reference to EBM is crucial in two main ways. First, it highlights a basic difference in attitude between the scientists proposing a hierarchy and the policy makers using a wider range of sources from a far less exclusive list of publications: “The tools and programs of evidence-based medicine … are of little relevance to civil servants trying to incorporate evidence in policy advice” (Lomas & Brown, 2009, p. 906). Instead, their focus is on finding as much information as possible in a short space of time—including from the “grey” or unpublished/non–peer-reviewed literature, and incorporating evidence on factors such as public opinion—to generate policy analysis and make policy quickly. Therefore, second, EBM provides an ideal that is difficult to match in politics, proposing “that policy makers adhere to the same hierarchy of scientific evidence; that ‘the evidence’ has a direct effect on policy and practice; and that the scientific profession, which identifies problems, is in the best place to identify the most appropriate solutions, based on scientific and professionally driven criteria” (Cairney, 2016, p. 52; Stoker, 2010, p. 53).

These differences are summed up in the metaphor “evidence-based,” which, for proponents of EBM, suggests that scientific evidence comes first and acts as the primary reference point for a decision: how is this evidence of a problem translated into a proportionate response, and how can one ensure that the evidence of an intervention’s success is reflected in policy? The more pragmatic phrase “evidence-informed” sums up a more rounded view of scientific evidence, in which policy makers know that they have to take into account a wider range of factors (Nutley et al., 2007).

Overall, the phrases “evidence-based policy” and “evidence-based policy making” are less clear than “policy.” This problem puts an onus on advocates of EBPM to state what they mean and to clarify if they are referring to an ideal type to aid description of the real world, or advocating a process that, to all intents and purposes, would be devoid of politics (see below). The latter tends to accompany often fruitless discussions about “policy-based evidence,” which seems to describe a range of mistakes by policy makers—including ignoring evidence, using the wrong kinds, “cherry picking” evidence to suit their agendas, and/or producing a disproportionate response to evidence—without describing a realistic standard to which to hold them.

For example, Haskins and Margolis (2015) provide a pie chart of “factors that influence legislation” in the United States, to suggest that research contributes 1% to a final decision compared to, for example, “the public” (16%), the “administration” (11%), political parties (8%) and the budget (8%). Theirs is a “whimsical” exercise to lampoon the lack of EBPM in government (compare with Prewitt et al.’s 2012 account built more on social science studies), but it sums up a sense in some scientific circles about their frustrations with the inability of the policy-making world to keep up with science.


Figure 1. Model of Factors that Influence Legislation

Note: a. “Policy continents” refers to the complex set of statutes, regulations, lobbying groups, congressional factions, committees of jurisdiction, and so forth that affect legislation in each area of social policy.

Source: Author's compilation.

Indeed, there is an extensive literature in health science (Oliver et al., 2014a, 2014b), largely emulated in environmental studies (Cairney, 2016, p. 85; Cairney, Oliver, & Wellstead, 2016), which bemoans the “barriers” between evidence and policy. Some identify problems with the supply of evidence, recommending the need to simplify reports and key messages. Others note the difficulties of providing timely evidence in a chaotic-looking process in which the demand for information is unpredictable and fleeting. A final main category relates to a sense of different “cultures” in science and policy making, which can be addressed in academic-practitioner workshops (to learn about each other’s perspectives) and more scientific training for policy makers. The latter recommendation is often based on practitioner experiences and a superficial analysis of policy studies (Oliver et al., 2014b; Embrett & Randall, 2014).

EBPM as a Misleading Description

Consequently, such analysis tends to introduce reference points that policy scholars would describe as ideal types. Many accounts refer to the notion of a policy cycle, in which there is a core group of policy makers at the “center,” making policy from the “top down,” breaking down their task into clearly defined and well-ordered stages (Cairney, 2016, pp. 16–18). The hope may be that scientists can help policy makers make good decisions by getting them as close as possible to “comprehensive rationality,” in which they have the best information available to inform all options and consequences. In that context, policy studies provides two key insights (2016; Cairney et al., 2016).

The Role of Multilevel Policy-Making Environments, Not Cycles

Policy making takes place in less ordered and predictable policy environments, exhibiting the following characteristics:

  • A wide range of actors (individuals and organizations) influencing policy in many levels and types of government

  • A proliferation of rules and norms followed in different venues

  • Close relationships (“networks”) between policy makers and powerful actors

  • A tendency for certain beliefs or “paradigms” to dominate discussion

  • Policy conditions and events that can prompt policy-maker attention to lurch at short notice

A focus on this bigger picture shifts our attention from the use of scientific evidence by an elite group of elected policy makers at the “top” to its use by a wide range of influential actors in a multilevel policy process. It shows scientists that they are competing with many actors to present evidence in a particular way to secure a policy-maker audience. Support for particular solutions varies according to which organization takes the lead and how it understands the problem. Some networks are close knit and difficult to access because bureaucracies have operating procedures that favor particular sources of evidence and some participants over others, and there is a language—indicating what ways of thinking are in good “currency”—that takes time to learn. Well-established beliefs provide the context for policy making: new evidence on the effectiveness of a policy solution has to be accompanied by a shift of attention and successful persuasion. In some cases, social or economic “crises” can prompt lurches of attention from one issue to another, and some forms of evidence can be used to encourage that shift—but major policy change is rare.

Policy Makers Use Two “Shortcuts” to Deal with Bounded Rationality and Make Decisions

Policy makers deal with “bounded rationality” by employing two kinds of shortcut: “rational,” by pursuing clear goals and prioritizing certain kinds and sources of information, and “irrational,” by drawing on emotions, gut feelings, beliefs, habits, and familiar reference points to make decisions quickly. Consequently, the focus of policy theories is on the links between evidence, persuasion, and framing.

Framing refers to the ways in which issues are understood, portrayed, and categorized. Problems are multifaceted, but bounded rationality limits the attention of policy makers, and actors compete to highlight one image at the expense of others. The outcome of this process determines who is involved in, and responsible for, policy (for example, portraying an issue as technical limits involvement to experts), how much attention they pay, and what kind of solution they favor. Scientific evidence plays a part in this process, but the ability of scientists to win the day with evidence should not be exaggerated. Rather, policy theories signal the strategies that actors use to increase demand for their evidence:

  • To combine facts with emotional appeals, to prompt lurches of policy maker attention from one policy image to another (True, Jones, & Baumgartner, 2007)

  • To tell simple stories which are easy to understand, help manipulate people’s biases, apportion praise and blame, and highlight the moral and political value of solutions (McBeth, Jones, & Shanahan, 2014)

  • To interpret new evidence through the lens of the preexisting beliefs of actors within coalitions, some of which dominate policy networks (Weible, Heikkila, deLeon & Sabatier, 2012)

  • To produce a policy solution that is feasible and exploit a time when policy makers have the opportunity to adopt it (Kingdon, 1984).

Further, the impact of a framing strategy may not be immediate, even if it appears to be successful. Scientific evidence may prompt a lurch of attention to a policy problem, prompting a shift of views in one venue or the new involvement of actors from other venues. However, it can take years to produce support for an “evidence-based” policy solution, built on its technical and political feasibility (will it work as intended, and do policy makers have the motive and opportunity to select it?).

EBPM as a Problematic Prescription

A pragmatic solution to the policy process would involve: identifying the key venues in which the “action” takes place; learning the “rules of the game” within key networks and institutions; developing framing and persuasion techniques; forming coalitions with allies; and engaging for the long term (Cairney, 2016, p. 124; Weible et al., 2012, pp. 9–15). The alternative is to seek reforms to make EBPM in practice more like the EBM ideal.

However, EBM is only defensible if the actors involved agree to make primary reference to scientific evidence and be guided by what works (combined with their clinical expertise and judgment). In politics, there are other—and generally more defensible—principles of “good” policy making (Cairney, 2016, pp. 125–126). They include the need to legitimize policy: to be accountable to the public in free and fair elections, to consult far and wide to generate evidence from multiple perspectives, and to negotiate policy across political parties and multiple venues with a legitimate role in policy making. In that context, scientific evidence ought to play a major role in policy and policy making, but should unelected experts and evidence that few can understand play a primary role as well?

Conclusion: The Inescapable and Desirable Politics of Evidence-Informed Policy Making

Many contemporary discussions of policy making begin with the problematic belief in the possibility and desirability of an evidence-based policy process free from the worst excesses of politics. The buzz phrase for any complaint about politicians not living up to this ideal is “policy-based evidence”: biased politicians decide first what they want to do, then cherry pick any evidence that backs up their case. Without additional thought, such critics would put in its place a technocratic process in which unelected experts are in charge, deciding on the best evidence of a problem and its best solution.

In other words, new discussions of EBPM raise old discussions of rationality that have occupied policy scholars for many decades. The difference since the days of Simon (1976) and Lindblom (1959) is that scientific technology and methods to gather information have far exceeded the dreams of our predecessors. Such advances in technology and knowledge have increased our ability to reduce, but not eradicate, uncertainty about the details of a problem. They do not remove ambiguity, which describes the ways in which people understand problems in the first place, then seek information to help them understand those problems further and seek to solve them. Nor do they reduce the need to meet important principles in politics, such as to sell or justify policies to the public (to respond to democratic elections) and to address the fact that there are many venues of policy making at multiple levels (partly to uphold a principled commitment, in many political systems, to devolve or share power). Policy theories do not tell us what to do about these limits to EBPM, but they help us to separate pragmatism from often-misplaced idealism.


Cairney, P. (2012). Understanding public policy. Basingstoke, U.K.: Palgrave.

Cairney, P. (2016). The politics of evidence-based policy making. Basingstoke, U.K.: Palgrave.

Cairney, P., & Heikkila, T. (2014). A comparison of theories of the policy process. In P. Sabatier & C. Weible (Eds.), Theories of the policy process (3d ed.). Chicago: Westview.

Cairney, P., Oliver, K., & Wellstead, A. (2016). To bridge the divide between evidence and policy: Reduce ambiguity as much as uncertainty. Public Administration Review.

Embrett, M., & Randall, G. (2014). Social determinants of health and health equity policy research: Exploring the use, misuse, and nonuse of policy analysis theory. Social Science & Medicine, 108, 147–155.

Haskins, R., & Margolis, G. (2015). Show me the evidence: Obama’s fight for rigor and results in social policy. Washington, DC: Brookings Institution Press.

Kingdon, J. (1984). Agendas, alternatives and public policies. New York: HarperCollins.

Lindblom, C. (1959). The science of muddling through. Public Administration Review, 19, 79–88.

Lomas, J., & Brown, A. (2009). Research and advice giving: A functional view of evidence-informed policy advice in a Canadian ministry of health. Milbank Quarterly, 87(4), 903–926.

McBeth, M., Jones, M., & Shanahan, E. (2014). The narrative policy framework. In P. Sabatier & C. Weible (Eds.), Theories of the policy process (3d ed.). Chicago: Westview.

Nutley, S., Walter, I., & Davies, H. (2007). Using evidence: How research can inform public services. Bristol, U.K.: Policy Press.

Oliver, K., Innvar, S., Lorenc, T., Woodman, J., & Thomas, J. (2014a). A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Services Research, 14(1), 2.

Oliver, K., Lorenc, T., & Innvær, S. (2014b). New directions in evidence-based policy research: A critical analysis of the literature. Health Research Policy and Systems, 12, 34.

Prewitt, K., Schwandt, T. A., & Straf, M. L. (Eds.). (2012). Using science as evidence in public policy. Washington, DC: The National Academies Press.

Simon, H. (1976). Administrative behavior (3d ed.). London: Macmillan.

Stoker, G. (2010). Translating experiments into policy. The ANNALS of the American Academy of Political and Social Science, 628(1), 47–58.

True, J. L., Jones, B. D., & Baumgartner, F. R. (2007). Punctuated equilibrium theory. In P. Sabatier (Ed.), Theories of the policy process (2d ed.). Cambridge, MA: Westview.

Weible, C., Heikkila, T., deLeon, P., & Sabatier, P. (2012). Understanding and influencing the policy process. Policy Sciences, 45(1), 1–21.