Summary and Keywords
Since the late 1990s, increased attention has been given by governments and scholars to evidence-based policymaking (EBPM). The use of the term EBPM appears to have emerged with the election of Tony Blair’s government in the United Kingdom (UK) and a desire to be seen to be taking ideology and politics out of the policy process. The focus was on drawing on research-based evidence to inform policymakers about “what works” and thereby produce better policy outcomes. In this sense, evidence-based policy is arguably a new label for an old concern. The relationship between knowledge, research, and policy has been a focus of scholarly attention for decades—Annette Boaz and her colleagues date it to as early as 1895 (Boaz et al., 2008, p. 234).
In its more recent form, EBPM has been the subject of much debate in the literature, particularly through critiques that question its assumptions about the nature of the policy process, the validity of evidence, the skewing in favor of certain types of evidence, and the potentially undemocratic implications. The first concern with the concept is that the EBPM movement runs counter to the lessons of the critique of rational-comprehensive approaches to policymaking that was launched so effectively in Lindblom’s article “The Science of ‘Muddling Through’” and never really refuted, in spite of attempts by advocates of the policy cycle and other rational models.
The second problem is that the rhetoric of evidence-based policy does not recognize the contested nature of evidence itself, an area that has been the subject of a large body of research in the fields of the sociology of science and science and technology studies. These studies draw attention to the value-laden nature of scientific inquiry and the choices that are made about what to research and how to undertake that research.
Third, the emphasis has been on particular types of evidence, with particular methodologies being privileged over others, running the risk that what counts as evidence is only what can be counted or presented in a particular way. The choice of evidence is value-laden and political in itself.
Finally, attempts to take the ideology or politics out of policy are also potentially undemocratic. Policymaking is the business of politics. In democratic systems, politicians are elected to implement their policies, and those policies are based on particular sets of values. Leaders are elected to make collective decisions on behalf of the electorate and those decisions are based on judgments, including value judgments. Evidence surely must inform this process, but, equally, it cannot be decisive. Trade-offs are required between conflicting values, such as between equity and efficiency, and this can include deciding between solutions that the evidence suggests are optimal and other societal priorities.
In recent years, policymakers and commentators have increasingly used the language of evidence-based policy in policy debate and as justification for policy decisions. Intuitively, this makes good sense. It would clearly be undesirable for policymakers simply to make up policy based on their preconceptions or ideological dispositions. However, the concept is not as straightforward as common sense would suggest. Important questions need to be asked about what evidence-based policy actually means and its application to policy, particularly in liberal democracies. This article explores a number of these questions, many of which have been asked by researchers for decades.
Interest in the effective use of research and knowledge in the policy process is not new. It has been the focus of scholars of public policy since at least the middle of the last century, when Lasswell (1951) set out his hopes for the findings of research to contribute to what he described as the policy sciences of democracy and to the overall betterment of communities and societies in the Western (i.e., democratic) world. Since then, various studies have emerged across a range of disciplines exploring the relationship between research and policy and how best to connect the research endeavor to policymaking. This has taken various guises, from the knowledge utilization literature (Caplan, 1979; Hoppe, 2005; Lindblom & Cohen, 1979; Ravetz, 1986; Weiss, 1979), to research into the sociology of science (Yearley, 2005), to work on knowledge translation and science communication. Also relevant is the large amount of literature on decision-making, including the heuristics adopted by decision-makers faced with uncertainty (e.g., Kahneman, Slovic, & Tversky, 1982), incomplete information, and values conflict (e.g., Tetlock, 2000). The most recent foray into this issue has taken the form of the evidence-based policy movement. This term is used to mean a very diverse range of activities, and although several scholars have sought to give it more precise definition (e.g., Head, 2016), the concept remains nebulous.
The recent wave of interest in evidence-based policymaking—certainly the use of the term itself—is generally associated with the election of Tony Blair’s Labour government in the United Kingdom in the late 1990s and the British Labour government’s focus on “what works,” avoiding the “old dogmas of the past” (Cabinet Office, 1999), and highlighting the contribution of the social sciences to policy. The political rhetoric around the adoption of evidence-based policymaking (EBPM) included the desire to remove ideology from policy—in other words, to rescue the policy process from politics (e.g., Parsons, 2002; Solesbury, 2002). In the United Kingdom, the drive from politicians was to use research-based evidence as the basis of policy choice. This approach of focusing on what works had two dimensions. First, “what works” referred to the use of evidence about policy interventions to assess their effectiveness. This agenda was largely about evaluating policies either through the use of pilot programs to test their effectiveness or through policy learning and, if appropriate, policy transfer from elsewhere. A number of What Works research centers were established across the United Kingdom. Their purpose, as described by Bristow, Carter, and Martin (2015, pp. 126, 129), was “to identify which social interventions are the most cost-effective” and “to operate as knowledge brokers at the intersection of research, policy and practice.”
The second component was the more typical use of the term EBPM, in which policy responds to evidence of the existence of a problem and, if possible, of available solutions. This second form of EBPM is about knowledge/research utilization in the process of developing policy. A notable feature of the U.K. approach was the focus on research emanating from the social sciences. A U.K. government minister (Blunkett, 2000, p. 20) summed up the research task as follows:
We need research which: i) leads to a coherent picture of how society works: what are the main forces at work and which of these can be influenced by government (e.g. inter-generational poverty, low aspirations, employability, participation in Society or exclusion, reducing crime, discrimination and prejudice, poor parenting, the quality of a school and its teachers): ii) research to evaluate specific policy initiatives e.g. the New Deal evaluations, the National Literacy and Numeracy Strategies, the Crime Reduction Strategy, Neighbourhood Renewal, Our Healthier Nation, the anti-drugs strategy, Sure Start. Government has been better at supporting the second but not so good at developing the first. We need both and both are difficult.
In this context, evidence-based policy is used rhetorically as a device to suggest that such policy is beyond reproach, objective, and apolitical. In the same speech, Blunkett (2000, p. 12) stated, “This Government has given a clear commitment that we will be guided not by dogma, but by an open-minded approach to understanding what works and why.” In terms of the policy process, the focus is on drawing on research during the decision-making process, on the basis that research is the output of an objective process undertaken by experts. This emphasis is based on a view that is nicely expressed by Nelkin (1975, p. 36):
The authority of expertise rests on assumptions about scientific rationality: interpretations and predictions made by scientists are judged to be rational because they are based on ‘objective’ data gathered through rational procedures, and evaluated by the scientific community through a rigorous control process. Science, therefore, is widely regarded as a means by which to de-politicise public issues.
The focus on evidence-based policy extends beyond the United Kingdom. For example, it has been picked up in Australia, with a key bureaucratic agency in the country arguing that “Policymaking within the [Australian Public Service (APS)] needs to be based on a rigorous, evidence-based approach that routinely and systematically draws upon science as a key element” (Australian Department of Industry, 2012, p. iii). A notable contrast is that the U.K. approach was heavily focused on the social sciences, while discussion in Australia, particularly within the bureaucracy, has centered predominantly on the role of the natural sciences.
Before proceeding, it is worth noting that the advocacy and promotion of EBPM tend to come from the policy practitioner community, including politicians, while critiques of the concept and focus on its limitations have arisen in the academic literature, particularly in political science. The emphasis in the following is more on the latter. How best to use knowledge in policy, as noted previously, has been a longstanding concern of policymakers and scholars. EBPM could be seen as simply the latest label to be put on this issue; however, given its origins in evidence-based medicine (with implications of scientific rigor and objectivity) and its rhetorical linking by the Blair government with ideology-free policymaking, it deserves scrutiny.
Evidence for Policy and Evidence About Policy
While politicians and policymakers themselves are focused on the use of evidence in and for the policy process, for scholars considering evidence-based policy, there is an important distinction between evidence for policy and evidence about policy. Traditionally, public policy research has focused not only on providing what are essentially inputs into the policy process, but also on investigating the nature and functioning of policymaking itself. More than 50 years of research around policy contain important lessons about the nature of the policy process, and this work points to significant limitations of the value-free, “what works” agenda, given the messy reality of policymaking in liberal democracies and the inherent uncertainties and even biases in both the production and use of evidence.
The literature around the use of research, both in government and in broader contexts, indicates that there are multiple ways in which research influences decision-making. One important distinction is between instrumental, conceptual, and symbolic use (Pelz, 1978). Instrumental use is associated with “a specific decision or action that can be clearly designated” (Pelz, 1978, p. 349). Conceptual use occurs when research provides “general enlightenment,” while symbolic use draws on research to legitimize choices already made (Beyer, 1997, p. 17). The latter has been termed “policy-based evidence” (Parkhurst, 2016, p. 375). The recent wave of EBPM has tended to take an instrumental approach to the use of knowledge and research in the policy process, one that involves the direct, specific transfer of research findings into practical policy outcomes. This limited understanding of the role of research and knowledge in policy overlooks the contribution that conceptual use can make to the democratic process through informing and shaping the thinking of policymakers and citizens in ways that they may not necessarily recognize.
Evidence about policy is rarely considered to be part of EBPM, as the focus of evidence-based policy is on gaining better policy outcomes by ensuring improved inputs. In so doing, it overlooks the inherent complexity of the policy process. It is on this distinction that an important divide is clear between policy practitioners and policy scholars. While practitioners have embraced the evidence-based policy movement as providing the means for improving policy outcomes by garnering the best available evidence, policy scholars have been more skeptical. A review of the U.K. experience of EBPM found that the Blair government’s focus had led to “significant changes to the business of government and public services that reflect this commitment—changes in resources, institutions, processes, and capacities” (Boaz, Grayson, Levitt, & Solesbury, 2008, p. 234). The authors identify three responses from researchers to the EBPM agenda: skepticism, enthusiasm, and the emergence of a new field of knowledge-transfer studies. While all agree with the objective of an evidence-informed society, they differ with respect to the appropriate methodologies to apply (for example, whether there should be a preference for randomized controlled trials), over the relationship between evidence and policy, and also over what constitutes policy-relevant evidence (Boaz et al., 2008, pp. 239–240).
Research into the policy process has informed much of the criticism of the evidence-based policy movement, as it has pointed to the limitations of the rational, linear models of the policy process implicit in the EBPM model. In some cases, the rational model is made explicit. For example, in its report The Place of Science in Policy Development in the Public Service, the Australian Department of Industry (2012, p. 4) stated that, inter alia, the project “specifically sought” to “examine existing policy formulation processes in the APS to consider where scientific input could be incorporated into the policy cycle.” The U.K. government has similarly used the policy cycle as a descriptor of policy processes (Bristow et al., 2015, p. 129). For many public policy researchers and political scientists, the policy cycle is the epitome of the rational policy model, which decades of research have identified as limited and unrealistic (beginning with Lindblom, 1959). Lindblom’s analysis has stood the test of time in its identification of key constraints on policymakers that prevent them from behaving in a “rational-comprehensive” manner. There is a certain irony to the EBPM model’s disregard for evidence about the policy process itself. As Boaz et al. (2008, p. 247) note, “evidence-based policy has generated little in the way of rigorous evidence of its own effectiveness, and it joins a long trend of limited empirical analysis of research evidence.”
In spite of these critiques, evidence clearly needs to be brought to bear in the government decision-making process, but the limits of relying on evidence to address the challenges and complexities of the policy process must be recognized, particularly with regard to value-laden social problems. With respect to the natural sciences, researchers have identified many of the major policy challenges facing contemporary governments, and some of the solutions to issues such as climate change and obesity will emerge from those sciences. The relationship between the natural sciences, scientists, and the policy process is therefore important. Careful attention is needed both to the extent to which scientific advice is objective and to the ways it can be used rhetorically as a counterpoint to value-laden debate, in an attempt to silence dissent and reframe political disputes as technical problems; see, for example, Pielke’s (2007) distinction between tornado and abortion politics.
What Counts as Evidence
Many of the debates in the literature around the merits of the recent surge of enthusiasm for evidence-based policy can in large part be characterized in terms of how evidence is defined and understood. An important distinction is that between “evidence” and “facts” or “data.” Majone (1989, p. 10) made the following distinction:
Evidence is not synonymous with data or information. It is information selected from the available stock and introduced at a specific point in the argument in order to persuade a particular audience of the truth or falsity of a statement.
He went on to suggest (p. 11):
Thus the criteria for assessing evidence are different from those used for assessing facts. Facts can be evaluated in terms of more or less objective canons, but evidence must be evaluated in accordance with a number of factors peculiar to a given situation, such as the specific nature of the case, the type of audience, the prevailing rules of evidence, or the credibility of the analyst.
Lindblom and Cohen (1979, p. 81) made a similar point, arguing that
Problem complexity denies the possibility of proof and reduces the pursuit of fact to the pursuit of those selective facts which, if appropriately developed, constitute evidence in support of relevant argument.
Government pronouncements and policy documents imply a conflation of “evidence” with “fact,” and this interpretation has been the target of some of the strongest critics of EBPM, who see the modern version of it as a denial of the essential messiness of the policy process and a return to more linear, positivist approaches to knowledge utilization (see, e.g., Parsons, 2002).
The emphasis on evidence-as-fact influences perceptions of the types of research that are relevant or useful for policymakers. Although the U.K. government articulated a particular focus on the social sciences, the conceptual origins of EBPM in evidence-based medicine appear to have resulted in the importation of epistemological and methodological approaches from the natural sciences. This is illustrated by the focus on quantitative studies and the systematization of research findings. This bias was made clear in a much-cited speech by Secretary of State for Education Blunkett (2000, p. 20):
We’re not interested in worthless correlations based on small samples from which it is impossible to draw generalisable conclusions. We welcome studies which combine large scale, quantitative information on effect sizes which will allow us to generalise, with in-depth case studies which provide insights into how processes work.
He also highlighted the value of systematic reviews of the type undertaken by the Cochrane Collaboration and the Campbell Collaboration. These types of reviews were at the heart of the work of the What Works Centres in the United Kingdom, with the exception of What Works Scotland (Bristow et al., 2015). However, the issues with which policymakers deal are often not amenable to this approach. This apparent bias toward the quantitative is also somewhat at odds with findings in organizational research suggesting that there is a “concreteness” around qualitative research that “makes it easier to understand than quantitative research which is more abstract” (Beyer & Trice, 1982, p. 605).
While the social sciences by their very nature operate in the blurred area between means and ends and therefore accommodate some of the values issues inherent in social problem-solving, engagement with policy is fraught for the natural sciences. Weinberg (1972) attempted to disentangle scientists from the political aspect of policy debate by proposing the concept of “trans-science,” the point at which science and politics meet. In this gray area, decisions “hang on the answers to questions which can be asked of science and yet which cannot be answered by science” (Weinberg, 1972, p. 209, italics in original). Weinberg (p. 212) saw the social sciences as dealing with trans-scientific questions “very frequently,” a point that seems to have been lost to a large extent in elements of the EBPM debate.
There is an alternative understanding of EBPM that is more consistent with Majone’s distinction between evidence and facts. In this view, the concept of evidence-based policy can be useful if “evidence” is understood in a broader sense—distinguished from facts, and “taking account of disparate bodies of knowledge [that] become multiple sets of evidence that inform and influence policy rather than determine it” (Head, 2008, p. 4). In Head’s approach, scientific (i.e., research-based) knowledge is supplemented by practitioner knowledge and experience and political know-how. This recognition that research is not the only form of relevant knowledge for social problem-solving echoes the approach of Lindblom and Cohen (1979), who referred not only to “ordinary” knowledge and its place in the decision-making process, but also to the interactive (political) nature of problem-solving. They suggested that the results of professional social inquiry (including research), no matter how conclusive, are unlikely to be authoritative independent of other forms of knowledge, such as ordinary knowledge, in offering solutions to social problems. Acknowledging that there is an interactive role in the production of knowledge for problem-solving, however, runs counter to the EBPM approach that evidence is neutral and objective. Addressing problems interactively is likely to see evidence used symbolically rather than instrumentally as participants in interaction are “necessarily partisan” (Lindblom & Cohen, 1979, p. 62).
Some government documents travel a middle path by adhering to linear stages models of the policy process while using more moderate, less deterministic language of ensuring that the policy process “draws on” the best research (Australian Department of Industry, 2012, p. iii). In a similar vein, a number of scholars have expressed a preference for the notion of evidence-informed policy (Boaz et al., 2008; Packwood, 2002). This concept allows for the fact that policymaking involves interests and values, as well as objective information about the issue under consideration. It is a more optimistic view of the relationship between evidence and decision-making. As Boaz et al. (2008, pp. 246–247) put it, “The ambition for more evidence-informed policy and practice should be uncontested, even while we recognise that evidence alone should not determine actions: there is scope, indeed need, for ideology and interests to contribute to the choices that policy makers and practitioners make.”
The Relationship Between Research and Policy
The increased focus on research-based evidence inevitably draws attention to the relationship between researchers and policymakers. Policymakers have raised it as an issue (e.g., Australian Department of Industry, 2012; Blunkett, 2000), and scholars have for many years been considering the connections between knowledge, research, and policy from different perspectives (see, for example, the debate between Mead, 2015; Newman & Head, 2015). Four issues in this relationship are considered here. The first is the nature of scientific evidence and the extent to which it is as value-free as the most enthusiastic advocates of evidence-based policy appear to suggest. Related to this is the second issue, the relationship between the expert and the policy process, which is generally raised with respect to the natural sciences; this literature is concerned with the impact on the scientific expert of engaging in policy debate. The third is the concern of many social scientists that their research does not get the attention it deserves. The fourth has been characterized as the existence of “two communities” and the challenges of bridging the gap between them in order to facilitate better policymaking. The following sections discuss each of these issues in turn.
The Nature of Scientific Evidence
Implicit in the argument that evidence depoliticizes government decision-making is the idea that it replaces subjectivity and values with objective facts, thereby improving policy outcomes. Apart from reflecting a simplified, linear view of the policy process, it also embodies a simplified view of the scientific enterprise, particularly with respect to the natural sciences. There are several dimensions to this simplified view. First, it implies that there is the potential for uncontested, proven facts. As Majone (1989, p. 42) argues, “Few scientists, or philosophers of science, still believe that scientific knowledge is, or can be, proven knowledge. If there is some point on which all schools of thought agree today, it is that scientific knowledge is always tentative and open to refutation.” Expecting science to provide indisputable answers to policy questions, without caveats or revisions, is unrealistic, and it stems from a misunderstanding of the scientific method itself.
Second, scientists are human. They make choices about what they research and what methodologies they pursue, and in so doing, they are making value-based judgments about what matters are worthy of study. Related to this point is the third dimension: As humans, scientists are part of human communities, including networks of scientists, that not only provide socialization of individuals, which is inseparable from their choices, but also frame the context of their work. Fleck’s (1979) work on “thought collectives” demonstrates how the interactions between scientists can influence the development of scientific ideas and the shaping of facts. These facts are further shaped by the transition from the esoteric scientific community to the broader “exoteric” community. This has important implications for policymakers. As Fleck argued, communication of scientific findings removes the caveats from research and creates a sense of faith in the natural sciences, a “vividness and absolute certainty [so that the science] appears secure, more rounded, and more firmly joined together” (Fleck, 1979, p. 113). Removing the caveats can have consequences for policymakers by inspiring confidence in scientific advice that may not be warranted and that could result in unintended consequences from policy that is based on that advice (Botterill & Hindmoor, 2012). Fleck essentially describes a process that is as influenced by the realities of bounded rationality as other forms of human activity.
The Relationship Between the Expert and the Policy Process
The Natural Sciences
The authority of the scientist as expert is partly based on this misrepresentation of science as the source of uncontested objective facts. Jasanoff (1987, p. 196) notes, “Though the sociology and philosophy of science both attest to the indeterminacy of knowledge, science has for several centuries maintained its authoritative status as provider of ‘truths’ about the natural world.” Researchers with an interest in scientific expertise have accordingly been concerned about the standing of scientists in the community. The role of the scientific expert in public debate raises issues of credibility for scientists themselves, which in turn has implications for evidence-based policy. Fleck’s work points to the simplification of the scientific message, but, as others have pointed out, attempts to retain the complexities are also problematic. Where caveats are not lost and disagreements between experts are aired publicly, scientists can find their expert authority undermined (Nelkin, 1975), particularly if they are employed by an actor with an identifiable material interest in the policy issue in question. Such “Issue Advocates” (Pielke, 2007, p. 15) can be seen by the public as “dueling scientists,” which can result in distrust of the evidence presented, as well as raising the ire of scientists concerned about the impact of “vested interests” on the dissemination of scientific advice (e.g., Rosenstock & Lee, 2002).
Engaging in public policy debate, therefore, carries a degree of reputational risk for scientists. As Jasanoff (1987, p. 197) argues, “The authority of science is seriously jeopardized when scientists are called upon to participate in policy-making. Administrative decision-making often requires a probing of the areas of greatest indeterminacy in science.” Weinberg’s proposal of a category of trans-science was based on his concern about the standing of scientists in public debate, and this is reflected in his observation (1972, p. 216) that “in trans-science where matters of opinion, not fact, are the issue, credibility is at least as important as competence.” In later work (Weinberg, 1985, p. 68), he advocated the development of “a new branch of science, called regulatory science, in which the norms of scientific proof are less demanding than are the norms of ordinary science.” Later writers also expressed concern that the image of science was at risk with respect to “issues at the boundary of science and policy, and the procedures used to resolve them” (Jasanoff, 1987, p. 224).
The Social Sciences
In public policy making, many suppliers and users of social research are dissatisfied, the former because they are not listened to, the latter because they do not hear much they want to listen to.
(Lindblom & Cohen, 1979, p. 1)
The threats to expert credibility that concern natural scientists are less acute for social scientists, who tend not to start from any privileged position as providers of “facts.” However, the social sciences, with the possible exception of economics, face their own particular challenges of relevance in terms of EBPM. The language of politicians like Blunkett suggests that they are increasingly being asked to produce similarly uncontestable truths or risk being sidelined as irrelevant. This lies behind much of the search for methodological rigor and the emphasis on quantitative approaches to research. However, as Hammersley (2005, p. 89) points out, “It is important to recognize that like all other forms of human practice research itself necessarily relies on judgment and interpretations: it can never be governed but can only be guided by methodological rules.” For social scientists, concern about their professional standing and evidence-based policy is often framed as a lament that researchers are not more influential in public policy (see, e.g., Mead, 2010, 2015). The growing pressure on academic researchers to demonstrate the impact of their research exacerbates this concern, particularly for research that is more conceptual in nature. Beyer (1997, p. 18) also noted that “[p]ractitioner communities don’t have norms that require acknowledging sources,” so influence and impact can be hard to trace.
The “Two Communities” Thesis and Bridging the Gap
For EBPM to be successful, there needs to be effective communication between researchers and policymakers (Hantrais, Lenihan, & MacGregor, 2015, p. 104). With respect to the perception that the social sciences have a poor record when it comes to policy impact, Caplan (1979) proposed the existence of “two communities,” with different norms, time frames, and languages. This characterization has been disputed in the literature (see, e.g., Newman & Head, 2015); however, the question of policy relevance and impact remains a live issue for both researchers and policymakers. In this context, scholars have explored the role of knowledge brokers (see, e.g., Knight & Lyall, 2013, and other articles in the special issue of the journal Evidence and Policy), who seek to communicate and translate research into a form usable by decision-makers; and there is at least one journal dedicated to science communication. The arguments in this debate range along a spectrum, from a critique of the failure of researchers to engage effectively and produce “useful” research through to arguments that research serves an “enlightenment” function (Weiss, 1977) in liberal democracies, similar to the conceptual use of knowledge described by Pelz (1978) and Beyer (1997). From the latter perspective, the value of research is in contributing to informed societal debate rather than making direct input into the policy process.
Apart from mismatches between evidence supply and demand, a further concern relates to different yardsticks for assessing the validity of evidence. This is related to the problem of time frames. Peer-reviewed evidence is a core requirement for academics; however, the lag between the completion of research and its publication in reputable journals can be measured in months, if not years. Policymakers require evidence for policy far more rapidly than this.
Evidence for Democracy
The previous section considered the relationship between evidence and policy. For political scientists, a further concern relates to the relationship between evidence and politics: the attempted separation of policy from politics through the aspiration to a value-free (and therefore politics-free) policy process. This aspiration was explicit in the rhetoric of the Blair government, but scholars had observed the tendency well before the rhetoric of EBPM emerged. Lindblom and Cohen (1979, p. 69) wrote of a “hostility to democracy [that] appears in the ethic of nonpartisan neutrality and in associated ideas about the role of scientific expertise in the decision process.” Earlier work on the role of knowledge in democratic decision-making recognized the values basis of democracy. Two scholars stand out in this regard, Harold Lasswell and Herbert Simon, and it is worth recapping how they regarded the role of values in decision-making processes.
Some of the earliest work on the interaction between research and policy was published by Harold Lasswell in the middle of the 20th century. His vision was of a “policy sciences of democracy” (Lasswell, 1951), which drew on the breadth of human knowledge to further the democratic project. This idea of capturing the insights of research for democracy has to some extent been lost in the push for EBPM. The demonization of politics and the calls for its removal from policymaking are quite different from the explicit democratic values base of Lasswell’s work. Lasswell’s classic (1951) chapter on the policy orientation was clear on this point. Although he sought to separate policy from politics terminologically because of what he saw as the pejorative nature of the latter term—its “undesirable connotations” (Lasswell, 1951, p. 5)—he also recognized the values basis of policy.
For Simon, the decision-making process begins from two premises: a values premise and a factual premise, which he describes as “roughly equivalent to ends and means, respectively” (Simon, 1944, p. 19). The former sets the context within which decision-making takes place. As Simon (1944, p. 19) noted, “The distinction between factual and value premises has an obvious bearing on the question of how discretion is to be reconciled with responsibility and accountability, and what the line of division is to be between ‘policy’ and ‘administration’.” The attempt to depoliticize policymaking seeks to remove the values premise from consideration.
Writers who favor evidence-informed policy recognize a place for evidence but also acknowledge that evidence takes different forms, allowing for the incorporation of values and debates about ends. Before the latest wave of emphasis on evidence-based policymaking, Torgerson (1986, p. 45) wrote of the emergence of a “third face” of policy analysis, one which “begins with this realization: it is understood that the theory and practice of policy analysis are rooted in inherently political choices.” Some scholars are clearly cognizant of this fact while others, and many policymakers, continue to aspire to a very instrumental understanding of the relationship between evidence and policy.
While Lasswell was optimistic about the potential contribution of science to democracy, other writers have pointed to a number of tensions between the two. MacRae (1973, p. 229) addressed the tension between science (both natural and social) and policy:
Two aspects of science in policy have led to concern on the part of those who wish to preserve and improve democracy: first, the complexity of the problems which require political decisions, increased as it is by science and its applications; second, the inequality of knowledge between scientists and other citizens.
Policy-relevant science ideally addresses the means to meet the ends set by the political process but, inconveniently, it can also expose limitations to democracy itself (MacRae, 1973, p. 236):
Both the findings of science—natural or social—and its procedures of discussion may reveal or illuminate conflicts between values. The findings of science may indicate conditions under which the practice of democracy is difficult, and may themselves undermine confidence in the workability of democratic institutions.
The enlightenment interpretation of research, as espoused by Weiss (1977), recognizes the democratic role of researchers who contribute indirectly to public policy through the provision of information on which those engaged in public policy debate can draw. This perspective regards research as a public good that serves to raise the quality of public debate by informing citizens in general, not simply answering the specific questions asked by policymakers.
The concept of EBPM has gained momentum in recent years and, while intuitively it makes sense to seek policy decisions that have some basis in evidence and are not entirely ideologically driven, the drive to remove politics from policymaking has been subject to a growing body of criticism. In place of an outright rejection of the concept, there has been a push for evidence-informed policy, which takes a broad and inclusive view of what constitutes policy-relevant evidence. This allows for the incorporation of values and perspectives that are not necessarily the output of rigorous scientific investigation. The result is an academic literature on EBPM that is often more nuanced, including reference to coproduction of knowledge and interactions between researchers and policymakers throughout the policy process. While being more realistic in terms of both the policy process and the norms of democracy, this moves the idea quite a distance from Blunkett’s very particular understanding of valuable evidence for policy. It is also quite a distance from the views of policymakers within those jurisdictions that have adopted the evidence-based policy mantra and who base their thinking on a very linear, input-based model.
Evidence-based policy has its origins in evidence-based medicine and, while there are similarities, there are also important differences that point to the difficulties of transferring the concept from medicine to policy. Brownson, Gurney, and Land (1999, p. 90) distinguish between two levels of evidence in evidence-based public health. They describe these as follows: “level one evidence may lead to the conclusion that ‘something should be done’,” while “level two evidence points the practitioner toward the conclusion that ‘specifically, this should be done’.” An important difference between evidence-based public health and evidence-based policy more broadly is that the identification of problems for which a solution is required in the latter case is a political choice. How problems are identified and defined and which issues are the focus of attention tend not to be the result of level one evidence. The slow pace of government responses to the scientific advice around climate change illustrates that there can be a large body of evidence that “something should be done,” which does not necessarily lead to action.
Other issues, which may be values-based or politically important, capture government attention without necessarily being supported by an identifiable body of evidence. Recognizing the values-based nature of problem identification does not detract from the desirability of relying on knowledge to inform problem solutions. It does, however, highlight the reality that EBPM will always be part of the political process, and as such will be influenced by politics, ideology, and values.
The relationship between research, knowledge, and governmental decision-making is not a new issue. It has been of concern for decades—both to those who produce knowledge and seek to influence policy and those in policymaking who seek answers to societal problems. EBPM is just the latest attempt to grapple with these issues. It is distinct from previous attempts, in that it has been embraced overtly by governments who have adopted the language of EBPM in public pronouncements. In the case of the United Kingdom, the Blair government sought to make it a distinguishing virtue of New Labour’s approach to policymaking.
This article has sought to highlight the contested nature of this most recent concept of EBPM, both in terms of the interaction between evidence and policy and the nature of evidence itself. Much of the critique of EBPM has pointed to an instrumental and positivist understanding of the relationship between research and policy. As Parkhurst (2016, p. 376) points out, “the divisions between the champions of [evidence-based policy] and critical policy scholars reflect deeper epistemological divisions between positivists and post-positivist or constructivist thinkers.” It also reflects a division between policy practitioners and policy scholars, with many politicians and policymakers presenting the concept in an instrumental fashion (see, for example, Blunkett’s comments given previously).
In attempting to demonstrate that EBPM depoliticizes policy choices, its advocates are in fact engaging in politics. Choosing to pay attention to research that is quantifiable or can be systematized places confidence in that research and closes the door on more qualitative work that is equally valuable. Interpretations of the role of evidence that acknowledge the presence of other types of evidence, such as practitioner experience or ordinary knowledge, provide a more realistic, and democratic, understanding of the use of information in the development of policy. Sanderson (2002, p. 19) noted that “we need to recognise that policies are essentially ‘conjectures’ based upon the best available evidence. In most areas of economic and social policy this evidence will provide only partial confidence that policy interventions will work as intended.” This statement reflects the reality of the policy environment and the contingent and uncertain nature of policy problems and their available solutions. While these latter approaches are more realistic in their understanding of how decisions are reached in a democracy, in stretching the definition of evidence and the extent to which policy can be evidence-based, they rob the concept of evidence-based policy of its rhetorical power.
Australia Department of Industry, Innovation, Science, Research, and Tertiary Education. (2012). APS200 Project: The Place of Science in Policy Development in the Public Service.
Beyer, J. M. (1997). Research utilization: Bridging a cultural gap between communities. Journal of Management Inquiry, 6(1), 17–22.
Beyer, J. M., & Trice, H. M. (1982). The utilization process: A conceptual framework and synthesis of empirical findings. Administrative Science Quarterly, 27, 591–622.
Blunkett, D. (2000). Influence or irrelevance: Can social science improve government? Research Intelligence, 71, 12–21.
Boaz, A., Grayson, L., Levitt, R., & Solesbury, W. (2008). Does evidence-based policy work? Learning from the UK experience. Evidence and Policy, 4(2), 233–253.
Botterill, L. C. (2004). Valuing agriculture: Balancing competing objectives in the policy process. Journal of Public Policy, 24(2), 199–218.
Botterill, L. C., & Hindmoor, A. (2012). Turtles all the way down: Bounded rationality in an evidence-based age. Policy Studies, 33(5), 367–379.
Bristow, D., Carter, L., & Martin, S. (2015). Using evidence to improve policy and practice: The UK What Works Centres. Contemporary Social Science, 10(2), 126–137.
Brownson, R. C., Gurney, J. G., & Land, G. H. (1999). Evidence-based decision making in public health. Journal of Public Health Management and Practice, 5(5), 86–97.
Cabinet Office. (1999). Modernising government. White Paper CM4310. London: TSO.
Caplan, N. (1979). The two-communities theory and knowledge utilization. American Behavioral Scientist, 22, 459–470.
Fleck, L. (1979). Genesis and development of a scientific fact (T. J. Trenn & R. K. Merton, Trans.). Chicago: University of Chicago Press. Originally published in 1935.
Hammersley, M. (2005). Is the evidence-based practice movement doing more good than harm? Reflections on Iain Chalmers’ case for research-based policy making and practices. Evidence and Policy, 1(1), 85–100.
Hantrais, L., Lenihan, A. T., & MacGregor, S. (2015). Evidence-based policy: Exploring international and interdisciplinary insights. Contemporary Social Science, 10(2), 101–113.
Head, B. W. (2008). Three lenses of evidence-based policy. Australian Journal of Public Administration, 67(1), 1–11.
Head, B. W. (2016). Toward more “evidence-informed” policy making. Public Administration Review, 76(3), 472–484.
Hoppe, R. (2005). Rethinking the science-policy nexus: From knowledge utilization and science technology studies to types of boundary arrangements. Poiesis Prax, 3, 199–215.
Jasanoff, S. J. (1987). Contested boundaries in policy-relevant science. Social Studies of Science, 17(2), 195–230.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge, U.K.: Cambridge University Press.
Knight, C., & Lyall, C. (2013). Knowledge brokers: The role of intermediaries in producing research impact. Evidence and Policy, 9(3), 309–316.
Lasswell, H. D. (1951). The policy orientation. In D. Lerner & H. D. Lasswell (Eds.), The policy sciences (pp. 3–15). Stanford, CA: Stanford University Press.
Lindblom, C. E. (1959). The science of “muddling through.” Public Administration Review, 19, 79–88.
Lindblom, C. E., & Cohen, D. K. (1979). Usable knowledge: Social science and social problem solving. New Haven, CT: Yale University Press.
MacRae, D., Jr. (1973). Science and the formation of policy in a democracy. Minerva, 11(2), 228–242.
Majone, G. (1989). Evidence, argument, and persuasion in the policy process. New Haven, CT: Yale University Press.
Mead, L. M. (2010). Scholasticism in political science. Perspectives on Politics, 8(2), 453–464.
Mead, L. M. (2015). Only connect: Why government often ignores research. Policy Sciences, 48, 257–272.
Nelkin, D. (1975). The political impact of technical expertise. Social Studies of Science, 5(1), 35–54.
Newman, J., & Head, B. (2015). Beyond the two communities: A reply to Mead’s “Why government ignores research.” Policy Sciences, 48, 383–393.
Packwood, A. (2002). Evidence-based policy: Rhetoric and reality. Social Policy and Society, 1(3), 267–272.
Parkhurst, J. O. (2016). Appeals to evidence for the resolution of wicked problems: The origins and mechanisms of evidentiary bias. Policy Sciences, 49, 373–393.
Parsons, W. (2002). From muddling through to muddling up—Evidence based policy making and the modernisation of British government. Public Policy and Administration, 17(3), 43–60.
Pelz, D. C. (1978). Some expanded perspectives on use of social science in public policy. In M. Yinger & S. J. Cutler (Eds.), Major social issues: A multidisciplinary view. New York: Free Press.
Pielke, R. A., Jr. (2007). The honest broker: Making sense of science in policy and politics. Cambridge, U.K.: Cambridge University Press.
Ravetz, J. R. (1986). Usable knowledge, usable ignorance: Incomplete science with policy implications. In W. C. Clark & R. E. Munn (Eds.), Sustainable development of the biosphere. Cambridge, U.K.: Cambridge University Press.
Rosenstock, L., & Lee, L. J. (2002). Attacks on science: The risks to evidence-based policy. American Journal of Public Health, 92(1), 14–18.
Sanderson, I. (2002). Evaluation, policy learning, and evidence-based policy making. Public Administration, 80(1), 1–22.
Simon, H. (1944). Decision-making and administrative organization. Public Administration Review, 4(Winter), 17–30.
Solesbury, W. (2002). The ascendancy of evidence. Planning Theory and Practice, 3(1), 90–96.
Stewart, J. (2006). Value conflict and policy change. Review of Policy Research, 23(1), 183–195.
Tetlock, P. E. (2000). Coping with trade-offs: Psychological constraints and political implications. In A. Lupia, M. D. McCubbins, & S. L. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality (pp. 239–263). Cambridge, U.K.: Cambridge University Press.
Thacher, D., & Rein, M. (2004). Managing value conflict in public policy. Governance, 17(4), 457–486.
Torgerson, D. (1986). Between knowledge and politics: Three faces of policy analysis. Policy Sciences, 19(1), 33–59.
Weinberg, A. M. (1972). Science and trans-science. Minerva, 10(2), 209–222.
Weinberg, A. M. (1985). Science and its limits: The regulator’s dilemma. Issues in Science and Technology, 2(1), 59–72.
Weiss, C. H. (1977). Research for policy’s sake: The enlightenment function of social research. Policy Analysis, 3(4), 531–545.
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426–431.
Yearley, S. (2005). Making sense of science. London: SAGE.