Performance Management in Public Administration
Summary and Keywords
As one of the reforms supported by the New Public Management movement, Performance Management Systems (PMSs) have been implemented worldwide, across various policy areas and different levels of government. PMSs require public organizations to establish clear goals, measure indicators of these goals, report this information, and, ultimately, link this information with strategic decisions aimed at improving agencies’ performance. Therefore, the components of any PMS include: (1) strategic planning; (2) data collection and analysis (performance measurement); and (3) data utilization for decision-making (performance management). However, the degree of adoption and implementation of PMS components varies across both countries and levels of government. Thus, in understanding the role of PMSs in public administration, it is important to recognize that the drivers explaining the adoption of PMS components may differ from those explaining their implementation. Although the goal of any PMS is to boost government performance, the existing empirical evidence assessing PMS impact on organizational performance reports mixed results, and suggests that the implementation of PMSs may generate some unintended consequences. Moreover, while worldwide there is a steady increase in the adoption of performance metrics, the same cannot be said about the use of these metrics in decision-making or performance management. Research on the drivers of adoption and implementation of PMSs in developing countries is still lacking.
Around the globe, performance management systems (PMSs) have been implemented across various policy areas and at every level of government. Since the late 1970s, an increasing enthusiasm for government reform, through the adoption of PMSs, has spread first among members of the Organization for Economic Co-operation and Development (OECD) and later among developing countries. The rationale behind these reforms lies in the belief that public service delivery will improve if public agencies are oriented toward results, rather than being focused on processes. Thus, governments have used slogans like “managing for results” (Behn, 2002), “customer-driven administration” (Kettl, 1997), or “reinventing government” (Osborne & Gaebler, 1992) to promote PMSs. Although each country establishes its own set of mechanisms, all share the following characteristics. The agencies set their own performance goals and create performance measures to keep track of their performance, using benchmarking practices. In doing so, agencies provide incentives to achieve their performance goals, and track their progress to set or modify the strategies intended to realize their aims.
In the literature, the term “performance management” tends to be used interchangeably with the term “performance measurement.” However, these two concepts differ (Hatry, 2002), although both should be understood as components of PMSs. Moynihan (2008) has conceptualized performance management as “a system that generates performance information through strategic planning and performance measurement routines and that connects this information to decision venues, where, ideally, the information influences a range of possible decisions” (2008, p. 5). Other authors define performance management as “an integrated set of planning and review procedures that cascade down through the organization a link between each individual and the overall strategy of the organization” (Smith & Goddard, 2002, p. 247). For others, performance management is the “use of performance measurement in shaping the performance of organizations and people” (Agere & Jorm, 2000, p. 1). These definitions denote that performance management can be viewed as a system that requires public organizations to (1) establish clear goals, (2) measure indicators of these goals, (3) report this information, and (4) ultimately link this information to strategic decisions aimed at improving agencies’ performance. In sum, a PMS includes three essential components: 1) strategic planning; 2) data collection and analysis (performance measurement); and 3) data utilization for decision-making (performance management).
The widespread implementation of PMSs has been accompanied by an extensive number of studies devoted to discussing “big questions” related to this topic. For example, some have examined implementation strategies (Denhardt, 1985; Brudney et al., 1999; Ingraham & Moynihan, 2001; Johnsen, 2005; Holzer et al., 2009), as well as PMSs’ impacts on organizational performance (Holzer & Yang, 2004; Gerrish, 2015). Other studies have explained why PMSs are adopted (Moynihan, 2008), and what factors affect their implementation (Berman & Wang, 2000; De Lancer Julnes & Holzer, 2001). Some have focused on the components of PMSs by addressing, for example, the determinants of performance information use (Moynihan, 2008; Ammons & Rivenbark, 2008; Folz et al., 2009; Kroll, 2013). Others, like Frederickson and Frederickson (2006), have pointed out the difficulties of implementing PMSs in the context of a “hollow state.” These challenges derive from the potential conflict between democratic values and the values of the performance management movement (Radin, 2006).
Although most of the literature on PMSs has been generated mainly from the contexts of the United States and the United Kingdom, recently some academics have assessed both the theory and practice of PMSs in other parts of the world. For example, Bouckaert and Halligan (2008) compare PMSs across different countries. Other authors have written about certain regions’ experiences with PMSs, including East Asia (Berman et al., 2010), Southeast Asia (Berman, 2011), and Latin America (EAPDF, 2016). This article assesses the outcomes of these reforms in both developed (Gerrish, 2015; Gao, 2015) and developing countries (Ni Putu et al., 2007), discusses the drivers of both the adoption and implementation of these practices (Brudney et al., 1999; De Lancer Julnes & Holzer, 2001), and explains some unintended consequences derived from the implementation of these reforms (Li, 2015; Kalgin, 2016).
The scholarly interest in PMSs has led to recent systematic literature reviews on the topic. For example, Star et al. (2016) address the different performance measurement systems as well as the factors that influence the implementation of each system. Kroll (2015) categorizes the drivers of performance information use. Others have compiled the general lessons learned from the worldwide implementation of performance-oriented reforms (Gao, 2015), as well as PMSs’ impacts on both organizational performance (Gerrish, 2015) and policy areas across countries (Schwartz & Deber, 2016). The insights obtained from these systematic reviews have been valuable for enhancing our understanding of the components of PMSs, their impact, and implementation trends. However, little effort has been made to generate an analytical framework that integrates the different factors influencing both the use and role of PMSs in public organizations’ decision-making. Specifically, this article seeks to link PMSs to the field of public administration by highlighting the channels through which the PMS relates to public administration. To do so, the article engages in an analytical review of the PMS and each of its components, and analyzes the role of the PMS in public administration.
With this purpose in mind, the present article is structured in the following way. The first section describes what is understood as the movement of performance management and proceeds to explain the components of a performance management system: 1) strategic planning; 2) data collection and analysis (performance measurement); and 3) data utilization for decision-making (performance management). The next section explains the role of performance management in public administration. As part of this elucidation, the expectations that, from a rational perspective, motivate the implementation of PMSs are clarified. Subsequently, the implementation efforts and trends of each component are described. Then, the effects of PMSs on organizational performance are elaborated. In the last section, the gaps in the existing literature are identified, and the relevance of addressing them for academics and practitioners is assessed.
Origins of Performance Management Practices: The Movement
Performance management as a movement has its origins in a set of ideas that, in the OECD countries, came to be known as the New Public Management (NPM), and in the United States as “reinventing government.” These beliefs were materialized into administrative reforms and implemented beginning in the late 1970s in Australia, New Zealand, and the United Kingdom, and later, in Sweden, the United States, and other OECD countries (OECD, 1996). In the United States, although reforms aligned with this doctrine began to be implemented in the 1970s, these efforts became more systematic and broader in the 1990s with the passing of the Government Performance and Results Act (GPRA) and the National Partnership for Reinventing Government (Joyce, 2011). Following this trend, the George W. Bush administration implemented the President’s Management Agenda (PMA) in 2001 and the Program Assessment Rating Tool (PART) in 2002. Other regions, like Latin America, Polynesia, and Eastern and Southeast Asia, started implementing these reforms later. For example, in the late 1990s, major changes were initiated when some international organizations, such as the World Bank, the Inter-American Development Bank, and the Asian Development Bank, supported specific projects aimed at consolidating these reforms.
The ideas prompting the performance management movement are considered management doctrines, because—as Donald Moynihan (2008) affirms—they constitute a “prescriptive theory of cause and effect for how public organizations should be run, resulting in a series of policy options that demand implementation” (p. 27). As with any theory, this one is based on some predetermined ideas of the state of the world—in this case, the inefficiency of government and its capacity to change this inefficiency from within (Hood, 1991) by making more rational decisions (Moynihan, 2015). The performance management movement’s main assumption is that public organizations will improve their performance if managers clarify their objectives and measure organizational progress against their objectives (Rainey, 2014).
From these assumptions, a set of doctrines was generated supporting the PM movement. Even though a unique and foundational text of these doctrines does not exist, in 1992 Osborne and Gaebler gave a good summary of 10 doctrinal claims: 1) steering rather than rowing; 2) empowering rather than serving; 3) injecting competition into services delivery; 4) transforming rule-driven organizations; 5) funding outcomes, not inputs; 6) meeting the needs of the customer, not the bureaucracy; 7) earning rather than spending; 8) prevention rather than cure; 9) moving from hierarchy to participation and teamwork; and 10) leveraging change through the market (Osborne & Gaebler, 1992). Therefore, the PM movement refers to the wave of bureaucratic reforms adopted by several countries in an effort to realize these 10 doctrines.
This movement, in turn, is connected with the appearance of a new paradigm in public management as a field. This paradigm distinguishes the “old” bureaucratic paradigm from the “new” managerial perspective focused on managerialism (Lynn, 2006). However, in most countries the endeavors have focused mainly on creating performance-information systems and linking measurements with strategic planning (West, 2015). In this sense, administrative reforms seem to have focused primarily on the fourth and fifth doctrinal claims. Countries that joined this movement did not necessarily reject other reforms embraced by the advocates of reinventing government. Nevertheless, in practice, most of the adopted reforms aimed at creating a mission- and results-oriented government, while the other doctrines were taken as symbolic ideas complementing the core reform, or ideals of what the government should be. Thus, here we will focus on reforms targeting the fourth and fifth doctrinal claims.
Components of the Performance Management System
As mentioned before, the PMS includes three essential components: 1) strategic planning; 2) data collection and analysis; and 3) data utilization for decision-making. Strategic planning refers to the formulation of the strategy, which implies defining the goals and identifying their corresponding performance indicators.1 The second component is generally known as performance measurement and, in turn, involves two steps: a) data collection; and b) analysis and dissemination of the collected data. That is, public managers first need to collect and measure performance information; then, they are expected to analyze and disseminate the information to the relevant decision-makers.2 Finally, the third component of the PMS requires managers to use the performance data to inform their decisions (Smith & Goddard, 2002).
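The three-component cycle described above can be sketched, purely for illustration, as a minimal data model. This is not drawn from the source; the class names, goals, and target values are hypothetical, and real PMSs are of course far richer than this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    target: float  # planned level of the performance indicator (hypothetical)

@dataclass
class PMS:
    # (1) strategic planning: goals with their corresponding indicators/targets
    goals: list = field(default_factory=list)
    # (2) performance measurement: collected indicator values, keyed by goal
    measurements: dict = field(default_factory=dict)

    def measure(self, goal_name: str, value: float) -> None:
        # Data collection step of component (2)
        self.measurements[goal_name] = value

    def decide(self) -> list:
        # (3) data utilization: flag goals falling short of target,
        # i.e., candidates for revised strategies or reallocated resources
        return [g.name for g in self.goals
                if self.measurements.get(g.name, 0.0) < g.target]

pms = PMS(goals=[Goal("vaccination coverage", 0.90),
                 Goal("wait-time standard met", 0.80)])
pms.measure("vaccination coverage", 0.85)
pms.measure("wait-time standard met", 0.95)
print(pms.decide())  # → ['vaccination coverage']
```

The sketch only highlights the logical dependency the text emphasizes: measurement is meaningless without previously defined goals, and decisions draw on both.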
A PMS relies on two assumptions: first, that agencies are capable of specifying and quantifying their goals, and, second, that managers will trust the provided information and use it to make decisions about future goals (Heckman et al., 1997). Consequently, performance management should be seen as a system that links the performance of individuals to the organization’s general direction through processes that promote attaining the target goals (Agere & Jorm, 2000). Thus, individuals should be taken into consideration as part of each component of the PMS.
Boosters of Effective PMSs
In the literature on PMSs, the criteria for constructing an effective PMS were not generated at the same speed and time as the doctrinal claims. There was no real a priori theory of the characteristics or features of a successful PMS. After years of these systems’ dissemination, some authors started to distinguish good practices and, from there, key characteristics were identified. First, an effective system should clearly state duties and performance expectations (West, 2015). Nowadays, we understand this is a necessary condition to set in motion the first component of a PMS: strategic planning. Along the same lines, Blackman (2015) argues that in order for a PMS to add strategic value, it has to be in clear alignment with the outcomes sought by the public agency. Reinforcing this idea, Agere and Jorm (2000) emphasize the importance of seeing each component as integral to the functioning of the whole PMS, rather than as separate events. For this reason, Agere and Jorm (2000) suggest that strategic planning should be considered the first stage of the PMS process. That is why it is imperative that any review of organizational goals, practices, procedures, and structures be carried out before measurement begins: the selection of indicators to be measured should reflect the goals the organization aims to achieve (Agere & Jorm, 2000).
Besides having clearly established goals, involving stakeholders in the first and second components also is necessary for the system to be effective. Participation of employees and managers in the definition of goals and indicators secures ownership of the measurement by the people affected, and encourages them to use the performance information in their decision-making (Agere & Jorm, 2000). In addition, participation of stakeholders improves the perception of credibility of the performance system by those involved with it (Blackman, 2015). To promote employees’ involvement, West (2015) suggests including mechanisms to review performance agreements and indicators, as well as procedures that allow dealing with unsatisfactory performance. For example, a PMS also may include coaching, training, or mentoring to help individuals to fulfill organizational goals, as well as rewards and recognition provisions for those who meet or exceed expectations. In sum, acceptance of the PMS requires feedback from lower levels of government (Sorber, 1996).
In addition to taking into account the human resources of an organization to build an effective PMS, some authors have emphasized the importance of embedding the performance system within the overall corporate management structure of the organization (Blackman, 2015). In assessing recent performance management reforms in the United States, Moynihan and Kroll (2015) claim that an effective PMS is one that establishes routines facilitating the purposeful use of performance data. In line with this, Moynihan and Lavertu (2012) argue that the Government Performance and Results Act (GPRA) and the Program Assessment Rating Tool (PART) created routines for the creation and dissemination of data, but not for using information to make management, program, or resource-allocation decisions. This resulted in a “passive use” of performance information merely to comply with the norm. By contrast, an effective PM system should enact mechanisms that better integrate measurement and strategic planning by creating routines encouraging managers to use performance data in their strategic planning.
In sum, performance management measures whether the goals and objectives set by either politicians or public administrators are being accomplished. Then, it is up to public managers to use and analyze the measures collected to determine whether the organizational plans, procedures, and processes need to be reexamined to achieve the established goals and objectives. While the worldwide collection of performance measures is becoming institutionalized—either voluntarily or externally imposed—analyzing and incorporating these measures in the decision-making process to implement the necessary organizational adjustments is far from institutionalized.
The Role of Performance Management in Public Administration
As defined by Kettl, public administration is “the administration of governmental affairs” (1993, p. 38). In this sense, PMSs are expected to improve the performance of government actions. This expectation is based on a rational perspective of organizational performance, according to which organizational effectiveness has to do with the achievement of organizational goals through design, management, and other organizational features (Rainey, 2014).3 PMSs are meant to improve performance by providing public managers with accurate information about the level of goal achievement, so they can use this information to determine a strategy that will help accomplish their goals. Provided with this information, managers may opt to continue, modify, or terminate any organizational process or feature in order to improve performance. Some of the organizational features managers may target include strategy, goals, structure, resource allocation, and human resource reallocation, among others.
Usage of Performance Information in Public Administration
In the field of public administration, the prevailing notion is that politicians and managers will use performance information in a purposeful way. In fact, the use of performance information is the ultimate test of a PMS (Zients, 2009). Politicians and public managers may use performance information for controlling, evaluating, motivating, learning, competing, etc. Each of these activities has been identified as a “purposeful” use of information (Moynihan, 2009), because these metrics are utilized to improve organizational performance. As noted by Kroll (2015), most of the empirical studies have concentrated on this sort of data use. This might be because a “purposeful” use of performance information is in line with one of the goals promoted by NPM reforms in public administration. However, some authors have documented cases where performance information has been used for gaming, cheating, and advocating (Kalgin, 2016).
Given the present focus on the role of PMSs in public administration, this article looks next at a few possible uses of performance information. For example, “[e]xternally, performance information can be used to showcase performance, to give account, or to compare and benchmark. Internally, it can be used to monitor internal development or to improve operations” (Hammerschmid et al., 2013, p. 261). Nonetheless, this article will make a distinction between intended and unintended uses of performance information.
According to Kroll (2015), the four dimensions of performance information use are: purposeful, passive, political, and perverse. In the category of purposeful use, Kroll (2015) identifies learning, controlling, evaluating, budgeting, motivating, celebrating (including ranking and rewarding), and improving. Other authors include employee empowerment in this purposeful dimension (Lee et al., 2006). The passive use of performance information refers to its use for accountability purposes. That is, from subordinates’ perspective, performance information should reflect the accomplishment of their tasks (Liu et al., 2010). The political dimension refers to using performance information to promote the organization or its programs to stakeholders (Behn, 2003). The last dimension includes perverse uses of performance information, like gaming (Li, 2015), cheating (Moynihan, 2009; Kalgin, 2016), or effort substitution4 (Kelman & Friedman, 2009). In sum, public managers can use performance metrics for different purposes and in different forms (Hood, 2008).
Given the rational conception of PMSs, the purposeful dimension falls under intended uses, as this usage should lead to performance improvement. The other three dimensions exemplify unintended uses. This is because, in the rational approach to organizational performance, the purpose of measuring performance is to improve government effectiveness. So, purposeful use should secure that intention. Although passive use is not harmful, it is not the type of use expected to improve organizational management.
From Expectations to Reality
As stated before, the role of the PMS in public administration is to improve the management of government business. Hence, in any public agency using this system, we expect to find the following: (1) organizational goals are clearly stated; (2) a performance measurement system is established; (3) information is used to make decisions; and (4) these three factors contribute to its performance. We identify an extensive number of empirical studies that shed some light on the extent to which these expectations have been fulfilled. The general answer is that the first and second components are present in most of the settings in which PMSs have been adopted and implemented. However, the third element (performance information use) is lacking in many cases. Despite the partial implementation of PMSs, Gerrish’s (2015) systematic review of empirical studies concludes that PMSs have a small positive effect on organizational performance. In what follows, we will lay out some empirical evidence on the degree of implementation of each component of the PMS, factors that explain the adoption of each component, and the effect of PMSs on organizational performance.
The first conclusion derived from empirical studies is that public agencies have the procedures and systems needed to collect and measure performance information. More than thirty years after the beginning of the performance management movement, most empirical research provides evidence of established and functional PMSs that produce information and performance indicators. Countries and regions have joined the movement at different times, which might explain why some have consolidated PMSs, while others are still working toward institutionalizing them. For example, studies about the first OECD countries implementing these systems affirm that NPM reforms have resulted in creating new agencies and mechanisms to gather performance information (Walker & Boyne, 2006; West, 2015). These systems are well developed at the federal level; however, at the subnational level the coverage is still relatively low (Brudney et al., 1999; Berman & Wang, 2000; Downe et al., 2010). For example, at the county level in the United States, Berman and Wang (2000) find considerable variation in the implementation of performance measurement, mainly because counties vary significantly in terms of their organizational and technical capacities. Likewise, Downe et al. (2010) show that local authorities across Wales, England, and Scotland exhibit differences in performance assessment frameworks, as required by the central UK government. Casas Guzman (2007) describes the implementation of the health evaluation system in Mexico and its variation in the degree of implementation across subnational government agencies.
The general notion in the OECD countries is that “performance management has failed to deliver on its promises” (Blackman, 2015). This belief is based on the fact that performance information is not commonly used purposefully (Joyce, 2003; Moynihan & Lavertu, 2012; Heinrich, 2012), although we can find examples at the local level in which managers use performance information to make budgetary decisions. As Melkers and Willoughby (2005) show, not all local managers are equally motivated to use performance information when making budgetary decisions. In the U.S. context, for example, when compared with county managers, city managers are more inclined to rely on performance information for budgetary decisions.
In developing countries, the performance management movement also has gained relevance. Given the high level of corruption characterizing these countries, PMSs have been viewed as instruments to increase control and promote accountability (Gao, 2015). However, even though the systems are formally put in place, they often fail to generate enough performance information (Uddin, 2003). In a sense, in many of these countries, the adoption of these systems has been merely symbolic (Ni Putu et al., 2007). Some scholars focused on developing countries have tried to identify the factors affecting whether the implementation of PMSs is successful. For instance, Sarker (2006) carried out a study focused on Bangladesh and Singapore, in which he found that factors such as economic development, existence of a formal market economy, rule of law, and level of administrative infrastructure influence the success of PMS implementation.
In short, given the distinct stages of progress of PMSs in developed and developing countries, it is relevant to understand what explains the lack of performance information use in developed countries, as well as what accounts for the adoption and implementation of PMSs in developing countries and how far their implementation has progressed. The literature is vast on the first question, but still lacking on the second.
Determinants of Performance Information Use
Extensive empirical research has tested the effect of different factors on the use of performance information, mostly in developed countries. The first group of factors is associated with environmental drivers. In this respect, much of the literature has explored the effect of stakeholder involvement on performance information use (Berman & Wang, 2000; Moynihan & Ingraham, 2004; Ho, 2006; Yang & Hsieh, 2007; Moynihan & Pandey, 2010; Moynihan & Hawes, 2012). A few studies have analyzed the relationship between performance information use and general political support (Moynihan, Pandey, & Wright, 2012a; Yang & Hsieh, 2007). Most studies report that stakeholder involvement and political support do influence performance information use. Another driver that has been examined quite extensively is political competition (Askim, Johnsen, & Christophersen, 2008; Bourdeaux & Chikoto, 2008; Moynihan & Hawes, 2012). However, its effect on performance information use is inconclusive.
Another group of explanatory variables points at organizational factors related to public management, such as measurement system maturity, leadership support, support capacity, and employee involvement, among others. Using a sample of more than 3,100 top public-sector executives in six European countries, Hammerschmid and colleagues (2013) found that the use of performance information is mainly determined by organizational factors, rather than managers’ individual socio-demographic characteristics.
Others have focused on determining the role more sophisticated measurement systems play in fostering the utilization of performance information (Berman & Wang, 2000; De Lancer Julnes & Holzer, 2001; Melkers & Willoughby, 2005; Ho, 2006; Ammons & Rivenbark, 2008; Taylor, 2009; Moynihan & Pandey, 2010). In this case, the majority of studies conclude that the type of measurement system has a statistically significant effect, although a few show no significant effect. Other potential drivers of performance information use are leadership support (Boyne et al., 2004; Moynihan & Ingraham, 2004; Yang & Hsieh, 2007; Dull, 2009; Moynihan & Lavertu, 2012), and support capacity (Berman & Wang, 2000; De Lancer Julnes & Holzer, 2001; Yang & Hsieh, 2007; Moynihan & Lavertu, 2012). Both the commitment level of top leaders to accomplishing results, and the extent of resources an agency dedicates to performance measurement, seem to be very important in encouraging managers to use performance information purposefully.
A third set of factors relates to other organizational aspects. In this respect, academic studies have found that innovative culture (Moynihan, 2005; Folz, Abdelrazek, & Chung, 2009; Johansson & Siverbo, 2009; Moynihan, Pandey, & Wright, 2012), goal clarity (Moynihan & Landuyt, 2009; Moynihan, Pandey, & Wright, 2012), information availability, such as learning forums (Moynihan, 2005; Moynihan & Landuyt, 2009; Moynihan & Lavertu, 2012), and organizational type (Hammerschmid et al., 2013) influence managers’ use of performance metrics data to make decisions. By contrast, other organizational variables, such as an organization’s size (Moynihan & Ingraham, 2004; Melkers & Willoughby, 2005; Bourdeaux & Chikoto, 2008; Johansson & Siverbo, 2009; Taylor, 2011; Kroll, 2013) and financial distress (Askim, Johnsen, & Christophersen, 2008; Berman & Wang, 2000; Johansson & Siverbo, 2009; Moynihan & Pandey, 2010; Kroll, 2013), do not seem to influence the utilization of information linked to organizational performance when taking into account more precise measures of resources, such as support capacity. Finally, Hammerschmid et al. (2013) found that in six European countries, top managers’ survey responses suggest higher use of performance indicators in subnational government agencies, compared to central government ministries. They also found considerable differences in patterns of use across policy areas (e.g., employment services, finances, and economic affairs).
Scholars also have explored the role of individual-level factors in explaining performance information use. However, socio-demographic characteristics, such as job experience (Melkers & Willoughby, 2005; Dull, 2009; Taylor, 2011), educational attainment (Moynihan & Ingraham, 2004; Moynihan & Hawes, 2012), and individuals’ hierarchical level (De Lancer Julnes & Holzer, 2001; Taylor, 2011) do not seem to explain the degree of performance information use in the organization. On the other hand, individuals’ thoughts and feelings about performance measures (Ho, 2006; Ammons & Rivenbark, 2008; Taylor, 2011), prosocial motivation (Moynihan & Pandey, 2010; Moynihan, Pandey, & Wright, 2012), and networking (Moynihan & Hawes, 2012; Kroll, 2013) do play a role in explaining the use of performance information.
The above studies report that individual beliefs, attitudes, and social norms are relevant in explaining use of performance metrics when making decisions. However, empirical evidence in this regard is mixed. Studies covering some European countries report that highly educated and more experienced politicians at lower-level positions make the least use of performance information (Askim, 2009; Askim et al., 2008; Hammerschmid et al., 2013). More experienced public managers at higher-level positions, however, report being more active users of performance information (Hammerschmid et al., 2013). These findings are relevant, as they show the importance of managers in the quest toward the utilization of performance data. This also implies that organizational reforms, in the form of rules and procedures, are an inefficient way to get people to use performance information.
The last set of drivers influencing performance information use refers to the nature of the data and the way data are presented (Moynihan, 2015). For example, Moynihan (2015) contends that “the connection between data and decisions will not be automatic, but will depend upon circumstances, such as the nature of the data and how they are presented.” He tests this proposition using a vignette-experiment methodology, in which subjects were presented with a variety of budget scenarios and asked to make budgetary decisions. The results suggest that advocacy, goal ambiguity, and expectancy disconfirmation alter the use of performance data in decisions. Likewise, Hammerschmid et al. (2013) find that the implementation of certain performance management instruments, such as target systems, controlling, balanced scorecards, reporting systems, and performance contracts, influences the actual use of performance information. Finally, for performance measures to be used, data must be gathered on a uniform basis, especially when metrics are compared across time (Sorber, 1996).
The Impact of Performance Management Systems on Organizational Performance
Defining Organizational Performance
A fundamental question in the literature on performance management has been how to measure an agency’s achievement of its goals or objectives; in other words, how to define and measure performance (Andrews et al., 2011). In this regard, Scott and Davis (2007) point out that whenever we ask about performance, we are interested in knowing how an organization is doing with respect to some set of standards. Research on this matter has followed two trends. The first concentrates on identifying standards that would better represent the functions of government (Ammons, 1995) or the purposes of its activities (Behn, 2003). In this sense, Boyne (2003) asserts that public-service improvement is subjective and intrinsically political; therefore, there is no unique criterion to determine whether or not public services are actually obtaining better results. Nonetheless, Boyne (2003) provides seven general dimensions for assessing public service improvement: (1) quantity of outputs; (2) quality of outputs; (3) efficiency; (4) equity; (5) outcomes; (6) value for money; and (7) consumer satisfaction. Each dimension, in turn, is expected to be affected in different ways by different factors. For example, factors boosting efficiency may at the same time diminish both service quality and consumer satisfaction, and while some drivers may have a direct impact on the quantity of outputs, their effect on other dimensions of performance may be contingent on other factors.
In implementing PMSs, it is crucial to define the agency’s products, outputs, and outcomes. Hence, scholars have argued that, for a PMS to function, it is crucial to produce the appropriate data; that is, the performance information has to actually say something about the indicator of performance in which we are interested (Joyce, 2011). However, given the nature of some agencies’ activities, it may be difficult both to define their outputs and to allocate costs (including time) to their outcomes (Sorber, 1996). This impedes the creation of performance metrics and the subsequent use of information in decision-making. In this regard, it is important to distinguish between outputs and outcomes. According to Behn (2003), outputs are the goods and services that organizations produce, while outcomes are the ends that organizations aim to achieve by means of their outputs. Given organizations’ different natures, clarity in defining products and outputs is vital for implementing PMSs. In the United States, the General Accounting Office, the Government Accounting Standards Board, and the National Academy of Public Administration are mainly focused on measuring outcome indicators (Berman & Wang, 2000).
In evaluating organizational performance, a second trend of scholarship assesses the pertinence of the measures themselves. In this group, we find studies that analyze the nature and the source of the indicators used to measure performance (Brewer, 2006); the strengths and weaknesses of objective as opposed to subjective measures (Andrews et al., 2010; Schachter, 2010; Meier & O’Toole, 2013); and the statistical properties of performance measures (Heinrich & Lynn, 2001; Meier & O’Toole, 2012). Some of these studies’ conclusions point to the need for generating contextually specific indicators (Gao, 2015). The discussion about the adequacy of performance data becomes even more important in contexts in which an extensive portion of government work is carried out by third parties. This situation is highlighted by Frederickson and Frederickson (2006), who argue that federal managers are being held accountable for performance standards that are difficult to apply to third parties.
In addition to the vital need to define and measure performance properly and contextually, there is interest in assessing the internal processes that may improve organizational performance. An emblematic study focusing on these aspects is the Government Performance Project, which aimed to evaluate the effectiveness of public agencies. In practice, this project did not measure performance in terms of outcomes, but rather in terms of management system capacity (Rainey, 2014). In sum, when implementing PMSs, it is crucial to define performance clearly, for there are important differences between measuring outputs as opposed to outcomes, efficiency as opposed to effectiveness, and program outcomes as opposed to policy outcomes (Osborne & Gaebler, 1992; Sorber, 1996).
Do PMSs Impact Organizational Performance?
One of the main endeavors of the literature on this topic has been to measure the impact of PMSs on organizational performance. However, little research exists on the results of PMSs in developing countries. Empirical evidence from developed countries suggests that PMSs produce mixed results (Clarkson et al., 2009; West, 2015). Intrigued by these results, Gerrish (2015), relying on 49 studies, conducts a meta-analysis of the impact of PM on public organizational performance and finds that PM has a small positive average effect. These results are in line with those obtained by Poister et al. (2013), who analyze the impact of PM on local government effectiveness in the United States. Other studies also have found an effect of PM systems on performance, but the impact is contingent on the level of goal clarity, the ability to select undistorted performance metrics, and the degree to which managers know and control the transformation process (Spekle & Verbeeten, 2014). When these contingency factors are low, PMSs used as an incentive tool have a negative influence on performance.
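To illustrate the kind of aggregation a meta-analysis performs when computing an average effect, the sketch below pools effect sizes with simple inverse-variance (fixed-effect) weighting. The study values are hypothetical, and the fixed-effect pooling is an assumption chosen for illustration; it is not Gerrish’s (2015) actual data or estimation method.

```python
# Illustrative fixed-effect meta-analysis pooling.
# Effect sizes and variances below are hypothetical, not drawn from
# Gerrish (2015) or any real study.

def pooled_effect(effects, variances):
    """Inverse-variance weighted average of study effect sizes."""
    weights = [1.0 / v for v in variances]
    weighted_sum = sum(w * e for w, e in zip(weights, effects))
    return weighted_sum / sum(weights)

# Three hypothetical studies: two small positive effects, one small
# negative effect; more precise studies (smaller variance) weigh more.
effects = [0.10, 0.05, -0.02]
variances = [0.01, 0.02, 0.04]
print(round(pooled_effect(effects, variances), 3))  # prints 0.069
```

Because the most precise study reports a positive effect, the pooled estimate stays positive even though one study is negative, mirroring how a meta-analysis can yield a small positive average effect from mixed individual results.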
In developing countries, scholars also have found mixed results. For example, a study of Brazil’s PM system shows that the implementation of results-oriented agreements is associated with positive outcomes in the security and education sectors (Viñuela & Zoratto, 2015). But Graves and Dollery (2009) found that in South African municipalities, performance measurement reforms fail to improve financial compliance and obstruct performance assessment.
Another line of research suggests that performance management has different impacts in public and private organizations. Indeed, using survey and administrative data from Danish private and public schools, Hvidman and Andersen (2014) find that the impact of performance management is contingent on the sector in which it is adopted. In private organizations, the use of performance metrics contributes to performance without having unintended effects on equity. In public schools, however, performance information fails to improve performance.
In addition to the studies on the effect of PMSs on organizational performance, efforts have been made to measure the effect of these systems on other elements of the organization that indirectly influence performance. In this line of research, Campbell (2015) analyzes central government ministries in South Korea and shows that PMSs have a positive effect on change-oriented behavior, but that this relationship is mediated by individuals’ organizational identification. Masal and Vogel (2016) focus on the effects that different uses of performance information have on job satisfaction. Studying the policing sector, they find that using performance information for supportive purposes has a positive relationship with job satisfaction, while using it for controlling purposes has a negative effect. Another study provides evidence of the unintended consequences that PMSs might have on the relationship between communities and bureaucrats (Faull, 2016). In particular, this study shows that in the South African Police Service, the implementation of a PMS promoted police practices that weaken community trust and hinder cooperation with the police. This finding might suggest a negative effect of PMSs on performance when performance is assessed in terms of consumer satisfaction.
Future Lines of Research
This article has sought to define PMS and to identify its components, the status of its implementation, and the factors that encourage its implementation. Additionally, this exercise sought to assess the role of PMSs in public administration. Some scholars and practitioners regard PMS as any practice or reform that relates to the doctrinal claims of the “reinventing government” movement. However, in this review, we adopted a narrower perspective on the topic and incorporated only the studies that understand PMS in line with Moynihan’s definition: “a system that generates performance information through strategic planning and performance measurement routines and that connects this information to decision venues, where, ideally, the information influences a range of possible decisions” (2008, p. 5).
While a steady increase exists in the worldwide adoption of performance metrics, the same cannot be said about the use of these metrics in decision-making. In settings where performance information is used, it tends to happen in particular policy areas and agencies. Finally, there is a preference for generating certain performance metrics, while neglecting metrics targeted to assess other dimensions of performance, such as equity, accountability, and citizens’ satisfaction.
From this review, some gaps and patterns can be identified in the literature on PMSs. The first gap is the lack of research on the drivers of adoption and implementation of PMSs in developing countries. Existing studies have focused on describing the experience of implementing PMSs in developing countries; however, little empirical research has been conducted to disentangle the factors that affect the adoption of these systems in developing countries, and the effect that PMSs have on organizational performance. In addressing this gap, it is necessary to determine the degree of implementation of each of the components of PMSs, and then estimate their effect on government performance. Blackman (2015) notes a disconnect between the adoption of PMSs and their actual implementation. Indeed, as developing and transitional economies seek membership in the OECD, they are required to have PMSs in place. Thus, some countries might adopt PMSs on paper, but not implement the required mechanisms. Future research should therefore help us understand this adoption-implementation decoupling. By addressing this gap, research will inform both academics and practitioners about the elements public managers need to transition from the adoption to the real implementation of PMSs.
In addition, future research should explore the role of decentralization in both the implementation of PMSs and the use of performance information in decision-making. This is relevant because, since the 1990s, many centralized developing countries have adopted political, fiscal, and administrative decentralization as a tool to improve governance and bring government closer to its citizens (Rondinelli, 1983). Other lines of research should assess the impact that external pressures, such as international funding and NGOs, may have on governments’ decisions to adopt PMSs. This is mainly the case in developing settings, which continually face fiscal constraints, forcing subnational governments to look for alternative, external sources of funding (Avellaneda et al., 2016). Likewise, other studies need to explore the effects that leadership turnover and bureaucratic professionalization have on the implementation of PMSs. Indeed, the institutionalization of PMSs is expected to be contingent on the tenure and expertise of bureaucrats, organizational features that are missing in most developing countries (Avellaneda, 2009, 2012; Grindle, 2009).
This review also sought to assess the role of PMS in public administration. It did so by evaluating the effects of PMSs on government performance and highlighting that performance is multifaceted, as it can be assessed across several dimensions. That said, more studies are needed to help understand the effect of performance information use on different dimensions of organizational performance, including agencies’ outputs, outcomes, equity, accountability, efficiency, citizens’ satisfaction, and responsiveness (Boyne, 2003). For example, as previous studies have shown, the impact of managerial variables varies across the dimensions of performance that are assessed (Boyne, 2003).
Future studies also should identify and test the role of potential moderators, such as political (electoral cycle and electoral competitiveness), institutional (fiscal incentives, open government, civil society, and channels of citizens’ participation), and managerial factors (bureaucratic professionalization, contracting out, privatization). As the problems governments face today are more complex than ever, the relationship between PMSs and organizational performance might become contingent on other factors.
More recent studies show that PMSs influence other organizational features, such as job satisfaction and employees’ change-oriented behavior (Masal & Vogel, 2016). However, more research is needed to understand the causal mechanisms of these relationships. Finally, more research is needed to understand under what conditions PMSs generate unintended consequences.
References
Agere, S., & Jorm, N. (2000). Designing performance appraisals: Assessing needs and designing performance management systems in the public sector. London: Commonwealth Secretariat.
Ammons, D. N. (1995). Overcoming the inadequacies of performance measurement in local government: The case of libraries and leisure services. Public Administration Review, 55(1), 37–47.
Ammons, D. N., & Rivenbark, W. C. (2008). Factors influencing the use of performance data to improve municipal services: Evidence from the North Carolina Benchmarking Project. Public Administration Review, 68(2), 304–318.
Andrews, R., Boyne, G. A., Moon, M. J., & Walker, R. M. (2010). Assessing organizational performance: Exploring differences between internal and external measures. International Public Management Journal, 13(2), 105–129.
Andrews, R., Boyne, G., & Walker, R. (2011). The impact of management on administrative and survey measures of organizational performance. Public Management Review, 13(2), 227–255.
Askim, J. (2009). The demand side of performance measurement: Explaining councillors’ utilization of performance information in policymaking. International Public Management Journal, 12(1), 24–47.
Askim, J., Johnsen, A., & Christophersen, K. A. (2008). Factors behind organizational learning from benchmarking: Experiences from Norwegian municipal benchmarking networks. Journal of Public Administration Research and Theory, 18(2), 297–320.
Avellaneda, C. N. (2009). Mayoral quality and local public finance. Public Administration Review, May/June, 469–486.
Avellaneda, C. N. (2012). Do politics or mayors’ demographics matter for municipal revenue expansion? Public Management Review, 14(8), 1061–1086.
Avellaneda, C. N., Johansen, M., & Suzuki, K. (2016). What drives Japanese INGOs to operate in Latin American countries? International Journal of Public Administration, http://www.tandfonline.com/eprint/RaVYYMHK3fwVizGA8CHc/full.
Behn, R. D. (2002). The psychological barriers to performance management: Or why isn’t everyone jumping on the performance-management bandwagon? Public Performance & Management Review, 26(1), 5–25.
Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public Administration Review, 63(5), 586–606.
Berman, E., & Wang, X. (2000). Performance measurement in U.S. counties: Capacity for reform. Public Administration Review, 60(5), 409–420.
Berman, E. M. (2011). Public administration in Southeast Asia: Thailand, Philippines, Malaysia, Hong Kong, and Macao. Boca Raton, FL: CRC Press.
Berman, E. M., Moon, M. J., & Choi, H. S. (2010). Public administration in East Asia: Mainland China, Japan, South Korea, and Taiwan. Boca Raton, FL: CRC Press.
Blackman, D. (2015). Employee performance management in the public sector: A process without a cause. Australian Journal of Public Administration, 74(1), 73–81.
Bouckaert, G., & Halligan, J. (2008). Managing performance: International comparisons. New York: Routledge.
Bourdeaux, C., & Chikoto, G. (2008). Legislative influences on performance management reform. Public Administration Review, 68(2), 253–265.
Boyne, G., Gould-Williams, J., Law, J., & Walker, R. (2004). Toward the self-evaluating organization? An empirical test of the Wildavsky model. Public Administration Review, 64(4), 463–473.
Boyne, G. A. (2003). Sources of public service improvement: A critical review and research agenda. Journal of Public Administration Research and Theory, 13(3), 367–394.
Brewer, G. A. (2006). All measures of performance are subjective: More evidence on US federal agencies. In G. A. Boyne, K. J. Meier, L. J. O’Toole, & R. M. Walker (Eds.), Public service performance: Perspectives on measurement and management. Cambridge, U.K.: Cambridge University Press.
Brignall, S., & Modell, S. (2000). An institutional perspective on performance measurement and management in the “new public sector.” Management Accounting Research, 11, 281–306.
Brudney, J. L., Hebert, F. T., & Wright, D. S. (1999). Reinventing government in the American states: Measuring and explaining administrative reform. Public Administration Review, 59(1), 19–30.
Campbell, J. W. (2015). Identification and performance management: An assessment of change-oriented behavior in public organizations. Public Personnel Management, 44(1), 46–69.
Casas Guzman, F. J. (2007). A system for monitoring and control of health services: The case of Mexico. In J. Mayne & E. Zapico-Goñi (Eds.), Monitoring performance in the public sector: Future directions from international experience. New Brunswick, NJ: Transaction Publishers.
Clarkson, P., Davies, S., Challis, D., Donnelly, M., & Beech, R. (2009). Has social care performance in England improved? An analysis of performance ratings across social services organizations. Policy Studies, 30(4), 403–422.
De Lancer Julnes, P., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693–708.
Denhardt, R. B. (1985). Strategic planning in state and local government. State and Local Government Review, 17(1), 174–179.
Downe, J. D., Grace, C. L., Martin, S. J., & Nutley, S. M. (2010). Theories of public service improvement: A comparative analysis of local performance assessment frameworks. Public Management Review, 12(5), 663–678.
Dull, M. (2009). Results-model reform leadership: Questions of credible commitment. Journal of Public Administration Research and Theory, 19(2), 255–284.
EAPDF (2016). ¡GpR! Buenas Prácticas en América Latina: Brasil, Colombia y México. Escuela de Administración Pública del Distrito Federal.
Faull, A. (2016). Measured governance? Policing and performance management in South Africa. Public Administration and Development, 36, 157–168.
Folz, D., Abdelrazek, R., & Chung, Y. (2009). The adoption, use, and impacts of performance measures in medium-size cities. Public Performance & Management Review, 33(1), 63–87.
Frederickson, D. G., & Frederickson, H. G. (2006). Measuring the performance of the hollow state. Washington, DC: Georgetown University Press.
Gao, J. (2015). Performance measurement and management in the public sector: Some lessons from research evidence. Public Administration and Development, 35, 86–96.
Gerrish, E. (2015). The impact of performance management on performance in public organizations: A meta-analysis. Public Administration Review, 76(1), 19.
Graves, N., & Dollery, B. (2009). Local government reform in South Africa: An analysis of financial management legislative compliance by municipalities. Public Administration and Development, 29, 397–414.
Grindle, M. S. (2009). Going local: Decentralization, democratization, and the promise of good governance. Princeton, NJ: Princeton University Press.
Hammerschmid, G., Van de Walle, S., & Stimac, V. (2013). Internal and external use of performance information in public organizations: Results from an international executive survey. Public Money & Management, 33(4), 261–268.
Hatry, H. P. (1980). Performance measurement principles and techniques: An overview for local government. Public Productivity Review, 4(4), 312–339.
Hatry, H. P. (2002). Performance management: Fashion and fallacies. Public Performance & Management Review, 25(4), 352–358.
Heckman, J., Heinrich, C., & Smith, J. (1997). Assessing the performance of performance standards in public bureaucracies. American Economic Review, 87(2), 389–395.
Heinrich, C. J. (2012). How credible is the evidence, and does it matter? An analysis of the Program Assessment Rating Tool. Public Administration Review, 72(1), 123–134.
Heinrich, C. J., & Lynn, L. E., Jr. (2001). Means and ends: A comparative study of empirical methods for investigating governance and performance. Journal of Public Administration Research and Theory, 11(1), 109–138.
Ho, A. (2006). Accounting for the value of performance measurement from the perspective of Midwestern mayors. Journal of Public Administration Research and Theory, 16(2), 217–237.
Holmstrom, B., & Milgrom, P. (1991). Multitask principal–agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics and Organization, 7(special issue), 24–52.
Holzer, M., Charbonneau, E., & Kim, Y. H. (2009). Mapping the terrain of public service quality improvement: Twenty-five years of trends and practices in the United States. International Review of Administrative Sciences, 75, 403–418.
Holzer, M., & Yang, K. F. (2004). Performance measurement and improvement: An assessment of the state of the art. International Review of Administrative Sciences, 70, 15–31.
Hood, C. (1991). A public management for all seasons? Public Administration, 69(Spring), 3–19.
Hood, C. (2008). Public service management by numbers: Why does it vary? Where has it come from? What are the gaps and the puzzles? Public Money & Management, 27(2), 95–102.
Hopper, T., Tsamenyi, M., Uddin, S., & Wickramasinghe, D. (2009). Management accounting in less developed countries: What is known and needs knowing. Accounting, Auditing & Accountability Journal, 22(3), 469–514.
Hvidman, U., & Andersen, S. C. (2014). Impact of performance management in public and private organizations. Journal of Public Administration Research and Theory, 24(1), 35–58.
Ingraham, P. W., & Moynihan, D. P. (2001). Beyond measurement: Measuring for results in state government. In D. Forsythe (Ed.), Quicker, better, cheaper? Managing performance in American government (pp. 309–335). Albany, NY: Rockefeller Institute Press.
Johansson, T., & Siverbo, S. (2009). Explaining the utilization of relative performance evaluation in local government: A multi-theoretical study using data from Sweden. Financial Accountability & Management, 25(2), 197–224.
Johnsen, Å. (2005). What does 25 years of experience tell us about the state of performance measurement in public policy and management? Public Money & Management, 25(1), 9–17.
Joyce, P. G. (2003). Linking performance and budgeting: Opportunities in the federal budgeting process. Washington, DC: IBM Center for the Business of Government.
Joyce, P. G. (2011). The Obama administration and PBB: Building on the legacy of federal performance-informed budgeting? Public Administration Review, May/June, 356–367.
Kalgin, A. (2016). Implementation of performance management in regional government in Russia: Evidence of data manipulation. Public Management Review, 18(1), 111–137.
Kelman, S., & Friedman, J. N. (2009). Performance improvement and performance dysfunction: An empirical examination of distortionary impacts of the emergency room wait-time target in the English National Health Service. Journal of Public Administration Research and Theory, 19, 917–946.
Kettl, D. F. (1997). The global revolution in public management: Driving themes, missing links. Journal of Policy Analysis and Management, 16(3), 446–462.
Kettl, D. F., DiIulio, J. J., & Garvey, G. (1993). Improving government performance: An owner’s manual. Washington, DC: Brookings Institution.
Kroll, A. (2013). The other type of performance information: Non-routine feedback, its relevance and use. Public Administration Review, 73(2), 265–276.
Kroll, A. (2015). Drivers of performance information use: Systematic literature review and directions for future research. Public Performance & Management Review, 38(3), 459–486.
Lee, H. N., Cayer, J., & Lan, Z. Y. (2006). Changing federal government employee attitudes since the Civil Service Reform Act of 1978. Review of Public Personnel Administration, 26(1), 21–51.
Li, J. (2015). The paradox of performance regimes: Strategic responses to target regimes in Chinese local government. Public Administration, 93(4), 16.
Liu, W. B., Meng, W., Li, X. X., & Zhang, D. Q. (2010). DEA models with undesirable inputs and outputs. Annals of Operations Research, 173(1), 177–194.
Lynn, L. E. (2006). Public management: Old and new. New York: Taylor & Francis.
Masal, D., & Vogel, R. (2016). Leadership, use of performance information, and job satisfaction: Evidence from police services. International Public Management Journal, 19(2), 208–234.
Meier, K. J., & O’Toole, L. J., Jr. (2012). Subjective organizational performance and measurement error: Common source bias and spurious relationships. Journal of Public Administration Research and Theory, 23, 429–456.
Meier, K. J., & O’Toole, L. J., Jr. (2013). I think (I am doing well), therefore I am: Assessing the validity of administrators’ self-assessments of performance. International Public Management Journal, 16(1), 1–27.
Melkers, J., & Willoughby, K. (2005). Models of performance-measurement use in local governments: Understanding budgeting, communication, and lasting effects. Public Administration Review, 65(2), 180–190.
Moynihan, D. P. (2005). Goal-based learning and the future of performance management. Public Administration Review, 65(2), 203–216.
Moynihan, D. P. (2008). The dynamics of performance management: Constructing information and reform. Washington, DC: Georgetown University Press.
Moynihan, D. P. (2009). Through a glass darkly: Understanding the effects of performance regimes. Public Performance & Management Review, 32(4), 592–603.
Moynihan, D. P. (2015). Uncovering the circumstances of performance information use: Findings from an experiment. Public Performance & Management Review, 39(1), 33–57.
Moynihan, D., & Hawes, D. (2012). Responsiveness to reform values: The influence of the environment on performance information use. Public Administration Review, 72(1), 95–105.
Moynihan, D., & Landuyt, N. (2009). How do public organizations learn? Bridging cultural and structural perspectives. Public Administration Review, 69(6), 1097–1105.
Moynihan, D., & Pandey, S. K. (2010). The big question for performance management: Why do managers use performance information? Journal of Public Administration Research and Theory, 20(4), 849–866.
Moynihan, D., Pandey, S. K., & Wright, B. (2012). Prosocial values and performance management theory: The link between perceived social impact and performance information use. Governance, 25(3), 463–483.
Moynihan, D. P., Pandey, S. K., & Wright, B. E. (2012a). Setting the table: How transformational leadership fosters performance information use. Journal of Public Administration Research and Theory, 22(1), 143–164.
Moynihan, D. P., & Ingraham, P. W. (2004). Integrative leadership in the public sector: A model of performance-information use. Administration & Society, 36(4), 427–453.
Moynihan, D. P., & Kroll, A. (2015). Performance management routines that work? An early assessment of the GPRA Modernization Act. Public Administration Review, 76(2), 314–323.
Moynihan, D. P., & Lavertu, S. (2012). Does involvement in performance management routines encourage performance information use? Evaluating GPRA and PART. Public Administration Review, 72(4), 592–602.
Moynihan, D. P., & Pandey, S. K. (2004). Testing how management matters in an era of government by performance management. Journal of Public Administration Research and Theory, 15(3), 421–439.Find this resource:
Ni Putu, S. H. M., van Helden, G. J., & Tillema, S. (2007). Public sector performance Measurement in developing countries. Journal of Accounting & Organizational Change, 3(3), 192–208.Find this resource:
OECD. (1996). Performance management in government. Paris: Public Management Occasional Papers.
Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial spirit is transforming the public sector. New York: Plume.
Poister, T. H., Pasha, O. Q., & Edwards, L. H. (2013). Does performance management lead to better outcomes? Evidence from the US public transit industry. Public Administration Review, 73(4), 625–636.
Radin, B. A. (2006). Challenging the performance movement: Accountability, complexity, and democratic values. Washington, DC: Georgetown University Press.
Rainey, H. G. (2014). Understanding and managing public organizations (5th ed.). San Francisco: Jossey-Bass.
Rondinelli, D. (1983). Implementing decentralization programmes in Asia: A comparative analysis. Public Administration and Development, 3, 181–207.
Saliterer, I., & Korac, S. (2014). The discretionary use of performance information by different local government actors—Analysing and comparing the predictive power of three factor sets. International Review of Administrative Sciences, 80(3), 637–658.
Sarker, A. E. (2006). New public management in developing countries. International Journal of Public Sector Management, 19(2), 180–203.
Schachter, H. L. (2010). Objective and subjective performance measures: A note on terminology. Administration & Society, 42(5), 550–567.
Schwartz, R., & Deber, R. (2016). The performance measurement-management divide in public health. Health Policy, 120, 273–280.
Scott, R. W., & Davis, G. F. (2007). Organizations and organizing: Rational, natural, and open systems perspectives. Upper Saddle River, NJ: Pearson Prentice Hall.
Smith, P., & Goddard, M. (2002). Performance management and operational research: A marriage made in heaven? Journal of the Operational Research Society, 53(3), 247–255.
Sorber, A. (1996). Developing and using performance measurement: The Netherlands experience. In OECD (Ed.), Performance management in government (pp. 93–109). Paris: OECD/PUMA.
Speklé, R. F., & Verbeeten, F. H. M. (2014). The use of performance measurement systems in the public sector: Effects on performance. Management Accounting Research, 25, 131–146.
Star, S., Russ-Eft, D., Braverman, M. T., & Levine, R. (2016). Performance measurement and performance indicators: A literature review and a proposed model for practical adoption. Human Resource Development Review, 15(2), 1–31.
Taylor, J. (2009). Strengthening the link between performance measurement and decision making. Public Administration, 87(4), 853–871.
Taylor, J. (2011). Factors influencing the use of performance information for decision making in Australian state agencies. Public Administration, 89(4), 1316–1334.
Uddin, S., & Hopper, T. (2003). Accounting for privatisation in Bangladesh: Testing World Bank claims. Critical Perspectives on Accounting, 14, 739–774.
Viñuela, L., & Zoratto, L. (2015). Do performance agreements help improve service delivery? The experience of Brazilian states. Washington, DC: The World Bank. http://documents.worldbank.org/curated/en/902481468189567318/Do-performance-agreements-help-improve-service-delivery-the-experience-of-Brazilian-states
Walker, R. M., & Boyne, G. A. (2006). Public management reform and organizational performance: An empirical assessment of the U.K. Labour government’s public service improvement strategy. Journal of Policy Analysis and Management, 25(2), 371–393.
West, D. (2015). Performance management in the Australian public service: Where has it got to? Australian Journal of Public Administration, 74(1), 73–81.
Yang, K., & Hsieh, J. Y. (2007). Managerial effectiveness of government performance measurement: Testing a middle-range model. Public Administration Review, 67(5), 861–879.
Zients, J. (2009). Testimony before the Committee on Homeland Security and Governmental Affairs, United States Senate, September 29, 2009.
(2.) However, for some authors, a performance measurement system consists of the process of assessing progress toward the established objectives (Hatry, 1980; Agere & Jorm, 2000), which, in turn, implies carrying out both strategic planning (the first component) and data collection and analysis (the second component).
(3.) Some studies analyze performance management as organizational change from a neo-institutional perspective (see Brignall & Modell, 2000). This article, however, works within the rational approach.