Performance Management

Bureau of Resource Management
April 21, 2011

Pie chart summarizing the performance ratings for FY 2010. Values are: Above target: 50 (37%). On target: 17 (13%). Improved, but target not met: 7 (5%). Below target: 27 (20%). Rating not available: 34 (25%). Total number of indicators: 135.

Source: FY 2010 Annual Performance Reports and FY 2012 Annual Performance Plans for both agencies. Performance ratings calculated from performance data provided at the time of publication.

Ratings are not available for indicators that are new or for which result data are not yet available.

Note: Percentages are rounded to the nearest whole number.
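The chart's percentages follow directly from the raw counts. As a minimal sketch (labels and counts taken from the chart summary above, with rounding to the nearest whole number as noted):

```python
# Performance-rating counts from the FY 2010 chart summary
counts = {
    "Above target": 50,
    "On target": 17,
    "Improved, but target not met": 7,
    "Below target": 27,
    "Rating not available": 34,
}

total = sum(counts.values())  # 135 indicators in total

# Percentages rounded to the nearest whole number, as in the chart
percentages = {label: round(n / total * 100) for label, n in counts.items()}

for label, n in counts.items():
    print(f"{label}: {n} ({percentages[label]}%)")
```

Run as written, this reproduces the published figures (37%, 13%, 5%, 20%, and 25%), which happen to sum to exactly 100 after rounding.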

The Department of State and USAID work together—along with other U.S. Government partners—to plan and execute programs that meet the global challenges of the 21st century. Performance management practices at the Department and USAID enable programs to achieve U.S. foreign policy outcomes and promote greater accountability to the American people. This multiphase process includes setting strategic goals and priorities, creating programs to achieve goals, monitoring program activities, measuring progress, using performance data to inform resource allocations, and communicating program results to stakeholders.

To communicate the progress achieved toward U.S. foreign policy goals in FY 2010, 135 representative performance indicators were published in the Department of State and USAID's Annual Performance Reports. FY 2010 results for each indicator were reviewed against previously established performance targets to determine performance ratings, which are summarized in the chart above. The following section highlights 26 of these illustrative indicators, organized by Strategic Goal and accompanied by an explanation of each goal and an analysis of the results achieved in FY 2010. For complete results, see the FY 2010 Annual Performance Reports for State and USAID.

Department of State

The Department of State uses performance management to measure organizational effectiveness, strengthen and inform decision-making, and improve programs and policies by linking them to specific performance targets and broader strategic goals. Managers at all levels use performance management best practices to assess and mitigate risks, benchmark program results, comply with legislative requirements, and adjust strategies in response to performance successes and shortcomings. A critical element of this approach is the use of performance indicators that track the Department's progress toward its annual targets. Many bureaus, such as the Bureau of Overseas Building Operations, conduct quarterly performance reviews to track their success in meeting the targets established for each indicator and to address any shortfalls that might necessitate a change in direction.

The Department's Annual Planning Cycle is built on the foundation of the Mission Strategic and Resource Plan and the Bureau Strategic and Resource Plan, which engage diplomatic Missions and Washington-based bureaus in outcome-oriented planning that drives policy and establishes programmatic direction by country, region, strategic goal, and strategic priority. A data quality assurance process has been instituted throughout the Department to ensure the integrity and reliability of the data reported for all performance indicators. Every two years, bureaus are required to complete a questionnaire assessing the quality of each of their indicators; these Data Quality Assessments are also required for all new indicators. Using an internal Data Quality Assessment tool, bureaus determine the overall quality of their indicators and submit a formal Data Quality Statement affirming the accuracy and reliability of the performance information provided.


Diagram showing USAID's four-part performance management process: (1) Plan and set goals; (2) Collect data and analyze results; (3) Use data for decision-making; and (4) Communicate results.

At USAID, the tools of assessing, learning, and information sharing are interrelated through the concept of performance management, which represents the agency's commitment to increase its accountability for delivering effective development outcomes. Performance management directly informs strategic planning, budget formulation, program design, and program implementation. USAID Missions and offices are responsible for establishing performance management plans and targets to measure progress toward the intended objectives of their programs. They are also responsible for collecting data and reporting progress on key indicators in their annual performance reports.

Establishing ambitious yet achievable performance targets is critical to effective performance management. USAID follows a multistep process to determine targets: examining the baseline value before U.S. Government intervention; evaluating historical trends and the rate of progress; reviewing expert judgments from technical authorities, research findings, and empirical evidence; studying the accomplishments of other programs with similar characteristics; identifying customer expectations; and projecting the progress to be accomplished over a five-year period with anticipated funds.

Data are only useful for performance management if the information collected is of high quality. As required by USAID's Automated Directives System, Chapter 203.3.5, all USAID Missions and offices must conduct data quality assessments for all performance data reported to Washington. USAID recognizes three data source categories: primary data (collected by USAID, or for which collection is funded by USAID), partner data (compiled by USAID implementing partners but collected from other sources), and third-party data (from other Government agencies or development organizations). Primary data undergo rigorous USAID assessments to ensure that they meet quality requirements. Third-party data do not go through the same USAID quality assessments, but sources are carefully chosen based on the organization's experience, expertise, credibility, and use of similar assessments.