The Joint Performance Section reports on performance indicators owned and managed separately by the Department of State and the U.S. Agency for International Development (USAID). Each indicator table shows the logo of the agency responsible for gathering, reporting, and validating the performance data for that indicator:
[Department of State logo] [USAID logo]
In addition, State and USAID report separately on agency-specific resources invested to achieve specific performance goals. Throughout the fiscal year, performance management analysts from the Department of State and USAID provide training, guidance, and support to planning coordinators from regional and functional bureaus in both agencies. These bureau planning coordinators work directly with senior leadership, program managers, and technical experts to review and evaluate performance measures, ensuring they best capture the President's highest foreign policy and foreign assistance priorities and focus on high-level outcomes. Furthermore, senior leaders and program managers use relevant performance data, including data from program evaluations, budget reviews, Program Assessment Rating Tool (PART) assessments, and quarterly results reporting, to inform budget and management decisions.
During FY 2006, the Department and USAID closely reviewed and significantly simplified the number of indicators used to track performance. A joint State-USAID team of performance analysts reviewed the indicator set published in the FY 2006 Joint Performance Plan and, in consultation with program managers, replaced weak indicators and imprecise targets with measures that better track progress toward our highest-level outcomes and strategic goals. As a result, the number of indicators against which the Department of State and USAID are reporting in the FY 2006 PAR was reduced from 286 to 129, of which 35 are managed by USAID and 94 are managed by the Department.
In accordance with OMB guidance and the Reports Consolidation Act of 2000, the performance data contained in the FY 2006 PAR are complete and reliable. Actual performance data are reported for every performance goal, and explanations for changes to performance measures are listed in an appendix. For many of its indicators, USAID estimated performance results based on preliminary data, as final year-end data were unavailable as of November 15, 2006. Where preliminary data were used, this is noted in the data source information for the indicator. Final USAID performance results will be reported after year-end data are received from field operating units later in the calendar year.
The Department and USAID used a rigorous results rating methodology to assess FY 2006 performance on the initiatives and programs under each strategic goal. First, program managers assigned a single rating for each performance measure to characterize the status of agency performance in relation to targets set for FY 2006. Performance analysts from State and USAID then evaluated each self-assessed rating and raised follow-up questions with program managers as appropriate. On occasion, initial ratings were changed after review to more accurately reflect results.
The following table shows the criteria and parameters of the Performance Results Rating System.
| Criteria | Significantly Below Target | Below Target | On Target | Above Target | Significantly Above Target |
|---|---|---|---|---|---|
| Results Against Targets | Results missed FY 2006 target by a significant margin | Results missed FY 2006 target by a slight margin | Results met FY 2006 target | Results slightly exceeded FY 2006 target | Results significantly exceeded FY 2006 target |
| Budget Status | Spent significantly over budget | Spent slightly over budget | Spent on budget | Spent slightly under budget | Spent significantly under budget |
| Timeliness | Missed most critical deadlines | Missed some critical deadlines | Met all critical deadlines | Met some critical deadlines early | Met most critical deadlines early |
| Progress Toward Outcomes | Results significantly compromise progress toward targeted outcomes | Results slightly compromise progress toward targeted outcomes | Results support progress toward targeted outcomes | Results slightly ahead of expected progress toward targeted outcomes | Results significantly ahead of expected progress toward targeted outcomes |
Program managers are held accountable for the performance results reported in the PAR. Credibility depends on the due diligence of program managers to validate and verify performance by choosing appropriate performance measures and ensuring the accuracy of reported results. The Department's Verification and Validation Reference Guide and USAID's Automated Directives System (www.usaid.gov/policy/ads/200/203.pdf) help program managers ascertain the quality, reliability, and validity of performance data. The National Foreign Affairs Training Center also uses these reference materials in courses on strategic and performance planning.
Assessing the reliability and completeness of performance data is critical to managing for results. Tables in the Joint Performance Section include information documenting the validation and verification of performance data.
Federal agencies' Inspectors General play a central role in the verification and validation of their agency's performance measures. To improve performance and implement the President's Management Agenda, the Office of the Inspector General (OIG) reviews performance measures in the course of its audits and evaluations. The OIG consults with program managers to identify key measures to be verified and validated as a complement to agency verification and validation efforts. The OIG gives priority to performance measures related to the President's Management Agenda initiatives, programs assessed by OMB's Program Assessment Rating Tool, and areas identified as serious management and performance challenges. In addition, independent external auditors perform tests to determine if internal controls exist and are followed to ensure that performance indicator results are accurate and complete, in compliance with the Government Performance and Results Act.