Introduction

Senate Report 115-282 directed the Secretary of State and the USAID Administrator to initiate a joint review of the quality of program evaluations, including: (1) the extent to which the sustainability of programs is evaluated after assistance has ended; (2) the resources required to conduct evaluations of sustainability; and (3) the utilization of such evaluations in subsequent program design. The report also directed State and USAID to address the implementation of section 7034(m) of the act, which concerns how the Department and USAID ensure that implementing partners have effective procedures to collect and respond to feedback from beneficiaries, and how they regularly conduct oversight to ensure such feedback is collected and used. Finally, the report directed State and USAID to publish the findings on their respective websites; this report responds to that directive.

Overall Quality of Program Evaluations

STATE: Two recent, thorough reviews of State’s program evaluations are the State-commissioned Examination of Foreign Assistance Evaluation Efforts at the Department of State (May 2018) and GAO’s Foreign Assistance: Agencies Can Improve the Quality and Dissemination of Program Evaluations (GAO-17-316, March 2017). The GAO report, which reviewed 23 State evaluations, found 52 percent were of high or acceptable quality and 48 percent were of lower quality. State’s Examination of Foreign Assistance, however, analyzed 72 evaluation reports and found that the quality of reports had improved since the Department instituted its first evaluation policy in 2012. Consultants determined that nearly equal numbers of reports were of fair (34) or good (30) quality, and relatively few (8) were rated poor. The Examination found the reports were strong in methodology, drawing on the full range of data collection methods, and in readability and accessibility. Weak areas were distinguishing between findings and conclusions, providing actionable recommendations, and discussing gender and social effects.

Use

The Examination of Foreign Assistance Evaluation Efforts at the Department of State found that while only 5 percent of respondents said their bureau was using evaluations to a great extent, more than 91 percent said their bureaus used evaluation results to a good extent or to some extent. Uses mentioned during data collection included informing strategy or program design and improving program implementation.

Costs

The Department’s Evaluation Registry collects estimated and final costs for planned and completed evaluations. For the period January 1, 2016, to January 30, 2019, the Department completed 104 evaluations at a total cost of $29 million. The cost of individual evaluations ranged from $67,000 to $2.7 million, with an average cost of about $270,000. Setting aside the 35 very low-cost evaluations done through contributions to the Multilateral Organisation Performance Assessment Network (between $16,000 and $25,000 each) and a small number of evaluations close to or exceeding $1 million, about one third of evaluations cost between $300,000 and $600,000.

USAID: USAID is recognized as a leader in evaluation among federal agencies. USAID’s evaluation policy sets specific guidelines for how to conduct high-quality performance and impact evaluations. USAID has a publicly available evaluation toolkit, which provides step-by-step resources to staff designing and managing evaluations to ensure that they are of high quality. There is an active Evaluation Interest Group for peer-to-peer exchanges on evaluation best practices. USAID/Washington’s Evaluation Team in the Office of Learning, Evaluation and Research provides assistance to missions on planning and conducting evaluations. Sub-agency efforts have included a review of all education sector evaluations and the Middle East Bureau’s evaluation quality review protocol and tool.

GAO-17-316’s review of USAID program evaluations found that 75 percent were of either “high” or “acceptable” quality. The Lugar Center and Modernizing Foreign Assistance Network’s 2017 report, “From Evidence to Learning: Recommendations to Improve Foreign Assistance Evaluations,” found that over 80 percent of evaluations explicitly discussed the management purpose for undertaking the evaluation. USAID periodically conducts internal reviews of evaluation quality and capacity, including by working with USAID bureaus since 2016 on Bureau Evaluation Action Plans, which identify bureau strengths and challenges in evaluation quality and use. USAID is also commissioning an external study of the quality of and capacity for evaluation, to begin in 2020, partly in response to requirements of the Foundations for Evidence-Based Policymaking Act of 2018.

Overall, evaluation quality, usage, and spending have steadily increased. A recent review of evaluation utilization commissioned by USAID found that over 70 percent of evaluations were used to design or modify a USAID activity or project. One evaluation can serve multiple uses. Common uses for USAID evaluations include: informing decision-making, such as guidance on project implementation; improving understanding of a program or policy, even if there is no change to the program or policy; informing project or activity design; informing strategy and policy formulation; ensuring accountability by providing accessible information and demonstrating responsible use of resources; and better engaging stakeholders in a program or process.

In FY 2018, USAID Missions and Operating Units (OUs) reported completing 187 evaluations with resources totaling approximately $65 million, for an average evaluation cost of about $350,000. In addition, Missions/OUs are currently managing another 209 ongoing evaluations, many of which span more than one year, with total ongoing evaluation budgets estimated to reach almost $121 million. USAID’s investment in program evaluation has continued to grow in recent years, with a total of $164,985,574 spent on completed evaluations between FY 2016 and FY 2018.

Evaluation of Sustainability

STATE: State bureaus are encouraged to evaluate sustainability, one of five types of evaluations outlined in Department guidance. Bureaus at the Department assess sustainability through formal evaluations but also plan for sustainability by requiring details on achieving it in project plans. For example, the Secretary’s Global AIDS Coordinator has a sustainability action plan for the President’s Emergency Plan for AIDS Relief (PEPFAR) that includes planning for country ownership of initiatives and transition of management when epidemics are under control or at other agreed-upon points.

Bureaus implement evaluations of sustainability from one to five years after a project ends. For instance, the Bureau of Democracy, Human Rights, and Labor conducts portfolio-level evaluations that include dozens of awards with varying end dates to understand what sustainability looks like at different stages after grants have closed. Usually at least a year must elapse before some measure of sustainability can be judged, but after five years it may be difficult to locate participants. Generally, the Department defines sustainability at five years after termination as the existence of an ongoing institution. In another example, the Bureau of Population, Refugees, and Migration (PRM) prioritizes durable solutions (that is, sustainability) for refugees. For example:

  • An evaluation of capacity building of local governments in Colombia (March 2016) assessed the impact and sustainability of capacity-building programs to improve municipal authorities’ assistance to internally displaced persons (IDPs). Findings from the evaluation provided guidance and indicators to consider in reviewing and writing proposals that include government capacity-building components; monitoring capacity-building programming in the field; and engaging host governments, multilateral partners, and NGOs on building the capacity of governments to assist IDPs. The findings also prompted PRM to revise instructions to NGOs applying for funding for local government capacity-building programs.
  • Evaluation of Effectiveness of Assistance to IDPs and Preparing for Eventual Transition from Relief to Development in Ukraine (April 2017) asked whether assistance provided to date supported local integration over time. It also analyzed factors that could influence the long-term effectiveness of IDP integration in Ukraine. PRM incorporated findings into its FY 2018 Ukraine strategic planning document.
  • Evaluating the Effectiveness of Shelter, Health, and Education Programs for Iraqi and Syrian Refugees in Lebanon, Turkey and Jordan (March 2017) asked what happened when rental agreements ended, and the implications for refugee assistance. PRM shared the report findings immediately with UNHCR HQ and NGO partners in Turkey, Lebanon, and Jordan, where they contributed to changes in programming. The final report included tools to inform shelter, health, and education program proposal reviews, field monitoring, and NGO guidelines. PRM shared the tools with program officers, included them in bureau training on monitoring and evaluation, and used the report’s recommendations to update PRM’s FY 2018 NGO Guidelines.

Use

Evaluations of sustainability inform programming, provide guidance and indicators, assist in planning and strategy, and provide input to tools and instructions.  In addition, they help determine what sustainability looks like at different stages after programs end.

Costs

Costs for evaluations that assess sustainability depend on scope and methodology. In addition, if evaluating sustainability was not planned from the outset, data collection can be more expensive. The same is true if the evaluation takes place more than five years after the program ends. The costs of evaluations that include sustainability ranged from $232,000 to $1.7 million and averaged $761,441. Because of the large range between low and high costs, the median, $477,000, may be more reflective of costs in general. This figure falls within the $300,000 to $600,000 range cited above for evaluations as a whole.

USAID: Ex-post evaluations at USAID are used to assess the sustained impact or outcomes of completed programs and, in turn, to inform the design of future projects and activities. A current review of ex-post evaluations has identified approximately 20 evaluations conducted between 2014 and 2019 that look at a project or activity’s sustainability one to five years after completion. All except one are performance evaluations that employ qualitative or mixed qualitative and quantitative methods to examine post-project sustainability. In one case, USAID commissioned a rigorous ex-post evaluation, the Long-term Impact Evaluation of the Malawi Wellness and Agriculture for Life Advancement program, which employed a randomized controlled trial to estimate long-term impact.

With regard to sectoral areas of emphasis, USAID’s ex-post evaluations have looked at the sustainability of interventions in water security, sanitation, and hygiene; resilience; agriculture-led growth; nutrition; environment; energy and infrastructure; democracy, rights, and governance (including civil society strengthening); economics and market development (domestic resource mobilization and fiscal reform); conflict prevention and stabilization; and early childhood education. Ex-post evaluations have been undertaken in each of USAID’s regional areas of operation, with a concentration in Sub-Saharan Africa, South and East Asia, and Afghanistan. The average cost of ex-post evaluations at USAID is estimated at approximately $325,000, with a range from $112,000 to $1.7 million.

Illustrative examples of recent ex-post evaluations and how they were used follow:

  • USAID’s water sector has completed a series of ex-post evaluations. They show mixed results, with impacts not sustained in Madagascar but sustained in Indonesia. Lessons from Madagascar, where there were challenges with the sustainability of Community-Led Total Sanitation (CLTS), are being built into USAID’s continuing investments in this key sector. Adaptations include more targeted, intermittent follow-up visits by technical specialists over a longer period of time to troubleshoot emerging local systems challenges and help sustain community and local government capacities, commitment, functional relationships, and productive investments.[1]
  • An evaluation of an alternative development program in Colombia found sustained organizations, infrastructure works, and productive activities throughout the country. This 2014 ex-post evaluation has informed new USAID activities that seek to promote sustained outcomes in the areas of resilience and agriculture-led growth, post-conflict stabilization, and improved democracy, rights, and governance.[2]
  • An evaluation of a post-conflict program in Uganda found that the program’s success was in large part due to its ability to adapt to changing beneficiary needs. The evaluation of this multifaceted activity, which ran from 2006 to 2011, produced two key findings that are informing the design of new activities: (1) intentionally designing a “learning” approach and mindset allows programs to adapt more effectively to evolving conditions in a post-conflict environment; and (2) geographic focus matters when reacting to specific challenges facing specific populations.[3]
  • An evaluation of a water and sanitation program in Indonesia found not just sustained but increasing impacts after the program ended. This ex-post evaluation has been used to refine work plans under the current flagship water, sanitation, and hygiene (WASH) activity. It has also informed the development of USAID’s forthcoming Water and Development Implementation Plan, which places a new and greater emphasis on strengthening sector governance and finance.[4]
  • An evaluation of a fiscal policy reform program in El Salvador found sustained impact on finance ministry operations and tax collection. The critical lesson from this evaluation is that improved revenue compliance will, in some contexts, benefit from interventions that achieve a positive and self-reinforcing dynamic between legal and administrative fiscal reforms and IT know-how and technologies. USAID’s pivot toward Domestic Resource Mobilization (DRM), as part of the Agency’s vision for fostering country self-reliance, is informed in part by this evaluation.[5]

Procedures for Incorporating Local Data and Feedback

STATE: The Department of State has provided guidance to all bureaus on beneficiary feedback through a white paper that discusses what beneficiary feedback is and approaches to implementing it. Bureaus have responded in practice by requiring implementing partners to collect and respond to feedback from beneficiaries during the course of projects.

Further guidelines and procedures that bureaus have developed to address accountability to the populations at the heart of development assistance include:

  • Requiring implementing partners to describe how they will include beneficiary feedback in all funding proposal narratives, with progress outlined in quarterly and final reports;
  • Requiring frameworks that outline implementing partners’ approaches to the collection and use of beneficiary feedback during the program design and implementation phases;
  • Funding and researching strategies to improve the use of beneficiary feedback, such as that done by the International Rescue Committee;
  • Conducting evaluations specifically to obtain and report on program feedback collected directly from beneficiaries;
  • Monitoring, through quarterly reports and workshops or interviews, the frameworks for collecting beneficiary feedback and the changes made as a result of it.

In addition, PRM has published on its public website guidelines for accountability to affected populations in the design, implementation, and evaluation of projects. The Department also funds the Sphere Project, which produces and promotes a handbook on the Core Humanitarian Standard on Quality and Accountability. The handbook endorses commitments to involving beneficiaries in decisions that affect them and to their ability to voice complaints. PRM co-leads the Grand Bargain work stream, which serves as a platform to discuss beneficiary feedback initiatives and events and to incentivize good practices.

USAID: Demonstrating respect for USAID beneficiaries and elevating the voices of the marginalized and vulnerable populations we serve are core values of USAID. During project and activity design, USAID policy encourages operating units to consult with key stakeholders and potential beneficiaries. During implementation of USAID activities, implementing partners must develop a monitoring, evaluation, and learning (MEL) plan to be approved by USAID. Collecting feedback from marginalized groups and other beneficiaries is a practice supported by USAID guidance for activity MEL planning. Many USAID evaluations include beneficiary feedback. For example, impact evaluations usually include household or individual surveys to gauge the impact of the program on beneficiaries’ lives, and performance evaluations may use interviews or focus groups to gather information about how a program was implemented and perceived from the beneficiary perspective.

Since 2018, USAID guidance to organizations applying for funds for humanitarian programs requires these organizations to submit both a monitoring and evaluation plan and an accountability to affected populations plan.  In these plans, implementing partners must address:

  • How the affected population was involved in the program design;
  • What mechanisms are in place to receive beneficiary feedback;
  • How beneficiary feedback will be incorporated into program implementation;
  • How the implementing partner will ensure that feedback and information mechanisms are safe and accessible, and how the implementing partner will respond to any critical or sensitive protection issues that arise.

Similarly, prospective applicants applying for USAID emergency food assistance awards are required to describe how they plan to incorporate local feedback into their activity and how the activity will respond to feedback provided.

USAID also promotes and disseminates best practices and innovations for collecting beneficiary feedback through its ProgramNet and LearningLab platforms. These include:

  • A case study describing Catholic Relief Services/DRC’s use of a toll-free beneficiary feedback hotline through which participants could share their input, concerns, and complaints about CRS activities in the country.
  • Guidance from USAID’s Global Development Lab on collecting feedback from beneficiaries using digital technologies such as tablets and SMS.

USAID continues to adjust its policy and guidance to best support monitoring and evaluation practices. Further discussions on improving beneficiary feedback collection and use are currently underway at USAID with additional actions on this issue expected in the new fiscal year.

[1] https://www.globalwaters.org/resources/ExPostEvaluations 

[2] https://pdf.usaid.gov/pdf_docs/PA00JRMK.pdf 

[3] https://pdf.usaid.gov/pdf_docs/PA00K45N.pdf 

[4] https://pdf.usaid.gov/pdf_docs/PA00N3F2.pdf 

[5] https://pdf.usaid.gov/pdf_docs/PA00SV5J.pdf 
