
Video Message from Secretary Clinton and Remarks by USAID's Ruth Levine


Remarks
Secretary of State Hillary Rodham Clinton; Ruth Levine, USAID Senior Advisor for Evaluation
U.S. Department of State Third Annual Conference on Program Evaluation
Washington, DC
June 9, 2010


MS. CABELL: Good morning, everyone. My name is Stephanie Cabell, and I'm the conference coordinator. It's also my pleasure and honor to welcome you to this year's conference on program evaluation, and in particular to the plenary session this morning.

Secretary Clinton, as many of you know, is away on business and is unable to be with us today, but she was able to record her greetings and remarks for the conference. So we'd like to show that to you right now.

[Video.]

SECRETARY CLINTON: I am delighted to have this opportunity to thank you for the important work you are doing to help us create a more efficient, effective and successful foreign policy. Program evaluation and the use of performance measurements are critical to the success of our foreign policy goals.

The best practices shared at this conference will help you manage your programs more effectively, develop better evaluation techniques and help advance U.S. interests around the globe. I am proud that the State Department continues to sponsor this conference. We have already seen the progress achieved in making rigorous program evaluation a central component of our work as we explain our goals and accomplishments to the American people and the world.

Our goals include preventing instability and conflict, using our diplomatic and development tools to promote peace and security, and making the need for military action more remote, if perhaps not eliminating it altogether. It is far cheaper to pay for civilian efforts up front than to pay for any war over the long, medium or even short term. Having solid information about the effectiveness of these efforts is critical.

Your work is also essential as we connect more effectively with publics in other countries to engage their business communities and reach out to civil society. Data that show the effectiveness of our outreach help us make better decisions about how to craft our messages to these publics and can even help us improve our management of the actual programs by which we hope to achieve our foreign policy goals.

I hope your conversations about lessons learned and best practices will inform and strengthen the work you do every day to help promote a safer, more prosperous and peaceful world. Thank you very much.

PETER DAVIS: It is really a pleasure and an honor for me to introduce Dr. Ruth Levine. She is the USAID Senior Advisor for Evaluation. But yesterday, Raj Shah, the Administrator of USAID, announced -- I'm going to quote him -- "I am also pleased to announce the appointment of Ruth Levine as Deputy Assistant Administrator of the new USAID Bureau for Policy, Planning and Learning.

For the interim, Ruth will also serve as the Director of the Office of Learning, Evaluation and Research." That's the evaluation office. While new to USAID, Ruth is well known to many in the agency for her earlier work to elevate evaluation with donor agencies and for the development of innovative approaches to evaluation, particularly in health and education.

She has field experience in every region of the world, and she really knows how to translate ideas into real progress. She has already started, with her colleagues in F and in State, a rather rigorous process of reforming and reinvigorating evaluation as part of the decisionmaking process, as the Secretary was just referring to.

Previously, Dr. Levine was a senior fellow and vice president at the Center for Global Development. She has also worked at the World Bank and the Urban Institute. She holds a Ph.D. from Johns Hopkins. Most importantly, I commend her to you. She really is an expert on evaluation and its use as a guide to improve programs. I think she's going to have a lot of useful and important things to say to us.

Please welcome Ruth Levine.

(Applause.)

DR. LEVINE: Thank you so much. It's really a great pleasure and I appreciate having been invited to speak to this group in particular and contribute to a conversation that I know you're having about how to use evaluation for the purpose of making better decisions about the programs that are supported by the U.S. Government in a broad range of different areas that are designed and managed by the State Department and by USAID. So I thank Peter and Stephanie for having invited me.

So I'm going to talk for a few minutes today, first sharing a story about an evaluation that was particularly influential and that I think has many of the elements we're seeking when we do evaluation. Then I'm going to talk about the elements of programs that I think are crucial to make them able to be evaluated and where you can find value in doing evaluation -- putting the value in evaluation -- talk a little bit about why it's hard, what some of the obstacles to evaluation are, and maybe some of those will resonate with your own experience, and then close by talking about what we're doing within the USAID context over the next several months and years to revitalize the agency's capacity to do evaluation and learning.

So starting with the story, let me first ask: How many of you are familiar with the experience of the Progresa anti-poverty program in Mexico? So some, a smattering, and some not. So let me briefly give you an overview of it, and again I'm doing this because it provides some insights about some of the core benefits and elements that make for good evaluation.

As I'm sure you all know, one of the central, perhaps the central development challenge is what's referred to as the intergenerational transmission of poverty, the fact that children in poor households themselves are deprived of the good health and the quality education that will be able to help them exit from poverty so they end up being poor and their children end up being poor, and on it goes. So that's clearly a central challenge for development programs to address.

Historically, in Mexico, the way that challenge had been addressed was very much through a series of what you can think of as supply-side efforts on the part of the government -- the provision of milk and other food subsidies, the development of rural health infrastructure -- very much a kind of we give you a particular set of inputs and expect improvement in health and education outcomes. Many of the benefits from those programs, it was seen in survey after survey, actually leaked to non-poor households, or in some cases were used more by better-off households than by those in the lower 20 to 40 percent of the income distribution.

Social programs in Mexico were also characterized by a six-year cycle. So a new president would be elected -- out with the old, in with a new set of programs, often with quite elaborate acronyms, a whole new set of investments, construction, hiring, serving of particularly preferred regions of the country. You can see the dynamic here, I suspect. And then after six years a new president would be elected, typically from the same party. It was Mexico after all. But they would quite rapidly shift to a different set of supply-side programs.

In 1997, under the leadership of then relatively new Mexican President Ernesto Zedillo, there was an effort to rethink how social programs were provided and to really address this fundamental question of the intergenerational transmission of poverty. There was at that time a kind of cohort of quite visionary social planners and leaders across the government.

And one of them in particular, Santiago Levy, who is now, as you may know, the chief economist at the Inter-American Development Bank -- just down the street, and he might be a good speaker to bring in some time -- thought about what the new set of programs should look like. What he introduced was based largely on social science research that had been done around demand-side barriers to the use of health and education services, and on the fact that women, moms, disproportionately use additional income in support of children's better health and education outcomes.

They developed a program of what came to be called conditional cash transfers -- this is the Progresa program -- where moms, poor moms identified by a set of eligibility criteria, receive on a monthly basis a pretty significant bump in their income, a transfer, on the condition that their children regularly attend school and that they go to local health facilities for well-child and other kinds of health services. So this was introduced and called a conditional cash transfer program.

And in what was really perhaps one of the most visionary parts of the program, they decided that they would from the outset build in an evaluation strategy so that they would know what the impact of the program was on children's health, on adult health, because there was a piece of that involved, and on children's attendance and completion of school. So they were able to see from the beginning that they wanted to know the performance of the program in really quite fundamental ways.

And they also knew that they couldn't instantly scale up the whole program. They couldn't implement it in all villages simultaneously. What they did was roll it out in a way that permitted them to make comparisons between those who had entered the program first and those who entered later. So, of 506 villages, they randomized the entry instead of saying, for example, we're going to start in the minister of education's hometown.

Instead of doing that, which often happens in development programs, what they did was say we're going to roll the dice and implement in a randomized fashion, selecting by chance the villages that entered. That permitted, after 18 months, a statistically valid comparison between the outcomes in the control population and those in the population that had been part of the program. What they found was by and large very favorable results, particularly for education and particularly for girls' schooling, and somewhat more modest but positive results for the health indicators, nutrition and immunization.
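To make that design concrete, here is a minimal sketch, in Python, of how a randomized phase-in and the resulting treatment-control comparison might look. Everything in it is simulated purely for illustration: only the village count comes from the story above, while the outcome variable, the assumed effect size, and the 50/50 split are hypothetical.

# Minimal sketch of a randomized phase-in and a simple treatment/control
# comparison. All data are simulated for illustration; the real Progresa
# evaluation used household survey data from its study localities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_villages = 506
villages = np.arange(n_villages)

# Randomize which villages enter the program first (treatment) and which
# enter later (serving as controls during the evaluation window).
treated_ids = rng.choice(villages, size=n_villages // 2, replace=False)
is_treated = np.isin(villages, treated_ids)

# Simulated outcome, e.g. a school enrollment rate per village, with a
# hypothetical positive program effect added for treated villages.
enrollment = rng.normal(loc=0.70, scale=0.05, size=n_villages)
enrollment[is_treated] += 0.04  # assumed effect size, purely illustrative

# Difference in means and a two-sample t-test between treated and control.
effect = enrollment[is_treated].mean() - enrollment[~is_treated].mean()
t_stat, p_value = stats.ttest_ind(enrollment[is_treated], enrollment[~is_treated])
print(f"Estimated effect: {effect:.3f} (p = {p_value:.4f})")

Because assignment is random, the difference in means is an unbiased estimate of the program's effect on the measured outcome, which is what made the 18-month comparison statistically valid.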

And here's the last key point I will make about this evaluation. This evaluation was very public. It was public not only nationally, but internationally, because they had some very high-class academics, primarily from the U.S., assisting with the technical aspects of the evaluation, and they, through presentations and publications, made it very widely known what the outcomes were. This meant that when the next administration came in, it was impossible to back away from what was demonstrably a successful program. So this broke the cycle -- perhaps the cycle of the intergenerational transmission of poverty.

We have yet to know that, because it hasn't been long enough, but it certainly broke the cycle of every six years generating a new, sometimes patronage-based set of social programs. So that's kind of a once-over-lightly version of the Progresa story. If you're interested, it is perhaps the most well-documented evaluation story in social programs ever, so there's plenty of material to look at.

And let me just highlight some of the elements of that experience that I think can inform how we think about evaluation. One is that there was a clear understanding of what the decisions would be in the future that could be informed by good evaluation. So those were, for example, about how big the subsidies should be, that cash transfer, whether the kinds of conditions that were put on the transfer were going to be effective, and they certainly learned quite a bit about that.

They learned through the evaluation about leakage to non-poor households, of which there wasn't much. The targeting was actually quite effective. And they learned about the impact, which helped inform the decision about whether to continue the program. So there were a set of decisions about program design, and even about continuation, funding and scale-up of the program, that they anticipated and that could be informed by the findings from the evaluation.

Other elements that were essential were a clear objective. They knew what they were trying to do with this program. The ultimate goal was breaking the intergenerational cycle of poverty, which they couldn't measure in that timeframe. But they had clear measures around health and education outcomes.

In the absence of that, if the goal had just been, vaguely, to improve the lives of poor rural people, it would have been extremely difficult to measure. They also had a development hypothesis. Underlying the program was a logic behind why they were providing conditional cash transfers, why they were directing them to the women in the household, and why they had chosen the conditions they did. And they had a set of valid measurement tools that they could use, a strong design. They had the good fortune, and to a large extent by intent, a design that was very robust statistically.

But even if they hadn't been able to do that, the important element in a successful evaluation is to use as strong measurement methods as possible. Again, the Progresa experience was particularly fortunate in this regard, in large measure because they thought about evaluation from the outset of the design.

Let me turn to why this is the exception rather than the rule; that is, why evaluation is hard. It's really very challenging in all our work. When you ask people in the development business -- say, implementing partners or USAID staff or other development professionals -- why don't you do more evaluation, the typical response is, well, we don't have money for it. Right? So whenever you hear we don't have money for it, the question is, well, why is there not sufficient priority placed on what is arguably an essential part of good stewardship of public resources?

So why isn't there enough money devoted to evaluation? Well, you can look at it in a number of different ways. In work that colleagues of mine at the Center for Global Development and I did, now a number of years ago, we broke it down into three reasons, which act in concert, I think, in a kind of conspiracy against doing evaluation, and particularly against evaluations that generate generalizable knowledge.

The first is that the findings from evaluation are essentially a public good. It's new knowledge. It's new knowledge that can be used. If I do an evaluation of a girls' scholarship program in Bangladesh and I learn from it, well, it benefits that program in Bangladesh. But if I share that information, it also benefits those who are working in Senegal to design a girls' scholarship program. It also benefits those in Paraguay who are designing a girls' scholarship program -- not as something to replicate, but it contributes to their understanding of the problem and the potential solutions.

So that's a public good, and therefore any individual program or funder of a program doesn't have sufficient incentive to make the full investment to generate the knowledge that will benefit many others. The second reason why there is typically not very much money invested in evaluation is certainly an important one, and that is that there's a tension, in a sense, between doing and learning -- between implementing and evaluating and then feeding those results back.

That tension is partly about money. That is, if you have to decide on any given day whether to immunize a few more kids or invest in a bunch of consultants and data collectors to figure out how well or poorly a different strategy might work to reach kids with immunization, on any given day you might choose to just immunize more kids. And again and again we see this tradeoff of wanting to expand the program, wanting to sustain program funding for the direct benefits, at the expense of investing in the longer-term goal of learning from those experiences.

But there are also conflicts or tensions between implementation and learning in terms of the timing of when you start up a program. If you're all set to go with your program design and somebody says, wait a minute, wait a minute, we have to do a baseline, we have to design the evaluation, we have to think ahead about this -- many times what those who are responsible for getting the program off the ground will think is, are you kidding me? I worked so hard to get this in place. I have ministries lined up to cut the ribbon. I have the press release ready to go.

I am not going to wait for any fool baseline. So I think that tension is certainly very real. Partly it's about money. Partly it's about timing. And I think there's another part of the tension between doing and learning which is a little bit more subtle, and that is that often those who have designed programs and who have started to get them off the ground are true believers. They really are convinced, because of what they've seen, because of their professional expertise, and because of their value system, that this is the way to go.

You find this in a very pronounced way in much of the NGO community, and I think across the spectrum of development professionals. For example -- I'll pick on girls' education again -- people who believe in girls' scholarship programs often really believe in them. And when you have a kind of professional commitment to a particular approach, it seems like a waste of time and perhaps even a risk to step back and say, well, you know, I think this is going to work, I think this is worth my energy and your money, but I'm not really sure, and I think we had better find out whether it really works or doesn't.

So it's a challenge, I think, for people to be able to step away and, in a sense, be candid about what they do and don't know about the approaches, the sectors, the particular strategies that they have committed their intellectual and professional effort to. So there's the public goods problem. There's the tension between doing and learning. And then, I think, there is a third factor in many bureaucracies.

I'm new to the U.S. Government, so I'm not going to make any statements about the U.S. Government yet. But I think that in many bureaucratic environments there tends to be a kind of aversion to the risk that's associated with learning and a kind of preference for telling success stories, and that preference is often reinforced if you get your budget based on the number of success stories you're able to get into the public domain.

If you get slammed in the budget process when you reveal that a program hasn't really worked out as you'd anticipated, then that preference for telling success stories is reinforced even more strongly. So all of those factors, and probably more that are in thought bubbles above your head, conspire to limit the amount of money that's available for evaluation, and also, I think, limit the professional rewards to those who undertake evaluation and seek to make transparent the findings from those evaluations.

And I suspect there are people in this room who could tell some more stories about times when they sought to do evaluation and had the rug pulled out from under them, or when they conducted an evaluation that generated findings that could have been very useful for future decisionmaking, and those reports were buried in the e-mail archives of the program manager or those above that person.

So that's a bit of a gloomy story, but now we're going to turn to the good news, which is that I think there is really a huge opportunity right now to revitalize and expand the useful role of evaluation within the broad range of foreign assistance programs that the U.S. Government supports, and this is very much in keeping with an international dialogue and movement around improved evaluation in the field of development and foreign affairs.

In our own country, what we see as a sign of that new moment, I think, is the perspective of the Obama Administration around evidence-based decisionmaking, around genuine learning across all agencies of government. This is something that was part of the inaugural address and has been repeated over and over again, almost like a kind of mantra about evidence-based decisionmaking.

So there's a very high level of support. You heard Secretary Clinton, and I think that we all take on board as a very serious commitment her interest in evaluating, to the extent possible, the array of programs that the U.S. Government supports in this area. Very importantly, there are leaders in developing countries who are increasingly focused on this as a key instrument of sound public policymaking.

Mexico is certainly a stellar example from the Latin American region, as is Brazil. Chile has long been a standout. In India, about a year and a half ago, the planning commission established an independent body to conduct evaluations of social programs in that country. China is stepping out in the area of evaluation. And there are demands -- there are calls from many of the partners with whom we work -- to do better evaluation of the programs funded not only by the donor community, but also by national governments, again as a key instrument of sound public policymaking and responsiveness to demands from citizens.

And our own USAID Administrator speaks frequently about the value of evaluation and learning, about this function as a key priority, and about really rebuilding and revitalizing a function within USAID that used to be very strong. USAID, as many of you know, used to be at the forefront of development evaluation and can regain that ground now.

So let me tell you a little bit about what we're doing on that front at USAID. And I'd be interested in hearing later what the parallel activities are within the State-managed programs. So this is within what Peter was mentioning, the newly established Bureau for Policy, Planning and Learning; the announcement went out a couple of days ago. Many of you may have seen it.

So there are five offices within that bureau, including policy and strategic planning; learning, evaluation and research; science and technology; and donor engagement. This work is within the office of learning, evaluation and research. And I have to say we are extremely lucky. Many of the other offices are starting by pulling together activities that were sort of disparate within the organization and building some new ones.

In this case, for learning, evaluation and research, we are extremely lucky, because Gerry Britan, who I see over here, started the process a couple of years ago now of really rekindling the efforts around evaluation. And we're very lucky because in many of the technical and regional bureaus there are really important and strong evaluation efforts already underway -- not generalized across the agency, but great examples.

So let me tell you what we're up to. These are not quite in sequential order. First is that we are developing an evaluation and learning policy that will help us set priorities for where we put our evaluation resources. It's not five percent across the board. It's not every program needs to do a randomized evaluation. It's none of those things. Rather, it's how do you set priorities by thinking about what the opportunity is for the findings from the evaluations to feed into important decisions within the agency.

Let me give you a sense of what important means in this context. It means: Are these programs that have an innovative element that will inform future program design? Are these programs that we explicitly are starting small because we think they have the potential to be scaled up over time? Are these programs that are important because the findings from the evaluation will shape and inform the public spending of our partner governments?

So the evaluation policy will spell out how to set priorities for the use of evaluation resources, how to map methods to different types of questions. Yes, there are questions about the ultimate impact of programs. There are also questions about the implementation of programs, about better and less successful ways to implement programs. And those are equally, and sometimes more, important than those around the ultimate impact, and they too require strong methods.

The evaluation policy will attempt to get us all on the same page about a consistent terminology, so that when we talk about the different types of evaluation we're all speaking the same language. And the evaluation and learning policy will address the question of how to close the feedback loop, how to make sure that the value of evaluation is realized by informing decisions partly about resource allocation, but also about the design and implementation of future programs.

Second is that we are going to develop an evaluation and learning agenda that identifies, on a rolling basis, the key areas in which we are going to make a special effort to generate a body of evidence. So this requires thinking ahead about what the key questions are. Evaluations, as you know, take 18 months, 24 months, sometimes four years to generate findings. So to make them relevant and useful you have to think ahead about what questions are going to be asked in the future. This seems in some ways like a kind of hopeless crystal ball exercise.

How do we know what even a different set of stakeholders are going to be thinking about in the future? But I would posit that some of those questions are predictable if we put our mind to it. So, for example, in the field that I know best, global health, we could easily have predicted three years ago, four years ago, that one of the central questions we would all want the answer to is what are the effective means of taking money that's dedicated to a particular disease and strengthening a broader set of health services with the use of those resources.

We are desperate for that information right now, and we could easily have predicted that we would need it right about now. So I think that in each of your fields it is likely that you can look at where money is being spent now, look at the development and foreign policy challenges that we have now and on the predictable horizon and figure out what some, at least of the key questions, will be. And that's a way to focus evaluation questions.

So we're going to be developing an evaluation and learning agenda, again, on a rolling basis. It won't be the sum total of where we focus our resources, but it will shape our thinking about where the special efforts need to be made. We're also doing a series of evidence summits, which seek to have a sort of creative, structured dialogue between development practitioners who have real-world questions -- where do I put the next dollar -- and those who have done evaluation and research that can inform those questions in particular subject areas.

So the first one, and I have to say I never would have believed this three months ago when I entered the agency, the first one is going to be on counter-insurgency programs and what the research is telling us about how development investments can best support counter-insurgency efforts, clearly an important topic. We are developing a set of tools that match the real life business processes in the agency. Well, what does that mean?

That means that it's really lovely to be able to say, well, we need more and better evaluation. And it's equally lovely to hear the response from many of the development professionals in USAID who say great, great, this is absolutely what we should be doing, really excited, we used to do it, we don't do it so much anymore, I'm excited. Then they go back to their desk. And what's on their to-do list? Okay. What's on their to-do list is something like write the scope of work for the team that's going to write the scope of work that's going to become part of the request for proposal that I'm going to have to get the technical team together to review before the contract is awarded.

So that's what's really on their to-do list over the next six months. So at each of those steps and all the microsteps in between, there are opportunities for strong thinking about how to build evaluation in and how to ensure that it's conducted in an appropriate way. So when they're writing the scope of work for the team that's going to write the scope of work they need to think about not just having an evaluation and monitoring specialist on the team, but what type of evaluation and monitoring specialist, and what is that person supposed to do?

That decision at that point can make all the difference. That's where the dominos start to fall, or start to stand up. I don't know. Did dominos ever stand up? So I could go on and on, which I will not, about these tools, but I'm excited about thinking through what it takes to translate the big ambition of better evaluation into the daily and weekly tasks that real people do within the agency.

We are also developing -- and I see some of my colleagues here, like Virginia Lamprecht, who's taking the lead on this -- a set of training programs that will build some core skill sets, again the things that the folks in USAID need so that they can do things like write the scope of work to write the scope of work for the program, and even more. So, in a graduated way, we will build a set of skills around evaluation and learning, and reasoning from evidence.

And, finally, we're going to increase, even more than has already been the case -- and this is well underway -- engagement with others. We have a lot to learn from the academic community and from other bilaterals who have, over the past few years, really charged ahead in terms of the evaluation work that they're doing.

We have a lot to learn from the evaluation networks in Africa, in Latin America and in other parts of the world, and from NGOs like Pratham in India that have integrated evaluation into their very DNA. So we have a lot to learn from engagement with others, and we look forward to that, including lots of engagement with the State Department.

So now, do I have a couple more minutes? So let me take a couple minutes and say well how are we going to get all this done. And I like to talk about this, because it gives me some inspiration that we actually will get it done. So there are kind of three pieces that we've thought about, and we'll see if they work.

So first has to do with clarity. Part of the way we're going to get this done, and this echoes what I was just saying to some extent, is setting out very clear expectations and processes, again, trying to avoid lots of vagueness, lots of aspiration without boiling it down into what people need to do. To some extent a little bit of that, I think, is going to have to be arbitrary at first.

So we might put some numerical targets on how many evaluations of different types are done. We'll see about that. We need to articulate a process, a clear process around how to make judgment calls about whether and how intensively to evaluate a particular kind of program. Again, so that people aren't too much left to their own devices in an area where at a minimum folks at USAID are largely out of the habit.

And we are going to try, in a systematic way, to look at the good evaluation practices across the agency. And, as I said earlier, there certainly are some. There are some stand-out practices in democracy and governance, in global health, in microfinance and other aspects of the economic growth programs, in certain missions, and in regions. We are going to seek out those who are innovating and succeeding around evaluation, and try to understand how they've done it within the confines of the bureaucracy and within the budget limitations, which are well known to all.

How do they manage to pull together the resources and the political space to do good and serious evaluation and learning? And we're going to try to figure out how to generalize from them, inventing as little new stuff as possible. So that's being clear, in sort of steering the staff and implementing partners.

Second is motivating the staff and implementing partners. Part of that is about, I think, tapping into the innate professionalism, curiosity and interest in increasing development effectiveness, which is a universal trait among the development professionals at USAID. And then I think there's also space to foster a bit of healthy competition. As I've learned in my now three months at USAID, there is competition between offices, between bureaus. It's a natural function of an environment where there's a pie, a budgetary pie, to split up, and where there are professional interests. To some extent that can impede learning, but I think we can try to leverage it in positive ways.

So, just to give a simple-minded example, what if, after we had our training program developed, we just exposed, we just made transparent, what proportion of staff in different offices had completed the set of training modules? Just that, just share that information. It's my guess -- and I guess we'll see if it's borne out -- that that would trigger some different behavior on the part of managers than would otherwise occur.
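As a rough illustration of that kind of transparency -- the office names and counts below are entirely made up -- the tabulation itself is nothing more than a per-office proportion:

# Hypothetical counts of staff who have completed the evaluation training
# modules, by office. All names and numbers here are illustrative only.
completed = {"Office A": 12, "Office B": 7, "Office C": 19}
total_staff = {"Office A": 40, "Office B": 35, "Office C": 25}

for office, done in completed.items():
    share = done / total_staff[office]
    print(f"{office}: {share:.0%} of staff have completed the modules")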

They might say, oh, that office has more of its staff trained in evaluation. The Administrator keeps talking about evaluation as something important. Maybe the next training that I approve for my staff should be in this area. So we're going to try to take advantage of that competition in a healthy way. And we're going to try to convey the view, which I at least hold, that doing evaluation is virtuous.

It's like exercise. It makes you stronger. It's a pain in the butt to do. It's not nearly as much fun as doing a whole range of other things you might otherwise do with your time and money, but it's virtuous. It's contributing to improvement over time. And so we want people who do evaluations and who promote them for their own programs and who share information about programs that didn't work as well as those successes. We want them to be rewarded with the great benefit of feeling virtuous.

And, finally, we want to really focus attention on creating a path that leads to better evaluation and learning, tackling some core institutional incentives and disincentives, working with partners in the State Department and elsewhere to reduce reporting burdens that are perceived to be unnecessary and compete for time and resources with evaluation. And we want to support what you might think of as social networks around evaluation, part of changing the social norms to make people feel like they're not in this alone.

So that's what's on our agenda, and I'd like to close by asking you to put something else on your agenda, and so I'm guessing that many of you are involved in various programs that are broadly speaking in the area of diplomacy. And so you think about how to use the diplomatic assets of the U.S. Government in important ways to improve the conditions in countries around the world and for our own benefit as well.

So this is not a fully formed thought, and I'd really like to engage in a conversation about this with the right people in the State Department. But I think that there is a very key role for diplomacy in supporting evaluation. And what do I mean by that? Evaluation is actually quite a political activity, because having information out there about what's worked and what hasn't constrains politicians. It limits the amount of discretion that they can have.

And that's just one dimension of the political nature of evaluation. So I think there's an agenda of support in the diplomatic realm: helping motivate and persuade the political elements in the countries in which we work to be supportive of and allow evaluation to take place, and for the findings to be shared. And there's a role for the diplomatic side of the house to help anticipate the risks of candor and transparency, and I think we have to be quite realistic about that.

So, again, it's a thought in formation, but I think that there's a real partnership to be brokered here. With that, I appreciate your attention and hopefully your interest in this area. I hope that some of what I have said resonates with your own experience, and I really look forward to a continued conversation and a chance to participate in a key function of U.S. Government programs that I think is essential to gaining and sustaining the support of the American public and really making the kind of difference that I think we all signed up to make. So thanks very much.

(Applause.)

MR. DAVIS: Do we have time for Q and A, a couple minutes for Q and A?

DR. LEVINE: I didn't succeed in using up all the time, so any questions, challenges, disagreements?

QUESTION: One, thanks very much for helping me feel virtuous early in the morning. That's a really unusual sensation.

(Laughter.)

QUESTION: The Mexico example was an intriguing one, because one of the final components of it was that the success of the change in policy can constrain political opportunities, if you will.

DR. LEVINE: Right.

QUESTION: And it strikes me that one of the issues that we have seen over time with evaluation, whether it be in USAID, whether it be in the Department, whether it be anywhere else in the Federal Government, is that evaluation constrains political opportunities in the United States Government. And I haven't heard anything, unfortunately, in your remarks that helps me understand how the kinds of changes that you're going forward with -- which I totally support -- are going to constrain the next AID administrator or the next Congress, or the next president, or the next secretary. And I'm curious how you see that process trying to unfold.

DR. LEVINE: So let me see if I understand the artfully phrased question. So are you saying what's irreversible about this?

QUESTION: Well, irreversibility is probably physically impossible, but what is going to slow down the next administration or, you know, the next administrator's desire to say -- and let's just pick 2012 right now -- we have an OMB decision or request to cut five percent out of everybody's budget proposal. What's going to constrain people from saying this is a brand new office, last in, first out?

DR. LEVINE: Look at that. I think we have a proposal on the table here.

(Laughter.)

DR. LEVINE: So I don't know that anything can constrain that particular -- I mean, obviously, some things can. But I don't think anything in the realm of evaluation can constrain that particular conversation. So, a couple of things -- and it's definitely, you know, a question that I will ponder with my colleagues more.

One is engagement with others. I think that part of what will create long-term success is making sure that we have a set of strong partnerships with the thought leaders in the academic community, with other development partners, and with our implementing partners, so it's not just us who feel the pain if evaluation is undercut in the future. So I think that's part of the story: getting a broad set of partners and supporters.

I think another part is to be quite aggressive on the transparency front and make sure that information is very broadly shared in the public domain about what evaluations are being done, what the findings are. I realize that this is difficult, but I think that, you know -- and I stand to be corrected by people who have been at USAID a lot longer than I have -- but I think in the past in general the development community was much more concentrated in the official institutions, the bilateral agencies and the multilateral development banks than it is now.

And although AID was a leader in evaluation among other things, including project design, it was a relatively small community that knew about that. And so I think that part of our sort of longevity, the longevity of these ideas has to do with making sure that they are known by a broad community. And then, you know, I think we have to prove our worth. We have to really demonstrate that there is value in evaluation.

That said, it is a political process and I don't think we can pretend that it couldn't be undone by a concerted effort in the future. But if you guys have thoughts about irreversible thermodynamics, I'd be interested in hearing them.

Can I take one more? I'll be quick. Yeah. We'll both be quick. Right? I know you need to move on, but --

QUESTION: I'd like to thank you very much for your brilliant presentation. My name is Phoebe and I work for Westat, which is a research corporation based in Rockville. I think I'd like to know whether you've given thought to the users of evaluation, because I think the resistance surrounding the whole area of evaluation within each institution is often related to the incentive system in terms of how the findings are going to be used.

So people might theoretically support evaluation, but if the findings are used in a certain way -- if a lot of focus is placed on the findings themselves, rather than on whether evaluation was integrated into programs and on whether we are learning a lot from the findings -- I think people might be discouraged from supporting evaluation and its findings.

DR. LEVINE: So, a point very well taken. I think we really have to guard against the possibility that those who reveal that their programs aren't pure success stories get whacked in the budget process. I think we have to really mitigate that risk.

One approach that I actually learned about from colleagues in one of the regions -- and which at least some people at USAID are doing -- is phased program design. You start in phase one with three different approaches to a particular issue, in this case how to address the problem of dropout in schools, because you don't really know exactly what's going to work best in a particular environment. You have three different models that you explicitly are testing or looking at in a careful way in phase one, and you have a commitment for funding of phase two that builds on the most successful approach.

And I think that is a brilliant kind of design, because, well, for the obvious reasons, that it builds in the incentive for learning without the risk that you're going to find out, uh, things didn't really turn out that well. You know, I'd better not share that information too widely because I'm going to get whacked in the budget.

So thanks again for your attention, and I wish you well in the rest of the conference, which looks really interesting.

(Applause.)


