
Workshop on Best Practices for Evaluating State Department Assistance Programs Overseas


Remarks
U.S. Department of State Third Annual Conference on Program Evaluation - Democracy and Governance Track
Washington, DC
June 8, 2010


MR. KULCHINSKY (phonetic): Good afternoon, everybody, and welcome to the fourth democracy and governance workshop, entitled Best Practices for Evaluating State Department Assistance Programs Overseas. Today's presenters, whom I will introduce, are from the Bureau of Population, Refugees, and Migration and the Bureau of Democracy, Human Rights, and Labor at the Department of State. Also, I'd like to announce that all of the presentations that have been taking place will be posted on the evaluation website, so you will be able to download all the different PowerPoint presentations.

That said, I'd like to introduce Fruzsina Csaszar, who works on monitoring and evaluation training and policy issues. She has participated in PRM's monitoring missions in Sri Lanka, South Sudan and the Horn of Africa. Ms. Csaszar helps manage two FSI-certified courses for the PRM bureau: a weeklong workshop on PRM's monitoring and evaluation policy and practice, and a series throughout the year on monitoring and evaluation of humanitarian assistance.

I'd also like to introduce Dr. Karen Chen, program evaluation specialist in the Bureau of Democracy, Human Rights and Labor. Karen has been working with DRL's programming unit to develop a performance evaluation strategy, which includes formulating useful performance indicators for programs funded by DRL and improving the reporting of indicator information by grant partners. She started in DRL as an AAAS science and technology policy fellow and was previously DRL's bureau planning officer. Karen has a Ph.D. in social psychology from the University of Michigan in Ann Arbor.

Also, Ms. Emily Bruno, monitoring and evaluation officer in PRM. In PRM, Emily develops and implements monitoring and evaluation policy and practice by training PRM staff in two FSI-certified courses, providing technical assistance to individual officers and recommending improvements to the bureau's performance management practices.

Prior to joining PRM, Emily conducted in-depth open source analysis of humanitarian issues in Africa while at the Department of State's humanitarian information unit. She began her tenure at State as a presidential management fellow. Previously she worked on gender-based violence programming and conducted primary qualitative research in West Africa. She holds a master's in law and diplomacy from the Fletcher School at Tufts University, where she studied humanitarian assistance, forced migration and human security, and participated in the inter-university initiative on humanitarian studies. Thank you.

MS. CSASZAR: Okay. Thank you, Yarro (phonetic). Now that you have heard this introduction of us, I wanted to get a sense of who is in the room today, to get a sense of our audience, because we do want this to be a very participatory session. We do have some good practices we would like to highlight, but we also really want to hear from you about good practices that you have developed in your organizations. So first of all, just a show of hands: who in this room works for the U.S. government in some capacity, currently employed by the U.S. government?

(Show of hands.)

MS. CSASZAR: Okay. And of those, who works at the Department of State?

(Show of hands.)

MS. CSASZAR: Okay. And who here works for a private organization?

(Show of hands.)

MS. CSASZAR: Okay. And finally, who here works for a nongovernmental organization or another public organization?

(Show of hands.)

MS. CSASZAR: Okay. Great. Excellent. We were hoping that we would have a great mixture from all of those groups, so we wanted to welcome you. And finally, one more question. Who here works directly on implementing foreign assistance programs overseas, so direct implementation?

(Show of hands.)

MS. CSASZAR: Okay. So given there are just a few, who here works mostly as the monitoring and evaluation officer within their organization?

(Show of hands.)

MS. CSASZAR: Okay. Great. So I wanted to talk to you today about two key issues that both PRM and DRL have identified as key challenges in our bureaus in terms of evaluating foreign assistance programs overseas, to talk a bit about some of the good practices we have tried to implement to rise to these challenges, and then to hear from you about your opinions and any practices or ideas you have in response to some of these challenges.

The first is the challenge of conducting evaluations in insecure environments. Increasingly, foreign assistance is programmed in countries and regions that are highly insecure, especially in terms of humanitarian assistance, which is what PRM does. For those who don't know PRM well, just briefly: the Bureau of Population, Refugees and Migration has the mandate from Congress to be the primary liaison for the U.S. government to the UN High Commissioner for Refugees and to the Red Cross and Red Crescent movement, and to provide humanitarian assistance funding for refugees and conflict victims overseas.

We also fund nongovernmental organization partners as well as IOM and the UN Relief and Works Agency for Palestinians. Most of PRM's funding, about 85 percent of the bureau's funding, goes to international organizations, and we work mostly in multilateral settings through UN partners and the ICRC. But 15 percent of PRM's funding goes to NGO partners.

Both our international organization partners and our NGO partners have raised with PRM directly -- through reporting and through the field visits that PRM refugee coordinators and program officers have conducted -- this challenge of how to monitor and evaluate programs in increasingly insecure environments.

So this is a very important issue for PRM, and as a bureau, we’re working to develop policies to strengthen both PRM policy and practice in making sure that we are able to be accountable for the funds that our bureau and our partners spend in these insecure environments. Oftentimes the greatest humanitarian need is in areas of conflict. So these are insecure environments where USG personnel can’t just go out and monitor a project directly. And even grantees, contractors, have trouble accessing these locations.

That said, the impact of foreign assistance and humanitarian funding in these regions can be very important, and important to measure. These are areas where better-targeted assistance and more effective delivery mechanisms can literally save lives. So learning from what works and what doesn't is very important in these situations, but it can be very difficult because these are such insecure environments.

And so increasingly, humanitarian programs are taking place in these environments. They're very difficult to access, and therefore programs are difficult both to monitor -- to set up good monitoring and data collection systems for -- and to evaluate overall. But because humanitarian needs are so high in these areas, the USG would like to continue to fund these assistance programs and has done so.

So we know that setting up strong monitoring systems is crucial to providing the data that will be used to conduct evaluations of overall program effectiveness and impact, and we’re really working on how we can set up good monitoring systems and also evaluate in insecure environments, especially in places where security restrictions limit the movement, the ability of USG personnel to conduct site visits.

PRM has to be creative in getting information to support both program monitoring and evaluation efforts. By way of example, one good practice we have instituted: our regional refugee coordinators and program officers who cover Lebanon and our programs in Lebanon recognized that, because of security restrictions, we are not able to directly monitor some of the grant recipients we fund there. So PRM solicited proposals for a stand-alone award from entities with the capacity to assist PRM and the regional refugee coordinator in administering an external monitoring and evaluation system, grounded in local expertise and language skills, for PRM-funded activities in Lebanon. We have also told our partners -- UNHCR offices in the field, UNHCR headquarters and NGO partners -- that PRM as a bureau does support funding for evaluation.

Oftentimes, NGOs especially say, well, if we insert an evaluation budget line item, it's going to be the first to get cut. Not at PRM; we very much support evaluation. Especially in some of these insecure environments where it is difficult to evaluate projects, if partners have ideas for how they can do so and can show PRM how they are going to do it, we will work with partners to directly fund internal evaluations or to fund external evaluations of programs.

I wanted to turn it over to Karen for some examples from DRL.

MS. CHEN: Hi. So as Yarro has described, I am Karen Chen. I work as the program evaluation specialist within the Bureau of Democracy, Human Rights and Labor. DRL is actually relatively new in terms of providing foreign assistance programs; I think we gave out our first grants starting in 1998. But there has been a lot of interest, and we have shown a lot of successes with our programs, so congressional support for our programs has grown exponentially since then. Still, relative to USAID's DCHA programs, our programs are seen as much smaller and much shorter term and, budget-wise, in general much smaller than DCHA's.

But despite that, I think monitoring and evaluation of our programs is still a critical and necessary component of what we do with our foreign assistance programs. And those of you who work in the area know that democracy and human rights programs are not very easy to measure. They're not easily quantifiable compared to programs in the health or education sectors. It's not that we can say, okay, we have seen mortality rates drop; how is it that we know that a country is moving in the right trajectory in terms of democratic progress?

And so one of the things that we constantly struggle with -- and I think this has come up in previous sessions here -- is, how do we come up with good indicators for what we do in terms of democracy and human rights programs? And to add to that, DRL has programs all over the world, and as Fruzsina has said, we also operate in a lot of insecure environments. So besides the fact that it is difficult to evaluate or measure the success of a democracy and human rights program, how do you deal with the additional challenge of working in insecure environments?

Some of the challenges that we face, and that we have had innumerable discussions about within our bureau: because our program is relatively small compared to DCHA's, except in countries where we have a significant amount of funding, we might not necessarily have a person who works at the embassy to provide consistent oversight of the programs. So what are some creative ways that we can think about providing proper oversight and management of our programs there?

And then sometimes, for those countries where we are able to have a presence, it's just too dangerous to monitor the program. For example, we have programs in China, Cuba, Burma, some of these countries where we may or may not have a USG presence, and so how is it then that we know we're having an impact in the country if we may not necessarily be able to see what the impact of the program is?

And so I'd love to hear from people if they have ideas or have found any success in trying to monitor those types of programs. The other scenario is that, unlike USAID, we don't have a branding in terms of the types of programs that we do, and we like our grantees to be able to carry out the work without necessarily having the USG stamp on it.

So they can conduct their work and reach out to participants who don't have to feel like they're doing this because they're getting USG funding. But because of that layer of anonymity, if you will, sometimes the people at the embassy might not be able to have direct contact with the direct recipients of the program. So in those types of cases, we also struggle with: okay, if we can't have direct access to the direct beneficiaries, how do we know that these programs are having an impact?

So I think those are some of the key questions that we continuously struggle with in trying to come up with creative ideas or creative methods for monitoring and evaluation. And then, just quickly: how can we evaluate these types of programs, and, in thinking about measuring success for programs in particularly restrictive, democratically challenging environments, what are the key milestones we can point to to say, okay, it looks like we are making some sort of change, even if it is incremental?

And I'll turn it back over to Fruzsina to talk about the issue mentioned earlier.

MS. CSASZAR: Yeah, just following up on that -- I think especially for those who directly implement programs in the field -- since 2006, according to the Humanitarian Policy Group's report on providing aid in insecure environments, attacks against aid workers have increased sharply, accounting for more than 60 percent of violent incidents. It used to be that the number one cause of death for aid workers overseas was traffic accidents. It is now actually intentional violence against aid workers. So that is another layer that is added onto this issue.

And the U.S. government is very aware of this issue. Both PRM and DRL oftentimes fund sensitive programs, or programs in very politically sensitive environments, as well as environments where, especially in PRM's case, war is still ongoing and there is still conflict. These are very difficult populations to access, but very important ones, and again, some of the areas of greatest need.

So we are very interested in your thoughts and ideas during the participatory portion of this discussion: how your agencies and organizations have dealt with this issue, and what lessons you have learned that PRM, DRL and the State Department as a whole can use in shaping policy, because this is definitely something that we, as a community, are grappling with.

The second issue that Karen and I thought would be interesting to discuss today is a longer-term, more holistic issue, which is the challenge, we feel, of building an evaluation culture, especially given some of the political imperatives in foreign assistance programs. Especially over the past two or three years, there has been a lot of support for strengthened evaluation efforts, and there is a lot of congressional support for strengthened evaluation efforts as well.

And so there is a lot of lip service paid to the idea that evaluation is very, very important. And it is, and we believe that it is. But actually implementing it when there are also political priorities that make it very important to put money in certain areas of the world -- you have to balance that. You have to balance political priorities, which often take precedence over evidence-based decision making, with the imperative to evaluate and to keep learning so that you do know what works and what doesn't.

And so that there is accountability, there are lessons learned, and, as organizations, we learn about what works so that in future years we don't keep making the same mistakes; and so that, for projects that have been very successful, we understand why they have been successful and whether it's possible to replicate that success.

So while research and evaluation I think are recognized, widely recognized as being important, evaluation is difficult to build into bureau processes in ways that reward curiosity and risk-taking. And both internal and external evaluations can provide important information for policy and programming.

What's very important, we have found, is the link between data, findings and recommendations that are clear and actionable. I think that an evaluation is useful mostly if it's used. And oftentimes, the attention paid is to whether an evaluation is being conducted.

That’s very important. But I think what’s more important is how is that evaluation used. What did you really learn from it and how are you utilizing it? And so as part of creating an evaluation culture, I would really argue that what that means partially is that you are creating a culture of learning within your organization, that learning about what has and has not worked is not only encouraged, but it’s seen as something that is essential to your work.

In terms of how to get at this, we have found there is no magic bullet. This morning and throughout the sessions today we have heard about the importance of leadership from the top, and yes, we believe that is very important: having leaders in an organization talk about the importance of evaluation, encourage evaluation experts to conduct evaluations and encourage those lessons to be published widely. That's very important.

But we really feel that to get at this evaluation culture question, you have to come at it from as many different angles as possible. So we have a policy of supporting a variety of both internal and external evaluation efforts, and we have seen that over time, over the past five years or so, PRM has really been able to strengthen its evaluation culture as a result.

We have trained our staff so that they have a better sense of how to both monitor and evaluate projects and engage with our partners to evaluate projects’ progress and impact. We fund projects to support international standards that provide the indicators against which we measure progress of our projects and the eventual impact. So, for example, PRM supports the Sphere project, which is a project that provides guidelines and standards for humanitarian assistance and disaster response and conflict response.

We conduct research. For example, we recently had a request for proposals -- we got 45 different proposals -- for research and tools to strengthen evidence-based decision making and evaluation in humanitarian assistance programs. So we have seen that there is great interest in university and research communities, as well as in the NGO community and international organizations, in doing more learning and evaluating what works and what doesn't.

We have worked to support our partners' staffing: we directly support UNHCR's junior professional officers program, and we have directly funded UNHCR protection officers to strengthen protection for refugees and conflict victims overseas through that organization, a key partner of ours.

We support monitoring and improvements in partners' data collection and analysis. For example, with UNHCR, our biggest international organization partner, we have really pushed for reform and strengthening of their health information system, and we funded the Centers for Disease Control and Prevention in Atlanta to evaluate UNHCR's health information system.

So we pushed for the creation of this data collection mechanism, and then we pushed for and funded an evaluation of whether it is working, where it's not working and why, and how it has made an impact on the health of refugees in both camp and non-camp settings. So far it has only been rolled out in camp settings. How can we roll this health information and data collection system out in urban settings?

We also conducted an external evaluation of our reintegration programs in Burundi. PRM has been funding refugee reintegration in Burundi for over five years, and we wanted to know what the impact of that programming was -- a very useful evaluation that we have actually used in several other reintegration situations.

A note here: this is where the question of "Great, an evaluation was conducted -- so what did you do with it?" becomes an important point. Some of the recommendations in this evaluation were very actionable. Others were very systemic in nature -- for example, looking at the links between relief and development and how those links could be strengthened. Those are very big, theoretical recommendations, very difficult to implement and likely to take many years, so not nearly as actionable; still very useful, however.

And finally, we do support evaluation in our partners' budgets, and that is something we have emphasized; we have worked through InterAction to train NGO partners on how to do better performance monitoring and evaluation. So those are all the different ways in which our bureau has really tried to strengthen the evaluation culture, not just of our bureau but of our humanitarian assistance community.

I wanted to turn it over to Karen.

MS. CHEN: Yeah, in terms of building an evaluation culture within DRL, we actually have a lot of the same sorts of questions and concerns: what can DRL do to better embed evaluation into our work with our grantees, and also to instill a culture of program evaluation among our program officers? To talk a little bit about some of the work that we do: we have been pushing our grantees a lot more to do independent evaluation in their work, and just in the past one to two years we have actually seen, from our proposals, a substantial number of our grantees coming in with an independent or external evaluation as part of their program proposal.

And then also, during the negotiation process, we work on beefing up and building a much more robust M&E plan and providing more guidance to our grantees on reporting, so that it provides the types of information that really assess results -- not just an activities report, but more results-oriented information for us.

But linking back to the more general questions that we have: particularly given the current economy, and the fact that DRL's programs are in general relatively small compared to other bureaus', how can we justify putting a substantial amount of money toward program evaluation when a lot of the grantees are looking to put the money toward program activities?

And so how is it that they come to see the value of program evaluation, balancing that with the fact that this is going to be money taken away from program activity? How can we show them that the program evaluation they're going to be doing is going to be helpful for them in the long run -- maybe not directly in the short term -- and what can we do to build that into the culture?

And also, if there is a limited amount of money available for program evaluation, what is the most cost effective way of doing it?

We were talking, just in the last session, about how an impact evaluation is great, but sometimes it might not necessarily get at causality. So if it's a small program, how do we know that it's having the impact they say it's having, without going into an experimental or quasi-experimental, randomized control design study, which is not only money intensive but also very labor intensive?

So for these types of programs, does it make sense to be putting in that amount of money? What type of evaluators should we be hiring? Should we be working with local evaluators? Do they have the capacity to do it? What if they don't? How can we build a culture of evaluation in the host country so that, if we're going to continue to do future programs in a particular country, the local NGOs can tap into the evaluators there?

And then one more general question: stepping back from the choice of hiring an external evaluator at the local level versus a well-known U.S.-based evaluation expert, what about the grantees themselves? Some of them may be amazing implementers who have done amazing work, but M&E is definitely not their forte, so what can we do to work with them?

With some of them, it's just explaining the difference between an output and an outcome. For me personally, working with several grantees as they try to rework their M&E plans, just getting them to come up with concrete targets and to identify the best way to measure the outcomes of their programs has been a struggle.

So now at this point I turn it over to Emily to facilitate.

MS. BRUNO: So as you can see, we have lots of questions, and that is actually where we started when we were designing this panel. We said, here are two areas that are kind of the cutting edge for us in our overseas assistance programs, and we really want to hear from folks in the room about what's working for you all, what questions you might have for us, and what questions you might be having in general.

So just to very briefly recap, we identified working in insecure environments -- doing monitoring and evaluation in insecure environments -- and building an evaluation culture. And I know we gave you examples very specific to PRM and DRL, but I think there are some general themes within each of these.

For those folks who were here this morning, Senator Hagel really emphasized accountability. I know that, with public funding, accountability is something we all have on our minds: how do we keep our eyes on the projects, how do we know what's happening? So what we face in insecure environments are issues of access. I'm sure you have other examples of programs where you don't have access to the populations you're working with or to the programs themselves.

We also face issues of security: how do we move around overseas, how do we do our work overseas, and how do we consider the security of our partners? And then with building an evaluation culture, the key question there is, how do we build linkages from the evaluations to the programs and policies that we then have to implement, and how do we do that, as we said, in the most cost-effective way?

So those are some of the questions that we have for you all. We'd be happy to answer any questions you have for us, but we'd also be really interested in hearing what you have to say. Please.

QUESTION: I have one question. I was wondering if that was somehow factored into the question about (inaudible)?

MS. BRUNO: In case anyone couldn’t hear, the question was about impact and how we factor it in.

MS. CHEN: No, impact is definitely an important part of how we think about monitoring and evaluation, because that's the whole point of doing the program: having the results and the impact it's supposed to have. I think, again, that goes back to the struggle over the level of impact. Let's say we do a lot of programs.

In DRL, for example, we do a lot of programs in democracy and human rights, and we do a lot of training, whether it's training political party members or training women and youth to become a more empowered population within their community. With these types of things, the struggle is: okay, we can show the number of people we're training, but then there's the so-what factor, which is the impact.

Sometimes those types of impact might not be seen until a few years out, so how can we gauge a sense of the trajectory? Are we gauging the trajectory of the program's impact in an accurate way, and what is a realistic way of measuring it by the end of the project, when we know that sometimes impact might not be visible until a few years down the line?

QUESTION: The reason I asked -- I'm not trying to be emotional about it, but years ago I did a refugee study for Congress when I was working with GAO, and I was in Sudan, and we were finding that there were reports being sent back to Washington that were being done by third parties, and the refugee coordinator found it unpleasant to go into the camps. And we encouraged the refugee coordinator to go with us because the things we were hearing when we arrived in country were very troubling.

And so I think of it in terms of life and death issues, you know. These are not just numbers; these are people, people who are possibly dying -- or not dying -- because of the intervention you're doing. So when I think of impact, I think of people.

MS. CSASZAR: I think that's a great point, if I could answer that one. And actually, what you pointed out is something we struggle with, I think, the most in humanitarian assistance: often, when we say let's focus on evaluations or let's try to put some money into evaluating the programs that we do, what we get back is, "I could feed people, and you want me to spend that money on an evaluation?"

And I think that is, for a lot of reasons, why the humanitarian assistance field is much further behind the broader development field in the use of evaluations: people think they're constantly making a choice between life and death in program spending. What we have tried to do is say that small amounts used strategically can improve programs for the long term, because the reality is -- and Sudan is a great example -- we have been funding programs there for 20 years, so it's not a short-term investment.

Yes?

QUESTION: I guess I was just going to try to answer the question. I used to work at GAO, and I also ran evaluation and strategic planning shops in the executive branch, and one of the things I'd suggest is having a conceptual model of what the program is supposed to do that includes steps along the way toward progress years down the road, so that you have, sort of, dominoes falling that you can measure, and you'll know whether you're heading in the direction you need to head, or whether the program is really not working at all or headed off in another direction.

Also, for evaluation culture, what we did was make sure that everybody had a clear understanding of the mission. Again, that conceptual modeling can help you see where your program fits into the mission or vision of the whole agency, and people can see what their role is in that. And then there has to be an understanding, which OMB has been trying to instill for years, that managing a program includes planning it, implementing it and evaluating it. You can't skip different parts of it, and if you don't understand whether a program is really functioning well, and you don't know how to fine-tune it and change it, then you're wasting money anyway. So it helps if people can begin to understand that as part of their culture.
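[A minimal sketch, in Python, of the kind of "results chain" conceptual model the commenter describes, where intermediate milestones can be checked against targets along the way. The program, indicators and targets below are invented for illustration and are not drawn from any PRM or DRL program.]

    # Hypothetical results chain for an illustrative training program.
    # Each link is a "domino": an indicator with a target that should fall
    # into place on the way toward the longer-term goal.
    results_chain = [
        {"step": "output", "indicator": "people trained", "target": 200},
        {"step": "outcome", "indicator": "trainees active 6 months later", "target": 120},
        {"step": "intermediate impact", "indicator": "trainee-led community initiatives", "target": 15},
    ]

    def check_progress(actuals):
        """Compare actual values against targets for each link in the chain."""
        for link in results_chain:
            actual = actuals.get(link["indicator"], 0)
            status = "on track" if actual >= link["target"] else "behind"
            print(f'{link["step"]}: {link["indicator"]} = {actual} '
                  f'(target {link["target"]}) -> {status}')

    # Illustrative mid-project check: outputs met, outcomes lagging.
    check_progress({"people trained": 210, "trainees active 6 months later": 90})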

MS. BRUNO: Great, thanks very much for that. Any other comments or suggestions like that for the group or questions?

QUESTION: Well, I want to concur with that. The other issue is that they need to understand that evaluation is part of "how can I do this better," and the program provider has to be working with the evaluator -- they can't just be handed the evaluation and told, here, you take care of it. No, it's got to be interactive between the program providers and the evaluators, whether the evaluator is a staff member on the project or a third party; it's got to be an integrated approach.

MS. BRUNO: Thank you so much. Any other challenges from other folks? I know some of you are directly funding projects.

QUESTION: Just to add to the complications, maybe, the challenges that you raised: perhaps you should not be seeking long-term impact for programs related to democracy, because, as you mentioned, you have been investing for more than 20 years in Sudan and in Egypt, and you cannot really assume that a project with a limited budget or a limited timeframe will have an impact on democracy in that nation.

You may have outputs, you may have outcomes that more people will participate in elections, but there are so many variables involved, and it’s not your responsibility as a program to seek change in the democratic environment in any nation; it’s their responsibility or our responsibility, coming from Egypt, all right? So you should not take it upon yourself to try to achieve an impact relating to democracy. Would that --

MS. CSASZAR: No, I think it's an integral part of the current debate about what we measure and why. I don't know, Karen, if you have any reaction to that or --

MS. CHEN: No, definitely, I think what you're saying is completely relevant, but then what's the reason for continuing the program? Sometimes we need to ask: if we don't see measurable success, should we still continue working in these countries or not? And I think that's the ongoing debate that we have of policy versus program: do we keep chugging along or not? But what you're saying is true, though. There are so many external factors. There could be other donors coming in, doing work in that country as well, and that needs to be taken into consideration too.

So yeah, we don't think that the onus should be completely put on us. Yeah.

MS. CSASZAR: Just following up on that from the humanitarian assistance point of view: although we know that there are a lot of things that contribute to mortality, we do measure mortality rates, especially in emergency situations. It's a huge indicator of impact.

And so it could be that, because of weather-related events or because of a conflict, mortality rates were higher than the target we had set, which was to keep them as low as possible. But we will still measure that and we will still report it, because it's important to note that even if there are confounding variables, our programs also contribute toward that number. So yes, we definitely do measure the impacts, even if we're not the only contributor.
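[For readers less familiar with this indicator, a minimal sketch of how a crude mortality rate is typically computed in humanitarian settings. The figures are invented, and the 1 death per 10,000 people per day emergency benchmark is the commonly cited threshold associated with the Sphere standards mentioned earlier, used here as an assumption.]

    # Crude mortality rate (CMR), expressed as deaths per 10,000 people per day.
    def crude_mortality_rate(deaths: int, population: int, days: int) -> float:
        """Deaths per 10,000 people per day over the reporting period."""
        return deaths / population / days * 10_000

    # Illustrative reporting period: 45 deaths in a population of 60,000 over 30 days.
    cmr = crude_mortality_rate(deaths=45, population=60_000, days=30)
    print(f"CMR = {cmr:.2f} deaths per 10,000 per day")
    print("Above emergency threshold" if cmr > 1.0 else "Below emergency threshold")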

QUESTION: (Inaudible) democracy --

MS. CSASZAR: No, no. Yeah.

MS. CHEN: Yes, please.

QUESTION: My question is, it sounds like your bureau is very pro training and is looking at ways to be cost effective. So I'm wondering, since you mentioned that you have some great implementers that may be lacking a bit in how to do the monitoring and evaluation piece, whether you have considered doing web-based trainings -- so that your implementing partners could access them worldwide at any time that is convenient to them -- that punch out your basic principles: the difference between outputs and outcomes, how to choose an appropriate custom indicator, those types of things.

Because I'm thinking of the trauma treatment area, where there was a great lack of internationally recognized best-practice trauma treatment principles, and the cognitive behavioral trauma treatment program actually developed a web-based program, and it had great success. In fact, people at the end of it could do it on their own time, could print out a certificate, and then were able to use that to begin providing services and show that they had the basic knowledge.

MS. CHEN: That's a great idea. Also, I would note that the Office of the Director of Foreign Assistance, on their public website, has a monitoring and evaluation training like this. In a lot of cases PRM doesn't like to reinvent the wheel, and we save ourselves time where we can, so just for folks to know, it's foreignassistance.net. On the public section of the site there is an eight-section tutorial that is accessible to anyone, and it's a basic monitoring and evaluation training for folks, for partners.

MS. BRUNO: I think we have just a few minutes left so we’ll take these last two questions. So the woman in the purple, please. Thanks, Yarro.

QUESTION: Oh, I just wanted to hear more about -- you said a problem, a challenge, was collecting data in other countries, especially in insecure environments and all that. I just wanted to hear more about how you're able to do that, and whether anybody else has suggestions to offer you on that.

MS. CSASZAR: A lot of it is about triangulation of data and trying to get reporting from as many sources as possible. So it would be, if possible, directly through our staff on the ground; through our program partners, both NGOs and international organizations; and through other donors, liaising with them to see what kind of reporting they're getting and sharing that information and reporting.

And also, if possible, getting information from other USG entities and, in some cases, from the private sector as well. We've got the Overseas Security Advisory Council, which does share information between NGOs and the U.S. government, especially relating to insecure environments and how to operate in them, so we link into that network.

But a lot of it is about trying to get as much information as you can and then triangulating it, if you can't get the information more directly through program monitoring and evaluation at the site because you can't access the site. I don't know if you have other --

MS. CHEN: Yes, what you're saying -- we try to do that as well. Sometimes we just have to rely a lot on our grantee in terms of what they are able to report back to us. We try to triangulate as much as we can, but sometimes the amount of information that's available is just so limited, and so you kind of just have to rely on the grantee.

MS. BRUNO: Okay. Do you have a quick question? Is it complex?

QUESTION: Just a comment.

MS. BRUNO: Okay. Yes, please.

QUESTION: Just a quick comment on the question of building a culture of evaluation. I know that one of the things we're finding at USAID is that there is actually a tremendous amount of enthusiasm in the field for conducting more evaluations and better evaluations and figuring out how to organize our findings better, but also a great amount of confusion and frustration about the lack of clarity -- paving the way by showing very clearly what an appropriate budgetary level for an M&E component of a project is and what the necessary steps are, and making it a little bit more of a streamlined process so it's not a special ad hoc thing every time you want to do a project.

And I do think that if you can demonstrate very clearly the feedback loop -- this is how the findings of the evaluation will be integrated into our programming in the future -- there is actually a tremendous amount of support for evaluations, even if it takes away some budget from programming, because people intuitively realize that the findings might actually help save lives in the future, or advance whatever goals you're trying to achieve.

MS. BRUNO: Sure. That’s a great point. Thanks so much. So thank you all. I will turn it over to Yarro to conclude.

MR. KULCHINSKY: I'd like to thank our presenters, Fruzsina, Karen and Emily, for a wonderful discussion and for really bringing the day to a close.

If you notice on your agenda, at 4:00 the next plenary session begins. So with that said, I would really like to thank our presenters and wish you a wonderful afternoon. Thank you.

(Applause.)


