MS. STERN: Thanks, everyone, for coming. Sorry. We’re going to go ahead and get started, since we don’t have a whole lot of time, and we want to make sure we leave adequate time for discussion at the end.
My name is Lili Stern, and I am with the Department of Labor’s International Labor Affairs Bureau, and this is my colleague, Michael Branson. And we are here today to talk about improving knowledge transfer through project evaluations, really trying to take a hard look at our evaluation processes and policies, and how we can use them as a more effective tool for transferring knowledge about what works, both internally, for program improvement, and also externally, to a very broad range of stakeholders.
One small caveat before we get started is I think in the program it mentions that we’re going to be sharing with you a proposed framework for doing that. And, really, the point of having a roundtable, in fact, was to have a dialogue about the steps that we’re taking. And it is a work in progress, and we’re learning and doing at the same time. But we’re interested in hearing what others are doing, and also what additional steps we think could be taken in this regard. So, it is a framework that we are hoping to develop along with you.
And with that, I think we will get started. We are really going to focus in on evaluations that have been carried out of our programs in sub-Saharan Africa over the last 10 years, and really trying to use these as a launching ground to have a constructive brainstorming session at the end.
So, let me just walk you through the agenda for our discussion today. We are going to give a very brief overview of shifts in evaluation policy, both from a U.S. Government perspective, but also broader international kinds of changes, and the way that we think about evaluation and pressures that are coming to do evaluation differently. And I know that most of you work in the evaluation field, so this is probably not new information, so we will kind of do that fairly quickly. And if there are questions, we can certainly take those at the end.
And we’re going to talk about specifically the bureau that we work in, and how our evaluation policy and some of our processes have changed along with the shifting times.
And then, again, we’re going to offer these particular case studies of projects that we’ve funded in Africa, and some of the challenges that have come up. Michael recently carried out a meta-analysis of these evaluations, and identified some common issues that came up with respect to performance measurement, with staffing and management, and also making the best use of recommendations and findings from midterm evaluations to improve program design for future projects.
And we are going to offer a few recommendations. But, as I said, we would really like to open that discussion up for a more participatory dialogue. So we are going to leave, hopefully, about 20 or 25 minutes for that at the end.
And with that, I will turn it over to Michael to walk you through some of the changes in evaluation policy from a U.S. Government perspective over the last 10 or so years or more.
MR. BRANSON: Yes. So this is just a brief history of evaluation policy at the U.S. federal level. So, kind of the broad trend has been to encourage more program evaluation and more rigorous program evaluation.
So, in 1993, the Clinton Administration implemented the Government Performance and Results Act, which shifted towards a results-oriented measurement of federal programs. Then, in 2003, that was pushed forward again by the Bush Administration with the Program Assessment Rating Tool, which put more emphasis on programs’ impact and really emphasized impact evaluations.
And then, in 2005, Secretary Condoleezza Rice, underscoring the push for an evaluation process that worked hand in hand with the planning of projects, called for a foreign assistance strategy that enhanced the accountability, effectiveness, efficiency, and credibility of foreign assistance by introducing a system of coordinated planning, budgeting, and evaluation.
Then, within the last year or so there has also been a push in Congress to reform U.S. foreign aid, which includes a reform of evaluation policy. Specifically, there have been two bills introduced in the House and one in the Senate. The Senate bill, which was introduced by Senators Kerry and Lugar, would establish a bureau for policy and strategic planning within USAID, and it would also create an office for learning, evaluation, and analysis in development, and an independent council on research and evaluation of foreign assistance.
Then also, in 2009, Secretary Clinton called for a quadrennial diplomacy and development review, which will review current development practices, and act as a blueprint for future efforts. That review is due in 2013, with a midterm report before then.
And in addition, last summer, President Obama announced a Presidential study directive on global development policy. That study directive has not yet been released. But initial reports suggest that it will focus on the sustainability of programs, as well as on designing future aid programs based on the evidence base from past programs.
And, let’s see, the current administration also places a strong focus on rigorous independent program evaluations. Specifically, it emphasizes impact evaluations over process evaluations. And the effort to promote impact evaluations is taking three forms.
Number one, it plans to make information about all USG evaluations available online for other USG agencies, researchers, and the public.
Number two, it plans an interagency working group to promote stronger program evaluation.
Number three, it plans to create a voluntary evaluation initiative, which will fund approximately 20 rigorous program evaluations, USG-wide.
Other administration efforts are to improve the transparency of program evaluations -- that is, to make them available to people outside the government, researchers and the public -- and also to foster a whole-of-government approach, so that all development agencies are working together to share what works and what doesn’t in their projects.
And I will turn it back over to Lili.
MS. STERN: So, as Michael mentioned, the current administration does emphasize impact evaluations over process evaluations. But we can’t seem to just turn on a dime. So, at least as far as ILAB goes, we are certainly shifting and moving in that direction. Not wholesale; we’re still doing process evaluations. But we’re trying to build impact evaluations into new programs as they’re being funded and designed.
In particular, our child labor office is planning to pilot randomized controlled evaluations with new programs in El Salvador and Bolivia, and possibly in Tanzania, and working with academic and research partners to do that.
And on the trade and labor affairs side, similarly, we’re not quite as far along in that regard, but we are looking to build that, and, in particular, a program that we fund heavily through the ILO, the International Labor Organization, and the IFC, the International Finance Corporation, called Better Work, which has been going since about 2001. It started in Cambodia and has yet to really have a rigorous evaluation. Although there is a lot of anecdotal evidence that it seems to be an effective model, we are really keen to show that in a more rigorous way.
So, there is actually an evaluation underway in Vietnam of the Better Work program there, and we are hoping to use a similar approach with other programs where we’re funding Better Work, possibly in Haiti and Lesotho, as well.
And, in terms of transparency, as Michael mentioned, there is a much stronger emphasis on transparency of foreign aid in general, but specifically of the evaluation process, as well. And on the flip chart I have just kind of tried to note some of the key developments that have happened in recent years, and particularly some documents that point to transparency of evaluation as a key factor in improving knowledge about what AID is doing, whether it’s working or not, and improving access to that knowledge.
So, going back to 1991, the OECD’s Development Assistance Committee, of which the United States is a member, released principles for evaluation of development assistance.
And -- because this is something that has remained on the radar screen all of these years up until now -- they highlighted that the credibility of evaluation depends on the expertise and the independence of the evaluators, and the degree of transparency of the evaluation process. Credibility requires that evaluation should report successes as well as failures, and recipient countries should, as a rule, fully participate in evaluation, in order to promote credibility and commitment.
It also mentions that the transparency of the evaluation process is crucial to its credibility and legitimacy. To ensure transparency, the evaluation process, as a whole, should be as open as possible, with results made widely available. Evaluation reports must distinguish between findings and recommendations. So, this was dating back to 1991.
In 1998, OECD released a review of these principles for evaluation, and reported on how member countries were doing. And they included specific criteria around transparency. In particular, they looked to see how widely information about the evaluation process -- meaning the terms of reference, the scopes of work, the choice of evaluators, and the analysis, findings, and recommendations of the reports -- was available. Was it available only within the specific funding agency, to management staff within that agency? Was it available to legislative groups or bodies? Was it available to the public and to the press? And if so, to what degree?
And then also, what was the sort of nature of participation of stakeholders in the country, itself? So that was an opportunity to take a hard look at what the countries were actually doing, not just the principles that they had stated and agreed to, but how they were following up.
In 2002, there was the Monterrey Consensus, from the international conference on financing for development, which re-invigorated the commitment to increase aid effectiveness and transparency. In 2005, the Paris Declaration on Aid Effectiveness.
In 2008, really, a lot of things started taking more concrete shape. And, in particular, the international aid transparency initiative was launched, along with a statement. And I should note that the U.S. is not a signatory to that, but there are a number of other large donor agencies -- DFID from the UK, Irish Aid, the Australian Government, CIDA -- and then a number of foundations, the Hewlett Foundation being one of them, the World Bank, a lot of other large players in the funding field that did sign on to this.
And, along with that, a set of aid transparency principles, which talked about the need for greater transparency at large, but also with respect to evaluation. And, in particular, making information available about the results of development efforts.
And then, also, the OECD DAC released its key norms and standards for evaluating development cooperation. And then, this year, the guidelines and reference series on quality standards for development evaluation were released. And it includes, specifically, a section on dissemination of evaluation results: it is expected that these be presented in an accessible format, that they are systematically distributed internally and externally for learning and follow-up actions, and to ensure transparency.
And they also talked about a free and open evaluation process: the evaluation process must be transparent and independent from program management and policy-making to enhance credibility.
They talked about the make-up of the evaluation team, and how to make the selection of that evaluation team as transparent as possible, and then also some recommendations around engaging a full range of stakeholders and partners in the evaluations.
So, this has been a conversation that has been developing over a long period of time. But that gives you a sense of not only the U.S. Government policy and expectations for transparency, but also the commitments that we have made internationally, and what our funding partners are saying -- but more importantly, doing -- to make changes in their evaluation processes and policies.
To look at how ILAB compares to the expectations in terms of the uses for evaluations: the current U.S. Government evaluation policy basically outlines three uses: one, determining cost-effectiveness of programs; two, strengthening design and operation of programs; and three, shaping budget priorities.
And going back and looking at our own evaluation policies, we actually have some overlap, but also some differences with that. And so we are kind of identifying where we may need to make some changes in our policies.
For the child labor office, the uses are looking to see if desired outcomes were reached, improving program design and management, achieving cost efficiencies -- which is, obviously, one of the administration’s concerns -- and also program sustainability.
And then, from our trade and labor affairs group, the use being more to enhance the management of ongoing projects, so sort of internal efficiencies. Improving preparation for new projects -- and I think, in the past, that’s really been our own, not necessarily improving preparation of other funders’ projects, but really looking much more internally -- and providing inputs into broader program evaluations, which could be outside of what we’re funding.
So, now we’re going to take a look at the programs that we have funded in sub-Saharan Africa, and really, as I mentioned, this meta-analysis that Michael carried out, looking at his findings with two particular questions in mind. One, did these evaluations foster program improvements, as we would have hoped that they would have done? And, two, how transparent was the evaluative process?
And for that we will also be looking at just our bureau’s current policies and processes, both on the child labor side and on our trade and labor affairs side, to see how they measure up with these expectations.
MR. BRANSON: Okay. So now I will go over some of the projects we looked at as case studies. I will go over these briefly, and then talk about some of the overall trends.
The first one was the strengthening labor administration in southern Africa project. That project’s aim was to reform the labor laws in Zambia, and strengthen institutional capacity in Botswana, Lesotho, Malawi, and Zambia. And it also aimed to strengthen the capabilities of the tripartite system there.
So, unfortunately for this one, we only had the midterm evaluation available. There was no final evaluation. But, as of the midterm evaluation, the project was successful in providing sustained intervention there. And it was able to achieve a number of its goals. You know, again, unfortunately there was no final evaluation, so the transparency aspect was a bit lacking.
Let’s see, the second case study is improving industrial relations in Mozambique. For this one there was just a final report done in-house, and so, again, transparency issues there. This project was designed to improve Mozambique’s economic development by enhancing the country’s role in the global trading system, and also to support the implementation of internationally recognized labor standards. This project was a model of success; it gave a number of participants their first exposure to the collective bargaining system, and provided technical capacity in the country.
Okay, and then the next one we will look at is the Southern African Veterans Employment Program, or the (inaudible) program. For this one there was a third-party final evaluation. And this project, in particular, was really the best example of a project that was able to use the midterm evaluation to improve the project for the second half. This one also was able to use solid indicators, so there was good data on the employment outcomes of the group who took the employment training course in the first half, and the group that did so in the second half. And we saw a marked improvement in the second half group.
Let’s see. The fourth project was the Nigerian Veterans Employment Program. For this one, again, there was a third-party final evaluation. This was similar to the Southern African Veterans Employment Program, in that it was another labor training program. And this one was also able to use the midterm evaluation, to some extent, to adjust the project goals and activities halfway through. This one, unfortunately, relied more on anecdotal evidence to show that the project was a success. There was not as much hard data from any indicators.
Next we will look at the Tanzania Labor Exchange Center Project. This project made improvements to an existing labor exchange center in Tanzania. It mainly improved training materials, and then also refurbished the actual building itself. Again, this one was a success, but it also relied on more anecdotal evidence in the evaluation, rather than having good indicator data that could provide harder evidence. For this one there was a midterm study tour that was conducted, and that helped the management of the project in the second half.
The sixth project was the Nigerian Declaration Project. This project provided aid for labor law reform, and also planned training activities. But, due to an unexpected resignation by the chief technical advisor, those were never able to be carried out. This one did have a more transparent evaluation process; there was a third-party final evaluation done. Unfortunately, they were not able to really use a midterm evaluation to improve the project, mainly because of those unexpected difficulties.
And then the final project we will look at is the Improving Labor Systems in Southern Africa, or ILSSA, project. This project was implemented in six countries in southern Africa: Botswana, Lesotho, Malawi, Namibia, Swaziland, and Zambia. And the three main objectives were: increasing the knowledge of labor rights in the countries, improving the labor inspection system, and increasing the use of the dispute prevention and resolution systems. This one, again, did achieve some success, but much of the evidence was anecdotal, and there really wasn’t a whole lot of hard evidence.
Okay. So now we will move on to some of the challenges to project improvement. This is one of the major trends that showed up in looking at these project evaluations: six of the seven case studies had serious issues with their performance monitoring plans, or PMPs. And they really weren’t able to be used to provide any sort of hard evidence -- or even semi-hard evidence -- of actual program improvements, which is why a lot of the projects’ evaluations relied on anecdotal evidence and interviews to figure out if the project was successful.
And considering the current administration’s focus on the project impacts, this seems like it’s one area that should be very important in designing a project.
So, kind of the four main problems. Number one was the timeliness of the PMP design. A lot of times we found that PMPs were not created during the project design phase, and afterwards were never really able to be enmeshed in the project.
Number two was poor drafting of indicators. That could be either vague or impractical indicators, those that are difficult to measure concretely. Or, in the ILSSA project in particular, there was one indicator that was looking at the number of labor violations to measure the capacity-building in the labor ministry.
But then of course, as the labor ministry got better and started looking at more business places, they issued more violations, which actually made it look like the project was failing when it was succeeding. So it’s really about thinking through the indicators we’re using.
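[The capacity-adjusted indicator problem described above can be sketched with a small, purely illustrative calculation. The figures below are invented for illustration and are not taken from the ILSSA evaluation.]

```python
# Hypothetical figures illustrating the indicator problem: as ministry
# capacity grows, inspections rise, so raw violation counts rise even
# when the underlying violation *rate* is falling.
year1 = {"inspections": 100, "violations_found": 40}
year2 = {"inspections": 400, "violations_found": 80}

def violation_rate(year):
    """Violations per inspection -- a capacity-adjusted indicator."""
    return year["violations_found"] / year["inspections"]

# Raw counts suggest things got worse (40 -> 80 violations found) ...
assert year2["violations_found"] > year1["violations_found"]

# ... but violations per inspection actually halved (0.40 -> 0.20),
# showing improvement once the indicator accounts for inspection capacity.
assert violation_rate(year2) < violation_rate(year1)
```

[In other words, an indicator like "violations per inspection" would not have penalized the project for the ministry's own growing capacity, which is exactly the trap the raw count fell into.]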
Third was the completion of the PMP, which kind of goes back to the first one, as well. A lot of times there was never really a final, completed version -- and specifically not at the beginning of the project -- so it was never really able to be used throughout the project.
And then the final one is the reporting of the PMP indicator data. We found in a lot of these projects that either the reporting was long-winded and not useful, or just not rigorous enough, or that there was insufficient follow-up. So we didn’t really get the indicator data far enough along to see if these projects were actually improving the livelihoods of the people they were trying to help.
So, as far as our recommendations for improving this, we think that agencies should first focus on the accuracy of indicators and the quality of the reporting. And, secondly, budget an appropriate amount of time for the design of a PMP with effective progress measuring indicators. One of the evaluators specifically mentioned, like, one to three months during the project design phase. And that, I don’t think, was budgeted for any of the projects that we looked at.
Concerning impact evaluations versus process evaluations, we believe agencies should balance and prioritize impact and process evaluations, based on what needs to be learned about the project; and secondly, tailor methodologies accordingly.
Okay. So, now I will move on to some of the challenges to institutional learning. As we mentioned, these projects took place over the course of a decade or so, and they were all in the same geographic region, so it’s kind of a ripe set of projects for seeing what’s being learned from each evaluation, from one project to the next.
And we really found there were a lot of common trends and problems, suggesting that there wasn’t a whole lot being learned from each project. So, number one, as we mentioned, the PMP design was a big problem. And that affected six of the seven case study projects.
Secondly, there were problems with planning appropriate goals and activities. That was experienced by all seven of the projects. And, obviously, that’s one that can’t really be completely wiped out, just because it’s impossible to see into the future and know if the planned projects are going to work out along the time line that you expect. But certainly it’s important to learn from the past mistakes there, and really look more closely at that.
Number three, there were problems with management and staffing. Specifically, we saw a trend of local stakeholders not being involved enough in the project, with an over-reliance on international experts, and other problems in that vein that suggested that past mistakes weren’t really being learned from.
And then, fourth, there were problems achieving or even promoting program sustainability. A lot of times sustainability plans were not drafted from one project to the next. And we also saw a trend of too much reliance on foreign expertise, so that, once the project was over, there weren’t enough local experts there to continue to carry on the project.
And I will turn it over to Lili.
MS. STERN: Okay. I’m going to kind of speed through the next few things, just because we want to get enough time in for discussion.
In terms of the transparency of the evaluative process, I basically went through our core bureau documents. Our child labor office has a set of procedures for working with contractor-led evaluations -- they use an outside contractor to organize evaluations and to get consultants in to do that, and work with them to develop the methodologies and plan the logistics.
I also looked at a recent TOR for an evaluation that was carried out of one of our ILO-funded programs in Mali. And then, for the office that Michael and I work with, the Trade Agreement and Technical Cooperation Office, I looked at their guide to evaluations, which is similar in some ways to the child labor office’s, but also different, and I looked at the TOR for an evaluation that we carried out in 2008 for the ILSSA project that Michael talked about.
And there were a number of issues that just highlighted that transparency clearly wasn’t in the minds of the folks who were writing these documents. Not that they didn’t think it was important, necessarily, just that they had other sorts of lenses through which they were approaching the issue.
And so, it identified for us the opportunity to go in and revisit some of these policies and documents, and to make sure that our evaluations are really a tool that can help us achieve better knowledge transfer and better institutional learning and improvement in our programs.
Some of the core problems, I think again, are in our main evaluation documents -- whether it is the guidelines for how we interact with our evaluation partners, or the guidelines for setting up stakeholder workshops, or for getting input on the TOR, for recruiting evaluators, for reviewing their qualifications. They just lack reference to transparency, period.
And, in terms of each individual project, we sign these very detailed cooperative agreements at the time that we fund the program -- and most of our projects are of about a four-year duration. They also lack any reference to transparency.
And then, our staff, you know, have a lot of things codified in our own performance measures as individuals and also as offices. But within those measures, transparency isn’t really one of the outputs that’s being looked at. So we identified a few opportunities for us to kind of try to build this into the work that we’re doing, so that we don’t lose track of what’s important.
And then, in terms of recommendations -- and this is more external; obviously, there are things that we can do in our own shop, but more broadly -- making process and impact evaluations available within our agency and also to other U.S. Government agencies, and hopefully to other funders; and making summaries of the evaluation findings, if not the whole evaluation report, accessible to a wide range of stakeholders through a wide variety of media.
We know that not everyone can access the Internet, especially in some of the communities where we’re implementing programs, and may not speak English. So how do we make the findings of our evaluations available, even down at the grass roots level?
And then, obviously, making better use of the Web; it’s a very powerful tool. Right now we’re posting summaries of the evaluations that we have carried out, at least on the child labor side, but not the full reports, and those are available upon request.
And for our office at this point, that’s been the policy as well: the reports are there, in most cases. Sometimes you have to dig a little bit to find them, but they are, in theory, available upon request. Well, we would like to be a lot more proactive in putting those out there into the public space.
MR. BRANSON: Okay, and so �'�' just, I think, the main take�'away is that, you know, every agency does things differently dealing with their evaluations. It’s certainly important to use them as tools for improving future projects and really increasing institutional learning, and not just as an exercise that is done at the end of a project and then you move on to the next one without using it again.
So, with that, I think we will move on to some discussion. Lili?
MS. STERN: Yes, and I think that we will try to change this up a little bit. We weren’t sure how many people to expect, and also how much time would be left.
And what I would like to do -- we were going to split into small groups, but since we’re already a relatively small group, and people are bringing different kinds of expertise and perspectives, maybe I could just get a sense. We had identified four kind of areas for discussion. And maybe we could identify one or two that are the most burning, that people really want to talk about, and we could open it up for the whole group to discuss. Does that sound --
(No audible response.)
MS. STERN: So, basically, we talked about how we could try to integrate evaluation planning into program design at an earlier phase. We talked about possibly having a discussion on transitioning to impact-oriented evaluation. I think there have been other sessions on this, maybe. And then, really focusing more on the knowledge transfer piece: what can we do to change or improve our evaluation processes? And then, the last one on increasing the transparency of evaluations and our policies and our processes.
I think the first two being much more internally focused, and the latter two being more externally focused -- since we’re here as an interagency, you know, non-governmental and governmental group, I thought that it might be really helpful to talk about what we can do collectively, what steps maybe some of you are already taking in your agencies or your organizations to help facilitate this kind of knowledge transfer and institutional learning, and then what additional recommendations there are. Any -- yes.
QUESTION: Well, I was going to ask two questions.
MS. STERN: Okay.
QUESTION: One would be specifically related perhaps to trade. I will start with that, and then I have a second question.
Actually, when I came in here, I thought that what you might even be talking about when you’re talking about transparency and knowledge transfer was not just necessarily within the U.S. Government, but also with the partner nations. We were talking about institution building, a lot of (inaudible) some of your programs are doing.
I’m just imagining, but tracking veterans, and do they get jobs, and did they (inaudible), and what was most effective? Was it a media campaign or was it something else? Those are things that, ultimately, in many cases governments should assume those responsibilities themselves, like we want to assist, but (inaudible) goals.
And so, (inaudible) talking about knowledge transfer, I actually thought this session was going to be you talking about transparency and knowledge transfer with the partners, which it doesn’t sound like has been a factor. And so I was just curious about that. That’s question number one.
MS. STERN: Yes, okay.
QUESTION: And whether anyone thinks that’s useful. Because, actually, in an OECD (inaudible) report on security issues, one of the things they have noted is sometimes a lack of capacity in partner nations to actually do a lot of this monitoring and evaluation work themselves.
MS. STERN: Right.
QUESTION: So, whether we’re using these programs also to help increase that capability.
The second is -- one thing, for a separate effort, that we’re looking at and trying to do some research on is, when you’re designing and including monitoring and evaluation in the program design stage -- so, from number one -- whether you look at all or deal at all with this concept of theory of change. Which is to say, do you identify assumptions and say, “Okay, our theory of change is that if we, you know, provide computer skills to veterans, they will be able to compete in the marketplace,” and then maybe you discover no, that was actually an incorrect theory of change. Have you done any work in that area?
MS. STERN: I can, I guess, respond for DoL and from our international labor affairs bureau. I will take the last question first, in terms of theory of change.
I think, looking back -- 1995 is pretty much when the bulk of our technical assistance work started -- I think that we came in with certain assumptions about what the theory of change was. If you do X, Y, and Z, it will lead to -- but there wasn't really evidence that that was the case. And I think we're now at the moment where we're realizing that, and we're trying to build in ways to test various theories of change.
Now, this is also an issue with transparency, because we typically sign a letter of agreement with a government when we fund a new program. And they're expecting that we're going to accomplish, you know, said objectives through whatever means we think work. So, for us to go in saying, "We actually are not sure, and we're doing this, and we're going to also combine or complement it with a rigorous way of testing whether or not it works," you know, of course there is a different reaction. They would like us to already have that information.
So, it's -- you know, there is some tension there. But definitely there is that intent on our part to start with that theory of change, and try to kind of test that in our programs.
MR. BRANSON: Yes. I can touch on the first one, too. I think a lot of that kind of goes back to some of the sustainability measures I touched on briefly, and I think that was kind of a goal of many of the projects, but unfortunately not focused on enough, where, you know, we only have a certain amount of money and time, and then we kind of have to leave.
And so, you know, there really needs to be a serious focus during the project, throughout the whole thing, of, you know, engaging local stakeholders, and using some of that money to really get people there on the ground, to be there after the program ends, to be able to kind of take up some of these things that are successful.
And so, I think there is kind of an initial attempt to do that in all of these projects, but not enough of a focus, and not really a completely successful model for doing that yet.
MS. STERN: And I guess a core piece of each of the evaluations in both offices is that, at the end of the evaluation mission, there is a stakeholder workshop where a range of local stakeholders are invited. So that’s standard practice.
That said, it's usually limited to about 20 or 30 people, and these are usually nationwide programs. How representative that group is of the population -- those who were participating in the project, those who were not -- I think we could debate that. And even in developing the TOR for the evaluation, typically it's sent out for input from a wide range of stakeholders. But that input is requested within a 10-day time frame, and probably not reaching, you know, the people at the different levels that need to be reached.
So, I think that, in terms of knowledge transfer to the local partners, and sharing sort of the findings, and getting their input about what questions should be asked, I think that there is definitely room for improvement there.
And then, in terms of building local capacity to evaluate, with each evaluation the norm is, I think as Michael pointed out -- also with staffing -- that the evaluators tend to be international consultants, assuming that we don't find a suitably qualified national candidate. But they're always paired with a national expert in the subject matter who knows the context, and the team itself is made up of a large contingent of nationals.
So, we're hoping that that helps, but that could also be improved. I think we could be much more intentional about how we're building that capacity over time, instead of just on an ad hoc basis.
QUESTION: Despite the chronologies that you’ve thrown up here, the reality is that most U.S. Government agencies don’t really give a hoot, one way or the other, about evaluations. It’s not been a major factor in their determining what they’re going to do next or how they’re going to do it.
I’m kind of curious whether, in the evaluation process that you have used, you have looked at whether or not there is any greater interest on the part of host country governments and institutions, whether or not this evaluation (inaudible) evaluation is important to them, particularly if it’s going to cost their project money.
And, I mean, you partially touched a little bit on that with reference to the last question. But in respect to the first part of my assertions -- I mean, how much do you see -- you listed some of the things here, agencies, your recommendations. I mean, how many of those are now in place, or are going to be done? How has DoL leadership bought off on that, on the transparency one? And, if that's the case, how long before all this is on the Web?
MS. STERN: Well, some of it is already on the Web. I think they definitely have drunk the Kool-Aid -- I don't know if that's the way to say it -- but I think they are very involved in the Presidential study directive group, the open government initiative, which is their big push: to get information up on the Web about what we're doing and what the results are, and very quickly -- indeed, even in our funding streams -- making information available about what we're intending to fund much earlier in the cycle. There is definitely political support for it.
In terms of our partner governments, I think there is an interest in knowing. I mean, we just recently completed a cluster evaluation for programs that we funded in Indonesia. I think we funded something like 30 projects in Indonesia over the last 20 years.
And I think, you know, the government has a right to say, "You're still funding programs here; have you not accomplished" -- you know, after all of this time, and all of these resources, what have we achieved, and what's left to achieve? I think there is a demand and an appetite there for that. You know, to the extent that they are actually willing to be partners in funding it and carrying it out, that's -- I don't know how high it is on their priority list. But -- yes?
MS. STERN: That's okay. I think we're just -- let's just talk.
QUESTION: Okay. I was wondering (inaudible) you were talking about documents on aid transparency and all that, and I was wondering. Considering U.S. Government agencies that focus on aid versus on investments, how would you look at evaluation differently or similarly?
I don’t know if that makes sense, but in terms of the extent to which the organizations are involved in the actual management (inaudible) should they get involved?
MS. STERN: We've got a lot of experts in the room. I might actually throw that one out to you all. Anybody have thoughts about that? It's a good question. I think that's always -- it's a fine balance of how involved we should be, or need to be.
And some -- we were just talking before, that some agencies are large enough that they can have a separate evaluation unit, and it's treated as an independent unit, and they are able to kind of build in processes and mechanisms that help them to achieve a certain amount of credibility and transparency, just by virtue of doing that. But not all offices or agencies are that well resourced for their evaluation work. So then how they end up doing it -- I'm not sure if I'm getting at what you're saying, but --
MR. BRANSON: Yes, well -- and I think if it's an evaluation of an investment, and how well that's working, then the transparency aspect becomes even more important because, you know, it doesn't really matter necessarily in-house so much as showing others how well those types of investments are working, what sort of effect they're having, so --
PARTICIPANT: But doesn't it also become far more complicated? Because once you start dealing with a private company, and looking at the notion of transparency, then you're also having to deal with, you know, business proprietary information, and how much companies are prepared to sort of share that kind of information.
And I think an institution like OPIC has a real schizophrenia involved, because you're there, in large part, to help U.S. companies invest overseas, but that's partially there for the process of development of those overseas countries. And yet, you know, you've got both the private sector that doesn't necessarily want to be terribly transparent --
MS. STERN: Right.
PARTICIPANT: -- and shareholders that don't want them to be transparent. And yet, you know, is it working or isn't it working, and what part of it isn't working that you're worried about?
Is it the investment that you’re worried about, or is it the transfer of wealth to the recipient country? And that’s an intriguing sort of set of issues to deal with (inaudible).
PARTICIPANT: (Inaudible) with investments there is this other issue around -- that differentiates it, it sounds like, from what Labor is doing. Because with an investment, typically you don't get to choose the (inaudible) dimension of the initiative.
And so now you're not evaluating the success or the progress of the (inaudible) initiative, as much as you are whether or not your support, your financial support, along with whatever other financial support went into that particular investment, had what is sometimes called additionality -- whether the money you brought to that investment added to what other monies there were. And so you're sort of looking at the impact of the money, rather than of the initiative, per se.
MS. STERN: Well, these are complicated issues, and I am imagining that all of you, in your various lines of work, are struggling with some of them the way that we are. But we would welcome an ongoing exchange of ideas.
It’s past 4:00, so I think you’re due back in the plenary. But thank you all for your time, and for coming to our session.
MR. BRANSON: Thank you.