DRL Bureau 2017 Potential Grant Applicants Workshop - Monitoring and Evaluation
MR. GIOVANNI DAZZO: Wow. I didn't even give like a-- All right.
SPEAKER: Giovanni?
MR. DAZZO: Camp, thanks.
SPEAKER: I'll let you turn it over.
MR. DAZZO: Thanks. Hi, everyone. My name is Giovanni Dazzo. I am an evaluation specialist with DRL's Office of Global Programs. As many of you know, M&E is everyone's favorite topic. That's why we leave it for last. I know sarcasm helps for M&E specialists. It makes the job easier. All right. So I've worked with many of you, so hopefully a lot of you in the room are familiar with DRL's flexible approach to monitoring and evaluation.
As a brief introduction, we have a strategic planning and evaluation unit, staffed by me at the moment. I know there was an "oh" in the-- yeah, you feel my pain. We will have more people soon. Now, that said, within our evaluation unit we are available to provide assistance to grantees. So we can advise implementing partners on their monitoring and evaluation plans, on forming measures, whether those are indicators or more qualitative measures, on coming up with data collection strategies and protocols, and also on your evaluation statements of work when you are interested in hiring an external evaluator.
A program officer can connect you with a DRL M&E specialist. So really all you have to do is ask for a bit of assistance once you are an implementing partner. Unfortunately, we can't help you while you are applying for a grant, but once you are in that negotiation phase then it's quite easy to connect and receive assistance. Now, within our flexible approach to M&E, we believe that implementers know best.
Now, what does this mean? Basically it means we let you do whatever you want. OK, people didn't have their coffee. All right. So of course, this doesn't mean you can do whatever you want, because we're a donor. But when we say that, we do actually mean that we believe our implementers know best. That means that if you do want to use alternative measures, we're not going to force you to use 60 indicators within your project. We're not going to force you to use our templates for logic models or M&E plans.
It also doesn't mean that we expect you all to become monitoring and evaluation experts, because most people entered international development and human rights work for the purpose of doing good, not counting good. We do believe that monitoring and evaluation can be done by qualified internal evaluators, as many of your organizations do have internal monitoring and evaluation staff. But we also understand that many program staff, program officers within organizations, have the capacity to conduct evaluation research. And if that's the case, then we're willing to work with you.
Additionally, while we offer templates, it doesn't mean that we require you to use them. We provide these templates as a way for organizations without monitoring and evaluation staff to have them at their disposal, to make proposal writing and program design a bit easier. In keeping with this flexible approach, we believe that M&E plans are living documents. You will often hear that, or read that, within monitoring and evaluation guides, but we truly believe it.
So during program implementation you can propose changes to your monitoring and evaluation plan or your monitoring and evaluation protocols. If you feel that an indicator doesn't work and you want to scrap it, then just tell your program officer. After discussing with them, all you have to do is agree on the changes and upload the revised documents to GrantSolutions or SAMS Domestic, the new system. So it's really that easy to change some of your M&E documents. We also value honest communication.
What do we mean by this? We actually encourage implementers to tell us about the challenges occurring during implementation. Your program officer feels much better when they're well informed, rather than hearing about a challenge after it's occurred or after a threat has happened. They really don't like to be caught off guard, as many of them have a dozen or more projects to look after. So it's really not just about reading success stories. Monitoring and evaluation is not just about indicators and success stories; it's about valuing, judging the merit or worth of your projects and activities.
Now, how do we work with partners? If your proposal is accepted and awarded, then we can help you on program design, with training and coaching on monitoring and evaluation, and during evaluation design and implementation. As I mentioned, once your proposal is awarded I can actually review your M&E plan and your measures to talk about what makes sense for you, or what's even feasible for your organization. We design measures and theories of change with partners. So we have collaborative and interactive sessions with our program officers and their teams to come up with standard protocols for measurement, whether those are quantitative or qualitative.
But we essentially get buy-in and sign-off from our implementing partners, who tell us, this is what we value in terms of measurement, and this is what we would like to change as a group or cohort of implementing partners. Grantee training can happen at stakeholder workshops, which we hold throughout the year, or we also conduct this during site visits, where it's more one-on-one coaching with an organization. And then if you do have an external evaluation or even an internal evaluation planned for your project, just let us know and we can review a statement of work, work on that terms of reference with you, and even review a roster of consultants.
In terms of M&E resources, we have a number of templates and guidance on our website. We have our DRL program monitoring and evaluation guide, which I'll go over a bit on the next few slides. As I mentioned, we have a number of templates that we provide, but again, they're not required; these are sample templates. We have a logic model and the M&E plan. Some of you within your organizations call this a performance monitoring plan, or a PMP. And then performance indicator reference sheets for the infamous F indicators, our Foreign Assistance indicators, which everyone loves.
Now, about the F indicators, I will note that the general guidance in the past was to think that you had to have an F indicator. Of course, this isn't very useful if your project doesn't measure any of those things. So we do have standard language within the M&E guide that says, if none of these F indicators are valid for your project, we're waiving the requirement.
So F indicators are required if applicable. If they're not applicable to your grant, there's really no reason for you to select an indicator that doesn't count anything. Now, the logic model and the M&E plan templates are provided in usable formats: the logic model is in Word format and the M&E plan is in Excel format.
So this means you can download them from our website and begin using them within your proposal design period, or if you have a current project you can actually download these and start redesigning your logic model or M&E plan if you feel that's necessary. And some of our grantees do that during cost amendment periods.
Now, I would like to say some implementing partners have asked me if they should use these templates. Of course, again, I will state we cannot force you to use these templates. And it will not help or hinder your chances of winning an award. Flattery will do you no good.
Now, our guide to program monitoring and evaluation. We've uploaded this on our website, the DRL Resources page. It's about 30 pages, and I will say it's not 30 pages of dense reading. It's a number of checklists, templates, and a few pages of monitoring and evaluation resources that evaluators use. It's not filled with tips from other donors; they're generally resources that evaluation specialists who conduct evaluation research will reference.
Now, the reason we developed this program guide to monitoring and evaluation was for transparency purposes. We felt that it was necessary to provide implementing partners and applicants with an understanding of what DRL expects when it comes to monitoring and evaluation. Of course, we didn't want to reinvent the M&E wheel, because that's been done-- well, it's done every six months with new jargon.
But that said, we just wanted to put together a short document that outlines these priorities and a few guidance notes. The purpose of the M&E guide is to help grantees, implementing partners, and applicants understand our current M&E requirements for project proposals and project design, as well as how DRL program officers actually review an implementing partner's M&E components throughout the life of a grant. It also covers developing your logic models when designing a project, understanding how these models can be used to outline relationships between activities and intended outcomes, and identifying the assumptions of how you think the world works and how you think your project works within that world.
Now, on forming an M&E narrative and plan, we've provided design strategies there which outline how M&E activities should be carried out and the types of indicators that will be used to measure project performance, as well as understanding how evaluation can be used to demonstrate whether intended outcomes occurred and whether project activities were appropriate for the environment, and how findings will be used to redesign projects and learn. And lastly, again, the resources that actual evaluators use, from sources like betterevaluation.org and others related to evaluation for democracy, human rights, and governance projects.
Applicants, of course, are not graded on how complicated their logic models are. Unfortunately, we have seen some 15-page whoppers out there. That said, there are a number of different methods out there to plan a project, whether that is a logic model, log frame, results framework, really whatever you call it. But our expectation of implementing partners and DRL staff is that all individuals value evidence, using evaluative thinking, or critical thinking, for project design. By using evaluative thinking during project design, grantees should value existing internal or external studies or evaluations, project documents, information from baseline studies and needs assessments, and they should identify the assumptions behind the way they think their projects work.
Again, during project implementation, grantees should reflect on the information they gather to make decisions on the way a project is designed, implemented, and evaluated. We stress ethics, of course; we work in the field of human rights. And we strongly encourage applicants to consider whether their M&E systems are using rights-based approaches, applying a gender and equity lens, and including marginalized populations in data collection and analysis. So applicants should consider whether evaluation design, data collection, analysis, reporting, and learning are conducted in ethical and responsible ways with all project participants. You should question whether you're including your direct beneficiaries or subgrantees within the design of your project, within the design of your data protocols, and within the testing of those data protocols.
And while DRL and many partner organizations may not have institutional review boards or IRBs like universities, we do feel that organizations should still make adequate provisions to protect the privacy of respondents when collecting data from these individuals. So for instance, when collecting data from participants, consider whether your organization and your M&E staff have the necessary informed consent forms, confidentiality agreements, and data security protocols.
Now, everyone's favorite topic: logic models. We believe that M&E should be accessible to all program staff in your organizations, and it shouldn't be necessary for you to contract out your M&E activities to external consultants just to think through how your project works. In the next few slides we'll briefly review what DRL has released as logic model templates. But before that, in terms of program design strategies, we do recommend within our M&E guide that implementing partners and applicants construct the logic model before the proposal narrative. Why? This is not to say that if you haven't developed a logic model first, your project is going to be a huge failure. But sometimes it's very apparent that the logic model was constructed as an afterthought, and it doesn't actually match what's in your proposal narrative. That is a mark against your proposal. So I would say use the logic model not just as a monitoring and evaluation template but as a planning template. It is helpful, whether you use a logic model or a systems map or whatever you would like, to actually think through and construct the logic behind your project.
Now, we also ask you to question whether your project will work based on the way it's designed. In evaluation terminology, this usually refers to assumptions. Focusing on the logic model at the start really helps project staff to question the assumptions that they have, specifically whether this project will actually work. But then we also admit that the world is a complex place. You can't actually capture everything in a two-page logic model. So, that said, please reference those external factors. We do operate in highly fluid, complicated, and complex environments, and we can't control everything.
That said, if you are looking through those external factors and including them within your logic model, it should also help you develop the risk management plan for your proposal. So again, as a tip, as a program design strategy, please ensure that there is a link between your logic model, your proposal narrative, and those external factors and assumptions, which should be included within your risk management plan.
Again, we welcome alternative design approaches. So whether you want to start with a logic model or a theory of change or a systems map or outcome map, we've included a number of resources and links in our DRL M&E guide, so you can explore some of these alternative methodologies. For those of you that are familiar with systems thinking or complexity science, you can use alternative methods like outcome mapping, systems mapping, or rich pictures, which are highly relevant for complex environments, but they can also be used in a participatory way with your program staff, with your direct beneficiaries or your subgrantees in validating the assumptions and the logic or rationale behind your project model.
So again, the most important part for us is not assessing whether you used our template. It's about being able to determine how you've outlined the underlying assumptions and rationale of your project, and whether those are actually highlighted within your project proposal narrative. So it's not meant to say that you've predicted every little external factor that could balloon into some unforeseeable problem in the field, but we do need to see that you've thought through the program design process in a responsible and ethical way.
So, as I mentioned, we have two versions of our logic model. If you really like M&E jargon, we've provided you with that template. If you really hate it, we have a plain English version. Those are both on our website, in Word format, in one document, and you can use either of them. If your program officer doesn't agree with the plain English format, you can just show them the M&E guide and say, hey, I read it. Did you?
That said, I'll show you the examples on the next couple of slides. But really, these are the same two templates, just in different formats. And we felt that, if our expectation is to value evidence and evaluative thinking, it was necessary to take some of that jargon out for the benefit of implementing partners.
So here is the standard format. You can see it has this succession of logic that we generally see, from inputs to impact. It has a number of guiding questions, and it does have examples as well. We do also ask for assumptions and external factors. We went through a number of templates that implementing partners already used and came up with a hybrid approach here. So if your organization already uses something similar, then it's really not necessary to use this, and it won't be much of a change or anything new for your organization.
I would like to mention, as a program design strategy, something that we often see within proposals: confusion between needs and solutions. We would like to see actual needs within your logic model, so please make sure that you differentiate between the two. Your solutions should be in the activities column, whereas the needs should identify a gap in the environment. Within logic models we'll often see, as an example, "participants need training." Unfortunately, that's a solution, because you've already identified what you'll be doing, which, again, should be in the activities column. When we see this, we question whether the applicant actually assessed the needs of participants within that environment, or whether they've simply put in a need that they know they can satisfy, like training. So if an organization is well versed in developing training curricula, you generally see that the need and the solution will both be training.
So we do want to see that the need is "participants need access to information," or "participants may not have awareness of their human rights." And then we would like to see the rationale, thinking through the logic model, of how you arrived at the solution that matches that need. Because the solution is not always training, of course. It could be a simple translation of documents. It could be a public awareness campaign. So we do want to see activities match needs.
Now this is the plain English version. And so you can see there's no reference to inputs or outcomes. So a lot of you will be relieved there. And we go over this activity called M&E in plain English with a lot of our implementing partners as an interactive session. After I do this, most of them hate me. They usually don't invite me to their organizations after, so it's a win-win, isn't it?
But you can see, here we have guiding questions again. Rather than talking about inputs, we're asking, what do you need to do your job? That's generally what an input is: what do you need to do your job, or to conduct your project? We're really looking for more than just staff, money from DRL, and a training location. Please think through this, because again, there should be that logical succession from inputs to outcomes and impacts.
Now, M&E plans. In terms of program design strategies here, program officers are asking if the data is telling you anything useful. What we generally see is M&E plans, again, that are five pages with 60 indicators, and our program officers are just shaking their heads saying, I'm not going to read it. So if the data isn't telling you anything useful, we really do take a use-it-or-lose-it approach. If you're not going to use the data, then just scrap it. If an indicator is simply measuring one thing, is it really necessary to take up an entire line on your M&E plan? And if it's generally not useful for you, it's not useful for us.
So once you start implementing a project, you'll be responsible for submitting quarterly reports to DRL. And DRL staff use these reports to learn from you, including whether a project is working the way it should be working. And if your indicators or your qualitative measures and your project narrative aren't really telling us anything useful about processes, then you may get some questions, of course, from your program officer. So we do stress quality over quantity. More is not always better. You don't get extra credit for more indicators. So please look through your indicator lists and question whether the data is useful to you and whether it's useful to DRL.
And we're completely fine if you have just a few indicators. So we do stress this mentality of thinking beyond indicators. Evaluation is more than indicator usage. Of course monitoring is rooted in business processes, whereas evaluation is rooted in the social sciences. And as long as the measures and the indicators and the narratives and the case studies that you're using are useful to you, then they are, again, useful to us.
And if you have concocted this very complex M&E plan during your proposal, consider revising it with your program officer. Again, we do stress that M&E plans can change, and really all you have to do is let us know.
Also, one more program design strategy in terms of monitoring and evaluation plans-- they can go beyond monitoring. So think about evaluation. Think about putting evaluation questions and qualitative measures within your monitoring and evaluation plan as placeholders, because then it gives us an idea within the proposal design review period that you have considered things beyond just your usual routine monitoring of quantitative statistics.
Now, this is our monitoring and evaluation plan template. Again, it has a number of guiding questions in each row. Really, what we're looking for here is the what, when, why, and how of data collection, and this should form the basis, the foundation, of your data collection processes. Again, you can go beyond just routine monitoring and include evaluation questions. This is provided on our website in Excel format, so you can easily download it and start using it during your proposal design period.
For implementing partners, we also provide this template in Excel format, so they can turn their monitoring and evaluation plan into a performance indicator tracking table, or a PIT. The name is unfortunate, and it often ends up being a pit of data that no one looks at, especially if you do have 60 indicators. This is in the DRL M&E guide. Unfortunately, it hasn't been put on the website yet in Excel format, but if you are an implementing partner and you would like it, then we can easily send a copy to you.
Really, all this does is take the indicators and the activity information from the monitoring and evaluation plan on the previous slide, input it into the first two columns, and then you have a tracking table for your indicators. This should be annexed at the end of your quarterly reports each quarter.
If you do annex this at the end of your quarterly reports, you do get extra credit. We have candy at the end of each year. All you have to do is go through security and then we'll direct you to Pat Davis's office. I'm not kidding.
But if you don't include it at the end of your quarterly reports, you will receive that dreaded data call at the end of the year from your program officer asking for a full year's worth of data in October and November. I know, grantee horror stories. I used to be there.
So, our monitoring and evaluation checklists. We provide a number of checklists and templates; we went for the strategy of death by checklists in our M&E guide. But we do provide you with information on how and when data should be collected, what DRL program officers are looking for, and how to adequately budget for monitoring and evaluation activities. You should really be able to summarize the following within the narrative section of your proposal: how and when you'll collect, analyze, and report on data, so M&E activities; who will be responsible for each of these steps, so M&E responsibilities; how the applicant will safely store collected data, because we do stress ethics and data security; and how you'll evaluate the project, so a simple evaluation plan. We are also looking for how and when you're going to use that data to improve or adapt strategies or redesign activities.
Oh, it's Pat. All right.
Now, external versus internal evaluation. Projects that are over 24 months are encouraged to include external evaluation, or to specify how internal evaluation will meet the needs for accountability and learning. Because many DRL projects are shorter than two years, what would be considered a traditional external evaluation design, which includes your baseline, midterm, and final evaluation, may not be feasible due to your budget and just the timing. So we do encourage you to think about internal evaluation and how you're going to learn from monitoring. But for projects that are over 24 months, we do also encourage you to think about external evaluation.
Since DRL projects may be cost amended after the initial project period, then that is also a good time to start thinking about whether you should be budgeting and planning for external or internal evaluation. Because your two year project or your 18 month project may turn into a three or four or five year project.
Now, budgeting for M&E-- since monitoring and evaluation is a grant requirement, we do encourage applicants to think through the M&E system described in your proposal and budget accordingly. That is the cost of the M&E system and associated M&E activities, like field work and data collection. That should all be included within your project budget, either as contractual line items or as staff and personnel line items. But we don't mandate a certain percentage of project budgets. We have looked at previous proposals and the budgets of current projects, and generally most applicants are budgeting between 3% and 10% for monitoring and evaluation. 10%, of course, is on the higher end, but we've seen that sometimes 3% is more than sufficient, whereas in very complex environments 10% may not be enough because of security issues. So really, when you're budgeting for monitoring and evaluation, you should be looking at whether that budget provides you with reliable and rigorous evaluation data.
So please just assess your data needs. If your project is awarded and you do want to reconsider your monitoring and evaluation budget, that is something that we can consult with you on.
OK, so on this next slide is an example of one of our checklists. You'll see that through the checklist we're looking over data collection, data analysis, reporting, and the timing of data collection. These checklists are provided within the monitoring and evaluation guide on our DRL Resources page, because we wanted to provide a sense of transparency as to what DRL program officers are reviewing when it comes to your proposal's monitoring and evaluation components. So you'll find a series of these checklists in the M&E guide, both for constructing monitoring and evaluation components like your logic model, your M&E plan, and your evaluation plan, and for quarterly and final reporting requirements.
We do ask for a two-page monitoring and evaluation narrative within your proposal. So within the M&E narrative, in line with your M&E plan, we are looking for an overview of your evaluation methodology and the methods that you'll use, and the roles and responsibilities, especially if there is not an M&E specialist at your organization. Or you may have only one M&E specialist, and we of course wouldn't expect that person to look over 20 grants. Generally, in a lot of implementing partner organizations, program officers will be responsible for monitoring, and evaluators will look over evaluation documents or even conduct evaluations. We're also looking at data security and ethics: how will data be safely stored, transmitted, and handled? And are you thinking through evaluation and learning?
One thing that we do see, just as a program design tip, is the use of the phrase, our organization has a robust M&E system. Sometimes it does.
I got a laugh on that one, all right. So simply because you write that you have a robust M&E system does not mean that you can forgo the two-page request for an M&E narrative. We will ask, well, what does that robust M&E system look like? And then you will be asked to provide an example of it. So really, if you do have this robust M&E system, that's great; we applaud you for that. But we would like to know why you consider it to be so robust. So please tell us who is responsible for collecting, analyzing, and reporting on this data. Tell us about your survey protocols and your applied research or evaluation approaches and methodologies. And while you have only two pages to talk about this very robust M&E system, you can always annex additional pages at the end of your proposal.
Again, I will stress think beyond indicators. We welcome alternative approaches to monitoring and evaluation, just like we recommend or encourage alternative approaches to design. In design we're talking about outcome maps, system maps, rich pictures. But for monitoring and evaluation, we encourage alternative methods like most significant change or case studies. So if you are familiar with some of those, then please, we encourage you to use those. You don't always have to have 60 indicators, again. But if you do need help coming up with more feasible alternative strategies for your organization, then we're more than willing to talk about those approaches with you after your project is awarded.
Again, for external versus internal evaluation, if your project is on the shorter side, do consider what data you need to understand whether your project is working, and what type of data that you can learn from. But that said, even 18 month projects have had very adequate monitoring systems and even internal evaluation systems where project staff have learned a number of methods that they've used to monitor their own projects. Again, if you have current projects, consider adding and budgeting for evaluation when you redesign your project, and discuss these options during communications with your program officer.
And last, we do have, as I mentioned, a number of reporting checklists within the M&E guide, including a quarterly report checklist as well as a final report checklist. Following the launch of your project, you should be using your scope of work, so those objectives and activities that you've agreed upon with your program officer and your grants officer, along with the M&E plan within your proposal, as a guide for the type of data you should be collecting during implementation and reporting. This data should contribute to the measurement of the proposed indicators and qualitative measures in the M&E plan. And your quarterly narrative report should provide an analysis of project progress by objective. So within your quarterly report, each project objective can serve as a section header, and you can start analyzing any activities that you've conducted and the outcomes that followed.
Now takeaway points-- we do encourage you to focus on utility. Use what's useful for you. Generally what's useful for you is useful for us in terms of data. Please use evaluative thinking. And then plan ahead. Once you are an implementing partner and you have a project with DRL, there are DRL M&E specialists that are there to help. So great.
Now, that's it for my presentation. I finished just on time; that is a first. All right, does anyone have any questions? Yes? Sure. I thought I was really comprehensive, but I guess not.
AUDIENCE: Thanks a lot. This is actually really helpful. I am Tom [INAUDIBLE] with the University of Notre Dame. How much do you guys do or encourage impact evaluations? And I say that only because I know that budgets aren't big and so they might not be feasible, but especially for first rounds, but even just spending a little bit of money to identify a counterfactual during a first phase so that you might be able to consider an impact evaluation, maybe either during follow-on phases or maybe even retrospectively. Are you guys getting into that at all or--
MR. DAZZO: We are. So we have two forms of evaluation budgeting at DRL. We have DRL-commissioned evaluations, where we generally do ex post evaluations, so evaluations conducted after projects have ended. And we'll collaborate with our implementing partners on those to ensure that it's actually feasible to conduct an ex post evaluation. We also encourage our implementing partners to think about rigorous designs, whether that rigorous design is qualitative or quantitative in nature. If we're talking about impact designs like randomized controlled trials and the establishment of more quantitative counterfactuals, then it is difficult, as you mentioned, on the budgeting side. But we do have cases where we amend budgets if an implementing partner has provided a feasible strategy to conduct a randomized controlled trial or a quasi-experiment. So we can always amend project budgets to include those types of counterfactual designs. We do also encourage qualitative or nonexperimental counterfactual designs.
Who's confused now?
That's a great question. Thank you.