U.S. ADVISORY COMMISSION ON PUBLIC DIPLOMACY
MINUTES AND TRANSCRIPT FROM THE QUARTERLY PUBLIC MEETING ON DATA DRIVEN PUBLIC DIPLOMACY, SIX YEARS LATER, BASED ON A REVIEW OF THE 2014 REPORT, “DATA-DRIVEN PUBLIC DIPLOMACY: PROGRESS TOWARDS MEASURING THE IMPACT OF PUBLIC DIPLOMACY AND INTERNATIONAL BROADCASTING ACTIVITIES”
U.S. Advisory Commission on Public Diplomacy Quarterly Meeting
Tuesday, June 23, 2020 | 12:00 p.m. – 1:30 p.m.
Virtual Public Meeting via Videoconference
COMMISSION MEMBERS PRESENT:
TH Sim Farar, Chair
TH William Hybl, Vice-Chair
COMMISSION STAFF MEMBERS PRESENT:
Dr. Vivian S. Walker, Executive Director
Mr. Shawn Baxter, Senior Advisor
Ms. Kristy Zamary, Program Assistant
MINUTES:
The U.S. Advisory Commission on Public Diplomacy (ACPD) met in an open virtual session from 12:00 p.m. to 1:30 p.m. on Tuesday, June 23, 2020, to discuss “Data Driven Public Diplomacy, Six Years Later,” based on a review of the Commission’s 2014 report, Data-driven Public Diplomacy: Progress Towards Measuring the Impact of Public Diplomacy and International Broadcasting Activities. A panel of public diplomacy evaluation experts from the Department of State provided updates on the practice of using data to formulate public diplomacy programming. Speakers included Amelia Arsenault, Senior Advisor and Evaluation Team Lead in the Research and Evaluation Unit in the Office of Policy, Planning and Resources (R/PPR) within the Office of the Under Secretary for Public Diplomacy and Public Affairs (R); Luke Peterson, Deputy Assistant Secretary in the Bureau of Global Public Affairs (GPA); and Natalie Donahue, Chief of Evaluation in the Bureau of Educational and Cultural Affairs (ECA). ACPD Executive Director Vivian Walker opened the session, and Chairman Sim Farar provided introductory remarks. Senior Advisor Shawn Baxter moderated the Q&A, and Vice-Chairman Bill Hybl closed the meeting. The speakers took questions from the Commissioners and the online audience, as detailed in the transcript below. The speakers also provided written responses to several additional questions that could not be addressed within the panel’s time frame.
AUDIENCE:
More than 165 participants joined the ACPD’s first virtual public meeting, including:
- PD practitioners and PD leadership from the Department of State, USAGM, and other agencies;
- Members of the foreign affairs and PD think tank communities;
- Academics in communications, foreign affairs, and other fields;
- Congressional staff members;
- Retired USIA and State PD officers; and
- Members of the general public.
Vivian Walker: Hello everyone. My name is Vivian Walker. I’m the Executive Director of the U.S. Advisory Commission on Public Diplomacy. Along with Commission Chair Sim Farar and Vice Chair Bill Hybl, it is my great pleasure to welcome you to the Commission’s first-ever virtual public meeting. A few years ago, the Commission published a report that proved to be groundbreaking with respect to the use of assessment tools to understand how well our public diplomacy programs were doing in the field.
This report, Data-driven Public Diplomacy, ended up being a go-to resource for practitioners, for policymakers, and of course for scholars, all of whom were very interested in improving the impact of public diplomacy programs. Today we’re lucky enough to be joined by three experts in the field of public diplomacy from the Department of State who are going to walk us through that report, look at some of the changes and developments that have been made in research and evaluation since then, and point us in the direction of some future trends – both challenges and opportunities – in the field of research and evaluation.
Before we begin, let me take you through the agenda for today’s event. Following introductory remarks by ACPD Chairman Sim Farar, I’ll provide a brief overview of the original report to provide some context for today’s discussion. This might be particularly useful for those of you who haven’t yet had an opportunity to read the report, which is available on the ACPD website.
Then, we’ll go straight to our experts, who will be presenting consecutively. While they are speaking, we urge you to submit your questions via Slido using the event code #ACPD. You should have received instructions in the most recent email about this event. ACPD Senior Advisor Shawn Baxter is standing by to receive your questions, and he will then share them with our panelists for what we anticipate will be a robust discussion. Following the Q&A, ACPD Vice Chairman Bill Hybl will provide concluding observations. It’s now my very great pleasure to introduce our Commission Chairman Sim Farar, who is going to give us some introductory remarks. Sim, over to you.
Sim Farar: Thank you, Vivian, and thank you to all of you who have joined us today from the United States and around the world. We are so pleased to welcome you to this growing community of interest around public diplomacy. With me today is my distinguished colleague and friend, Vice Chairman of the ACPD Bill Hybl, from Colorado Springs, Colorado.
The ACPD is a bipartisan panel created by Congress in 1948 to appraise U.S. government activities intended to understand, inform, and influence foreign publics and to increase the understanding of and support for these same activities. The Commission assesses the effectiveness of USG PD activities across the globe and recommends changes when needed. For more than 70 years the Commission has represented the public interest by advising the U.S. government’s global information, media, cultural, and educational exchange programs.
The Commission is mandated by law to report its findings and recommendations to the President of the United States, Congress, the Secretary of State, and, of course, the American public. In addition to our flagship annual report, the ACPD produces a series of focus studies on issues of importance to the practice of public diplomacy. As Vivian noted, today we’re here to follow up on one of them. The 2014 report on data-driven public diplomacy produced nine innovative recommendations, many of which have already revolutionized public diplomacy impact assessments within the Department of State. But, we’re not here to rest on our laurels. We can always do better to ensure that our information and our outreach programs are having the desired effects. That’s why we convened this group of experts and distinguished practitioners, who are on the front lines of research, analytics, and evaluation at the Department of State, to take a look at future trends and challenges for public diplomacy research, monitoring, and evaluation practices. Thank you very much. And now, back to Vivian.
Vivian Walker: Thank you, Sim, for those remarks. As I mentioned, I’ll say a few words about the report itself, its origins, and where I think we’d like to go with the event today.
The Data Driven Public Diplomacy report grew out of the realization that the increasingly complex information environment in which we find ourselves requires a paradigm shift in thinking about public diplomacy program evaluation and impact assessment. This rapidly changing information space has created enormous challenges to our ability to understand and assess our key audiences. Joseph Nye, more than a decade and a half ago, noted that in this era of unlimited and unfiltered access to information, we experience – paradoxically – a debilitating scarcity of attention. Because our audiences have so many data points to choose from, it is very difficult, particularly for government public diplomacy practitioners, to identify high value audiences and to assure that public diplomacy messaging, program, and outreach activities are having the desired impact.
Moreover, in 2014, when the report was published, public diplomacy practitioners faced a dizzying array of outreach and program options. Along with that whole new kit of shiny tools came intensive competition for resources to develop and implement them. And finally, there was the recognition that these tools needed to be both cost effective and high impact, relative to the investments that could be made in terms of resources, personnel, and infrastructure.
As Sim mentioned, there were nine recommendations that came out of this report. I’d like to focus on three of them in the hope that our panelists might take them up in their discussions later. First, not surprisingly, there was a call for more funding, more personnel, and more training and development, as well as infrastructure expansion, to be able to conduct targeted evaluations using cutting-edge research tools and techniques. Next, there was a call to move away, if possible, from what is frankly a fairly risk averse culture at the Department of State, and a reminder that we can learn from our errors, our setbacks, and our missteps. And finally, the report issued a call to avoid the bureaucratic stove-piping and compartmentalization that often happens between public diplomacy bureaus and offices within the Department of State. Cooperation ensures a unified approach, to the degree possible, as well as genuine information sharing, so that ultimately, the programs produced have greater impact and relevance for key audiences.
Our panelists today will focus on three broad areas. They are going to examine how research practices for public diplomacy programming have evolved in the six years since the report. They will talk about some successes, and some failures, in the implementation of reforms recommended by the report. By the way, by failures, we mean – what did we learn, and what should we change? What can we do better?
And finally, our panelists will tell us about some of the next big trends, challenges, and opportunities for public diplomacy research, monitoring, and evaluation practices. We here at the ACPD believe that research, analytics, and evaluation are not just about fine-tuning programs that have already taken place, but about improving programming going forward. And this requires some really hard work with respect to making difficult decisions between multiple effective ongoing programs. If we have to prioritize, how do we do that? How can we maximize efficiencies and the distribution of scarce resources in an era in which support for our research and different initiatives may shrink? And finally, how can we assure the continual assessment of program relevance in light of emerging U.S. policy priorities?
In short, research is not about the past. It is truly about the future, and today we are fortunate to have a group of experts with us who can explain to us why that’s so important. You have had an opportunity, I hope, to check out their bios, so I will just briefly introduce them: Amelia Arsenault is Senior Advisor for Research and Evaluation in the Office of the Under Secretary for Public Diplomacy and Public Affairs, otherwise known as R. Amelia, by the way, is one of the original contributors to the 2014 report, so we are truly fortunate to have her experience and perspective on our panel today. She is joined by Luke Peterson, the Deputy Assistant Secretary for Research and Analytics in the Bureau of Global Public Affairs, and Natalie Donahue, Chief of Evaluation in the Bureau of Educational and Cultural Affairs. It’s my great pleasure to turn this event over to the stars, our panelists, beginning with you, Amelia. Thanks so much.
Amelia Arsenault: Thank you, Vivian, just one clarification. I am a Senior Advisor for the PD Research and Evaluation Unit, which is in the Office of Policy, Planning and Resources for the Under Secretary. I’m also a contractor. Everything I say today is speaking in my personal capacity, not officially representing the views of the U.S. Department of State. I just wanted to make sure that that was clear from the beginning.
As Vivian said, it’s really great to be here six years later because I was there at the beginning as we were thinking through what kind of questions to ask and how to design the report structure. And I have to say that now, after having joined the PD Research and Evaluation Unit in 2017 and leaving my formal academic position to work at State, I feel like I’ve gained some important perspective – perspective that I wish I’d had when I participated in the report research and writing process in 2014. It’s really given me a newfound respect for the budgetary and institutional constraints in conducting research and evaluation of PD. And that is not so much a constraint of the State Department, but just the budgetary cycle of the U.S. government in general. Planning programs on a yearly cycle has its own constraints. I also have a newfound respect for the wide range of people inside and outside of State that I think have made real strides in advancing research and evaluation despite these institutional constraints. I also want to thank the ACPD for hosting this event and for taking the lead on fostering this cross-fertilization among academics, government officials, and other PD stakeholders. These sorts of discussions are incredibly important, as we all work to strengthen our U.S. PD efforts.
There are two major things that I want to focus on today. The first is looking back at the set of recommendations and then thinking through the recommendations themselves and where we’ve come. The majority of the specific recommendations for the R/PPR evaluation unit were focused on reports and studies conducted between 2009 and 2011, and I have to say, we’ve come a long way since then, not the least of which was that R/PPR launched a reimagined PD Research and Evaluation Unit (REU), which is focused on rigorous and expansive monitoring and evaluation activities.
Three of the major recommendations from the report were, first, to create a director of research and evaluation position and expand the evaluation unit in R/PPR. The second was to support evaluation staff with more expertise, both in-house and through partnerships with people outside the State Department. And the third was to increase funding for research and evaluation.
The fact that I’m speaking to you today, as somebody who works with the Research and Evaluation Unit, is a testament to those three recommendations and the progress that we’ve made. The REU in its new and reimagined form is roughly five years old; it was formally relaunched in 2015. We have a Director of Research and Evaluation, Paul Kruchoski. Our team has grown in size, in scope, and in how we support the field, and I’ll explain the reasons for that and a few things that we experienced that we didn’t necessarily anticipate a few years ago. The REU staff is now roughly 40 contractors, civil servants, and Foreign Service Officers (FSOs) who work in close coordination with other offices and R/PPR to provide support in audience research, monitoring and evaluation, and, importantly, training in those topics for the field, as well as the development of a new technological infrastructure that has quite exciting implications for data-driven public diplomacy.
Our budget and the support we have received from senior leadership and the PD cone have also grown year on year. That leads me to the second area, and I’ll come back to a few specifics. I’m also happy to answer more questions in the Q&A. One challenge that is incredibly important, and that has involved a big learning curve for the REU, has been the recommendation to establish guidance and training on research and evaluation. We’re still working on rolling out this sort of training to the field and working in close concert with partners at the Foreign Service Institute (FSI), who are doing really important work, and we’re helping where we can.
To contextualize this, there was a push right off the bat for the reimagined REU to focus on rigorous and ambitious forms of evaluation – to prioritize things like randomized controlled trials and other methodologies that would really provide actionable insights. And while we think that these efforts are critical and we found them to be important, what really came to the fore as we started rolling out these sorts of studies was that we needed to consider all the ways in which the field needs some more support to use and implement the learning from these evaluations. The old [Edward R.] Murrow adage that PD needs to be present at the takeoff, as well as the landing, should also be the M&E adage. M&E really needs to be embedded throughout the process, and we need to not only think about the sort of reports that we’re going to produce about the field, but how we can support the field throughout the whole process. We also want the field to think about M&E throughout the entire process.
We really try to create support structures to encourage this sort of embedded thinking, and we talk a lot in the REU about embracing a “cycle of learning” model that integrates M&E throughout the cycle. In this process – and people might get bored with it, we talk about it so much as a team – the aim is to encourage all PD practitioners to engage with research. The process begins with 1) thinking about and clearly articulating foreign policy priorities or long-term outcomes, which was a particular point the original ACPD report said R/PPR needed to make progress on, and then 2) learning more about the stakeholders and the audiences that are influential in, and affected by, those outcomes. PD is, as the ACPD and others have highlighted, most effective as a conversation and mutual engagement. This really starts with learning more about the people you’re trying to reach before you start designing programs and campaigns to reach them. And that leads to the third part of the process.
Once you’ve done those first two things, then 3) think about designing programs and campaigns, using research and data to do so more thoughtfully. Then, 4) carefully monitor progress throughout the program or campaign implementation cycle to make sure that it’s effective. Then, 5) make sure that evaluations, where appropriate – and I’ll come back to that – are clearly tracking and informing the intended outcomes that were identified in the first step.
So, as we work to develop curriculum on monitoring and evaluation, data literacy, audience research, and strategic planning, [it’s important to recognize that] PD practitioners in the field have a wide range of responsibilities, and time is always an issue. Part of what we’re trying to accomplish is not to turn all PD officers into people who conduct research or M&E entirely on their own, but to make them more critical consumers of that sort of data who know what to ask. There are many people who are incredibly good at this in the field, but there are certain levels of familiarity and comfort with the terminology that we want to make sure are a baseline for all people involved. This also includes drilling down to the importance of identifying outcomes and objectives, so we’re maximizing responsible use of research.
This leads to what Vivian mentioned earlier, which is an important recommendation from the 2014 report: to support a risk-taking culture that allows for public diplomacy setbacks. I want to say that all organizations and individuals want an “A” – that isn’t going to go away. But, particularly when you’re talking about individual PD initiatives and programs, we can’t forget about the countless hours of work that go into them. Learning that what you desired, what you wanted to achieve, wasn’t necessarily there is always going to be a tough blow. There’s still work to be done here, but there is a growing desire in the field, particularly among the people I interact with, to assess whether programs that have been conducted for long periods of time are actually achieving their intended objectives.
There really is progress that’s been made in not assuming that you’re getting it right the first time. The idea that PD programs and campaigns should always, ideally, be an iterative process is flourishing, and there is always room to grow. There is always a process of making incremental changes, and that’s important. This just leads me to the last area: integration of data into strategy and program development.
Time and resources, as I mentioned before, definitely are not infinite, especially when you’re thinking through how you can conduct research and provide actionable data for 189 missions in countries around the world with particular cultural contexts and in many, many languages. It isn’t feasible or advisable to conduct evaluations on every PD program out there. The average PD program is roughly $2,000, so that wouldn’t make sense. But what is feasible and advisable, given recent technological gains, is a more uniform and systematic monitoring of programs. And that leads me to a final area which is very exciting.
R/PPR is working in concert with other partners to develop a suite of digital tools that will be accessible to all PD practitioners and will include strategic planning capabilities, tools for learning about stakeholders, and the capability to monitor how programs are doing on a systematic basis. The goal here is really to give PD practitioners more tools to refine existing programs and campaigns, to learn more about the people they are trying to engage with, and to learn from the successes and failures of previous programs in order to guide reform of existing programs and development of new ones, as Vivian mentioned. Obviously, we can’t deploy these sorts of changes overnight. The size and scope of State’s global footprint is immense, but we’ve come a long way.
The full global deployment of PD Tools is an important next step to help PD practitioners where they work and to develop audience research and M&E products that are actionable, realistic, and tailored to the environment in which they work. This will continue to be a process because there is no “right” set of methodologies or “right” set of activities in which we should be engaging. The global environment is constantly shifting, and cultures and context always need to be considered. Culture and society are in a state of flux. It’s always going to be a cycle of learning; there’s never going to be one way to go. We are continuing to expand and improve training and to refine more systematic ways to share our findings with the field in a user-friendly manner. Those are our immediate concerns, and they will probably still be concerns in five years, but we’ve made progress.
Vivian, should I turn it directly over to Luke? I have concluded my introductory remarks.
Vivian Walker: Yes, that sounds good. Thanks so much, Amelia. Luke, you are up.
Luke Peterson: Thanks, Dr. Walker, and thank you, Commissioners, for the opportunity to present today. The ACPD’s 2014 report observed, “It is an opportune time to rethink how actionable research is created and impact evaluation is conducted to strengthen the public diplomacy toolkit in order to ensure its maximum utility for public and traditional diplomacy leadership.”
In the last six years, we’ve answered this call for reform in the International Information Programs (IIP) and Public Affairs (PA) bureaus, and now in the Bureau of Global Public Affairs (GPA), into which those two legacy bureaus merged. By applying research and analysis to the planning, in-cycle optimization, and evaluation of GPA’s fast-moving and crisis-paced global communications initiatives, we have strengthened the PD toolkit and started to get ahead of the curve. The actionable data-backed insights we deliver in moments of crisis buy our leaders valuable time and reduce the “gut decisions” this Commission and the Government Accountability Office (GAO) have raised as problematic. As cause and consequence of that success, our resourcing has seen a tenfold increase since 2014. But challenges remain: we have not accomplished all the recommendations this Commission’s panel of experts outlined in the report.
The Commission is familiar with the GPA vision: our new bureau delivers world-class content and research capabilities to resource-starved PA while delivering seventh floor prominence to the public diplomacy campaigns coordinated by IIP. GPA’s official goal is to advance U.S. foreign policy through effective and consistent communications with global audiences. To achieve this, all of GPA’s components must work together: Analytics and research; Content creation and production; Digital media platforms and channels; and the Media teams that shape the global narrative and coordinate with the Secretary’s office on travel and events. We collectively focus on the Secretary’s policy priorities and promote American values. Currently, these priorities are China, Iran, Venezuela, illegal immigration, and the founding American value of religious freedom. GPA must do all that and also be agile enough to quickly pivot to new issues like COVID-19. The Research and Analytics offices support GPA’s mission by delivering actionable data-driven insights through robust audience research and analysis of the information environment. Thus, our work enables GPA and the Department as a whole to deliver the right message at the right time on the right platform to reach the right audience.
In the last six years since the report was issued, our units have experienced significant changes in terms of resourcing, speed, and scope. We’ve grown the team from half a dozen to nearly 50, and our budget has grown to over 10% of the bureau total, which is double the Commission’s recommendation. With these new resources, we’ve kicked up the cadence of our work to deliver research and analytics insights at the speed of media in 2020: the pace at which the Secretary and the Spokesperson require insights to shape their decisions. We have expanded our research and analysis to encompass all communication mediums, not just social media. We have grown our social and digital analysis capacity to assess the media presence of Department messengers in near real-time. This delivers a level of self-awareness and message control that diminishes “spray-and-pray” tactics for press teams, both in Washington and at our missions overseas.
The 2014 report authors – and Amelia mentioned this – characterized PD culture as “risk averse.” I would challenge that assertion. I’d say that every time a Department communicator pushes a message they think sounds right without rigorously studying its effects on the intended audience, they have taken a risk with America’s reputation and with our policy goals. So, as a culture, we are anything but risk averse. To quantify and reduce this type of risk, GPA research and analytics has developed robust message testing methodologies that enable speedy assessment of the potential impact different messages will have on PD audiences overseas. When we apply these randomized controlled trial methods to a burning policy issue, we try to consider both our short-term policy and long-term reputational goals. Then, we present the full picture of short- and long-term risks and rewards to our leadership as quickly as possible. Pairing this research function with our Media Monitoring Unit’s ability to assess the prominence of different narratives in the information environment increases our situational awareness. The ability to rapidly deploy this content in a highly targeted way through social and digital media, including through paid promotion, has yielded powerful new possibilities for PD practitioners. Our products have earned us a positive reputation with, and have had a coordinating effect on, interagency colleagues as well, although relationships with the Department of Defense and the National Security Council are a little more ephemeral and ad hoc than I’d prefer.
Some of the most important insights we deliver to senior leaders are corrective – they are about actions happening right now and, thus, are pretty sensitive. As the Commission noted, program evaluators can often be treated like tax auditors rather than strategic partners, and we warily avoid that dynamic. When our findings challenge practitioners’ assumptions, as they often do, we thoughtfully present research-grounded alternatives. We maneuver carefully to share corrective insights, while preserving our reputation for trustworthiness and objectivity. While we refuse to compromise our findings and, indeed, we’ve shared reports that eviscerate programs in a post-mortem or phased evaluation format, the sensitivities for in-cycle evaluations are different. As Vince Lombardi used to say, “Praise in public, criticize in private.” Each well-received, data-driven correction from our shop helps positively reframe evaluation for our PD colleagues on an individual basis, but bringing that dynamic shift to the scale of cultural change may ultimately require a complete overhaul of how we discuss PD evaluation. Looking ahead to the next six years, we anticipate challenges inherent to our position as a friction point between a fast-moving strategic environment and a slow-moving bureaucracy.
In the 2014 report, Dr. Arsenault’s observations on the evolution of communications provided an insightful and enduring commentary on our external challenges. The global media environment is as dynamic as it is heterogeneous. For GPA evaluation purposes, whether, for example, 20 pieces of press coverage is indicative of success or failure on the rollout of some annual policy report, depends not only on the context of who covered it, but what the coverage said, who encountered it, the audience’s typical modes of information consumption, and what else happened in the world. In practice, we have to blend together signals from social, digital, and traditional media sources to self-assess, predict what will come next, and make recommendations as to how we should proceed. But, our methodologies have to evolve with the changing environment. This is a major challenge because practitioners tend to want to develop methodologies that work and stick to them, even when those underlying system dynamics shift to invalidate the established methods.
We face major challenges related to hiring and procurement. This is what Dr. Arsenault talked about at the top – the hiring process throughout government is slow-moving and hinders our ability to meet near-term growth targets. We are well behind where we want to be in terms of filling seats in GPA. Even after we extend tentative offers, security clearances can delay hiring for many more months. Government compensation remains unable to match private sector salaries for the skill sets we seek. Furthermore – maybe speaking more directly to Dr. Arsenault’s point – effectively executing in-depth research around the Department’s budget and funding cycle requires both luck and foresight. We are funded in multi-month or multi-week tranches of one-year money. Our procurement and contracting colleagues often need months of lead time to award contracts. So, these dynamics lead to procurement timelines that exceed campaign timelines and to certain seasons in which research may not even be able to start at all. In short, the bureaucracy heavily limits our ability to design experiments and deliver insights at the pace demanded by our strategic environment.
Finally, the 2014 data-driven diplomacy report mentioned limitations related to the Privacy Act of 1974. While we are today pushing aggregate, anonymized social media analysis to the limit, the Watergate-era Privacy Act still forbids us from engaging in many insightful analytical practices, for example, optimized media contact engagement that private sector companies, nonprofits, and foreign governments can do without restriction. This Commission has consistently recommended that Congress examine this law’s applicability to our work, and it absolutely remains an obstacle today.
To paraphrase Professor Cull from the 2014 report’s preface – the history of public diplomacy in America consistently shows our colleagues being called in to spin foreign policy, but seldom having a seat at the table for its formation. The Global Public Affairs Bureau merger has shifted the Department’s capacity to study, create, deliver, amplify, and evaluate policy-advancing messaging into an organizational position with political relevance and top-level exposure. In Research and Analytics, we apply rigorous methods to help our leaders negotiate the chaotic and fast-moving media environment that perpetually dominates their attention. As a result, we’ve solved some of the more intractable challenges highlighted in the 2014 Commission report. For us, the moral of the GPA story for the public diplomacy community should be that investments in research and evaluation pay tremendous dividends back to all PD practitioners. In economic speak, we’re in the ‘increasing returns’ part of the total product curve. We don’t know when we’ll start to hit ‘diminishing returns’ yet.
There are plenty of obstacles ahead and plenty of goals that remain unmet, but research and evaluation in GPA has the resourcing and the momentum needed to enable our government to advance our national interest effectively through words and images. Thanks to the Commission for your time and the opportunity to speak today about the hard work we’ve done in IIP, PA, and GPA and will continue to do on behalf of the American people.
Vivian Walker: Thanks so much, Luke, that was great. Lots of interesting comments that I’m sure our participants will want to ask you about. Now, let’s turn it over to Natalie from the Bureau of Educational and Cultural Affairs to tell us a little bit about what’s going on there with respect to research and analytics.
Natalie Donahue: Thank you very much, Vivian. Hi, I’m Natalie. I’m the chief of evaluation in the ECA bureau. Our evaluation division has a long history of work within the State Department. We were formed in 1999, and we’re the oldest evaluation division within the Department, and we have a long track record of great work. Of course, we’ve come a long way since then, and certainly since the report was written, six years ago. However, I’m just going to focus on the last two years and what we’ve been working on the last two years, mostly because I obtained this position in May 2018, and five of my six evaluation division colleagues came onboard after me. So, a lot of the work that we’ve done has been very recent.
Currently, the evaluation division is a seven-person team. In FY 2020, Congress provided us a $3.5 million earmark, up from $3 million in 2019. Before that, there was no earmark, but we averaged about $1.5 million per year. So, we have more than doubled our budget since then.
Historically, the ECA evaluation division has been known for two overarching things. One is the monitoring work that has been done. Specifically, the evaluation division conducted surveys of a subset of our numerous programs, asking program participants at the pre-exchange stage, at the post-exchange stage, and six to nine months after they completed the exchange, to really understand their attitudes and perspectives around their exchange, what they learned about civic engagement, American values, democracy, and governance, and, of course, their perceptions of Americans and the U.S. government in general.
That was really what the evaluation division focused on, in addition to the evaluation work that was done, which consisted of two to three evaluations per year that focused on looking at long term outcomes, with those evaluations taking anywhere between three to four years to complete. We’ve really tried to evolve our practices in the last few years, particularly with both monitoring and evaluation, but also expanding to additional lines of effort, which I’ll talk about now.
Starting with our monitoring line of effort – and this is where we’ve really put a lot of stock in the past year – when we looked around, we realized that just serving a small subset of our programs, while useful, just wasn’t enough. There are a lot of program officers that need the data to be able to make programmatic decisions for their programs. Last year, we led an initiative to redesign performance monitoring within ECA. We sat down with program officers, senior leaders, even our award recipients, as well as a number of people from our regional bureaus and our embassies, to ask them, when they think about ECA programs and implementing ECA programs and exchanges in general, what is the information they need to report to their senior leadership? Also, how can we help to modify programming and make programming better? We want to be aware of both successes and maybe where we’re falling short.
So, we compiled all the information from these disparate groups of stakeholders and put together a results framework we called the MODE Framework. This is the Monitoring Data for ECA Framework, and essentially it is a results framework that denotes overarching objectives, sub-objectives, and indicators – measures to understand if we’re meeting our goals and objectives that take into account what we’re trying to achieve with our exchange programs – things like advancing cross cultural competency and increasing civic engagement, strengthening relationships and networks, both in-country and with our foreign publics. Essentially, this is designed to track program performance and the direction, pace, and magnitude of change for ECA programs.
While that sets the framework for everything, we also realized that we have a number of stakeholders collecting data on our behalf: our award recipients, the evaluation division, and our Foreign Service Officers at embassies all collect information. We realized that we have to create standardized measures and standardized data collection questions for each measure. Drawing on already existing, validated surveys from social science research and practice-oriented literature, we created data collection questions for the corresponding indicators. Anytime anybody is collecting information through the MODE Framework on behalf of ECA for a certain measure, everybody is asking the same question. In this way, we can ensure reliability and consistency across our data and have better confidence that the data we’re receiving is reliable.
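To make the standardized-measures idea concrete, here is a minimal, hypothetical sketch; the indicator IDs, question wording, and response scales below are invented for illustration and are not the actual MODE Framework content. The point is simply that one canonical question and scale is stored per indicator, so every data collector pulls identical wording.

```python
# Hypothetical sketch of a standardized indicator-to-question mapping.
# Indicator IDs, wording, and scales are illustrative only, not the
# actual MODE Framework content.

from dataclasses import dataclass

@dataclass(frozen=True)
class Measure:
    indicator_id: str   # e.g., a sub-objective indicator code
    question: str       # the single canonical question wording
    scale: tuple        # the fixed response options

MEASURES = {
    "CCC-1": Measure(
        indicator_id="CCC-1",
        question="To what extent did the exchange increase your understanding of U.S. society and culture?",
        scale=("Not at all", "A little", "Somewhat", "A great deal"),
    ),
    "NET-2": Measure(
        indicator_id="NET-2",
        question="How many professional contacts from the program are you still in touch with?",
        scale=("0", "1-2", "3-5", "6 or more"),
    ),
}

def question_for(indicator_id: str) -> Measure:
    """Return the one canonical measure for an indicator, so every data
    collector (award recipient, post, or evaluation division) asks the
    same question with the same response options."""
    return MEASURES[indicator_id]

# Example: building a post-program survey from two indicators.
survey_items = [question_for(i) for i in ("CCC-1", "NET-2")]
for item in survey_items:
    print(item.question, "|", " / ".join(item.scale))
```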
The MODE Framework also looks at a number of different stakeholders. One of the recommendations that we had was to avoid self-evaluation reporting from our participants. In the past, we just looked at what our program participants were stating, so we have broadened our data collection to include our program participants at the pre- and post-program stages. We’re also looking at alumni, as we also want to track not only what they thought before and directly after the program, but also one, three, and five years later, so we can track on a continuum their knowledge, continuing perceptions, attitudes, practices, behaviors, etc. over the long term. We’ve also expanded data collection to include other stakeholders such as host families and host organizations. If a program participant does an internship somewhere in the States, we ask the host about their perception not only of the exchange participant, but also how has the participant improved and strengthened the host organization or the firm’s practices? We just completed the MODE Framework in April, so we’re currently piloting this. Hopefully we’ll start to see data roll in by the end of the calendar year to pinpoint what we need to modify and keep going.
In addition, in a lot of the data collection that we did with external stakeholders, somebody said, “how does this relate to foreign policy?” So as part of the MODE Framework, my team also created what we’re calling a strategy crosswalk, in which we have mapped our MODE Framework goals and objectives up through ECA’s Functional Bureau Strategy, which rolls up to the State-USAID Joint Strategic Plan, which rolls up to the National Security Strategy goals. So, it’s very clear to see how the ECA programs that we’re implementing now contribute to the National Security Strategy.
On to the next line of effort. I already talked about the evaluations that have been conducted. In the past, the evaluation division conducted two to three evaluations, taking a longer-term perspective. So far this fiscal year, from September through now, in the last nine months we’ve completed four evaluations. We currently have eight evaluations ongoing and two more out for bid. So, we have obviously greatly expanded our work on the evaluation front. What’s changed with our evaluations is data collection from what we’re calling the constellation of stakeholders – not just talking to participants, having them take a survey and then going into the field to speak with them – but taking a more holistic approach and talking to host families, host organizations, sometimes friends, family, or supervisors of the participant as they’re back in their home countries now, to really understand the progress or the current situation of the exchange participant. We’ve also worked to shorten the timelines of the evaluation from three to four years to a year and a half. I echo Luke’s comments on legislation hindering monitoring and evaluation timelines, such as the Paperwork Reduction Act, which prohibits data collection from Americans or foreign participants on U.S. soil without approval. We have to go through the OMB clearance process, which takes at minimum seven months. So, it makes it very difficult to do things in a timely manner.
Thinking about where we’ve expanded, everything that the ECA evaluation division does is with use in mind. So, we’ve got the new MODE Framework, and we’re doing more evaluations. What we’re interested in is putting all of the information that we’re receiving into practice. We’ve started a learning line of effort in which, after an evaluation is completed, we go through what we call the action plan process. We sit down with the program officer, or whoever the evaluation was for, to say: here are the recommendations that came from the evaluation that was conducted. Which ones do you concur with, which ones do you not concur with, and how can we modify the program and implement these recommendations so we can see even more successes than what we’ve seen thus far?
Then, as they need our assistance, we follow up with them to assess progress. After they have time to put these recommendations into practice, we follow back up with them, and we even write a case study that highlights some of the findings that weren’t as flattering or highlights where we need to improve programs. The case studies highlight how the program teams have implemented recommendations, so we can see, from conducting the evaluation through six to nine months later, what is happening. Following the passage of the Foundations for Evidence-Based Policymaking Act, ECA also came up with a learning agenda to help guide some of our research and evaluation work.
Finally, echoing what Amelia was saying about the importance of capacity building, our fourth line of effort really focuses on this. We’ve put together what we call evaluation seminars. These are for interested ECA program officers, or even other Department of State colleagues. We choose a monitoring and evaluation topic each month that we think would be interesting to our colleagues and would be easy for them to put into practice – things like, how to create or evaluate surveys and how to create logic models for programs. We focus on program design and best practices in monitoring and evaluation, and we host seminars, which are now held virtually each month.
In addition to these trainings, we also create monitoring and evaluation related resources, all of which are published online. You can just Google ECA evaluation and go to our capacity-building page. All our seminars, as well as a good bulk of the resources we create, are online so our award recipients can take advantage of those, as well.
And finally, something new since March: we’ve hosted an evaluation community of practice, as well. This is, again, open both to internal ECA evaluation colleagues and to broader Department of State colleagues and ECA award recipients. Each month we choose a topic and have guest speakers on issues surrounding exchange programs and the monitoring and evaluation of those programs, to try to see each other’s viewpoints and learn best practices from each other. We are thinking about moving away from a risk averse culture and being open. As I said earlier, everything is online – all of our evaluations, our resources, the learning agenda. Everything is online. We have full transparency on what we’re doing.
With that, I’ll send it back over to Vivian and Shawn. And of course, I’m happy to take any questions you might have.
Vivian Walker: Great, thanks so much for those illuminating comments, Natalie. We’re going to move to the Q&A period, as I promised. Just a reminder that if you just entered and you’ve got questions, please send them to us via Slido.com using the event code #ACPD. The instructions are in the information accompanying your invitation email, as well. ACPD Senior Advisor Shawn Baxter will be moderating the Q&A period. Before we turn to the audience, I want to give the first question to our Commission Chair, Sim Farar.
Sim Farar: Thanks. First off, the presentations were very informative. I want to thank Amelia, Luke, and Natalie for their professionalism. I appreciated your presentations.
I have visited many posts along the way – some big, some small. For PD officers at our missions overseas, especially the one- or two-officer posts, doing program-level research and evaluation is very challenging on a number of fronts, from budget to staff time. How can we support research and evaluation efforts in the field while recognizing the limitations of the Public Affairs Officers (PAOs) and our PD sections? Do more simplified, out-of-the-box solutions exist? That question is for any one of the three of you who wants to answer. Thank you.
Amelia Arsenault: I’m happy to take a first stab. One of the things you mentioned – locally employed staff and supporting locally employed staff who help PAOs, particularly at small posts, though they’re essential at missions around the world – is incredibly important and something that we are doing. The REU has been going to a series of posts to do something we’re calling “core curriculum,” which includes training on monitoring and evaluation for locally employed staff. This is not intended to turn everyone into professional researchers, but to help them be comfortable with the basics. There are also some larger posts where there are people on staff who are dedicated to thinking through issues of metrics and monitoring and evaluation.
At smaller posts, staff time is an incredibly important consideration. That’s where the technological tools are an important move for the future, because there are always going to be staff time limitations, and there are always going to be foreign policy issues that take people’s time, as they should. There are, increasingly, digital tools and digital delivery mechanisms for data that we can take advantage of to help support staff in new ways that wouldn’t have been possible 10 years ago. That is why that direction is really important.
Luke Peterson: First of all, I agree with everything that Dr. Arsenault said. I’ll make two additional points. One is rotation, and this is just a plug. We’ve got three Foreign Service Officer spots in the GPA Research and Analytics family and are happy to accept Foreign Service Officers who’d like to scrub in and learn how to do this work firsthand, so they can take it to posts they then rotate to after us.
Second, a couple of the online comments today focus on how we take advantage of the large quantity of data that’s available to us. We try to procure tools off the shelf before we build tools – we have data science and software development capacity within the bureau that allows us to build things. One of the ways we assess the Department’s footprint in earned media around the world is first by capturing as much earned media content as we possibly can and determining what the prevailing narratives are, how big they are, how much they’re being shared, and with whom. In order to do that, we had to build a pretty comprehensive system to capture and analyze all that information. We built it in a way that we think will eventually allow us to provide some capacity to posts to streamline the production of things like clip reports for the Chief of Mission, which we observe, just by talking to folks at small posts, takes a significant amount of staff time. Hopefully, putting technology and tools like these – both time savers and straight-up enablers of the kinds of methods we use to do our work in Washington – in the hands of folks in the field will help proliferate the same sort of approach.
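As a purely illustrative sketch of the narrative-prevalence idea described above – not GPA's actual system, data, or narrative definitions – the snippet below tags a handful of invented articles against invented keyword lists and aggregates article volume and share counts per narrative.

```python
# Hypothetical sketch of narrative prevalence counting over captured earned
# media: tag each article against keyword lists for a few invented narratives,
# then aggregate article counts and share counts per narrative.

from collections import defaultdict

# Invented narrative definitions (real keyword lists would be far richer).
NARRATIVES = {
    "economic_recovery": ["recovery", "jobs", "growth"],
    "vaccine_cooperation": ["vaccine", "covax", "doses"],
}

# Invented captured articles: (outlet, headline, times_shared)
articles = [
    ("Outlet A", "Region sees jobs growth as recovery takes hold", 1200),
    ("Outlet B", "Ministers discuss vaccine doses and distribution", 850),
    ("Outlet C", "Opinion: growth alone will not fix inequality", 300),
]

def tag(headline: str) -> list:
    """Return the narratives whose keywords appear in the headline."""
    text = headline.lower()
    return [name for name, words in NARRATIVES.items() if any(w in text for w in words)]

volume = defaultdict(int)   # articles per narrative
shares = defaultdict(int)   # total shares per narrative
for outlet, headline, shared in articles:
    for narrative in tag(headline):
        volume[narrative] += 1
        shares[narrative] += shared

for narrative in NARRATIVES:
    print(f"{narrative}: {volume[narrative]} articles, {shares[narrative]} shares")
```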
Natalie Donahue: For ECA, we developed the MODE Framework with this in mind, so people would be able to pick it up and be able to utilize it as they need. Because ECA is a 500-person bureau, and there are seven members of the evaluation division, we can’t even support internally everybody that needs our assistance, let alone all of the embassy personnel, as well. So, we created a suite of tools around the MODE Framework. Not only is there an indicator book which highlights all those objectives, sub-objectives, indicators, and the corresponding measures, but we created webinars for the MODE Framework, we created a website that walks people through the process: here’s what I have to submit, here’s the template, I can watch this video on how to do this. So, there are a lot of resources, one-pagers and 20-pagers for those of us like me who aren’t able to do anything with brevity. We put everything that’s relevant up on the website, with all those tools and the how-to’s, so people can refer to it versus us having to provide the capacity in an ad-hoc way. We are able to answer questions instead of having to walk each post or each program officer through it.
Vivian Walker: Thank you very much. That’s extremely helpful. Before we take the rest of the questions, I want to remind everyone that there will be a transcript of this meeting available on our website in a month or two. If there are details or references to programs that any of our panelists might have made, or you need additional information, you’ll be able to find it in the transcript. We will alert you when that’s available.
Now, I’m going to turn it over to my colleague, Shawn Baxter, to moderate the Q&A section.
Shawn Baxter: Good afternoon from Washington, everyone. I’m looking at Slido.com, where we have lots of questions coming in. If you’re interested in asking a question, please post it there, and we will do our best to get to you in the limited time that we have.
One question has to do with coordination of research and evaluation practices, particularly among R, ECA, and GPA, not to mention the Global Engagement Center. Is there hope of a single, Department-wide approach to PD research and evaluation? Can we expect that in the future? I open that up to any of our presenters.
Natalie Donahue: I can’t speak to the holistic viewpoint, but what I can say is, certainly where there are synergies, we do coordinate. Amelia was talking earlier about the work on PD Tools that they’re doing. They came in and discussed it – I’ll let her speak to the quantity of stakeholders that she spoke with – but, of course, ECA was one of those, me and some of my team, and some of the broader ECA stakeholders, so there is some cross coordination there. And, as the REU is developing that, we provided them ECA’s MODE Framework indicators, so when they are developing and collecting information from stakeholders to provide a fuller view of indicators that affect PD writ large, not just exchanges, they have a base from which those will come into play. So, there is some coordination and collaboration happening already, and I’ll kick that over to Amelia or Luke to see if they want to add anything.
Amelia Arsenault: There’s no plan to create, as Natalie said, any sort of centralized office, but there are benefits to having researchers embedded within different bureaus and offices at State and working in complementary ways. There are benefits from a monitoring and evaluation perspective as well: if you are an evaluator coming into a program completely removed from the office, it is harder to think through how, as a team, you can support a wide range of posts and a wide range of their needs. Having some integration into other offices, and a network of research nodes embedded within different bureaus and departments, is actually useful for maintaining a better connection with the needs of the field.
Luke Peterson: I’ll go back to the 2014 report. So, the Commission recommended that we take a more active posture in data sharing, which was part of the question. Truthfully, we’ve had hits and misses on this front. As I talked about, we built active relationships with the National Security Council, the White House, our J-39 colleagues at the Pentagon and the combatant commands, and other interagency elements. Closer to home, we absolutely are voracious consumers of the INR Bureau’s products, and I’d say we have a positive and collaborative relationship with them, as well as the Global Engagement Center. But, as I mentioned, these are very ad hoc relationships. They’re on a project-by-project or campaign-by-campaign basis, largely. The regular connections that we do have to the remainder of the Department’s communicators are in the form of finished products, such as the weekly insight digests that go up to R or across to the regional bureaus and then out through the regionals to the missions.
We’re working hard, and I think R/PPR and IRM have a lot of work under way to create the more technical underpinnings to help systematize these relationships. Part of it is just process, which we’re trying to solve at a staff and resourcing level, as well. Ideally, we improve the inter-bureau and interagency sharing of not just the finished products, which is where we’re focused now, but also the source data and, as other folks have mentioned, methods, tradecraft, and cross-training opportunities. Bid GPA and get some of those cross-training opportunities!
Shawn Baxter: Thank you. We have a number of questions about analyzing social media. First, how are we analyzing the massive amount of data that we may be receiving online? How do you process the feedback to a social media post in a way that combines data in both a quantitative and a qualitative manner? How do you use data to inform the design of forward-looking strategies? Those are some of the questions that we’ve received about social media, and perhaps Luke could start us off.
Luke Peterson: Sure, although I also think Dr. Arsenault’s dissertation was focused on answering this question, so hopefully she can give us some comments. The top line answer is: it just depends. It really depends on what the question is. We capture a lot of different data, and it depends on what the strategic question is and who the audience is. One of the things that we do in GPA is research. We look to identify who the persuadable audiences are. We try to get a sense, on a given policy topic and for a given set of content, of how we should score content as positive or not based on social media reactions. One example of how we do this is, we’ll look and see, on a given campaign, whether a Facebook “haha face,” for example, is indicative of backlash or support, and it really depends on the content and the audience and what they’re used to. There are lots of parts of the world where there are active users of social media who don’t particularly telegraph how they feel using the platform. They’re mostly on the consumer side and less on the production side. So, it really depends on what we want to learn, who we are trying to reach, and what data we have.
Amelia Arsenault: One thing that follows from what Luke said – there is a tendency to say there is this deluge of data. How do we understand it? It’s just not manageable to think through how you structure and interpret the sheer volume of existing data. Drilling down on what Luke said, I think it’s incredibly important to think through not what is out there, but what we want to achieve and what the relevant data is to achieve that, because there’s just no way to structurally process the sheer volume of data that exists in a way that is reflective of any real trends or outcomes. And it’s only getting bigger every day.
Luke Peterson: I’ll add on a little bit. When we think about how to figure out what kind of content might shape hearts and minds for a particular target audience, the first thing is to figure out whether that audience is even persuadable in the first place. The second thing is to figure out – because in a social media world, you are self-selecting – what content they engage with. You could make really persuasive content as the State Department and put it in front of audiences that largely ignore it. We have to make content that’s engaging. Once we figure out what content is engaging to the audience that’s actually persuadable on a particular issue, we do more work. This involves a blend of finding content that’s deliverable through a social medium and then using other methods to put that content in front of audiences and observe, in a randomized controlled trial setting, whether the audiences exposed to that content experience the type of change in action, awareness, or opinion that we’re seeking. But again, it all depends on what you’re trying to do. If you’re trying to figure out what color background to put on an image that needs to go out on the State Department Twitter page in the next hour, we might be able to do some kind of quick analysis that looks at the last 50 things we posted and identify what works best and what doesn’t. So just doing split testing might be the situationally appropriate thing to use in a particular case.
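To make the quick split-testing idea concrete, here is a minimal sketch that compares the engagement rates of two content variants with a standard two-proportion z-test. The audience sizes and engagement counts are hypothetical; this is a generic illustration, not a description of GPA’s actual analysis pipeline.

```python
# Hypothetical sketch of a quick A/B split test on two content variants,
# comparing engagement rates (e.g., clicks or shares per impression).
# Numbers are illustrative only.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in engagement rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Variant A vs. variant B, shown to comparable audience segments.
rate_a, rate_b, z, p = two_proportion_ztest(success_a=480, n_a=12000,
                                            success_b=610, n_b=12500)
print(f"A: {rate_a:.3%}  B: {rate_b:.3%}  z={z:.2f}  p={p:.4f}")
```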
Shawn Baxter: Thank you all for that. We have a question about the risk-averse culture at State. Vivian mentioned this briefly in her opening, but is this still an impediment to effective research and evaluation efforts? Does it still prevail, and why or why not?
Natalie Donahue: I have worked in two bureaus at the State Department: ECA, of course, and the Bureau of Near Eastern Affairs, in their assistance coordination office. I was speaking with my supervisor not too long ago, and she joked that monitoring and evaluation is not a speedboat, it’s a cruise ship, so it takes some time to make turns. I think we’re seeing that with the risk-averse culture, definitely in the two offices that I’ve worked in, NEA and ECA.
If you package things correctly and have conversations with people about what evaluations can do and what you can learn from them – no program is perfect, and nobody expects programs to be perfect – you can make the case that there will always be successes in a program, but that by learning what’s not going so well, you can make the program even better. That’s part of the reason for ECA’s evaluation case studies: to show program officers, or anybody, here was an evaluation, not all the findings were rosy, but this is how the program teams used it to modify their programs and make them even stronger.
I see a lot more openness to evaluation now. Going back about 10 years, the evaluation division ran just a few evaluations, and it would dictate to the program team: we’re conducting an evaluation of your program; here are the questions we’re asking. With the current ECA evaluation division team, by contrast, we’ve spent a lot of time over the last two years meeting with different offices, talking about the products and services our team offers, and talking about this risk-averse culture. As I stated, we have 12 evaluations that are happening or have happened in the last year, and there are program officers asking us to conduct these on their behalf. So, we’re starting to see a change. For me personally, it hasn’t really hampered the work that we’re doing.
Luke Peterson: I can offer a couple of different ways that we deal with this. I talked a little bit about how we handle sensitive challenges when a message test indicates that the way we’re currently proceeding on a given policy topic isn’t helping. That’s not the only way we deliver our fairly tough and introspective advice. We have a platforms team that tracks over 2,000 social media accounts and websites for different State and USAID programs. These are bureaus, missions, ambassadors, senior leaders, alumni groups, etc. We’ve been capturing this content for archival purposes and tracking statistics like page views, likes, and shares for a long time, but just in the last year, since we moved that function into GPA, we’ve started to use this data as a signal source for messaging alignment. So, for example, throughout the COVID crisis we’ve done regular reports for Department leadership that show how the Secretary’s priority messages have moved through all those different information distribution platforms and how they have, or haven’t, reached audiences overseas. To the point of this question, a lot of those reports have highlighted some shortcomings, created some corrective conversations among high-level officials, and subsequently created some action in the field.
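As an illustration of what message-alignment tracking across many accounts can look like, here is a minimal sketch that tallies how much of each platform’s reach carried a priority message. The field names, priority terms, and posts are hypothetical stand-ins; the actual GPA platforms-team workflow is not public.

```python
# Hypothetical sketch of message-alignment tracking across tracked accounts:
# for each captured post, check whether it carries a priority message and
# tally reach by platform. Field names and data are illustrative only.
from collections import defaultdict

PRIORITY_TERMS = ["vaccine safety", "supply chain resilience"]  # hypothetical priorities

posts = [  # in practice this would come from an archive of tracked accounts
    {"platform": "twitter",  "account": "@MissionX", "text": "New fact sheet on vaccine safety", "reach": 15000},
    {"platform": "facebook", "account": "MissionY",  "text": "Cultural program announcement",    "reach": 8000},
]

def alignment_by_platform(posts, terms):
    """Share of total reach carrying at least one priority message, per platform."""
    totals, aligned = defaultdict(int), defaultdict(int)
    for p in posts:
        totals[p["platform"]] += p["reach"]
        if any(t in p["text"].lower() for t in terms):
            aligned[p["platform"]] += p["reach"]
    return {plat: aligned[plat] / totals[plat] for plat in totals}

print(alignment_by_platform(posts, PRIORITY_TERMS))
```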
Lastly, I’d offer an example of risk aversion having to do with holdout groups. Many times, when we talk about setting up a treatment-and-control design for the purposes of evaluating a burning, high-priority topic with a clear target audience, there’s not a ton of interest. The preference is to go full bore and make sure that we get all the possible reach we can out of a program, rather than setting aside an intentional holdout group that doesn’t get treated so that we can make a randomized controlled trial comparison at the end. We end up using natural holdouts for a lot of things, which is methodologically problematic, and we need to get beyond that. That’s part of the reflexive risk aversion: folks want to give the program their all – getting the program’s purpose executed – and care less about planning ahead for the evaluation result they’re looking for.
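For a concrete picture of why a planned holdout matters, here is a minimal sketch that compares an outcome measure between an exposed group and a randomly assigned holdout group. The survey scores and scale are hypothetical; this is a generic estimate of “lift,” not a description of any specific Department evaluation.

```python
# Hypothetical sketch of a planned holdout comparison: a random subset of the
# target audience is held out from the campaign, and an outcome (e.g., post-wave
# favorability on a 1-5 scale) is compared between exposed and holdout groups.
# All data are illustrative only.
from math import sqrt
from statistics import mean, stdev

exposed = [4, 5, 3, 4, 4, 5, 3, 4, 4, 5]   # survey scores, treated group
holdout = [3, 3, 4, 2, 3, 4, 3, 3, 2, 4]   # survey scores, untreated holdout

diff = mean(exposed) - mean(holdout)
# Standard error of the difference in means (unequal-variance form).
se = sqrt(stdev(exposed) ** 2 / len(exposed) + stdev(holdout) ** 2 / len(holdout))
print(f"Estimated lift: {diff:.2f} points (±{1.96 * se:.2f} at ~95% confidence)")
```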
Shawn Baxter: Thank you. We have a couple of related questions about training and about making the case for data, research, and evaluation in developing PD programs and even in shaping our foreign policy. Could some of you discuss how your offices are working with FSI’s PD training division to improve training for PD professionals in the field? Is your work informing curriculum changes and laying the groundwork for long-term improvements? Also, in a general sense, are you engaged in other efforts outside of formal PD training to “sell” the importance of using data to improve our foreign policy work to colleagues and others who are not necessarily PD officers?
Amelia Arsenault: I can speak for R/PPR’s efforts. We work quite closely with colleagues at FSI who are involved in PD training. There’s a good working relationship right now, thinking through the core competencies that a 21st-century PD officer needs and how we can improve and expand training to make sure those core competencies are achieved. Right now, the REU as a team is working on bringing training specific to research areas like monitoring and evaluation, survey design, and the like to certain posts around the world. We’ve piloted it in several places. We’re working with FSI to make sure that all of the ways in which the field is being supported through learning are complementary – so posts are not getting one set of messages from one office or bureau and a different set from another – and thinking through how these can build on one another. FSI is really moving the needle on their training, but because not everyone can go to FSI, we have visited particular posts to help with training. It’s been a very good relationship. And the second question…
Shawn Baxter: It’s more of, in a general sense, how are we convincing our colleagues, both within the PD cone and in other cones – political, economic – of the value of using data to drive program design and U.S. foreign policy formulation?
Amelia Arsenault: I can’t really comment on educating colleagues outside of the PD cone, but there is incredible demand for data from the field across the world. We have way more requests for specific data than we can fulfill at this moment. There’s not much of a need to drive demand for data. It’s actually difficult to fulfill the demand for data.
Luke Peterson: The top-level Department leadership’s demand to see evidence justifying programs helps us. Being able to quantify the results of programs or being able to say, “We have three possible ways forward on this particular effort. We’ve measured in a sample setting what each of these three strategies may yield, and B is 20% better than A or C. Therefore, we should do it unless there’s a good political reason not to…” This is the kind of conversation we are now able to have often.
I saw one of the questions on Slido: it basically asked, how quickly can you get that kind of information together? The answer is, pretty quickly – and, we think, in a pretty reliable way. For us, it’s about eliminating the risk of shooting from the hip and making sure that people know what the consequences of any considered forward path might be on a particular policy issue. This is something that we try to do often, because there’s demand from leadership to quantify that kind of thing.
Before I came to State, I worked in the private sector, and most of the big companies that I worked with used this approach as a matter of course. Whether they’re launching a new product, adding a new ad campaign, or thinking about how to respond to a particular line of reporting coming in from trade journalists, they think about all those questions, and they want to see the numbers. Culturally speaking, people who come into the Department from outside, especially political appointees, are going to be used to seeing these kinds of assessments going forward.
Natalie Donahue: I echo my colleagues. First off, I wholeheartedly agree with Amelia about the need to fulfill requests for data rather than drive the demand for data. We’ve seen that as well. And to Luke’s point, when we first started developing the MODE Framework and some of our other lines of effort, we let people know about the new Foreign Aid Transparency and Accountability Act and the Foundations for Evidence-Based Policymaking Act to say, yes, these are best practices, but we are also now required to do them. Going back to fulfilling versus driving the demand for data, I don’t think this is much of an issue. We are seeing it less and less, but for those who are a little slower to get on board, it certainly helps.
Shawn Baxter: We have one question focused more on exchanges. Many PD programs are designed with longer-term qualitative outcomes. How can we balance the need to wait to observe these changes with the importance of having data drive decisions? That’s a historic problem in public diplomacy.
Natalie Donahue: It is. We recognize that, and that’s why the MODE Framework looks at collecting data, observations, and information immediately post-program. Then we also survey participants one, three, and five years later – and, for some of our youth programs, 10 years later – to understand changes in perceptions, knowledge, attitudes, and practices. We’re trying to adjust for this and take into account that public diplomacy is not about the short term. The long-term objectives we want to see are not necessarily skills increases, but whether exchange participants are still part of the ECA, State Department, or USG network. Do they still value American ideals and values? Are they in a position to act on them? We’re trying to take those into account by looking at the long term as well as the short term.
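To show what tracking a long-term indicator across follow-up waves can look like in practice, here is a minimal sketch. The indicator (share of alumni still active in a USG network), the wave schedule shown, and every figure are hypothetical; MODE Framework indicators and actual response rates will differ.

```python
# Hypothetical sketch of tracking a long-term exchange outcome across follow-up
# survey waves (immediately post-program, then 1, 3, and 5 years later).
# The indicator and all figures are illustrative only; note the response
# attrition across waves, which any real analysis would need to address.
waves = {
    "post-program": {"respondents": 400, "still_engaged": 380},
    "1 year":       {"respondents": 350, "still_engaged": 290},
    "3 years":      {"respondents": 300, "still_engaged": 225},
    "5 years":      {"respondents": 260, "still_engaged": 180},
}

for wave, d in waves.items():
    rate = d["still_engaged"] / d["respondents"]
    print(f"{wave:>12}: {rate:.0%} still engaged (n={d['respondents']})")
```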
Shawn Baxter: We probably have time for just one or two more questions. This next one is more specific: Can you provide a hypothetical or actual example in which audience research has illuminated insights for policy designers and policymakers? Can any of you provide us a specific example, a good news story, if you will?
Luke Peterson: I have one that’s maybe less timely, from about four years ago. We were looking at how to talk to South Africans about embracing wind and solar energy, and we learned a lot about the U.S. role as a messenger with that particular target audience and how much credibility we had. We arrived at the understanding that we were not the right direct messengers for that particular message about moving to wind and solar energy resources. The key finding that drove our communications strategy was that the target audience didn’t need to be convinced that wind and solar was the right solution for the future; they needed to know that wind and solar is less expensive than it used to be. They already had the will. They just thought it was prohibitively expensive and not worth looking into.
So, we pivoted the entire campaign from being about the benefits of wind and solar to talking about how inexpensive wind and solar is now. That pivot drove a hard turn in the direction of that campaign, and it was based on the research we were able to do on the front end with the audience that needed to be engaged to make the decision. Since then, we’ve improved a lot in our ability to identify not just the persuadable segments of the audience, but also what we need to persuade them of in order to bring about the desired action or the change in opinion or attitude. I don’t know if that’s a satisfactory answer for you.
Amelia Arsenault: This is one small example. The REU developed a systematic monitoring system for the U.S. Embassy in Tokyo to help them track their huge number of study abroad programs and learn more about them – and to help refine how the Mission was serving the needs of Japanese people interested in studying in the United States. We did some audience research to understand the different barriers to studying abroad, and one of the most surprising data points for those involved – because as Americans we are used to the complexities of our application process – was how big a barrier simply understanding the university application process was. So, the Mission was able to adjust and add more learning sessions focused on that complexity.
Shawn Baxter: All right, we have time for one last question. We have a couple of questions regarding financial resources, in particular, the industry standard of three to five percent of program resources going to research and evaluation. How close are the different bureaus in the Department that do research and evaluation to achieving this industry standard? Would you agree that that is a good standard to have – three to five percent of a program budget being dedicated to R&E?
Natalie Donahue: For ECA, as I said, these past few years we have had $3 million to $3.5 million earmarks, which translates to about half a percent of ECA’s overarching budget. However, this doesn’t take into account what our award recipients are expending or the time that our program officers are using on monitoring and evaluation. So, we’re actually not sure how to answer that. I don’t think we’re close to five percent, but factoring in time and what our award recipients are spending within their program budgets, themselves, I really can’t say. Is three to five percent accurate? That’s the standard that we’ve been using for a while. I’d love more money, of course, but I’ll take what I’m given.
Amelia Arsenault: To add to what Natalie said, three to five percent is an important yardstick. There are also a couple of other things that we ask posts, in particular, to think about when deciding how much of their program funds to set aside for M&E. First, is this important for management decision making on whether to do the program again? Is this a pilot that you want to scale up? What is the budget of the overall program? If it’s a $2,000 program, three to five percent of your budget is a very different thing than if it’s a $500,000 project. All of these things are factors in whether the three-to-five-percent bar should be applied. It’s not a hard and fast rule.
Luke Peterson: For us, we’re at about 11% of GPA’s overall nominal budget, but we’re also only about half filled out at this point in terms of the actual people we have in seats. So, we’re right at about that five percent number. There are a lot of questions that come to us asking whether we can use data to shed light on programmatic decisions, and they have to go unanswered. Part of that is because we don’t know how to answer them yet and are still working on the research methodology, but part of it is just because we don’t have the bandwidth. We’re still on the increasing-returns part of that labor-to-value curve. I’ll let you know if we start to see diminishing or negative returns, although I can’t even imagine where that would start to hit, given it’s unlikely that we’re going to get 100 percent of the bureau’s budget – in which case we’d be measuring everything but not doing anything. So, in the realm of reasonable budget shares, we’re going to be fine for a while.
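As a back-of-the-envelope check on the percentages discussed in this exchange, the arithmetic below works through the figures the panelists gave. The implied ECA topline is an inference from the stated earmark and share, not an official budget number, and the GPA calculation simply reflects the “11% nominal, roughly half staffed” framing above.

```python
# Back-of-the-envelope arithmetic for the budget shares discussed above.
# The implied ECA topline is an inference from the figures given, not an official number.

eca_earmark = 3.25e6          # midpoint of the stated $3M-$3.5M earmark
eca_share = 0.005             # "about half a percent"
print(f"Implied ECA topline: ~${eca_earmark / eca_share / 1e6:.0f}M")

# The 3-5% yardstick scales very differently with program size.
for program_budget in (2_000, 500_000):
    low, high = 0.03 * program_budget, 0.05 * program_budget
    print(f"${program_budget:,} program -> ${low:,.0f}-${high:,.0f} for M&E")

# GPA: ~11% of nominal budget, but roughly half the positions filled.
print(f"Effective GPA share: ~{0.11 * 0.5:.1%}")  # close to the five percent figure
```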
Vivian Walker: Thanks very much to everybody for your thoughtful questions and for the equally thoughtful responses from our panelists. You will have an opportunity to examine them in greater detail in the transcript. Now, to close this out, I’d like to turn to Commission Vice Chairman Bill Hybl.
William Hybl: Thank you, Vivian. Special thanks to you and Shawn. Our panelists today were great. They really did capture the impact of assessments and their importance for future programs. To those of you in the audience, thank you for joining us. We all look forward to our next meeting next quarter, and we wish you all a great day. Thank you.
Vivian Walker: Thanks for joining us.
ADDENDUM: PANELISTS’ WRITTEN RESPONSES TO ADDITIONAL QUESTIONS
Given the high volume of questions and time constraints, our panelists graciously agreed to provide written responses to several of the excellent questions posed by our audience.
- Is the ECA suite of tools for M&E available to implementing partners for ECA programs? How can we join the community of practice on M&E?
Natalie Donahue: Yes, ECA has a number of tools available on the Evaluation Division’s website for implementing partners and the general public (should they be so interested). Please peruse our website here: https://eca.state.gov/impact/eca-evaluation-division. All MODE Framework templates and webinars can be viewed here: https://eca.state.gov/impact/eca-evaluation-division/monitoring-data-eca-mode-framework, while recordings of our Evaluation Seminars can be viewed on this page: https://eca.state.gov/impact/eca-evaluation-division/capacity-building. The Community of Practice is open to Department of State staff and implementing partners only. If you would like to register, please send an email to ecaevaluation@state.gov. Community of Practice meetings are held on the second Thursday of every month from 1:30-2:30 p.m. We look forward to seeing you there!
- Relative to online efforts, what do you find are the most insightful means and methods to evaluate reach and effectiveness on social media platforms?
Natalie Donahue: ECA included several social media-related indicators as part of the MODE Framework, all of which are collected from GPA’s Analytics Dashboard: 1) # of social media post views, 2) # of social media likes or positive reactions, 3) # of shares on social media, and 4) # of followers on social media. The ECA Evaluation Division has also collected social media-related information as part of specific evaluations. For instance, the Mandela Washington Fellowship for Young African Leaders Initiative evaluation included social media/research analysis across seven different platforms that was used to develop an understanding of public perceptions of the fellowship and segmentation of those perceptions related to specific stakeholder groups. ECA plans to continue utilizing social media platforms as part of its data collection efforts across both the ‘monitoring’ and ‘evaluation’ lines of effort in the future.
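For readers curious how the four MODE social media indicators listed above might be rolled up in practice, here is a minimal sketch over a hypothetical analytics export. The record structure is a stand-in; this is not the GPA Analytics Dashboard interface or API.

```python
# Hypothetical sketch of rolling up the four MODE social media indicators
# (post views, likes/positive reactions, shares, followers) from an analytics
# export. Record structure and numbers are illustrative only.
records = [
    {"post_views": 12000, "positive_reactions": 340, "shares": 55, "followers": 89000},
    {"post_views": 8500,  "positive_reactions": 210, "shares": 32, "followers": 89000},
]

indicators = {
    "# of social media post views":  sum(r["post_views"] for r in records),
    "# of likes/positive reactions": sum(r["positive_reactions"] for r in records),
    "# of shares on social media":   sum(r["shares"] for r in records),
    "# of followers on social media": max(r["followers"] for r in records),  # a stock, not a flow
}
for name, value in indicators.items():
    print(f"{name}: {value:,}")
```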
- Can ‘content analysis’ of verbal PD messages, such as speeches of practitioners or textual messages through internet and social media, help evaluate PD messages for their appropriateness vis-a-vis the character of the intended audience?
Luke Peterson: Our research strongly supports the importance of tailoring verbal or textual PD messages to intended audiences in order to maximize receptivity and impact, as well as the critical role content analysis plays in that tailoring effort.
- When the data indicates the message is not effective (or counterproductive), or that enacted policies and behaviors are contradictory to the message, how useful is data for convincing the “higher-ups” to undertake a course correction?
Luke Peterson: Data assessing the impact of, and audience receptiveness to, policy messages strengthens arguments about when and how to communicate. Senior leaders consistently demand assessment of the strategic and operational risks, as well as the returns, associated with different messaging concepts and efforts, and they definitely incorporate this input into their decision-making calculus.
- How can officers obtain these research and evaluation skills when PAOs don’t support sending PD officers for training (e.g. SIP posts where officers are only there for one year, posts with small budgets, etc.)?
Amelia Arsenault: Making sure that all DOS PD practitioners have access to training in research and evaluation is critical. As this question highlights, not all PD practitioners can travel to DC for in-person training. The PD Research and Evaluation Unit has developed and deployed an intensive four- to five-day training program in audience analysis, strategic planning, and monitoring and evaluation. Pre-COVID-19, our team members delivered a beta version of our core curriculum at six different posts. Currently, we offer abbreviated digital components of the core curriculum and are also developing self-guided training materials available to all PD practitioners. We plan to continue our in-person training offerings for the field, including new formats that bring together participants from different posts in both in-person and online environments.
- GPA, ECA, GEC, REU, mission, and other sources are siloed from a data perspective. It falls on action officers at post to curate the info from PDFs for the analysis part of planning. Who is working on the big picture and what’s next?
Amelia Arsenault: This is an important question. The PD Research and Evaluation Unit is working on launching PD Tools, a global next-generation platform that integrates PD audience research data and tools, strategic planning tools, budgeting tools, and monitoring and evaluation into one interface. The initial deployment will be completed by fall 2020, with new capabilities added for audience analysis and monitoring and evaluation throughout 2021. A central goal of the PD Tools effort is to make it easier for all PD practitioners, stationed abroad and in DC, to access and collect data in order to facilitate a cycle of learning: to use data to assess strategic priorities, identify and connect with stakeholders, manage and design programs, and course correct, where necessary. An important next step in this effort is working with all of the PD research offices to make sure that all available and relevant data – from monitoring and evaluation reports, to public opinion research, to digital data and analytics – is integrated into the system.
- Are there good bibliographies of evaluation studies already conducted? Where would I find some of them?
Amelia Arsenault: While there are a number of interesting evaluation reports, there are, to my knowledge, no comprehensive bibliographies of public diplomacy evaluations. If you are interested in learning more about monitoring and evaluation in general, Better Evaluation, the Washington Evaluators Association, the USAID Development Experience Clearinghouse, and the World Bank’s Development Impact Evaluation are good places to start.