James Martin Center for Nonproliferation Studies
Middlebury Institute of International Studies at Monterey
Results for Implementation of Metrics for Nuclear, Biological, and Chemical Security Programs
Office of Cooperative Threat Reduction
US Department of State
Philip Baxter (PI)
February 29, 2016
1400 K Street Northwest, Suite 1225
In 2012, the Office of Cooperative Threat Reduction (CTR) in the State Department’s Bureau of International Security and Nonproliferation (ISN) tasked the James Martin Center for Nonproliferation Studies (CNS) with creating a tool that could be used to assess the effectiveness of its programs. CNS designed a multi-layered, cross-domain data collection and analysis tool to ascertain the status and effectiveness of ISN/CTR’s programs over time. The tool measures the outcome of ISN/CTR programs: the changes in security culture that the programs have made in individual partner countries. CNS conducted a trial implementation of the tool in fall 2013 and a full-scale analysis of twelve priority countries (Afghanistan, Algeria, Egypt, Indonesia, Iraq, Jordan, Libya, Morocco, the Philippines, Saudi Arabia, South Africa, and Yemen) in 2014. This report provides the results of the evaluation and examination of national, institutional, and combined metrics, as well as the contextual indicators, for the twelve countries selected for evaluation by ISN/CTR where CTR activities have been undertaken, covering the current evaluation period of June 2014 to June 2015.
The report provides an overview of the evaluation framework and progress to date, and then walks through the results by metric category and country. Overall, steady progress was seen in advancing CTR priorities in a number of countries. This analysis found that, at the national level, seven of the twelve countries assessed (Algeria, Egypt, Indonesia, Iraq, Jordan, the Philippines, and South Africa) improved their scores, and two other countries (Libya and Morocco) had no change to their scores compared to the previous evaluation. The scores of the remaining three countries (Afghanistan, Saudi Arabia, and Yemen) decreased, but these countries were also managing conflict within their borders or regionally. At the institutional level, improvement was seen in nine (Algeria, Egypt, Indonesia, Jordan, Morocco, the Philippines, Saudi Arabia, South Africa, and Yemen) of the ten countries where data was available. In one country (Afghanistan) the score remained the same, and in two countries (Iraq and Libya) data was unavailable due to a lack of site visits. With this in mind, when computing the combined scores for each country, only one country (Afghanistan) had a negative change to its overall combined score.
While institutional change varied by country and discipline, it should be pointed out that in the institutional metric, only one country experienced a decline at the discipline level, and even that decline was minimal (1 percent). Overall, the average increase at the institutional level, where data was available, was 18 points. These institutional-level score increases, coupled with the national-level data, result in an average 9-point increase for the twelve evaluated countries.
Several other trends were also identified during the execution of the evaluation program. First, whereas in the previous evaluation period national-level scores were higher than institutional-level scores, in this evaluation period institutional scores were in most cases higher than national-level scores and were marked by steep increases. Interestingly, national-level increases did not appear correlated with institutional-level increases. For example, a large national-level increase did not necessarily mean that a large institutional metric score would also be seen, and vice versa.
It should also be noted that some stagnation, or minor slippage, in scores was observed in certain cases during this evaluation period, but this was often attributable to broad geopolitical events occurring either in the partner country or regionally, which prevented the partner country from taking additional steps to improve its nuclear, chemical, and biological security and security culture. These factors also impacted the ability of the research team to adequately assess institutions, and sometimes restricted national-level data, in these countries due to severe drops in data availability. Data collection at the institutional level is heavily reliant on implementers’ accounts of the security situation in an institution. Due to security concerns, a significant number of institutions were not visited during the current evaluation period, precluding the project team from collecting information on these sites. This, in turn, can lead to overstating (and in rare cases understating) national and institutional scores. Since the rate of unanswered questions at the national level was relatively low (at most 10 percent), the comparability of the data across the years is very strong. The report concludes with observations and lessons learned from the evaluation writ large, and also suggests means for improving future evaluation processes.