The Importance of Evaluation

Eileen Babbitt

Assistant Professor of International Politics, Fletcher School of Law, Tufts University

Interviewed by Julian Portilla, 2003

This rough transcript provides a text alternative to audio. We apologize for occasional errors and unintelligible sections (which are marked with ???).

A: We were trying to be a little bit more distanced from what we were observing and trying to understand what people were doing without saying, "Yes you're doing a great job," or "No you're not doing a great job."

So that seemed a little further than what we felt comfortable getting involved with. What we did instead was to set up a series of tracking processes that tried to identify how different kinds of interventions created ripples in the community and what those ripples were — not just in terms of relationships, but in terms of reputation for the interveners, feelings of increasing comfort on the part of individuals engaging in these activities, the extent to which they felt comfortable getting colleagues, peers, and family members involved in things they were doing, and the viability of these projects over time. Unfortunately, the evaluation project was so short-lived that most of the findings in terms of sustainability had to be very provisional, because we didn't even have a two-year data point. I mean, the projects as we were evaluating them were only about six months old, which is nothing in terms of relationship-building. So we could only extrapolate from the information we gathered. We did a lot of interviewing; we hired local interviewers, and we trained those local interviewers because we wanted the interviews conducted in the local languages.

Q: People who understand maybe even the non-verbal stuff?

A: Exactly.

Q: The contextual problems.

A: Exactly, exactly. And there are, you know, both strengths and weaknesses to that approach. The strengths are exactly as you say: people who can pick up the nuance, who understand the body language, and who know what words mean at a level of subtlety that an outsider could just never get, and that you would never catch if you forced people to speak English. On the other hand, you have to be very careful in choosing those local partners, because there are also things that people will say to an outsider that they may not say to an insider.

So we had to try and figure out multiple ways of triangulating the same information: local contacts, outside contacts, us, insiders, etc. And so that's how we did it. I was the overall coordinator of the project. I had one person working primarily in Bosnia, one person working primarily in Rwanda, who was French-speaking, and then the set of local people who collaborated with us as well. The way HCR works, as with most international organizations, is that they have organizations they call implementing partners. They come in with the money and with the framework of the project — what it's supposed to look like — then they put out an RFP, people respond, and they choose an organization to be their implementing arm. So HCR implements very little on its own.

Q: So like USAID or any sort of granting institution.

A: Exactly, and in Bosnia the implementing partner was a local NGO, a Bosnian NGO based in Banja Luka, which is in the Republika Srpska part of Bosnia, with a lot of experience doing contracting work and a strong psychosocial background. They did a lot of work with traumatized children, with women's groups, with community health issues, and some of the people on their staff are psychologists. The HCR office in Bosnia felt that was the best profile.

In Rwanda, they did something different, because they didn't feel that the local NGOs had enough organizational capacity to carry out this project in such a short space of time, and they didn't have enough staff to bring those local organizations up to speed. So they chose to work with two international NGOs: Oxfam Great Britain and Norwegian People's Aid. So we had a very interesting comparative study — two countries, five regions, three implementing partners, two of whom were international and one local. It was an incredible opportunity to grapple with too many variables.

Q: Perfect storm.

A: Exactly, exactly, exactly. Great metaphor. And we came up with, I think, some very good but preliminary findings and advice for HCR, some of which they knew already, but their organizational culture is such that putting them into practice is just so difficult.

Q: Meta-evaluation?

A: Exactly, exactly. But many of the findings were things that, as a conflict resolution professional, I did not find surprising: the importance of choosing the implementing partners and making sure that the partners had the skills not just for logistical administration, but for understanding what was really happening on the ground — a really sophisticated view of what was going on in their respective countries. That made an enormous amount of difference, enormous difference.

Goodwill is not enough. People really need to know what's going on, and you have to really find people who know; that's number one. Number two, you've got to find people who are able to be creative. This is not something where there are standard operating procedures. We don't yet have a checklist of what works and what doesn't. So you have to have people who can think on their feet, who really can be creative, but who can evaluate as they go and are smart enough and self-confident enough to make adjustments as they go along. The third thing is that the whole project has to be self-reflective. You can't just do an up-front something-or-other and an end-output thing; there has to be...

Q: Benchmarks?

A: Not just benchmarks. There's a research methodology in political science called process tracing, in which you actually look at what happens over time, but not in terms of benchmarks. It's not saying, how does this activity at time T1 relate to our goal? It's, what's happening at T1, and how does it relate to what was happening at T-1? So you're trying to see how things play out over time, following certain themes. The most important and challenging thing was figuring out what it was you wanted to be tracing over time. What we discovered is that there were some things you could choose to look at that you could evaluate in any context in which you were working, whatever the project: tracing the way relationships developed within organizations, across organizations, between organizations and governments, between leaders and followers. These are the kinds of things you could predict. But there were other sorts of things that were very context-specific, and you could not necessarily always identify ahead of time what those might be. You really had to work with the local partners and with the organizations themselves to identify those almost in process.

One of the weaknesses of the work that we did was insufficient resources to keep somebody on the ground during the whole evaluation. Our evaluation team had to keep going in and out. The local partners could be there all the time, but expecting them to be completely self-reflective as they did every single thing is really asking a lot. This is not a mode of operating that is familiar. And although the people from psychology in Bosnia in particular took to it very quickly and understood the benefit of it right away, both the time and the skill to do it were things they were building as the project went along. So the whole project was a set of self-reflections:

us reflecting on what we were doing, our local partners reflecting on what they were doing and then reflecting that back to us, and the projects themselves. It was iterative; as we went, we learned a tremendous amount. All I can say is that in order to do this well in the future, we have to do a lot more up-front training work to get people a little more comfortable with what the methodology will be as the project unfolds.