Evaluation as a Tool for Reflection

By
Michael Elliott
Tamra Pearson d'Estrée
Sanda Kaufman

September 2003
 

The Role of Evaluation in Resolving Intractable Conflicts

Evaluation is a systematic effort to learn from experience. It is a common human activity, one that enables us to make sense of the world and our impact on it. The understanding that comes from careful evaluation empowers us to act more effectively.

Intractable conflicts share characteristics that both enhance the usefulness and increase the difficulty of evaluation. Escalation, polarization, ineffective communication, strong emotions and distrust are rooted in complex social systems and have multiple causes. Particularly when feeling threatened, disputants seek to explain away these complexities through high degrees of simplification and framing. These explanations are reinforced through socialization with like-minded individuals. As conflicts escalate, parties polarize and encode differences and negative expectations through formal policies and informal cultures.[1]

Intractability therefore dampens normal processes of evaluation and feedback. Mutually escalatory cycles are strengthened by mistaken assessments, unhelpful reference points, and strategic behavior based on highly simplified assumptions. The underlying conflict remains unresolved in part because disputants have no mechanism for conscious learning about the process and substance of the dispute or the relationship between their own choices and the ensuing negative outcomes. They therefore repeatedly engage in behaviors that prolong and escalate the conflict.

Even positive efforts to resolve intractable disputes are subject to these dynamics. Because intractable conflicts involve a series of interconnected disputes and complex social dynamics, they usually require a series of interventions. Cumulatively, these efforts may allow for significant change over time, yet each effort by itself is likely to produce only marginal de-escalation in an overall pattern of escalatory behavior.

In light of the complex social forces that reinforce intractable conflict, how can we best design efforts to de-escalate or resolve these conflicts? Once we have designed and initiated conflict management processes, how can we best understand their effectiveness? And finally, considering how common intractable conflicts are, what can we learn about past conflict-resolution efforts that might prove useful in the future?

While these three questions pose distinct challenges, evaluation is essential to answering each of them. Evaluation is a powerful way to link theory with practice. Evaluation requires one to be explicit about one's goals, processes and theory. At its best, this explicitness encourages reflective practice, in which the evaluator and the participants in the dispute work together to understand the conflict, reflect on how current practice has helped or hindered resolution of the conflict, and reconsider how best to alter their practice to better resolve their conflict.[2]

Why is evaluation particularly important in intractable conflicts?

Three characteristics of intractable conflicts make evaluation particularly useful.

First, intractable conflicts are complex. Disputes emerge from the interaction of numerous dynamics, many of which are self-reinforcing. Effective intervention into such disputes requires knowledge about the dispute's history, dynamics and context. An evaluation, particularly one conducted by the disputants themselves, can provide a structure for assessing the conflict and setting goals. Through evaluation, the intervenor and the disputants can better understand their situation and begin to design processes for moving forward. Both conflict assessment and formative evaluations are effective tools for exploring this complexity.

Second, intractable conflicts are long-term. The length of time presents both challenges and opportunities. Over time, both the context and the parties change. This creates a need for midcourse adjustments in interventions. Intractable conflicts usually continue long enough to allow for reflection on the intervention attempts to date, to learn from the past and adapt to both this history and the changing conditions. Formative and summative evaluations are needed for this step.

Finally, intractable conflicts manifest at multiple levels. Disputes in a particular community are often embedded in a deeper set of structural conflicts. Actions that might move immediate disputes toward resolution may neglect or possibly even escalate the underlying conflict. Systematic self-reflection and evaluation are needed so that parties and intervenors can set goals and assess accomplishments not only at the level of episodic disputes, but also at the level of underlying conflicts. Therefore, intractable conflicts call for practice that is grounded in a deep understanding of systemic conflict. This knowledge is built through knowledge-oriented evaluation.


Figure 1: Evaluation Utilization Continuum

As a tool for intervention (Action Evaluation)

  • Conflict assessment: assesses dispute dynamics for immediate use by participants in the design of an intervention
  • Formative evaluation: assesses an ongoing process or intervention for the purpose of improving the process through immediate feedback and recommendations
  • Summative evaluation: determines the effectiveness of interventions and the contextual and process conditions that contribute to success
  • Knowledge-oriented evaluation: contributes to theories of conflict and conflict management

As a tool for research (Knowledge-Building Evaluation)

Evaluation's Uses

Because intractable conflicts often require many interventions, evaluations must be closely matched to the goals and objectives of a particular process. Evaluations have the potential to either reinforce or undermine conflict resolution processes. Evaluations with goals that differ from the goals of the intervention can be destructive, distracting participants and leading to inappropriate critiques. Evaluations using appropriate criteria provide realistic feedback and guidance for constructive change.[3] To promote the effective resolution of conflict, evaluations cannot be strictly objective and external to the process, but rather must be supportive and integrated into it.

The main types of evaluations include:

  •  Conflict assessment or conflict analysis is conducted before the design of a conflict resolution process. Its primary purpose is to enable the intervenor and the disputants to reflect upon and understand the conditions of their conflict in order to assess the viability of potential interventions and to increase the likelihood of designing a successful intervention process. The assessment identifies everyone with a stake in the conflict, important issues to address, and concerns about the conflict resolution process. The conflict assessment helps the parties to set objectives, design processes, build relationships, feel ownership of the process, develop a shared understanding of the conflict, and secure participation. Often, the assessment is used to help participants reflect upon and gain new insights into the conflict.[4]
  • Formative evaluation or monitoring focuses on ways of improving dispute resolution processes as they occur. It is a structured process of reflection that seeks to provide input into program planning and revision. It promotes both individual and social learning, increases awareness and continues to build relationships amongst participants in a dispute.[5]

  • Summative evaluation focuses on the overall effectiveness of dispute resolution processes or programs. It draws general lessons from interventions in order to improve dispute resolution practice over time. Summative evaluations make judgments about what works and what does not, and are usually conducted after a number of similar interventions have been conducted. These interventions may all focus on one conflict. More often, however, summative studies examine the effectiveness of a series of similar conflict interventions or a program for conflict resolution.

Though primarily designed to systematize reflection by dispute resolution professionals, evaluation can also be useful for the disputants themselves. Because disputants have no mechanism for conscious learning about their conflict and their attempts to resolve it, evaluation can provide a process for disputant reflection and learning. Evaluations are not only done "to you" (by an outsider trained in evaluation techniques) but can also be done "by you," possibly with outside assistance, and geared toward goals of disputants, interveners, and supporters. They can be used not only to determine the impact of interventions (as in traditional evaluations that seek to build general knowledge about conflict and its resolution), but also as a means of intervention itself (as in action evaluations that seek to interactively build the capacity of disputants to resolve their own conflict).

Evaluations can therefore be distinguished by whether they are primarily designed to generate knowledge about a conflict or a conflict resolution process, or to generate knowledge and interactions that may promote resolution of the conflict itself. Knowledge-oriented evaluation seeks to accumulate lessons across cases and to build theory, contributing to our overall understanding of conflict. The products of knowledge-oriented evaluations are often aimed at understanding conflict dynamics and improving the general practice of dispute resolution, rather than attempting to improve a specific intervention.

Participants in a conflict resolution process are often better served by an approach that integrates evaluations into the conflict resolution intervention itself, to help promote "success" of the intervention, as determined by the disputants, conflict resolution interveners, and other stakeholders. The participants in the process become partners in the evaluation, generating goals for both the evaluation and their own efforts at resolution. Because this evaluation is situated within the intervention and actively involves participants in the process, it is known as action evaluation.[6]

What constitutes success?


What can an evaluation tell us about the success (or lack thereof) of an intervention? Since evaluations measure outcomes against goals, criteria for success must grow from the goals of the processes being evaluated. While outside observers often focus on whether or not an agreement is reached, agreement is only one indicator of a successful process. Particularly in intractable conflicts, the absence of agreements may be counterbalanced by general improvements in the conflict dynamics. Multiple criteria can be applied to measure success, as the illustrative scoring sketch following the list below suggests.

In reviews of environmental and public policy disputes, inter-communal conflict resolution, and consensus-building processes, d'Estree, Beck, and Colby (2003), d'Estree et al. (2001) and Innes and Booher (1999) identified criteria such as:[7]

  • Achievement of an outcome: agreements or rulings that are consensual, ratified, and verifiable;
  • The quality of the conflict resolution process: processes that are procedurally just, fair, and reasonable in cost;
  • The quality of the outcome: agreements that are cost-effective, clear, financially viable, culturally and environmentally sustainable, legal, politically and scientifically/technically feasible, and acceptable to the larger public;
  • Satisfaction with outcomes: whether participants and stakeholders in a dispute resolution process are satisfied, think the agreement is fair and agree to comply with it;
  • The quality of the parties' relationships: new relationships resulting in increased trust and an improved emotional climate, reductions in hostility, an increased ability to resolve future disputes, new conceptualizations of the relationship and increased empathy between the parties;
  •  Improved decision-making ability: new learning, changed perceptions and attitudes, integrative framing, problem-solving, better communication and new vocabulary; and
  • Increased social capital: increased capacity to draw on collective resources, empowerment, new leadership, problem-solving and influential participation, new partnerships, organizations and processes that transform the social system within which the conflict occurs.
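
To make this concrete, here is a minimal sketch, in Python, of how an evaluator might operationalize multiple success criteria as a weighted scorecard. The criterion labels, weights, and ratings below are hypothetical assumptions introduced for illustration only; the frameworks cited above would derive such weights from the goals of the parties themselves.

```python
# Hypothetical multi-criteria scorecard for a conflict resolution evaluation.
# All criteria, weights, and ratings are invented for illustration; a real
# evaluation would derive them from stakeholder goals and the cited frameworks.

CRITERIA_WEIGHTS = {
    "outcome_reached": 0.20,       # ratified, verifiable agreement achieved?
    "process_quality": 0.15,       # procedural justice, fairness, cost
    "outcome_quality": 0.15,       # feasibility, sustainability, legality
    "satisfaction": 0.15,          # participant and stakeholder satisfaction
    "relationship_quality": 0.15,  # trust, empathy, reduced hostility
    "decision_capacity": 0.10,     # learning, reframing, communication
    "social_capital": 0.10,        # partnerships, leadership, empowerment
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings (0-10 scale) into one weighted score."""
    missing = set(CRITERIA_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# An intervention that produced no formal agreement but improved relationships
# and capacity can still register meaningful success on the other criteria.
ratings = {
    "outcome_reached": 2.0,
    "process_quality": 8.0,
    "outcome_quality": 5.0,
    "satisfaction": 7.0,
    "relationship_quality": 8.5,
    "decision_capacity": 7.5,
    "social_capital": 6.0,
}
print(f"overall success score: {weighted_score(ratings):.2f} / 10")
```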

In addition to focusing on goal-achievement, evaluations may also focus on:

  • comparisons with other programs;
  • compliance with regulations;
  • the relationship between costs and benefits;
  • giving voice to diverse viewpoints and experiences;
  • describing the context and its effect on the program;
  • describing the program's culture;
  • describing the changes in the program and participants over time (longitudinal focus); and/or
  • answering questions of intended users.

These and many other ways of focusing an evaluation are summarized in Patton (1997).[8]

One should bear in mind that all evaluations involve comparison. Comparisons are typically to a past state of affairs that the parties desire to change. However, comparisons may also be made to a future state of affairs, an ideal state of affairs, or to another similar intervention or setting.
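
As a minimal sketch of evaluation-as-comparison, the hypothetical example below compares participants' post-intervention trust ratings against their own past baseline. Every name and number is invented for illustration; it simply shows the before/after logic in code.

```python
# Minimal sketch of evaluation as comparison: hypothetical trust ratings
# (1-10 scale) from the same eight participants before and after an
# intervention. All data are invented for illustration.
from math import sqrt
from statistics import mean, stdev

before = [3, 4, 2, 5, 3, 4, 3, 2]   # baseline: the past state of affairs
after = [5, 6, 4, 6, 4, 6, 5, 4]    # the state after the intervention

changes = [a - b for a, b in zip(after, before)]
mean_change = mean(changes)

# Paired t-statistic: is the average change distinguishable from zero?
t_stat = mean_change / (stdev(changes) / sqrt(len(changes)))

print(f"mean change in trust rating: {mean_change:+.2f}")
print(f"paired t-statistic: {t_stat:.2f} (df = {len(changes) - 1})")
```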

In addition, evaluations are better at measuring the success of short-term processes and the immediate impacts of those processes. Measurements of long-term impacts are more difficult because these impacts cannot easily be separated from other social processes that are occurring at the same time. Causal models of basic social processes are difficult to build. Particularly in intractable conflicts, these basic social processes may pose fundamental difficulties to resolving the conflict. Do conflict resolution interventions in such cases act as a band-aid on the underlying conflict? Do they work at cross-purposes by resolving an immediate dispute but failing to address larger societal problems? These questions are difficult to answer, and certainly cannot be answered through evaluations of episodic disputes. They require much more carefully designed studies of underlying conflict.[9]

Challenges of evaluation in intractable conflicts

Intractable conflicts pose particular challenges to the design and implementation of evaluations. These challenges stem from the large scale of the conflict, over-ambitious expectations associated with interventions, the difficulty of determining success, the difficulty of establishing cause-effect relationships between interventions and conflicts, and the need for confidentiality. Each is briefly described below.

Large Scale:

  •  Intractable conflicts are often too large to resolve in one intervention.
  •  Evaluation of single interventions cannot logically be linked to the resolution of the larger conflict.

Inflated Expectations:

  •  The goals of any intervention may not be shared by all participants, or may not be articulated clearly.[10]
  •  Although intractable conflicts are often too large to resolve in one intervention, many people have this expectation. If expectations are over-ambitious, the intervention is bound to end in disappointment even if it was actually successful at some level.
  •  Even micro-level interventions can require extensive effort, leading to inflated expectations.

Unclear Indicators of Success:

  •  Interventions often have small but realistic objectives, which may seem insignificant to outsiders.
  •  Interventions may need to adapt to changing conditions. Therefore original goals may be revised as the process evolves. If the conflict continues for a long period of time, the context and the players will inevitably change during the course of intervention.
  •  The impact of the intervention may be difficult to measure in the short run.

Complex Causality:

  •  It is hard to link micro-level interventions to the desired macro-level outputs because theories of change are not well developed.
  •  In complex conflicts, outcomes are affected by more than a specific intervention, so linkages and attributions are bound to remain murky.
  •  It is hard to disentangle effects of interventions from other, unrelated changes in the environment.

Need for Confidentiality:

  •  Because participation in conflict resolution processes may be seen as treason, selling out, or betraying one's own side, confidentiality becomes extremely important.
  •  Because of the need for confidentiality, collecting information for evaluation becomes more difficult.[11]

Who should conduct an evaluation?

Evaluations can be conducted by a professional evaluator, by the conflict resolution intervener or the intervener's peers, or by the participants in the conflict resolution process. Each has advantages and disadvantages.

Outside evaluation is likely to be seen as more "objective" and therefore more credible. Outside evaluators are likely to be more skilled methodologically and have a greater grasp of the theoretical and practical problems associated with the evaluation. At the same time, because these evaluators are independent, they can become more concerned with the maintenance of scientific rigor than with the success of the conflict resolution process itself. They can therefore be highly intrusive and may actually alter the intervention process.

Peer evaluations are conducted by others skilled in conflict resolution, who bring an awareness of the realities and challenges of intervention to bear on their work. Feedback is often more appropriate and valid when it is rooted in the dynamics of the conflict. These evaluators often are as concerned for the success of the resolution process as for the integrity of the evaluation and therefore tend to minimize their intrusiveness. At the same time, peer evaluators often are subject to bias in favor of the field or of the particular intervention being evaluated.

Self-evaluations are conducted by either the facilitator or the participants themselves. Such evaluations allow for the greatest degree of control over the process and collect the most useful information for making improvements. They also intrude the least on the process and most fully protect the confidentiality of parties. At the same time, self-evaluations are subject to considerable bias and thus usually have less credibility with outsiders. In addition, unless assisted by a person skilled in evaluation, these studies are unlikely to maintain the same level of rigor that can be expected from professionally conducted studies.

The choice of who should conduct the evaluation therefore depends on audience and purpose. Audiences for evaluations can include policy makers, civic leaders, the dispute resolution community, funders of dispute resolution processes, the disputants, or other interested persons. The purpose of the evaluation will shift depending on the intended audience. Evaluations that seek to influence outsiders or to determine effectiveness should usually be conducted by professional evaluators. Evaluations that seek to inform participants about the process and to promote active learning within that group can often be conducted by either peers or by participants in the process.

How do you conduct an evaluation?

The specifics of how to conduct an evaluation are beyond the scope of this essay. Rather, we provide a brief overview of the tools used by evaluators, as well as some of the tradeoffs involved in using these tools. More complete coverage of these issues can be found in texts by Patton (2002 and 1997), Rea and Parker (1992), Marshall and Rossman (1999), House (1993), Morgan (1998), and others.[12]

Observations. While the spontaneous observations of untrained observers are often highly subjective, observations made by skilled observers are more accurate, valid and reliable. Observations allow the evaluator to enter into and understand the conflict resolution process and to describe the process in detail. Observations can be made as a participant observer (an evaluator who actively engages in the process) or as a field observer (an evaluator who observes as an onlooker). Observation also provides the evaluator with a more open and inductive process of exploration that is more likely to lead to discoveries because it is less bound by preconceptions embedded in evaluation protocols. However, observational information is more difficult to analyze, particularly when making cross-case comparisons of observations conducted by different evaluators.

Qualitative interviews. Observation reveals patterns of behavior, but not the meaning behind that behavior. Interview data allows the evaluator to enter into another's experience, to probe their memories and motives. In the absence of observations, interview results can be cross-referenced with the results of other interviewees. Thus, while interview data is inherently subjective, much of the bias in the data can be controlled through the selection of interviewees and the structure of the interviews. Yet tradeoffs must be made as to how one conducts the interviews -- whether conversationally, with an interview guide, or through standardized open-ended questions. Less structured and more conversational interviews are more responsive to the interviewee. More structured interviews are better at ensuring that the same data is collected from all interviewees, thereby increasing the comparability of the interviews and the potential for systematic analysis. Structured interviews also reduce interviewer bias when several interviewers are used.
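
A small, hypothetical sketch illustrates why more structured interviews increase comparability: when every interviewee answers the same questions, responses can be lined up and tallied systematically. The questions and answers below are invented for illustration.

```python
# Sketch: structured interviews ask every interviewee the same questions,
# so answers can be aligned and compared across cases. All data invented.
from collections import Counter

interviews = [
    {"biggest_obstacle": "distrust", "process_rating": "helpful"},
    {"biggest_obstacle": "distrust", "process_rating": "neutral"},
    {"biggest_obstacle": "funding", "process_rating": "helpful"},
]

# Because every record shares the same keys, cross-interview tallies are easy.
for question in ("biggest_obstacle", "process_rating"):
    tally = Counter(record[question] for record in interviews)
    print(question, dict(tally))
```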

Fixed response interviews and surveys. Unlike open-ended interviews, fixed response interviews and surveys are highly efficient. Many questions can be asked in a short amount of time and the results are highly comparable and easy to analyze. Statistical tests of significance can be applied if the number of respondents is large enough. Often, a larger pool of respondents can be obtained, allowing for a more robust range of perspectives. At the same time, since the questions are fixed, respondents must fit into the categories and concerns raised by the evaluator. Often surveys are seen as overly mechanistic, asking questions that seem irrelevant to the respondent. Surveys can also distort the meaning of respondents both in the way questions are asked and in the limits put on answering these questions.
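
As a hedged illustration of this statistical tractability, the sketch below applies a chi-square test of independence to an invented table of fixed-response answers from two stakeholder groups. The groups, response categories, and counts are all hypothetical; scipy's chi2_contingency is one standard way to run such a test.

```python
# Hypothetical fixed-response survey analysis: do two stakeholder groups
# answer "Was the process fair?" differently? Counts are invented.
from scipy.stats import chi2_contingency

#             fair  unsure  unfair
responses = [
    [34, 11, 5],    # e.g., a residents' group
    [21, 14, 15],   # e.g., agency staff
]

chi2, p_value, dof, _expected = chi2_contingency(responses)

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
if p_value < 0.05:
    print("response patterns differ significantly between the groups")
else:
    print("no statistically significant difference detected")
```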

Focus groups. Focus groups are group interviews, usually conducted in small discussion groups of about eight individuals. By placing the interviewees into a group setting, issues linked to cultural or social dynamics can often be explored in more detail as participants interact over both what is shared and how their perspectives and concerns differ. At the same time, organizing and conducting focus groups is more difficult than personal interviews. There is also more potential for some members of the group to influence others' perspectives. Finally, focus groups work best when participants feel safe in each other's presence. Focus groups may therefore be of limited use in the context of intractable conflicts.

Document and media analysis. Evaluation often involves reviewing meeting notes, supporting documents, newspaper and news accounts, government documents, and similar recorded materials. Document and media analysis is particularly important when the evaluator has not observed the interactions directly. At the same time, the observations recorded in the documents and on media may be biased, with all the problems associated with observations made by unskilled observers, further complicated by the purposeful distortions that often exist in publicly available documents.

Dialogue with participants. While technically not a distinct evaluation tool, dialogue with participants about the findings can often provide the evaluator with deeper insights. Participants can also highlight concerns that might require either corrections or clarifications. Particularly when evaluating intractable conflicts, the wording and presentation of results often has political or cultural meaning, which the evaluator may wish to understand before finalizing the results. However, ultimate responsibility for the reliability and accuracy of the evaluation report rests with the evaluator.

Conclusions

This essay argues for the central importance of evaluation and systematic reflection in efforts to resolve intractable conflicts. Such reflection provides the learning and knowledge necessary for the design, implementation and recalibration of dispute resolution processes. At the same time, it poses significant difficulties. Useful evaluations require a clear sense of the evaluation's purpose, the desired outcomes, the intended audiences, who might best conduct the evaluation, and the tools and techniques used to conduct it. Figure 2 provides a summary of our discussion.


Figure 2: Types of Evaluation

Objectives
  • Assessment: generate information useful to a specific conflict resolution process before commencement of a formal intervention, in order to decide if a conflict resolution process can work, to design the conflict resolution process, and to select the appropriate type of intervention
  • Formative: generate information useful to a specific conflict resolution process during the formal intervention, in order to promote more constructive and thoughtful engagement and to enable process corrections that can improve the conflict resolution outcome
  • Summative: generate knowledge about a specific conflict resolution process or a whole class of similar processes, in order to help disputants, funders and interveners assess the effectiveness of a completed intervention

Audience
  • Assessment: participants, potential intervener, funders
  • Formative and summative: participants, funders, intervener, conflict resolution community, prospective users, social theorists

Purpose in Knowledge-Oriented Evaluation (all types): to add knowledge useful for interested parties, particularly those outside the dispute, such as funders, the conflict resolution community and social theorists.

Purpose in Action Evaluation (all types): to help actively improve the conflict resolution process and help resolve the conflict.

Who Conducts the Evaluation
  • Assessment: intervener, disputants
  • Formative and summative: intervener, participants, or a non-affiliated researcher

Means (all types): a wide array of techniques is useful at multiple levels, including observation, in-depth interviews, surveys, focus groups, analysis, reports, and dialogue with participants around findings.

Outcome
  • Assessment: immediate feedback, analysis
  • Formative: ongoing feedback, analysis
  • Summative: evaluative analysis and feedback, best practices, program recommendations, improvements of theory and practice

 


[1] Coleman, J. S. (1957). Community Conflict. New York: The Free Press.

d'Estree, T.P., & Colby, B.G. (2003). Braving the Currents: Evaluating Conflict Resolution in the River Basins of the American West. Norwell, MA: Kluwer.

Elliott, M., Gray, B., and Lewicki, R. (2003). "Lessons Learned about the Framing and Reframing of Intractable Environmental Conflicts." in R. Lewicki, B. Gray, and M. Elliott, editors. Making Sense of Intractable Environmental Conflicts. Washington DC: Island Press.

Rubin, J. Z., Pruitt, D. G., and Kim, S. H. (1994). Social Conflict: Escalation, Stalemate, and Settlement. (2nd ed.). New York: McGraw-Hill.

[2] Argyris, C. and Schon, D. A. (1974). Theory in Practice: Increasing Professional Effectiveness. San Francisco: Jossey-Bass Publishers.

[3] d'Estree, T.P., & Colby, B.G. (2003). Braving the Currents: Evaluating Conflict Resolution in the River Basins of the American West. Norwell, MA: Kluwer.

[4] Susskind, L. & Thomas-Larmer, J. (1999). "Conducting a conflict assessment" in The Consensus Building Handbook: A Comprehensive Guide to Reaching Agreement, L. Susskind, S. McKearnan, and J. Thomas-Larmer, eds. Thousand Oaks: Sage Publications.

[5] Patton, M. Q. (1997). Utilization-Focused Evaluation. (3rd ed.). Thousand Oaks, CA: Sage.

[6] Action Evaluation Research Institute (2004) at http://www.aepro.org/.

[7] d'Estree, T.P., Fast, L.A., Weiss, J.N., & Jakobsen, M. S. (2001). Changing the Debate About "Success" in Conflict Resolution Efforts. Negotiation Journal, 17(2), 101-113.

d'Estree, T.P., & Colby, B.G. (2003). Braving the Currents: Evaluating Conflict Resolution in the River Basins of the American West. Norwell, MA: Kluwer.

Innes, J. E. (1999). "Evaluating consensus building" in The Consensus Building Handbook: A Comprehensive Guide to Reaching Agreement, L. Susskind, S. McKearnan, and J. Thomas-Larmer, eds. Thousand Oaks, CA: Sage.

[8] Patton, 1997.

[9] Bush, R. A. B. and Folger, J. P. (1994). The Promise of Mediation: Responding to Conflict Through Empowerment and Recognition. San Francisco: Jossey-Bass.

Innes, J. E. (1999).

Kriesberg, L. (1998). Constructive Conflicts: From Escalation to Resolution. Lanham, MD: Rowman & Littlefield.

[10] Ross, M.H., and Rothman, J. (1999). Theory and Practice in Ethnic Conflict Management: Theorizing Success and Failure. London, England: Macmillan Press Ltd.

[11] d'Estree, T.P., Fast, L.A., Weiss, J.N., & Jakobsen, M. S. (2001).

[12] Patton, M. Q. (1997).

Patton, M. Q. (2002). Qualitative Research and Evaluation Methods. (3rd ed.). Thousand Oaks, CA: Sage.

Rea, L. M. and Parker, R. A. (1992). Designing and Conducting Survey Research: A Comprehensive Guide. San Francisco: Jossey-Bass.

Marshall, C. and Rossman, G. (1999). Designing Qualitative Research, 3rd ed. Thousand Oaks: Sage.

House, E. (1993). Professional Evaluation: Social Impact and Political Consequences. Newbury Park, CA: Sage.

Morgan, D. (1998). The Focus Group Guidebook. Thousand Oaks, CA: Sage.


Use the following to cite this article:
Elliott, Michael, Tamra Pearson d'Estrée and Sanda Kaufman. "Evaluation as a Tool for Reflection." Beyond Intractability. Eds. Guy Burgess and Heidi Burgess. Conflict Information Consortium, University of Colorado, Boulder. Posted: September 2003 <http://www.beyondintractability.org/essay/evaluation-reflection>.

