Qualitative analysis focuses on richly nuanced data from a single context (or several, for comparison studies). This is especially critical for civil-military operations, which have an urgent need to understand problems and possibilities here and now, in a specific village:5
A primary goal of within-case analysis is to describe, understand, and explain what has happened in a single, bounded context – the “case” or site... 6
Qualitative analysis … is unrelentingly local, and deals well with the complex network of events and processes in a situation. (Miles, Huberman, and Saldaña 2013: 100, 223, emphasis in original)
Qualitative researchers deal with, and revel in, confusing, contradictory, multi-faceted data records, rich accounts of experience and interaction. (Richards 2009: 4)
U.S. intelligence has a different focus. Data that starts out richly nuanced is converted to indicators. For example, transcripts of weekly sermons at a mosque would be valuable data for qualitative analysis. The tenor of these sermons, just the tenor, is an indicator for U.S. intelligence. (RAND 2009: 11) Indicators are valuable for comparisons and trends, but they provide only the most superficial understanding of any one particular context:
Unfortunately, the Army Intelligence Community’s transformation risks overlooking one critical, systemic shortfall – the perennial inability to provide sufficiently detailed social, political, cultural, economic, and unconventional threat intelligence to deploying forces under crisis-response conditions. (Tohn 2003: 1)
SNA [Social Network Analysis] tools … while they may be useful for identifying prominent members of networks … most have very little to say about the influence these members may exercise over others in the network. (RAND 2009: 120)
[T]he first casualty of coalition forces engaging in transition is often situational awareness… [O]bjective criteria expressed in measures of effectiveness and measures of progress will have an important role in the transition. However, more subjective or qualitative reporting, the type based on a first-hand understanding of an operating area … will be more valuable in most cases. (L’Etoile 2011: 10)
In a conventional conflict, ground units depend heavily on intelligence from higher commands… Information flows largely from the top down. In a counterinsurgency … the soldier or development worker on the ground is usually the person best informed…Moving up through levels of hierarchy is normally a journey into greater degrees of cluelessness. (Flynn, Pottinger, and Batchelor 2010: 12)
D. Causal inference and assessment within a particular context
Qualitative analysis can investigate cause-and-effect in a way that statistical analysis cannot. Maxwell explains:
Experimental and survey methods typically involve a “black box” approach to the problem of causality; lacking direct information about social and cognitive processes, they must attempt to correlate differences in output with differences in input and control for other plausible factors that might affect the output…
Variance theory deals with variables and the correlations among them… Process theory, in contrast, deals with … the causal processes by which some events influence others. (Maxwell 2004: 248)
A core strategy of causal inference within a specific context (the “case”) is known as process tracing or causal process tracing. (Bennett and Checkel 2012; Langley 2009; Maxwell 2004) In essence, the strategy is simply to examine a string of related events, and ask how and why each one leads to the next.
Process tracing may proceed either forward or backward in time. Tracing forward starts with causes, and traces the chain of actual (or theoretically plausible) events forward to final outcomes. At every step, the analyst looks for alternative hypotheses, intervening variables, supportive evidence, and contrary evidence. One option here is a probing action by one actor (perhaps even with a control group of some sort) to see how others respond. Tracing back is essentially the same strategy, in reverse: start with an outcome and trace the causal chain of events backward in time. Typically, the analyst does both.
- Tracing forward (effects-of-causes): What happens if we do A, B, C?
- Tracing back (causes-of-effects): What worked here? Will it work elsewhere?
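As a minimal illustration (not from the source), the two tracing directions can be sketched as traversals of a hypothesized causal graph. The event names and links below are invented for the example:

```python
# Sketch of process tracing over a hypothesized causal graph.
# Event names and links are illustrative assumptions, not from the source.

causal_links = {
    "demonstration of power": ["popular support"],
    "development projects": ["popular support"],
    "popular support": ["village stability"],
}

def trace_forward(cause, links):
    """Effects-of-causes: follow the chain from a cause toward final outcomes."""
    chain, frontier, seen = [], [cause], set()
    while frontier:
        event = frontier.pop()
        if event in seen:          # guard against cycles in the hypothesized graph
            continue
        seen.add(event)
        chain.append(event)
        frontier.extend(links.get(event, []))
    return chain

def trace_back(outcome, links):
    """Causes-of-effects: walk the same links in reverse from an outcome."""
    reverse = {}
    for cause, effects in links.items():
        for effect in effects:
            reverse.setdefault(effect, []).append(cause)
    return trace_forward(outcome, reverse)

print(trace_forward("demonstration of power", causal_links))
print(trace_back("village stability", causal_links))
```

In practice the analyst does both traversals, checking each link for alternative hypotheses, intervening variables, and supporting or contrary evidence.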
The goal of this analysis is to discover and test a theory of change – a specific pathway of cause-and-effect – that is valid in a particular context. Tracing forward and tracing back are alternate strategies to the same goal. However, as the questions illustrate, tracing forward is especially relevant to operational planning, while tracing back applies most directly to operational assessment. A clearly articulated theory of change is especially important in assessment, because a key question is whether and how the intervention contributed to the outcome. (White and Phillips 2012, Stern et al. 2012)
Visual charts (Figures 1 and 2) are a good way to discipline oneself to think about the logic and evidence behind a theory of change. With minor changes, essentially the same chart as Figure 1 may be used to diagnose the causes of a problem, or solutions to the problem, as in McVay and Snelgrove (2007).
Figure 2 below offers a specific example.
(Note: The published literature uses the term sufficient causes for causes linked by "OR" connections, and necessary causes for causes linked by "AND" connections.7)
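The AND/OR distinction can be made concrete as a small boolean cause tree. This is a hypothetical sketch; the factor names are invented, not taken from the source:

```python
# Sketch: evaluating a cause chart with AND (necessary) and OR (sufficient) links.
# Factor names are illustrative assumptions, not from the source.

def holds(node, facts):
    """Recursively evaluate a cause tree against observed facts."""
    if isinstance(node, str):            # leaf: a single causal factor
        return facts.get(node, False)
    op, children = node                  # ("AND" | "OR", [subtrees])
    results = [holds(child, facts) for child in children]
    return all(results) if op == "AND" else any(results)

# Hypothetical chart: stability requires popular support AND deterred insurgents;
# popular support can come from demonstrated power OR visible development benefits.
stability = ("AND", [
    ("OR", ["demonstrated power", "development benefits"]),
    "insurgents deterred",
])

facts = {"development benefits": True, "insurgents deterred": True}
print(holds(stability, facts))
```

Either branch of the OR suffices on its own, while every branch of the AND is necessary; removing "insurgents deterred" from the facts makes the whole tree evaluate false.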
To illustrate the need for a clearly articulated theory of change, here are two quite different observations on how to succeed in Afghan village stability operations:
First, demonstrate power (Zerekoh Valley)
On 8 May 2010 … Taliban directly attacked the locals and Special Forces teams. Our response—with its speed, violence of action, and effective but discretionary use of indirect fires—was … a decisive moment in coalescing the support of the villagers.
When the villagers perceived such strength, maliks (village elders) became responsive to measures like construction projects, representative shuras, and conflict resolution mechanisms…
The people must believe it is in their interest to resist Taliban threats. They will only do this if they believe that a more dominant and lasting authority will prevail… (Petit 2010: 27)
First, demonstrate benefits (Adirah)
In Adirah, jump-starting a representative shura helped to reinstall local governance councils that had been attrited over the past 30 years of conflict. The key to generating momentum in these shuras was the skilled introduction of development. A Special Forces team sponsored community elders who executed over 55 small projects… The locally run projects—culverts, irrigation, retaining walls, foot bridges—produced clear benefits to the community and quickly galvanized the locals against insurgent encroachment. … Critically, projects were nominated and started in hours and days, not weeks or months. (Petit 2010: 29)
Each of these reports articulates a theory of change, and they are polar opposites:
Theory of Change: Afghan Stability Operations

- Zerekoh Valley: Demonstration of power → Popular support → Development projects → Village stability
- Adirah: Development projects → Popular support → Village stability
Here are some candidate explanations for the contrary theories:
- Each theory is valid for that village only: In both villages, the chosen strategy led to village stability, and the alternate strategy would not have.
- Each theory is valid for both villages: In both villages, the chosen strategy led to village stability, and the alternate strategy would have succeeded, too.
- One or both theories ignore other contributing factors: In one or both villages, stability was achieved for other reasons, in addition to or instead of the articulated theory of change.
To sort out the confusion, qualitative causal inference urges the analyst to articulate clearly a theory of change (a specific pathway of cause-and-effect in a particular context) along with alternative competing hypotheses; to search for observable evidence (necessary clues and sufficient clues) that confirms or rejects each; and to keep iterating toward better explanations.
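The screening logic can be sketched in a few lines: a theory missing a necessary clue is rejected, and a theory whose sufficient clue is observed gains strong support. The theory and clue names below are invented illustrations, not claims about the two villages:

```python
# Sketch: screening competing theories of change against observed clues.
# Theory and clue names are invented for illustration.

theories = {
    "power-first": {
        "necessary": {"show of force preceded support"},
        "sufficient": {"elders cite the coalition response"},
    },
    "benefits-first": {
        "necessary": {"projects preceded support"},
        "sufficient": {"elders cite the projects"},
    },
}

def screen(theories, observed):
    """Reject theories missing a necessary clue; flag those with a sufficient clue."""
    verdicts = {}
    for name, clues in theories.items():
        if not clues["necessary"] <= observed:       # necessary clue absent
            verdicts[name] = "rejected"
        elif clues["sufficient"] & observed:          # sufficient clue present
            verdicts[name] = "strongly supported"
        else:
            verdicts[name] = "still plausible"
    return verdicts

observed = {"projects preceded support", "elders cite the projects"}
print(screen(theories, observed))
```

Each new round of evidence-gathering updates the verdicts, which is the iteration toward better explanations described above.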
When theories of change differ for each specific context, how can one generalize what has been learned? One practical strategy is to develop "typologies of contexts" that behave similarly, and then describe what happens "under these conditions":
Causal mechanisms operate in specific contexts making it important to analyse contexts. This is often to be done only in an ad hoc way but typologies of context are a useful intermediate step towards generalisation in mechanisms based evaluations. Contexts may include related programmes affecting the same target population; socio-economic and cultural factors; and historical factors such as prior development initiatives. Developing typologies of context whilst not supporting universal generalization can support ‘under these conditions’ types generalizations. (Stern et al. 2012: ¶3.40; cf. Rohlfing 2012: 8)
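Operationally, a typology is just a grouping of cases by shared contextual features. The sketch below is hypothetical; the village data and feature names are invented:

```python
# Sketch: forming a typology of contexts from shared features, so findings
# generalize "under these conditions." Village data is invented.

from collections import defaultdict

villages = [
    {"name": "A", "insurgent pressure": "high", "local governance": "weak"},
    {"name": "B", "insurgent pressure": "high", "local governance": "weak"},
    {"name": "C", "insurgent pressure": "low",  "local governance": "strong"},
]

def typology(cases, features):
    """Group cases that share the same values on the chosen context features."""
    groups = defaultdict(list)
    for case in cases:
        key = tuple(case[feature] for feature in features)
        groups[key].append(case["name"])
    return dict(groups)

print(typology(villages, ["insurgent pressure", "local governance"]))
```

A theory of change validated in village A could then be offered, cautiously, as an "under these conditions" generalization for village B but not for village C.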
For practical insight into why and how to do this, economic development is a good source because so many problems are identical to those in civil-military operations:
The evolving nature of the aid relationship and shifts in aid priorities and modalities has many consequences for IE [impact evaluation] designs. Such designs have to be:
- Appropriate to the characteristics of programmes, which are often complex, delivered indirectly through agents, multi-partnered and only a small part of a wider development portfolio.
- Able to answer evaluation questions that go beyond ‘did it work’ to include explanatory questions such as ‘how did it work’, and equity questions ‘for whom do interventions make a difference?’.
Stern et al. (2012: 2.27, emphasis added)
In addition, as (Connable 2012: xix) observes, "Effective assessment depends on capturing and then relating contextual understanding in a way that is digestible to senior decisionmakers." For that, Connable (2012) proposes a bottom-up assessment process, in which higher-level reports contain each of the lower-level reports, with built-in summaries at each level.
III. The basics: qualitative interviewing and making sense of the data
A. Qualitative interviewing
Rubin and Rubin (2012) is an excellent guide to all aspects of qualitative interviewing.8 For example, here is some basic guidance on follow-up questions:
Follow-up questions ask for missing information, explore themes and concepts, and test tentative explanations. They allow you to examine new material, get past superficial answers, address contradictions, and resolve puzzles. They help you put the information in context. (Rubin and Rubin 2012: 169)
Table 1. Basic follow-up questions (from Rubin and Rubin 2012: 137-169)
Purpose: Missing pieces; unclear concepts; broad generalizations
- Such as…? Can you give me an example?
- How would you compare …? (broad, then specific)
- How is that the same or different than …?
- How does this compare with the way things were in the past?

Purpose: Why? (causation)
- Could you tell me how…? How do you go about…?
- Can you step me through that? What happens step by step?
- What happens during…? What led up to …?
- What contributed to …? What influenced …?

Purpose: How do you know?
- You said… Could you give me an example?
- How did you find that out?
- Your unit did… Did you personally have anything to do with it?
B. Making sense of the data
Qualitative analysis is a probe-and-learn process, much the same whether the investigation lasts three years or three hours. It's like putting together an especially diabolical jigsaw puzzle without a picture, like the one that ensnared my family over the Christmas holidays. It pretty much instantly refuted our prior hypotheses. None of the edge pieces fit together directly – only indirectly, through connector pieces ("intervening variables") that are not themselves edge pieces. Color was a weak clue of close attachment, because pieces with no shared colors often fit together. On the other hand, clusters of pieces often came together in interesting shapes – e.g., a bicyclist, boat, or hot air balloon. And as the puzzle came together, other constructs surfaced as useful ways to define concepts and relationships – windows on a building, bridge spans, a flower stall – and, finally, a fully integrated picture of San Francisco landmarks.
Miles, Huberman, and Saldaña (2013) is the essential reference. It is a good choice for a required textbook in a first introduction to qualitative analysis, just to ensure that it becomes a ready reference on the student's bookshelf.
Vakkari (2010: 25) explains the key elements in the process: “[T]here is some evidence of how conceptual construct changes when actors’ understanding grows. In general, it changes from vague to precise. The extension of concepts decreases, the number of sub-concepts increases, and the number of connections between the concepts increases."
Miles, Huberman, and Saldaña (2013) aptly characterize this step of qualitative analysis as "data condensation”:
Data condensation … refers to the process of selecting, focusing, simplifying, abstracting, and/or transforming the data … which data chunks to code and which to pull out, which category labels best summarize a number of chunks, which evolving story to tell … in such a way that “final” conclusions can be drawn and verified. (Miles, Huberman, and Saldaña 2013: 12)
Data condensation is a challenging task for groups that share information but work independently, because the category labels for data are constantly in flux. However, the challenge is manageable. (Ane 2011, Dungan and Heavey 2010, Portillo-Rodríguez et al. 2012) One need not default to a standardized database with preset categories that preclude learning and locally nuanced intelligence.
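One simple way to manage labels in flux is a shared mapping from each team's evolving local labels to canonical categories. This is a hypothetical sketch; the labels and note names are invented:

```python
# Sketch: reconciling evolving category labels across teams that code
# data chunks independently. Labels and chunk names are invented examples.

canonical = {
    "shura": "local governance",
    "elders council": "local governance",
    "irrigation work": "development project",
    "culvert repair": "development project",
}

def condense(coded_chunks):
    """Map each team's local label to a shared category and collect the chunks."""
    tally = {}
    for chunk, label in coded_chunks:
        category = canonical.get(label, label)   # unknown labels pass through
        tally.setdefault(category, []).append(chunk)
    return tally

chunks = [("note-1", "shura"), ("note-2", "elders council"),
          ("note-3", "culvert repair")]
print(condense(chunks))
```

Because unknown labels pass through unchanged, teams can keep inventing locally nuanced codes; the shared mapping is revised as categories stabilize, rather than frozen in advance.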
Developing a less vague, more precise understanding of relationships between individuals and organizations is often a key issue for investigation – i.e., not just who is connected to whom, but why and how. This is especially the case for non-transient relationships in which both parties expect to benefit from repeated interactions (Fig. 3). The business literature calls this a relational contract. (MacLeod 2007). Cabral (2005) calls it trust: "Trust...is the situation 'when agents expect a particular agent to do something.' … The essence of the mechanism is repetition and the possibility of 'punishing' off the-equilibrium actions." Greene (2013: 26, 61) underscores the role of social norms for moral behavior: "Morality is nature's solution to the problem of cooperation within groups, enabling individuals with competing interests to live together and prosper… Empathy, familial love, anger, social disgust, friendship, minimal decency, gratitude, vengefulness, romantic love, honor, shame, guilt, loyalty, humility, awe, judgmentalism, gossip, self-consciousness, embarrassment, tribalism, and righteous indignation… All of this psychological machinery is perfectly designed to promote cooperation among otherwise selfish individuals… "
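Cabral's mechanism, repetition plus the possibility of punishing off-equilibrium actions, can be illustrated with a grim-trigger strategy in a repeated exchange. The payoff numbers below are invented for the illustration:

```python
# Sketch: trust sustained by repetition and punishment (grim-trigger strategy).
# Payoff values are invented for illustration.

def repeated_exchange(rounds, partner_defects_at=None):
    """Play a repeated exchange; after any defection, cooperation stops for good."""
    payoff, punishing = 0, False
    for t in range(rounds):
        if punishing:
            payoff += 0                  # relationship ended: no further gains
        elif t == partner_defects_at:
            payoff += -1                 # cheated this round
            punishing = True             # punish all future rounds
        else:
            payoff += 2                  # mutual cooperation
    return payoff

print(repeated_exchange(10))                        # steady cooperation
print(repeated_exchange(10, partner_defects_at=3))  # defection forfeits future gains
```

The one-round gain from cheating is dwarfed by the forfeited stream of future benefits, which is exactly why both parties' expectation of repeated interaction sustains the relationship.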
Figure 3 is applicable to almost any sort of relationship. The relationship can be distant (a trusted brand with a loyal following). It can even be coercive (“your money or your life").
- "Actor" names an entity, "Attributes" describe it, "Actions" list what it does.
- "Key capabilities" are essential, uncommon, and hard to acquire.
- Each actor gives something and gets something. (Actor A gives GoodA and gets back GoodB.)
- Each actor expects to benefit from repeated interactions.
- "RelationshipType" names a type of relationship.
- Compliance is enforced through monitoring, unilateral actions (ending the relationship, taking violent action), and social norms for moral behavior (love, honor, guilt, gossip).
The diagram serves as a roadmap and visual file cabinet for evidence on questions like these:
- What does each entity get out of the relationship? (GoodA for ActorB, GoodB for ActorA)
- Why do they value those goods? (ActorA’s wants and alternatives to GoodB, ActorB’s desires and alternatives to GoodA)
- How are they able to provide those goods? (ActorA’s capabilities to provide GoodA, ActorB’s capabilities to provide GoodB)
- What future commitments and expectations sustain the relationship? (compliance commitments and benefits)
- What mechanisms exist to monitor compliance? What are the consequences of noncompliance? What incidents of noncompliance have occurred?
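The same roadmap can be kept as a simple record structure, one per relationship, with fields mirroring the diagram's labels. The example values below are a hypothetical sketch drawn loosely from the Adirah vignette:

```python
# Sketch: a record structure mirroring the relationship diagram's elements,
# used as a "file cabinet" for evidence. Example values are illustrative.

from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    attributes: list          # descriptions of the entity
    key_capabilities: list    # essential, uncommon, hard to acquire
    gives: str                # the good this actor supplies
    gets: str                 # the good it receives in return

@dataclass
class Relationship:
    relationship_type: str
    actor_a: Actor
    actor_b: Actor
    compliance: list = field(default_factory=list)  # monitoring, sanctions, norms
    evidence: list = field(default_factory=list)    # filed observations

partnership = Relationship(
    relationship_type="development partnership",
    actor_a=Actor("village elders", ["local leaders"], ["local legitimacy"],
                  gives="community labor", gets="project funding"),
    actor_b=Actor("SF team", ["sponsor"], ["funding access"],
                  gives="project funding", gets="community labor"),
    compliance=["funding stops if diverted", "shura oversight"],
)
partnership.evidence.append("55 small projects executed by community elders")
print(partnership.relationship_type, len(partnership.evidence))
```

Each new interview answer or incident report is appended as evidence under the field it speaks to, so gaps in the record show exactly which questions remain unanswered.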
Data condensation is not a step that can be evaded by asking computer algorithms to screen contextually rich data for relevance and meaning. It is just "one of the realities of case study research: a staggering volume of data." (Eisenhardt 1989: 540) Moreover, “Whereas coding in quantitative analysis is for the explicit purpose of reducing the data to a few ‘types’ in order that they can be counted, coding in qualitative analysis … add interpretation and theory to the data.” (Gibbs 2007: 3)
To manage the process, Miles, Huberman, and Saldaña (2013) recommend developing a simple conceptual framework at the beginning of a study, and revising it continually:
A conceptual framework explains … the main things to be studied – the key factors, variables, or constructs – and the presumed interrelationships among them… As qualitative researchers collect data, they revise their frameworks – make them more precise, replace empirically weak bins with more meaningful ones, and reconfigure relationships. (Miles, Huberman, and Saldaña 2013: 20, 24)
IV. Adapting qualitative analysis to an operational tempo and purpose
A program management framework for civil-military operations called the District Stability Framework (DSF) specifies five sequential steps. (Derleth and Alexander 2011: 127) These are steps in a pathway of cause and effect leading to a desired outcome (Theory of Change), as in Section II.D:
• Situational awareness
• Analysis
• Design
• Implementation
• Monitoring and Evaluation
Essentially the same steps appear in a more iterative framework published by the Mennonite Economic Development Associates (MEDA). Their methodology emphasizes easy-to-use techniques (conversational interviewing, worksheets, and visual charts like Figure 1) to diagnose causes of a problem, and to develop solutions. The authors emphasize that "The toolkit concentrates on qualitative research tools" (Miehlbradt and Jones, 2007: 2) and "Program design is an iterative, ongoing process" (McVay and Snelgrove 2007: 2)
Fast iteration between information-gathering and analysis is the core strategy of qualitative analysis. MEDA's "iterative, ongoing process" extends this to the operational phase. That lets them discover and respond to problems and possibilities continually, through every stage of the program from Situational Awareness to Analysis, Design, Implementation, and Monitoring and Evaluation.
Agile Management/Lean Startup is a similarly iterative strategy for developing products while discovering what features the product should have. It is used primarily in software development, but other industries are beginning to adopt it. Here are its four basic principles (Blomberg 2012: 22, Ries 2009):
- Offer small changes in product features that produce rapid learning about problems and possibilities. (These small changes are called "Minimum Viable Products.") Often this actually is just an offer: "Hey, would you like this new feature?"
- Solicit customer feedback.
- Iterate rapidly.
- Validate learning. Pivot as necessary to explore more promising opportunities.
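The four principles above form a build-measure-learn loop, sketched below. The feature names, feedback scores, and threshold are invented placeholders, not part of any published methodology:

```python
# Sketch of the build-measure-learn loop behind the four principles above.
# Feature names, feedback scores, and the threshold are invented placeholders.

def lean_iteration(backlog, get_feedback, min_score=0.5):
    """Offer one small change at a time, solicit feedback, and validate or pivot."""
    history = []
    for feature in backlog:               # 1. offer a small change (an MVP)
        score = get_feedback(feature)     # 2. solicit customer feedback
        history.append((feature, score))  # 3. iterate rapidly
        if score < min_score:             # 4. validate learning; pivot if needed
            return history, f"pivot away from '{feature}'"
    return history, "persevere"

feedback = {"daily report": 0.9, "map overlay": 0.2, "sms alerts": 0.8}
print(lean_iteration(list(feedback), feedback.get))
```

The loop stops as soon as an offer tests poorly, so effort shifts to a more promising opportunity instead of being sunk into a full build.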
The U.S. Army's active military component of Civil Affairs also embraces an iterative approach. Their strategy relies on training Civil Affairs specialists, in contrast to MEDA's methodological toolkit and Agile Management/Lean Startup's set of guiding principles.
The authors are currently working with the U.S. Army's 83rd Civil Affairs Battalion to explore what could be developed from these varied approaches, to support civil-military operations that are iterative, locally nuanced, and conducted entirely by conventional soldiers:
- Iterative: fast iteration between information-gathering, analysis, and action during all phases of the operation (Situational Awareness, Analysis, Design, Implementation, Monitoring and Evaluation).
- Locally nuanced: the U.S. role is to discover and facilitate local solutions to local problems.
- Conducted entirely by conventional Soldiers: with an easy-to-use methodological toolkit, plus predeployment training that imposes no significant cost, effort, or disruption.
We are using the Agile Management/Lean Startup approach to design our methodological toolkit. A "Minimum Viable Product" (MVP) is one methodological step that produces rapid learning about problems and possibilities. Each element of Figure 4 is an MVP. This project is still at a very early stage, so these MVPs are broad categories of methodological issues that can be investigated separately. In response, a new MVP might propose a simpler operational context, a more specific category than "Gather information", a new causal factor that isn't currently on the slide, and so on.
Figure 4. Designing a methodology in small steps (Minimum Viable Products)
V. Conclusion
It seems counterintuitive that one can search systematically for answers to questions we are too clueless to ask, and quite impossible to test hypotheses with a sample size of one. But that is what qualitative analysts do.
The role of operational qualitative analysis is to facilitate rapid and accurate decisionmaking in one-of-a-kind, poorly understood contexts – for example, in locally nuanced civil-military operations. That helps commanders quickly identify and act on locally derived and context-specific information, subsequently developing a theory of change tailored to that area.