What is an Explanation? - Part 2

This is the second post in a two-part series on explanation. In the first post, we defined what "explanation" is, distinguishing it from related concepts like "argument". We then established a baseline for evaluating explanations and dove into the psychological and cognitive mechanisms that cause us to deviate from applying ideal criteria when presented with an explanation. We finished with the social epistemology of explanation, showing that explanation hardly takes place in a vacuum, and that these contextual/environmental features of explanation evaluation should be considered.

In this post, I want to revisit what makes an explanation "good", with particular focus on causal and mechanistic explanations. First, we will distinguish between explanation and pseudo-explanation. Then we will dive into this particular type of explanation.

Explanation vs Pseudo-Explanation

An explanation needs to "do work". It does this when it adds epistemically useful structure—it doesn’t just rename the phenomenon, list features, or tell a coherent story. Philosophers and cognitive scientists converge on a practical idea. A genuine explanation must support understanding in a way that licenses inferences (especially about dependence, counterfactuals, mechanisms, constraints, or reasons). A pseudo-explanation produces the feeling of understanding without adding that inferential power. Here are the main requirements that distinguish explanations from pseudo-explanations, grouped by what they must provide and what pseudo-explanations typically lack.

  1. It must cite an explanatory relation, not just restate the explanandum - The explanans must stand in a recognized explanatory relation to the explanandum: causal, mechanistic, nomological, mathematical/structural, grounding, or reason-giving. If replacing the explanans with the explanandum (or a synonym) changes nothing, it’s not explanatory. Examples of pseudo-forms:

    • Circularity / restatement: “It happened because it was destined to happen.”
    • Label-as-cause: “He did it because he has impulsivity.” (If “impulsivity” is just a re-description of doing impulsive things, no extra dependence is given.)
    • Tautological dispositions: “It dissolves because it’s soluble.” (Unless “soluble” is unpacked in terms of microstructure and conditions.)
  2. It must be difference-making relative to a contrast - Explanations should answer a contrastive “why this rather than that?” question (often implicitly), and cite factors that made the difference. Good explanations specify (or at least track) e rather than E* and c rather than C*, instead of offering an undifferentiated “because.” This is contrastivism; we will elaborate on it in the next section. Does the explanation tell you what would have to change to get the alternative outcome? Examples of pseudo-forms:

    • A factor that is always present in both the actual case and the relevant alternative (“oxygen caused the fire” when oxygen is present in both fire and no-fire scenarios).
    • “Everything-and-nothing” causes: “Society” or “human nature” as catch-all drivers that don’t discriminate among outcomes.
  3. It must support counterfactuals / interventions (where causal explanation is at issue) - For causal explanations, a good explanation tells you what would change under interventions or relevant counterfactual variations. Can you use the explanation to answer “If we changed X, would Y still occur?” with non-handwavy specificity? Examples of pseudo-forms:

    • Explanations that predict the outcome but give no handle on manipulation: “It happened because it was his karma.”
    • “Post hoc” narratives that fit the facts but don’t constrain what would happen in nearby cases.
  4. It must provide mechanism or lawful/structural constraints (not just a black box) - It should open the black box to the degree appropriate to the question: mechanistic (intermediate steps, components, and organization; common in biology, tech, and psychology), nomological/statistical (laws or stable regularities plus boundary conditions), or structural/mathematical (constraints, symmetries, optimization, proofs; these are non-causal explanations). Examples of pseudo-forms:

    • Mystery tokens: “quantum,” “energy,” “vibrations,” “toxins,” “manifestation,” used as explanatory garnish without specifying a mechanism, constraint, or testable implication.
    • Neuroscience gloss: irrelevant brain detail that signals “science” without improving relevance (a known effect in explanation evaluation).
  5. It must be anchored in evidential support that is diagnostic, not just compatible - Evidence should not merely be consistent with the explanation; it should be at least somewhat discriminating against plausible competitors (i.e., it raises the probability of this explanation more than rivals). Name two plausible alternatives. What evidence would count more for one than the other? Examples of pseudo-forms:

    • “Just-so” stories: plausible, coherent after the fact, but not supported by evidence that would look different if the story were false.
    • Barnum-style “fits everything” explanations: high fit because they’re vague.
  6. It must handle alternatives and defeaters (defeasibility discipline) - Good explanations are defeasible: they can say what would undermine them, and they can respond to counterevidence without ad hoc patches. Can it lose? If it can’t lose, it isn’t explaining. Examples of pseudo-forms:

    • Immunizing moves: “Any counterexample proves it’s hidden,” “If you doubt it, that shows it’s working.”
    • Unfalsifiable flexibility: the story can be adjusted to fit anything.
  7. It must reduce complexity in a non-cheating way (compression with integrity) - Explanations are a kind of compression—they reduce many facts into an organizing dependence. But the compression must preserve the right structure. Does it generalize appropriately (scope) without becoming vacuous (too broad)? Examples of pseudo-forms:

    • Overcompression into a slogan (“It’s all capitalism,” “It’s all trauma,” “It’s all greed”) when the phenomenon varies widely under that label.
    • Overfitting narrative: a detailed story that fits this one case but has no generalizable constraints.
  8. It must be appropriately precise (not vagueness masquerading as depth) - Precision doesn’t always mean math, but it means specifying variables/conditions, what kind of effect, what kind of dependence, and the relevant context/contrast. Can you turn it into a simple model: “Under conditions A, factor X increases probability/produces Y via pathway P”? Examples of pseudo-forms:

    • Barnum explanations about people: “You’re independent but value relationships.”
    • “Multi-factor” as a shield: listing many factors without specifying their roles, interactions, or relative importance.

When you hear an explanation, try these six questions:

  1. What’s the contrast? Why this rather than what?
  2. What’s the dependence claim? Cause, mechanism, constraint, grounding, reasons?
  3. What would change the outcome? (Counterfactual/intervention handle)
  4. What’s inside the black box? At least one intermediate step.
  5. What would count against it? (Defeaters / loss conditions)
  6. How does the evidence discriminate it from rivals?

Pseudo-explanations succeed because they deliver psychological goods (clarity, closure, identity fit, coherence) without paying the epistemic costs (mechanism, discriminating evidence, defeasibility, contrast sensitivity). In hostile social environments, those psychological goods can be deliberately optimized—so the “feeling of explanation” becomes separable from explanation quality.

Contrastive Explanations

As mentioned briefly in the prior section, good explanations often explain "X rather than Y"; in other words, they provide a contrast class. This is very similar to how model-based reasoning operates in science: you normally have a baseline against which you compare alternative models. In many cases, we are asking these "rather than" counterfactual-style questions. For example, "why did this person recover from this illness" implicitly means "why did this person recover, rather than remain sick." A good explanation for "Why X?" should explain "Why X rather than Y?".

But counterfactual dependence in the binary sense (if not-c then not-e) is often too crude to constitute a sufficient explanation. In Jonathan Schaffer’s "Contrastive Causation", the main lesson is that many causal/explanatory questions are implicitly contrastive—they ask for a difference-maker between options, not an absolute producer of an effect. And the option set is normally larger than a single contrast.

Schaffer argues that it’s a mistake to assume causation is always binary (c causes e). Instead, he proposes a quaternary, contrastive structure: c rather than C* causes e rather than E*, where C* and E* are nonempty sets of alternatives. And he connects this to the thought (from Mackie) that causal talk answers a difference question: what made the difference between cases where the explosion didn’t happen and this case where it did.

On this view, an explanation is not merely “what caused e,” but “what made this happen rather than that—given the relevant field of alternatives.” So explanatory quality improves when an explanation identifies the right contrasts, and tracks difference-making at the level that matters (not too coarse, not too fine). Schaffer explicitly emphasizes that any indeterminacy we feel is often not in “the causal relation itself” but in which contrasts the context has silently supplied: the binary statement is under-specified about what it really encodes.
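Schaffer’s quaternary relation can be made concrete with a toy sketch. Everything below (the window model, the force thresholds, the contrast sets) is an invented illustration, not Schaffer’s formalism: it simply checks whether an actual condition c yields e while a contrasting condition c* yields something in the contrast set E*.

```python
# Toy sketch of the quaternary relation "c rather than C* causes e rather than E*".
# The window model and all values are invented for illustration.

def window_outcome(impact_force: float) -> str:
    """A toy model mapping a condition (impact force) to an outcome."""
    if impact_force >= 10.0:
        return "shattered"
    if impact_force >= 5.0:
        return "cracked"
    return "intact"

def is_contrastive_cause(model, c, c_star, e, e_star_set) -> bool:
    """c-rather-than-c* causes e-rather-than-E* iff the actual condition c
    yields e, and the contrasting condition c* yields an outcome in E*."""
    return model(c) == e and model(c_star) in e_star_set

# "Throwing hard rather than not throwing" explains "shattered rather than intact":
print(is_contrastive_cause(window_outcome, 12.0, 0.0, "shattered", {"intact"}))  # True

# The same cause fails against a different contrast: relative to a medium-force
# alternative, 7.0 yields "cracked", which is outside the chosen contrast set:
print(is_contrastive_cause(window_outcome, 12.0, 7.0, "shattered", {"intact"}))  # False
```

The point of the sketch is that the truth of the causal claim depends on which C* and E* are supplied; an underspecified binary claim simply hides that choice.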

If we’re building a general rubric for “good explanations,” Schaffer motivates adding a distinct dimension: contrastive adequacy. A good explanation should make clear:

  1. Explanandum contrast (E vs E*): What outcome are we explaining instead of what? “Why did the window shatter rather than remain intact?” is different from “Why did the window shatter rather than crack?” These demand different explanatory resources.
  2. Explanans contrast (C vs C*): What factor are we treating as the difference-maker relative to which alternative(s)? “Moderate smoking rather than abstaining…” vs “moderate smoking rather than heavy smoking…” (Schaffer uses Hitchcock here to show the causal valence flips with the contrast).
  3. Background field (what’s being held fixed): The context sets what’s “in the field” and what’s backgrounded—what alternatives are live.

Binary counterfactual tests can be trivially satisfied or fail for the wrong reason if you choose the wrong “not-c” and “not-e.” Schaffer’s point is: specify the alternatives explicitly and the dependence structure becomes principled.

Contrast sets help with “explanatory depth”: they force you to say what your explanation actually accounts for. A major payoff in Schaffer is that contrasts let you separate explaining whether something happened from explaining how it happened (or in what manner). For example, a tiny dust speck might not explain why the window shattered rather than stayed intact, but it could explain why it shattered in manner m rather than m′—i.e., it made a difference to the fine-grained outcome even if not to the coarse-grained one. Schaffer’s general moral: “the contrasts measure impact.” So contrastive framing gives you a clean “depth” dimension. A shallow but acceptable explanation explains why e rather than E* at a coarse contrast. A deeper one also explains why e happened in this way rather than that way (finer effect contrasts), or why this cause rather than that cause mattered.

Schaffer gives a precise place where context matters without collapsing explanation into "anything goes" (unbounded and subjective). He discusses the classic short circuit vs oxygen case: people call the short circuit “the cause” but oxygen “background.” Schaffer’s reconciliation is that there is no objective selection independent of contrasts, but there is an objective answer given the contrasts fixed by the inquiry. And he ties that to predictability: once you know what question is being asked (what contrasts are relevant), you can predict which factor will count as “the cause.” This implies that a “good explanation” is not one that names the cause simpliciter; it is one that is honest about the contrastive question it answers and does not smuggle in a contrast set that flatters the explainer’s preferred conclusion.

This is where Schaffer plugs directly into Nguyen-style concerns about clarity and closure we discussed last post.

  1. Contrast specification is an antidote to “seductive clarity” - A lot of weak explanations feel satisfying because they sound like they explain “the phenomenon,” but they only explain it under a narrow, favorable contrast. Making contrasts explicit forces you to check “Does this really answer my why-question, or a nearby one?” That reduces the chance of premature closure (seizing/freezing), just-so stories that never say what alternatives they beat, and “explanatory depth illusions” where you mistake a coherent narrative for an answer to the right contrastive question.

  2. Contrast gerrymandering is a central social-epistemic failure mode - In hostile or echo-chamber environments, groups can stabilize explanations by manipulating what counts as a “live alternative”. They do this by shrinking C* (only straw alternatives count), shrinking E* (only convenient outcome contrasts get discussed), or moving inconvenient possibilities into the “background” so they’re never tested. Schaffer gives you a clean diagnostic for this kind of manipulation: ask what the contrast sets are, and whether they were selected in a principled way or engineered to secure a foregone conclusion. (This is the contrastive analogue of Boyd’s “narrow source base” problem: narrow informational range often pairs with narrow contrast range.)

Integrating Schaffer into the earlier “good explanation” criteria:

  1. State the explanandum contrast explicitly: e rather than what?
  2. State the explanans contrast explicitly: c rather than what?
  3. Check robustness across reasonable contrasts: If you swap in other plausible C* or E*, does the explanation still look good, or does it collapse?
  4. Measure impact at the right grain: Does it explain the coarse event (e vs far E*) or merely the manner (m vs m′)? Schaffer’s “contrasts measure impact” idea is key here.
  5. Audit the context/background: What got held fixed, and is that appropriate to the inquiry?
  6. Locate indeterminacy properly: If you feel torn between explanations, is it because the causal facts are underdetermined—or because the contrast sets are underspecified?

A high-quality explanation is one that wins a well-defined contrastive contest—it shows what made the difference between this and that, under these live alternatives, with this background held fixed.

Mechanisms

Next is the “mechanisms” picture of explanation as it’s developed in the new mechanistic philosophy (especially in biology, neuroscience, psychology), drawing heavily on the SEP entry.

Under the canonical “new mechanist” view, a mechanism is a system whose organized components produce / underlie / maintain a phenomenon. A widely cited formulation is that mechanisms are “entities and activities organized” so that they are productive of regular changes. Another influential formulation (from Stuart Glennan) emphasizes mechanisms as complex systems whose interacting parts produce a behavior.

According to the mechanist approach, to explain a phenomenon is not (primarily) to fit it under a law in a deductive argument (the old “covering law” ideal), but to describe the worldly causal structure—the organized causal goings-on—by which the phenomenon happens. In this framework, the phenomenon is the behavior/capacity of the mechanism as a whole. Every mechanism is a mechanism of some phenomenon. For example, protein synthesis mechanism → synthesizes proteins and action potential mechanism → generates action potentials. The phenomenon fixes the boundaries of the mechanism. What counts as “in” the mechanism is determined by what is relevant to explaining that phenomenon. The mechanism–phenomenon relation isn’t always “production” in a single sense. Mechanists distinguish talk of mechanisms that produce, underlie, or maintain phenomena (e.g., homeostatic maintenance). So the explanandum isn’t just an “output”; it can be a temporal unfolding, a capacity, or a maintained range.

What are the parts of a mechanism? Mechanists standardly parse mechanisms into four components: (1) the phenomenon, (2) parts (entities), (3) causings (activities and interactions), and (4) organization. Having covered the phenomenon above (organization is treated below), the core ingredients are:

  1. Entities (parts) - These are the things in the mechanism: molecules, cells, organs, components of a device, agents in a social mechanism, etc. A common requirement is that parts have some robustness—they can often be characterized as entities with stable properties (though there’s debate for “ephemeral” biochemical cases).
  2. Activities (what the entities do) - Activities are the doings: binding, firing, pumping, depolarizing, catalyzing, inhibiting, signaling, etc. The distinctively mechanistic thought here is: you don’t just list entities; you specify what they do and how those doings connect.
  3. Interactions - Activities are rarely isolated. Mechanisms work through interactions among parts—often including feedback, inhibition, and “double prevention” patterns common in biology and special sciences.

Mechanists don’t all define “cause” the same way. The SEP entry lays out several approaches, but the shared orientation is that causation is not just regular succession (anti-Humean in spirit), and that it must be broad enough to cover the special sciences, where conserved-quantity “push-pull” pictures often don’t fit. Here are two dominant “cause” conceptions within mechanistic thinking:

  1. Productive/activities-based causation - Some mechanists (inspired by G. E. M. Anscombe) treat causation as fundamentally productive activity—magnetic attraction, hydrogen bonding, enzyme catalysis, etc. On this view, to cite a cause in a mechanism is often to cite an activity-type that does the producing.
  2. Difference-making / interventionist causation - Others—especially those focused on explanation—lean toward James Woodward–style interventionism: a variable is causally relevant if ideal interventions on it would change the effect variable. This meshes naturally with experimentation: removal, stimulation, blocking, and modulation of components are ways of probing mechanistic causation.

Mechanistic explanations often combine both intuitions. Mechanisms involve productive activities in the world, and good mechanistic models capture difference-makers (what you can change to change the phenomenon).
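The interventionist idea can be sketched with a toy structural model of the short-circuit/oxygen case. The variables and the single structural equation here are assumptions invented for illustration: a variable counts as causally relevant when an intervention that overrides it (severing its normal determinants) changes the effect.

```python
# Hedged sketch of Woodward-style interventions on a toy structural model.
# Variables and the structural equation are invented for illustration.

def run(exogenous, do=None):
    """Evaluate the model. `do` forces a variable to a value, overriding
    whatever would normally determine it (an ideal intervention)."""
    do = do or {}
    v = dict(exogenous)
    v["short_circuit"] = do.get("short_circuit", v["short_circuit"])
    v["oxygen"] = do.get("oxygen", v["oxygen"])
    # Structural equation: fire occurs iff short circuit AND oxygen are present.
    v["fire"] = do.get("fire", v["short_circuit"] and v["oxygen"])
    return v

actual = {"short_circuit": True, "oxygen": True}

print(run(actual)["fire"])                                # True
print(run(actual, do={"short_circuit": False})["fire"])   # False: causally relevant
print(run(actual, do={"oxygen": False})["fire"])          # False: also relevant
```

Note that oxygen is just as much a difference-maker under intervention as the short circuit; which factor gets called “the cause” is settled by the contrasts the inquiry fixes (as in Schaffer’s short-circuit vs oxygen discussion above), not by the model itself.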

Mechanisms aren’t mere heaps of parts. The “mechanism-ness” is in organization—spatial, temporal, and activity-organization. The SEP entry contrasts organization with aggregation: in aggregates, rearranging parts doesn’t change the whole much; in mechanisms, organization is crucial and often non-additive. Mechanists often highlight a few organizational dimensions: spatial organization (location, orientation, shape), temporal organization (ordering, rate, duration), network/feedback structure (loops, cycles, motifs), and modularity / near-decomposability (subcomponents interact more strongly internally than externally).

One of the most important mechanist contributions is a precise notion of levels of mechanisms. Mechanists emphasize that mechanisms are often hierarchically organized and that explanation frequently spans multiple levels—organ, circuit, cellular, molecular, etc. What is a “level” here? It is not “size scale” in the abstract, and not grand metaphysical strata. Rather levels of mechanisms are defined locally within a multilevel mechanism: A is at a lower level than B when A is a part of B and A is organized with other components so that together they realize B. That yields “nested” explanation. The phenomenon at one level (e.g., hippocampus generates a spatial map) can itself be explained by a lower-level mechanism (cellular interactions), which in turn can be explained by still lower-level mechanisms (molecular interactions). This enables “hierarchical explanation”. Mechanistic relations (part–whole + organized realization) give you a principled way to explain a capacity at a higher level by decomposing it into organized sub-capacities, while still recognizing that higher-level organization matters (so it’s not automatically “reduce everything to molecules”).

Mechanists are explicit that “mechanism” doesn’t mean a narrow, old-fashioned, clockwork picture. For example:

  • not necessarily deterministic (can be stochastic)
  • not necessarily reductionistic (can be legitimately multilevel)
  • not necessarily machine-like (biological/social mechanisms aren’t designed artifacts)
  • not necessarily linear/sequential (feedback and cycles are common)
  • not necessarily neatly localizable (distributed mechanisms exist)

The SEP entry gives a very direct list of what mechanisms are not:

  • Entities/objects by themselves aren’t mechanisms (mechanisms do things—they’re tied to a phenomenon).
  • Correlations (or mere temporal sequences) aren’t mechanisms.
  • Inferences/arguments/reasons aren’t mechanisms (logical relations aren’t the same as causal/mechanistic relations—though there can be mechanisms of reasoning).
  • Symmetries aren’t mechanisms (highly general structural facts, not organized productive systems).
  • Fundamental laws/fundamental causal relations aren’t mechanisms (if fundamental, there’s no deeper mechanism “for” them).
  • Logical/mathematical necessities aren’t mechanisms (true in all possible worlds, not dependent on this world’s causal organization).

This is a crucial demarcation: mechanisms are worldly causal organization, not mere patterns, summaries, or abstractions (unless those abstractions are explicitly used to represent causal organization).

Mechanistic explanations explain by showing how the phenomenon arises from organized parts and activities—that is, by representing the relevant causal/mechanistic structure. Mechanists contrast this with the old idea that explanation is primarily about the logical form of an argument. Instead, they say explanatory models are explanatory in virtue of what they represent: they must refer to the structures that produce/underlie/maintain the phenomenon.

A central quality distinction in mechanistic explanation is between a merely how-possibly story (a candidate arrangement that might generate the phenomenon), and a how-actually-enough account (accurate enough about how the parts/activities are in fact organized, for the purposes at hand). This connects to the broader theme of pseudo-explanation: a mechanistic explanation is pseudo-ish when it stays at “how-possibly” (or black-box functional labels) but is treated as “how-actually” without evidential anchoring.

Mechanisms give you the “work” explanations must do:

  • difference-making: which components matter, how interventions change outcomes
  • constraint: which outcomes are ruled in/out by the organization
  • multi-level integration: how different scientific descriptions relate (organs ↔ cells ↔ molecules)
  • prediction + control: once you know the organized activities, you can often predict what happens under perturbations and design interventions

Good Mechanistic Explanation

A mechanistic explanation is “good” when it does more than tell a plausible story about what might be going on—it correctly represents enough of the mechanism’s parts, activities, and organization to answer the relevant why/how question, support interventions/counterfactuals, and withstand challenges from alternatives. The “new mechanists” (especially Peter Machamer, Lindley Darden, Carl Craver, and Stuart Glennan) emphasize that mechanisms are entities + activities organized so they are responsible for a phenomenon. Below is a practical “quality anatomy” for mechanistic explanation—broken into the main dimensions philosophers use when they say a mechanistic model is sketchy, how-possibly, how-actually-enough, or deep.

  1. A good mechanistic explanation is clear about the explanandum and its contrasts. A mechanistic explanation must begin by identifying exactly what phenomenon is being explained. Mechanisms are always mechanisms for some phenomenon, capacity, or behavior, and that target phenomenon determines what counts as a relevant component, what falls inside the explanatory boundary, and what belongs outside it as mere background. For that reason, the phenomenon has to be specified at the right grain. If it is characterized too vaguely, almost any story can be made to fit; if it is defined too narrowly or too finely, the model may become impossible to test or support. A good explanation also ties the phenomenon to the conditions under which it occurs, including when it happens, the range over which it operates, and what counts as successful functioning. In addition, when relevant, the contrast class must be made explicit. Many weak mechanistic explanations fail because they do not clarify whether they are explaining why a phenomenon happens at all, why it happens in this particular way, or why it happened rather than some salient alternative. This links mechanistic explanation to contrastive explanation more generally: often the real explanatory task is not mere production in the abstract, but showing what makes the difference within a field of alternatives.

  2. It identifies the right components: entities and activities, not just labels. A mechanistic explanation is not adequate if it merely names a process or gives a descriptive label. It must specify the relevant entities, meaning the parts or components involved, and the activities or interactions through which those parts operate. This is central to the classic mechanistic idea that explanation requires showing how entities and activities are organized so as to produce the phenomenon. A common failure mode is pseudo-mechanistic explanation, where one simply substitutes a label or disposition for an account of the mechanism itself. Saying that something occurs because of “attention,” “inhibition,” or “inflammation” is not yet a mechanistic explanation unless one also explains what is attending, inhibiting, or inflaming what, through what pathway, and under what conditions. A good mechanistic explanation therefore uses activities that are properly typed rather than generic placeholders. It should specify actions such as binding, firing, pumping, phosphorylating, or gating, and connect those activities to particular entities. Most importantly, it must show how those organized activities are productive of the phenomenon, meaning that they generate or maintain it rather than merely accompanying it.

  3. Organization is where the real explanatory work happens; good explanations are not mere part lists. Mechanisms are not just collections of parts. What makes them explanatory is the way those parts are organized. A strong mechanistic explanation therefore does more than list components; it shows how they are arranged spatially, temporally, and in terms of control structure. Spatial organization includes matters such as location, orientation, compartmentalization, and connectivity. Temporal organization includes the sequencing of operations, the timing of interactions, the rates at which processes occur, and whether they function synchronously or asynchronously. Control organization involves patterns such as feedback loops, feedforward paths, inhibition, gating, and oscillation. These are not optional embellishments but often the core of the explanation, because the same parts can behave very differently depending on how they are wired together and coordinated over time. A warning sign of a poor mechanistic explanation is therefore a diagram or description that simply gives a list of parts connected by arrows without explaining why this particular arrangement matters or how the ordering and connectivity contribute to the production of the phenomenon.

  4. A good mechanistic explanation is sensitive to evidence: it aims for “how-actually-enough,” not merely “how-possibly.” Mechanistic philosophy often distinguishes between a how-possibly model and a how-actually model. A how-possibly model gives a plausible account of how a phenomenon could be produced, but it may remain speculative. A better mechanistic explanation is one that is how-actually-enough for the scientific purpose at hand. This means that the proposed mechanism is not just imaginable or superficially plausible, but is adequately supported by evidence showing that it maps onto the real components and activities involved. Carl Craver emphasizes that mechanistic models explain when they map in the right way onto actual entities and activities, not merely when they reproduce input-output behavior. A strong explanation is therefore constrained by multiple forms of evidence rather than by behavioral fit alone. It should also make discriminating commitments, meaning that it says enough about the structure of the mechanism that it could turn out to be wrong in identifiable ways. This is a sign of explanatory seriousness: the explanation does not merely accommodate the data, but risks falsification by making definite claims about how the mechanism is put together.

  5. A good mechanistic explanation includes the right amount of detail and the right components. Mechanistic explanations can fail in two opposite directions. They may be too thin, offering only a black-box or handwavy story, or they may be too bloated, including so much causally upstream detail that the mechanism itself becomes obscured. What matters is explanatory relevance. A good explanation includes the parts and activities that are responsible for the phenomenon at the level and grain of interest, while treating the rest as background conditions or contextual factors. This requires distinguishing between constitutive components, whose organized activity makes up the phenomenon, and merely causal background conditions that may enable the mechanism without being part of it in the explanatory sense. One mark of success is that the explanation supports difference-making tests. If a purported component is genuinely relevant, then changing, removing, or disrupting it should have a predictable effect on the phenomenon within the regime being studied. The goal is not maximal detail, but relevant detail.

  6. Mechanistic depth often comes from hierarchy, decomposition, and recomposition. Mechanisms are frequently multi-level. A mechanism at one level, such as a neural circuit producing a behavior, may itself be constituted by lower-level cellular or molecular mechanisms, while also forming part of a higher-level organismic or social mechanism. Philosophers of mechanism often emphasize that these levels should not be understood as one single universal hierarchy, but rather as local part-whole relations within nested mechanisms. A good multi-level mechanistic explanation shows depth by decomposing the phenomenon into organized sub-operations and then recomposing those sub-operations into an account of how the overall phenomenon arises. In other words, it breaks the system down and then shows how the organized pieces work together again. There are important failure modes here. One is greedy reductionism: diving to a lower level of analysis without showing why that lower level actually helps answer the original explanatory question or contrast. Another is empty levels talk, where one invokes labels such as “molecular,” “cognitive,” or “systemic” without actually linking those levels through concrete part-whole relations and organized activity. Good multi-level explanation requires those links to be explicit.

  7. A good mechanistic explanation captures robustness, invariance, and generality without becoming empty or vacuous. Mechanistic explanation is stronger when it does not merely fit a single observed case, but captures a stable causal organization that remains informative across a range of conditions. This is where robustness and invariance matter. A good explanation should show how the mechanism behaves under typical conditions, how it responds to perturbations, and where its boundary conditions lie. It should identify not just one trajectory through the mechanism, but the stable relations that persist across relevant changes. This is why intervention and stability are so important in mechanistic thinking. Explanations are valuable partly because they support systematic reasoning about what would happen if some component or condition were altered. A good explanation therefore states when the mechanism works, when it breaks down, and what kinds of changes leave key relations intact. It achieves a kind of generality, but without becoming so abstract that it loses contact with the real organization responsible for the phenomenon.

  8. Mechanistic credibility depends on explanatory testing norms and evidential support. Mechanistic explanations are not just stories; they earn credibility through characteristic forms of evidence. Different sciences use different techniques, but there are recurring evidential strategies that philosophers highlight. These include interventions such as lesioning, blocking, stimulating, knockout or knockdown, or adding and removing components. They also include tracing and localization methods, which show that the relevant parts exist, are connected appropriately, and are active at the right times. Parameter modulation is another important strategy, where systematic changes in dose, intensity, or rate reveal regular effects on the phenomenon. Double dissociations and selective disruptions can also be powerful, because they show that specific sub-operations depend on specific components rather than on the system as a whole in an undifferentiated way. The broader philosophical point is that the more tightly an explanation is linked to testable dependencies and organized activities, the less it remains a mere narrative gloss. Its credibility grows in proportion to the experimental and observational support tying it to real mechanisms.

  9. Abstraction and idealization are legitimate, but only if they preserve the organizational features that do the explanatory work. Mechanistic explanations do not need to include every micro-detail in order to be good. In fact, abstraction and idealization are often virtues, because they allow the explanation to focus on the features that matter most. A selective model can therefore be better than a hyper-detailed one, provided it preserves the causal topology and organization responsible for the phenomenon. Good idealization simplifies without distorting the relevant structure. It omits detail while retaining the relations that generate, sustain, or regulate the explanandum. Bad idealization, by contrast, strips away too much and replaces mechanistic content with vague forces, capacities, or placeholders that cannot be tied back to interventions, constraints, or organized activities. The issue is therefore not whether abstraction is used, but whether what is abstracted away can truly be ignored for the explanatory purpose at hand. A good mechanistic explanation can be selective, but it cannot become explanatory “mystery meat.”

So a good mechanistic explanation (applied in the correct context) consists of the following:

  1. Defines the phenomenon + relevant contrast(s).
  2. Identifies entities and activities that are genuinely productive.
  3. Specifies organization (spatiotemporal + control structure).
  4. Shows multi-level relations where relevant (decompose/recompose).
  5. Is how-actually-enough: anchored by discriminating evidence and correct mapping to real components.
  6. Supports counterfactual/intervention reasoning and states boundary conditions.
  7. Competes successfully against plausible alternative mechanisms.

Pseudo-mechanistic explanation consists mostly of labels (“X causes Y”), unspecified arrows, missing organization, unfalsifiable flexibility, and little intervention-relevant structure. Mechanistic explanation is one of the best antidotes to “seductive clarity” when it’s done right, because it forces you to cash out: what changes what, through what steps, under what conditions. But it can also be faked (e.g., diagrams with impressive boxes/arrows, or irrelevant technical detail). The key quality checks above—organization, mapping, interventions, boundary conditions, and alternatives—are what separate genuine mechanistic understanding from mechanistic theater.

Yes, But What's the Mechanism?

Identifying a mechanism in practice is actually extremely challenging. Bullock, Green, and Ha’s paper "Yes, but what's the mechanism? (don't expect an easy answer)" explains the challenges of mechanism identification in psychology. It is a good segue into the next question: how do researchers identify mechanisms?

In the authors’ framing, a “good explanation” is essentially synonymous with a mechanistic explanation: not just showing that X causes Y, but spelling out how the effect is transmitted, i.e., identifying (and credibly testing) the mediating variables that carry causal influence from X to Y. They open with the familiar complaint that experiments can “reveal but do not explain” causal relationships, and immediately translate “explain” into “search for mediators” (variables that transmit causal effects).

From there, they connect “mechanism” to what their discipline treats as the formal object of mechanistic claims: direct and indirect effects in a causal model. In the standard mediation setup, the “mechanism” claim is that X moves M (path a), and M in turn moves Y (path b), so the indirect/mediated effect is ab (or equivalently c − d, the total effect minus the direct effect).
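The path algebra can be illustrated with a small simulation. This is a hypothetical sketch: the coefficients and variable names are mine, not the paper's, and `c_prime` stands for the direct effect (what the mediation literature writes as d or c′).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative path coefficients (invented): a, b, and the direct path c'.
a, b, c_prime = 0.5, 0.8, 0.3
X = rng.normal(size=n)
M = a * X + rng.normal(size=n)                 # path a: X -> M
Y = c_prime * X + b * M + rng.normal(size=n)   # path b: M -> Y, plus the direct path

def slope(x, y):
    """Simple-regression slope of y on x (OLS with intercept)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

c_total = slope(X, Y)    # total effect c of X on Y
a_hat = slope(X, M)      # path a

# Direct effect and path b from the multiple regression of Y on X and M:
Z = np.column_stack([np.ones(n), X, M])
_, c_direct, b_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]

indirect = a_hat * b_hat                # the mediated effect ab
print(c_total, c_direct, indirect)      # ≈ 0.70, ≈ 0.30, ≈ 0.40
```

In OLS the decomposition is an algebraic identity: the estimated ab equals the estimated total effect minus the estimated direct effect exactly, not just approximately.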

But the key move of the article is: mechanistic explanation is not just naming a mediator; it requires a research design that makes the mediator claim identifiable. They argue that what often passes for “good explanation” (a regression-based mediation test) is usually not a good mechanism test, because it rests on strong assumptions that are rarely defended—especially the assumption that unobserved causes of M are uncorrelated with unobserved causes of Y (their cov(e1,e3)=0 point).
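A quick simulation (again with made-up numbers) shows why that assumption matters: when an unobserved variable drives both M and Y, a regression-based mediation test can "find" a mechanism that is not there.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# True data-generating process (illustrative): the mediator M has NO effect on Y
# (true b = 0), but an unobserved U drives both M and Y, so the error terms of the
# M- and Y-equations are correlated -- the cov(e1, e3) = 0 assumption fails.
X = rng.normal(size=n)
U = rng.normal(size=n)                        # unobserved confounder of M and Y
M = 0.5 * X + U + rng.normal(size=n)
Y = 0.3 * X + 0.0 * M + U + rng.normal(size=n)

Z = np.column_stack([np.ones(n), X, M])
_, c_direct_hat, b_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]
a_hat = np.cov(X, M)[0, 1] / np.var(X, ddof=1)

indirect_hat = a_hat * b_hat
print(indirect_hat)  # ≈ 0.25: a spurious "mechanism", since the true indirect effect is 0
```

The regression is doing exactly what it should given its inputs; the failure is in the undefended identification assumption, which is the paper's point.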

So, in their discipline’s terms, a good mechanistic explanation has to do at least three things (and these are exactly the obstacles/recommendations they emphasize):

  1. Rule out confounding of the mediator–outcome link (or justify why it’s absent), because otherwise the “mechanism” is statistically and causally ambiguous.
  2. Show that any experimental manipulation aimed at the mediator isolates that mediator (doesn’t shift other mediators/cognitive states), because otherwise you haven’t identified which mechanism is operating.
  3. Handle the fact that mediation estimates may be local/conditional and can break under heterogeneity—they stress that indirect effects may apply to an unknown subset of participants (akin to compliance problems) and can be distorted when effects vary across people.

Putting it together: they’re saying that, in practice, “mechanism” talk should be earned by design + argument, not just by running a mediation regression. That’s why they push the idea that credible mechanism knowledge is cumulative—built from an experimental program that repeatedly validates manipulations, checks alternative pathways, and probes heterogeneity—rather than a one-shot mediation table in a single paper.

Next, we will outline the various methods people use to successfully identify and argue for mechanisms.

How to Identify Mechanisms

Scientists “discover mechanisms” in a surprisingly repeatable way across disciplines: they (i) decompose a phenomenon into parts/operations, (ii) localize those operations in structures, (iii) intervene to test which parts are difference-makers, and (iv) integrate the results into a coherent, multilevel model. Philosophers of science have tried to extract these recurring strategies and explain why they work—especially in the mechanistic tradition (Machamer–Darden–Craver; Bechtel & Richardson). Below is a cross-disciplinary map (economics, biology, epidemiology) of the main methods for finding, validating, and refining mechanisms, with special attention to how these methods manage the psychological hazards we’ve been discussing (clarity, premature closure, narrative pull).

The general “mechanism discovery toolkit” (what shows up everywhere)

The general “mechanism discovery toolkit” refers to a set of recurring strategies that appear across many sciences when researchers try to uncover how a mechanism works. The Stanford Encyclopedia of Philosophy entry notes that philosophers of mechanism have paid particular attention to discovery strategies such as decomposition and localization associated with Bechtel and Richardson, forward and backward chaining associated with Darden and Craver, and the combined use of difference-making evidence and mechanistic evidence discussed by Russo and Williamson. The broader point is that mechanism discovery is not a matter of finding one single decisive method, but of using a family of strategies that help identify parts, operations, organization, and evidential support.

A. Decomposition and localization involves first breaking the overall phenomenon into smaller component operations or sub-capacities, and then mapping those operations onto specific spatial or structural components, such as organs, neural circuits, institutions, markets, or other organized systems. Decomposition asks what smaller tasks or functions make up the larger phenomenon, while localization asks where in the system those functions are carried out. This is the classic “discovering complexity” approach: rather than merely correlating variables at a high level, researchers try to determine what does what, and where. That is one of the central ways mechanistic inquiry moves beyond surface association toward explanatory structure.

B. Forward and backward chaining (abductive assembly) highlights the fact that mechanism discovery is rarely a simple linear process. Scientists often work in two directions. In forward chaining, they begin from already known components or activities and build upward toward a candidate pathway that could produce the phenomenon. In backward chaining, they start from the phenomenon itself and work backward, asking what intermediate steps would have to exist in order for that phenomenon to occur, and then searching for realizers of those steps. Together these strategies show that discovering a mechanism often involves abductive assembly: piecing together the most plausible organized sequence from both what is already known and what must be the case if the phenomenon is to be explained.

C. Intervention or perturbation logic is central because mechanistic claims become much more credible when they survive controlled disruption. Researchers test a proposed mechanism by removing or disabling a component and asking whether the phenomenon changes, by stimulating or amplifying a component and asking whether the phenomenon scales in a predictable way, or by blocking a proposed link and asking whether the pathway breaks down. This is the difference-making side of mechanistic inquiry. Interventions connect explanations to counterfactual reasoning, because they help establish claims of the form: if we changed X, Y would change. In that sense, intervention is not only a method of testing mechanisms but also a way of showing that the proposed components and pathways are explanatorily relevant.
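As a toy illustration of this difference-making logic (an invented system, not an example from the literature), one can simulate lesioning and stimulating a candidate component and check whether the phenomenon tracks it:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy two-stage mechanism (purely illustrative): input signal -> amplifier -> output.
# The "amplifier" is the candidate component whose relevance we want to test.
def run_system(signal, amplifier_on=True, gain=3.0):
    stage1 = signal * (gain if amplifier_on else 0.0)
    return stage1 + rng.normal(scale=0.1, size=signal.shape)  # measurement noise

signal = rng.normal(size=10_000)

def slope(output):
    """Slope of output on signal: the 'phenomenon' we measure."""
    return np.cov(signal, output)[0, 1] / np.var(signal, ddof=1)

s_intact   = slope(run_system(signal))                       # baseline
s_lesioned = slope(run_system(signal, amplifier_on=False))   # remove the component
s_scaled   = slope(run_system(signal, gain=6.0))             # stimulate it

# Difference-making: lesioning abolishes the effect, stimulation scales it.
print(s_intact, s_lesioned, s_scaled)  # ≈ 3.0, ≈ 0.0, ≈ 6.0
```

The counterfactual claim ("if we disabled the amplifier, the output would no longer track the signal") is cashed out directly by the intervention, which is what distinguishes this from merely observing a correlation.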

D. Triangulation and consilience matter because no single method is usually decisive in complex systems. A mature mechanistic explanation is typically supported by multiple lines of evidence that would be difficult to reconcile with rival mechanisms. These may include observational patterns, interventions, traces or biomarkers, temporal information, dose-response relationships, and successful replication in new settings. The force of triangulation is that different kinds of evidence converge on the same mechanistic hypothesis from different angles. Hernán and Robins explicitly stress this kind of triangulation in causal inference practice, and the same lesson applies more broadly to mechanistic discovery: confidence grows when varied evidential routes point toward the same organized account.

E. Stress-testing against alternatives emphasizes that mechanism discovery is fundamentally comparative. A strong mechanistic explanation is not merely one that can be made to fit the data, but one that explains the evidence while also ruling out plausible competitors or revealing where those competitors fail. This is especially important as an antidote to “just-so stories,” since many superficially plausible mechanisms can be invented after the fact. What distinguishes a strong candidate is its ability to survive comparison with rival explanations and to show superior fit, greater evidential support, or clearer predictive and intervention-based success. Mechanism discovery, on this view, is not just about building one possible account, but about showing why that account deserves to be preferred over others.

Biology: how scientists discover mechanisms in “paradigm mechanistic” sciences

Biology and neuroscience are often treated as the paradigm cases of mechanistic explanation in contemporary philosophy of science, especially in the tradition associated with Machamer, Darden, and Craver and the work that followed them. These fields provide especially vivid examples of how mechanisms are discovered because they regularly investigate organized systems made up of parts, activities, and interactions that can be experimentally manipulated and traced. For that reason, philosophy of mechanism frequently turns to biology and neuroscience as showcase disciplines when explaining what mechanistic discovery looks like in practice.

A. “Break it and see”: perturbation experiments are among the most important tools in biological mechanism discovery. These include knockout and knockdown methods, gene editing techniques that remove a gene or reduce its expression, pharmacological blockade, lesions, optogenetic stimulation, and similar interventions. What makes these methods so valuable is that they help identify difference-making parts of a system. By disrupting or modifying a candidate component and observing the consequences, scientists can determine whether that part is genuinely involved in producing the phenomenon. These experiments are especially useful for separating mere correlates from constitutive components. A variable may correlate with a phenomenon without being part of the mechanism that generates it, but perturbation methods help reveal whether the component is actually playing an essential mechanistic role.

B. Tracing and localization are equally central. In biology and neuroscience, scientists use methods such as imaging, labeling, tract tracing, microscopy, and time-resolved measurement of signals to determine where relevant parts are, how they are connected, and when they are active. The goal is not simply to show that some component exists, but to establish that the right parts are present, connected in the right way, and active at the appropriate time for the phenomenon to occur. This is the localization side of the classic decomposition-and-localization strategy. Once a phenomenon has been decomposed into component operations, tracing and localization help assign those operations to actual physical structures and processes within the biological system.

C. Mechanism assembly via pathway reconstruction is another major feature of biological discovery. Scientists often piece together mechanisms by reconstructing a sequence or network of interactions that links components into an organized pathway. In molecular biology, this may involve discovering chains of binding, catalysis, and inhibition. In neuroscience, it may involve identifying circuit motifs, feedback loops, and signaling pathways. This process often relies on forward and backward chaining. Researchers may reason forward from a known component to ask what downstream effects it could generate, or backward from an observed phenomenon to ask what intermediate steps must exist for the effect to be produced. For example, if a phenomenon depends on X and somehow influences Y, then there must be some organized process that transmits X’s effect to Y, and mechanism discovery involves finding that process.

D. Omics plus functional follow-up captures an important feature of contemporary biology. High-throughput methods such as genomics, proteomics, and metabolomics are extremely powerful for identifying broad patterns and associations, but by themselves they often function mainly as pattern finders. They generate correlations, clusters, and candidate relationships, but those patterns do not become mechanistic explanations on their own. They become mechanistically significant only when paired with more focused follow-up work, especially targeted perturbations and experiments that validate specific causal roles. The philosophical lesson is that -omics screens are often best understood as mechanism generators or hypothesis factories rather than mechanism confirmers. They are valuable because they suggest where to look, but they require anchoring in interventions and organizational analysis before they can support a genuinely mechanistic account.

From a philosophical perspective, what counts as mechanism discovery in biology is the transition from a merely plausible model to one that is adequately constrained by evidence. More specifically, one moves from how-possibly models, which show only that a phenomenon could in principle be produced in a certain way, to how-actually-enough models, which are supported by interventions, localization, and evidence about organized activities. In that shift, biological inquiry moves from speculative possibility to a sufficiently evidence-based account of how the phenomenon is in fact produced for the explanatory purposes at hand.

Epidemiology: mechanisms under constraints of ethics, scale, and confounding

Epidemiology is deeply concerned with mechanisms because it seeks to understand what causes disease, through what pathways those causes operate, and which interventions are effective. At the same time, it works under distinctive constraints. Unlike some laboratory sciences, epidemiology often cannot directly manipulate exposures through controlled experimental perturbation, because doing so may be unethical, infeasible, or impossible at population scale. Those constraints do not eliminate mechanistic inquiry, but they do force epidemiologists to rely on more indirect strategies for discovering and supporting causal and mechanistic claims.

A. Causal inference frameworks, especially potential outcomes and directed acyclic graphs (DAGs), play a central role in modern epidemiology because they provide tools for extracting causal structure from noisy, messy, and confounded observational data. Much of epidemiologic methodology is devoted to carefully defining interventions, specifying which variables count as confounders, adjusting for those confounders appropriately, using graphical reasoning through DAGs to represent assumptions about causal structure, and conducting sensitivity analyses to test how robust the conclusions are to violations of those assumptions. In this style of reasoning, one is not directly manipulating the world in a fully randomized way, but instead trying to recover causal structure by making assumptions explicit and reasoning systematically about what would have happened under different exposure conditions. Hernán and Robins’ Causal Inference: What If is a central reference point for this entire approach.
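A minimal sketch of the backdoor-adjustment idea (with an invented DAG and coefficients, not an example from Hernán and Robins):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Illustrative DAG: C -> X, C -> Y, and X -> Y with a true effect of 0.5.
C = rng.normal(size=n)                      # a measured confounder
X = 0.8 * C + rng.normal(size=n)
Y = 0.5 * X + C + rng.normal(size=n)

# Naive regression of Y on X leaves the backdoor path X <- C -> Y open.
naive = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Adjusting for C (as a DAG-based backdoor analysis would prescribe) closes it.
Z = np.column_stack([np.ones(n), X, C])
_, adjusted, _ = np.linalg.lstsq(Z, Y, rcond=None)[0]

print(naive, adjusted)  # naive ≈ 0.99 (biased), adjusted ≈ 0.5 (true effect)
```

The DAG is doing real work here: it is the explicit assumption about causal structure that tells the analyst which variables to adjust for, and a different (mistaken) DAG would license a different and possibly worse adjustment.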

B. Instrumental variables (IV) methods function as a kind of quasi-experimental lever when randomized experiments are unavailable. The basic idea is to use an instrument that shifts exposure while, ideally, affecting the outcome only through that exposure. If such an instrument can be identified, it may allow researchers to estimate causal effects even in the presence of confounding that would defeat more straightforward observational analysis. However, epidemiology-focused discussions, especially those associated with Hernán, emphasize how demanding the assumptions behind IV methods are. In particular, exclusion restrictions and related conditions are often difficult to justify in real-world settings. For that reason, instrumental variable methods can seem highly attractive, even “dreamlike,” in their promise of causal leverage, while also remaining fragile and vulnerable if the assumptions do not genuinely hold.
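A compact sketch of the IV logic, using made-up numbers and an exclusion restriction that holds by construction (in real data that restriction is exactly what is hard to defend):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Illustrative setup: U confounds X and Y, so OLS is biased. The instrument W
# shifts X and, by the (untestable) exclusion restriction, affects Y only via X.
U = rng.normal(size=n)
W = rng.normal(size=n)                        # the instrument
X = 0.7 * W + U + rng.normal(size=n)
Y = 0.5 * X + U + rng.normal(size=n)          # true causal effect of X on Y: 0.5

ols = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)      # confounded estimate
iv = np.cov(W, Y)[0, 1] / np.cov(W, X)[0, 1]      # Wald/IV estimator

print(ols, iv)  # ols ≈ 0.90 (biased), iv ≈ 0.50 (valid only if the assumptions hold)
```

If W had even a small direct path to Y, the IV estimate would be biased in a way the data alone cannot reveal, which is the fragility the epidemiology literature stresses.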

C. Natural experiments and policy shocks are another important indirect strategy in epidemiology. Researchers often rely on exogenous events such as policy changes, disasters, supply disruptions, or other externally imposed variations that approximate random assignment for certain exposures. These events create opportunities to study whether outcomes shift in the ways predicted by mechanistic or causal hypotheses. The logic is that when a naturally occurring event alters exposure independently of the usual confounding factors, it can serve as a real-world analogue to experiment. Epidemiologists then examine whether the resulting outcome patterns align with what would be expected if the hypothesized mechanism or causal pathway were genuine.

D. Triangulation, or combining methods with distinct bias profiles, has become increasingly important because every observational design comes with its own vulnerabilities. A cohort study, a case-control study, a natural experiment, or a negative control design may each be threatened by different forms of confounding, selection bias, or measurement error. Because of this, epidemiologists often emphasize the importance of triangulating across multiple methods whose weaknesses do not fully overlap. If different approaches, each with different bias structures, converge on the same conclusion, confidence in the causal or mechanistic claim increases. Hernán and Robins explicitly present triangulation as a practical necessity rather than a methodological luxury, given the unavoidable limitations of any single observational strategy.

E. Mediation analysis, the more explicitly mechanism-probing move, is a special tactic used when epidemiologists try to estimate pathways connecting an exposure to an outcome. The question is often framed as whether X affects Y partly through some mediator M. This makes mediation analysis especially mechanism-adjacent, because it tries to say something not just about whether a cause has an effect, but about how that effect is transmitted. However, mediation analysis is also especially assumption-heavy. Its conclusions depend on strong assumptions about measurement quality, the absence of confounding between mediator and outcome, and correct temporal ordering. For that reason, there is an important mechanistic caution here: mediation estimates are easy to overinterpret as identifying “the mechanism,” when in reality they often provide only partial and model-dependent evidence about one possible pathway. They can contribute to mechanistic understanding, but they rarely settle it on their own.

Economics: mechanisms as causal pathways + institutional structure (often without direct access to parts)

Economics provides an interesting contrast case for mechanism discovery because it is deeply concerned with causal mechanisms, yet often lacks direct access to the “parts” in the way biology or neuroscience does. The mechanisms economists study typically involve agents, institutions, incentives, information, and constraints, and these mechanisms are often strategic, shaped by feedback, and highly sensitive to context. Rather than focusing on tangible components linked by physical interactions, economics frequently studies organized systems in which behavior emerges from the interaction of beliefs, rules, opportunities, and responses to policy. For that reason, economists often frame their work in terms of identification strategies: methods for isolating causal effects from observational data. This style of thinking was especially popularized by Angrist and Pischke, who emphasized how empirical designs can approximate the inferential strength of experiments even in settings where controlled manipulation is limited.

A. Randomized controlled trials, both in the lab and in the field, are one of the clearest ways economics studies causal effects. Laboratory experiments offer tight control over variables and allow researchers to examine stylized mechanisms under simplified conditions. Field randomized controlled trials, by contrast, are valued for their policy relevance and their ability to capture real-world behavioral responses in natural settings. Both forms of experimentation can identify causal effects and can also test specific mechanism predictions, for example by varying incentives, altering available information, or changing the framing of choices. However, moving from an observed effect to a claim about the underlying mechanism still requires additional structure. Economists typically need further evidence about mediators, heterogeneity across individuals or groups, and the behavioral theory linking the intervention to the outcome. An RCT may show that a treatment works, but explaining how it works requires more than the treatment effect alone.

B. Natural experiments and the broader “credibility revolution” toolkit are central to modern empirical economics when randomization is not available. Core methods in this toolkit include difference-in-differences, regression discontinuity designs, instrumental variables, and event studies. These methods are designed to exploit naturally occurring or institutionally created variation in ways that approximate the “experimental ideal.” Angrist and Pischke present them as strategies for recovering causal effects from observational settings by leveraging discontinuities, policy timing, exogenous shocks, or other forms of quasi-random variation. In this respect, economics often approaches mechanism through careful design logic: it first tries to establish that a causal effect exists under credible identification, and only then asks how that effect is produced. These methods do not automatically reveal the mechanism, but they often provide the causal foothold needed for more refined mechanistic inquiry.
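The difference-in-differences logic can be sketched in a few lines (an invented example; the group effect, time trend, and 2.0 treatment effect are arbitrary, and parallel trends hold by construction):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Two groups, two periods: group effect 1.0, common time trend 0.5,
# and a true treatment effect of 2.0 on the treated group after the policy.
group = rng.integers(0, 2, size=n)        # 1 = treated group
post = rng.integers(0, 2, size=n)         # 1 = after the policy change
y = 1.0 * group + 0.5 * post + 2.0 * group * post + rng.normal(size=n)

def cell_mean(g, p):
    return y[(group == g) & (post == p)].mean()

# Treated group's change minus control group's change: common trends cancel.
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(did)  # ≈ 2.0
```

The design, not the arithmetic, carries the inferential weight: the estimate is only causal if untreated trends are a valid counterfactual for treated trends, which is the assumption that has to be argued in any real application.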

C. Structural modeling and what might be called mechanism discipline represent a different style of mechanistic reasoning in economics. Economists frequently build structural models, such as demand systems, dynamic models, or game-theoretic models, in which the mechanism is encoded in the relations among preferences, constraints, beliefs, technologies, and strategic interaction. In these cases, the explanatory mechanism is not primarily a decomposition into physical parts, but an organized system of incentives and constraints that generates the observed pattern of behavior, often through equilibrium feedback. Estimation and counterfactual simulation then allow economists to ask what would happen if some policy were changed. This is mechanistic explanation in a different register: it identifies the structure that organizes responses rather than a sequence of physical component activities. The mechanism is the institutional and strategic architecture through which behavior is produced and transformed.

D. Mechanism discovery through heterogeneity and “mechanism experiments” is another important strategy in economics. Economists often probe competing explanations by designing targeted variations that isolate specific channels, such as information versus incentives, norms versus material payoffs, salience manipulations, differential treatment intensity, or subgroup responses predicted by rival theories. The idea is to intervene not just on an overall treatment, but on a particular link in the causal chain, in order to see which explanatory story remains viable. If one theory predicts that only information matters while another predicts that incentives matter, varying those channels separately can help determine which mechanism is doing the work. Likewise, if competing theories predict different subgroup responses, heterogeneity can itself become evidence about mechanism. In that sense, economists often discover mechanisms indirectly by testing which channel-specific predictions survive targeted interventions and comparative design.

How these methods relate to explanation quality (and the psychological hazards)

This section ties the earlier discussion of explanation quality directly to methodology by showing that many scientific methods can be understood as safeguards against the very psychological temptations that make weak explanations appealing. The same hazards previously discussed, such as the seduction of clarity, the pull of a good story, and the tendency toward premature closure, do not disappear when scientists build explanations. Instead, they show up as pressures that methodology is designed to resist. In that sense, methods are not just tools for discovering mechanisms; they are also norms for forcing explanations into contact with reality rather than allowing them to remain merely compelling narratives.

A. Anti–“just-so story” safeguards as design plus discriminators captures the idea that a mechanistic story, by itself, is cheap. It is often easy to tell a plausible narrative about how some phenomenon might arise. What makes such a story epistemically expensive, in the valuable sense, is when it commits itself to things that can be checked and potentially falsified. A stronger mechanistic account must specify intermediate steps, impose timing constraints, identify measurable signatures, and generate predictions about what should happen under perturbation. These commitments make it possible to distinguish a genuine explanation from a merely coherent narrative. That is why, across many disciplines, interventions, natural experiments, and triangulation are so important: they provide discriminators that narrative coherence alone cannot supply. They force the explanation to do more than sound right; they require it to survive tests that rival stories may fail.

B. Clarity becomes epistemically suspicious when it is not anchored connects directly to concerns, associated for example with Nguyen, about the way clarity itself can become exploitable. A mechanism should not be trusted simply because it feels intuitive, neat, or elegantly packaged. Those very features can make an explanation psychologically attractive without making it true. The scientific response is therefore to treat unanchored clarity with caution. A mechanism earns credibility not because it is tidy, but because it withstands adversarial testing, converges with independent measurements, and outperforms competing models. In this sense, mechanism discovery methods function as institutionalized protections against what earlier discussions described as seizing and freezing. They are designed to keep alternatives alive for longer, to require explicit failure conditions, and to demand multiple evidential anchors before explanatory closure is warranted. What feels clear is not automatically what is best supported.

C. Mechanisms differ in what counts as a “part,” but the underlying logic remains the same emphasizes that mechanistic explanation takes different forms across disciplines even though its basic ambition is shared. In biology, the parts are often literal components and activities, such as molecules, cells, circuits, and the operations they perform. In epidemiology, the relevant parts may instead be exposures, mediators, susceptibilities, and contextual variables, and these are investigated through quasi-experiments and causal inference tools rather than direct manipulation of microscopic entities. In economics, the parts are often agents, institutions, constraints, and rules of interaction, probed through identification strategies and structural counterfactual analysis. Despite these differences, the mechanistic impulse is common across all of them: the aim is to identify an organized structure of dependence that explains the phenomenon and supports systematic “what if” reasoning. What changes from field to field is not the general explanatory logic, but the form that organization, parts, and testing must take.

When a discipline claims a mechanism, ask:

  1. Decomposition: What are the proposed components/sub-operations?
  2. Organization: How are they connected (timing, feedback, structure)?
  3. Intervention handle: What manipulation would change the phenomenon, and in what direction?
  4. Tracing / signatures: What observable footprint should the mechanism leave?
  5. Alternative competition: What rival mechanisms are plausible, and what evidence discriminates?
  6. Triangulation: Do multiple methods with different bias profiles converge?

More generally, whenever someone proposes an explanation, keep these fundamental questions in mind.
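For readers who like operational checklists, the six questions above can be encoded as a simple rubric. Everything below (the field names, the pass/fail scoring, the example claim) is an illustrative assumption, not part of any cited framework:

```python
# A minimal sketch: score a proposed mechanism claim against the six
# questions above. Field names and the scoring scheme are illustrative
# assumptions, not a standard instrument.

MECHANISM_CHECKLIST = [
    "decomposition",            # components/sub-operations identified?
    "organization",             # connections, timing, feedback specified?
    "intervention_handle",      # a manipulation with predicted direction?
    "tracing_signatures",       # an observable footprint stated?
    "alternative_competition",  # rivals named, discriminating evidence given?
    "triangulation",            # convergence across independent methods?
]

def evaluate_mechanism_claim(answers: dict) -> dict:
    """Return which checklist items a proposed mechanism claim satisfies."""
    satisfied = {q: bool(answers.get(q)) for q in MECHANISM_CHECKLIST}
    satisfied["score"] = sum(v for q, v in satisfied.items() if q != "score")
    return satisfied

# Example: a (hypothetical) claim that names parts and an intervention,
# but offers no rival mechanisms and no triangulation.
claim = {
    "decomposition": "enzyme X and substrate Y",
    "organization": "X binds Y before step 2",
    "intervention_handle": "knocking out X should abolish the effect",
}
result = evaluate_mechanism_claim(claim)
print(result["score"])  # 3 of 6 items satisfied
```

The point of the sketch is only that the checklist is mechanical to apply: each question either receives a substantive answer or it does not, and unanswered items mark exactly where the explanation still owes work.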

Keeping Mechanism Meaningful

On Lauren N. Ross’s view (in Causation in neuroscience: Keeping Mechanism Meaningful), “mechanistic explanation” in neuroscience is both indispensable and at risk: it has become a default badge for “deep understanding,” but the term’s variable and sometimes non-causal uses create confusion, miscommunication, and even perverse incentives. Her recommendations amount to a diagnosis of what goes wrong with mechanistic talk and a normative proposal for how neuroscientists should evaluate and communicate “mechanistic” claims. Below is what we can say about her assessment of mechanistic explanation, specifically as reflected in her “moving forward” recommendations.

Her central diagnosis is that “mechanism” has become unstable in meaning, and that instability damages explanation. Ross frames neuroscience as using a rich causal vocabulary—pathways, cascades, circuits; causes that control/trigger/constrain/predispose; different causal topologies and networks—because the brain exhibits many kinds of causal organization. But against that background, “mechanism” is used in too many ways, and the ambiguity matters because mechanism-talk is frequently treated as a gold standard (“mechanistic insight” as a publication/grant criterion), even when people disagree about what that standard requires. For Ross, the problem is not merely semantic. It’s that the field’s evaluative and communicative practices can get distorted when a prestige term floats between incompatible roles.

Ross distinguishes three broad uses, ordered by increasing breadth:

  1. Mechanism as a particular kind of causal system (narrow / constrained) - This is the “default expected” meaning. Mechanism-language is tied to machine-like causal organization and often to assumptions about fine-grained causal detail (often biophysical or otherwise “how it unfolds”). She treats this as historically primary and communicatively informative.

  2. Mechanism as any causal system (broad / umbrella) - Here “mechanism” collapses into “causal structure” or “causal system.” Ross’s worry is that this meaning can at best mark the causal/non-causal boundary, but it “cannot do much more”—it can’t discriminate kinds of causal organization, and it drains the term of the extra content scientists often intend (“how,” intermediates, detail, etc.).

  3. Mechanism used for non-causal structures (widest / most dangerous) - Ross flags a third, explicitly non-causal usage—e.g., “topological mechanisms” (where explanatory power is mathematical/structural rather than causal), or “mechanism” used for correlates/realizers/constitutive relations rather than causes.

Her evaluation here is blunt: if the term is supposed to convey causal information, that third usage is “clearly disadvantageous” because it can mislead by sounding causal while not being causal.

Ross recommends four strategies for clarifying what is meant by “mechanism.”

Her first major proposal is a norm of explicit disambiguation: if “mechanism” is invoked, state what you mean by it, because otherwise the interpretive range spans “causal to non-causal systems.” This bears directly on explanation quality. A good mechanistic explanation is not just “detailed” or “compelling”—it is type-clear: it tells the audience what kind of explanatory warrant is being offered (causal? constitutive? mathematical?), so evaluators can apply the right standards.

Her second proposal argues that expanding “mechanism” to cover non-causal models “severely reduces the function and meaning of the term,” especially because it can falsely signal causal depth. Mechanistic explanation (as a causal-explanatory category) must be causally accountable. If your explanatory dependence is mathematical/topological, or constitutive, call it that—don’t borrow the rhetorical authority of “mechanism.”

Her third recommendation is not to collapse all causal explanation into “mechanism,” but to keep alternative causal concepts alive. Ross argues that neuroscience’s other causal concepts (pathways, cascades, circuits, networks) are not stylistic variants; they often function to pick out different kinds of causal system. Collapsing them into “mechanism” can “gloss over varieties of causal explanation.” A high-quality explanation is structure-sensitive: it chooses a causal template that matches the target, rather than forcing everything into a one-size-fits-all mechanistic mold.

Her final recommendation explicitly warns against treating lower-level detail as automatically better explanation. She frames it as an open question whether “including lower-scale causal information always improves explanation and understanding,” and she endorses a levels-friendly stance: different levels can require different descriptive/explanatory frameworks, and no single scale is automatically best. Good mechanistic explanation is level-appropriate: it supplies the right grain of causal detail for the question, rather than defaulting to “more micro = more explanatory.”

Ross makes a striking social-epistemic point: in neuroscience, “mechanism” functions as a status signal—suggesting importance, causal depth, fundability, publishability—so people may use it “liberally and with less care.” She also describes an undesirable dialectic. If you assume the narrow/reductive sense, you may dismiss valuable higher-level theorizing (dynamical, computational, etc.) as “not mechanistic,” and therefore not serious. If you do higher-level work, you may feel pressure to re-label it as “mechanistic” anyway (by stretching the concept), instead of arguing directly for its explanatory strengths. Explanatory evaluation should not be driven by prestige labels (“mechanism!”) but by what causal information is actually provided and whether the methods support those causal claims. This is directly related to what we discussed earlier; just because a mechanism is posited, doesn't mean it is a good explanation. There is no silver bullet.

Based on her recommendations, Ross’s assessment implies that a mechanistic explanation is better (higher quality) to the extent that it:

  1. Disambiguates its explanatory kind (causal vs constitutive vs mathematical/topological) and doesn’t trade on ambiguity.
  2. Earns causal talk via causal standards (intervention/difference-making, etc.), rather than labeling correlates/realizers as “mechanisms” in a way that invites causal misreading.
  3. Matches the causal structure to the explanandum—using “mechanism” only when the machine-analogy features are genuinely doing explanatory work, and otherwise using more fitting causal concepts (pathway, cascade, circuit, network, etc.).
  4. Is level-appropriate rather than reflexively reductive, recognizing that different levels can be legitimate explanatory frameworks.
  5. Resists status-term distortion: it invites appraisal of the model’s actual causal content and empirical support rather than relying on the honorific “mechanistic.”
  6. Improves communication by preventing audiences from inferring biophysical detail when only higher-level structure has been shown (or inferring causation when only correlation/constitution is at issue).

Ross’s paper fits cleanly into the pattern we have been developing. The term “mechanism” can produce a feeling of depth (“this is serious, causal, deep!”) even when it’s being used in an overly broad or non-causal way. That feeling can steer peer review, funding, and public communication—so the social environment selects for labels that satisfy explanation-hunger rather than for explanations that meet the right standards.

Other Causal Structures

Mechanisms aren't the only structure used in explanation. In this section, we will look at a variety of causal structures useful for describing and explaining phenomena.

Pathways vs Mechanisms

In Causal Concepts in Biology: How Pathways Differ from Mechanisms and Why It Matters, Ross argues that biologists use the terms “pathway” and “mechanism” to refer to genuinely different kinds of causal structure, and that this difference matters because each concept supports different investigative strategies and answers different explanatory questions. Her central claim is that a pathway should not be treated as merely an incomplete sketch of a mechanism. Instead, the two concepts organize biological inquiry in importantly different ways.

When biologists talk about mechanisms, Ross says, they typically invoke something like a machine analogy. A mechanism is understood in the way one might understand a car engine or a clockwork device: by identifying its component parts, locating those parts within an organized system, and showing how their interactions produce some higher-level behavior. On this view, mechanisms are treated as having a constitutive organization. A phenomenon at a higher level can be decomposed into lower-level causal parts, and this decomposition supports a characteristic discovery strategy: one begins with a behavior or effect of interest, then “drills down” to find the parts, their locations, and the interactions through which they generate that effect. Ross also stresses that mechanism talk carries an expectation of fine-grained causal detail. A mechanistic explanation is not satisfied with an abstract claim that X leads to Y; it aims for a step-by-step account of the causal and temporal structure involved, often at a level of detail that exceeds what scientists can actually provide in many domains. Finally, mechanisms are associated with an emphasis on productive how. Mechanism talk is used to answer the question of how something works, in the sense of specifying the activities, interactions, or force-like relations that produce the phenomenon, rather than merely registering that two items are connected. In Ross’s picture, then, mechanisms are discrete causal systems organized around the production of some phenomenon, with relatively clear boundaries and a strong norm of fine-grained causal characterization.

Pathways, by contrast, are not simply thinner or less complete mechanisms. Ross presents them as a different causal category altogether, one that is better understood through a roadway analogy than a machine analogy. Where mechanisms are bounded systems organized to produce a phenomenon, pathways are more like maps of available causal routes. They have, in her account, several defining features. First, pathways consist of an ordered sequence of causal steps: they represent what must happen before and after what, as in a biochemical sequence such as glycolysis. Second, they track the flow of some entity through a system. A pathway is not just any causal chain, but a route along which something persists and moves forward, whether that be metabolites, blood, energy, cells, or sometimes information. Third, pathways abstract from significant causal detail. They represent the sequence of causal connections without attempting to include the full set of factors that regulate or control the process. In Ross’s terms, they omit many mechanistic determinants, such as enzymes, cofactors, pH, or temperature, much as a road map leaves out traffic lights and roadblocks. She emphasizes that biologists themselves often say pathways are not intended to be exhaustive descriptions, and she notes the revealing contrast that scientists may talk comfortably about “complete pathways” even while “complete mechanisms” remain out of reach. This suggests that the standards of completeness governing the two concepts are quite different. Finally, pathway explanations emphasize connection rather than productive how. Where a mechanism tells us how X produces Y, a pathway often shows that X is connected to Y, identifying which nodes are linked to which in something closer to a wiring diagram or connectance map.

The structural difference Ross draws is therefore fundamental. Mechanisms are target-relative, discrete causal systems, whereas pathways are maps of possible causal routes. This difference also leads to different styles of investigation. Mechanism discovery begins by picking out a system and an effect or behavior of interest and then narrowing attention to the causally relevant parts that make up that particular system. Pathway investigation, by contrast, often begins by charting causal connections across a broader domain without fixing on a single outcome or starting point. It proceeds less like isolating an engine and more like mapping a transportation network. Ross describes this as an “expanding out” strategy: instead of bounding one mechanism for one explanandum, scientists construct a roadmap of possible causal routes that can later be used flexibly for different purposes. This is why she highlights biological claims to the effect that a single pathway may be realized by different mechanisms, and that pathways can often be discovered even when the underlying mechanisms remain unknown. Those claims would make little sense if pathways were merely incomplete mechanism sketches; they make much more sense if pathways are understood as a distinct form of causal representation.

Ross also argues that pathways can explain things that mechanisms do not, or at least not in the same way. Pathways are not merely shallow causal chains. Their explanatory value lies in the fact that sometimes what matters is not the productive detail of each local interaction, but the broader pattern of routing and connectivity. In such cases, what explains a downstream difference is which routes are available, which nodes are connected, and how flow can be directed through the system. A pathway can therefore answer certain why-questions by showing how differences in outcome depend on the existence or structure of available routes, even when the underlying micro-level productive details are not fully known. This is Ross’s direct rebuttal to the idea that pathways are explanatorily second-rate. Pathway information can be genuinely explanatory because in some contexts connectivity itself is what makes the relevant difference.
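Ross's roadway analogy can be sketched computationally: a pathway representation is essentially an adjacency map, and a pathway-style explanation answers "is X connected to Y, and by which routes?" without modeling any local productive detail. The node names below are placeholders, not a real biochemical pathway:

```python
# A minimal sketch of a pathway as a connectance map. Edges record only
# "what feeds into what"; enzymes, cofactors, pH, temperature, and other
# mechanistic determinants are deliberately omitted, just as Ross notes
# road maps omit traffic lights and roadblocks. Node names are invented.

PATHWAY = {
    "A": ["B"],
    "B": ["C", "D"],   # a branch point: flow can be routed two ways
    "C": ["E"],
    "D": ["E"],
    "E": [],
}

def routes(graph, start, goal, path=None):
    """Enumerate all routes from start to goal (acyclic graph assumed)."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        found.extend(routes(graph, nxt, goal, path))
    return found

# Pathway-style answer: A reaches E by two available routes.
print(routes(PATHWAY, "A", "E"))
# [['A', 'B', 'C', 'E'], ['A', 'B', 'D', 'E']]
```

Note what the sketch can and cannot say: it can explain a downstream difference by citing route availability (knock out node C and flow can still reach E via D), but it says nothing about how any single step is productively realized. That division of labor is exactly Ross's contrast.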

Her overall distinction, then, is that mechanisms in biology are machine-like, constitutively organized, detail-rich causal systems that are typically discovered by decomposing a phenomenon into parts and interactions, whereas pathways are roadmap-like, flow-tracking, connectivity-focused causal sequences that intentionally abstract away from internal productive detail and can explain outcomes in virtue of the structure of the available routes. On Ross’s view, these are not simply two degrees of completeness within a single causal concept, but two distinct explanatory and investigative frameworks within biology.

Cascades vs Mechanisms

In Lauren N. Ross’s Cascade versus Mechanism: The Diversity of Causal Structure in Science, the point isn’t just “there are lots of causal words.” It’s that “cascade” names a causal structure that supports a distinct kind of causal explanation, with different standards of adequacy than mechanistic explanation. Philosophers (and sometimes scientists) go wrong when they treat cascades as “just mechanisms in disguise,” because that encourages the wrong explanatory demands and the wrong inferences about what has been shown.

Ross’s discussion of cascades is introduced as a direct challenge to the tendency, especially among new mechanists, to treat every causal pattern in biology or neuroscience as if it could be captured by the single concept of mechanism. She notes that some mechanists suggest that talk of cascades, systems, pathways, and similar concepts could all be replaced by mechanism talk without loss. Her response is that this is both surprising and often mistaken, because scientists use different causal terms to pick out genuinely different causal patterns, not merely different labels for the same pattern. The point is not only descriptive. Ross is also making a normative claim about how these causal concepts should be understood and used, because the concepts chosen shape how scientists represent the world, investigate it, control outcomes, and communicate findings.

For Ross, a cascade is a causal system defined by three central features: an initial trigger, sequential amplification, and stable progression from start to finish. She emphasizes that this pattern is not confined to one discipline. It appears across biology in signaling cascades and coagulation cascades, in psychology in developmental cascades, in ecology in trophic cascades, in economics in cascading failures, in physics and chemistry in collision cascades and chain reactions, and in epidemiology in infectious spread. What unifies these diverse cases is not shared material composition or a fixed level of organization, but a common causal form: something initiates a process, that process amplifies as it moves forward, and the resulting progression tends to unfold in a relatively stable or self-propelling way.

The first feature, the trigger, functions as the upstream initiator of the cascade. Ross treats it as the switch-like starting point of the process, often in a fairly binary or thresholded sense: the trigger either fires or it does not, the cascade is either initiated or not initiated. Two aspects of the trigger are especially important in her account. First, it is causally upstream of the subsequent cascade steps and tends to generate downstream effects with high likelihood, which gives it a distinctive kind of predictive salience and practical control value. Second, because it kick-starts a process that produces a large downstream outcome, it assumes a special kind of causal responsibility. The trigger matters not just because it comes first, but because it sets into motion a process whose later consequences are much larger than the initiating event itself.

The second and, in Ross’s presentation, most important feature is sequential amplification. A cascade is not just a sequence; it is a sequence in which a relatively small cause is transformed into a much larger effect by means of amplification arranged in series, so that the gains multiply across steps. Ross distinguishes two forms of amplification. In single-product amplification, one cause generates a larger amount of one product, as when an enzymatic step produces more downstream enzyme activity. In multi-product amplification, one cause generates many different downstream effects, as when one disaster produces fires, flooding, infrastructure failure, and additional secondary disruptions. She also notes that amplification is not merely a qualitative metaphor. Scientists quantify it in different fields using measures such as gain in electronics, multipliers in economics, and reproductive numbers such as R0 in epidemiology. This helps show that amplification is a substantive causal characteristic, not just a vivid way of speaking.
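The quantitative point about amplification can be made concrete with back-of-the-envelope arithmetic. Below, a generic branching factor stands in for R0, gain, or a multiplier; the numbers are invented for illustration:

```python
# A toy sketch of sequential amplification: each generation multiplies
# the previous one by a branching factor (playing the role of R0, gain,
# or an economic multiplier). Values are illustrative, not empirical.

def cascade_size(branching_factor: float, generations: int) -> float:
    """Total events produced after n generations from a single trigger."""
    return sum(branching_factor ** g for g in range(generations + 1))

# Above the amplification threshold (factor > 1), one trigger balloons:
print(cascade_size(2.0, 10))   # 2047.0 events from a single initiating case
# Below the threshold, the same sequential structure dies out:
print(cascade_size(0.8, 10))   # ~4.57 events, then extinction
```

This is why the trigger carries outsized causal responsibility in Ross's account: when the per-step factor sits above 1, the initiating event is tiny relative to the total it sets in motion, and small changes to the branching factor swing the outcome between extinction and runaway growth.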

The third defining feature is stable progression. Once triggered, cascades often unfold with high probability, gaining momentum and becoming difficult to stop. This is what gives cascade language its familiar associations with avalanches, runaway processes, and chain reactions. Ross highlights the practical importance of this feature: cascades are powerful causal systems. They can be desirable when one wants engineered amplification, but they can also be dangerous in contexts such as epidemics, cascading infrastructure failures, or financial meltdowns. The idea of stable progression helps explain why cascade analysis matters so much for intervention and control. If a process becomes hard to stop once underway, then identifying it as a cascade changes what counts as timely and effective response.

Ross contrasts this sharply with mechanisms, which she associates with a different kind of causal structure. Mechanisms, on the standard picture, are characterized by features such as constitutive part-whole relations, fine-grained detail, and often a machine-like conception of how a system works through organized interactions among parts. Cascades, by contrast, are defined by trigger, sequential amplification, and stable progression, and they do not need to exhibit those mechanistic features. This difference becomes especially important once one asks what counts as a good explanation.

One major difference concerns constitutive hierarchy versus level-agnostic causal spread. Mechanisms are typically understood hierarchically: lower-level parts and their organized activities produce the higher-level behavior of the whole. Cascades do not have this constitutive part-whole structure. Ross makes two important points here. First, cascades can connect causes and effects at the same level, across levels upward, or across levels downward; in that sense they are level-agnostic. Second, cascades are not naturally bounded within a single discrete system. They can spill across systems as effects ripple outward through a broader domain. Mechanisms therefore encourage a way of thinking centered on the parts of a whole, while cascades encourage a way of thinking centered on distinct factors linked through a branching, spreading chain of influence.

A second major difference lies in the investigative strategies each concept supports. Mechanisms are usually studied by decomposition and localization: one fixes an explanatory target and then drills down to find the lower-level parts and their organized activities that produce it. Ross argues that cascades are not naturally discovered in this way. They do not necessarily present lower-level parts to drill down into, because they may traverse levels and scales. They may also fail to be relative to a single outcome, especially in cases of multi-product amplification, where one trigger fans out into many downstream effects. Often, moreover, scientists do not yet know the full set of those downstream effects; part of cascade investigation consists in discovering what the process branches into. Accordingly, a cascade-oriented approach often starts from the trigger and maps outward through the spreading consequences rather than beginning with one effect and drilling inward to its constitutive parts.

A third contrast concerns fine-grained “how” detail. Mechanistic explanation, especially in the new mechanist tradition, typically demands a detailed account of activities and interactions with no explanatory gaps or black boxes; mechanistic explanation is often described as literally opening the black box. Ross emphasizes that cascades, as cascades, are not expected to deliver this kind of detailed productive account. A cascade model may show that amplification occurs without specifying in comparable depth how each local interaction produces the next. On her view, this is not a defect but a sign that a different explanatory target is in view. In some contexts, what matters is not an exhaustive account of local productive detail, but the identification of a one-to-many amplification structure and the consequences that follow from it.

Ross then presses an even stronger claim: sometimes cascade structure explains where mechanism detail is either unnecessary or actually unhelpful. In some cases, mechanistic micro-detail simply is not needed for the explanatory target at hand. Her example of wolf trophic cascades illustrates this point: one does not need the minute mechanical details of how elk consume willow in order to understand the larger explanatory fact that elk consumption influences many downstream ecological outcomes because of its position in the network. More strikingly, Ross argues that in some cross-case contexts mechanistic detail may fail to be explanatory precisely because it varies too much. Where many cases share the same cascade structure but are instantiated by different mechanisms, the commonality across those cases is better explained by the shared cascade structure than by the mechanisms, since the mechanisms differ from one case to another. Her example of COVID transmission makes this vivid. The mechanisms of transmission may differ across contexts—talking, coughing, surface contact, and so on—but the branching, one-to-many structure of spread remains shared and carries key explanatory implications, such as rapid dissemination, difficulty of control, and the importance of early containment.

This leads directly to the question of what makes an explanation good. Ross’s framework suggests that explanatory goodness is partly a matter of being appropriate to the causal structure involved. A mechanistic explanation is good when the underlying causal structure is genuinely mechanism-like, involving constitutive organization, decomposable operations, and meaningful productive detail, and when the evidence supports those commitments. A cascade explanation is good when the important explanatory work lies instead in identifying the trigger, mapping the branching amplification pathways, and capturing the stable or runaway dynamics that shape prediction, prevention, and control. If one were to judge a cascade explanation by mechanistic standards alone—asking where the parts are, what the activities are, and whether every black box has been opened—one might wrongly dismiss it as shallow. Ross’s point is that this would often be a category mistake.

From her analysis, one can extract a more specific set of criteria for good cascade explanation. A good cascade explanation shows trigger adequacy: it correctly identifies the trigger, including any threshold structure or on/off conditions, and it shows that this trigger genuinely has the high-likelihood initiating role rather than merely being a rhetorically convenient first event. It also shows amplification adequacy: it must establish that there is real one-to-many amplification, whether single-product or multi-product, rather than a mere linear chain, and where relevant it should characterize amplification in terms of branching factors, multipliers, gain, or analogous parameters. A good cascade explanation must also display propagation or stability adequacy by showing why the process gains momentum and becomes difficult to stop once initiated, while also identifying plausible choke points or early-intervention leverage points consistent with cascade dynamics. Another important virtue is scope and cross-context invariance. If the same cascade structure generalizes across different contexts even though the local mechanisms differ, that generality can count as explanatory strength rather than weakness. Finally, a good cascade explanation shows implication adequacy. Ross emphasizes that different causal structures carry different consequences for control, risk, and communication, so a strong cascade explanation should make those practical and inferential implications explicit—for example, the importance of upstream intervention, vulnerability created by interdependence, or the risk of runaway failure.

Her account also gives a useful way of diagnosing pseudo-explanation. One common temptation is to use the word mechanism as a prestige label even when the evidence really supports only a high-level cascade topology. Ross warns that this can mislead audiences into assuming that constitutive parts, lower-level interactions, and fine-grained productive details have been established when they have not. If the concept of mechanism is stretched to cover every kind of causal structure, it becomes too diffuse to do explanatory work. Her warning is that if mechanism comes to mean everything, it begins to mean nothing. Good explanation therefore depends not just on causal truth in the abstract, but on using the right causal concept, since the concept chosen carries expectations about what kind of evidence has been provided and what kinds of inference about prediction, control, and generalization are warranted.

The broader payoff of Ross’s distinction is that it enriches a general framework for thinking about explanatory quality. It shows that a good explanation is not always better simply because it is more detailed, more micro-level, or more “inside the box.” Sometimes what does the explanatory work is the macro-structure of causal influence itself—the branching amplification and stable progression characteristic of cascades—rather than the micro-level activities through which that structure is locally realized. Explanatory adequacy, then, depends in part on matching the explanatory demand to the actual causal pattern in play, and on resisting the intellectual and psychological temptation to treat one prestigious explanatory label, especially mechanism, as the sole mark of genuine understanding.

Let's construct a comparative template for mechanisms and cascades. The two are optimized for different explananda. Mechanism explanation (MDC-style) explains a phenomenon by specifying entities + activities organized so that they produce/underlie/maintain the phenomenon. It’s “how it works inside the box.” Cascade explanation (Ross-style) explains by identifying a trigger that initiates a sequential amplification process with stable propagation (often with branching, spillover, and cross-level effects). It’s “how a small push becomes a big wave.” So mechanisms are best when the explanandum is a capacity of a bounded system; cascades are best when the explanandum is runaway growth / chain amplification / contagion / ripple effects. Next are two parallel sets of criteria applied to mechanisms and cascades respectively.

  1. Question fit and contrast clarity - A strong explanation must first answer the right question. For a mechanistic explanation, an A-level response clearly identifies the phenomenon being explained and often also specifies the relevant contrast, then fixes the system boundary accordingly by distinguishing what belongs inside the mechanism from what counts as external background. For a cascade explanation, the standard is different: it must clearly identify the contrast of triggered amplification. The key question is why the process spread or escalated rather than dying out or remaining local, and this requires specifying what counts as “spread,” across what domain, and over what time scale. The failure modes differ accordingly. A mechanistic explanation fails when it offers a story about how something works even though the real question is why it suddenly blew up or propagated. A cascade explanation fails when it defaults to contagion or spread language even though the actual question is about what produces the content’s cognitive effect in the first place.

  2. Core structural adequacy - The core explanatory burden also differs by structure. A high-quality mechanistic explanation must identify the relevant entities and activities and then show how they are organized, spatially, temporally, and in terms of control structure. It is not enough merely to name components; the explanation must show how their organized interaction produces the phenomenon. A high-quality cascade explanation, by contrast, must identify the trigger, show the pattern of sequential amplification that often takes branching form, and explain why propagation becomes stable or self-reinforcing once it begins. The corresponding pseudo-explanations are easy to spot. Mechanistic pseudo-explanation appears when someone relies on labels such as “inhibition,” “attention,” or “algorithm” without specifying interactions and organization. Cascade pseudo-explanation appears when someone says that “it went viral” without providing any account of amplification structure, such as branching factors, self-reinforcement, or the conditions under which propagation occurs.

  3. Difference-makers and intervention handles A good explanation should do more than describe structure; it should reveal where intervention matters. For a mechanistic explanation, an A-level account predicts what happens when a component or activity is perturbed, blocked, stimulated, or removed, and it explains why those changes matter. For a cascade explanation, the task is to identify the leverage points that alter propagation dynamics: reducing branching, delaying spread, raising thresholds, or breaking feedback loops, especially through early-stage intervention. Again, the weaknesses differ by type. A mechanistic explanation is weak if it can speak only about distal inputs and outputs but says nothing about midstream control points within the system. A cascade explanation is weak if it cannot specify what would reduce R-like parameters, where the cascade could be interrupted, or how its momentum might be slowed or prevented.

  4. Completeness norms, or what counts as enough detail The standard of completeness is not the same in every explanatory form. For a mechanistic explanation, adequacy usually means being how-actually-enough: it must provide enough internal detail to avoid black boxes for the explanatory purpose at hand. For a cascade explanation, the demand is different. It is not expected to open every black box or provide full mechanistic micro-detail at every stage. Instead, it is expected to represent connectivity, amplification, and progression reliably, even when the local underlying mechanisms vary or remain underspecified. This is where an important category error often occurs. One can wrongly dismiss a cascade explanation as inadequate simply because it does not satisfy mechanistic standards of internal micro-detail, even though those standards are not appropriate for the explanatory structure in question.

  5. Cross-context generalizability Generalizability also works differently in the two cases. A mechanistic explanation generalizes when the same organized internal structure recurs across cases, that is, when the same mechanism type is present again. A cascade explanation can generalize even when the local mechanisms differ, so long as the amplification topology and propagation constraints remain similar. For example, transmission mechanisms may differ from one case to another, yet the overall contagion dynamics may display the same branching and escalation structure. This is one of the distinctive virtues of cascade explanation: the explanatory commonality may reside in the amplification structure itself, rather than in a stable underlying micro-mechanism.

  6. Evidence profile, or what kinds of evidence should be present The two explanatory forms also call for different evidential profiles. A strong mechanistic explanation is typically supported by evidence showing that the relevant components exist, that their activities occur at the right times, that disrupting the pathway alters the phenomenon, and that the organized system can in some sense be recomposed from those parts and activities. It relies on multiple converging kinds of “inside the box” evidence. A strong cascade explanation, by contrast, is supported by evidence such as time-series spread patterns, branching statistics, network diffusion traces, threshold effects, spillover signatures, sensitivity to early interventions, and robust replication across contexts. In other words, mechanism evidence usually looks inward toward component organization, whereas cascade evidence often looks outward toward propagation form and dynamic pattern.

  7. Failure-mode handling and defeaters Good explanations should also anticipate their own limits and rival possibilities. A strong mechanistic explanation states its boundary conditions, clarifies when the mechanism fails, and distinguishes itself from rival mechanisms that might also appear to fit the phenomenon. A strong cascade explanation explains the conditions under which a process fizzles rather than explodes, incorporates suppression mechanisms such as immunity, friction, or saturation, and compares rival propagation topologies, such as a simple chain, a branching process, or a more complex contagion structure. In both cases, explanatory strength depends partly on how well the account handles defeaters, but the relevant defeaters differ depending on whether one is explaining organized production or dynamic spread.

  8. Explanatory honesty and resistance to seductive clarity Finally, both forms of explanation must avoid becoming mere prestige labels. A strong mechanistic explanation does not simply invoke the word “mechanism” as if that by itself conferred depth; it actually specifies the parts, activities, and organization that make the label warranted. A strong cascade explanation likewise does not use “cascade” as a dramatic vibe-word; it must quantify, or at least structurally specify, the amplification and stability that justify calling the process a cascade. This is where the broader concern with pseudo-explanation becomes especially important. In both cases, explanatory adequacy requires more than rhetorically satisfying labeling. It requires real inferential work: the explanation must support predictions, counterfactual reasoning, and disciplined judgments about what would happen under change.
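
Points 5 and 6 above can be made concrete with a small sketch. All traces and numbers below are hypothetical; the point is the kind of outward-looking cascade evidence described: estimating a branching factor from observed generation sizes, and checking that two communities with different local share mechanisms nonetheless show the same supercritical amplification structure.

```python
def branching_estimate(generation_sizes):
    """Crude branching-factor estimate from a diffusion trace:
    the mean ratio of successive generation sizes."""
    ratios = [b / a for a, b in zip(generation_sizes, generation_sizes[1:]) if a]
    return sum(ratios) / len(ratios)

# Hypothetical generation sizes for two communities whose local share
# mechanisms differ (say, identity signaling vs. fear/uncertainty):
community_a = [5, 11, 24, 50, 104]
community_b = [8, 17, 36, 77, 160]

r_a = branching_estimate(community_a)
r_b = branching_estimate(community_b)

# Both estimates are close to 2 (> 1, i.e., supercritical), so the
# cascade-level account generalizes even though the micro-mechanisms differ.
```

A real analysis would of course use inferred diffusion trees and proper estimators, but the inferential structure is the same: the evidence concerns propagation form, not component organization.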

Here are two checklists you can use to quickly assess the adequacy of each type of explanatory structure.

Mechanism explanation

  1. phenomenon/contrast + boundary
  2. entities
  3. activities
  4. organization
  5. intervention predictions
  6. evidence mapping (how-actually-enough)
  7. boundaries/defeaters
  8. alternatives

Cascade explanation

  1. escalation contrast (explode vs fizzle)
  2. trigger identification
  3. amplification structure (branching/multipliers)
  4. stability/self-reinforcement
  5. leverage points (early and midstream)
  6. diffusion evidence (networks/time-series)
  7. suppression/failure conditions
  8. cross-context invariance (when mechanisms vary)
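
If it helps to operationalize the two checklists, here is a minimal sketch of a checklist scorer. The criterion strings are abbreviations of the items above, and the scoring scheme (an unweighted fraction) is my own simplification, not part of Ross's framework.

```python
MECHANISM_CRITERIA = [
    "phenomenon/contrast + boundary", "entities", "activities", "organization",
    "intervention predictions", "evidence mapping", "boundaries/defeaters",
    "alternatives",
]

CASCADE_CRITERIA = [
    "escalation contrast", "trigger identification", "amplification structure",
    "stability/self-reinforcement", "leverage points", "diffusion evidence",
    "suppression/failure conditions", "cross-context invariance",
]

def adequacy(criteria_met, checklist):
    """Fraction of a checklist an explanation satisfies (0.0 to 1.0)."""
    return sum(item in criteria_met for item in checklist) / len(checklist)

# An account that only names a trigger and an amplification structure
# scores 2/8 on the cascade checklist:
score = adequacy({"trigger identification", "amplification structure"},
                 CASCADE_CRITERIA)
```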

Applying Ross’s framework to a misinformation cascade

  • Step 1: What’s the phenomenon/contrast?

    A cascade-apt explanandum isn’t “why did person X believe claim Y?” (mechanism-ish). It’s:

    • Why did this misinformation story spread widely rather than remain local or die out?
    • Why did it accelerate quickly / jump communities / persist over time?
    • Why did it spill into offline behavior or institutional responses?

    Those are classic cascade contrasts: growth vs non-growth, containment vs spillover, fizzle vs runaway.

  • Step 2: Build the misinformation cascade explanation

    (a) Trigger

    A misinformation cascade typically begins with a trigger such as:

    • a high-visibility post by a prominent account,
    • a breaking-news vacuum (high uncertainty),
    • a salient event that “primes” an existing narrative template,
    • or an algorithmic surfacing event (platform recommendation boost).

    A cascade explanation earns its keep when it identifies what counts as the initiating condition and why it crosses a threshold (e.g., enough initial exposures to get branching going).

    (b) Sequential amplification (the heart)

    You then specify the amplification path(s). In misinformation, amplification often comes from multiple sequential multipliers:

    1. Cognitive multiplier: emotional salience → higher share probability
    2. Social multiplier: identity signaling → group endorsement and repetition
    3. Algorithmic multiplier: engagement → recommender boosts → more exposure
    4. Media multiplier: online buzz → coverage → legitimacy halo → more spread
    5. Cross-platform multiplier: migration → new audience segments → renewed growth

    This is cascade logic: small initial perturbation becomes large via repeated gain at each stage.

    (c) Stable propagation / momentum

    Stable propagation shows up as:

    • feedback loops (engagement → promotion → engagement),
    • reputational lock-in (influencers double down),
    • identity-protective dynamics (counterevidence increases commitment),
    • and saturation dynamics (it eventually slows when audiences saturate or platforms intervene).

    A good cascade explanation makes clear why it was hard to stop once it started, and what the momentum-maintaining loops were.

    (d) Spillover (cross-level effects)

    Misinformation cascades often “spill” beyond the originating community:

    • from niche forums → mainstream platforms,
    • online belief → offline behavior,
    • public discourse → institutional reaction → further discourse.

    This is exactly the kind of “not bounded to one discrete system” propagation that makes cascade models natural.
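
Steps (b) and (c) can be sketched as a toy model, with all gains and population sizes purely illustrative: sequential multipliers compound into an effective branching factor, and audience saturation eventually brakes the spread.

```python
def simulate_cascade(seed, stage_gains, population, steps):
    """Toy cascade: sequential multipliers compound into one effective
    branching factor r; spread is damped as the audience saturates."""
    r = 1.0
    for gain in stage_gains:      # cognitive, social, algorithmic, ... gains
        r *= gain
    new, reached = float(seed), float(seed)
    series = [new]
    for _ in range(steps):
        new *= r * (1 - reached / population)   # saturation brake
        reached = min(population, reached + new)
        series.append(new)
    return series

# Per-stage gains above 1 compound into runaway growth, then saturate:
boom = simulate_cascade(10, [1.2, 1.1, 1.3, 1.1], population=100_000, steps=30)
# Per-stage gains at or below 1 fizzle out:
fizzle = simulate_cascade(10, [0.9, 1.0, 0.9, 1.0], population=100_000, steps=30)
```

The point of the sketch is the shape, not the numbers: compounded gain is why the cascade was hard to stop once started, and saturation is why it eventually slows.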

  • Step 3: Why this cascade explanation can be more suited than a mechanistic one

    A mechanistic explanation here would often aim at:

    • individual cognitive mechanisms (motivated reasoning, heuristic processing),
    • micro-interactions (who persuades whom),
    • algorithmic mechanisms (ranking function details),
    • etc.

    Those can be real and important—but they’re not always the right explanatory level for the spread phenomenon.

    Here’s Ross’s key insight translated: a cascade explanation can do explanatory work even when the underlying micro-mechanisms vary across contexts.

    • One community spreads it via identity signaling.
    • Another spreads it via fear and uncertainty.
    • Another spreads it due to elite cues.
    • Another spreads it because the platform’s recommender happened to catch it.

    The mechanisms differ, but what might remain invariant is:

    • branching amplification,
    • threshold triggering,
    • feedback stabilization,
    • network bridge crossings.

    So if your explanatory aim is to understand and control the large-scale growth pattern, it may be more explanatory to model:

    • the branching factors,
    • the feedback loops,
    • the early threshold conditions,
    • and the choke points,

    than to insist on a single unified cognitive/algorithmic micro-mechanism.

  • Step 4: What counts as a good misinformation-cascade explanation

    Cascade explanation of misinformation includes:

    1. Trigger threshold: identifies the initiating conditions and why they crossed the “ignite” threshold.
    2. Amplification chain: specifies sequential multipliers (human + algorithm + social).
    3. Stability loops: identifies feedback maintaining spread.
    4. Fizzle conditions: shows when/why similar stories die (no bridge nodes; low affect; platform friction).
    5. Leverage points: predicts interventions that reduce spread (rate-limits, friction, inoculation, deplatforming, reducing recommendation boost, targeting bridge nodes).
    6. Evidence: time-series diffusion + network traces + intervention comparisons (e.g., when friction is introduced, growth curves change).
    7. Cross-context generalization: distinguishes what’s structural vs what’s local mechanism-specific.
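
Points 1, 4, and 5 can be illustrated with a toy expected-reach calculation; the branching factor, friction strength, and intervention timings are all hypothetical. The takeaway is that the same friction intervention removes far more total spread when applied early than when applied late.

```python
def expected_reach(seed, r, steps, friction_at=None, friction=0.5):
    """Expected cumulative reach of a cascade with branching factor r.
    An optional friction intervention (e.g., a rate limit) multiplies r
    by `friction` from step `friction_at` onward."""
    total, current = float(seed), float(seed)
    for t in range(steps):
        r_t = r * friction if friction_at is not None and t >= friction_at else r
        current *= r_t
        total += current
    return total

no_fix = expected_reach(10, 1.5, 10)
early = expected_reach(10, 1.5, 10, friction_at=2)
late = expected_reach(10, 1.5, 10, friction_at=8)
# The same intervention removes far more total spread when applied
# early than when applied late: early < late < no_fix.
```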

Ross’s framework doesn’t say “never use mechanisms.” It says: don’t force everything into mechanism-shaped explanation. A very powerful strategy is a hybrid. Cascade explanation at the macro level accounts for spread dynamics, thresholds, and leverage points, while mechanistic explanation at local nodes and links accounts for why particular multipliers exist (why identity increases share probability; why certain ranking changes increase exposure; why certain narratives hook attention). That hybrid avoids two bad extremes: pure “mechanism fetishism” (demanding micro-detail everywhere) and pure “cascade simplification” (no causal accountability).

Social Structural Explanation and Causal Constraints

Ross’s project in social structural explanation is, at bottom, an attempt to make structure-talk both causally intelligible and evaluable. The aim is to show that when social scientists invoke “structure,” they need not be engaging in vague or pseudo-explanatory rhetoric. Instead, structure can name a specific kind of causal claim with its own explanatory role. In Ross’s framework, many social structures explain precisely by functioning as causal constraints: they shape, limit, and channel what outcomes are possible or likely, rather than triggering outcomes in the more familiar manner of a shove causing a fall. To understand this, one must first clarify what counts as a structure in social structural explanation, then examine how causal constraints work and how structures encode them, and finally ask what makes a structural explanation a good one.

When social scientists appeal to structure, they generally mean relatively stable, socially organized features of the environment that pattern behavior and outcomes across persons and across time. Ross’s examples include public policies, economic systems, social hierarchies, and forms of resource infrastructure such as access to transportation, medical insurance, or fresh food. What is important here is that these are not primarily individual psychological states. They are larger-scale causes, often harder to perceive and define directly, and they are frequently contrasted with the more familiar “standard stories” that explain behavior by focusing on individual beliefs, motives, and choices. Ross’s point is that many durable social patterns cannot be adequately understood if one attends only to individuals while ignoring these broader, organized features of the environment.

In this sense, structures perform several kinds of causal work. They shape the opportunity set by influencing which actions are feasible, affordable, safe, accessible, or salient, as in cases where food deserts and targeted advertising affect dietary patterns. They shape incentives and payoffs, since policies and institutions systematically alter the costs and benefits associated with different actions. They also shape exposure and pathways, determining what individuals are exposed to in the first place, including risks, information, social networks, policing patterns, and education quality. Finally, they channel default paths: they need not eliminate agency in order to matter causally, because they can make some behaviors much more likely and others much less likely by changing friction, availability, and the conditions for coordination. Structures, in other words, are patterning causes. They help explain population-level regularities and persistent disparities that remain mysterious if one cites only individual preference or decision.

This leads directly to Ross’s account of causal constraints. Against a common philosophical tendency to treat constraints as somehow non-causal or merely negative, Ross argues that some constraints are genuinely causal and provide a distinctive kind of explanation. In her account of causal constraints in the life and social sciences, she treats causal constraints as causes with additional features beyond simply satisfying a basic interventionist difference-making condition. She characterizes them as causes that, at a minimum, limit the possible values of the explanatory target, stand external to the process they limit, and are relatively fixed, meaning harder to change and typically operating over longer timescales. She also emphasizes that constraints often guide outcomes rather than trigger them. They determine which outcomes are possible and which are effectively off-limits, thereby drawing a boundary between what can occur and what is impossible, or at least practically inaccessible. Constraint explanations therefore answer questions such as: why did the system end up within this range rather than another, why are some outcomes systematically rare or unavailable here, and why does this stable pattern recur across many individuals or cases? A causal constraint explains by restricting the space of possible trajectories and outcomes, much as a riverbank shapes the flow of water. This is why constraint explanations are particularly apt for explaining persistent inequalities, robust cross-context regularities, and even absences, such as why some outcome does not happen more often in a given environment.

One reason structural explanation can feel elusive is that structures do not resemble the vivid, local causes that dominate common-sense causal thinking. They do not look like billiard balls colliding, and they can seem uncomfortably “top-down.” Ross’s constraint framework is meant to make their causal role intelligible without appealing to anything mystical. Structures influence events by shaping and stabilizing causal pathways. They matter causally because they control what resources are available, constrain movement, access, and exposure, structure patterns of network contact, and set institutional defaults. None of this requires structures to “push” individuals directly. Rather, they reshape the causal landscape in which individuals act by altering probabilities, costs, and feasibility relations. Structural influence is therefore perfectly compatible with ordinary causal reasoning once one stops assuming that all causes must resemble discrete triggers.

Ross further argues that structures can count as causes in the ordinary interventionist sense. If one changes a policy, an infrastructure arrangement, or a regime of access, outcomes can shift in reliable ways. This matters because it shows that structural explanation is not merely ideological description or loose social commentary. It can be anchored in counterfactual reasoning and intervention-based evidence. Indeed, one of Ross’s key goals is to rebut the suspicion that structure-talk is somehow less causal than more proximate forms of explanation. If changing the structure changes the outcome, then the structure has satisfied a familiar causal test.

At the same time, Ross notes an important social-epistemic complication: structures are often slow causes, and that is one reason they are neglected. Because constraints are relatively fixed, operate over longer timescales, and lack vivid event-like salience, both scientists and ordinary observers are prone to discount them even when they are explanatorily central. Human attention tends to be drawn to vivid, proximate, and agentic causes, while slower structural constraints can feel less story-like and therefore less causally compelling. This bias helps explain why individual-level accounts are often overprivileged even when structural factors better account for persistent group differences and long-run regularities.

A successful structural explanation, then, cannot consist merely in naming a structure. It must show how the structure constrains outcomes and why that constraining role makes the observed pattern likely or robust. Several success conditions follow from this. First, the structure must be specified precisely enough to do explanatory work. Bad or pseudo-structural explanation consists in saying things like “capitalism did it,” “patriarchy did it,” or “the system did it,” without giving those labels operational content. A good structural explanation instead identifies the particular structural factor or factors at issue: a policy, an access regime, an institutional rule, a network topology, a segregation pattern, a zoning regime, or insurance eligibility criteria. Only then can one tell what would count as change and what inferences the explanation supports. This matches the broader requirement that an explanans must do real inferential work rather than merely supply a label.

Second, a strong structural explanation must show that the cited factor genuinely functions as a constraint. Ross’s characterization gives a practical checklist: does it limit the possible values of the outcome, is it external to the process or behavior it shapes, is it relatively fixed or slow-moving, and does it guide outcomes rather than trigger them? Structural explanation is especially powerful when it shows why certain outcomes are systematically rare, off-limits, or difficult to realize under the prevailing regime. In such cases, the explanatory force lies not in identifying a discrete event that produced the outcome, but in showing how the structure narrowed the range of possible trajectories.

Third, structural explanations must demonstrate difference-making. In practice, social scientists support structural claims using policy changes, natural experiments, difference-in-differences designs, discontinuities and eligibility thresholds, instrumental variables used cautiously, longitudinal comparisons, and related strategies that make the counterfactual “if the structure had been different…” credible. Ross explicitly frames the causal status of structures in terms of such difference-making under change. The point is not merely to observe that structures exist, but to show that altering them would alter the distribution of outcomes.
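
The difference-in-differences logic mentioned here reduces to simple arithmetic; a minimal sketch with hypothetical share-rate numbers:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate: the treated group's change
    minus the control group's change over the same period."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical weekly misinformation-share rates, before and after one
# platform (treated) introduces forwarding limits while a comparable
# platform (control) does not:
effect = diff_in_diff(treat_pre=120, treat_post=70, ctrl_pre=110, ctrl_post=105)
# effect == -45: the drop attributable to the structural change, net of
# the background trend shared with the control platform.
```

The design makes the counterfactual “if the structure had been different…” credible precisely because the control group absorbs the shared background trend.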

Fourth, a good structural explanation must address the familiar worry about individual choice without collapsing back into individualism. One of Ross’s motivations is that social inquiry often overvalues “standard stories” centered on individual agents even where structural constraints do the real explanatory work. A good structural explanation does not deny agency. Rather, it shows that agency is exercised within sharply unequal constraint regimes, so that formally similar “choices” are not equally available, affordable, safe, or likely across groups. This preserves the reality of agency while resisting the mistaken inference that agency alone explains patterned disparities.

Fifth, a strong structural explanation must compete with alternatives and avoid what might be called structure gerrymandering. This is where a general framework for good explanation applies with particular force. A structural explanation should be able to say what it explains better than its rivals. Does it outperform a pure preference-based story? Does it better explain stability across contexts? Does it predict what happens when constraints are relaxed or tightened? If a structural explanation can be stretched to fit anything whatsoever, then it ceases to explain anything in a disciplined way and becomes pseudo-explanatory.

Ross’s account of structural and constraint-based explanation also fits into her broader causal pluralism, according to which not all causal explanation aims to open the black box in the manner of mechanistic explanation. Mechanistic explanations typically aim to reveal the internal organized parts and activities that produce a phenomenon. Constraint-based structural explanations, by contrast, often aim to show why the evolution of a system is channeled into a limited range of outcomes, sometimes without requiring micro-detail about every internal step. Because these explanatory forms answer different kinds of questions, their evaluative standards differ as well. A good mechanism tends to require decomposition, organized interaction, internal pathway detail, and manipulable components. A good structural or constraint explanation tends instead to require correct identification of the constraint regime, correct characterization of what it rules in or out, robustness across cases, and credible support for the claim that changing the structure would change the outcome distribution.

Ross’s framework therefore supports a concise grading rubric for high-quality social structural explanation:

  1. It names the structure non-vacuously. The explanation identifies a policy, institution, or resource topology rather than relying on slogans or prestige labels.
  2. It shows genuine constraint behavior. The explanation demonstrates that the structure limits the outcome space and guides outcomes rather than merely triggering them.
  3. It establishes difference-making. The explanation is supported by plausible counterfactual evidence, whether comparative, quasi-experimental, or intervention-based.
  4. It clarifies the relation to agency. The explanation shows how structure reshapes feasible options and probabilities without erasing the role of choice altogether.
  5. It states scope and boundary conditions. The explanation specifies when the constraint binds, when it loosens, and where its influence is strongest or weakest.
  6. It beats plausible alternatives. The explanation accounts for patterns that rival individualistic or purely proximal-cause stories struggle to explain.

Taken together, these points show what Ross is trying to achieve. Social structural explanation, on her view, is not a fallback for when finer-grained explanation is unavailable, nor is it a rhetorical substitute for real causal analysis. It is a distinctive and legitimate explanatory form, centered on the idea that some causes work by constraining and channeling the space of possible outcomes. To evaluate such explanations well, one must judge them by standards appropriate to that causal role rather than by assuming that all good explanation must look like a local trigger or a fully opened mechanism.

(See Lauren N. Ross, “What is social structural explanation? A causal account,” and her LSE Impact post “What are social structural explanations?”)

Next is an example of structural explanation, applied to misinformation cascades. Misinformation spread is almost tailor-made for social structural explanation in Lauren N. Ross’s sense, because a lot of what we want to explain is not “why did this person believe that claim?” but:

  • Why did this claim spread widely rather than fizzle?
  • Why does misinformation spread faster here than there?
  • Why is spread persistent/recurrent across topics?
  • Why do particular groups or regions show systematically different exposure/belief patterns?

Those are questions about patterned distributions and stable regularities, which is exactly where “structure” and “causal constraints” do explanatory work. Below I’ll build a Ross-style structural explanation of misinformation spread, show how “constraint” explains, show how social scientists successfully support structure claims, and list a set of evaluative criteria for what makes this "good".

What “structure” is in misinformation spread

In the context of misinformation spread, structure usually refers to the relatively stable, organized features of the information environment that systematically shape who encounters what information, through which channels, and with what degree of friction. These structures are not primarily mental states or individual attitudes. Rather, they are external conditions that pattern the broader causal field within which individual minds operate. Thinking in structural terms therefore shifts the focus away from isolated psychology and toward the organized environment that makes certain informational outcomes more likely than others.

Several kinds of structure are especially important in this domain. First, there is platform architecture: recommender systems, ranking rules, autoplay, sharing affordances, low- or high-friction reposting tools, group-formation mechanisms, and quote-tweet features all affect how information moves. Second, there is the attention and incentive economy, including ad-driven revenue models, engagement optimization, creator monetization, political incentives, and outrage incentives. Third, there is network topology, such as clustering, segregation, bridge nodes, influencer hubs, degree distributions, community structure, and cross-platform pipelines. Fourth, there are institutional information systems, including media ecosystems, press incentives, partisan news segmentation, public health communication systems, and education systems. Fifth, there is the regulatory and governance environment, which includes moderation policy, transparency rules, platform liability regimes, election rules, and enforcement capacity. Finally, there are socioeconomic infrastructures that shape access, such as time scarcity, educational inequality, language access, local trust networks, and vulnerability to scams.

Ross’s broader point about structure applies directly here: if one looks only at individuals and says things like “people are gullible,” “people are biased,” or “people are tribal,” one will miss the reason misinformation displays recurrent population-level patterns across different topics and time periods. Structural causes are precisely the kinds of causes one needs in order to explain those larger regularities.

What “causal constraints” are here, and why they explain

A Ross-style causal constraint is a cause that does not mainly operate by triggering events in the way a shove causes a fall. Instead, it works by limiting and channeling the space of possible trajectories. In misinformation environments, this means that structures help determine what kinds of informational pathways are easy, difficult, rewarded, suppressed, or even practically unavailable. Constraint explanation is therefore especially well-suited to showing why some patterns of spread recur reliably while others are systematically blocked or dampened.

These constraints appear in several forms. One is constraints on exposure. Some communities are structurally more exposed to certain claims because of network configuration and ranking rules, while others are more shielded because their informational environment has higher friction, more diverse feeds, or stronger moderation. This helps explain why certain claims reliably reach some subpopulations earlier than others. Another form is constraints on propagation, especially through friction versus amplification. Read-before-share prompts, rate limits, and forwarding limits constrain rapid spread, whereas low-friction designs expand the range of propagation trajectories that content can take. This explains why very similar content may spread explosively on one platform but not on another, even when the same kinds of users are involved.

A third form consists in constraints on credibility pathways. Institutional trust landscapes shape which sources count as credible in a given environment. If trust in mainstream institutions is structurally low in a community, then certain narratives may face fewer social penalties and fewer obstacles to uptake. This helps explain why corrections and fact-checks systematically fail in some environments without requiring that the explanation reduce entirely to individual stubbornness. A fourth form is constraints on what can go viral. Even without appealing directly to cognition, structures determine which kinds of content are most likely to be selected and amplified. Engagement-driven ranking, for example, tends to favor emotionally arousing, identity-signaling, and conflict-rich material. This helps explain why misinformation that fits outrage or identity templates is disproportionately represented across many otherwise different topics. One of the key strengths of constraint explanation is that it explains not only occurrences but also absences and impossibilities. It can answer questions like: why does high-quality corrective information not spread as quickly? The answer may be that the relevant structure makes those trajectories high-friction and low-reward.

How structures causally influence observed events without being mysterious

Structural causes often seem mysterious because they do not resemble simple local pushes or discrete triggering events. One useful way to make their influence intelligible is to think in terms of three linked layers.

At Layer 1, structures shape the causal landscape itself. They determine who encounters what, how easy transmission is, what gets rewarded, what gets penalized, and how quickly cascades can accelerate. In this sense, structures do not merely sit in the background; they organize the possibilities within which later informational events unfold.

At Layer 2, individual psychology operates within that structured landscape. Psychological factors such as confirmation bias or identity protection still matter, but they matter within environments that activate them more often in some settings than in others. Those same psychological tendencies also have more downstream impact when friction is low and amplification opportunities are high. Structure therefore does not replace psychology, but conditions when and how psychological mechanisms matter.

At Layer 3, aggregate patterns emerge. Population-level phenomena such as polarized false beliefs or repeated rumor cycles can then be explained as the result of structural constraints plus amplification conditions operating across millions of micro-interactions. This parallels Ross’s treatment of cascades: the large-scale pattern can often be explained by the shape of the environment even when the specific micro-mechanisms vary from one interaction to another.
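The interplay between these layers can be made concrete with a deliberately minimal sketch. The toy model below is entirely hypothetical (the function name, parameters, and values are invented for illustration): it holds a psychological parameter, share_bias, fixed and varies only a structural parameter, friction, using expected values rather than real diffusion data. The point is Layer 2’s claim that the same psychology produces very different aggregate outcomes under different constraint regimes.

```python
def simulate_spread(share_bias=0.3, friction=0.8, reach=5,
                    rounds=10, seed_audience=10):
    """Toy expected-value diffusion model (illustrative only, not fit to data).

    Each round, every newly exposed person reshares with probability
    share_bias * (1 - friction) -- psychology gated by structure --
    and each reshare exposes `reach` further people.
    Returns the expected number of people ever exposed.
    """
    exposed = frontier = float(seed_audience)
    for _ in range(rounds):
        frontier = frontier * share_bias * (1 - friction) * reach
        exposed += frontier
    return exposed

# Identical psychology (share_bias=0.3), different structures:
low_friction = simulate_spread(friction=0.2)   # branching factor 0.3*0.8*5 = 1.2, supercritical
high_friction = simulate_spread(friction=0.8)  # branching factor 0.3*0.2*5 = 0.3, subcritical
```

With friction at 0.2 the per-round branching factor is 1.2, so the cascade compounds into hundreds of expected exposures; at 0.8 it is 0.3, so the wave damps out after a few rounds. Nothing about the agents’ psychology changed between the two runs, which is exactly the structural point.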

How social scientists make successful appeals to structure (what counts as evidence)

A good structural explanation in misinformation research cannot merely gesture at “the system.” It has to earn causal credibility. Social scientists do this through several recurring kinds of evidence, each of which helps establish that a structural feature is genuinely making a difference.

One important strategy is comparative platform or ecosystem contrast. If two environments differ structurally in ranking, friction, moderation, or network topology, and they display systematically different spread patterns for similar content, that supports the claim that structure is playing a causal role. Another major strategy is the study of natural experiments and policy changes. When platforms introduce features such as forwarding limits, prompts, downranking, de-amplification, bans, or transparency reforms, and systematic changes in diffusion patterns follow, that is strong evidence that structural constraints matter.

A further line of support comes from network analysis that identifies constraint points. Evidence that spread depends heavily on bridge nodes, hubs, influencers, or tightly clustered communities supports structural explanation because it shows that topology restricts which diffusion paths are available. In addition, intervention studies and field experiments can test whether informational interventions, inoculation or prebunking, labeling, or design changes affect exposure rates, forwarding probabilities, cross-community spillover, or persistence over time. Finally, structural claims become strongest when there is triangulation across multiple measurement types. When behavioral data, network data, platform rule descriptions, and temporal diffusion patterns all line up while also ruling out obvious alternatives, the causal case for structure becomes much stronger.

When structural explanation is better suited than mechanistic explanation for misinformation spread

There are contexts in which mechanistic explanation remains appropriate. In the “entities plus activities organized” sense, it is often the best framework for explaining individual-level cognition, how people update beliefs, how a ranking algorithm computes decisions, or how persuasion works in a small-scale setting. These are cases where one wants an account of the organized internal processes that generate a phenomenon.

But misinformation spread, considered as a social phenomenon, often raises a different explanatory question. What one frequently wants to understand is the distribution and dynamics of misinformation across a population: who gets exposed, how quickly something spreads, why it crosses into new groups, and why it persists over time. Those questions are often better addressed through structural constraints—what diffusion routes exist, what content is rewarded, and what friction is present—and through cascade dynamics, involving trigger, amplification, and stability. In these contexts, insisting on a fully mechanistic account can become a category mistake, because it demands internal micro-detail at precisely the point where the explanatory work is being done by macro-level constraints on the available diffusion paths.

The best approach is often a hybrid. Structural explanation helps account for why an environment tends to produce certain spread patterns, while more local mechanisms help explain how particular nodes, groups, or individuals respond within that constrained environment. The point is not that one mode of explanation replaces the other, but that they answer different explanatory demands and often work best in combination.

A Ross-style grading rubric for structural explanations of misinformation

A high-quality structural explanation of misinformation spread should satisfy several distinct, explicitly evaluative conditions:

  1. Structural specificity (the anti-slogan test). A good explanation identifies concrete structural features such as a ranking rule, moderation policy, friction affordance, or network segregation pattern. A bad explanation falls back on vague gestures, claiming that “capitalism,” “modernity,” or “polarization” did it, without any operational handle.

  2. Constraint articulation. A good explanation states what trajectories are made easy, difficult, or impossible. For example, it may show that one design makes rapid resharing cheap, or that a certain network topology blocks cross-community spread unless bridge nodes activate. A bad explanation says only that “structure shapes behavior” without specifying what that shaping amounts to.

  3. Difference-making evidence. A good explanation points to real or quasi-interventions showing that when the structure changes, diffusion changes. A bad explanation merely asserts structural influence without testing it against counterfactuals.

  4. Level-appropriate scope. A good explanation aims at population-level patterns and does not pretend to be a full theory of individual belief adoption. A bad explanation overreaches by implying that structure alone explains every individual cognitive response.

  5. Rival explanation competition. A good explanation distinguishes itself from purely individual-bias stories, purely content-based virality stories, and purely elite-cue stories by showing what those alternatives fail to explain, such as cross-platform differences or time-varying bursts after rule changes. A bad explanation can be stretched to explain anything after the fact.

  6. Leverage points and policy relevance. A good explanation identifies where changing constraints would likely change outcomes, for example by increasing friction, disrupting bridge nodes, reducing engagement incentives, or improving trusted institutional channels. A bad explanation offers no actionable counterfactuals.

These criteria show how one can distinguish a genuinely explanatory structural account from one that merely sounds sociological or critical without doing disciplined causal work.

A worked example sketch: explaining a misinformation wave structurally

Consider a case in which a rumor suddenly explodes across a platform. A structural explanation of that wave would not need to reconstruct a complete micro-mechanism for every individual act of belief update. Instead, it would explain the spread by identifying the relevant trigger, the surrounding constraint regime, and the amplification dynamics that made the wave possible.

In such a case, the explanation might proceed by identifying:

  1. a trigger condition, such as a high-centrality account posting during a period of uncertainty;
  2. a constraint regime, such as low-friction resharing combined with engagement-ranked feeds;
  3. an amplification process, where engagement loops accelerate early spread and recommendation systems surface the content beyond the original audience;
  4. a network structure, in which clustered communities plus a few bridge nodes allow the rumor to jump across groups once those bridges repost;
  5. a source of stability, such as identity signaling that makes retractions costly and encourages counterevidence to be reinterpreted as hostile propaganda;
  6. an account of why the rumor did not fizzle, namely that the structure failed to provide early damping because friction was too low and algorithmic amplification too strong; and
  7. clear leverage points, such as adding friction, limiting forwarding, reducing recommender boosts, targeting bridge nodes, or inoculating highly exposed clusters.

What is important about this style of explanation is what it makes intelligible. It explains the speed of the spread, its breadth, its ability to make cross-group jumps, its persistence, and the points at which intervention is most likely to work. It does all this without demanding a complete micro-level account of every individual cognitive process involved. That is precisely what it means for constraint explanation to do real explanatory work in the study of misinformation.
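The network-structural parts of this sketch (the clustered communities, the bridge node, and the bridge-targeting leverage point) can be illustrated with a toy graph. Everything below is invented for illustration: two small, tightly connected communities joined by a single bridge node, and a maximally low-friction diffusion rule in which every exposed node reshares to all of its neighbors.

```python
from collections import deque

# Hypothetical toy topology: two tightly clustered communities
# joined by a single bridge node (node 3).
community_a = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3]}
community_b = {4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
bridge = {3: [2, 4]}
graph = {**community_a, **bridge, **community_b}

def reached(graph, source, removed=frozenset()):
    """Breadth-first diffusion: every exposed node reshares to all of its
    neighbors (a maximally low-friction regime). Returns the exposed set."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen and neighbor not in removed:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

full_wave = reached(graph, source=0)            # jumps into community B via node 3
damped = reached(graph, source=0, removed={3})  # bridge node intervened on
```

In the full graph, a rumor seeded in community A jumps into community B through the bridge; with node 3 intervened on, the wave stays confined to {0, 1, 2}. That is the sense in which topology constrains which diffusion trajectories are possible at all, and why bridge nodes are natural leverage points.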

Non-Mechanistic Explanatory Styles

In Biological Explanation and What Constitutes an Explanation in Biology?, philosopher of science Angela Potochnik pushes back against the view that mechanism should be the dominant explanatory style in biology. I find her work similar to Ross's in that both recognize the role of pluralism in explanatory styles.

Potochnik’s view is that biological explanation is best understood neither as “deriving from laws” nor as “always giving mechanisms,” but as depicting causal patterns. She argues that two ingredients are jointly central:

  1. Causal dependence: the phenomenon happened because of certain causal factors.
  2. Scope / pattern / regularity: explanation also shows the range of circumstances in which that causal dependence holds (how general it is, where it breaks, what variants it has).

Her claim is not just “causes matter.” It’s: explanation = causes + the pattern of when/how those causes matter. That’s why she explicitly endorses “causal patterns” as the central explanatory currency in biology. This matters for biology because biological phenomena are typically produced by many interacting causal influences; it’s often impractical (and not even desirable) to cite all causes. So explanation has to be selective and pattern-oriented. This is one reason she thinks a simple “just list the causes” picture is not workable in biology, and why explanatory practices in biology naturally encourage accounts that emphasize generalizable causal structure rather than complete causal micro-history.

Potochnik is explicit that biology undermines the classic “laws of nature + derivation” model (the Carl Hempel / deductive-nomological tradition). Many biological explanations don’t cite laws at all, or only do so in a strained way. Even where biology uses “laws” (e.g., Mendel), they’re often exception-riddled and better seen as lawlike patterns with limited scope, not strict universal laws. This reinforces her view that “regularities” matter, but they often take the form of robust but non-universal patterns, not exceptionless laws.

Potochnik agrees with a broad causal approach: explanation often answers “what brought it about?” But she’s also very clear that “causal” is not equivalent to “mechanistic”. She gives mechanists their due: mechanisms are real and important, and in some areas (cell biology, physiology, biochemistry) mechanistic explanation is central. She uses a standard mechanist definition (entities + activities organized to produce regular changes). But she argues that there is a tendency—among philosophers and sometimes scientists—to treat “real explanation” as synonymous with “component-and-process detail.” She explicitly pushes back: biology contains many successful explanations that do not work by detailing component-process mechanisms. So for her, mechanistic explanation is one subset of causal pattern explanation, not the template that everything must fit.

Potochnik’s contribution is to insist that biology’s explanatory repertoire includes many non-mechanistic causal patterns:

  1. Mechanistic explanations (component-and-process): These give causal + organizational detail—classic “how it works” stories, e.g., biochemical pathways for photosynthesis. Mechanistic explanations typically answer: What are the parts/activities? How are they organized? How do successive steps produce the phenomenon?
  2. Functional / optimality explanations (role-based, selective advantage): She treats classic optimality explanations as a kind of functional explanation: traits are explained via the fitness-conferring role they play under selection. This is causal, but it often abstracts away from detailed causal production (developmental genetics, etc.) in order to capture a selection-based causal pattern.
  3. Equilibrium explanations (stability-focused; process-omitting): She emphasizes evolutionary game theory as a major explanatory strategy and describes it as fruitfully construed as equilibrium explanation—explaining via stability points rather than tracing the full causal history. Equilibrium explanations can be broadly causal (they reflect causal patterns), yet they may omit the causal process leading to equilibrium, and still be explanatory for the relevant aims.
  4. Large-scale and structural-cause explanations (context, distance, background shaping): Large-scale causes are spatially or temporally distant influences. Structural causes are contextual influences that shape a phenomenon without themselves changing in order to precipitate it. She insists these can be “just as causal” and “just as explanatory” as component/process causes. This aligns very closely with Ross on constraints and structure—though Potochnik’s framing is broader: “mechanism” is only one kind of causal-pattern depiction; context and large-scale influences are also legitimate causal patterns.

Potochnik’s position is basically that these explanatory styles don’t all fit neatly into “mechanism,” and that trying to force them to can mislead. She notes a recurring dispute: mechanists will either (i) stretch “mechanism” to include structural/large-scale/general patterns, or (ii) question whether those are genuine causal explanations. Her response is that stretching “mechanism” that far encourages an inaccurately reductionist expectation that “real causes” must be local, component-based processes.

So if you define “mechanism” narrowly (entities/activities organized to produce outcomes), then lots of biology explains outside mechanism. If you broaden “mechanism” to cover all causal patterns, you risk making it a catch-all that stops discriminating explanatory types (and stops guiding evaluation). That is directly relevant to the earlier “pseudo-explanation” concern: a concept becomes epistemically dangerous when it becomes vacuously inclusive (everything counts, so nothing is clarified).

Potochnik argues strongly that biology often supports multiple explanations of the same phenomenon, because different research aims select different causal patterns as explanatorily salient. She uses Ernst Mayr’s proximate/ultimate distinction as a paradigm: developmental/physiological vs evolutionary explanations can both be correct and non-competing, a point she elaborates in her “multiple explanations” section. This is philosophically important for the point about “indeterminacy” or underdetermination from the last post. It’s not that “anything goes”; it’s that explanatory goodness is aim-relative in a principled way: different causal patterns answer different questions, so multiple explanations can be simultaneously good because they are responding to different explanatory projects.

What makes an explanation “good” on Potochnik’s view? Potochnik doesn’t give a single universal checklist, but her framework yields fairly crisp criteria.

  1. Causal dependence: does it correctly identify what makes a difference? An explanation must latch onto genuine dependence relations (not mere correlations or labels).
  2. Scope: does it correctly characterize the range of circumstances? Good explanations don’t just cite a cause; they show where/how it generalizes—what else it would explain, when it would fail.
  3. Pattern selection is justified by the research question (aim-sensitivity): The “right” causal pattern depends on what you’re trying to understand; she repeatedly interprets biological disagreements as disagreements about explanatory projects rather than metaphysical worldviews.
  4. Strategic abstraction/idealization is legitimate when it serves the pattern: Omissions aren’t defects if they are the means by which the model captures the targeted causal pattern.
  5. Don’t over-demand mechanistic detail: Lack of component/process detail is not automatically “non-explanatory” if the explanation is doing different explanatory work (e.g., equilibrium, selection-pattern, structural-cause).

Potochnik gives a normative defense of non-mechanistic explanation (structural/large-scale/general patterns) that complements Ross’s more fine-grained taxonomy. She explains why “mechanism hunger” is epistemically risky: it can hide real causal patterns that are large-scale or contextual, and it can produce “ideological squabbles” about being “more mechanistic.” Her “causal patterns” view gives you an elegant way to unify mechanistic explanation, equilibrium/cascade explanations, structural/constraint explanations, and even some non-causal pattern explanations (she mentions regression to the mean as explanatory despite not being causal).

Eight Other Questions about Explanation

Potochnik’s Eight Other Questions about Explanation is a “reframing” paper. Her aim is not to propose yet another single master account of explanation (law-based vs causal vs mechanistic), but to show that philosophy has over-indexed on one issue—what kind of dependence explains—and in doing so has eclipsed other largely independent features that matter for how explanations actually work and how we should evaluate them.

She identifies eight “other questions” (beyond “what dependence relation is explanatory?”), grouping them into three clusters: human explainers, explanations as representations, and ontic/ontology-adjacent issues.

Below I’ll explain each question, what philosophical tension it captures, and how it connects to the things we have been building: explanation quality, psychological satisfaction/closure, mechanisms vs cascades/structures, underdetermination/pluralism, and the “seductions of clarity.”

Potochnik is trying to do three things with the eight questions:

  1. Decouple philosophical debates: you can disagree about abstraction, understanding, idealization, or levels without disagreeing about whether explanation is causal/mechanistic/law-like.
  2. Legitimize non-ontic issues: she pushes back on the idea that “the only real question” is what in the world explains. She wants communicative, psychological, and representational issues to be treated as central, not “mere pragmatics.”
  3. Make room for explanation pluralism: different legitimate explanatory aims → different standards → multiple good explanations. This connects directly to questions about indeterminacy and competing explanations.

Below are the eight questions. I will apply the questions to the concrete example we have been tracking throughout the post: misinformation spread. The popular explanation/story is “Misinformation spreads because people are biased and tribal. They want comforting stories, they don’t think critically, and social media just amplifies what people already want to believe.” It feels satisfying because it’s simple, agency-centered, and morally legible. We’ll run it through Potochnik’s eight questions and show what it gets right, where it fails, and what a Ross-style cascade/structural account adds.

  1. Priority of communication. Potochnik’s first question asks whether explanations are primarily ontic, meaning objective relations in the world, or communicative, meaning answers that succeed for an audience. The deeper issue is whether human-facing features are merely external to explanation or partly constitutive of it. That question becomes especially vivid when we examine a familiar explanation of misinformation spread: misinformation spreads because people are biased and tribal; they want comforting stories, do not think critically, and social media merely amplifies what they already want to believe. This story is rhetorically powerful because it performs extremely well on the communicative dimension. It answers a why-question in a way that feels immediately intelligible, gives us a morally legible picture of the problem, and offers a clear villain in bias or tribalism. But that same communicative success is also where the danger begins. If explanatory success is measured too heavily by whether something “clicks,” then the relief of having an explanation can substitute for explanatory adequacy. This is exactly where the seductions of clarity enter: the explanation satisfies the audience, but may fail to track the dependence relations most relevant to the phenomenon. This shows why communication cannot be treated as mere packaging. Psychological closure, narrative pull, and identity fit are not external to explanatory practice; they are part of what makes some explanations compelling even when they are incomplete or distorting.

  2. Connection to understanding. The second question concerns the relationship between explanation and understanding. Potochnik places this within the debate over whether explanation is necessary for understanding, sufficient for it, or neither. The misinformation story is a useful case because it clearly produces a strong feeling of understanding. It is easy to picture: people believe what they want, share what flatters their group, and ignore correction. That image creates a powerful sense that the phenomenon has been made intelligible. Yet what it often fails to provide is the kind of understanding needed to explain actual diffusion dynamics. It does not, by itself, explain why some falsehoods explode while others fizzle, why certain rumors jump across platforms or communities, why a given story surges in one week rather than another, or why a design change alters the shape of spread. In that sense, it illustrates the gap we have been tracing between understanding-feel and actual explanatory competence. It fits neatly with the illusion of explanatory depth: people can feel that they understand misinformation because “humans are biased,” while remaining unable to answer the mechanistic, cascade, or structural questions that would reveal real explanatory grip. Potochnik’s framing helps show that if explanation is tied to understanding, then one must ask not only whether an account satisfies us, but whether it equips us to navigate the phenomenon in a disciplined way.

  3. Psychology of explanation. Potochnik’s third question asks what the psychology of explanation reveals about the roles explanations play and the features people tend to prefer. She notes that people are often drawn to explanations that are simple and highly general, and that explanatory activity can support learning about patterns and causal structure. The popular misinformation story exemplifies that perfectly. It exploits simplicity by reducing the phenomenon to bias, generality by suggesting that humans always behave this way, and agency salience by centering on people making bad choices. This is part of why it is so satisfying: it reduces uncertainty quickly and provides cognitive closure. But those same psychological virtues can become epistemic liabilities. The story encourages what we earlier called seizing and freezing: once “bias and tribalism” are in place, the search for alternative explanatory structures often stops. Platform incentives, friction design, network topology, moderation regimes, bridge nodes, and institutional trust structures are all easier to neglect once the psychologically satisfying account is in hand. It can also distort evidence evaluation, since vivid examples of gullibility or tribal reasoning may be over-weighted while structural evidence, such as changes in diffusion curves after design modifications, is under-weighted. This is why Potochnik’s attention to the psychology of explanation fits so well with concerns about closure, ambiguity intolerance, and the craving for explanation over accuracy: explanatory preference is itself part of the problem explanation theory must reckon with.

  4. Priority of representation. The fourth question asks whether representational choices such as abstraction, omission, and idealization are peripheral to explanation or central to it. Potochnik’s answer is that they are central. This becomes obvious in the misinformation case, because the popular story is not simply a neutral report of causal fact; it is a specific way of representing the phenomenon. It abstracts away from platform architectures, network structures, media ecosystems, and governance environments, and instead zooms in on psychology and identity. That choice is not automatically mistaken. Explanations always involve selective representation. But the problem arises when representational choices conceal the phenomenon’s causal sensitivities. If diffusion is highly sensitive to omitted factors such as friction, recommender boosts, cross-community bridges, or trust asymmetries, then an explanation centered only on bias can be representationally misleading even if the psychological claims themselves are true. This is where the distinction between pseudo-explanation and real explanation becomes especially important. An account may sound deep because of its representational style, because it is vivid, morally legible, and narratively neat, while still omitting what most strongly changes the probability or trajectory of the explanandum. Potochnik’s framework helps make clear that representational form is not a cosmetic extra; it affects explanatory adequacy itself.

  5. The representational aims of explanation. Closely related is the question of what explanation is trying to represent, and at what level of fidelity. Potochnik treats abstraction and idealization as substantive matters rather than defects by default. This is especially useful for diagnosing the misinformation story. Its frequent failure is not simply that it abstracts, but that it often abstracts in the wrong way. It treats “people are biased” as if that were the core causal pattern responsible for spread, when in many cases the more important pattern is something like cascade amplification or structural constraint: a trigger enters a low-friction environment, is amplified through engagement-ranked systems, crosses clustered networks through bridge nodes, and persists because correction channels are weak or costly. In such cases, the popular story has idealized away precisely the features that constitute the spread pattern. The right Potochnik-style repair is not to reject abstraction, but to relocate the psychological account to its proper role. One can treat it as a local mechanism within a broader explanatory structure. For example, instead of saying “misinformation spreads because people are biased,” one might say: identity protection increases resharing probability under certain cues within a structurally amplification-prone environment. That is a much more disciplined abstraction. It preserves the psychological mechanism, but no longer mistakes it for the whole explanation of spread.

  6. Relationship to other scientific aims. Potochnik’s sixth question concerns how explanation relates to other scientific aims, especially prediction. She emphasizes that explanation and prediction can come apart: some good explanations idealize in ways that weaken prediction, while some predictive successes do not yield the kind of understanding explanation aims at. The popular misinformation story is revealing here because it predicts only in a loose, handwavy sense. It suggests vaguely that people will continue to fall for falsehoods, but it performs poorly when more targeted predictive tasks are at issue. It does not tell us which specific claim will spread next, which interventions will most reduce spread, why a rumor spiked at a particular time, or why one platform produced explosive dissemination while another did not. That mismatch is significant because it shows that the explanation may be too thin for the actual aims we often care about, including mitigation, control, and forecasting. This is where Ross-style cascade and structural accounts become more useful. If the explanatory target involves leverage points, then factors such as branching rates, friction, bridge-node activation, moderation thresholds, and incentive gradients are more scientifically informative than a broad moralized narrative about bias. Potochnik’s question thus helps separate mere plausibility from explanatory usefulness: the fact that an account feels like it should explain does not mean it serves the explanatory aims actually at stake.

  7. Priority of the ontological dimension. The seventh question asks how primary the ontological dimension of explanation is relative to communicative and representational dimensions. Potochnik warns that getting the dependence relation right is not sufficient, even if explanations must still track some dependence of the right kind in the world. The misinformation story is especially helpful here because it is not simply false. Cognitive biases, identity-protective cognition, and tribal filtering are real dependencies. In that sense, the story has ontic contact. But it often smuggles in a stronger conclusion than its evidence warrants, namely that these are the primary dependencies responsible for population-level spread. If the actual pattern of wide diffusion depends heavily on platform constraints, network architecture, amplification dynamics, and institutional trust asymmetries, then the story is ontically incomplete even while remaining communicatively powerful. This captures the concern with explanations that are extremely plausible yet off base. An account can latch onto a real dependence and still fail to track the dependence that matters most for the specific explanandum under consideration. So the lesson is not that ontic truth does not matter, but that a partially true dependence can be explanatorily misleading when it is presented as the whole causal story.

  8. Level of explanation. The eighth question concerns the appropriate level of explanation: microphysical, biological, psychological, social, or some other level. Potochnik treats this as one of the clearest places where pluralism matters. In the misinformation case, the core flaw of the popular story is often a level mismatch. It offers a micro-level explanation centered on individual cognition and motivation, while the explanandum is frequently macro-level: diffusion dynamics across populations, platforms, networks, and institutions. That mismatch does not make the psychological story worthless. It may be quite good at explaining why a particular person shares a claim, or why certain cues increase uptake. But it is often a poor explanation of why diffusion becomes runaway, why cross-community jumps occur, or why a platform rule change reshapes the entire spread pattern. This is precisely where Potochnik and Ross fit together. Ross shows that pathways, cascades, structures, and constraints each have their own explanatory standards, and Potochnik shows why higher-level explanations can be fully legitimate when they better suit the explanatory aim. The important point is that different explanations may not be rivals at all. A micro-level mechanism of belief uptake and a macro-level structural account of diffusion can both be good, provided they are answering different questions at different levels. The mistake is to let the familiar micro-story about bias crowd out the level-appropriate explanation of the spread phenomenon itself.

Taken together, this eight-question audit shows why the popular story about misinformation is so seductive and yet so often explanatorily insufficient. It is highly communicative, psychologically satisfying, and strongly productive of understanding-feel. But it often relies on representational choices and level selections that hide causal sensitivities, performs weakly for explanatory aims such as intervention and targeted prediction, and mistakes a partial ontic truth for the whole dependency structure behind population-level spread. That is exactly the broader pattern we have been tracking: weak explanations can be compelling because they optimize the psychological and narrative dimensions of explanation while under-serving ontic adequacy, structural fit, and level-appropriate understanding.

Potochnik is building a conceptual framework that explains why there seem to be new criteria for every type of explanation:

  • mechanistic virtues (parts/activities/organization),
  • cascade virtues (trigger/amplification/stability),
  • structural virtues (constraints/opportunity-sets),
  • social-epistemic virtues (source diversity, anti-clarity exploitation),
  • psychological fit vs epistemic fit (closure vs truth-tracking).

Her eight questions say: these are different dimensions of explanation, and philosophers too often pretend only one dimension (explanatory dependence) matters.

Potochnik gives you a taxonomy for why persuasive stories can undermine explanatory virtue: they can optimize communicative/psychological satisfaction while degrading ontic adequacy and representational responsibility. This connects directly with Nguyen-style concerns ("clarity feelings can be exploited") on multiple dimensions: communication priority (Q1), understanding (Q2), psychology (Q3), and representation choices (Q4–5).

Ross's causal pluralism also becomes more evaluable. Her main point (mechanisms vs cascades vs constraints/structures) is that causal structure differs, so standards differ. Potochnik gives you the meta-apparatus for that: some of those differences are about level, some about representation, some about what audiences need, and some about the dependence relation itself. So the Ross-style cascade/structural explanation would center the trigger, amplification, stability, and constraints of the misinformation spread (what we saw earlier). When you include this type of explanation and evaluate it against the eight questions, you can see it covers more:

  • Q8 (level): it matches the macro diffusion explanandum.
  • Q4–5 (representation): it represents the spread-relevant causal pattern (amplification + constraints).
  • Q6 (aims): it generates intervention handles (increase friction, target bridge nodes, reduce recommender amplification).
  • Q7 (ontic): it targets dependencies that more directly govern diffusion.

And it can still incorporate the “bias” story—but as one local multiplier inside the cascade, not the whole explanation.
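The claim that friction and amplification are structural intervention handles (rather than marginal tweaks) can be illustrated with a toy branching-process sketch. This is not a model from Ross or Potochnik; the function name, parameters, and numbers are purely illustrative assumptions:

```python
# Toy branching-process sketch of diffusion. Each share triggers, on
# average, amplification * (1 - friction) further shares. The names
# "amplification" and "friction" are illustrative assumptions.

def expected_spread(amplification: float, friction: float,
                    seeds: int = 10, generations: int = 5) -> float:
    """Expected cumulative shares across a fixed number of generations."""
    r = amplification * (1 - friction)  # effective reproduction number
    total, current = 0.0, float(seeds)
    for _ in range(generations):
        total += current
        current *= r  # each generation seeds the next
    return total

# A structural handle changes the regime, not just the margin:
runaway = expected_spread(amplification=2.0, friction=0.1)  # r = 1.8 > 1
damped  = expected_spread(amplification=2.0, friction=0.6)  # r = 0.8 < 1
print(runaway > damped)  # raising friction flips growth into decay
```

The point of the sketch is structural: whether spread is runaway or self-limiting depends on whether the effective reproduction number crosses 1, which is governed by platform-level parameters. An individual-level bias story would only modulate `amplification` locally; it cannot, by itself, explain which side of that threshold a population sits on.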

I think Potochnik's questions provide a nice generalization of the types of considerations required for a "good" explanation. They encapsulate the different causal structures Ross talks about, consider psychological aspects, and include the philosophical requirements we discussed at the beginning of the last post. When you're evaluating an explanation (e.g., of misinformation spread), you can ask:

  • Ontic: What dependence is claimed (causal? mechanistic? structural constraint? cascade)?
  • Level: Is the level appropriate to the explanandum and contrast?
  • Representation: What’s omitted or idealized, and is that omission responsible?
  • Communication & psychology: Is it satisfying because it’s true/pattern-revealing—or because it exploits closure/identity/narrative preferences?
