Interpretation as an Act of Argumentation

Recently, I've been thinking about what it means to interpret something, and to what extent we can say an interpretation is "correct". During a conversation with a friend on the topic of Nietzsche, I was asked if I had read any of his writings; I answered yes, but qualified it by saying he is difficult to interpret given his writing style. Nietzsche is but one example of a thinker in the philosophical tradition who is difficult to understand, especially in isolation from the rest of his writings or the thinkers to whom he was responding. You often have to embed a philosopher within their own life and broader cultural context to make sense of them. Wittgenstein was also brought up, to which again I responded by saying he is difficult to interpret (for the same reasons). In one of my last posts, I referenced one of Wittgenstein's writings to support something I was claiming, but then began to wonder whether quoting such a difficult-to-interpret thinker helped illuminate my own arguments or added to their complexity. Then I started thinking about the extent to which interpretation is an information compression activity, and how it's embedded within communication more broadly, even in mundane situations like speaking with friends.

Shortly after, I was reading about an upcoming SCOTUS case that challenges an existing constitutional settlement about how constrained the president is, especially when it comes to independent regulatory agencies. The target is a longstanding Supreme Court precedent: Humphrey's Executor (1935). That case is the canonical support for Congress creating multi-member "independent" commissions whose members can't be fired at will. One of the Questions Presented is explicitly whether those removal protections are unconstitutional and whether Humphrey's Executor should be overruled. This is a matter of constitutional interpretation and will have significant implications if the precedent is overruled. Interpretation is not just an isolated act of identifying meaning, but a communicative act that determines who holds power and how they can wield it. Crucially, during this SCOTUS case, people will be presenting arguments for why "such and such" text has "such and such" meaning. Interpretation is therefore an argumentative process, in which various parties advance justifications for why their interpretation should hold. This is true outside of legal cases; philosophical argumentation often consists of someone arguing for a specific interpretation of a text. More broadly, these interpretive arguments are often the product of interpretive frameworks consisting of theories of language, theories of meaning, historical traditions, and other assumptions that are often not explicitly stated.

So in this post, I want to explore what exactly is meant by "interpretation", different interpretive practices, and the broader assumptions on which these practices are founded. My key point is that regardless of the tradition, interpretation can fundamentally be cast as an act of argumentation, which means we can use tools from argumentation theory to think clearly about any given interpretation.

Hermeneutics

For much of literature, philosophy, and religious exegesis, there seem to be no agreed-upon, standardized evaluative criteria for what something means, and the result is a plethora of diverging, often incompatible interpretations. Contrast this with data analysis and scientific modeling, where there are more or less precise, agreed-upon standards for saying whether one interpretation of a dataset or model is better or worse than another. It seems that, fundamentally, if you want to claim that an evaluative criterion is standard, or is good, you have to argue for that position.

Hermeneutics is basically the study of how understanding happens, when it can go wrong, and the norms we use to argue that one interpretation is better than another. The hermeneutic circle describes this process of understanding. The core idea is that you never start from zero: you approach a text with prior concepts, interests, language, and historical location (pre-understanding), and you then read the parts in light of the whole and the whole in light of the parts. Interpretation is an iterative adjustment: expectation, then encounter, then revision. Disagreement proliferates mainly because different readers bring different starting horizons and different purposes to the text.

A crucial insight from the humanities is that meaning is not one thing; it divides into several categories. There is the semantic content of an utterance: what the words, sentences, and structures plausibly say. There is speaker and authorial intention, which is distinct from semantic content and can often diverge from it. It encompasses pragmatic elements (think speech act theory): what an author actually meant to do by saying it. Then there is significance: what it matters as (politically, ethically, aesthetically, existentially). When someone says a text "has a lot of meaning", they are often referring to this aspect. Then there is reference: what it picks out or implies about reality. Meaning is always tethered to a referent, whether an experience, object, or concept. Lastly, there is reception and function: what did the text do in a community, historically or now? Many incompatible interpretations are actually competing along different axes of meaning.

Even when schools disagree, there’s a surprisingly stable set of constraints and virtues that show up across philosophy, literature, and theology that determine whether a given interpretation is "better".

Constraint-type criteria: These are closer to “standards” in science because they rule out readings that violate basic discipline norms. These don’t force one “true” interpretation, but they create a floor: some readings are just bad readings because they can’t satisfy these constraints.

  • Textual accountability: Can you show the reading in the text—word choice, syntax, imagery, form, argument moves? Does it survive close counterevidence (awkward lines you have to explain away)?
  • Non-contradiction / logical discipline: Does the interpretation produce contradictions internal to the passage without explanation? If it claims irony, ambiguity, or paradox, can it justify that claim textually and contextually?
  • Historical/linguistic plausibility: Does it respect what the words could mean in the language at the time? Does it avoid anachronism without argument?
  • Genre and discourse constraints: Reading a sonnet like a lab report is usually a category mistake; genre sets expectations about sense, reference, voice, irony, etc.

Virtue-type criteria: These act like theory-choice virtues in science (coherence, simplicity, fruitfulness), but applied to texts. These virtues often decide debates when multiple readings satisfy the “floor” constraints. 

  • Coherence and integration: Does it unify disparate parts without strain? Does it explain why this detail is there?
  • Explanatory power: Does it illuminate puzzles, tensions, tonal shifts, structure, omissions, or recurring motifs? Does it account for more of the “data” (textual features) than rivals?
  • Parsimony: Does it avoid inventing unsupported machinery (hidden speakers, elaborate conspiracies of meaning) unless needed?
  • Fecundity / fruitfulness: Does it open productive lines of inquiry (intertextual links, thematic depth) without becoming unconstrained fantasy?
  • Comparative fit across a larger corpus: If you interpret a Plato dialogue or a Pauline epistle, does the reading fit the author/corpus patterns unless you argue for a deliberate break?

Purpose-relative criteria (where pluralism really enters): Here the field explicitly admits that “better” depends on what you’re trying to do. Interpretive standards aren’t always “global.” Many are telos-dependent (dependent on the discipline’s aims).

  • Aesthetic/critical aims: If the goal is literary value: richness, formal elegance, affective power, innovation, etc.
  • Ethical-political aims: If the goal is critique: how the text participates in power, ideology, exclusion; what it legitimates or resists.
  • Theological/confessional aims: If the goal is doctrine/formation: consistency with a rule of faith, tradition, liturgical use, or ecclesial authority.

Stanley Fish pushes the idea that what counts as a “good reading” is stabilized by interpretive communities. Even if you dislike that conclusion, it captures a real phenomenon: standards converge locally via education, institutions, journals, and ongoing dispute. Different communities emphasize, neglect, and prefer different criteria. Different schools disagree because they pick different targets: intention, text-as-object, readerly effect, social function, or metaphysical implications. Here are common hermeneutic orientations, each with characteristic evaluative rules. 

  1. Authorial-intent / validity traditions (Schleiermacher, Hirsch)
    • Aim: recover what the author meant (or what the text meant in its originating act).
    • Standards: historical context, philology, genre, intent-consistency, avoidance of projecting later concerns.
    • Justification style: meaning is anchored in communicative action; otherwise interpretation collapses into projection.
  2. Philosophical hermeneutics (Heidegger, Gadamer)
    • Aim: understand as a human mode of being; interpretation is always situated.
    • Standards: openness to being corrected by the text, coherence, “fusion of horizons,” dialogical testing of prejudices.
    • Justification style: you can’t step outside history; objectivity becomes responsible situatedness, not neutrality.
  3. Ricoeur-style “hermeneutics of suspicion” + restoration
    • Aim: both unmask ideology and recover meaning.
    • Standards: explanatory power about distortions (power, desire), plus textual discipline so suspicion doesn’t become free-association.
    • Justification style: surface meaning is often systematically deceptive; critique is an epistemic virtue.
  4. Structuralism / semiotics / formalism
    • Aim: map structures (codes, oppositions, narrative functions).
    • Standards: systematicity, replicability of analytic procedure, coverage of formal features.
    • Justification style: meaning is constrained by systems, not private intention.
  5. Reader-response / reception (Jauss, Iser, Fish)
    • Aim: meaning as enacted in reading and historical uptake.
    • Standards: evidence of reception, plausibility of readerly operations, community norms.
    • Justification style: texts don’t “mean” apart from interpretive practices; standards are social.
  6. Deconstruction
    • Aim: show internal instabilities, undecidability, suppressed oppositions.
    • Standards: extremely close textual attention; demonstrating how a text undermines its own claims.
    • Justification style: the demand for fixed meaning is itself a philosophical overreach the text can’t sustain.

Establishing the criteria in each of these schools involves arguing that they are normatively justified. There are various patterns or common arguments that are invoked. For example, in theology the arguments are often institutional and authoritative. The "community tradition" sets the interpretive rules. This is openly non-neutral; the standard is grounded in authority and communal identity, not universal epistemology. Genealogical and ideological critique argues that supposed neutral standards hide power interests, and therefore standards must be revised or challenged to reduce that distortion and exclusion. Some arguments involve pragmatic elements. For example, someone might argue that criterion X is justified because it produces inquiry that is non-arbitrary, criticizable, and progressive. 

Jurisprudence

Law faces a very similar underdetermination problem (multiple plausible readings of the same authoritative text), but it has stronger closure mechanisms and more institutionalized constraints than most literary/philosophical interpretation. Because law authorizes coercion, systems build in devices that force convergence in practice such as hierarchy (trial, appellate, and supreme court), stare decisis / precedent, standards of review, burdens of proof / procedural limits, written reasons, and authoritative settlement. Even within disagreement, judges share a toolbox and a vocabulary for arguing that a reading is legally better. In U.S. statutory interpretation, a widely used framing is that textualism and purposivism are the two major “theories,” and both deploy overlapping tools (semantic arguments, canons, context, structure), while differing about things like legislative history and how strongly purpose should drive results. Different schools disagree partly because they disagree about what interpretation is for (and therefore what counts as “success”). 

A useful way to organize theories of legal interpretation is to ask: what is doing the constraining? Different schools place the “center of gravity” of constraint in different places—(1) the enacted text itself, (2) the law’s purpose or legislative intent, (3) constructive moral-political justification of legal practice, or (4) the real-world forces shaping judicial outcomes. Each approach can be read as an account of what should count as a good reason in legal interpretation—and therefore what standards we should use to evaluate whether an interpretation is better or worse.

Text-centered constraint theories

Text-centered approaches begin from the idea that law’s most legitimate public “signal” is the enacted text. On this view, interpretation should be anchored in the linguistic artifact adopted through lawful procedures. The interpreter should resist replacing that public artifact with speculative reconstructions of hidden intentions or contemporary policy preferences. The animating worry is that once interpreters treat the text as merely a starting point, “interpretation” can become an undisciplined form of lawmaking. Textualism prioritizes the enacted text and tends to distrust resources that are not equally available to the public (for example, selective legislative history or anecdotal accounts of what some legislators “really meant”). Textualists often portray their method as a rule-of-law posture: citizens and institutions should be able to look at the law and have a stable basis for prediction, rather than depending on a judge’s reconstruction of intentions or moral sensibilities.

Evaluative criteria emphasized by textualism:

  • Linguistic evidence and ordinary meaning: Prefer interpretations that track how competent speakers would ordinarily understand the words and syntax in context, including grammar, punctuation, and the way terms operate in the whole sentence and provision.
  • Semantic and syntactic canons (as discipline, not decoration): Reward readings that respect stable conventions of legal/linguistic interpretation; whole-text reasoning, consistent usage, avoiding surplusage, and standard narrowing principles when general terms follow specific ones. A strong interpretation does not cherry-pick canons to justify a desired outcome; it uses them consistently and explains why particular canons apply here.
  • Public accessibility and auditability: A “better” interpretation is one another interpreter could reach from publicly available materials (the text, structure, widely shared linguistic context), rather than privileged access to insider motives. This makes the reasoning easier to critique and replicate.
  • Predictability and administrability: Favor readings that generate clear guidance for citizens, agencies, and lower courts. When multiple meanings are possible, textualists often prefer the one that reduces litigation incentives and lowers the need for case-by-case moral balancing.
  • Democratic legitimacy and separation of roles: Evaluate interpretations by whether they respect the legislature’s job (to write/alter law) and the court’s job (to apply law). A key anti-criterion is judicial “updating” of policy choices that should be made through legislation.
  • Constraint on judicial discretion: Interpretations are viewed as better when they minimize the room for judges to decide based on idiosyncratic values while presenting the decision as compelled by law.

Originalism treats the Constitution as a historically situated legal act that binds later interpreters. Many originalists focus on original public meaning—how the text would have been understood by the public at the time of ratification—while some variants emphasize original intent. The central thought is that legitimacy comes from fidelity to what was ratified, and that constitutional change should generally occur through amendment rather than judicial modernization.

Evaluative criteria emphasized by originalism:

  • Historical semantic grounding: Prefer interpretations supported by period-appropriate linguistic evidence: contemporaneous dictionaries, usage patterns, legal treatises, public writings, and other historical sources that illuminate how key terms were understood at enactment/ratification.
  • Fixed meaning (with a distinction between meaning and application): Evaluate readings by whether they preserve the idea that the Constitution’s meaning does not drift with current preferences. Many originalists allow that applications can evolve as facts and technologies change, but the underlying semantic content should remain anchored.
  • Constraint and legitimacy via amendment-respect: A better interpretation is one that does not smuggle major constitutional change into “interpretation,” thereby bypassing the amendment process.
  • Doctrinal fit as implementation fidelity: Assess whether existing doctrine can be justified as a faithful implementation of original meaning, or whether doctrine has drifted and should be corrected.
  • Methodological transparency: Strong originalist arguments show their work: clear steps from sources to claims about meaning, with explicit handling of conflicting evidence.

Common stress-test for text-centered theories: What should be done when the text is genuinely underdeterminate, yields counterproductive results, or collides with deep moral intuitions? Text-centered approaches typically respond by tightening the interpretive discipline (better linguistic analysis, better attention to structure) and by emphasizing institutional remedies (legislation or amendment) rather than judicial innovation.

Purpose/intent-centered theories

Purpose/intent-centered approaches treat statutes as purposive instruments: they are enacted to solve problems, coordinate behavior, allocate authority, and pursue goals. The guiding idea is that interpretation should not become a fetish for literal language when a literal reading defeats the law’s function. In hard cases, interpreters should ask what the statute is for, and interpret the text in a way that makes the statute operate as a coherent plan rather than a set of disconnected clauses.

Purposivism / intentionalism. Purposivism interprets to advance statutory purpose; intentionalism emphasizes legislative intent (what the enacting body meant to do). Many real-world arguments blend the two: purpose provides the “why,” intent clarifies the “what we were trying to accomplish,” and statutory structure provides the “how.”

Evaluative criteria emphasized by purposivism/intentionalism:

  • Coherence with the statute’s aims: Prefer interpretations that make the statute function as a rational solution to the problem it targets. A reading is weaker if it makes major provisions pointless or defeats the core policy the statute appears designed to implement.
  • Avoidance of absurdity and misfires: Penalize interpretations that generate outcomes that appear self-defeating, irrational, or wildly disproportionate relative to the statute’s evident objectives—especially when an alternative reading preserves both text and function.
  • Fit with legislative design and internal architecture: Evaluate interpretations by how well they align with definitions, exceptions, remedies, enforcement mechanisms, and the statute’s broader scheme. A good purposive reading treats the statute as an engineered system.
  • Explanatory “best account” of the statute’s shape: Favor the reading that best explains why the statute is drafted the way it is (why these thresholds, why these categories, why these carve-outs), rather than a reading that makes drafting choices mysterious or arbitrary.
  • Responsible use of legislative history (when used): If legislative history is consulted, better interpretations explain what kind of history is being used (committee reports vs. floor remarks), why it is reliable, and how it interacts with the text. Cherry-picked quotations count against interpretive quality.
  • Administrability and practical governance: Prefer interpretations that yield workable guidance for agencies and courts, reduce perverse incentives, and avoid constant edge-case litigation that would paralyze the statutory program.

Common stress-test for purpose/intent theories: How do we identify “purpose” without turning it into a blank check for judicial policy-making? These theories typically respond by demanding disciplined evidence (structure, context, history) and by treating purpose as a constraint that must remain tethered to the enacted scheme—not a license to rewrite it.

“Constructive” / value-involving theories (Dworkin-style interpretivism)

Constructive theories argue that interpretation is not merely linguistic decoding. Law is a practice that claims authority over people, and that claim raises normative questions. On this view, legal texts and precedents constrain judges, but rarely determine outcomes fully in contested cases. So a responsible interpreter must both (a) respect the institutional record and (b) offer a principled justification for why that record should guide decisions.

Dworkin’s interpretivist picture. Dworkin frames legal interpretation as the search for the best constructive interpretation of the community’s legal practice—one that fits the institutional history and justifies it in moral-political terms. “Fit” keeps interpretation from becoming free invention; “justification” ensures that the law is presented as a principled enterprise rather than a pile of compromises.

Evaluative criteria emphasized by Dworkin-style interpretivism:

  • Doctrinal fit: Prefer interpretations that plausibly account for existing materials—precedents, settled doctrines, institutional commitments— without treating large portions of the legal record as mere mistakes (unless a strong justification is provided).
  • Moral-political justification: Among interpretations that fit reasonably well, favor the one that makes the practice more defensible by appealing to principled values (e.g., fairness, equality, liberty, democratic legitimacy). The point is not personal preference, but publicly arguable principle.
  • Integrity (principled coherence): A good interpretation makes the law hang together as a coherent set of principles: like cases should be treated alike for reasons that generalize, not because the judge likes one party more than another.
  • System-level coherence: Evaluate interpretations by how well they harmonize different areas of law and reduce arbitrary doctrinal fragmentation.
  • Candor about normative premises: Because values are inescapable on this account, interpretive quality increases when the interpreter makes their justificatory commitments explicit and open to critique rather than hiding them behind “neutral method” rhetoric.

Common stress-test for constructive theories: How do we keep “justification” from collapsing into judicial moralizing? The typical reply is that fit is a real constraint, and justification must be principled, generalizable, and consistent with the institutional record—not merely outcome-driven.

Realist / critical approaches

Realist and critical traditions shift attention from idealized interpretive constraints to the actual causal forces shaping decisions. They stress that outcomes often track practical consequences, institutional incentives, ideology, and social power—sometimes more than official interpretive rhetoric admits. These approaches often treat “method debates” as partly rhetorical: judges may cite textualism or purpose when convenient, but the deeper drivers can be political, psychological, or institutional.

Legal realism (and later critical traditions). Legal realism emphasizes that judges respond to facts, equities, and consequences, and that doctrine can function as a justificatory vocabulary rather than a deterministic engine. Later critical theories often expand this by analyzing how legal categories and reasoning styles can reproduce hierarchy, exclusion, and ideology while presenting themselves as neutral.

Evaluative criteria emphasized by realist/critical approaches:

  • Explanatory adequacy: Evaluate accounts of interpretation by whether they explain what courts actually do across many cases, not just what courts claim to do in opinions. A theory is weaker if it repeatedly fails to predict outcomes or must explain away patterns as “exceptions.”
  • Transparency and honesty about value choice: Interpretive arguments are better when they acknowledge the value judgments and trade-offs that are actually driving results, rather than disguising them as mechanically compelled.
  • Attention to power, incentives, and institutional context: Assess interpretations by their awareness of who benefits, who bears costs, how institutions behave under constraints, and how legal decisions interact with political economy.
  • Consequential and distributional impact: A strong evaluation asks what decisions do in the world—who is harmed, who is protected, what incentives are created, what second-order effects follow (e.g., chilling effects, strategic compliance, unequal enforcement).
  • Hidden-premise detection: Interpretive quality increases when the argument exposes contested assumptions embedded in “neutral” language—assumptions about markets, families, security, race, gender, normality, deservingness, and so on.

Common stress-test for realist/critical approaches: If law is so driven by politics and power, is “legal constraint” an illusion? Responses vary. Some realists push for methodological reform (greater candor, empirical feedback, institutional redesign). Some critical traditions emphasize how contestation is endemic, and evaluate legal argument by its role in either sustaining or challenging unjust structures.

Cross-cutting checklist for evaluating an interpretation (regardless of school)

Even if you lean heavily toward one framework, it’s often useful to test an interpretive claim across multiple dimensions. Many disputes persist because people are silently using different scoring rules. This checklist makes those scoring rules explicit:

  • Textual plausibility: Is the reading linguistically credible given the enacted language?
  • Contextual integration: Does it cohere with surrounding provisions, defined terms, and overall structure?
  • Historical grounding: Is there strong evidence about meaning-at-enactment or the relevant legal backdrop?
  • Purpose sensitivity: Does it advance (or undermine) the law’s apparent aims and design?
  • Doctrinal continuity: Does it fit with precedent and system-level patterns without ad hoc exceptions?
  • Administrability: Can it be applied predictably without constant litigation or unworkable standards?
  • Institutional legitimacy: Does it respect the roles of courts vs. legislatures/amenders?
  • Normative defensibility: Can it be justified by principled reasons that generalize beyond this case?
  • Consequential impact: What are the real-world effects, including distributional consequences and feedback loops?
  • Transparency: Are the interpretive steps and assumptions explicit and open to critique?

Framed this way, disagreements become easier to diagnose: they’re often disputes about which evaluative criteria should dominate. Is the best interpretation the one most faithful to public meaning? The one that best fulfills legislative purpose? The one that best justifies the practice morally? Or the one that most honestly accounts for power and consequences? In practice, many legal arguments are hybrids—mixing criteria—but the weights assigned to each criterion often determine the final result.
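
To make that point about scoring rules concrete, here is a minimal sketch in Python. All criterion scores, weight profiles, and school labels below are invented for illustration, not derived from any actual doctrine; the point is only that the same candidate readings can rank differently depending on which criteria are weighted heavily.

```python
# Toy illustration of "different scoring rules": the same two candidate readings,
# ranked under weight profiles standing in for interpretive schools.
# All criterion scores and weights are invented for illustration.

criteria = ["textual", "purpose", "doctrinal_fit", "administrability", "normative"]

# Hypothetical scores (0-1) each reading earns on each criterion.
readings = {
    "reading_A": dict(textual=0.9, purpose=0.4, doctrinal_fit=0.7,
                      administrability=0.8, normative=0.5),
    "reading_B": dict(textual=0.6, purpose=0.9, doctrinal_fit=0.6,
                      administrability=0.5, normative=0.8),
}

# Hypothetical weightings; each "school" makes its ranking rule explicit.
schools = {
    "textualist":  dict(textual=0.50, purpose=0.05, doctrinal_fit=0.15,
                        administrability=0.25, normative=0.05),
    "purposivist": dict(textual=0.20, purpose=0.45, doctrinal_fit=0.15,
                        administrability=0.10, normative=0.10),
    "dworkinian":  dict(textual=0.20, purpose=0.10, doctrinal_fit=0.30,
                        administrability=0.05, normative=0.35),
}

def score(reading, weights):
    """One explicit scoring rule: a weighted sum over the shared criteria."""
    return sum(weights[c] * reading[c] for c in criteria)

for school, weights in schools.items():
    best = max(readings, key=lambda r: score(readings[r], weights))
    print(f"{school} prefers {best}")

# The readings never change; only the weights do, and with them the "best" interpretation.
```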

What is the goal of "interpretation"?

Let's refer back to the objectives laid out at the beginning. What do people even mean when they say "interpret"? What is the goal of interpretation? What sorts of functions or activities are undertaken when applying that term? When people say they’re “interpreting” something, they’re usually doing (some mix of) a handful of distinct activities. The disagreements we've been circling often happen because different people mean different goals by the same word. Interpretation can be aimed at different targets, and those targets often blend in practice. But it helps to separate the main “jobs” interpretation can be doing—what you’re trying to fix, recover, situate, explain, or do with a text, utterance, action, artifact, theory, or dataset.
  1. Fix “what it says” (sense)

    The aim here is to determine the content of the words, symbols, or actions at the level of meaning—what the expression itself encodes. This is interpretation as disambiguation and semantic determination.

    • Resolve ambiguity: Is “bank” referring to a riverbank or a financial bank?
    • Pin down reference: If the text says “he,” “they,” or “it,” who or what is being referred to?
    • Determine scope: When it says “all,” does it mean literally all, or all relevant cases within an implied domain?

    In this mode, a successful interpretation reduces semantic uncertainty: it clarifies what the sentence, clause, sign, or gesture means as such.

  2. Recover “what was meant by saying it” (communicative intention / pragmatics)

    Here the target is not just sentence meaning but speech-act meaning: what someone was doing in saying it. This is interpretation as reconstructing an act of communication—often by inferring intentions, conversational goals, and shared assumptions.

    • Speech act: Was it a promise, threat, joke, warning, irony, or understatement?
    • Responsive context: What question, problem, or provocation was the speaker responding to?
    • Shared presuppositions: What background knowledge or norms were assumed between speaker and audience?

    In this mode, “meaning” includes implicatures and pragmatic force, not merely dictionary definitions.

  3. Place it in a wider context (contextualization)

    This goal treats the object as something whose intelligibility depends on embedding it within larger systems of practice. This is interpretation as situating: identifying the frameworks that make the thing legible.

    • Historical setting: What time, place, and social conditions shape what can be said or meant?
    • Genre and conventions: Is this a statute, poem, manifesto, lab report, meme, ritual, or contract—and what does that genre normally do?
    • Institutional roles: Who is speaking (judge, priest, scientist, politician), and what authority or constraints come with that role?
    • Intertextual echoes: What traditions, references, or prior texts are being invoked, adapted, or resisted?

    Contextualization often changes what counts as a plausible reading by showing what kinds of moves the text is “allowed” to make in its setting.

  4. Make it coherent (integration)

    This approach treats the object (a text, theory, person’s action, dataset, or practice) as something like a structured whole. The interpreter tries to integrate parts into a pattern, reduce fragmentation, and show how seemingly disconnected elements belong together. This is interpretation as pattern-finding and coherence-building.

    • Connect parts to wholes: How do particular passages, claims, or moves contribute to the overall structure?
    • Explain tensions and gaps: How can shifts, contradictions, silences, or missing steps be understood?
    • Propose organizing principles: What themes, values, or inferential rules unify the material?

    Integration can be descriptive (showing the internal logic) or charitable (seeking the most rational, unified version of the view), depending on the interpreter’s aims.

  5. Explain “why it’s like that” (causal/functional explanation)

    Sometimes interpretation aims at explanation in a more external sense: not “what does it mean?” but “what produced it?” or “what is it doing in the world?” This is interpretation as diagnosis—often associated with a “hermeneutics of suspicion.”

    • Psychological motives: What needs, fears, aspirations, or self-conceptions might be driving the expression?
    • Social pressures and institutions: What incentives, constraints, or organizational dynamics shape it?
    • Ideology, power, economics: What interests are served, what hierarchies reinforced or challenged, what material conditions reflected?
    • Community function: What role does the text or practice play—boundary-setting, legitimation, identity formation, coordination?

    In this mode, the “meaning” of a text can be partly explained by what it accomplishes socially or politically, regardless of authorial self-understanding.

  6. Apply it to a case (normative application)

    In law, theology, ethics, and everyday rule-following, interpretation often means deciding what a general norm requires here and now. This is interpretation as bridging the general to the particular, where “application” is not merely downstream of understanding, but part of what understanding is for in normative domains.

    • Case-specific judgment: Given this rule or principle, what should be done in this situation?
    • Operationalization: How do we translate general language into concrete criteria, thresholds, or actions?
    • Handling hard cases: How should exceptions, conflicts of principles, or unforeseen scenarios be treated?

    This mode highlights that norms often “come alive” only through interpretation-in-application: deciding relevance, weighing factors, and producing a warranted verdict.

  7. Draw out “what it means for us” (significance)

    This is where much of humanities pluralism lives: interpretation as articulating a work’s significance—what it reveals, symbolizes, critiques, or invites us to become. The aim is not only to decode or explain, but to make explicit the work’s import for a community, a tradition, or a present audience.

    • Revelation and critique: What does it disclose about a society, an ideology, a moral blind spot, or a form of life?
    • Symbolic or aesthetic meaning: What is its expressive power—its imagery, resonance, and felt orientation?
    • Moral/political import: What stance does it press, what responsibilities does it suggest, what possibilities does it open?
    • Why care: What makes this worth returning to, arguing over, or building upon?

    In this mode, “meaning” is inseparable from evaluation: the interpretation clarifies what the object matters as, not only what it says.

Essentially, "interpret" can name at least three big kinds of goals: What does it say/mean? (sense, intention, context), How does it work / why is it this way? (structure, causes, functions), and What should we do with it / what follows? (application, significance); descriptive, explanatory, and normative considerations. People fight about interpretation because they’re often optimizing different targets—one person is doing (1) and (2), another is doing (6) and (7), and they talk past each other as if there must be one single “goal.” 

Interpretation as a Two-Layered Goal

Interpretation is generally a two-layered undertaking, one layer preceding the other. First, you must determine what the goal is, which might require arguing about values, pragmatic aims, constraints, and so on. Once these are settled, proper application of the criteria requires argumentation of its own: once a criterion has been established, we must argue for its applicability within a particular interpretive task.

If your goal is "what does it say?", then criteria shift towards textual evidence, lexical meaning, genre conventions, internal coherence etc. If your goal is "what did the author mean?", this is about intent and pragmatics. Criteria shift towards historical context, authorial habits, communicative situation, analyzing contemporaneous usage, and disambiguating intentions. A bad reading would be one that attributes implausible intentions or anachronistic concepts without support. If your goal is "what does this do or how does this function?", you are now in the realm of reception and social role. Some criteria you might consider are institutional setting, audience effects, historical consequences, and intertextual networks. A bad reading would be something like ignoring or misrepresenting how the text circulated or how it was actually treated. If your goal is "What is its significance for us?", now we are interested in the space of critique, value, and appropriation. Criteria you might consider may include insight, ethical/political illumination, explanatory depth about power/ideology, transformative or aesthetic payoff—but still usually with a “textual friction” requirement so it doesn’t become free association. A bad interpretation might be one that can be "proved" for any text. A practical way of thinking about this would be to ask the following questions:
  1. Target: Are we interpreting meaning, intention, function, implication, or application?
  2. Stakes: Are we trying to decide coercive outcomes (law), form persons (theology), or understand/criticize (humanities)?
  3. Constraint floor: What must any acceptable reading respect? (text, language, context, authority)
  4. Ranking rule: When criteria conflict, which wins? (text over purpose? purpose over text? precedent over best moral reading?)

Interpretation disputes are two-layer arguments. The first layer is meta-interpretive, consisting of dialogue over what counts as decisive in an interpretive situation. The second layer is at the object level, consisting of arguments conditional on the first layer, such as, "Given these standards, this reading is preferred because of XYZ". I'll elaborate on this more later when introducing a generalized scheme of "arguments from interpretation", but this dual-layer nature of interpretation can be reduced to the following:

  • There is a claim (“This text means M”),
  • It is defended by argument schemes (textual, definitional, analogical, purposive, etc.),
  • under procedural norms (burdens, starting points, allowed moves),
  • plus ongoing meta-argument about which norms and schemes should control.

The procedural norms refer to the decisions that guide the first layer of argument. Pragma-dialectics refers to this as the opening stage of a critical discussion. The idea is that any given interpretive act is situated within a broader context, call it the "interpretive situation", which establishes the direction of argumentation.
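
As a rough sketch of this two-layer structure (the field names below are my own illustrative labels, not terms from pragma-dialectics or any formal argumentation framework), an interpretive dispute could be represented roughly like this:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the two-layer structure described above.
# All field names and the example values are invented for illustration.

@dataclass
class Argument:
    scheme: str              # e.g. "textual", "definitional", "analogical", "purposive"
    premises: list[str]      # reasons offered for the reading
    defeasible: bool = True  # open to rebuttal by counter-considerations

@dataclass
class InterpretiveClaim:
    text: str                # the passage being interpreted
    meaning: str             # "this text means M"
    support: list[Argument] = field(default_factory=list)

@dataclass
class InterpretiveSituation:
    # Layer 1: meta-interpretive decisions (procedural norms, what counts as decisive)
    goal: str                    # e.g. "what does it say?", "what did the author mean?"
    constraint_floor: list[str]  # what any acceptable reading must respect
    ranking_rule: str            # which criterion wins when criteria conflict
    # Layer 2: object-level claims argued conditional on the layer above
    claims: list[InterpretiveClaim] = field(default_factory=list)

# A hypothetical example:
situation = InterpretiveSituation(
    goal="what did the enacting legislature mean?",
    constraint_floor=["linguistic plausibility", "statutory structure"],
    ranking_rule="text over purpose when they conflict",
    claims=[InterpretiveClaim(
        text="'vehicle' in the park ordinance",
        meaning="motorized conveyances only",
        support=[Argument(scheme="textual",
                          premises=["ordinary usage at enactment", "companion provisions"])],
    )],
)
```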

Semiotics

This is broadly understood as the study of signs. A sign is simply an entity that stands for something else. The symbols involved in a natural language, for example English, are signs that collectively form a signaling system. People using signaling systems can communicate, create, and express meaning to one another. Signs therefore stand in relation to some underlying meaning, expressed verbally, in written form, through visual cues, and so on, either intentionally or unintentionally. Semiosis is the capacity for, or activity of, comprehending and producing signs. A sign is literally anything that communicates a meaning, which is not the sign itself, to the interpreter of the sign. The meaning of a sign is what is generated in the process of semiosis.

Where am I going with this? Semiotics is a meta-discipline concerned with sign use in general, even in non-human animals (the study of which is called biosemiotics). For example, bees communicate information to one another using complex forms of "dancing"; these movements instruct other members of the aggregate to perform tasks. Collectively, the bee swarm constitutes a complex signaling system, in which signs such as the dancing communicate meaning to other members of the collective. Proper interpretation of the dance is crucial for the survival of the collective, and in some sense determines the resulting structure and patterns of the collective. It's very interesting: how a collective transmits and decodes signals is directly related to its resulting structure. This is directly analogous to the signaling systems humans have formed. Even human dancing can be seen as a system for communicating meaning. In fact, hermeneutics (the study of interpretation), which we discussed earlier, is a subset of semiotics concerned with written texts (historically religious texts, as early as Augustine). I am not a semiotician, so I won't be able to dive effectively into the nuances of the discipline, but various scholars have proposed models of signaling systems. One of the most influential is C.S. Peirce's triadic model of signs.

Peirce’s triadic model of signs says a sign isn’t just a two-part link (like “word ↔ object”). Instead, a sign is a three-part relation: something that stands for an object to an interpretant. On this view, you don’t fully have a “sign” unless all three roles are in play, and meaning-making (semiosis) can continue as interpretants generate further signs.

  • Representamen (the sign-vehicle)
    • The thing that functions as the sign — a sound, word, image, gesture, smoke, footprint, etc.
    • Example: the written word “tree,” a red light, a weather vane’s arrow.
  • Object (what the sign is about)
    • The thing, event, property, or situation the sign refers to.
    • Peirce distinguishes:
      • Immediate object: the object as the sign presents it (the “object-in-the-sign,” a conceptual slice).
      • Dynamic object: the object as it really is / as it constrains the sign (what can surprise or correct you).
    • Example: for “tree,” the immediate object might be “tree-as-a-kind,” while the dynamic object could be that specific oak outside (or trees in the world, depending on context).
  • Interpretant (the sign’s effect/meaning)
    • Not the human interpreter, but the meaning produced — the understanding, inference, habit, or further sign that arises.
    • Peirce distinguishes:
      • Immediate interpretant: the sign’s basic intelligibility (it’s meaningful as a sign).
      • Dynamic interpretant: the actual effect in a particular occasion (you look up, you feel warned, you imagine a tree).
      • Final interpretant: the stabilized meaning a community would converge on under ideal inquiry (a norm/habit of interpretation).

Here is a concrete example of his model:

  • Representamen: the red illuminated circle
  • Object: the traffic rule / the state of “stop now” (and the traffic system it belongs to)
  • Interpretant: “I must stop,” leading to braking (and the general habit that red means stop)
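
As a small illustrative sketch (my own toy encoding, not Peirce's notation), the triad, and the way an interpretant can feed forward into further signs, can be written as a simple record:

```python
from dataclasses import dataclass

# A toy encoding of Peirce's triad; the field values restate the traffic-light example above.
@dataclass
class Sign:
    representamen: str  # the sign-vehicle: the thing doing the standing-for
    obj: str            # the object: what the sign is about ("obj" to avoid the builtin name)
    interpretant: str   # the effect/meaning produced, which can itself become a further sign

red_light = Sign(
    representamen="red illuminated circle",
    obj="the traffic rule / the state of 'stop now'",
    interpretant="'I must stop', leading to braking",
)

# Semiosis continues: an interpretant can serve as the representamen of a new sign.
habit = Sign(
    representamen=red_light.interpretant,
    obj="the convention that red means stop",
    interpretant="a stabilized habit of stopping at red (roughly, a 'final interpretant')",
)
```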

Peirce also classified sign types by how the sign-vehicle relates to its object:

  • Icon: resemblance or shared structure. Examples: a map, a portrait, a diagram, a waveform display.
  • Index: causal/physical connection or direct pointing. Examples: smoke → fire, a footprint → someone walked here, a thermometer reading → temperature.
  • Symbol: rule, convention, learned habit. Examples: words, flags, traffic laws, mathematical notation.

The reason I bring up this model is to point out that the interpretant (the sign's effect) results from an inferential process. In the example above, the inference is somewhat trivial, because this is a socially learned meaning taken for granted in modern industrialized nations. Some processes of semiosis, however, require much more deliberate inferential work.

Umberto Eco is another semiotician who stressed this inferential element. Very broadly, Eco thinks that every act of communication consists of encoding and decoding stages. The message must first be encoded into a set of signs by the sender; these signs should be commonly held. They must then be transmitted and decoded by the receiver, using a coding system, to recover the contained message. The right interpretation can be called the preferred decoding or preferred reading; divergence from the intended message is called an aberrant decoding. Errors like these can occur for a variety of reasons. Eco lists four major classes of cases in which aberrant decoding occurs (a toy sketch of the encode/decode picture follows the list):

  • People who did not share the same language.
  • People trying to interpret the meanings of past cultures. For example, Medieval people looking at Roman art.
  • People who did not share the same belief system. For example, Christians looking at pagan art.
  • People who came from different cultures. For example, white Europeans looking at Aboriginal art.
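
Here is the toy sketch promised above: a deliberately simplistic model of the naive encode/transmit/decode picture, with all codebooks invented, showing how an aberrant decoding falls out when the sender's and receiver's codes differ. Eco's point, taken up next, is precisely that real coding systems are not lookup tables like this.

```python
# A deliberately simplistic model of encode -> transmit -> decode.
# All codebooks are invented; treating the "coding system" as a lookup table
# is exactly the picture Eco goes on to complicate.

sender_code = {"danger": "red flag", "all clear": "white flag"}

receiver_code_shared = {"red flag": "danger", "white flag": "all clear"}
receiver_code_other = {"red flag": "celebration", "white flag": "surrender"}

def encode(message, code):
    return code[message]

def decode(signal, code):
    return code.get(signal, "<uninterpretable>")

signal = encode("danger", sender_code)
print(decode(signal, receiver_code_shared))  # 'danger'      -- the preferred decoding
print(decode(signal, receiver_code_other))   # 'celebration' -- an aberrant decoding
```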

Crucially, this "coding system" is not like a computational system (see the encoding/decoding model of communication); it is conjectural. In Interpretation and Overinterpretation, Eco says the reader's initiative is to make a conjecture about the text's intention (his intentio operis), and he frames this as a hermeneutic-circle process: you build a hypothesis about "what kind of reader this text asks for," and you test it against the text. That's his "model reader" idea: texts are built with an implied cooperative reader in mind; interpretation is partly the attempt to align yourself with that role. Eco also talks about an "encyclopedia": the background web of shared cultural knowledge that makes signs interpretable in context. If the sender and receiver have different encyclopedias, you get systematic drift. Eco is famous for pushing back on the idea that texts license limitless readings. He calls uncontrolled interpretation a threat to meaning and communication, and argues for constraints grounded in the text's coherence and semiotic strategy.

I think the key insight shared by Eco and Peirce is this conjectural, inferential, or abductive step required for determining meaning (and interpretation), and I think it implies an argumentation-theoretic approach. Interpretation is fundamentally underdetermined, and therefore defeasible. Almost all informal reasoning is non-monotonic, also known as defeasible. Assuming no two people have exactly the same coding system, there will always be some inferential process involved in determining the meaning of a text, which means argumentation is fundamental to that process.

Authorial Intent

A strict, one-for-one “authorial intent” approach can become uncharitable even if it’s trying to be faithful. For some writers, “interpretation” is not mainly recovering a determinate package of propositions that got “encoded” in language and needs to be decoded back out. It’s often closer to engaging with a designed artifact whose point is partly to move the reader through an activity—a way of seeing, a set of temptations to resist, a reorientation—where any “theory” you extract is (at best) a byproduct or a tool. Rigorous authorial-intent interpretation assumes (often implicitly): 

  1. the author had a reasonably stable, articulable content in mind,
  2. the text is mainly a vehicle for transmitting that content, and
  3. the best reading is the one that most closely matches that intended content.

I can think of a few authors who presumably violate these assumptions. For people like Wittgenstein and Nietzsche, all three of these assumptions are under pressure:

  • The author may not be fully self-transparent: they may be discovering their own view by writing, or working against their own nagging thoughts.
  • The aim may not be “theory delivery”: the writing may be therapeutic, diagnostic, genealogical, or provocational.
  • The text may be intentionally multi-voiced: aphorisms, masks, remarks, thought experiments—these can be tests for the reader, not “doctrines.”

If we were to try to reconstruct the faithful intent of writers like this, we might very well misunderstand their message. Wittgenstein's aphoristic, remark-based style might not be a failure to compress a complete theory into a digestible form for readers; it can be part of his communicative technique: multiple angles, reminders, and staged confrontations with your own conceptions. With Nietzsche, the stylistic elements of the writing are not purely decorative; they are part of the argument. Aphorism can resist being turned into settled doctrine, force contextual reading ("this remark has significance only under certain conditions"), enact perspectivism by presenting partial angles rather than a synoptic view, and function as a provocation that reveals the reader's moral and psychological commitments. So a "univocal" extraction can be a category mistake: you end up treating a repertoire of perspectives and diagnostic tools as if it were a single axiomatized system with one determinate meaning, like mathematics.

This reveals an interesting tension between faithfulness and fruitfulness when it comes to interpretation. Exegetical faithfulness has the goal of determining what an author is specifically trying to do. Constructive appropriation (fruitfulness) asks, "What can we build from this that clarifies a problem, even if it goes beyond what the author explicitly (or literally) meant?" In the latter, reconstructing meaning is less about semantic or syntactic content and more about criteria such as explanatory reach, illumination, or problem-solving power. Engaging with texts like these can force you to reconceptualize your own views in ways not originally intended by the authors. Authorial intent still matters as a constraint ("don't make it mean anything"), but it's not always the sole aim or the highest court of appeal. Wittgenstein wrote in highly ambiguous fragments that ended up influencing fields as diverse as linguistics, AI, and cognitive science. Nietzsche was profoundly influential across existentialism, political movements, and even the foundational thinkers in psychology. This suggests that a fruitful interpretation of a text is often more important and impactful than a narrowly faithful one.

A Software Analogy

Meaning, purpose, and function might seem synonymous at first glance, but on investigation they are quite distinct. I think there is a parallel within software engineering practice that can illuminate these distinctions, and also shed light on how meaning evolves and how it's fundamentally connected to use.

It's very common in engineering for an artifact designed for one use case to later be used for an entirely different purpose. For example, memory foam was originally developed to cushion aircraft seats, but was later used for mattresses. Shipping containers were originally designed for standardized cargo transport, and were later repurposed for modular architecture and housing. Hashtags were originally a fairly uninteresting metadata convention, then became deeply embedded in our culture's vernacular and a crucial feature of social media systems. Blockchain was originally designed for decentralized digital cash, but was later repurposed for broader data provenance functions. There is a similar biological phenomenon: exaptation refers to the process by which a trait (or structure) evolves for one function (or no particular function) and later gets co-opted for a new function. This is distinct from adaptation, in which the trait is selected for the function it currently serves. Exaptation is repurposing.

The key analogy I want to make is that once an artifact can circulate, it can be taken up in ways that outrun or invert its originating purpose. I think this is true for meaning and interpretation as well. Suppose you are a software engineer building a component for a very specific purpose; in other words, you intend it to serve a particular function in a larger system. The "function" or "meaning" of this component is not exhausted by what you intended at the time you wrote it. The component is also its interfaces (what it lets you do), its constraints, its documentation and conventions, and its downstream uptakes. Some code is open to modification in unanticipated ways. This "open-source repurposing" is basically what Derrida had in mind when he talked about iterability and recontextualization: a sign (or mark) can be repeated in new contexts in the absence of the original intention, and that repeatability is not a defect; it's a condition of possibility for communication at all. This is precisely how the open source community operates, and I think it's how any technology becomes possible; technological development crucially takes place within an ecosystem of trial-and-error iterability, where it is discovered that some system can be repurposed or adapted for an entirely different purpose than originally intended. This is highly analogous to the evolution of the meaning and interpretation of texts. An author might write a story about X, not realizing it will later be appreciated for some trivial aspect Y.
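
To make this concrete in code (a contrived sketch; the helper function and both of its uses are invented for illustration), a component written for one purpose can be taken up, unchanged, for a purpose its author never considered, because what circulates is the interface and behavior, not the intention:

```python
# Contrived example: a helper written to clean up customer mailing lists.
# The author's intent was "deduplicate email addresses", and nothing more.
def dedupe_preserving_order(items):
    """Return items with duplicates removed, keeping first occurrences in order."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# Original, intended use:
mailing_list = dedupe_preserving_order(["a@x.com", "b@y.com", "a@x.com"])

# Later, unanticipated uptake: another team reuses the same function to tidy
# breadcrumb navigation trails, because the interface (sequence in, deduplicated
# sequence out, order preserved) is all that matters to them. The "function" of
# the component now outruns the intention behind it.
breadcrumbs = dedupe_preserving_order(["Home", "Docs", "API", "Docs", "Home"])
print(breadcrumbs)  # ['Home', 'Docs', 'API']
```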

Within the software development cycle, you are often embedded within an enterprise where the function of the system is determined by stakeholder needs, shifting requirements, and economic constraints. "The purpose" of the system or component is not just what the engineer intended, but this broader context as well. This mirrors what Michel Foucault discussed in his essay What Is an Author? His claim is that "author" is often a function, not a sovereign agent. Authors are not just people, but an author-function: a cultural or institutional way of grouping texts, assigning responsibility, managing meaning, and policing interpretation. In software, who is "the author": the original dev? the engineers approving the PR? the techops team dictating the platform constraints? Authorship is distributed and institutional. Likewise with texts: genre, institutions, publication practices, disciplinary uptake are part of what makes something interpretable and governable. So just as the "function" of a system or component is often tethered to these enterprise actors, the "meaning" of a text is often tethered to these broader cultural features. The "author" is more of a vessel that transmits these features. "Who the author is" is tangled with ownership, accountability, governance, and downstream reuse. Foucault is basically saying: that tangle is the point. The "author" is a functional node in a social system that manages texts and their acceptable uses. Texts and software are both designed artifacts that (a) embed constraints, (b) travel through changing contexts, and (c) acquire meanings through communities of use. Appealing to "the author" is often not a neutral route to meaning; it's a practice that serves certain functions (classification, limitation, responsibility, coherence-making). That can be useful, but it's not inevitable. Likewise, appealing to "the developer" misses the functional role the developer plays within a broader system.

Is "Meaning" Inseparable from Facts?

It seems to me that what counts as a permissible interpretation—and which evaluative standards are appropriate—is constrained, at least in part, by facts about the text and its history.

For instance, if I start from the assumption that the Bible is the infallible Word of God—perfectly transmitted, with human authors functioning only as vessels and no scribal error—then I’m likely to treat questions of authorship, transmission, and compilation as basically irrelevant. But that stance conflicts with a range of historical and textual considerations: the Old and New Testaments were assembled over long periods; communities and institutions made deliberate inclusion/exclusion choices (the very idea of a “canon” presupposes selection and filtering, including the exclusion of non-canonical gospels); some books (e.g., Job, on many scholarly views) may reflect multiple authorship and thus multiple intentions; and certain passages appear to be later additions rather than part of the earliest recoverable text (for example, the story of the woman taken in adultery and “let the one without sin cast the first stone”).

Taken together, these kinds of facts should limit what we can responsibly claim about the text’s meaning, intention, and authority—and they should also shape which interpretive criteria it makes sense to apply in the first place.

If you say “the Bible is the infallible Word of God with no scribal error and humans were merely vessels,” you’ve (often implicitly) committed yourself to a very specific picture of what the text is. But historical facts immediately complicate which “text” you mean. We don’t possess the original “autographs” for biblical books; we have later manuscript traditions with variants, which is why textual criticism exists at all. “Canon” is itself a historical reality: different communities recognized and listed authoritative books over time; Athanasius’s 39th Festal Letter (367 CE) is often singled out as the first surviving list that matches the 27-book NT canon. So the “facts” force a choice: are you interpreting (a) an idealized perfect original text, (b) a particular manuscript tradition, (c) a particular community’s canon/received text, or (d) the evolving textual tradition? Different answers yield different standards. 

You can see major traditions explicitly baking historical realities into their hermeneutics:
  • Evangelical inerrancy (Chicago Statement) commonly locates strict inerrancy in the autographic text and acknowledges that God did not promise an inerrant transmission, which implicitly authorizes textual criticism as part of responsible “what does Scripture say?” work. 
  • Catholic doctrine (Dei Verbum) explicitly says Scripture is “word of God in human language,” that the human authors are true authors, and frames “without error” in relation to what God wanted conveyed “for the sake of salvation.” 
  • The Pontifical Biblical Commission goes further in method: it calls the historical-critical method “indispensable” for scientific study of meaning precisely because Scripture is God’s word in human language and has sources behind it. 
Notice what’s happening: these aren’t merely interpretations; they’re meta-criteria about how interpretation should proceed given the kind of thing Scripture is taken to be.

Many scholars treat Job as at least structurally composite (prose frame vs poetic core, possibly additional strata), which makes “the author’s intent” at best plural or layered. This alters evaluative criteria. If you assume single authorial unity, you’ll treat tensions as deliberate artistry or pedagogical strategy. If you assume layering/redaction, you’ll treat tensions as evidence of dialogue between traditions/editors, and you’ll evaluate readings by how well they explain seams, shifts, and editorial aims. Same text, different “facts,” different validity conditions for interpretations.

Background Theory Determines Interpretation

“Interpretation” isn’t a single method so much as an activity that inherits its standards from whatever background picture you have about meaning, language, mind, and reality—whether you’ve made that picture explicit or you’re just running it implicitly. Every interpretation smuggles in a theory of meaning; making it explicit just lets you argue about it rather than letting it silently steer the result. When you say “this passage means X,” you’re taking stands (often tacitly) on questions like: 
  • Is meaning mainly in the author’s intention (Grice-style)?
  • Is meaning mainly in public linguistic rules/use (later Wittgenstein)?
  • Is meaning a function of reference/truth conditions (Frege/Tarski traditions)?
  • Is meaning a function of social practice and power (genealogy/critique traditions)?
  • Is meaning a function of readerly uptake (reception/reader-response)?
  • Is meaning stable or fundamentally open/iterable (deconstruction-ish)?

All of this determines what counts as evidence, what counts as error, and what counts as success. Change the meaning theory → change the goal of interpretation → change the ranking of criteria. 
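To make this dependency concrete, here is a toy Python sketch: each background theory of meaning is mapped to a different interpretive goal and a different class of admissible evidence. The one-line characterizations are my own compressions of each position, purely for illustration.

```python
# Toy mapping: a theory of meaning fixes what interpretation is even aiming at,
# and therefore what kinds of evidence bear on success.
MEANING_THEORIES = {
    "intentionalist (Grice-style)": {
        "goal": "recover what the author meant to communicate",
        "evidence": ["drafts", "letters", "stated aims", "conversational context"],
    },
    "use-based (later Wittgenstein)": {
        "goal": "describe how the expression functions within a practice",
        "evidence": ["patterns of use", "community norms", "training contexts"],
    },
    "truth-conditional (Frege/Tarski traditions)": {
        "goal": "state the conditions under which the claim would be true",
        "evidence": ["reference", "compositional structure", "entailments"],
    },
    "reception / reader-response": {
        "goal": "characterize the uptake by actual audiences",
        "evidence": ["reception history", "reader reports", "citation patterns"],
    },
}
```

Swap the key and you have changed what would even count as settling the dispute.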

An Argumentation Scheme for Interpretation

Looping back to what was mentioned earlier, there is an obvious argumentative aspect to interpretation. Scholars in Argumentation Theory have identified this, but they tend to limit their analysis to the legal domain. I think we can generalize their work into a scheme that could apply to all situations of interpretation. What follows is an attempt at that, based on the work of scholars in this discipline (principally Walton and Macagno).
A Walton–Macagno Argumentation Framework for Interpretation
A defeasible, goal-relative, dialectical method for settling meaning disputes across domains (law, scripture, philosophy, literature, data/model interpretation, institutional artifacts, and beyond).
Core commitments

Interpretation is best treated as a defeasible claim about meaning—a conclusion supported by pro and con arguments rather than a one-shot deduction. To interpret is to associate an expression/element E in an artifact/document/object D with a meaning M in a context/use, and then to defend that association under challenge.

Interpretive conclusions should be framed as conceptual/terminological claims about “best interpretation” relative to specified goals and standards—not as deontic “oughts.” The defeasibility of interpretive reasoning is made explicit through critical questions that expose defaults, assumptions, and points of vulnerability.

0) Trigger condition: when interpretation is needed

Interpretation-in-the-strict-sense is appropriate when there is a genuine doubt or conflict about what some element E in an artifact D means/does/implies/applies-as, such that no unchallenged default meaning settles the issue.

This corresponds to the transition from prima facie understanding (a shared, conventional default) to interpretation proper, where the default can no longer be taken for granted and must be defended, refined, or replaced.

1) The target form of an interpretive conclusion
Interpretive claim (meaning-attribution under evaluation)

BestInt(E, D | Cxt, G, A) ≡ M
“Relative to context Cxt, goal G, and audience/standards A, the best (or most justified) interpretation of element E in artifact D is meaning M.”

This captures interpretation as a meaning-attribution claim made to overcome doubt, evaluated by an explicit set of criteria and dialectical tests.
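To fix the shape of this claim, here is a minimal Python sketch of the parameters of BestInt as a plain data structure. The class and field names (InterpretiveClaim, element, artifact, and so on) are my own illustrative choices, and the example instance is a hypothetical reading of the book of Job discussed earlier.

```python
from dataclasses import dataclass

@dataclass
class InterpretiveClaim:
    """BestInt(E, D | Cxt, G, A) ≡ M, rendered as a data structure."""
    element: str    # E: the expression/element under dispute
    artifact: str   # D: the artifact/document/object it occurs in
    context: str    # Cxt: the context(s) treated as relevant
    goal: str       # G: what interpretive success means here
    audience: str   # A: who must be persuaded, and under what standards
    meaning: str    # M: the proposed reading

# Hypothetical instance:
claim = InterpretiveClaim(
    element="the prose frame",
    artifact="the book of Job",
    context="a possibly composite text: prose frame around a poetic core",
    goal="faithful reconstruction of editorial aims",
    audience="biblical-studies scholarship",
    meaning="a later editorial frame that reframes the poetic dialogue",
)
```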

2) A two-level Walton-style scheme

Interpretive disputes often involve two intertwined questions:

  1. Which evaluative framework should govern the dispute? (meta-level)
  2. Given that framework, which interpretation is best? (object-level)

These are modeled as two linked schemes.

Scheme I: Argument for adopting an interpretive framework (meta-criteria)
Claim (C)

For this dispute about E in D (in context Cxt) with goal G, we should evaluate interpretations using framework F—a set of criteria plus priority rules and burden/standard of proof.

Premises
  1. Goal premise: The practical/theoretical goal G of this interpretive task is specified (e.g., faithful reconstruction, legal applicability, theological normativity, aesthetic understanding, moral critique, predictive adequacy, explanatory adequacy, institutional compliance, etc.).
  2. Object premise: D is an artifact of type T in domain Dom (statute, scripture, poem, scientific model, dataset, contract, UI spec, algorithmic output…), with authority-status Auth (binding, canonical, exemplary, exploratory, aesthetic…).
  3. Practice/fit premise: In domain Dom, for objects like T with authority-status Auth, framework F is (a) recognized as appropriate and/or (b) best fits G and the relevant constraints (institutional, epistemic, moral, practical).
  4. Feasibility premise: F can actually be applied here (available evidence, competence, time, procedural constraints).
  5. Defeasibility premise: No overriding reason blocks using F (e.g., F violates binding authority or explicit procedural rules; F’s triggering conditions aren’t met; F yields contradictions with controlling constraints).
Conclusion

Therefore, F is the (provisional/default) evaluative framework for settling interpretations of E in D for goal G.

Defeasibility note: Framework choice is itself defeasible. In practice, frameworks function as defaults that can be overridden when superior reasons arise.

Scheme II: Argument from a framework to a “best interpretation” (object level)
Claim (C)

Interpretation M is the best/most justified interpretation of E in D (in Cxt) for goal G, under framework F.

Premises
  1. Doubt premise: There is a genuine interpretive doubt/conflict about E in D in Cxt for goal G (i.e., no unchallenged default resolves it).
  2. Framework premise: Framework F is applicable here (by Scheme I).
  3. Support premise: Under F, there are sufficient pro-arguments supporting M from admissible sources of support (textual/structural evidence; contextual facts; genre constraints; intent evidence; precedent/tradition; functional/purpose evidence; empirical fit; institutional fit; explanatory/predictive adequacy; etc.).
  4. Defeat-management premise: Defeaters against M—both rebutting alternatives and undercutting attacks on the inference—are answered or neutralized under F.
  5. Comparative premise: Competing interpretations M₁…Mₙ are (i) considered and (ii) either rejected or shown not better than M under F.
Conclusion

Therefore, M is (provisionally) justified as the best interpretation of E in D for goal G, under F.
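As a rough illustration of how the two levels hang together, here is a hedged Python sketch: a Framework object stands in for the output of Scheme I (criteria, priority rules, proof standard), and a function checks Scheme II's premises for a set of candidate readings. Every name is mine, and the comparative premise is reduced to a deliberately crude "most admissible support wins" rule.

```python
from dataclasses import dataclass, field

@dataclass
class Framework:
    """Output of Scheme I: criteria plus priority rules and a proof standard."""
    criteria: list[str]       # admissible sources of support under F
    priority: list[str]       # what outranks what, in order
    proof_standard: str       # e.g., "preponderance", "scholarly plausibility"

@dataclass
class Candidate:
    """A candidate interpretation M, with its pro-arguments and known defeaters."""
    meaning: str
    support: list[str] = field(default_factory=list)         # admissible pro-arguments
    open_defeaters: list[str] = field(default_factory=list)  # unanswered rebutters/undercutters

def best_under(framework: Framework, candidates: list[Candidate],
               genuine_doubt: bool) -> Candidate | None:
    """Scheme II, very roughly: a candidate wins only if the doubt, support,
    defeat-management, and comparative premises all hold."""
    if not genuine_doubt:              # Premise 1: no doubt, no interpretation proper
        return None
    viable = [c for c in candidates
              if c.support                    # Premise 3: sufficient pro-arguments
              and not c.open_defeaters]       # Premise 4: defeaters answered or neutralized
    if not viable:
        return None
    # Premise 5: comparative test, here caricatured as "most admissible support wins".
    return max(viable, key=lambda c: len(c.support))
```

A real resolution step would weigh arguments against the proof standard rather than count them; that refinement shows up in the workflow further below.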

3) A generic Walton-style scheme for interpretive fit under a goal

The two-level view can be implemented as a single, domain-general scheme that explicitly parameterizes context, goals, and standards.

Scheme: Argument from Interpretive Fit Under a Goal
Target conclusion

BestInt(E, D | Cxt, G, A) ≡ M

Ordinary premises (grounds)
  1. Interpretandum identification: Element/expression E occurs in artifact/object D, and the interpretive question is well-posed (the “thing to be interpreted” is fixed).
  2. Context specification: Relevant context Cxt is identified (local co-text, broader corpus, historical setting, institutional setting, genre/discourse type, task setting).
  3. Goal specification: Interpretive goal G is specified (recover intended message, apply a norm, preserve doctrinal coherence, achieve aesthetic illumination, obtain explanatory/predictive adequacy, etc.).
  4. Candidate meaning: A candidate meaning/reading M is proposed at the appropriate grain size (propositional content, directive, theme, function, rule, model-structure, etc.).
  5. Method/criterion-set selected: An evaluative set of criteria/warrants K (or framework F) is selected as appropriate to (D, Cxt, G, A).
  6. Fit/support claim: Under K/F, interpreting “E in D as M” is supported (linguistic fit, contextual fit, coherence, explanatory/predictive power, institutional fit, empirical fit, etc.).
Major (warrant) premise — defeasible conditional

Interpretive warrant: If (1)–(6) obtain and no defeating exception applies, then BestInt(E, D | Cxt, G, A) ≡ M.

Assumptions (defaults unless challenged)
  1. Default-meaning presumption: There is a reasonable prima facie/default reading available given community conventions, unless challenged.
  2. Competence/normalcy assumptions: Interpreter competence; normal discourse conditions; artifact integrity; data quality; stable background practices, etc.
Exceptions (potential undercutters)
  1. Defeater conditions: Superior reasons support rejecting K/F here or preferring a rival meaning M′ (technical context defeats ordinary meaning; irony/sarcasm; corruption/variant text; genre shift; redaction; measurement error; institutional constraints; etc.).
Conclusion

Therefore, BestInt(E, D | Cxt, G, A) ≡ M (defeasibly).
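The assumptions/exceptions split is doing real logical work: assumptions hold by default until someone challenges them, while exceptions only block the inference once they are positively established. Here is a minimal sketch of that asymmetry, my own simplification rather than any official formalization of defeasible rules:

```python
def warrant_applies(premises_hold: bool,
                    challenged_assumptions: set[str],
                    established_exceptions: set[str]) -> bool:
    """Defeasible warrant: fire only if the ordinary premises hold, no default
    assumption has been successfully challenged, and no exception is established."""
    return premises_hold and not challenged_assumptions and not established_exceptions

# Assumptions are innocent until challenged; exceptions are inert until proven.
print(warrant_applies(True, set(), set()))                   # True: the default case
print(warrant_applies(True, {"artifact integrity"}, set()))  # False: an assumption fell
print(warrant_applies(True, set(), {"ironic context"}))      # False: an undercutter holds
```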

Standard attack types
  • Undermining: attack a premise (e.g., the alleged context is wrong; the evidence is weak).
  • Rebuttal: support an incompatible conclusion (a rival interpretation M′).
  • Undercutting: attack the inferential link by showing an exception applies (e.g., the canon/criterion is inapplicable here).
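These three attack types can be encoded very simply; the sketch below is my own toy labeling scheme, not a reproduction of any formal argumentation system, but it shows how each attack picks out a different target.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AttackType(Enum):
    UNDERMINE = auto()   # targets a premise
    REBUT = auto()       # targets the conclusion with a rival conclusion
    UNDERCUT = auto()    # targets the inference link: an exception applies

@dataclass
class Attack:
    kind: AttackType
    target: str          # which premise, conclusion, or warrant is attacked
    reason: str

# Hypothetical attacks on the claim "E in D means M":
attacks = [
    Attack(AttackType.UNDERMINE, "context premise",
           "the alleged historical setting is wrong"),
    Attack(AttackType.REBUT, "conclusion",
           "a rival reading M' fits the co-text better"),
    Attack(AttackType.UNDERCUT, "warrant",
           "the ordinary-meaning criterion is inapplicable: this is a technical context"),
]
```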
4) A repeatable workflow: the interpretive act as a dialectical procedure
  1. Fix the interpretandum: Specify what exactly is being interpreted (E and D) and what would count as an answer.
  2. State the goal (G): Clarify what interpretive success means (reconstruction, application, critique, explanation, prediction, aesthetic illumination, etc.).
  3. Set the audience/standard (A): Identify who must be persuaded and what proof standard applies (court, scholarly community, faith community, lab/peer review, product team, etc.).
  4. Establish a default: Start from a presumptive reading when available (prima facie understanding).
  5. Generate alternatives: Enumerate plausible candidates {M₁…Mₙ}.
  6. Select warrants/criteria (K) / framework (F): Choose admissible interpretive warrants and any priority rules, burdens, and standards appropriate to G and A.
  7. Build pro/con arguments: Instantiate argument schemes supporting and attacking each candidate; make inferential dependencies explicit.
  8. Apply critical questions: Surface assumptions and test for exceptions; ensure objections are handled rather than bypassed.
  9. Resolve conflicts: Weigh competing arguments according to the selected proof standard and any priority rules when schemes collide.
  10. Output a statused conclusion: Report not only M, but its dialectical status (e.g., defensible vs justified) relative to the current argument graph and standards—and note any remaining live defeaters or unresolved ties.
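Squinting a little, the workflow is just a procedure, so here is a skeletal Python rendering of it. Every callable passed in stands for a step that in practice requires substantial human judgment; this is a scaffold for organizing the reasoning, not an executable decision procedure for meaning.

```python
def interpret(element, artifact, goal, audience,
              default_reading, candidates, framework,
              build_arguments, apply_critical_questions, resolve):
    """Skeleton of the dialectical procedure (steps 1-10); purely illustrative."""
    # Steps 1-3: the interpretandum, goal, and audience/standard arrive as inputs.
    # Step 4: start from the presumptive reading when one is available.
    readings = [default_reading] + candidates if default_reading else list(candidates)
    # Steps 5-7: enumerate alternatives and build pro/con arguments under the framework.
    graph = build_arguments(readings, framework)
    # Step 8: apply critical questions to surface assumptions and test for exceptions.
    graph = apply_critical_questions(graph)
    # Step 9: weigh competing arguments under the framework's proof standard.
    winner, status, live_defeaters = resolve(graph, framework)
    # Step 10: report a *statused* conclusion, not just a reading.
    return {"interpretation": winner, "status": status,
            "framework": framework, "live_defeaters": live_defeaters}
```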
5) What can “back” an interpretation: hidden commitments and fault lines

In Toulmin terms, backing underwrites the warrants inside K/F—the often-implicit commitments that make a criterion seem appropriate. In real disputes, these are frequently the deepest sources of disagreement.

A) Ontology and authority of the artifact
  • What is D? (law, scripture, literature, model output, dataset, institutional directive…)
  • What kind of authority does it have? (binding/coercive, canonical/normative, exemplary, exploratory, aesthetic…)
  • Where does authority live? (original author, final text, community practice, institutional body, empirical performance, tradition…)
B) Semantics and pragmatics (how meaning works)
  • Meaning as ordinary-language default vs technical language vs contextual pragmatics
  • Strong vs weak assumptions about semantic stability over time
  • Genre conventions as binding constraints vs looser affordances
  • The role of co-text and broader corpus in fixing meaning

This includes the crucial distinction between prima facie understanding (default meaning from shared socio-linguistic practice) and interpretation as a more complex presumptive reasoning task when doubt arises.

C) Hermeneutic school commitments (illustrative)
  • Intentionalist: author’s communicative intent is primary
  • Textualist/formalist: artifact-internal constraints dominate
  • Purposivist/teleological: function/aim/purpose dominates
  • Reader-response/reception: uptake/community practice is central
  • Canonical/tradition-governed: final form + rule-of-faith/tradition constrain
  • Deconstructive: emphasizes instability/iterability; critiques closure assumptions
  • Constructive: optimize philosophical/ethical/aesthetic fruitfulness subject to constraints (“textual friction”)
D) Epistemic and procedural commitments
  • What counts as admissible evidence (manuscripts, legislative history, author letters, experiments, lived experience, model diagnostics, institutional records…)
  • Burdens of proof, burden-shifting during critical questioning, and proof standards
  • Error-cost profiles (false positives vs false negatives in interpretation)
E) Value commitments
  • What “best” means: truth, coherence, justice, salvation, emancipation, predictive accuracy, utility, aesthetic richness, institutional safety, etc.
  • Whether consequences are admissible as evidence (teleological/purposive vs strict literalism/textual constraint)
  • Why this goal is the right one for this community and object (often moral/political/theological/pragmatic)
6) Critical questions: a comprehensive checklist

A central insight is that interpretive arguments share a general critical-question spine:

  1. What plausible alternatives exist?
  2. What reasons reject them?
  3. What reasons support an alternative as better/equally good?

Below is that spine expanded into a full set covering task framing, framework choice (Scheme I), and interpretation choice (Scheme II).

A. CQs about the interpretive task (framing)
  1. What exactly is the disputed element E (term, clause, symbol, gesture, data feature, model component)?
  2. What is the domain Dom and artifact type T?
  3. What is the interpretive goal G (reconstruction, application, critique, explanation, prediction, optimization, aesthetic illumination, etc.)?
  4. Is G legitimate/appropriate for this domain and audience (court, church, seminar, lab/peer review, product team)?
  5. What decision/stakes hinge on the interpretation, and what error-cost profile follows?
B. CQs about object identity and boundaries
  1. Are we interpreting the right object D (which edition, manuscript tradition, translation, dataset version, model version, build)?
  2. Are the boundaries of E correct (word, sentence, clause, whole work, canon, dataset slice, model family)?
  3. Is E/D possibly corrupted, interpolated, mistranslated, noisy, non-authoritative, or otherwise unreliable for this question?
C. CQs about context selection
  1. What contexts Cxt are relevant (immediate co-text, broader corpus, genre, institutional setting, historical setting, speaker/audience situation, task setting)?
  2. Why these contexts rather than others? Are we cherry-picking or importing context illicitly?
D. CQs for Scheme I: choosing evaluative criteria / framework F
  1. What alternative frameworks F′ are plausible here (text-first, intent-first, purpose-first, reception-first, empirical-fit-first, tradition-first, etc.)?
  2. What are the reasons to reject each F′ (inapplicable, violates authority rules, ignores key evidence types, yields systematic bias, contradicts binding procedure, etc.)?
  3. What are the reasons an alternative F′ is better/equally good for G?
  4. Are F’s triggering conditions actually met? (e.g., “technical meaning” requires a technical context; otherwise that canon is inapplicable.)
  5. Does F specify priority rules for conflicts among criteria (what outranks what), and are those priority rules justified here?
  6. Does F specify burdens/standards of proof—what counts as defeating vs merely raising doubt?
  7. Does F rely on controversial background assumptions (inerrancy, strong intentionalism, strong skepticism, etc.), and are those defensible here?
  8. Can F actually be applied given practical constraints (evidence availability, competence, time, procedure)?
E. CQs for Scheme II: arguing for interpretation M under F
  1. What is the prima facie/default reading—and why isn’t it sufficient here?
  2. What admissible evidence supports M under F (textual, contextual, historical, empirical, institutional, genre-based, functional/purpose evidence, etc.)?
  3. Are any key premises merely assumed (missing warrants, hidden definitional steps, suppressed context)?
  4. What defeaters exist?
    • Rebutters: rival interpretations M′ supported by comparable arguments.
    • Undercutters: attacks on applicability of the inference rule/criterion being used (e.g., “that canon doesn’t apply here”).
  5. Have all reasonable alternative interpretations M₁…Mₙ been considered?
  6. What reasons reject each alternative?
  7. What reasons suggest an alternative is better/equally good?
  8. If multiple interpretations remain comparably supported, what rule breaks ties (priority rules, charity, conservatism, simplicity, minimal revision, institutional safety, etc.)?
  9. Is M consistent with the broader system/corpus/practice that F treats as relevant (precedent, tradition, genre family, model family)?
  10. Does M overfit local details while breaking global constraints (or vice versa)?
  11. Does M depend on anachronism or illicit context importation (especially for historically distant texts)?
  12. Is M robust across nearby cases/usages, or does it only “work” for a cherry-picked scenario?
  13. Does the evidence offered bear on meaning rather than merely on desirability, ideology, or association?
  14. Is the evidence sufficient to support the premises if challenged (especially where CQs shift the burden back to the proponent)?
  15. What counterevidence supports rebutting interpretations or attacks premises directly?
  16. Is there an exception that blocks the inference from the support to the conclusion (irony, sarcasm, technical jargon, redaction, corruption, measurement error, domain shift, institutional constraint, etc.)—and is the alleged exception itself well-supported?
F. CQs about procedural rationality, burden, and convergence
  1. Who bears the burden to support the interpretation, and when does it shift during critical questioning?
  2. What proof standard is in force (preponderance, beyond reasonable doubt, scholarly plausibility, domain-specific acceptance thresholds), and does the current pro/con graph meet it? (Carneades-style systems use proof standards to resolve such conflicts.)
  3. Given the current pro/con graph, is M merely defensible (supported in some acceptable extension) or actually justified under the chosen standard?
  4. If multiple incompatible interpretations remain, does the downstream decision actually depend on choosing between them—or do rival interpretations converge on the same operative outcome (a weak-justification/convergence case)?
G. CQs about output quality and interpretive robustness
  1. Is M too vague (unfalsifiable/elastic) or too specific (overfitted)?
  2. Does M remain plausible under small context changes, or is it fragile and “just-so”?
  3. Does M integrate explanatorily with other parts of D or the relevant corpus without ad hoc patches?
  4. Are we “jumping to a conclusion” prematurely, rather than letting the objection-and-response cycle run to convergence?
7) Practical note on interpretive warrants (families of admissible support)

Across domains, interpretive support often draws from recurring families of warrants/criteria, including (non-exhaustively):

  • Ordinary meaning / conventional use (often a default starting point)
  • Technical meaning (triggered by technical context)
  • Contextual harmonization (co-text/corpus coherence)
  • Precedent / tradition / canonical constraints
  • Analogy and concept-based reasoning
  • General principles and systematic fit
  • Historical evidence (authorship, drafting, redaction, reception history)
  • Purpose / function / teleology
  • Substantive reasons (domain-accepted normative or pragmatic reasons)
  • Intent evidence (when admissible under the chosen framework)

Which of these are admissible, how they’re weighted, and which outrank which are questions for Scheme I.

8) What the framework delivers

The output of interpretive reasoning is not merely “M,” but a statused interpretive conclusion:

  • The proposed meaning-attribution: BestInt(E, D | Cxt, G, A) ≡ M
  • Its dialectical status: defensible vs justified (relative to proof standards and the current pro/con graph)
  • The governing evaluative framework: F (criteria + priority + burdens/standards)
  • The live defeaters and unresolved ties: what would need to change to overturn or strengthen the conclusion

This makes interpretation a transparent, revisable, burden-sensitive form of practical reasoning: a structured method for moving from doubt to a justified meaning-attribution under explicit goals and standards.
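To make "defensible vs justified" slightly more tangible, here is a toy Python sketch of how a proof standard might gate the status of a conclusion given aggregate pro and con argument weights. The thresholds and labels are invented for illustration; systems like Carneades define their proof standards much more carefully than this.

```python
def conclusion_status(pro: float, con: float, standard: str) -> str:
    """Toy status assignment; pro and con are aggregate argument weights in [0, 1]."""
    if pro <= con:
        return "not defensible"          # the con side is at least as strong
    if standard == "preponderance":
        return "justified"               # any net edge suffices
    if standard == "clear and convincing":
        return "justified" if pro - con >= 0.3 else "defensible"
    if standard == "beyond reasonable doubt":
        return "justified" if pro >= 0.9 and con <= 0.05 else "defensible"
    return "defensible"                  # unknown standard: survive, but claim no more

print(conclusion_status(0.6, 0.4, "preponderance"))            # justified
print(conclusion_status(0.6, 0.4, "beyond reasonable doubt"))  # defensible
```

The same pro/con graph can leave a reading merely defensible under one standard and justified under another, which is exactly why the standard has to be part of the reported conclusion.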
