Interpretation as an Act of Argumentation
Recently, I've been thinking about what it means to interpret something, and to what extent we can say an interpretation is "correct". During a conversation with a friend about Nietzsche, I was asked whether I had read any of his writings. I answered yes, but qualified it by saying he is difficult to interpret given his writing style. Nietzsche is just one example of a philosopher who is difficult to understand, especially in isolation from the rest of his writings or the thinkers he was responding to. You often have to embed a philosopher within their own intellectual development and broader cultural context to make sense of them. Wittgenstein also came up, and again I responded that he is difficult to interpret, for the same reasons. In a recent post, I referenced one of Wittgenstein's writings to support a claim I was making, but then began to wonder whether quoting such a difficult-to-interpret thinker illuminated my argument or merely increased its complexity. That got me thinking about the extent to which interpretation is an information-compression activity, and how it's embedded within communication more broadly, even in mundane situations like speaking with friends.

Shortly after, I was reading about an upcoming SCOTUS case that challenges an existing constitutional settlement over how constrained the president is, especially with respect to independent regulatory agencies. The target is a longstanding Supreme Court precedent: Humphrey's Executor (1935). That case is the canonical support for Congress creating multi-member "independent" commissions whose members can't be fired at will. One of the Questions Presented is explicitly whether those removal protections are unconstitutional and whether Humphrey's Executor should be overruled. This is a matter of constitutional interpretation, and it will have significant implications if the precedent is overruled.
Interpretation is not just an isolated act of identifying meaning, but a communicative act that determines who holds power and how they can wield it. Crucially, during this SCOTUS case, people will be presenting arguments for why "such and such" text has "such and such" meaning. Interpretation is therefore an argumentative process, in which various parties will advance justifications for why their interpretation should hold. This is true outside of legal cases; philosophical argumentation often consists of someone arguing for a specific interpretation of a text. More broadly, these interpretive arguments are often the product of interpretive frameworks consisting of theories of language, theories of meaning, historical traditions, and other assumptions often not explicitly stated. So in this post, I want to explore what exactly is meant by "interpretation", different interpretive practices, and broader assumptions on which these practices are founded. My key point is that regardless of the tradition, interpretation can fundamentally be cast as an act of argumentation, which means we can use tools from argumentation theory to think clearly about a given interpretation.
Hermeneutics
Even when schools disagree, a surprisingly stable set of constraints and virtues shows up across philosophy, literature, and theology for judging whether a given interpretation is "better".
Constraint-type criteria: These are closer to “standards” in science because they rule out readings that violate basic discipline norms. These don’t force one “true” interpretation, but they create a floor: some readings are just bad readings because they can’t satisfy these constraints.
- Textual accountability: Can you show the reading in the text—word choice, syntax, imagery, form, argument moves? Does it survive close counterevidence (awkward lines you have to explain away)?
- Non-contradiction / logical discipline: Does the interpretation produce contradictions internal to the passage without explanation? If it claims irony, ambiguity, or paradox, can it justify that claim textually and contextually?
- Historical/linguistic plausibility: Does it respect what the words could mean in the language at the time? Does it avoid anachronism without argument?
- Genre and discourse constraints: Reading a sonnet like a lab report is usually a category mistake; genre sets expectations about sense, reference, voice, irony, etc.
Virtue-type criteria: These act like theory-choice virtues in science (coherence, simplicity, fruitfulness), but applied to texts. These virtues often decide debates when multiple readings satisfy the “floor” constraints.
- Coherence and integration: Does it unify disparate parts without strain? Does it explain why this detail is there?
- Explanatory power: Does it illuminate puzzles, tensions, tonal shifts, structure, omissions, or recurring motifs? Does it account for more of the “data” (textual features) than rivals?
- Parsimony: Does it avoid inventing unsupported machinery (hidden speakers, elaborate conspiracies of meaning) unless needed?
- Fecundity / fruitfulness: Does it open productive lines of inquiry (intertextual links, thematic depth) without becoming unconstrained fantasy?
- Comparative fit across a larger corpus: If you interpret a Plato dialogue or a Pauline epistle, does the reading fit the author/corpus patterns unless you argue for a deliberate break?
Purpose-relative criteria (where pluralism really enters): Here the field explicitly admits that “better” depends on what you’re trying to do. Interpretive standards aren’t always “global.” Many are telos-dependent (dependent on the discipline’s aims).
- Aesthetic/critical aims: If the goal is literary value: richness, formal elegance, affective power, innovation, etc.
- Ethical-political aims: If the goal is critique: how the text participates in power, ideology, exclusion; what it legitimates or resists.
- Theological/confessional aims: If the goal is doctrine/formation: consistency with a rule of faith, tradition, liturgical use, or ecclesial authority.
Stanley Fish pushes the idea that what counts as a “good reading” is stabilized by interpretive communities. Even if you dislike that conclusion, it captures a real phenomenon: standards converge locally via education, institutions, journals, and ongoing dispute. Different communities emphasize, neglect, and prefer different criteria. Different schools disagree because they pick different targets: intention, text-as-object, readerly effect, social function, or metaphysical implications. Here are common hermeneutic orientations, each with characteristic evaluative rules.
Authorial-intent / validity traditions (Schleiermacher, Hirsch)
- Aim: recover what the author meant (or what the text meant in its originating act).
- Standards: historical context, philology, genre, intent-consistency, avoidance of projecting later concerns.
- Justification style: meaning is anchored in communicative action; otherwise interpretation collapses into projection.
Philosophical hermeneutics (Heidegger, Gadamer)
- Aim: understand as a human mode of being; interpretation is always situated.
- Standards: openness to being corrected by the text, coherence, “fusion of horizons,” dialogical testing of prejudices.
- Justification style: you can’t step outside history; objectivity becomes responsible situatedness, not neutrality.
Ricoeur-style “hermeneutics of suspicion” + restoration
- Aim: both unmask ideology and recover meaning.
- Standards: explanatory power about distortions (power, desire), plus textual discipline so suspicion doesn’t become free-association.
- Justification style: surface meaning is often systematically deceptive; critique is an epistemic virtue.
Structuralism / semiotics / formalism
- Aim: map structures (codes, oppositions, narrative functions).
- Standards: systematicity, replicability of analytic procedure, coverage of formal features.
- Justification style: meaning is constrained by systems, not private intention.
Reader-response / reception (Jauss, Iser, Fish)
- Aim: meaning as enacted in reading and historical uptake.
- Standards: evidence of reception, plausibility of readerly operations, community norms.
- Justification style: texts don’t “mean” apart from interpretive practices; standards are social.
Deconstruction
- Aim: show internal instabilities, undecidability, suppressed oppositions.
- Standards: extremely close textual attention; demonstrating how a text undermines its own claims.
- Justification style: the demand for fixed meaning is itself a philosophical overreach the text can’t sustain.
Establishing the criteria in each of these schools involves arguing that they are normatively justified, and certain patterns of argument recur. For example, in theology the arguments are often institutional and authoritative: the community's tradition sets the interpretive rules. This is openly non-neutral; the standard is grounded in authority and communal identity, not universal epistemology. Genealogical and ideological critique argues that supposedly neutral standards hide power interests, and that standards must therefore be revised or challenged to reduce distortion and exclusion. Other arguments are pragmatic: someone might argue that criterion X is justified because it produces inquiry that is non-arbitrary, criticizable, and progressive.
Jurisprudence
Law faces a very similar underdetermination problem (multiple plausible readings of the same authoritative text), but it has stronger closure mechanisms and more institutionalized constraints than most literary/philosophical interpretation. Because law authorizes coercion, systems build in devices that force convergence in practice such as hierarchy (trial, appellate, and supreme court), stare decisis / precedent, standards of review, burdens of proof / procedural limits, written reasons, and authoritative settlement. Even within disagreement, judges share a toolbox and a vocabulary for arguing that a reading is legally better. In U.S. statutory interpretation, a widely used framing is that textualism and purposivism are the two major “theories,” and both deploy overlapping tools (semantic arguments, canons, context, structure), while differing about things like legislative history and how strongly purpose should drive results. Different schools disagree partly because they disagree about what interpretation is for (and therefore what counts as “success”).
A useful way to organize theories of legal interpretation is to ask: what is doing the constraining? Different schools place the “center of gravity” of constraint in different places—(1) the enacted text itself, (2) the law’s purpose or legislative intent, (3) constructive moral-political justification of legal practice, or (4) the real-world forces shaping judicial outcomes. Each approach can be read as an account of what should count as a good reason in legal interpretation—and therefore what standards we should use to evaluate whether an interpretation is better or worse.
Text-centered constraint theories
Text-centered approaches begin from the idea that law’s most legitimate public “signal” is the enacted text. On this view, interpretation should be anchored in the linguistic artifact adopted through lawful procedures. The interpreter should resist replacing that public artifact with speculative reconstructions of hidden intentions or contemporary policy preferences. The animating worry is that once interpreters treat the text as merely a starting point, “interpretation” can become an undisciplined form of lawmaking. Textualism prioritizes the enacted text and tends to distrust resources that are not equally available to the public (for example, selective legislative history or anecdotal accounts of what some legislators “really meant”). Textualists often portray their method as a rule-of-law posture: citizens and institutions should be able to look at the law and have a stable basis for prediction, rather than depending on a judge’s reconstruction of intentions or moral sensibilities.
Evaluative criteria emphasized by textualism:
- Linguistic evidence and ordinary meaning: Prefer interpretations that track how competent speakers would ordinarily understand the words and syntax in context, including grammar, punctuation, and the way terms operate in the whole sentence and provision.
- Semantic and syntactic canons (as discipline, not decoration): Reward readings that respect stable conventions of legal/linguistic interpretation; whole-text reasoning, consistent usage, avoiding surplusage, and standard narrowing principles when general terms follow specific ones. A strong interpretation does not cherry-pick canons to justify a desired outcome; it uses them consistently and explains why particular canons apply here.
- Public accessibility and auditability: A “better” interpretation is one another interpreter could reach from publicly available materials (the text, structure, widely shared linguistic context), rather than privileged access to insider motives. This makes the reasoning easier to critique and replicate.
- Predictability and administrability: Favor readings that generate clear guidance for citizens, agencies, and lower courts. When multiple meanings are possible, textualists often prefer the one that reduces litigation incentives and lowers the need for case-by-case moral balancing.
- Democratic legitimacy and separation of roles: Evaluate interpretations by whether they respect the legislature’s job (to write/alter law) and the court’s job (to apply law). A key anti-criterion is judicial “updating” of policy choices that should be made through legislation.
- Constraint on judicial discretion: Interpretations are viewed as better when they minimize the room for judges to decide based on idiosyncratic values while presenting the decision as compelled by law.
Originalism treats the Constitution as a historically situated legal act that binds later interpreters. Many originalists focus on original public meaning—how the text would have been understood by the public at the time of ratification—while some variants emphasize original intent. The central thought is that legitimacy comes from fidelity to what was ratified, and that constitutional change should generally occur through amendment rather than judicial modernization.
Evaluative criteria emphasized by originalism:
- Historical semantic grounding: Prefer interpretations supported by period-appropriate linguistic evidence: contemporaneous dictionaries, usage patterns, legal treatises, public writings, and other historical sources that illuminate how key terms were understood at enactment/ratification.
- Fixed meaning (with a distinction between meaning and application): Evaluate readings by whether they preserve the idea that the Constitution’s meaning does not drift with current preferences. Many originalists allow that applications can evolve as facts and technologies change, but the underlying semantic content should remain anchored.
- Constraint and legitimacy via amendment-respect: A better interpretation is one that does not smuggle major constitutional change into “interpretation,” thereby bypassing the amendment process.
- Doctrinal fit as implementation fidelity: Assess whether existing doctrine can be justified as a faithful implementation of original meaning, or whether doctrine has drifted and should be corrected.
- Methodological transparency: Strong originalist arguments show their work: clear steps from sources to claims about meaning, with explicit handling of conflicting evidence.
Common stress-test for text-centered theories: What should be done when the text is genuinely underdetermined, yields counterproductive results, or collides with deep moral intuitions? Text-centered approaches typically respond by tightening the interpretive discipline (better linguistic analysis, better attention to structure) and by emphasizing institutional remedies (legislation or amendment) rather than judicial innovation.
Purpose/intent-centered theories
Purpose/intent-centered approaches treat statutes as purposive instruments: they are enacted to solve problems, coordinate behavior, allocate authority, and pursue goals. The guiding idea is that interpretation should not become a fetish for literal language when a literal reading defeats the law’s function. In hard cases, interpreters should ask what the statute is for, and interpret the text in a way that makes the statute operate as a coherent plan rather than a set of disconnected clauses.
Purposivism / intentionalism. Purposivism interprets to advance statutory purpose; intentionalism emphasizes legislative intent (what the enacting body meant to do). Many real-world arguments blend the two: purpose provides the “why,” intent clarifies the “what we were trying to accomplish,” and statutory structure provides the “how.”
Evaluative criteria emphasized by purposivism/intentionalism:
- Coherence with the statute’s aims: Prefer interpretations that make the statute function as a rational solution to the problem it targets. A reading is weaker if it makes major provisions pointless or defeats the core policy the statute appears designed to implement.
- Avoidance of absurdity and misfires: Penalize interpretations that generate outcomes that appear self-defeating, irrational, or wildly disproportionate relative to the statute’s evident objectives—especially when an alternative reading preserves both text and function.
- Fit with legislative design and internal architecture: Evaluate interpretations by how well they align with definitions, exceptions, remedies, enforcement mechanisms, and the statute’s broader scheme. A good purposive reading treats the statute as an engineered system.
- Explanatory “best account” of the statute’s shape: Favor the reading that best explains why the statute is drafted the way it is (why these thresholds, why these categories, why these carve-outs), rather than a reading that makes drafting choices mysterious or arbitrary.
- Responsible use of legislative history (when used): If legislative history is consulted, better interpretations explain what kind of history is being used (committee reports vs. floor remarks), why it is reliable, and how it interacts with the text. Cherry-picked quotations count against interpretive quality.
- Administrability and practical governance: Prefer interpretations that yield workable guidance for agencies and courts, reduce perverse incentives, and avoid constant edge-case litigation that would paralyze the statutory program.
Common stress-test for purpose/intent theories: How do we identify “purpose” without turning it into a blank check for judicial policy-making? These theories typically respond by demanding disciplined evidence (structure, context, history) and by treating purpose as a constraint that must remain tethered to the enacted scheme—not a license to rewrite it.
“Constructive” / value-involving theories (Dworkin-style interpretivism)
Constructive theories argue that interpretation is not merely linguistic decoding. Law is a practice that claims authority over people, and that claim raises normative questions. On this view, legal texts and precedents constrain judges, but rarely determine outcomes fully in contested cases. So a responsible interpreter must both (a) respect the institutional record and (b) offer a principled justification for why that record should guide decisions.
Dworkin’s interpretivist picture. Dworkin frames legal interpretation as the search for the best constructive interpretation of the community’s legal practice—one that fits the institutional history and justifies it in moral-political terms. “Fit” keeps interpretation from becoming free invention; “justification” ensures that the law is presented as a principled enterprise rather than a pile of compromises.
Evaluative criteria emphasized by Dworkin-style interpretivism:
- Doctrinal fit: Prefer interpretations that plausibly account for existing materials—precedents, settled doctrines, institutional commitments—without treating large portions of the legal record as mere mistakes (unless a strong justification is provided).
- Moral-political justification: Among interpretations that fit reasonably well, favor the one that makes the practice more defensible by appealing to principled values (e.g., fairness, equality, liberty, democratic legitimacy). The point is not personal preference, but publicly arguable principle.
- Integrity (principled coherence): A good interpretation makes the law hang together as a coherent set of principles: like cases should be treated alike for reasons that generalize, not because the judge likes one party more than another.
- System-level coherence: Evaluate interpretations by how well they harmonize different areas of law and reduce arbitrary doctrinal fragmentation.
- Candor about normative premises: Because values are inescapable on this account, interpretive quality increases when the interpreter makes their justificatory commitments explicit and open to critique rather than hiding them behind “neutral method” rhetoric.
Common stress-test for constructive theories: How do we keep “justification” from collapsing into judicial moralizing? The typical reply is that fit is a real constraint, and justification must be principled, generalizable, and consistent with the institutional record—not merely outcome-driven.
Realist / critical approaches
Realist and critical traditions shift attention from idealized interpretive constraints to the actual causal forces shaping decisions. They stress that outcomes often track practical consequences, institutional incentives, ideology, and social power—sometimes more than official interpretive rhetoric admits. These approaches often treat “method debates” as partly rhetorical: judges may cite textualism or purpose when convenient, but the deeper drivers can be political, psychological, or institutional.
Legal realism (and later critical traditions). Legal realism emphasizes that judges respond to facts, equities, and consequences, and that doctrine can function as a justificatory vocabulary rather than a deterministic engine. Later critical theories often expand this by analyzing how legal categories and reasoning styles can reproduce hierarchy, exclusion, and ideology while presenting themselves as neutral.
Evaluative criteria emphasized by realist/critical approaches:
- Explanatory adequacy: Evaluate accounts of interpretation by whether they explain what courts actually do across many cases, not just what courts claim to do in opinions. A theory is weaker if it repeatedly fails to predict outcomes or must explain away patterns as “exceptions.”
- Transparency and honesty about value choice: Interpretive arguments are better when they acknowledge the value judgments and trade-offs that are actually driving results, rather than disguising them as mechanically compelled.
- Attention to power, incentives, and institutional context: Assess interpretations by their awareness of who benefits, who bears costs, how institutions behave under constraints, and how legal decisions interact with political economy.
- Consequential and distributional impact: A strong evaluation asks what decisions do in the world—who is harmed, who is protected, what incentives are created, what second-order effects follow (e.g., chilling effects, strategic compliance, unequal enforcement).
- Hidden-premise detection: Interpretive quality increases when the argument exposes contested assumptions embedded in “neutral” language—assumptions about markets, families, security, race, gender, normality, deservingness, and so on.
Common stress-test for realist/critical approaches: If law is so driven by politics and power, is “legal constraint” an illusion? Responses vary. Some realists push for methodological reform (greater candor, empirical feedback, institutional redesign). Some critical traditions emphasize how contestation is endemic, and evaluate legal argument by its role in either sustaining or challenging unjust structures.
Cross-cutting checklist for evaluating an interpretation (regardless of school)
Even if you lean heavily toward one framework, it’s often useful to test an interpretive claim across multiple dimensions. Many disputes persist because people are silently using different scoring rules. This checklist makes those scoring rules explicit:
- Textual plausibility: Is the reading linguistically credible given the enacted language?
- Contextual integration: Does it cohere with surrounding provisions, defined terms, and overall structure?
- Historical grounding: Is there strong evidence about meaning-at-enactment or the relevant legal backdrop?
- Purpose sensitivity: Does it advance (or undermine) the law’s apparent aims and design?
- Doctrinal continuity: Does it fit with precedent and system-level patterns without ad hoc exceptions?
- Administrability: Can it be applied predictably without constant litigation or unworkable standards?
- Institutional legitimacy: Does it respect the roles of courts vs. legislatures/amenders?
- Normative defensibility: Can it be justified by principled reasons that generalize beyond this case?
- Consequential impact: What are the real-world effects, including distributional consequences and feedback loops?
- Transparency: Are the interpretive steps and assumptions explicit and open to critique?
Framed this way, disagreements become easier to diagnose: they’re often disputes about which evaluative criteria should dominate. Is the best interpretation the one most faithful to public meaning? The one that best fulfills legislative purpose? The one that best justifies the practice morally? Or the one that most honestly accounts for power and consequences? In practice, many legal arguments are hybrids—mixing criteria—but the weights assigned to each criterion often determine the final result.
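To make the point about scoring rules concrete, here is a minimal sketch in Python. The criterion scores and weight profiles are entirely hypothetical, chosen only to illustrate how two interpreters can agree on every object-level assessment and still rank readings differently because they weight the criteria differently:

```python
# Illustrative only: hypothetical scores (0-1) for two competing readings
# of a statute, rated on a few of the checklist criteria above.
scores = {
    "reading_A": {"textual": 0.9, "purpose": 0.4, "consequences": 0.3},
    "reading_B": {"textual": 0.6, "purpose": 0.9, "consequences": 0.8},
}

# Two hypothetical "scoring rules": a textualist weighting and a
# purposivist weighting over the same criteria.
weights = {
    "textualist":  {"textual": 0.7, "purpose": 0.2, "consequences": 0.1},
    "purposivist": {"textual": 0.3, "purpose": 0.5, "consequences": 0.2},
}

def best_reading(profile: dict) -> str:
    """Return the reading with the highest weighted score under a profile."""
    def total(reading: dict) -> float:
        return sum(profile[c] * reading[c] for c in profile)
    return max(scores, key=lambda name: total(scores[name]))

for name, profile in weights.items():
    print(name, "->", best_reading(profile))
```

Both profiles consume exactly the same evidence; only the ranking rule differs. That is why such disagreements can survive full agreement on the underlying facts.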
What is the goal of "interpretation"?
Fix “what it says” (sense)
The aim here is to determine the content of the words, symbols, or actions at the level of meaning—what the expression itself encodes. This is interpretation as disambiguation and semantic determination.
- Resolve ambiguity: Is "bank" referring to a riverbank or a financial institution?
- Pin down reference: If the text says “he,” “they,” or “it,” who or what is being referred to?
- Determine scope: When it says “all,” does it mean literally all, or all relevant cases within an implied domain?
In this mode, a successful interpretation reduces semantic uncertainty: it clarifies what the sentence, clause, sign, or gesture means as such.
Recover “what was meant by saying it” (communicative intention / pragmatics)
Here the target is not just sentence meaning but speech-act meaning: what someone was doing in saying it. This is interpretation as reconstructing an act of communication—often by inferring intentions, conversational goals, and shared assumptions.
- Speech act: Was it a promise, threat, joke, warning, irony, or understatement?
- Responsive context: What question, problem, or provocation was the speaker responding to?
- Shared presuppositions: What background knowledge or norms were assumed between speaker and audience?
In this mode, “meaning” includes implicatures and pragmatic force, not merely dictionary definitions.
Place it in a wider context (contextualization)
This goal treats the object as something whose intelligibility depends on embedding it within larger systems of practice. This is interpretation as situating: identifying the frameworks that make the thing legible.
- Historical setting: What time, place, and social conditions shape what can be said or meant?
- Genre and conventions: Is this a statute, poem, manifesto, lab report, meme, ritual, or contract—and what does that genre normally do?
- Institutional roles: Who is speaking (judge, priest, scientist, politician), and what authority or constraints come with that role?
- Intertextual echoes: What traditions, references, or prior texts are being invoked, adapted, or resisted?
Contextualization often changes what counts as a plausible reading by showing what kinds of moves the text is “allowed” to make in its setting.
Make it coherent (integration)
This approach treats the object (a text, theory, person’s action, dataset, or practice) as something like a structured whole. The interpreter tries to integrate parts into a pattern, reduce fragmentation, and show how seemingly disconnected elements belong together. This is interpretation as pattern-finding and coherence-building.
- Connect parts to wholes: How do particular passages, claims, or moves contribute to the overall structure?
- Explain tensions and gaps: How can shifts, contradictions, silences, or missing steps be understood?
- Propose organizing principles: What themes, values, or inferential rules unify the material?
Integration can be descriptive (showing the internal logic) or charitable (seeking the most rational, unified version of the view), depending on the interpreter’s aims.
Explain “why it’s like that” (causal/functional explanation)
Sometimes interpretation aims at explanation in a more external sense: not “what does it mean?” but “what produced it?” or “what is it doing in the world?” This is interpretation as diagnosis—often associated with a “hermeneutics of suspicion.”
- Psychological motives: What needs, fears, aspirations, or self-conceptions might be driving the expression?
- Social pressures and institutions: What incentives, constraints, or organizational dynamics shape it?
- Ideology, power, economics: What interests are served, what hierarchies reinforced or challenged, what material conditions reflected?
- Community function: What role does the text or practice play—boundary-setting, legitimation, identity formation, coordination?
In this mode, the “meaning” of a text can be partly explained by what it accomplishes socially or politically, regardless of authorial self-understanding.
Apply it to a case (normative application)
In law, theology, ethics, and everyday rule-following, interpretation often means deciding what a general norm requires here and now. This is interpretation as bridging the general to the particular, where “application” is not merely downstream of understanding, but part of what understanding is for in normative domains.
- Case-specific judgment: Given this rule or principle, what should be done in this situation?
- Operationalization: How do we translate general language into concrete criteria, thresholds, or actions?
- Handling hard cases: How should exceptions, conflicts of principles, or unforeseen scenarios be treated?
This mode highlights that norms often “come alive” only through interpretation-in-application: deciding relevance, weighing factors, and producing a warranted verdict.
Draw out “what it means for us” (significance)
This is where much of humanities pluralism lives: interpretation as articulating a work’s significance—what it reveals, symbolizes, critiques, or invites us to become. The aim is not only to decode or explain, but to make explicit the work’s import for a community, a tradition, or a present audience.
- Revelation and critique: What does it disclose about a society, an ideology, a moral blind spot, or a form of life?
- Symbolic or aesthetic meaning: What is its expressive power—its imagery, resonance, and felt orientation?
- Moral/political import: What stance does it press, what responsibilities does it suggest, what possibilities does it open?
- Why care: What makes this worth returning to, arguing over, or building upon?
In this mode, “meaning” is inseparable from evaluation: the interpretation clarifies what the object matters as, not only what it says.
Essentially, "interpret" can name at least three big kinds of goals: what does it say/mean? (sense, intention, context); how does it work / why is it this way? (structure, causes, functions); and what should we do with it / what follows? (application, significance). These correspond to descriptive, explanatory, and normative considerations. People fight about interpretation because they're often optimizing different targets: one person is fixing sense and recovering intention, another is drawing out application and significance, and they talk past each other as if there must be one single "goal".
Interpretation as a Two-Layered Goal
- Target: Are we interpreting meaning, intention, function, implication, or application?
- Stakes: Are we trying to decide coercive outcomes (law), form persons (theology), or understand/criticize (humanities)?
- Constraint floor: What must any acceptable reading respect? (text, language, context, authority)
- Ranking rule: When criteria conflict, which wins? (text over purpose? purpose over text? precedent over best moral reading?)
Interpretation disputes are two-layer arguments. The first layer is meta-interpretive: dialogue over what counts as decisive in an interpretive situation. The second layer is at the object level: arguments conditional on the first layer, such as "Given these standards, this reading is preferred because of XYZ." I'll elaborate on this when introducing a generalized scheme of "arguments from interpretation," but this dual-layer nature of interpretation can be reduced to the following:
- There is a claim (“This text means M”),
- It is defended by argument schemes (textual, definitional, analogical, purposive, etc.),
- under procedural norms (burdens, starting points, allowed moves),
- plus ongoing meta-argument about which norms and schemes should control.
The procedural norms refer to the decisions guiding the first layer of argument; pragma-dialectics calls this the opening stage of a dialogue. The idea is that any given interpretive act is situated within a broader context, let's call it an "interpretive situation," that establishes the direction of argumentation.
Semiotics
This is broadly understood as the study of signs. A sign is simply an entity that stands for something else. The symbols of a natural language such as English, for example, are signs that collectively form a signaling system. People using signaling systems can communicate, create, and express meaning to one another. Signs therefore stand in relation to some underlying meaning, expressed verbally, in written form, through visual cues, and so on, either intentionally or unintentionally. Semiosis is the capacity for, or activity of, comprehending and producing signs. A sign is literally anything that communicates a meaning, which is not the sign itself, to the interpreter of the sign. The meaning of a sign is what is generated in the process of semiosis.
Where am I going with this? Semiotics is a meta-discipline concerned with sign use in general, even in non-human animals (a field called biosemiotics). For example, bees communicate information to one another using complex forms of "dancing"; these movements instruct other members of the colony to perform tasks. Collectively, the bee swarm constitutes a complex signaling system, in which signs such as the dances communicate meaning to other members of the collective. Proper interpretation of the dance is crucial for the survival of the collective, and in some sense determines the resulting structure and patterns of the collective. It's very interesting how the way a collective transmits and decodes signals is directly related to its resulting structure. This is directly analogous to the signaling systems humans have formed. Even human dancing can be seen as a system for communicating meaning. In fact, hermeneutics (the study of interpretation), which we discussed earlier, is a subset of semiotics concerned with written texts (historically religious texts, as early as Augustine). I am not a semiotician, so I won't be able to effectively dive into the nuances of the discipline, but various scholars have proposed models of signaling systems. One of the most influential is C.S. Peirce's triadic model of signs.
Peirce’s triadic model of signs says a sign isn’t just a two-part link (like “word ↔ object”). Instead, a sign is a three-part relation: something that stands for an object to an interpretant. On this view, you don’t fully have a “sign” unless all three roles are in play, and meaning-making (semiosis) can continue as interpretants generate further signs.
- Representamen (the sign-vehicle): the thing that functions as the sign — a sound, word, image, gesture, smoke, footprint, etc. Example: the written word “tree,” a red light, a weather vane’s arrow.
- Object (what the sign is about): the thing, event, property, or situation the sign refers to. Peirce distinguishes:
  - Immediate object: the object as the sign presents it (the “object-in-the-sign,” a conceptual slice).
  - Dynamic object: the object as it really is / as it constrains the sign (what can surprise or correct you).
  - Example: for “tree,” the immediate object might be “tree-as-a-kind,” while the dynamic object could be that specific oak outside (or trees in the world, depending on context).
- Interpretant (the sign’s effect/meaning): not the human interpreter, but the meaning produced — the understanding, inference, habit, or further sign that arises. Peirce distinguishes:
  - Immediate interpretant: the sign’s basic intelligibility (it’s meaningful as a sign).
  - Dynamic interpretant: the actual effect in a particular occasion (you look up, you feel warned, you imagine a tree).
  - Final interpretant: the stabilized meaning a community would converge on under ideal inquiry (a norm/habit of interpretation).
Here is a concrete example of his model:
- Representamen: the red illuminated circle
- Object: the traffic rule / the state of “stop now” (and the traffic system it belongs to)
- Interpretant: “I must stop,” leading to braking (and the general habit that red means stop)
Peirce also categorized signs by how the sign-vehicle relates to its object:
- Icon: resemblance or shared structure. For example, a map, portrait, diagram, or waveform display.
- Index: causal/physical connection or direct pointing. For example, smoke → fire, footprint → someone walked here, thermometer reading → temperature.
- Symbol: rule, convention, or learned habit. These include words, flags, traffic laws, and mathematical notation.
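Peirce's triad can be rendered as a small data structure. The following sketch is my own schematic rendering (the class and field names are invented; only the three roles come from Peirce), using the traffic-light example above:

```python
from dataclasses import dataclass
from enum import Enum

class SignType(Enum):
    ICON = "resemblance or shared structure"
    INDEX = "causal/physical connection or direct pointing"
    SYMBOL = "rule, convention, learned habit"

@dataclass
class Sign:
    representamen: str   # the sign-vehicle: what is actually perceived
    obj: str             # what the sign is about
    interpretant: str    # the effect/meaning produced in semiosis
    sign_type: SignType

# The traffic-light example from the text:
red_light = Sign(
    representamen="red illuminated circle",
    obj="the traffic rule / the state of 'stop now'",
    interpretant="'I must stop', leading to braking, and the habit that red means stop",
    sign_type=SignType.SYMBOL,  # the meaning rests on convention, not resemblance or causation
)
```

Note that the interpretant is a field of the sign, not a separate observer: on Peirce's view the meaning-effect is part of the sign relation itself, which is why a two-column "word ↔ object" table cannot represent it.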
The reason I bring up this model is to point out that the interpretant (the sign's effect) results from an inferential process. In the example above, the inference is somewhat trivial because this is a socially learned meaning taken for granted in modern industrialized nations. Some processes of semiosis, however, require much more deliberate inference.
Umberto Eco is another semiotician who stressed this inferential element. Very broadly, Eco thinks that every act of communication consists of encoding and decoding stages. The message must first be encoded into a set of signs by the sender; these signs should be commonly held. They must then be transmitted and decoded by the receiver, using a coding system, to recover the contained message. The right interpretation can be called the preferred decoding or preferred reading. Divergence from the intended message is called an aberrant decoding. Errors like these can occur for a variety of reasons. Eco lists four major classes of exceptions:
- People who did not share the same language.
- People trying to interpret the meanings of past cultures. For example, Medieval people looking at Roman art.
- People who did not share the same belief system. For example, Christians looking at pagan art.
- People who came from different cultures. For example, white Europeans looking at Aboriginal art.
Crucially, this "coding system" is not like a computational system (see the encoding/decoding model of communication); it is conjectural. In Interpretation and Overinterpretation, Eco says the reader’s initiative is to make a conjecture about the text’s intention (his intentio operis), and he frames this as a hermeneutic-circle process: you build a hypothesis about “what kind of reader this text asks for,” and you test it against the text. That’s his “model reader” idea: texts are built with an implied cooperative reader in mind; interpretation is partly the attempt to align yourself with that role. Eco also talks about an “encyclopedia”: the background web of shared cultural knowledge that makes signs interpretable in context. If the sender and receiver have different encyclopedias, you get systematic drift. Eco is famous for pushing back on the idea that texts license limitless readings. He calls uncontrolled interpretation a threat to meaning and communication, and argues for constraints grounded in the text’s coherence and semiotic strategy. I think the key insight shared by Eco and Peirce is this conjectural, inferential, or abductive step required for determining meaning, and I think it implies an argumentation-theoretic approach. Interpretation is fundamentally underdetermined, and therefore defeasible; almost all informal reasoning is likewise non-monotonic, also known as defeasible. Assuming no two people can have exactly the same coding system, there will always be some inferential process involved in determining the meaning of a text, which means argumentation is fundamental to the process.
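Eco's picture of drift between differing "encyclopedias" can be illustrated with a toy sketch. The codebooks below are invented, and real coding systems are conjectural rather than lookup tables (which is precisely Eco's point); the sketch only shows how a small divergence in shared background yields a systematic, not random, aberrant decoding:

```python
# Sender and receiver hold almost-identical codebooks ("encyclopedias"),
# but differ on one sign; decoding still succeeds mechanically, yet the
# recovered meaning is not the preferred reading.

sender_code   = {"dove": "peace", "red": "danger", "owl": "wisdom"}
receiver_code = {"dove": "the Holy Spirit", "red": "danger", "owl": "wisdom"}

def encode(meaning: str, code: dict) -> str:
    """Pick the sign the sender's code associates with the meaning."""
    inverse = {m: s for s, m in code.items()}
    return inverse[meaning]

def decode(sign: str, code: dict) -> str:
    """Recover a meaning using the receiver's own code."""
    return code[sign]

sign = encode("peace", sender_code)       # the sender emits "dove"
reading = decode(sign, receiver_code)     # the receiver reads "the Holy Spirit"
aberrant = reading != "peace"             # True: systematic drift, not noise
```

The receiver is not malfunctioning; their decoding is perfectly coherent relative to their own encyclopedia, which is why Eco treats aberrant decoding as a structural feature of communication across cultures, eras, and belief systems rather than a simple error.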
Authorial Intent
A common default view, intentionalism, assumes that:
- the author had a reasonably stable, articulable content in mind,
- the text is mainly a vehicle for transmitting that content, and
- the best reading is the one that most closely matches that intended content.
I can think of a few authors who presumably violate these assumptions. For people like Wittgenstein and Nietzsche, all three of these assumptions are under pressure:
- The author may not be fully self-transparent: they may be discovering their own view by writing, or working against their own nagging thoughts.
- The aim may not be “theory delivery”: the writing may be therapeutic, diagnostic, genealogical, or provocational.
- The text may be intentionally multi-voiced: aphorisms, masks, remarks, thought experiments—these can be tests for the reader, not “doctrines.”
If we were to try to reconstruct the faithful intent of writers like this, we very well might be misunderstanding their message. Wittgenstein's aphoristic, remark-based style might not be a failure to compress a complete theory into a digestible form for readers; it can be part of his communicative technique: multiple angles, reminders, and staged confrontations with your own conceptions. With Nietzsche, stylistic elements of writing are not purely decorative, they are part of the argument. Aphorism can resist being turned into a settled doctrine, it can force contextual reading (“this remark has significance only under certain conditions”), enact perspectivism by presenting partial angles rather than a synoptic view, and function as a provocation that reveals the reader’s moral/psychological commitments. So a “univocal” extraction can be a category mistake: you end up treating a repertoire of perspectives and diagnostic tools as if it were a single axiomatized system with one single meaning like mathematics.
This reveals an interesting tension between faithfulness and fruitfulness in interpretation. Exegetical faithfulness aims at determining what an author was specifically trying to do. Constructive appropriation (fruitfulness) asks, “What can we build from this that clarifies a problem, even if it goes beyond what the author explicitly (or literally) meant?” In the latter, reconstructing meaning is less about semantic/syntactic content and more about criteria such as explanatory reach, illumination, or problem-solving power. Engaging with texts like these can force you to reconceptualize your own views in ways not originally intended by the authors. Authorial intent still matters as a constraint (“don’t make it mean anything”), but it’s not always the sole aim or the highest court of appeal. Wittgenstein wrote in highly ambiguous fragments, which ended up influencing fields as diverse as linguistics, AI, and cognitive science. Nietzsche was profoundly influential across existentialism, political movements, and even the foundational thinkers of psychology. This suggests that a fruitful interpretation of a text is often more important and impactful than a strictly faithful one.
A Software Analogy
Is "Meaning" Inseparable from Facts?
- Evangelical inerrancy (Chicago Statement) commonly locates strict inerrancy in the autographic text and acknowledges that God did not promise an inerrant transmission, which implicitly authorizes textual criticism as part of responsible “what does Scripture say?” work.
- Catholic doctrine (Dei Verbum) explicitly says Scripture is “word of God in human language,” that the human authors are true authors, and frames “without error” in relation to what God wanted conveyed “for the sake of salvation.”
- The Pontifical Biblical Commission goes further in method: it calls the historical-critical method “indispensable” for scientific study of meaning precisely because Scripture is God’s word in human language and has sources behind it.
Background Theory Determines Interpretation
- Is meaning mainly in the author’s intention (Grice-style)?
- Is meaning mainly in public linguistic rules/use (later Wittgenstein)?
- Is meaning a function of reference/truth conditions (Frege/Tarski traditions)?
- Is meaning a function of social practice and power (genealogy/critique traditions)?
- Is meaning a function of readerly uptake (reception/reader-response)?
- Is meaning stable or fundamentally open/iterable (deconstruction-ish)?
All of this determines what counts as evidence, what counts as error, and what counts as success. Change the meaning theory → change the goal of interpretation → change the ranking of criteria.
An Argumentation Scheme for Interpretation
This section draws on work in argumentation theory and AI & law on statutory interpretation, including:
- Argumentation Schemes for Statutory Interpretation: A Logical Analysis
- Arguments of statutory interpretation and argumentation schemes
- An Argumentation Framework for Contested Cases of Statutory Interpretation
- Legal reasoning with argumentation schemes
- Statutory Interpretation as Argumentation
Interpretation is best treated as a defeasible claim about meaning—a conclusion supported by pro and con arguments rather than a one-shot deduction. To interpret is to associate an expression/element E in an artifact/document/object D with a meaning M in a context/use, and then to defend that association under challenge.
Interpretive conclusions should be framed as conceptual/terminological claims about “best interpretation” relative to specified goals and standards—not as deontic “oughts.” The defeasibility of interpretive reasoning is made explicit through critical questions that expose defaults, assumptions, and points of vulnerability.
Interpretation-in-the-strict-sense is appropriate when there is a genuine doubt or conflict about what some element E in an artifact D means/does/implies/applies-as, such that no unchallenged default meaning settles the issue.
This corresponds to the transition from prima facie understanding (a shared, conventional default) to interpretation proper, where the default can no longer be taken for granted and must be defended, refined, or replaced.
BestInt(E, D | Cxt, G, A) ≡ M
“Relative to context Cxt, goal G, and audience/standards A, the best (or most justified) interpretation of element E in artifact D is meaning M.”
This captures interpretation as a meaning-attribution claim made to overcome doubt, evaluated by an explicit set of criteria and dialectical tests.
Interpretive disputes often involve two intertwined questions:
- Which evaluative framework should govern the dispute? (meta-level)
- Given that framework, which interpretation is best? (object-level)
These are modeled as two linked schemes.
Scheme I (framework selection). Claim: for this dispute about E in D (in context Cxt) with goal G, we should evaluate interpretations using framework F—a set of criteria plus priority rules and burden/standard of proof.
- Goal premise: The practical/theoretical goal G of this interpretive task is specified (e.g., faithful reconstruction, legal applicability, theological normativity, aesthetic understanding, moral critique, predictive adequacy, explanatory adequacy, institutional compliance, etc.).
- Object premise: D is an artifact of type T in domain Dom (statute, scripture, poem, scientific model, dataset, contract, UI spec, algorithmic output…), with authority-status A (binding, canonical, exemplary, exploratory, aesthetic…).
- Practice/fit premise: In domain Dom, for objects like T with authority-status A, framework F is (a) recognized as appropriate and/or (b) best fits G and the relevant constraints (institutional, epistemic, moral, practical).
- Feasibility premise: F can actually be applied here (available evidence, competence, time, procedural constraints).
- Defeasibility premise: No overriding reason blocks using F (e.g., F violates binding authority or explicit procedural rules; F’s triggering conditions aren’t met; F yields contradictions with controlling constraints).
Therefore, F is the (provisional/default) evaluative framework for settling interpretations of E in D for goal G.
Defeasibility note: Framework choice is itself defeasible. In practice, frameworks function as defaults that can be overridden when superior reasons arise.
Scheme II (interpretation selection). Claim: interpretation M is the best/most justified interpretation of E in D (in Cxt) for goal G, under framework F.
- Doubt premise: There is a genuine interpretive doubt/conflict about E in D in Cxt for goal G (i.e., no unchallenged default resolves it).
- Framework premise: Framework F is applicable here (by Scheme I).
- Support premise: Under F, there are sufficient pro-arguments supporting M from admissible sources of support (textual/structural evidence; contextual facts; genre constraints; intent evidence; precedent/tradition; functional/purpose evidence; empirical fit; institutional fit; explanatory/predictive adequacy; etc.).
- Defeat-management premise: Defeaters against M—both rebutting alternatives and undercutting attacks on the inference—are answered or neutralized under F.
- Comparative premise: Competing interpretations M₁…Mₙ are (i) considered and (ii) either rejected or shown not better than M under F.
Therefore, M is (provisionally) justified as the best interpretation of E in D for goal G, under F.
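As a minimal sketch of this object-level comparison (my own simplification; the warrant kinds, weights, and candidate names are invented for illustration), a framework F can be modeled as weights over admissible kinds of support, with candidates whose defeaters remain unanswered scoring nothing:

```python
# Framework F: admissible warrant kinds and their relative weights (hypothetical).
framework = {"textual": 3, "contextual": 2, "purposive": 2, "intent": 1}

# Candidate interpretations with the kinds of support each enjoys, and
# whether an unanswered rebutter/undercutter defeats the candidate.
candidates = {
    "M1": {"pro": ["textual", "contextual"], "defeated": False},
    "M2": {"pro": ["intent"], "defeated": False},
    "M3": {"pro": ["textual", "purposive", "intent"], "defeated": True},  # undercut
}

def score(c):
    # The defeat-management premise: defeated candidates get no credit,
    # no matter how much raw support they have.
    if c["defeated"]:
        return 0
    return sum(framework.get(kind, 0) for kind in c["pro"])

# The comparative premise: M wins only by beating every considered rival.
best = max(candidates, key=lambda name: score(candidates[name]))
```

Here "M1" wins: "M3" has the most raw support but is undercut, and "M2" is admissible but weaker under this F. Changing the weights (i.e., choosing a different framework under Scheme I) can flip the outcome without any new object-level evidence, which is the two-layer point.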
The two-level view can be implemented as a single, domain-general scheme that explicitly parameterizes context, goals, and standards.
BestInt(E, D | Cxt, G, A) ≡ M
1. Interpretandum identification: Element/expression E occurs in artifact/object D, and the interpretive question is well-posed (the “thing to be interpreted” is fixed).
2. Context specification: Relevant context Cxt is identified (local co-text, broader corpus, historical setting, institutional setting, genre/discourse type, task setting).
3. Goal specification: Interpretive goal G is specified (recover intended message, apply a norm, preserve doctrinal coherence, achieve aesthetic illumination, obtain explanatory/predictive adequacy, etc.).
4. Candidate meaning: A candidate meaning/reading M is proposed at the appropriate grain size (propositional content, directive, theme, function, rule, model-structure, etc.).
5. Method/criterion-set selected: An evaluative set of criteria/warrants K (or framework F) is selected as appropriate to (D, Cxt, G, A).
6. Fit/support claim: Under K/F, interpreting “E in D as M” is supported (linguistic fit, contextual fit, coherence, explanatory/predictive power, institutional fit, empirical fit, etc.).
Interpretive warrant: If (1)–(6) obtain and no defeating exception applies, then BestInt(E, D | Cxt, G, A) ≡ M.
- Default-meaning presumption: There is a reasonable prima facie/default reading available given community conventions, unless challenged.
- Competence/normalcy assumptions: Interpreter competence; normal discourse conditions; artifact integrity; data quality; stable background practices, etc.
- Defeater conditions: Superior reasons support rejecting K/F here or preferring a rival meaning M′ (technical context defeats ordinary meaning; irony/sarcasm; corruption/variant text; genre shift; redaction; measurement error; institutional constraints; etc.).
Therefore, BestInt(E, D | Cxt, G, A) ≡ M (defeasibly).
- Undermining: attack a premise (e.g., the alleged context is wrong; the evidence is weak).
- Rebuttal: support an incompatible conclusion (a rival interpretation M′).
- Undercutting: attack the inferential link by showing an exception applies (e.g., the canon/criterion is inapplicable here).
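These attack relations can be made computational. The sketch below is my own illustration in the style of Dung-style abstract argumentation (not taken from any specific system): a three-node graph in which an undercutter defeats a rebuttal and thereby reinstates the original pro-argument, evaluated under the grounded (most skeptical) labelling:

```python
# Each key is an argument; its value is the set of arguments it attacks.
attacks = {
    "A_M":  set(),          # pro-argument for reading M
    "R_M'": {"A_M"},        # rebuttal: a rival reading M' attacks A_M
    "U":    {"R_M'"},       # undercutter: shows the rival's canon is inapplicable
}

def grounded(attacks):
    """Grounded labelling: accept an argument once all its attackers are
    rejected; reject it once some attacker is accepted; iterate to a fixpoint."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in attacks:
            if arg in accepted or arg in rejected:
                continue
            attackers = {a for a, targets in attacks.items() if arg in targets}
            if attackers <= rejected:       # every attacker is out
                accepted.add(arg); changed = True
            elif attackers & accepted:      # some attacker is in
                rejected.add(arg); changed = True
    return accepted, rejected

accepted, rejected = grounded(attacks)
```

"U" is unattacked, so it is accepted; that rejects the rebuttal "R_M'", which reinstates "A_M". Undermining would instead be modeled by attacking a premise argument that "A_M" depends on, a structure this abstract sketch flattens away.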
- Fix the interpretandum: Specify what exactly is being interpreted (E and D) and what would count as an answer.
- State the goal (G): Clarify what interpretive success means (reconstruction, application, critique, explanation, prediction, aesthetic illumination, etc.).
- Set the audience/standard (A): Identify who must be persuaded and what proof standard applies (court, scholarly community, faith community, lab/peer review, product team, etc.).
- Establish a default: Start from a presumptive reading when available (prima facie understanding).
- Generate alternatives: Enumerate plausible candidates {M₁…Mₙ}.
- Select warrants/criteria (K) / framework (F): Choose admissible interpretive warrants and any priority rules, burdens, and standards appropriate to G and A.
- Build pro/con arguments: Instantiate argument schemes supporting and attacking each candidate; make inferential dependencies explicit.
- Apply critical questions: Surface assumptions and test for exceptions; ensure objections are handled rather than bypassed.
- Resolve conflicts: Weigh competing arguments according to the selected proof standard and any priority rules when schemes collide.
- Output a statused conclusion: Report not only M, but its dialectical status (e.g., defensible vs justified) relative to the current argument graph and standards—and note any remaining live defeaters or unresolved ties.
In Toulmin terms, backing underwrites the warrants inside K/F—the often-implicit commitments that make a criterion seem appropriate. In real disputes, these are frequently the deepest sources of disagreement.
- What is D? (law, scripture, literature, model output, dataset, institutional directive…)
- What kind of authority does it have? (binding/coercive, canonical/normative, exemplary, exploratory, aesthetic…)
- Where does authority live? (original author, final text, community practice, institutional body, empirical performance, tradition…)
- Meaning as ordinary-language default vs technical language vs contextual pragmatics
- Strong vs weak assumptions about semantic stability over time
- Genre conventions as binding constraints vs looser affordances
- The role of co-text and broader corpus in fixing meaning
This includes the crucial distinction between prima facie understanding (default meaning from shared socio-linguistic practice) and interpretation as a more complex presumptive reasoning task when doubt arises.
- Intentionalist: author’s communicative intent is primary
- Textualist/formalist: artifact-internal constraints dominate
- Purposivist/teleological: function/aim/purpose dominates
- Reader-response/reception: uptake/community practice is central
- Canonical/tradition-governed: final form + rule-of-faith/tradition constrain
- Deconstructive: emphasizes instability/iterability; critiques closure assumptions
- Constructive: optimize philosophical/ethical/aesthetic fruitfulness subject to constraints (“textual friction”)
- What counts as admissible evidence (manuscripts, legislative history, author letters, experiments, lived experience, model diagnostics, institutional records…)
- Burdens of proof, burden-shifting during critical questioning, and proof standards
- Error-cost profiles (false positives vs false negatives in interpretation)
- What “best” means: truth, coherence, justice, salvation, emancipation, predictive accuracy, utility, aesthetic richness, institutional safety, etc.
- Whether consequences are admissible as evidence (teleological/purposive vs strict literalism/textual constraint)
- Why this goal is the right one for this community and object (often moral/political/theological/pragmatic)
A central insight is that interpretive arguments share a general critical-question spine:
- What plausible alternatives exist?
- What reasons reject them?
- What reasons support an alternative as better/equally good?
Below is that spine expanded into a full set covering task framing, framework choice (Scheme I), and interpretation choice (Scheme II).
- What exactly is the disputed element E (term, clause, symbol, gesture, data feature, model component)?
- What is the domain Dom and artifact type T?
- What is the interpretive goal G (reconstruction, application, critique, explanation, prediction, optimization, aesthetic illumination, etc.)?
- Is G legitimate/appropriate for this domain and audience (court, church, seminar, lab/peer review, product team)?
- What decision/stakes hinge on the interpretation, and what error-cost profile follows?
- Are we interpreting the right object D (which edition, manuscript tradition, translation, dataset version, model version, build)?
- Are the boundaries of E correct (word, sentence, clause, whole work, canon, dataset slice, model family)?
- Is E/D possibly corrupted, interpolated, mistranslated, noisy, non-authoritative, or otherwise unreliable for this question?
- What contexts Cxt are relevant (immediate co-text, broader corpus, genre, institutional setting, historical setting, speaker/audience situation, task setting)?
- Why these contexts rather than others? Are we cherry-picking or importing context illicitly?
- What alternative frameworks F′ are plausible here (text-first, intent-first, purpose-first, reception-first, empirical-fit-first, tradition-first, etc.)?
- What are the reasons to reject each F′ (inapplicable, violates authority rules, ignores key evidence types, yields systematic bias, contradicts binding procedure, etc.)?
- What are the reasons an alternative F′ is better/equally good for G?
- Are F’s triggering conditions actually met? (e.g., “technical meaning” requires a technical context; otherwise that canon is inapplicable.)
- Does F specify priority rules for conflicts among criteria (what outranks what), and are those priority rules justified here?
- Does F specify burdens/standards of proof—what counts as defeating vs merely raising doubt?
- Does F rely on controversial background assumptions (inerrancy, strong intentionalism, strong skepticism, etc.), and are those defensible here?
- Can F actually be applied given practical constraints (evidence availability, competence, time, procedure)?
- What is the prima facie/default reading—and why isn’t it sufficient here?
- What admissible evidence supports M under F (textual, contextual, historical, empirical, institutional, genre-based, functional/purpose evidence, etc.)?
- Are any key premises merely assumed (missing warrants, hidden definitional steps, suppressed context)?
- What defeaters exist?
- Rebutters: rival interpretations M′ supported by comparable arguments.
- Undercutters: attacks on applicability of the inference rule/criterion being used (e.g., “that canon doesn’t apply here”).
- Have all reasonable alternative interpretations M₁…Mₙ been considered?
- What reasons reject each alternative?
- What reasons suggest an alternative is better/equally good?
- If multiple interpretations remain comparably supported, what rule breaks ties (priority rules, charity, conservatism, simplicity, minimal revision, institutional safety, etc.)?
- Is M consistent with the broader system/corpus/practice that F treats as relevant (precedent, tradition, genre family, model family)?
- Does M overfit local details while breaking global constraints (or vice versa)?
- Does M depend on anachronism or illicit context importation (especially for historically distant texts)?
- Is M robust across nearby cases/usages, or does it only “work” for a cherry-picked scenario?
- Does the evidence offered bear on meaning rather than merely on desirability, ideology, or association?
- Is the evidence sufficient to support the premises if challenged (especially where CQs shift the burden back to the proponent)?
- What counterevidence supports rebutting interpretations or attacks premises directly?
- Is there an exception that blocks the inference from the support to the conclusion (irony, sarcasm, technical jargon, redaction, corruption, measurement error, domain shift, institutional constraint, etc.)—and is the alleged exception itself well-supported?
- Who bears the burden to support the interpretation, and when does it shift during critical questioning?
- What proof standard is in force (preponderance, beyond reasonable doubt, scholarly plausibility, domain-specific acceptance thresholds), and does the current pro/con graph meet it?
- Given the current pro/con graph, is M merely defensible or actually justified under the chosen standard?
- If multiple incompatible interpretations remain, does the downstream decision actually depend on choosing between them—or do rival interpretations converge on the same operative outcome (a weak-justification/convergence case)?
- Is M too vague (unfalsifiable/elastic) or too specific (overfitted)?
- Does M remain plausible under small context changes, or is it fragile and “just-so”?
- Does M integrate explanatorily with other parts of D or the relevant corpus without ad hoc patches?
- Are we “jumping to a conclusion” prematurely, rather than letting the objection-and-response cycle run to convergence?
- Burden allocation: Who bears the burden to support the interpretation, and when does it shift during critical questioning?
- Proof standard: What proof standard is in force (preponderance, beyond reasonable doubt, scholarly plausibility, etc.), and does the argument meet it? (Carneades uses proof standards to resolve conflicts.)
- Defensible vs justified: Is M merely defensible (supported in some acceptable extension), or justified in a stronger sense?
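The difference a proof standard makes can be shown with a toy computation. This is loosely inspired by the Carneades idea of proof standards; the weights, thresholds, and operationalizations of the standard names are invented for illustration:

```python
# Aggregated strength of the pro and con argument sides (hypothetical numbers).
pro_weight, con_weight = 0.6, 0.4

def meets(standard: str, pro: float, con: float) -> bool:
    if standard == "scintilla":                 # any admissible support at all
        return pro > 0
    if standard == "preponderance":             # pro outweighs con
        return pro > con
    if standard == "clear_and_convincing":      # outweighs by a clear margin
        return pro > con and pro - con > 0.3
    if standard == "beyond_reasonable_doubt":   # con side nearly silenced
        return pro > con and con < 0.05
    raise ValueError(standard)

# The same pro/con graph yields different statuses under different standards:
statuses = {s: meets(s, pro_weight, con_weight)
            for s in ["scintilla", "preponderance",
                      "clear_and_convincing", "beyond_reasonable_doubt"]}
```

With these numbers, M is defensible under preponderance but not justified under the stricter standards, so "is M the right interpretation?" has no answer until the governing standard (a Scheme I, meta-level choice) is fixed.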
Across domains, interpretive support often draws from recurring families of warrants/criteria, including (non-exhaustively):
- Ordinary meaning / conventional use (often a default starting point)
- Technical meaning (triggered by technical context)
- Contextual harmonization (co-text/corpus coherence)
- Precedent / tradition / canonical constraints
- Analogy and concept-based reasoning
- General principles and systematic fit
- Historical evidence (authorship, drafting, redaction, reception history)
- Purpose / function / teleology
- Substantive reasons (domain-accepted normative or pragmatic reasons)
- Intent evidence (when admissible under the chosen framework)
Which of these are admissible, how they’re weighted, and which outrank which are questions for Scheme I.
The output of interpretive reasoning is not merely “M,” but a statused interpretive conclusion:
- The proposed meaning-attribution: BestInt(E, D | Cxt, G, A) ≡ M
- Its dialectical status: defensible vs justified (relative to proof standards and the current pro/con graph)
- The governing evaluative framework: F (criteria + priority + burdens/standards)
- The live defeaters and unresolved ties: what would need to change to overturn or strengthen the conclusion
This makes interpretation a transparent, revisable, burden-sensitive form of practical reasoning: a structured method for moving from doubt to a justified meaning-attribution under explicit goals and standards.