Arguments from Silence
For any given domain of inquiry, some argument structures and patterns tend to be more central than others. Statistical sciences, for example, emphasize arguments about causality and correlation. Arguments from precedent arise frequently in legal domains, especially in the context of legal interpretation. Arguments from analogy underpin many situations where we evaluate one situation using criteria developed for related situations. You can get a sense of an argument's prominence by googling it and observing what is returned. For example, "argument from precedent" seems to return results almost exclusively from legal domains, while "correlational arguments" returns a plethora of results related to data analysis. But strangely, "argument from silence" returns quite a number of articles from apologetics websites. This is not obvious at first glance, especially if you are unfamiliar with the apologetic-industrial complex. After all, search engines use algorithms such as PageRank to return relevant search results from their index. Ranking algorithms use a number of heuristics when deciding what to return, such as:
- Relevance: How closely the content matches the query terms.
- Quality of Content: How valuable or trustworthy the content is, often inferred from links, authority, and user engagement.
- Page Authority: Based on the number of other reputable sites linking to the page (backlinks).
- Freshness: Some searches prioritize recent information, like news, while others don’t require the latest updates.
The argument from silence itself has a simple logical form:
- P1: If some event occurred, or some hypothesis about the historical record is true, then we would expect to observe evidence indicating its occurrence.
- P2: We do not observe any evidence
- C: Therefore, the hypothesis is false, or the event did not occur.
Historians traditionally require three conditions for a strong argument from silence:
- An extant document D in which no reference to an event E appears.
- It is known that the intention of the author of document D was to provide an exhaustive list of all the events in the class of events to which E belongs
- Event E is assumed to be a type of event which the author of D would not have overlooked, had the event taken place.
"Silence" means that the thing in question (call it X) is not mentioned in the available documents. If it were mentioned, then with the usual qualifications it would be proved to exist. Since X is not mentioned, X cannot be proved to exist. A natural further inference from this evidence is that X did not exist. The basic point is that if X did not in fact exist, then the only trace which that fact could leave in the evidence is the silence of the evidence as to X. At the same time, any such conclusion must be provisional. If documents are later found that do mention X, then X is after all proved to exist. A single positive may overturn any number of negatives. A single sound refutes all silences. The possibility of such a future positive can never be ruled out. But until it occurs, the non-existence of X is the best inference from the absence of X in the evidence. The strength of that inference in a given case will depend on (1) how many documents there are (in statistical terms, how large the sample is) and (2) how likely the thing is to have been mentioned in documents of that type in the first place. We might explore these concepts a little further.
... the argument from silence, like all historical arguments, is always conjectural. But it is not, as some claim, a fallacy. It is the correct default inference from silence. That inference can be strengthened by relevant evidence of a positive kind, or by the continued silence of further evidence.
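The two factors just mentioned, sample size and likelihood of mention, can be sketched probabilistically: if each of n independent documents of the relevant type would mention X with probability p, the chance that all of them are silent is (1 − p)^n. The values below are purely illustrative.

```python
# Probability that n independent documents are all silent about X,
# when each would mention X with probability p (illustrative values only).

def p_total_silence(n_documents, p_mention):
    """(1 - p)^n: the chance that every document fails to mention X."""
    return (1.0 - p_mention) ** n_documents

# Few documents, low chance of mention: silence is weak evidence.
print(round(p_total_silence(3, 0.05), 3))   # 0.857
# Many documents of the relevant type: silence becomes strong evidence.
print(round(p_total_silence(50, 0.05), 3))  # 0.077
```

Either factor alone can make the silence nearly worthless: with only a handful of documents, or documents of a type unlikely to mention X, the probability of total silence stays high even if X existed.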
The argument from silence typically proceeds through the following stages:
- Context Establishment: Identify a source or set of sources (such as a text, record, historical account, or authority) that would reasonably be expected to mention a certain fact, event, or detail if it were true or relevant.
- Expectation of Mention: Argue that, under normal circumstances, if the fact or event in question had occurred, then this source (or sources) would likely have included it.
- Observation of Silence: Point out that the source is silent or does not mention the fact, event, or detail being discussed.
- Inference from Silence: Conclude that because the source does not mention it, the fact, event, or detail likely did not occur, or the entity did not exist.
- Limitations and Counterarguments: The information may have been omitted due to irrelevance to the source’s purpose. The source may not have had access to the information. The fact may have been considered too well-known to mention explicitly.
Wigmore (1940, 270) drew a distinction between these two meanings of burden of proof. The first one he called the risk of nonpersuasion. Wigmore offered the following example (271) from "practical affairs." Suppose A has a property and he wants to persuade M to invest money in it, while B is opposed to M's investing money in it. A will have the burden of persuasion because unless he persuades M "up to the point of action," he will fail and B will win. Wigmore went on to show how the burden of persuasion works in litigation, in a way similar to that of practical affairs, except that the prerequisites are determined by law (273), and the law divides the procedure into stages (274). The second meaning is called the burden of production. It refers to the quantity of evidence that the judge is satisfied with to be considered by the jury as a reasonable basis for making the verdict in favor of one side (279). If this is not fulfilled, the party in default loses the trial (279). According to Wigmore (284), the practical distinction between these two meanings of burden of proof is this: "The risk of nonpersuasion operates when the case has come into the hands of the jury, while the duty of producing evidence implies a liability to a ruling by the judge disposing of the issue without leaving the question open to the jury's deliberations."
Definitions
- H: The hypothesis that the event or entity in question exists/occurred.
- ~H: The negation of H (i.e., the event/entity does not exist/occur).
- E: The evidence we expect to observe if H is true (e.g., a record or mention in a source).
- ~E: The absence of that evidence.
Bayesian reasoning assesses the posterior odds as:

P(H|~E) / P(~H|~E) = [P(H) / P(~H)] × [P(~E|H) / P(~E|~H)]

where the likelihood ratio is:

LR = P(~E|H) / P(~E|~H)
Here’s how these terms apply to the argument from silence:
- P(~E|H): The probability of silence (absence of evidence) given that H is true.
  - This depends on how likely the evidence would be recorded or observed if H were true. If silence is unlikely (evidence is expected), observing silence suggests H is false.
- P(~E|~H): The probability of silence given that H is false.
  - This depends on the context. If no evidence would arise regardless of H's truth, P(~E|~H) will be high.
- Prior Odds P(H)/P(~H): Our initial belief in the likelihood of H vs. ~H.
The argument from silence is strongest when P(~E|H) is much smaller than P(~E|~H):
- If evidence is expected when H is true but not when H is false, the absence of evidence strongly supports ~H.
- If P(~E|H) = P(~E|~H), silence is not informative.
Using Bayes' theorem, we update the odds:

P(H|~E) / P(~H|~E) = [P(H) / P(~H)] × [P(~E|H) / P(~E|~H)]
- Argument Strength: The argument from silence is stronger when P(~E|H) is low (evidence is highly expected if H is true).
- Limitations: If P(~E|H) and P(~E|~H) are similar, the absence of evidence is weakly informative.
- Uncertainty: Prior beliefs (P(H)) play a critical role in how strongly silence updates our confidence in H.
Here is a fictitious example:
- Prior probability of H (e.g., the event occurred): P(H) = 0.5.
- Prior probability of ~H: P(~H) = 0.5.
- P(~E|H): Probability of silence if H is true = 0.2.
- P(~E|~H): Probability of silence if H is false = 0.8.
Likelihood Ratio:

LR = P(~E|H) / P(~E|~H) = 0.2 / 0.8 = 0.25

Posterior Odds:

P(H|~E) / P(~H|~E) = (0.5 / 0.5) × 0.25 = 0.25, so P(H|~E) = 0.25 / 1.25 = 0.2
Thus, the posterior probability of H given ~E decreases significantly, because silence is more likely when ~H is true. By comparing P(~E|H) and P(~E|~H), this Bayesian framework provides a systematic way to evaluate arguments from silence.
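The update can be verified with a short sketch, using the same fictitious numbers as above (priors 0.5/0.5, P(silence|H) = 0.2, P(silence|~H) = 0.8):

```python
# Posterior probability of H after observing silence (~E), via Bayes' rule.
# Numbers mirror the worked example: P(H)=0.5, P(~E|H)=0.2, P(~E|~H)=0.8.

def posterior_given_silence(p_h, p_silence_given_h, p_silence_given_not_h):
    """P(H | ~E) = P(~E|H)P(H) / [P(~E|H)P(H) + P(~E|~H)P(~H)]."""
    p_not_h = 1.0 - p_h
    numerator = p_silence_given_h * p_h
    denominator = numerator + p_silence_given_not_h * p_not_h
    return numerator / denominator

print(posterior_given_silence(0.5, 0.2, 0.8))  # 0.2
```

Starting from even odds, the silence drags P(H) from 0.5 down to 0.2, exactly as the posterior-odds calculation above indicates.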
Third, the strength of an argument from silence can be measured in terms of the ratio of these likelihoods, P(~E|~H)/P(~E|H). This mathematical fact has three consequences: (a) there is no upper limit on the strength of arguments from silence, since that ratio approaches infinity with a positive numerator as the denominator shrinks toward zero; (b) when the two likelihoods are equal—that is to say, when we expect ~E equally strongly whether or not H is true—the argument is completely forceless; and (c) when there is not a very high expectation of the evidence on the assumption that the event had occurred, that is, when P(E|H) is rather small, say less than 0.5, the denominator of the likelihood ratio, which is equal by definition to 1 – P(E|H), will be rather large, in this case greater than 0.5; and as the numerator can be no greater than 1, the argument from silence will have very little force.
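These three consequences can be checked numerically. A small sketch (all probability values purely illustrative), writing the strength ratio as P(~E|~H)/P(~E|H) with P(~E|H) = 1 − P(E|H):

```python
# Strength of an argument from silence: R = P(~E|~H) / P(~E|H),
# where P(~E|H) = 1 - P(E|H). All input probabilities are illustrative.

def silence_strength(p_e_given_h, p_e_given_not_h):
    """Likelihood ratio favoring ~H once the expected evidence is absent."""
    return (1.0 - p_e_given_not_h) / (1.0 - p_e_given_h)

# (a) No upper bound: as P(E|H) approaches 1, the ratio grows without limit.
print(round(silence_strength(0.99, 0.0), 3))  # 100.0
# (b) Equal likelihoods: ratio = 1, the argument is forceless.
print(silence_strength(0.3, 0.3))             # 1.0
# (c) P(E|H) < 0.5 caps the ratio below 1/(1 - P(E|H)) < 2.
print(round(silence_strength(0.4, 0.0), 3))   # 1.667
```

Case (c) is the operative one for most historical silences: unless the evidence was strongly expected in the first place, its absence can never carry much force.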
- If H (the event or fact in question) were true, how probable is it that the author in question would have noticed it (N)?
- If H were true and the author had noticed it, how probable is it that he would record it (R)?
- If H were true, and the author had both noticed and recorded it, how probable is it that this record would have survived and that contemporary historians would be aware of it (S)?
In more formal terms, these three questions amount to a request for three numbers: P(N|H), P(R|H & N), and P(S|H & N & R). Since (N & R & S) entails E, we can approximate the critical value P(E|H) by the product P(N|H) × P(R|H & N) × P(S|H & N & R), noting that this is equivalent to P(N & R & S|H), which in turn must be less than or equal to P(E|H).
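The chained product is easy to sketch; the notice/record/survive values below are hypothetical:

```python
# Lower bound on P(E|H): the event is noticed (N), recorded (R), and the
# record survives (S). Since (N & R & S) entails E, the product bounds
# P(E|H) from below. The three conditional probabilities are hypothetical.

def p_evidence_lower_bound(p_notice, p_record_given_notice, p_survive_given_record):
    """P(N & R & S | H) = P(N|H) * P(R|H&N) * P(S|H&N&R)."""
    return p_notice * p_record_given_notice * p_survive_given_record

print(round(p_evidence_lower_bound(0.9, 0.5, 0.3), 3))  # 0.135
```

Even with a high chance of notice (0.9), a modest chance of recording (0.5) and of survival (0.3) pull the bound on P(E|H) well below 0.5, which by consequence (c) above leaves the silence with little force.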
The joint probability of two events is P(A ∩ B) = P(A) × P(B|A), or, if the events are independent:

P(A ∩ B) = P(A) × P(B)
Every additional condition multiplies the probability by a factor, so the more conditions you add, the smaller the joint probability becomes. For example:
- Event A: It rains tomorrow (with some probability P(A)).
- Event B: A bird lands on your window (with some probability P(B)).
- P(A ∩ B) = P(A) × P(B), which is no larger than either probability alone.
If you keep adding conditions, such as "and I win the lottery," the joint probability quickly becomes negligible.
- "They would need a telescope (low chance)."
- "They would need to be awake at that time (low chance)."
- "They would need to report it (low chance)."
Even if the base rate of someone noticing a rare event is not especially low, adding arbitrary low-probability conditions can lead to P(E|H) being perceived as near zero. The conjunction fallacy arises when people believe the probability of a detailed event is higher than that of a simpler one. In reality, adding conditions always reduces the probability. The classic example:
- Scenario A: "Linda is a bank teller."
- Scenario B: "Linda is a bank teller and a feminist."
People often perceive Scenario B as more likely because it feels more "representative," but mathematically: P(bank teller ∧ feminist) ≤ P(bank teller).
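The conjunction rule is straightforward to verify; the probabilities assigned to "Linda" below are hypothetical:

```python
# Conjunction rule: a conjunction can never be more probable than either
# conjunct. The probabilities assigned here are hypothetical.

p_teller = 0.05                 # P(Linda is a bank teller)
p_feminist_given_teller = 0.4   # P(feminist | bank teller)
p_both = p_teller * p_feminist_given_teller

assert p_both <= p_teller       # Scenario B cannot exceed Scenario A
print(round(p_both, 3))         # 0.02
```

Whatever conditional probability we assign to the added detail, multiplying by it can only shrink the result, which is exactly why piling conditions onto P(E|H) drives it toward zero.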
If the ultimate goal is to analyze P(E|H) or P(~E|H)—the probability of observing or not observing evidence given the hypothesis—then N (awareness) might indeed be unnecessary.
Why N Could Be Redundant:
- Evidence Is Ultimately Tied to Writing (R): Whether someone was aware of a historical fact (N) only matters if they wrote it down (R). If no one wrote it down, N has no impact on whether we observe evidence E.
- If the chain of reasoning always requires R, then N can be omitted, because awareness alone doesn't produce observable evidence.
- N would only matter if there's a direct pathway where awareness alone could create evidence—for example, if awareness led to oral traditions that later became written records. But if you're focusing strictly on R (writing) and S (preservation), N may add unnecessary complexity.
S (the survival of writing through history) is influenced by numerous independent and external factors. These factors could make the probability seem arbitrarily low if the dependencies aren't accounted for properly.
Why This Matters:
1. Complex Dependencies Inflate Uncertainty:
   - Historical survival depends on many unrelated events: wars, natural disasters, decay, or random loss of records. Modeling all these factors directly is almost impossible, so P(S|H & N & R) can feel arbitrarily small.
   - This small value for S might unfairly dominate the overall P(E|H), even if R (writing it down) was highly probable.
2. The Fallacy of Over-Specification:
   - Treating S as a single, unified condition hides the fact that S is actually a conjunction of many events, each of which reduces the total probability: P(S) = P(S1) × P(S2) × ⋯ × P(Sk), where S1, …, Sk represent conditions like physical preservation, political continuity, and access to historical archives.
3. Alternative Pathways for Evidence:
   - If evidence can survive through indirect means (e.g., copies, translations, or secondary references), then modeling S solely as the survival of the original writing may be overly restrictive.
If N is redundant, the probability of observing evidence depends on two key components:
- R: The fact is written down.
- S: The written record survives.
Probability of No Evidence:

P(~E|H) = 1 − P(R|H) × P(S|H & R)
Rather than modeling S as a conjunction of highly specific and improbable events, treat it as a broader, aggregated probability:
- P(S|H & R): Represents the likelihood of written evidence surviving through any pathway, not just direct preservation.
For example:
- S could account for:
- Copies being made.
- Translation into other languages.
- Indirect mentions in other documents.
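Under the simplifying assumption that these pathways are independent, the aggregated survival probability is one minus the chance that every pathway fails. A sketch with hypothetical per-pathway probabilities:

```python
# Aggregated survival: a record can reach us through any of several
# pathways (original, copies, translations, indirect mentions). The broad
# survival probability is 1 minus the chance that all pathways fail.
# Pathway probabilities below are purely illustrative.

def aggregated_survival(pathways):
    """P(survival through at least one pathway), assuming independence."""
    p_all_fail = 1.0
    for p in pathways:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

# original, a copy, a translation, an indirect mention
print(round(aggregated_survival([0.1, 0.2, 0.15, 0.25]), 3))  # 0.541
```

Note how four individually unlikely pathways (none above 0.25) aggregate to better-than-even odds of survival, which is why modeling S as the survival of one manuscript alone understates it.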
By broadening S, you avoid making P(S) arbitrarily small due to overly specific assumptions. If someone argues that P(~E|H) is high because P(S) is low:
- I would argue that S (survival) is heavily dependent on external, independent factors and rests on overly complicated and arbitrary assumptions when modeled restrictively. Simplifying it as a broader, aggregated probability avoids artificially deflating P(E|H).
The analysis is overly restrictive if it focuses exclusively on one individual's written account as the sole determinant of P(E|H). By ignoring alternative sources of evidence—such as physical artifacts, other written records, or indirect inference from context—it artificially limits the pathways through which evidence could arise. Let's expand the reasoning to account for these additional sources of evidence.
1. Evidence is Often Multimodal
Historical evidence rarely relies on a single source or pathway. The absence of evidence (~E) cannot reasonably hinge on just one author's potential recording of an event. Historical evidence often emerges from multiple pathways, so P(E|H) must account for all these sources of potential evidence. P(~E|H) is not determined solely by one author. Instead, evidence typically arises from a combination of:
- Direct Written Records:
- The specific author's mention (e.g., via R and S).
- Other Independent Written Accounts:
- Records by other authors or cultures, often reinforcing or corroborating the event.
- Physical or Archaeological Evidence:
- Artifacts, ruins, or environmental traces that indirectly suggest the event happened.
- Contextual Evidence:
- Broader societal patterns, oral traditions, or logical implications derived from other known facts.
By focusing solely on the chain N → R → S, the current model excludes all these additional potential sources, which would generally increase P(E|H) and reduce P(~E|H).
2. Expanding P(E|H) to Include Multiple Pathways
To account for these additional sources, P(E|H) should reflect a disjunction of pathways through which evidence could arise. Instead of just one pathway (the specific author), we sum the probabilities of all independent sources of evidence:
General Formula:
Let:
- P1: Probability of evidence from the original author's writing (R and S).
- P2: Probability of evidence from other independent written accounts.
- P3: Probability of evidence from physical data.
- Overlap terms account for situations where multiple sources produce overlapping evidence.
The expanded P(E|H) is:

P(E|H) = P1 + P2 + P3 − (overlap terms)

If the sources are approximately independent (a simplifying assumption), this reduces to:

P(E|H) = 1 − (1 − P1)(1 − P2)(1 − P3)
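A short sketch of this expansion, with hypothetical pathway probabilities:

```python
# Expanded P(E|H): evidence can arrive via several independent pathways
# (author's record, other accounts, physical traces). Values illustrative.

def p_evidence_multi(p_author, p_other_accounts, p_physical):
    """P(E|H) when any of three independent pathways can produce evidence."""
    return 1.0 - (1.0 - p_author) * (1.0 - p_other_accounts) * (1.0 - p_physical)

single = 0.3                               # author-only model: P(E|H) = P1
multi = p_evidence_multi(0.3, 0.4, 0.25)   # with the extra pathways
print(round(multi, 3))      # 0.685, more than double the single-pathway value
print(round(1 - multi, 3))  # 0.315, P(~E|H) correspondingly smaller
```

With the same 0.3 chance of the author's record surviving, adding two moderate extra pathways lifts P(E|H) from 0.3 to roughly 0.69 and shrinks P(~E|H) accordingly.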
3. Reassessing P(~E|H)
Once P(E|H) is expanded to include these alternative pathways, P(~E|H)—the probability of no evidence given H—is correspondingly reduced:

P(~E|H) = 1 − P(E|H)

If P(E|H) becomes significantly larger because of multiple pathways, P(~E|H) becomes much smaller.
4. Addressing the Redundancy of Focus on One Source
By restricting E to a single author's record surviving (R and S), the analysis implicitly assumes:
- The author's record is the only possible source of evidence for H.
- If that specific record is lost, no other data could provide evidence.
This is rarely true in practice, especially for historical or archaeological hypotheses. Other potential sources, such as:
- Parallel accounts by different observers or cultures.
- Physical remnants that confirm the hypothesis indirectly.
- Secondary writings that reference the original work (even if it’s lost).
All these sources contribute to P(E|H), even if R or S fails.
5. Alternative Interpretation of Indirect Evidence
Even if no direct records exist (e.g., no author writes about the event), indirect evidence can still support H. For instance:
- Archaeological digs might uncover artifacts consistent with H.
- Geographical evidence (e.g., drought, volcanic ash) might align with H.
- Social or cultural patterns (e.g., sudden abandonment of a city) might indirectly imply H.
These indirect forms of evidence are not captured by P(E|H) if the model focuses solely on written records.
Example:
- Hypothesis (H): A major volcanic eruption destroyed a historical city.
- Evidence (E): Archaeological ruins with volcanic ash layers, corroborating environmental data, and shifts in trade routes.
Even if no written records survive, the physical data and contextual evidence could still strongly support H. Ignoring these sources severely underestimates P(E|H) and inflates P(~E|H).
Example:
If H is "a major battle occurred in a particular region," evidence could include:
- Written accounts from multiple observers (not just one author).
- Archaeological findings, such as weapons or fortifications.
- Cultural artifacts that reflect the aftermath (e.g., monuments, traditions).
By ignoring these, the restrictive model vastly underestimates P(E|H).
6. Accounting for Uncertainty in Alternative Pathways
While it's hard to precisely model all pathways, you can approximate their contributions:
1. Assign Probabilities to Additional Pathways:
   - Estimate P2, P3, etc., based on the likelihood of independent documentation or physical traces.
   - Example: If the hypothesis concerns a widely known historical event, P2 might be high.
2. Incorporate Conditional Independence:
   - Treat the different sources as conditionally independent given H, unless strong dependencies are known.
3. Adjust for Historical Context:
   - Consider factors like the time period, geography, and cultural context, which influence the likelihood of independent evidence pathways.
Let's now reconsider the formalism:

LR = P(E|H) / P(E|~H)

Expanding P(E|H) to include multiple pathways strengthens the numerator, as it becomes harder for P(E|H) to be arbitrarily small. Conversely, P(E|~H)—the probability of evidence arising without H—often remains low, as most sources of evidence are causally tied to H. By restricting P(E|H) to one individual's writing, the argument severely underestimates the probability of evidence. A more realistic model would:
- Recognize that evidence can arise through multiple independent pathways, not just R and S.
- Expand P(E|H) to include contributions from physical data, other writings, and indirect sources.
- Adjust P(~E|H) accordingly, making it less likely to be arbitrarily high.
This broader perspective ensures a more accurate and balanced assessment of P(E|H) and the overall plausibility of H. This makes the argument from silence far more realistic. It also shows that evidence does not exist in a vacuum: reinterpretation of source material can transform old materials perceived to be irrelevant into something that could be added into P(E|H). Focusing solely on one individual's record (and whether or not it survives) as the determinant of ~E (the absence of evidence) under H is extremely restrictive and incomplete. Historical events often leave traces across multiple independent sources, including indirect evidence such as physical artifacts, secondary references, or even inconsistencies in unrelated records that can be analyzed probabilistically. This over-restrictive approach fails to account for the diversity of ways evidence can support H, which ultimately inflates P(~E|H) to an unreasonable degree. Essentially, we can challenge the restrictiveness of P(~E|H) by addressing the complementarity of evidence and presenting a broader framework for evaluating P(E|H). Even if one pathway fails to produce evidence, others can compensate, thereby reducing P(~E|H). This broader view ensures that the absence of one type of evidence (e.g., written records) doesn't overly inflate P(~E|H). Below is a summary of what we have discussed thus far:
1. Written Records Are Not the Sole Source
The argument overly restricts P(E|H) by assuming:
- E depends solely on the existence, recording, and survival of a single written account.
- ~E arises whenever this specific pathway fails.
However, history and archaeology show that events often leave redundant evidence across multiple domains. For example:
- Physical artifacts might confirm an event even if all contemporary writings are lost.
- References in secondary or unrelated records can fill gaps left by primary sources.
2. Physical and Contextual Evidence Are Complementary
Written records provide direct documentation, but physical and contextual evidence can serve as indirect confirmation:
- Physical Evidence: Archaeological artifacts (e.g., ruins, tools, weapons, environmental markers) can independently corroborate . For example:
- A battle might leave behind fortifications, graves, or weapon fragments.
- A volcanic eruption might leave geological markers like ash layers.
- Contextual Evidence: Broader societal patterns or consequences indirectly support . For example:
- Economic disruption visible in trade routes might indicate a major event like a war or natural disaster.
- Cultural shifts, such as the sudden adoption of specific religious practices, might hint at a historical turning point.
Even in the absence of direct written accounts, these complementary sources can significantly boost P(E|H).
3. Redundancy of Evidence Reduces P(~E|H)
If evidence arises from independent and complementary pathways, the probability of complete absence of evidence becomes much smaller. Mathematically:

P(~E|H) = P(~E_w & ~E_p & ~E_c | H)

Where:
- ~E_w: No evidence from written records.
- ~E_p: No evidence from physical data.
- ~E_c: No evidence from contextual clues.
If these pathways are independent, the probability of joint failure decreases exponentially:

P(~E|H) = P(~E_w|H) × P(~E_p|H) × P(~E_c|H)

For example:
- If the individual silence probabilities are, say, P(~E_w|H) = 0.7, P(~E_p|H) = 0.6, and P(~E_c|H) = 0.5 (illustrative values), then P(~E|H) = 0.7 × 0.6 × 0.5 = 0.21.
This is far smaller than the restrictive P(~E|H) derived from focusing on one pathway alone. Instead of focusing solely on one source, redefine E to include all possible pathways:

E = E_w ∨ E_p ∨ E_c
Where:
- E_w: Evidence from written records.
- E_p: Evidence from physical artifacts.
- E_c: Evidence from contextual patterns.
Practical Simplifications:
If the pathways are roughly independent:

P(E|H) = 1 − (1 − P(E_w|H))(1 − P(E_p|H))(1 − P(E_c|H))

This makes P(E|H) far larger than when relying solely on E_w, reducing P(~E|H).
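The joint-silence product can be sketched directly; the per-pathway silence probabilities below are illustrative:

```python
# Joint silence across independent evidence pathways: P(~E|H) is the product
# of each pathway's individual silence probability. Values are illustrative.

def joint_pathway_silence(silence_probs):
    """P(~E|H) = product of P(~E_i|H) over independent pathways i."""
    total = 1.0
    for p in silence_probs:
        total *= p
    return total

restrictive = 0.7  # written records treated as the only pathway
expanded = joint_pathway_silence([0.7, 0.6, 0.5])  # written, physical, contextual
print(round(expanded, 3))  # 0.21, far below the single-pathway 0.7
```

Three pathways, each individually more likely than not to stay silent, jointly stay silent only about a fifth of the time, so total silence across all of them is genuinely informative.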
4. Likelihood Ratio and the Absence of Evidence
The real test for P(~E|H) is not whether it is high in absolute terms, but how it compares to P(~E|~H), the probability of no evidence under the alternative hypothesis. Expanding P(E|H) makes P(~E|H) smaller and shifts the likelihood ratio:

LR = P(~E|H) / P(~E|~H)

Example:
- If P(~E|H) and P(~E|~H) turn out to be close in value, the absence of evidence is only marginally more consistent with ~H, weakening the argument against H.
By including complementary pathways, you show that P(~E|H) is not overwhelmingly high. This broader, multimodal approach reflects how evidence for historical events is typically evaluated and avoids artificially inflating P(~E|H).
One of my main critiques is that the author is not considering total evidence. Arguments from silence are rarely presented with respect to one historical source. When they are presented as such, it is normally under the assumption that alternative pathways of evidence are also silent. One could argue that this "total evidence" is contained in the prior likelihood ratio. But if that were the case, the argument from silence would demonstrate that the prior likelihood dominates the overall expression. Just think about how absurd the argument would be if it did not take total evidence into consideration. Suppose you have N sources, all confirming H, and a new source N+1 is identified as potential evidence for H; perhaps it is a document written by some author presumed to be in a position to know whether H was true. Suppose conditions 1-3, listed above, lead P(~E|H) to be very close to zero, so that the new source's silence favors ~H. But since the other N sources strongly confirm H, reasonable historians would affirm H, because the total evidence strongly confirms H. No one makes arguments like this when using arguments from silence, so presenting it that way seems like a straw man against the argument from silence. When someone asserts the argument from silence is fallacious, they are almost always neglecting the broader context of discussion in which the argument is presented against a body of total evidence. A bit more about this concept; according to the Stanford Encyclopedia of Philosophy:
In order to be justified in believing some proposition then, it is not enough that that proposition be well-supported by some proper subset of one's total evidence; rather, what is relevant is how well-supported the proposition is by one's total evidence. In insisting that facts about what one is justified in believing supervene on facts about one's evidence, the Evidentialist should be understood as holding that it is one's total evidence that is relevant. Of course, this leaves open questions about what relation one must bear to a piece of evidence E in order for E to count as part of one's total evidence, as well as the related question of what sorts of things are eligible for inclusion among one's total evidence.[6]
As mentioned earlier, what constitutes evidence depends on admissibility criteria and the evidence someone has access to. The requirement of total evidence is a principle in epistemology and philosophy of science which states that we should base our beliefs, probabilities, or inferences on all available and relevant evidence, rather than on a subset of evidence or incomplete information. This principle plays a crucial role in Bayesian reasoning and the interpretation of Bayes factors, as Bayesian methods inherently rely on incorporating and updating beliefs in light of all relevant evidence. Bayesianism views belief updating as a normative model of reasoning. Total evidence aligns with this framework by insisting that all available information be used to evaluate hypotheses, ensuring that Bayesian reasoning is not distorted by partial or biased evidence. How we define the E in P(E|H) matters, because if E is a subset of the total evidence, P(E|H) could be underestimated. This is an important consideration, because Bayesian reasoning does not explicitly guarantee that the problem definition considers whether E is comprehensive; it just specifies an update rule for rational inference. Someone can properly apply Bayesian logic but still arrive at an incorrect probability because of how they defined E. P(E|H) is conditional on the subset of data and may be biased if the sample is not representative of the entire evidence set. This can lead to misleading posterior probabilities unless the missing evidence is either irrelevant or explicitly accounted for later. The quantity P(E|H) considers whatever is defined as E in the analysis. To align with the requirement of total evidence, E should ideally encompass all relevant evidence. If only a sample is used, the analysis may still be valid for that subset, but the results must be interpreted with caution to account for the potential impact of omitted evidence.
In the case of arguments from silence, P(~E|H) should refer to the entire set of expected evidence E whose absence (~E) is observed.
What we count as evidence obviously matters and will influence what is considered total evidence. In historical contexts, I am not sure there are clearly defined admissibility rules like there are in legal contexts. Presumably, historians have procedures for categorizing and grading evidence, as well as rules of thumb for deciding whether something should be considered evidence. It is most certainly a broader set than what was presented earlier by the author of that article. Admissibility criteria help determine what qualifies as E in the computation of P(E|H):
- Relevant Evidence: Admissibility criteria prioritize evidence that directly pertains to the hypothesis H. Irrelevant evidence, even if available, is excluded to prevent noise from distorting P(E|H).
- Reliable Evidence: Evidence must be sufficiently credible or reliable. Unreliable evidence could skew P(E|H), leading to incorrect posterior probabilities.
- Complete Evidence: Ideally, admissibility criteria should align with the requirement of total evidence by ensuring all relevant evidence is considered. Ignoring admissible evidence could result in incomplete likelihood calculations.
- Total Evidence Requirement: The total evidence principle mandates considering all admissible evidence. Evidence that does not meet the admissibility criteria (e.g., irrelevant, unreliable, or misleading data) should not be part of E.
- Selective Evidence Inclusion: If evidence selection is biased or admissibility criteria are overly restrictive, P(E|H) may only reflect a subset of the true evidence. This violates the total evidence principle and can lead to misleading Bayesian inferences.
- Practical Challenges: Defining admissibility criteria can be subjective or context-dependent. In some cases, determining whether evidence is reliable, relevant, or complete may be unclear.
- Balancing Inclusion and Exclusion: While admissibility criteria prevent irrelevant or misleading evidence from distorting P(E|H), overly strict criteria could result in the omission of valid evidence, violating the total evidence principle.
- Uncertainty in Evidence: Admissibility decisions sometimes involve probabilistic judgments. Bayesian reasoning can handle uncertainty in evidence (e.g., using hierarchical models), but this assumes that all admissible evidence has been included.
Exclusionary restrictions appear across several domains:
1. Philosophy of Science and Evidence Evaluation:
   - In scientific reasoning, exclusionary restrictions are used to filter out evidence that is considered irrelevant, unreliable, or biased.
   - For example:
     - Evidence not derived from proper experimental conditions might be excluded.
     - Anecdotal evidence may be restricted in favor of systematic data.
   - These restrictions help ensure the validity and robustness of inferences.
2. Bayesian Reasoning:
   - Exclusionary restrictions in Bayesian reasoning may determine which evidence is included in E.
   - For example:
     - Evidence obtained through unreliable means or conflicting with prior constraints may be excluded.
     - Irrelevant evidence—evidence that has no bearing on the likelihood of a hypothesis H—is excluded to avoid inflating or deflating posterior probabilities.
3. Statistical Modeling:
   - Exclusionary restrictions can apply to variables or datasets in statistical models, such as:
     - Removing outliers or noise from the dataset.
     - Excluding variables that do not significantly contribute to the model or violate assumptions (e.g., multicollinearity in regression).
   - These restrictions ensure the model is parsimonious and interpretable.
4. Legal Contexts:
   - In legal reasoning, exclusionary restrictions often take the form of rules that bar certain types of evidence from being presented in court.
   - For example:
     - Hearsay evidence is often excluded because it is considered unreliable.
     - Evidence obtained unlawfully (e.g., through illegal searches) may be excluded under the exclusionary rule to protect rights and encourage lawful conduct by law enforcement.
5. Ethical and Policy Decisions:
   - Exclusionary restrictions are applied to uphold ethical norms or policy standards. For example:
     - Data obtained through unethical means, such as coercion or exploitation, may be excluded from consideration in decision-making.
     - Certain demographic factors, such as race or gender, may be excluded in hiring or admissions decisions to prevent discrimination.
-
Relevance-Based:
- Evidence or variables that are not relevant to the hypothesis or decision are excluded to avoid distraction or overfitting.
- Example: In Bayesian reasoning, the evidence set E should only include evidence that can differentiate between H and competing hypotheses.
-
Reliability-Based:
- Evidence that is deemed unreliable (e.g., due to measurement error, biased sources, or incomplete data) is excluded.
- Example: Excluding self-reported data when objective measures are available.
-
Legal or Procedural:
- Evidence that violates procedural rules or legal principles is excluded.
- Example: Illegally obtained evidence is inadmissible in many judicial systems.
-
Ethical or Normative:
- Data or evidence obtained through unethical means or in violation of normative standards is excluded.
- Example: Excluding data from studies that violate human rights.
-
Practical or Feasibility-Based:
- Evidence or variables that are too costly, complex, or impractical to include may be excluded.
- Example: Excluding high-dimensional variables in a statistical model to avoid computational challenges.
- Focus and Relevance: Exclusionary restrictions ensure that only pertinent evidence or variables are considered, simplifying analysis and interpretation.
- Reliability and Validity: By filtering out unreliable evidence, these restrictions help maintain the credibility of inferences or decisions.
- Normative Consistency: In ethical or legal contexts, exclusionary restrictions reinforce adherence to moral and procedural principles.
- Pragmatism: They help manage complexity by reducing the scope of evidence or variables to those that are most impactful.
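The restriction categories above (relevance-based, reliability-based, legal or procedural) can be pictured as filters applied to a pool of candidate evidence. The following Python sketch is purely illustrative; the evidence items and their flags are invented:

```python
# Toy pipeline applying exclusionary restrictions to candidate evidence.
# Each restriction is a predicate; only items passing all of them are admitted.
evidence = [
    {"id": "e1", "relevant": True,  "reliable": True,  "admissible": True},
    {"id": "e2", "relevant": False, "reliable": True,  "admissible": True},
    {"id": "e3", "relevant": True,  "reliable": False, "admissible": True},
    {"id": "e4", "relevant": True,  "reliable": True,  "admissible": False},
]

restrictions = [
    lambda e: e["relevant"],    # relevance-based restriction
    lambda e: e["reliable"],    # reliability-based restriction
    lambda e: e["admissible"],  # legal/procedural restriction
]

admitted = [e["id"] for e in evidence
            if all(rule(e) for rule in restrictions)]
print(admitted)  # ['e1']
```

Only the item that survives every filter enters the analysis, which is the sense in which exclusionary restrictions simplify inference.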
There are also challenges to properly employing exclusionary restrictions. In the context of an argument from silence, perhaps someone assumes P(~E|H) is large because they have significantly narrowed what constitutes E in relation to H. Perhaps they have violated the requirement of total evidence by ignoring evidence deemed relevant to H. This can stem from the subjectivity inherent in Bayesian reasoning, leading to information loss. Nevertheless, the argument from silence is a legitimate form of reasoning provided certain conditions are satisfied. It can be assessed probabilistically using the framework above, along with considerations about what counts as evidence. It also depends crucially on how we define the search space over the set of possible pieces of evidence. Consider a parallel argument:
- If my keys were in this room, I would be able to find them
- I cannot find them
- Therefore my keys are not in this room
Perhaps there is evidence that corroborates the hypothesis “the keys are in this room”. You search for this evidence, and the keys directly, but find nothing. You conclude the hypothesis “keys in the room” is false. This can also be seen as an argument from negative evidence:
- Major Premise: If A were true, A would be known to be true.
- Minor Premise: A is not known to be true.
- Conclusion: A is false.
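This schema can be given the probabilistic reading discussed above. The sketch below computes the posterior P(H|~E) from Bayes' theorem; the prior and likelihood numbers are purely illustrative, chosen to mirror the keys example:

```python
def posterior_given_silence(p_h, p_silence_given_h, p_silence_given_not_h):
    """P(H | ~E) via Bayes' theorem, where ~E is 'no evidence found'."""
    p_not_h = 1.0 - p_h
    numerator = p_silence_given_h * p_h
    marginal = numerator + p_silence_given_not_h * p_not_h
    return numerator / marginal

# Keys example: if the keys were in the room, a thorough search would
# almost certainly find them, so P(~E|H) is small and silence is telling.
p = posterior_given_silence(p_h=0.5, p_silence_given_h=0.05,
                            p_silence_given_not_h=0.95)
print(round(p, 3))  # 0.05
```

The strength of the argument lives entirely in the ratio of the two silence likelihoods: when P(~E|H) is genuinely small relative to P(~E|~H), silence is strong evidence of absence; when the two are comparable, it is nearly worthless.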
This pattern of reasoning has been analyzed in computing as a relativized form of deductive reasoning called autoepistemic reasoning. On Moore's view (1985: 273), inferences of the kind "Tweety is a bird. Most birds can fly. Therefore Tweety can fly" can be analyzed by treating the premise "Most birds can fly" as a consistency clause, providing that "the only birds that cannot fly are the ones that are asserted not to fly" (see also McDermott and Doyle 1981). Since Tweety is not asserted to fall within the group of birds that cannot fly, Tweety can fly. Therefore, the conclusion that "Tweety can fly" is not drawn absolutely (that is, it is not an ontological fact that birds fly, and that if something does not fly it is not a bird) but only relative to a theory, or shared knowledge. Such a pattern of reasoning can be formalized as follows (Moore 1985: 275):
- If P1,…, Pn are in T, and P1,…, Pn ⊢ Q, then Q is in T (where “⊢” means ordinary tautological consequence).
- If P is in T, then LP is in T.
- If P is not in T, then ~LP is in T.
The second and third clauses provide that if a proposition is (is not) in the theory, or domain of knowledge, then that proposition is (is not) believed (indicated by the logical operator 'L') to be in the theory.
In computing, this principle has been developed under the name of the Closed World Assumption, which sets forth that "if a ground atom A is not a logical consequence of a program P, then it is possible to infer ~A" (see Reiter 1978). Clark developed this rule into the principle called "Negation as Failure" (1978: 114), stating that "To show that P is false, we do an exhaustive search for a proof of P. If every possible proof fails, ~P is 'inferred'".
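Negation as Failure is easy to sketch in code. Below is a minimal Python illustration of the Tweety example under the Closed World Assumption; the knowledge-base contents are invented for illustration:

```python
# A fact is taken to be false precisely when it is absent from the
# knowledge base: the Closed World Assumption.
knowledge_base = {
    ("bird", "tweety"),
    ("bird", "opus"),
    ("cannot_fly", "opus"),  # only explicitly asserted exceptions
}

def holds(fact):
    return fact in knowledge_base

def can_fly(x):
    # Tweety-style default: a bird flies unless asserted otherwise.
    # The negative conjunct is negation as failure, not classical negation.
    return holds(("bird", x)) and not holds(("cannot_fly", x))

print(can_fly("tweety"))  # True  -- nothing asserts Tweety cannot fly
print(can_fly("opus"))    # False -- the exception is explicitly recorded
```

The conclusion about Tweety is relative to the knowledge base: add the fact `("cannot_fly", "tweety")` and the inference is retracted, exactly as in Moore's analysis.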
Walton provides an example of denying the predicate of no negative effects in the context of a medical substance.
The argument depends on how exhaustive the implicit search stage is. This is also a common form of reasoning about information in databases known to be relatively complete and efficiently maintained. Suppose we search an enterprise information system for some fact but fail to find it. We can reasonably infer that the fact is not the case, given the system's track record of logging such information. Someone could counter that the information was removed, but that action would itself generate evidence: we could inspect a metadata log to identify any changes to the database, which would verify or defeat the inference. This is related to the burden of proof. Someone can assert that information absent from the database is therefore false; an interlocutor could then assert that the information was deleted, which shifts the burden of proof to them. If this burden is not satisfied, we accept the original negative conclusion. Matters are more difficult in historical reasoning about ancient events, because the presupposed set of clearly defined alternatives is rather large. But if we rule out information based on inquiry from established disciplines or other legitimate forms of inquiry, the assertion ~H is very reasonable.
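The database scenario can be sketched the same way: absence licenses a negative conclusion unless the audit trail shows the record was removed. All record keys and log fields below are hypothetical:

```python
# Closed-world inference over a well-maintained store, defeated by
# positive evidence of deletion in the audit log.
records = {"invoice-1001", "invoice-1002"}
audit_log = [
    {"action": "insert", "key": "invoice-1001"},
    {"action": "insert", "key": "invoice-1002"},
    {"action": "insert", "key": "invoice-1003"},
    {"action": "delete", "key": "invoice-1003"},
]

def infer_absent(key):
    """Infer a record's status; silence supports 'never existed' only
    when the audit log offers no defeating explanation."""
    if key in records:
        return "present"
    if any(e["action"] == "delete" and e["key"] == key for e in audit_log):
        return "deleted"        # silence explained; inference blocked
    return "never existed"      # silence supports the negative conclusion

print(infer_absent("invoice-1003"))  # deleted
print(infer_absent("invoice-9999"))  # never existed
```

The audit-log check is the burden-shifting move in code: the deletion claim only blocks the inference when it is positively supported.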
Consider the statement "Absence of evidence is not evidence of absence". If we relate this to "correlation does not equal causation", all it tells us is that the subset (causation) does not equal the superset (correlation); yet correlation is still a requirement of causation. Additional assumptions must be satisfied before concluding causation, and likewise additional assumptions must be satisfied before concluding that absent evidence is indeed evidence of absence. Evidence of absence is possible, provided certain conditions are reasonably satisfied. This is the entire point of belaboring the felicity conditions of the argument from silence above. So we can see, after considering its nuances, that it is indeed a cogent probabilistic argument. It depends crucially on how we define the search space with respect to evidence. Think again of the parallel argument about the keys: you search for the keys, and for any evidence corroborating the hypothesis "the keys are in this room", but find nothing, and so conclude the hypothesis is false.
Back to my main point: why do apologists seem to be the only ones who care about this argument? It should be obvious: many claims asserted in the Bible have no evidence substantiating them, and in many cases the available evidence is mere hearsay. Apologists focus on this argument in order to dismiss its credibility. Earlier I covered a few conditions mentioned by the author for deciding whether the probability of evidence is low. I agree that there are conditions that might prevent a historical author from transcribing some historical fact. However, we cannot merely assume that these negating conditions were present simply because the author did not transcribe the fact. These conditions need to be shown to have been instantiated; if they are not, then that is an argument from ignorance. Nevertheless, apologists will shift the burden of proof, arguing from ignorance about the plausibility of these conditions. Take a condition listed on a famous Catholic apologetics website: the subject under discussion is "embarrassing". I agree this is possible, but if we assume, in the absence of evidence that this condition is satisfied, that the author didn't transcribe the fact because of this embarrassment, that is an argument from ignorance. It is tantamount to saying "I don't know what conditions prevented the author from transcribing the information, but I know something must have, therefore the hypothesis is true". It is possible to construct an inference to the best explanation for why an author wouldn't transcribe something, but this is rife with issues. For starters, these arguments depend heavily on what is considered plausible, which is inherently connected to the assumed worldview of the person advancing the argument; I've discussed this at length in my other blog posts. For example, if you argue that embarrassment is the explanation, based on some unestablished interpretation of a biblical passage, I can reject it because the biblical assumptions are not plausible to me.
We do not share the same background assumptions, so what seems plausible to A might not seem plausible to B. We need data to understand whether these conditions are satisfied.
The inverse is usually assumed by apologists. What I mean is this: suppose some argument from silence concludes that P(~E|~H) is far higher than P(~E|H), and that P(~H) > P(H) is our base rate; many apologists will still hold to H regardless of where the total evidence points. Remember that if the argument is weak, then at best P(~E|~H) = P(~E|H), in which case the silence is uninformative and the posterior simply equals the prior, hardly a license for confident belief in H. This would mean that statements severely lacking historical evidence, such as those asserted in Exodus, would require apologists to be at best agnostic about historical events relevant to their faith. This is obviously unacceptable, and contravenes the statements of faith they're required to sign prior to embarking on their little apologetics journey. They may get around this by asserting that P(H) is a strong prior. However, these "priors" lack the theoretical rigor, evidential adequacy, and consistency needed to be considered strong. So what this really amounts to is prior manipulation: the prior probability (the initial belief about the likelihood of an event or proposition before considering evidence) is chosen in a way that unduly favors a particular outcome, often arbitrarily or with motivated bias rather than on objective or reasonable grounds. They do this by assigning an unjustifiably high probability to the prior, such that the evidence (likelihood) has little influence on the posterior, leading to a skewed conclusion. In other words, P(H) strongly outweighs any form of present or absent evidence, such that P(~H) becomes implausible by definition.
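Prior manipulation is easy to demonstrate numerically. With the same (illustrative) silence likelihoods, a modest prior lets the missing evidence count heavily against H, while a near-certain prior renders it almost inert:

```python
def posterior(prior_h, lik_h, lik_not_h):
    """P(H | ~E) for a given prior and the two silence likelihoods."""
    return (lik_h * prior_h) / (lik_h * prior_h + lik_not_h * (1 - prior_h))

# Same observed silence, two different priors (numbers are illustrative):
# a modest prior lets the missing evidence matter...
print(round(posterior(0.5, 0.05, 0.95), 3))    # 0.05
# ...while a manipulated, near-certain prior swamps it entirely.
print(round(posterior(0.999, 0.05, 0.95), 3))  # 0.981
```

The likelihood ratio is identical in both runs; only the prior changed. This is the sense in which an unjustifiably strong P(H) makes P(~H) implausible by definition rather than by evidence.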
I am well aware of the many selection biases that plague non-experimental fields of study such as history. Something like survivorship bias could be systematically skewing the historical evidence, rendering P(H) extremely low. However, if multiple independent measures render P(H) low, and there is no evidence of some selection mechanism, it is reasonable to infer ~H. Not allowing this form of inference leads to some highly counterintuitive results, such as the example I gave above about a cheating spouse. On the topic of selection bias, the reason we see apologetics websites listed in the Google search for "argument from silence" sheds light on the mechanism that generated those search results. Why is it that, in the result set, our sample is systematically skewed towards these particular results and not towards anything from professional academics? There is a strong motivational force to dismiss this argument as ipso facto fallacious, or to quickly conclude that a particular instance of it is fallacious, owing to the motivated reasoning inherent in apologetics. Earlier I called this the "fallacy fallacy", but that's not quite right. Instead, it's a form of premature fallacy attribution, motivated by defensive reasoning.
The act of misidentifying fallacies prematurely is related to motivated reasoning: the tendency to process information in a way that aligns with one's preexisting beliefs, emotions, or desires. When someone engages in motivated reasoning, they might misidentify or over-interpret arguments as fallacious in order to dismiss them more easily, often without fully engaging with the substance of the argument. A person might be motivated to label an argument as fallacious because it contradicts their beliefs. For instance, if someone strongly opposes a viewpoint, they may quickly declare the argument supporting it a "strawman" or "ad hominem", even if it isn't, to justify disregarding it. Misidentifying a fallacy prematurely can also be a shortcut to avoid the cognitive effort of critically analyzing the argument; motivated reasoning makes this shortcut appealing because it reinforces existing attitudes without the need for deeper reflection. Consider this scenario, the causes of the fall of the Roman Empire:
Argument:
- Historian A argues, "The Roman Empire fell because of the over-expansion of its borders, which stretched resources too thin and made the empire vulnerable to outside invasions."
- Historian B accuses Historian A of committing a post hoc fallacy (assuming that because over-expansion preceded the fall, it caused the fall) and dismisses the argument entirely.
-
Motivation to Defend a Preexisting Belief:
Historian B might be motivated by their belief in another explanation for Rome’s fall, such as internal political corruption or economic collapse. Instead of engaging with the argument about over-expansion, they dismiss it outright by prematurely accusing Historian A of a post hoc fallacy.
Cognitive Shortcut:
Declaring "post hoc fallacy!" allows Historian B to sidestep deeper engagement with the evidence (e.g., examining whether over-expansion indeed led to overtaxation, logistical issues, or weakened defense strategies).
Potential Error:
In reality, over-expansion might have been one of several contributing factors to Rome’s fall. While Historian A's argument may not explain the entire phenomenon, labeling it as a fallacy prematurely could result in the loss of valuable insights about the complex interplay of causes.
Motivated reasoning often emerges in historical debates because the stakes can be ideological. For instance:
- Defenders of Western civilization might be motivated to downplay "internal decay" explanations, as they could be seen as undermining the perceived greatness of Rome.
- Others might emphasize external invasions to draw parallels to modern political concerns, such as immigration or military defense.
In such cases, accusations of fallacies (like "post hoc" or "slippery slope") may be wielded as rhetorical tools to dismiss opposing views rather than engage with them critically. To avoid misidentifying fallacies prematurely, historical reasoning requires:
- A nuanced understanding of fallacies and when they genuinely apply.
- An openness to complex, multifaceted explanations that don't fit neatly into one narrative.
- Self-awareness about motivated reasoning, especially in ideologically charged debates.
What I am suggesting is that the sample of results from the Google query can be explained by this phenomenon, and not by some issue overlooked by professional historians. Apologists recognize that the severe lack of data substantiating even mundane biblical claims is problematic if they are seeking to establish an evidential grounding for the Bible. These search results indicate not a problem with this form of reasoning, but a defensive strategy. Even the way apologists represent the arguments presented by historians is overtly fallacious and unfaithful. Consider this example from the Catholic apologetics website: "The Exodus never happened. There’s no evidence that it did". Obviously, something like that would never get published; it is, at best, what a historian might tell you if you caught them on an elevator and had ten seconds of their time. It overlooks the depth and breadth of the reasoning behind their conclusion. Prematurely attributing it as fallacious reveals more about the structures motivating such an assertion than about any lack of rigor behind the argument itself.
Among apologists, it is common for someone to add ad hoc explanations to H when the likelihood P(~E|H) (the probability of observing the absence of evidence if H were true) is very low. This typically occurs because the low value of P(~E|H) creates tension for an agent committed to H, as the lack of expected evidence undermines the hypothesis. Motivating factors for adding ad hoc explanations include:
-
Cognitive Dissonance: When someone is deeply committed to a hypothesis, the absence of expected evidence creates cognitive dissonance—the psychological discomfort caused by holding contradictory beliefs or evidence. To reduce this dissonance, they may introduce ad hoc assumptions that reconcile the absence of evidence with their commitment to H. Example: A scientist might assume that an experiment failed not because their theory is flawed, but because the conditions were somehow atypical or the measurement tools were inadequate.
-
Emotional Investment: People may have emotional attachments to H because it aligns with their personal beliefs, identity, or values. For example, a historical theory that supports one's cultural heritage might lead someone to explain away contradictory evidence (e.g., "The records were likely lost or destroyed").
-
Confirmation Bias: The tendency to seek, interpret, or create information in a way that confirms one’s pre-existing beliefs can motivate ad hoc reasoning. In this case, when ~E appears, the agent may construct unverifiable explanations to preserve their belief in H. Example: A believer in extraterrestrial visitation might argue that the absence of credible UFO evidence is due to a government conspiracy suppressing the truth.
-
Epistemic Inertia: People are often resistant to revising or discarding long-held beliefs because doing so requires significant effort and acknowledgment of past errors. Adding an ad hoc explanation is a way to "patch" a theory without the more disruptive process of abandoning or revising it.
-
Social Pressures: Commitment to H may be reinforced by social or institutional pressures, especially when H is central to a group’s identity, ideology, or goals. In such cases, adding ad hoc assumptions may be motivated by a desire to maintain credibility, avoid alienation, or protect group cohesion.
-
Theory Tenacity: In science and philosophy, some theories are considered too important or elegant to abandon lightly. Proponents might justify temporary ad hoc fixes by arguing that H has a strong track record and will ultimately be vindicated. Thomas Kuhn called this "normal science" in his analysis of scientific paradigms—scientists work to reconcile anomalies within a dominant paradigm rather than discarding it prematurely. Example: In the Ptolemaic model of astronomy, epicycles were added to account for anomalous planetary motion because the geocentric paradigm was deeply entrenched. It is important to note, however, that since theism is not well defined, it does not operate as a scientific theory.
-
A Priori Commitment: Sometimes, an agent's commitment to H is based on non-empirical factors, such as religious or metaphysical beliefs, that are insulated from empirical scrutiny. In such cases, ad hoc explanations are used to harmonize H with contradictory evidence, as abandoning H might threaten a broader worldview. Example: A creationist might explain the absence of certain fossil evidence by invoking unverifiable claims like a "testing" or "deceptive" design by a deity.
-
Conspiracy Theories: Claiming that evidence is intentionally suppressed or hidden (e.g., "The lack of documents is due to a cover-up").
-
Hypothetical Entities: Postulating unobservable factors to explain the absence of evidence (e.g., "An unknown mechanism prevents us from detecting X").
-
Unfalsifiable Assumptions: Introducing assumptions that cannot be tested independently (e.g., "We haven’t found evidence yet, but we will eventually").
-
Shifting Goalposts: Adjusting criteria for what counts as evidence so that the absence of evidence no longer appears problematic.
The tendency to add ad hoc assumptions is often a symptom of over-committing to a hypothesis. While it can sometimes be reasonable (e.g., temporarily preserving a theory with strong prior success), it risks undermining the hypothesis’s explanatory power, parsimony, and falsifiability. Philosophers and scientists emphasize the importance of letting evidence guide beliefs, rather than twisting explanations to fit preconceptions. As Karl Popper warned, too much reliance on ad hoc reasoning can render a hypothesis unscientific, as it becomes immune to empirical refutation.
When someone inserts ad hoc assumptions (e.g., h1,…,h_n) into a hypothesis H, the burden of proof typically shifts to them to demonstrate that these auxiliary assumptions are independently plausible and supported by evidence. This is because the auxiliary assumptions are being introduced to explain away what would otherwise undermine H, and without justification they risk reducing H's credibility and falsifiability. For example, if someone argues that P(~E|H) is actually high because "evidence was suppressed", it is now their burden to show that evidence was probably suppressed, by appealing to evidence that positively affirms the auxiliary assumption.
- Shift in the Explanation: When someone asserts that P(~E|H) is high because of an auxiliary assumption (e.g., "evidence was suppressed"), they are effectively introducing a new component to the explanation that needs to be justified independently. Without supporting evidence for the auxiliary assumption, the explanation becomes speculative and untestable.
-
Avoiding Arbitrary Complexity: According to principles like Ockham’s Razor, we should avoid introducing unnecessary assumptions unless they are independently justified. If the auxiliary assumption cannot be substantiated, it is an arbitrary addition and risks making H overly complex and less credible.
-
Maintaining Epistemic Accountability: Scientific and philosophical discourse relies on participants being accountable for claims they introduce. If someone adds h1,…,h_n to H, they take on the burden to provide evidence or reasoning that demonstrates these assumptions are likely true or at least plausible.
- Provide positive evidence for the claim that suppression occurred.
- Show that this suppression is consistent with what is observed.
- Establish that suppression is a plausible and sufficient explanation for the lack of evidence.
Without these steps, the explanation becomes circular or unfalsifiable: the lack of evidence ~E is explained by suppression, and the suppression claim itself is justified by the lack of evidence. This undermines the explanatory value of the hypothesis. In practice, someone introducing auxiliary assumptions will need to provide evidence for them. For example, to argue that evidence was suppressed, one might point to leaked documents showing suppression, testimonies from credible sources, or a pattern of behavior by relevant agents that supports suppression. If the auxiliary assumptions (e.g., suppression) cannot be tested independently, they weaken the hypothesis because they reduce its falsifiability. Even if direct evidence for suppression is unavailable, an agent must at least argue that suppression is more probable than alternative explanations for ~E, based on background knowledge and reasoning. Introducing unverified auxiliary assumptions reduces falsifiability by shielding H from disconfirmation; to preserve H's integrity, these assumptions must be justified. By adding auxiliary assumptions, the agent shifts the debate from H alone to the conjunction of H with h1,…,h_n, which must now be defended as a whole.
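The cost of such patches can be shown with one line of probability: the conjunction of H with an auxiliary assumption can never be more probable than H alone, so every ad hoc addition is purchased at the price of its own plausibility. The numbers below are illustrative, not estimates:

```python
# Ockham penalty for ad hoc patches: P(H and A) <= P(H) always holds,
# so the patched hypothesis starts out strictly less probable unless
# the patch A is certain.
p_h = 0.3             # prior in the bare hypothesis H (illustrative)
p_suppression = 0.1   # independently assessed plausibility of the patch A

p_patched = p_h * p_suppression  # P(H and A), assuming independence
print(p_patched <= p_h)          # True: a conjunction never beats a conjunct
print(round(p_patched, 3))       # 0.03
```

This is why the burden falls on whoever adds h1,…,h_n: each unsupported patch multiplies another factor below one into the prior of the total package being defended.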
In the case of arguing from silence, the modus operandi of an apologist is to flood the internet with content dismissing the credibility of the argument, introduce ridiculous auxiliary assumptions, and shift the burden of proof. This is motivated by strong a priori commitments to biblical historicity and inerrancy, stemming from institutional and ideological forces, and it explains the strongly biased search results. If the biblical stories lack evidential support, they begin to appear as mere myths, dethroned from their cultural status, with nothing distinguishing them from the plethora of alternative (often contradictory) myths we have deemed useless, mere entertainment, or literature. Anyway, that's all I really have to say for now.