Principle of Charity and Some Obvious Ways a Person Could be Stupid

Recently, I was thinking about philosophical principles in the context of argumentation and communication. Very loosely, a principle is a guide for behavior or evaluation. Think about the principle of parsimony when comparing two explanatory models of a natural phenomenon: all else being equal (both have sufficient explanatory power but their structures differ), the simpler one should be selected. This seems like a reasonable rule to follow; if one explanation requires additional assumptions or a more complex explanatory structure, we gravitate towards the simpler one. These principles are very rough heuristics that are not guaranteed to lead you to the correct answer. Nevertheless, we use them all of the time without even explicitly acknowledging them.

Some principles seem to be innate. We typically do not have to think about the Cooperative Principle when engaging in communication with someone:

Accordingly, the cooperative principle is divided into Grice's four maxims of conversation, called the Gricean maxims—quantity, quality, relation, and manner. These four maxims describe specific rational principles observed by people who follow the cooperative principle in pursuit of effective communication.[2] Applying the Gricean maxims is a way to explain the link between utterances and what is understood from them.

Though phrased as a prescriptive command, the principle is intended as a description of how people normally behave in conversation. Lesley Jeffries and Daniel McIntyre (2010) describe Grice's maxims as "encapsulating the assumptions that we prototypically hold when we engage in conversation."[3] The assumption that the maxims will be followed helps to interpret utterances that seem to flout them on a surface level; such flouting often signals unspoken implicatures that add to the meaning of the utterance.

The principles guiding our common sense do not even need to be explicitly stated. We intuitively understand when someone violates one of these principles. 

  1. The maxim of quantity, where one tries to be as informative as one possibly can, and gives as much information as is needed, and no more.

  2. The maxim of quality, where one tries to be truthful, and does not give information that is false or that is not supported by evidence.

  3. The maxim of relation, where one tries to be relevant, and says things that are pertinent to the discussion.

  4. The maxim of manner, when one tries to be as clear, as brief, and as orderly as one can in what one says, and where one avoids obscurity and ambiguity.

    As the maxims stand, there may be an overlap, as regards the length of what one says, between the maxims of quantity and manner; this overlap can be explained (partially if not entirely) by thinking of the maxim of quantity (artificial though this approach may be) in terms of units of information. In other words, if the listener needs, let us say, five units of information from the speaker, but gets less, or more than the expected number, then the speaker is breaking the maxim of quantity. However, if the speaker gives the five required units of information, but is either too curt or long-winded in conveying them to the listener, then the maxim of manner is broken. The dividing line however, may be rather thin or unclear, and there are times when we may say that both the maxims of quantity and manner are broken by the same factors.

These seem so obvious that they don't really need explanation. Consider the maxim of relation: we intuitively understand that communication entails saying relevant things. If something appears irrelevant, there needs to be a chain of argumentation to establish its relevance; otherwise communication breaks down. The key is that these principles do not need to be explicitly practiced or thought about during communication, in contrast to the principle of parsimony. We just follow them, and it seems weird when any of them are violated.

There are many guiding principles that seem to run counter to our initial intuitive judgements. In other words, the principles run contrary to our initial predispositions, or tendency to act, in a given situation. For example, take the Precautionary Principle:

The principle is often used by policy makers in situations where there is the possibility of harm from making a certain decision (e.g. taking a particular course of action) and conclusive evidence is not yet available. For example, a government may decide to limit or restrict the widespread release of a medicine or new technology until it has been thoroughly tested. The principle acknowledges that while the progress of science and technology has often brought great benefit to humanity, it has also contributed to the creation of new threats and risks. It implies that there is a social responsibility to protect the public from exposure to such harm, when scientific investigation has found a plausible risk. These protections should be relaxed only if further scientific findings emerge that provide sound evidence that no harm will result.

Doesn't it seem that we (collectively) crave novelty and changes to the status quo? Should your company adopt a new technology pitched by the AWS salesman, who guarantees the tech will reduce costs by some extravagant number? Should you elect the candidate who proposes radical solutions to a problem but seems to ignore or downplay every analysis of the second-order effects? Or the candidate who wants everything to go back to the good-ole days, and is therefore willing to disrupt the current configuration of the system in favor of a rosy past? This principle seems much more difficult to follow when the passions are aroused, and requires someone to hold back from their initial instinctual response. It is typically very hard for people to see beyond the first-order effects of a situation, let alone second-order effects, irreversibility, and feedback loops; this requires systems thinking, which is notoriously undertaught. This principle is a guide to prudential decision making in the face of uncertainty:

Many definitions of the precautionary principle exist: Precaution may be defined as "caution in advance", "caution practiced in the context of uncertainty", or informed prudence. Two ideas lie at the core of the principle:[15]: 34 

  • an expression of a need by decision-makers to anticipate harm before it occurs. Within this element lies an implicit reversal of the onus of proof: under the precautionary principle it is the responsibility of an activity-proponent to establish that the proposed activity will not (or is very unlikely to) result in significant harm.
  • the concept of proportionality of the risk and the cost and feasibility of a proposed action.

A very similar principle is Chesterton's fence. The idea is simple (as elaborated by the parable): if you see something you want to change (a fence), but have no idea about the reasons behind constructing the fence, you run the risk of unleashing unknown consequences and making things worse. This is a very plausible principle we use in Software Engineering; if you see some code but have no idea why it is there or how it works, you should probably leave it alone. It is something like the mirror image of the precautionary principle, applied to removal rather than introduction: the "fence" might be holding back the floodgates, so we should probably understand why we needed it before we "dismantle" it.

As a rule of thumb, these principles seem useful to explicitly practice. However, they seem to depend on the decision maker's risk aversion and the perceived level of the Burden of Proof, so they are not without their critics. It seems like we can construct seemingly contradictory principles, such as the Proactionary Principle and the Postcautionary Principle, which were developed in response to the precautionary principle. Whatever principle you favor must be a function of your degree of risk aversion, the anticipated costs (costs for whom?), the necessity to act (the perceived threat of waiting), and the number of unknown unknowns.

Principles can be overridden at any time, since they are guides for behavior and not strict rules limiting behavior. But some principles seem to be more fundamental than others, like Grice's Maxims. Some principles are so ingrained in our collective cognition that we simply take them for granted. There are some principles that seem to be ignored, even though their objective ends are desirable at surface level. Consider the Principle of Humanity; it seems that an ethical principle guided by empathy will encourage collaboration, reduce tribalism, and prevent demonization. This principle seems to be directed at a desirable end, but in reality do we ever see people practicing it? We seem to be predisposed to favor those who believe what we believe, think how we think, and act how we think someone should act. The principle of humanity immediately becomes an ideal fiction once we put it into practice; but we should nevertheless strive for it. So this principle differs from the others in that we can typically all agree with it (unlike the precautionary principle) and yet it is hard to remain committed to it. Closely related are the Rochdale Principles, a set of ideals for the operation of cooperatives. How can we cooperate to achieve common goals? The Rochdale principles set out general heuristics for achieving these ends. So it seems like principles can be constructed in accordance with certain goals. Principles are, fundamentally, goal-oriented.

Principles are all around us. They determine, undermine, support, and justify the behavior of individuals, groups, institutions, collectives, and organizations. Consider the Calculus of Negligence in United States Tort Law; what are the rules of thumb we use to determine if someone is liable for negligence? Some principles are more than just the cliché ones; they are subtle and guide our institutions of Common Law, Economic Trade (consider the principle of Comparative Advantage and how it has shaped the world), and Politics (consider Machiavellian Principles). Think about the notion of Proximate Cause and its relation to "The Risk Rule". If something bad is foreseeable, and you are in power and do nothing to prevent it, are you liable? Consider a company that manufactures some food with foreseeable side effects. The information is not disclosed to the parties consuming the product, so they can't properly assess the risk. People become sick and die. Are you, as the manufacturer, liable? Or is this overridden by Caveat Emptor (Buyer Beware)? If you are a governing institution guided by principles such as protecting your citizens, should you override buyer beware in favor of enforcing information and quality standards? Does that conflict with the principle of Free Enterprise?

You see, the point is that we need to think and reason about these principles, how they interact, how they conflict, and how to resolve any conflict between them. If principles can be oriented towards achieving goals such as cooperative resolution, is there any way we can generalize this so our ends are harmonized? What would be the most fundamental principle we can apply to ensure that we have the ability to adapt and revise our principles in light of contradictions? In light of all of this, I was rethinking the Principle of Charity:

In philosophy and rhetoric, the principle of charity or charitable interpretation requires interpreting a speaker's statements in the most rational way possible and, in the case of any argument, considering its best, strongest possible interpretation.[1] In its narrowest sense, the goal of this methodological principle is to avoid attributing irrationality, logical fallacies, or falsehoods to the others' statements, when a coherent, rational interpretation of the statements is available. According to Simon Blackburn,[2] "it constrains the interpreter to maximize the truth or rationality in the subject's sayings."

Think about the benefits of applying this principle:

  1. You can better understand the position of others if you are engaging with the strongest forms of their arguments.
  2. Implementing the principle means you will enhance the quality of your own arguments. You come to a better understanding of your own position by putting it in conflict with others. 
  3. You can revise your position and strengthen it if you are engaging with strong forms of argument.
  4. You are actually engaging with someone, rather than constructing a strawman under the pretense of "engaging" when you are really just trying to win. So "steelmanning" an opponent's argument enhances communication between opposing parties. 
  5. If you are honest with how you present the arguments of others, they will be honest with how they present yours, and more willing to engage with what you have to say. 
  6. As a pedagogical tool, it can help you learn the alternative interpretations.
  7. As a matter of fairness, it becomes a moral requirement in an open society, encouraging the exchange of ideas.
  8. Our goal is to find the truth so we should have good arguments and evidence. Constructing a Steelman of your opponent forces you to strengthen your position. So potentially, we are coming closer to truth.
  9. If I am wrong, it puts me in the best possible position to alter my views in accordance with what is right.
This is very similar to Rapoport's Rules (a subset of Rogerian Debate):
  1. Listening and making the other feel understood
  2. Finding some merit in the other's position
  3. Increasing perceived similarity

It is a kind of epistemic humility which traces all the way back to Socrates, and is one of the pillars of critical thinking. See here for a detailed account of the intellectual virtues:


Let's think about the principle in terms of a closely related idea. Consider the notion of Freedom of Speech. I would consider myself to be somewhat of a Free Speech radical akin to Noam Chomsky. The state should not have the right to set the guidelines for how we determine truth, and punish those who deviate from it. When people assert ridiculous claims, allowing them to do so in accordance with their Civil Liberties does not entail accepting the truth of their propositions. Some may say that "enabling hate violence with free speech is equivalent to doing the violence itself". This type of "with us or against us" thinking absolutely pollutes political discourse. It is a type of false dilemma that forces people into accepting authoritarian regimes "for the greater good". If you are for compelled speech or the selective enforcement of speech rights, you simply cannot believe in Free Speech. Unfortunately, it is a binary choice; deviation from it implies you have selected the alternative, that YOUR speech should be free but the people you don't like should not have the right. If you claim to believe in the universality of these fundamental rights while endorsing selective enforcement, you are a hypocrite. Consider all of the instances in which suppression of speech prolonged human suffering; the Holodomor, for example.

I am not sure why Free Speech has become "a conservative thing" anyway. Free Speech is an Enlightenment principle deriving from classical liberalism, notably in response to theocracy, from writers such as John Locke, David Hume, and John Stuart Mill. These were very radical views at the time, and still are. Rejection of Free Speech has been the norm for all of human history. Selective application of "Free Speech" has been the typical condition for humans across space and time, in all civilizations. This has never been a conservative position. It is a tactic of the illiberal left (from people such as Herbert Marcuse) to tar-and-feather issues of Free Speech as a "conservative obfuscation to allow hatred", so that anyone in favor of the principle is now associated with this smear. But as I mentioned above, Free Speech is a universal principle; and if you know anything about Noam Chomsky, you know that he is radically on the Left of the political aisle and an outspoken critic of free speech suppression on both sides. See this discussion with Chomsky and French intellectuals on the topic. A lot of these discussions become clouded by arguments over whether "allowing free speech means defending the hate speech spewed by bigots". It is simple; a critical thinker has the ability to discern distinctions among concepts, entities, events, and propositions. Failing to see that defending someone's right to free speech is not defending the proposition they are expressing is a shortcoming of critical thinking. Equating the two means a failure to see the universality of civil rights, even when you disagree with your opponent. Failure to discern, in other words, reduces civil liberties to a matter of preference, rendering them arbitrary. This is textbook authoritarianism and has been practiced on both sides of the political spectrum.

I would also note that accepting the principle of charity implies accepting free speech as a fundamental prerequisite. Consider Mill's three arguments for Free Speech. These are very sound arguments, and in my opinion they have yet to be rebutted.


Maybe we should apply the principle of precaution when considering what sort of speech becomes dangerous? Very often, "dangerous speech" becomes closely related to "I am offended"; how do we distinguish the two? Key to Free Speech is the notion of the Harm Principle.

The object of this Essay is to assert one very simple principle, as entitled to govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be physical force in the form of legal penalties, or the moral coercion of public opinion. That principle is, that the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right... The only part of the conduct of anyone, for which he is amenable to society, is that which concerns others. In the part which merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign. -Mill

So now the question becomes "At what point does speech create unsafe conditions for others?". Consider "shouting 'Fire' in a crowded theater"; this is an obvious instantiation of the harm principle. This could also be why you can't say "Bomb" on an airplane. I would imagine that politicians should avoid inciting rebellions among their constituents to seize power. What about scenarios where "Free Speech" is used to create hatred towards a group of people? My solution to this is consistent with my belief in the principle: allow them to speak, but destroy their arguments and show how ridiculous they are. However, I notice the limitations in this. Consider the Satanic Panic or the instances of hate speech on 4chan leading to domestic terror. As stated above, we know very well that many people struggle with the Principle of Humanity; we consistently demonize the groups we dislike, and this can escalate to outright violence. But I am not sure handing over power to a bureaucracy will solve our problems; institutionalizing compelled and selective speech and granting the state (or any other large body) unlimited power to enforce it has historically led to abuses of power. The Harm Principle is nevertheless a decent supporting principle to help us discern boundary conditions for free speech.

Many people will cite the Paradox of Tolerance in defense of intolerance towards disliked opinions (dismissing the value of free speech); the idea of taking "tolerance for hate speech" to its logical conclusion (The Open Society and Its Enemies):

Less well known [than other paradoxes] is the paradox of tolerance: Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. 

Many people will cite the passage above, but leave out the rest of what Popper said:

 In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law and we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal.

While acknowledging the fact that unlimited tolerance leads to paradoxes, he does not suggest handing over power to the state. He is laying out the conditions in which, if the interlocutor begins to use violence, we can use repressive tactics; without repressing it, their intolerance degrades an open society from the inside out, making it susceptible to authoritarianism. As with many principles, there are conditions of applicability. Popper is describing the conditions under which we are allowed to supersede the principle in order to defend the notion of free speech itself. The problem is that modern advocates of Free Speech Repression are claiming that even stating an "intolerant" philosophy is equivalent to enacting violence; this is not what Popper had in mind. I put "intolerant" in quotes because there is no shared definition as to what constitutes intolerance anymore; literally stating that "I think Free Speech is a good thing" is now characterized as intolerance (consider someone like Judith Butler or another leftist type who claims "Language is Violence"). The problem is deeper; think of books such as Dialectic of Enlightenment, which claim that liberal Enlightenment principles are themselves totalitarian. What an amazing bait-and-switch; some intellectuals have managed to convince people that the tools and principles of liberation are actually totalitarian. We need to realize that it is this illiberalism that is staunchly undemocratic and authoritarian, and it can manifest on either side of the political spectrum in a variety of forms.

I bring up Free Speech not because I intend to make it the topic of this post. In the context of thinking about the Principle of Charity, I began thinking of Free Speech and wanted to relate the two. My question became: just as there are obvious cases of violence and intolerance regarding Free Speech, are there equivalent issues with the Principle of Charity? As critical thinkers, can we see any obvious drawbacks of applying this principle indiscriminately? Does it constrain us in any way from doing serious and rigorous investigation? Are there edge cases where application of the principle becomes difficult? What are the instances in which we should override the principle, if ever? What are our objectives with charitability? Does charity lead you to the truth? Are there limitations? Does it lead us to the correct interpretation of whoever we are engaging with?

The title of the blog, "Some obvious ways a person can be stupid", was created in the context of me thinking more critically about my deeply held principle of charity. Sometimes people can hold outright stupid positions; so stupid that applying a charitable interpretation only wastes time. I immediately think of this Feynman lecture on the scientific method (at 9:15), in which he poses a situation where he is attempting to guess the combination of a lock. Some clown walks up, without a clue, and says try "10-20-30", without any background context or prerequisite knowledge. The principle seems to be inapplicable in this situation, implying that there are conditions for the application of the rule. In the lecture Feynman notes "What if you know the middle number is 32, or that it is a 5 digit combination?"; it is this sort of immediately dismissible recommendation that can be ignored when some "Joe" gives their opinion. In the case of combinatorial problem solving, it seems obvious; but how about in the case of rigorous debate or philosophical analysis where answers are non-demonstrable? You can imagine a rebuttal that seems immediately ridiculous but, after further consideration, turns out to be reasonable; an argument might seem implausible on the surface while being very sensible upon reflection.

Before we think about plausibility, let's just think about how some positions can be downright absurd. The principle of charity is a sort of skepticism against initial or face-value interpretations of an argument, to the benefit of the person delivering the argument. If someone claims "XYZ" but you have some reason to probe into what they are saying a little deeper, it might reveal they meant "XYZA". But what if statements are just factually incorrect? Consider Aristotle: he held some demonstrably false views about nature. Some of them were ridiculous. One of the scary things is that some modern philosophers hold the position that modern science needs to incorporate Aristotelian methods into its frameworks. This, too, is plainly absurd. How can I even begin to think about confronting these arguments in a charitable manner? So suppose you are reading Aristotle and you come across some of these demonstrably false assertions on which he bases his philosophy. Do you want to waste your time trying to assume he meant something else? Perhaps you can make some assumptions about the person delivering the argument. Aristotle did not have access to the evidence base we have accumulated since the scientific revolution, so of course his arguments will just flat out be incorrect in many instances. This is also applicable to modern orators; if someone is making a claim but we know they don't have the relevant background knowledge, do we need to go through the process of constructing a charitable interpretation?

What if the person is not intending to construct a "proof" in a systematic way but is instead speaking metaphorically with the use of parables? If I apply the principle of non-contradiction, would that even make sense? These sorts of metaphors are used to persuade and point to a larger governing principle. They are meant to provoke your thinking and are very open ended. Suppose I say "He who truly knows lives in the highest and lowest simultaneously"; this obviously makes no sense from an argumentative perspective, since it is contradictory, but maybe I am trying to demonstrate the point that knowledge is held by someone who "traverses all perspectives", so to speak. These things are supposed to compel us to action, or speak to a broader point; they are not "proofs" and do not rely on "evidence", so in that sense they are not open to criticism or understanding within an analytical framework. They are, however, trying to assert a more nebulous notion of "ultimate" truth. The Principle of Charity seems to be inapplicable in this instance. A refutation strategy might be to construct an alternative metaphor with a different meaning, or to point to counterexamples, since these things are typically rooted in the experience of a linguistic community. But I am not "charitably interpreting" premises in an argument, because there is no argument; there are no truth conditions. So understanding the genre of communication is important when considering the applicability of charity. If someone is performing Epideictic Oratory, do I need to apply charity?

So it seems that there are at least three instances in which we should think twice about how we apply the principle of charity: recommendations deriving from a lack of context, historical figures making factual assertions, and metaphorical (hyperbolic) monologues. If you think about how we analyze arguments, there is a sort of filtering process that is applied to the structure of the argument. This takes the form of critical questioning. There are many ways our arguments can be absurdly incorrect, and we have done a good job at compiling standards by which we assess arguments. In addition, researchers have documented a number of cognitive biases that skew our thinking patterns, steer us away from truth, and systematically bias our information processing. When we look at an argument, what if it "obviously" fails on a few of these? The principle of charity enjoins you to take a second look; maybe it's not fallacious or subject to bias. Or maybe it is, but maybe you can still extract a useful argument if you reinterpret it without the errors. But what if it is "damaged beyond repair", so to speak? What if it is so fundamentally plagued by obvious issues that the whole thing is unrepairable? Think back to the images I posted above; what if it fails the relevancy criteria? I am sure you can go through a process of constructing a sub-argument that will make it somewhat relevant, but at what point does the cost outweigh the benefits? At what point can we simply take what someone says at face value?

The question becomes: should I indiscriminately apply a rigorous selective process to all arguments, or be universally charitable? Both extremes seem to have obvious flaws. What if someone says something true, but it fails the selection procedure and I do not reconstruct the argument or go through a process of reinterpretation? It seems like I may miss something; a false negative. But how do I discern those arguments which are worthy of such a laborious process, and how much time and resources should I spend reconstructing a charitable interpretation once I have determined its worth? Like all principles, the principle of charity is yet another rule that guides our behavior; but it is an imperfect heuristic, a rule of thumb. We can expect it to fail in either direction on some occasions, even if we are very diligent with our application. Perhaps you have a purpose for applying the principle in one situation or withholding it in another. Consider someone you suspect to be a Nazi; why would you want to reconstruct their arguments in the best possible way? Selecting the best interpretation of Nazism seems to be bolstering Nazism. This holds true for any fundamentalist ideology that has ulterior motives behind its argumentation. They appear to uphold the image of an orator relentlessly and passionately determined to find the truth, while in reality they perceive argumentation as a battle. They provide a Gish Gallop of arguments which are all fundamentally weak, to overwhelm their interlocutor. Or perhaps, in a public debate forum, they seem concerned with universal principles of human rights; but on their personal website they write articles eerily similar to the Final Solution in well-crafted prose. Should I really try and sympathize with that? Think of something very simple like the Burden of Proof; if someone just asserts something they think is true, should I go through the laborious process of constructing their argument for them? So it seems obvious that the principle has limits, and it's not always clear where and when to apply it. Let's refer to the image below:


It is not just the interlocutor who is susceptible to these issues; we are also susceptible when deciding how to apply the principle of charity. Consider the selfishness or groupishness distortion; do we give charity to those outside of our group? When should we, even if we despise them? Maybe there is a principle of anti-charity that we should occasionally consider. Think about an instance in which we are just fundamentally wrong about a subject but nevertheless have the best argument in its favor. If I deliberately construct a radical interpretation of an argument (radicalism is simply relative to the current status quo; consider how Mill and Hume were radicals of their day), could I potentially be opening up to new unforeseen possibilities that I was previously blind to? Remember, having the "best argument" does not mean you are necessarily in accordance with the truth. Consider the rationalist approaches to medicine that were dominant in the Middle Ages and early Renaissance. It is easy to dismiss the practice as fundamentally flawed in hindsight, but placed in context, we can see that the understanding of medicine was based solely on medieval scholastic argumentation, the dominant form of investigation. If you were educated, it was within this paradigm. People "reasoned" about the cause of certain illnesses, and the best argument was the default diagnosis. Solutions that deviated from the principles of investigation approved by Popes and Monarchs were immediately met with suspicion. Now consider someone approaching one of these arguments by saying "We should probably use an Evidence Based Medicine approach more firmly grounded in empirical data"; this would appear completely radical, and yet it is the de facto standard we use in modern medicine today. Consider how unscientific the medieval paradigm was; it reasoned from approved assumption to supposed conclusion, literally ignoring empirical and factual adequacy. The arguments were valid nonetheless, despite reasoning from seriously questionable premises that withstood argumentation because questioning their theological basis was heretical. Consider Trephination, Humorism, Shunamitism, self-flagellation, and many other bogus scholastic medicinal treatments that were nonetheless considered "rational" in accordance with theological assumptions. This culminated in the Black Death, in which no one knew how to handle such an "invisible killer" because their assumptions could not account for microbiology. Confidence in reason is a crucial skill of a critical thinker; but understanding how it can pigeonhole you is an even greater skill. Anti-charity, or considering the benefits of a paradigmatic shift, could very well open your eyes to what you were previously blind to. Maybe if we had spent less time giving charitable interpretations to competing scholastic arguments, we would have realized sooner that we needed a paradigm shift.

So think back to Mill's argument for free speech; he laid out a supporting principle to help determine which instances of speech were fundamentally corrosive. We should try the same for the principle of charity, so we can gain the benefits but identify the edge cases where we need to abandon it. Our supporting principles or rules should reduce the number of false positives and maximize the number of true positives, while minimizing the amount of time spent going through a reinterpretation process. It can't be something vague such as "apply the principle unless the other person is being hateful" because (as critical thinkers) we can immediately recognize this creates more problems than it solves. It is very tempting to say that "we should apply the principle if we can construct a plausible interpretation of their argument". The problem is the notion of plausibility. Consider the image below:


The first thing to note is that the plausible need not overlap with the actual or the expected. Another thing to note is that plausible reasoning is highly dependent on your philosophical assumptions and cultural background. Plausible does not mean probable; people consistently conflate the two concepts. Plausibility is a form of presumption-based reasoning used when information is seriously lacking; it's something we fall back on when we are ignorant in a situation. It is a form of common sense reasoning, but it is not as straightforward as it seems, because common sense is by definition the shared background knowledge of a community; what happens when you are engaging with someone who has an entirely different plausibility structure? Embedded in "common sense" are collections of unquestioned and unproved background assumptions that may very well be wrong. Common sense is similar to the default logic you employ when reasoning under conditions of Knightian Uncertainty. So if plausibility depends on common sense, and this depends on encyclopedic knowledge (and assumptions taken for granted), we can immediately see the problem if your interlocutor has a different knowledge base. Something plausible to one community might seem completely ludicrous to another community. Operating under the Open-World Assumption, if you dismiss or accept something on grounds of (im)plausibility while remaining open to new information, you will almost certainly have to revise. Consider how this also applies to cases of "possibility". Since this is a type of Inference to the Best Explanation, dismissing something on grounds of impossibility is intricately intertwined with our assumption of the set of "best" explanations. But "best" is a function of our current background assumptions, which we may not share with the interlocutor. "Best" can only be explicated in a self-referential way; so dismissing something as impossible is, in some ways, simply dismissing an argument at face value without justification. There also seems to be a tricky debate tactic with IBE arguments, where "plausible" gets redefined such that it overlaps completely with the "preferable".

Consider an example: you and a friend are walking in the Sierra Nevada mountain range in California and see animal tracks. Given your knowledge of the geography, you rule out panda bears, saber-tooth lions, and cheetahs. Your friend suggests they could be polar bear tracks; but given your general knowledge of California and polar bears, you find it implausible and dismiss it. It turns out your friend has never left Alaska, so the only association they have with tracks in a woodsy wilderness is polar bears, because they don't have the relevant information about California in their knowledge base. This is an obvious case of implausibility and is easily resolvable by informing your friend of the missing information. Now consider you and a friend walking in the same mountains. You are a devout Muslim and your friend a devout Hindu. You notice the very dry landscape and start speculating as to what caused it. Suddenly, storm clouds appear and it begins raining. You as a Muslim might quote:

It is God Who sends the Winds, and they raise the Clouds: then does He spread them in the sky as He wills, and break them into fragments, until you see rain-drops issue from the midst thereof: then when He has made them reach such of his servants as He wills, behold, they do rejoice! (Surat ar-Room, 48)

You take this as proof that the rain is an answer from Allah. Your Hindu friend might have a second opinion. After all, he performed Yajna earlier that morning, something obvious to do in honor of Krishna (rain is something explained in Hindu scripture). As a Muslim, you scoff; this can't possibly be an explanation. Your Hindu friend follows with the exact same charge of impossibility. How easily can this be resolved? If your explanations of the world are grounded in scripture, but your scripture diverges from someone else's, how can you possibly dismiss the other on grounds of implausibility when you are operating on divergent world-views? So the supporting principle cannot be dependent on the plausibility of the argument's content. If something is suspected to be "implausible", it doesn't even seem that the principle of Charity can get off the ground; in other words, the rule simply does not apply.

It seems the principle should instead hinge on obvious blunders, ones that seem "obvious" to a universal audience. In other words, something (as in the case of Grice's Maxims) that anyone can intuitively understand, but that does not depend on vague notions of plausibility and harm. We have to allow for the fact that something implausible might be true; our rule should not immediately disregard the implausible simply on the basis of its propositional content. If something is implausible, we can subject it to further scrutiny, but we might not disregard it outright if it's useful and we have no competing hypotheses. Likewise, if something seems "harmful" but there is no obvious pathway to the harm, or if our definition of harm seems too broad in scope, we should tentatively consider a charitable interpretation while subjecting it to additional scrutiny. Think of the principle of charity as a decision tree, rather than a logic gate. As we traverse further down the branches, we subject the argument to additional tests, with the objective of minimizing our own bias and our false negatives. The root node, and the nodes nearest the root, should involve checks as universally accepted as the Gricean Maxims. As we traverse down the tree, we become stricter in our charitable interpretations. But how can we do this? What are some of the obvious ways an argument can be so stupid that anyone listening can recognize it?
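
To make the decision-tree picture a bit more concrete, here is a minimal sketch of what such a procedure might look like. It is purely illustrative: the check names, ordering, and verdicts are my own invented placeholders, not a rigorous method.

```python
# A purely illustrative sketch of "charity as a decision tree, not a logic gate".
# Every check name and threshold here is a hypothetical placeholder.

def charity_decision(argument):
    """Walk an argument through increasingly strict checks; return (verdict, reason)."""
    # Root-level checks: near-universal expectations, akin to Grice's maxims.
    if not argument.get("relevant", True):
        return ("withhold", "fails basic relevance; ask for the missing link first")
    if argument.get("bad_faith_markers"):  # e.g. loaded questions, Kafka traps
        return ("withhold", "explicitly uncharitable or unserious interlocutor")

    # Mid-level checks: implausibility alone is not disqualifying.
    if argument.get("implausible", False):
        if argument.get("competing_hypotheses", 0) == 0:
            return ("tentative charity", "implausible but useful and unrivalled; keep scrutinizing")
        return ("extra scrutiny", "implausible and rivalled; demand more support first")

    # Leaf-level checks: the strictest tests, applied only after the cheaper ones pass.
    if argument.get("stakes") == "high":
        return ("charity plus verification", "reconstruct the best version, then verify every premise")
    return ("full charity", "no obvious blunders; interpret in the strongest possible way")


if __name__ == "__main__":
    example = {"relevant": True, "implausible": True, "competing_hypotheses": 0}
    print(charity_decision(example))  # ('tentative charity', ...)
```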

I have some ideas that I think are non-dogmatic, minimize our error rate, maximize charity, and are not subject to our irrational tendencies. I think they are similar to the free speech corollaries. The paradox of tolerance states that we should not tolerate intolerant arguments; maybe we should not apply charity to arguments that are explicitly uncharitable or unserious in their pursuit of truth? I think many of the general standards of argument will be applicable, but I will create a list of what I think to be "obvious" markers of a counter-argument being unworthy of charity. 

  1. The Loaded Question: you have probably seen this before even if you are unfamiliar with the name. It will appear in debates or discussions where someone makes it seem as if they're genuinely probing for clarifying information, but in reality it is a subtle ad-hominem argument used to discredit you. It is similar to poisoning the well: the questioner attempts to slander your position by pigeonholing you into an arbitrary yes-no answer, where either answer attributes something negative to you. The classic example is "Have you stopped beating your wife?". "Yes" implies you once beat her and "No" implies you still beat her; both are unacceptable. To guard against this, you just need to remind the audience that the presupposition required to even make the question intelligible is false. Sometimes loaded questions take a less pernicious form; for example, asking someone "Would you like red or white wine?" without first establishing whether they drink alcohol at all. The key feature of the loaded question is its explicit use of slander and its absolute disregard for approaching resolution on a topic. 
  2. False Equivalences: This is similar to an improper analogy, but becomes ridiculous when someone is clearly being lazy in their evaluation. It can take the form of a slippery slope, where someone asserts a cascade of uncontrollable events leading to some disastrous situation and then equates the initial state with the end state. It can also take form when someone is comparing and evaluating two objects that are categorically different. For example, someone might claim that one capitalist society is likely corrupt on the basis of another capitalist society actually being corrupt. In this example, you can see how the transference of the negative quality (corruption) is lazily applied based on surface-level similarities, when in fact the two societies under comparison could be radically different despite sharing the commonality of "free markets". If you wanted to argue that there is a fundamental flaw inherent to all capitalist societies, that would be different. False equivalence also occurs when someone unreasonably disregards scale and scope when making a comparison. On the Wikipedia page we see the example "The Deepwater Horizon oil spill is no more harmful than when your neighbor drips some oil on the ground when changing his car's oil". The comparison is so bad that it's almost laughable. But the point is, the person putting forth this position is either extremely ignorant or deliberately avoiding putting forth a reasonable argument; so you can dismiss it. This is similar to the concept of equivocation. A similar fallacious form of reasoning is the association fallacy: the tendency to assert, by irrelevant association, that the inherent qualities of one thing are shared by another. This can take many forms, but the most obvious are guilt by association and honor by association. 
  3. Motte and Bailey: This fallacy is very similar to the equivocation above. From Nicholas Shackel, the philosopher who initially pointed it out: "A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible and so neither is the Bailey. Rather one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land." In Shackel's analogy, the Bailey represents a philosophical doctrine or position that is desirable to its proponent but only lightly defensible, while the Motte is the defensible but undesired position to which one retreats when hard pressed. The idea is that a more defensible, but less controversial, proposition is substituted for the less defensible and more controversial proposition. After someone points out the lack of justification for the controversial version, the speaker equivocates and falls back to the easier defense. They might claim "Well, I am just saying that..." and then insert the easily defensible proposition (that nearly everyone accepts), while failing to acknowledge that the more controversial proposition has yet to be defended. The inverse motte-and-bailey should also not be given charity: someone asserts a rather mild claim, and a listener mistakes it for the more controversial version of the claim. This is a variation of the strawman. The main idea with motte-and-bailey is that once the person reverts to the less controversial claim, critique is silenced. They then claim victory or revert back to the more controversial proposition. 
  4. Whataboutism: This one really doesn't require much explanation. It is a form of the Tu Quoque in which someone evades a discussion by replying with a counter-accusation. The reason why this is a non-charitable move is that the reasoner has simply avoided demonstrating the truth value of the claim; so it is simply a distraction or a form of irrelevance. It is childish, and yet many adults do it. Suppose a father tells his child not to smoke because it is bad for you, and the child replies "you smoke" and disregards the advice. We can see in this example why it's childish; the child has not engaged with the ultimate claim (that smoking is bad for you). They simply ignore it and charge their father with inconsistency, without considering the nuances and struggles of addiction or the validity of the claim. Whataboutism can take MANY forms in ways that are not obvious. Consider a company that is treating its workers poorly. An employee speaks up and says "You need to improve the working conditions" and the employer responds by saying "what about the time you didn't show up?". The employer has completely disregarded whether conditions need to be improved or not, and has simply shifted the target to the employee's worthiness. Just remember, whataboutism is not an argument or anything really; it is just defensiveness. 
  5. The Kafka Trap: This is less well known, but surely something you will encounter at some point. The idea is that your opposition claims you belong to some horrible category for reasons X, Y, and Z. Any attempt to demonstrate you are not that horrible thing is taken as proof that you are that horrible thing, because any form of denial is proof of membership in the category. Consider someone calling you a racist. You respond with a couple of reasons to demonstrate you are not one, but the denial itself is taken as proof because "all racists will deny it". See more about this example by looking up "White Privilege". Accusation is sufficient to prove you belong in the category. See the link above for more examples and variations of the trap. The point is that it is not an argument, rather a baseless assertion, which therefore does not require a charitable interpretation. It is obviously silly, and no one would accept this form of reasoning except for ideological motivations. Consider a police officer stopping a suspect: "You are the murderer we just got a report about", you say "uh, no, it wasn't me", and the officer replies "That's exactly what we would expect the murderer to say". It is a type of guilty-until-proven-innocent reasoning. In the White Privilege case, the only possible way out is repentance, and even then there is no guarantee you can overcome the permanent stain. The visual at the end shows the religious undertone of this sort of reasoning. It seems like there is a deep need for these sorts of totalizing explanations. We are willing to believe odd things that are logically equivalent to other things we reject; yet we fail to generalize our identification of the faulty reasoning when the conclusion suits us. Anyway, similar to this is the idea of a Double Bind: "a dilemma in communication in which an individual (or group) receives two or more reciprocally conflicting messages. In some scenarios (e.g. within families or romantic relationships) this can be emotionally distressing, creating a situation in which a successful response to one message results in a failed response to the other (and vice versa), such that the person responding will automatically be perceived as in the wrong, no matter how they respond. This double bind prevents the person from either resolving the underlying dilemma or opting out of the situation. Double binds are often utilized as a form of control without open coercion—the use of confusion makes them difficult both to respond to and to resist". It is fundamentally coercive and disgusting, similar to the Kafka Trap. It is not an argument, and therefore does not deserve to be treated charitably. 
  6. The Narrative Fallacy: This one is very common, especially if you have friends in finance or amateurs who claim to be doing stock trading. This is our tendency to create a story with cause-and-effect explanations out of completely random details and events. The brain literally imposes structure, meaning, and narrative on complete randomness. Being formally trained in econometrics, I noticed people have an innate desire to construct a story around a random walk stochastic process. I would run into people who are looking at a stock and create some causal story (narration) for why the price trended in a certain direction, and where it will be going at t+1 (with the utmost urgency to act now!); see the random-walk sketch after this list. The stories can become very detailed, ranging from short explanations to drawn-out "analyses". From The Black Swan: "The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding". All stories have causal structure embedded into the narrative. The structure provides an explanation for where we have come from, where we are now, and where we can expect to be at some point in the future (and an implicit strategy for how to behave). The point is that we narrate before we have rigorously and empirically established any logical links. This can be very harmful if a decision maker acts on the narrative without any sort of investigation into its validity. This is a form of apophenia; the tendency to see connections between completely unrelated events. Consider a gambler who is having "a hot streak"; they argue for a strategy of switching tables to increase their odds of winning, but we know a priori that casinos design games such that the expected value of your bets is negative. Seeing "structure" in this randomness literally leads to riskier betting behavior and significant losses. In terms of the principle of charity, I would suggest immediately calling someone out on narration if the stakes are high and many people will be impacted. If you are unaffected, sit back and enjoy the free entertainment. 
  7. Silent Evidence: If someone is making an argument while ignoring the effects of silent evidence, I think you can readily disregard their position. The reason is that no matter how charitable an interpretation you apply, their argument will still lack a holistic view of the relevant factors. A common instantiation of silent evidence is survivorship bias: focusing on entities who passed the selection process (the survivors) while disregarding those missing from the dataset (the non-survivors). Consider a trivial example; a doctor recommends a medicine, claiming that it worked for 100 people. You find this amazing and immediately request a dose. The doctor failed to mention that another 900 people died. If the initial sample was 1000, a 10% success rate sure doesn't sound as good. From the Wikipedia page: "Survivorship bias is a form of selection bias that can lead to overly optimistic beliefs because multiple failures are overlooked, such as when companies that no longer exist are excluded from analyses of financial performance. It can also lead to the false belief that the successes in a group have some special property, rather than just coincidence as in correlation 'proves' causality". Another obvious example is the "financial guru" who "knows the key to success". The success rate of day trading (and institutional investing, for that matter) is rather low. Most people fail. But you will always come across a book at Barnes and Noble by someone who "beat the stock market" and gives you a list of traits explaining his success. This is appealing to many people who don't understand randomness; there has to be a winner by virtue of the game. Some people will win. There are always random factors contributing to the success, and yet the winner has a full explanation; they attribute all of the success to their strategy. You buy the book, repeat the strategy, and find it doesn't work, because of course it won't work (but maybe the anecdote doesn't work for you because you just didn't implement it correctly). The very factors they claim are important were present in the failures! This is a fallacy of incomplete data collection. It pervades life. Ignoring it necessarily leads to a distorted view of the actual phenomenon you wish to explain. You cannot understand the full data generating process unless you have access to negative examples. 
  8. Cherry Picking: Closely related is the practice of selectively choosing information that confirms your presuppositions while ignoring available evidence that disconfirms them. This occurs when someone wants to make an inductive generalization but, recognizing the severe lack of evidence in favor of their position, cherry-picks certain pieces of evidence to make a stronger claim than the evidence warrants. I personally have no tolerance for this one because it's such an obvious misrepresentation of reality in favor of what someone wants to be true. It is a type of one-sided argumentation, so in general I don't think it is deserving of charity. Ask yourself before you personally attempt to do this: do I simply want to win the argument, or am I actually concerned with truth? Being trained in analytics and statistics, I have seen this in the context of data dredging. P-Hacking is very common: the tendency to search for statistically significant results while ignoring all of the trials that failed statistical validation (see the sketch after this list). I understand there are institutional, social, and cultural factors influencing us to report and consume information in biased ways, but all critical thinkers should simply withhold judgement if data is inconclusive, not impose whatever they believe on reality. A closely related concept is the Texas Sharpshooter Fallacy: focusing on a subset or cluster of data while ignoring differences within the same dataset. The name comes from a joke about a Texan who fires gunshots at the side of a barn, then paints a shooting target centered on one of the clusters, and claims to be a sharpshooter. In science and everyday life, someone might lack a specifically defined hypothesis before acquiring data, only forming it once they have seen the data. You need to construct your hypothesis before viewing the data. A related idea is the notion of False Balance; the tendency for journalists and other information providers to misrepresent consensus on certain issues, making it appear as if "there is still a debate". For example (hypothetically), there might be one study that demonstrates regressive tax structures enhance economic growth and a journalist might cite this, while ignoring a meta-analysis indicating substantial agreement that there is no relationship between regressive taxes and economic growth. You can think of this as a generalized form of cherry-picking and P-Hacking; ignoring disconfirming studies to make it appear as if the one you selected represents the conclusion you wish to hold. If people understood the hierarchies of evidence, this might help. 
  9. Misunderstanding Causation: There are a lot of ways someone can misunderstand causality. I want to point to a few obvious ones. I think we should not apply charity in these cases because they are so lazy that the person seems to be either deceitful or intellectually apathetic. However, sometimes we might be in a situation where we are reliant on heuristics to determine the cause in question; but that's typically not during a debate or discussion. When we say X causes Y, what are we saying? Sometimes we mean that "without X, Y would not have occurred the way it did"; this is a counterfactual claim, a relationship of necessity. Sometimes we mean "X being present makes Y more likely", implying that X is insufficient but nevertheless influential. The key point is that when we say "X causes Y" we should be specific about what we mean and careful with the scope of our claim. We might want to distinguish between proximate cause and ultimate cause; person X hitting person Y with a car might be the proximate cause, while the ultimate cause was intoxication. We might want to distinguish between proximate cause (in the legal sense of the term) and the but-for test: would the harm to Y have occurred had X not acted? Being specific about what exactly you mean can help you determine the root cause. However, sometimes there is no single root cause because we are stuck in a causal loop: a type of positive feedback loop where outputs feed back into the process as inputs. Generally, it is important to distinguish between the underlying cause, the immediate cause, and contributing factors. It is important to make explicit what we think the counterfactual state of affairs would have been had such-and-such not occurred, and to provide reason and evidence. Many people will falsely assert that because Y followed X, X was the cause of Y. This is formally known as the Post Hoc Ergo Propter Hoc fallacy, which literally means "after this, therefore because of this". Temporal precedence seems to be a necessary condition for causation, but it is by no means sufficient. You will see this all the time in politics when people want to attribute blame to the opposition and take credit for anything good. In this example, the illusion of control is combined with the post hoc fallacy; people overestimate the effect the president has on even mundane day-to-day life, and then falsely attribute negative or positive outcomes to the president's actions. Closely related is the idea that correlation does not imply causation; people consistently fail to realize that two factors covarying does not imply a causal connection (the third sketch after this list shows two variables that correlate strongly with no causal link between them). You need to establish a direction of causality: does X cause Y, or Y cause X? A common argument you will hear is that violent video games cause violent behavior. Seems to make sense, until you ask the question "do people inclined to violence choose violent video games?". Magical thinking underpins much of this (see the Barnum Effect as an example). A lot of poor scientific articles get picked up by tabloids as "Researchers find link between X and Y"; a deeper dive reveals a slight statistical correlation that may well be due to data dredging and P-Hacking. Take these with a grain of salt. And do not apply charity to such obvious laziness.
  10. Understating Fallibilism: Most knowledge is fallible; that is, subject to revision. Most of the time, when we argue, we are actually reasoning defeasibly; that is, our reasoning is non-demonstrative, rationally compelling, subject to exceptions, contingent, investigative, and non-deductive. If someone approaches an argument while ignoring this reality, be skeptical. If someone is presenting a case that is overly dogmatic, cannot be wrong, cannot be falsified, or cannot be subject to revision, do not be charitable: they are presenting a kind of position that is extremely rare, and in order for us to accept it we should apply a high standard of proof and ensure there are absolutely no ambiguities or errors in the evidence and reasoning. If there are absolutely no conditions under which the proposition could be shown to be false, be skeptical. If someone wants to assert something as axiomatic for the sake of argument, and they are not explicit about it, be wary. Just be cautious and non-charitable when someone is asserting extremely strong and consequential claims.
  11. Type 1 and Type 2 Errors: I generally have no patience when people misunderstand basic statistical reasoning. If someone's reasoning is based on a fundamental misunderstanding of sensitivity and specificity, waste no time trying to reinterpret their conclusion. The reasoning is structurally deficient. If they cannot distinguish between True Positive/True Negative and False Positive/False Negative, their reading of "the literature" is likely wrong. Think about it like this: suppose I administer an intelligence test, and by sheer luck an unintelligent person scores in the "intelligent" category. This would be an example of a false positive; the test did not distinguish between the intended categories. Consider another real-life example: you have an alarm system that is supposed to go off when someone is breaking into your house. The alarm goes off; does this necessarily mean someone broke into your house? Is it possible that your cat tripped the alarm somehow? (The fourth sketch after this list works through this alarm example with Bayes' rule.) The reason I have no patience for these errors is that they are so intuitive and easy to learn, and yet everyone seems to forget them. Everyone seems to understand them in the context of pregnancy tests, so why should anyone be charitable to someone who fails to see the generality of the concept in other domains? People fail to remember that Actual is not equal to Expected. See the image below.
  12. Hindsight Bias: This one does not require charity. It is a sort of crude overconfidence and narcissism that skews reality. It is also called the "knew it all along" fallacy: the tendency for people to perceive past events as more predictable than they actually were. As an example, the 2008 global recession had predictable aspects, but it generally took many people by surprise. Afterwards, however, everyone seemed to have known it was coming all along. That presumably explains why they chose to suffer significant losses to their portfolios; it was all part of their plan. This bias also shows the general lack of empathy we have. We view history in hindsight, see how decision makers reacted at the time, and judge them according to our own knowledge base. Hindsight bias is the systematic error we make when evaluating past scenarios. One example of this is the Historian's Fallacy: the assumption that decision makers in the past viewed events from the same perspective, and with the same information, as those analyzing the decision in later time periods. I also think this occurs when people say "history repeats itself", as if there were some general law-like structure to the evolution of history. We view historical events through our current narrative structure and impose that regularity on the data. I think this is also a manifestation, or variant, of the illusion of asymmetric insight. Closely related to hindsight bias is the Outcome Bias.
  13. Category Mistakes: This one is simple so I will be short. From the Wikipedia page, a category mistake is an "error in which things belonging to a particular category are presented as if they belong to a different category, or, alternatively, a property is ascribed to a thing that could not possibly have that property. An example is a person learning that the game of cricket involves team spirit, and after being given a demonstration of each player's role, asking which player performs the 'team spirit'. Unlike bowling or batting, team spirit is not a task in the game but an aspect of how the team behaves as a group." You can see from the example that category mistakes are fundamental misunderstandings of the concepts under discussion. They very often manifest as the Fallacy of Composition and the Fallacy of Division: "birds cannot fly because feathers cannot fly" (misunderstanding the relationship between the part and the whole), or "the United States is rich, therefore everyone in it is rich". What is true of the whole need not be true of all constituent parts, and what is true of some constituent parts need not be true of the whole. If there is a category error, you simply need to correct it.
  14. Attribution Errors: This refers to a category of ways people attribute qualities to themselves and others in systematically biased and predictable ways. I include this on the list because there are quite a few ways people can be judgmental; if the misattribution is obvious and unjustified, we don't need to spend time trying to apply charity to their position. One instance is the fundamental attribution error: "where observers under-emphasize situational and environmental explanations for the behavior of an actor while overemphasizing dispositional- and personality-based explanations". Think of "inferring qualities of a person" as an inverse problem; personality may predispose certain tendencies, but making the reverse inference from uncontextualized behavior seems to be a bit of a stretch. It is the tendency to turn situational features into all-encompassing dispositional explanations in systematically biased ways. Belief in the Just-World Hypothesis could be one reason: the belief that "you got what was coming to you" in all scenarios, neglecting the fact that many outcomes have uncontrollable aspects. Take sickness as an example; we might assert that a person's illness is their own fault. They could have taken more precautions, so they got what was coming to them. When you respond by saying "it's flu season", someone might cherry-pick an example of a person who avoided sickness this season for reasons X, Y, and Z, implying that if you were responsible you would have done the same. This is obviously absurd; there are factors beyond your control. Another example might be blaming an individual for being in poverty while ignoring institutional, economic, and political factors that were unforeseen and contribute to their employment status. If someone seems to be blaming individuals (or themselves, for that matter) without warrant or justification, I would not extend charity. Closely related is the Actor-Observer Asymmetry: people tend to attribute personality defects when observing someone else's situation, but when considering their own position they are keen to note the situational factors. Defensive attribution is the tendency of individuals to isolate factors or explanations that minimize their own role when a group effort fails; they may attribute more responsibility to others after the fact in order to feel better about themselves. The ultimate attribution error is a generalization of the fundamental attribution error in which attributions to the out-group are more negative than those made to the in-group. This obviously requires no charity, and it is a key bias political pundits target. Closely related is the Group Attribution Error, which resembles the fallacies of illicit transference: when viewing the out-group, an individual's decisions or behaviors are substituted for the entire group, or, when the out-group makes a decision, we assume that the group's decision must reflect every individual member's preference. There are some more attribution biases linked below. I could go on for days about these, but the crucial point is that if any of them seem present in someone's argument, feel free to skip the charitable interpretation.
  15. Ignoring Equifinality: This one irritates me. It is fundamentally due to a lack of imagination or an unwillingness to try new strategies. Equifinality is the notion that in an open system, a given end state can be reached by multiple means. In other words, there is more than one path to an outcome. This is literally a maxim in software engineering: "there is more than one way to do it". On many occasions you will hear decision makers or leaders claiming "there is only one way it can be done" when, after some consideration, selecting "this way" turns out to be arbitrary, or simply done on the basis of tradition or a misunderstanding of the alternative pathways. I don't really have patience for this one because it stems from laziness and fear of change. If we pigeonhole ourselves to one position, we could be ignoring alternative strategies that are less costly and maximize benefits for a greater number of people. On an individual level, if you convince yourself that "there is only one way to do this", you are dismissing the flexibility that is present in most situations, at the cost of your own anxiety. I used to work in finance, and one of the core intellectual abilities is being able to see and evaluate the different investing strategies and options available to you within an uncertain environment. Maybe some people are scared of considering multiple options. Some people like the straight line, not thinking about different ways to achieve outcomes. I think that if someone, like a politician, is making the bold claim that "this is the only strategy available to us for achieving our goals", it is up to them to demonstrate that they have at least considered alternative viable options before pegging all of us to this decision. Their conclusion does not require charity because it is an incomplete argument. They have not assessed viable alternatives.
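
A few of the points above lean on statistical intuitions, so here are some minimal sketches in Python. First, the silent evidence point from item 7: every "trader" below flips fair coins, so nobody has real skill, yet the survivors look like geniuses. The scenario and numbers are illustrative assumptions, not data from any real study.

```python
# A minimal sketch of survivorship bias: every "trader" flips fair coins,
# so no one has any real skill. If we only look at the survivors (those who
# happened to win every year), their track record looks spectacular.
import random

random.seed(0)

NUM_TRADERS = 10_000
NUM_YEARS = 10  # a trader "survives" a year by winning a fair coin flip

survivors = 0
for _ in range(NUM_TRADERS):
    if all(random.random() < 0.5 for _ in range(NUM_YEARS)):
        survivors += 1

print(f"Traders who won {NUM_YEARS} years in a row: {survivors} of {NUM_TRADERS}")
print("Each survivor's observed win rate: 100%")
print("True per-year 'skill' of every trader: 50% (pure chance)")
# With 10,000 traders and fair coins we expect about 10000 / 2**10, i.e. roughly
# 10 survivors, each of whom could write a convincing book about their "strategy".
```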
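
Next, the P-Hacking and multiple comparisons point from item 8: a sketch that runs many significance tests on pure noise and counts how many come out "significant" anyway. The test procedure, sample sizes, and threshold are illustrative assumptions.

```python
# A minimal sketch of p-hacking / the multiple comparisons problem: run many
# significance tests on pure noise, then imagine reporting only the "hits".
import math
import random
import statistics

random.seed(1)

def two_sample_p_value(a, b):
    # Rough two-sample z-test on the difference of means (adequate for a sketch).
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # Two-sided p-value from the normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

NUM_TESTS = 100
SAMPLE_SIZE = 50

false_positives = 0
for _ in range(NUM_TESTS):
    group_a = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]
    group_b = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]  # same distribution: no real effect
    if two_sample_p_value(group_a, group_b) < 0.05:
        false_positives += 1

print(f"'Significant' results out of {NUM_TESTS} tests on pure noise: {false_positives}")
# Typically around 5 of the 100 comparisons come out "significant" by chance alone.
# Reporting only those, and hiding the other ~95, is cherry picking in statistical form.
```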
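
For item 9, a sketch of correlation without causation: a hidden confounder Z drives both X and Y, so they covary strongly even though neither causes the other. The variables and noise levels are made up for illustration.

```python
# A minimal sketch of "correlation does not imply causation": a hidden factor Z
# drives both X and Y, so X and Y correlate strongly even though neither causes
# the other.
import random
import statistics

random.seed(2)

N = 5_000
z = [random.gauss(0, 1) for _ in range(N)]      # the unobserved confounder
x = [zi + random.gauss(0, 0.5) for zi in z]     # X is driven by Z plus noise
y = [zi + random.gauss(0, 0.5) for zi in z]     # Y is driven by Z plus noise; X never touches Y

r = statistics.correlation(x, y)
print(f"Correlation between X and Y: {r:.2f}")
# The correlation comes out close to 0.8, yet intervening on X would do nothing
# to Y. Observational covariation alone cannot tell you the direction, or even
# the existence, of a causal link.
```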
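
Finally, for item 11, the alarm example worked through with Bayes' rule: even a sensitive alarm with a low false-alarm rate mostly produces false positives when break-ins are rare. The sensitivity, false positive rate, and prevalence figures below are assumed purely for illustration.

```python
# A minimal sketch of why a positive "test" is not proof: base rates matter.
SENSITIVITY = 0.99          # P(alarm | break-in): true positive rate
FALSE_POSITIVE_RATE = 0.02  # P(alarm | no break-in): e.g. the cat trips it
PREVALENCE = 0.001          # P(break-in) on any given night

# Bayes' rule: P(break-in | alarm)
p_alarm = SENSITIVITY * PREVALENCE + FALSE_POSITIVE_RATE * (1 - PREVALENCE)
p_break_in_given_alarm = SENSITIVITY * PREVALENCE / p_alarm

print(f"P(alarm goes off on a given night): {p_alarm:.4f}")
print(f"P(break-in | alarm): {p_break_in_given_alarm:.3f}")
# Despite the alarm being "99% sensitive", an alarm indicates an actual break-in
# only about 5% of the time, because false positives from the common case
# (no break-in) swamp true positives from the rare case. Actual is not equal to Expected.
```
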
The principle of charity, like all rules of thumb, has conditions under which it is applicable. I think I have named a few situations where we can supersede the principle in order to make progress in a debate or when reading an argument. I don't know if these are fool-proof; there are probably instances in which we should have applied charity but didn't, on the basis of one of these exceptional scenarios. Nevertheless, they seem to make sense to me. This is how I have been thinking about the principle recently.

Resources for Additional Reading:

  1. Pursuing Truth: A Guide to Critical Thinking
  2. Thinking Critically for Deciding What to Believe or to Do: Explore, Evaluate, Expand, Express
  3. The Case for Free Speech
  4. Neuroscience in Al Andalus and its Influence on Medieval Scholastic Medicine
  5. Medieval Medicine: A Reader
  6. Medieval Medicine Pt. 4
  7. Medicine or Magic? Physicians in the Middle Ages
  8. Medieval Medicine: Everything you Need to Know
  9. The Principle of Charity Analysis
  10. Common Sense Reasoning
  11. Putting "Explanation" Back In IBE
  12. The Narrative Fallacy
  13. Avoiding Falling Victim to the Narrative Fallacy
  14. Varieties of the Motte and Bailey
  15. The Extent and Consequences of P-Hacking in Science
  16. Statistical Validity
  17. Counterfactual Causality
  18. Illusory Control
  19. Just World Fallacy
  20. Multiple Comparisons Problem
  21. Spurious Correlations
  22. Confusion Matrix
  23. Apex Fallacy
  24. Trait Ascription Bias
  25. False Consensus Effect
  26. Hostile Attribution Bias
  27. Self-Serving Bias









