What Bad Bunny and Alex Pretti Teach Us About American Culture


There is something deeply off-putting about our relationship with media. People treat their preferred outlets as infallible, putting roughly zero brainpower toward critical analysis, while remaining hyper-distrustful of anything that does not corroborate the same messages. Many people are simply media illiterate. This isn’t surprising; I’m not aware of any public K-12 school that formally trains students to understand the structures, methods, and strategies employed by various media outlets and platforms. Nor is this explicitly taught in university unless you choose to specialize in something related to the topic. Many people are simply not equipped with the critical thinking tools necessary to discern the messages they’re consuming. What’s worse, many are under the impression that they’re already capable of discernment, creating an inertial attitude that locks them out of gaining the required skills. You can observe this in the way people talk about the media: "they are biased, that’s their narrative, they’re motivated to say X because of corruption". These certainly are important aspects of media to understand, but when you probe a bit deeper, you find little to no grasp of how these concepts are actually used and applied. Ask someone to clarify what they mean by “bias,” for example, and you might hear “liberal bias” repeated back, rather than “bias as a systematic deviation from reality that can take many forms, in many contexts, such as selection criteria, framing, or disincentives”. My point is that people often use these words as performances for their in-group rather than as tools to assess a media ecosystem and our place within it. This surface-level understanding is actually a feature, not a bug; and it has serious consequences.

My main motivation for writing this post comes from two seemingly unrelated events that occurred recently: the murder of Alex Pretti and the Bad Bunny halftime performance. After reflecting on these two events and observing media behavior, I thought it appropriate to diagnose our collective response as a symptom of the deeply perverse media structures that increasingly govern our lives. In the case of Alex Pretti, we have a clear example of how media processes can fundamentally shape how we interpret and categorize data. In the case of Bad Bunny, we have a clear example of how media exacerbate culture-war nonsense so thoroughly that it stifles our ability to even reason. I’ll elaborate on both examples in the sections to come.

Bad Bunny: A Tale of Reactionary Backlash

Unless you live under a rock, you are probably aware of the Bad Bunny halftime performance and the "alternative" halftime performance hosted by TPUSA. Before the performance even took place, conservatives were criticizing it as a DEI pick (because the presumption now is that any non-white person must be a DEI pick meant to "push the liberal agenda"). They were also outraged because Bad Bunny is "not a patriot," likely due to his criticism of Trump; disagreement has come to be interpreted as treason within modern US politics. You are now "un-American" if you voice any criticism of the president (and possibly a target of NSPM-7). This is a pretty classic “culture-war counterprogramming” move: take a mass-culture event, claim it’s been “captured” by liberal/DEI/anti-America forces, then build an “alternative” spectacle that signals in-group identity (“faith/family/country”) and out-group threat (immigrants, the Spanish language, “wokeness,” etc.).

The reporting around Turning Point USA’s alternative show paints it as a deliberate counter-message to the NFL’s official halftime headliner, including explicitly “patriotic” branding and stacking it with right-coded celebrities like Kid Rock. (San Francisco Chronicle) That’s not “apolitical entertainment.” It’s an identity performance: this is the “real America” show. Even if some people insist it’s “not race, it’s politics,” the pattern can still be racialized (and often xenophobic) in practice:

  • Spanish as a trigger: When “he’s going to sing in Spanish” gets treated as evidence of not being “American enough,” that’s not a policy argument — it’s a cultural boundary. (The Guardian)
  • Latino cultural ubiquity vs. ‘outsider’ framing: Latin music is huge in the U.S. The backlash often isn’t about popularity; it’s about who gets to symbolize the nation on its biggest stage.
  • Political dissent re-coded as disloyalty: In polarized environments, criticizing the government (or Donald Trump / his movement) gets framed as “hating America.” That makes it easier to turn any artist who voices dissent into an enemy symbol rather than just “a musician with opinions.”

You can see this dynamic in mainstream coverage that explicitly frames Bad Bunny’s halftime appearance as a political flashpoint tied to immigration enforcement rhetoric and broader MAGA media backlash. (The Guardian)

Claiming Bad Bunny is a DEI pick is actually hilarious. By any measurable market reality, he’s a huge global star; Latino audiences are a major U.S. demographic; the NFL wants that audience. In response, some might say, “Sure, he’s popular, but he’s ‘political’ / ‘anti-ICE’ / ‘anti-Trump.’” But that exposes the deeper issue: a lot of the same voices calling to “keep politics out of entertainment” are fine with politics when it’s their politics (i.e., “God/faith/country” branding and overtly MAGA-adjacent performers). This is hypocrisy in its purest form. Not everyone objecting is motivated by racism. But the backlash is structurally racialized and xenophobic because it treats Latino identity and the Spanish language as suspect, and it casts dissent as treason — while celebrating overt right-wing spectacle as ‘patriotism.’ It’s funny to me because it’s about the most blatant and overt act of identity politics I can imagine. A lot of what gets denounced as “identity politics” is really out-group identity politics. When it’s their identity, it’s renamed “tradition,” “values,” “patriotism,” or “normal.”

  • Identity politics they dislike: marginalized identities being visible on a prestige stage (Latino/Spanish-language pop stardom; artists who criticize Trump; anything read as “multicultural America”).
  • Identity politics they endorse: their in-group symbols and rituals (“God/Faith/Country,” flag aesthetics, “real America,” “take our culture back”), plus an “alternative” show that functions as a rally. That’s identity politics with a different label.

And the “alternative halftime show” idea makes it almost comically explicit: it’s not just having politics, it’s counter-programming a mass ritual to assert who counts as “America.” That’s why it feels so blatant. The objection isn’t to identity politics; it’s to other people’s identity politics. When conservative identity is foregrounded, it’s framed as patriotism and values. When Latino identity or dissent is foregrounded, it’s framed as DEI or disloyalty.

Symbolism in the Performance

While Bad Bunny performed, dancers danced atop power-line poles. The group of people I was watching with immediately thought this was stupid. I explained that this was a critique of the absence of US support following the hurricane in Puerto Rico. People in the room agreed that the lack of support was wrong, but then someone said, “Trump don’t give money because their government is corrupt.” This person is somewhat of a Trump apologist, so I didn’t even attempt a fruitless conversation. But they completely miss the point: even if the Puerto Rican government is corrupt, the US government still has an obligation to our territories and the people living within them.

Puerto Rico is a U.S. territory; millions of U.S. citizens live there. Disaster aid isn’t a “gift” to a political class — it’s help for people whose homes, hospitals, power grid, and water systems are destroyed. If corruption is a concern, the answer is oversight and controls, not withholding help from U.S. citizens after a catastrophe. That’s not even partisan; it’s basic governance. That is precisely what an effective government is capable of doing: carrying out essential functions during a national crisis. This ties back into what I've written about before: many people don't even comprehend the concept of governance. In their minds, president == government.

It’s also useful to puncture the implied claim that “we didn’t help because they can’t be trusted.” There were real corruption/fraud cases tied to Hurricane Maria recovery — including federal officials and contractors, not just locals. The Department of Justice described indictments involving FEMA officials and a contractor as exploiting Puerto Rico’s vulnerability after the hurricane. (Department of Justice) Fraud happens in every big disaster. That’s why you audit and prosecute. You don’t abandon the victims. The record shows large recovery efforts were complicated and slow, and watchdogs and reporters have documented significant delays and bureaucratic obstacles in major funding streams. That sure complicates the "Puerto Rico bad" narrative; but pinning the blame on "them" rather than ineffective governance is a textbook strategy our government uses to avoid responsibility.

  • A HUD Office of Inspector General report examined HUD’s administration of Puerto Rico disaster-recovery funds and describes high-level interference/pressure issues in how the funds were handled. (Office of Inspector General)
  • GAO reports repeatedly describe the massive scale of remaining recovery work and the slow pace/complex steps to access and expend large FEMA Public Assistance awards. (U.S. Government Accountability Office)

The idea that “they’re corrupt, so no help” is not how disaster policy is supposed to work, and the reality was far more about delay, bureaucracy, and oversight fights than a clean morality tale. Corruption can be real — but that’s an argument for controls, not for letting people go without power and safe water. If funds are at risk, you use inspectors general, audits, staged disbursements, and prosecutions. You don’t treat disaster relief like a punishment.

Woke Spotting

At the beginning of the performance, dancers appeared in sugar-cane fields. To me this was obviously just the history of Puerto Rico: symbolism of early US involvement in the colony, nothing particularly political. But someone in the group said, “see, they are trying to make it look like slavery.” I am truly baffled by this level of stupidity. But I do think there is a cause. “Anti-wokeness” is perhaps the most ridiculous reactionary phenomenon I’ve seen in the past five years, and its signature behavior is “woke spotting”: finding “wokeness” in everything. Woke spotting is a filter mechanism that interprets just about anything and everything as the “agenda of the left” or the “anti-America left”. Someone even noticed a “rainbow on the uniforms of the players”. Even symbolism vaguely resembling actual American history is seen as “the enemy” or the devil.

“Woke spotting” is basically pattern-matching for threat, not interpretation. Once someone adopts an “anti-woke” frame, they’re not watching a performance the way normal people are (story/symbol/history). They’re scanning for signals that confirm a prior belief: “they’re pushing an agenda.” This is why you see the knee-jerk reaction to the sugarcane fields; not “Puerto Rican history,” but “they’re trying to accuse America of slavery.” Or in the case of a rainbow; not “branding/design,” but “LGBT propaganda designed to undermine the True America.” It’s less “analysis” and more like a filter that turns ambiguous cues into evidence.

Sugarcane is loaded in the Americas. It’s tied to colonial extraction and, in many places, slavery or coerced labor systems. So even if the performance is doing “Puerto Rico / colonial economy / land history,” someone primed to fight “woke narratives” will jump straight to: “they’re calling us slavers.” The irony is: acknowledging sugar’s colonial labor history isn’t “anti-American propaganda” — it’s just… history. Puerto Rico had plantation agriculture and deep ties to U.S. corporate ownership after 1898; you don’t have to be doing a partisan message to reference that. (And even if an artist is referencing exploitation, that’s not automatically “America bad,” it can be “this is part of our story.”)

The “they’re trying to make it look like slavery” reaction is a defensive move that does two things:

  • preemptive dismissal (“this is fake/agenda”), and
  • moral reversal (“they’re the bad guys for accusing us”).

It keeps the person from having to sit with discomfort or complexity. If “wokeness” is the villain, then anything that evokes historical injustice becomes “enemy messaging,” not a legitimate topic. The “rainbow on uniforms” example is textbook. You can’t argue someone out of it by explaining the design choice, because the point isn’t the object. The point is the identity threat narrative: “they’re everywhere.” It becomes a kind of interpretive hobby: spot the agenda, feel righteous, signal the in-group. It can become and has become more obsessive than the thing they claim to oppose, because it trains people to read ordinary reality as propaganda.

Sugarcane is just Puerto Rican history and a symbol of the island’s economy and colonial past. Not every historical image is an accusation. Sometimes it’s just context. Even if it’s referencing exploitation, that’s still part of American and Puerto Rican history. Acknowledging history isn’t an attack on Americans.

A few reinforcing loops tend to cause this:

  • Media ecosystems that reward outrage and “gotcha” detections.
  • Identity consolidation: being “anti-woke” becomes a positive identity (not just an opinion).
  • Ambiguity intolerance: complex symbols are reduced to “us vs them.”
  • Status performance: spotting “woke” becomes a way to show you’re savvy/loyal.

It is a mechanism, and once someone has it, they can “find” wokeness in almost anything. More on this later.

Brain Rot & Patterned Responses

When I explained the electric-pole imagery (that Bad Bunny was representing the power outages in Puerto Rico), someone said, “well that’s just hypocritical. Here they are in America using energy. If they cared so much, why don’t they do something about it?” This is literally a patterned response, so much so that I think we can identify the general form of the implicit argument. It’s used everywhere to reject a critique out of hand: “If you care about X, do something about X.”

I responded by saying, “well, many people did in fact try to help, by sending lots of money.” That targets one of the core assumptions. But a second assumption is that donations can solve the problem, which massively understates the actual issue during the power failures. Massive grid failures must be fixed by a governing body working in conjunction with contractors; donations can’t fix them. But that’s the point of the argument: it deflects responsibility and misunderstands root causes.

You’ll hear a very similar pattern of argument when people call for higher taxes on the wealthy to fund more social programs. Someone will point out that billionaires have too much money, and the response is “well look at you, you have a car,” or something similarly minor. This obviously misses the point of the policy position: massive wealth distorts policy through rent-seeking, wealth extraction leads to instability, concentrated wealth means less wealth flowing to communities in need, and so on. The two arguments are very similar, and neither is thoughtful. This thoughtlessness, I believe, is a symptom of how diluted public discourse has become, for reasons I'll elaborate on in the next section. But for now, let's address these forms of argument.

To start, it’s not an argument meant to evaluate the critique — it’s a critique-nullifier. The form is basically: “Unless you personally opt out of the system you’re criticizing (or personally fix it), your criticism is invalid.” That move shows up everywhere because it’s cheap, feels clever, and shifts the burden from institutions to individuals. The general form of the argument (implicit syllogism):

  1. You criticize harm (H) produced by system/institution (S).
  2. You still participate in (S) (because you live in society, use electricity, pay taxes, etc.).
  3. Therefore you’re hypocritical / your critique is illegitimate.

This is a mix of tu quoque (“you too”) + a category error about responsibility. It's clearly wrong for the following reasons:

  • Participation ≠ endorsement. Infrastructures are non-optional. “You used electricity in the U.S.” is not evidence you don’t care about Puerto Rico’s grid.
  • Critique doesn’t require personal purity. You can oppose injustice while being embedded in the world that contains it.
  • It changes the subject from “Is the critique true?” to “Is the speaker morally perfect?”

Even if Bad Bunny were inconsistent, that wouldn’t make the underlying claim false. This is a straightforward category error for two reasons:

  1. It confuses expression/critique with direct remediation. A performance is not a utility company. Art can:
  • make the problem visible,
  • shape public attention,
  • signal solidarity,
  • raise funds,
  • pressure institutions.

That’s a form of doing something — just not the kind they’re pretending is the only valid kind.

  2. It treats a structural/institutional failure like a consumer choice. Power restoration and grid hardening require:
  • public governance,
  • regulation,
  • procurement,
  • engineers/contractors,
  • long-term capital,
  • coordination with federal support.

So “why don’t they fix it?” is like telling someone upset about bridge collapses to “go pour concrete then.” Donations can help people survive; they don’t substitute for state capacity + competent contracting. This wasn’t a ‘buy some generators’ problem. It was a grid-and-governance problem. Individuals can help with relief, but only institutions can rebuild a power system.

The billionaire/taxes version is the same move with different props. “Billionaires have too much money” is met with “you have a car / iPhone, so you’re a hypocrite.” This has the same structure: turn a systemic claim (distribution, power, incentives, capture) into a personal purity test, then declare victory. The critique isn’t “nobody should own anything”; it’s about scale and power: extreme wealth concentration can distort politics (lobbying, agenda setting), rent-seeking and extraction can outcompete productive investment, and concentrated wealth can mean weaker public goods and higher instability. So the “but you have a car” retort is irrelevant: the claim is about orders of magnitude and institutional effects, not whether any individual owns stuff. Why is this move so attractive (and why does it feel “patterned”)? Because it does three things instantly:

  1. Avoids the substance (no need to discuss Puerto Rico’s grid or tax policy).
  2. Reassigns responsibility downward (from governments/corporations to individuals).
  3. Provides moral gratification (“I caught you being inconsistent”).

It’s basically a social-media reflex: spot hypocrisy → dismiss issue → feel superior. What would count as ‘doing something’ in this view? Raising awareness, donations, and political pressure are all actions. And notice how this standard only appears when someone criticizes power. Nobody says ‘if you care about the stock market, go personally fix it.’

Below are three Walton-style argumentation schemes for these patterns, with hidden assumptions and critical questions (CQs) designed to attack them cleanly.

  1. Scheme 1: Argument from Hypocrisy / Inconsistent Commitment (Tu quoque)
  • P1. Speaker S asserts/advocates claim (C) (often a critique of system (X)).
  • P2. S’s actions (A) are inconsistent with (C) (S participates in (X), benefits from (X), or fails to act in line with (C)).
  • C. Therefore, (C) should be rejected / S’s critique is not credible / S has no standing to criticize.

  • Hidden assumptions (common)

    • A1. If S does not fully comply with (C), then (C) is false or unjustified.
    • A2. Participation in (X) is voluntary and avoidable (so participation implies endorsement).
    • A3. Moral standing is a necessary condition for the truth/acceptability of (C).
    • A4. The only legitimate way to express commitment to (C) is through personal behavioral conformity (not speech, art, organization, voting, etc.).
    • A5. S has sufficient power/resources to avoid (X) or solve the criticized harm.
  • Critical Questions

    1. CQ1 (Relevance): Does S’s inconsistency, if real, actually bear on the truth or merits of (C), or only on S’s character/credibility?
    2. CQ2 (Degree): How strong is the alleged inconsistency—full contradiction, partial compromise, or trivial participation?
    3. CQ3 (Voluntariness): Is participation in (X) meaningfully optional, or is it a constraint of living in society (e.g., using electricity, money, roads)?
    4. CQ4 (Feasibility): Could S reasonably comply with (C) given structural constraints, costs, risks, or limited alternatives?
    5. CQ5 (Burden shift): Is the argument improperly shifting the burden from evaluating (C) to judging S’s purity?
    6. CQ6 (Independence): Are there independent reasons/evidence supporting (C) that remain even if S is inconsistent?
    7. CQ7 (Double standard): Is this purity standard applied consistently to all advocates, or only to targets from one side?

Puerto Rico pole example mapping:

  • (C): “Puerto Rico was neglected / grid failure is a serious injustice.”
  • (A): “Bad Bunny performed in the U.S. using electricity ⇒ hypocrisy ⇒ critique invalid.”

CQs 1, 3, 4, and 6 are especially lethal here: hypocrisy (even if true) doesn’t refute the claim; electricity use is non-optional; rebuilding grids is institutional; and independent evidence exists.
  2. Scheme 2: “If you care, you must personally fix it” (a distorted Argument from Practical Commitment / Argument from Responsibility, often functioning as a derail)
  • P1. S says they care about problem (P) (or criticizes harm (H)).
  • P2. If someone truly cares about (P), they will take direct personal action (A) that significantly reduces/solves (P).
  • P3. S has not taken (A) (or not enough (A)).
  • C. Therefore S doesn’t really care / S’s criticism is illegitimate / we can dismiss the critique.

  • Hidden Assumptions

    • A1. “Caring” entails an obligation to take one specific kind of action (usually direct remediation) rather than other meaningful actions (speech, organizing, donating, voting, raising awareness).
    • A2. The relevant action (A) is within S’s power and capacity.
    • A3. Individual action is sufficient (or the primary lever) to address (P).
    • A4. If the critic is not personally acting, then no one else is (ignores collective/institutional action).
    • A5. The critique is only valid if paired with an immediately actionable plan by the speaker.
    • A6. Attention/representation is “free” and cannot itself be part of remedy (denies agenda-setting effects).
  • Critical Questions

    1. CQ1 (Action pluralism): Are there multiple reasonable ways to “do something” about (P) besides (A)? Why is (A) the required one?
    2. CQ2 (Capacity): Does S have the capability, authority, or resources to do (A) at the scale needed?
    3. CQ3 (Collective action): Is (P) primarily an institutional/collective problem requiring government/industry coordination rather than individual fixes?
    4. CQ4 (Effectiveness): Would (A) actually help meaningfully, or is it symbolic/ineffective compared to systemic levers?
    5. CQ5 (Truth vs. messenger): Even if S did nothing, would that make the underlying diagnosis of (P) false?
    6. CQ6 (Burden placement): Why is the burden placed on the critic rather than on the responsible institutions/actors with duty/authority?
    7. CQ7 (Consistency): Do you apply this same requirement to people who defend the status quo (e.g., “If you think the grid is fine, what are you doing to ensure reliability?”)?

Puerto Rico grid mapping:

  • (P): “massive grid failure / inadequate response.”
  • Required (A) is implicitly “fix the grid / rebuild infrastructure.”

CQs 2 and 3 are decisive here: grid repair isn’t an individual-level remedy; authority lies with government + utilities + contractors; and “doing something” includes advocacy and accountability.
  3. Scheme 3: “You have a car, so criticizing billionaires/taxes is hypocritical” (blend of Scheme 1 + False Equivalence / Scale Collapse)
  • P1. S argues for policy (C) (e.g., higher taxes on billionaires, redistribution, anti-capture).
  • P2. S possesses some wealth/consumption item (W) (car, smartphone, comfort).
  • C. Therefore S’s policy critique (C) is hypocritical or invalid.

  • Hidden assumptions

    • A1. Any possession (W) is morally equivalent (in kind) to billionaire wealth (collapse of magnitude).
    • A2. The critique is “no one should have anything,” rather than “extreme concentration has systemic effects.”
    • A3. The policy claim’s validity depends on personal austerity by the speaker.
  • Critical Questions

    1. CQ1 (Relevance): Does S owning (W) address the causal claim about billionaire-scale wealth and political economy?
    2. CQ2 (Magnitude): Is the comparison proportionate, or does it ignore orders-of-magnitude differences that matter to the policy argument?
    3. CQ3 (Type): Is the critique about absolute possession or about concentration/power/externalities?
    4. CQ4 (Mechanism): Which mechanism of harm (capture, rent-seeking, inequality) is allegedly neutralized by S owning a car?
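For readers who like seeing the structure laid bare: these schemes can be encoded as plain data, with evaluation reduced to checking the critical questions. Below is a minimal Python sketch of my own (the class and function names are illustrative, not any standard argumentation library), using Scheme 1 trimmed to three of its CQs:

```python
from dataclasses import dataclass


@dataclass
class ArgumentScheme:
    """A Walton-style scheme: premises, a conclusion, and the critical
    questions (CQs) that can defeat an instance of the scheme."""
    name: str
    premises: list
    conclusion: str
    critical_questions: list


def evaluate(scheme: ArgumentScheme, answers: list) -> tuple:
    """An instance is only defeasibly acceptable if every CQ is answered
    favorably (True); any unfavorable answer defeats the argument."""
    defeaters = [cq for cq, ok in zip(scheme.critical_questions, answers) if not ok]
    return (len(defeaters) == 0, defeaters)


# Scheme 1 from the text, reduced to three of its CQs for brevity.
tu_quoque = ArgumentScheme(
    name="Argument from Hypocrisy (tu quoque)",
    premises=[
        "S asserts critique C of system X",
        "S's actions are inconsistent with C",
    ],
    conclusion="C should be rejected",
    critical_questions=[
        "Relevance: does S's inconsistency bear on the truth of C?",
        "Voluntariness: is participation in X meaningfully optional?",
        "Independence: does evidence for C survive S's inconsistency?",
    ],
)

# Puerto Rico mapping: all three CQs cut against the argument,
# so this tu quoque instance is defeated.
accepted, defeaters = evaluate(tu_quoque, [False, False, False])
```

In the electricity-use example, `accepted` comes back `False` with all three CQs listed as defeaters, mirroring how the critical questions dismantle the objection in prose.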

Vulgarity and Hypocrisy

Another critique of Bad Bunny was the alleged vulgarity of his music. The “protect the children” line is a classic moral-panic frame. It works because it lets someone sound principled while smuggling in a different complaint (often cultural/identity discomfort) without saying it out loud. But the double standard couldn’t be clearer: the same ecosystem criticizing Bad Bunny as “inappropriate for kids” is also boosting an explicitly partisan counter-show branded as wholesome and patriotic — while headlining Kid Rock, whose public persona has never been “children’s programming.”

The “think of the children” claim is being used alongside language/culture objections. The most on-the-nose example is Donald Trump’s post after Super Bowl LX, which (per reporting) explicitly complained about the performance being in Spanish and also framed parts of it as inappropriate for children. (EW.com) That combination matters: it’s not “purely about sexual content” — it’s also about cultural “Americanness” signaling. The alternative show was marketed as values-forward, which invites the hypocrisy critique. TPUSA described its “All-American Halftime Show” as centered on “faith” and “family values,” positioned as an alternative to what it framed as political messaging. (KOMO) So when that lineup includes Kid Rock, the “family-friendly” justification starts to look less like a principle and more like a pretext (or at least a selectively applied standard).

  • If vulgarity were the real concern, then vulgar artists would be criticized consistently regardless of ideology.
  • But the outrage is asymmetric: “vulgar” is deployed against Bad Bunny while the counter-program embraces partisan aesthetics and performers whose brands aren’t exactly PG, and it still gets labeled “values” and “patriotism.” (Awful Announcing)

That asymmetry strongly suggests the “kids” argument is strategic: a socially acceptable wrapper for other objections (political dissent, cultural change, Spanish-language visibility, etc.). (EW.com)

Grievance, Paranoia, Persecution, and Participatory Propaganda

I think this Bad Bunny performance exemplifies our culture’s tendency toward anti-intellectualism, paranoia, grievance politics, and persecution narratives. I’ve spoken and thought about this in other threads; we are collectively distorted by media ecosystems that systematically use methods identified by media scholars, computational propaganda scholars, and others to exploit cognitive vulnerabilities. I think the bad arguments are symptoms of that sort of targeting.

Strategies like participatory disinformation, combined with the incentive structures of social media platforms and algorithms, plus legacy media strategies, produce the sorts of low-quality conversations I had about this Bad Bunny situation. Essentially, the room consisted of people generating confident, low-quality, highly “patterned” takes. This fits really well with how media/propaganda scholars describe today’s information environment: it’s not just that some people are “dumb”; it’s that systems reliably elicit dumb arguments because those arguments are socially useful and algorithmically rewarded.

“Participatory disinformation” explains why the bad takes feel crowd-produced. A lot of contemporary manipulation doesn’t look like a top-down lie. It looks like people “doing the work” themselves: remixing clips, inventing interpretations, adding “just asking questions,” piling on moral panic, and spreading it through their networks. That’s basically what Kate Starbird calls participatory disinformation: the idea that online audiences, influencers, and political/media figures form a feedback loop in which ordinary users actively create and amplify misleading frames (often sincerely). (Center for an Informed Public) At the Super Bowl gathering, we saw the offline version of the same thing: a preloaded frame (“woke agenda,” “anti-America,” “think of the children,” “hypocrisy”) gets applied to ambiguous symbols, and the group co-produces a narrative. The Council of Europe framework by Claire Wardle and Hossein Derakhshan is useful because it distinguishes:

  • misinformation (false, not necessarily intended to harm),
  • disinformation (false, intended to harm),
  • malinformation (true but used to harm, e.g., decontextualized clips). (edoc.coe.int)

A lot of culture-war discourse runs on malinformation: a real visual (a cane field, a rainbow, a dance move) is stripped of context and recoded as “agenda.” You don’t need a fake; you need a frame.

“Computational propaganda” explains why these frames are everywhere. The Oxford Internet Institute defines computational propaganda as the use of automation, algorithms, and data to shape public life; Samantha Bradshaw and Philip N. Howard use that lens for organized social media manipulation. (demtech.oii.ox.ac.uk) Even when no one in the room is a bot, they’re often reacting to narratives that were selected and spread in an algorithmic environment that rewards outrage, moral disgust, threat perception, identity affirmation, and “owning” the out-group. So the “bad arguments” are often symptoms of a bigger selection mechanism: the ecosystem promotes the most socially contagious interpretation, not the most accurate one.

“Network propaganda” explains why it feels asymmetric and grievance-driven. Yochai Benkler, Robert Faris, and Hal Roberts argue that the U.S. disinformation crisis is strongly shaped by asymmetric media structures—networks that intensify partisan identity and reinforce grievance narratives. (OUP Academic) That helps explain why many of the examples discussed so far fit a persecution-complex template: “They’re attacking us / replacing us,” “They’re pushing it on children,” “They hate America,” and “If you disagree, you’re a traitor.” Those are not random; they’re identity-maintenance stories. This produces anti-intellectualism in everyday conversation. Once the dominant habit is frame-first, anti-intellectualism becomes almost inevitable:

  • Curiosity becomes disloyalty (“why are you defending them?”).
  • Complexity becomes suspicion (“that’s just spin”).
  • Context becomes “excuses” (“you’re rationalizing”).
  • Expertise becomes enemy signaling (“elitist propaganda”).

And then the conversational meta-goal changes: not “what’s true?” but “what does my team need this to mean?”

The “Bad Bunny arguments” are predictable outputs of the system. Without litigating the performance itself, notice that the argument forms are exactly the ones these systems generate, because they’re high-virality (they exploit processing fluency, the availability heuristic, and framing effects, among other cognitive shortcuts):

  • Moral panic (“protect the children”) → maximizes disgust + urgency.
  • Woke-spotting / pattern matching → turns ambiguity into certainty.
  • Tu quoque / hypocrisy (“you use energy so you can’t criticize”) → dismisses claims without addressing them.
  • Scapegoat substitution (institutional failure → blame a symbolic out-group figure) → simplifies causality.
  • Double standard as identity (“our vulgarity is ‘authentic,’ their art is ‘degenerate’”) → maintains in-group virtue while permitting in-group behavior.

Our culture increasingly rewards performing certainty over earning accuracy. Media ecosystems cultivate paranoia and grievance by incentivizing audiences to read any interpretable symbol as hostile propaganda. The resulting anti-intellectualism is not merely an individual failure; it’s an expected system output. These claims are grounded in the three pillars above: participatory disinformation, information disorder, and computational propaganda. (Center for an Informed Public)

More on Patterned Responses

The patternicity seems predictable. I’ve been observing the social media cascades that follow high-profile events. In the Bad Bunny example, massive participatory disinformation and outrage erupted immediately following the performance. I observed this in real time in the room: people on their phones, looking at their feeds.

It seems to happen in other ways as well, with targeted campaigns from powerful actors. Early last year I remember reading about the Israeli government funding social media campaigns designed to demonize Muslims. A few weeks later, I noticed a massive uptick in anti-Muslim hate speech among conservative-leaning people in my community. It was fascinating, almost like clockwork. These messages diffused through social networks, resulting in less agreement among the American people about funding Israel. It’s easier to support harsh Middle East policies if you think the enemy is sub-human. There is nothing new about this form of propaganda; the mechanisms and scale, however, are quite distinct. In a lot of high-salience moments, people aren’t reasoning from first principles so much as selecting from a small library of pre-made interpretive scripts (“protect the children,” “hypocrisy,” “anti-America,” “agenda,” “they’re coming for you”), then fitting whatever they just saw into that script.

The responses are “patterned” for a variety of reasons. For starters, templated arguments are memetic primitives. Some arguments spread because they’re short, portable, and socially useful. They don’t need to be true; they need to signal team identity, produce moral certainty, flip blame, and shut down discussion. So you get “modules” like moral panic, tu quoque, agenda detection, and treason frames. This aligns with what scholars describe as participatory disinformation: people don’t just consume narratives — they actively remix and enforce them in their own words, which makes the narrative harder to trace and easier to scale. (YouTube)

Secondly, algorithmic incentives select for “outrage-fit” interpretations and disincentivize truth-enabling conditions. Platforms reward content that triggers high-arousal emotion (anger, disgust, fear, humiliation). So the “winning” interpretation is often the one that reads the event as a threat, identifies a villain, and offers a righteous posture. That’s one reason the same few frames pop up “like clockwork”: computational propaganda research emphasizes how algorithms + targeted distribution + coordinated human curation can push misleading frames at scale. (navigator.oii.ox.ac.uk)

Finally, the offline room mirrors the online feed. People on their phones are fed, live, whatever their feeds are surfacing. This creates a very tight loop: salient event → immediate feed of pre-framed clips → group repeats/competes in take-making → repetition creates perceived consensus (“everyone is saying…”).

There are two pathways that both lead to the same output: bad arguments at scale.

  1. Path A: Coordinated influence operations - These are the “powerful actors” cases: state-linked messaging, paid networks, fake/inauthentic accounts, influencer seeding, etc. The research umbrella here is computational propaganda and influence operations. (navigator.oii.ox.ac.uk) On the specific example about Israeli government-linked efforts: there has been investigative reporting describing government initiatives intended to shape U.S. discourse (including via partner orgs and messaging operations) and public calls by Benjamin Netanyahu to invest in influence operations. (The Guardian) Separate reporting has also alleged campaigns that spread Islamophobic content; those claims exist in the media ecosystem, but details and framing vary by source, so it’s best to treat them as allegations unless you anchor to the underlying documents or platform takedown reports. (Middle East Monitor) Even when the top-down coordination is real, it rarely “convinces” people by pure argument. It primes the narrative environment so that ordinary users supply the rest (participatory disinformation again).

  2. Path B: “Coordination without conspirators” (organic but structurally produced) - Even with zero centralized plot, you can still get synchronized cascades because everyone is exposed to the same few viral clips, the same influencer class reacts within minutes, and the same engagement incentives apply. This produces the same pattern: an “instant interpretive swarm” that feels orchestrated, because it is structurally synchronized.

Think of high-profile events producing a predictable 4-stage cascade:

  1. Seeding: A few accounts post the “first frames” (often clipped, decontextualized).
  2. Amplification: Influencers/pundits choose the frame that best fits their audience incentives.
  3. Participation: Thousands of users create their own versions (participatory disinformation).
  4. Normalization: The frame becomes “what everyone knows,” and disagreement is treated as suspect.

Under these dynamics, arguments get worse for a variety of reasons. First, the ecosystem rewards dismissal moves. A lot of the highest-performing “arguments” aren’t about truth; they’re about ending the conversation: “If you care, why don’t you fix it?”, “You live in society, curious!”, or “They’re indoctrinating kids.” These are rhetorically efficient because they require no evidence and force the other person onto defense. Second, grievance politics and persecution frames are “unfalsifiable glue.” Once a group identity is built around being under attack, every symbol can be interpreted as hostile. That produces paranoia (“hidden agenda everywhere”), anti-intellectualism (context = “excuses”), and escalating certainty (nuance = disloyalty). Low-quality arguments are not random; they’re the predictable output of an attention-optimized ecosystem in which participatory disinformation and influence operations (coordinated or structural) select for identity-confirming frames over accurate interpretations. (navigator.oii.ox.ac.uk)

To systematically understand this phenomenon, you can use the following framework to identify and explain a media cascade:

  1. Frame convergence: how fast do people settle on the same 2–3 narratives?
  2. Template reuse: how often do you see identical argument forms (“protect kids,” “hypocrisy,” “anti-America”)?
  3. Timing spikes: do certain frames appear within minutes of influencer posts?
  4. Clip ecology: which specific clips/images become the “evidence objects”?
  5. Network bridges: which accounts connect fringe framing to mainstream community feeds?

Alex Pretti: When Something Becomes Evidence

This is something that has been on my mind for as long as I can remember. “Evidence” sounds like a simple, objective relationship (“E supports H”), but in actual inferential life it’s mediated—by background assumptions, concepts, standards, goals, and social context. So two people can share all the same “facts” and still reasonably (or unreasonably) disagree about what those facts mean for a claim. I first want to explore the nature of evidence before showing how this “underdetermination” of evidence can be artificially amplified (by the media), as observed in the Alex Pretti case.

Evidence isn’t a property of a fact alone; it’s a role a fact plays in an argument. A few concepts need to be clarified:

  • Data / fact: some observation, testimony, measurement, record.
  • Hypothesis / theory: what you’re trying to decide between. Some explanatory account of the data/facts.
  • Linking assumptions: how the world would have to be for that data to show up if the hypothesis were true (measurement reliability, causal story, category definitions, base rates, etc.).
  • Standards of relevance: what counts as “support” (prediction? explanation? coherence? robustness? practical stakes?).

On this view, E becomes evidence for H only relative to a background package: E is evidence for H given background assumptions B and a question Q. That’s already enough to explain a lot of “same facts, different conclusions” disagreements: people often share E, but not B or Q.
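This relativization can be stated compactly. A minimal formalization, in my own notation (it simply makes the background package explicit in a probability-raising definition):

\[
E \text{ is evidence for } H \text{ relative to } (B, Q)
\quad\text{iff}\quad
P(H \mid E, B) > P(H \mid B),
\]

where the hypothesis space over which \(P\) is defined is fixed by the question \(Q\). Two people who share \(E\) but differ in \(B\) or \(Q\) are comparing different conditional probabilities, so they can coherently disagree about the direction of support.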

There are a few common mechanisms that explain why the same fact can support opposite conclusions. They look similar on the surface (“we disagree about evidence”), but they’re different kinds of disagreement.

  1. Different background beliefs (priors, base rates, auxiliary assumptions): Even in Bayesian terms, whether E favors A or B depends on likelihoods that come from background beliefs: P(E|A) vs P(E|B). If we disagree about those, we can disagree about evidential direction.
  2. Different “reference classes” (what this case is a case of): A fact can shift meaning depending on what category you place it in. For example, something as simple as “He left early” can be evidence of disinterest if it’s a date, or evidence of responsibility if it’s a parent leaving to pick up a kid. The observed behavior is the same; the classification changes the inference.
  3. Different causal stories (correlation gets “explained away”): E can look like evidence for H until someone introduces an alternative causal pathway that makes E expected even if H is false. This is one big way “facts themselves impact interpretive frames”: new facts can change which causal model you think you’re in, which changes what counts as evidence going forward.
  4. Different standards of support (prediction vs explanation vs robustness): Two people might both agree that E raises P(H), but disagree whether that is good enough to “count as evidence worth acting on.” One person might say “It nudges probability, so it’s evidence.” Another might say “Unless it’s robust across methods / not cherry-pickable / survives adversarial checks, it’s not evidence (or not strong evidence).” So the dispute isn’t about probability raising, but about epistemic norms.
  5. Different feature selection (which aspect of the fact matters): Many facts are high-dimensional. A graph has slope, variance, outliers, time window, scale, omitted variables. A witness report has confidence, vantage point, incentives, consistency, corroboration. Evidence disputes often come from spotlighting different features: one person treats the outliers as signal, the other as noise.
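Mechanism 1 above can be made concrete with a toy Bayes calculation. All numbers are illustrative assumptions of mine, not estimates of anything: two reasoners share the same prior and the same observation E, but their background beliefs assign different likelihoods, so the same E moves them in opposite directions.

```python
# Toy model: same prior, same E, different likelihoods P(E|H)
# drawn from different background beliefs B.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule for a binary hypothesis."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.5  # both reasoners start agnostic about H

# Reasoner 1's background beliefs make E expected under H.
r1 = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.3)

# Reasoner 2's background beliefs make E expected under not-H.
r2 = posterior(prior, p_e_given_h=0.3, p_e_given_not_h=0.8)

print(f"Reasoner 1: P(H|E) = {r1:.2f}")  # 0.73 — E supports H
print(f"Reasoner 2: P(H|E) = {r2:.2f}")  # 0.27 — E undermines H
```

Neither reasoner violates Bayes’ rule; the disagreement lives entirely in the likelihoods, which is exactly where background beliefs enter.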

It’s also common to have situations where “E used to support A, now supports B.” For example, a shift after sustained media exposure can happen through multiple routes, some rational, some not.

  • Rational-ish routes (in principle):

    • Model change: in Bayesian terms, it can be perfectly coherent; if your model of the world changes, what E discriminates between changes too.
    • New auxiliary beliefs: you learn (or think you learn) the measurement is biased, or the causal mechanism is different than assumed.
    • Reframing the question: you stop asking “Is A true?” and start asking “Is A the best explanation?” or “Is A what matters?”
    • Changing the hypothesis space: you previously considered only A vs not-A; later you entertain B as a live alternative that fits E better.

  • Non-rational / socially-driven routes:

    • Motivated reasoning: you prefer B (identity, group status, moral signaling), so you re-weight what “counts.”
    • Availability and salience: media makes certain exemplars vivid, changing what seems typical.
    • Concept drift: the meaning of key terms shifts (“fraud,” “violence,” “freedom,” “expert”), so the same E is now “about” something else.
    • Trust realignment: you change which sources you treat as credible, which affects what you treat as “fact” in the first place.
So the evidential relation changes either because the world-model changes, or because the attention/trust/identity machinery changes (or both).

Philosophers often define evidence as a probability-raising relation, that is: “E is evidence for H iff it raises P(H).” This definition hides the messy parts of reasoning with and about evidence in the wild. It excludes recurring questions such as:

  • Where do priors come from?
  • How do we choose the hypothesis space?
  • How do we model likelihoods?
  • What do we treat as the relevant description of E?
  • How do we handle underdetermination (many theories fit the same facts)?

Those are not small technicalities; they’re where most disagreement lives. In real life, we often use meta-evidential norms—criteria that make a putative E trustworthy as evidence:

  • Independence: does it survive when measured different ways?
  • Robustness / replication: does it repeat across contexts?
  • Specificity: is it predicted uniquely by H, or also by many rivals?
  • Resistance to cherry-picking: does it depend on selective time windows, metrics, or anecdotes?
  • Error sensitivity: would we likely notice if we were wrong?
  • Adversarial testing: has it been stress-tested by people who want to debunk it?

Notice these norms aren’t captured by “probability raising” alone—they’re about how likely we are to be misled, given human and institutional limitations.

When you hear “E is evidence for H,” silently expand it to: Under description D of the data, within model M, given background assumptions B, with standards S, for question Q, E supports H. Then disagreements become diagnosable:

  • Are we disagreeing about D (what the fact is, or which aspect matters)?
  • about M/B (causal story, base rates, reliability, incentives)?
  • about S (what level/type of support counts)?
  • about Q (what we’re even trying to decide)?

It’s at this level of meta-evidential reasoning that media operates.

How this Relates to the Case

What revived this line of questioning was the recent murder of Alex Pretti. I know people who, in the past, have viewed similar events as clear brutality and murder. However, after massive exposure to Fox News, and other life events, they see the Alex Pretti case as a clear example of “terrorism,” or whatever the federal government’s current narrative is.

People “seeing the same event” but slotting it into brutality/murder vs. terrorism is almost a textbook case of how classification frames and trust networks turn raw facts into “evidence for X.” Using the Alex Pretti case as the anchor: the dispute isn’t only “what happened,” it’s also “what kind of thing was it?” In public reporting, you can already see (1) contested factual claims about the encounter and (2) official/political labels being applied (e.g., “domestic terrorist,” “would-be assassin”), alongside calls for investigations. (Axios) Here are the main moving parts that produce the divergence:

A lot of disagreement happens upstream of “probability raising,” at the level of what gets treated as the stable fact. In this case, different information environments emphasize different “base facts”: official statements about the threat posed / firearm / resistance (People.com), video interpretations (what was in his hands, what happened right before shots, etc.) , and internal/preliminary review claims that contradict an initial narrative (opb). If someone comes to treat the government’s characterization as the primary datum (and contrary footage as deceptive/edited/out-of-context), then the evidential pipeline flips even if the “surface facts” sound similar.

Labels like “terrorism” aren’t just descriptions; they bundle an entire causal-moral package: who is presumed the aggressor, what level of force is presumed justified, and whether the event is treated as a one-off tragedy or part of a larger coordinated threat. So the same observed elements (“there was a gun nearby,” “there was a protest,” “agents shot him”) can be reinterpreted once the category is chosen. And crucially: in public discourse, category selection is often driven by authority cues and coalitional cues (who said it, which side says it), not just by a neutral review of details. You can see high-profile figures amplifying “assassin/terrorist” framings even amid disputed accounts. (New York Post)

When you watch two people argue, it can look like “same facts, different inference,” but it might be one of these:

  1. Factual-model dispute (potentially evidence-responsive) - This kind of disagreement can update with better evidence—body-cam releases, independent investigations, corroborated timelines. There are ongoing investigations reported (civil rights / DHS / DOJ), which is exactly the kind of thing that—if trusted—should move beliefs. They genuinely disagree about:
  • what happened (e.g., whether he was armed at the moment he was shot)
  • source reliability (video vs DHS vs witnesses vs internal review)
  • causal story (threat escalation, sequence of actions)
  2. Norm/identity dispute (only loosely evidence-responsive) - Here, extra facts often don’t resolve it, because the disagreement is about standards and trust, not data. They may share many “facts” but disagree about:
  • what “counts” as terrorism (concept boundaries drift)
  • what level of threat “justifies” lethal force (norms)
  • whose institutions are trusted (epistemic allegiance)

Long media exposure can genuinely change “what counts as evidence”. Even without bad faith, people can undergo something like lens training:

  • Salience training: repeated examples make certain interpretations feel “obvious” (availability effects).
  • Suspicion inversion: the person learns to treat contrary sources as inherently manipulative; disconfirming facts become evidence of “cover-up.”
  • Concept drift: “terrorist” expands from “politically motivated violence against civilians” to “anyone opposing state enforcement with perceived threat potential.”
  • Default causal script: “protester + gun” becomes an automatic “attack on law enforcement” script.

Once those updates occur, the same E will be routed differently: not because they “ignored evidence,” but because they changed the epistemic wiring that tells them what evidence even is. If you want to locate where the divergence is, try asking (even just in your own head) four questions:

  1. What are you treating as the fixed facts? (video? official statements? medical examiner classification? witness reports?) (People.com)
  2. Which sources are allowed to count, and which are disqualified in advance? (this is often the biggest divider)
  3. What category is being applied (terrorism / murder / tragedy / justified force), and what does that category imply?
  4. What would change your mind? If the answer is “nothing,” it’s mostly identity/team. If the answer names concrete potential releases/findings, it’s at least partially evidence-governed.

In later sections, we will discuss how media can effectively change these epistemic norms and frames.

Facts of the Situation

Multiple major outlets describe video/forensic analyses showing two incontrovertible facts: (1) he’s disarmed, and then (2) agents fire ~10 rounds into him. Reuters reported on verified video footage showing him holding a cellphone, being wrestled/disarmed, and then shot, and ABC reports a forensic audio analysis finding 10 shots in under ~5 seconds. (Reuters) People also summarizes footage (citing other reporting) as showing him unarmed/holding a phone while pinned. (People.com) So given those facts, the interesting question becomes: how can someone still come away with “terrorism” (or “justified”) rather than “brutality/murder”? Here are the main mechanisms that produce that divergence, even when people aren’t consciously lying.

A “terrorism” lens can treat the disarmed moment as almost irrelevant, because it’s classifying the person/event as belonging to a threat category: “He was a terrorist because he intended violence / was part of an attack / aligned with a cause”. Then the shooting is interpreted as “neutralization,” “split-second,” “chaotic,” or “tragic but necessary.” So the evidential center moves from what happened at second T to a broader claim like what kind of actor this was. That’s why two people can agree “he was disarmed” and still disagree on the headline label.

Once someone’s information environment (say Fox News + social feeds) trains them to distrust certain sources, disconfirming facts stop functioning as evidence and become “propaganda,” “selective editing,” or “context missing.” At that point, even identical footage can be pre-labeled: “That video angle is misleading”, “We don’t see what happened right before”, “He could still reach for a weapon”, or “The government has intel we don’t”. In fact, these were some of the exact responses I encountered when confronting people with these two basic facts. This is less “probability raising” and more “which pipelines are allowed to produce facts.”

I focus on the terminal slice of the event: disarmed → shot many times → therefore brutality/murder. A security-state lens focuses on the process slice: “threat emergence → struggle → risk to agents → lethal force.” In that frame, once a gun is in play, later disarmament doesn’t erase the earlier threat narrative—it just becomes “the moment right before he was shot.” Reuters/People/others describe the official account emphasizing resistance/handgun, while other reviews describe evidence contradicting parts of that. The interpretive fight is often: which slice gets to be “the fact” that controls the conclusion? Unfortunately, this is often determined by who can control or influence the media.

There is also moral inversion: “if he’s the enemy, then constraint violations become ‘necessary.’” This is how highly publicized events become like “team sports,” but it’s more specific than “identity politics.” It’s a moral-cognitive switch: if they are “terrorists,” then harsh force reads as protection; if we are “protectors,” then errors are excused as fog-of-war, acceptable in a confrontation with a terrorist. Once that inversion happens, the same detail (“10 shots”) can be heard as either “execution” or “they had to make sure he was down.”

In polarized contexts, endorsements by officials (e.g., Department of Homeland Security statements) or party-aligned figures can function as evidence even when the underlying factual claims are contested. And when later evidence contradicts initial narratives, some people update; others double down by treating the contradiction itself as proof of a conspiracy or hostile media. Reuters also reports DOJ/DHS civil-rights review activity around the incident, which becomes “evidence” differently depending on trust. (Reuters)

Even if everyone agreed on the two facts, people can still diverge because they’re answering different implicit questions: “Was lethal force justified at the moment he was disarmed?” versus “Was he a member/example of a threatening category, and are agents broadly justified against that category?” Those are not the same inferential target, so “what counts as evidence” shifts. A good question to ask your interlocutor in discussions like these is: “What specific new information would make you stop calling it terrorism and start calling it unjustified killing?” If they can name concrete potential updates (body-cam release, timeline, independent reconstruction), they’re at least somewhat evidence-responsive. If the answer is “nothing,” it’s almost purely coalitional. The latter, unfortunately, characterizes most situations. Many people think it’s noble to firmly entrench themselves in an opinion regardless of counterevidence. They actually think it’s a virtue.

Bayesian Perspective

Bayesian updating is a rule for conditionalizing once you already have a model, but a lot of the real action is in model formation: deciding (i) what your hypothesis space is, (ii) what counts as the observation, and (iii) how the observation is generated. Bayes tells you how to update inside a model, but it doesn’t necessarily tell you which model you should be in (anyone who studies statistics will yell at me for saying this; obviously there are model-selection criteria, but I’m taking the space of possible models to be largely a function of the meta-level filters that media shape). Deciding “what is evidence” is largely about choosing/constructing the model.

Bayes assumes E and H are already well-defined random variables. In the equation, \(H\) and \(E\) aren’t English sentences; they’re events/propositions in an algebra with specified meanings. To even write \(P(E\mid H)\), you’ve already done a ton of work: you have fixed a hypothesis space \(\{H_i\}\) (“what are the live possibilities?”), you have fixed an observation space \(\{E_j\}\) (“what are we treating as the data?”), and you have fixed a likelihood model \(P(E\mid H)\) (“how would the world produce this data under each hypothesis?”). Choosing \(E\) and \(H\) is outside the bare update rule.

Two people can update in opposite directions without “violating Bayes”. That can happen through several “external” degrees of freedom:

  1. Different hypothesis spaces: If one person’s space is \(\{A, \lnot A\}\) and another’s is \(\{A, B, C\}\), the same observation can shift mass differently. Sometimes adding a new live alternative “explains away” what previously supported \(A\).
  2. Different descriptions of E (coarse vs fine graining): One person uses \(E\) = “he was shot 10 times,” another uses \(E'\) = “he was shot 10 times after being disarmed,” another uses \(E''\) = “a video shows X.” Those are different propositions; Bayes will happily treat them differently. This is basically asking: what is the data point? The raw sensory stream, a testimony, a measurement outcome, or an interpreted claim?
  3. Different likelihoods (the real engine of disagreement): Even with the same H and E, if I think \(P(E\mid H)\) is high and you think it’s low, we’ll diverge. And those likelihood disagreements are often about the upstream “evidence” issues: reliability of sources, selection effects, incentives to distort, and what background conditions obtain.

So what is Bayesianism good for here? It’s good for locating disagreement. When two people disagree about “what counts as evidence,” Bayesian framing helps you ask: Are we disagreeing about the hypothesis space? the data proposition (what E even is)? the trust/measurement model (how E was generated)? the likelihoods (diagnosticity)? or the priors? That turns “we see the same facts differently” into a checklist of specific moving parts.
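The hypothesis-space point in particular is easy to demonstrate. A sketch, with made-up numbers: the same observation strongly supports A when the only live options are A and not-A, but much of that support is “explained away” once a rival B that also predicts E is admitted as a live alternative.

```python
# Toy demonstration of "explaining away" via hypothesis-space choice.
# All priors/likelihoods are illustrative, not estimates of anything.

def posteriors(priors, likelihoods):
    """Normalize prior * likelihood over an explicit hypothesis space."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# Reasoner 1 considers only A vs not-A; E is likely under A alone.
narrow = posteriors(
    priors={"A": 0.5, "not-A": 0.5},
    likelihoods={"A": 0.8, "not-A": 0.2},
)

# Reasoner 2 admits rival B, which predicts E even better than A does.
wide = posteriors(
    priors={"A": 0.4, "B": 0.2, "other": 0.4},
    likelihoods={"A": 0.8, "B": 0.9, "other": 0.2},
)

print(narrow["A"])  # 0.80: in the narrow space, E looks decisive for A
print(wide["A"])    # ~0.55: B soaks up much of E's support
```

Neither reasoner is mis-applying Bayes; they are running the same rule over different hypothesis spaces.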

Bayes Applied to Alex Pretti

Applied to the Alex Pretti case, the Bayesian point that “what counts as evidence is upstream of Bayes” becomes very concrete, because we can name the hidden variables people are implicitly disagreeing about. Major reporting does support something close to what I’ve summarized, with the normal caveat that investigations are ongoing:

  • Reuters reports video it verified shows Alex Pretti holding a cellphone, being wrestled to the ground and disarmed, and then being shot moments later. (Reuters)
  • ABC reports a forensic audio analysis finding 10 shots in under five seconds. (ABC News)
  • AP reports videos contradict initial DHS claims and show a firearm being removed while he appears to be holding only a phone. (AP News)
  • Reuters also reports an initial government review did not mention him “brandishing” a firearm (contrary to early official rhetoric). (Reuters)
  • Reuters fact-checks viral edited imagery that implied he was holding a gun in-hand. (Reuters)
  1. Set up a Bayesian toy model for the dispute

Let’s define competing hypotheses (these are simplified but map to the debate):

  • H₁ (Unjustified lethal force / wrongful killing): lethal force was not justified at the moment shots were fired.
  • H₂ (Justified lethal force): lethal force was justified (e.g., imminent threat, reasonable perception of threat).
  • H₃ (“Terrorism” framing): Pretti was a “terrorist” / would-be attacker (a category/intent claim), and the shooting is treated as neutralization.

Now define evidence candidates:

  • Eáµ¥: Verified bystander video shows him holding a phone, pinned/disarmed, then shot. (Reuters)
  • E₁₀: 10 shots fired in <5 seconds (forensic audio analysis). (ABC News)
  • Eâ‚’: Official/DHS initial narrative emphasizing firearm/threat (even when later evidence disputes details). (AP News)

Bayes update inside a model is: \[\text{Posterior odds} \propto \text{Prior odds} \times \frac{P(E \mid H_i)}{P(E \mid H_j)}\]

The whole fight is over what E is and what the likelihoods \(P(E\mid H)\) are.
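A minimal numeric sketch of that fight, using the posterior-odds formula above. Every likelihood below is an illustrative assumption of mine, not an estimate from the case; the point is only that the frame operates through the likelihoods, and the likelihoods drive the verdict.

```python
# Posterior odds for H1 (unjustified force) vs H2 (justified force),
# under two different background frames. Numbers are illustrative.

def posterior_odds(prior_odds, likelihood_ratios):
    """Posterior odds = prior odds x product of likelihood ratios
    P(E|H1)/P(E|H2), treating the evidence items as independent."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0  # start indifferent between H1 and H2

# Use-of-force frame: the disarmed moment is the controlling fact.
use_of_force_frame = [
    0.9 / 0.2,  # E_v: "pinned, holding a phone" is hard to square with H2
    0.6 / 0.3,  # E_10: 10 rounds in <5s fits H1 better than H2
]

# Threat-category frame: the disarmed moment is demoted as irrelevant.
threat_category_frame = [
    0.5 / 0.5,  # E_v: treated as uninformative about the "real" question
    0.4 / 0.6,  # E_10: read as "making sure the threat was down"
]

print(posterior_odds(prior, use_of_force_frame))    # 9.0: strongly favors H1
print(posterior_odds(prior, threat_category_frame)) # ~0.67: leans H2
```

Same prior, same nominal facts, opposite posterior movement, because the frame rewrote the likelihoods.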

  2. The “what counts as evidence” problem becomes explicit as latent variables

People are not updating on “what happened.” They’re often updating on:

  • a) A trust variable: which channel produces “real E”? Let T be “Reuters/AP/ABC video verification is reliable” vs “mainstream outlets manipulate / omit context.”

    • If \(T=\text{high}\), then Eᵥ is high-quality evidence.
    • If \(T=\text{low}\), then Eᵥ is not treated as E at all; it becomes “propaganda,” and may even be used as evidence of conspiracy.

This is why two people can hear the same sentence (“video shows he was holding a phone”) and treat it oppositely. Reuters even had to fact-check a viral altered image implying a gun-in-hand — that kind of info warfare feeds distrust and changes T. (Reuters)
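The trust variable T can itself be modeled explicitly as channel reliability. A sketch under my own simplifying assumption: with probability T the channel faithfully transmits the observation, and with probability 1−T the report is treated as noise, equally likely under either hypothesis.

```python
# Trust as a reliability channel: as T falls, the likelihood ratio of
# a *reported* observation collapses toward 1, so the "same video"
# stops functioning as evidence. Numbers are illustrative.

def report_likelihood_ratio(trust, p_e_given_h1, p_e_given_h2):
    """LR of a reported E, mixing faithful transmission with noise."""
    noise = 0.5  # uninformative: a fabricated report favors neither side
    l1 = trust * p_e_given_h1 + (1 - trust) * noise
    l2 = trust * p_e_given_h2 + (1 - trust) * noise
    return l1 / l2

# E_v strongly favors H1 if the channel is trusted...
print(report_likelihood_ratio(0.95, 0.9, 0.2))  # well above 1

# ...and is nearly inert if the channel is distrusted.
print(report_likelihood_ratio(0.05, 0.9, 0.2))  # close to 1
```

So the person who “ignores the video” may be updating coherently given a tiny T; the disagreement has simply moved upstream, into the trust model.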

  • b) A “frame” variable: what question are we answering? Let F be the frame:

    • \(F =\) use-of-force frame: “Was lethal force justified at the moment of shooting?”
    • \(F =\) threat-category frame: “Was he the kind of person/event that counts as terrorism?”

Under the threat-category frame, the moment of being disarmed can be demoted in relevance. The model shifts from “imminent threat now” to “intent / affiliation / category,” which is not the same hypothesis. Bayes can’t tell you which F to adopt; it can only update once you’ve adopted it. If you want to describe the divergence precisely in Bayesian terms, people aren’t disagreeing mainly about Bayes’ rule. They’re disagreeing about the inputs:

  1. Hypothesis space (is “terrorist” a live hypothesis? is “state murder” a live hypothesis?)
  2. Evidence variable (is the bystander video a legitimate observation? is the official narrative the observation?)
  3. Likelihood model (how often does “disarmed + 10 shots” happen under justified force vs unjustified force?)
  4. Trust and framing latent variables (T and F)

That’s why the same public event can produce opposite posterior movements even among “Bayes-respecting” reasoners. A sharp way to test whether someone is updating on evidence or allegiance is to ask them to specify an observation that would have high likelihood under the rival hypothesis. For example, if someone is in the “terrorism / justified” camp, ask: “If independent body-cam angles show he was pinned and his firearm was already in an agent’s hand before the first shot, what does that do to your confidence in ‘justified’?” (AP/Reuters reporting suggests the video is already close to this.) (AP News) If they can articulate a counterfactual that would move them, they’re at least modeling evidence. If they can’t, then their “Bayes” is effectively frozen because priors/trust/frame dominate.

This is not just "disagreement"

What the Alex Pretti case “shows” at a deeper level isn’t just disagreement over an incident; it’s the way media ecosystems can rewire the inputs to reasoning—your categories, priors, likelihoods, and even what counts as a datum worth updating on. You can see the basic ingredients in the public record:

  • There’s contested narration early on from officials, and then a lot of public attention to bystander video and reconstructions. Reuters reports verified video showing Pretti holding a cellphone, being wrestled/disarmed, then shot; Reuters also reports a preliminary CBP review that didn’t include the “brandishing” claim that circulated in the first hours. (Reuters)
  • There was also an explicit “information war” artifact: Reuters fact-checked a viral edited still that nudged viewers toward “he had something (a gun) in his hand,” illustrating how small manipulations can steer interpretation. (Reuters)
  • Independent / transparent investigation demands became part of the story itself, with former DOJ lawyers calling for transparency and polling showing broad support for independent investigation. (Axios)

Media ecosystems don’t just add evidence; they set the “epistemic parameters”. Bayes assumes you already have a hypothesis space \(H\), an observation \(E\), priors \(P(H)\), and likelihoods \(P(E\mid H)\). A polarized media ecosystem pressures all four—especially the parts Bayes treats as “given.”

Calling something “terrorism,” “rioting,” “self-defense,” “execution,” etc. isn’t a conclusion at the end of reasoning—it’s often a front-loaded categorization that changes what hypotheses feel live. Once “terrorist” becomes a salient category, the implicit hypothesis set shifts from “justified vs unjustified force at the moment" to “neutralizing an enemy actor vs being soft on enemies” …and a ton of downstream reasoning is now happening inside that different space. You can watch this happen socially: high-profile political statements become “evidence tokens” for the category claim, even when later reporting complicates the initial narrative. (Axios)

A key hidden variable is: which institutions you treat as truth-generators. If a person’s ecosystem trains them that “mainstream verification” is suspect, then Reuters/AP/ABC-style claims about video or reconstructions don’t function as (E); they function as anti-evidence (“they’re covering for the other side”). Reuters’ fact-check about edited imagery is a good micro-example of how the ecosystem can inject ambiguity and then monetize the ambiguity. (Reuters)

The likelihood \(P(E\mid H)\) is basically “how expected is this observation if that story about the world is true?” Media ecosystems sell stories (causal scripts): “lawless chaos / threats to order”, “state overreach / impunity”, “deep state / coverups” and “violent radicals everywhere”. Those scripts change the likelihoods people implicitly assign. For instance, if you’ve been trained into a “constant threat” script, then “multiple shots quickly” can be felt as expected under “reasonable officer response,” inflating \(P(E\mid H_{\text{justified}})\) and shrinking the Bayes factor.
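A minimal numerical sketch (illustrative probabilities, not measured ones) of how that script-driven inflation shrinks the Bayes factor:

```python
# Illustrative numbers only. E = "multiple shots fired quickly at a
# disarmed man"; BF = P(E | justified) / P(E | unjustified).
# A BF well below 1 means E is strong evidence against "justified".

def bayes_factor(p_E_justified, p_E_unjustified):
    return p_E_justified / p_E_unjustified

# Without the script: E is rare if the force was justified.
bf_baseline = bayes_factor(0.05, 0.60)

# Inside a "constant threat" script: E feels expected even under
# justified force, so P(E | justified) is inflated.
bf_scripted = bayes_factor(0.40, 0.60)

print(round(bf_baseline, 3), round(bf_scripted, 3))
```

The observation itself has not changed; only the likelihood model has, and with it most of the evidential force of \(E\).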

Profit + algorithms push toward frames that are high engagement, not high accuracy. Two mechanisms matter a lot. First, engagement incentives favor high-arousal frames. Platforms and audience-driven media reward content that triggers outrage, fear, and certainty—because it keeps people watching/sharing. That doesn’t merely misinform; it selects for frames that are resistant to disconfirmation (e.g., conspiracy-friendly frames where counterevidence proves the conspiracy). Research/policy summaries on polarization and platform incentives describe how algorithmic distribution and self-selection can create homogeneous clusters and accelerate misinformation diffusion. (citap.unc.edu) Second, participatory disinformation turns users into co-producers. “Participatory disinformation” is exactly what it sounds like: people aren’t just consuming a narrative; they’re helping make it. The Reuters edited-still episode is an example of how small “crowd-sourced interpretation hacks” can be amplified until they feel like common knowledge. (Reuters) That co-production creates identity investment: backing down isn’t just updating a belief; it’s abandoning a team contribution.

Institutional rot isn’t only “bad actors”; it’s broken feedback loops. Information correction gets weaker when local institutions can’t credibly investigate or communicate, or when federal/local conflict makes transparency harder (raised in reporting about investigative friction and calls for independent probes). (Axios) Accountability capacity can be affected by staffing/mandate changes and selective enforcement patterns—Reuters has reported more broadly on scaled-back civil-rights enforcement capacity and how it interacts with cases like Pretti’s. (Reuters) In Bayesian language: the system’s ability to generate high-quality (E) deteriorates, and then people rationally (or semi-rationally) fall back on identity/trust proxies. These are feedback loops inherent to the system.

The Pretti episode is a case study in how modern information systems shift people from updating on shared public evidence to updating on identity-indexed narratives, because the ecosystem manipulates (i) what counts as an observation, (ii) who is trusted to certify observations, and (iii) which frames are emotionally and socially rewarded.

The Ecosystem

Causal scripts diffuse through social networks via platform incentives, influencers, and algorithmic boosting. Public discourse degrades as a consequence, leading to deep disagreement and policy impasse. Causal scripts produce pattern-like responses and arguments, so much so that it is almost predictable what a person will say; what they do say is usually a variation on the common responses that have become acceptable discourse, as dictated by platform media norms. “Arguments” are not really arguments, but thought-terminating clichés used to signal identity. This is the “ecosystem-level” story: causal scripts (ready-made explanations + moral conclusions) spread through networks in ways that are structurally favored by platform incentives, and the result is degraded discourse: people talk past each other, disagreements become “deep” (about trust and identity), and policy gets stuck.

Causal scripts usually do not become dominant all at once. They tend to spread in stages, moving from a niche interpretation to a default lens for understanding events, and the first push often comes from the way platforms are designed. On major social platforms, ranking systems heavily reward engagement signals like replies, reshares, and watch time, and research consistently shows that this tends to amplify emotionally charged and divisive content because outrage and conflict generate exactly those kinds of reactions (Knight First Amendment Institute). In practice, that gives certain narratives a built-in advantage: scripts that make people angry, afraid, or morally certain travel farther and faster than ones that ask for patience or nuance.

Influencers play a major role in this process, not just by offering opinions, but by acting as what you might call “script entrepreneurs.” What they often produce is not a one-off take, but a reusable narrative template: a cast of heroes, villains, and victims; a causal mechanism that explains what is “really” happening; a moral frame that tells people what they should feel; and an action cue that tells them what to do next, whether that is sharing, boycotting, voting, or punishing. The strength of this packaging is that it is optimized for speed. It is short, vivid, and easy to repeat, which makes it highly portable across audiences. In that sense, it works like plug-and-play cognition: once people learn the template, they can apply it to the next event almost automatically.

Once one of these templates starts gaining traction inside a particular community, algorithmic boosting and network structure can turn it into a cascade. Social networks are shaped by homophily, so people are often surrounded by others who already share similar views, and that makes reinforcement easy. Likes and retweets function as social rewards, repeated posts across multiple accounts create the appearance of consensus, and remixable formats like clips, stitches, and quote-tweets help the script spread while preserving its core message. Formal social network models show how misinformation and fake news interact with these network dynamics to increase polarization and lock in false beliefs over time (ScienceDirect).

What makes this especially durable is that the process is not only top-down. Audiences become active participants in producing and maintaining the script. Communities collaboratively “work” the narrative by extracting short clips, creating captions and memes, collecting “receipts,” inventing rebuttal fragments, and building a shared vocabulary around the story. Research on participatory disinformation directly describes how these collaborative dynamics help shape disinformation narratives and sustain them long after the original post or claim appeared (ACM Digital Library). By that point, the script is no longer just a message people encountered; it becomes a communal object that they helped build. People are no longer merely persuaded by it—they are invested in it.

This is where the deeper damage shows up: public discourse starts to break down because the epistemic commons—the shared space where people can at least agree on what reality looks like—begins to collapse. Once these scripts take over, disagreement is no longer just about interpreting the same evidence differently. It becomes a fight over what even counts as evidence in the first place: whether a video is trustworthy, whether officials can be believed, whether independent reporting is actually independent, and even what category an event belongs in—terrorism, brutality, self-defense, false flag, and so on. Those judgments happen upstream of reasoning itself. They shape the priors people start with, what explanations they consider possible, and how they process any new information that comes in.

As that happens, people also begin updating less on signals from the world and more on signals from their social group. In other words, the “evidence” stops being the event itself and starts being who is saying what about it. Coalition cues become more important than direct observation. So the exact same piece of new information can be interpreted in completely opposite ways: for one group, it confirms wrongdoing; for another, it confirms media manipulation, depending on which meta-frame is already dominant.

Over time, this feeds affective polarization—the emotional side of polarization, where the issue is not just disagreement but growing dislike and distrust of the other side. There is a growing body of empirical work suggesting that algorithmic exposure can intensify this kind of affective polarization, shaping not just what people believe, but how they feel about political opponents (arXiv). And once that emotional hostility rises, argument starts to feel less like a search for truth and more like moral combat. Persuasion becomes harder, because changing your mind can feel like siding with the enemy. Compromise begins to look like betrayal. That is how discourse degrades from debate into deadlock, and why policy impasse becomes not just common, but structurally baked in.

Once a script is installed, people’s responses start to feel surprisingly predictable—not because they are incapable of thinking, but because they are drawing from a small, familiar library of memetic modules. You can often hear the same moves repeated in different combinations: a labeling move that assigns a role instantly (“terrorist,” “thug,” “crisis actor,” “deep state,” “woke,” “bootlicker”); a delegitimization move that discredits the source (“fake news,” “out of context,” “do your own research”); a whatabout move that redirects attention (“what about when they did X?”); a motive move that reframes the other person’s concern as bad faith (“you only care because…”); and a closure move that shuts down further discussion (“case closed,” “obvious”). What looks like spontaneous argument is often a rapid assembly of these preloaded pieces.

The important point is that this is not just a personal failing or a character flaw. It is a structural feature of highly compressed discourse—communication shaped by platforms that reward speed, certainty, and virality over reflection. In that environment, the most successful responses are not the most careful ones, but the ones that are easiest to recognize, repeat, and deploy under pressure. That is why arguments online can start to feel less like open inquiry and more like scripted reflexes. There is even computational research identifying recurring rhetorical patterns in divisive online speech, including slogans, thought-terminating clichés, and repetition, which helps explain why these exchanges often sound formulaic even when they feel emotionally intense (Nature).

You can frame this as the difference between two very different kinds of speech: arguments that are trying to track the truth, and phrases that are really tracking social identity. In the ideal sense, arguments are supposed to be truth-seeking. They make their premises visible, invite counterevidence, and leave open the possibility that new information could change the conclusion. A real argument, even a passionate one, still gives you some sense of what would make the speaker revise their view.

Thought-terminating clichés do something else entirely. They are less about inquiry and more about social coordination. Instead of opening a claim up to testing, they close the discussion down with phrases like “obvious” or “nothing to see here.” Instead of explaining a mechanism, they substitute a label. Instead of engaging with evidence, they signal coalition membership—basically, “I know the code, and I belong to the group that uses it.” In that way, they also protect the script from falsification, because they preemptively mark certain kinds of evidence as invalid before they are even considered.

If you want to put it in Bayesian terms, these phrases often act like likelihood-nullifiers: they make it so that no possible evidence (E) will count against a favored hypothesis (H), because the channel that produced (E) has already been declared illegitimate. Once that move is in place, the conversation may still look like an argument on the surface, but functionally it has stopped being one. A tight “mechanism → consequence” pattern you can use to identify the phenomena:

  1. Engagement-based ranking disproportionately rewards high-arousal, conflictual content. (Knight First Amendment Institute)
  2. Influencers and communities package repeatable causal scripts that fit those incentives. (ACM Digital Library)
  3. Algorithmic diffusion + homophilous networks turn scripts into default lenses. (ScienceDirect)
  4. Scripts compress discourse into predictable, memetic “argument forms,” including slogans and thought-terminating clichés. (Nature)
  5. The epistemic commons erodes, deep disagreement rises (trust, categories, standards), and policy becomes harder because compromise is re-coded as identity betrayal. (OUP Academic)
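The “likelihood-nullifier” move described above also has a simple numerical form (illustrative numbers): pre-labeling the channel illegitimate amounts to setting \(P(E\mid H) = P(E\mid \neg H)\), so the Bayes factor collapses to 1 and the posterior is frozen at the prior.

```python
# Illustrative numbers: what a likelihood-nullifier does to an update.

def posterior(prior, like_H, like_not_H):
    """Bayes' rule for a binary hypothesis H."""
    num = like_H * prior
    return num / (num + like_not_H * (1 - prior))

prior = 0.9  # strong commitment to the favored hypothesis H

# Channel accepted as legitimate: damning E moves the posterior sharply.
updated = posterior(prior, like_H=0.05, like_not_H=0.80)

# Channel pre-labeled "fake news": E is declared equally likely either
# way (likelihood ratio = 1), so no observation can move the belief.
nullified = posterior(prior, like_H=0.50, like_not_H=0.50)

print(round(updated, 3), round(nullified, 3))
```

Once the nullifier is in place, the arithmetic of updating still runs; it just can never produce a number different from the prior.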

Self-Sealing Discourse and Echo Chambers

Once a community internalizes certain causal scripts, it doesn’t just “have opinions.” It acquires a self-sealing epistemic immune system—a set of habits, categories, and stock moves that prevent outside information from functioning as evidence. Two pieces of theory are perfect for this:

  • C. Thi Nguyen on epistemic bubbles vs echo chambers (and why echo chambers are hard to “pop”) (Cambridge University Press & Assessment)
  • Endre Begby on evidential preemption (how testimony can inoculate audiences against later counterevidence) (PhilPapers)

Phrases like “you should comply with the law” and “Alex Pretti is protecting pedophiles and murderers” directly connect to “enemy from within” rhetoric and institutional/policy breakdown. Below I’ll build the self-sealing pattern step-by-step, then show how these sorts of phrases contribute to the breakdown.

  1. The diffusion engine: why scripts spread fast and stick

What platforms reward, more than anything, is what you might call high-velocity cognition: fast, compressed ways of interpreting events that are easy to feel and even easier to share. A causal script works perfectly in that environment because it bundles several things at once—a simplified explanation of what happened, a moral verdict about who is guilty, and an action cue telling people what to do next. In attention markets, the scripts that win are usually the ones that are emotionally high-arousal (anger, fear, disgust), identity-confirming (clear signals about “us” versus “them”), short and memetic enough to repeat, and conflict-maximizing enough to trigger replies, quote-posts, duets, and stitches. You can see this in amplification dynamics themselves: engagement-based ranking systems can end up disproportionately boosting divisive content precisely because it produces the strongest engagement signals (Aeon).

Influencers are central to this because they do more than pass along beliefs—they package templates. A strong influencer script usually includes a cast list (heroes and villains), a causal story (“what’s really happening”), a normative verdict (who deserves what), an epistemic rule (who should be trusted and who should be dismissed), and a participation cue (“share this,” “wake up,” “they’re lying to you”). That final piece is especially important, because it shifts the audience from passive consumers into active participants. People are no longer just reacting to a message; they are being recruited into helping reproduce it.

Once that participation is rewarded with likes, status, and in-group approval—especially when people get social points for “ratioing” outsiders or defending the group line—disinformation and narrative warfare become collaborative projects. The community starts building a shared repertoire: catchphrases, screenshots, “debunk” fragments, attack lines, lists of enemy outlets, and ready-made responses. Over time, that repertoire becomes the social toolkit people reach for automatically, which is exactly why later responses can feel so predictable. They are not being generated from scratch each time; they are being assembled from a common script the group has already built together.

  2. Nguyen’s distinction: bubbles are omission; echo chambers are discrediting

Nguyen’s key distinction is really useful here because it separates two things that often get lumped together. In an epistemic bubble, important voices are missing mostly by omission—you simply do not encounter them, so your view of the world is incomplete (Cambridge University Press & Assessment). In an echo chamber, though, the dynamic is much stronger: outside voices are not just absent, they are actively excluded and discredited, and members are trained in advance to distrust anything those outsiders say (Cambridge University Press & Assessment).

That difference matters because it changes what “more information” can actually do. A bubble can sometimes be burst by exposure—if someone finally sees the missing evidence or perspective, it may genuinely shift their view. But echo chambers are more self-sealing. They often convert exposure into a reinforcement event, because the outside source has already been pre-labeled as corrupt, biased, or deceptive. So when contradictory evidence appears, it does not function as a challenge; it functions as proof that the chamber’s warning about outsiders was right.

That is why the phenomenon feels so durable. It is not just a filter bubble problem, where people are missing information. It is more specifically echo-chamber logic, where the system is built to neutralize outside information before it can do any epistemic work.

  3. Begby’s evidential preemption: inoculation against future counterevidence

Begby’s mechanism helps explain exactly how these narratives become self-sealing, because the persuasion is not just about getting someone to believe a claim in the moment. It is about shaping how they will interpret future evidence. His idea of evidential preemption is, roughly, that a speaker asserts a claim (p) while also warning the audience that they will later encounter apparent evidence against (p)—but that this future evidence should be treated as misleading, deceptive, or already accounted for (PhilPapers). In other words, the message arrives with its own built-in defense system.

You can see the structure everywhere in politics and media: “They’re going to tell you X, but that’s propaganda,” or “that clip is out of context,” or “it’s a hoax,” or “those are paid actors.” The power of this move is that it does not just persuade you now; it pre-programs your response later. When counterevidence eventually appears, it no longer lands as counterevidence. It lands as confirmation that the warning was correct. Begby explicitly describes this as a kind of inoculation—an audience is primed in advance to resist future contrary evidence (endrebegby.synthasite.com).

When you combine that with Nguyen’s distinction, the full loop becomes clear. In an echo chamber, outsiders are already discredited, so their testimony is treated as suspect before it is even heard (Cambridge University Press & Assessment). Then evidential preemption adds a second layer: any future contradiction is pre-labeled as manipulation, which means it never gets to function as evidence in the first place (PhilPapers). That is the core of the self-sealing dynamic: not just disagreement, but a structure that protects itself by training people how to dismiss disconfirming evidence before it arrives.

  4. The self-sealing discourse pattern, step by step

Here is a fuller way to describe the anatomy of how these discourse ecosystems become closed under revision. It usually starts with a script installing a frame—basically, a preloaded answer to what kind of event this is. The frame determines which categories feel relevant from the beginning: law and order, terrorism, corruption, degeneracy, threat, protection, betrayal. Once that frame is in place, it does more than shape interpretation. It selects what data seem salient, which details get ignored, and even which hypotheses feel plausible enough to consider. Before people are debating facts, they are already operating inside a pre-structured sense of what sort of story they are in.

The next step is that the script installs an epistemic authority map: a social ranking of who counts as a reliable knower and who does not. This is where the difference between an epistemic bubble and an echo chamber really matters. In a bubble, people may simply not encounter relevant outside voices. In an echo chamber, those voices are actively framed as untrustworthy by nature, so their testimony is discounted in advance (Cambridge University Press & Assessment). That means the issue is no longer just missing information—it is a prior commitment about which sources are allowed to count as information at all.

From there, the script installs preemption moves for dealing with counterevidence before it even appears. This is where Begby’s idea of evidential preemption fits perfectly: instead of waiting to see what critics will say, the audience is preloaded with a deflationary story that explains away future contradictions (PhilPapers). The patterns are familiar: source deflation (“fake news,” “state media,” “bought”), context deflation (“out of context,” “edited clip”), motive deflation (“they want chaos,” “they protect criminals”), and process deflation (“the investigation is rigged”). The point is not merely to argue against specific evidence, but to lower the evidential status of entire channels in advance.

Once those pieces are in place, conversation starts to become ritualized. Because the script offers a limited repertoire of acceptable moves, responses begin to follow a recognizable sequence: label, delegitimize, moralize, close inquiry, then pivot to coalition action. That is why so many exchanges feel strangely predictable, as if you can forecast the next few lines before they are spoken. The discussion still looks like argument on the surface, but underneath it is often a scripted performance constrained by the group’s repertoire.

At the final stage, self-sealing emerges. When counterevidence arrives, it is typically filtered out, distrusted because it comes from an outsider, reinterpreted to fit the corruption or conspiracy frame, or treated as socially risky to engage with seriously because doing so invites sanctions from the group. At that point, argument is no longer doing much epistemic work. It is no longer primarily about testing claims against the world. It is performing membership—showing that you know the script, trust the right people, and reject the right enemies.

Phrases like “you should comply with the law” sound, on the surface, like neutral moral guidance, but inside a self-sealing discourse ecosystem they often do much more than state a norm. They can lock the frame of the conversation by shifting the question away from what actually happened or whether a response was justified, and toward a simpler obedience-versus-deviance narrative. They also invert agency: if harm occurred, responsibility gets redirected onto the person who was targeted, because the key fact is now framed as noncompliance. And once “compliance” becomes the central lens, inquiry itself can be shut down. Further investigation into context, proportionality, or causation is treated as irrelevant—“doesn’t matter, they should have complied.” In that sense, the phrase functions like a thought-terminating cliché: it compresses a complex factual and moral assessment into a single move that ends the need to examine particulars.

The phrase “he is protecting pedophiles and murderers” works differently, but just as powerfully. It combines enemy construction with evidential preemption. Instead of treating disagreement as an error, it recasts disagreement as moral contamination: if you question the script, you are not merely mistaken, you are complicit. It also expands the scope of the conflict. What might have started as a specific factual or legal question gets transformed into a civilizational struggle against absolute evil, which makes nuance feel not just unnecessary but dangerous. And once that framing is installed, any attempt to insist on due process, proportionality, or evidential standards can be redescribed as “protection” of the enemy class.

This is especially effective in echo-chamber environments because it creates asymmetric social costs. Calling for caution, verification, or procedural fairness becomes risky, because it can be interpreted as disloyalty or hidden sympathy for the condemned group. That is how epistemic norms get overridden: not necessarily because people stop caring about truth in the abstract, but because identity safety and coalition signaling become more immediately important than evidential discipline.

  5. “Enemy from within” rhetoric as a stabilizer of echo chambers

Once outsiders are classified as enemies, keeping an echo chamber closed becomes much easier, because the closure is no longer just informational—it becomes moral and social. Nguyen’s point about echo chambers is that they do not merely omit outside voices; they actively train members to distrust them, and “enemy” rhetoric supercharges that process by turning distrust into a virtue while making trust feel like betrayal (Cambridge University Press & Assessment). In that environment, skepticism is no longer a selective intellectual habit; it becomes a loyalty test.

At the same time, counterevidence stops functioning as evidence and starts functioning as a hostile act. This is where Begby-style preemption fits perfectly: if people are told in advance that “they will show you X to trick you,” then any future contradiction is received not as information but as an attack vector (PhilPapers). The audience is not just prepared to reject the content; they are prepared to experience it as manipulation. That changes the emotional valence of inquiry itself, because engagement with outside evidence begins to feel dangerous rather than clarifying.

And once the out-group is coded as the enemy, changing your mind becomes much more costly. Updating your beliefs is no longer framed as learning or correcting an error; it is framed as defection, as switching sides. That raises both the psychological and social price of revision, because what is at stake is not just whether a claim is true, but whether you still belong. This is the point where discourse becomes truly self-sealing: it closes not only at the level of cognition, but at the level of identity and social membership.

  6. From degraded discourse to deep disagreement and policy impasse

Once many groups are operating with different scripts, the shared epistemic commons that policy depends on starts to disappear. The disagreement is no longer just about conclusions; it is about the machinery of justification itself. People begin to differ on what counts as evidence, who counts as a credible witness, which institutions are legitimate fact-certifiers, and what norms should govern dispute resolution in the first place—courts, elections, journalism, expertise, or something else. So even when everyone says “show me evidence,” they are often talking about entirely different pipelines for producing and validating truth. On the surface, it sounds like a common demand. Underneath, the standards are no longer shared.

That is why policy compromise becomes so hard. Once the script moralizes the conflict, compromise itself gets recoded as corruption or surrender. If the other side is framed as “protecting pedophiles and murderers,” then negotiation starts to look immoral. If your own side is framed as the last defense against chaos, then restraint looks like weakness and procedural caution looks like betrayal. Under those conditions, deliberation loses legitimacy, because the issue is no longer treated as a dispute among citizens but as a battle between good and evil.

The result is a system structurally pulled toward stalemate or escalation. Stalemate happens when no shared standards remain for resolving disputes. Escalation happens when each side sees the breakdown of compromise as proof that stronger tactics are justified. Either way, the conditions for ordinary democratic problem-solving deteriorate, because the very tools that make compromise possible—shared evidence, trusted procedures, and mutual legitimacy—have been hollowed out.

A useful diagnostic is to ask a very simple question: what, exactly, would change this person’s mind? In a real argument, people can usually answer that. They can name the kind of evidence, event, or counterexample that would make them revise their view, even if they still strongly disagree in the moment. That openness is part of what makes the exchange truth-tracking rather than purely performative. In self-sealing discourse, though, that question tends to trigger more script instead of a real condition for revision. The answer is not a testable threshold; it is a closure move: “nothing, because they lie,” “it’s all rigged,” “you’re with the enemy.” At that point, the response is no longer about evidence at all. It is a way of defending the frame, the authority map, and the group boundary in one stroke. That is basically evidential preemption operating at the level of the whole community. The group has already installed a system in which potential disconfirming evidence is neutralized before it arrives, so “what would change my mind?” no longer functions as an epistemic question. It becomes a loyalty check.
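The “what would change your mind?” diagnostic has a precise Bayesian form. With illustrative numbers, the sketch below computes the likelihood ratio an observation would need to carry to move a prior down to a target belief; preemption-hardened priors push that requirement toward the impossible, while nullifiers cap every real channel near a ratio of 1.

```python
# Illustrative numbers: how much evidence "changing my mind" requires.

def required_bayes_factor(prior, target):
    """Likelihood ratio P(E|H)/P(E|not H) that moves `prior` to `target`
    in a single Bayesian update (ratio of posterior odds to prior odds)."""
    prior_odds = prior / (1 - prior)
    target_odds = target / (1 - target)
    return target_odds / prior_odds

# Moderate commitment: evidence ~9x likelier under not-H would flip them.
bf_moderate = required_bayes_factor(0.90, 0.50)

# Preemption-hardened commitment: evidence would need to be ~999x
# likelier under not-H, and nullified channels can never deliver it.
bf_hardened = required_bayes_factor(0.999, 0.50)

print(round(bf_moderate, 4), round(bf_hardened, 6))
```

If every channel capable of producing such evidence has already been pre-labeled illegitimate, the achievable likelihood ratio is pinned near 1 and the question stops being answerable, which is exactly the loyalty-check failure mode described above.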

Platform incentive structures amplify high-arousal, identity-confirming causal scripts. Those scripts don’t merely spread beliefs; they reshape epistemic norms by (i) discrediting outsiders (echo-chamber structure) and (ii) inoculating members against counterevidence (evidential preemption). The result is self-sealing discourse: predictable “arguments” that function as identity signals and thought-terminating clichés, degrading the epistemic commons and producing deep disagreement and policy impasse.

Luhmann: Media Selectors

In Die Realität der Massenmedien, “selectors” (Selektoren) are system-internal selection mechanisms/criteria that reduce the overabundance of possible communications—i.e., they help determine what becomes media communication and what does not. (dissent.is) Luhmann talks about “selectors” at three related levels:

  1. Selectors that constitute communication (general theory)

For Niklas Luhmann, communication is not simply the transfer of a message from one person to another; it only happens when three selections come together as a single event of meaning. First, there is information (Information), which is the selection of what is being communicated—the specific difference being marked out from all the other things that could have been said (unisalento.it). Second, there is utterance or Mitteilung, which is the selection of how that information is presented—the act of saying, showing, framing, or expressing it in a particular form (unisalento.it). Third, there is understanding or Verstehen, which is the selection made by the receiver (or observer) who interprets the utterance as communication and attributes a meaning to it, whether that interpretation is accurate or not (unisalento.it). In this view, communication is not complete at the moment of speaking; it emerges only through the synthesis of these three selections, which is why Luhmann describes them as the classic “three selections” (drei Selektionen) of communication.

  2. Two selectors specific to mass media as a system (structural level)

At the structural level, Luhmann argues that mass media operate with a different set of selectors because they exclude real-time interaction between sender and receiver. In face-to-face communication, participants can immediately respond, clarify, and adjust, but mass media systems do not work that way. Instead, he says two selectors become especially important: Sendebereitschaft, or the producer’s/organization’s readiness to send—to generate and offer content into the system—and Einschaltinteresse, or the audience’s willingness to tune in, pay attention, and accept the communication as worth engaging (dissent.is). In other words, mass media communication depends not only on what is said, but on the structural coupling between institutions that are prepared to broadcast and audiences that are prepared to attend.

  3. The selector types for news (Nachrichten) in mass media (program level)

When he turns specifically to news (as distinct from more background “reports”), he lists “typical” selectors for how news gets chosen and formatted. (dissent.is) Here are all the numbered selector types (1)–(9) he discusses there:

  1. Striking discontinuity / novelty: information must be new and break expectations or resolve a bounded uncertainty (e.g., sports results). (dissent.is)
  2. Conflict: conflicts generate suspense by postponing resolution (winner/loser), pointing to future developments. (dissent.is)
  3. Quantities: numbers attract attention because a given number is informative in itself; comparisons amplify this. (dissent.is)
  4. Local reference / proximity: local relevance gives an item weight; distance must be compensated by higher “weight” or oddity. (dissent.is)
  5. Norm violations: especially legal/moral violations (and related breaches of “political correctness”), often framed as scandals. (dissent.is)
  6. Ability to attach moral evaluation: norm violations are especially selectable when they allow moral judgments (esteem/contempt) and reproduce the moral code (good/bad). (dissent.is)
  7. Personen (attribution to action/actors/persons): simplifying complex contexts by pinning events on actors supports fast opinion formation. (dissent.is)
  8. Topicality → focus on events/single cases: the demand for “up-to-dateness” concentrates news on incidents, accidents, disruptions, and enables follow-up recursion/series. (dissent.is)
  9. Opinions as news: opinions themselves become “events” (media self-mirroring). This requires (a) the topic to be interesting enough and (b) the opinion source to have reputation/status. (dissent.is)

Luhmann then adds that this whole “net” of selectors is reinforced by organizational routines in media production. (dissent.is)

In his news selectors, the “case” idea shows up in the selector that privileges events / single incidents (Ereignisse, Einzelfälle) because “topicality” demands items that can be marked as new and can be continued tomorrow as follow-ups. In other words: news is biased toward discretely reportable cases (accidents, disruptions, scandals, decisions, etc.). (dissent.is) Right after (or alongside) listing selectors, Luhmann warns that a pure “list of criteria” is too simple, because selection is an organizationally complex process. He then says the selectors are “reinforced and supplemented” by routines in media organizations (newsrooms). Concretely, routines include things like:

  • rubrics/sections and templates (Rubriken, Schablonen) that pre-structure what can count as a “news item,”
  • pre-selection workflows (beats, editorial desks, standard formats), and
  • final selection by available space/time in the medium (column inches, airtime). (GRIN)

And his point is that it’s “surprising” how much what looks sensational is produced by routine—i.e., the system can generate “exceptional” items through standardized processing. (GRIN)

These organizational routines are not one of the numbered “news selector” types (like conflict, norm violations, or proximity). They operate one level above:

  • Selector types = semantic/programmatic criteria that make certain kinds of content more selectable as news.
  • Routines = organizational procedures that stabilize and reproduce selection under constraints (time, space, staffing, formats).

Because newsrooms work with rubrics, templates, beats, deadlines, and space/time limits, they continuously translate ongoing complexity into discrete, publishable “cases.” That means routines don’t just choose among cases; they also help constitute what counts as a case in the first place—by chopping reality into reportable event-units and making them serializable (updates, “new developments,” reactions, etc.). (dissent.is)

Adapting Luhmann to Computational Propaganda

You can adapt Luhmann to computational propaganda without losing the core of his theory, because what makes his framework powerful is that it is not mainly a theory of content—it is a theory of how communication gets selected, repeated, and stabilized at scale. In that sense, social media does not overturn Luhmann; it intensifies him. The key shift is that you update what counts as selectors and routines, and you add a new layer that did not exist in the same form before: platform algorithms and monetization systems as selection infrastructures. Luhmann’s basic insight still holds—communication systems survive by filtering and organizing complexity—but on platforms, the filtering machinery becomes more continuous, automated, and economically optimized.

In Luhmann’s account of mass media, selectors are reinforced by organizational routines: newsroom procedures fit information into recurring rubrics and templates (Rubriken, Schablonen) so that communication can be processed reliably. In the platform era, we can see an analogous set of routines, but they are now socio-technical and computational. Ranking and recommendation pipelines, A/B testing and continuous optimization, moderation queues and label systems, ad delivery systems and conversion logic, and even influencer management practices like briefs, scripts, posting schedules, and performance reporting all function as routines in Luhmann’s sense. They stabilize selection under constraints. The difference is that the constraints are no longer just editorial scarcity (limited pages or airtime), but real-time metrics, automated personalization, and auction-based advertising. In older mass media, routines organized scarcity; on platforms, routines engineer attention under conditions of abundance.

Luhmann’s two classic selectors for mass media still map surprisingly well to this environment: the producer’s readiness to send and the audience’s interest in tuning in. Those are both still present, though they now operate in a more distributed way, with creators and influencers taking on much of the productive role and audiences signaling attention through clicks, watch time, replies, and shares. But platforms introduce something like a third selector, and it is arguably the decisive one: platform allocation. That includes algorithmic distribution and ad auction allocation—the system-level mechanisms that decide what gets shown, to whom, and with what intensity. Once you add that layer, computational propaganda becomes easier to describe in Luhmannian terms: it is not just a struggle over messages, but a struggle over the system’s own selection and distribution routines.

This is also where the empirical relevance becomes clear. If engagement-optimized ranking systems systematically amplify divisive or emotionally charged content compared with chronological baselines, that is exactly the kind of system effect Luhmann’s framework would lead us to expect from a communication order optimized around attention rather than truth-tracking or deliberation (OUP Academic). In other words, computational propaganda is not just propaganda adapted to the internet; it is propaganda operating inside and through infrastructures whose selection logic is already biased toward what captures and holds attention.

A helpful way to frame computational propaganda, especially in a Luhmannian register, is to treat it as a strategy for steering selections. A widely used Oxford Internet Institute definition describes it as the use of algorithms, automation, and human curation to deliberately distribute misleading information through social networks (navigator.oii.ox.ac.uk). Through Luhmann’s lens, that translates neatly into a systems-level point: actors learn how the platform’s selectors and routines work, then design content that fits those selection conditions. In other words, they are not just trying to persuade individuals one by one; they are trying to align messages with the filters, incentives, and distribution logics that determine what the system will amplify.

That is why you do not really need an entirely new theory to make sense of it. Luhmann already gives you the core architecture: communication systems reproduce themselves through recurring selections stabilized by routines. What changes in the platform era is the selector list. Instead of only editorial rubrics, broadcast schedules, and audience tuning habits, you now have ranking systems, engagement metrics, recommendation pipelines, moderation rules, ad auctions, and creator optimization practices. Computational propaganda works by exploiting that updated selector environment. So the theoretical move is less “replace Luhmann” and more “extend Luhmann’s selector map to platform infrastructures.”

A practical way to extend Luhmann for the platform era is to add a new layer of selectors—because the classic news values (conflict, norm violations, proximity, novelty, scale) still matter, but on platforms they get folded into systems that are both metrified and personalized. In other words, what gets selected is increasingly determined by what the system can measure, optimize, and route. That gives you a more useful platform-era selector list. First are metric selectors: predicted engagement (clicks, likes, watch time, replies, shares), retention and return probability, follower growth and spread velocity, and conversion likelihood for ads, fundraising, or list-building. These function as selectors in a strict sense because they directly shape distribution probability—what the platform decides to show more of, to whom, and for how long.
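A minimal sketch of how metric selectors might combine into a distribution decision. Everything here is an assumption invented for illustration: the signal names, the weights, and the logistic squash stand in for a platform's learned ranking model, not any documented system.

```python
import math

# Illustrative toy weights standing in for a platform's learned ranking model.
# Signal names and values are assumptions, not any platform's real parameters.
WEIGHTS = {
    "p_click": 1.0,
    "p_like": 0.5,
    "p_reply": 2.0,   # conversational signals assumed to be weighted heavily
    "p_share": 3.0,   # reshares push content to entirely new audiences
}

def distribution_score(predicted_signals):
    """Combine predicted engagement signals into a single ranking score."""
    return sum(w * predicted_signals.get(k, 0.0) for k, w in WEIGHTS.items())

def show_probability(score, baseline=-2.0):
    """Squash a score into a probability of being shown (logistic)."""
    return 1 / (1 + math.exp(-(baseline + score)))

calm_post = {"p_click": 0.10, "p_like": 0.10, "p_reply": 0.02, "p_share": 0.01}
outrage_post = {"p_click": 0.30, "p_like": 0.20, "p_reply": 0.25, "p_share": 0.15}
# Higher predicted engagement translates directly into more distribution.
```

The point of the sketch is structural, not numerical: the selectors are metrics, and the metrics feed the probability of distribution directly.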

Second are algorithmic compatibility selectors—the traits that make content travel well through ranking systems. Content that is emotionally intense, identity-threatening, or outrage-inducing often performs better because those features correlate with engagement. Simple narratives with clear agents and blame travel well because they fit quick attribution. High-contrast novelty (“breaking,” “shocking,” “you won’t believe”) fits discontinuity and interruption logics. And memetic packaging—short video, punchy captions, remixable clips—makes content easy to circulate across feeds and across platforms. These are not just stylistic features; they are forms of compatibility with the platform’s selection machinery.

Third are infrastructure selectors, which shape who gets reach in the first place. Account reputation and prior performance act as proxies for “creator quality,” paid amplification expands reach through ads and boosts, and cross-platform coordination (reposting swarms, bot-human brigades, coordinated creator networks) increases the odds that a message will break through and persist. This is where the point about dark money funding influencers becomes especially sharp: money does not just buy speech; it buys compatibility with platform selection systems—distribution, optimization, coordination, and persistence.

That last point also has a real-world institutional footing. Recent reporting from the Brennan Center argues that nontransparent political spending (“dark money”) in U.S. federal races reached a record level in the 2024 cycle, describing it as a major increase over prior cycles (Brennan Center for Justice). And a 2025 WIRED investigation reported on a politically aligned, donor-obscured initiative that allegedly offered influencers monthly payments while requiring secrecy and imposing content-related restrictions—an example of how “influencer” can operate as an organizational role in a propaganda supply chain rather than just an individual media persona (WIRED).

Luhmann already emphasizes recursion: media select events, then select reactions to those selections (opinions-about-opinions, scandals-about-scandals). Platforms intensify this because the system can run tight feedback loops: content is tested on micro-audiences, metrics decide scaling, creators adapt to metrics, the algorithm adapts to creator adaptation, and then external funders adapt to both. Recent theory and audit work on engagement-based ranking explicitly models and measures these feedback loops and their tradeoffs. (SSRN) A concrete operationalization today:

  • audit the selectors (what signals predict reach?) with algorithmic audits and sock-puppet methods (common in current research on political exposure and ranking bias) (facctconference.org)
  • map organizational couplings: influencer agencies, PAC-adjacent nonprofits, platform ad infrastructure, content farms
  • trace routine templates: recurring story forms, talking points, “caption grammars,” content calendars, clip pipelines
  • follow the money as a selection accelerator: paid distribution + influencer stipends + production tooling

Below is a pipeline describing actors → routines → selectors → outputs → feedback:

  1. Money and governance layer

Dark/opaque funding creates capacity to manufacture attention at scale: paying creators, buying ads, hiring growth shops, running sockpuppet networks, subsidizing production, funding “issue orgs,” etc. The scale of nontransparent election spending has grown sharply in recent cycles, which matters because it buys repeated runs through the loop. (Brennan Center for Justice) The typical move in influencer campaigns is to pay for political messaging while minimizing disclosure (“keep it quiet”), which operationally means you’re buying routines (posting schedules, scripts, content restrictions, reporting) rather than buying one-off “speech.” (WIRED)

  2. Goal-setting and audience modeling

The next stage is goal-setting and audience modeling, and this is where the operation starts to look less like spontaneous online speech and more like a coordinated communications system. The actors involved can include political consultancies, aligned nonprofits and super-PAC ecosystems, PR firms, state-linked influence operations, and digital “growth” agencies. Their first job is usually to segment audiences and decide who matters for the campaign’s purposes: persuadable voters, people who can be demobilized, specific identity groups that can be polarized, or communities where wedge issues are likely to spread. From there, they define what success looks like in measurable terms—KPIs tied to reach, engagement, or conversion.

What matters in this phase, especially in the platform era, is that the targets and goals are chosen in terms the system can optimize. The selectors are no longer just political priorities in the abstract; they are metrics like predicted watch time, reshares, reply volume, follower conversion, and click-through rates. Those metrics matter because they are not only measures of performance—they are also inputs into distribution itself. In other words, they function as selectors because they shape the probability that content will be amplified by the platform. So audience modeling is not just about who to persuade; it is about designing a strategy that fits the measurable and optimizable logic of the platform’s attention system.

  3. Content manufacturing and packaging

The content manufacturing and packaging stage is where the operation becomes visible to the public, and it is often carried out by influencers, content farms, meme accounts, “news-like” channels, and clipper accounts that specialize in fast-turnaround media. Their routine is not simply to produce information, but to produce content in formats that are most likely to survive platform selection. That usually means short, high-arousal hooks, cliffhangers, outrage prompts, and strong moral framing; simple attribution stories that clearly identify who did what to whom, with a clean villain/hero structure; and serializable “episodes” that can keep the narrative alive through follow-ups, reactions, and updates over multiple posts.

This is also where the Luhmann connection becomes especially sharp. What these actors are doing is not just describing events—they are applying templates that routinely generate communicable “cases.” In other words, reality does not arrive already packaged as a sequence of obvious media objects. The routines carve it into cases that can be circulated, repeated, and updated. That is exactly the kind of selection-and-stabilization process Luhmann helps illuminate: communication systems do not merely transmit the world; they continuously format it into recognizable units that fit the system’s own operational logic.

  4. Seeding and initial distribution (the “test phase”)

The seeding and initial distribution stage is basically the test phase, where the goal is to see whether a piece of content can catch enough early traction to trigger wider amplification. The actors here are often creator accounts, coordinated account clusters, paid micro-influencers, and sometimes automation or bot support, all working to give the content an initial push. The routine is to seed the same message across multiple accounts, time the posts for moments when early engagement is most likely, and coordinate comments or quote-posts so the content looks active, relevant, and worth paying attention to.

What matters most at this stage are the platform’s early selectors—especially engagement velocity and predicted retention. In practical terms, the system is testing whether people stop to watch, keep watching, react, and help spread it. On short-form video platforms in particular, hold rate and watch-time equivalents function like a gate: if the content clears that threshold, it is far more likely to be distributed to larger audiences. So this phase is less about mass persuasion immediately and more about passing the platform’s first round of selection filters.
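The gate described here can be sketched as a tiered rollout: content is shown to successively larger test audiences and only advances while early retention clears a threshold. The tier sizes and the 0.35 hold-rate threshold are invented illustrations, not documented platform values.

```python
# Toy model of a tiered rollout. Tier sizes and the 0.35 hold-rate
# threshold are invented illustrations, not documented platform values.
TIERS = [500, 5_000, 50_000, 500_000]

def final_reach(hold_rates, tiers=TIERS, threshold=0.35):
    """Show content to successively larger audiences while retention clears the gate.

    hold_rates[i] is the measured early-retention rate at tier i.
    """
    reach = 0
    for tier, rate in zip(tiers, hold_rates):
        reach = tier          # content is shown to this tier's audience
        if rate < threshold:  # retention gate failed: distribution stops here
            break
    return reach
```

A clip that holds attention at every tier climbs the whole ladder; one that loses viewers early never leaves the test audience, which is why the seeding phase optimizes for the gate rather than for persuasion.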

  5. Algorithmic allocation (the platform’s core selection infrastructure)

This is the structural break from classic mass media: distribution itself is a routine (continuous ranking + experimentation), not just editorial judgment. On X, there is a relatively explicit “recommendation pipeline” for the “For You” surface, and X has made recommendation code public in a repo, reflecting how central this pipeline is to selection. (GitHub) On TikTok, regulators and researchers emphasize the highly personalized recommender system and its design features that can intensify engagement. (The Verge) Selectors here are metricized: predicted engagement, watch-time, reply likelihood, “toxicity/quality” proxies, account reputation, and many others. The exact weights vary and change; the important Luhmannian point is that the system selects communications by its own internal criteria and then feeds that selection back into itself.

  6. Amplification options (buying reach vs. earning reach)

The amplification stage usually works through two routes, and in practice they often reinforce each other rather than operate separately. One route is earned amplification, where content spreads because it already fits the platform’s selectors—its format, tone, pacing, and emotional cues are compatible with the ranking system, so the algorithm scales it. The other route is paid amplification, where ads, boosts, sponsorships, or influencer payments effectively buy repeated shots at visibility. Even if one post fails, funding allows the same campaign to keep testing variants and pushing new attempts until something catches.

This is why money matters so much in these ecosystems. It does not just buy reach in a simple sense; it increases the number of trials a message gets as it moves through the platform’s selection machinery. More trials mean more chances to discover what the algorithm will reward. And because funding also supports teams, workflows, and content pipelines, it helps professionalize the routines that produce these highly portable, “case-like” narratives in the first place. In that sense, amplification is not only about distribution—it is about building a system that can repeatedly generate and optimize content for selection.
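The “more trials” point can be made precise with elementary probability: if each attempt has an independent chance p of breaking through, then n attempts succeed at least once with probability 1 − (1 − p)^n. A quick sketch, where the 2% per-post chance is an arbitrary illustration:

```python
def breakthrough_probability(p_single, n_trials):
    """Probability that at least one of n independent attempts catches on."""
    return 1 - (1 - p_single) ** n_trials

# One post with an assumed 2% chance of catching, vs. a funded campaign
# that can afford to test 100 variants of the same message:
one_shot = breakthrough_probability(0.02, 1)    # 0.02
funded   = breakthrough_probability(0.02, 100)  # ≈ 0.87
```

Independence is a simplifying assumption (real variants share fate through the same audiences and the same algorithm), but the qualitative lesson holds: funding converts a low per-post probability into a near-certainty of eventual breakthrough.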

  7. Legitimacy laundering (turning platform signal into “public reality”)

The legitimacy-laundering stage is where platform signal gets converted into something that looks like public reality. The routine here is to transform attention into credibility: virality gets treated as proof (“everyone is talking about this”), screenshots and stitched videos are selectively cited as if they were independent confirmation, and reaction chains are used to make the narrative feel established. Often, higher-status accounts or pseudo-experts are then brought in to comment, which gives the impression that the issue has moved from online chatter into recognized public discourse. What began as a coordinated push now appears as something that “organically” emerged because so many people are discussing it.

This maps very cleanly onto a Luhmannian idea of recursion, where communication starts feeding on communication itself. Opinions become news, and reactions become events. The system is no longer mainly processing the original incident; it is processing the circulation, uptake, and commentary around it. That is also why Oxford’s definition of computational propaganda is so useful here: it explicitly highlights the combination of algorithms, automation, and human curation in distributing misleading information (navigator.oii.ox.ac.uk). In this stage, all three are working together—not just to spread a claim, but to manufacture the appearance that widespread attention itself is evidence of legitimacy.

Analysis

When you compare TikTok and X, the biggest difference is not just culture or politics—it is the primary selection logic each platform uses to decide what gets seen. On TikTok, the core bottleneck is recommendation-driven discovery: content can reach large numbers of non-followers very quickly if the system decides it is retaining attention. That makes watch-time, retention, and completion-like signals especially powerful selectors, and it is one reason short, emotionally intense, tightly packaged narrative clips can travel so far. The platform is constantly testing and re-testing what holds attention, so propaganda-style content that is optimized for hooks, suspense, and affect can gain traction even without a large preexisting network. This is also why regulatory scrutiny around TikTok so often focuses on personalization and engagement design, including its highly personalized recommender system and addictive-use concerns (The Verge).

X works differently. Its selection system still relies heavily on the “For You” recommender, but the visible mechanics of reposts, quote-posts, replies, and pile-ons make network conflict itself a major distribution engine. In practice, that means engagement—especially reply and quote dynamics—interacts with account reputation and coordination effects to determine what looks salient. Propaganda-style messaging has an advantage here when it is built for conflict: dunks, outrage threads, adversarial quote-posting, and coordinated “conversation capture” all fit the platform’s discourse format unusually well. And unlike many platforms, X’s recommendation infrastructure has been partially documented in public code releases, which makes the recommendation stack’s role as a selection engine unusually visible (GitHub).

So in Luhmann-style terms, both platforms are selection systems—but they reward different kinds of compatibility. TikTok is especially strong at personalized discovery through retention-based filtering, while X is especially strong at network-amplified conflict through engagement and conversational visibility. That difference shapes not only what spreads, but what kinds of propaganda are most adaptive on each platform.
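One way to make the contrast concrete is to score the same two pieces of content under two invented weight profiles, one retention-heavy (TikTok-like) and one conversation-heavy (X-like). All names and numbers here are illustrative assumptions, not either platform's actual parameters.

```python
# Invented weight profiles: a retention-heavy "tiktok_like" ranking and a
# conversation-heavy "x_like" ranking. All names and numbers are assumptions.
PROFILES = {
    "tiktok_like": {"watch_time": 3.0, "completion": 2.0, "reply": 0.5, "quote": 0.2},
    "x_like":      {"watch_time": 0.5, "completion": 0.2, "reply": 2.5, "quote": 3.0},
}

def score(signals, profile):
    """Score one piece of content under a given platform's weight profile."""
    return sum(w * signals.get(k, 0.0) for k, w in PROFILES[profile].items())

# A tightly edited narrative clip vs. an adversarial dunk thread:
narrative_clip = {"watch_time": 0.8, "completion": 0.6, "reply": 0.10, "quote": 0.05}
dunk_thread    = {"watch_time": 0.2, "completion": 0.1, "reply": 0.60, "quote": 0.50}
# Under the retention-heavy profile the clip wins; under the
# conversation-heavy profile the dunk thread wins.
```

The same content ranks in opposite orders under the two profiles, which is the Luhmannian point in miniature: each platform's internal selectors determine which kinds of propaganda are adaptive on it.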

In the age of computational propaganda, “selectors” become metrics and model outputs, and “routines” become socio-technical production + distribution pipelines that continuously generate “cases” designed to survive ranking, monetization, and attention competition.

Elizabeth Dubois: Influencers and Elections

Elizabeth Dubois’s Influencers and Elections report (with Louise Stahl) is almost made for the Luhmann move we’ve been doing: treat platforms as selection infrastructures, and treat influencers as institutionalized “roles” inside that infrastructure rather than as random individuals. (PolCommTech.ca) Below is a way to splice Dubois/Stahl directly into Luhmann: roles → routines → selectors → “cases” → feedback loops, tuned specifically to TikTok and X.

  1. Influencers are role bundles

Dubois and Stahl’s key contribution, framed in Luhmannian terms, is the idea that influencers are not simply “ads with faces,” but rather role bundles operating across multiple positions in the political communication system. They argue that influencers play multiple political roles in elections, some directly connected to campaigns and others functioning more independently, which broadens how we should understand their political significance (PolCommTech.ca). In the lab’s summary materials, these roles include advertisers, celebrity endorsers, campaign volunteers, media outlets, data brokers, journalists, and lobbyists (PolCommTech.ca).

From a Luhmann-adapted perspective, these are not just labels but functionally distinct positions within a communication system. Each role carries its own routines—that is, its standard operating procedures—and its own selectors, or the criteria by which content gets chosen, trusted, and circulated. In other words, the same influencer may appear to be a single actor, but systemically they can operate through several different communicative logics at once, depending on which role is active in a given moment.

  2. Where “dark money / opaque funding” plugs in: it buys routines, not just speech

A core point in the report’s ecosystem framing is that influencers can be used to circumvent ad transparency and spending rules, including through undisclosed partnerships financed by third parties (University of Ottawa). Outside summaries of the report also emphasize that we often do not know whether or how creators are being paid, or how closely they may be tied to partisan groups. That lack of visibility is exactly the kind of opacity at issue here, because it makes it harder to determine who is shaping political communication and under what incentives.

In Luhmannian terms, this can be understood as the way money increases a communication system’s capacity to re-run selections until they stick. More funding means more creators, more posts, and more iterations of the same message across formats and audiences, which in turn makes influence operations more persistent and adaptive. It also helps professionalize what we might call “case production,” turning what may look like spontaneous or authentic communication into a more organized and repeatable process.

  3. How Dubois/Stahl sharpen the “routines create cases” point

The report materials explicitly describe influencer practices that function as case-manufacturing pipelines, especially in the way they blur lines between personal expression, volunteer activity, and coordinated political communication. One example is the idea of influencers as digital canvassers or “digital door knockers.” In this role, influencers appear to be engaging in ordinary civic participation, but Dubois and Stahl (as quoted in a policy submission citing the report) identify a crucial boundary problem: at what point does nano-influencer activity stop looking like “normal volunteering” and start functioning as an in-kind contribution from a small business or third-party actor? This ambiguity matters because the communication may still look informal and personal even when it is operating within a broader campaign logic.

In Luhmannian terms, this can be understood as a role-confusion routine. It generates communications that present themselves as ordinary interpersonal talk—something like “just me sharing”—while in practice functioning as organized campaign communication. The appearance of authenticity becomes part of the mechanism itself, because it allows politically consequential messaging to circulate under the social cues of trust, familiarity, and everyday speech rather than under the more visible signs of official campaign messaging.

The same policy submission, drawing on Dubois and Stahl, also emphasizes how diaspora communities can be targeted across languages and on less-visible platforms, and how influencers’ trust within those communities can make them especially effective conduits for interference and microtargeting. This expands the ecosystem framing beyond mainstream platforms and highlights how political communication can travel through tightly networked communities where credibility is often relational and culturally specific. In these contexts, influencer communication is not just about reach; it is about access to trusted social pathways that are harder to monitor and regulate.

In Luhmannian terms, this dynamic reflects structural coupling between two systems: (a) influencer-audience trust networks and (b) campaign or targeting infrastructure. It also represents a direct expansion of the selector “proximity” into identity proximity—including shared language, community membership, and niche affinity. That is what makes these practices so effective as case-manufacturing pipelines: they combine institutional targeting capacity with socially embedded trust, allowing messages to be tailored, circulated, and reinforced in ways that feel personal while remaining strategically organized.

  1. Updated selector map: what influencers add that classic mass media didn’t have

Luhmann’s classic mass-media selectors—conflict, norm violations, moralization, personalization, novelty, and related criteria—still apply here, but Dubois and Stahl help make visible a set of platform-native selectors that are especially exploitable through influencers. Their work shows that influencer politics is not only about message content; it is also about the conditions under which content gets selected, amplified, and trusted within digital platforms. In that sense, influencers do not simply participate in political communication; they reshape the selection environment through the affordances of platform culture and audience relationships.

One major category is trust or relational selectors, where parasocial credibility functions as a selection advantage. Dubois and Stahl note that influencers can have “strong trust relationships” with audiences, along with deep platform expertise, which is precisely what makes them attractive to campaigns (University of Ottawa). The selector effect here is that the system preferentially circulates content that audiences experience as authentic or intimate, even when the communication is strategically designed. What looks like closeness or sincerity becomes, in systemic terms, a powerful filter for selection and circulation.

A second category is niche-fit selectors, which operate through micropublic relevance rather than broad public salience. Influencers are often leveraged because they “reach young voters and niche audiences” (University of Ottawa), and this changes the logic of what gets selected. Instead of optimizing for “general public significance,” communication is optimized for segment resonance—that is, whether a message lands with a specific community, subculture, or demographic slice. This makes influencer-driven communication particularly effective in fragmented media environments where targeted relevance often matters more than mass visibility.

A third category is what we might call compliance-with-infrastructure selectors—the kinds of content that can survive, and even benefit from, platform ranking systems, moderation practices, and advertising rules. Dubois and Stahl foreground risks such as misinformation, disinformation, and the use of influencers for “evading existing laws and regulations” (PolCommTech.ca). In Luhmannian terms, the selector effect is that content gets engineered to be both algorithmically promotable and regulatorily ambiguous. Messaging is shaped so it can circulate effectively while remaining difficult to classify or regulate, often appearing as “not an ad,” “just my opinion,” “satire,” or “fan edit.” The ambiguity is not incidental—it is part of what makes the content selectable across multiple systems at once.

Finally, Dubois and Stahl’s typology explicitly includes data brokers as one of the political roles influencers can play (PolCommTech.ca), which points to a fourth category: data selectors. Here, selection is increasingly based on measured response—engagement patterns, behavioral signals, audience segmentation, and campaign analytics—rather than editorial judgment alone. The consequence is that content and targeting decisions are guided by feedback loops tied to performance data and, in some cases, audience data flows. This further intensifies the platform-native nature of political selection, because communication is continuously adjusted according to what the data indicates will perform, circulate, or persuade most effectively.

  2. TikTok vs X: same roles, different selection bottlenecks

On TikTok, the dominant bottleneck is recommendation-driven discovery shaped by retention signals, which means political communication succeeds when it is formatted to keep people watching. In this environment, roles like advertiser/endorser and journalist or media outlet become especially powerful when they package politics into high-retention “cases”—short clips, strong hooks, and serial updates that invite repeat viewing. This aligns closely with Dubois and Stahl’s framing that influencers do not merely transmit political messages but actively interpret news and shape narratives (PolCommTech.ca). In practice, that means creators are not just delivering information; they are formatting reality into watchable units that the recommender system can recognize, prioritize, and scale.

On X, by contrast, the dominant bottleneck is less about passive retention and more about interaction dynamics, especially reply and quote cascades, along with the visibility of conflict within networks. In that setting, roles like lobbyist, journalist, and campaign volunteer are especially effective when they generate argumentative, reactive, and personality-centered cases—for example, callouts, “receipt” threads, and outrage prompts. What thrives is communication designed to trigger response, escalation, and recirculation. In Luhmannian terms, this intensifies classic selectors such as conflict and opinion-as-news, but in a platform environment where those selectors are operationalized through routinized influencer posting strategies.

  3. Putting it all together: a “computational propaganda” pipeline with Dubois roles

  1. Actors (funders, campaign/proxy orgs, agencies) buy or coordinate with influencers in specific roles: advertiser, endorser, volunteer, media outlet, data broker, journalist, lobbyist (PolCommTech.ca).
  2. Routines: briefs, scripts, posting calendars, cross-posting, comment management, disclosure ambiguity, microtargeting strategies; payments routed through third parties or off-platform (University of Ottawa).
  3. Selectors exploited: trust, niche-fit, engagement/retention, personalization, moralization/conflict, identity proximity (University of Ottawa).
  4. Outputs: scalable “cases” (events, scandals, reactions, meme-able takes).
  5. Feedback: metrics → iterate creatives → recruit more creators → further blur role boundaries (volunteer vs. paid; journalism vs. advertising; opinion vs. reporting).

Codes & Success Criteria

You can translate Dubois and Stahl’s account of influencers’ “many roles” into Luhmann’s framework by treating each role as a distinct functional program, each with its own code (the binary distinction it implicitly operates through) and its own success criteria (what counts as a successful outcome within that program). This is a useful way to explain why influencer politics is so difficult to classify and regulate: Dubois and Stahl explicitly stress that influencers’ overlapping roles “complicate how we identify, categorize and regulate political participation” on social media (BC FIPA). In other words, the problem is not only that influencers do many things at once, but that each of those things follows a different communicative logic.

Using the roles Dubois and Stahl list—advertisers, celebrity endorsers, campaign volunteers, media outlets, data brokers, journalists, and lobbyists (PolCommTech.ca)—we can map each one pragmatically in Luhmannian terms. As advertisers, influencers operate through a code like convert / don’t convert (or sell / don’t sell), and their success is measured by performance: click-throughs, signups, donations, app installs, or other calls to action. This role is highly compatible with platform selectors because ad-like communication can be tested, optimized, and repeated through small creative variations. As celebrity endorsers, the code shifts to status / non-status (or cool / not cool), and success means lending borrowed legitimacy to a candidate or cause. Here, the endorsement itself often becomes the story, which aligns strongly with personalization and “people-as-news” dynamics, especially on X.

As campaign volunteers—especially in the “digital door knocker” sense—the code becomes mobilize / don’t mobilize, and success is measured by turnout-facing actions such as registering, attending events, canvassing, sharing voting information, or showing up at the polls. But this is also where Dubois and Stahl’s boundary problem becomes central: when does ordinary-seeming volunteering become an in-kind contribution by an influencer acting as a small business or third-party political actor? (BC FIPA) That ambiguity is politically productive because it allows organized communication to appear as informal civic participation. As media outlets, influencers operate through a platformized code such as attention / no attention, often combined with timely / not timely. Success in this role depends on reach, repetition, format discipline, and being recognized as a reliable source within a niche. This is especially effective on TikTok, where retention and seriality matter, and also on X, where quote and reply cascades create recursive visibility.

As data brokers, the code is best described as targetable / not targetable—that is, whether an audience can be segmented into actionable micropublics rather than treated as noise. Success here is measured through usable segmentation, response prediction, and performance lift. Dubois and Stahl explicitly include “data brokers” as a distinct political role for influencers (PolCommTech.ca), which is significant because it indicates that influencers are not only content producers but can also be embedded in the data infrastructures that shape who sees what and when. As journalists, influencers operate through a code like true/verified / not verified (or at least credible / not credible), and success depends on being treated as an authority: cited by others, able to set agendas, or recognized for interpretive power. Dubois and Stahl’s emphasis that influencers “interpret news” and “shape narratives” maps directly onto the journalist function, even when the presentation style differs from legacy journalism (PolCommTech.ca).

Finally, as lobbyists, influencers operate through a code such as access/influence / no access (or policy movement / no movement). Success is measured by attracting decision-maker attention, generating elite uptake, winning policy concessions, or building coalitions through visible public pressure. In platform terms, this role is especially compatible with selectors like controversy, conflict, and moralization—particularly on X—while also benefiting from identity-proximity niches on TikTok, where community-based trust can be mobilized toward policy messaging. Taken together, this mapping shows why the same influencer can appear inconsistent across contexts while actually behaving quite systematically: each role activates a different program, code, and metric of success, even when all of them are carried by the same account and persona.
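The role-to-code mapping above can be condensed into a small lookup table. The sketch below is purely illustrative: the roles come from Dubois and Stahl and the codes paraphrase the Luhmannian reading developed here, but the data structure, field names, and the `code_for` helper are my own shorthand.

```python
# Illustrative mapping of Dubois & Stahl's influencer roles to
# Luhmannian binary codes and success criteria. The dict layout and
# field names are my own shorthand, not part of either framework.
ROLE_PROGRAMS = {
    "advertiser":         {"code": "convert / don't convert",
                           "success": ["click-throughs", "signups", "donations"]},
    "celebrity_endorser": {"code": "status / non-status",
                           "success": ["borrowed legitimacy", "endorsement-as-story"]},
    "campaign_volunteer": {"code": "mobilize / don't mobilize",
                           "success": ["registrations", "event turnout", "canvassing"]},
    "media_outlet":       {"code": "attention / no attention",
                           "success": ["reach", "repetition", "niche source status"]},
    "data_broker":        {"code": "targetable / not targetable",
                           "success": ["usable segmentation", "response prediction"]},
    "journalist":         {"code": "credible / not credible",
                           "success": ["citations", "agenda-setting"]},
    "lobbyist":           {"code": "access / no access",
                           "success": ["elite uptake", "policy concessions"]},
}

def code_for(role: str) -> str:
    """Return the operative binary code for a given role."""
    return ROLE_PROGRAMS[role]["code"]
```

The value of writing it out this way is that it makes the essay's central claim visible at a glance: the same account can execute seven distinct programs, each scored by a different metric of success.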

Dubois and Stahl’s central warning is that influencers can move across overlapping roles—journalist, media outlet, campaigner, and others—in ways that make classification and regulation unusually difficult (BC FIPA). The “covert” quality here is usually not a matter of secrecy in the dramatic sense, but rather a product of role ambiguity combined with platform affordances. What makes this powerful is that the same account can preserve a consistent voice and persona while shifting its political function underneath that surface continuity.

One of the clearest mechanisms is that a single account can apply multiple frames to what is functionally the same strategic message. The same post can be presented as journalism (“here’s what’s really happening”), as a media product (“daily update,” “clip,” or “explainer”), as volunteering (“I’m just encouraging you to vote” or “get involved”), or as endorsement (“I support X because…”). Because platforms reward an “authentic” voice, the style and tone can remain stable even as the communicative role changes. To audiences, it still feels like the same person talking; systemically, however, the post may be operating under a different function, code, and regulatory category.

A second mechanism is the blurring of the volunteer and paid boundary. Dubois and Stahl specifically identify the unresolved line at which nano-influencer activity stops looking like ordinary volunteering and begins to function as an in-kind contribution by a business or third-party actor (BC FIPA). That grey zone is politically useful because it allows coordinated political work to circulate under the appearance of ordinary civic participation. The ambiguity does not merely complicate enforcement after the fact; it is part of how the communication is made to work in the first place.

A third mechanism is third-party intermediation, especially where disclosure rules are weak or hard to apply. Dubois and Stahl note that influencers can be used to “evade existing laws and regulations impacting elections,” and public summaries of the report emphasize transparency concerns around influencer involvement in campaigns (PolCommTech.ca). In practical terms, this can mean relationships are organized in ways that do not resemble a classic political ad buy, even when funding and coordination are clearly present. The result is not necessarily total invisibility, but rather a structure of plausible deniability in which political messaging can appear informal, organic, or independently motivated.

In Luhmannian terms, another way to describe this is that influencers can switch codes midstream while keeping the same surface aesthetic. The tone, memes, pacing, and persona remain constant, but the operative distinction changes—from something like verified / not verified (journalism), to mobilize / don’t mobilize (volunteer), to convert / don’t convert (advertiser). From the audience’s perspective, this often still registers as “just them talking.” But from a systems perspective, the communication has shifted programs without changing costume, which is precisely why it is hard to detect and regulate consistently.

Finally, platform selectors themselves can serve as cover. Because platforms heavily privilege engagement and retention, political messaging can present itself as simply “what performs” rather than what persuades. Dubois and Stahl note that influencers can shape discourse and engage voters, but also spread misinformation or disinformation, and in some cases be used in interference contexts (PolCommTech.ca). This means the system’s selectors—what gets visibility, reach, and recirculation—can obscure the actor’s intent. In other words, the platform can make strategic political communication look like ordinary high-performing content, which is exactly what enables covert role-sliding to operate so effectively.

A simple diagnostic answers the question “Which code is this post actually running on?” If you’re trying to detect role-sliding, ask of any piece of content:

  1. What would count as success here? (votes? donations? reach? credibility? access?)
  2. What kind of evidence is offered? (sources? vibes? metrics? insider access?)
  3. What is the call to action? (watch/share vs sign up/donate vs contact representatives)
  4. Who benefits if it spreads? (the creator’s brand? a campaign? an issue org? a policy faction?)

That’s a very Luhmann + Dubois way to analyze it: treat the influencer as a node that can execute multiple political programs while looking like one continuous persona.
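The four diagnostic questions can even be sketched as a toy classifier keyed on the post's dominant call to action. This is my own illustration of the heuristic, not an empirical model; the categories are deliberately coarse and the mapping is an assumption.

```python
# Toy diagnostic: map a post's dominant call to action to the program
# it most plausibly serves. The category labels and the mapping itself
# are illustrative assumptions, not an empirical classifier.
def probable_program(call_to_action: str) -> str:
    """Guess which communicative program a post is running on,
    based on its dominant call to action."""
    mapping = {
        "watch/share":             "media outlet (attention / no attention)",
        "sign up/donate":          "advertiser (convert / don't convert)",
        "register/vote/attend":    "volunteer (mobilize / don't mobilize)",
        "contact representatives": "lobbyist (access / no access)",
    }
    # Anything else calls for the fuller diagnostic: probe the evidence
    # offered and who benefits if the post spreads.
    return mapping.get(call_to_action, "unclear: probe evidence and beneficiaries")
```

In practice, of course, a single post can stack several calls to action at once; the point of the sketch is only that the call to action is often the quickest tell of which code is operative.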

Connecting to Broader Propaganda Concepts

Here’s a way to braid together (1) Elizabeth Dubois / Louise Stahl’s findings about influencer roles in elections with (2) Luhmann’s selectors/routines, and then show how that explains “flooding the zone,” the 4D model, and network diffusion—specifically on TikTok and X.

Dubois and Stahl’s report frames influencers as actors who can play multiple political roles in elections—including advertisers, celebrity endorsers, campaign volunteers, media outlets, data brokers, journalists, and lobbyists—and it emphasizes that they can shape narratives, interpret news, spread misinformation or disinformation, be used in foreign interference contexts, and at times evade transparency rules (PolCommTech.ca). What makes this framework especially useful is that it shifts the focus away from seeing influencers as merely promotional figures and toward understanding them as flexible political communicators embedded in the wider election information ecosystem.

In Luhmannian terms, this can be translated by treating those roles as different programs of communication, each with its own success criteria and operational logic. The relevant routines are the repeatable production and distribution practices that keep these communications circulating across platforms, while the key selectors are the mechanisms that determine what gets chosen and amplified—now increasingly shaped by algorithms, engagement metrics, and platform-specific optimization. Taken together, this means influencer-driven electioneering works not simply because it is persuasive, but because it institutionalizes political communication inside platform selection machinery rather than outside it.

  1. Broad strategy: “govern the selectors,” not the facts

A lot of modern propaganda is not really about making a single lie persuasive; it is about making the environment select and amplify your outputs. In Luhmannian terms, the strategic shift is from persuading one audience with one message to shaping the conditions of selection themselves—so that certain kinds of content are more likely to circulate, recur, and dominate attention. What matters, then, is not only message design but also how well a message fits the platform’s own filtering and amplification logic.

One broad strategy is to manufacture case-objects that platforms can scale. Luhmann’s idea that mass media favors discrete “cases” or “events” translates neatly into platform politics, where the goal is to produce clip-sized, meme-sized, and reaction-sized cases that are easy to watch to completion (especially on TikTok), easy to quote-post and fight over (especially on X), and easy to serialize through formats like “part 2,” “new update,” or “they responded.” In this way, routine content production turns complexity into platform-native “events.” The point is not simply to communicate a claim, but to package it into a form that the platform can repeatedly surface and recirculate.

A second broad strategy is to build a division of labor across influencer roles. Dubois and Stahl explicitly distinguish roles that are tightly linked to campaigns—such as advertisers, endorsers, volunteers, media outlets, and data brokers—from more “independent” roles like journalists, lobbyists, and “average people shaping conversations” (University of Ottawa). In practice, this often looks like a pipeline: a data broker or targeting function identifies which micro-publics to reach and what emotional tone is likely to resonate; a media outlet or journalist function gives the message the appearance of interpretation or authority; a volunteer function reframes it as civic participation and mobilization; an endorser function adds status and identity cues; and an advertiser function converts attention into action, whether that means donating, joining, voting, or harassing. This division of labor makes the overall campaign more resilient, because if one element gets debunked, flagged, or throttled, the system can reroute through another role or format while keeping the broader communicative project intact.

A third broad strategy is to buy repetition through the loop. Dubois and Stahl, as summarized by the University of Ottawa, emphasize that influencer politics can be used to circumvent ad transparency and spending rules, including through undisclosed partnerships financed by third parties (University of Ottawa). Whether this is described as “dark money” or simply opaque funding, the key systems-level point is the same: money buys more iterations. It funds more creator recruitment, more content testing, more cross-posting, and more attempts to adapt messages until something “sticks” with the selectors. In that sense, funding is not just paying for reach; it is paying for repeated passes through the platform’s selection machinery until the environment itself begins to do the propagating.

  2. Micro-strategies: what influencers do inside the machine

These are not really “secret tricks” so much as role-switching plus packaging under constant metric feedback. The core mechanism is that influencers can adapt what a post is doing without substantially changing how it looks or sounds. That makes the communication feel continuous to audiences while allowing it to serve different political functions behind the scenes.

One micro strategy is covert role sliding, where the influencer keeps a stable style but switches function. The same creator can post in one continuous voice while toggling between the role of journalist (offering interpretation and credibility), media outlet (providing regular programming and updates), volunteer (mobilizing followers and their friends), endorser (signaling identity or status alignment), and advertiser (issuing calls to action), often without any clear disclosure boundary. This works because the audience experiences one familiar persona, while the system is actually processing different programs of communication optimized for different selectors. Dubois and Stahl’s ecosystem framing directly captures this problem by showing how these overlapping roles blur together and complicate tracking and regulation (PolCommTech.ca).

A second micro strategy is selector-fitting packaging. Through analytics, repetition, and performance feedback, influencers learn which formats platforms are most likely to select and amplify. In practice, this often means using high-arousal hooks (especially outrage or fear), simple attribution frames (“who did this to you”), moral sorting (good versus bad), and platform-native formatting such as tight visuals, captions, and remixable audio on TikTok, or quote-post bait, dunk formats, and “receipts” on X. In Luhmannian terms, this is an updated version of classic selectors: not only conflict or norm violation, but also predicted watch time, reply likelihood, and reshare probability. The content is shaped less around truth or coherence than around what is most likely to survive and travel through the platform’s selection system.

A third micro strategy is distributed amplification, or networked scaling. Instead of relying on one central broadcaster, a campaign can spread variants of the same message across many small and mid-sized accounts, making the attention look organic while also making it harder to moderate as a single coordinated effort. The Media Manipulation project describes this dynamic as “distributed amplification” (Media Manipulation Casebook). This structure is especially effective because it combines reach with deniability: messages can appear to emerge from many independent voices even when they are functionally reinforcing the same narrative pathway.

  3. Connecting to “flooding the zone with shit”

“Flood the zone with shit” is commonly attributed to Steve Bannon, often via a reported remark to Michael Lewis, and the phrase captures a strategy of overwhelming mediation by saturating the information environment with a constant stream of conflicting, emotionally charged, and highly “newsworthy” items (Vox). The point is not necessarily to make any single claim fully persuasive, but to make the overall environment harder to process, verify, and stabilize.

In Luhmannian terms, and especially when read alongside Dubois and Stahl, this works when political actors can continuously produce cases and push them through the system’s selectors faster than journalists, fact-checkers, or ordinary users can assemble a shared picture of reality. Dubois and Stahl’s framework helps explain why influencers are especially effective in this role: they can operate simultaneously as media outlets, endorsers, and volunteers, often at scale and within high-trust niches (PolCommTech.ca). That combination allows them to generate and circulate new “cases” rapidly while maintaining the appearance of authenticity and social proximity.

Platforms reinforce this dynamic because their selection infrastructures are optimized primarily for attention and engagement, not for epistemic coherence. In that sense, “flooding the zone” is not just about volume; it is about producing volume that is selector-compatible. What matters is not merely posting a lot, but posting in forms the platform will repeatedly choose, surface, and recirculate.

  4. Connecting to the 4D propaganda model

The “Four Ds” model, often attributed to Ben Nimmo, summarizes a common set of disinformation maneuvers: Dismiss, Distort, Distract, and Dismay (online.umich.edu). As a framework, it is useful for identifying recurring content tactics, but Dubois and Stahl’s influencer-role approach helps show how those tactics can be operationalized as a coordinated communication system rather than isolated messaging moves.

In this combined view, Dismiss—delegitimizing critics—can be carried by a mix of influencer roles, especially the endorser, lobbyist, and “journalist persona.” These roles are well suited to attacking the messenger, framing critics as untrustworthy, and triggering quote-post pile-ons. The selector fit is strong because platforms tend to reward conflict and personalization, both of which make dismissal tactics highly visible and easy to recirculate. What looks like a spontaneous backlash can therefore function as a routinized delegitimization strategy.

Distort, by contrast, often runs through journalist or media-outlet modes, where influencers present themselves as providing “context” or explaining what someone “really meant.” This is especially effective because it does not always require direct falsification; it can work by reframing facts, selectively interpreting evidence, or narrating events through moralized and simplified storylines. The selector fit here comes from narrative simplicity, moral sorting, and the serial “hot take” format, all of which are highly compatible with influencer-driven political commentary.

Distract works by shifting attention, fragmenting focus, or drowning out inconvenient topics, and it is especially compatible with media-outlet and volunteer-style roles. Influencers can redirect audiences with “look over here” cues, “new scandal” framing, or rapid-response mobilization around whatever is most attention-grabbing in the moment. This aligns closely with constant novelty and event-production, which is also where “flooding the zone” fits: distraction succeeds when the system can generate new, selector-friendly cases faster than audiences can process or verify them.

Finally, Dismay—using fear, intimidation, or demobilizing pressure—can be supported by lobbyist and volunteer roles (especially in pressure campaigns), as well as by endorser roles that activate identity-based threat cues. The selector fit here is high-arousal fear content, identity proximity, and doom framing, all of which tend to perform well under engagement-driven selection systems. In practice, this can make dismay tactics feel intensely personal and socially embedded, especially when they come from trusted creators within a community.

The key point is that the Four Ds are primarily content tactics, while Luhmann helps explain why they succeed when they align with selection routines. Dubois and Stahl then add the missing institutional layer by showing how influencer role structures make it easier to run these tactics repeatedly, across formats and audiences, as part of a broader political communication pipeline.

  5. Connecting to diffusion of misinformation through networks

Diffusion research consistently shows that misinformation spread depends on network structure, superspreaders, and platform recommender systems, not just on whether a claim is inherently “believable.” For example, research on recommender systems’ role in diffusion highlights how certain recommendation approaches—especially popularity-based and network-based systems—can become major drivers of how misinformation travels across platforms (ACM Digital Library). Likewise, classic cascade studies show that polarized communities can sustain distinct narratives with different cascade dynamics, meaning that diffusion patterns themselves can vary systematically across ideological or social groupings (PMC).

In diffusion terms, influencers matter because they function as high-centrality nodes (or as bridges into otherwise separate micro-publics), and they provide trusted injection points for claims, frames, and interpretations. Just as importantly, their content is often optimized for algorithmic acceleration—for example, through retention-friendly formatting on TikTok or interaction spirals on X. This means they are not simply passing information along; they are positioned to trigger and sustain cascades under conditions that platforms are already primed to amplify.
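The claim that high-centrality nodes drive diffusion can be made concrete with a minimal in-degree computation over a follower graph. The graph and account names below are invented for illustration; real diffusion studies use richer centrality measures and far larger networks.

```python
# Toy follower graph: each edge points from follower to followed account.
# The graph and account names are invented purely for illustration.
follows = [
    ("a", "influencer"), ("b", "influencer"), ("c", "influencer"),
    ("d", "influencer"), ("a", "b"), ("c", "d"), ("e", "c"),
]

def in_degree(edges):
    """Count followers per account: a crude proxy for how many
    injection points a node has into the network."""
    counts = {}
    for _, followed in edges:
        counts[followed] = counts.get(followed, 0) + 1
    return counts

def superspreaders(edges, top=1):
    """Return the `top` accounts with the most followers, i.e. the
    likeliest seeds for a cascade under this crude proxy."""
    deg = in_degree(edges)
    return sorted(deg, key=deg.get, reverse=True)[:top]
```

Even in this tiny graph, one account dominates the follower counts, which is the structural situation the diffusion literature describes: a claim seeded at that node reaches most of the network in one hop, while the same claim seeded at a peripheral node may never cascade at all.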

When you combine Dubois and Stahl’s role ecology (who does what), Luhmann’s concepts of selection and routines (how communications get chosen and stabilized), and diffusion dynamics (how cascades form and spread), you get a coherent explanation for why influencer-centered propaganda can be so effective. It is best understood as a coordinated effort to control the conditions under which communications become visible, repeatable, and socially real. The strategy is not only to persuade, but to structure the flow of communication so that some messages become difficult to avoid and easy to reproduce.

This also helps clarify platform differences. On TikTok, the key bottleneck is retention-based discovery, so campaigns adapt by turning politics into serial, emotionally charged cases that can hold attention and generate repeat viewing. On X, the bottleneck is more about conflict-driven interaction and recursion, so campaigns lean into reactive, argumentative formats that provoke replies, quote-posts, and recirculation. Across both platforms, influencer campaigns tend to work by packaging politics into selector-friendly cases, distributing them through creators who can slide between roles, and iterating rapidly until the platform’s selection machinery promotes them.

What does all of this mean?

What I have been trying to show is that our public and private discourse is heavily polluted by nefarious sources that constantly bombard us with utter nonsense (intentionally or unintentionally). Events like the Bad Bunny halftime performance should not elicit massive outrage. Tragedies like the murder of Alex Pretti should be unanimously morally condemned. The fact that we hold such radically divergent stances indicates a deeper problem with how we approach information, and how that approach shapes subsequent communication.

What I've been word-vomiting are the resources I've found most relevant to the topic. Together they provide a clear framework for understanding our position within a broader media ecology that has evolved radically over the past decade. Having this understanding allows an individual to be more vigilant about what they consume on these media platforms without resorting to cynicism.
