Nomological Harmony

The argument from nomological harmony is a recent theistic argument. It aims to show that there is something in need of explanation about the way the laws of nature fit together with the actual contents and states of the universe, and that theism explains that fit better than brute naturalism does.

The core idea is that the universe is not just a pile of stuff plus a separate list of laws. The laws we have actually apply to the stuff that exists, and when applied to the universe’s states they generate further states. Cutter and Saad describe the striking fact this way: our universe has a “harmonious match between laws and states.” They also say things might have been otherwise: the universe could have been “stillborn,” with laws that did not engage its initial state in any productive way.

So the argument is doing two things at once. First, it identifies a supposedly surprising fact: why do the actual laws "hook into" the actual furniture of the world so neatly? Second, it uses that "fact" as evidence for God. A divine mind or designer could explain why the laws and the world match up, whereas on a non-theistic view the match would be brute or coincidental. A simple way to picture it is the analogy often used around this argument: imagine finding a box of game pieces and a rulebook. The surprising thing is not just that there are pieces and rules, but that these rules are exactly the ones that govern these pieces. That pairing seems less surprising if someone intentionally matched them.

So, in rough premise form, the argument is something like:

  • There is a striking harmony between the actual laws of nature and the actual contents/initial states of the universe.
  • This harmony calls for explanation.
  • Theism provides a better explanation of that harmony than unguided naturalism or brute fact.
  • Therefore, this harmony counts as evidence for God.

What it is not mainly trying to do is prove God from ordinary fine-tuning alone. It is related to fine-tuning, but slightly different: fine-tuning arguments focus on the laws and constants being life-permitting, while nomological harmony focuses more basically on the fact that the laws are even apt for the kind of universe that exists at all. The main philosophical pressure point is this: why assume that a mismatch between laws and states was a live possibility? Critics may say the laws and the kinds of entities and states they govern are not independently selectable in the way the argument assumes. If so, the “surprise” may be overstated. Some critics also argue that appealing to God just relocates the coincidence unless one can explain why God would choose exactly this harmonious arrangement.

In short, the argument from nomological harmony tries to turn the fit between the universe’s laws and its actual contents into evidence for a cosmic mind, by claiming that this fit is non-trivial and calls for explanation.

I find this argument absurdly disappointing. It seems like theists are really grasping at thin air with this. And yet, some find it "ridiculously convincing". It is very bizarre how two people can look at the same argument and come to radically different conclusions about its persuasiveness. I see this argument as so low in quality that it might be not just a failed argument, but a piece of anthropological evidence disconfirming theism. So in this post, we will take a look at why I think it's utter nonsense.

The Argument Seems Severely Confused

The argument from nomological harmony seems to treat laws and world-states as though they were two independently chosen ingredients that then happen to “fit.” That is already suspicious. On many views, laws are not an extra item laid alongside reality like a rulebook beside chess pieces. They are instead our best general descriptions of regularities, structures, symmetries, or counterfactual patterns in reality. If so, asking “why do the laws match the world?” starts to sound like asking “why does an accurate description describe what it describes?” The appearance of mystery may come from sliding back and forth between world and description of world.

That is very close to a map/territory fallacy. If laws are even partly representational, the harmony is not like two separately existing things coinciding. It is more like the unsurprising fact that a map, when made well, tracks the terrain. The interesting question is not “why is there harmony between map and terrain?” but “how do creatures like us manage to construct useful maps?” That is an epistemic question about abstraction, idealization, model selection, and predictive success — not obviously a metaphysical pointer to God.

It really just seems to be a question about why mathematical modeling and theory building work: why do some models map onto reality? The argument implicitly identifies the puzzle of the effectiveness of mathematics and modeling, but then mislocates it. Scientific models work because they are selectively built to capture stable structure, often under idealizations and restricted domains. They do not mirror reality wholesale. They are tools with varying fidelity, scope, and interpretation. So if the nomological harmony argument says “look how well the laws fit the world,” one response is: of course they do, insofar as they are the polished outputs of a practice that keeps the models that work and discards the ones that fail. That does not dissolve every realism question, but it undercuts the idea that there is some startling cosmic coincidence here.
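That selection effect can be caricatured in a few lines of code. This is a deliberately toy sketch: the hidden "world" rule and the candidate models are invented for illustration, not a claim about how science actually works in detail.

```python
# Toy "world": a hidden rule generating observations.
def world(x):
    return 3 * x + 1

# Candidate "models": hypothetical guesses at the rule, varying in fit.
candidates = [lambda x, a=a, b=b: a * x + b
              for a in range(1, 5) for b in range(0, 3)]

# Scientific practice, caricatured: test each model against data and
# keep only those whose predictions succeed.
data = [(x, world(x)) for x in range(10)]
survivors = [m for m in candidates
             if all(m(x) == y for x, y in data)]

# The survivors "harmonize" with the world by construction:
# mismatched models were filtered out, not miraculously avoided.
print(len(candidates), len(survivors))  # prints "12 1"
```

The point of the sketch is just the filter: if the only laws we ever write down are the survivors of this kind of selection, their fit to the world is an artifact of the procedure, not a cosmic coincidence.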

The argument often gains traction by quietly assuming a strongly governing conception of laws: laws as something like metaphysical rules imposed on reality. But that is controversial. There are rival accounts:

  • Humean: laws summarize the mosaic of particular facts.
  • Best-system: laws are the axioms of the simplest, strongest true systematization.
  • Dispositionalist / necessitarian: laws arise from powers, essences, or modal structure in things.
  • Structuralist variants: laws capture mathematically expressible structure.

On the first two especially, “why do the laws fit reality?” is close to malformed, because the laws are extracted from reality as part of the best description of it. They are not an independently floating code that then needs to latch on.

A Tarskian picture pushes you to distinguish clearly between object language and metalanguage, between reality and the sentences or formal systems used to describe it. Once that distinction is enforced, some formulations of nomological harmony can look like category mistakes: they talk as if formal descriptions stand over against the world in the way a legislator stands over against citizens. But from a semantic or formalist angle, a law statement is not a little metaphysical governor. It is a sentence in a theory, interpreted in relation to a model or to the world. So the “harmony” may just be the ordinary satisfaction relation between theory and domain, inflated into metaphysics.

I really take issue with the chess analogy. Chess rules are constitutive norms: they create the game. Physics is not like that, unless one already builds in a heavyweight metaphysics of laws. From a formalist perspective, mathematics does not command reality; it provides formal structures that may or may not prove useful in representing patterns in reality. The chess analogy smuggles in exactly the picture the argument needs: rules first, pieces second, fit as contingent coordination. But that is not how many philosophers of science think physical theorizing works. The analogy is doing illicit work. I'll dedicate a section to this later.

The argument seems fundamentally cheap. The argumentative move is:

  1. Recast a familiar descriptive success as a surprising metaphysical gap
  2. Treat that gap as in need of special explanation
  3. Offer God as the unifying explainer.

That is a classic structure of explanatory opportunism. The complaint is not merely “this is God of the gaps,” but more specifically: the gap is manufactured by a tendentious metaphysics of laws. If the gap only appears after you model laws as quasi-Platonic prescriptions floating free of the world, then theism is not solving a neutral datum. It is cashing out a problem introduced by the argument’s own assumptions.

The argument from nomological harmony reifies scientific laws into independently existing rules and then treats their applicability to the world as a surprising coincidence. But if laws are descriptive abstractions or best-system summaries, and models are representational tools, then no such coincidence arises. The argument confuses map with territory, mistakes semantics for metaphysics, and manufactures an explanatory gap that theism is then invited to fill.

Possible Diversions

There are some things a defender might say in reply. They might argue that even if laws are descriptive, it remains surprising that reality is so deeply unified, compressible, mathematically tractable, and stable under elegant generalization. That would shift the argument away from “laws fitting states” and toward a broader intelligibility argument. It is a different, more sophisticated claim than the crude rulebook-and-pieces version. But even these intelligibility arguments are somewhat ridiculous.

What these intelligibility arguments often do is take the success conditions of theory and redescribe them as mysterious features of reality. That flips the order of explanation. We call a theory intelligible, unified, compressible, tractable, elegant, or generalizable because those are the features that make it usable as a theory for finite thinkers like us. So when someone says, “Isn’t it amazing that reality is describable in unified and compressible ways?”, the natural reply is: you are selecting for descriptions that have those virtues. The filter is coming from us. These are theoretical virtues we ascribe to models and theories.

That does not mean reality contributes nothing. The world has to contain enough regularity for some modeling to work at all. But the theistic move usually overreaches by taking this modest fact (some structures in reality are stable enough to support modeling) and inflating it into a much stronger one (reality is metaphysically pre-arranged for elegant cognition by minds like ours). That inflation is doing most of the work.

The main issue with this intelligibility argument is it has the direction flipped. These arguments often say:

  • The world is intelligible.
  • That is surprising.
  • Therefore intelligence is behind it.

But really the better framing is:

  • We count as “intelligible” those aspects of the world our cognitive and theoretical practices can successfully regiment.
  • Our theories are deliberately shaped around simplicity, scope, compression, and inferential manageability.
  • So it is no shock that successful theories exhibit those virtues.

In other words, “intelligibility” is not a raw datum independent of our practices. It is already partly a relation between world, mind, and method.

Consider a natural language example that uses the same logic. If someone says, “Isn’t it astounding that humans can communicate via implicature, pragmatics, and shared context? Surely this cries out for transcendent explanation,” the right response is not necessarily to deny that communication is remarkable. It is to notice that the supposed mystery is being generated by an artificially impoverished picture of language. If you start with a bizarre model where meanings are private mental atoms and sentences are just bare literal encodings, then yes, successful communication looks miraculous. But once you understand language as a socially embedded, inferential, context-sensitive practice shaped by shared forms of life, the “miracle” dissolves. The surprise was manufactured by the bad starting model.

I think this is a strong analogy for these intelligibility arguments. They often begin from a weird picture where reality is fundamentally opaque chaos, mind is somehow external to it, and theory is a kind of detached mirror. Then they marvel that the mirroring works at all. But that setup is already distorted. We are organisms inside the world, with cognitive capacities shaped to track enough structure for action, prediction, and coordination. Our conceptual tools and scientific methods are also historically refined to exploit recurring structure. Once that is in view, intelligibility is no longer a cosmic shock. It is an outcome of embedded agents developing representational practices under selective constraints. These arguments mistake an achievement of situated cognition for a metaphysical feature in need of supernatural explanation.

The tractability "surprise" is quite absurd. Intractable theories are, by definition, not understandable. “Reality is understandable” often just means: we have managed to build some representations of some domains that are understandable enough for our purposes. That is not the same as saying the world comes pre-labeled for ideal rational apprehension. Also, many of these arguments quietly ignore how much of science is not especially intuitive or tractable at all. Quantum theory, turbulence, strongly nonlinear systems, consciousness, complex biological development, macroeconomic dynamics — these are hardly cases of transparent intelligibility. Science often advances by using mathematics and instruments to cope with what is not naturally intelligible to us. So even the premise that the world is neatly and pervasively intelligible is overstated.

Intelligibility arguments reverse the explanatory order. Unity, compression, tractability, and generalizability are not surprising metaphysical properties waiting to be explained by God; they are theoretical virtues built into our representational practices. We construct, refine, and retain theories partly because they exhibit those virtues. So the fact that successful theories are intelligible tells us as much about the norms of inquiry and the limits of cognition as it does about the world itself. The argument treats the outputs of epistemic selection as if they were brute inputs from reality. That is why the “surprise” feels contrived.

Saying “the world’s intelligibility needs divine explanation” is like saying “human communication needs divine explanation because implicature works.” In both cases, the sense of mystery comes from starting with an artificially thin model — of language in one case, of science and representation in the other. Once the relevant practices, shared structures, and selection effects are included, the miracle evaporates.

This is Yet Another Common Argument Pattern

I think this is a portable argumentative template rather than a genuinely independent family of arguments. The template looks like this:

  1. Isolate some phenomenon.
  2. Describe it as deeply surprising, brute, or resistant to current naturalistic explanation.
  3. Introduce theism as a possible high-level explanation.
  4. Quietly slide from “possible” or “compatible with” to “best” or “actual.”
  5. Reinterpret any natural explanation as merely the means by which God acted.
  6. When the original gap closes, relocate the mystery one level up.

I see this outside of philosophical apologetics too. For example, I once heard a pastor claim it was a miracle that someone’s bone healed. He then restated the claim: it is a miracle that bones can heal at all. On another occasion the same pastor said it was a miracle that some cancer was cured, but then stepped back and said it was a miracle the doctors had the knowledge to cure it. In both cases the goalposts shift: after years of discoveries showing how cancer can go into remission, or the mechanisms of bone healing, we take a step back and identify some other explanatory gap.

That last step in the pattern is the key. It makes the strategy unusually resilient, because it is not committed to any particular empirical gap. If one gap closes, the argument just migrates:

  • not this healing, then healing as such;
  • not this remission, then the existence of medical knowledge;
  • not this design feature, then the laws;
  • not the laws, then their intelligibility;
  • not intelligibility, then the very possibility of rational inference.

This is a kind of explanatory ratcheting. The explanandum is repeatedly redescribed at a higher level of abstraction whenever lower-level accounts improve. That is why these arguments can feel endlessly renewable. Their strength does not come from a stable evidential base; it comes from the ability to reframe what the real mystery was supposed to be all along. Notice what happens in the case with the pastor. The claim becomes harder and harder to falsify, because it retreats from a specific intervention claim into a more global dependency claim. The cost is that it also becomes thinner and less discriminating. It explains everything in the same generic way.

That same pattern appears in philosophical apologetics. Fine-tuning, psychophysical harmony, moral knowledge, consciousness, rationality, nomological harmony, mathematical applicability, beauty, and so on can all be presented in this format:

  • here is some phenomenon,
  • naturalism allegedly cannot make satisfying sense of it,
  • theism can,
  • therefore theism gains support.

What makes this feel unsatisfying is that the arguments are often not independent. They are often instances of the same underlying maneuver applied to different targets. And that means two important things. First, the arguments may not provide cumulative force in the way apologists often suggest. If they all rely on the same questionable move — manufacturing surprise, overstating naturalistic insufficiency, and treating theism’s generic compatibility as explanatory superiority — then ten such arguments are not ten independent lines of evidence. They are one strategy repeated ten times. Second, theism often functions here less as a genuine explanation than as an explanatory universal solvent. It can absorb any outcome:

  • if X happens, God intended it;
  • if X is lawlike, God authored the laws;
  • if X is random, God chose to work through indeterminacy;
  • if X is understood naturally, God created the natural process;
  • if X is not understood naturally, God explains the gap.

A framework that can accommodate every possible evidential state may have low discriminatory power. That is a serious philosophical weakness. Therefore, many of these arguments do not uncover new evidence for theism; they instantiate a reusable schema for relocating explanatory dependence onto God whenever a domain can be framed as incomplete, surprising, or brute.

That is also why “naturalism can’t explain X” is often too quick. Very often what is really meant is one of these:

  • naturalism has not yet fully explained X,
  • there is no consensus explanation of X,
  • current explanations of X leave residual philosophical questions,
  • or the speaker personally finds naturalistic explanations existentially unsatisfying.

Those are much weaker claims than “naturalism cannot explain X.” And the move from those weaker claims to “therefore theism explains X” is usually where the slide happens.

Many theistic arguments are not truly distinct arguments but reiterations of a common schema: identify some phenomenon, portray it as resistant to naturalistic explanation, offer theism as a global alternative, and then treat any future naturalistic account as merely the mechanism through which the theistic explanation operates. This makes the strategy flexible but also evidentially weak, because it can be reapplied indefinitely to new target domains without increasing explanatory precision. The pattern does not so much solve explanatory gaps as preserve them by redescribing them at progressively higher levels of abstraction.

That is why the arguments can proliferate endlessly. If the method is “find any explanandum and embed it in a larger frame where naturalism is said to fall short,” then there is no principled stopping point. You can generate infinitely many variants. That does not prove every such argument fails. But it does show why one should be suspicious of their apparent diversity. The multiplicity may be rhetorical, not evidential.

The Not-Naturalism, Therefore Theism Fallacy

The step from “naturalism does not explain X” to “theism explains X” is not a small inference; it is the entire burden of the argument. And in many apologetic contexts, that burden is barely argued for at all.

This is what often happens. There is a live explanatory difficulty, or at least something presented as one. Then “God” is introduced not as a worked-out explanation with its own constraints, predictions, and costs, but as a kind of explanatory placeholder. The mere fact that theism can be verbally attached to X gets treated as though X has now been explained. But that is much too quick. Saying “God did it” is not yet an explanation in any robust sense unless we are told why God would do it, why in this way, why under these constraints, why not otherwise, and why this hypothesis is better than its competitors.

You see this constantly in the fine-tuning argument. Theism is often treated as if it reduces arbitrariness, when in fact it may simply relocate it. Instead of "why these physical constants?", the space of possible questions explodes into:

  • “Why did God choose these constants?”
  • “Why a universe with this kind of life?”
  • “Why embodied biological life at all?”
  • “Why life through cosmological fine-tuning instead of direct creation?”
  • “Why this level of suffering, waste, and indirection?”
  • “Why a law-governed universe requiring delicate parameter ranges rather than a world where life is simply sustained by divine will?”

So rather than eliminating contingency, theism often introduces a huge space of divine-choice questions. The state-space explodes under theistic alternatives. Fine-tuning arguments frequently rely on treating the relevant possibility space as if it were sharply constrained under naturalism but comparatively simple under theism. Yet theism, if anything, can blow open the space of possibilities. Under a bare naturalistic setup, maybe there is some fixed set of constants or laws whose possible values we are considering. Under theism, by contrast, one often seems committed to far more freedom:

  • God could create different laws.
  • God could create different constants.
  • God could create different dimensionalities.
  • God could create no stable matter at all.
  • God could create minds without biology.
  • God could create life under entirely different physical regimes.
  • God could create a world without needing fine-tuned intermediaries.

So if one is really tracking explanatory flexibility, theism may massively enlarge the space of possible worlds rather than narrow it. That means its appeal to “this world is not so improbable on theism” is often under-motivated. Why think this life-permitting structure is especially expected on theism, as opposed to any of the countless other ways a god might create value, minds, order, or moral agents? That is the neglected issue of theism’s predictive looseness. A good explanation does not merely permit the observed data; it should make the data more expectable than relevant alternatives do. But bare theism is often so unconstrained that almost anything could be said to fit it after the fact. That is, of course, because it is a pre-theoretic framework, not designed for modern inference.
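The point about predictive looseness can be made quantitative with a toy Bayesian sketch. All the numbers here are invented for illustration; the sketch only shows the structural point that a hypothesis which merely permits many outcomes spreads its probability mass thinly.

```python
# Toy illustration of likelihood dilution. A hypothesis that merely
# *permits* an outcome among vastly many alternatives makes that
# outcome less expected than a tighter rival does.
# All numbers below are made up for illustration.

def likelihood(num_outcomes_permitted):
    # Crude model: probability mass spread uniformly over permitted outcomes.
    return 1.0 / num_outcomes_permitted

# A "narrow" hypothesis permits few outcomes; a "loose" one permits many
# (different laws, constants, dimensionalities, disembodied minds, ...).
narrow = likelihood(10)
loose = likelihood(1_000_000)

# Bayes factor for the observed outcome: narrow vs loose.
bayes_factor = narrow / loose
print(bayes_factor)  # prints 100000.0
```

The design point: enlarging the space of things a hypothesis could equally well have produced lowers the probability it assigns to the actual data, so flexibility is an evidential cost, not a benefit.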

Theism gradually accumulates ad hoc hypotheses to accommodate such failures. Once challenged, apologists often add layer after layer in response to the follow-up questions:

  • God values embodied creatures.
  • God prefers elegant laws.
  • God wants a stable, discoverable cosmos.
  • God values free processes.
  • God wants creatures to emerge gradually.
  • God balances hiddenness and revelation.
  • God has morally sufficient reasons for suffering.
  • God prefers this kind of biological life.

So the actual inference problem is not just “naturalism can’t explain X, therefore theism explains X,” but more precisely, “naturalism allegedly struggles with X, therefore theism makes X likely enough, specifically enough, and non-arbitrarily enough to count as a superior explanation.” That stronger claim is what usually goes undefended. “Theism can explain X” is often being used in a very thin sense — not that theism independently predicts or tightly accounts for X, but merely that one can tell some story on which God would have reason to allow or produce X. But that is cheap. You can tell such a story about almost anything. The real question is whether the story is constrained, non-ad hoc, and better than competing accounts.

The crucial inference in many theistic arguments is simply assumed rather than earned. From the alleged incompleteness of naturalistic explanation, it does not follow that theism explains the phenomenon. To establish that, one would need to show that theism makes the phenomenon more expected, does so with fewer arbitrary assumptions, and does not merely relocate the explanatory burden onto divine choice. In many cases, theism instead expands the space of possibilities and requires additional ad hoc claims to recover the observed world. So yes: “theism can explain it” often functions less as a genuine explanatory achievement than as a stop-gap permission slip — a way of halting further inquiry by labeling the phenomenon as something divine intention could in principle cover.

Back to the Chess Analogy

The chess analogy is not just weak; it actively smuggles in the very metaphysics it is supposed to support. The central problem is that chess has constitutive rules, while physics — on many views — has at most descriptive laws. That is an enormous difference.

In chess, the rules help define what chess even is. A bishop is a bishop partly because of how it is allowed to move. The identity of the piece is tied to the rule-system. That immediately undermines the analogy’s intended point: the “fit” between rules and pieces is not some extra coincidence needing explanation. It is internal to the practice. The pieces are what they are in virtue of the rules. The wooden object itself is incidental: a bishop can be wood, plastic, a mark on paper, a digital icon, or purely imagined. What matters is not the material token but its place in a rule-governed representational system. So when the nomological argument says “look how surprising it is that these rules fit these pieces,” it is treating as contingent what is actually constitutively linked.

So the analogy collapses in one of two ways. If the chess case is taken seriously, then the “fit” between pieces and rules is not surprising at all, because the pieces are defined through the rules. In that case the analogy does not support nomological harmony. If instead the analogy is adjusted so that the pieces are genuinely independent of the rules, then they cease to be chess pieces in the relevant sense. Then the analogy no longer resembles chess. That is a serious dilemma.

Another obvious problem with this analogy is that it straightforwardly equivocates on the meaning of "rule". The analogy trades on the ordinary human sense of rules as prescriptions or stipulations — like game rules, legal rules, classroom rules. But then it quietly extends that image to laws of nature, as if laws were similarly imposed instructions that reality obeys. That is a completely unjustified leap. In the game case, rules are explicit, conventional, human-authored, normative, and constitutive of the activity. In science, laws are often taken to be inferred from patterns, theory-laden, revisable, representational, or explanatory and predictive summaries. Those are not remotely the same thing. The analogy works only by gliding from one sense of “rule” to another without argument. And once that slide happens, the rhetorical force is easy to generate: “Just as chess pieces are paired with chess rules by a mind, so too physical entities are paired with physical laws by a mind.” But that is exactly the illicit move. It is not a discovery; it is a projection of the game-model onto nature.

Scientific "laws" are theory-dependent. On many philosophies of science, “laws” are not ontologically basic prescriptions written into reality. They are abstractions appearing within scientific representation. They depend on idealization, scale, background theory, and explanatory aims. So to ask “why do the rules fit the entities?” is already misleading. The laws are not a separately given rulebook sitting next to the furniture of the universe. They are part of our best systematization of that furniture and its behavior. That is why the analogy feels backward. It is like saying: “Isn’t it surprising that the contours on this topographic map match the mountain?” No: the map was constructed to represent those contours. Likewise, if laws are inferred, model-relative, and representational, then the supposed surprise is largely an artifact of reification.

And again, even if we grant the point, the theistic explanation is ridiculously unsatisfactory. Suppose someone says, “God intentionally matched these entities with these rules.” That does not explain much. It just restates the pairing in intentional language. Then the real questions begin:

  • Why these rules rather than others?
  • Why this kind of universe rather than another?
  • Why deterministic rules at all, if libertarian freedom matters?
  • Why probabilistic rules, if order matters?
  • Why elegant laws if hiddenness is desired?
  • Why discoverable laws if so much suffering and confusion remain?
  • Why embodied, law-bound creatures instead of direct divine sustenance?

So the explanatory burden is not removed — it is displaced into divine psychology and divine choice. And because those choices are usually unconstrained, the account starts to require ad hoc supplements: God values elegant order, or stable causation, or creaturely freedom, or discovery, or gradual development, or hiddenness, or regularity, and so on. That's not explanatory economy; it is patchwork.

The chess analogy fails because chess pieces are constituted by the game’s rules, whereas physical entities are not straightforwardly constituted by scientific laws in the same sense. The analogy equivocates between prescriptive human rules and descriptive scientific laws, then imports intentional design through that equivocation. And even if one granted the analogy, “God matched the rules to the entities” would not explain the fit without further ad hoc assumptions about why God selected these rules rather than others.

A Computational Perspective

A computational perspective makes the chess analogy look even worse, because it exposes how vague and overloaded the argument's notion of a “rule” is. From the standpoint of computation, a rule is not automatically a little command imposed on passive stuff. A rule can mean very different things: a transition function in an automaton, an update rule in a cellular automaton, an inference rule in a formal calculus, a production rule in a grammar, an algorithmic procedure, a constraint on admissible states, or just a compact description of input-output behavior. These are not all the same. And the nomological argument tends to slide across them as if they were one unified metaphysical kind.

That matters because in computation, the relation between “rules” and “things governed by rules” is often not one of external matching at all. Take a finite automaton. The states, alphabet, and transition function are defined together as one formal system. It would be confused to ask, “Why do the transition rules happen to fit these states?” The states are individuated partly by their place in the transition structure. There is no extra coincidence. The rule/state pairing is internal to the formalism. Same with a Turing machine. The machine table is not some externally imposed lawbook that accidentally matches a pre-existing tape ontology. The whole machine description specifies a computational system. If you changed the transition table enough, you would not have “the same machine with different rules”; you would have a different machine description. In formal systems, representations and rules are often co-defined, not independently paired and then marvelously matched.
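The finite automaton point can be made concrete. This is a minimal toy example (the state names and alphabet are arbitrary choices): the states, alphabet, and transition function are written down together as one formal system, so there is no separate rulebook that "happens to fit" independently existing states.

```python
# A minimal DFA: states, alphabet, and transition function are
# specified together as one formal object. Asking "why do the
# transitions fit these states?" is malformed, because the states
# are individuated partly by their place in the transition structure.
dfa = {
    "states": {"even", "odd"},
    "alphabet": {"1"},
    "start": "even",
    "accept": {"even"},
    "delta": {("even", "1"): "odd", ("odd", "1"): "even"},
}

def run(dfa, word):
    # Follow the transition function symbol by symbol.
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]

# This machine accepts strings of "1"s of even length.
print(run(dfa, "11"))   # prints True
print(run(dfa, "111"))  # prints False
```

Change the `delta` table enough and you do not have "the same states under different rules"; you have a different machine description. That is the internal, co-defined relation the paragraph above describes.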

A second problem comes from the distinction between syntax and semantics. Computational rules are usually syntactic. They specify symbol manipulation. Whether those symbols represent anything at all is another matter. A formal system can be perfectly well-defined without denoting a physical reality. So when the nomological argument says, in effect, “look how amazing it is that the rules fit the entities,” it risks conflating three distinct things: a formal rule system, an interpretation of that system, and the physical process being modeled. That is a serious category mistake. In computation, you learn quickly that a rulebook by itself does not magically latch onto reality. Interpretation, encoding, semantics, and implementation all matter. This is where the analogy to scientific laws breaks down badly. Scientific laws are not just bare transition tables sitting in the void. They are part of theoretical models that require interpretation, idealization, parameterization, measurement conventions, and domain restrictions. So the relevant relation is not “rules matched to pieces,” but something more like “formal structure used to model recurring patterns under an interpretation.” That is much less mysterious.
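The syntactic point can be illustrated with a toy string-rewriting system (the particular rules are arbitrary, chosen here purely for illustration). It is fully well-defined as symbol manipulation whether or not “A” and “B” denote anything:

```python
# A tiny string-rewriting system: purely syntactic replacements, with no
# interpretation attached. The rules are arbitrary illustrative examples.
RULES = [("AB", "BA"), ("BB", "A")]

def step(s: str) -> str:
    """Apply the first applicable rule once, scanning left to right."""
    for lhs, rhs in RULES:
        i = s.find(lhs)
        if i != -1:
            return s[:i] + rhs + s[i + len(lhs):]
    return s  # no rule applies: the string is in normal form

def normalize(s: str, max_steps: int = 100) -> str:
    """Rewrite until no rule applies (bounded to guarantee termination)."""
    for _ in range(max_steps):
        t = step(s)
        if t == s:
            return s
        s = t
    return s
```

The system runs fine as a bare calculus. Only if we add an interpretation (say, reading the strings as states of some physical process) does the question of modeling reality even arise, and that question is answered by the interpretation, not by the rules themselves.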

A third issue comes from multiple realizability. In computer science, the same abstract computation can be implemented in many different physical substrates. A chess program can run on silicon, paper, relays, or in principle human clerks following procedures. Likewise, the same formal rule structure can be represented in many ways. That means the material “pieces” are often irrelevant to the rule structure. What matters is the abstract organization, not the token substrate. So the chess analogy’s emphasis on “these rules fitting these pieces” is computationally naive. The pieces are not the deep issue. The issue is the state space and legal transformations over it. But then the nomological argument loses its grip, because the supposed wonder of “matching rules to pieces” was doing the rhetorical work.
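Here is a small sketch of that point (the mod-3 counter is my example): the same abstract rule structure, a three-state cycle, realized over two different token “substrates”:

```python
# Multiple realizability in miniature: one abstract structure (a cycle of
# three states), two realizations with different tokens.
def make_counter(tokens):
    """Build a mod-3 step function over any three distinct tokens."""
    a, b, c = tokens
    nxt = {a: b, b: c, c: a}
    return lambda state: nxt[state]

step_ints = make_counter([0, 1, 2])             # numeric substrate
step_words = make_counter(["lo", "mid", "hi"])  # string substrate
```

Both realizations have exactly the same transition structure; only the token “pieces” differ, and they carry no explanatory weight. The question worth asking is about the abstract organization, not about which pieces the rules were “matched” to.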

A fourth and even deeper point comes from complexity theory. From a computational perspective, not every rule system is equally tractable, compressible, or informative. Some systems are easy to simulate, some are intractable, some are chaotic, some are computationally universal, some are irreducible in the sense that no shortcut predicts their behavior better than stepping through them. This matters because intelligibility arguments often say: “Isn’t it amazing that the world is governed by elegant, tractable rules?” But computationally, that is already a selection effect. We preferentially understand, retain, and praise theories that compress phenomena and support feasible inference. A world may have immense local complexity while still admitting some coarse-grained tractable descriptions. That is enough for science to get started. So from a complexity perspective, there is no deep surprise in the fact that our successful theories exhibit computational virtues. Of course they do. A theory that was maximally incompressible, computationally useless, or intractable would not function well as a theory for bounded agents. Again the direction is flipped. The argument treats tractability and compressibility as mysterious gifts from reality, when in many cases they are conditions of successful theorizing by limited reasoners.
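The selection effect can be illustrated crudely in code, using zlib compression as a rough stand-in for “admitting a short theory” (the particular sequences below are arbitrary examples of mine):

```python
import random
import zlib

# Compressibility as a selection effect: patterned data admits a tiny
# summarizing "rule"; incompressible noise admits no short theory.
patterned = b"01" * 500  # fully summarized by the rule "repeat 01"
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))

def ratio(data: bytes) -> float:
    """Compressed size over raw size: lower means a shorter description."""
    return len(zlib.compress(data)) / len(data)
```

The patterned data compresses to a small fraction of its size, while the noisy data barely compresses at all. Bounded reasoners retain and celebrate exactly the descriptions that behave like the first case, which is why it is unsurprising that our successful theories exhibit compression.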

A fifth problem: rules do not explain their own implementation. In computer science, an abstract automaton description is not the same as a physical implementation. Knowing the transition function of a cellular automaton does not by itself explain why some physical medium instantiates it. Likewise, if someone says “God wrote the rules,” that does not explain why there is a world instantiating them, why it instantiates these and not others, or why the implementation has the specific features we observe. So even on the most charitable reading, the theist is moving from “formal description” to “metaphysical source” far too quickly.

A sixth point: computation undermines the prescriptive reading of law. In many computational systems, the “rule” is just a concise way of describing state evolution. Nothing is being commanded. The system does not “obey” the rule in the normative sense. Rather, the rule captures its transition pattern. This should make us suspicious of importing the everyday notion of rules as prescriptions into nature. That is exactly what the chess analogy does wrong. Chess rules are normative and constitutive because chess is an artifact practice. Physical law, from a computational or formal viewpoint, is better thought of as a description of transformation structure, not a commandment.

So the argument equivocates between rules as prescriptions, rules as constitutive stipulations, rules as syntactic transition functions, and laws as descriptive summaries.

From a computational perspective, the analogy between chess rules and laws of nature is deeply misleading. In formal systems, states and transition rules are typically specified together, so their “fit” is not an additional coincidence. Rules are often syntactic descriptions of state evolution, not externally imposed prescriptions. The same abstract rule structure can be multiply realized in different substrates, making the material “pieces” largely irrelevant. And virtues like tractability, compressibility, and generalizability are partly artifacts of epistemic selection by bounded reasoners. So the alleged surprise of “nomological harmony” results from conflating constitutive human rules with descriptive scientific models, and from misunderstanding the role of formal rules in computation.

Final Thought

The nomological argument treats nature as though it were a board game whose pieces were intentionally paired with a rulebook. Computation suggests a very different picture: formal rules define state transitions within a representation, and successful models are selected for computational usefulness. Once that is clear, the supposed harmony is not a deep coincidence but largely a byproduct of how formal description works.
