Core Concepts in Economics: Fundamentals

This post serves as a primer for future posts related to topics in economics. While not necessarily core to a critical thinking curriculum, economic concepts are vital for understanding a modern policy environment. I cannot begin to describe the vast disconnect between basic economic literacy and the folk models held by large swaths of the general public. As an economist myself, perhaps I am biased in thinking many of our problems in society are in some way caused by this illiteracy. However, I think there is a strong case to be made that obsolete, ideologically laden, underdeveloped, and empirically unsound economic mental models held by various groups are contributing significantly to the deterioration of the nation's welfare. Therefore, I think it's important to lay out how professional or academic economists come to their conclusions. The public broadly has no idea what economists do: how they reason, their models of the economy, what a model even is, their sources of information, etc. This post introduces some of the most basic concepts implicit in most economic theory and regularly used by applied economists. Let's start with some basics. These might seem abstract initially, so I'll try to concretize them:

Economy

An economy is commonly defined as:
"a system of interactions among agents (households, firms, and governments) in which scarce resources are allocated through markets and institutions to produce, distribute, and consume goods and services."
This is actually a lot to unpack: what is a system? What is a market? What is an agent? What are goods? Each of these terms within the definition is just as abstract as the definition itself. It's also important to point out that different schools of economics will often define an economy differently, emphasizing certain aspects they find to be of crucial importance. For our purposes here, just note that this definition will likely be broadly accepted by most economists and is consistent with definitions found in textbooks.

After years of study, I've come to appreciate the conceptualization discussed by George E. Mobus in "Systems Science: Theory, Analysis, Modeling, and Design". There, the economy is treated as an instance of the broader class of systems called Complex Adaptive Evolutionary Systems (CAES). The author conceives of a "generic economic system", usually embedded within a larger system (like an ecosystem), characterized by elements such as how the system obtains resources in energy and materials, how it does internal work to grow, replicate, and maintain itself, and how it exports waste back into its environment. In other words, an economy is a type of pattern within a broader set of CAES. Specifically, he states:
"What all of these systems do internally is to use the energy and material imports in work processes that build and maintain essential internal structures. Waste products are inevitable in all such products in nature. The organization of the work processes and their ongoing management (which collectively we can call the governance of the system) is the pattern of which we spoke. Whether we are talking about the internal organization and dynamics of a living cell, of the multicellular organism of a Neolithic tribe, a single household, a complex organization, or a modern nation state, the patterns of organization and dynamics follow them. These patterns cluster under the title 'economy'."
This is an even more generic definition that invokes concepts from systems science. It's analogous to a biological subsystem that performs vital functions ensuring the survival of the supersystem in which it is embedded. Doyne Farmer, a complexity economist, has drawn an analogy between an economic system and a metabolic system. The economic system functions like a metabolic system in that it consumes resources, processes energy and information, and adapts over time. It is always changing, growing, or decaying. Its primary function is the flow of energy and matter. Using this analogy, we can conceive of an economic metabolism: how economies transform inputs into outputs, similar to how living organisms metabolize food into energy and biomass.

I'll admit that this is definitely a heterodox approach, but you can see the parallels with how mainstream economists define an economy. You might see in standard textbooks the etymology of the term "economy": it derives from "oikonomia", a Greek term meaning "management of a household". In ancient Greece, the "masters" of the household had to practice "economic activities", like raising sufficient food to support the household. Mobus states that "this meant managing processes so that the household recognizes an accumulation of wealth that would ultimately provide for the support of subsequent generations." In modern times, these households are embedded within a broader complex system and interact with institutions such as "markets" to achieve these ends. Mobus offers another abstraction:
"At an abstract level, an economy is a fabric of transactions, a network of sub-processes of resource acquisition , transformation (value added), production, and consumption, in which biological beings are sustained.... In the systems view, an economy is a way of managing the flows of high potential energy and of the transformation of high entropy materials into low entropy assets via work processes that use that high potential energy to do useful work. The low entropy assets support the existence and stability of the system."
The key thing to note with these definitions is that economists are concerned with the flow and utilization of scarce resources, typically within some national boundary, but increasingly at a global scale.

Models (Modeling)

So, what is a model? At its core, a model is a simplified representation of reality. It's a tool we use to describe, explain, or predict how something works. Consider a map: it's not the territory, rather it's a simplified version that leaves out irrelevant details, allowing you to navigate the terrain efficiently. A model is a map of a system of interest. Many different models can describe the same system. A single system like the economy, or the climate, or a human body can be described in many different ways, depending on what you want to understand or predict, the data you have access to, and, generally speaking, the question you are asking. In economics, there is a plethora of models that describe the same economy. The choice of model depends on the question, the context, and the needed level of accuracy. The level of fidelity or granularity of the model also depends on these considerations. Low fidelity models are fast, easy to understand, and capture big-picture behavior. They are good for teaching, quick decisions, or back-of-the-envelope estimates. Medium fidelity models capture more details, attempting to balance tractability and accuracy. High fidelity models are very detailed, often computational, and can model complex interactions; they are used when precision really matters. Many economic models are "coarse grained" models; coarse-graining means reducing a system's complexity by grouping together small-scale components and modeling only the aggregate behavior. Think of it like pixels on a screen: fine grained would be something like ultra HD, while coarse grained would be something lower resolution, but you can still make out the general objects on the screen.
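To make coarse-graining concrete, here is a minimal Python sketch (with made-up numbers) that collapses ten thousand heterogeneous household incomes into a single aggregate, the kind of reduction a representative-agent model performs:

```python
import random

random.seed(42)

# Fine-grained description: one income per household (hypothetical data).
household_incomes = [random.lognormvariate(10.5, 0.6) for _ in range(10_000)]

# Coarse-grained description: collapse 10,000 numbers into one aggregate.
mean_income = sum(household_incomes) / len(household_incomes)

# The coarse model can answer some questions ("what is average purchasing
# power?") but destroys the information needed for others
# ("how many households fall below a poverty line?").
poverty_line = 20_000
share_below = sum(inc < poverty_line for inc in household_incomes) / len(household_incomes)
print(f"mean income: {mean_income:,.0f}")
print(f"share below poverty line: {share_below:.1%}")  # invisible to the coarse model
```

Which resolution is "right" depends entirely on the question; the coarse model is not worse, it just answers fewer questions.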

Everyone has a model of reality in their mind. Our brains construct maps of reality, which we use to navigate the terrain. The deliberate act of modeling allows us to interrogate, investigate, and revise these models. This is particularly useful for explicating our assumptions. Models are highly sensitive to their underlying assumptions. If we are not aware of these assumptions, our model will be inaccurate and useless. Every mathematical model starts with assumptions about what variables matter, how the variables relate, what is constant vs what can change, the environment in which the system is embedded, and the fundamental unit of analysis (in economics, our "agents"). Even if our model is not mathematical, it still depends on these sorts of assumptions. The use of mathematics in the physical sciences and economics is for clarity. Language is incredibly ambiguous and vague; mathematization allows us to be incredibly precise with our definitions, leaving little room for ambiguity when we must interpret the model. Model assumptions define the world. Models don't discover truth; they operate within a truth you've defined. So if your assumptions are flawed, your model will produce misleading results, even if the math is perfect. Some models are relatively unaffected by slight changes in the underlying assumptions; other models are highly sensitive. In finance, models like the Black-Scholes equation assume no transaction costs and continuous trading. In the real world, these assumptions break down, and so does the model's accuracy. Our assumptions also guide the interpretation of the model results. Therefore, the economist (ideally) should be constantly asking: "What's this model assuming? And is that reasonable for the question I care about?"
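As a toy illustration of assumption sensitivity (my example, not from any particular textbook): under a constant-growth assumption, a half-point change in the assumed rate produces a very different long-run projection, even though the math is identical:

```python
def project(value: float, growth_rate: float, years: int) -> float:
    """Project a quantity forward under a constant-growth assumption."""
    return value * (1 + growth_rate) ** years

gdp = 1.0  # normalized starting GDP
for assumed_rate in (0.015, 0.020, 0.025):  # assumptions, not estimates
    print(f"rate={assumed_rate:.1%}: GDP after 50 years = {project(gdp, assumed_rate, 50):.2f}")
# The three assumed rates yield roughly 2.1x, 2.7x, and 3.4x after 50 years.
# Nothing in the model tells you which assumption is right; that judgment
# lives outside the math.
```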

I would argue that many economics students don't fully appreciate the rationale behind mathematical modeling in economics, or more broadly across disciplines. The very essence of economics is to treat the economy as a system governed by universal laws. We invoke modeling notions such as equilibrium to describe how markets stabilize over time in response to external perturbations. We construct analogies like "market forces" to help explain movements in some quantity of interest, typically price. We use differential equations to model the dynamics of quantities like inflation or interest rates over time. Many of these techniques are inspired by the physical sciences. In fact, many economists are essentially mathematicians who apply this math to questions about economics.

The problem I've recognized within economics departments is that students pay less attention to the process of modeling and focus more on the results of a particular economic model. It is easier to memorize the implications of a model than to interrogate the structure and assumptions governing it. I've simultaneously recognized an overemphasis on formalisms and equation solving within economics departments. What I mean is, students are evaluated by their ability to solve a particular set of equations, rather than by the model thinking required to solve economic problems. Both of these miss the point of what we're doing. The first set of students is primarily interested in the answers, while the latter set is treating economics as if it's mathematics.

Educators really need to stress why we even model to begin with. I've explained briefly what a model is, while only alluding to the many reasons why we might want to model. The "why" we model is core to scientific inquiry. Much of the "why" I'll be describing comes from "Why Model?" by Joshua Epstein and "Different Modeling Purposes" by Bruce Edmonds.

Epstein argues that modeling is an inherent part of human cognition. Whenever individuals make projections or imagine scenarios, such as the spread of an epidemic or the outcome of a war, they are effectively running mental models. These are implicit models with hidden assumptions, untested internal consistency, and unknown logical consequences. The distinction lies in making these models explicit, allowing for scrutiny, replication, and validation. Building explicit models involves clearly stating assumptions, which facilitates understanding their implications. Explicit models can be shared, tested against data, and refined. They enable sensitivity analysis, allowing researchers to explore how changes in parameters affect outcomes. This process is crucial for identifying uncertainties, robust regions, and critical thresholds, particularly in policy-making contexts. Epstein also describes the value of modeling beyond out-of-sample prediction. He lists 16 different uses:
  1. Explain (very distinct from predict)
  2. Guide data collection
  3. Illuminate core dynamics
  4. Suggest dynamical analogies
  5. Discover new questions
  6. Promote a scientific habit of mind
  7. Bound (bracket) outcomes to plausible ranges
  8. Illuminate core uncertainties
  9. Offer crisis options in near-real time
  10. Demonstrate tradeoffs / suggest efficiencies
  11. Challenge the robustness of prevailing theory through perturbations
  12. Expose prevailing wisdom as incompatible with available data
  13. Train practitioners
  14. Discipline the policy dialogue
  15. Educate the general public
  16. Reveal the apparently simple (complex) to be complex (simple)
Explanation is distinct from prediction. When scientists use the word "predict", they do not necessarily mean "forecast". A quote attributed to Niels Bohr says "Prediction is hard, especially about the future!" This highlights the distinction between prediction "within sample" vs prediction "out of sample", the latter being something more akin to a forecast: an extrapolation based on historical trends. To predict in a scientific sense means that, given a model of a system, some input (X) is expected to produce output (Y) within some margin of error. So for example, if I know the mass of an object and its acceleration, then I can predict what the force will be; it is determined by the formalization of the system. I can then conduct an experiment to verify whether my description corresponds to observations. In economics, if I know the demand and supply schedules for a product, I can determine the market price. Sometimes prediction is very hard given the complexity of a system. This is fine, because the model can still be explanatory. Epstein gives the example of plate tectonics being explanatory of earthquakes, despite not predicting the time and location of any specific earthquake.
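To make the supply-and-demand prediction concrete, here is a minimal sketch with hypothetical linear schedules; given the assumed parameters, the clearing price is fully determined, which is exactly what "prediction within a model" means:

```python
# Hypothetical linear schedules: Qd = a - b*P, Qs = c + d*P.
a, b = 100.0, 2.0   # demand intercept and slope (assumed)
c, d = 10.0, 1.0    # supply intercept and slope (assumed)

# Market clearing: a - b*P = c + d*P  =>  P* = (a - c) / (b + d)
p_star = (a - c) / (b + d)
q_star = a - b * p_star

print(f"predicted price: {p_star:.2f}, quantity: {q_star:.2f}")
# The "prediction" is conditional on the model: if the schedules are as
# assumed, the clearing price must be 30. It is a deduction, not a forecast.
```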

Epstein critiques the naïve inductivist view of science, which assumes that researchers first gather data and then build models to explain it. This view is common among both non-modelers and some modelers, especially in the social sciences, where it's often believed that one should "collect lots of data and run regressions." While data-driven research can be valuable, Epstein argues that this is not how science typically works. In many significant scientific breakthroughs, theory came before data and actually guided what data should be sought. Models are not just tools for explaining existing data, they are invaluable for shaping and guiding data collection. Without models, researchers might not know what data is most relevant or worth collecting in the first place. 

Models don't have to be precisely accurate to be incredibly useful. In fact, all the best models are technically "wrong"; they are simplifications or idealizations of reality. But this wrongness doesn't diminish their value. On the contrary, their simplicity and abstraction make them powerful tools for understanding the core dynamics of complex systems. Though models are idealizations and approximations, they nevertheless enable us to describe the broad qualitative behavior of a system, such as feedback loops, threshold effects, and tipping points. The real question isn't whether a model is idealized (all models are), but whether the model is a fertile idealization: does it generate insight, understanding, and foundational intuition? Hence George Box's famous dictum that "All models are wrong, but some are useful." When first introduced to modeling, it can seem absurd to deliberately simplify, ignoring potentially relevant details. The truth of the matter is that we simply have to ignore things; we do it all the time without realizing it. We simply cannot function in the world without ignoring most facts about the world. We have limited cognitive resources; we could not possibly consider the mountains of information bombarding our senses at any given moment. Formal modeling forces us to be very explicit about what we are considering, meaning it's clearly communicated what's not being considered, thus providing deeper intellectual engagement with the question at hand.

Paul Smaldino gives a thoughtful anecdote illustrating the precision of the formalism provided by modeling. Remember from earlier, verbal models suffer from ambiguity. Formalizing theories as mathematical models helps us be precise with our concepts. Smaldino recounts an experience from his undergraduate days when he and a friend, while waiting in a theater basement, constructed a whimsical LEGO figure they dubbed a "Cubist chicken." Both agreed on its identity until a third friend questioned its features. Upon attempting to explain, they realized each had a different interpretation of which parts represented the chicken's head, body, and tail. This divergence highlighted that their shared understanding was more assumed than actual. The parable serves as a metaphor for the pitfalls of relying solely on verbal models in science. While verbal descriptions can seem comprehensive, they often harbor ambiguities that lead to misinterpretations. In contrast, formal models, despite being simplifications, require explicit definitions and assumptions, fostering clearer communication and understanding among researchers. Smaldino emphasizes that the "stupidity" or simplicity of models is a strength, as it forces clarity and facilitates the testing of specific hypotheses. Models force us to make sure we are all talking about the same thing. A corollary to precision is tractability. Formal models are logical engines that transform assumptions into conclusions. Stating assumptions precisely allows us to know what necessarily follows from those assumptions, which helps us find potential gaps in our explanations.

Mainstream economics frequently gets criticized for its approach to modeling. This brings us to the distinction between as-is and as-if modeling. As-if models assume that agents behave "as if" they are optimizing, even if they don't do so in reality. For example, the classic rational actor model in economics assumes individuals maximize utility as if they were calculating costs and benefits precisely, even though real humans may not. Friedman famously defended this in "The Methodology of Positive Economics" (1953), arguing that the realism of assumptions is less important than the accuracy of predictions. Mainstream economics often uses as-if models for their analytical tractability and predictive usefulness. "As-is" models, on the other hand, try to represent how agents actually behave based on empirical observation, often including heuristics, bounded rationality, learning, or psychological factors. These models are often less mathematically neat but aim to be more descriptive of real-world behavior. An example might be an agent-based model that computationally encodes more complex decision rules an agent might adaptively utilize within an evolving system. Economists like J. Doyne Farmer and others in the complexity economics tradition (at the Santa Fe Institute, inspired by Herbert Simon) are strong proponents of as-is modeling, especially using agent-based models and data-driven simulations. They reject the representative agent, emphasizing heterogeneity and micro-level interaction. They emphasize the emergence of macro-level phenomena from simple decentralized rules. They place a heavy emphasis on empirical validation; models must fit and explain data, not simply derive from axioms. From the mainstream economist's perspective, the goal is to predict behavior, so highly unrealistic idealizations are fine if they aid in prediction. They deemphasize descriptive realism.
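A minimal sketch of the contrast, with hypothetical options and rules (not any published model): an as-if agent exhaustively picks the utility-maximizing affordable option, while an as-is agent satisfices with a rule of thumb. Here they reach the same choice by different processes, which is partly why Friedman's defense has bite:

```python
# Choice set: (price, utility) pairs for hypothetical goods.
options = [(4, 7.0), (9, 12.0), (6, 10.0), (3, 5.0)]
budget = 8

def as_if_choice(options, budget):
    """Optimizer: exhaustively pick the highest-utility affordable option."""
    affordable = [o for o in options if o[0] <= budget]
    return max(affordable, key=lambda o: o[1])

def as_is_choice(options, budget, aspiration=9.0):
    """Satisficer: take the first affordable option meeting an aspiration level."""
    for price, utility in options:
        if price <= budget and utility >= aspiration:
            return (price, utility)
    return as_if_choice(options, budget)  # fall back if nothing satisfices

print("as-if:", as_if_choice(options, budget))  # (6, 10.0)
print("as-is:", as_is_choice(options, budget))  # also (6, 10.0), via a different process
```

Whether the two processes diverge depends on the environment; as-is modelers argue that in rough, changing environments they often do.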

Systems

When describing "the economy", we are describing an economic system. This was somewhat alluded to in the definition of an economy. We are interested in studying aspects of a particular system of interest. Sometimes we are interested in studying the "global economic system". Other times, we are interested in studying a proper subsystem of the global economic system, like the financial system. But what exactly is a "system"? How do economists think about systems? I'll first start with a broad definition of "system" from systems science, before modifying it to encapsulate how economists broadly think of "systems". Economists are usually not this explicit when conceiving of "systems"; in fact, in all my years of study I have rarely found economists formally defining "system". Mainstream economists certainly do not formalize the notion. Heterodox approaches like complexity economics tend to be more descriptive. The following formalization is consistent with heterodox approaches taken by someone like Brian Arthur. But we will see that, broadly, economists are concerned with these elements.

Mobus and Kalton describe a system as a cohesive, organized whole composed of interrelated components that work together to produce behaviors or functions not reducible to those of the individual parts. A system exists within a boundary, interacts with its environment, and processes inputs and outputs in structured ways. It exhibits emergence, meaning the behavior or purpose of the whole cannot be understood merely by analyzing its parts in isolation. Systems can be open or closed, adaptive or rigid, and operate across multiple scales and timeframes. They maintain internal organization through feedback, control, and governance mechanisms, and are often hierarchically embedded in larger systems.

Formally, a system S_{i,l}, with identity i at hierarchical level l, is defined as the 9-tuple:

S_{i,l} = (C_{i,l}, N_{i,l}, Src_{i,l}, Snk_{i,l}, G_{i,l}, B_{i,l}, K_{i,l}, H_{i,l}, Δt_{i,l})

Each component of this tuple describes a key aspect of what constitutes a system in precise, structural terms.

1. C_{i,l}: Components

This is the set of parts or elements that make up the system. Each component may itself be a system (a subsystem), reflecting the hierarchical nature of systems in general. Components are defined by their roles and characteristics within the system, and their inclusion in the system may not always be binary—it is sometimes useful to model their membership as “fuzzy,” meaning that components may participate in the system to varying degrees or under specific conditions. For instance, in an ecosystem, a migratory species might only be part of the system seasonally, making its membership conditional.

The components of an economy include a vast array of agents and institutions—households, firms, banks, government agencies, and markets. Each of these components has a defined role: households provide labor and consume goods, firms produce goods and services, banks manage financial intermediation, and governments set fiscal and monetary policy. These entities interact within and across sectors, forming the basic building blocks of the system. Some components, like multinational corporations or informal economies, may participate across boundaries, making them partially included or fuzzy in membership.

2. N_{i,l}: Network of Relations

This element represents the structural or functional connections between components. It defines how parts of the system influence, support, or depend on each other. These relationships can be bidirectional or unidirectional, physical (like wires connecting electrical components) or abstract (like authority or influence in a social organization). The network structure determines the topology of the system and underlies many dynamic behaviors, such as cascades, feedback loops, and resilience to disruption.

In an economy, components are linked through a dense network of financial, legal, and social relationships. Firms are connected to consumers through market exchanges, to suppliers via supply chains, and to banks through credit and investment. Governments interact with all other components through taxation, regulation, and public spending. These relationships define flows of goods, services, money, labor, and influence. The structure of these connections—centralized or distributed, robust or fragile—has a strong impact on economic performance and resilience to shocks.

3. Src_{i,l}: Sources

Sources refer to the inputs that enter the system from its environment. These might include material resources, energy, or information—anything required for the system to function and persist over time. In a manufacturing system, for example, raw materials are sources. In a cognitive system, sensory stimuli are inputs. The nature and availability of these sources can have a profound impact on how the system behaves or whether it can sustain itself.

The sources of an economy are the external inputs that support its functioning. These include natural resources (like oil, minerals, or water), imported goods and services, foreign investments, immigrant labor, and technological innovation originating abroad. These sources flow into the system and are transformed, consumed, or circulated internally. For a closed economy, these sources would be limited, but in reality, most modern economies are highly open and reliant on continuous input from the global system.

4. Snk_{i,l}: Sinks

Complementary to sources, sinks are the outputs of the system—where its products, wastes, or by-products go. Sinks represent how the system interacts with and impacts its external environment. In an ecological system, this could be the dispersal of nutrients or the release of waste products. In an economic system, sinks might be the markets that receive goods or the environment that absorbs pollution. A system’s outputs can affect not only its own stability but also the systems it interfaces with.

Sinks are where the economy’s outputs go, including exports, emissions, and waste. For example, manufactured goods might be sold to international markets, information products consumed globally, or pollutants expelled into the environment. These outputs affect external systems—ecological, social, and economic. Negative sinks, like environmental degradation, can feed back as costs to the system, making them crucial to sustainability and long-term modeling.

5. G_{i,l}: Flow Graph

The flow graph represents the directed movement of resources—whether energy, matter, or information—between components within the system. It formalizes the system's internal dynamics and enables modeling of how internal processes operate, such as transport, transformation, or communication. Flow graphs are typically weighted and directional, capturing both the pathways and intensities of flows, and are critical for analyzing systemic phenomena like bottlenecks, delays, and accumulation.

The flow graph of an economy maps how resources circulate between components—money flows from consumers to producers, taxes from businesses to government, subsidies from government to agriculture, and so on. It includes supply chains, labor markets, investment flows, and trade routes. This graph allows us to trace bottlenecks, feedback loops (like inflationary spirals), and cyclical behaviors (like recessions). Monetary policy and interest rates, for instance, are interventions into specific flow patterns meant to influence broader systemic outcomes.
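To make the flow-graph idea concrete, here is a toy circular-flow sketch in Python. The magnitudes are entirely hypothetical; the point is only that a weighted directed graph lets you compute things like net positions mechanically:

```python
# Weighted directed flow graph: (from, to) -> annual flow (hypothetical units).
flows = {
    ("households", "firms"): 70,       # consumption spending
    ("firms", "households"): 60,       # wages
    ("households", "government"): 15,  # taxes
    ("firms", "government"): 10,       # taxes
    ("government", "households"): 12,  # transfers
    ("government", "firms"): 8,        # subsidies and procurement
}

def net_flow(node: str) -> float:
    """Inflows minus outflows for one component."""
    inflow = sum(v for (src, dst), v in flows.items() if dst == node)
    outflow = sum(v for (src, dst), v in flows.items() if src == node)
    return inflow - outflow

for node in ("households", "firms", "government"):
    print(node, net_flow(node))
# The net positions sum to zero by construction; imbalances show up as
# saving, borrowing, or deficits, the kind of identity macroeconomists track.
```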

6. B_{i,l}: Boundary

The boundary distinguishes what is part of the system from what is not. It may be physical (like the hull of a ship), functional (such as firewall rules in a network), or even conceptual (the defined scope of a scientific model). Boundaries determine the scope of analysis and define where inputs enter and outputs exit. They are essential for understanding how a system maintains its integrity, interfaces with its environment, and evolves over time. In some cases, boundaries may shift, blur, or be contested, especially in social or conceptual systems.

An economic system’s boundary defines what is considered part of the national or regional economy and what lies outside it. For example, the boundary of the U.S. economy would include all production and consumption activities within its jurisdiction, but also interactions with foreign economies through trade and capital flows. The boundary is often fuzzy—offshore accounts, black markets, or informal economies may blur the lines of inclusion. How we define the boundary impacts the scope of data collection (GDP, for instance) and policy-making.

7. K_{i,l}: Knowledge

Knowledge is the information stored, encoded, or maintained within the system. This might be in the form of data, memory, genetic information, operating rules, or even learned behaviors. Knowledge enables the system to regulate itself, adapt, and evolve in response to internal or external conditions. In biological systems, this could be DNA; in human organizations, it could be culture or institutional memory. Knowledge may also define how the system models itself or anticipates its environment, enabling more sophisticated forms of control.

Knowledge in an economy includes institutional memory, laws, technologies, business practices, education, and even cultural norms. It resides in human capital, embedded in institutions, and codified in technologies and systems of production. This internal knowledge base enables innovation, guides decision-making, and supports coordination across vast distances and organizational layers. A highly developed economy typically has dense knowledge structures that promote adaptability and efficiency.

8. H_{i,l}: Governance

Governance encompasses the mechanisms, rules, and feedback processes that guide the system’s behavior and maintain its stability. This may include control systems, management structures, or algorithms. It can be centralized or distributed, rigid or adaptive. Governance ensures that the system responds to changes, corrects errors, and aligns its operations with desired outcomes. In ecosystems, governance might take the form of natural feedback loops; in engineered systems, it might be a set of protocols or a software routine managing operations.

Economic governance is exercised through central banks, treasuries, regulatory bodies, legal frameworks, and international institutions like the IMF or WTO. It includes fiscal policy (spending and taxation), monetary policy (interest rates, money supply), and regulatory actions (banking laws, labor protections). These governance mechanisms manage inflation, employment, growth, and inequality. Effective governance keeps the system stable, resilient to shocks, and aligned with societal goals. Poor governance can lead to systemic crises.

9. Δt_{i,l}: Time Interval

Finally, the time interval refers to the period over which the system is observed, modeled, or understood. It provides temporal context, distinguishing between fast, transient processes and long-term, evolutionary changes. Some systems operate over milliseconds (e.g., electronic circuits), others over centuries (e.g., climate systems). Time determines not only how the system behaves but also how we interpret causality, feedback, and system lifecycle stages such as growth, decay, and renewal.

The time interval for studying an economy could vary dramatically depending on the question—short-term intervals might focus on quarterly business cycles, while long-term intervals could examine industrial development, technological evolution, or demographic shifts over decades. Time plays a critical role in understanding lag effects, feedback delays, and compounding dynamics like debt accumulation or climate-related economic changes. Economies are dynamic systems that evolve, adapt, and sometimes collapse across different time horizons.
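Before moving on, it may help to see the 9-tuple transcribed directly into code. This is only a data-structure sketch of the definition above; the field names and the toy instantiation are mine, not Mobus and Kalton's:

```python
from dataclasses import dataclass, field

@dataclass
class System:
    """A system S_{i,l}: identity i at hierarchical level l, as a 9-tuple."""
    identity: str                                   # i
    level: int                                      # l
    components: set = field(default_factory=set)    # C: parts, possibly subsystems
    relations: set = field(default_factory=set)     # N: (source, target) links
    sources: set = field(default_factory=set)       # Src: inputs from the environment
    sinks: set = field(default_factory=set)         # Snk: outputs to the environment
    flows: dict = field(default_factory=dict)       # G: (from, to) -> flow magnitude
    boundary: set = field(default_factory=set)      # B: interface points
    knowledge: dict = field(default_factory=dict)   # K: stored information and rules
    governance: dict = field(default_factory=dict)  # H: control mechanisms
    time_interval: tuple = (0, 1)                   # Δt: observation window

# A toy instantiation for a national economy:
us_economy = System(
    identity="US economy", level=0,
    components={"households", "firms", "banks", "government"},
    sources={"imported oil", "foreign capital"},
    sinks={"exports", "emissions"},
)
```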

Given all of this, it's clear that economists are concerned with "economic systems". They are certainly concerned with governance, knowledge structures, network relations, and so on. However, they do not use formalisms like the one above from systems science. When referring to "systems", they use designations like "capitalist system" or "mixed system", highlighting aspects of these systems that are present or absent. In mainstream economics, the concept of a "system" is only partially formalized; it's often used metaphorically or descriptively (and often ideologically laden), but elements of it are embedded in formal models without being explicitly labeled as "systems."

For example, dynamic stochastic general equilibrium (DSGE) models, used in macroeconomics, formalize how an economy responds over time to shocks. They're system-like in structure, with state variables, feedback, and evolution over time. In systems science, equilibrium is a state of balance or homeostasis. In economics, equilibrium is central: markets "clear" when supply equals demand, and economists typically assume some homeostasis absent external perturbations. This is a central analogy economics borrows from physical systems. Systems are defined by what's inside (endogenous) and what's outside (exogenous inputs); more on this later, because these concepts are absolutely critical for understanding the practice of economics. Economic models clearly separate endogenous variables (explained within the model) from exogenous ones (shocks, policy, technology); this mirrors the "system boundary" concept.

In many applications, like game theory or industrial organization (IO) analysis, a "system" means a set of interrelated equations that represent how different variables (like prices, consumption, wages) influence each other. This is a structural or computational sense of a system, closer to engineering or control theory. Economic agents are modeled like controllers optimizing an objective (utility, profit) subject to constraints (budget, production function), very similar to optimal control or operations research. Central banks have access to the system's control variables, the levers used to influence the system's behavior. Monetary policy is modeled with feedback rules (like the Taylor rule), which is essentially a simple feedback controller in the spirit of a PID controller. DSGE models are analyzed for stability around a steady state, and impulse response functions are used to study how these systems react to shocks (impulses). Every agent has an objective function they seek to optimize and follows simple rules for optimizing that function over time. When I was in graduate school, I learned quite a lot about state space models.

The difference from a systems science approach is that economists typically do not consider emergence, adaptation, agent heterogeneity, network interactions, path dependence, tipping points, learning, or evolution. Mainstream economics primarily uses a rational agent framework (more on this later), which assumes away the possibility of these phenomena.
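Since the Taylor rule came up, here is a minimal sketch of its standard textbook form, so you can see the feedback-controller structure directly. The 0.5 coefficients are the conventional textbook values; the inputs below are made up:

```python
def taylor_rule(inflation: float, target_inflation: float,
                output_gap: float, neutral_real_rate: float = 2.0) -> float:
    """Textbook Taylor rule: nominal policy rate as feedback on two gaps.

    i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)
    Structurally this is a proportional controller on the inflation gap
    and the output gap.
    """
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# Inflation at 4% against a 2% target, output 1% above potential:
print(taylor_rule(inflation=4.0, target_inflation=2.0, output_gap=1.0))  # 7.5
```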

When economists speak more loosely about "systems", they are using the term in a broad metaphorical sense rather than a rigorous mathematical sense. It is essentially shorthand for institutional arrangements, decision-making structures, and norms of organization: who owns resources, how decisions are made, and the role of the government. I personally think we can map the systems science formalism onto these terms. For example, a "capitalist system" would be an instance of a broader class of related systems that can be described using the 9-tuple above. I really wish the discipline went this way, because there is so much confusion around these ideologically loaded terms; I can't imagine how much human effort is wasted on people speaking past each other because they have no shared conceptual foundations. "Systems" in this loose sense refers to broad institutional configurations, typically involving the following:
  • Ownership of the means of production
  • Coordination mechanisms (think prices)
  • Incentive Structures (think the profit motive)
  • Role of the state (think of "laissez faire")
  • Allocation of Resources (budgeting, planning, markets)
  • Legal and Institutional Framework (property rights, taxation etc)
Samuelson uses "capitalist" and "market economy" pretty much interchangeably, emphasizing property rights, private enterprise, and markets. Other economists emphasize "price signals" as what determines the allocation of resources, based on the decentralized choices of individual agents. Honestly, given these loose definitions, almost anything can be called capitalist. I'm not going down that rabbit hole, but I think it depends on which aspects of the system we are primarily focused on.

One last concept I think is important; system decomposition. I'll quote Smaldino from Modeling Social Behavior: 
"What are the parts of the system we are interested in? What are their properties? What are the relationships between the parts and their properties? How do those properties and relationships change? Decomposition consists of usable answers to these questions"
This is relevant to the modeling section above, because when you want to hypothesize something about your system, you first must articulate the parts of that system. Economists are primarily concerned with causal hypotheses, meaning the level of description must be sufficient to capture the parts of the system relevant to the question. There is no single right level of decomposition; it fundamentally depends on the question, and the value of the model depends on how well its decomposition answers the question. And remember, assumptions are behind all of this.

Optimization


Rationality

Economists describe "agents" as "rational". I've made use of the term "agent" in earlier sections but haven't explicitly defined how economists use it. The "agent" is the fundamental unit of analysis shared by most economists. This is the basic building block or elementary unit in many economic models: the lowest level at which behavior is assumed or described. In disciplines like physics, the fundamental unit might be a particle. In sociology, it might be a group or institution. It's important to note that some self-described economists take the institution to be fundamental. The fundamental unit doesn't have to be the "smallest" per se; it's typically just the starting point for a specific line of inquiry. In economics, it is an "agent": an individual decision-making entity. We build models by specifying how these agents behave.

"Agents" in economics have ordered and complete preferences, respond to incentives, make choices given constraints, and are modeled to optimized an objective like utility (more on that in a bit) or profit. A "consumer" is an agent that maximizes utility from consumption of finished products, goods, services etc. A firm is an "agent" that maximizes profit or minimizes costs. A government is an agent that maximizes "welfare" (more on that later). Macroeconomists make use of the "representative agent", encapsulating the average behavior of a collection of homogenous agents. We often assume that agents are rational (the point of this section), forward looking, self interested, autonomous and atomistic. You may think you know what these terms refer to, but economists tend to use them in very specific ways. For example, self interest need not mean "selfishness". Atomistic is pretty much an assumption about how behaviors are influenced by peer groups. 

In behavioral economics, agents are conceptualized differently. For example, in the classical conceptualization, economists assume transitive preferences, meaning if I prefer A to B and B to C, then I prefer A to C. There is a well-orderedness that follows classical laws of logic. Behavioral economists, on the other hand, relax this assumption, frequently showing empirically that it doesn't hold in reality. They also use a model of bounded rationality, in which cognitive resources are fundamentally constrained, implying we engage in suboptimal behavior using simple decision rules like heuristics. These heuristics are influenced by framing, emotions, and social norms. They can be intertemporally inconsistent and context dependent. They are subject to cognitive biases that may have been evolutionarily inherited. Notice how this becomes much more complex to model. In agent-based modeling, the "atomistic" and "homogeneous" assumptions are relaxed. Agents are conceived as heterogeneous and adaptive. They don't "optimize" anything; rather, they follow simple rules of thumb and use learning algorithms (implicitly). Macroeconomic phenomena are the emergent patterns arising from this underlying heterogeneity. In other words, there is no "representative agent". In game theory, agents are fundamentally strategic. They form Nth-order beliefs about other people's beliefs, updating these beliefs as games (situations of strategic interaction) evolve. They project signals to competitors to influence their competitors' beliefs. They can be both cooperative and competitive depending on perceived payoffs. In social theory, agents are inherently inseparable from their social structures. Decisions aren't made based on constrained optimization; they are shaped by culture, norms, and institutions. Preferences aren't a "given"; they are not taken for granted.

Notice how assumptions about the fundamental unit of analysis literally determine the conclusions we draw about "the economy". The choice of how to model an agent determines what behavior is explained, what can be predicted, and what policies are recommended. Mainstream economists use the classical conceptualization of an agent, reflecting methodological individualism. There is nothing inherently wrong with this approach, insofar as it is a useful model of agency. It is becoming increasingly augmented and challenged by more pluralistic conceptions of agency.

The foundational view of rationality in economics is the idea that agents make decisions that are consistent with their preferences and goals, given the information they have access to and their constraints. This is a form of "instrumental rationality", a practical rationality that describes how people choose efficient means to achieve their goals. Economists assume preferences are consistent. This means they are complete, transitive, independent, and monotone. Completeness refers to the idea that, given a complete set of choices, it is possible for the agent to rank all the options according to some preference mapping (i.e. utility). Transitivity was mentioned earlier. Independence refers to the idea that preferences are formed in isolation and that a person's choice between two options is not influenced by the presence or absence of a third option. In economics, we call this "Independence of Irrelevant Alternatives", or IIA. This is a crucial assumption ensuring preferences are well defined and consistent; if it fails, we can't model agents as optimizers. The IIA assumption implies that choices can be predicted based solely on the properties of the options being considered, not on irrelevant circumstances or other options that might be available. Monotone preferences, or "non-satiation", means that "more is better". An agent will always choose a bundle of goods that contains more of every good, subject to their constraints. I cannot stress how foundational this assumption is. It assumes that consumers always prefer more of a good or service to less, and that there are no limits to their potential satisfaction from consuming more. Satisfaction might increase at a decreasing rate; this is called diminishing marginal utility, something I'll write about later. Monotone functions are functions that either always increase or always decrease over their domain. This is what we assume about consumer utility with respect to consuming a product. Alongside monotonicity, economists standardly assume preferences are convex. Convexity refers to a feature of a mathematical function; roughly, agents prefer balanced bundles to extremes. Why is this important in economics? Simply put, preferences must be convex because if not, it's pretty frickin hard to optimize.
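To see what the consistency axioms buy us, here is a minimal sketch that checks transitivity over a set of stated pairwise preferences. If the check fails, no utility function can represent the agent, and "optimization" stops being well defined:

```python
from itertools import permutations

def is_transitive(prefers: set) -> bool:
    """prefers is a set of (x, y) pairs meaning 'x is strictly preferred to y'."""
    items = {i for pair in prefers for i in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
            return False
    return True

consistent = {("A", "B"), ("B", "C"), ("A", "C")}
cyclic     = {("A", "B"), ("B", "C"), ("C", "A")}  # the pattern behavioral experiments find

print(is_transitive(consistent))  # True
print(is_transitive(cyclic))      # False: no utility function can rationalize this
```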

It's so easy to take this for granted when studying economics. It's also why many people struggle with studying economics; they're unfamiliar with the foundational assumptions about the fundamental unit of analysis. Every economic theory, recommendation, policy proposal, or analysis, in some form or another, builds off this. Many economists literally take it for granted; they do not recognize these as simplifying assumptions for the model, but see them as eternal truths about human behavior. One of my main contentions with economics is with economists who are incapable of critically analyzing these central assumptions. When I was in high school, I remember taking an economics class where, one day, the teaching was so dogmatic that in hindsight the best description of the situation was child abuse. For the record, the instructor was not an economist. He probably had no familiarity with mathematical modeling. But he was trying to indoctrinate us with the monotonicity assumption by insisting that it's an eternal truth of "homo economicus" that we have infinite wants, and that we would act on these infinite wants if it weren't for scarcity (constraints). This was not a discussion, this was not interactive; it was taught as a central dogma, literally like a Sunday School session. It's always been interesting to me how a modeling formalization has literally become ideology.

I have no problem with this rationality assumption in principle. It makes modeling choice mathematically tractable. It can offer some predictive power. It provides a normative benchmark, allowing us to make comparisons between different policies because it serves as a baseline for efficiency. However, I am a pluralist when it comes to economic methodology. I think this framework is a special instance of a broader set of possible descriptions of human behavior. It's unable to explain many real-world phenomena like bubbles, crises, and persistent unemployment. It is inconsistent with behavioral findings like loss aversion, framing effects, and time inconsistency. It's not capable of predicting how agents will respond to policy and price changes universally. It's really contrived; I prefer as-is modeling to as-if modeling. Assuming perfect information and infinite computation is kind of absurd and obviously not empirically valid. Rationality is useful in that it provides coherence and rigor to models, but it abstracts away human psychology, institutional factors, and computational limits (more on this later).

Nevertheless, like I mentioned, this is foundational in economics. It informs how the models are constructed and the subsequent policy recommendations. For example, the supply and demand model implies rent control is economically inefficient. Since the framework is deductive in nature, the implications of the model would recommend removing this policy. If A then B; A; therefore B. If rent control, then inefficiency; rent control, therefore inefficiency. These deductions literally fall out of the assumptions about our fundamental unit of analysis and how agents aggregate. In macroeconomics, a representative agent stands in as the single agent who can represent the economy. Heterogeneity is very difficult to model because aggregation is impossible (well, technically it's possible, but it's not possible to guarantee a single stable equilibrium). This agent is assumed to solve intertemporal optimization problems. They also have rational expectations about the future; this is the forward-looking assumption. Specifically, it means that individuals gather all the available information, including past trends and economic data. They form expectations based on this historical data. These are statistical expectations about key macroeconomic measures such as inflation and interest rates. Therefore, central banks set interest rates based on forward-looking inflation expectations, falling directly out of the DSGE model's rational expectations assumption.
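To make the rent-control deduction above concrete, here is a minimal sketch using the same kind of hypothetical linear schedules as before; with a ceiling below the clearing price, a shortage follows by construction:

```python
# Hypothetical linear schedules: Qd = a - b*P, Qs = c + d*P.
a, b, c, d = 100.0, 2.0, 10.0, 1.0
p_star = (a - c) / (b + d)          # unconstrained clearing price: 30

ceiling = 20.0                      # rent control set below p_star (assumed)
q_demanded = a - b * ceiling        # 60
q_supplied = c + d * ceiling        # 30
shortage = q_demanded - q_supplied  # 30

print(f"clearing price {p_star:.0f}, ceiling {ceiling:.0f}, shortage {shortage:.0f}")
# The "inefficiency" conclusion is a deduction from the assumed schedules
# and optimizing agents; change those assumptions and the conclusion can change.
```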

There are a variety of different conceptualizations of rationality, as I've touched on briefly above. Some of these highlight the importance of cognitive resources. I think this is absolutely crucial for understanding rationality. Dan Sperber and Deirdre Wilson present a concept of "relevance" in their book "Relevance: Communication and Cognition" which I think implies a model of rationality that is not only realistic, but consistent with definitions of rationality in computational fields, and therefore agreed upon across disciplines. To me, this is an important sign of a concept's usefulness: other disciplines converging on something similar. The authors introduce the cognitive principle of relevance: human cognition is geared towards the maximization of relevance. In other words, we tend to pay attention to information that yields the most cognitive effect (like new insights or changes in belief) for the least processing effort. This leads to the communicative principle of relevance: every act of communication carries with it the presumption of its own relevance. When someone says something, the listener expects that it is worth the mental effort to understand, i.e., that it will be relevant enough to justify the attention. Communication is not just about encoding and decoding messages (as traditional code models suggest), but about inferring intentions. Listeners use context and assumptions to figure out what the speaker meant, not just what they said. They define communication as a two-part process: ostension, in which a speaker signals they want to communicate, and inference, in which the listener interprets the signal, guided by the assumption that it will be relevant. Communication and thought are guided by the search for relevance, and this means achieving the most meaningful impact with the least amount of cognitive effort.

How is a theory of relevance, well, relevant to rationality? Their theory gives us a description of how cognitive faculties function, and rationality and decision making are obviously interconnected with that. Their theory implies something about rationality because it describes cognitive information processing, something fundamental to the concept of rationality. Remember, economists simply assume infinite processing power. How absurd is that? If you are in any way familiar with computational complexity, supercomputing, or scientific computing, you'll be very aware of the fact that some of the fastest computers in the world can still only provide approximations to even relatively simple computations. And yet, economists think it's safe to assume that the human brain is capable of handling some of the most computationally complex problems. The theory of relevance highlights the fact that communication is an inferential process, which depends on cognitive resources. We can't possibly attend to all information, or even determine which information is relevant to a decision problem, without simplifying assumptions. This implies a model of bounded rationality. Humans are not perfectly logical agents. Instead of maximizing truth or utility in a strict sense, we use heuristics (mental shortcuts) to make satisficing decisions, ones that are good enough given our cognitive limitations. According to Sperber and Wilson, rationality is driven by the search for relevance. A person is rational if they pursue thoughts, beliefs, and interpretations that provide high cognitive effects (like useful inferences or knowledge) with minimal effort; this can be completely independent of utility. Their model of rationality is inferential: we interpret others not just by decoding language, but by inferring their intentions in context, guided by relevance. So communication is rational when it makes those inferences easy and rewarding.

Tom Griffiths, a cognitive scientist at Princeton, has a computational model of rationality that is consistent with the rationality implied by Sperber and Wilson's communication model. Griffiths argues that to understand human rationality, we should think in terms of the optimal use of limited resources, including time, information, and cognitive capacity. His work blends Bayesian inference, machine learning, and resource-bounded computation to model how people make decisions and draw conclusions. Griffiths' theory of rationality sees humans as computationally rational agents: not perfectly logical, but using clever approximations, heuristics, and probabilistic reasoning to solve problems efficiently under real-world constraints.

His approach is very interesting in my opinion because elements of it still incorporate the "as-if" modeling approach taken by economists, while also incorporating insights from artificial intelligence, psychology, and computer science. For example, he assumes that at some level, humans reason as if they're doing Bayesian inference. At some general level, they handle uncertainty by weighing evidence and prior beliefs to update their understanding of the world. Our cognitive processes approximate Bayesian reasoning. Other researchers in neuroscience, like Karl Friston, take this approach also. He also takes the resource-rational approach, meaning that human thought and behavior should be understood as the best possible use of limited computational resources. So instead of optimizing, like in economics, people are bound by computational resources such that heuristics and approximations take the place of a potentially resource-intensive optimization. Think of it this way: economists assume that preferences are complete. This means that agents are capable of enumerating all possible choices and rank ordering them according to which choices maximize utility. They do this intertemporally, meaning they are aware of how the choice space will look N years in the future. They are also aware of the secondary effects of choosing action A over action B, meaning they have counterfactual knowledge of the decision space. For any given decision, they can compute the optimal solution. Kind of hilarious when you pose it this way. A more realistic approach is to model people as cognitively resource constrained, not just constrained by physical resources. Griffiths emphasizes computational-level analysis, following David Marr's idea that we should ask: What is the goal of the computation? What is the optimal solution, and how close are humans to achieving it under real-world limits? His work connects to the idea that human cognition is adapted to the structure of the environment; we make the best decisions possible based on the patterns we've learned from experience (like machine learning models trained on data). This is known as ecological rationality.
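As a concrete instance of the "as-if Bayesian" claim, here is a minimal discrete Bayes update with made-up numbers; this is the kind of computation Griffiths argues cognition approximates under resource constraints:

```python
def bayes_update(prior: float, p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Posterior P(H | evidence) for a binary hypothesis, by Bayes' rule."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Prior belief that inflation will rise: 30%.
# A price report is twice as likely if inflation is in fact rising.
posterior = bayes_update(prior=0.30, p_evidence_given_h=0.8, p_evidence_given_not_h=0.4)
print(f"{posterior:.2f}")  # 0.46: belief shifts toward the hypothesis, but not to certainty
```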

Obviously, I'm not here to reconcile the differences between Griffiths' and Sperber/Wilson's theories. Strictly speaking there isn't anything to reconcile; the latter aren't advancing a model of rationality. But I think their communicative model implies many elements of Griffiths' approach to rationality. For Sperber/Wilson, agents seek to maximize relevance and minimize cognitive load during communication. From a rational agent perspective, agents must engage with other agents to acquire information relevant to their decision problem. They obviously do not exhaustively inquire with other agents over the space of possible information sources and levels of depth when engaging with the information. They are bound by cognitive resources and update their posterior beliefs accordingly. The mind is a relevance engine that evolved to make inferential leaps based on minimal effort; I think this is literally what a heuristic is. I suppose the main difference between the two is that, since communication is inherently contextual and social, interpretation is bound by expectations of relevance, which are socially generated. In other words, rationality itself is bound to the social. Griffiths' approach is still individual based.

Below are some more non-classical approaches to modeling rationality that have inspired this section:
  1. Herbert Simon – Bounded Rationality: Humans are not fully rational due to cognitive and informational limits. Instead, they are boundedly rational; they make satisficing (satisfy + suffice) decisions rather than optimizing. People choose the first acceptable solution, not necessarily the best one. Memory, attention, and time restrict decision-making. Behavior is shaped by the structure of the environment; people adapt, rather than optimize.
  2. Gigerenzer – Ecological & Heuristic Rationality: In many environments, simple heuristics can be more effective than complex reasoning. Rationality is adaptive, not absolute. Fast and frugal heuristics are quick, efficient mental shortcuts that exploit environmental structure. Ecological rationality means that what's rational depends on the match between the mind's heuristics and the structure of the environment.
  3. Amos Tversky & Daniel Kahneman – Heuristics and Biases: Humans rely on heuristics, which often lead to systematic biases: deviations from ideal rationality. There are many catalogued biases and heuristics, including the availability heuristic (judging likelihood by ease of recall) and the representativeness heuristic (judging by similarity rather than base rates). Prospect theory is also central: people evaluate gains/losses relative to a reference point and are loss-averse. This is where the idea of being "predictably irrational" comes from.
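
Simon's satisficing idea is easy to make concrete. Below is a toy sketch (the offer values, aspiration level, and search setup are invented for illustration) contrasting a full-search optimizer with a satisficer that stops at the first "good enough" option:

offers = [62, 48, 71, 55, 90, 66]   # option values revealed one at a time

# Classical rational agent: inspect everything, pick the maximum.
def optimize(options):
    return max(options), len(options)  # (value obtained, options inspected)

# Simon's satisficer: stop at the first option meeting the aspiration level.
def satisfice(options, aspiration):
    for inspected, value in enumerate(options, start=1):
        if value >= aspiration:
            return value, inspected
    return options[-1], len(options)  # nothing met the bar; take the last

print(optimize(offers))       # (90, 6): best value, full search cost
print(satisfice(offers, 70))  # (71, 3): good enough, half the search

The satisficer gives up some value but inspects half as many options; whether that trade is "rational" depends on the cost of further search, which is exactly Gigerenzer's point about ecological rationality.
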
Implicit within all of this is the concept of self-interest. I didn't designate a section specifically for this concept because it's normally covered by many definitions of rationality, although sometimes glossed over. Self-interest does not mean selfishness; let's just establish that. It primarily refers to a specific vantage point of an individual within a broader economic system. This really goes back to Adam Smith. In The Theory of Moral Sentiments, Smith argues that our moral sentiments (feelings of empathy, sympathy, approval, etc.) are stronger toward people who are closer to us, whether emotionally, socially, or physically. Smith talks about a hierarchy of concern, such that proximity determines how much we care: closest is ourselves, then family/friends/community, then countrymen/strangers, and finally humanity at large. Smith is providing an explanatory account of emotional distance, not an evaluative one. He is saying that it takes work to extend our sympathies beyond our immediate circles. Self-interest refers to the fact that individuals attend to things that are more immediately salient, more directly important to them, or within a more proximal frame of reference. Smith never characterized self-interest as dog-eat-dog behavior; on the contrary, he repudiated that reading.

Self-interest and altruism tend to operate simultaneously. Likewise, competition and cooperation tend to operate in conjunction. Many people confuse these concepts, juxtaposing them against one another as if they were mutually exclusive. However, economists understand them like this: for a rational agent pursuing self-preservation, it is very often necessary to engage in altruistic and cooperative behavior. We engage in this behavior not begrudgingly, but because it's part of a broader self-preservation goal that encompasses a wide range of ethical behaviors. But can we be perfectly altruistic, showing a high level of empathy for those outside our immediate proximity? The answer is probably no. We are oriented toward our own immediacy, and sometimes this comes at the expense of showing empathy toward the out-group. Since we can't empathize with all possible vantage points (we can't put ourselves in everyone's shoes), we tend to direct our cognitive effort toward our immediate groups. This is, in essence, how economists think of self-interest. I should qualify that last statement, actually: many economists who have actually read Adam Smith will think this. Other economists reduce self-interest to mere utility maximization. Smith's insight that our sympathy weakens with distance shows that he saw self-interest not as a cold, calculating force, but as entangled with the limits of our emotional imagination. The economist's self-interest is abstract and constant; Smith's self-interest is human, fallible, and bound up with emotion and moral perception.

Opportunity Cost

This is a fundamental concept used by economists to understand the allocation of some scarce resource. The resource can be time, energy, money, or something physical like land. Opportunity cost refers to a decision about the scarce resource and the implicit tradeoff you make relative to alternative uses. For example, suppose you have some resource X that can be put to three uses, ranked in descending order of value: (a, b, c). By selecting option (a), your opportunity cost is (b); it represents what you have to "give up" in order to devote those resources to option (a). The opportunity cost is the actual value (usually defined in terms of utility, something we will define later) someone must give up.

Let's concretize this with an example. Suppose you have a portfolio of financial assets, with a mixture of fixed income and equities. Imagine a scenario where you have $1,000 to spend on different mixtures, or combinations, of these assets. Call these mixtures M1, M2, and M3, and say M1 gives expected profits of $100, M2 $90, and M3 $85. If you choose M1, your opportunity cost is the $90 you forgo from M2, the next-best alternative.
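
In code the computation is trivial, but the framing is the point: the cost of a choice is the value of the best alternative you didn't take. A minimal sketch using the hypothetical figures above:

expected_profit = {"M1": 100, "M2": 90, "M3": 85}

# Value of the best alternative you give up by making `choice`.
def opportunity_cost(choice, values):
    alternatives = [v for k, v in values.items() if k != choice]
    return max(alternatives)

print(opportunity_cost("M1", expected_profit))  # 90: choosing M1 forgoes M2
print(opportunity_cost("M3", expected_profit))  # 100: choosing M3 forgoes M1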

This seems like a rather useless concept at first, but I cannot stress enough how essential it is within the economist's toolkit. It is fundamentally about how agents identify, rank, and select among alternative uses of some resource, which is at the core of economic behavior. Opportunity cost focuses on the single most valuable thing you had to give up when making your choice. Here is an example you might see in a textbook. Suppose a company owns a building. It can choose to rent it out for $50k a year or use it for its own operations. If the company decides to use it, it is forgoing $50k of rent it could have earned. So it will not use the building unless it expects to make more than $50k from the alternative use.

Consider another example. Imagine a group of citizens under an oppressive regime. They're frustrated, hungry, overtaxed, but they haven't revolted yet. Why might this be? Revolting involves opportunity costs. To participate in a revolution, people must give up their current income (even if it's low), relative safety, and the time and effort that could go into something else, like fleeing. So the opportunity cost of revolting is the stability and limited benefits of not revolting. Now suppose those current benefits diminish: the opportunity cost of revolting falls relative to the opportunity cost of staying put. So from an economist's perspective, when the opportunity cost of action drops below the opportunity cost of inaction, a revolution might spark.

I'm not here to argue about whether revolutions are made based on an economic calculus. All I am attempting to do is show how the concept is applied. Obviously, ideological factors strongly influence the nature of revolutions. Also, even within a business context, someone might not pick the better alternative (from a financial perspective) for reasons such as sentimentality, or emotional connection to one alternative. Opportunity costs are intimately connected with incentives, which we'll learn about next.

Incentives

These are broadly thought to be factors that influence the choices people make by altering the perceived costs and benefits of different actions. Economists use an extremely broad definition: incentives are anything that motivates or influences human behavior by altering the cost-benefit structure. When an incentive lowers the opportunity cost of an action, that action becomes more attractive. If it raises the opportunity cost, it's less likely to be chosen. Opportunity cost is what's given up when making a choice. Incentives are what shift the relative attractiveness of those choices by influencing opportunity costs.

From a systemic perspective, we normally refer to "incentive structures". This refers to the framework of rewards and penalties built into a system that shapes how people behave. It is the underlying setup that determines which actions are encouraged and discouraged, and what outcomes are rewarded or punished. The system itself can be formal, in the form of laws, contracts and policies. It can also be informal, in the form of cultural norms, social pressure, and status. 

For example, imagine a corporate bonus system. Managers might get bonuses for short-term profits. This might result in cost-cutting behavior like laying off workers or slashing R&D. So the stock price of that corporation rises in the short term, at the expense of long-term innovation and employee maturation. This would be characterized as a flaw in the incentive structure. Or consider a well-intentioned policy that subsidizes gasoline for the poor. The incentive structure might have the unintended consequence of increasing pollution, since people are incentivized to drive more. Also, if there is no long-term plan to phase out the program, driving can become so entrenched within the economic system that transitioning to renewables becomes near impossible in the future, because there are no incentives to adjust.

This might be one of the most fundamental concepts economists use to characterize human behavior. We are constantly asking whether a certain policy distorts economic incentives. There is even a subfield of economics called Mechanism Design, which studies how to construct rules or institutions that reverse-engineer incentives to get the outcomes you want. It studies how to create systems or rules (mechanisms) so that individuals, acting in their own self-interest, will still produce a desirable overall outcome. A traditional "economics as an observational science" approach is interested in studying how people behave within a given system, while an "economics as a field of engineering" approach uses mechanism design to build the system itself. Given the outcome we want, what rules should we write?
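
The canonical example is the second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, and this rule makes truthful bidding a dominant strategy, so self-interested behavior produces an efficient allocation. A minimal sketch (bidder names and values are invented):

# Second-price (Vickrey) auction: winner pays the second-highest bid.
def second_price_auction(bids):
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

true_values = {"alice": 120, "bob": 100, "carol": 80}

# Bidding truthfully: Alice wins and pays Bob's bid, earning 120 - 100 = 20.
print(second_price_auction(true_values))  # ('alice', 100)

# Shading her bid cannot help Alice: at a bid of 90 she simply loses an
# auction she valued at 120, and when she wins, her price is set by
# others' bids, not her own.
shaded = dict(true_values, alice=90)
print(second_price_auction(shaded))       # ('bob', 90)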

Directly related to mechanism design, and more broadly to incentive structures, is the field of Law & Economics. The field is fundamentally about designing legal rules and institutions that align individual incentives with socially desirable outcomes. For example, in tort law (accidents, negligence, and liability), our goal might be to reduce harm while not stifling productive activity. A mechanism to achieve this might be to select liability rules that create optimal care incentives for injurers and victims. This is highly relevant in situations where employers might be held accountable for employee injuries. The main idea is that we want to write rules such that people can act in their own self-interest without this resulting in a tragedy of the commons (more on that later). Property rights are also a fundamental concept in economics. I'll write more about that later, but the rules of ownership dramatically determine the resulting allocation and distribution of resources.

I hope this begins to show how these basic concepts have fundamentally shaped our modern institutions. Interestingly, I became aware of Law & Economics through its critics. I frequently read a legal theorist named Richard Wright, who specializes in tort law and is highly critical of what he sees as the degradation of legal theory due to the introduction of economic concepts into the discipline. In particular, he is critical of Richard A. Posner, who might be the most influential figure in Law & Economics. Essentially, Wright and others argue that Posner's view reduces law to cost-benefit analysis, ignoring justice, rights, and duties. Law is not just a tool for efficient outcomes; it's a normative system rooted in justice, fairness, and moral responsibility. I am not going to elaborate any further on this, but if you're interested, I definitely recommend reading about these two figures. You'll begin to see how deeply economic thinking has penetrated the most fundamental institutions governing our lives. The nature of the legal system directly impacts incentive structures and opportunity costs, and hence how the economic metabolism of society functions.

Utility

This might be one of the most underappreciated concepts in economics among non-practitioners. Utility is a core concept in decision theory, economics, and operations research, representing a way to model and quantify the preferences, satisfaction, or value that an agent (individual or organization) assigns to outcomes. It's used to guide rational choices under conditions of uncertainty, scarcity, or competing alternatives. Utility is a numerical representation of preferences: higher utility values represent more preferred outcomes. It's used to model rational choice under uncertainty (e.g., expected utility theory) and to describe consumer behavior, market demand, and welfare economics. A major principle in economics is that people try to maximize utility; agents choose options that yield the highest point on the utility curve. Another important concept is marginal utility: the additional utility derived from consuming one more unit of a good or service. This is typically understood by analyzing the derivative of the utility function, the function mapping input combinations (typically combinations of goods and services) to utility space. A utility function is typically written as:
u: X \to \mathbb{R}

where:

  • X \subseteq \mathbb{R}^n is the consumption set, the set of possible bundles (e.g., combinations of goods).

  • u(x) \in \mathbb{R} is the real-valued utility assigned to a bundle x = (x_1, x_2, \dots, x_n) \in X.

This function represents the preferences of a consumer over bundles of goods. Here are a few examples of utility functions used in economics:

  1. Cobb-Douglas Utility:

u(x_1, x_2) = x_1^\alpha x_2^\beta, \quad \text{where } \alpha, \beta > 0

  2. Perfect Substitutes:

u(x_1, x_2) = a x_1 + b x_2, \quad \text{where } a, b > 0

  3. Perfect Complements:

u(x_1, x_2) = \min\{a x_1, b x_2\}, \quad \text{where } a, b > 0

  4. Quasilinear Utility:

u(x_1, x_2) = \ln(x_1) + x_2

  5. CES (Constant Elasticity of Substitution) Utility:

u(x_1, x_2) = \left( a x_1^\rho + b x_2^\rho \right)^{1/\rho}, \quad \text{where } \rho \neq 0
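
If it helps to see these as code, here is a small sketch evaluating the forms above at arbitrary parameter values, with a finite-difference check that marginal utility diminishes in the Cobb-Douglas case:

import math

def cobb_douglas(x1, x2, a=0.5, b=0.5):
    return x1**a * x2**b

def perfect_substitutes(x1, x2, a=1.0, b=2.0):
    return a*x1 + b*x2

def perfect_complements(x1, x2, a=1.0, b=2.0):
    return min(a*x1, b*x2)

def quasilinear(x1, x2):
    return math.log(x1) + x2

def ces(x1, x2, a=1.0, b=1.0, rho=0.5):
    return (a*x1**rho + b*x2**rho)**(1/rho)

for u in (cobb_douglas, perfect_substitutes, perfect_complements, quasilinear, ces):
    print(u.__name__, round(u(2.0, 3.0), 3))

# Marginal utility of good 1 via a finite-difference approximation:
def marginal_utility(u, x1, x2, h=1e-6):
    return (u(x1 + h, x2) - u(x1, x2)) / h

for x1 in (1, 2, 4, 8):
    print(x1, round(marginal_utility(cobb_douglas, x1, 4), 4))
# Marginal utility falls as x1 grows: diminishing marginal utility.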

The famous "demand curve" in economics is indirectly derived from the utility function. Utility functions represent a consumer's preferences over bundles of goods. Since economics is about scarcity, this utility is constrained by the household budget (we do not have infinite resources to satisfy every desire); the global maximum of u(·) might lie outside the feasible set. So the consumer's goal is to maximize utility subject to the budget constraint, which is a constrained optimization problem. Solving these equations gives the optimal quantities as functions of prices and income, which correspond to the demand curve. Economists typically assume diminishing marginal utility, meaning marginal utility (the first derivative) decreases as consumption rises: the utility function increases at a decreasing rate, with marginal utility often falling toward zero. The demand curve falls out of the budget-constrained problem (p·x ≤ I). In plain English, the demand curve is (partly) downward sloping because of diminishing marginal utility.
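
To make the "demand falls out of the constrained optimization problem" step concrete, here is the standard textbook derivation for the Cobb-Douglas case (generic prices p_1, p_2 and income I):

\max_{x_1, x_2} \; x_1^\alpha x_2^\beta \quad \text{s.t.} \quad p_1 x_1 + p_2 x_2 = I

Setting up the Lagrangian \mathcal{L} = x_1^\alpha x_2^\beta + \lambda (I - p_1 x_1 - p_2 x_2) and taking first-order conditions gives \frac{\alpha x_2}{\beta x_1} = \frac{p_1}{p_2}, i.e., spending on each good is proportional to its exponent. Substituting into the budget constraint yields the demand functions:

x_1^*(p_1, I) = \frac{\alpha}{\alpha + \beta} \cdot \frac{I}{p_1}, \qquad x_2^*(p_2, I) = \frac{\beta}{\alpha + \beta} \cdot \frac{I}{p_2}

Each is downward sloping in its own price, holding income fixed; that is the demand curve.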

This might sound extremely abstract and removed from practice, but it's at the very core of how economists model the economy. Consider a real example. The Federal Reserve has a core set of monetary policy tools it can use to influence interest rates, inflation, employment, and economic stability:

1. The Federal Funds Rate (Main Tool): The interest rate banks charge each other for overnight loans of reserves. The Fed doesn't directly set this rate but targets it by adjusting the supply of reserves in the banking system via open market operations (OMO) or interest on reserves. This rate affects all other short-term interest rates — from savings accounts to business loans to mortgage rates. Changing it influences consumption, investment, and inflation.

2. Open Market Operations (OMO): Buying or selling government securities (like Treasury bills) in the open market.

- Buy securities → inject money → lower interest rates (stimulate economy)

- Sell securities → pull money out → raise interest rates (cool economy)

3. Interest on Reserve Balances (IORB): The interest rate the Fed pays banks on their reserves held at the Fed. Banks won’t lend at rates lower than what they can earn risk-free from the Fed. So this effectively sets a floor for the Fed Funds Rate.

4. Discount Rate (Lender of Last Resort): The interest rate the Fed charges commercial banks for short-term loans from the Fed's discount window. Mostly used in emergencies when banks can’t get funding elsewhere. It’s a backup liquidity tool, not a regular lever for setting monetary conditions.

5. Forward Guidance (Communication Tool): Public statements about future policy intentions, e.g., "Rates will remain low for an extended period." Expectations drive behavior. Clear guidance can influence long-term interest rates and financial market conditions.

Based on a model of consumer utility, the Fed adjusts these policy tools, influencing consumption choices by changing the tradeoffs households face in their optimization problem. In monetary theory, utility functions shape how people make choices about consumption today vs. in the future. The Euler equation formalizes that tradeoff.

The Core Euler Equation (in real terms):

u'(c_t) = \beta \, \mathbb{E}_t \left[ u'(c_{t+1}) \cdot (1 + r_t) \right]

Where:

  • u'(c_t): marginal utility of consumption today

  • \beta: discount factor

  • r_t: real interest rate (nominal rate minus expected inflation)

It says households will only give up consumption today if they're compensated with enough expected utility tomorrow, which depends on the real return (how much they can get by saving). The curvature of the utility function directly controls how much consumption will shift if rates change. Let's map the tools to the equation:

1. Federal Funds Rate (nominal rate i_t)

Appears in the Euler equation indirectly, via the real interest rate:

r_t \approx i_t - \mathbb{E}_t[\pi_{t+1}]

So:

  • If the Fed raises i_t → higher r_t → future consumption is more attractive → households reduce current consumption.

  • That’s how rate hikes cool demand.

2. Open Market Operations (OMO)

OMO are used to hit the Fed's interest rate target. So they influence i_t, and thus r_t, the same way. They're the operational tool to implement changes that show up in the Euler equation.

3. Interest on Reserve Balances (IORB)

IORB affects bank incentives to lend, which influence actual market interest rates (including i_t). If banks can earn more by parking money at the Fed, they tighten lending → market interest rates go up → higher r_t in the Euler equation.

4. Forward Guidance

Even if the Fed doesn't change i_t today, if it signals future rate hikes, that changes:

\mathbb{E}_t[(1 + r_{t+1})] \Rightarrow \text{affects today's consumption } c_t

So expectations of higher rates reduce current consumption — directly through the expectations operator in the Euler equation.

5. Quantitative Easing (QE)

QE works by reducing long-term interest rates, including r_t, especially when short-term rates are at zero.

Lower r_t → higher current consumption c_t → economic stimulus.

The form of u(c) determines how strongly households respond to changes in r_t. For example:

  • If u(c) = \ln(c), then:

    u'(c) = \frac{1}{c}, and people are fairly responsive to changes in interest rates.

  • If utility is more curved (higher \sigma in CRRA), they are less responsive.

So Fed economists calibrate utility functions in DSGE models to match observed behavior: how much households change consumption in response to interest rate changes. Households maximize utility subject to constraints (budget, cash-in-advance, etc.). Solving this yields Euler equations and money demand functions, which help predict how interest rates affect savings/consumption, how inflation expectations influence money holdings (which affects output), and how policy changes influence economic behavior.
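
As a toy illustration of that channel (my own sketch; all numbers invented, with log utility and perfect foresight assumed): with u(c) = \ln(c), the Euler equation rearranges to c_t = c_{t+1} / (\beta (1 + r_t)), so a higher real rate mechanically pulls current consumption down.

# Two-period, perfect-foresight Euler equation with log utility:
# u'(c) = 1/c, so 1/c_t = beta * (1 + r) / c_{t+1}.
def consumption_today(c_next, beta, r):
    return c_next / (beta * (1 + r))

beta, c_next = 0.96, 100.0
for r in (0.01, 0.03, 0.05):
    print(f"r = {r:.2f} -> c_t = {consumption_today(c_next, beta, r):.2f}")
# Higher real rates make future consumption relatively cheaper, so the
# household cuts consumption today: the channel a Fed rate hike works through.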

So as you can see, utility builds off of the more basic notions of incentives and opportunity costs. 

Supply/Demand Functions and Ceteris Paribus

I think we now have all the basics necessary to derive the supply and demand model. This is a cornerstone of economics.


Efficiency


Expectations


Information


Risk and Uncertainty


Nash Equilibrium


Elasticity


Time Preference and Discounting (Present Value, Discount Rate)






