Core Concepts in Economics: Fundamentals
Table of Contents
- Economy
- Models (Modeling)
- Systems
- Optimization
- Rationality
- Opportunity Cost
- Incentives
- Utility
- Supply/Demand Functions and Ceteris Paribus
- Assumptions
- Efficiency
- Risk and Uncertainty
- Nash Equilibrium
- Elasticity
- Time Preference and Discounting (Present Value, Discount Rate)
Economy
"a system of interactions among agents (households, firms, and governments) in which scarce resources are allocated through markets and institutions to produce, distribute, and consume goods and services."
"What all of these systems do internally is to use the energy and material imports in work processes that build and maintain essential internal structures. Waste products are inevitable in all such products in nature. The organization of the work processes and their ongoing management (which collectively we can call the governance of the system) is the pattern of which we spoke. Whether we are talking about the internal organization and dynamics of a living cell, of the multicellular organism of a Neolithic tribe, a single household, a complex organization, or a modern nation state, the patterns of organization and dynamics follow them. These patterns cluster under the title 'economy'."
"At an abstract level, an economy is a fabric of transactions, a network of sub-processes of resource acquisition , transformation (value added), production, and consumption, in which biological beings are sustained.... In the systems view, an economy is a way of managing the flows of high potential energy and of the transformation of high entropy materials into low entropy assets via work processes that use that high potential energy to do useful work. The low entropy assets support the existence and stability of the system."
Models (Modeling)
- Explain (very distinct from predict)
- Guide data collection
- Illuminate core dynamics
- Suggest dynamical analogies
- Discover new questions
- Promote a scientific habit of mind
- Bound (bracket) outcomes to plausible ranges
- Illuminate core uncertainties
- Offer crisis options in near-real time
- Demonstrate tradeoffs / suggest efficiencies
- Challenge the robustness of prevailing theory through perturbations
- Expose prevailing wisdom as incompatible with available data
- Train practitioners
- Discipline the policy dialogue
- Educate the general public
- Reveal the apparently simple (complex) to be complex (simple)
Systems
Formally, a system, indexed by its identity and its hierarchical level, can be defined as a 9-tuple of the following components:
Each component of this tuple describes a key aspect of what constitutes a system in precise, structural terms.
1. Components
This is the set of parts or elements that make up the system. Each component may itself be a system (a subsystem), reflecting the hierarchical nature of systems in general. Components are defined by their roles and characteristics within the system, and their inclusion in the system may not always be binary—it is sometimes useful to model their membership as “fuzzy,” meaning that components may participate in the system to varying degrees or under specific conditions. For instance, in an ecosystem, a migratory species might only be part of the system seasonally, making its membership conditional.
The components of an economy include a vast array of agents and institutions—households, firms, banks, government agencies, and markets. Each of these components has a defined role: households provide labor and consume goods, firms produce goods and services, banks manage financial intermediation, and governments set fiscal and monetary policy. These entities interact within and across sectors, forming the basic building blocks of the system. Some components, like multinational corporations or informal economies, may participate across boundaries, making them partially included or fuzzy in membership.
2. Network of Relations
This element represents the structural or functional connections between components. It defines how parts of the system influence, support, or depend on each other. These relationships can be bidirectional or unidirectional, physical (like wires connecting electrical components) or abstract (like authority or influence in a social organization). The network structure determines the topology of the system and underlies many dynamic behaviors, such as cascades, feedback loops, and resilience to disruption.
In an economy, components are linked through a dense network of financial, legal, and social relationships. Firms are connected to consumers through market exchanges, to suppliers via supply chains, and to banks through credit and investment. Governments interact with all other components through taxation, regulation, and public spending. These relationships define flows of goods, services, money, labor, and influence. The structure of these connections—centralized or distributed, robust or fragile—has a strong impact on economic performance and resilience to shocks.
3. Sources
Sources refer to the inputs that enter the system from its environment. These might include material resources, energy, or information—anything required for the system to function and persist over time. In a manufacturing system, for example, raw materials are sources. In a cognitive system, sensory stimuli are inputs. The nature and availability of these sources can have a profound impact on how the system behaves or whether it can sustain itself.
The sources of an economy are the external inputs that support its functioning. These include natural resources (like oil, minerals, or water), imported goods and services, foreign investments, immigrant labor, and technological innovation originating abroad. These sources flow into the system and are transformed, consumed, or circulated internally. For a closed economy, these sources would be limited, but in reality, most modern economies are highly open and reliant on continuous input from the global system.
4. Sinks
Complementary to sources, sinks are the outputs of the system—where its products, wastes, or by-products go. Sinks represent how the system interacts with and impacts its external environment. In an ecological system, this could be the dispersal of nutrients or the release of waste products. In an economic system, sinks might be the markets that receive goods or the environment that absorbs pollution. A system’s outputs can affect not only its own stability but also the systems it interfaces with.
Sinks are where the economy’s outputs go, including exports, emissions, and waste. For example, manufactured goods might be sold to international markets, information products consumed globally, or pollutants expelled into the environment. These outputs affect external systems—ecological, social, and economic. Negative sinks, like environmental degradation, can feed back as costs to the system, making them crucial to sustainability and long-term modeling.
5. Flow Graph
The flow graph represents the directed movement of resources—whether energy, matter, or information—between components within the system. It formalizes the system's internal dynamics and enables modeling of how internal processes operate, such as transport, transformation, or communication. Flow graphs are typically weighted and directional, capturing both the pathways and intensities of flows, and are critical for analyzing systemic phenomena like bottlenecks, delays, and accumulation.
The flow graph of an economy maps how resources circulate between components—money flows from consumers to producers, taxes from businesses to government, subsidies from government to agriculture, and so on. It includes supply chains, labor markets, investment flows, and trade routes. This graph allows us to trace bottlenecks, feedback loops (like inflationary spirals), and cyclical behaviors (like recessions). Monetary policy and interest rates, for instance, are interventions into specific flow patterns meant to influence broader systemic outcomes.
6. Boundary
The boundary distinguishes what is part of the system from what is not. It may be physical (like the hull of a ship), functional (such as firewall rules in a network), or even conceptual (the defined scope of a scientific model). Boundaries determine the scope of analysis and define where inputs enter and outputs exit. They are essential for understanding how a system maintains its integrity, interfaces with its environment, and evolves over time. In some cases, boundaries may shift, blur, or be contested, especially in social or conceptual systems.
An economic system’s boundary defines what is considered part of the national or regional economy and what lies outside it. For example, the boundary of the U.S. economy would include all production and consumption activities within its jurisdiction, but also interactions with foreign economies through trade and capital flows. The boundary is often fuzzy—offshore accounts, black markets, or informal economies may blur the lines of inclusion. How we define the boundary impacts the scope of data collection (GDP, for instance) and policy-making.
7. Knowledge
Knowledge is the information stored, encoded, or maintained within the system. This might be in the form of data, memory, genetic information, operating rules, or even learned behaviors. Knowledge enables the system to regulate itself, adapt, and evolve in response to internal or external conditions. In biological systems, this could be DNA; in human organizations, it could be culture or institutional memory. Knowledge may also define how the system models itself or anticipates its environment, enabling more sophisticated forms of control.
Knowledge in an economy includes institutional memory, laws, technologies, business practices, education, and even cultural norms. It resides in human capital, embedded in institutions, and codified in technologies and systems of production. This internal knowledge base enables innovation, guides decision-making, and supports coordination across vast distances and organizational layers. A highly developed economy typically has dense knowledge structures that promote adaptability and efficiency.
8. Governance
Governance encompasses the mechanisms, rules, and feedback processes that guide the system’s behavior and maintain its stability. This may include control systems, management structures, or algorithms. It can be centralized or distributed, rigid or adaptive. Governance ensures that the system responds to changes, corrects errors, and aligns its operations with desired outcomes. In ecosystems, governance might take the form of natural feedback loops; in engineered systems, it might be a set of protocols or a software routine managing operations.
Economic governance is exercised through central banks, treasuries, regulatory bodies, legal frameworks, and international institutions like the IMF or WTO. It includes fiscal policy (spending and taxation), monetary policy (interest rates, money supply), and regulatory actions (banking laws, labor protections). These governance mechanisms manage inflation, employment, growth, and inequality. Effective governance keeps the system stable, resilient to shocks, and aligned with societal goals. Poor governance can lead to systemic crises.
9. Time Interval
Finally, the time interval refers to the period over which the system is observed, modeled, or understood. It provides temporal context, distinguishing between fast, transient processes and long-term, evolutionary changes. Some systems operate over milliseconds (e.g., electronic circuits), others over centuries (e.g., climate systems). Time determines not only how the system behaves but also how we interpret causality, feedback, and system lifecycle stages such as growth, decay, and renewal.
The time interval for studying an economy could vary dramatically depending on the question—short-term intervals might focus on quarterly business cycles, while long-term intervals could examine industrial development, technological evolution, or demographic shifts over decades. Time plays a critical role in understanding lag effects, feedback delays, and compounding dynamics like debt accumulation or climate-related economic changes. Economies are dynamic systems that evolve, adapt, and sometimes collapse across different time horizons.
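Since this nine-part decomposition is essentially a data structure, it can help to see it written down as one. Below is a minimal sketch of my own (not from the original text) using a Python dataclass, with illustrative field names and a toy economy as the instance; nothing here is standard notation.

```python
from dataclasses import dataclass

@dataclass
class System:
    components: set       # agents and institutions (households, firms, banks, ...)
    relations: set        # network of relations between components
    sources: set          # inputs crossing the boundary (energy, imports, ...)
    sinks: set            # outputs crossing the boundary (exports, waste, ...)
    flows: dict           # flow graph: (origin, destination) -> what flows
    boundary: str         # what separates the system from its environment
    knowledge: set        # stored information (laws, technology, norms)
    governance: set       # feedback and control mechanisms
    time_interval: tuple  # observation window

economy = System(
    components={"households", "firms", "banks", "government"},
    relations={("households", "firms"), ("firms", "banks"), ("government", "firms")},
    sources={"imported energy", "foreign investment"},
    sinks={"exports", "emissions"},
    flows={("households", "firms"): "labor", ("firms", "households"): "wages"},
    boundary="national jurisdiction",
    knowledge={"property law", "production technology"},
    governance={"central bank", "fiscal policy"},
    time_interval=("2000", "2025"),
)
print(economy.governance)
```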
Beyond this structural decomposition, economic systems are commonly distinguished by features such as:
- Ownership of the means of production
- Coordination mechanisms (think prices)
- Incentive Structures (think the profit motive)
- Role of the state (think of "laissez faire")
- Allocation of Resources (budgeting, planning, markets)
- Legal and Institutional Framework (property rights, taxation, etc.)
"What are the parts of the system we are interested in? What are their properties? What are the relationships between the parts and their properties? How do those properties and relationships change? Decomposition consists of usable answers to these questions"
Optimization
- Identify objectives
- What is the agent (a person, a firm, a policymaker) trying to achieve?
- Make the goals explicit
- Clarify constraints
- What limits the available choices—budget, time, regulations, technology?
- Map feasible alternatives
- What are the real options available?
- Evaluate trade-offs
- How do changes in one choice affect the outcome or cost of another?
- Choose the best course of action
- What’s the most effective or efficient choice given the above?
- Evaluate the costs and benefits of alternatives.
Economists are not just thinking about these concepts devoid of practical application. Often they are hired by employers to engage in prescriptive modeling on behalf of the firm. They might construct a model of business operations within the firm to ask what should be done to achieve an optimal outcome (usually maximizing profits). Within academic economics, we use optimization primarily for explanatory and predictive purposes to describe features of whatever unit of analysis we are concerned with. But outside of academics, economists often work in a prescriptive decision-making capacity under conditions of uncertainty, risk, or resource limits, where they must identify an optimal course of action in situations involving strategic interaction.
Most economists should have this generic formulation internalized:
Let \( x \in \mathbb{R}^n \) be the vector of decision variables. A standard constrained optimization problem is formulated as:
\[ \begin{aligned} \text{Minimize (or Maximize)} \quad & f(x) \\\\ \text{subject to} \quad & g_i(x) \leq b_i, \quad i = 1, \dots, m \\\\ & h_j(x) = c_j, \quad j = 1, \dots, p \\\\ & x \in X \end{aligned} \]
Where:
- \( f(x) \) is the objective function to be minimized or maximized (e.g., cost, utility, profit).
- \( x \) is the vector of decision variables (e.g., quantities to produce, allocate, consume).
- \( g_i(x) \leq b_i \) are inequality constraints, representing resource limits or bounds.
- \( h_j(x) = c_j \) are equality constraints, such as balance conditions or conservation laws.
- \( X \) is the feasible set—the domain of allowable solutions (e.g., \( x \geq 0 \), or \( x \) must be integer-valued).
Whether you're modeling a consumer’s utility maximization, a firm’s cost minimization, or a government's resource allocation, almost all applied optimization problems in economics and decision science can be framed in this form.
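To make the template concrete, here is a small sketch using SciPy's SLSQP solver; the log utility objective, prices, and income below are illustrative assumptions of mine, not values from the text.

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([2.0, 5.0])   # prices (hypothetical values)
I = 100.0                  # income (hypothetical value)

def neg_utility(x):
    # maximize ln(x1) + ln(x2) by minimizing its negative
    return -(np.log(x[0]) + np.log(x[1]))

constraints = [{"type": "ineq", "fun": lambda x: I - p @ x}]  # budget: I - p.x >= 0
bounds = [(1e-6, None), (1e-6, None)]                         # feasible set X: x > 0

result = minimize(neg_utility, x0=[1.0, 1.0], bounds=bounds, constraints=constraints)
print(result.x)  # analytic optimum for this example is [I/(2*p1), I/(2*p2)] = [25, 10]
```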
Dynamic optimization is also very pervasive within economics. Dynamic optimization models capture decisions that unfold over time, where today’s choice affects tomorrow’s possibilities. These can be broken down into discrete-time models and continuous-time models; the major difference is whether time is treated as a sequence of discrete periods or as a continuous variable. Economists use dynamic optimization to model:
- How consumers smooth consumption across life (life-cycle hypothesis)
- How firms invest in capital over time (investment theory)
- How governments plan fiscal or monetary policy (dynamic programming)
- How agents form expectations and adjust behavior (rational expectations models)
- How does a shock today (e.g., a technology shock or policy change) propagate over time?
- What’s the optimal investment plan when capital is costly to adjust?
The key ingredients are:
- A state variable that summarizes the relevant "current condition" (e.g., wealth, capital stock)
- A control variable (or decision variable) that the agent chooses (e.g., how much to consume or invest)
- A transition equation that describes how today’s decision shapes tomorrow’s state
- An objective function that evaluates the total value of decisions over time
Dynamic optimization is MUCH harder to explain succinctly and I struggled with it (and still do) when in graduate school. I am not going to provide the generic mathematical setup, but I'll provide a very common problem macroeconomists should be familiar with. The Cake Eating Problem, a problem of intertemporal choice, asks: how much of X should I enjoy today and how much of X should I leave for the future (where X is cake)? This sounds really trivial at first until you realize it's an extremely general problem: a trade-off between current and future utility.
Imagine you have a fixed-sized cake, say, one whole cake. You can eat some of it now, and save the rest for later. But once you eat a piece, it’s gone. The challenge is: How should you allocate consumption of the cake over time to maximize your total satisfaction (utility)? Is it optimal to eat it all now? Should you eat a small piece a day? If you eat more today, you'll have less tomorrow, and therefore lower future utility. The goal is to find an optimal policy (the best sequence of decisions), a specific plan for how much cake to eat at each period that balances current and future satisfaction. Economists use this idea to study decisions where you have a limited resource and you have to decide how to use it over time; like money, food, energy, or natural resources. The goal is to make choices today that don't ruin your happiness tomorrow. So here is the formal model:
- Let \( W_t \) be the amount of cake (or wealth) at time \( t \).
- You choose how much cake to eat: \( c_t \), with \( 0 \leq c_t \leq W_t \).
The remaining cake becomes:
\[ W_{t+1} = W_t - c_t \]
- You derive utility from eating: \( u(c_t) \), where \( u(\cdot) \) is a utility function (e.g., \( u(c) = \ln(c) \)).
- You discount future utility with a factor \( \beta \in (0, 1) \), meaning you value today’s consumption more than tomorrow’s.
Your goal is to choose the sequence \( \{c_t\}_{t=0}^{\infty} \) to maximize lifetime utility:
\[ \max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^t u(c_t) \]
subject to:
\[ \begin{aligned} W_{t+1} &= W_t - c_t \\\\ W_0 &> 0, \quad c_t \in [0, W_t] \end{aligned} \]
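For readers who like to see the machinery, here is a rough numerical sketch (my own illustration, not part of the original text) that solves the cake-eating problem by value function iteration on a grid, assuming \( u(c) = \ln(c) \). With log utility the known closed-form policy is \( c_t = (1 - \beta) W_t \), which the grid solution should approximate.

```python
import numpy as np

beta = 0.95                          # discount factor
grid = np.linspace(1e-3, 1.0, 500)   # grid of possible cake sizes W
V = np.zeros_like(grid)              # initial guess for the value function
PENALTY = -1e10                      # stands in for -infinity on infeasible choices

for _ in range(2000):
    # C[i, j] = consumption if today's cake is grid[i] and we keep grid[j] for tomorrow
    C = grid[:, None] - grid[None, :]
    values = np.where(C > 0, np.log(np.maximum(C, 1e-12)) + beta * V[None, :], PENALTY)
    V_new = values.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy_idx = values.argmax(axis=1)
consumption = grid - grid[policy_idx]          # numerical policy c(W)
print(consumption[-1], (1 - beta) * grid[-1])  # grid estimate vs. analytic 0.05 for W = 1
```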
Rationality
- Herbert Simon – Bounded Rationality: Humans are not fully rational due to cognitive and informational limits. Instead, they are boundedly rational; they make satisficing (satisfy + suffice) decisions rather than optimizing. People choose the first acceptable solution, not necessarily the best one. Memory, attention, and time restrict decision-making. Behavior is shaped by the structure of the environment; people adapt, rather than optimize.
- Gigerenzer – Ecological & Heuristic Rationality: In many environments, simple heuristics can be more effective than complex reasoning. Rationality is adaptive, not absolute. Fast and frugal heuristics are quick, efficient mental shortcuts that exploit environmental structure. Ecological rationality means that what’s rational depends on the match between the mind’s heuristics and the structure of the environment.
- Amos Tversky & Daniel Kahneman – Heuristics and Biases: Humans rely on heuristics, which often lead to systematic biases, deviations from ideal rationality. There are many catalogued biases and heuristics, including the availability heuristic (judging likelihood by ease of recall) and the representativeness heuristic (judging similarity over base rates). Prospect theory is also central: it states that people evaluate gains/losses relative to a reference point and are loss-averse. This is where the idea of being "predictably irrational" comes from.
Opportunity Cost
Incentives
Utility
A utility function assigns a real number to each consumption bundle:
\[ u : X \to \mathbb{R} \]
where:
- \(X\) is the consumption set, the set of possible bundles (e.g., combinations of goods).
- \(u(x)\) is the real-valued utility assigned to a bundle \(x\).
This function represents the preferences of a consumer over bundles of goods. Here are a few examples of utility functions used in economics:
Cobb-Douglas Utility:
- \(u(x_1, x_2) = x_1^{\alpha} x_2^{1 - \alpha}\)
Perfect Substitutes:
- \(u(x_1, x_2) = a x_1 + b x_2\)
Perfect Complements:
- \(u(x_1, x_2) = \min\{a x_1, b x_2\}\)
Quasilinear Utility:
- \(u(x_1, x_2) = v(x_1) + x_2\)
CES (Constant Elasticity of Substitution) Utility:
- \(u(x_1, x_2) = \left( a x_1^{\rho} + b x_2^{\rho} \right)^{1/\rho}\)
The famous "demand curve" in economics is indirectly derived from the utility function. Utility functions represent a consumers preferences over a bundle of goods. Since economics is about scarcity, this utility is constrained by the household budget (we do not have infinite resources to satisfy every desire). The global maximum U(.) might be outside the feasible set. So the consumers goal is to maximize utility subject to the budget constraint (which is a constrained optimization problem). Solving these equations, gives the optimal quantities as functions of price and income, which correspond to the demand curve. Economists typically assume diminishing marginal utility, meaning the first derivative of the utility function is a decreasing function. The utility function increases at a decreasing rate, while the marginal utility function (first derivative) asymptotically approaches some constant. The demand curve falls out of the budget constraint problem (p*x < I). In plain English, the demand curve (partly) is downward sloping because of diminishing marginal utility.
This might sound extremely abstract and far removed from practice, but it's at the very core of how economists model the economy. Consider a real example. The Federal Reserve has a core set of monetary policy tools it can use to influence interest rates, inflation, employment, and economic stability:
1. The Federal Funds Rate (Main Tool): The interest rate banks charge each other for overnight loans of reserves. The Fed doesn't directly set this rate but targets it by adjusting the supply of reserves in the banking system via open market operations (OMO) or interest on reserves. This rate affects all other short-term interest rates — from savings accounts to business loans to mortgage rates. Changing it influences consumption, investment, and inflation.
2. Open Market Operations (OMO): Buying or selling government securities (like Treasury bills) in the open market.
- Buy securities → inject money → lower interest rates (stimulate economy)
- Sell securities → pull money out → raise interest rates (cool economy)
3. Interest on Reserve Balances (IORB): The interest rate the Fed pays banks on their reserves held at the Fed. Banks won’t lend at rates lower than what they can earn risk-free from the Fed. So this effectively sets a floor for the Fed Funds Rate.
4. Discount Rate (Lender of Last Resort): The interest rate the Fed charges commercial banks for short-term loans from the Fed's discount window. Mostly used in emergencies when banks can’t get funding elsewhere. It’s a backup liquidity tool, not a regular lever for setting monetary conditions.
5. Forward Guidance (Communication Tool): Public statements about future policy intentions, e.g., "Rates will remain low for an extended period." Expectations drive behavior. Clear guidance can influence long-term interest rates and financial market conditions.
The Core Euler Equation (in real terms):
\[ u'(c_t) = \beta (1 + r_t) \, E_t\left[ u'(c_{t+1}) \right] \]
Where:
- \(u'(c_t)\): marginal utility of consumption today
- \(\beta\): discount factor
- \(r_t\): real interest rate (nominal rate minus expected inflation)
It says households will only give up consumption today if they’re compensated by enough expected utility tomorrow, which depends on the real return \(r_t\) (how much they can get by saving). The curvature of the utility function directly controls how much consumption will shift if rates change. Let’s map the tools to the equation:
1. Federal Funds Rate (nominal rate \(i_t\))
Appears in the Euler equation indirectly, via the real interest rate:
\[ r_t = i_t - E_t[\pi_{t+1}] \]
So:
- If the Fed raises \(i_t\) → higher \(r_t\) → future consumption is more attractive → households reduce current consumption.
- That’s how rate hikes cool demand.
2. Open Market Operations (OMO)
OMO are used to hit the Fed’s interest rate target. So they influence \(i_t\), and thus \(r_t\), the same way. They’re the operational tool to implement changes that show up in the Euler equation.
3. Interest on Reserve Balances (IORB)
IORB affects bank incentives to lend, which influence the actual market interest rates (including \(r_t\)). If banks can earn more by parking money at the Fed, they tighten lending → market interest rates go up → higher \(r_t\) in the Euler equation.
4. Forward Guidance
Even if the Fed doesn’t change \(i_t\) today, signaling future rate hikes changes \(E_t[r_{t+k}]\), the expected path of real rates. So expectations of higher rates reduce current consumption, directly through the expectations operator \(E_t[\cdot]\) in the Euler equation.
5. Quantitative Easing (QE)
QE works by reducing long-term interest rates, and hence \(r_t\) at longer horizons, especially when short-term rates are at zero.
Lower \(r_t\) → higher current consumption → economic stimulus.
The form of \(u(\cdot)\) determines how strongly households respond to changes in \(r_t\). For example:
- If \(u(c) = \frac{c^{1-\sigma}}{1-\sigma}\) (CRRA utility), then the Euler equation becomes:
\[ c_t^{-\sigma} = \beta (1 + r_t) \, E_t\left[ c_{t+1}^{-\sigma} \right] \]
and consumption growth responds to the real rate with elasticity \(1/\sigma\).
- If utility is more curved (higher \(\sigma\) in CRRA), households are less responsive.
So Fed economists calibrate utility functions in DSGE models to match observed behavior, how much households change consumption in response to interest rate changes. Households maximize utility subject to constraints (budget, cash-in-advance, etc.). Solving this yields Euler equations and money demand functions, which help predict how interest rates affect savings/consumption, how inflation expectations influence money holdings (which affects output), and how policy changes influence economic behavior.
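As a minimal sketch of the interest-rate channel just described (with illustrative parameter values of my own, not taken from the text): the CRRA Euler equation implies \( c_{t+1}/c_t = \left[\beta (1 + r_t)\right]^{1/\sigma} \), so the response of consumption growth to the real rate is governed by \(1/\sigma\).

```python
import numpy as np

beta = 0.99                                  # discount factor (illustrative)
rates = np.array([0.01, 0.02, 0.03, 0.04])   # hypothetical real interest rates

for sigma in (1.0, 2.0, 5.0):                # increasing curvature of u(c)
    growth = (beta * (1 + rates)) ** (1 / sigma)   # implied c_{t+1}/c_t
    print(f"sigma={sigma}: consumption growth {np.round(growth, 4)}")
```

Higher \(\sigma\) (more curvature) flattens the response, which is the kind of behavior DSGE calibrations try to match.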
So as you can see, utility builds off of the more basic notions of incentives and opportunity costs.
Supply/Demand Functions and Ceteris Paribus
- How does a change in one variable affect others?
- What happens to employment if the minimum wage increases?
- What are the effects of policy interventions or external shocks?
- How does a tax affect consumer prices and producer revenue?
- What are the conditions for equilibrium?
- At what price do supply and demand balance?
- How do markets and agents react to changes in incentives or constraints?
- How do consumers respond to a change in interest rates?
In economic analysis, comparative statics and ceteris paribus are fundamental analytical and conceptual tools used to understand how changes in one variable affect others within an economic model. Comparative statics is the method economists use to compare two different equilibrium states resulting from a change in an exogenous variable (something determined outside the model, like government policy or consumer preferences). It compares before-and-after outcomes and does not focus on the process of adjustment or dynamics over time. For example, if the price of gasoline increases, comparative statics would look at how this affects the quantity of gasoline demanded, holding everything else constant. It compares the original equilibrium (before the price change) with the new equilibrium (after the price change). Ceteris paribus is a Latin phrase meaning “all other things being equal” or “holding everything else constant.” It is a simplifying assumption used to isolate the effect of a single variable. For example, when analyzing how the demand for apples changes with price, economists assume, ceteris paribus, that income, prices of other goods, and consumer preferences remain unchanged. Very often, comparative statics asks counterfactual questions that explore outcomes in alternative states of the world. Comparative statics uses counterfactual reasoning to compare equilibria before and after a change in an exogenous factor, holding everything else constant (ceteris paribus). This is implemented by analyzing the partial derivatives of the resulting model, which I'll show shortly. Ceteris paribus is powerful: it lets you isolate these effects clearly, and see whether your model produces intuitive or surprising behavior. Partial derivatives are the mathematical formalization of a comparative static analysis.
So now that we understand what we are trying to do, let's derive a supply curve, derive a demand curve, and analyze equilibrium by doing comparative statics. Remember, we are assuming economic agents are maximizers. Individual consumers will be seeking to maximize utility and firms will be seeking to maximize profit (minimize cost). Choice variables are at the heart of any optimization problem. They represent the things the decision maker can control or choose to adjust in order to achieve these goals. For a consumer, the choice variables are quantities of a good to buy. For a firm, these are the factors of production (labor or capital) or the output level. For a policy maker, it might be tax rates or subsidy levels. These are "variables" because the decision maker is free to choose the values, under some set of constraints, to optimize the objective.
Remember, an optimization problem has three main components:
- Objective function: what you're trying to maximize or minimize (e.g., profit, utility)
- Choice variables: the variables you control (e.g., \(x_1, x_2, \dots, x_n\))
- Constraints: limits or requirements (e.g., budget, production technology)
Example 1: Consumer Choice
\[ \max_{x_1, x_2} \ u(x_1, x_2) \quad \text{subject to } p_1 x_1 + p_2 x_2 \leq I \]
- Choice variables: \(x_1\), \(x_2\) — quantities of goods
- Objective: maximize utility
- Constraint: budget
Example 2: Firm Profit Maximization
\[ \max_{L, K} \ \pi = p f(L, K) - wL - rK \]
- Choice variables: \(L\) (labor), \(K\) (capital)
- Objective: maximize profit
- Constraints: often none in short-run, or include capacity or cost restrictions
Choice variables enter the objective function, so changing them changes the value the decision-maker cares about. They are adjusted subject to constraints; you can’t just pick anything, but you choose what’s best from what’s allowed. In mathematical optimization, they are what you're solving for. Without choice variables, there's no decision to make. The solution to an optimization problem is always a value (or set of values) for the choice variables that maximizes or minimizes the objective.
Let’s derive the market supply curve from the firm’s profit maximization problem using the Lagrangian method:
- Start with a single firm facing a cost constraint (cost minimization problem as dual)
- Derive the firm's supply function
- Aggregate to get the market supply curve
1. Setup: Firm’s Profit Maximization Problem
Let’s assume the firm has:
- A production function: \(f(x)\), where \(x\) is the input
- Input price: \(w\)
- Output price: \(p\)
The firm chooses input \(x\) to maximize profit:
\[ \max_{x} \ \pi(x) = p \cdot f(x) - w \cdot x \]
2. Cost Minimization Problem via Lagrangian
We now minimize cost subject to producing a fixed amount \(q\):
\[ \min_{x} \ w \cdot x \quad \text{subject to} \quad f(x) \geq q \]
This is suitable for Lagrangian optimization. Define the Lagrangian:
\[ \mathcal{L}(x, \lambda) = w x + \lambda (q - f(x)) \]
First-Order Conditions (FOCs):
- \(\frac{\partial \mathcal{L}}{\partial x} = w - \lambda f'(x) = 0 \Rightarrow \lambda = \frac{w}{f'(x)}\)
- \(\frac{\partial \mathcal{L}}{\partial \lambda} = q - f(x) = 0 \Rightarrow f(x) = q\)
So, the cost-minimizing input \(x^*(q)\) solves:
\[ f(x) = q \quad \Rightarrow \quad x = f^{-1}(q) \]
Then cost function:
\[ C(q) = w \cdot f^{-1}(q) \]
3. Derive Supply Function
Now switch back to the profit maximization side.
Profit:
\[ \pi(q) = p q - C(q) = p q - w \cdot f^{-1}(q) \]
Maximize with respect to \(q\):
\[ \frac{d\pi}{dq} = p - w \cdot \frac{d}{dq} f^{-1}(q) = 0 \]
But from inverse function rule:
\[ \frac{d}{dq} f^{-1}(q) = \frac{1}{f'(x)} \quad \text{where } x = f^{-1}(q) \]
So:
\[ p = \frac{w}{f'(x)} \Rightarrow p \cdot f'(x) = w \]
This matches the first-order condition from profit maximization: value of marginal product = input price.
Solve this equation for \(q\), via \(x\), to get the firm's supply function:
\[ q = f(x^*(p)) \quad \text{where } f'(x^*) = \frac{w}{p} \]
4. Example: Cobb-Douglas Production
Say: \(f(x) = x^\alpha\), \(0 < \alpha < 1\)
- \(f'(x) = \alpha x^{\alpha - 1}\)
- Set: \(p \cdot f'(x) = w \Rightarrow p \cdot \alpha x^{\alpha - 1} = w\)
- Solve for \(x\):
\[ x = \left( \frac{p \alpha}{w} \right)^{\frac{1}{1 - \alpha}} \]
So output: \(q = f(x) = \left( \frac{p \alpha}{w} \right)^{\frac{\alpha}{1 - \alpha}}\)
This is the individual firm’s supply function:
\[ q(p) = A p^{\frac{\alpha}{1 - \alpha}}, \quad A = \left( \frac{\alpha}{w} \right)^{\frac{\alpha}{1 - \alpha}} \]
5. Market Supply Curve
If all firms are identical (simplifying assumption by economists) and there are \(N\) of them:
\[ Q(p) = N \cdot q(p) = N A p^{\frac{\alpha}{1 - \alpha}} \]
This is the market supply curve. The inverse supply function expresses price as a function of quantity supplied rather than the other way around. This is useful because it tells us what price is required to induce a given level of output. It’s particularly helpful for market analysis, equilibrium modeling, and comparative statics.
Recall from earlier:
\[ q(p) = A \cdot p^{\frac{\alpha}{1 - \alpha}}, \quad \text{where } A = \left( \frac{\alpha}{w} \right)^{\frac{\alpha}{1 - \alpha}} \]
Let’s solve for \(p\) in terms of \(q\): Start with:
\[ q = A \cdot p^{\frac{\alpha}{1 - \alpha}} \]
Solve for \(p\):
- Divide both sides by \(A\):
\[ \frac{q}{A} = p^{\frac{\alpha}{1 - \alpha}} \]
- Take both sides to the power \(\frac{1 - \alpha}{\alpha}\):
\[ p = \left( \frac{q}{A} \right)^{\frac{1 - \alpha}{\alpha}} \]
Substitute back in the expression for \(A\):
\[ A = \left( \frac{\alpha}{w} \right)^{\frac{\alpha}{1 - \alpha}} \quad \Rightarrow \quad \frac{1}{A} = \left( \frac{w}{\alpha} \right)^{\frac{\alpha}{1 - \alpha}} \]
So:
\[ p(q) = \left( q \cdot \left( \frac{w}{\alpha} \right)^{\frac{\alpha}{1 - \alpha}} \right)^{\frac{1 - \alpha}{\alpha}} \]
Or more cleanly:
\[ p(q) = B \cdot q^{\frac{1 - \alpha}{\alpha}}, \quad \text{where } B = \left( \frac{w}{\alpha} \right) \]
(Note: the exponent is positive because \(0 < \alpha < 1\).)
How do we interpret this? As \(q\) increases, \(p\) must increase to induce more output. This reflects increasing marginal cost; typical of most realistic production settings. The elasticity of supply depends directly on the exponent \(\varepsilon\), which comes from technology (i.e., how easily output increases with input).
\[ p(q) = B \cdot q^{\varepsilon}, \quad \varepsilon = \frac{1 - \alpha}{\alpha} > 0 \]
If market supply is \(Q(p) = N \cdot q(p)\), then:
\[ Q(p) = N A p^{\frac{\alpha}{1 - \alpha}} \Rightarrow p(Q) = \left( \frac{Q}{N A} \right)^{\frac{1 - \alpha}{\alpha}} \]
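As a quick sanity check on this derivation (with illustrative parameter values of my own choosing, not from the text), the profit-maximizing input found by brute-force grid search should agree with the closed-form supply function derived above:

```python
import numpy as np

alpha, w, p = 0.5, 2.0, 4.0                    # illustrative technology and prices
x_grid = np.linspace(1e-6, 10.0, 100_000)      # candidate input levels
profit = p * x_grid**alpha - w * x_grid        # pi(x) = p*f(x) - w*x with f(x) = x**alpha

x_star_numeric = x_grid[np.argmax(profit)]                 # brute-force optimum
x_star_formula = (p * alpha / w) ** (1 / (1 - alpha))      # from p*f'(x) = w
q_formula = (p * alpha / w) ** (alpha / (1 - alpha))       # derived supply q(p)

print(x_star_numeric, x_star_formula)     # both approximately 1.0
print(x_star_formula**alpha, q_formula)   # output at the optimum matches q(p)
```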
Now economists can make statements about what a market of firms will do under various conditions, such as an unforeseen policy shock or perhaps a natural disaster.
Now let’s derive the market demand function from the consumer’s utility maximization problem.
1. Conceptual Setup: Consumer Choice
A consumer wants to choose a bundle of goods that gives the highest utility subject to a budget constraint.
Let:
- \(x_1, x_2\): quantities of two goods
- \(p_1, p_2\): prices of goods 1 and 2
- \(I\): income
- \(u(x_1, x_2)\): utility function
2. Consumer’s Problem (Utility Maximization)
\[ \max_{x_1, x_2} \ u(x_1, x_2) \quad \text{subject to } p_1 x_1 + p_2 x_2 \leq I \]
We’ll assume the constraint binds (no free money), so:
\[ p_1 x_1 + p_2 x_2 = I \]
We’ll use the Lagrangian:
\[ \mathcal{L}(x_1, x_2, \lambda) = u(x_1, x_2) + \lambda (I - p_1 x_1 - p_2 x_2) \]
3. Solve Using Cobb-Douglas Utility (Example)
Let’s use a classic Cobb-Douglas utility function:
\[ u(x_1, x_2) = x_1^{\alpha} x_2^{1 - \alpha}, \quad \text{where } 0 < \alpha < 1 \]
First-Order Conditions (FOCs)
- \(\frac{\partial \mathcal{L}}{\partial x_1} = \alpha x_1^{\alpha - 1} x_2^{1 - \alpha} - \lambda p_1 = 0\)
- \(\frac{\partial \mathcal{L}}{\partial x_2} = (1 - \alpha) x_1^{\alpha} x_2^{-\alpha} - \lambda p_2 = 0\)
- \(\frac{\partial \mathcal{L}}{\partial \lambda} = I - p_1 x_1 - p_2 x_2 = 0\)
4. Solve the FOCs
Take the ratio of the first two FOCs:
\[ \frac{\alpha x_1^{\alpha - 1} x_2^{1 - \alpha}}{(1 - \alpha) x_1^{\alpha} x_2^{-\alpha}} = \frac{\lambda p_1}{\lambda p_2} \Rightarrow \frac{\alpha}{1 - \alpha} \cdot \frac{x_2}{x_1} = \frac{p_1}{p_2} \]
Solve for \(x_2\) in terms of \(x_1\):
\[ x_2 = \frac{1 - \alpha}{\alpha} \cdot \frac{p_1}{p_2} \cdot x_1 \]
Now plug into the budget constraint:
\[ p_1 x_1 + p_2 x_2 = I \Rightarrow p_1 x_1 + p_2 \left( \frac{1 - \alpha}{\alpha} \cdot \frac{p_1}{p_2} \cdot x_1 \right) = I \]
Simplify:
\[ p_1 x_1 \left(1 + \frac{1 - \alpha}{\alpha} \right) = I \Rightarrow p_1 x_1 \cdot \frac{1}{\alpha} = I \Rightarrow x_1 = \frac{\alpha I}{p_1} \]
Plug back into \(x_2\):
\[ x_2 = \frac{(1 - \alpha) I}{p_2} \]
5. Marshallian (Ordinary) Demand Functions
\[ x_1(p_1, p_2, I) = \frac{\alpha I}{p_1}, \quad x_2(p_1, p_2, I) = \frac{(1 - \alpha) I}{p_2} \]
These are the individual demand functions: they show how much of each good the consumer will demand, as a function of prices and income. They are homogeneous of degree 0 in \((p_1, p_2, I)\): if all prices and income double, demand stays the same. They are downward sloping: as \(p_1\) increases, \(x_1\) decreases. And demand for each good is proportional to the budget share: \(\alpha\) and \(1 - \alpha\).
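If you want to verify the algebra, here is a short SymPy sketch (my own check, not from the original text) confirming that these demands satisfy the tangency condition (MRS equal to the price ratio) and exhaust the budget:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", positive=True)
alpha, p1, p2, I = sp.symbols("alpha p1 p2 I", positive=True)

u = x1**alpha * x2**(1 - alpha)                              # Cobb-Douglas utility
candidate = {x1: alpha * I / p1, x2: (1 - alpha) * I / p2}   # derived Marshallian demands

MRS = sp.diff(u, x1) / sp.diff(u, x2)                        # marginal rate of substitution
print(sp.simplify(MRS.subs(candidate) - p1 / p2))            # tangency condition: expect 0
print(sp.simplify((p1 * x1 + p2 * x2).subs(candidate) - I))  # budget binds: expect 0
```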
Now let's derive the inverse demand function (expressing price as a function of quantity demanded, holding income and other prices constant) and market demand (aggregating individual demand functions across consumers). Inverting the Marshallian demands from earlier gives the maximum price the consumer is willing to pay for a given quantity. Demand is downward-sloping: as quantity increases, the price the consumer is willing to pay falls. These functions reflect marginal utility per dollar; the more you have of a good, the less you're willing to pay for more.
Suppose there are \(N\) identical consumers with the same utility function and income \(I\). Then the market demand is just the sum of individual demands. These are aggregate demand curves for goods 1 and 2 in the market.
\[ X_1(p_1) = N \cdot x_1 = N \cdot \frac{\alpha I}{p_1} = \frac{N \alpha I}{p_1} \]
\[ X_2(p_2) = N \cdot x_2 = N \cdot \frac{(1 - \alpha) I}{p_2} = \frac{N (1 - \alpha) I}{p_2} \]
Rewriting in inverse form:
\[ p_1(X_1) = \frac{N \alpha I}{X_1}, \quad p_2(X_2) = \frac{N (1 - \alpha) I}{X_2} \]
We are almost there. We have the two major ingredients for establishing an equilibrium needed to do comparative statics! In a competitive market, equilibrium occurs at the price \(p^*\) and quantity \(q^*\) where:
\[ \text{Quantity demanded} = \text{Quantity supplied} \quad \Rightarrow \quad Q_d(p^*) = Q_s(p^*) \]
It’s the point where buyers and sellers agree: no excess demand, no excess supply. Assume a single good, \(q\), and a Cobb-Douglas utility and production structure. Recall the market demand and market supply functions from earlier:
\[ Q_d(p) = \frac{N \alpha I}{p} \]
\[ Q_s(p) = N A p^{\frac{\alpha}{1 - \alpha}} \]
This shows upward-sloping supply and downward-sloping demand. Set quantity demanded equal to quantity supplied:
\[ Q_d(p) = Q_s(p) \quad \Rightarrow \quad \frac{N \alpha I}{p} = N A p^{\frac{\alpha}{1 - \alpha}} \]
Cancel \(N\) on both sides:
\[ \frac{\alpha I}{p} = A p^{\frac{\alpha}{1 - \alpha}} \]
Multiply both sides by \(p\):
\[ \alpha I = A p^{\frac{\alpha}{1 - \alpha} + 1} \]
To solve for equilibrium price \(p^*\), let’s simplify the exponent:
\[ \frac{\alpha}{1 - \alpha} + 1 = \frac{\alpha + (1 - \alpha)}{1 - \alpha} = \frac{1}{1 - \alpha} \]
So:
\[ \alpha I = A p^{\frac{1}{1 - \alpha}} \Rightarrow p^{\frac{1}{1 - \alpha}} = \frac{\alpha I}{A} \Rightarrow p^* = \left( \frac{\alpha I}{A} \right)^{1 - \alpha} \]
Now we want to find the Equilibrium Quantity \(q^*\). Plug \(p^*\) back into either supply or demand. Using demand:
\[ q^* = Q_d(p^*) = \frac{N \alpha I}{p^*} = N \alpha I \cdot \left( \frac{A}{\alpha I} \right)^{1 - \alpha} = N \cdot A^{1 - \alpha} \cdot (\alpha I)^\alpha \]
What determines the equilibrium?
- On the demand side:
- \(N\): number of consumers
- \(I\): income
- \(\alpha\): preference weight on good 1
- On the supply side:
- \(A\): a function of technology (\(\alpha\)) and input cost \(w\)
- \(N\): number of firms
Comparative Statics (see the numerical sketch after this list):
- Increase in income \(I\):
- Demand shifts right
- \(p^*\) increases
- \(q^*\) increases
- Decrease in input cost \(w\):
- Supply shifts right (A increases)
- \(p^*\) falls
- \(q^*\) rises
- More consumers or firms:
- Increasing \(N\) on the demand side raises \(q^*\) and \(p^*\)
- Increasing \(N\) on the supply side increases \(q^*\) but reduces \(p^*\)
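These comparative statics can be checked numerically. Below is a small sketch using the equilibrium formulas derived above, with illustrative parameter values that are my own assumptions:

```python
def equilibrium(alpha, I, w, N):
    # p* and q* from the closed-form expressions derived above
    A = (alpha / w) ** (alpha / (1 - alpha))
    p_star = (alpha * I / A) ** (1 - alpha)
    q_star = N * A ** (1 - alpha) * (alpha * I) ** alpha
    return round(p_star, 3), round(q_star, 3)

base = dict(alpha=0.5, I=100.0, w=2.0, N=10)
print("baseline:       ", equilibrium(**base))
print("higher income I:", equilibrium(**{**base, "I": 120.0}))  # p* and q* both rise
print("lower input w:  ", equilibrium(**{**base, "w": 1.5}))    # p* falls, q* rises
```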
This is a theoretical model that makes predictions about what will happen empirically. Typically what an economist would do next is collect data and fit an econometric model to test whether the theoretical model matches reality. This is also known as the "Validation" stage of applied mathematical modeling. If the empirical estimates align with the predictions made by the theoretical model, this would act as confirmation of the model; it describes economic reality to a sufficient degree. I'll say more about empirics in later posts. For now, just remember that many economists consider an empirical model to be successful if the signs of the coefficients align with the theoretical model. For example, if the empirical model suggests that an increase in income decreases demand for a product, that would indicate something is wrong with the model. If the model fits poorly to the data (based on econometric criteria of model fit), economists might revise or relax their assumptions, consider alternative functional forms, or move to more advanced methods like non-parametric estimation.
It should also be clear that policy implications flow naturally from economic models. Once a model is formalized and possibly validated empirically, it becomes a tool for counterfactual reasoning: “What would happen if something changes?” The “something” is typically a policy. If something shifts, the model gives us a predictive framework for understanding how the system will respond. The model identifies key variables that influence market outcomes, so economists can use it to figure out the direction and magnitude of the effects of some policy based on the structure of the model. This is where the common textbook examples of policy derive: tax policy, price controls, subsidies, etc. I've not yet talked about efficiency, but it is a common theme in economics. Inefficiency is a deviation from the optimal market solution, and efficiency is conceptualized very specifically in economics, deriving from the assumptions governing these standard models (welfare economics). More on that later. These models also help economists run cost-benefit analyses, compare trade-offs between the short run and the long run, and draw conclusions about who will be affected by a change.
The model provides clarity about mechanisms and outcomes, it makes our assumptions explicit, and allows simulation of alternatives without trial-and-error in the real world. Policy implications are not just guesses; they are disciplined logical consequences of the structure of the model. The more grounded the model is in both theory and data, the more reliable its guidance becomes for shaping public policy. Often in economics, much dispute arises at the level of assumptions. Differing assumptions lead to differing policy implications, which have a real effect on people. These assumptions ought to be interrogated. Unfortunately, in the public sphere, economic discourse is polluted by ideology. Models are just maps of reality; they are necessarily wrong but can be useful. They are useful for gaining analytical clarity and are fundamental tools of inquiry, not conversation stoppers. Oftentimes, these models become justifications for dogmatic economic ideology. This was the major motivation driving me to write about this because nothing pisses me off more than ideologues at think tanks who pollute the broader public dialogue.
Assumptions
- Rationality: Economic agents (consumers and firms) act rationally, meaning they seek to maximize their utility or profit given constraints. For example, a consumer chooses the combination of goods that provides the highest satisfaction within their budget. For a consumer maximizing utility:
\[ \max_{x} U(x) \]
subject to:
\[ p \cdot x \leq w \]
where:
\(U(x)\) is the utility function,
\(x\) is the consumption bundle,
\(p\) is the price vector,
\(w\) is the consumer’s wealth.
For a firm maximizing profit:
\[ \max_{q} \pi(q) = R(q) - C(q) \]
where:
\(q\) is output,
\(R(q)\) is revenue,
\(C(q)\) is cost.
- Complete Preferences: Consumers can rank all possible consumption bundles. For example, a consumer can compare apples and oranges and decide whether they prefer one to the other or are indifferent. A preference relation \(\succsim\) satisfies completeness if:
\[ \forall x, y \in X, \quad x \succsim y \text{ or } y \succsim x. \]
where \(X\) is the set of all consumption bundles.
- Transitivity: If a consumer prefers bundle \(A\) to \(B\) and \(B\) to \(C\), then they must prefer \(A\) to \(C\). If a consumer prefers coffee to tea and tea to soda, then they should prefer coffee to soda.
\[ \forall x, y, z \in X, \quad x \succsim y \text{ and } y \succsim z \Rightarrow x \succsim z. \]
- Non-Satiation (Monotonicity): More of a good is always preferred to less, assuming no negative effects. A consumer prefers 5 chocolates over 4 chocolates. If \(x'\) has at least as much of each good as \(x\) and strictly more of at least one good, then:
\[ x' \succ x. \]
- Convex Preferences: Consumers prefer a balanced mix of goods rather than consuming only one type. A consumer prefers a mix of apples and bananas over consuming only apples or only bananas. If \(x \sim y\), then for \(0 \leq \lambda \leq 1\),
\[ \lambda x + (1-\lambda)y \succsim x, y. \]
- Diminishing Marginal Utility: As consumption of a good increases, the additional utility gained from consuming one more unit decreases. The first slice of pizza is highly satisfying, but the tenth slice provides much less additional satisfaction. Mathematically:
\[ \frac{\partial^2 U}{\partial x^2} < 0. \]
- Perfect Information: All economic agents have full knowledge of prices, product quality, and available alternatives. For example, a consumer knows the price of apples at every store and always buys from the cheapest source.
- Perfect Competition: Markets have many buyers and sellers, no single agent has market power, and goods are homogeneous. For example, the wheat market has many sellers, and no single farmer can influence the price. Mathematically, each firm is a price taker:
\[ P = MC(q) \]
- No Externalities: All costs and benefits of a transaction are borne by the buyer and seller, with no third-party effects. For example, a factory polluting a river affects nearby residents, violating this assumption. Mathematically, market efficiency is:
\[ MC_{\text{private}} = MC_{\text{social}}. \]
- No Barriers to Entry or Exit: Firms can freely enter or exit the market based on profitability. For example, if the coffee shop industry becomes highly profitable, new competitors can enter the market. Mathematically this means long-run equilibrium requires zero economic profits:
\[ P = AC. \]
- Time-Invariant Preferences: Consumer preferences do not change unpredictably over time. If a person prefers Coke to Pepsi today, they will likely prefer it tomorrow. Mathematically:
\[ U(x,t) = U(x) \text{ for all } t. \]
- Well-Defined Property Rights: Resources have clear ownership, allowing markets to function efficiently. A farmer owns land and can decide how to use or sell it. If \(x\) is owned by agent \(i\), then:
\[ x \in X_i. \]
- Continuity of Preferences: A small change in consumption does not cause abrupt changes in preferences. For example, if a consumer slightly increases the quantity of an orange, their utility does not change dramatically. Mathematically, this means:
\[ \lim_{x \to x'} U(x) = U(x'). \]
- Production Function Assumptions: Firms use inputs efficiently and experience diminishing returns to inputs. Doubling workers in a factory may not double output due to inefficiencies. For a production function \(f(L, K)\), where \(L\) is labor and \(K\) is capital:
\[ \frac{\partial^2 f}{\partial L^2} < 0, \quad \frac{\partial^2 f}{\partial K^2} < 0. \]
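As a quick illustration of the last two mathematical conditions (diminishing marginal utility and diminishing returns), here is a short SymPy sketch with example functional forms of my own choosing:

```python
import sympy as sp

x, L, K = sp.symbols("x L K", positive=True)

u = sp.log(x)                                    # example utility function
f = L**sp.Rational(1, 2) * K**sp.Rational(1, 3)  # example Cobb-Douglas production function

print(sp.diff(u, x, 2))               # -1/x**2: negative, so diminishing marginal utility
print(sp.simplify(sp.diff(f, L, 2)))  # negative for L, K > 0: diminishing returns to labor
print(sp.simplify(sp.diff(f, K, 2)))  # negative for L, K > 0: diminishing returns to capital
```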
Many of the assumptions in standard microeconomic theory do not hold in real-world settings, and violating them has significant implications for economic models, often necessitating alternative approaches. When these assumptions fail in practice, the result is market failures, inefficiencies, and suboptimal decision-making. Alternative frameworks (behavioral economics, game theory, contract theory, and institutional economics) provide more realistic approaches to understanding real-world behavior. Relaxing these assumptions makes economic models more complex but also more applicable to real-world problems. Below are key microeconomic assumptions that are frequently violated, along with the implications of their violations:
- Rationality (Utility and Profit Maximization): Behavioral economics shows that individuals frequently exhibit bounded rationality (Simon, 1955). People rely on heuristics and biases (Kahneman & Tversky, 1979) rather than maximizing expected utility. Firms may satisfice instead of maximizing profit, meaning they aim for a satisfactory rather than the optimal outcome. Standard demand and supply models may fail when consumers make suboptimal choices.
- Standard models may overpredict rational behavior in decision-making
- Markets may not clear efficiently due to systematic errors in decision-making
- Firms can exploit consumer biases for profit
- Bubbles and irrational behaviors (such as excessive risk-taking) may arise
- Perfect Information: Consumers and firms often have incomplete or asymmetric information. A few examples are the lemons problem (Akerlof, 1970), moral hazard, and adverse selection in insurance markets. Search costs and information-processing limitations can also affect choices.
- Markets may fail due to information asymmetries
- Firms may exploit consumers using advertising, leading to suboptimal consumption
- Government intervention may be required (e.g., regulations on truth-in-advertising or financial disclosures)
- Transitivity of Preferences: People’s preferences are inconsistent over time (Tversky & Kahneman, 1981). Often they have context-dependent preferences (e.g., framing effects): choices depend on how alternatives are presented. They frequently have cyclical preferences: people may prefer A over B, B over C, but C over A.
- Standard utility maximization fails because preference orderings are not well-defined
- Choices may be intransitive, leading to unstable market equilibria
- The Revealed Preference Theory fails when choices change with context
- Perfect Competition (Price Taking Behavior): Many markets have monopolies, oligopolies, and monopolistic competition. Firms engage in strategic pricing, branding, and product differentiation. Market power allows firms to set prices above marginal cost.
- The first welfare theorem (which states that competitive markets lead to efficiency) does not hold
- Price distortions lead to deadweight loss
- Firms engage in rent-seeking behavior (more on this later)
- No Externalities: Real-world markets generate negative externalities (pollution) and positive externalities (innovation spillovers). Firms and consumers do not internalize the full social cost/benefit.
- Market outcomes are not Pareto efficient
- Public goods (e.g., clean air) are underprovided
- Tragedy of the commons occurs (Hardin, 1968)
Obviously, there are economists that relax these assumptions if the situation calls for it. You can think of these as default assumptions, or a base case. Economists will make adjustments if this methodological individualism fails to describe the system. For systemic risk, macro crises, and institutional evolution, individual-level analysis is insufficient. Newer models increasingly blend individual and collective dynamics, incorporating social networks, heuristics, and institutional effects. For example, complexity economics rejects many of these assumptions. Here is a table for comparison:
| Assumption | Standard Micro & Macro Theory | Complexity Economics |
|---|---|---|
| Rationality | Agents are fully rational, optimizing utility or profit | Agents have bounded rationality and use heuristics (Adaptive Expectations) |
| Homogeneous Representative Agent | A single agent represents an entire sector or economy | Heterogeneous agents interact and adapt (Agent-Based Models) |
| Equilibrium | Systems converge to a stable equilibrium | Systems are out-of-equilibrium, evolving over time (Positive Feedback Dynamics) |
| Perfect Information | Agents have full or at least rational expectations | Information is localized and incomplete |
| Linear Dynamics | Small shocks lead to small effects (predictable responses) | Systems exhibit nonlinear dynamics and tipping points |
| Exogenous Shocks | Crises are caused by external factors (e.g., policy mistakes) | Crises emerge endogenously from network effects (Financial Contagion, Technology Diffusion) |
| Aggregate Behavior | Macro outcomes result from simple aggregation of individual behavior | Emergence: Macro outcomes arise from micro interactions (Non-Linear and Evolutionary) |
When I began heavily questioning the assumptions of economic models in graduate school, I came across a well-known result in computer science and computational complexity theory that demonstrates the inherent computational difficulty of finding equilibria in economic models. The result shows that many equilibrium problems in economics, particularly those based on fixed-point theorems, are computationally hard: PPAD-complete (Polynomial Parity Argument on Directed graphs) or, in some formulations, NP-hard. It took a while for me to wrap my head around this because economists typically do not study computational complexity, and hence would never imagine questioning the equilibrium assumption in economics. Theoretical economic models assume that equilibrium exists (e.g., Walrasian general equilibrium, Nash equilibrium). However, from a computational complexity perspective, actually computing these equilibria is often infeasible in practice. Several studies have shown that finding general equilibrium prices or Nash equilibria is computationally intractable in the worst case. This means it may take exponentially long to compute an equilibrium, making it impractical for real-world markets, and it implies that markets may not reach equilibrium in reasonable time scales, meaning market equilibrium assumptions might not hold in practice. Kenneth Arrow and Gérard Debreu proved that, under certain assumptions, economies described by the competitive model have an equilibrium (uniqueness requires still stronger conditions). But Daskalakis, Goldberg & Papadimitriou (2006) showed that, even if an equilibrium exists (by Nash’s theorem), finding it is as hard as any problem in the PPAD class. Nash's theorem relies on Brouwer's fixed-point theorem, but Daskalakis et al. showed that computing such a fixed point is PPAD-hard, meaning there is no polynomial-time algorithm unless PPAD problems are easy. In large markets or games, even if an equilibrium theoretically exists, it may be computationally impossible to find. This questions the practical relevance of equilibrium-based economic models. (https://people.cs.pitt.edu/~kirk/CS1699Fall2014/lect4.pdf)
I also began asking about what happens if we have heterogeneous utility functions across the collection of consumers, and whether utility functions are strictly independent from one another. How can we do any aggregation? It turns out that aggregation under heterogeneous preferences is a known issue. We cannot aggregate individual behavior cleanly unless very specific conditions hold. If each consumer \( i \) has their own utility function \( U_i(x_i, y_i) \), we can’t, in general, assume that aggregate demand behaves like a “representative” consumer’s demand. Why?
- Income Effects Differ Across Consumers: Suppose one consumer has a strong income effect, another has a weak one. As total income or prices change, aggregate demand won’t behave like any single demand function — the composition of demand changes.
- Preferences Might Not Be Homothetic: If utilities are non-homothetic (e.g., demand depends on income in nonlinear ways), the shape of aggregate demand depends on the income distribution — not just total income. That makes aggregate demand non-representable by a single utility function in general.
- Non-Separability & Interdependence: If consumers' utilities interact (e.g., network effects, social preferences), you can’t even write their problem as separate maximization problems. For example: \( U_i(x_i, y_i; x_j) \) — consumer \( i \)'s utility depends on what consumer \( j \) does. Aggregation fails hard in this case; the economy is strategic, not just additive.
In comes the Sonnenschein–Mantel–Debreu (SMD) theorem: almost any shape of market demand can be generated by aggregating rational individual demands, even if each consumer behaves “nicely” (i.e., maximizes utility and has well-behaved preferences). Aggregate demand functions need not satisfy the law of demand, uniqueness, smoothness, or a downward-sloping structure. Even if all individual preferences are convex, continuous, monotonic, etc., aggregate demand can be wild. There are, however, special cases where you can cleanly aggregate.
Case 1: Identical, Homothetic Preferences: Everyone has the same utility function, and it’s homothetic (e.g., Cobb-Douglas, CES). Then aggregate demand depends only on total income, not its distribution. A representative consumer exists.
Case 2: Gorman Polar Form: The Gorman form characterizes when individual demands can be aggregated: if each consumer's indirect utility function is affine (linear) in income, i.e.,
\[ v_i(p, m_i) = a_i(p) + b(p) \cdot m_i \]
and all consumers share the same marginal utility of income term \( b(p) \), then aggregate demand can be represented as if it came from a single representative consumer. This condition is very restrictive. If utility functions differ, especially non-homothetically, and income effects vary, then aggregation fails: you cannot represent aggregate demand with a single utility function, and the distribution of income and preferences matters deeply. And if preferences are not independent (e.g., they depend on what others do), then you are no longer even in a “representative agent” world; you are in game-theoretic territory. This is fascinating: Sonnenschein, Mantel, and Debreu showed that, because of these aggregation problems, there is in general no guarantee of a unique, stable equilibrium, while the Arrow–Debreu existence theorem only tells us that some equilibrium exists under certain assumptions. And in the favorable case where a stable equilibrium does exist, the Daskalakis–Goldberg–Papadimitriou result suggests it may take an economy an implausibly (potentially exponentially) long time to find it. So what are economists doing with their time? Who knows. Earlier I mentioned the rationality of assumption revision and the general drawbacks of assuming something false; I think the implications are clear. I should specify that these results are usually framed at the level of macroeconomic (general equilibrium) analysis, but I don't see why they wouldn't also apply to single markets, which are just localized instantiations of the general problem.
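To make the aggregation problem concrete, here is a minimal numerical sketch. The two preference specifications, the prices, and the income splits are assumptions chosen purely for illustration: one consumer has Cobb-Douglas preferences (demand proportional to income), the other quasilinear preferences (demand for \(x\) insensitive to income beyond a threshold). Holding total income fixed, redistributing it changes aggregate demand for \(x\); with identical Cobb-Douglas consumers it would not.

```python
# Minimal sketch: aggregate demand for good x under two income splits (all numbers illustrative).
px, total_income = 2.0, 100.0   # price of x (y is the numeraire), total income to be split

def demand_cobb_douglas(m, px, alpha=0.5):
    """U = x^alpha * y^(1-alpha): spend the share alpha of income m on good x."""
    return alpha * m / px

def demand_quasilinear(m, px, k=20.0):
    """U = k*ln(x) + y: spend min(k, m) on good x, regardless of income beyond k."""
    return min(k, m) / px

for m_a, m_b in [(50.0, 50.0), (80.0, 20.0)]:          # two splits of the same total income
    heterogeneous = demand_cobb_douglas(m_a, px) + demand_quasilinear(m_b, px)
    identical = demand_cobb_douglas(m_a, px) + demand_cobb_douglas(m_b, px)
    print(f"split ({m_a:.0f}, {m_b:.0f}): heterogeneous aggregate demand = {heterogeneous:.1f}, "
          f"identical Cobb-Douglas aggregate = {identical:.1f}")
# Output: 22.5 vs 25.0 under the (50, 50) split, but 30.0 vs 25.0 under the (80, 20) split --
# with heterogeneous preferences the income distribution, not just its total, matters.
```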
- https://rwer.wordpress.com/2017/09/19/stiglitz-and-the-full-force-of-sonnenschein-mantel-debreu/
- https://profstevekeen.substack.com/p/the-anything-goes-market-demand-curve
"Almost a century and a half after Léon Walras founded general equilibrium theory, economists still have not been able to show that markets lead economies to equilibria. We do know that — under very restrictive assumptions — equilibria do exist, are unique and are Pareto-efficient. But — what good does that do? As long as we cannot show that there are convincing reasons to suppose there are forces which lead economies to equilibria — the value of general equilibrium theory is nil. As long as we cannot really demonstrate that there are forces operating — under reasonable, relevant and at least mildly realistic conditions — at moving markets to equilibria, there cannot really be any sustainable reason for anyone to pay any interest or attention to this theory."
In order to have a coherent market demand function, you typically need these assumptions:
| Assumption | Why It's Needed | What Happens if It Fails |
|---|---|---|
| 1. Homothetic preferences | So Engel curves (income → quantity) are straight lines through the origin — demand only depends on relative prices and total income. | Demand becomes sensitive to income distribution, not just total income — aggregation fails. |
| 2. Identical preferences | Ensures that income effects and substitution effects are similar across individuals. | Different preferences create non-canceling income effects, making demand shapes unpredictable. |
| 3. No wealth effects (or quasi-linear utilities) | Gorman polar form requires linear Engel curves with the same slope across consumers. | Consumers react differently to income changes → aggregation fails. |
| 4. Independent preferences | Each consumer’s utility is independent of others’. | Interdependent preferences (e.g., externalities, network effects) destroy separability. No meaningful aggregation. |
| 5. Complete markets & no rationing | Ensures each agent optimizes fully. | If constraints exist (e.g., liquidity, quantity rationing), individual demands don’t reflect preferences alone. |
| 6. Convex preferences | Makes individual demands well-behaved (single-valued, continuous, responsive). | Non-convexity introduces multiple optima, discontinuities, or non-monotonic demand — invalidates aggregation. |
| 7. Perfect information and price-taking behavior | Ensures individual demands are responsive only to prices and income. | If strategic behavior or uncertainty exists, individual choices reflect beliefs, not pure preferences. |
Generally these are the issues:
- Different marginal propensities to consume across individuals → income redistribution changes aggregate demand.
- Presence of luxuries and necessities → demand depends on income distribution, not just totals.
- Social preferences, peer effects, positional goods → utility is not separable.
- Price-dependent wealth effects → demand depends on who holds wealth, not just how much exists.
- Differential exposure to prices (e.g., subsidies, taxes, discrimination) → different consumers face different effective prices.
- Behavioral heterogeneity (bounded rationality, reference dependence, etc.) → violates utility maximization assumptions.
- Multiple equilibria or discontinuities → market demand can't be summarized by a stable function.
Efficiency
- Allocative Efficiency – Resources are distributed in a way that maximizes societal welfare, meaning goods and services are produced according to consumer preferences. It occurs when the price of a good equals its marginal cost (P = MC).
- Productive Efficiency – Goods and services are produced at the lowest possible cost, meaning no additional output can be achieved without increasing input costs. This happens when firms operate at the lowest point of their average cost curve.
- Pareto Efficiency (Pareto Optimality) – A state where no one can be made better off without making someone else worse off. It represents an ideal allocation of resources where improvements in one area would require trade-offs in another.
In practice, a range of optimization and decision-support methods from operations research and management science is used to pursue efficiency and allocate scarce resources:
- Linear Programming (LP): Formulates decision problems as a linear objective (such as maximizing profit or minimizing cost) subject to linear constraints that capture limited resources, capacities, and business rules. LP solvers search the feasible region and identify the best combination of decision variables (e.g., production quantities, workforce levels, shipping amounts) that satisfies all constraints. By providing optimal allocations and sensitivity information (like shadow prices and reduced costs), LP helps managers understand trade-offs, identify bottlenecks, and make efficient, data-driven decisions (a minimal LP sketch appears after this list).
- Integer & Mixed-Integer Programming (IP/MIP): Extends linear programming by requiring some or all decision variables to be integers (e.g., 0–1 yes/no choices, counts of trucks, machines, or facilities), making it ideal for planning and design problems where fractional decisions are meaningless. IP/MIP models capture logical relationships (open/close, assign/not assign, either-or) and complex constraints that reflect real-world operational rules. By searching over discrete combinations of decisions, these models uncover high-quality, implementable plans for scheduling, routing, location, and capacity expansion, directly supporting strategic and operational decision-making.
- Nonlinear Programming (NLP): Handles optimization problems where the objective or constraints are curved (nonlinear), such as diminishing returns, nonlinear costs, physical laws, or risk measures. NLP can capture realistic relationships like nonlinear fuel consumption, power flows, or portfolio risk-return profiles that linear models cannot accurately represent. By optimizing over these nonlinear relationships, NLP provides decision makers with feasible, often more realistic “best possible” solutions that balance performance, cost, and risk in engineering, energy, and financial applications.
- Multi-Objective Optimization: Simultaneously considers several objectives (such as cost, service level, environmental impact, and risk) instead of collapsing everything into a single metric. These models generate a set of Pareto-efficient solutions that represent different trade-offs among objectives, rather than one “one-size-fits-all” answer. Decision makers can then explore this efficient frontier, compare scenarios, and select the alternative that best reflects their strategic priorities and preferences, enabling transparent and structured decision support.
- Network Optimization: Represents systems as nodes (locations, facilities, servers) and arcs (roads, pipelines, communication links) with capacities and costs, then finds the most efficient routes, flows, or configurations. Classic problems include shortest paths, minimum-cost flows, and network design for logistics, transportation, and telecommunication networks. By optimizing how goods, information, or resources move through the network, these models reduce transportation and operating costs, improve reliability, and support strategic decisions about where to invest in new capacity.
- Stochastic Programming: Explicitly incorporates uncertainty in parameters like demand, prices, processing times, or failures by modeling multiple scenarios with associated probabilities. Decisions are typically split into “here-and-now” choices (made before uncertainty is realized) and “recourse” actions (adjustments made after outcomes are known). This framework produces solutions that hedge against risk, perform well on average across scenarios, and help decision makers understand the value of flexibility and contingency plans under uncertainty.
- Robust Optimization: Focuses on solutions that remain feasible and effective across a range of uncertain parameter values, without requiring detailed probability distributions. It defines uncertainty sets (e.g., demand within a range, costs varying within a band) and finds decisions that perform well under worst-case or adverse conditions. This approach is especially valuable when data is scarce or noisy, delivering plans that are more resilient to surprises and reducing the likelihood of costly constraint violations or service failures.
- Queueing Theory: Models systems where customers, jobs, or tasks arrive, wait, and receive service (e.g., call centers, hospital ERs, repair shops, checkouts). Using arrival and service rate assumptions, it predicts key performance measures such as average waiting time, queue length, utilization, and probability of delay. These insights help managers choose staffing levels, number of servers, and priority rules that balance service quality with cost, enabling efficient capacity planning and service-level management (a minimal M/M/1 calculation appears after this list).
- Discrete-Event Simulation: Builds a time-based, virtual model of processes where events (arrivals, service completions, breakdowns, shifts) occur at discrete points in time. By simulating the system many times under different configurations, managers can test “what-if” scenarios—such as adding staff, changing layout, or modifying rules—without disrupting real operations. Simulation reveals bottlenecks, variability effects, and system-wide interactions, providing rich decision support for improving throughput, reducing congestion, and designing more efficient workflows.
- Monte Carlo Simulation: Uses random sampling from probability distributions to propagate uncertainty through models and generate distributions of outcomes (e.g., costs, profits, completion times). This technique helps quantify risk by estimating metrics like the probability of meeting a target, worst-case losses, or expected overruns. Decision makers can then compare alternatives not just on single-point estimates but on their full risk profiles, leading to more informed choices about buffers, contingency plans, and risk-reward trade-offs (a minimal sampling sketch appears after this list).
- Heuristics & Metaheuristics: Provide general-purpose search strategies (such as genetic algorithms, simulated annealing, tabu search, or GRASP) designed to quickly find good, near-optimal solutions to very large or complex problems where exact optimization may be too slow or impossible. These methods explore the solution space intelligently, using rules of thumb and guided randomness to escape local optima and improve solutions over time. They are widely used for routing, scheduling, design, and planning problems, where they deliver high-quality, implementable decisions within practical time limits, enhancing operational efficiency.
- Decision Trees: Represent sequential decisions, chance events, and outcomes in a branching tree structure, with probabilities and payoffs attached to each branch. By “rolling back” the tree using expected values, managers can systematically compare strategies, account for uncertainty, and determine the optimal decision at each stage. Decision trees also support the evaluation of information-gathering actions (such as tests or market research), helping quantify the value of information and clarify complex, multi-step decision problems.
- Multi-Criteria Decision-Making (e.g., AHP, TOPSIS): Helps evaluate and rank alternatives when multiple, often conflicting criteria matter, such as cost, quality, risk, sustainability, and strategic fit. Methods like AHP structure the problem into a hierarchy and elicit relative importance weights from decision makers, while techniques like TOPSIS rank alternatives based on their closeness to an ideal solution. These approaches transform subjective preferences into transparent, quantitative scores and rankings, enabling consistent, defendable decisions in complex evaluations like vendor selection, project prioritization, or technology choice.
- Inventory Models (EOQ, reorder points, (s,S) policies): Determine how much to order and when to reorder to balance ordering costs, holding costs, and the risk of stockouts. Models like EOQ give optimal lot sizes under stable demand, while reorder point and (s,S) policies incorporate variability and service-level targets. By linking inventory decisions to demand patterns, lead times, and cost parameters, these models support efficient inventory control, reduce excess stock, and improve product availability and customer service (a minimal EOQ and reorder-point calculation appears after this list).
- Data Envelopment Analysis (DEA): Evaluates the relative efficiency of comparable decision-making units (DMUs)—such as plants, branches, hospitals, or service centers—by comparing multiple inputs (e.g., labor, capital, materials) and outputs (e.g., units produced, patients treated, services delivered). DEA constructs an empirical “efficient frontier” and measures how far each unit is from this frontier, identifying efficient peers and quantifying potential input savings or output increases. This information supports performance benchmarking, target setting, and resource reallocation, guiding managers toward more efficient operations and best practices across the organization.
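To illustrate the LP bullet above, here is a minimal product-mix sketch using scipy.optimize.linprog; the profit coefficients and resource limits are made-up numbers, not data from the text.

```python
from scipy.optimize import linprog

# Hypothetical product mix: maximize profit 40*x1 + 30*x2
# subject to labor (x1 + x2 <= 40 hours) and machine time (2*x1 + x2 <= 60 hours), x1, x2 >= 0.
c = [-40, -30]                       # linprog minimizes, so negate the profit coefficients
A_ub = [[1, 1], [2, 1]]
b_ub = [40, 60]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal production plan:", res.x)    # -> [20. 20.]
print("maximum profit:", -res.fun)          # -> 1400.0
# With the HiGHS backend, res.ineqlin.marginals exposes the constraint duals, from which the
# shadow prices of the two resources can be read (up to the solver's sign convention).
```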
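For the queueing-theory bullet, a minimal M/M/1 calculation with assumed arrival and service rates gives the standard steady-state performance measures:

```python
# M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu (illustrative rates).
lam, mu = 8.0, 10.0                  # e.g., 8 arrivals/hour against a capacity of 10 services/hour
rho = lam / mu                       # server utilization (must be < 1 for a stable queue)
L = rho / (1 - rho)                  # average number of customers in the system
Lq = rho ** 2 / (1 - rho)            # average number waiting in the queue
W = 1 / (mu - lam)                   # average time in the system (hours)
Wq = rho / (mu - lam)                # average waiting time before service (hours)

print(f"utilization = {rho:.0%}, avg in system = {L:.1f}, avg wait = {Wq * 60:.1f} minutes")
```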
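For the Monte Carlo simulation bullet, the sketch below propagates assumed cost distributions through a simple project-cost model; every distribution and threshold here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical project: three cost components with assumed distributions.
labor = rng.normal(500_000, 60_000, n)                      # roughly symmetric uncertainty
materials = rng.lognormal(mean=12.0, sigma=0.25, size=n)    # right-skewed material prices
delay_penalty = rng.binomial(1, 0.2, n) * 150_000           # 20% chance of a fixed penalty

total_cost = labor + materials + delay_penalty
budget = 900_000

print(f"expected cost: {total_cost.mean():,.0f}")
print(f"90th percentile cost: {np.percentile(total_cost, 90):,.0f}")
print(f"probability of exceeding the budget: {(total_cost > budget).mean():.1%}")
```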
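And for the inventory-models bullet, a minimal EOQ and reorder-point calculation under assumed demand and cost parameters:

```python
import math

# Assumed inventory parameters (illustrative): annual demand, fixed order cost, holding cost.
annual_demand = 12_000                 # units per year
order_cost = 80.0                      # $ per order placed
holding_cost = 2.5                     # $ per unit held per year
daily_demand, demand_sd = 40.0, 8.0    # mean/std of daily demand (assumes ~300 operating days)
lead_time_days = 5
z = 1.65                               # z-value for roughly a 95% cycle service level

eoq = math.sqrt(2 * annual_demand * order_cost / holding_cost)
safety_stock = z * demand_sd * math.sqrt(lead_time_days)
reorder_point = daily_demand * lead_time_days + safety_stock

print(f"EOQ = {eoq:.0f} units, safety stock = {safety_stock:.0f}, reorder point = {reorder_point:.0f}")
```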
Efficiency is harder to define and pursue in the public sector, where there is no single profit metric to optimize. Several features of government decision-making complicate the picture:
- Multiple, often conflicting objectives – Governments aim to maximize social welfare, which includes economic growth, healthcare, education, infrastructure, and public safety. These goals don't always align or have a clear financial return.
- Lack of a single performance metric – Unlike profit, government performance is measured through qualitative and quantitative indicators like literacy rates, life expectancy, poverty reduction, and public satisfaction. These are difficult to compare directly.
- Political constraints – Decision-making in government is influenced by political pressures, elections, interest groups, and bureaucracy, which can lead to inefficient allocation of resources.
- Equity vs. Efficiency Trade-offs – Governments often prioritize fairness and accessibility over pure efficiency. For example, providing healthcare to all citizens may be less cost-efficient than a private insurance model but is justified on social grounds.
Evaluating government programs involves assessing their design, implementation, and impact to determine effectiveness and efficiency. Different evaluation methods help policymakers decide whether to continue, modify, or discontinue programs. Below are the main types of program evaluation methods, categorized based on the evaluation focus.
1. Program Design Evaluations
Program design evaluations focus on whether a program’s design is logical, feasible, and likely to achieve its objectives before it is fully implemented.
- A. Needs Assessment. A needs assessment determines whether a problem exists and if a government program is truly necessary, often using surveys, focus groups, and statistical analysis of demographic or economic data to clarify the nature and extent of the issue. For example, before launching a job-training program, a government might conduct a needs assessment to determine whether unemployment stems from a lack of skills or from other factors such as limited job availability or structural barriers in the labor market.
- B. Theory of Change / Logic Model Analysis. Theory of change or logic model analysis evaluates whether the program’s underlying logic—how inputs and activities are expected to produce outputs and ultimately outcomes—is coherent and realistic. This type of evaluation often relies on stakeholder interviews, expert reviews, and literature reviews to assess whether the proposed causal pathways are supported by evidence. For instance, a homelessness prevention program should clearly explain how providing housing subsidies and related services will lead to reduced homelessness rates, and a logic model analysis tests whether that pathway is plausible.
- C. Feasibility and Pilot Studies. Feasibility and pilot studies test a program’s viability on a small scale before broader implementation, using small-scale trials, randomized controlled trials (RCTs), or process simulations to identify potential problems and refine the design. For example, a new voting system might first be piloted in select districts so that issues with technology, logistics, or voter behavior can be addressed before the system is scaled up to a national level.
2. Implementation Evaluations
Implementation evaluations examine how well a program is being delivered in practice and whether it follows its intended structure and procedures.
- D. Process Evaluation. Process evaluation investigates whether program activities are being carried out as planned by looking closely at program operations through site visits, administrative data analysis, and interviews with program staff. For example, in a government food assistance program, a process evaluation would examine whether benefits are reaching the intended recipients on time and in the correct amounts, and whether administrative procedures are functioning effectively.
- E. Fidelity Assessment. Fidelity assessment checks whether a program is being implemented according to its original design and standards, relying on methods such as direct observation, program audits, and comparisons of implementation data with program manuals or guidelines. For instance, if a tutoring program for struggling students is designed to provide 10 hours of weekly instruction but records show that students receive only 5 hours, a fidelity assessment would identify this gap as a deviation from the intended model.
- F. Capacity and Resource Evaluation. Capacity and resource evaluation determines whether a program has sufficient funding, personnel, and technology to operate effectively, typically using budget analysis, workforce capability studies, and infrastructure assessments. For example, a government healthcare program might have a strong policy design but still fail to achieve its goals because there are not enough doctors, nurses, or clinics in rural areas to deliver services to the target population.
3. Outcome and Impact Evaluations
Outcome and impact evaluations assess whether a program achieves its intended goals and whether observed changes can be attributed to the program itself rather than to other factors.
- G. Performance Monitoring (Key Performance Indicators – KPIs). Performance monitoring tracks ongoing program success using predefined key performance indicators (KPIs), often presented through data dashboards, regular reports, and trend analyses. For example, a job placement program may monitor the percentage of participants who find employment within six months of completing training, using that metric to gauge whether the program is performing as expected over time.
- H. Cost-Benefit Analysis (CBA). Cost-benefit analysis compares the total social benefits of a program to its total costs in monetary terms, using economic modeling and comparisons with historical or alternative data to estimate net gains or losses. For example, a public preschool program might cost $10 million to operate but generate an estimated $50 million in future economic benefits through higher lifetime earnings, reduced crime, and lower remedial education costs, leading analysts to judge the program as highly cost-effective.
- I. Cost-Effectiveness Analysis (CEA). Cost-effectiveness analysis compares the costs of a program to its non-monetary benefits—such as lives saved, illnesses prevented, or students educated—through statistical analysis and, often, longitudinal studies. For instance, two public health campaigns might be compared based on their cost per life saved or cost per case of disease prevented, allowing policymakers to choose the option that achieves similar or better outcomes at a lower cost.
- J. Randomized Controlled Trials (RCTs). Randomized controlled trials test cause-and-effect relationships by randomly assigning participants to a group that receives the intervention and a control group that does not, and then tracking outcomes for both groups over time. Using rigorous experimental design and outcome measurement, an RCT can determine whether a program truly causes the observed effects. For example, to test a job training program, participants are randomly assigned either to receive training or not, and later comparisons of employment rates reveal whether the program significantly improves job prospects.
- K. Quasi-Experimental Designs. Quasi-experimental designs estimate program impact when random assignment is not feasible, using methods such as difference-in-differences, propensity score matching, or regression discontinuity. These techniques compare outcomes across groups or time periods that differ in exposure to the program but are otherwise similar, to approximate causal effects. For instance, analysts might compare crime rates before and after a new policing strategy is implemented in one city, using another city without the strategy as a comparison group.
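As a minimal numerical sketch of the difference-in-differences logic described above (the cities, periods, and outcome values are entirely hypothetical):

```python
# Hypothetical mean outcomes (e.g., crime rate per 10,000 residents); all numbers are made up.
outcomes = {
    ("treated_city", "before"): 52.0,
    ("treated_city", "after"): 43.0,
    ("comparison_city", "before"): 50.0,
    ("comparison_city", "after"): 47.0,
}

# Change over time in each city.
change_treated = outcomes[("treated_city", "after")] - outcomes[("treated_city", "before")]
change_comparison = outcomes[("comparison_city", "after")] - outcomes[("comparison_city", "before")]

# Difference-in-differences: the treated city's change net of the comparison city's change,
# which (under the parallel-trends assumption) approximates the policy's causal effect.
did_estimate = change_treated - change_comparison
print(f"DiD estimate of the policy effect: {did_estimate:+.1f}")   # -9 - (-3) = -6
```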
4. Policy and System-Level Evaluations
Policy and system-level evaluations look beyond individual programs to assess broader, long-term policy effectiveness and system-wide impacts.
- L. Equity and Distributional Analysis. Equity and distributional analysis examines whether program benefits are fairly distributed across different income groups, racial or ethnic groups, and geographic regions by analyzing disaggregated data and engaging stakeholders from affected communities. For example, an equity analysis of a government housing program might reveal that most benefits flow to urban areas while rural communities receive relatively little support, prompting reforms to address these disparities.
- M. Sustainability Assessment. Sustainability assessment evaluates whether a program can continue to deliver benefits over time without relying on unsustainable levels of funding or external support, often using long-term financial modeling and impact forecasting. For instance, a government solar panel subsidy program might be assessed to determine whether households will continue adopting solar technology once subsidies are reduced or removed, and whether the program’s financial structure can be maintained over the long run.
- N. Comparative Policy Analysis. Comparative policy analysis compares a program’s design and performance with similar policies or programs in other regions or countries, using benchmarking and policy literature reviews to identify best practices and areas for improvement. For example, analysts might compare U.S. healthcare policy outcomes with those of universal healthcare systems in Europe to understand differences in cost, access, and health outcomes, and to inform potential reforms.
Selecting the Right Evaluation Method
The appropriate evaluation method depends on the stage and purpose of the assessment. Before launching a program, tools like needs assessments, theory of change analyses, and feasibility or pilot studies help determine whether a program is necessary and likely to work. During program operation, process evaluations, fidelity assessments, and capacity and resource evaluations reveal whether the program is being implemented as intended and whether it has the means to function effectively. To assess impact, approaches such as randomized controlled trials, quasi-experimental designs, cost-benefit analysis, cost-effectiveness analysis, and performance monitoring with KPIs help determine whether the program is achieving its goals and doing so efficiently. For long-term policy effectiveness and system-wide understanding, equity and distributional analysis, sustainability assessment, and comparative policy analysis provide insights into fairness, durability, and how a policy compares to alternatives in other jurisdictions.
I'll elaborate on these empirical approaches in another post. We will discuss econometric methods of policy evaluation that utilize the counterfactual framework. Some methods include: Difference-in-Differences, Regression Discontinuity Design, Instrumental Variables, Propensity Score Matching, and Synthetic Control Method. This is standard methodology for any applied econometrician or microeconomist with an empirical emphasis.
Risk and Uncertainty
a. Expected utility over lotteries
The workhorse representation:
- There's a set of states of the world \(S = \{s_1, \dots, s_n\}\).
- A lottery (or risky prospect) is a vector of outcomes with probabilities, e.g. \[ L = (x_1, p_1; x_2, p_2; \dots; x_n, p_n) \]
- Under von Neumann–Morgenstern expected utility, \[ U(L) = \sum_{i} p_i \, u(x_i) \]
- Risk attitude is captured by the curvature of \(u(\cdot)\):
- Concave → risk-averse
- Linear → risk-neutral
- Convex → risk-loving
So risk here = lotteries with known \(p_i\).
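A minimal sketch of these objects, using a made-up lottery and a CRRA utility function as an assumed example of a concave \(u\): it computes the expected value, expected utility, certainty equivalent, and risk premium.

```python
import numpy as np

# Hypothetical lottery: wealth outcomes and their (known) probabilities.
outcomes = np.array([50.0, 100.0, 200.0])
probs = np.array([0.3, 0.5, 0.2])

def crra_utility(x, gamma=2.0):
    """CRRA utility u(x) = x^(1-gamma)/(1-gamma); gamma = 2 is an assumed risk-aversion level."""
    return np.log(x) if gamma == 1.0 else x ** (1 - gamma) / (1 - gamma)

def crra_inverse(u, gamma=2.0):
    """Invert CRRA utility to recover the sure wealth level that delivers utility u."""
    return np.exp(u) if gamma == 1.0 else ((1 - gamma) * u) ** (1 / (1 - gamma))

expected_value = probs @ outcomes                      # E[x]
expected_utility = probs @ crra_utility(outcomes)      # E[u(x)]
certainty_equivalent = crra_inverse(expected_utility)  # sure amount with the same utility
risk_premium = expected_value - certainty_equivalent   # what the agent would pay to remove risk

print(f"E[x] = {expected_value:.1f}, CE = {certainty_equivalent:.1f}, "
      f"risk premium = {risk_premium:.1f}")            # concave u  =>  CE < E[x]
```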
b. Subjective expected utility
Savage’s subjective expected utility theory allows:
- Probabilities to be subjective beliefs rather than “objective frequencies.”
- But still a single probability measure \(P\) over states.
- Preferences over acts (functions from states to outcomes) can be represented as \[ V(f) = \int u(f(s)) \, dP(s). \]
Here, risk is still “well-defined”: the agent behaves as if they have a single coherent probability distribution.
c. General equilibrium and Arrow–Debreu
In Arrow–Debreu general equilibrium:
- There are state-contingent commodities and state-contingent securities.
- Uncertainty enters as multiple states \(s\), and a security might pay 1 unit in state \(s\) and 0 in others.
- If markets are complete, agents can fully insure against risk by trading these securities.
Risk is a structure on the state space plus a probability measure over it; prices encode how the market aggregates these risks.
d. Finance: mean–variance, CAPM, etc.
In finance, risk often appears as:
- Random returns \(R\) with known distribution.
- Variance or standard deviation as a summary of risk.
- E.g. Markowitz mean–variance: investors choose portfolios to trade off expected return vs variance.
- CAPM, APT, etc., treat shocks as random variables with known distributions; risk that cannot be diversified away is priced.
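A minimal two-asset mean–variance sketch, with assumed expected returns, volatilities, and correlation, traces out the risk–return trade-off that Markowitz-style portfolio choice is built on:

```python
import numpy as np

# Assumed annual expected returns, volatilities, and correlation for two risky assets.
mu = np.array([0.06, 0.10])
sigma = np.array([0.12, 0.20])
rho = 0.3
cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

# Sweep portfolio weights on asset 1 (rest in asset 2) and trace the risk-return trade-off.
for w1 in np.linspace(0.0, 1.0, 6):
    w = np.array([w1, 1.0 - w1])
    port_mu = w @ mu                         # portfolio expected return
    port_sd = np.sqrt(w @ cov @ w)           # portfolio standard deviation (risk)
    print(f"w = ({w1:.1f}, {1 - w1:.1f}):  E[R] = {port_mu:.3f},  sd = {port_sd:.3f}")
```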
e. Macroeconomics and dynamic models
In macro (RBC/DSGE models):
- There are shocks to productivity, preferences, etc., modeled as stochastic processes (e.g. \(AR(1)\) with normal innovations).
- Agents know the stochastic law of motion and form rational expectations (their subjective distribution = true distribution).
Again: risk = stochastic shocks with known distributions, embedded in dynamic optimization.
f. Game theory
In games:
- Nature’s moves (e.g. a type or state) are modeled with a known probability distribution.
- Players have beliefs about others’ types and actions, often represented as a probability measure.
This is risk about unknown but probabilistically well-characterized events.
I actually can't describe in great detail how economists go about modeling Knightian uncertainty; that's typically not standard curriculum for master's students. From what I can gather, it often involves relaxing assumptions, incorporating model uncertainty, and allowing for multiple priors. Different groups of economists treat risk and uncertainty somewhat differently, depending on their methodological commitments, empirical focus, and philosophical views about probability and knowledge. The same words ("risk," "uncertainty," "ambiguity") can therefore mean slightly different things in different subfields.
Mainstream micro / finance
In mainstream microeconomics and modern finance theory, the default approach is to model virtually all randomness as risk in the sense of well-defined probabilities. Preferences are typically represented using von Neumann–Morgenstern expected utility or close variants (such as subjective expected utility or models with time or state separable utility). If the true data-generating process is not objectively known, it is standard to assume that agents hold subjective probability distributions over states of the world and then maximize expected utility with respect to those beliefs. In this framework, saying that “you don’t know the true probabilities” is resolved by the idea that “your beliefs are your probabilities”: whatever uncertainty you face is captured by a single coherent subjective prior.
Ambiguity and Knightian uncertainty are acknowledged, but they are usually treated as more specialized topics rather than as the baseline. They tend to appear in decision theory, certain parts of asset pricing (e.g., models with ambiguity-averse investors), or specialized macro and finance papers. The core textbooks and canonical models, however, remain firmly within the expected-utility, single-prior framework, where risk is always representable by a probability measure (objective or subjective) over a fixed state space.
Behavioral and experimental economics
Behavioral and experimental economics place strong emphasis on documented deviations from standard expected-utility behavior under risk. A central example is prospect theory and its later refinements, which incorporate features such as reference dependence, loss aversion, and probability weighting to better match observed choices. Other models, like rank-dependent utility and cumulative prospect theory, similarly modify how probabilities enter the utility calculation, capturing the empirical finding that people tend to overweight small probabilities and underweight moderate to large ones.
This literature also pays particular attention to ambiguity aversion: robust evidence from experiments (such as Ellsberg-type choices) shows that people often treat unknown odds differently from known odds, even when expected payoffs are matched. To capture this, behavioral and decision-theoretic models frequently use non-additive probabilities (capacities), probability weighting functions, or multiple-priors (max–min or smooth ambiguity) representations. These approaches allow the formal separation of “risk” (known probabilities) from “ambiguity” or “uncertainty” (imprecise or non-unique probabilities). In practice, behavioral economists are often the ones pushing hardest on the claim that, at the level of actual human behavior, “risk ≠ uncertainty”: people react systematically differently when probabilities are themselves obscure or ill-defined, not just risky.
Macroeconomics
In traditional real business cycle (RBC) and New Keynesian DSGE (dynamic stochastic general equilibrium) models, all randomness is typically treated as risk with known probability distributions. Shocks to productivity, preferences, policy, or financial frictions are modeled as stochastic processes (e.g., AR(1) processes with normally distributed innovations), and agents form rational expectations: their subjective beliefs coincide with the true model probabilities. Uncertainty is thus “fully probabilistic” and embedded in the stochastic structure of the model; agents know the distribution of shocks, even if actual realizations are unknown ex ante.
More recent strands of macroeconomics, however, explicitly introduce richer notions of uncertainty. “Uncertainty shocks” refer to periods in which the dispersion, volatility, or perceived ambiguity about future outcomes rises, often modeled via time-varying volatility, stochastic volatility, or changes in the cross-sectional dispersion of shocks. Another strand uses robust control and model uncertainty: policymakers or firms are modeled as fearing that their baseline model may be misspecified, so they behave cautiously or choose policies that are robust to worst-case scenarios. In this context, “uncertainty” can mean either a higher variance of shocks (which is still risk in the classic sense) or deeper doubts about the correctness of the model itself (which is closer to true Knightian uncertainty). Macroeconomists thus use the word “uncertainty” in both a narrow, variance-based sense and in a broader, model-uncertainty sense, depending on the specific framework.
Keynesian / Post-Keynesian / radical uncertainty views
Starting with John Maynard Keynes and later developed by some Post-Keynesian and “radical uncertainty” authors, a different perspective argues that much of the economic world is characterized by fundamental or radical uncertainty. Here, the future is viewed as genuinely non-ergodic: it does not behave like a stable, repeatable statistical process that can be inferred reliably from past data. Many crucial events—wars, financial crises, institutional shifts, technological breakthroughs, or changes in social norms—are seen as unique, path-dependent, and not well-described by known or even well-approximated probability distributions.
In this view, the expected-utility apparatus with well-defined probabilities is not just an approximation but can be actively misleading for analyzing major economic decisions, especially in areas like investment, innovation, and high-level financial or policy choices. Instead of rational optimization under a single known prior, people are thought to rely heavily on conventions, social norms, narratives, “animal spirits,” and simple rules of thumb to navigate an inherently unknowable future. Confidence, sentiment, and storytelling play a central role in driving investment and spending, and instability can emerge endogenously as these conventions and narratives shift. This contrasts sharply with mainstream models that treat all randomness as risk, suggesting that for many important questions, uncertainty is qualitatively different from—and more profound than—probabilistic risk.
Austrian, evolutionary, complexity perspectives
Austrian, evolutionary, and complexity-oriented economists emphasize Knightian uncertainty as central to entrepreneurship, innovation, and economic change. They view the economy as a complex, adaptive, and evolving system in which new technologies, products, institutions, and even entirely new states of the world emerge over time. Because these future states cannot be fully anticipated or listed in advance, it is impossible to assign complete probability distributions over a fixed state space. The standard “state space + probability measure” framework is therefore seen as too static and limited for understanding genuine novelty and creative destruction.
In these perspectives, entrepreneurs are rewarded precisely for bearing non-insurable, fundamentally uncertain outcomes and for discovering opportunities that others have not foreseen. Formal models with multiple priors or non-additive probabilities are less central here; instead, the focus is often on qualitative arguments, historical examples, and computational or agent-based models that highlight open-ended evolution and feedback effects. Conceptually, however, these schools are firmly in the camp that insists “uncertainty is not just risk”: economic dynamics are shaped by unknown unknowns, structural change, and the ongoing creation of new possibilities that cannot be captured by a single, stable probability distribution over a fixed set of states.
Now that we have somewhat surveyed how economists think about risk and uncertainty, let's unpack the process by which economists model decision problems. It more or less follows this general template.
Decision Problems Under Risk in Economics
A (single-agent) decision problem under risk is typically described within a formal framework that makes all sources of randomness probabilistic and well specified. In this setting, the economist defines a set of possible choices and a set of possible states of the world, together with a probability distribution over those states and a way to map choices and states into outcomes that the decision maker cares about. Preferences over these risky outcomes are then represented by some utility functional, most commonly expected utility. The standard ingredients are the following:
- A set of actions \(A\) (also called policies or choices), which captures what the decision maker can choose. These actions may be discrete (e.g., buy vs. not buy insurance) or continuous (e.g., portfolio shares, effort levels), and they can be thought of as the feasible decisions given technology and institutional rules.
- A set of states of the world \(S\), which represents all the relevant ways the world might turn out once uncertainty is resolved (e.g., high return vs. low return, good weather vs. bad weather). States are assumed to be mutually exclusive and collectively exhaustive, so that exactly one state occurs.
- A probability measure \(P\) over \(S\), which is what makes the problem one of risk rather than pure uncertainty. This measure may be interpreted as an objective distribution (e.g., long-run frequencies) or as a subjective probability distribution representing the agent’s beliefs. In either case, \(P\) is assumed to be a single, coherent probability measure defined on all relevant events.
- A set of consequences/outcomes \(X\), such as consumption bundles, wealth levels, or payoff vectors, that describe what the agent ultimately cares about in each state. Outcomes can be multidimensional (e.g., consumption today and tomorrow, different goods, leisure and labor) and may incorporate both material and non-material aspects of wellbeing.
- A consequence function \(f : A \times S \to X\), which specifies what outcome you get if you choose action \(a \in A\) and state \(s \in S\) occurs. Formally, for each pair \((a,s)\), the consequence is \(x = f(a,s)\). This function captures how technology, market structure, and constraints translate decisions and states into realized outcomes.
- A preference relation over random outcomes, usually represented by a utility function and an aggregation rule, most commonly expected utility. In the canonical case, the agent has a von Neumann–Morgenstern utility function \(u : X \to \mathbb{R}\), and evaluates risky prospects (lotteries over \(X\)) by their expected utility with respect to \(P\).
In risk models, the key assumption is the existence of a single, well-defined probability distribution \(P\) over states (objective or subjective). Everything else—optimal choices, comparative statics, welfare analysis, and equilibrium concepts in multi-agent settings—is built on top of that probabilistic structure. The distinction between risk and deeper forms of uncertainty (where probabilities may be ill-defined or non-unique) is deliberately set aside in this framework.
The “standard” modeling pipeline under pure risk
You can think of the standard approach to modeling decisions under risk as a structured checklist that an economist implicitly runs through when translating a verbal description of a problem into a formal model. The pipeline below is written for a single-agent problem, but the same logic extends to multi-agent environments and general equilibrium models.
Step 0 — Verbal problem statement
Example: “A household chooses how to allocate its wealth between a risky asset and a risk-free asset to maximize expected lifetime welfare.”
At this step, the economist formulates the problem in plain language. This involves:
- Identifying the agents (e.g., household, firm, bank, government).
- Clarifying the key decisions they make (e.g., how much to save, what portfolio to hold, what effort level to exert).
- Describing the main constraints they face (budget, technology, institutional rules).
- Highlighting what is random (prices, returns, shocks) and what is taken as given.
- Specifying the time scale (one-period vs. multi-period, short run vs. long run) and the general context in which the decision is made.
The goal of Step 0 is to distill the economic intuition and narrative into a form that can then be mapped to a formal model in subsequent steps.
Step 1 — Choose the economic environment
The next step is to choose the basic structure of the economic environment in which the decision is embedded. This requires answering several high-level questions:
- Unit of analysis? Is the relevant decision maker an individual consumer, a representative household, a firm, a financial intermediary, or a government? This choice affects how we interpret utility, constraints, and objectives.
- Time structure? Is the problem a one-shot (static) choice, a finite-horizon problem, or an infinite-horizon (dynamic) problem? In dynamic settings, we must also decide on the length of periods and whether decisions are made in discrete or continuous time.
- Interaction? Is this a single-agent problem taking prices and other variables as given, or a strategic interaction (game) where other agents’ behavior matters and must be modeled explicitly? In multi-agent settings, equilibrium concepts (e.g., Nash or competitive equilibrium) come into play.
For exposition, it is common to start with the simplest case: a single agent with a static decision, taking prices and probabilities as given, and then later extend the framework to dynamic or multi-agent environments.
Step 2 — Specify states, probabilities, and information
Once the environment is chosen, the next step is to formalize uncertainty via states and probabilities, and to clarify what the agent knows at the time of decision.
- Define a finite or continuous state space \(S\), such as \(\{s_1, s_2, \dots, s_n\}\) for discrete states, or a subset of \(\mathbb{R}^k\) for continuous uncertainty. Examples might include “high return vs. low return,” “good weather vs. bad,” or a continuum of possible productivity shocks.
- Assign probabilities \(P(s)\) for each state in the discrete case, or a density \(p(s)\) with respect to some reference measure in the continuous case. These probabilities satisfy the usual axioms (non-negativity and summing/integrating to one).
- Specify what the agent knows at the time of decision. Under risk, the agent is assumed to know (or behave as if they know) the probability measure \(P\). Information can be summarized by an information set or sigma-algebra, but in simple models we often just say “the agent knows the distribution of shocks.”
If probabilities are subjective (beliefs rather than physical frequencies), the standard risk framework still imposes a single coherent subjective probability measure \(P\). The agent may update these beliefs over time via Bayes’ rule as new data arrive, but at each point in time, her uncertainty is summarized by a single prior (or posterior) distribution over states.
Step 3 — Specify actions and constraints
Next, we define what the agent can do and what limits those choices.
- The action set \(A\) describes the feasible decisions (e.g., portfolio weights in different assets, effort levels, consumption–savings choices, production plans). The action set may be continuous, discrete, or a mixture, and is often assumed to be closed and bounded for technical reasons (to guarantee the existence of optimal choices).
- Constraints link actions and states to feasible outcomes:
- Budget constraints (e.g., wealth cannot be negative, spending cannot exceed income).
- Technological constraints (e.g., production functions that limit output given inputs).
- Resource or institutional constraints (e.g., borrowing limits, regulatory constraints, participation constraints).
Mathematically, for each action \(a \in A\) and state \(s \in S\), the outcome is \(x = f(a,s)\) and must lie in some feasible set \(X(a,s)\). The function \(f\) together with the constraints embodies the technological and institutional structure of the problem, and determines which lotteries over outcomes are attainable.
Step 4 — Specify preferences under risk (expected utility)
We then specify how the agent evaluates risky prospects. The standard assumption is expected utility under risk.
- A utility function \(u : X \to \mathbb{R}\), usually:
- Increasing in “good” things (e.g., more consumption, higher wealth, better health yields higher utility).
- Concave in core arguments (e.g., consumption), which reflects risk aversion: the agent prefers the expected outcome of a lottery to the lottery itself.
- A preference functional over risky prospects (lotteries) induced by expected utility. If the agent chooses action \(a\), the random outcome is \(f(a,S)\), and her expected utility is \[ U(a) = \mathbb{E}_P\big[ u(f(a,S)) \big] = \sum_{s \in S} P(s)\, u(f(a,s)) \] in the discrete case, or the corresponding integral in the continuous case: \[ U(a) = \int u(f(a,s)) \, dP(s). \]
Here, the risk attitude of the agent is encoded in the curvature of \(u\); probabilities themselves are typically taken as given and enter linearly inside the expectation. Alternative but related specifications (such as constant relative risk aversion or Epstein–Zin preferences in dynamic settings) still treat risk via a single probability measure but may separate attitudes toward risk, intertemporal substitution, and other dimensions of choice.
Step 5 — Define the optimization problem
Given the primitives above, the agent’s problem is to choose an action that maximizes expected utility subject to constraints:
\[ \max_{a \in A} \; \mathbb{E}_P\big[ u(f(a,S)) \big] \]
subject to the technological, budgetary, and institutional constraints described in Step 3.
For dynamic problems, this optimization is embedded in a dynamic programming framework. A typical setup involves:
- State variables \(z_t\) (e.g., wealth, capital, productivity, information).
- Control variables (actions) \(a_t\) chosen at each date.
- A transition equation \[ z_{t+1} = g(z_t, a_t, \varepsilon_{t+1}), \] where \(\varepsilon_{t+1}\) is a random shock with a known distribution under \(P\).
- A Bellman equation of the form \[ V(z) = \max_{a \in A(z)} \Big\{ u(x(z,a)) + \beta \, \mathbb{E}_P\big[ V(z') \mid z,a \big] \Big\}, \] where \(0 < \beta < 1\) is the discount factor and \(z' = g(z,a,\varepsilon')\) denotes next period’s state.
Expectations are always taken with respect to a known probability distribution, either objective or subjective, and the agent is assumed to correctly anticipate how the state evolves given her choices and the stochastic environment.
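The Bellman equation above can be solved numerically by value function iteration. The sketch below is a minimal consumption–savings example; the grids, income process, and parameter values are assumptions chosen for illustration rather than a calibrated model.

```python
import numpy as np

# --- Assumed primitives (illustrative numbers, not a calibration) ---
beta, R, gamma = 0.95, 1.03, 2.0                # discount factor, gross return, risk aversion
income_states = np.array([0.5, 1.0, 1.5])       # possible income draws y'
income_probs = np.array([0.25, 0.5, 0.25])      # their known probabilities under P
wealth_grid = np.linspace(0.1, 10.0, 200)       # grid for cash-on-hand z
savings_grid = np.linspace(0.0, 10.0, 200)      # candidate savings (control) levels a

def u(c):
    return c ** (1 - gamma) / (1 - gamma)       # CRRA period utility

V = np.zeros_like(wealth_grid)                  # initial guess for the value function
for _ in range(500):                            # value function iteration
    # Expected continuation value of each savings choice: E_P[ V(R*a + y') ]
    # (next-period cash-on-hand off the grid is clamped to the endpoints by np.interp)
    EV = income_probs @ np.array([np.interp(R * savings_grid + y, wealth_grid, V)
                                  for y in income_states])
    V_new = np.empty_like(V)
    policy = np.empty_like(V)
    for i, z in enumerate(wealth_grid):
        feasible = savings_grid <= z - 1e-8     # consumption z - a must stay positive
        values = u(z - savings_grid[feasible]) + beta * EV[feasible]
        j = int(np.argmax(values))
        V_new[i], policy[i] = values[j], savings_grid[feasible][j]
    if np.max(np.abs(V_new - V)) < 1e-6:        # stop once the value function has converged
        V = V_new
        break
    V = V_new

print("optimal savings at cash-on-hand z = 5:",
      round(float(np.interp(5.0, wealth_grid, policy)), 3))
```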
Step 6 — Solve the model: derive optimal policy and comparative statics
The next step is to actually solve the optimization problem, either analytically (when possible) or numerically (which is common in more complex or dynamic models).
- Find the optimal action \(a^*\) in the static case or the policy function \(a^*(z)\) in dynamic settings, which describes the optimal choice as a function of the state.
- Derive comparative statics, asking how optimal choices change when exogenous parameters change:
- How does \(a^*\) change when wealth increases?
- How do optimal portfolio shares change when risk aversion rises?
- How do choices respond to changes in probabilities, interest rates, or the distribution of shocks?
These comparative statics provide the main qualitative predictions of the model. For example: “If risk aversion increases, the optimal share of wealth in the risky asset falls,” or “If the probability of a bad state increases, precautionary saving rises.” In dynamic models, we may also analyze stability, convergence to a steady state, or the behavior of the system in response to different shock processes.
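As a minimal computational illustration of such comparative statics, consider a two-state portfolio problem with assumed returns and CRRA preferences; the optimal risky share can be traced out as risk aversion rises.

```python
import numpy as np

# Assumed two-state environment: risky gross returns and probabilities, plus a risk-free rate.
risky_returns = np.array([1.25, 0.90])
probs = np.array([0.5, 0.5])
rf = 1.02
alphas = np.linspace(0.0, 1.0, 1001)            # candidate shares in the risky asset

def optimal_share(gamma):
    """Grid-search the risky share that maximizes expected CRRA utility of terminal wealth."""
    wealth = rf + np.outer(alphas, risky_returns - rf)   # terminal wealth per (alpha, state)
    eu = (wealth ** (1 - gamma) / (1 - gamma)) @ probs   # expected utility for each alpha
    return alphas[np.argmax(eu)]

for gamma in [1.5, 3.0, 6.0, 10.0]:
    print(f"risk aversion gamma = {gamma:>4}: optimal risky share = {optimal_share(gamma):.2f}")
# The printed shares fall as gamma rises: higher risk aversion means a smaller risky position.
```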
Step 7 — Map the model to observables
To use the model empirically or for policy analysis, we must connect its abstract objects to real-world data.
- Identify what is observable:
- Choices: portfolio shares, insurance purchases, consumption–savings decisions, labor supply, etc.
- Outcomes: consumption levels, income, asset returns, default events, realized shocks when they can be measured.
- Some state variables: wealth, employment status, prices and interest rates, sometimes beliefs (via surveys).
- Identify what is not observable and must be inferred:
- Utility parameters: risk aversion coefficients, discount factors, habit parameters.
- Beliefs: subjective probabilities over states or over future variables.
- Latent state variables or shocks that are not directly measured but influence behavior.
This step typically involves specifying an observation equation: a statistical relationship that maps from model variables (true actions, states, and shocks) to actual data (which may be measured with noise or only partially observed). This mapping is crucial for estimation, identification, and for comparing the model’s predictions with empirical patterns.
Step 8 — Estimation and empirical verification
Finally, we ask: does the model describe actual behavior under risk? This involves taking the model to data and assessing how well it fits, explains, or predicts observed outcomes.
Several strategies include:
- Structural estimation: Specify the full model, including functional forms and distributions, and estimate its parameters (e.g., risk aversion, discount factors) by fitting the model to observed choices and outcomes. Methods include maximum likelihood, generalized method of moments (GMM), and the method of simulated moments.
- Calibration: Choose parameter values so that the model matches key empirical moments (e.g., average consumption growth, volatility of returns, portfolio shares). Calibration is common in macro and finance when full structural estimation is difficult.
- Experimental and field evidence: Use laboratory experiments, surveys, or field experiments to estimate risk preferences and beliefs directly, and then compare these with the implications of the model. This can reveal systematic deviations from expected utility or from the assumed probability structure.
- Reduced-form tests and model comparison: Derive testable implications (e.g., Euler equations for consumption, portfolio choice conditions) and check whether they hold in the data. Compare competing models (e.g., different utility specifications or different belief assumptions) by their empirical performance.
- Out-of-sample prediction and policy evaluation: Assess whether the model can predict behavior in new environments (e.g., after a policy change or under different risk scenarios) and use it to evaluate counterfactual policies.
Through these empirical exercises, economists assess the adequacy of the pure risk framework, identify where it works well, and pinpoint situations where more complex notions of uncertainty, behavioral deviations, or richer institutional details may be needed.
Finally, incorporating risk and uncertainty into models informs the conclusions that can be drawn. Conclusions are often qualified and interpreted differently in light of uncertainty, and uncertainty constrains the decision-making space of the modeler.
1. Deterministic vs. Risky/Uncertain Models: What Actually Changes?
In a purely deterministic economic model, everything is known with certainty. Once you fix the parameters and initial conditions, the model delivers a single, precise path for outcomes like consumption, investment, GDP, or employment. Conclusions in that world naturally take the form of sharp statements such as, “If we raise taxes by X, consumption falls by Y,” or “The optimal savings rate is s*.” There is no distinction between what happens on average versus what happens in good or bad states, no notion of volatility or tail events, and no need to think about how spread out the possible futures might be. Optimization in such a setting is simply about choosing the best point on a known path.
Once you incorporate risk and uncertainty, the entire nature of the conclusions changes, because the model now describes distributions of outcomes rather than single trajectories. Instead of saying “consumption will be 100,” the model might say “consumption is 100 on average, but it could be 70 or 140 with certain probabilities.” Economic decisions become trade-offs between expected outcomes and the riskiness of those outcomes: more return vs. more downside risk, higher average growth vs. more volatility, or higher expected welfare vs. worse outcomes in some states or for some groups. Agents now maximize expected utility (or some other risk-sensitive criterion) rather than a deterministic utility function, and policy evaluations hinge on how an intervention shifts not just the mean, but the entire distribution of possible futures. In short, with risk and uncertainty, the key object of interest is no longer a point prediction but the whole probability distribution around it.
2. How Conclusions Look Different Once Risk and Uncertainty Are in the Model
The move from a risk-free to a risky or uncertain model doesn’t just add noise; it fundamentally alters what “optimal” behavior and “good” policy look like. One clear example is precautionary behavior. In a deterministic life-cycle model, a household chooses savings based on time preference and the interest rate alone, leading to a simple plan: “save this much to smooth consumption over time.” When future income and health are risky, however, households face bad potential states and respond by saving more than in the deterministic benchmark to self-insure against shocks. This precautionary saving can be reduced by social insurance (like unemployment benefits or health insurance), which changes the conclusion from “insurance is a distortion” to “insurance can raise welfare by reducing downside risk.” Without risk, the entire notion of saving as self-insurance and the interaction with social policy would be invisible.
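A minimal two-period sketch of precautionary saving, with assumed parameter values: second-period income is either certain or risky with the same mean, and the prudent (CRRA) household saves more in the risky case.

```python
import numpy as np

# Assumed two-period setup: CRRA utility, discounting, gross return; all numbers illustrative.
beta, R, gamma = 0.96, 1.02, 3.0
y1 = 1.0
savings = np.linspace(0.0, 0.9, 901)            # candidate savings out of first-period income

def u(c):
    return c ** (1 - gamma) / (1 - gamma)

def optimal_saving(y2_states, y2_probs):
    """Grid-search the savings level that maximizes u(c1) + beta * E[u(c2)]."""
    c1 = y1 - savings
    expected_u2 = np.array(y2_probs) @ np.array([u(y2 + R * savings) for y2 in y2_states])
    return savings[np.argmax(u(c1) + beta * expected_u2)]

s_certain = optimal_saving([1.0], [1.0])                 # second-period income known for sure
s_risky = optimal_saving([0.5, 1.5], [0.5, 0.5])         # same mean income, but risky
print(f"optimal saving, certain income: {s_certain:.3f}")   # ~0 here (beta*R is slightly below 1)
print(f"optimal saving, risky income:   {s_risky:.3f}")     # strictly higher: precautionary saving
```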
Another major shift occurs in asset pricing and risk premia. In a deterministic world, two assets with the same expected return are equally attractive once you account for time, because there is no risk to compensate. With risk and risk aversion, investors demand a risk premium to hold assets whose payoffs are uncertain, especially those that do poorly in bad aggregate states. This leads to central conclusions in finance: assets with higher covariance with bad times must yield higher average returns, and policies or institutions that change the risk environment (such as financial regulation) can alter risk premia, not just levels of returns. Similarly, the presence of risk explains why insurance, diversification, and other risk-sharing institutions exist at all. In a deterministic setting, insurance markets, diversified portfolios, and complex state-contingent contracts would have no role; once risk is present, complete markets allow Pareto-efficient risk-sharing, and incomplete markets or borrowing constraints lead some agents to bear too much risk, affecting their consumption, investment, and labor supply. This underpins conclusions such as “social insurance can raise welfare despite distortions” by providing risk-sharing that private markets fail to deliver.
Risk and uncertainty also transform how economists understand investment and the timing of decisions. In a simple deterministic net-present-value (NPV) framework, any project with positive NPV should be undertaken immediately, and waiting only reduces value. When future demand, costs, or regulation are uncertain and investment is irreversible, there is an option value of waiting to gather more information. Firms optimally delay investment when uncertainty is high, even if expected NPV is positive, and policy conclusions change accordingly: reducing regulatory or macroeconomic uncertainty can stimulate investment simply by lowering the option value of waiting, even if expected profitability is unchanged. Lastly, in macroeconomic policy, stochastic shocks make stabilization policy meaningful. A deterministic model either has no fluctuations or only predictable cycles, making policy about levels and steady states. Introducing shocks and risk aversion shows that reducing volatility and the probability of severe recessions can raise expected welfare. Policies that slightly lower average output but significantly reduce crisis risk may be desirable, and the trade-off becomes one between mean outcomes and volatility or tail risk. None of these conclusions emerge in a stripped-down deterministic world where uncertainty is absent by construction.
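The option value of waiting can be shown with a deliberately simple numerical example; the cost, payoffs, probabilities, and discount rate below are all hypothetical. Investing today has a positive NPV, yet waiting one period and investing only in the good state is worth more.

```python
# Hypothetical irreversible project: sunk cost I, uncertain payoff next period.
# Investing now locks in the expected value; waiting lets the firm invest only
# if the good state is realized.
I = 100.0                 # irreversible investment cost
payoff_good, payoff_bad = 160.0, 60.0
p_good = 0.5
r = 0.05                  # discount rate

# Invest today: pay I now, receive the expected payoff next period.
npv_now = -I + (p_good * payoff_good + (1 - p_good) * payoff_bad) / (1 + r)

# Wait one period: invest only if the good state occurs (payoff arrives a period later).
npv_wait = p_good * (-I + payoff_good / (1 + r)) / (1 + r)

print(f"NPV of investing now   : {npv_now:6.2f}")   # positive, yet...
print(f"NPV of waiting         : {npv_wait:6.2f}")  # ...waiting is worth more
print(f"option value of waiting: {npv_wait - npv_now:6.2f}")
```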
3. How Conclusions Are Qualified and Interpreted in Light of Risk and Uncertainty
Once risk and uncertainty are explicitly modeled, economists rarely present conclusions as unconditional and exact. Instead of saying, “Policy X increases employment by 1%,” they say things like, “Given the assumed distribution of shocks, parameters, and model structure, policy X raises the expected level of employment by about 1%, with a confidence interval around that estimate.” This kind of qualification emphasizes that results depend on how shocks are modeled, how risk aversion is specified, and what data the model was calibrated or estimated on. If the stochastic process for shocks is misspecified or if the degree of risk aversion is very different from what the model assumes, then optimal choices and recommended policies may change substantially. In other words, conclusions are conditional on a particular characterization of risk and uncertainty embedded in the model.
Economists also use intervals and robustness checks to express how fragile or stable their conclusions are under uncertainty. Forecasts and policy effects are often presented with confidence or credible intervals, predictive ranges, or fan charts that show not just a central estimate but the range of plausible outcomes. These intervals help assess whether an effect is statistically meaningful and how much uncertainty surrounds it. Beyond that, sensitivity analysis is used to test how conclusions respond to alternative parameter values (such as different degrees of risk aversion or shock variances), different modeling assumptions (for example, alternative frictions or shock processes), or even different decision criteria (like expected utility versus robust control or max–min preferences). If a conclusion only holds under very narrow assumptions about risk and uncertainty, it is treated more cautiously than a result that survives a wide range of plausible scenarios. Risk-aware analysis pushes economists to express results as “under these assumptions and within these ranges, we find X,” rather than as universal, context-free claims.
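A toy Monte Carlo sketch along these lines might look as follows; the "policy effect" process and the shock volatilities are invented purely to illustrate how a central estimate, a predictive interval, and a sensitivity check over an assumed parameter are reported together.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_policy_effect(shock_sd, n_sims=10_000):
    """Toy model: a policy raises output by 1% on average, but realized effects
    depend on random shocks whose volatility is an assumption of the model."""
    baseline_effect = 1.0                     # percent, assumed mean effect
    shocks = rng.normal(0.0, shock_sd, n_sims)
    return baseline_effect + shocks

# Report a central estimate with a 90% predictive interval, not a single number.
for shock_sd in (0.5, 1.0, 2.0):              # sensitivity to the assumed volatility
    draws = simulate_policy_effect(shock_sd)
    lo, hi = np.percentile(draws, [5, 95])
    print(f"shock sd {shock_sd:.1f}: mean effect {draws.mean():+.2f}%, "
          f"90% interval [{lo:+.2f}%, {hi:+.2f}%], "
          f"P(effect < 0) = {np.mean(draws < 0):.2f}")
```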
4. How Risk and Uncertainty Constrain and Inform Actual Decisions
For individuals and firms, explicitly recognizing risk and uncertainty reshapes how they manage their finances and real decisions. Households in uncertain environments choose portfolios that reflect their tolerance for risk instead of simply chasing the highest expected return, and they willingly pay for insurance (such as health, disability, or unemployment coverage) even when expected payouts are lower than premiums, because they value protection in bad states. They also hold precautionary savings and avoid excessively leveraged or fragile positions that could fail under adverse shocks. Firms behave similarly: they hedge with derivatives, diversify across products or markets, maintain cash buffers or credit lines, and often stage or delay large investments when uncertainty about future demand, costs, or regulation is high. None of this behavior makes sense in a fully deterministic model, but it follows naturally once risk and uncertainty are part of the modeling framework and the decision criterion.
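For instance, the willingness to pay for actuarially unfair insurance falls straight out of expected utility. The sketch below uses a hypothetical household with log utility facing a possible loss; the maximum acceptable premium exceeds the expected loss, which is exactly the margin on which insurers operate.

```python
import numpy as np

# Hypothetical household: wealth 100, faces a loss of 60 with probability 10%,
# and has log utility. How much would it pay for full insurance?
wealth, loss, p_loss = 100.0, 60.0, 0.10

expected_loss = p_loss * loss                                           # 6.0
eu_uninsured = p_loss * np.log(wealth - loss) + (1 - p_loss) * np.log(wealth)

# With log utility the certainty equivalent of the uninsured position is
# exp(E[u]), so the maximum acceptable premium is wealth minus that amount.
max_premium = wealth - np.exp(eu_uninsured)

print(f"expected loss               : {expected_loss:.2f}")
print(f"maximum premium it would pay: {max_premium:.2f}")  # above the expected loss
```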
For policy-makers and central banks, formal models of risk and uncertainty lead to prudential, robust, and intertemporal trade-offs. Regulatory authorities design capital buffers, liquidity requirements, and stress tests for financial institutions precisely because they want the system to withstand rare but severe adverse states. Central banks consider not just the most likely inflation and output outcomes under different policy rules, but also the distribution of possible paths and the probability of hitting constraints like the zero lower bound. Governments evaluate social insurance, disaster relief, and climate policy by trading off current costs against the reduction in the probability or severity of future crises and catastrophic outcomes. This often motivates robust policies that perform reasonably well across many possible models and parameter values, rather than policies that are optimal under one precise, but possibly misspecified, model. Once uncertainty is explicit, the value of information and flexibility also becomes central: it can be optimal to invest in data, research, or experimentation to reduce uncertainty, and to design decisions that preserve options (for example, reversible or phased policies) rather than locking into a single course of action. In this way, risk and uncertainty directly constrain what is considered acceptable policy and guide both private and public decision-makers toward strategies that balance expected benefits with resilience to adverse scenarios.
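A bare-bones version of the robustness logic is a max-min comparison across candidate models; the welfare numbers below are invented, but they illustrate why a policy with a lower average payoff can still be the robust choice.

```python
# Toy max-min comparison: welfare of two hypothetical policies evaluated under
# three candidate models of the economy. All payoffs are invented for illustration.
welfare = {
    "policy A (tuned to model 1)": {"model 1": 12.0, "model 2": 9.0, "model 3": -6.0},
    "policy B (robust)":           {"model 1": 6.0,  "model 2": 5.0, "model 3": 3.0},
}

for name, by_model in welfare.items():
    values = list(by_model.values())
    print(f"{name}: average welfare {sum(values) / len(values):+.1f}, "
          f"worst case {min(values):+.1f}")

# A max-min decision-maker picks the policy whose worst case is best,
# accepting a lower average in exchange for protection against model error.
robust_choice = max(welfare, key=lambda name: min(welfare[name].values()))
print(f"robust choice: {robust_choice}")
```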
5. Summary: How Risk and Uncertainty Reframe Economic Conclusions
Incorporating risk and uncertainty into economic models fundamentally changes both the content of conclusions and the way conclusions are expressed. Instead of focusing only on point predictions or steady states, economists emphasize distributions of outcomes, risk premia, precautionary motives, option values, and the role of volatility and tail events. Concepts like precautionary saving, demand for insurance, asset risk premia, delayed investment due to uncertainty, and welfare-improving stabilization policy all arise directly from the explicit treatment of risk. Without these ingredients, many of the most important real-world behaviors and policy issues simply cannot be captured: why people insure, why firms hedge and hold cash, why financial regulation exists, why macro volatility matters, and why reducing the risk of crises can be more valuable than marginally raising average growth.
At the same time, risk and uncertainty push economists to qualify their claims and to speak in terms of conditional, probabilistic statements. Model-based conclusions become “under these assumptions, here is how policy X shifts the distribution of outcomes,” accompanied by confidence intervals, sensitivity checks, and robustness analyses across alternative parameter values and model specifications. For both private decision-makers and policy-makers, decisions are framed as balancing expected gains against risk exposure, downside protection, and robustness to model error. Thus, the explicit incorporation of risk and uncertainty does not just add technical complexity; it reshapes what “good decisions” and “good policies” mean, making resilience, variance, and tail risks central considerations alongside expected outcomes.
Nash Equilibrium
Elasticity
Time Preference and Discounting (Present Value, Discount Rate)
- Intertemporal Choice Theory (Microeconomics)
In consumer theory, individuals make choices between consumption today and consumption in the future. The discount rate reflects how much a person values present consumption over future consumption.
- Utility Function Example: In a two-period model, the utility function might be:
\[ U = u(C_0) + \frac{1}{1 + \rho}\, u(C_1) \]
- \( C_0 \): consumption today
- \( C_1 \): consumption in the future
- \( \rho \): subjective discount rate (how impatient the person is)
The higher \( \rho \), the less value is placed on future utility — i.e., more impatience.
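A quick numerical sketch (log utility and all numbers are assumptions for illustration) shows how \( \rho \) scales the weight on future utility, and how, under a simple budget constraint, a more impatient consumer tilts consumption toward the present.

```python
import numpy as np

def two_period_utility(c0, c1, rho):
    """U = u(C0) + u(C1)/(1 + rho), with log utility assumed for illustration."""
    return np.log(c0) + np.log(c1) / (1 + rho)

# The same consumption bundle is valued less by a more impatient consumer.
c0, c1 = 80.0, 120.0
for rho in (0.02, 0.10, 0.30):
    print(f"rho = {rho:.2f}: weight on future utility = {1 / (1 + rho):.3f}, "
          f"U = {two_period_utility(c0, c1, rho):.4f}")

# With a budget constraint C0 + C1/(1+r) = W and log utility, the optimal
# split is C0 = W*(1+rho)/(2+rho): a higher rho tilts consumption toward today.
W, r = 200.0, 0.03
for rho in (0.02, 0.10, 0.30):
    c0_opt = W * (1 + rho) / (2 + rho)
    c1_opt = (W - c0_opt) * (1 + r)
    print(f"rho = {rho:.2f}: consume {c0_opt:.1f} today and {c1_opt:.1f} tomorrow")
```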
- Ramsey Growth Model (Macroeconomics)
This model studies optimal savings and consumption over time in an economy. It includes a social discount rate, reflecting how a planner values future utility compared to present utility.
\[ U = \int_0^\infty e^{-\rho t} u(c(t)) \, dt \]
- \( \rho \): pure rate of time preference (discount rate)
- \( u(c(t)) \): utility from consumption over time
This determines optimal paths of capital accumulation, saving, and consumption. A higher \( \rho \) leads to more present consumption and less saving.
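The integral above can be approximated numerically to see how \( \rho \) compresses the weight placed on the far future. The sketch below truncates the horizon, assumes log utility and a constant consumption path, and reports what share of total discounted welfare comes from the first 30 years; all values are illustrative.

```python
import numpy as np

def discounted_utility(c_path, rho, dt):
    """Riemann-sum approximation of U = integral of e^(-rho*t) u(c(t)) dt, log utility."""
    t = np.arange(len(c_path)) * dt
    return np.sum(np.exp(-rho * t) * np.log(c_path) * dt)

T, dt = 300.0, 0.1                               # truncate the infinite horizon at t = 300
c_path = np.full(int(round(T / dt)), 1.5)        # constant consumption path (illustrative)

for rho in (0.01, 0.03, 0.05):
    total = discounted_utility(c_path, rho, dt)
    first_30 = discounted_utility(c_path[: int(round(30 / dt))], rho, dt)
    print(f"rho = {rho:.2f}: total U ~ {total:6.2f}, "
          f"first 30 years contribute {100 * first_30 / total:3.0f}% of it")
```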
- Cost-Benefit Analysis & Public Economics
Governments use a social discount rate to evaluate long-term projects (infrastructure, environmental protection, etc.). A high discount rate may make long-term benefits look trivial, which can discourage investments in sustainability or climate action.
Debate:
- High discount rate → undervalues future generations
- Low discount rate → emphasizes intergenerational equity
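A small numerical appraisal makes this sensitivity concrete. The project below (costs, benefits, and horizon all hypothetical) passes a cost-benefit test at low social discount rates and fails it at higher ones.

```python
# Hypothetical public project: costs 1,000 today, yields benefits of 60 per year
# for 50 years. Whether it passes a cost-benefit test depends on the discount rate.
cost_today, annual_benefit, horizon = 1_000.0, 60.0, 50

for r in (0.01, 0.03, 0.07):
    pv_benefits = sum(annual_benefit / (1 + r) ** t for t in range(1, horizon + 1))
    npv = pv_benefits - cost_today
    verdict = "worth doing" if npv > 0 else "rejected"
    print(f"discount rate {r:.0%}: PV of benefits = {pv_benefits:7.1f}, "
          f"NPV = {npv:+8.1f} -> {verdict}")
```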
- Environmental and Climate Economics
The discount rate is crucial in climate-economy analysis, as in the Stern Review or the DICE model.
- Stern Review used a very low discount rate (around \( 1.4\% \)), emphasizing long-term climate costs.
- Critics (like Nordhaus) use higher rates (around \( 3\%-5\% \)), leading to more moderate action now.
Small differences in the discount rate can drastically change climate policy recommendations.
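To see the magnitudes involved, compare the present value of avoiding a fixed damage a century from now under a Stern-like rate and under rates in the range critics prefer; the damage figure below is arbitrary, and only the ratio between the results matters.

```python
# Present value today of avoiding a damage of 1,000 (in arbitrary units)
# occurring 100 years from now, under different discount rates.
damage, years = 1_000.0, 100

for label, rate in (("Stern-like", 0.014), ("middle", 0.030), ("Nordhaus-range", 0.045)):
    pv = damage / (1 + rate) ** years
    print(f"{label:14s} rate {rate:.1%}: worth spending up to {pv:6.1f} today to avoid it")
```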
- Finance Theory
In asset pricing and discounted cash flow (DCF) models, the discount rate is used to evaluate the present value of uncertain future returns.
\[ PV = \sum \frac{E(R_t)}{(1 + r)^t} \]
- \( r \): discount rate = risk-free rate + risk premium
- The discount rate affects firm valuation, stock prices, and investment decisions.
The discount rate \( r \) and the discount factor \( \beta \) express the same idea, linked by:
\[ \beta = \frac{1}{1 + r} \quad \Longleftrightarrow \quad 1 = \beta (1 + r) \quad \Rightarrow \quad r = \frac{1}{\beta} - 1 \]
| Concept | Formula | Role of Discount Factor ( \( \beta \) ) |
|---|---|---|
| Basic PV | \( PV = \frac{FV}{(1 + r)^t} \) | \( \beta = \frac{1}{1 + r} \), so \( PV = FV \cdot \beta^t \) |
| Basic FV | \( FV = PV \cdot (1 + r)^t \) | \( FV = \frac{PV}{\beta^t} \) |
| Multiple Payments | \( PV = \sum \beta^t C_t \) | Time-weighted sum of cash flows |
| Perpetuity (payments from \( t = 1 \)) | \( PV = \frac{C}{r} = \frac{C\,\beta}{1 - \beta} \) | \( \beta \to 1 \) as \( r \to 0 \), so \( PV \to \infty \) |
| Continuous Discounting | \( PV = FV \cdot e^{-rt} \) | Discount factor = \( e^{-rt} \) |
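The table's formulas can be checked directly in a few lines of Python; the rate, horizon, and cash flows below are arbitrary, and the assertions simply confirm that rate-based and factor-based discounting are equivalent.

```python
import math

r = 0.05
beta = 1 / (1 + r)                       # discount factor implied by the rate

# Basic PV / FV: discounting with r and with beta are the same operation.
fv, t = 1_000.0, 10
pv = fv / (1 + r) ** t
assert abs(pv - fv * beta ** t) < 1e-9

# Multiple payments: PV = sum over t of beta^t * C_t (payments at t = 1..4).
cash_flows = [100.0, 100.0, 100.0, 1_100.0]
pv_stream = sum(beta ** s * c for s, c in enumerate(cash_flows, start=1))

# Perpetuity paying C from next period onward: PV = C/r = C*beta/(1-beta).
C = 50.0
assert abs(C / r - C * beta / (1 - beta)) < 1e-9

# Continuous discounting: discount factor e^(-r*t).
pv_continuous = fv * math.exp(-r * t)

print(f"beta = {beta:.4f}, r recovered from beta = {1 / beta - 1:.4f}")
print(f"PV of {fv:,.0f} due in {t} years: {pv:8.2f}")
print(f"PV of the cash-flow stream  : {pv_stream:8.2f}")
print(f"PV of a perpetuity of {C:.0f}/yr: {C / r:8.2f}")
print(f"continuous-discounting PV   : {pv_continuous:8.2f}")
```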
More Reading (Highly Recommended):
- The Marginalist Revolution
- The Cambridge Neoclassicals
- Walrasian General Equilibrium Theory
- The Lausanne School
- Leon Walras
- The Cowles Commission
- The Neo-Walrasian General Equilibrium School
- The Paretian Revival
- New Classical Macroeconomics
- The Paretian System
- Paul Samuelson
- Tjalling C. Koopmans
- The Swedish Schools
- Piero Sraffa
- The Contributions of the Economics of Information to Twentieth Century Economics
- Ceteris Paribus Laws