
@davidefiocco
Last active February 20, 2026 15:35

Energy-Economy-Environment Modeling: A Concepts Primer

From Carbon Taxes to Climate Risk — How Models Answer the Hard Questions


Purpose: This primer teaches the foundational concepts behind energy-economy-environment (E3) modeling, then shows how different modeling approaches tackle the concrete questions that drive climate and energy policy. It is designed for readers with a quantitative background who are new to the field.

How to use this document: Parts I–III build the conceptual toolkit — economics, mathematics, and energy systems. Part IV is the heart: eight policy questions, each analyzed through multiple modeling paradigms that often give different answers. Part V covers practical integration. Appendix E provides standalone one-page profiles of each modeling paradigm, referenced throughout.


Table of Contents

Part I: Economic Foundations

Part II: Mathematical Toolkit

Part III: Energy Systems

Part IV: The Questions

Part V: Putting It Together

Appendices


Part I: Economic Foundations


Chapter 1: Markets and Demand

1.1 What Is a Market?

A market is any arrangement that brings together buyers and sellers to exchange goods, services, or resources. Markets can be physical (like a farmer's market) or abstract (like the global oil market or the European electricity market).

In energy-economy models, we represent many interconnected markets:

  • Goods markets: Steel, chemicals, transportation services
  • Factor markets: Labor, capital
  • Energy markets: Electricity, natural gas, oil products
  • Emissions markets: Carbon permits (in cap-and-trade systems)

1.2 Demand

Demand describes how much of a good buyers are willing and able to purchase at various prices.

The Law of Demand: All else equal (ceteris paribus—Latin for "other things being equal"), as price increases, quantity demanded decreases. This inverse relationship is fundamental to economics.

Demand curve: Shows the maximum price each buyer is willing to pay. Different buyers value the good differently:

  • Some buyers value it highly (would pay a lot) → top of curve
  • Others value it less (would only buy if cheap) → bottom of curve
Price (P)
    │
    │\  ← Buyers who value it most (would pay high price)
    │ \
    │  \
    │   \  Demand curve (D)
    │    \
    │     \ ← Buyers who value it least
    └──────\─────── Quantity (Q)

Demand shifters (factors that move the entire curve):

  • Income: Higher income typically increases demand
  • Prices of related goods: Substitutes and complements
  • Preferences: Consumer tastes and habits
  • Expectations: Anticipated future price changes
  • Population: Number of potential buyers

1.3 Price Elasticity

Elasticity measures responsiveness—how much one variable changes when another changes.

Price elasticity of demand (ε_d): $$\varepsilon_d = \frac{\text{% change in quantity demanded}}{\text{% change in price}} = \frac{\Delta Q / Q}{\Delta P / P}$$

Point elasticity (calculus form): $$\varepsilon_d = \frac{dQ}{dP} \cdot \frac{P}{Q}$$

The calculus form gives the instantaneous rate of change—more precise for smooth demand curves. The percentage formula above is the discrete approximation (arc elasticity). Both forms are equivalent in the limit.
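
As a sanity check, the two forms can be compared numerically for a constant-elasticity demand curve Q(P) = A·P^ε. This is a sketch; A and ε are made-up values, not estimates.

```python
# Point vs. arc elasticity for a constant-elasticity demand curve
# Q(P) = A * P**eps. A and eps are illustrative, not estimates.

def point_elasticity(dQ_dP, P, Q):
    """Calculus form: (dQ/dP) * (P/Q)."""
    return dQ_dP * P / Q

def arc_elasticity(Q0, Q1, P0, P1):
    """Discrete approximation: (%ΔQ) / (%ΔP), using base values."""
    return ((Q1 - Q0) / Q0) / ((P1 - P0) / P0)

A, eps = 100.0, -0.3                      # assumed demand curve parameters
Q = lambda P: A * P**eps                  # quantity demanded
dQ_dP = lambda P: A * eps * P**(eps - 1)  # slope of the demand curve

P0, P1 = 1.0, 1.01                        # a 1% price increase
print(point_elasticity(dQ_dP(P0), P0, Q(P0)))  # recovers eps
print(arc_elasticity(Q(P0), Q(P1), P0, P1))    # close to eps
```

For this functional form the point elasticity is ε at every price, which is one reason constant-elasticity curves are popular for calibration.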

| Elasticity Value | Classification | Interpretation |
|---|---|---|
| \|ε\| > 1 | Elastic | Demand responds strongly to price changes |
| \|ε\| = 1 | Unit elastic | Proportional response |
| \|ε\| < 1 | Inelastic | Demand responds weakly to price changes |
| ε = 0 | Perfectly inelastic | Demand unchanged by price (e.g., insulin) |

Why elasticity matters in E3 models:

Energy demand is typically inelastic in the short run (you can't immediately change your car or heating system) but more elastic in the long run (you can buy a more efficient car, insulate your home). You'll see this distinction drive different model answers in Ch 11 (carbon pricing) and Ch 13 (fossil fuel futures).

| Energy Type | Short-run Elasticity | Long-run Elasticity |
|---|---|---|
| Gasoline | -0.1 to -0.3 | -0.5 to -0.8 |
| Electricity | -0.1 to -0.2 | -0.3 to -0.7 |
| Natural gas | -0.1 to -0.3 | -0.5 to -1.0 |

Literature ranges—values vary substantially by study, time period, region, and estimation method. These are illustrative central tendencies, not consensus estimates. See Espey (1998) for gasoline, Labandeira et al. (2017) for a meta-analysis.

Cross-price elasticity measures how demand for one good responds to prices of another: $$\varepsilon_{xy} = \frac{\text{% change in demand for x}}{\text{% change in price of y}}$$

  • If ε_xy > 0: goods are substitutes (gas and coal for electricity generation)
  • If ε_xy < 0: goods are complements (cars and gasoline)

Income elasticity measures how demand responds to income changes: $$\varepsilon_I = \frac{\text{% change in demand}}{\text{% change in income}}$$

  • Normal goods: ε_I > 0 (demand increases with income)
  • Inferior goods: ε_I < 0 (demand decreases with income)
  • Necessities: 0 < ε_I < 1
  • Luxuries: ε_I > 1

How do producers decide what to supply? That depends on production costs—see Chapter 2.


Chapter 2: Production and Supply

Chapter 1 covered what buyers want. Now: how do firms produce, and what determines what they're willing to sell?

2.1 What Is a Production Function?

A production function describes how inputs (labor, capital, energy, materials) are transformed into outputs:

$$Y = F(K, L, E, M)$$

where:

  • $Y$ = output (quantity produced)
  • $K$ = capital (machines, buildings)
  • $L$ = labor (workers, hours)
  • $E$ = energy
  • $M$ = materials (intermediate inputs)

2.2 Key Concepts

Marginal product: Additional output from one more unit of input $$MP_L = \frac{\partial F}{\partial L}$$

This partial derivative measures how much extra output you get from adding one more unit of labor, holding capital fixed.

Returns to scale:

  • Constant returns (CRS): Double inputs → double output
  • Increasing returns (IRS): Double inputs → more than double output
  • Decreasing returns (DRS): Double inputs → less than double output

Many models assume constant returns to scale (modeling choice: required for competitive equilibrium with zero profits, but many real industries have increasing returns).

Substitution: Can one input replace another?

  • Perfect substitutes: can trade 1-for-1 (e.g., different grades of coal)
  • Perfect complements: must use in fixed proportions (e.g., one driver per truck)
  • Imperfect substitutes: can trade off, but at varying rates

2.3 The CES Production Function

Constant Elasticity of Substitution (CES) is the workhorse of general equilibrium models. (Modeling choice: CES is chosen for tractability, not because it's the "true" form of production.)

The core question: You combine inputs to produce output. If one input gets expensive, how much can you substitute the other?

Example (capital K vs labor L):

  • Wages rise → do you automate (buy machines instead of hiring)?
  • How responsive is that substitution to the price change?

This responsiveness is measured by the elasticity of substitution σ:

$$\sigma = \frac{\Delta (K/L) / (K/L)}{\Delta (P_L/P_K) / (P_L/P_K)}$$

In words: σ = (% change in K/L ratio) / (% change in relative price).

Real-world intuition for σ:

  • σ ≈ 0 (complements): Truck + driver. 10 trucks + 1 driver = 1 delivery. Can't substitute.
  • σ ≈ 1 (Cobb-Douglas): Factory automation. Can replace some workers with machines, but diminishing returns.
  • σ → ∞ (perfect substitutes): Hiring contractors vs employees for identical work. Only cost matters.

The CES formula: $$Y = A \left[ \alpha \cdot K^{\rho} + (1-\alpha) \cdot L^{\rho} \right]^{1/\rho}$$

where σ = 1/(1-ρ). The formula is chosen so that σ stays constant regardless of your current K/L mix—that's the "constant" in CES. This makes the math tractable: you don't need to track where you are on some complex curve.

Note: CES can combine ANY two inputs—not just K and L. In nested CES structures (see next section), different nests combine different things: capital vs labor, coal vs gas, energy vs materials. Each nest has its own σ.

Parameters:

  • $A$ = productivity multiplier (better technology → more output from same inputs)
  • $\alpha$ = capital's importance (how much of output is "due to" capital vs labor)
  • $\rho$ = substitution parameter → determines σ
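
A minimal numerical sketch of the CES formula with made-up parameter values, checking the textbook limit that ρ → 0 (σ → 1) recovers Cobb-Douglas:

```python
# CES production Y = A * (alpha*K**rho + (1-alpha)*L**rho)**(1/rho),
# with sigma = 1/(1 - rho). Parameter values are illustrative only.

def ces(K, L, A=1.0, alpha=0.4, rho=0.5):
    return A * (alpha * K**rho + (1 - alpha) * L**rho) ** (1 / rho)

def cobb_douglas(K, L, A=1.0, alpha=0.4):
    return A * K**alpha * L**(1 - alpha)

sigma = lambda rho: 1 / (1 - rho)

K, L = 2.0, 3.0
print(sigma(0.5))                          # 2.0: fairly easy substitution
# As rho -> 0 (sigma -> 1), CES approaches Cobb-Douglas:
print(ces(K, L, rho=1e-6), cobb_douglas(K, L))
```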

Why CES is used:

  • Flexible: nests multiple functional forms
  • Empirically estimable: σ can be estimated from data
  • Analytically tractable: has nice mathematical properties

Limitations to keep in mind:

  • CES assumes substitution elasticity is constant across all input ratios—real production may have varying substitutability
  • Aggregating heterogeneous firms into a single CES function can bias estimated elasticities
  • The nesting structure (which inputs are grouped together) is a modeling choice that affects results but is rarely tested empirically
  • CES cannot represent situations where inputs become complements at some ratios and substitutes at others

2.4 Nested CES Production

Real production uses many inputs with different substitution possibilities. Nested CES handles this by grouping inputs:

Output (σ ≈ 0, near-Leontief)
├── Value Added (σ_KL ≈ 0.5)
│   ├── Capital
│   └── Labor
└── Intermediate Bundle (σ_M ≈ 0.3)
    ├── Energy Bundle (σ_E ≈ 0.5-1.0)
    │   ├── Electricity
    │   └── Fossil Fuels (σ_F ≈ 1.0-2.0)
    │       ├── Coal
    │       ├── Oil
    │       └── Gas
    └── Materials (σ_NE ≈ 0.2)

Each nest has its own elasticity:

  • Low σ at top (output needs both labor/capital AND materials)
  • Higher σ within energy (can substitute between fuels)
  • Highest σ within fossil fuels (gas for coal is easier than electricity for heat)

2.5 Elasticity Values Matter!

The substitution elasticities are crucial parameters—they determine how the economy responds to price changes. You'll see in Ch 11 how different elasticity assumptions drive dramatically different conclusions about the cost of carbon pricing.

Typical ranges used in CGE models:

| Elasticity | Symbol | Range | Impact |
|---|---|---|---|
| Capital-Labor | σ_KL | 0.4-1.0 | How automation responds to wages |
| Energy-Value Added | σ_E,VA | 0.1-0.5 | Very important for climate policy |
| Interfuel | σ_F | 0.5-2.0 | Fuel switching response |
| Armington (trade) | σ_A | 2.0-8.0 | Trade response to prices |

These ranges are illustrative, drawn from the econometric literature. Actual values are contested and vary by sector, region, and estimation approach. The energy-value added elasticity (σ_E,VA) is particularly uncertain and consequential—different credible estimates can change policy cost projections by a factor of 2-3.

Higher σ_E,VA → easier to reduce energy use → lower cost of carbon policy.
Lower σ_E,VA → harder to reduce energy use → higher cost of carbon policy.

This is why sensitivity analysis on elasticities is essential—results should always be tested against alternative plausible values.

2.6 Technical Change

Autonomous Energy Efficiency Improvement (AEEI): Over time, economies become more energy-efficient even without price changes. This is captured by a trend parameter:

$$E_t = E_0 \cdot (1 - AEEI)^t \cdot f(Y_t, P_t)$$

This says energy demand at time $t$ equals base-year demand, reduced by autonomous efficiency gains $(1-AEEI)^t$, then adjusted for economic activity and prices via $f(Y_t, P_t)$. Typical AEEI values: 0.5-1.5% per year.
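
The formula is easy to evaluate; a sketch with an assumed 1% AEEI and the activity/price term held at 1 for clarity:

```python
# Energy demand path E_t = E_0 * (1 - AEEI)**t * f(Y_t, P_t), holding the
# activity/price term f fixed at 1.0. E0 and the AEEI rate are made up.

E0, aeei = 100.0, 0.01     # base-year demand; 1%/year efficiency gain

def energy_demand(t, activity_price_factor=1.0):
    return E0 * (1 - aeei) ** t * activity_price_factor

for t in (0, 10, 30):
    print(t, round(energy_demand(t), 1))   # 100.0, 90.4, 74.0
```

Even a modest AEEI compounds: at 1%/year, demand per unit of activity falls by roughly a quarter over 30 years.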

Endogenous technical change: Some models make efficiency improvements depend on R&D spending or learning-by-doing.

2.7 Cost Functions and Input Demands

The firm's problem: Produce Y units at minimum cost, given input prices (wage W, capital rental R).

Cost function: $C(R, W, Y)$ = minimum cost to produce Y at prices R, W.

With "constant returns to scale" (2× inputs → 2× output), cost-per-unit is constant: $C = Y \cdot c(R,W)$.

Shephard's Lemma: Optimal input demand equals the derivative of cost w.r.t. that input's price: $$K^* = \frac{\partial C}{\partial R}, \quad L^* = \frac{\partial C}{\partial W}$$

General intuition: When an input price rises by €1, total cost rises by the amount of that input used (€1 × quantity). Hence ∂C/∂(price) = quantity.

Example: 10 machines × €1 price increase = €10 cost increase, so ∂C/∂R = 10.

(Re-optimization effects are second-order—"envelope theorem.")

Why this is useful: The cost function C embeds the solution to "minimize cost subject to producing Y." Shephard's Lemma is a shortcut: once you have C, extract input demands by differentiation—no need to re-solve the optimization. CGE models specify convenient functional forms for C (e.g., CES), and input demands follow automatically.
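
Shephard's Lemma can be verified numerically. The sketch below uses a Cobb-Douglas cost function (rather than CES) because its input demand is known in closed form; all parameter values are made up.

```python
# Shephard's Lemma check: K* = dC/dR for the Cobb-Douglas cost function
# C(R, W) = Y * (R/alpha)**alpha * (W/(1-alpha))**(1-alpha).

alpha, Y = 0.3, 10.0

def cost(R, W):
    return Y * (R / alpha) ** alpha * (W / (1 - alpha)) ** (1 - alpha)

R, W, h = 2.0, 1.0, 1e-6

# Differentiate the cost function numerically (central difference)...
K_shephard = (cost(R + h, W) - cost(R - h, W)) / (2 * h)
# ...and compare with the known Cobb-Douglas capital demand K* = alpha*C/R:
K_analytic = alpha * cost(R, W) / R

print(round(K_shephard, 4), round(K_analytic, 4))
```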

2.8 From Costs to Supply

Now we can complete the producer's story: production function → cost function → supply curve.

The supply curve is the marginal cost curve. A profit-maximizing firm supplies another unit only if the price covers the cost of producing it:

  • At low quantities, marginal cost is low → willing to supply at low price
  • At high quantities, marginal cost rises → need higher price to justify production
Price (P)
    │           /
    │         / ← Expensive units (need high price)
    │       /  Supply = Marginal Cost
    │     /
    │   /
    │  / ← Cheap units (low marginal cost)
    │ :  ← Minimum price (no supply below this)
    └─────────────── Quantity (Q)

Caveat: This upward slope assumes diminishing returns, not economies of scale. With economies of scale (larger production → lower costs), you get natural monopolies instead of competitive markets.

Supply shifters:

  • Input prices: Cost of labor, raw materials, energy (affects cost function → shifts supply)
  • Technology: Productivity improvements (higher A in production function)
  • Number of sellers: Market entry/exit
  • Government policies: Taxes, subsidies, regulations

Carbon tax example: A carbon tax raises energy input prices → increases production costs → shifts supply curve up → at any given price, less is supplied. The economy-wide consequences of this cascade are explored in Ch 11.

Now that we understand both demand (Chapter 1) and supply (Chapter 2), we can see how they come together—see Chapter 3.


Chapter 3: Equilibrium and Beyond

We've seen what buyers want (Chapter 1) and what sellers offer (Chapter 2). Now: how do they meet—and what happens when the "meeting" assumption fails?

3.1 Market Equilibrium

Equilibrium occurs when quantity demanded equals quantity supplied. At the equilibrium price, the market "clears"—no excess supply or demand.

Price (P)
    │           S
    │         /
    │       / 
    │     /
P*  │----X--------  ← Equilibrium price
    │   / \
    │ /    \
    │/      \ D
    └────────────── Quantity (Q)
          Q*
          ↑
    Equilibrium quantity

Market clearing condition: $$Q^D(P^*) = Q^S(P^*)$$

where $Q^D$ = quantity demanded, $Q^S$ = quantity supplied, and $P^*$ = equilibrium price.
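
For linear curves the clearing condition solves in one line; a sketch with arbitrary coefficients:

```python
# Market clearing for linear demand Qd = a - b*P and supply Qs = c + d*P.
# The coefficients are arbitrary illustrative numbers.

a, b = 100.0, 2.0    # demand: intercept, slope
c, d = 10.0, 1.0     # supply: intercept, slope

P_star = (a - c) / (b + d)    # solves a - b*P = c + d*P
Q_star = a - b * P_star

print(P_star, Q_star)         # 30.0 40.0
assert abs((a - b * P_star) - (c + d * P_star)) < 1e-9   # market clears
```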

3.2 What Is Economic Equilibrium? (General Concept)

More broadly, equilibrium is a state where no agent has an incentive to change their behavior given current prices and the choices of others. It's a "rest point" of the economic system.

Think of it like a ball in a bowl: the ball settles at the bottom where forces balance. In economics, equilibrium is where supply and demand forces balance across all markets.

3.3 Partial Equilibrium vs. General Equilibrium

This distinction is crucial for understanding why different models give different answers to the same policy question (see Part IV).

Key terminology:

  • Exogenous variable: Determined outside the model; taken as given (e.g., world oil prices in a national model)
  • Endogenous variable: Determined within the model by the equilibrium conditions (e.g., domestic prices, quantities)

Partial Equilibrium (PE):

  • Analyzes one market in isolation
  • Holds prices in other markets constant ("ceteris paribus")
  • Ignores feedback effects from the rest of the economy
  • Simpler, more detailed for the market in question

Example: Analyzing the electricity market, taking GDP, labor costs, and other prices as given.

General Equilibrium (GE):

  • Analyzes all markets simultaneously
  • All prices adjust together to clear all markets
  • Captures feedback effects and interdependencies
  • More complex, but more comprehensive

Example: Analyzing how a carbon tax affects electricity prices, which affects production costs, which affects wages, which affects consumption, which affects electricity demand again...

Partial Equilibrium:
+-------------------------------------+
|         Energy Sector               |
|  +---------+    +---------+         |
|  | Supply  | <> | Demand  |         |
|  +---------+    +---------+         |
|         v equilibrium v             |
|        Energy prices                |
+-------------------------------------+
        ^                   v
   GDP (fixed)         Energy prices
        ^                   v
  ---------- REST OF ECONOMY (exogenous) ----------


General Equilibrium:
+-------------------------------------------------+
|                 ENTIRE ECONOMY                  |
|  +----------+  +----------+  +----------+       |
|  |  Goods   |<>| Factors  |<>|  Energy  |       |
|  | markets  |  |  (L,K)   |  | markets  |       |
|  +----------+  +----------+  +----------+       |
|       ^             ^             ^             |
|       +--------------------------+              |
|                     v                           |
|           ALL PRICES ADJUST                     |
|           SIMULTANEOUSLY                        |
+-------------------------------------------------+

3.4 Walrasian General Equilibrium

The theoretical foundation for Computable General Equilibrium (CGE) models comes from Léon Walras (1874) and was formalized by Arrow and Debreu (1954). (See Appendix E: CGE Models for representative models.)

Perfect competition — the key assumption:

  • Many buyers and sellers — no single agent can influence the market price
  • Price-takers — everyone takes the market price as given and decides only how much to buy/sell
  • Homogeneous goods — products are identical (no brand loyalty)
  • Perfect information — everyone knows all prices and qualities
  • Free entry/exit — firms can enter or leave the market without barriers

This is an idealization. Real markets have market power, asymmetric information, and barriers to entry. But it's a useful benchmark.

Walrasian equilibrium (theoretical framework) is a set of prices such that:

  1. Each consumer maximizes utility given their budget constraint
  2. Each firm maximizes profit given its technology
  3. All markets clear (supply = demand)

Mathematically: Find price vector $\mathbf{p}^* = (p_1^*, p_2^*, \ldots, p_n^*)$ such that:

$$z_i(\mathbf{p}^*) = 0 \quad \forall i$$

where:

  • $\mathbf{p}$ = price vector (prices of all $n$ goods)
  • $z_i(\mathbf{p})$ = excess demand in market $i$: $z_i = D_i(\mathbf{p}) - S_i(\mathbf{p})$
  • $D_i, S_i$ = demand and supply for good $i$ (functions of all prices, not just $p_i$)

Key properties of Walrasian equilibrium:

  • Walras' Law: The value of total excess demand is always zero: $\sum_i p_i \cdot z_i(\mathbf{p}) = 0$
  • If all but one market clears, the last one clears automatically
  • Prices are only determined up to a numeraire (we can normalize one price to 1)

Limitations to keep in mind:

  • Walrasian equilibrium assumes all agents are price-takers with perfect information—real markets have market power, asymmetric information, and transaction costs
  • The theory says nothing about how equilibrium is reached or how long adjustment takes
  • Multiple equilibria may exist; the model finds one but can't tell you which one the economy would actually reach
  • The framework is comparative statics: it compares equilibria, not the transition path between them

Simple Example: Two-Good Exchange Economy

Consider two consumers (A and B) and two goods (apples and bread):

|  | Apples | Bread |
|---|---|---|
| A's endowment | 10 | 0 |
| B's endowment | 0 | 10 |

Both prefer variety. If $P_{apples} = P_{bread} = 1$:

  • A sells 5 apples, buys 5 bread → ends with (5, 5)
  • B sells 5 bread, buys 5 apples → ends with (5, 5)

Market clearing check:

  • Apples: A supplies 5, B demands 5 ✓
  • Bread: B supplies 5, A demands 5 ✓

This is a Walrasian equilibrium: both maximize utility given their budget, and both markets clear.
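
Assuming Cobb-Douglas preferences with equal weights for both consumers (the text only says "both prefer variety"), the example can be checked in a few lines, including Walras' Law at non-equilibrium prices:

```python
# Two-consumer exchange economy: A owns 10 apples, B owns 10 bread.
# Assumed Cobb-Douglas preferences: each spends half of income on each good.

def excess_demands(p_apples, p_bread):
    mA = 10 * p_apples            # A's income from selling endowment
    mB = 10 * p_bread             # B's income
    d_apples = 0.5 * (mA + mB) / p_apples
    d_bread = 0.5 * (mA + mB) / p_bread
    return d_apples - 10, d_bread - 10    # demand minus total endowment

print(excess_demands(1.0, 1.0))   # (0.0, 0.0): both markets clear

# Walras' Law: p . z(p) = 0 at ANY prices, not just equilibrium ones.
for pa, pb in [(2.0, 1.0), (0.7, 1.3)]:
    za, zb = excess_demands(pa, pb)
    print(round(pa * za + pb * zb, 12))   # ≈ 0 up to float rounding
```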

3.5 The Auctioneer Metaphor

Walras imagined a fictional "auctioneer" who:

  1. Announces prices
  2. Collects supply and demand from all agents
  3. Adjusts prices (raise if excess demand, lower if excess supply)
  4. Repeats until all markets clear

This is called tâtonnement (French for "trial and error" or "groping toward equilibrium"). CGE models effectively implement this process computationally.
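
A toy tâtonnement loop for a single linear market (coefficients, step size, and tolerance are arbitrary; real CGE solvers use Newton-type methods rather than this naive adjustment):

```python
# Tatonnement: the auctioneer raises the price when excess demand is
# positive and lowers it when negative, until the market clears.

demand = lambda P: 100.0 - 2.0 * P
supply = lambda P: 10.0 + 1.0 * P
excess = lambda P: demand(P) - supply(P)

P, step = 1.0, 0.1
for _ in range(10_000):
    z = excess(P)
    if abs(z) < 1e-8:
        break
    P += step * z      # adjust price in the direction of excess demand

print(round(P, 6))     # 30.0: the clearing price
```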

3.6 Existence and Uniqueness

Arrow-Debreu Theorem: Under certain conditions (continuous preferences, no increasing returns to scale, etc.), a Walrasian equilibrium exists.

Uniqueness is not guaranteed—there may be multiple equilibria. Models typically find one equilibrium (the one closest to the starting point).

3.7 Comparative Statics

CGE models use comparative statics: comparing two equilibria (before and after a policy change) without modeling the transition path.

Equilibrium A          →          Equilibrium B
(no carbon tax)       Policy      (with carbon tax)
                      change

We compare A and B, but don't model the path between them

This is a limitation: Comparative statics tells you the new equilibrium, not how long it takes to get there or what happens during the transition.

3.8 Dynamic Extensions

Recursive dynamics:

  • Solve a sequence of static equilibria
  • Each period, capital stocks update based on previous period's investment
  • Agents have adaptive expectations (don't perfectly foresee the future)
Period 1 → Period 2 → Period 3 → ...
    ↓          ↓          ↓
  K₁ → I₁ → K₂ → I₂ → K₃ → ...

Intertemporal optimization:

  • Agents optimize over entire time horizon
  • Perfect foresight (or rational expectations)
  • More computationally demanding

3.9 Labor Markets: Beyond Perfect Competition

Standard Walrasian equilibrium assumes labor markets clear → no involuntary unemployment. Reality disagrees.

Efficiency Wages (Shapiro-Stiglitz 1984)

Core idea: Firms can't perfectly monitor effort. If wages = market-clearing, workers have nothing to lose by shirking (jobs are easy to find). So firms pay a premium → creates unemployment → workers fear job loss → don't shirk.

The efficiency wage: $$w = w^* \cdot \left(1 + e \cdot \left(b + \frac{\rho}{q}\right)\right)$$

  • $w^*$ = market-clearing wage; $e$ = required effort
  • $b$ = unemployment benefit rate; $\rho$ = discount rate; $q$ = detection probability

Intuition: Higher required effort or weaker penalties for shirking (generous benefits, low detection, high discounting) → bigger premium needed. Higher unemployment → smaller premium (job loss is scarier).

This matters for policy analysis: unemployment is endogenous. Revenue recycling (carbon tax → cut labor taxes) can deliver a "double dividend" — see Ch 11.

3.10 Beyond Equilibrium

Not all models assume the economy reaches equilibrium. Several alternative frameworks challenge the equilibrium paradigm:

Disequilibrium and path dependence. Econometric simulation models (see Appendix E: Econometric Simulation) do not assume the economy is in equilibrium. Instead, they estimate historical relationships and project forward, allowing for persistent unemployment, demand-driven output, and path-dependent dynamics. A recession today can permanently lower the growth path — something equilibrium models cannot capture.

Complexity and emergence. Agent-based models (ABMs) represent individual agents (firms, households, banks) with heterogeneous rules and local interactions. Macro-level patterns — booms, crashes, technology adoption waves — emerge from micro-level behavior without anyone optimizing a global objective function. There is no auctioneer. Equilibrium may never be reached, and the economy may cycle through qualitatively different regimes. (See Appendix E: Agent-Based / Complexity.)

Why this matters. Whether you assume the economy reaches equilibrium profoundly affects policy conclusions. A carbon tax analyzed in a CGE model (equilibrium, full employment) can look costly. The same tax in an econometric model (demand-driven, with slack) can look beneficial. Ch 11 explores this directly.


Chapter 4: Welfare and Distribution

4.1 What Is Welfare?

Welfare in economics refers to the well-being or satisfaction of individuals and society. Welfare economics asks: Is one economic outcome "better" than another? How do we measure the impact of policies on society?

This matters because policymakers need to know not just what happens (GDP change, emissions reduction) but whether society is better or worse off.

4.2 Utility

Utility = numerical measure of satisfaction. Can't observe it directly, but infer from choices.

Utility function: $U = U(x_1, x_2, ..., x_n)$ where $x_i$ = quantity of good $i$.

Key properties: More is better (non-satiation). Diminishing marginal utility (10th slice < 1st slice). Ordinal only (rankings matter, absolute numbers don't).

Example (Cobb-Douglas): $$U(C, L) = C^{\alpha} \cdot L^{1-\alpha}$$

  • C = consumption (goods/services); L = leisure (time not working); α ∈ (0,1) = weight on consumption

This models the labor-leisure tradeoff: work more → more income (C) but less free time (L). Both essential (U=0 if either is zero). Constant shares: fraction α of time-budget goes to work, (1-α) to leisure.
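
A brute-force check of the constant-shares claim, with made-up wage and time budget:

```python
# Labor-leisure choice with U(C, L) = C**alpha * L**(1-alpha) and
# C = w * (T - L). T, w, alpha are made-up numbers.

T, w, alpha = 16.0, 20.0, 0.6   # hours available, wage, consumption weight

def utility(L):
    C = w * (T - L)              # consumption bought with labor income
    return C**alpha * L**(1 - alpha)

# Grid search over leisure choices:
best_L = max((k / 1000 * T for k in range(1, 1000)), key=utility)

print(round(best_L, 2))          # 6.4 = (1 - alpha) * T: constant shares
```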

4.3 Consumer and Producer Surplus

Markets use one price for heterogeneous buyers and sellers:

  • Consumer surplus: Buyers who would've paid more than P* keep the difference
  • Producer surplus: Sellers who would've accepted less than P* keep the difference
Price (P)
    │ D
    │  \        S
    │   \      /    Consumer Surplus (above P*, below D)
    │    \    /
P*  │------X------ ← uniform market price
    │     /  \      Producer Surplus (below P*, above S)
    │    /    \
    │ S          D
    └──────────────── Quantity (Q)
          Q*

Total surplus = gains from trade. Deadweight loss = surplus destroyed by distortions (taxes, monopoly)—trades that would've been mutually beneficial but didn't happen.

4.4 Pareto Efficiency

Allocation: Who gets what—a distribution of goods across people, each yielding some utility.

Pareto efficient: No way to make someone better off without making someone else worse off. Think: pie split with no crumbs on the table. But 50-50 and 99-1 are both Pareto efficient—it says nothing about fairness.

Pareto improvement: A change that helps at least one person without hurting anyone.

The Welfare Theorems:

First: Competitive markets reach Pareto efficient outcomes. (No gains from trade left on the table.)

Second: Any Pareto efficient allocation can be achieved via markets—if you first redistribute endowments using lump-sum (non-distortionary) transfers. This conceptually separates efficiency (let markets work) from equity (who starts with what).

The catch: Real redistribution (income tax, means-tested welfare) depends on behavior → creates distortions → efficiency costs. Lump-sum transfers are a theoretical device, not a policy tool. Hence the equity-efficiency tradeoff: in practice, more redistribution typically means some efficiency loss.

For policy analysis: Markets achieve efficiency but not fairness. Policies create winners and losers. We need welfare measures to compare outcomes (next sections).

4.5 Measuring Welfare Changes

When prices change, how do we convert utility changes into money?

Equivalent Variation (EV): The € amount (at OLD prices) that would cause the same utility change as the policy.

Compensating Variation (CV): The € amount (at NEW prices) needed to restore original utility after the policy.

| Measure | Evaluated at | Question |
|---|---|---|
| EV | OLD prices | "How much € (at old prices) = same utility change?" |
| CV | NEW prices | "How much € (at new prices) = same utility change?" |

Why they differ (for a price increase):

  • EV (old prices): €1 buys more → need fewer € taken away to reach new lower utility → EV smaller
  • CV (new prices): €1 buys less → need more € added to restore old higher utility → CV larger

Example: Price rises €0.10 → €0.15/kWh. You cut consumption 1000 → 800 kWh.

  • Bill rises €20, but welfare loss > €20 (you also gave up 200 kWh you valued)
  • EV (at old prices where €1 = 10 kWh): smaller € amount
  • CV (at new prices where €1 = 6.7 kWh): larger € amount

For small price changes, EV ≈ CV ≈ consumer surplus change. CGE models commonly use EV for cross-scenario comparability.
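
EV and CV have closed forms for simple utility functions. A sketch assuming a Cobb-Douglas consumer with equal budget shares (income and prices are invented for illustration):

```python
# EV and CV for a price increase, using the Cobb-Douglas expenditure
# function e(p, u) = u * (p1/alpha)**alpha * (p2/(1-alpha))**(1-alpha).
# Negative values = welfare loss.

alpha, m, p2 = 0.5, 100.0, 1.0

def price_index(p1):
    return (p1 / alpha) ** alpha * (p2 / (1 - alpha)) ** (1 - alpha)

def indirect_utility(p1, income):
    return income / price_index(p1)

def expenditure(p1, u):
    return u * price_index(p1)

p1_old, p1_new = 1.0, 1.5
u_old, u_new = indirect_utility(p1_old, m), indirect_utility(p1_new, m)

EV = expenditure(p1_old, u_new) - m   # evaluated at OLD prices
CV = m - expenditure(p1_new, u_old)   # evaluated at NEW prices

print(round(EV, 2), round(CV, 2))     # -18.35 -22.47: |CV| > |EV|
```

Both measures are negative (the price rise is a loss), and the CV loss exceeds the EV loss, exactly as the table above predicts for a price increase.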

4.6 Social Welfare Functions

To compare outcomes where some gain and others lose, we need a social welfare function (SWF) that aggregates individual utilities:

Utilitarian (Benthamite): $$W = \sum_i U_i$$ Sum of all utilities—treats everyone equally.

Rawlsian: $$W = \min_i U_i$$ Only care about the worst-off individual.

Weighted sum: $$W = \sum_i \omega_i \cdot U_i$$ Different groups have different weights (distributional preferences).

CGE models typically use utilitarian welfare (sum or average of equivalent variations), but can report distributional impacts separately.
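
The choice of SWF can flip policy rankings. A tiny example with made-up utility vectors:

```python
# Two allocations ranked by utilitarian vs. Rawlsian social welfare.
# Utility values are invented to show the ranking can flip.

alloc_A = [10.0, 10.0, 10.0]   # equal split
alloc_B = [25.0, 8.0, 8.0]     # higher total, lower floor

utilitarian, rawlsian = sum, min

print(utilitarian(alloc_A), utilitarian(alloc_B))  # 30.0 41.0 -> B preferred
print(rawlsian(alloc_A), rawlsian(alloc_B))        # 10.0 8.0  -> A preferred
```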

4.7 Efficiency vs. Equity Trade-offs

Most real policies involve trade-offs:

| Policy | Efficiency | Equity |
|---|---|---|
| Carbon tax | ✅ Corrects externality, efficient | ❌ Regressive (hurts low-income more) |
| Carbon tax + rebates | ✅ Still efficient | ✅ Can be made progressive |
| Subsidies for renewables | ❓ May distort markets | ✅ Benefits vary |

4.8 The Double Dividend Hypothesis

A key policy question: Can environmental taxes provide a "double dividend"?

  1. First dividend: Environmental improvement (less pollution)
  2. Second dividend: Economic improvement (if tax revenue is used to reduce distortionary taxes like labor taxes)

This is testable in models that include labor markets with unemployment and detailed tax systems — see Ch 11 for how different modeling paradigms disagree on whether the second dividend is real.

4.9 Distributional Analysis: Beyond the Representative Agent

Most CGE models use a representative agent — a single household that stands in for the entire population. This captures average welfare but is completely blind to who gains and who loses.

Why this matters: A carbon tax might increase aggregate welfare (double dividend) while being deeply regressive — low-income households spend a larger share of income on energy and transport.

Approaches to distribution:

| Method | Resolution | Strengths | Weaknesses |
|---|---|---|---|
| Representative agent | 1 household per region | Simple, consistent | Blind to distribution |
| Multiple household groups | 5-10 quintiles | Some distributional insight | Still aggregated |
| CGE + microsimulation coupling | Thousands of real households | Rich distributional detail | Complex linkage, post-hoc |
| Agent-based models | Individual heterogeneity built in | Distribution is intrinsic | Harder to calibrate |

The CGE + microsimulation approach (e.g., linking a CGE model with a tax-benefit microsimulation model like EUROMOD) works in two stages: the CGE provides macro changes (prices, wages, employment), and the microsimulation model maps these onto a representative household survey to compute impacts by income decile, region, household type, etc.

This is explored in depth in Ch 14: Who Wins, Who Loses?


End of Part I: Economic Foundations

Continue to Part II: Mathematical Toolkit for the mathematical tools used in these models.


Part II: Mathematical Toolkit


A note on reading this part: The sections below mix three types of statements:

  • Definitions — what terms mean (e.g., "elasticity is the ratio of percentage changes")
  • Modeling choices — how models implement concepts (e.g., "CGE models use CES production functions")
  • Empirical findings — what data suggests (e.g., "energy demand is typically inelastic")

These are different kinds of claims with different epistemic status. Definitions are conventions; modeling choices are decisions made for tractability; empirical findings are contestable summaries of evidence.

Chapter 5: Optimization

5.1 Why Optimization?

Economic models assume agents optimize: consumers maximize utility, firms minimize costs. Combined with market clearing (supply = demand), this generates equilibrium.

Different models use different mathematical structures:

| Approach | What's optimized | Mathematical form |
|---|---|---|
| Cost-minimization energy models | Total energy system cost | LP/MILP — single objective, find minimum |
| Econometric simulation | Market clearing | Nonlinear equations — find prices where supply = demand |
| CGE models | Many agents simultaneously | MCP — no single objective; find prices where everyone's optimal choices are mutually consistent |
| Stochastic programming | Expected cost over uncertain futures | LP/NLP with scenarios |
| Robust optimization | Worst-case cost | Min-max over uncertainty set |

The key difference: some models have one decision-maker (a social planner minimizing cost). Others have many decision-makers (consumers, firms, government), each optimizing their own objective. You can't write "many agents optimizing" as a single optimization problem—but you can write down the conditions that must hold when all are at their optima simultaneously. That's what MCP does.

5.2 The Basic Optimization Problem

Unconstrained optimization: $$\max_{x} f(x) \quad \mathrm{or} \quad \min_{x} f(x)$$

First-order condition (FOC): At an optimum, the derivative is zero: $$\frac{df}{dx} = 0$$

Second-order condition (SOC): For a maximum, $f''(x) < 0$; for a minimum, $f''(x) > 0$.

5.3 Constrained Optimization and Lagrange Multipliers

Most economic problems involve constraints (budget constraints, resource limits, etc.).

General form: $$\max_{x} f(x) \quad \mathrm{s.t.} \quad g(x) = 0$$

Lagrangian method: $$\mathcal{L}(x, \lambda) = f(x) - \lambda \cdot g(x)$$

First-order conditions: $$\frac{\partial \mathcal{L}}{\partial x} = 0, \quad \frac{\partial \mathcal{L}}{\partial \lambda} = g(x) = 0$$

The Lagrange multiplier (λ) has a powerful interpretation:

λ is the shadow price of the constraint—the marginal value of relaxing the constraint by one unit.

Why? At the optimum, ∇f = λ∇g (gradients parallel). Moving in the direction that relaxes g by 1 unit increases f by λ. So λ = df*/dc where c is the constraint bound.

Connection to duality: λ is also the dual variable. Every constrained optimization has a "dual" problem where λ becomes the decision variable (interpreted as the "price" of the constraint). Under convexity, primal and dual have the same optimal value—this is why λ simultaneously measures: (1) how much you'd pay to relax the constraint, and (2) the "price" that makes the constraint worth respecting.

Example: In a carbon-constrained economy: $$\max \ GDP \quad \mathrm{s.t.} \quad Emissions \leq \bar{E}$$

The Lagrange multiplier on the emissions constraint is the marginal abatement cost—the GDP sacrifice from reducing emissions by one more ton.

5.4 Types of Optimization Problems

Linear Programming (LP): $$\min_{x} c^T x \quad \mathrm{s.t.} \quad Ax \leq b,\; x \geq 0$$

where $c$ = cost vector, $x$ = decision variables, $A$ = constraint matrix, $b$ = constraint bounds.

  • Objective and constraints are linear
  • Fast to solve (polynomial time)
  • Used in cost-minimization energy system models (TIMES/MARKAL, OSeMOSYS)

Mixed-Integer Linear Programming (MILP): Same as LP, but some variables must be integers.

  • Models yes/no decisions (build a power plant or not)
  • Much harder to solve (NP-hard)
  • Used in investment planning models

Nonlinear Programming (NLP): $$\min_{x} f(x) \quad \mathrm{s.t.} \quad g(x) \leq 0,\; h(x) = 0$$

Quadratic Programming (QP): Quadratic objective with linear constraints—a special case of NLP that's easier to solve.

5.5 Shadow Prices (Dual Variables)

Every constraint in an optimization problem has an associated shadow price (also called dual variable or Lagrange multiplier).

Interpretation: The shadow price tells you the marginal value of relaxing the constraint.

In energy models:

| Constraint | Shadow Price |
|---|---|
| Electricity demand = supply | Wholesale electricity price (€/MWh) |
| Emissions ≤ cap | Carbon permit price (€/tCO₂) |
| Capacity ≤ installed capacity | Scarcity rent (€/MW) |
| Renewable share ≥ target | Cost of RES constraint (€/MWh) |
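To make this concrete, here is a minimal merit-order dispatch sketch in Python (plant data invented for illustration, not taken from any real model). For this single-period LP structure, dispatching plants in order of marginal cost is optimal, and the shadow price of the demand constraint is simply the marginal cost of the last unit running — the wholesale price of the table above.

```python
# Toy merit-order dispatch: minimize generation cost subject to meeting demand.
# The shadow price of the demand constraint emerges as the marginal cost of
# the most expensive unit dispatched (the "wholesale price").

def dispatch(plants, demand):
    """plants: list of (name, marginal_cost_eur_per_mwh, capacity_mw)."""
    remaining = demand
    schedule, price = {}, None
    for name, mc, cap in sorted(plants, key=lambda p: p[1]):
        if remaining <= 0:
            break
        output = min(cap, remaining)
        schedule[name] = output
        remaining -= output
        price = mc                      # shadow price = cost of marginal unit
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule, price

plants = [("nuclear", 10.0, 30.0), ("coal", 30.0, 40.0), ("gas", 60.0, 50.0)]
schedule, price = dispatch(plants, demand=80.0)
print(schedule)   # nuclear and coal at full output, gas covers the rest
print(price)      # 60.0 EUR/MWh -- the gas unit sets the price
```

Relaxing the demand constraint by 1 MWh would save exactly 60 € here, which is the duality interpretation of Section 5.3 in miniature.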

5.6 Complementarity Problems

CGE equilibrium conditions are typically cast as a Mixed Complementarity Problem (MCP) and solved by specialized solvers.

The complementarity condition: $$0 \leq x \perp f(x) \geq 0$$

The symbol $\perp$ ("perp") denotes complementarity. This notation means: $x \geq 0$, $f(x) \geq 0$, and $x \cdot f(x) = 0$.

Interpretation: Either $x = 0$ or $f(x) = 0$ (or both). They "complement" each other.

In market context: $$0 \leq P \perp (S - D) \geq 0$$

  • If there's excess supply ($S > D$), price must be zero
  • If price is positive, the market must clear ($S = D$)

Why MCP instead of standard optimization?

A single agent's optimum satisfies the Karush-Kuhn-Tucker (KKT) conditions:

  • Gradient condition (∇f = λ∇g)
  • Feasibility (constraints satisfied)
  • Complementary slackness (if constraint slack, multiplier = 0)

In a market, each agent has their own KKT conditions. The equilibrium is where:

  • All agents satisfy their individual KKT conditions
  • Markets clear (supply = demand for each good)

You can't write this as "minimize some function"—there's no single objective. But you CAN collect all agents' KKT conditions plus market clearing into one big system of complementarity conditions. That's the MCP.

Why specialized solvers (PATH)? Standard optimization solvers (gradient descent, Newton) assume you're minimizing something. MCP solvers handle the "either-or" structure of complementarity (either x=0 or f(x)=0) directly, using pivoting methods similar to how simplex solves LPs.

Limitations to keep in mind:

  • MCP finds an equilibrium, but multiple equilibria may exist—the solution found depends on the starting point
  • Convergence is not guaranteed for all problem structures; poorly specified models may fail to solve
  • The equilibrium found is a mathematical fixed point; whether it represents real-world market outcomes depends on whether the underlying behavioral assumptions hold
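A minimal numerical sketch of one complementarity condition, using the Fischer-Burmeister reformulation (my choice here for illustration; production CGE models use dedicated MCP solvers like PATH, not bisection). The supply and demand functions are invented.

```python
import math

# Market clearing as a complementarity condition: 0 <= P  _|_  S(P) - D(P) >= 0.
# The Fischer-Burmeister function phi(a, b) = a + b - sqrt(a^2 + b^2) is zero
# exactly when a >= 0, b >= 0, and a*b = 0, so the equilibrium price is a root
# of phi(P, S(P) - D(P)).

def excess_supply(P):
    supply = P                # S(P) = P      (toy linear supply)
    demand = 10.0 - P         # D(P) = 10 - P (toy linear demand)
    return supply - demand

def phi(P):
    b = excess_supply(P)
    return P + b - math.sqrt(P * P + b * b)

# Bisection over a bracket where phi changes sign:
# phi(0) < 0 (excess demand at zero price), phi(10) > 0 (excess supply).
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if phi(mid) < 0:
        lo = mid
    else:
        hi = mid
P_star = 0.5 * (lo + hi)
print(round(P_star, 6))  # ~5.0: the market clears (S = D) at a positive price
```

At the root, either the price is zero or the market clears; here demand is strong enough that the clearing branch is active.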

5.7 Stochastic Programming and Robust Optimization

Standard optimization assumes the future is known. But what if key parameters (fuel prices, demand growth, technology costs) are uncertain?

Stochastic programming optimizes the expected outcome across a set of scenarios:

$$\min_{x} \sum_s p_s \cdot f(x, \xi_s)$$

where $p_s$ = probability of scenario $s$, $\xi_s$ = uncertain parameters in that scenario.

Two-stage stochastic programming is the most common form:

  • First stage: Make "here-and-now" decisions before uncertainty resolves (e.g., which power plants to build)
  • Second stage: Make "wait-and-see" decisions after uncertainty resolves (e.g., how to dispatch)

This naturally captures the idea that some decisions are irreversible (investment) while others are flexible (operations). You'll see this at work in Ch 18 (uncertainty).
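The two-stage logic can be sketched with a toy capacity problem (all numbers invented): build capacity before demand is known, pay a penalty for unserved energy afterwards.

```python
# Two-stage sketch: choose capacity now (first stage), face demand later
# (second stage).  CAPEX and the value of lost load are illustrative.

CAPEX = 40.0        # EUR per MW of capacity, paid up front
VOLL = 100.0        # EUR per MWh of unserved demand (penalty)
scenarios = [(0.5, 60.0), (0.5, 100.0)]   # (probability, demand in MWh)

def expected_cost(K):
    # first-stage cost + probability-weighted second-stage shortfall cost
    return CAPEX * K + sum(p * VOLL * max(d - K, 0.0) for p, d in scenarios)

# For this piecewise-linear cost, the optimum lies at 0 or a scenario demand,
# so a small candidate search suffices.
candidates = [0.0] + [d for _, d in scenarios]
K_star = min(candidates, key=expected_cost)
print(K_star, expected_cost(K_star))  # building for the high-demand scenario
                                      # is worthwhile at these prices
```

The hedging value of capacity is visible in the numbers: expected shortfall savings (0.5 × 100 €/MWh) exceed the capex (40 €/MW), so the first-stage decision covers the high-demand scenario.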

Robust optimization takes a different approach: instead of optimizing the expected outcome, it optimizes the worst case:

$$\min_{x} \max_{\xi \in \mathcal{U}} f(x, \xi)$$

where $\mathcal{U}$ = uncertainty set (the range of plausible futures).

Trade-off: Stochastic programming gives better expected performance but requires probability estimates. Robust optimization requires no probabilities but is conservative — it hedges against the worst case, which may be unlikely.

5.8 Solvers and Software

Modeling languages let you describe the problem declaratively; the solver finds the solution:

| Language / Tool | Strengths | License |
|---|---|---|
| GAMS | Industry standard for CGE/energy; MCP support via PATH | Commercial |
| JuMP.jl (Julia) | Modern algebraic modeling; excellent performance | Open source |
| Pyomo (Python) | Python ecosystem integration | Open source |
| AMPL | Concise syntax; wide solver support | Commercial |

Solvers:

| Solver | Problem types | License |
|---|---|---|
| CPLEX | LP, MILP, QP | Commercial (free academic) |
| Gurobi | LP, MILP, QP | Commercial (free academic) |
| HiGHS | LP, MILP | Open source |
| CONOPT | NLP | Commercial |
| PATH | MCP | Commercial (bundled with GAMS) |
| Ipopt | NLP | Open source |

The open-source solver ecosystem has matured substantially — HiGHS now matches commercial LP/MILP solvers for many problem sizes.


Chapter 6: Discrete Choice Models

6.1 The Problem: Heterogeneous Choices

In reality, not everyone makes the same choice even when facing the same prices. Some buy electric cars, others buy gasoline cars. Some install heat pumps, others stick with gas boilers.

Why?

  • Different preferences
  • Different constraints (budget, space, access)
  • Different information
  • Behavioral factors (risk aversion, habits)

6.2 Random Utility Models

Basic idea: Each option has a "utility" that includes:

  • Observable components (price, performance)
  • Unobservable components (personal taste, hidden costs)

$$U_i = V_i + \varepsilon_i$$

where:

  • $U_i$ = total utility of option $i$
  • $V_i$ = systematic (observable) utility
  • $\varepsilon_i$ = random (unobservable) component

Consumer chooses option with highest total utility.

6.3 The Logit Model

If the random components follow an extreme value (Gumbel) distribution, the probability of choosing option $i$ is:

$$P_i = \frac{e^{V_i / \mu}}{\sum_j e^{V_j / \mu}}$$

where $\mu$ is a scale parameter.

Properties:

  • Probabilities sum to 1
  • Higher utility → higher probability
  • But not deterministic: even expensive options get some market share
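These properties are easy to verify numerically. A minimal sketch with invented utilities for three options:

```python
import math

# Logit choice probabilities: a softmax of systematic utilities V_i
# (scale parameter mu = 1 by default).  Utilities are made-up numbers.

def logit_shares(V, mu=1.0):
    weights = [math.exp(v / mu) for v in V]
    total = sum(weights)
    return [w / total for w in weights]

V = [-1.0, -1.5, -3.0]                 # option 3 is much worse...
shares = logit_shares(V)
print([round(s, 3) for s in shares])   # ...but still gets a nonzero share
print([round(s, 3) for s in logit_shares(V, mu=0.2)])  # small mu: near
                                                       # winner-take-all
```

Shrinking $\mu$ concentrates shares on the best option; growing it spreads them out, which is how the scale parameter controls how "deterministic" choice is.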

Limitations to keep in mind:

  • IIA (Independence of Irrelevant Alternatives): Adding a new option doesn't change relative shares of existing options. This is often unrealistic — adding a third car brand should affect similar brands more than dissimilar ones.
  • Assumes unobserved heterogeneity follows a specific distribution (Gumbel). If real heterogeneity differs, market share predictions can be biased.
  • Nested logit (Section 6.5) partially addresses IIA but requires specifying the nest structure, which is itself a modeling choice.

Example: Technology choice in buildings $$V_i = -\alpha \cdot Cost_i - \beta \cdot Hassle_i + \gamma \cdot Efficiency_i$$

6.4 The Weibull / Logit Market Share Function

Many energy models use Weibull-based market shares for technology selection:

$$S_i = \frac{e^{-\nu \cdot C_i}}{\sum_j e^{-\nu \cdot C_j}}$$

where:

  • $S_i$ = market share of technology $i$
  • $C_i$ = generalized cost
  • $\nu$ = heterogeneity parameter

The generalized cost includes "intangible costs": $$C_i = (CAPEX_i + OPEX_i) + \mu_i$$

where:

  • CAPEX = capital expenditure (upfront cost: buying the equipment)
  • OPEX = operating expenditure (ongoing costs: fuel, maintenance)
  • μ_i = intangible costs (non-monetary factors — see below)

Intangible costs capture:

  • Risk aversion (new technology is risky)
  • Hidden costs (installation complexity, learning time)
  • Behavioral inertia (familiarity with current technology)
  • Market barriers (lack of information, financing constraints)

6.5 Nested Logit

When choices have natural groupings, nested logit is used:

Travel choice
├── Private car
│   ├── Gasoline car
│   ├── Diesel car
│   └── Electric car
├── Public transit
│   ├── Bus
│   └── Train
└── Active (walk/bike)

Two-level choice:

  1. Choose mode (car, transit, active)
  2. Choose specific option within mode

Formula:

$$P(mode\ m) = \frac{e^{V_m / \lambda_m}}{\sum_{m'} e^{V_{m'} / \lambda_{m'}}}$$

$$P(option\ i\ |\ mode\ m) = \frac{e^{V_i / \mu}}{\sum_{j \in m} e^{V_j / \mu}}$$

where $\lambda_m$ = nest-specific scale parameter (captures correlation within mode $m$), $\mu$ = scale parameter for options within a nest, and $V$ = systematic utility. In the full nested logit, the mode utility $V_m$ includes the "inclusive value" (the log-sum of the utilities of the options inside nest $m$), which is what links the two choice levels.

6.6 Calibration of Intangible Costs

The intangible costs ($\mu_i$) are calibrated, not estimated:

  1. Observe actual market shares in base year
  2. Calculate what intangible costs would be needed to match these shares given known financial costs
  3. Use these calibrated values for projections

This is a limitation: We're inferring behavior from outcomes, not from direct measurement of preferences.
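The calibration steps above can be sketched by inverting the share function of Section 6.4. Shares pin down generalized costs only up to an additive constant, so the smallest intangible cost is normalized to zero. All numbers are illustrative assumptions, not real data.

```python
import math

# Invert S_i = exp(-nu * G_i) / sum_j exp(-nu * G_j):
#   G_i = -ln(S_i) / nu + const
# then intangible cost mu_i = G_i - financial_cost_i, normalized so min = 0.

nu = 0.05                           # heterogeneity parameter (per EUR)
financial_cost = [1000.0, 1200.0]   # CAPEX + OPEX per technology
observed_share = [0.3, 0.7]         # base-year market shares

G_up_to_const = [-math.log(s) / nu for s in observed_share]
mu_raw = [g - c for g, c in zip(G_up_to_const, financial_cost)]
shift = min(mu_raw)
mu = [m - shift for m in mu_raw]    # normalized intangible costs

# Check: the calibrated costs reproduce the observed shares.
G = [c + m for c, m in zip(financial_cost, mu)]
w = [math.exp(-nu * g) for g in G]
reproduced = [x / sum(w) for x in w]
print([round(m, 1) for m in mu])         # cheaper tech 1 carries a penalty
print([round(s, 3) for s in reproduced]) # matches the observed [0.3, 0.7]
```

Note the inference at work: technology 2 is 200 € more expensive yet holds 70% of the market, so the calibration attributes a sizable intangible penalty to technology 1 — exactly the "inferring behavior from outcomes" caveat above.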

6.7 Why This Matters for Policy

Discrete choice modeling affects policy analysis profoundly — see Ch 17 for how different models of technology adoption give dramatically different answers about transition speed.

Without heterogeneity:

  • Carbon tax makes EVs cheapest
  • Everyone switches to EVs immediately
  • Unrealistic!

With heterogeneity:

  • Some early adopters switch quickly
  • Others need larger price signals
  • Technology diffusion is gradual
  • More realistic policy impact

Chapter 7: Uncertainty and Stochastic Methods

7.1 The Challenge of Uncertainty

Energy projections face deep uncertainty in:

  • Future oil prices
  • Technology costs (will solar keep getting cheaper?)
  • Economic growth (especially in emerging economies)
  • Policy evolution
  • Resource availability

7.2 Types of Uncertainty

Aleatory uncertainty: Inherent randomness (weather, accidents)

  • Can be characterized probabilistically
  • Won't disappear with more research

Epistemic uncertainty: Lack of knowledge (future technology costs)

  • Could be reduced with more information
  • But often we must act before uncertainty resolves

Deep uncertainty: Fundamental unknowns (paradigm shifts, black swans)

  • Hard to assign probabilities
  • Scenario analysis may be more appropriate
  • Explored in Ch 18

7.3 Probability Distributions

Common distributions used in stochastic energy models:

Normal distribution: $X \sim N(\mu, \sigma^2)$

  • Symmetric, bell-shaped
  • Used for: demand elasticities, growth rates

Lognormal distribution: $X \sim LN(\mu, \sigma^2)$

  • Always positive, right-skewed
  • Used for: prices, resource estimates

Triangular distribution: Defined by min, mode, max

  • Easy to elicit from experts
  • Used for: technology learning rates

Uniform distribution: Equal probability in range

  • Maximum ignorance within bounds
  • Used when we only know plausible range

7.4 Monte Carlo Simulation

Monte Carlo method:

  1. Draw random values from input distributions
  2. Run the model with these values
  3. Record outputs
  4. Repeat many times (1000-10000)
  5. Analyze distribution of outputs

Input: θ ~ distribution
       │
       ▼
┌──────────────────┐
│     MODEL         │
│  (deterministic   │
│   per draw)       │
└──────────────────┘
       │
       ▼
Output: Oil price, demand, etc.

Repeat N times → Output distribution

Output: Not a single forecast, but a probability distribution

  • Mean, median
  • Standard deviation
  • Percentiles (P10, P50, P90)
  • Full distribution
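The loop above fits in a few lines. A minimal sketch (the growth-rate distribution and base demand are invented for illustration):

```python
import random

# Monte Carlo propagation: uncertain growth rate -> distribution of demand
# 20 years out.  One draw = one deterministic model run.

random.seed(42)
D0 = 100.0                     # base-year demand (arbitrary units)
N = 10_000

draws = []
for _ in range(N):
    g = random.triangular(0.01, 0.04, 0.02)   # (low, high, mode) growth rate
    draws.append(D0 * (1 + g) ** 20)          # the "model"

draws.sort()
p10, p50, p90 = draws[N // 10], draws[N // 2], draws[9 * N // 10]
print(f"P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f}")  # a range, not a forecast
```

The output is reported as percentiles rather than a single number, which is the point of the exercise.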

7.5 Latin Hypercube Sampling (LHS)

Simple random sampling can miss parts of the parameter space. Latin Hypercube Sampling ensures better coverage:

  1. Divide each parameter's range into N equal-probability intervals
  2. Sample exactly once from each interval
  3. Randomly pair samples across parameters

Advantage: Efficiency. Simple random sampling has standard error of order $1/\sqrt{N}$; LHS converges faster for smooth, near-additive models. For expensive models, this can cut the required number of runs by 5-10×.

Simple Random:          Latin Hypercube:
X₂                     X₂
│  •    •              │     •  
│    •     •           │  •     
│  •    •              │        •
│      •  •            │     •  
└──────────── X₁       └──────────── X₁
(clumpy coverage)      (stratified coverage)
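The three steps above are a few lines of code. A minimal sketch (no external libraries):

```python
import random

# Minimal Latin Hypercube Sampling: one uniform draw per equal-probability
# stratum in each dimension, with strata randomly paired across dimensions.

def latin_hypercube(n, dims, rng=random):
    samples = []
    for _ in range(dims):
        # one draw inside each of the n strata [i/n, (i+1)/n)
        column = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(column)            # random pairing across dimensions
        samples.append(column)
    # transpose: list of n points, each with `dims` coordinates
    return list(zip(*samples))

random.seed(0)
pts = latin_hypercube(10, 2)
# every stratum of every dimension contains exactly one sample
for d in range(2):
    print(sorted(int(p[d] * 10) for p in pts))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The printed check is the defining LHS property: each dimension's ten strata are each hit exactly once, unlike the clumpy coverage of simple random sampling.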

7.6 Sensitivity Analysis

Question: Which uncertain inputs drive the most uncertainty in outputs?

Note: Local sensitivity is a deterministic concept (just calculus). We include it here as context before the stochastic method (Sobol indices), which requires Monte Carlo runs.

Local sensitivity: Change one input slightly, see output change $$S_i = \frac{\partial Y}{\partial X_i}$$

Global sensitivity (Sobol indices): Decompose output variance

First-order Sobol index: $$S_i = \frac{V[E[Y|X_i]]}{V[Y]}$$

where $V[\cdot]$ = variance, $E[\cdot]$ = expected value, and $X_{-i}$ = all inputs except $X_i$.

Fraction of output variance explained by input $X_i$ alone.

Total-effect index: $$S_{T,i} = 1 - \frac{V[E[Y|X_{-i}]]}{V[Y]}$$

Includes all interactions involving $X_i$.
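A brute-force estimate of the first-order index makes the variance decomposition tangible. The test function below is an invented linear model for which the exact answer is known analytically: for $Y = X_1 + 0.5\,X_2$ with independent uniform inputs, $S_1 = 1/(1+0.25) = 0.8$.

```python
import random
import statistics

# First-order Sobol index by binning: estimate E[Y | X1] per bin of X1,
# then take Var(bin means) / Var(Y).

random.seed(1)
N, BINS = 50_000, 50
x1 = [random.random() for _ in range(N)]
x2 = [random.random() for _ in range(N)]
y = [a + 0.5 * b for a, b in zip(x1, x2)]

bins = [[] for _ in range(BINS)]
for a, v in zip(x1, y):
    bins[min(int(a * BINS), BINS - 1)].append(v)

bin_means = [statistics.fmean(b) for b in bins]
S1 = statistics.pvariance(bin_means) / statistics.pvariance(y)
print(round(S1, 2))  # close to the analytic value 0.8
```

Practical Sobol estimation uses dedicated sampling schemes (e.g., Saltelli designs) rather than binning, but the binning version shows directly what $V[E[Y|X_i]]/V[Y]$ measures.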

7.7 Gaussian Process Emulators and Surrogate Models

Complex energy-economy models can take hours per run, making Monte Carlo with thousands of draws impractical. Surrogate models (also called emulators or metamodels) approximate the full model with a fast statistical function.

Gaussian Process (GP) emulators:

  • Train a GP on a small set of model runs (design points)
  • The GP interpolates between runs, providing both a prediction and an uncertainty estimate for untested input combinations
  • Enables fast Monte Carlo on the emulator instead of the full model

$$Y(\mathbf{x}) \sim \mathcal{GP}\big(m(\mathbf{x}),\, k(\mathbf{x}, \mathbf{x}')\big)$$

where $m(\mathbf{x})$ is the mean function and $k(\mathbf{x}, \mathbf{x}')$ is the covariance (kernel) function.

Other surrogate approaches:

  • Polynomial chaos expansion: Represent output as polynomial of inputs
  • Neural network surrogates: Train a neural network on model inputs/outputs
  • Reduced-order models: Simplify the original model structure

Why this matters: Surrogates make formal uncertainty quantification feasible for models that are otherwise too slow. They are increasingly used for Bayesian calibration (see Ch 20).
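A toy 1-D emulator shows the mechanics. The "expensive simulator" here is just a sine function standing in for a slow model, and the code is a from-scratch sketch (real emulators use libraries such as scikit-learn or GPy, which also provide the predictive variance):

```python
import math

# Tiny 1-D Gaussian process emulator: RBF kernel, mean prediction only.

def rbf(a, b, length=1.0):
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# "Design points": a handful of expensive model runs
X = [0.0, 1.0, 2.0, 3.0]
Y = [math.sin(x) for x in X]

nugget = 1e-9   # tiny jitter on the diagonal for numerical stability
K = [[rbf(a, b) + (nugget if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha = solve(K, Y)                     # solve K alpha = Y once, offline

def predict(x_new):
    # GP posterior mean: k(x_new, X) @ K^{-1} Y
    return sum(rbf(x_new, xi) * ai for xi, ai in zip(X, alpha))

print(round(predict(1.0), 4))   # reproduces the training run at x = 1
print(round(predict(1.5), 4))   # cheap interpolation between runs
```

Once `alpha` is computed, each prediction is a handful of kernel evaluations, so Monte Carlo over the emulator costs microseconds per draw instead of hours per model run.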

7.8 Bayesian Calibration

Classical calibration (see Ch 20) finds one parameter set that replicates base-year data. Bayesian calibration treats parameters as random variables with probability distributions:

$$P(\theta | D) \propto P(D | \theta) \cdot P(\theta)$$

  • $P(\theta)$ = prior — what we believe about parameters before seeing data
  • $P(D | \theta)$ = likelihood — how well the model with parameters $\theta$ explains observed data $D$
  • $P(\theta | D)$ = posterior — updated beliefs after seeing data

Advantages over classical calibration:

  • Produces parameter distributions, not point estimates
  • Naturally propagates parameter uncertainty to model outputs
  • Can incorporate expert knowledge through priors
  • Quantifies how much the data actually constrains each parameter

Practical implementation typically uses Markov Chain Monte Carlo (MCMC) sampling — or, when the model is too slow for MCMC, a GP emulator of the model (combining Sections 7.7 and 7.8).
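A minimal Metropolis sketch on a one-parameter toy model (synthetic data with a known "true" parameter; all numbers invented) shows the prior-likelihood-posterior machinery:

```python
import math
import random

# Bayesian calibration sketch: model y = theta * x + noise, Gaussian prior,
# random-walk Metropolis sampling of the posterior.

random.seed(7)
true_theta, noise_sd = 2.0, 0.5
xs = [float(i) for i in range(1, 11)]
ys = [true_theta * x + random.gauss(0.0, noise_sd) for x in xs]

def log_posterior(theta):
    log_prior = -0.5 * (theta / 10.0) ** 2          # N(0, 10^2) prior
    log_lik = sum(-0.5 * ((y - theta * x) / noise_sd) ** 2
                  for x, y in zip(xs, ys))
    return log_prior + log_lik

theta, samples = 0.0, []
lp = log_posterior(theta)
for step in range(20_000):
    prop = theta + random.gauss(0.0, 0.1)           # random-walk proposal
    lp_prop = log_posterior(prop)
    if math.log(random.random()) < lp_prop - lp:    # Metropolis accept rule
        theta, lp = prop, lp_prop
    if step >= 5_000:                               # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)
print(round(post_mean, 2))  # near 2.0; the sample spread quantifies uncertainty
```

The retained samples approximate the posterior distribution, so percentiles, credible intervals, and forward propagation to model outputs all come for free from the same chain.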

7.9 Interpreting Stochastic Results

Don't:

  • Treat the mean as "the forecast"
  • Ignore the distribution width

Do:

  • Report ranges: "Oil prices in 2040: €60-120/barrel (80% probability)"
  • Use for stress-testing policies
  • Identify which uncertainties matter most

End of Part II: Mathematical Toolkit

Continue to Part III: Energy Systems for energy-specific concepts.


Part III: Energy Systems


Chapter 8: Energy Fundamentals

8.1 Why Energy Matters for Economic Modeling

Energy is unique among economic goods:

  • Essential input for virtually all production
  • Limited substitutability in many uses
  • Environmental externalities (emissions)
  • Strategic importance (national security)
  • Infrastructure-intensive (long-lived capital)

8.2 Energy Forms and Conversions

Primary energy: Energy as found in nature

  • Fossil fuels (coal, crude oil, natural gas)
  • Nuclear (uranium)
  • Renewables (solar, wind, hydro, biomass, geothermal)

Secondary energy: Transformed/refined energy

  • Electricity (from any primary source)
  • Refined petroleum products (gasoline, diesel, jet fuel)
  • Hydrogen (produced from various sources)

Final energy: Energy delivered to end users

  • What you buy (electricity at the meter, gasoline at the pump)
  • Before losses in end-use equipment

Useful energy: Energy service actually provided

  • Heat delivered to room
  • Motion of vehicle
  • Light from bulb

Primary Energy (100 units)
    │
    │ Extraction, refining, generation
    │ (losses: ~30-40%)
    ▼
Secondary/Final Energy (60-70 units)
    │
    │ End-use conversion
    │ (losses: ~30-70%)
    ▼
Useful Energy (20-40 units)

8.3 Energy Units

| Unit | Definition | Typical Use |
|---|---|---|
| Joule (J) | SI unit of energy | Scientific |
| kWh | 3.6 MJ | Electricity billing |
| toe | Tonne of oil equivalent (41.868 GJ) | Energy statistics |
| Mtoe | Million toe | National/EU level |
| PJ | Petajoule (10¹⁵ J) | Energy balances |
| TWh | Terawatt-hour (3.6 PJ) | Electricity statistics |
| BTU | British Thermal Unit (~1055 J) | US/UK |
| MMBtu | Million BTU | Natural gas (US) |

Conversion factors:

  • 1 toe = 11.63 MWh = 41.868 GJ
  • 1 TWh = 0.086 Mtoe
  • 1 barrel of oil ≈ 0.136 toe
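These factors chain together via joules; a small converter sketch:

```python
# Energy unit conversions built from the factors above.
GJ_PER_TOE = 41.868
GJ_PER_MWH = 3.6              # 1 MWh = 3.6 GJ
PJ_PER_TWH = 3.6              # 1 TWh = 3.6 PJ

def toe_to_mwh(toe):
    return toe * GJ_PER_TOE / GJ_PER_MWH

def twh_to_mtoe(twh):
    gj = twh * PJ_PER_TWH * 1e6   # TWh -> PJ -> GJ
    return gj / GJ_PER_TOE / 1e6  # GJ -> toe -> Mtoe

print(round(toe_to_mwh(1), 2))    # 11.63
print(round(twh_to_mtoe(1), 3))   # 0.086
```

Routing every conversion through a single base unit (here GJ) avoids the inconsistent round-trip factors that plague hand-converted energy statistics.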

8.4 Energy Balances

An energy balance is an accounting framework showing all energy flows in an economy.

┌──────────────────────────────────────────────────────────────┐
│                         SUPPLY                               │
│  Domestic production + Imports - Exports - Stock changes     │
└──────────────────────────────────────────────────────────────┘
                              ↓
                      Primary Energy Supply
                              ↓
┌──────────────────────────────────────────────────────────────┐
│                     TRANSFORMATION                           │
│  Power plants, refineries, heat plants                       │
│  (input - output = losses)                                   │
└──────────────────────────────────────────────────────────────┘
                              ↓
                      Final Energy Consumption
                              ↓
┌──────────────────────────────────────────────────────────────┐
│                         DEMAND                               │
│  Industry + Transport + Residential + Services + Agriculture │
└──────────────────────────────────────────────────────────────┘

Key identity: $$Primary\ Supply = Final\ Consumption + Transformation\ Losses + Own\ Use$$

Models are calibrated to Eurostat and/or IEA energy balances for the base year.

8.5 Energy Intensity

Energy intensity measures how much energy is used per unit of economic output:

$$Energy\ Intensity = \frac{Total\ Primary\ Energy\ Supply}{GDP}$$

Usually expressed in toe/€million or MJ/€.

Decomposition: $$\frac{E}{GDP} = \sum_s \frac{E_s}{Y_s} \cdot \frac{Y_s}{GDP}$$

This decomposes aggregate energy intensity into: (1) sector-level intensity ($E_s/Y_s$) and (2) sectoral shares of GDP ($Y_s/GDP$). A country can reduce energy intensity either by making each sector more efficient or by shifting toward less energy-intensive sectors.
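The identity can be verified numerically (sector figures invented for illustration):

```python
# Aggregate energy intensity equals the GDP-share-weighted sum of
# sector intensities -- an exact accounting identity.

sectors = {               # sector: (energy use E_s, value added Y_s)
    "industry": (40.0, 200.0),
    "services": (15.0, 500.0),
    "transport": (25.0, 100.0),
}
E = sum(e for e, _ in sectors.values())
GDP = sum(y for _, y in sectors.values())

lhs = E / GDP
rhs = sum((e / y) * (y / GDP) for e, y in sectors.values())
print(round(lhs, 4), round(rhs, 4))  # identical: 80/800 = 0.1
```

Because it is an identity, the interest lies in the terms, not the total: decomposition analysis tracks how each term moves over time to separate efficiency gains from structural change.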

Trends:

  • Developed economies: ~1-2% annual decrease in energy intensity
  • Driven by structural change + efficiency improvements
  • AEEI parameter in models captures this

8.6 Emissions and Carbon Intensity

Carbon intensity of energy: $$Carbon\ Intensity = \frac{CO_2\ Emissions}{Energy\ Consumption}$$

Carbon content by fuel:

| Fuel | kg CO₂/GJ | Relative |
|---|---|---|
| Coal | 94-96 | Highest |
| Oil | 73-75 | Medium |
| Natural gas | 56-58 | Lowest fossil |
| Biomass | 0* | Net zero (if sustainable) |
| Nuclear | 0 | Zero direct |
| Renewables | 0 | Zero direct |

*Biomass carbon is considered biogenic (part of natural cycle).

Emissions accounting: $$EMI_{f,s,r} = \epsilon_f \cdot E_{f,s,r}$$

where $\epsilon_f$ = emission factor for fuel $f$ (kg CO₂/GJ), $E_{f,s,r}$ = energy consumption of fuel $f$ in sector $s$ and region $r$. This simple multiplication links economic models to physical emissions.
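In code, this link is one line. A sketch using mid-range emission factors from the table above (consumption figures invented):

```python
# Emissions from an energy balance: sum over fuels of factor * consumption.
emission_factor = {"coal": 95.0, "oil": 74.0, "gas": 57.0, "wind": 0.0}  # kg/GJ
consumption_gj = {"coal": 500.0, "oil": 300.0, "gas": 400.0, "wind": 200.0}

emissions_kg = sum(emission_factor[f] * consumption_gj[f] for f in consumption_gj)
print(emissions_kg / 1000, "t CO2")  # 92.5 t CO2
```

This is why fuel-level energy detail matters: the same total energy use produces very different emissions depending on the fuel mix.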

8.7 Exhaustible Resources: Hotelling Theory

The Hotelling Rule (1931) is fundamental to understanding fossil fuel supply.

Core insight: Exhaustible resources (oil, gas, coal) have a finite stock. Extracting today means less available tomorrow. Owners must consider the opportunity cost of extraction.

The Hotelling condition: $$\frac{dP}{dt} = r \cdot P$$

In words: the price of an exhaustible resource rises at the interest rate $r$.

Intuition:

  • Resource owner can either extract now (get price P, invest at interest r) or wait (get price P_{t+1})
  • In equilibrium, they must be indifferent: $P_{t+1} = P_t(1+r)$
  • Otherwise arbitrage: if prices rise faster than r, everyone waits; if slower, everyone extracts now

With extraction costs: $$\frac{d(P - MC)}{dt} = r \cdot (P - MC)$$

The resource rent (price minus marginal cost) rises at rate r.
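A quick simulation of the rule with extraction costs (interest rate, marginal cost, and initial rent are invented):

```python
# Hotelling path with constant marginal cost: the rent (P - MC) grows at the
# interest rate, so the price itself grows more slowly than r.

r, MC = 0.05, 10.0
rent0 = 20.0
prices = [MC + rent0 * (1 + r) ** t for t in range(5)]
growth = [(p1 - MC) / (p0 - MC) - 1 for p0, p1 in zip(prices, prices[1:])]
print([round(p, 2) for p in prices])  # price rises, but slower than 5%/yr
print([round(g, 3) for g in growth])  # rent grows at exactly r = 0.05
```

The distinction between price growth and rent growth is easy to miss: with a positive marginal cost, observed prices rise at less than the interest rate even on a textbook Hotelling path.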

Why prices actually fluctuate: The simple Hotelling model predicts smooth price rises, but real prices are volatile because:

  • Demand shocks (recessions, growth spurts)
  • Supply shocks (discoveries, wars, technology)
  • Market power (OPEC decisions)
  • Uncertainty about reserves

Limitations to keep in mind:

  • Hotelling assumes rational, forward-looking resource owners with perfect information about reserves — real actors have limited information and heterogeneous expectations
  • Historical oil prices have not followed Hotelling paths; empirical tests generally reject the simple model
  • The theory works better as a benchmark for understanding long-run tendencies than as a short-run price predictor
  • Political factors (sanctions, nationalization, OPEC quotas) often dominate economic logic

8.8 Market Power in Energy: OPEC

Global oil markets are not perfectly competitive. OPEC (Organization of Petroleum Exporting Countries) has significant market power.

The dominant firm with competitive fringe:

Market demand = OPEC supply + Non-OPEC supply
                    ↓               ↓
            (strategic)      (price-taking)

The dominant firm model:

  1. Competitive fringe supplies according to marginal cost: $S^{fringe}(P)$ = supply from non-OPEC producers at price $P$

  2. OPEC faces residual demand: $$D^{OPEC}(P) = D^{world}(P) - S^{fringe}(P)$$

  3. OPEC maximizes profit: $$\max_P (P - MC^{OPEC}) \cdot D^{OPEC}(P)$$

Result: OPEC restricts output below competitive level, raising prices.
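With linear functional forms (all coefficients invented), the dominant-firm optimum can be computed directly:

```python
# Dominant-firm sketch:
#   D_world(P) = 100 - P,   S_fringe(P) = 0.5 * P,   MC_OPEC = 10
# Residual demand: D_opec(P) = 100 - 1.5 * P
# OPEC profit:     (P - 10) * (100 - 1.5 * P);  the FOC gives P* = 115/3.

def profit(P, mc=10.0):
    residual = 100.0 - 1.5 * P
    return (P - mc) * residual

# grid search confirms the closed-form optimum
P_star = max((p / 100.0 for p in range(0, 7000)), key=profit)
print(round(P_star, 2))   # ~38.33, well above OPEC's marginal cost of 10
```

The markup over marginal cost is the exercise of market power; a fully competitive market would price far closer to 10.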

Why this matters: Oil price projections are highly sensitive to assumptions about OPEC behavior — a major source of uncertainty explored in Ch 13.

8.9 International Trade in Energy

Comparative advantage (Ricardo): Countries benefit from specializing in goods where they have the lowest opportunity cost, even if one country is more productive at everything.

The Armington assumption (1969): Domestic and imported goods are imperfect substitutes.

$$X = \left[ \delta \cdot D^{\rho} + (1-\delta) \cdot M^{\rho} \right]^{1/\rho}$$

where $X$ = composite good, $D$ = domestic, $M$ = imported, $\sigma = 1/(1-\rho)$ = Armington elasticity. This prevents unrealistic all-or-nothing trade switches.

Carbon leakage occurs when climate policy in one region causes emissions to increase elsewhere — production relocates to avoid carbon costs. The leakage rate measures what fraction of domestic emission reductions are offset by increases abroad. This is explored in Ch 11.

Border Carbon Adjustments (BCAs) apply carbon costs to imports based on embodied emissions, aiming to reduce leakage.

8.10 Environmental Policy Instruments

Externality: A cost or benefit not reflected in market prices. Burning fossil fuels causes climate change — costs fall on society, not the polluter.

Carbon pricing internalizes the externality:

| Feature | Carbon Tax | Cap-and-Trade |
|---|---|---|
| Price certainty | Yes | No |
| Quantity certainty | No | Yes |
| Revenue predictability | Higher (rate is fixed) | Variable (permit price fluctuates) |

Marginal Abatement Cost (MAC): $$MAC = -\frac{\partial C}{\partial E} > 0$$

The cost of reducing emissions by one more ton. In an equilibrium model, the carbon price equals the MAC at the emissions constraint.

Revenue recycling options (lump-sum rebates, labor tax cuts, green investment) profoundly affect the economic outcome — see Ch 11.


Chapter 9: Power Sector and Flexibility

9.1 Why Power Gets Special Treatment

The electricity sector is modeled in detail because:

  • Central to decarbonization (electrification of transport, heat)
  • Highly capital-intensive (long-lived assets)
  • Complex operations (real-time balancing)
  • Subject to extensive regulation

9.2 Generation Technologies

| Technology | Type | Dispatchable? | Capacity Factor |
|---|---|---|---|
| Coal | Thermal | Yes | 40-85% |
| Natural gas CCGT | Thermal | Yes | 30-60% |
| Nuclear | Thermal | Baseload | 80-95% |
| Hydro (reservoir) | Renewable | Yes | 30-50% |
| Wind onshore | Renewable | No | 20-35% |
| Wind offshore | Renewable | No | 35-50% |
| Solar PV | Renewable | No | 10-25% |
| Battery storage | Storage | Yes | N/A |

Dispatchable: Can increase/decrease output on demand (gas, coal, hydro)
Variable (non-dispatchable): Output depends on weather (wind, solar)
Baseload: Technically dispatchable, but economically optimal to run at constant high output (nuclear)

9.3 Key Technical Parameters

Capacity factor: $$CF = \frac{Actual\ Generation}{Maximum\ Possible\ Generation} = \frac{E}{P \cdot 8760}$$

where E = annual generation (MWh), P = capacity (MW), 8760 = hours/year.

Efficiency: $$\eta = \frac{Electricity\ Output}{Fuel\ Input}$$

Typical efficiencies:

  • Coal: 35-45%
  • CCGT: 55-62%
  • Nuclear: 33-37%
  • Solar/Wind: N/A (no fuel)

9.4 Cost Structure

Capital cost (CAPEX): €/kW installed

  • Nuclear: €4,000-8,000/kW
  • Offshore wind: €2,500-4,000/kW
  • Onshore wind: €1,000-1,500/kW
  • Solar PV: €400-800/kW
  • CCGT: €600-900/kW

Levelized Cost of Electricity (LCOE): $$LCOE = \frac{\sum_t \frac{I_t + M_t + F_t}{(1+r)^t}}{\sum_t \frac{E_t}{(1+r)^t}} = \frac{\text{NPV of lifetime costs}}{\text{NPV of lifetime electricity}}$$

where $I_t$ = investment cost, $M_t$ = operation and maintenance cost, $F_t$ = fuel cost, $E_t$ = electricity generated in year $t$, and $r$ = discount rate.

Where this comes from: Set NPV(revenues) = NPV(costs), assume a constant selling price P, and solve for P. The result is the LCOE — the unique break-even price.
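The break-even property can be checked directly for a toy plant (all figures invented):

```python
# LCOE for a toy plant, verifying the break-even property: selling every MWh
# at the LCOE makes NPV(revenues) equal NPV(costs) by construction.

r = 0.05
capex = 1_000_000.0                     # paid in year 0
opex = [20_000.0] * 25                  # EUR/year, years 1..25
energy = [2_000.0] * 25                 # MWh/year, years 1..25

disc_cost = capex + sum(c / (1 + r) ** t for t, c in enumerate(opex, start=1))
disc_energy = sum(e / (1 + r) ** t for t, e in enumerate(energy, start=1))
lcoe = disc_cost / disc_energy

revenue_npv = sum(lcoe * e / (1 + r) ** t for t, e in enumerate(energy, start=1))
print(round(lcoe, 2))                   # EUR/MWh
print(round(revenue_npv - disc_cost, 6))  # ~0 -- break-even by construction
```

Note the discounting subtlety that trips people up: the denominator discounts physical MWh, not money, which is exactly what makes the LCOE the break-even price.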

9.5 Load and Dispatch

Merit order dispatch: Plants dispatched from lowest to highest marginal cost

Marginal Cost
(€/MWh)
    │              ┌──┐ Peak (gas turbines)
    │         ┌────┘  │
    │    ┌────┘       │ Mid (CCGT)
    │ ───┘            │ Baseload (nuclear, coal)
    │                 │
    └─────────────────┴──── Cumulative Capacity (GW)

Wholesale price = marginal cost of most expensive unit running

9.6 Time Slices vs. Hourly Resolution

This is one of the most consequential modeling choices in energy systems — see Ch 15 for how it affects conclusions about renewable integration.

Time-slice approach (traditional, used in many PE and CGE models):

Time slices = Season × Day type × Hour type

Seasons: Winter, Summer, Intermediate (3)
Day type: Peak day, Average day (2)  
Hour type: Peak, Shoulder, Off-peak, Night (4)

Total: 3 × 2 × 4 = 24 representative periods

Each time slice has typical demand, renewable availability, and a weight (hours it represents). This captures seasonal and diurnal averages efficiently.

Hourly dispatch (high-resolution models like PyPSA, Calliope, SWITCH):

  • Model all 8,760 hours of the year
  • Capture ramps, multi-day low-wind events, duck curves, storage cycling
  • Essential when variable renewable energy (VRE) exceeds ~30-40% of generation

Why the choice matters: Time-slice models average away extremes. A week of low wind followed by a storm is invisible to a model that only sees "winter average." At high VRE penetrations, these extremes drive storage needs, curtailment, and system costs.

9.7 Storage and Flexibility

As VRE penetration grows, flexibility becomes the binding constraint:

Short-duration storage (hours):

  • Lithium-ion batteries: 1-4 hours, declining costs
  • Pumped hydro: 6-12 hours, mature technology

Long-duration storage (days to weeks):

  • Hydrogen electrolysis + fuel cells/turbines
  • Compressed air energy storage (CAES)
  • Flow batteries

Other flexibility sources:

  • Demand response (shift consumption to match supply)
  • Grid interconnection (spatial smoothing)
  • Flexible thermal generation (gas peakers)
  • Vehicle-to-grid (EVs as distributed storage)

Storage economics: $$LCOS = \frac{CAPEX_{storage} + \sum_t OPEX_t / (1+r)^t}{\sum_t E_{discharged,t} / (1+r)^t}$$

Storage value depends on price spreads — the gap between expensive and cheap hours. As storage saturates, it arbitrages away the spreads that justify its existence. Models must capture this feedback.
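
As a quick numeric check on the LCOS formula, here is a sketch with placeholder values for a hypothetical 10 MWh battery cycled daily; real CAPEX and OPEX figures vary widely.

```python
# Minimal LCOS sketch following the formula above. All inputs are
# illustrative placeholder values, not real battery data.

def lcos(capex, opex_per_year, energy_per_year, lifetime, r):
    """Levelized cost of storage, in currency units per MWh discharged."""
    disc_opex = sum(opex_per_year / (1 + r) ** t for t in range(1, lifetime + 1))
    disc_energy = sum(energy_per_year / (1 + r) ** t for t in range(1, lifetime + 1))
    return (capex + disc_opex) / disc_energy

# Hypothetical 10 MWh battery cycled daily: ~3,650 MWh discharged per year
print(round(lcos(capex=2_000_000, opex_per_year=20_000,
                 energy_per_year=3_650, lifetime=15, r=0.07), 1))
```

Note that discounting the denominator (energy) as well as the costs is what makes LCOS comparable to a per-MWh price.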

9.8 Sector Coupling

Sector coupling links electricity with other energy carriers:

┌──────────────┐     Electrolysis     ┌──────────────┐
│  Electricity │  ──────────────────► │   Hydrogen   │
│   (power)    │  ◄────────────────── │              │

└──────────────┘     Fuel cells       └──────────────┘
       │                                     │
       │ Heat pumps              Industrial heat
       ▼                                     ▼
┌──────────────┐                   ┌──────────────┐
│     Heat     │                   │   Industry   │
│  (buildings) │                   │              │
└──────────────┘                   └──────────────┘
       ▲                                     ▲
       │ CHP                        Synthetic fuels
┌──────────────┐                   ┌──────────────┐
│   Gas grid   │ ───────────────── │  Transport   │
└──────────────┘   Gas/H2 blend    └──────────────┘

Why sector coupling matters:

  • Electricity becomes the backbone of decarbonization
  • Flexibility can come from other sectors (heat storage is cheaper than electricity storage)
  • Optimizing sectors in isolation misses synergies — a combined power-heat-hydrogen model can find solutions invisible to separate models

9.9 Electricity Pricing

Wholesale market pricing (competitive): The wholesale price equals the marginal cost of the most expensive plant needed to meet demand.

The Ramsey-Boiteux rule for regulated infrastructure (grids): $$\frac{P_i - MC_i}{P_i} = \frac{k}{\varepsilon_i}$$

where $\varepsilon_i$ is consumer group $i$'s price elasticity of demand and $k$ is a constant set so that total revenue covers fixed costs. Charge higher markups to consumers with lower elasticity, minimizing total deadweight loss while recovering fixed costs.

Retail price decomposition: $$P_{retail} = P_{wholesale} + T_{network} + T_{policy} + Taxes$$


Chapter 10: Technology and Adoption

10.1 Why Technology Detail Matters

Energy transitions are fundamentally about technology change:

  • From coal to gas to renewables
  • From internal combustion to electric vehicles
  • From gas boilers to heat pumps

Modeling technology choice and evolution is crucial for policy analysis.

10.2 Technology Vintages and Stock Turnover

Energy-using capital has long lifetimes:

  • Power plants: 30-60 years
  • Buildings: 50-100 years
  • Vehicles: 10-20 years
  • Appliances: 5-20 years

Stock turnover model: $$K_t = K_{t-1} \cdot (1 - \delta) + I_t$$ where $K_t$ is the installed capital stock, $\delta$ the retirement rate, and $I_t$ new investment.

Implication: Even with zero new fossil fuel investment, old plants keep running. Decarbonization takes decades.
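
A three-line simulation of the stock-turnover equation makes the point: with a 3% retirement rate (an illustrative value), roughly 40% of today's fleet is still operating after 30 years even with zero new investment.

```python
# Sketch of the stock-turnover equation above: with investment I_t = 0,
# existing capacity decays only at rate delta, so the fossil stock
# lingers for decades. Numbers are illustrative.

def stock_path(k0, delta, investment, years):
    k = k0
    path = [k]
    for t in range(years):
        k = k * (1 - delta) + investment
        path.append(k)
    return path

coal = stock_path(k0=100.0, delta=0.03, investment=0.0, years=30)
print(round(coal[30], 1))  # ~40% of the fleet still standing after 30 years
```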

10.3 Learning Curves

Experience curve: Technology costs decline as cumulative production increases

$$C_t = C_0 \cdot \left(\frac{Q_t}{Q_0}\right)^{-b}$$

where $C_t$ = unit cost at time $t$, $Q_t$ = cumulative production, $b$ = learning parameter.

Learning rate (LR): Percentage cost reduction per doubling of capacity $$LR = 1 - 2^{-b}$$

Typical learning rates:

Technology                     | Learning rate
-------------------------------|-------------------------------------
Solar PV modules               | 20-24%
Wind turbines                  | 10-15%
Batteries (Li-ion)             | 15-20%
Nuclear                        | ~0% (no learning in recent decades)
CCS (Carbon Capture & Storage) | Uncertain (5-15%?)

Caveat: these are historical estimates that may not persist. Solar's high learning rate is well documented but unprecedented in the history of energy technologies.
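
The two formulas combine into a one-line cost projection. The numbers below are illustrative, not estimates for any specific technology.

```python
import math

# Sketch: project unit cost from the experience curve above. A 20% learning
# rate implies b = -log2(1 - 0.20) ~ 0.322, i.e. costs fall 20% per
# doubling of cumulative production. Inputs are illustrative.

def unit_cost(c0, q0, q, learning_rate):
    b = -math.log2(1 - learning_rate)
    return c0 * (q / q0) ** (-b)

# Three doublings of cumulative capacity at LR = 20%: 0.8^3 = 51.2% of c0
print(round(unit_cost(c0=1000.0, q0=100.0, q=800.0, learning_rate=0.20), 1))
```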

10.4 S-Curves and Diffusion Dynamics

Technology adoption rarely follows the smooth market-share transitions that logit models predict. Instead, it often follows an S-curve (logistic growth):

$$A(t) = \frac{K}{1 + e^{-r(t - t_0)}}$$

where $K$ = saturation level, $r$ = growth rate, $t_0$ = inflection point.

The Bass diffusion model distinguishes two adoption mechanisms:

  • Innovators (coefficient $p$): adopt independently, driven by external information
  • Imitators (coefficient $q$): adopt because others have, driven by word-of-mouth

$$\frac{dN}{dt} = (p + q \cdot N/K)(K - N)$$
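
The Bass equation is easy to integrate numerically. The sketch below uses Euler steps; the parameter values ($p = 0.03$, $q = 0.4$) are illustrative, in the range typically estimated for durable goods.

```python
# Sketch: Euler integration of the Bass diffusion equation above.
# p (innovation) and q (imitation) values are illustrative assumptions.

def bass_path(p, q, K, years, dt=0.1):
    n, path = 0.0, [0.0]
    steps_per_year = int(round(1 / dt))
    for _ in range(years):
        for _ in range(steps_per_year):
            n += dt * (p + q * n / K) * (K - n)
        path.append(n)
    return path

adoption = bass_path(p=0.03, q=0.4, K=100.0, years=20)
# Slow start, fast middle, saturation near K:
print(round(adoption[5], 1), round(adoption[10], 1), round(adoption[20], 1))
```

The printed path shows the S-shape directly: a small share after 5 years, most of the transition in the middle decade, near-saturation by year 20.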

Why S-curves matter:

  • Early adoption is slow (few adopters, high costs, limited infrastructure)
  • Once a critical mass is reached, positive feedbacks kick in (costs fall via learning, infrastructure expands, social norms shift)
  • Adoption accelerates rapidly through the middle
  • Then saturates as easy adopters are exhausted

This creates tipping points — thresholds beyond which transition becomes self-reinforcing. Models that don't capture this can dramatically underestimate transition speed once it begins. See Ch 17.

10.5 Path Dependence and Carbon Lock-in

Path dependence means that early choices constrain later options. In energy systems:

  • Infrastructure (pipelines, refineries, grid topology) embodies past choices
  • Skills and supply chains develop around incumbent technologies
  • Institutions and regulations reflect existing systems
  • Consumer habits and expectations form around what exists

Carbon lock-in (Unruh 2000) is the specific case where fossil fuel infrastructure creates self-reinforcing barriers to clean alternatives, even when the clean alternative is cheaper on a levelized basis.

Implications for modeling:

  • Models that allow free technology switching underestimate lock-in
  • Models with vintage capital and infrastructure constraints are more realistic
  • Policy timing matters: early action prevents lock-in, delayed action requires costly stranded assets

10.6 Intangible Costs

Real technology adoption is slower than pure cost optimization would predict. Intangible costs capture this:

$$C_i^{total} = C_i^{financial} + \mu_i$$ where $\mu_i$ is the intangible cost premium for technology $i$. Common components:

Factor                | Description
----------------------|----------------------------------------------
Risk premium          | Uncertainty about new technology performance
Hidden costs          | Installation complexity, training needs
Hassle factor         | Time and effort to research, purchase
Financing constraints | Limited access to capital
Information barriers  | Lack of awareness
Behavioral inertia    | Preference for familiar options

10.7 Autonomous vs. Price-Induced Technical Change

AEEI (Autonomous Energy Efficiency Improvement):

  • Efficiency improves over time independent of prices
  • Exogenous parameter (typically 0.5-1.5%/year)

PIEEI (Price-Induced Energy Efficiency Improvement):

  • Higher energy prices → more investment in efficiency
  • Endogenous response to policy
  • Captured through substitution elasticities

Debate: How much is autonomous vs. induced? Important for policy analysis — if most improvement is autonomous, carbon prices matter less.


End of Part III: Energy Systems

Continue to Part IV: The Questions — the heart of this primer.


Part IV: The Questions

Each chapter in this part asks a concrete policy question, shows how different modeling paradigms tackle it, and explains why they often disagree. For standalone summaries of each paradigm, see Appendix E: Paradigm Profiles.


Chapter 11: What Does a Carbon Price Do to the Economy?

11.1 The Question

A government imposes a €50/tCO₂ carbon price. What happens to GDP, employment, competitiveness, and trade? Does the economy shrink, or can clever revenue recycling make the policy a net positive? The answer depends profoundly on which model you ask.

11.2 The Approaches

CGE models (e.g., GEM-E3, GTAP-E, ENV-Linkages — see Appendix E: CGE):

The carbon tax enters as a per-unit (excise) tax on each fossil fuel, proportional to its carbon content: $$P_f^{eff} = P_f + \tau \cdot \epsilon_f$$ where $\tau$ is the carbon price and $\epsilon_f$ is fuel $f$'s emission factor.

The model finds a new general equilibrium where all markets clear simultaneously. Because the economy starts at or near full employment (Walrasian or efficiency-wage), resources freed from carbon-intensive sectors must be reabsorbed elsewhere. GDP falls because the carbon tax is a distortion (unless it exactly offsets the externality).

Revenue recycling is crucial: the same carbon price can produce very different GDP impacts depending on how the revenue is used. A lump-sum rebate returns the money without correcting any existing distortion; cutting labor taxes removes an existing distortion, potentially yielding a "double dividend."
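
A minimal sketch of the effective-price formula, using round illustrative emission factors (tCO₂ per MWh of fuel energy): the same €50/t price raises coal's price far more than gas's, which is exactly the substitution signal the CGE model propagates through the economy.

```python
# Sketch of P_f^eff = P_f + tau * epsilon_f. Emission factors and fuel
# prices are round illustrative numbers.

TAU = 50.0  # carbon price, EUR per tCO2

fuels = {                    # fuel: (price EUR/MWh, emission factor tCO2/MWh)
    "coal":        (10.0, 0.34),
    "natural_gas": (25.0, 0.20),
    "oil":         (40.0, 0.27),
}

for name, (p, ef) in fuels.items():
    p_eff = p + TAU * ef
    print(f"{name}: {p:.0f} -> {p_eff:.0f} EUR/MWh "
          f"(+{100 * TAU * ef / p:.0f}%)")
```

Because coal is both cheap and carbon-intensive, its percentage price increase dwarfs that of gas and oil, shifting relative prices rather than just the overall level.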

Econometric simulation models (e.g., E3ME — see Appendix E: Econometric Simulation):

No equilibrium assumption. The model estimates aggregate demand, investment, and trade equations from historical data. The economy can have involuntary unemployment and demand-deficient output. A carbon tax:

  • Raises energy costs (negative)
  • But recycled revenue stimulates demand (positive)
  • And induced investment in clean technology has multiplier effects

If the economy has slack, the demand stimulus can exceed the energy cost burden, producing positive GDP effects.

11.3 Worked Comparison

Setup: EUR 50/tCO₂ carbon price applied across the EU, revenue recycled via labor tax cuts.

Outcome           | CGE model result           | Econometric model result
------------------|----------------------------|------------------------------
GDP impact        | -0.3% to -0.8%             | -0.2% to +0.5%
Employment        | Slight decrease or neutral | Moderate increase
Carbon leakage    | 5-20%                      | 5-15%
Electricity price | +15-25%                    | +15-25%
Mechanism         | Resource reallocation cost | Demand stimulus + investment

11.4 Why They Disagree

The divergence traces to a few core assumptions:

Assumption           | CGE                                         | Econometric
---------------------|---------------------------------------------|--------------------------------------------
Labor market         | Full employment (or efficiency wage)        | Can have persistent unemployment
Output determination | Supply-side (production capacity)           | Demand-side (spending drives output)
Investment response  | Crowds out other investment (fixed savings) | Can increase total investment (multiplier)
Expectations         | Rational/adaptive                           | Based on historical patterns
Time dynamics        | Comparative statics or recursive            | Path-dependent, cumulative causation

The crux: In a CGE world, the economy is already using all its resources efficiently. Any tax is a distortion that reduces efficiency. In an econometric world, idle resources exist. A tax that redirects spending toward clean investment can mobilize these idle resources, growing the pie.

Neither is "wrong" — they represent different views of how the economy works. The question is which view better describes the economy at the time the policy is implemented.

11.5 The Frontier

  • Heterogeneous-firm trade models (Melitz): Firms differ in productivity. Carbon pricing causes the least productive (most polluting) firms to exit, raising average productivity. This can offset some GDP loss — an effect invisible to standard Armington trade.
  • CBAM modeling: The EU's Carbon Border Adjustment Mechanism (phasing in from 2026) changes the leakage calculus. Models must now handle border tariffs linked to embodied carbon.
  • Green industrial policy: Carbon pricing interacts with subsidies (IRA in the US, Green Deal Industrial Plan in EU). Modeling the interaction requires capturing both price signals and quantity instruments.

Chapter 12: What Energy Mix Do We Get in 2050?

12.1 The Question

How much wind, solar, nuclear, gas, and hydrogen will we use in 2050? How much investment is needed? The answer shapes infrastructure planning, industrial strategy, and climate targets.

12.2 The Approaches

LP/MILP cost-minimization models (e.g., TIMES/MARKAL, OSeMOSYS — see Appendix E: LP Energy Models):

Minimize total system cost, where $X_{k,t}$ is the activity (or new capacity) of technology $k$ in period $t$, $C_{k,t}$ its unit cost, and $r$ the discount rate: $$\min \sum_t \sum_k \frac{C_{k,t} \cdot X_{k,t}}{(1+r)^t}$$

subject to demand satisfaction, capacity constraints, emission limits, etc.

The solution is the "socially optimal" investment plan — the cheapest way to meet all constraints. Technologies are chosen purely on cost.
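
Production models like TIMES solve this as a large LP with off-the-shelf solvers. The toy below strips it to two technologies and one emissions cap, small enough that the cost-minimizing solution is analytic: fill up with the cheap, dirty technology until the cap binds, then switch to the clean one. All numbers are illustrative.

```python
# Toy version of the cost-minimization above: two technologies meet fixed
# demand under an emissions cap. Costs and the emission factor are
# illustrative placeholder values.

def cheapest_mix(demand, cap, cost_gas, cost_wind, ef_gas):
    """Return (gas_MWh, wind_MWh, total_cost) minimizing cost under the cap."""
    max_gas = min(demand, cap / ef_gas)   # the emissions cap limits gas output
    gas = max_gas if cost_gas < cost_wind else 0.0
    wind = demand - gas
    return gas, wind, gas * cost_gas + wind * cost_wind

gas, wind, cost = cheapest_mix(demand=100.0, cap=20.0,
                               cost_gas=40.0, cost_wind=60.0, ef_gas=0.4)
print(gas, wind, cost)  # a cap of 20 tCO2 allows only 50 MWh of gas
```

Tightening `cap` in this toy is the LP analogue of a stricter emission limit: the shadow price of the cap constraint is the model's implicit carbon price.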

Hybrid partial equilibrium models (e.g., PRIMES, NEMS, WEM — see Appendix E: Partial Equilibrium / Hybrid):

Add behavioral realism on top of cost optimization:

  • Consumer heterogeneity (discrete choice / Weibull market shares)
  • Intangible costs (risk aversion, hassle factors)
  • Market equilibrium (supply meets demand at endogenous prices)
  • Myopic or limited-foresight decision-making

The result: more gas, less nuclear than the cost-optimal model predicts, because real investors are risk-averse and consumers face non-financial barriers.

High-resolution energy models (e.g., PyPSA, Calliope, SWITCH — see Appendix E: High-Resolution Energy):

Similar optimization structure to LP models, but with hourly (or sub-hourly) time resolution and detailed network representation. This reveals integration costs — curtailment, storage needs, transmission bottlenecks — that are invisible to time-slice models.

12.3 Worked Comparison

Setup: EU net-zero by 2050, similar technology costs and demand projections.

Outcome        | Cost-minimization (LP)   | Behavioral hybrid (PE)    | High-resolution
---------------|--------------------------|---------------------------|---------------------------------
Solar capacity | Very high                | High                      | Very high (but with curtailment)
Nuclear        | Significant (if allowed) | Less (risk premium)       | Similar to LP
Natural gas    | Phase-out by 2040        | Persists longer (inertia) | Phase-out but with gas peakers
Storage (GW)   | Moderate                 | Moderate                  | 2-3× more (captures extremes)
System cost    | Lowest (by construction) | 5-15% higher              | 10-20% higher than LP

12.4 Why They Disagree

The discount rate is arguably the single most important parameter:

  • Social discount rate (3-5%): Favors capital-intensive technologies (nuclear, renewables) — society is patient
  • Private discount rate (8-15%): Favors low-capital technologies (gas) — investors want quick returns

A single percentage point change in the discount rate can shift billions of euros in investment between technologies.

Time resolution drives the storage answer: a time-slice model that sees "winter average" demand and "winter average" wind cannot reveal that a two-week cold, still period requires massive storage or backup. The hourly model sees this event and plans for it.

Behavioral realism drives the transition speed: a cost optimizer switches instantly when clean technology crosses the cost threshold. A behavioral model switches gradually because of inertia, risk aversion, and information barriers.

12.5 The Frontier

  • Weather-year sampling: Instead of one representative year, run multiple historical weather years (e.g., 30 years) to capture rare low-wind/high-demand events
  • Sector coupling: Optimize electricity + heat + hydrogen + transport together — the cheapest 2050 system may look very different from the sum of sector-optimal plans
  • Endogenous learning: Technology costs depend on deployment, which depends on the plan. This creates feedback loops that standard LP cannot handle without iterative or nonlinear extensions

Chapter 13: What Will Oil and Gas Cost?

13.1 The Question

What will fossil fuel prices be in 10, 20, 30 years? Energy prices drive investment decisions, government revenue projections, geopolitical strategy, and the pace of the energy transition. Getting them systematically wrong has enormous consequences.

13.2 The Approaches

Econometric simulation (e.g., PROMETHEUS, Oxford Global Energy Model — see Appendix E: Econometric Simulation):

Estimate demand and supply equations from historical data, including:

  • Income and price elasticities for demand
  • Hotelling-type resource depletion for supply
  • OPEC behavior (dominant firm model)
  • Technology learning for renewables (substitution pressure)

Run Monte Carlo over uncertain parameters to produce probability distributions of future prices — not point forecasts, but P10/P50/P90 ranges.
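
A stylized sketch of that Monte Carlo step: the reduced-form price equation and the parameter distributions below are invented for illustration, whereas a real model like PROMETHEUS draws from estimated demand and supply equations.

```python
import random

# Sketch of the Monte Carlo idea above: draw uncertain drivers and report
# P10/P50/P90 of a 2040 oil price. The toy price equation and parameter
# ranges are illustrative assumptions, not an estimated model.

random.seed(42)  # fixed seed for reproducibility

def price_2040():
    demand_growth = random.gauss(0.005, 0.01)  # annual demand growth rate
    supply_shock = random.gauss(0.0, 0.15)     # cumulative supply surprise
    base = 70.0
    # Crude reduced form: price scales with demand pressure and supply shocks
    return base * (1 + demand_growth) ** 15 * (1 + supply_shock)

draws = sorted(price_2040() for _ in range(10_000))
p10, p50, p90 = (draws[int(q * len(draws))] for q in (0.10, 0.50, 0.90))
print(f"P10 ${p10:.0f}  P50 ${p50:.0f}  P90 ${p90:.0f}")
```

The output is a range, not a forecast: the spread between P10 and P90 is itself the decision-relevant result.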

Scenario-based expert judgment (e.g., IEA World Energy Outlook):

Construct 3-5 internally consistent narratives ("Current Policies," "Stated Policies," "Net Zero"). Each scenario implies a price path. No formal probabilities — the scenarios are "what-if" analyses.

Fundamentals-based market models:

Bottom-up models of supply costs (field-by-field for oil, plant-by-plant for gas) combined with demand projections. Strong on short-to-medium term, weaker on structural breaks.

13.3 Worked Comparison

Setup: Oil price projection for 2040.

Approach                             | Central estimate             | Range                | Treatment of uncertainty
-------------------------------------|------------------------------|----------------------|--------------------------------
Econometric simulation (Monte Carlo) | $75/bbl                      | $45-130/bbl (80% CI) | Formal probability distribution
IEA WEO scenarios                    | $60-90/bbl                   | Three scenarios      | No probabilities
Market forward curves                | $70/bbl (at time of writing) | Implied volatility   | Extends only ~5 years

13.4 Why They Disagree

Structural breaks are the fundamental problem. Historical regressions missed:

  • The shale revolution (2010s) — US oil production doubled in a decade
  • COVID demand collapse (2020) — unprecedented demand shock
  • Russia-Ukraine supply disruption (2022) — geopolitical supply shock

Econometric models extrapolate historical relationships. When the structure changes (new technology, new geopolitics), the relationships break.

OPEC behavior is inherently unpredictable — it's a strategic game, not a natural process. Different assumptions about OPEC cooperation vs. competition can swing price projections by $30-50/bbl.

13.5 The Frontier

  • ML-augmented forecasting: Machine learning on high-frequency data (satellite imagery of storage tanks, shipping traffic, real-time industrial activity) to improve short-term price prediction
  • Agent-based oil market models: Model OPEC members as individual strategic agents with different cost structures, fiscal needs, and discount rates
  • Transition risk: As electric vehicle adoption accelerates, oil demand may peak and decline. Models must capture the possibility of a "death spiral" where falling demand → lower prices → reduced investment → future supply crunch

Chapter 14: Who Wins, Who Loses?

14.1 The Question

Climate policy creates winners and losers. Who bears the cost of a carbon tax — low-income households who spend more on heating, or shareholders of fossil fuel companies? Who benefits from renewable energy subsidies — rooftop solar owners or grid-connected renters? Getting the distributional answer right determines whether policy is politically sustainable.

14.2 The Approaches

CGE with representative agent (standard approach — see Appendix E: CGE):

Reports aggregate welfare (EV) per region. Can show that Germany loses more than France, or that developing countries bear higher costs. But within each region, there is only one "representative household" — completely blind to income distribution.

CGE + microsimulation coupling (e.g., CGE → EUROMOD):

Two-stage approach:

  1. CGE provides macro changes (price levels, wage rates, employment by sector)
  2. Microsimulation model maps these onto a representative household survey (e.g., EU-SILC) to compute impacts by income decile

This can reveal that a carbon tax is regressive (bottom quintile loses 2% of income, top quintile loses 0.5%) even when the aggregate EV is positive (net social gain).

Agent-based models (see Appendix E: Agent-Based / Complexity):

Heterogeneity is built in from the start. Individual households have different incomes, housing types, commute distances, vehicle ownership, and behavioral responses. Distribution is not a post-hoc add-on — it's intrinsic to the model dynamics.

Heterogeneous-agent macro models (HANK):

A recent development in macroeconomics that introduces household heterogeneity within a general equilibrium framework. Households differ in income, wealth, and marginal propensities to consume. Monetary and fiscal policy have distributional effects through income and wealth channels.

14.3 Worked Comparison

Setup: €50/tCO₂ carbon tax, revenue recycled as equal per-capita rebate.

Outcome                | Representative-agent CGE    | CGE + microsimulation     | Agent-based
-----------------------|-----------------------------|---------------------------|------------------------
Aggregate welfare      | +0.1% GDP (double dividend) | +0.1% GDP                 | Depends on run
Bottom income quintile | Not computed                | -1.5% to -2% income       | -1% to -3%
Top income quintile    | Not computed                | +0.2% to +0.5% income     | +0.1% to +0.8%
Spatial variation      | By country                  | By country + income group | By individual location

14.4 Why They Disagree

The representative agent CGE says "net positive." The microsimulation reveals that low-income households are worse off because:

  • They spend a larger share of income on energy (energy is a necessity)
  • The per-capita rebate doesn't fully compensate (energy spending is regressive)
  • They have fewer options to switch (can't afford an EV or heat pump)

Which answer goes to parliament? The aggregate welfare gain justifies the policy on efficiency grounds. The distributional impact reveals a political and ethical problem. Both are "correct" — they answer different questions.

14.5 The Frontier

  • Spatial general equilibrium: Households differ not just by income but by location. A carbon tax hits rural commuters harder than urban residents. Models that combine spatial and income heterogeneity provide richer policy guidance.
  • Behavioral heterogeneity: Not just different incomes but different decision-making — some households are forward-looking, others are myopic. Some respond to incentives, others don't. ABMs naturally accommodate this.
  • Political economy feedback: Distributional impacts affect political support, which affects policy durability, which affects long-term outcomes. Models that ignore this feedback may predict policies that never survive implementation.

Chapter 15: Can the Grid Handle 80% Renewables?

15.1 The Question

Many net-zero scenarios require 70-90% of electricity from variable renewables (wind and solar). Is this operationally feasible? How much storage is needed? What happens during a two-week European wind drought in January? The answer depends critically on temporal and spatial resolution.

15.2 The Approaches

Time-slice models (traditional PE models, some CGE models — see Appendix E: Partial Equilibrium / Hybrid):

Represent the year as 12-24 representative periods. Each period has average demand, average renewable availability, and average costs. The model finds the cost-minimizing capacity mix that meets demand in all time slices.

Hourly dispatch models (e.g., PyPSA, Calliope, SWITCH, Plexos — see Appendix E: High-Resolution Energy):

Model all 8,760 hours of the year (or more — some use 5-minute resolution). Include:

  • Ramping constraints (how fast plants can change output)
  • Network constraints (power flows, congestion)
  • Storage cycling (charge/discharge across hours, days, weeks)
  • Renewable variability from weather data

15.3 Worked Comparison

Setup: 80% renewable electricity, same cost assumptions, same demand.

Outcome                        | Time-slice model (24 periods) | Hourly model (8,760 hours)
-------------------------------|-------------------------------|---------------------------------------------
Storage capacity needed        | X GW                          | 2.5-3.5× X GW
Curtailment                    | <5%                           | 8-15%
Backup gas capacity            | Moderate                      | Higher (for rare events)
Total system cost              | Lower (optimistic)            | 15-30% higher
"Dunkelflaute" (dark doldrums) | Invisible                     | Visible; drives long-duration storage need

15.4 Why They Disagree

Temporal averaging hides extremes. A time-slice model that represents "winter peak" as the average of the highest-demand hours never sees the specific week in January where wind across all of Northern Europe drops to 5% of capacity while demand spikes during a cold snap. This event — called Dunkelflaute (German: "dark doldrums") — occurs every few years and can last 5-14 days.

The time-slice model builds enough storage for the average peak-to-trough swing. The hourly model builds enough for the worst multi-day event in the weather dataset.

Network constraints matter too. A time-slice model typically has a "copper plate" assumption (electricity flows freely). An hourly model with network representation reveals that 80% national renewables may still require massive transmission investment or local curtailment.

15.5 The Frontier

  • Multi-weather-year optimization: Run 30+ historical weather years to capture rare events; optimize capacity that performs well across all years
  • Stochastic unit commitment: Optimize dispatch under uncertain renewable forecasts (what you expect tomorrow vs. what actually happens)
  • Sector-coupled models: Jointly optimize power + heat + transport + hydrogen. Excess renewable electricity can make hydrogen (seasonal storage) or charge heat storage (cheaper than batteries). The optimal power system depends on what other sectors can absorb.
  • Climate change impacts on weather: Future wind and solar patterns may differ from historical data. Some models now use climate model projections of future weather to assess renewable resource adequacy under 2°C or 3°C warming.

Chapter 16: What Is the Social Cost of Carbon?

16.1 The Question

What is the dollar value of emitting one additional ton of CO₂? This single number — the Social Cost of Carbon (SCC) — underpins cost-benefit analysis of any climate policy. The US government uses it to evaluate regulations; the EU implicitly targets it through the ETS. Yet credible estimates range from $20 to $200+/tCO₂.

16.2 The Approaches

Integrated Assessment Models (IAMs) (e.g., DICE/RICE, PAGE, FUND, REMIND — see Appendix E: Integrated Assessment Models):

Couple three components:

  1. Economic module: GDP growth, emissions, abatement costs
  2. Climate module: Emissions → concentrations → temperature
  3. Damage function: Temperature → economic damage (% GDP lost)

$$SCC = \frac{\partial}{\partial E_t} \sum_{s=t}^{T} \frac{D(T_s)}{(1+r)^{s-t}}$$

The SCC is the present discounted value of all future damages caused by one extra ton emitted today.
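
A deliberately crude sketch of the formula: assume one extra ton causes a flat $2/year damage flow for 300 years (a stand-in for the full climate and damage modules) and vary only the discount rate. The damage number is an assumption; the point is how strongly the rate alone moves the SCC.

```python
# Sketch of the SCC as a discounted damage stream, per the formula above.
# The $2/year damage flow and 300-year horizon are illustrative assumptions.

def scc(annual_damage, horizon, r):
    """Present value of a constant annual damage flow, discounted at r."""
    return sum(annual_damage / (1 + r) ** s for s in range(1, horizon + 1))

for r in (0.05, 0.03, 0.014):
    print(f"r = {r:.1%}: SCC = ${scc(annual_damage=2.0, horizon=300, r=r):.0f}/tCO2")
```

Holding damages fixed, moving from a 5% market rate to a 1.4% prescriptive rate roughly triples the SCC in this toy, which previews the Nordhaus-Stern divergence discussed below.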

Empirical damage estimation (Burke et al. 2015, Kalkuhl & Wenz 2020):

Estimate the temperature-GDP relationship from historical panel data across countries and years. These empirical estimates suggest damages may be 3-6× larger than the damage functions in DICE, because they capture effects (reduced agricultural productivity, lower labor productivity, increased mortality) that the parametric damage functions undercount.

16.3 Worked Comparison

Setup: SCC of one ton CO₂ emitted in 2025.

Model/Approach                          | SCC estimate ($/tCO₂) | Key driver
----------------------------------------|-----------------------|------------------------------------------------
DICE (Nordhaus, 3% discount rate)       | $30-50                | Moderate damage function, market discount rate
PAGE (Stern Review, 1.4% discount rate) | $80-120               | Same damages, low discount rate
DICE with empirical damages (Burke)     | $100-200+             | Higher damage function, market rate
FUND                                    | $10-30                | Low damage function, benefits of mild warming
US EPA (2023 interim)                   | ~$190                 | Updated damages, lower discount rate

16.4 Why They Disagree

Two parameters dominate:

The damage function. DICE assumes 3°C warming reduces global GDP by ~2.1%. Burke et al.'s empirical estimates suggest 10-23% GDP loss. This is the most consequential disagreement in all of climate economics. The damage function is a guess — we have no historical experience of 3°C warming, so all estimates extrapolate.

The discount rate. Should we use:

  • Market rate (4-5%): How financial markets actually value future vs. present money
  • Prescriptive rate (1-2%): How society should value future generations' welfare (Stern's argument: pure time preference near zero, because future people matter almost as much as us)

This is an ethical choice disguised as a technical parameter. Nordhaus and Stern use similar models but get 4× different SCC values because they disagree about how much we should sacrifice today for people in 2100.

Tipping points and fat tails. Weitzman (2009) argued that the possibility of catastrophic warming (>6°C) dominates the expected-value calculation. If the damage function is convex and the temperature distribution has fat tails, the SCC can be infinite. Standard IAMs, which assume smooth, moderate damages, miss this entirely.

16.5 The Frontier

  • Persistent vs. transient damages: Does warming permanently reduce the level of GDP, or does it reduce the growth rate? If growth-rate effects, damages compound over centuries and the SCC multiplies by 5-10×
  • Non-market damages: Biodiversity loss, cultural heritage, forced migration — real but hard to monetize
  • Tipping point modules: New IAMs include probabilistic tipping points (ice sheet collapse, Amazon dieback, permafrost methane) as discrete, irreversible events
  • Equity weighting: A dollar of damage to a poor country should count more than a dollar to a rich country (declining marginal utility of income). This can double the SCC.

Chapter 17: How Fast Will the Transition Happen?

17.1 The Question

When will EVs dominate? How quickly can heat pumps replace gas boilers? When does green hydrogen become competitive? The speed of the energy transition determines whether climate targets are achievable and how much stranded fossil infrastructure we create.

17.2 The Approaches

Discrete choice / logit models (used in many PE models — see Ch 6):

Technology adoption follows a Weibull/logit market-share function. As the cost gap between clean and dirty technologies widens (through carbon pricing or learning), market share shifts gradually. The transition speed is governed by the heterogeneity parameter (ν) and the rate of cost change.
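
A sketch of the logit/Weibull share rule (see Ch 6). The heterogeneity exponent and vehicle costs below are illustrative: a high ν means near-homogeneous consumers and sharp switching; a low ν spreads adoption across a wide cost gap.

```python
# Sketch: logit/Weibull market shares as a function of relative cost.
# The exponent nu and the cost figures are illustrative assumptions.

def market_shares(costs, nu):
    """Share of each option, proportional to cost^(-nu)."""
    weights = {k: c ** (-nu) for k, c in costs.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

for ev_cost in (40_000, 35_000, 30_000):
    s = market_shares({"ev": ev_cost, "ice": 32_000}, nu=8)
    print(f"EV at {ev_cost}: share = {s['ev']:.0%}")
```

Even when the EV undercuts the incumbent, the share rises smoothly rather than jumping to 100%: that smoothness is exactly the absence of positive feedback discussed below.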

S-curve / diffusion models (Bass model, logistic — see Section 10.4):

Adoption follows a logistic S-curve. Slow at first (early adopters), then accelerating through the inflection point, then saturating. The key parameters are the innovation rate ($p$), imitation rate ($q$), and market potential ($K$). Historical technology transitions (cars, TVs, smartphones, LEDs) consistently follow S-curves.

Agent-based models (see Appendix E: Agent-Based / Complexity):

Individual agents decide based on local information, neighbor effects, and heterogeneous constraints. Tipping points emerge from interaction:

  • A few EV adopters create demand for charging stations
  • More stations make EVs viable for more people
  • Network effect accelerates adoption
  • Once infrastructure crosses a threshold, adoption goes exponential

17.3 Worked Comparison

Setup: EV market share in Europe, 2025-2040, with steadily declining battery costs.

Approach                           | 2030 market share (new sales) | 2035   | 2040   | Pattern
-----------------------------------|-------------------------------|--------|--------|--------------------------------
Logit (gradual)                    | 35-45%                        | 55-65% | 70-80% | Smooth, monotonic
S-curve (Bass)                     | 30-40%                        | 60-80% | 85-95% | Slow start, rapid middle
ABM (with infrastructure feedback) | 25-40%                        | 50-90% | 80-98% | Highly nonlinear, regime shift

17.4 Why They Disagree

Logit models have no positive feedback — each buyer decides independently based on cost. The transition is smooth and predictable. This is the most conservative projection.

S-curve models capture imitation and social influence but with fixed parameters. The inflection point is predetermined by the parameter values.

ABMs generate emergent tipping points. The transition can be slow for years, then suddenly accelerate when infrastructure and social norms reach a critical mass. But the timing of the tipping point is sensitive to model specification and initial conditions — different ABM runs can give very different trajectories.

Path dependence matters. Early policy that accelerates adoption (subsidies, charging infrastructure mandates) can shift the tipping point forward by years. A logit model shows a proportional response. An ABM can show a phase transition — the same subsidy that barely moves the needle in 2020 triggers exponential adoption in 2025 because the system is closer to its tipping point.

17.5 The Frontier

  • Reinforcement learning for adaptive policy: Use RL to find policy sequences that are optimal under uncertainty about when tipping points occur
  • Real-time data integration: Vehicle registration databases, satellite imagery of rooftop solar, smart meter data — calibrate adoption models to unfolding reality rather than historical analogies
  • Co-evolution models: Technology, infrastructure, institutions, and behavior co-evolve. Models that capture only one dimension miss the interactions that drive or block transitions
  • Behavioral experiments: Lab and field experiments on technology adoption decisions, providing micro-level data to calibrate agent rules

Chapter 18: Deciding Under Deep Uncertainty

18.1 The Question

How do we design energy and climate policy when the future is fundamentally unknown? We don't know future technology costs, climate sensitivity, economic growth, or geopolitical stability. Yet we must make irreversible investment decisions today. What decision framework should we use?

18.2 The Approaches

Scenario analysis (the traditional approach):

Construct 3-5 internally consistent scenarios (e.g., "High growth, strong policy," "Low growth, weak policy"). Run the model under each. Present the range of outcomes. Decision-makers choose based on judgment.

Monte Carlo / stochastic simulation (see Ch 7):

Assign probability distributions to uncertain inputs. Run the model thousands of times. Report probability distributions of outputs (P10, P50, P90 for each metric).

Stochastic programming (see Section 5.7):

Optimize the decision that's best in expectation across scenarios, with the ability to adjust (recourse) as uncertainty resolves:

$$\min_{x} \mathbb{E}_\xi [f(x, \xi)]$$

First-stage decisions (build which plants) are fixed. Second-stage decisions (how to operate) adapt to the realized scenario.

Robust optimization:

Find the decision that performs best in the worst case:

$$\min_{x} \max_{\xi \in \mathcal{U}} f(x, \xi)$$

No probabilities needed — only a set of plausible futures.
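The difference between the two criteria can be seen on a deliberately tiny example. Everything here is hypothetical — two candidate plans, three scenarios, made-up costs and probabilities — but the selection logic mirrors the formulas above:

```python
# Toy comparison of expected-value (stochastic) vs worst-case (robust)
# plan selection. All numbers are illustrative, not calibrated.

# System cost (EUR/MWh) of each candidate plan under three futures:
# low, mid, and high gas-and-carbon prices.
plans = {
    "gas_heavy":   [40.0, 55.0, 90.0],
    "renew_heavy": [55.0, 60.0, 65.0],
}
probs = [0.3, 0.5, 0.2]  # subjective scenario probabilities

def expected_cost(costs):
    return sum(p * c for p, c in zip(probs, costs))

def worst_case(costs):
    return max(costs)

# Stochastic-programming logic: minimize expected cost
stochastic_choice = min(plans, key=lambda k: expected_cost(plans[k]))
# Robust-optimization logic: minimize worst-case cost (no probabilities)
robust_choice = min(plans, key=lambda k: worst_case(plans[k]))

print(stochastic_choice, robust_choice)
```

The gas-heavy plan wins in expectation (57.5 vs 59.5 EUR/MWh) while the renewables-heavy plan wins on the worst case (65 vs 90) — the same plan is "best" or "worst" depending on the criterion.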

Robust Decision Making (RDM) / Exploratory Modeling:

Run thousands of scenarios spanning a wide uncertainty space. Instead of finding one optimal plan, identify which plans are robust — performing acceptably across the widest range of futures. Visualize vulnerability: under which conditions does a given plan fail?
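A minimal exploratory-modeling sketch in the same spirit: stress a single plan across a wide grid of futures and describe where it fails. The reduced-form cost function and the acceptability threshold are invented for illustration:

```python
# Exploratory modeling sketch: sweep a plan over an uncertainty grid and
# characterize its vulnerability region. plan_cost is a stand-in for a
# full model run; all coefficients and the threshold are made up.
import itertools

def plan_cost(gas_price, carbon_price):
    # Hypothetical reduced-form cost of a gas-heavy plan (EUR/MWh)
    return 20 + 0.8 * gas_price + 0.15 * carbon_price

gas_grid = range(10, 101, 10)      # EUR/MWh_th
carbon_grid = range(0, 201, 20)    # EUR/tCO2
FAIL = 95.0                        # acceptability threshold

failures = [(g, c) for g, c in itertools.product(gas_grid, carbon_grid)
            if plan_cost(g, c) > FAIL]

# Crude scenario discovery: summarize the vulnerable region
min_gas_fail = min(g for g, _ in failures)
print(f"{len(failures)} of {len(gas_grid) * len(carbon_grid)} futures fail; "
      f"all failures have gas price >= {min_gas_fail}")
```

The output is not a probability statement — it is a map of the conditions under which this plan becomes unacceptable, which a decision-maker can then judge for plausibility.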

18.3 Worked Comparison

Setup: Generation investment plan for a country, uncertain future gas prices and carbon prices.

| Approach | Result | Strengths | Weaknesses |
|---|---|---|---|
| Scenario analysis (3 scenarios) | Three different plans | Simple, transparent | No probabilities; decision-maker must choose |
| Monte Carlo | Distribution of NPVs | Formal uncertainty | Requires probability estimates |
| Stochastic programming | One hedged plan | Optimal in expectation | Computationally demanding; needs probabilities |
| Robust optimization | One conservative plan | No probabilities needed | Conservative — may over-hedge |
| RDM | Set of robust plans + vulnerability map | Rich decision support | Computationally intensive |

Key finding: The stochastic programming plan costs ~5% more than the scenario-optimal plan when that plan's own scenario materializes, but avoids the worst-case outcomes that the single-scenario plan hits. The robust plan costs ~10% more in expectation but never fails catastrophically.

18.4 Why They Disagree

The fundamental tension is between optimality and robustness:

  • Optimizing for one scenario gives the best outcome if that scenario occurs but can fail badly otherwise
  • Hedging across scenarios sacrifices expected performance for insurance against bad outcomes
  • The "right" choice depends on risk preferences — and on whether you can assign meaningful probabilities

Deep uncertainty arises when you can't even agree on the probability distribution. How likely is a 4°C world? A breakthrough in fusion? A global pandemic? For these questions, scenario analysis and RDM may be more honest than stochastic programming, which requires probability weights that may be spurious.

18.5 The Frontier

  • Adaptive pathways: Instead of choosing one plan now, design a decision tree with pre-specified trigger points. "Build gas now; if carbon price exceeds €100 by 2030, switch to hydrogen." Monitor signposts and adapt.
  • Deep uncertainty frameworks (DMDU): Decision Making under Deep Uncertainty combines RDM, adaptive pathways, and scenario discovery into an integrated decision support methodology
  • Info-gap theory: Robustness measured as how much the world can deviate from expectations before the plan fails
  • Real options: Value the flexibility of keeping options open. A modular, scalable investment (solar + battery) may be worth more than a large, inflexible one (nuclear) even if the latter has lower expected cost — because the former can adapt to surprises

End of Part IV: The Questions

Continue to Part V: Putting It Together for model integration and practical guidance.


Part V: Putting It Together


Chapter 19: Coupling Models

19.1 Why No Single Model Answers All Questions

Part IV demonstrated that different questions demand different tools. A CGE model tells you about GDP and trade but not about hourly grid operations. An hourly dispatch model tells you about grid feasibility but not about macroeconomic feedback. An IAM tells you about the social cost of carbon but not about distributional impacts.

The solution: Couple models to get the benefits of each.

19.2 Types of Model Linkage

Hard linking (full integration):

  • Models share the same code/platform
  • Solve simultaneously
  • Ensures perfect consistency
  • Computationally demanding, less flexible

Soft linking (iterative):

  • Models run separately
  • Exchange data between runs
  • Iterate until convergence
  • More flexible, easier to maintain

One-way linking:

  • Output of one model feeds input of another
  • No feedback loop
  • Simplest, but may miss important feedbacks

19.3 Generic Coupling Patterns

Pattern 1: Macro-energy coupling (CGE ↔ energy system model)

┌─────────────────┐
│   CGE Model     │◄──────────────────┐
│  (economy-wide) │                   │
└────────┬────────┘                   │
         │                            │
         │ GDP, sectoral activity,    │ Energy prices,
         │ investment, population     │ system costs,
         ▼                            │ investment needs
┌─────────────────┐                   │
│  Energy System  │───────────────────┘
│     Model       │
└─────────────────┘

Example instances:

  • GEM-E3 ↔ PRIMES (E3-Modelling)
  • REMIND ↔ MAgPIE (PIK)
  • MESSAGE-ix ↔ GLOBIOM (IIASA)

Pattern 2: Energy-climate coupling (energy model + climate module)

The energy model produces emission trajectories; the climate module converts emissions to temperature; damages feed back to the economy. This is the structure of Integrated Assessment Models (DICE, REMIND, etc.).

Pattern 3: Energy-land coupling (energy model + land use model)

Bioenergy, land-use change emissions, and food security interact. The energy model's demand for biomass affects land use; land-use change affects emissions; agricultural prices affect food security.

19.4 The Iteration Process (Soft Linking)

Iteration 1:  Model A (initial assumptions) → outputs₁
              Model B (uses outputs₁) → outputs₂
Iteration 2:  Model A (uses outputs₂) → outputs₃
              Model B (uses outputs₃) → outputs₄
...
Convergence:  |outputs_n - outputs_{n-1}| < tolerance
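The loop above can be sketched as a fixed-point iteration between two toy one-equation "models"; the linear response functions are stand-ins for real model runs:

```python
# Soft linking as fixed-point iteration between two toy models.
# The linear responses are illustrative, not estimated.

def macro_model(energy_price):
    # GDP index falls slightly as the energy price rises (hypothetical)
    return 100.0 - 0.2 * energy_price

def energy_model(gdp):
    # Energy price rises with the activity level (hypothetical)
    return 10.0 + 0.5 * gdp

price, tol = 50.0, 1e-8
for iteration in range(100):
    gdp = macro_model(price)          # Model A uses Model B's output
    new_price = energy_model(gdp)     # Model B uses Model A's output
    if abs(new_price - price) < tol:  # convergence check
        break
    price = new_price

print(round(gdp, 3), round(price, 3), iteration)
```

Here the feedback is a contraction (each pass shrinks the discrepancy by a factor of ten), so the loop converges in about a dozen passes. With real models the contraction property is not guaranteed — hence the convergence caveat below.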

Challenges:

  • Different sectoral aggregations → mapping tables needed
  • Different time steps → interpolation
  • Different geographic boundaries → aggregation/disaggregation
  • Convergence not guaranteed if models have very different structures

19.5 Computational Considerations

Typical run times:

  • Econometric simulation model: minutes
  • CGE (one year): seconds to minutes
  • Detailed energy model (one country-year): minutes
  • High-resolution hourly dispatch: minutes to hours
  • Full coupled iteration (all countries, all years): hours to days
  • Monte Carlo on coupled system: days to weeks (or use surrogates)

Practical implications:

  • Scenario design matters (can't run everything)
  • Parallel computing helps (per-country, per-scenario)
  • Result caching is important
  • Surrogate models (Ch 7) can approximate the full coupled system for uncertainty analysis

Chapter 20: Data, Calibration, and Validation

20.1 The Data Challenge

E3 models require vast amounts of data:

  • Economic flows between sectors and regions
  • Energy production, transformation, consumption
  • Technology costs and performance
  • Emissions by source
  • Trade flows by commodity and partner

Getting this data consistent and comprehensive is a major challenge.

20.2 Social Accounting Matrix (SAM)

A Social Accounting Matrix is the data foundation for CGE models.

What is a SAM?

  • A square matrix showing all value flows in an economy for a base year
  • Rows = income (receipts)
  • Columns = expenditure (payments)
  • Row total = Column total for each account (everything balances)
                              EXPENDITURES
             ┌──────────┬──────────┬─────────┬──────────┬─────┬─────┐
             │Activities│Commodit. │ Factors │Households│ Gov │ RoW │
┌────────────┼──────────┼──────────┼─────────┼──────────┼─────┼─────┤
│Activities  │          │ Domestic │         │          │     │Exp- │
│            │          │ sales    │         │          │     │orts │
├────────────┼──────────┼──────────┼─────────┼──────────┼─────┼─────┤
│Commodities │ Intermed │          │         │Consumpt. │ Gov │     │
│            │ inputs   │          │         │          │     │     │
├────────────┼──────────┼──────────┼─────────┼──────────┼─────┼─────┤
│Factors     │ Value    │          │         │          │     │     │
│(L, K)      │ added    │          │         │          │     │     │
├────────────┼──────────┼──────────┼─────────┼──────────┼─────┼─────┤
│Households  │          │          │ Factor  │          │Trans│     │
│            │          │          │ income  │          │fers │     │
├────────────┼──────────┼──────────┼─────────┼──────────┼─────┼─────┤
│Government  │ Indirect │          │         │ Direct   │     │     │
│            │ taxes    │          │         │ taxes    │     │     │
├────────────┼──────────┼──────────┼─────────┼──────────┼─────┼─────┤
│Rest of     │          │ Imports  │         │          │     │     │
│World       │          │          │         │          │     │     │
└────────────┴──────────┴──────────┴─────────┴──────────┴─────┴─────┘
     ↑
 RECEIPTS

Key property: Every row sum equals the corresponding column sum.

20.3 Data Sources

| Data Type | Primary Source | Secondary Sources |
|---|---|---|
| Economic structure | GTAP database | Eurostat, national accounts |
| Bilateral trade | GTAP | UN Comtrade |
| Energy balances | Eurostat, IEA | National statistics |
| Emissions | UNFCCC, IPCC | National inventories |
| Elasticities | Literature | Econometric estimates |
| Technology costs | IEA, IRENA, BNEF | Engineering estimates |

GTAP Database (Global Trade Analysis Project):

  • Maintained by Purdue University
  • Global database covering 141 regions, 65 sectors
  • Used as starting point for most CGE models
  • Updated every few years (current: GTAP 11, base year 2017)

20.4 The Calibration Principle

Calibration assumption: The base year SAM represents an equilibrium.

Procedure:

  1. Set all prices to 1 (numeraire normalization)
  2. Choose functional forms (CES)
  3. Specify elasticities from literature/estimation
  4. Solve for share parameters such that model replicates SAM flows

Example: CES calibration

Given observed inputs $X_1 = 60$, $X_2 = 40$ and prices $P_1 = P_2 = 1$:

For CES: $Y = [\alpha X_1^{\rho} + (1-\alpha) X_2^{\rho}]^{1/\rho}$

At calibration: $$\alpha = \frac{X_1}{X_1 + X_2} = \frac{60}{100} = 0.6$$

20.5 Replication Test

A properly calibrated model must replicate the base year exactly.

If you run the model with no policy shock:

  • All prices should remain at 1
  • All quantities should match the SAM
  • This is the "benchmark replication test"

If replication fails: There's a bug or inconsistency in the data/model.
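Both steps fit in a few lines of code. In calibrated share form, the value share 0.6 from Section 20.4 is used directly; the sketch below instead works with the coefficient form of the CES, recovering the share parameter from the first-order conditions at unit prices (σ = 0.5 is an illustrative choice) and then running the benchmark replication test:

```python
# Calibrate a CES share parameter to the base-year data X1 = 60, X2 = 40
# at unit prices, then run the benchmark replication test: the calibrated
# model must reproduce the observed input mix with no shock applied.
# sigma = 0.5 is purely illustrative.

X1, X2 = 60.0, 40.0
sigma = 0.5
rho = (sigma - 1) / sigma        # rho = -1

# FOC at unit prices: alpha/(1-alpha) * (X1/X2)**(rho-1) = P1/P2 = 1
a = (X1 / X2) ** (1 - rho)
alpha = a / (1 + a)

def input_ratio(p1, p2):
    """Cost-minimizing X1/X2 at prices p1, p2 for the calibrated CES."""
    return ((alpha / (1 - alpha)) * (p2 / p1)) ** sigma

# Replication test: at base-year prices the model returns the observed mix
print(round(alpha, 4), round(input_ratio(1.0, 1.0), 6))
```

With σ = 1 (the Cobb-Douglas limit) the same procedure returns α = 0.6, the value share computed in Section 20.4; for other σ the share parameter differs, but the replication test passes either way.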

20.6 SAM Balancing

Real data sources don't perfectly balance. SAM balancing adjusts raw data to ensure consistency.

Methods:

  • RAS method: Iteratively scale rows and columns
  • Cross-entropy: Minimize information distance from original data
  • Least squares: Minimize sum of squared adjustments
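A minimal RAS sketch with invented numbers: alternately scale rows and columns until both sets of targets are hit (the targets must share the same grand total):

```python
# RAS / biproportional scaling: adjust a raw matrix so its row and
# column sums hit prescribed targets. Data are made up for illustration.
import numpy as np

raw = np.array([[10.0, 5.0],
                [4.0, 12.0]])
row_targets = np.array([16.0, 15.0])
col_targets = np.array([13.0, 18.0])  # same grand total as row_targets

M = raw.copy()
for _ in range(200):
    M *= (row_targets / M.sum(axis=1))[:, None]   # scale rows
    M *= (col_targets / M.sum(axis=0))[None, :]   # scale columns

print(np.round(M, 3))
```

RAS preserves the zero/positive pattern of the raw matrix and, for consistent positive data, converges to the unique balanced matrix closest to the original in an entropy sense.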

20.7 Bayesian Calibration

Classical calibration finds one parameter set. Bayesian calibration (see Section 7.8) produces distributions:

$$P(\theta | D) \propto P(D | \theta) \cdot P(\theta)$$

The spectrum of calibration approaches:

| Approach | Parameters | Outputs | Use case |
|---|---|---|---|
| Deterministic (standard) | Point estimates | Replicates base year exactly | Standard CGE practice |
| Systematic sensitivity | Point estimates × ranges | Multiple runs | Robustness check |
| Bayesian | Posterior distributions | Output distributions | Full uncertainty quantification |

Bayesian calibration is more honest about parameter uncertainty but requires either a very fast model or a surrogate (GP emulator).
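A one-parameter sketch of this update, small enough to evaluate on a grid (no surrogate needed): a demand elasticity is calibrated to a single noisy observed response. All numbers and the Gaussian likelihood are illustrative:

```python
# Bayesian calibration on a one-parameter toy model: update beliefs about
# a demand elasticity from one noisy observation. Grid evaluation stands
# in for MCMC; numbers are made up.
import numpy as np

def model(elasticity, price_change=0.10):
    # Predicted fractional demand change for a 10% price rise
    return elasticity * price_change

observed, noise_sd = -0.035, 0.01      # observed change and its std error
grid = np.linspace(-1.0, 0.0, 1001)    # candidate elasticities

prior = np.ones_like(grid)             # flat prior on [-1, 0]
likelihood = np.exp(-0.5 * ((model(grid) - observed) / noise_sd) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()           # P(theta|D) ∝ P(D|theta) P(theta)

post_mean = (grid * posterior).sum()
print(round(post_mean, 3))
```

The posterior concentrates around −0.35 (the value at which the model reproduces the observation) with a spread set by the data's noise — exactly the "distribution instead of point estimate" contrast in the table above.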

20.8 Validation

Out-of-sample validation: Can the model predict data it wasn't calibrated to?

  • Calibrate to year X, predict years X+1 to X+5, compare to actual data
  • Few E3 models are formally validated this way — this is a known weakness

Cross-validation across models: If multiple models agree on a result, confidence increases. If they disagree, understanding why they disagree (see Part IV) is more valuable than averaging their outputs.

20.9 Open Data Ecosystems

The E3 modeling field is moving toward greater openness:

  • Open models: PyPSA, OSeMOSYS, TIMES (partially), MESSAGE-ix are open-source
  • Open data: GTAP, Eurostat, IEA (partially), Climate Watch
  • Open platforms: IIASA's Scenario Explorer, NGFS scenarios
  • Reproducibility: Growing expectation that model results should be reproducible by independent teams

Chapter 21: Choosing the Right Model

21.1 No Model Is Best — But Some Are Wrong for Your Question

The most important modeling decision is choosing the right tool for the question. Using a CGE model to study hourly grid operations, or an hourly dispatch model to study macroeconomic impacts, will produce misleading results regardless of how good the model is.

21.2 Decision Framework

Step 1: What question are you asking?

| Question category | Example | Key requirements |
|---|---|---|
| Economy-wide impacts | GDP, employment, trade | General equilibrium, multi-sector |
| Energy system planning | Technology mix, investment | Technology detail, optimization |
| Short-term operations | Grid stability, dispatch | High temporal resolution |
| Price forecasting | Fuel price trajectories | Demand/supply dynamics, uncertainty |
| Distributional impacts | Who bears costs | Household heterogeneity |
| Long-term climate | SCC, damage assessment | Economy-climate coupling |
| Transition dynamics | Adoption speed, tipping points | Behavioral realism, feedbacks |

Step 2: Map question to paradigm(s)

| Question (from Part IV) | Primary paradigm | Supporting paradigm |
|---|---|---|
| Ch 11: Carbon pricing impacts | CGE or Econometric | Energy system model for detail |
| Ch 12: 2050 energy mix | LP/MILP or hybrid PE | High-resolution for grid validation |
| Ch 13: Fossil fuel prices | Econometric simulation | Fundamentals-based for cross-check |
| Ch 14: Distributional impacts | CGE + microsimulation | ABM for behavioral heterogeneity |
| Ch 15: Grid feasibility | High-resolution hourly | Coupled with PE for investment |
| Ch 16: Social cost of carbon | Integrated Assessment Model | Empirical damages for calibration |
| Ch 17: Transition speed | ABM or diffusion models | PE for investment context |
| Ch 18: Decision under uncertainty | Stochastic programming / RDM | Any model + Monte Carlo |

Step 3: Consider practical constraints

| Constraint | Implication |
|---|---|
| Time available | Simple model done > complex model late |
| Data available | Don't build detail you can't calibrate |
| Expertise available | Use tools the team knows |
| Audience | Policymakers want interpretable results |
| Budget | Open-source vs. commercial licenses |

21.3 Using Multiple Models

Model comparison (running the same scenario on different models) is the gold standard for robustness. When models agree, you can be more confident. When they disagree, the disagreement itself is informative — it reveals which assumptions drive the results (as demonstrated throughout Part IV).

Major model comparison exercises include:

  • EMF (Stanford Energy Modeling Forum): Multi-model comparison since 1977
  • IPCC Scenario Database: Hundreds of IAM runs underlying IPCC assessment reports
  • NGFS Scenarios: Climate stress-testing scenarios for financial institutions

21.4 Common Pitfalls

  1. Using a model outside its scope. A CGE model cannot tell you whether the grid can handle 80% renewables. An hourly dispatch model cannot tell you about GDP impacts.
  2. Treating model output as prediction. Models are conditional projections — "if these assumptions hold, then..." They are tools for exploring consequences, not crystal balls.
  3. Ignoring parameter uncertainty. A single-run deterministic result creates false precision. Always test sensitivity, or better, run Monte Carlo.
  4. Confusing precision with accuracy. A model with six decimal places is not more accurate than one with two. Precision should reflect genuine confidence.
  5. Black-boxing. If you can't explain why the model gives a particular result, you don't understand it well enough to use it for policy advice.

End of Part V: Putting It Together

Continue to Appendices for reference materials.


Appendices


Appendix A: Glossary

| Term | Definition |
|---|---|
| ABM | Agent-Based Model — simulation of heterogeneous interacting agents |
| AEEI | Autonomous Energy Efficiency Improvement — exogenous efficiency gains over time |
| Armington elasticity | Substitution elasticity between domestic and imported goods |
| BCA | Border Carbon Adjustment — tariff on imports based on embodied carbon |
| Calibration | Process of determining model parameters to replicate base year data |
| Carbon leakage | Increase in emissions outside a policy region due to the policy |
| CBAM | Carbon Border Adjustment Mechanism (EU implementation of BCA) |
| CES | Constant Elasticity of Substitution — a production/utility function |
| CGE | Computable General Equilibrium — economy-wide model with market clearing |
| Complementarity | Mathematical condition where either price = 0 or market clears |
| Damage function | Relationship between temperature change and economic loss |
| Discount rate | Rate used to convert future values to present values |
| DMDU | Decision Making under Deep Uncertainty |
| Dominant firm | Market structure where one large firm sets price, others follow |
| Dunkelflaute | Extended period of low wind and solar output (German: "dark doldrums") |
| EPEC | Equilibrium Problem with Equilibrium Constraints |
| ETS | Emissions Trading System (cap-and-trade) |
| Efficiency wage | Above-market wage paid to prevent shirking; explains unemployment |
| Equivalent Variation | Welfare measure: money to give before price change to reach new utility |
| Externality | Cost or benefit not reflected in market prices |
| GP emulator | Gaussian Process surrogate model that approximates a complex model |
| GTAP | Global Trade Analysis Project — database for CGE models |
| HANK | Heterogeneous Agent New Keynesian — macro model with household heterogeneity |
| Hotelling rule | Price of exhaustible resource rises at rate of interest |
| IAM | Integrated Assessment Model — couples economy, energy, and climate |
| LCOE | Levelized Cost of Electricity |
| LP | Linear Programming |
| MAC | Marginal Abatement Cost — cost of reducing one more unit of emissions |
| MCP | Mixed Complementarity Problem — mathematical formulation for CGE equilibrium |
| MILP | Mixed-Integer Linear Programming |
| Monte Carlo | Simulation method using random sampling |
| Nested CES | Hierarchical structure of CES functions for multiple inputs |
| NLP | Nonlinear Programming |
| Numeraire | Good whose price is normalized to 1 |
| OPEC | Organization of Petroleum Exporting Countries |
| Partial equilibrium | Analysis of one market holding others constant |
| Path dependence | Outcome depends on history of decisions, not just current conditions |
| Ramsey-Boiteux pricing | Setting prices to recover fixed costs while minimizing welfare loss |
| RDM | Robust Decision Making — decision framework for deep uncertainty |
| S-curve | Logistic adoption curve: slow start, rapid middle, saturation |
| SAM | Social Accounting Matrix — balanced data of economic flows |
| SCC | Social Cost of Carbon — monetary value of one ton CO₂ emitted |
| Sector coupling | Linking electricity with heat, hydrogen, and transport systems |
| Shadow price | Marginal value of relaxing a constraint |
| Sobol indices | Global sensitivity analysis measures |
| Stochastic programming | Optimization under uncertainty with probabilistic scenarios |
| Surrogate model | Fast approximation of a complex model for uncertainty analysis |
| Time slice | Representative period used to approximate temporal variation |
| Tipping point | Threshold beyond which a transition becomes self-reinforcing |
| URR | Ultimate Recoverable Resources |
| VRE | Variable Renewable Energy (wind, solar) |
| Walrasian equilibrium | Price vector where all markets clear simultaneously |
| Welfare | Economic well-being; often measured as utility or equivalent variation |

Appendix B: Mathematical Notation

Sets and Indices

| Symbol | Meaning |
|---|---|
| $i, j$ | Goods/sectors |
| $f$ | Fuels |
| $r, s$ | Regions |
| $t$ | Time periods |
| $h$ | Hours/time slices |
| $k$ | Technologies |
| $\xi$ | Uncertain scenarios |

Variables

| Symbol | Meaning | Units |
|---|---|---|
| $Y$ | Output | € or physical |
| $K$ | Capital | |
| $L$ | Labor | persons or hours |
| $E$ | Energy | GJ, MWh |
| $P$ | Price | €/unit |
| $W$ | Wage | €/hour |
| $r$ | Interest rate | % |
| $U$ | Utility | index |
| $C$ | Consumption | |
| $I$ | Investment | |
| $X$ | Exports | |
| $M$ | Imports | |
| $EMI$ | Emissions | Mt CO₂ |
| $\tau$ | Carbon price | €/tCO₂ |
| $SCC$ | Social cost of carbon | $/tCO₂ |

Parameters

| Symbol | Meaning |
|---|---|
| $\sigma$ | Elasticity of substitution |
| $\alpha, \beta$ | Share parameters |
| $\rho$ | CES substitution parameter ($\rho = (\sigma-1)/\sigma$) |
| $\delta$ | Depreciation rate |
| $\epsilon$ | Emission coefficient |
| $\nu$ | Weibull heterogeneity parameter |
| $\mu$ | Intangible cost |
| $b$ | Learning parameter |
| $\theta$ | Generic parameter vector |

Appendix C: Key Equations Cheat Sheet

Production

CES Production Function: $$Y = A \left[ \alpha K^{\rho} + (1-\alpha) L^{\rho} \right]^{1/\rho}, \quad \sigma = \frac{1}{1-\rho}$$ Output Y is produced by combining capital K and labor L, where σ controls how easily one can substitute for the other.

Cost Minimization (FOC): $$\frac{K}{L} = \left( \frac{\alpha}{1-\alpha} \cdot \frac{W}{R} \right)^{\sigma}$$ Firms use more capital relative to labor when wages W rise relative to the cost of capital R; σ determines how much they switch.

Demand

Price Elasticity: $$\varepsilon = \frac{\partial Q / Q}{\partial P / P} = \frac{\partial \ln Q}{\partial \ln P}$$ If price rises by 1%, quantity demanded falls by ε%. Inelastic (|ε|<1) means demand barely budges; elastic (|ε|>1) means big response.

Discrete Choice (Logit): $$P_i = \frac{e^{V_i / \mu}}{\sum_j e^{V_j / \mu}}$$ Probability of choosing option i depends on its "attractiveness" V_i relative to all alternatives. Same as softmax in ML.

Equilibrium

Market Clearing: $$Q^{supply}(P^*) = Q^{demand}(P^*)$$ The equilibrium price $P^*$ is where buyers want exactly what sellers offer.

Complementarity: $$0 \leq P \perp (S - D) \geq 0$$ Either price is zero (free good) OR supply equals demand. Can't have both positive price AND excess supply.

Trade

Armington Composite: $$X = \left[ \delta D^{\rho} + (1-\delta) M^{\rho} \right]^{1/\rho}$$ Domestic goods D and imports M are imperfect substitutes. Prevents unrealistic all-or-nothing trade swings.

Environment

Emissions: $$EMI_f = \epsilon_f \cdot E_f$$ Emissions from fuel f = emission factor × energy consumed.

Effective Fuel Price (with carbon tax): $$P_f^{eff} = P_f + \tau \cdot \epsilon_f$$ Carbon tax τ raises dirty fuels more than clean ones, making low-carbon alternatives relatively cheaper.

Social Cost of Carbon: $$SCC = \frac{\partial}{\partial E_t} \sum_{s=t}^{T} \frac{D(T_s)}{(1+r)^{s-t}}$$ Present value of all future damages from one extra ton of CO₂ emitted today.

Learning

Experience Curve: $$C_t = C_0 \cdot \left( \frac{Q_t}{Q_0} \right)^{-b}, \quad LR = 1 - 2^{-b}$$ Costs fall as cumulative production grows. Learning rate LR = % cost drop per doubling of capacity.

Technology Adoption

S-Curve (Logistic): $$A(t) = \frac{K}{1 + e^{-r(t - t_0)}}$$ Adoption follows slow-fast-slow pattern. K = saturation, r = growth rate, t₀ = inflection point.

Bass Diffusion: $$\frac{dN}{dt} = (p + q \cdot N/K)(K - N)$$ Innovators (p) adopt independently; imitators (q) adopt because others did.
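The Bass ODE is easy to integrate with a simple Euler scheme; the values of p and q below are typical textbook magnitudes rather than estimates for any real technology:

```python
# Bass diffusion integrated with a forward-Euler scheme.
# p, q, K are illustrative textbook-style values.

p, q, K = 0.03, 0.38, 1.0   # innovation, imitation, market potential
dt, T = 0.01, 30.0

N, t, path = 0.0, 0.0, []
while t < T:
    dN = (p + q * N / K) * (K - N)   # Bass adoption rate
    N += dN * dt
    t += dt
    path.append(N)

print(round(path[-1], 3))  # approaches the saturation level K
```

The trajectory reproduces the S-curve mechanics: adoption is slow at first (driven by p alone), accelerates as imitation kicks in, and saturates as the pool of non-adopters shrinks.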

Capital Dynamics

Stock Update: $$K_{t+1} = (1 - \delta) K_t + I_t$$ Next year's capital = what survives depreciation + new investment.

NPV

Net Present Value: $$NPV = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}$$ Future cash flows discounted to today. Positive NPV = profitable investment.

Resource Economics

Hotelling Rule: $$\frac{dP}{dt} = r \cdot P$$ Price of exhaustible resources rises at the interest rate.

Pricing

Ramsey-Boiteux Rule: $$\frac{P_i - MC_i}{P_i} = \frac{k}{\varepsilon_i}$$ To recover fixed costs, charge higher markups to less price-sensitive customers.

Labor Market

Efficiency Wage (Shapiro-Stiglitz): $$w = w^* \cdot \left(1 + \frac{e}{b + \rho/q}\right)$$ Firms pay above market-clearing wage to discourage shirking. Creates equilibrium unemployment.

Uncertainty

Bayesian Calibration: $$P(\theta | D) \propto P(D | \theta) \cdot P(\theta)$$ Posterior = likelihood × prior. Updates parameter beliefs given observed data.

First-Order Sobol Index: $$S_i = \frac{V[E[Y|X_i]]}{V[Y]}$$ Fraction of output variance explained by input X_i alone.


Appendix D: Bridging Notes — From Physics & ML to E3 Modeling

For readers with a background in computational physics and/or machine learning.

D.1 Statistical Physics → Economic Equilibrium

| E3 Concept | Physics Analogue | Connection |
|---|---|---|
| Walrasian equilibrium | Thermodynamic equilibrium | Both are stable states where no agent/particle has incentive to change; system minimizes a "potential" |
| Market clearing | Detailed balance | Flows in = flows out; conservation at each node |
| CES aggregation | Generalized mean / partition function | $Y = [\sum \alpha_i X_i^\rho]^{1/\rho}$ interpolates between sum (ρ→1) and min (ρ→-∞), like temperature controlling ensemble behavior |
| Elasticity of substitution (σ) | Inverse "temperature" | High σ = agents easily switch (high T, flat distribution); Low σ = locked in (low T, peaked distribution) |
| Shadow price (λ) | Lagrange multiplier | Identical math — the marginal "force" enforcing a constraint |
| Utility maximization | Free energy minimization | Agents maximize U subject to budget ↔ systems minimize F subject to constraints |
| SAM balance | Conservation laws | Row sums = column sums, like current conservation or mass balance |
| Agent-based models | Molecular dynamics | Individual entities with rules; macro behavior emerges from micro interactions |

D.2 Computational Physics → E3 Computation

| E3 Technique | Physics Analogue | Connection |
|---|---|---|
| Monte Carlo simulation | MC in condensed matter | Sample parameter space, propagate to outputs, build distributions |
| Solving coupled nonlinear systems | Self-consistent field methods | CGE solves for prices where all markets clear simultaneously; iterative like SCF |
| Sensitivity analysis (Sobol) | Parameter sweeps | Which inputs drive output variance? Same question, same methods |
| PATH solver | Newton-Raphson, conjugate gradient | Iterative solution of F(x)=0; PATH uses pivoting for complementarity |
| GP emulators | Response surfaces, kriging | Interpolate between expensive simulation runs |
| S-curves / phase transitions | Order-disorder transitions | System undergoes qualitative change at critical threshold |

D.3 Machine Learning → E3 Modeling

| E3 Concept | ML Analogue | Connection |
|---|---|---|
| Calibration | Training / fitting | Adjust parameters so model replicates observed data |
| Elasticities | Hyperparameters | Control model behavior; often from literature, not estimated on the SAM |
| CES production function | Parametric model family | Like choosing between L1/L2/Huber loss — functional form matters |
| Discrete choice (logit) | Softmax | $P_i = e^{V_i}/\sum e^{V_j}$ — identical to softmax in classification |
| Intangible costs (μ) | Regularization / prior | Calibrated to match observed behavior; captures "non-financial" factors |
| Counterfactual simulation | What-if analysis | Change inputs (policy), observe outputs (welfare, emissions) |
| Shadow prices | Dual variables in constrained optimization | scipy.optimize returns these too |
| Neural surrogates | Model distillation | Train fast NN to approximate slow physics/economic model |
| Bayesian calibration | Bayesian neural nets / posterior inference | Same Bayes' rule, same MCMC sampling |
| Reinforcement learning | Adaptive policy optimization | RL agents learn optimal policy sequences under uncertainty |

D.4 Key Intuition Shortcuts

  • "Zero profit" doesn't mean firms make no money — it means no excess returns above opportunity cost (competitive equilibrium)
  • "Elasticity" is just the log-log slope: $\varepsilon = d\ln Y / d\ln X$
  • "Armington" = "domestic and imported goods are imperfect substitutes" — prevents unrealistic all-or-nothing trade swings
  • "Recursive dynamic" = solve year-by-year, updating capital stocks — no perfect foresight (unlike intertemporal optimization)
  • "Complementarity" = "either the constraint binds OR the shadow price is zero" — like KKT conditions you know from physics optimization
  • "Tipping point" = "phase transition" — qualitative regime shift driven by nonlinear feedbacks, as in Ising model or percolation
  • "Path dependence" = "hysteresis" — the system's state depends on its history, not just current conditions
  • "Differentiable programming" = run gradients through the economic model, enabling gradient-based calibration and sensitivity analysis. Emerging area connecting ML automatic differentiation with traditional E3 models.

Appendix E: Paradigm Profiles

One-page standalone summaries of each modeling paradigm. Part IV's problem chapters reference these profiles; they can also be read independently.


E.1 Input-Output Models and Social Accounting Matrices

Identity: The oldest quantitative approach to economy-wide analysis. Describes the flow of goods and services between sectors using a system of linear equations.

Core math:

$$\mathbf{x} = (\mathbf{I} - \mathbf{A})^{-1} \mathbf{d}$$

where $\mathbf{x}$ = total output vector, $\mathbf{A}$ = matrix of technical coefficients (input per unit output), $\mathbf{d}$ = final demand vector, and $(\mathbf{I} - \mathbf{A})^{-1}$ = the Leontief inverse.
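The Leontief system is two lines of linear algebra; the coefficient matrix and final demand below are invented for a two-sector toy economy:

```python
# Leontief quantity model on a two-sector toy economy.
# A and d are illustrative, not from any real I-O table.
import numpy as np

A = np.array([[0.2, 0.3],    # input of good 1 per unit output of sector 1, 2
              [0.4, 0.1]])   # input of good 2 per unit output of sector 1, 2
d = np.array([100.0, 50.0])  # final demand

x = np.linalg.solve(np.eye(2) - A, d)   # x = (I - A)^(-1) d
print(np.round(x, 2))
```

Total output exceeds final demand in both sectors because each unit of final demand drags intermediate demand behind it — the multiplier effect that the Leontief inverse encodes.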

Key features:

  • Fixed coefficients (no substitution)
  • Demand-driven (output adjusts to demand)
  • Linear (easy to solve, easy to scale)

Strengths: Transparent; shows sector interdependencies; minimal data requirements. The SAM extends I-O to include factor markets, households, government, and trade.

Weaknesses: No prices, no substitution, no behavioral response. Unsuitable for policy analysis where agents respond to price changes.

Representative models:

| Model | Developer | Scope |
|---|---|---|
| GTAP I-O tables | Purdue University | Global, multi-regional |
| WIOD | Various European universities | World input-output database |
| National I-O tables | National statistical offices | Country-level |

Key references: Leontief (1936), Miller & Blair (2009)


E.2 LP Energy System Models

Identity: Minimize total energy system cost over a multi-decade horizon, subject to demand satisfaction, emission limits, and technical constraints. The workhorse of energy system planning.

Core math:

$$\min \sum_{t,k} \frac{C_{k,t} \cdot X_{k,t}}{(1+r)^t} \quad \text{s.t. demand, capacity, emission constraints}$$

where $k$ = technology, $t$ = time period, $C$ = cost, $X$ = activity level.
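A toy instance of this LP — two technologies, one demand constraint, one emission cap — can be solved with off-the-shelf tools; the costs and emission factor are invented. With SciPy's HiGHS backend, the dual value on the cap recovers the implicit carbon price:

```python
# Tiny single-period LP dispatch sketch: serve 100 MWh at minimum cost
# subject to an emission cap. Costs and factors are made up.
from scipy.optimize import linprog

c = [50.0, 70.0]          # EUR/MWh: gas, wind
A_ub = [[-1.0, -1.0],     # -(gas + wind) <= -100  (demand of 100 MWh)
        [0.4, 0.0]]       #  0.4 tCO2/MWh * gas <= 20 tCO2 (emission cap)
b_ub = [-100.0, 20.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")
gas, wind = res.x
cap_dual = res.ineqlin.marginals[1]   # shadow price of the emission cap

print(round(gas, 1), round(wind, 1), round(res.fun, 1), round(abs(cap_dual), 1))
```

Cheap gas runs up to the cap (50 MWh), wind fills the rest. The cap's shadow price is 50 EUR/tCO₂ — exactly the extra cost of wind (20 EUR/MWh) divided by the emissions it avoids (0.4 tCO₂/MWh), illustrating the "shadow prices = marginal costs of policy targets" feature above. (Dual sign conventions vary by solver; the magnitude is what matters here.)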

Key features:

  • Perfect foresight (optimizes entire horizon at once)
  • Technology-rich (hundreds of supply/demand technologies)
  • Social planner perspective (cost-optimal, not market equilibrium)
  • Shadow prices on constraints = marginal costs of policy targets

Strengths: Technology detail; transparent logic (cheapest wins); well-understood math (LP/MILP duality).

Weaknesses: No behavioral realism (everyone adopts the cheapest option); no economic feedback; social planner ≠ market outcome; time-slice resolution may miss integration challenges.

Representative models:

| Model | Developer | Scope |
|---|---|---|
| TIMES/MARKAL | IEA-ETSAP | Multi-regional energy system |
| OSeMOSYS | KTH/UNDESA | Open-source, developing countries |
| TEMOA | NC State University | Open-source, US-focused |
| MESSAGE | IIASA | Global energy-climate |

Key references: Loulou et al. (2004), Howells et al. (2011)


E.3 Computable General Equilibrium (CGE)

Identity: Find the set of prices at which all markets (goods, factors, energy, trade) clear simultaneously, with all agents (consumers, firms, government) optimizing their individual objectives.

Core math:

$$z_i(\mathbf{p}^*) = D_i(\mathbf{p}^*) - S_i(\mathbf{p}^*) = 0 \quad \forall i$$

Implemented as a Mixed Complementarity Problem (MCP): $0 \leq P \perp (S-D) \geq 0$.

Key features:

  • Economy-wide scope (all sectors, all markets)
  • Micro-founded behavior (utility max, cost min)
  • Consistent accounting (SAM balance)
  • Comparative statics or recursive dynamics
  • Trade via Armington assumption

Strengths: Captures economy-wide feedbacks, carbon leakage, revenue recycling, terms-of-trade effects. Ideal for analyzing policies that affect the whole economy.

Weaknesses: Assumes equilibrium (full/near-full employment); representative agent (no distribution); aggregate technology (no engineering detail); elasticities drive results but are uncertain.

Representative models:

| Model | Developer | Scope |
|---|---|---|
| GEM-E3 | E3-Modelling/NTUA | EU + global |
| GTAP-E | Purdue University | Global multi-regional |
| ENV-Linkages | OECD | Global |
| EPPA | MIT | Global |

Key references: Arrow & Debreu (1954), Shoven & Whalley (1992), Böhringer & Rutherford (2008)


E.4 Partial Equilibrium / Hybrid Models

Identity: Detailed optimization of the energy sector with market equilibrium, behavioral realism (discrete choice), and some economic feedback — but without full general equilibrium of the whole economy.

Core math: Varies by model. Typically a combination of:

  • Cost optimization (LP/MILP for investment and dispatch)
  • Market equilibrium (supply = demand at endogenous prices)
  • Discrete choice (Weibull/logit for technology adoption)

Often formulated as an EPEC (Equilibrium Problem with Equilibrium Constraints) when multiple agents optimize subject to each other's behavior.
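The discrete-choice ingredient can be sketched with a simple logit share function over perceived costs. The cost figures and the heterogeneity parameter below are illustrative assumptions, not values from PRIMES or NEMS:

```python
# Logit technology choice as used in hybrid models: market shares depend on
# perceived costs (including intangible cost adders), not winner-takes-all.
import numpy as np

def logit_shares(costs, heterogeneity=10.0):
    """Market shares from perceived costs. A smaller heterogeneity
    parameter means more cost-sensitive choice, approaching the pure
    least-cost behavior of an LP model."""
    w = np.exp(-np.asarray(costs) / heterogeneity)
    return w / w.sum()

# Perceived costs (e.g. EUR/MWh) incl. intangible adders, three options
costs = [80.0, 90.0, 110.0]
print(logit_shares(costs))                     # diversified shares
print(logit_shares(costs, heterogeneity=1.0))  # near winner-takes-all
```

The cheapest option gets the largest share but not the whole market, which is exactly the behavioral-realism contrast with pure LP models noted under strengths.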

Key features:

  • Technology-rich (hundreds of explicit technologies)
  • Behavioral realism (intangible costs, heterogeneous consumers)
  • Endogenous prices (not just cost-optimal)
  • Typically energy-sector focused; takes GDP as exogenous

Strengths: Combines engineering detail with economic behavior. More realistic technology adoption than pure LP. More detail than CGE.

Weaknesses: Missing economy-wide feedbacks (no GDP response); complex calibration; intangible costs are somewhat ad-hoc; typically needs coupling with a macro model.

Representative models:

| Model | Developer | Scope |
| --- | --- | --- |
| PRIMES | E3-Modelling/NTUA | EU energy system |
| NEMS | EIA (US) | US energy system |
| WEM | IEA | World energy |
| POLES | GAEL/EDDEN | Global energy |

Key references: Capros et al. (2014), EIA (Annual Energy Outlook methodology)


E.5 Econometric Simulation

Identity: Estimate behavioral equations from historical data and simulate forward. No equilibrium assumption — the economy can have unemployment, excess capacity, and path-dependent dynamics.

Core math: Systems of estimated equations:

$$Y_t = f(Y_{t-1}, X_t, \theta) + \varepsilon_t$$

where $Y$ = endogenous variables (GDP, demand, prices), $X$ = exogenous drivers, $\theta$ = estimated parameters, $\varepsilon$ = error terms.

For stochastic variants, $\theta$ and/or $\varepsilon$ are drawn from distributions and Monte Carlo is used to generate output distributions.
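The estimate-then-simulate workflow can be sketched end to end on synthetic data. The AR(1) demand equation and all parameters below are stylized stand-ins, not any model's actual specification:

```python
# Econometric simulation sketch: estimate a behavioral equation on
# (synthetic) history, then simulate forward with Monte Carlo residual
# draws, producing a distribution rather than a single point path.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "history": demand with persistence 0.8 around a level of 100
T = 200
y = np.empty(T)
y[0] = 100.0
for t in range(1, T):
    y[t] = 20.0 + 0.8 * y[t - 1] + rng.normal(0, 2.0)

# OLS estimation of y_t = a + b * y_{t-1} + eps
X = np.column_stack([np.ones(T - 1), y[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid_sd = np.std(y[1:] - X @ np.array([a_hat, b_hat]))

# Monte Carlo simulation 20 periods ahead
n_draws, horizon = 1000, 20
paths = np.full((n_draws, horizon), y[-1])
for t in range(1, horizon):
    paths[:, t] = a_hat + b_hat * paths[:, t - 1] + rng.normal(0, resid_sd, n_draws)

print("median:", np.median(paths[:, -1]),
      "90% band:", np.percentile(paths[:, -1], [5, 95]))
```

Because the simulation reuses the estimated parameters, it inherits the Lucas-critique caveat listed under weaknesses: if policy changes the underlying behavior, the estimated $\theta$ no longer applies.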

Key features:

  • Data-driven (parameters estimated from history)
  • Demand-driven output (Keynesian features)
  • Path-dependent dynamics (history matters)
  • Can incorporate uncertainty (Monte Carlo)
  • No equilibrium assumption

Strengths: Captures real-world dynamics (recessions, hysteresis); can produce probability distributions; doesn't assume away unemployment.

Weaknesses: Extrapolates historical relationships (may break under structural change); limited theoretical consistency; Lucas critique (parameters may change when policy changes).

Representative models:

| Model | Developer | Scope |
| --- | --- | --- |
| PROMETHEUS | E3-Modelling/NTUA | Global energy, stochastic |
| E3ME | Cambridge Econometrics | Global economy-energy-environment |
| Oxford GEM | Oxford Economics | Global energy |
| NiGEM | NIESR | Global macroeconometric |

Key references: Pollitt et al. (2015), Fragkos et al. (2017)


E.6 Integrated Assessment Models (IAMs)

Identity: Couple a simplified economic model with a climate model and a damage function to assess the interactions between human activity and the climate system over centuries.

Core math (DICE example):

$$\max \sum_t \frac{U(C_t) \cdot L_t}{(1+\rho)^t} \quad \text{s.t. economy, emissions, climate, damage equations}$$

Climate equations link emissions → CO₂ concentration → radiative forcing → temperature. Damage function maps temperature → GDP loss.
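That causal chain can be traced with a drastically simplified forward run. The parameter values below are stylized for readability, not Nordhaus's calibration, and the emissions path is an arbitrary declining schedule:

```python
# Simplified DICE-style chain: emissions -> CO2 stock -> radiative
# forcing -> temperature -> GDP damages, in decadal steps.
import math

conc = 850.0      # atmospheric CO2 stock, GtC (roughly present-day)
conc_pre = 590.0  # preindustrial stock, GtC
temp = 1.1        # warming above preindustrial, degC
gdp = 100.0       # trillion USD, held fixed for simplicity

for decade in range(8):                        # eight decadal steps
    emissions = 10.0 * (1 - 0.1 * decade)      # GtC/yr, declining path
    conc += 10 * emissions * 0.5               # ~half of emissions stay airborne
    forcing = 3.7 * math.log2(conc / conc_pre) # W/m2; 3.7 W/m2 per doubling
    temp += 0.1 * (0.8 * forcing - temp)       # sluggish adjustment (ECS ~ 3C)
    damage_frac = 0.00236 * temp ** 2          # quadratic damage function
    print(f"decade {decade}: T = {temp:.2f} C, "
          f"damages = {damage_frac * gdp:.2f} T$/yr")
```

Even this toy version shows the two levers that dominate real IAM results: the damage-function curvature and (in the full optimization) the discount rate applied to those far-future damages.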

Key features:

  • Economy-climate coupling (feedbacks in both directions)
  • Very long time horizon (centuries)
  • Highly aggregated (1-16 regions, few sectors)
  • Central to SCC estimation

Strengths: Only framework that consistently links economic decisions to climate outcomes and back. Essential for climate policy cost-benefit analysis.

Weaknesses: Highly aggregated; damage functions are extremely uncertain; strong dependence on the discount rate; lacks sector detail and distributional impacts.

Representative models:

| Model | Developer | Scope |
| --- | --- | --- |
| DICE/RICE | Yale (Nordhaus) | Global (aggregated/regional) |
| PAGE | Cambridge | Global (probabilistic) |
| FUND | Various | Global (disaggregated damages) |
| REMIND | PIK | Global (detailed energy) |
| MESSAGE-ix/GLOBIOM | IIASA | Global (energy-land coupling) |
| GCAM | PNNL | Global (detailed sectors) |

Key references: Nordhaus (2017), Stern (2007), Weyant (2017)


E.7 High-Resolution Energy Models

Identity: Optimize power system investment and operations at hourly (or sub-hourly) time resolution with detailed network representation. Essential for assessing high-VRE systems.

Core math: Same as LP energy models but with:

  • 8,760+ time steps (vs. 12-24 time slices)
  • Network constraints (power flow equations)
  • Storage with state-of-charge tracking
  • Ramping and minimum stable output constraints

$$\min \sum_{t=1}^{8760} \sum_k C_k \cdot g_{k,t} + \sum_k I_k \cdot cap_k$$

subject to: $\sum_k g_{k,t} = d_t$, power flow constraints, storage dynamics, etc.
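The operational detail at stake, storage state-of-charge tracking and curtailment, can be illustrated even without an LP solver. The sketch below uses a greedy rule-based dispatch rather than the optimization formulated above, and the demand level, solar profile, and battery parameters are all illustrative:

```python
# Hourly dispatch with storage state-of-charge tracking: a fixed solar
# profile meets flat demand, a battery shifts surplus, a gas unit covers
# the residual. Greedy heuristic, not cost-optimal dispatch.
import math

demand = [50.0] * 24  # MW, flat
solar = [120 * math.sin(math.pi * (h - 6) / 12) if 6 <= h <= 18 else 0.0
         for h in range(24)]

soc, soc_max, power_max, eff = 0.0, 200.0, 40.0, 0.9  # MWh, MWh, MW, charge eff.
gas_total = curtailed = 0.0
for h in range(24):
    surplus = solar[h] - demand[h]
    if surplus > 0:   # charge the battery, curtail what doesn't fit
        charge = min(surplus, power_max, (soc_max - soc) / eff)
        soc += eff * charge
        curtailed += surplus - charge
    else:             # discharge first, then fall back on gas
        discharge = min(-surplus, power_max, soc)
        soc -= discharge
        gas_total += -surplus - discharge
print(f"gas: {gas_total:.0f} MWh, curtailed: {curtailed:.0f} MWh, "
      f"final SoC: {soc:.0f} MWh")
```

A 12-slice model would see only average solar output and miss both the midday curtailment and the evening discharge cycle that this hourly loop reveals.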

Key features:

  • Captures variability, ramping, curtailment, storage cycling
  • Network representation (congestion, transmission)
  • Weather-year dependent results
  • Computationally intensive

Strengths: Reveals integration costs, storage needs, and operational challenges invisible to lower-resolution models.

Weaknesses: Computationally expensive; typically power-sector only (no economy-wide feedback); optimistic (social planner, no behavioral realism); results sensitive to weather year chosen.

Representative models:

| Model | Developer | Scope |
| --- | --- | --- |
| PyPSA | TU Berlin / PyPSA community | European/global power + sector coupling |
| Calliope | ETH Zürich | Multi-scale energy system |
| SWITCH | UC Berkeley / others | Power sector planning |
| Plexos | Energy Exemplar | Power market simulation |
| ReEDS | NREL | US power sector |

Key references: Brown et al. (2018), Pfenninger & Pickering (2018)


E.8 Agent-Based / Complexity Models

Identity: Simulate individual heterogeneous agents (households, firms, traders) with behavioral rules and local interactions. Macro patterns emerge from micro behavior — no equilibrium assumed, no representative agent.

Core math: No single equation — each agent has a decision rule:

$$a_{i,t+1} = R_i\left(a_{i,t}, \{a_{j,t}\}_{j \in N_i}, s_t\right)$$

where $a_i$ = agent $i$'s action, $N_i$ = neighbors, $s_t$ = state of the world. Rules can include: bounded rationality, imitation, social influence, learning.
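A minimal instance of such a rule: agents on a hypothetical ring network adopt a technology by imitating neighbors, with a small spontaneous-adoption probability. The network topology, threshold, and rates are all illustrative assumptions; the point is that an S-shaped diffusion curve emerges without being imposed:

```python
# Minimal agent-based adoption model on a ring network.
import random

random.seed(42)
N = 500
adopted = [False] * N
adopted[0] = True        # one seed adopter
threshold = 0.25         # adopt if at least this share of neighbors has
history = []

for step in range(200):
    nxt = adopted[:]     # synchronous update
    for i in range(N):
        if adopted[i]:
            continue
        neighbors = [adopted[(i + d) % N] for d in (-2, -1, 1, 2)]
        if (sum(neighbors) / len(neighbors) >= threshold
                or random.random() < 0.002):  # rare spontaneous adoption
            nxt[i] = True
    adopted = nxt
    history.append(sum(adopted) / N)

print(f"final adoption share: {history[-1]:.2f}")
```

Raising the threshold or rewiring the network changes the diffusion speed and can stall adoption entirely, which is the sensitivity to agent rules and interaction topology flagged under weaknesses.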

Key features:

  • Heterogeneity is fundamental (not an add-on)
  • Interactions and networks matter
  • Emergence: macro patterns not imposed but arise from micro behavior
  • Can exhibit tipping points, regime shifts, path dependence
  • No equilibrium assumption

Strengths: Natural framework for adoption dynamics, distributional analysis, and phenomena that depend on heterogeneity and interaction (herding, cascades, lock-in).

Weaknesses: Hard to calibrate (many free parameters); results can be sensitive to agent rules and interaction topology; less analytically tractable; validation is challenging.

Representative models:

| Model | Developer | Scope |
| --- | --- | --- |
| AMIRIS | DLR | Electricity market |
| ENGAGE | Various | Energy transition dynamics |
| EURACE | Various | European macroeconomy |
| Energy Transition Model (ABM variants) | Various | National energy systems |

Key references: Farmer & Foley (2009), Hansen et al. (2019), Arthur (2021)


E.9 ML-Augmented Approaches

Identity: Not a standalone modeling paradigm but a set of techniques that enhance traditional E3 models: surrogate models for speed, machine learning for parameter estimation, and differentiable programming for gradient-based calibration.

Key techniques:

Surrogate models / emulators:

  • Train a GP or neural network on a small set of full-model runs
  • Use the surrogate for Monte Carlo, optimization, or Bayesian calibration
  • Enables uncertainty quantification for models that are otherwise too slow
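The surrogate workflow can be sketched end to end. Here a cheap stand-in function plays the role of the expensive model, and a quadratic polynomial plays the role of the emulator (a GP or neural network in practice); all quantities are illustrative:

```python
# Surrogate-model sketch: run the "expensive" model a handful of times,
# fit a cheap emulator, then do Monte Carlo on the emulator.
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(climate_sensitivity):
    """Stand-in for a slow full-model run (e.g., an IAM returning
    end-of-century warming for a given climate sensitivity)."""
    return 0.6 * climate_sensitivity + 0.05 * climate_sensitivity ** 2

# 1) A few full-model runs at design points
design = np.linspace(1.5, 6.0, 8)
runs = np.array([expensive_model(x) for x in design])

# 2) Fit the cheap surrogate
surrogate = np.poly1d(np.polyfit(design, runs, deg=2))

# 3) Monte Carlo on the surrogate: 100k draws, effectively free
samples = rng.lognormal(mean=np.log(3.0), sigma=0.3, size=100_000)
warming = surrogate(samples)
print("median warming:", round(float(np.median(warming)), 2),
      "| 95th pct:", round(float(np.percentile(warming, 95)), 2))
```

The honesty caveat from the weaknesses below applies directly: the Monte Carlo is only as trustworthy as the surrogate's fit in the region the input distribution actually samples.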

ML for parameter estimation:

  • Estimate elasticities, learning rates, or damage function parameters from large datasets using ML methods (random forests, neural nets, causal forests)
  • Can handle nonlinearities and interactions that traditional econometrics misses

Differentiable programming:

  • Implement the economic model in an automatic differentiation framework (JAX, PyTorch)
  • Enables gradient-based calibration, sensitivity analysis, and optimization
  • Promising for large-scale models where derivative-free methods are too slow

Physics-Informed Neural Networks (PINNs):

  • Encode physical constraints (energy balance, mass conservation) as loss terms
  • Learn model parameters from data while respecting physics
  • Early-stage for E3 applications but promising for process models

Strengths: Speed (surrogates), scalability (differentiable programming), flexibility (ML estimation), formal UQ (Bayesian surrogates).

Weaknesses: "Black box" risk if not carefully validated; requires ML expertise alongside domain knowledge; surrogates are only as good as the underlying model runs they're trained on.

Key references: Razavi et al. (2012), Kasim et al. (2022), Cranmer et al. (2020)
