Chapter 14: Financial Networks and Systemic Risk
About This Chapter
On the morning of September 15, 2008, Lehman Brothers Holdings Inc. filed for bankruptcy — at the time, the largest such filing in American history. Within hours, the Reserve Primary Fund, a money-market fund holding Lehman commercial paper, “broke the buck,” triggering a run that froze the entire money-market industry. Within days, interbank lending seized globally. Within weeks, governments on three continents had committed trillions of dollars to prevent a complete collapse of the financial system.
Lehman was not the largest bank in the world. It was not even close. Citigroup, JPMorgan Chase, and Bank of America each held balance sheets four to five times larger. What Lehman held, in excess of anything its size alone would suggest, was a position of exceptional centrality in a dense web of bilateral financial exposures: repurchase agreements, derivative contracts, unsecured loans, and prime brokerage relationships that tied it to virtually every other major institution. When Lehman fell, the network conducted the shock outward at a speed and scale that the size-focused risk frameworks of regulators and risk managers alike had entirely failed to anticipate.
This chapter develops the formal tools for understanding why. The central insight — one that distinguishes systemic risk analysis from the risk management of individual institutions — is that the financial system is a network, and the properties of that network determine how shocks originate, amplify, and propagate. A bank that looks perfectly safe on a standalone basis may be the critical node whose failure cascades through every connected counterparty. A shock that is too small to bankrupt any individual institution in isolation may, traveling across network exposures, accumulate enough force to push several of them into default simultaneously.
We proceed as follows. We first build a formal model of the interbank network and establish its empirically observed structural properties. We then develop the Eisenberg–Noe clearing mechanism, the theoretical foundation for computing which banks default when a shock hits the system. We apply DebtRank — the systemic-importance centrality measure analogous to PageRank — to rank banks by their contribution to aggregate financial fragility. We analyze the Acemoglu–Ozdaglar–Tahbaz-Salehi result on how network density changes the qualitative nature of contagion. We examine fire-sale amplification through common-asset holdings. And we close with a mini case study that assembles all these mechanisms into a single integrated stress test, tracing a 25% asset shock through direct losses, forced liquidations, price impacts, and second-round defaults — the full anatomy of a financial crisis.
The mathematics required is linear algebra and fixed-point theory, all implemented in NumPy. The economics is the 2008 crisis, calibrated with real numbers.
Table of Contents
- The Interbank Network
- Eisenberg–Noe (2001): The Clearing Vector
- Stress Testing — Pure Contagion vs. Pure Asset Shocks
- DebtRank: A Centrality Measure for Systemic Importance
- Acemoglu–Ozdaglar–Tahbaz-Salehi (2015): When Density Helps and When It Hurts
- Fire Sales and Common-Asset Contagion
- CoVaR and Marginal Expected Shortfall
- Mini Case Study: A Full Two-Round Stress Test
The Interbank Network
Nodes, edges, and exposure weights
A financial system can be represented as a directed weighted graph \(G = (V, E, W)\), where the node set \(V = \{1, 2, \ldots, n\}\) indexes financial institutions (banks, broker-dealers, money-market funds, or any entity with bilateral exposures), a directed edge \((i \to j)\) in \(E\) indicates that institution \(j\) owes money to institution \(i\), and the weight \(w_{ij} \geq 0\) is the face value of that obligation.
The full set of bilateral obligations is encoded in the interbank liability matrix \(L \in \mathbb{R}^{n \times n}_{\geq 0}\), where \(L_{ij}\) is the nominal amount that bank \(j\) owes to bank \(i\). This matrix is sparse — most pairs of banks have no direct exposure — but in a large banking system the non-zero entries collectively represent tens of trillions of dollars of claims.
Alongside the interbank obligations, each bank \(i\) holds:
- External assets \(a_i \geq 0\): loans to non-financial firms, government bonds, mortgage securities, and any other assets that can be valued without reference to what other banks owe.
- External liabilities \(b_i \geq 0\): obligations to depositors, bondholders, and other creditors outside the banking system. Adding the interbank obligations gives bank \(i\)'s total nominal obligation, or par value of debt: \(\bar{p}_i = b_i + \sum_j L_{ji}\), the sum of everything it owes outside and inside the banking system.
The bank’s net worth in the absence of any interbank default is:
\[e_i = a_i + \sum_j L_{ij} - \bar{p}_i\]
When \(e_i > 0\), bank \(i\) is solvent on a standalone basis. When \(e_i < 0\), it is insolvent even before accounting for potential counterparty defaults.
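The balance-sheet identities above take only a few lines of NumPy. The sketch below uses a toy three-bank system with illustrative numbers, treating \(\bar{p}_i\) as the total nominal obligation (external plus interbank):

```python
import numpy as np

# Toy three-bank system; all numbers are illustrative, not calibrated.
# L[i, j] = nominal amount bank j owes bank i (i is the creditor).
L = np.array([
    [0.0, 10.0, 5.0],
    [8.0,  0.0, 4.0],
    [2.0,  6.0, 0.0],
])

a = np.array([30.0, 25.0, 18.0])   # external assets
b = np.array([25.0, 18.0, 20.0])   # external (non-bank) liabilities

interbank_assets = L.sum(axis=1)   # each bank's claims on other banks
interbank_liabs  = L.sum(axis=0)   # each bank's obligations to other banks
p_bar = b + interbank_liabs        # total nominal obligation per bank

# Standalone net worth: e_i = a_i + sum_j L_ij - p_bar_i
e = a + interbank_assets - p_bar
print(e)   # [10.  3. -3.] -- bank 2 is insolvent on a standalone basis
```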
Empirically observed structure
Three structural properties of real interbank networks have been established by the empirical literature (Upper, 2011; Craig and von Peter, 2014; Alves et al., 2013):
Core-periphery organization. A small group of large, internationally active banks — the “core” — have dense bilateral exposures with each other. A larger group of smaller, more specialized banks — the “periphery” — borrow from and lend to the core but have sparse mutual exposures. The 2007 BIS data on interbank lending among G10 banks revealed that the top 15 institutions accounted for over 60% of total interbank claims, while the bottom 1,000 institutions accounted for less than 10%.
Scale-free in-degree. The distribution of the number of creditors per bank follows an approximate power law: most banks borrow from only a few other institutions, while a handful of money-center banks borrow from hundreds. This is the same structural signature we identified in Chapter 4 for citation networks and the World Wide Web. The financial network is scale-free on the borrowing side.
Sparsity with high concentration. The interbank matrix \(L\) is sparse overall — perhaps 5–10% of potential bilateral pairs are active — but the active exposures are highly concentrated. This combination of sparsity and concentration is precisely the “robust-yet-fragile” structure that Haldane and May (2011) identified as the hallmark of the pre-crisis banking system: the network can absorb many small shocks because most banks are weakly connected to the affected institution, but a shock to a highly connected core institution propagates everywhere simultaneously.
The Bank for International Settlements (BIS) publishes quarterly data on consolidated banking statistics, including bilateral cross-border exposures at the country level. At the bank level, bilateral exposure data is confidential and held by national supervisors (the Federal Reserve, ECB, Bank of England). The Federal Reserve’s stress-testing program (CCAR — Comprehensive Capital Analysis and Review) uses bilateral exposure networks that are never published in full but have been described in Federal Reserve staff papers. The best publicly available proxies for bilateral bank exposures are the payment-system transaction data published by several central banks (Fedwire Funds Service, TARGET2 in Europe, CHAPS in the UK), which record actual money flows between institutions in real time and can be used to infer exposure patterns.
Live cell: constructing an interbank network with core-periphery structure
The stochastic block model (SBM) is the natural generative framework for core-periphery networks. We place \(n_c\) banks in the core and \(n_p\) banks in the periphery. A directed edge from bank \(j\) (creditor) to bank \(i\) (debtor) exists with probability \(p_{cc}\) if both are in the core, \(p_{cp}\) if \(j\) is in the core and \(i\) in the periphery, \(p_{pc}\) if \(j\) is in the periphery and \(i\) in the core, and \(p_{pp}\) if both are in the periphery. Core-periphery structure requires \(p_{cc} \gg p_{pp}\) and \(p_{cp}, p_{pc} > p_{pp}\).
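A minimal generator along these lines might look as follows; the block probabilities, the 3-core/12-periphery split, and the log-normal exposure sizes are illustrative choices, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

n_core, n_periph = 3, 12
n = n_core + n_periph
core = np.arange(n) < n_core          # first three banks form the core

# Block probabilities (illustrative, with p_cc >> p_pp).
p_cc, p_cp, p_pc, p_pp = 0.9, 0.4, 0.3, 0.02

# P[i, j] = probability that debtor j owes creditor i, depending on the
# core/periphery membership of creditor i and debtor j.
c = core
P = np.where(c[:, None] &  c[None, :], p_cc,
    np.where(c[:, None] & ~c[None, :], p_cp,
    np.where(~c[:, None] & c[None, :], p_pc, p_pp)))
np.fill_diagonal(P, 0.0)              # no self-exposures

# Draw the directed edges, then attach log-normal exposure sizes.
edges = rng.random((n, n)) < P
L = np.where(edges, rng.lognormal(mean=2.0, sigma=0.5, size=(n, n)), 0.0)

creditors_per_bank = (L > 0).sum(axis=0)   # in-degree on the borrowing side
print("core:", creditors_per_bank[core], "periphery:", creditors_per_bank[~core])
```

With these parameters the core banks end up with several times as many creditors as the periphery banks, which is the in-degree asymmetry the figure displays.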
Reading the figure. The left panel shows the directed exposure graph: core banks (red, center) are densely interconnected and receive many inbound exposures from periphery banks (blue, outer ring). Edge thickness is proportional to the size of the obligation. The right panel confirms the in-degree asymmetry: core banks are owed money by many more counterparties than periphery banks, reflecting their role as wholesale funding intermediaries. A shock to any core bank — C1, C2, or C3 — propagates outward through a much denser web of claims than a shock to any periphery bank.
Eisenberg–Noe (2001): The Clearing Vector
The setting
Larry Eisenberg and Thomas Noe’s 2001 paper in Management Science posed the following question: given a network of interbank obligations, and given that some banks may not be able to pay in full, what is the equilibrium outcome — the final vector of payments that clears the system while respecting two constraints that are fundamental to bankruptcy law?
The two constraints are:
Limited liability. No bank can pay more than it actually has. If bank \(j\)’s available resources are less than its nominal obligation \(\bar{p}_j\), it pays whatever it has and enters default.
Proportional sharing (the absolute priority rule for creditors of equal seniority). If bank \(j\) cannot pay in full, all its creditors receive the same recovery rate — they are paid in proportion to their nominal claim. No creditor can be favored over another.
These two principles jointly define the clearing payment vector \(p^* = (p_1^*, p_2^*, \ldots, p_n^*)\), where \(p_i^*\) is the actual amount bank \(i\) pays to its creditors in equilibrium.
The clearing map
Define the relative liability matrix \(\Pi\) where \(\Pi_{ij} = L_{ij} / \bar{p}_j\) is the fraction of bank \(j\)'s total nominal obligations that are owed to bank \(i\). Each column of \(\Pi\) sums to at most 1; the shortfall from 1 is the share of bank \(j\)'s obligations owed to external (non-bank) creditors. Then bank \(i\)'s available resources, when the rest of the system pays \(p\), are:
\[v_i(p) = a_i + \sum_j \Pi_{ij} p_j\]
The first term is bank \(i\)’s external assets. The second term is its receipts from interbank claims: for each bank \(j\), bank \(i\) receives fraction \(\Pi_{ij}\) of whatever \(j\) actually pays. Given \(v_i(p)\), the limited-liability constraint gives:
\[p_i^* = \min\!\left(\bar{p}_i,\; a_i + \sum_j \Pi_{ij} p_j^*\right)\]
In vector form, define the clearing map \(\Phi: [0, \bar{p}] \to [0, \bar{p}]\):
\[\Phi(p)_i = \min\!\left(\bar{p}_i,\; a_i + \sum_j \Pi_{ij} p_j\right) \tag{14.1}\]
The clearing payment vector \(p^*\) is a fixed point of \(\Phi\): it satisfies \(p^* = \Phi(p^*)\).
Existence and uniqueness of the greatest fixed point
Theorem (Eisenberg–Noe 2001). Under mild regularity conditions (the network is “regular” in their sense — essentially, every defaulting bank is ultimately connected to some solvent entity), the clearing map \(\Phi\) has a greatest fixed point \(p^*\). This greatest fixed point is the unique economically relevant clearing vector: it maximizes total payments, assigns the highest possible recovery rates to all creditors, and is the outcome of any orderly bankruptcy process.
The proof uses Tarski’s fixed-point theorem. \(\Phi\) is a monotone operator on the complete lattice \([0, \bar{p}]\) ordered componentwise. By Tarski’s theorem, every monotone operator on a complete lattice has a greatest fixed point. The iteration that converges to it is called fictitious-default iteration:
\[p^{(0)} = \bar{p} \quad \text{(assume all banks pay in full)}\] \[p^{(t+1)} = \Phi(p^{(t)}) \quad \text{(update based on available resources given previous iteration)}\]
Starting from the upper end of the lattice (all payments at face value) and iterating downward, \(\{p^{(t)}\}\) is a decreasing sequence that converges monotonically to \(p^*\). The closely related fictitious-default algorithm, which at each round fixes the current set of defaulting banks and solves exactly for their payments, terminates in at most \(n\) rounds for a network of \(n\) banks, since each round either leaves the default set unchanged (in which case it stops) or adds at least one bank to it.
Numerical stability. The iteration must be initialized at \(p^{(0)} = \bar{p}\) (the full nominal payment vector), not at zero. Starting from zero would converge to the least fixed point (everyone defaults maximally), which is economically irrelevant. Also, since all quantities are monetary amounts in a specific currency, watch for rounding errors when \(a_i + \sum_j \Pi_{ij} p_j^*\) is very close to \(\bar{p}_i\) — the distinction between default and solvency at the margin. A tolerance of \(10^{-8}\) in the \(\ell^\infty\) norm is standard.
Live cell: implementing the Eisenberg–Noe clearing algorithm
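A compact version of the algorithm, shown here on a three-bank system rather than the six-bank system of the cell, might look like this. The balance sheet is illustrative:

```python
import numpy as np

def clearing_vector(L, a, b, tol=1e-8, max_iter=100_000):
    """Greatest fixed point of the Eisenberg-Noe clearing map.

    L[i, j] : amount bank j owes bank i
    a       : external assets
    b       : external (non-bank) liabilities
    Returns the clearing payments p_star and nominal obligations p_bar.
    """
    p_bar = b + L.sum(axis=0)                       # total nominal obligations
    Pi = np.divide(L, p_bar, out=np.zeros_like(L), where=p_bar > 0)
    p = p_bar.copy()                                # start from full payment
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, np.maximum(0.0, a + Pi @ p))
        if np.max(np.abs(p_new - p)) < tol:
            return p_new, p_bar
        p = p_new
    raise RuntimeError("clearing iteration did not converge")

# Illustrative three-bank system.
L = np.array([[0.0, 10.0, 5.0],
              [8.0,  0.0, 4.0],
              [2.0,  6.0, 0.0]])
a = np.array([30.0, 12.0, 10.0])
b = np.array([25.0, 18.0, 20.0])

p_star, p_bar = clearing_vector(L, a, b)
recovery = p_star / p_bar
print(recovery)        # banks with recovery < 1 are in default
```

Starting from \(p^{(0)} = \bar{p}\) and iterating downward yields the greatest fixed point, as required; the `np.maximum(0.0, ...)` guard simply keeps payments non-negative if a bank's external assets are wiped out entirely.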
Reading the output. The table shows each bank’s nominal obligation, its actual clearing payment, and the implied recovery rate. Banks that pay less than par (recovery rate \(< 1\)) are in default. The net worth column shows which banks remain solvent after accounting for interbank receipts: a negative net worth confirms bankruptcy. Notice that some banks that appeared solvent on a standalone basis (positive \(e_i = a_i - \bar{p}_i\) when ignoring interbank positions) may be pushed into default by the reduced payments they receive from their own defaulting counterparties. That second-order contagion is the essence of the Eisenberg–Noe mechanism.
The Eisenberg–Noe framework is the theoretical foundation used by the Federal Reserve, ECB, and Bank of England in their network-based stress-testing models. The Fed’s CCAR (Comprehensive Capital Analysis and Review) and DFAST (Dodd–Frank Act Stress Test) programs apply a version of this logic when computing contagion losses across the 33 largest US bank holding companies. In practice, the bilateral exposure matrix \(L\) is collected through regulatory reporting (Schedule HC-L for derivatives, FR Y-15 for systemic indicators) and the clearing calculation is repeated across thousands of macroeconomic scenarios. The 2012 European Banking Authority (EBA) stress test famously revealed that several European sovereigns appeared in the interbank liability matrix as implicit counterparties via sovereign bond holdings — an observation that later motivated the ESRB’s work on sovereign-bank doom loops.
Stress Testing — Pure Contagion vs. Pure Asset Shocks
Two channels of loss propagation
Financial contagion travels through two distinct channels. Understanding which channel dominates in a given scenario determines both the magnitude of the systemic event and the appropriate policy response.
Channel 1 — Direct contagion (network channel). Bank A defaults. Bank B holds a claim on Bank A that is now only partially repaid. Bank B’s assets shrink by the haircut on that claim. If the haircut is large enough to exhaust Bank B’s equity buffer, Bank B also defaults, passing further losses to Bank C, and so on. This is the pure interbank contagion mechanism captured by Eisenberg–Noe: losses travel through the liability graph.
Channel 2 — Common asset exposure (portfolio channel). Banks A and B both hold the same portfolio of mortgage-backed securities. An aggregate shock to housing prices reduces the value of those securities simultaneously for both banks. Neither bank is directly connected to the other — the contagion channel is inactive — but both suffer the same correlated loss. This is the mechanism underlying the 2007 quant quake, when dozens of quantitative long-short equity funds held nearly identical factor exposures and a simultaneous deleveraging by a few funds triggered mark-to-market losses for all of them.
In reality, both channels operate at once and interact. A common asset shock reduces external assets \(a_i\), which then flows through the Eisenberg–Noe clearing mechanism to produce additional defaults via the network channel. The amplification that results from the combination of the two channels is consistently larger than either channel alone — a finding confirmed in Greenwood, Landier, and Thesmar (2015) and in most calibrated systemic risk models.
Shock decomposition in the Eisenberg–Noe framework
To isolate the network channel, apply a shock to bank \(i\)’s external assets only: \(a_i \to a_i - \delta\), and re-run the clearing algorithm. Any additional defaults generated by this re-run that would not have occurred without the shock are attributable to network contagion.
To isolate the portfolio channel, reduce all banks’ external assets simultaneously by an amount proportional to their holdings of the shocked asset class, and re-run. Additional defaults are attributable to common exposure.
The amplification ratio measures how much larger the total loss is than the initial shock: if the shock size is \(\delta\) (in dollar terms) and the total reduction in aggregate payments to external creditors is \(\Delta\), then the amplification ratio is \(\Delta / \delta\). In calibrated models of the pre-crisis banking system, this ratio typically lies between 1.5 and 4.
Before running the next cell, predict: if we shock the largest bank’s external assets by 30%, how many of the six banks will be in default in the clearing equilibrium? Think about which bank is most central and how losses propagate through the obligation network. Write down your prediction, then run the cell and compare.
Live cell: shock propagation through the network
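The experiment can be sketched on an illustrative four-bank system in which every bank is solvent before the shock. Note that in this small, well-capitalized example the equity buffers absorb part of the shock, so the amplification ratio lands below 1; thinner buffers or more concentrated exposures push it above 1:

```python
import numpy as np

def clearing_vector(L, a, b, tol=1e-10, max_iter=100_000):
    """Eisenberg-Noe clearing payments (see the previous live cell)."""
    p_bar = b + L.sum(axis=0)
    Pi = np.divide(L, p_bar, out=np.zeros_like(L), where=p_bar > 0)
    p = p_bar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, np.maximum(0.0, a + Pi @ p))
        if np.max(np.abs(p_new - p)) < tol:
            return p_new, p_bar
        p = p_new
    raise RuntimeError("did not converge")

# Illustrative four-bank system; all banks solvent pre-shock.
L = np.array([[0.0, 8.0, 6.0, 4.0],
              [5.0, 0.0, 5.0, 3.0],
              [4.0, 6.0, 0.0, 2.0],
              [3.0, 2.0, 4.0, 0.0]])
a = np.array([20.0, 16.0, 16.0, 17.0])
b = np.array([15.0, 12.0, 12.0, 15.0])

p_base, p_bar = clearing_vector(L, a, b)
assert np.allclose(p_base, p_bar)          # baseline: no defaults

delta = 20.0                               # shock bank 0's external assets
a_shocked = a.copy()
a_shocked[0] -= delta
p_shock, _ = clearing_vector(L, a_shocked, b)

shortfall = p_bar - p_shock
defaults = np.where(shortfall > 1e-6)[0]
amplification = shortfall.sum() / delta
print("defaulting banks:", defaults)       # banks 0, 1, 2; bank 3's buffer holds
print("amplification ratio:", round(amplification, 3))
```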
Reading the output. The amplification ratio tells you how many dollars of total payment reduction occur for each dollar of the initial shock. A ratio above 1.0 means the network has amplified the shock: counterparties of the shocked bank receive less, reduce their own payments, and so on. The payment waterfall chart makes the cascade visible bank by bank — the orange bar (post-shock payment) and the red stacked extension (default shortfall) together show which banks absorb first-round versus second-round losses.
DebtRank: A Centrality Measure for Systemic Importance
Why existing centrality measures fall short
Degree centrality, betweenness, and PageRank were all designed for networks where importance means something about information flow or social endorsement. For financial networks, the question is different: how much aggregate economic value is destroyed if bank \(i\) experiences distress? This is not a question about random walks. It is a question about loss propagation — and it requires a centrality measure that respects the financial mechanics of the system.
In-degree is a rough proxy: banks with many creditors spread their distress to many counterparties. But it ignores the size of exposures and the equity buffer of downstream banks. A bank can have high in-degree but all its counterparties hold such large capital buffers that none of them defaults. A bank with only two counterparties, each leveraged 30-to-1 against the claim on this bank, may be far more systemically dangerous.
The DebtRank recursion
Battiston, Puliga, Kaushik, Tasca, and Caldarelli (2012) introduced DebtRank in Nature Scientific Reports as a recursive centrality measure for systemic importance. Define the economic weight of bank \(i\) as:
\[w_i = \frac{E_i}{\sum_j E_j}\]
where \(E_i\) is the total equity (net worth) of bank \(i\). The economic weight is the fraction of total system equity held by bank \(i\): if bank \(i\) is wiped out, the system loses fraction \(w_i\) of its total capital base.
The impact of bank \(i\) on bank \(j\) through a direct exposure is:
\[W_{ij} = \frac{L_{ji}}{E_j}\]
This is the fraction of bank \(j\)’s equity capital that would be wiped out if bank \(i\) fails to pay in full — a leverage-weighted transmission coefficient. Note: \(L_{ji}\) is the amount bank \(i\) owes bank \(j\) (bank \(j\) is a creditor of bank \(i\)), so \(W_{ij}\) measures how much of \(j\)’s equity is at risk from \(i\)’s default.
DebtRank then propagates distress recursively. Assign each bank a distress level \(h_i^{(t)} \in [0, 1]\) at time step \(t\), where 0 is fully solvent and 1 is fully defaulted. Initialize by shocking bank \(i\): \(h_i^{(0)} = 1\), \(h_j^{(0)} = 0\) for \(j \neq i\). The DebtRank update rule is:
\[h_j^{(t+1)} = \min\!\left(1,\; h_j^{(t)} + \sum_{k: h_k \text{ changed at } t} W_{kj} \cdot h_k^{(t)}\right) \tag{14.2}\]
where the sum is over banks \(k\) whose distress level changed at time \(t\) — they have not yet been “absorbed” in the language of Battiston et al. Banks that reach \(h = 1\) (full default) or that have already transmitted their distress are absorbed and do not propagate further. The algorithm terminates when no distress level changes.
The DebtRank of bank \(i\) is:
\[DR_i = \sum_j w_j \, h_j^{(\infty)} - w_i h_i^{(0)} \tag{14.3}\]
The second term subtracts bank \(i\)’s own contribution (we care about the externality imposed on others). \(DR_i \in [0, 1]\) measures the fraction of total system equity that is at risk due to bank \(i\)’s distress. A \(DR_i = 0.3\) means bank \(i\)’s failure puts 30% of the system’s equity capital at risk.
DebtRank has been adopted by the European Central Bank as a component of its systemic risk monitoring framework. The ECB’s Network Analysis of MFI (Monetary and Financial Institution) Exposures publishes quarterly DebtRank estimates for Euro-area banks, though the bilateral exposure data underlying the computation is not published. The Federal Reserve Bank of New York uses a closely related measure — the “Systemic Capital Adequacy Requirement” — that scales capital surcharges for SIFIs by a version of this network-propagation logic. Both measures address the same fundamental gap that Basel II left open: a bank’s capital requirement was determined by its own standalone risk, with no adjustment for the network externality imposed on the system by the bank’s potential failure.
Live cell: DebtRank from scratch
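One way to implement the recursion, again on a toy four-bank system with illustrative equity levels rather than the cell's six banks. In this reading of the algorithm each bank propagates its distress once and is then absorbed, and each transmission coefficient is capped at the full equity of the receiving bank:

```python
import numpy as np

def debtrank(L, E, i0):
    """DebtRank of bank i0 -- one common reading of Battiston et al. (2012).

    L[i, j] : amount bank j owes bank i, so bank j's claim on i is L[j, i]
    E       : equity (net worth) of each bank
    """
    n = len(E)
    w = E / E.sum()                         # economic weights
    # W[i, j]: fraction of j's equity wiped out if i defaults, capped at 1.
    W = np.minimum(1.0, L.T / E)            # W[i, j] = L[j, i] / E[j]
    h = np.zeros(n)
    h[i0] = 1.0
    active = {i0}                           # distressed, not yet propagated
    inactive = set()
    while active:
        impact = np.zeros(n)
        for k in active:
            impact += W[k] * h[k]           # k passes distress to its creditors
        inactive |= active                  # propagators are absorbed
        h_new = np.minimum(1.0, h + impact)
        active = {j for j in range(n)
                  if h_new[j] > h[j] + 1e-12 and j not in inactive}
        h = h_new
    return w @ h - w[i0]                    # exclude i0's own initial distress

# Illustrative system: bank 3 is large, banks 1-2 are thinly capitalized.
L = np.array([[0.0, 8.0, 6.0, 4.0],
              [5.0, 0.0, 5.0, 3.0],
              [4.0, 6.0, 0.0, 2.0],
              [3.0, 2.0, 4.0, 0.0]])
E = np.array([10.0, 5.0, 5.0, 20.0])

scores = np.array([debtrank(L, E, i) for i in range(4)])
print(scores)
```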
Reading the output. The three bar charts rank the six banks by DebtRank, normalized in-degree, and normalized eigenvector centrality. The key observation is that the rankings often disagree — particularly for banks that have moderate connectivity but high leverage. A bank with few counterparties but whose counterparties hold large, poorly capitalized claims is more systemically dangerous than a bank that is merely well-connected. DebtRank captures this because it scales each exposure by the receiving bank’s equity capital: an exposure of $30M to a bank with $35M of equity is far more threatening than the same exposure to a bank with $200M of equity. Degree and eigenvector centrality cannot distinguish these two cases.
Acemoglu–Ozdaglar–Tahbaz-Salehi (2015): When Density Helps and When It Hurts
The central paradox of financial networks
For decades, regulators and practitioners assumed that a more densely interconnected banking system was a safer one. The logic was simple: if each bank spreads its exposure across many counterparties, a shock is shared, diluted, and absorbed. This is risk-sharing through diversification applied at the network level. The post-crisis conventional wisdom swung in the opposite direction: interconnection created contagion.
Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi published a 2015 American Economic Review paper that resolved this apparent contradiction with a sharp theoretical result. Their key finding, stated informally, is:
For small shocks, a more interconnected financial network is more stable. Each bank’s creditors absorb a smaller share of the shock, no single creditor is wiped out, and the shock dissipates without generating defaults. For large shocks above a critical threshold, a more interconnected network is less stable. The very density that spread small losses now transmits large losses to every connected institution simultaneously, producing a wave of correlated defaults.
This is the robust-yet-fragile property: robustness to small shocks coexists with fragility to large ones in the same network. The crossover between the two regimes occurs at a shock size threshold that depends on the network architecture.
Formal statement
Fix a total interbank exposure budget \(\bar{L}\) (the aggregate dollar value of bilateral obligations in the system). Distribute this budget across a network of \(n\) banks in different configurations, varying the density \(\rho\) (fraction of possible bilateral pairs that have active exposures) while keeping \(\bar{L}\) constant. As \(\rho\) increases, each bank has more counterparties but each individual exposure is smaller (since the budget is fixed).
For a shock of size \(\delta\) to the external assets of one bank:
- If \(\delta\) is small: denser networks produce fewer defaults. Risk sharing dominates.
- If \(\delta\) is large (above a threshold \(\delta^*\) that depends on the equity buffers): denser networks produce more defaults. Contagion dominates.
The threshold \(\delta^*\) is roughly of order \(n\varepsilon\), where \(\varepsilon\) is the minimum equity buffer per bank. The intuition: as long as each bank’s share of the shock (\(\delta / \rho n\)) is below its equity buffer \(\varepsilon\), no bank defaults and risk sharing works perfectly. Once the per-bank share of the shock exceeds the equity buffer, every connected bank defaults — and more connections means more banks defaulting.
The Acemoglu–Ozdaglar–Tahbaz-Salehi result reframes the regulatory debate about “too interconnected to fail.” The pre-crisis endorsement of credit derivatives as a risk-sharing tool — they allowed banks to disperse credit risk broadly — was correct for the small shocks that characterized the 1990s and early 2000s. The crisis revealed that the underlying shock to US housing was large enough to cross the threshold: the very dispersion of mortgage risk through CDOs and credit default swaps that had improved small-shock stability turned into the channel through which a large-shock cascade propagated globally. The policy implication is that systemic resilience cannot be assessed without knowing the shock-size distribution: a network that is optimal for normal times may be catastrophic for tail events.
Live cell: the crossover between risk-sharing and contagion
Before running the cell, predict the sign of the slope of the large-shock curve: should more interconnected banks produce more or fewer defaults when the shock is large? Commit to a prediction and then verify. Most students initially predict the wrong sign for the large-shock case, which is precisely why this result was a surprise when first published.

Reading the figure. The left panel shows both shock sizes together: the crossover is the defining feature of the graph. Under a small shock (blue), defaults decline as density increases — risk-sharing is at work, and the banking system absorbs the shock without cascade. Under a large shock (red), defaults increase with density — the same mechanism that dispersed small losses now distributes a large shock to every connected bank simultaneously. The right panel isolates the small-shock regime to make the downward slope unambiguous. This crossover — and the threshold at which it occurs — is the central prediction of Acemoglu et al. (2015).
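The experiment itself can be sketched as follows. Each bank owes a fixed exposure budget, split equally among its \(k\) nearest neighbors on a ring, so density rises with \(k\) while aggregate exposure stays constant. The balance sheets are illustrative, with a thin equity buffer of 3 per bank so that the crossover is visible in a ten-bank system:

```python
import numpy as np

def clearing_vector(L, a, b, tol=1e-10, max_iter=200_000):
    """Eisenberg-Noe clearing payments (see the earlier live cell)."""
    p_bar = b + L.sum(axis=0)
    Pi = np.divide(L, p_bar, out=np.zeros_like(L), where=p_bar > 0)
    p = p_bar.copy()
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, np.maximum(0.0, a + Pi @ p))
        if np.max(np.abs(p_new - p)) < tol:
            return p_new, p_bar
        p = p_new
    raise RuntimeError("did not converge")

def n_defaults(n, k, shock, budget=100.0, ext_a=100.0, ext_b=97.0):
    """Regular ring-like network: bank j owes budget/k to each of the k banks
    j+1, ..., j+k (mod n), so every density k has the same total exposure."""
    L = np.zeros((n, n))
    for j in range(n):
        for m in range(1, k + 1):
            L[(j + m) % n, j] = budget / k
    a = np.full(n, ext_a)
    a[0] -= shock                          # shock bank 0's external assets
    b = np.full(n, ext_b)
    p_star, p_bar = clearing_vector(L, a, b)
    return int((p_star < p_bar - 1e-6).sum())

n = 10
small = [n_defaults(n, k, shock=10.0) for k in range(1, n)]
large = [n_defaults(n, k, shock=100.0) for k in range(1, n)]
print("small shock:", small)    # falls with density: risk sharing
print("large shock:", large)    # rises with density: contagion
```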
Fire Sales and Common-Asset Contagion
The mechanism
The Eisenberg–Noe framework takes external asset values as given. In reality, when a bank is in distress, it sells assets to cover losses — and those sales move prices. If other banks hold the same assets, they suffer mark-to-market losses even though no money has changed hands between them. The two banks may have zero bilateral interbank exposure. Nevertheless, one’s distress damages the other through the price channel.
This mechanism, modeled by Cifuentes, Ferrucci, and Shin (2005) and later formalized by Brunnermeier and Pedersen (2009), is the fire-sale externality: a bank that liquidates assets in a stress scenario imposes costs on all other holders of the same asset. The externality grows with leverage: a bank levered 20-to-1 that experiences a 2% decline in asset values must sell assets to restore its capital ratio, and even a modest forced sale of a large portfolio moves prices measurably.
The financial system can be represented as a bipartite bank-asset graph \(\mathcal{B} = (V_B \cup V_A, E)\), where \(V_B\) is the set of banks, \(V_A\) is the set of illiquid asset classes, and an edge \((i, a)\) with weight \(h_{ia}\) indicates that bank \(i\) holds quantity \(h_{ia}\) of asset \(a\). The asset portfolio matrix \(H \in \mathbb{R}^{n_B \times n_A}\) collects these holdings.
The price-impact equation
Let \(q_a\) denote the price of asset \(a\) after forced sales. The Cifuentes–Ferrucci–Shin model augments the Eisenberg–Noe clearing condition with a price-impact equation:
\[q_a = q_a^0 \cdot \exp\!\left(-\alpha_a \cdot \sum_i \Delta h_{ia}\right) \tag{14.4}\]
where \(q_a^0\) is the pre-crisis price, \(\Delta h_{ia} \geq 0\) is the quantity of asset \(a\) sold by bank \(i\) under its deleveraging constraint, and \(\alpha_a > 0\) is the price-impact coefficient (the inverse of market depth for asset \(a\)). The exponential form ensures prices remain positive; the linear approximation \(q_a \approx q_a^0 (1 - \alpha_a \sum_i \Delta h_{ia})\) is used in most calibrated models.
The deleveraging trigger: bank \(i\) must sell assets when its leverage ratio \(\lambda_i = A_i / E_i\) exceeds its regulatory maximum \(\bar{\lambda}_i\). The amount of forced selling is:
\[\text{sell}_i = \max\!\left(0,\; A_i - \bar{\lambda}_i E_i\right)\]
where \(A_i\) is total assets and \(E_i\) is equity after the shock. This system is coupled: prices affect equity, equity determines whether selling is triggered, selling moves prices, which further reduces equity. The interaction generates the amplification spiral.
The August 2007 “quant quake” is the canonical empirical example of fire-sale amplification through common-asset exposure rather than bilateral obligations. Several large quantitative equity funds held very similar long-short factor portfolios — effectively, identical positions in the same stocks. When one fund began deleveraging in early August 2007, its selling depressed the prices of the long legs and supported the prices of the short legs. Other funds holding the same positions suffered immediate mark-to-market losses and were also forced to delever, producing a feedback loop. Within a week, several hundred billion dollars of portfolio value had been destroyed across funds that had no direct bilateral exposure to one another. Khandani and Lo (2007) documented this event in detail; it became the first major warning that the 2007–2009 crisis would not be confined to mortgage credit.
Live cell: fire-sale amplification in a bipartite bank-asset system
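A minimal sketch of the spiral, assuming sale proceeds are used to pay down debt (so equity changes only through mark-to-market losses on remaining holdings) and using the linear price-impact approximation; the holdings matrix, impact coefficients, and leverage cap are all illustrative:

```python
import numpy as np

def fire_sale(H, q0, alpha, E0, lam_max, shock, max_rounds=50):
    """Iterated leverage-targeting fire sale with linear price impact.

    H[i, a] : units of asset a held by bank i     q0 : initial prices
    alpha   : price impact per unit sold          E0 : initial equity
    Assumes sale proceeds pay down debt, so equity changes only through
    mark-to-market losses on the holdings a bank still has.
    """
    H = H.copy()
    q = q0 * (1 - shock)                       # initial markdown
    E = E0 - H @ (q0 - q)                      # first-round loss
    for _ in range(max_rounds):
        A = H @ q                              # marked-to-market assets
        excess = np.maximum(0.0, A - lam_max * np.maximum(E, 0.0))
        if excess.max() < 1e-9:
            break                              # leverage constraints satisfied
        frac = np.divide(excess, A, out=np.zeros_like(A), where=A > 0)
        sold = frac[:, None] * H               # proportional slice of portfolio
        H = H - sold
        q_new = q * (1 - alpha * sold.sum(axis=0))   # linear price impact
        E = E - H @ (q - q_new)                # next-round mark-to-market loss
        q = q_new
    return E, q

H = np.array([[60.0, 40.0],                    # bank 0: heavy in MBS
              [30.0, 70.0],
              [50.0, 50.0]])
q0 = np.array([1.0, 1.0])
alpha = np.array([2e-3, 1e-3])                 # asset 0 (MBS) is less liquid
E0 = np.array([10.0, 12.0, 11.0])

shock = np.array([0.10, 0.0])                  # 10% initial markdown of MBS
E_fin, q_fin = fire_sale(H, q0, alpha, E0, lam_max=12.0, shock=shock)

direct_loss = (H @ (q0 * shock)).sum()         # 14.0: first-round mark-to-market
total_loss = E0.sum() - E_fin.sum()
print("amplification factor:", round(total_loss / direct_loss, 2))
```

In this calibration banks 0 and 2 breach the leverage cap immediately, their selling marks down bank 1's portfolio even though bank 1 had no leverage breach of its own, and the spiral ultimately pushes banks 0 and 2 into insolvency.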
Reading the output. The amplification factor measures how much larger the total equity loss is with fire sales than without. A factor above 1.0 confirms that the price-impact channel amplifies the initial shock. Banks most exposed to the shocked asset (MBS in this case) suffer the largest first-round losses. Banks heavily exposed to other assets that are also sold as collateral damage (the correlated deleveraging effect) suffer second-round losses through the common-asset channel. The equity trajectory chart makes the spiral dynamics visible: equity falls in steps as each round of forced selling moves prices further against the distressed banks.
CoVaR and Marginal Expected Shortfall
The limits of standalone risk measures
Value at Risk (VaR) at the \(q\)-th quantile of bank \(i\)’s loss distribution, \(\text{VaR}_q^i\), is the loss that bank \(i\) will not exceed with probability \(q\). It is computed on bank \(i\) in isolation, using bank \(i\)’s own asset distribution. As a measure of systemic risk, it suffers from a fundamental limitation: it says nothing about the risk that bank \(i\)’s distress imposes on the rest of the financial system.
Two complementary systemic risk measures have been developed to address this: CoVaR (Adrian and Brunnermeier, 2016) and Marginal Expected Shortfall (Acharya, Pedersen, Philippon, and Richardson, 2017).
CoVaR
Define the Conditional Value at Risk of institution \(j\) given that institution \(i\) is in distress as:
\[\text{CoVaR}_{q}^{j|i} = \text{VaR}_q\!\left(r_j \mid r_i = \text{VaR}_q^i\right) \tag{8.5}\]
where \(r_i\) and \(r_j\) are the returns (or losses) of institutions \(i\) and \(j\). In practice, Adrian and Brunnermeier define “distress” as institution \(i\) being at its own \(q\)-th quantile (typically \(q = 0.01\) or \(q = 0.05\), so we are conditioning on a severe event for \(i\)).
The \(\Delta\)CoVaR of institution \(i\) with respect to the system is the difference between the system’s VaR conditional on \(i\)’s distress and the system’s VaR when \(i\) is at its median:
\[\Delta\text{CoVaR}_q^{j|i} = \text{CoVaR}_q^{j|i} - \text{CoVaR}_{0.5}^{j|i} \tag{8.6}\]
Here the subscript \(0.5\) indicates that institution \(i\) is conditioned at its median state rather than at its \(q\)-th quantile; the quantile level applied to the system remains \(q\) in both terms.
A large \(|\Delta\text{CoVaR}|\) for institution \(i\) means that \(i\)’s stress is associated with large movements in the system — bank \(i\) is systemically important.
CoVaR is estimated empirically from a quantile regression of the system’s weekly returns on institution \(i\)’s returns and a set of lagged state variables:
\[r_{\text{system},t} = \alpha^{j|i} + \gamma^{j|i} r_{i,t} + \boldsymbol{\beta}^{j|i} \boldsymbol{Z}_{t-1} + \varepsilon_t\]
The conditional quantile regression at level \(q\) delivers the CoVaR estimate. The slope coefficient \(\hat{\gamma}_q^{j|i}\) is the key parameter: it measures how much a 1-unit move in institution \(i\)’s loss distribution shifts the system-level loss at quantile \(q\).
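The definitions in Eqs. (8.5)–(8.6) can also be illustrated nonparametrically, without a quantile regression: condition directly on the empirical distress event. The sketch below uses simulated returns; the data-generating process, the approximation of \(r_i = \text{VaR}_q^i\) by the event \(r_i \le \text{VaR}_q^i\), and the "middle 50%" proxy for the median state are all illustrative assumptions.

```python
import numpy as np

# Nonparametric sketch of CoVaR / DeltaCoVaR on simulated returns.
rng = np.random.default_rng(42)
T, q = 50_000, 0.05

r_i = rng.standard_normal(T)                      # institution i's returns
r_sys = 0.6 * r_i + 0.8 * rng.standard_normal(T)  # system return loads on i (assumed DGP)

var_i = np.quantile(r_i, q)                       # VaR_q^i (return-space quantile)

# Eq. (8.5): q-quantile of system returns given i in distress
covar_distress = np.quantile(r_sys[r_i <= var_i], q)

# benchmark: i in its "median" state (middle 50% of its distribution)
normal = (r_i >= np.quantile(r_i, 0.25)) & (r_i <= np.quantile(r_i, 0.75))
covar_median = np.quantile(r_sys[normal], q)

delta_covar = covar_distress - covar_median       # Eq. (8.6)
print(f"VaR_i = {var_i:.2f}, CoVaR = {covar_distress:.2f}, "
      f"DeltaCoVaR = {delta_covar:.2f}")
```

Because the system loads positively on \(i\), conditioning on \(i\)'s distress pulls the system's tail quantile down, so \(\Delta\text{CoVaR}\) is negative (in return space).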
Why CoVaR is a network measure
At first glance, CoVaR looks like a pairwise statistical relationship between two return series — far removed from the network models developed in this chapter. The resemblance is misleading, however: Adrian and Brunnermeier show that \(\hat{\gamma}_q^{j|i}\) can be decomposed into:
- Direct exposure: institution \(j\) holds direct claims on institution \(i\), so \(j\)’s losses move with \(i\)’s losses through the interbank liability channel.
- Common factor exposure: \(i\) and \(j\) hold overlapping portfolios; a shock that distresses \(i\) also marks down \(j\)’s portfolio through the common-asset channel.
- Liquidity feedback: \(i\)’s distress causes market-wide liquidity deterioration that disproportionately damages institutions like \(j\) with similar funding structures.
All three channels are exactly the channels we have been studying in this chapter — the liability network of Eisenberg–Noe, the bipartite bank-asset network of Cifuentes–Ferrucci–Shin, and the funding-liquidity network of Brunnermeier and Pedersen. CoVaR compresses all three into a single reduced-form statistic that can be estimated from market prices alone, without needing the bilateral exposure data that regulators guard jealously.
Marginal Expected Shortfall
A closely related measure is the Marginal Expected Shortfall of institution \(i\), introduced by Acharya et al. (2017):
\[\text{MES}_i = \mathbb{E}\!\left[r_i \mid r_{\text{system}} < \text{VaR}_q^{\text{system}}\right] \tag{8.7}\]
MES is the expected return of institution \(i\) on the days when the system is in its worst \(q\%\) of outcomes. A large negative MES means that \(i\) tends to lose a lot precisely when the system as a whole is losing a lot — a high co-movement with systemic tail events. Acharya et al. show that MES is a strong predictor of which institutions will require government capital injections in a financial crisis: the institutions with the worst MES in 2006–2007 were significantly more likely to be bailed out or fail in 2008–2009.
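Eq. (8.7) translates directly into code: average each institution's return over the days on which the system return falls below its \(q\)-th quantile. The three banks and their factor loadings below are hypothetical, chosen so that tail co-movement varies across institutions.

```python
import numpy as np

# Sketch of Marginal Expected Shortfall (Eq. 8.7) on simulated returns.
rng = np.random.default_rng(7)
T, q = 50_000, 0.05
common = rng.standard_normal(T)                   # systematic factor

# three hypothetical banks with different loadings on the common factor
loadings = {"high_beta": 1.2, "mid_beta": 0.6, "low_beta": 0.1}
returns = {k: b * common + rng.standard_normal(T) for k, b in loadings.items()}
r_sys = np.mean(list(returns.values()), axis=0)   # equal-weight system return

tail = r_sys < np.quantile(r_sys, q)              # system's worst q% of days
mes = {k: float(r[tail].mean()) for k, r in returns.items()}   # Eq. (8.7)
print(mes)
```

The bank with the largest loading on the common factor has the most negative MES: it loses most precisely when the system is in its tail, which is exactly the co-movement the measure is designed to capture.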
The LIBOR–OIS spread became the real-time market indicator of systemic stress during the 2008 crisis. LIBOR (the rate at which banks said they could borrow unsecured from other banks) diverged sharply from the Overnight Index Swap rate (a near-riskless rate tied to the Fed Funds rate): the spread measures the credit and liquidity premium banks demand for unsecured interbank lending. On September 15, 2008, the 3-month LIBOR–OIS spread spiked to over 350 basis points, a level more than ten times its pre-crisis average. This single market signal captured the complete breakdown of interbank trust — the network, in effect, had dissolved into a collection of isolated nodes, each unwilling to extend credit to the others. Systemic risk models calibrated to pre-crisis network data had not assigned significant probability to this outcome because the models did not account for the possibility that the entire network structure could endogenously unravel when trust collapsed.
Mini Case Study: A Full Two-Round Stress Test
Setup and motivation
We now assemble all the mechanisms developed in this chapter into a single integrated stress test. The scenario is deliberately stylized but calibrated to be consistent with pre-crisis financial conditions: five banks, three illiquid asset classes, leverage of approximately 10x, and an initial shock to one asset class.
The stress test proceeds in two rounds:
Round 1 — Direct losses. Asset class 1 (MBS analogues) suffers a 25% price decline due to an exogenous shock (a housing market correction). Each bank marks its MBS holdings to market. Banks whose equity is insufficient to absorb the mark-to-market loss are in first-round distress.
Round 2 — Fire-sale amplification. Banks in distress are forced to deleverage to maintain their leverage constraint. Their forced sales move prices of all three asset classes (because deleveraging banks sell across their full portfolio, not just the shocked asset). The price decline in turn produces further mark-to-market losses for all banks — including those that survived Round 1 without distress. This is the second-round loss channel. Banks whose equity cannot absorb second-round losses are in second-round default.
The amplification factor is the ratio of total losses in the two-round scenario to the direct first-round losses alone. In calibrated models of the 2007–2008 crisis, Greenwood et al. (2015) find amplification factors of 2–4x for the US banking system; Adrian and Shin (2010) find similar magnitudes using individual bank balance-sheet data.
Live cell: the integrated stress test
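A minimal sketch of the two-round procedure this live cell runs, under an assumed stylized calibration (five banks, three assets, roughly 10x leverage, a 25% shock to the MBS-like asset, and a linear price-impact rule). All balance-sheet numbers and the market-depth parameter are illustrative, not estimates.

```python
import numpy as np

# Two-round integrated stress test (illustrative calibration).
# Holdings in units; columns = [MBS, corporate bonds, equities]
H = np.array([[50., 30., 20.],
              [35., 45., 20.],
              [20., 40., 40.],
              [10., 30., 60.],
              [45., 25., 30.]])
p = np.ones(3)
equity = (H @ p) / 10.0               # ~10x leverage
debt = H @ p - equity
target_lev = 10.0
depth = np.full(3, 400.0)             # assumed dollar market depth per asset
equity0_total = equity.sum()

# --- Round 1: direct mark-to-market losses ---
p[0] = 0.75                           # 25% MBS shock
eq1 = H @ p - debt
round1_loss = equity0_total - eq1.sum()

# --- Round 2: distressed banks deleverage, moving all three prices ---
assets = H @ p
lev = np.where(eq1 > 0, assets / eq1, np.inf)
excess = np.where(lev > target_lev,
                  np.minimum(assets, assets - target_lev * np.maximum(eq1, 0.0)),
                  0.0)                        # dollars each bank must sell
frac = excess / assets                        # sell pro rata across portfolio
sold = H * frac[:, None]
debt -= (sold * p).sum(axis=1)                # sale proceeds retire debt
H = H - sold
p = p * np.maximum(1.0 - (sold * p).sum(axis=0) / depth, 0.1)  # price impact

eq2 = H @ p - debt
total_loss = equity0_total - eq2.sum()
amplification_factor = total_loss / round1_loss
print(f"Round-1 loss: {round1_loss:.1f}, total loss: {total_loss:.1f}, "
      f"amplification: {amplification_factor:.2f}x")
```

Insolvent banks (negative Round-1 equity) liquidate their entire portfolios; survivors sell just enough to restore the leverage target. The resulting price declines fall on every bank still holding the assets, which is the second-round loss channel.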
Reading the output and figure. The waterfall chart tells the story in one image. The system enters the scenario with a certain equity base. The Round-1 bar (orange) shows what happens if banks simply absorb mark-to-market losses without any behavioral response — the “no fire sale” counterfactual. The Round-2 bar (red) shows the outcome when distressed banks are forced to deleverage: additional forced selling amplifies the price decline, inflicting second-round losses on banks that survived Round 1. The amplification factor is the ratio of the total drop to the Round-1 drop alone. In calibrated pre-crisis models this factor is typically between 2 and 4 — meaning that ignoring the fire-sale channel understates total systemic losses by a factor of two to four. This is the chapter’s central punchline: the network is not just a conduit for losses; through the behavioral response of leveraged institutions, it is an amplifier.
Summary and Reflection
This chapter has applied the network analysis toolkit developed across the entire book to the highest-stakes domain in applied economics: the stability of the financial system.
We began with the interbank network — nodes as banks, directed weighted edges as bilateral obligations — and established its core-periphery structure and scale-free in-degree distribution. These structural properties, familiar from our study of generative models in Chapter 4, have a direct consequence in the financial context: the network is simultaneously robust to small idiosyncratic shocks (periphery bank failures) and fragile to large shocks to core banks (the Acemoglu–Ozdaglar–Tahbaz-Salehi result, Section 5).
The Eisenberg–Noe clearing vector (Section 2) formalized the mechanics of default propagation. Given external asset values and the interbank liability matrix, the clearing algorithm finds the equilibrium vector of actual payments by iterating the clearing map to its greatest fixed point. Default cascades emerge naturally from this framework: a shock to one bank’s external assets reduces what it pays its creditors, which reduces their equity, which may push them below solvency, and so on through the network.
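The clearing iteration described above fits in a few lines. The sketch below starts from full payment and iterates the clearing map \(p_i = \min(\bar{p}_i,\, e_i + \sum_j \Pi_{ji} p_j)\) to its greatest fixed point; the liability matrix and external asset values are illustrative numbers.

```python
import numpy as np

# Eisenberg-Noe clearing vector by fixed-point iteration (illustrative).
L = np.array([[0., 7., 1.],     # L[i, j]: nominal amount bank i owes bank j
              [3., 0., 5.],
              [2., 2., 0.]])
e = np.array([1., 1., 1.])      # external asset values

p_bar = L.sum(axis=1)           # total obligations of each bank
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
               where=p_bar[:, None] > 0)   # relative liabilities

p = p_bar.copy()                             # start from full payment
for _ in range(1000):
    p_new = np.minimum(p_bar, e + Pi.T @ p)  # clearing map
    if np.allclose(p_new, p):
        break
    p = p_new

defaults = p < p_bar - 1e-9     # banks paying less than they owe
print("clearing vector:", np.round(p, 3), " defaults:", defaults)
```

In this example bank 0 cannot meet its obligations in full, but the shortfall it passes to its creditors is small enough that banks 1 and 2 remain solvent; shrinking `e` further would trigger the cascade the text describes.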
DebtRank (Section 4) gave us a centrality measure tailored to the financial context. Unlike degree or PageRank, DebtRank incorporates the equity capital of each bank as a buffer against transmitted losses. A bank with moderate connectivity but highly leveraged counterparties may rank higher on DebtRank than a bank with many well-capitalized connections. This distinction is invisible to classical centrality measures and is precisely the distinction that pre-crisis risk management frameworks failed to make.
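The DebtRank recursion can be sketched compactly: impacts are exposures scaled by the receiving bank's equity buffer, and each node transmits its distress at most once. The exposure matrix, equity levels, and value weights below are illustrative assumptions.

```python
import numpy as np

# DebtRank-style distress propagation (Battiston et al., 2012), illustrative.
A = np.array([[0., 20., 5.],    # A[i, j]: bank i's interbank assets held against j
              [10., 0., 15.],
              [5., 10., 0.]])
E = np.array([30., 25., 20.])   # equity buffers
v = A.sum(axis=1) / A.sum()     # economic-value weights

W = np.minimum(1.0, A / E[:, None])   # W[i, j]: i's equity loss rate if j defaults

def debtrank(seed):
    h = np.zeros(3); h[seed] = 1.0          # seed bank fully distressed
    state = np.zeros(3, dtype=int)          # 0 undistressed, 1 distressed, 2 inactive
    state[seed] = 1
    while (state == 1).any():
        d = (state == 1)
        h_new = np.minimum(1.0, h + W @ (h * d))
        state[d] = 2                        # each node propagates only once
        state[(h_new > 1e-12) & (state == 0)] = 1
        h = h_new
    return float(v @ h - v[seed])           # distress induced beyond the seed

dr = [debtrank(k) for k in range(3)]
print([round(x, 3) for x in dr])
```

Note that equity enters through the denominator of `W`: the same exposure matters more when the counterparty holding it is thinly capitalized, which is the distinction the text draws against classical centrality measures.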
The fire-sale mechanism (Section 6) added the portfolio channel to the liability channel: banks that share common asset holdings are connected through the price mechanism even when they have no bilateral interbank obligation. The amplification factor — the ratio of total losses with fire sales to losses without — is the measure of how much the behavioral response of distressed, leveraged institutions magnifies an initial shock. In calibrated models, this factor is consistently above 1.5 and can reach 4 or higher in severe scenarios.
CoVaR and Marginal Expected Shortfall (Section 7) showed that systemic risk measures can be extracted from market prices alone, without bilateral exposure data, by conditioning on tail events. Both measures implicitly capture the network structure through the co-movement of returns that the interbank and portfolio channels generate.
The mini case study (Section 8) assembled all channels into a single integrated stress test, tracing a 25% asset shock through direct losses, forced sales, price impacts, and second-round defaults. The two-round comparison — loss absorption versus loss amplification — is the central analytical contribution of the chapter.
Financial networks as the convergence of all network analysis themes
Looking back across the book, financial networks are a uniquely demanding domain because they require all the analytical tools simultaneously. They are structural (Chapters 1–3): the topology of the interbank liability graph determines which banks are systemically important and which shocks are absorbed locally versus globally. They are generative (Chapter 4): the core-periphery structure that characterizes real banking systems emerges from a preferential-attachment process by which large banks attract more counterparties, producing a scale-free in-degree distribution. They are behavioral (Chapter 5): the fire-sale spiral is driven by the behavioral response of leveraged institutions to mark-to-market losses — a game played on the network in which each bank’s deleveraging decision imposes externalities on all other holders of the same assets. They are implicitly temporal (Chapter 6): contagion cascades unfold over time, from the initial shock through Round-1 defaults to second-round amplification, and the dynamics matter as much as the equilibrium. And the causal identification questions of Chapter 7 resurface here with full force: distinguishing network contagion (the liability channel) from common exposure (the portfolio channel) in observational data requires instrumental variables or natural experiments — precisely the credible identification strategies of modern empirical finance.
The practical stakes are correspondingly high. The tools in this chapter — Eisenberg–Noe clearing, DebtRank, fire-sale amplification models, CoVaR — are the foundation of the stress-testing frameworks used by the Federal Reserve, the European Central Bank, and the Bank of England to assess the resilience of the global financial system. They are also the tools that were not in widespread use before 2008. The intellectual project of this chapter is, in part, a post-mortem: understanding, with the precision that network analysis provides, exactly how a shock to a relatively small corner of the US housing market became the worst global financial crisis since the Great Depression.
The network was the crisis. Building the network is how we study it.
Prof. Xuhu Wan · HKUST · Modern AI Stack for Social Data · 2026 Edition