Forcing & Independence

Cohen's method of forcing and Boolean-valued models — how to prove independence results.


Forcing is the single most powerful technique ever devised for proving independence results in set theory. Invented by Paul Cohen in 1963 to show that the Continuum Hypothesis is not provable from the axioms of ZFC, forcing allows one to start with a model of set theory and carefully adjoin new sets to it, producing an enlarged model in which specific statements become true or false as desired. The method has since been extended and refined by Solovay, Easton, Martin, Shelah, and many others, yielding a vast landscape of consistency and independence results that map out the boundaries of what ZFC can and cannot decide.

Forcing Posets and Generic Filters

The basic setup of forcing begins with a forcing notion (also called a forcing poset), which is a partially ordered set $(\mathbb{P}, \leq)$ equipped with a greatest element $\mathbf{1}$. The elements of $\mathbb{P}$ are called conditions, and they represent pieces of partial information about a new object that we wish to adjoin to a given ground model $M$ — a countable transitive model of ZFC. If $p \leq q$, we say that $p$ extends $q$, meaning that $p$ carries at least as much information as $q$. Two conditions $p$ and $q$ are compatible if there exists a condition $r$ with $r \leq p$ and $r \leq q$; otherwise they are incompatible.

A subset $D \subseteq \mathbb{P}$ is called dense if every condition in $\mathbb{P}$ can be extended to a condition in $D$. Dense sets represent constraints that any sufficiently comprehensive collection of conditions must satisfy. A filter $G \subseteq \mathbb{P}$ is a non-empty upward-closed subset that is directed: for any $p, q \in G$, there exists $r \in G$ with $r \leq p$ and $r \leq q$. The crucial concept is that of a generic filter: a filter $G$ is $M$-generic (or simply generic over $M$) if it meets every dense set $D$ that belongs to $M$:

$$G \cap D \neq \emptyset \quad \text{for every dense } D \in M.$$

The existence of generic filters is guaranteed by the Rasiowa-Sikorski lemma, which states that if $M$ is countable (so that there are only countably many dense sets to meet), then for any condition $p \in \mathbb{P}$, there exists a generic filter $G$ containing $p$. The proof is a straightforward diagonalization: enumerate the dense sets in $M$ as $D_0, D_1, D_2, \ldots$, start with $p_0 = p$, and at each step extend $p_n$ into $D_n$ to obtain $p_{n+1}$. The filter generated by the sequence $\{p_n\}$ is the desired generic filter.
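The diagonalization can be sketched concretely. Below is a finite-stage toy version, using the simplest Cohen-style poset: conditions are finite partial functions from $\omega$ to $\{0,1\}$ coded as dicts, with $p \leq q$ iff $p \supseteq q$, meeting the dense sets $D_n = \{p : n \in \mathrm{dom}(p)\}$. The helper names (`extend_into`, `diagonalize`) are illustrative, not standard.

```python
def extend_into(p, n):
    """Extend condition p into D_n by deciding position n (if undecided)."""
    q = dict(p)
    q.setdefault(n, 0)  # the value is irrelevant; density only needs n in dom(q)
    return q

def diagonalize(p, num_dense):
    """Starting from p, extend step by step into D_0, ..., D_{num_dense-1}."""
    chain = [p]
    for n in range(num_dense):
        chain.append(extend_into(chain[-1], n))
    return chain

chain = diagonalize({}, 5)
# Each condition in the chain extends the previous one...
assert all(chain[i + 1].items() >= chain[i].items() for i in range(5))
# ...and the union of the chain decides positions 0..4: a finite piece of the
# generic function that a full countable diagonalization would produce.
generic = {}
for cond in chain:
    generic.update(cond)
assert all(n in generic for n in range(5))
```

Of course, genuine genericity requires meeting all the countably many dense sets of $M$, not just finitely many; the sketch only shows the shape of one extension step.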

Once a generic filter $G$ is obtained, the generic extension $M[G]$ is formed. This is the smallest transitive model of ZF satisfying $M \subseteq M[G]$ and $G \in M[G]$. Its elements are the interpretations under $G$ of all $\mathbb{P}$-names — formal objects in $M$ that encode how to build sets from $G$. A $\mathbb{P}$-name is a set of pairs $(\sigma, p)$ where $\sigma$ is itself a $\mathbb{P}$-name and $p \in \mathbb{P}$. The interpretation $\tau^G$ of a name $\tau$ is defined recursively: $\tau^G = \{\sigma^G : (\sigma, p) \in \tau \text{ and } p \in G\}$. Every element of $M[G]$ is the interpretation of some name in $M$, and $M[G]$ is built entirely from information available in $M$ and the generic filter $G$.
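The recursive interpretation of names is short enough to execute directly. In this sketch, names are coded as frozensets of (name, condition) pairs and conditions are opaque labels; the name bindings below are illustrative.

```python
def interpret(name, G):
    """tau^G = { sigma^G : (sigma, p) in tau and p in G }."""
    return frozenset(interpret(sigma, G) for (sigma, p) in name if p in G)

empty_name = frozenset()                      # interprets to the empty set in every extension
pair_name = frozenset({(empty_name, "p0")})   # interprets to {emptyset} iff p0 is in G

assert interpret(empty_name, {"p0"}) == frozenset()
assert interpret(pair_name, {"p0"}) == frozenset({frozenset()})
assert interpret(pair_name, {"p1"}) == frozenset()  # p0 not in G: the name yields the empty set
```

The last two lines show how the same name can evaluate to different sets depending on the filter — the mechanism by which $G$ shapes $M[G]$.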

The Forcing Relation and Truth Lemma

The remarkable feature of forcing is that truth in the generic extension can be analyzed from within the ground model, without ever constructing $G$ explicitly. This is accomplished through the forcing relation. Given a condition $p \in \mathbb{P}$ and a sentence $\varphi$ in the forcing language (which augments the language of set theory with names for elements of $M[G]$), we write $p \Vdash \varphi$ and say that $p$ forces $\varphi$ if, for every $M$-generic filter $G$ containing $p$, the sentence $\varphi$ holds in $M[G]$.

The forcing relation has several key properties that make it tractable. It is monotone: if $p \Vdash \varphi$ and $q \leq p$, then $q \Vdash \varphi$. It respects logical connectives in the expected way: $p \Vdash \varphi \land \psi$ if and only if $p \Vdash \varphi$ and $p \Vdash \psi$; and $p \Vdash \neg\varphi$ if and only if no extension of $p$ forces $\varphi$. For existential statements, $p \Vdash \exists x\, \psi(x)$ if and only if the set of conditions forcing $\psi(\dot{\tau})$ for some name $\dot{\tau}$ is dense below $p$.
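The negation clause is worth seeing in a concrete case. For Cohen forcing with finite partial functions $p\colon \omega \to \{0,1\}$ and the sentence $\varphi_n$ = "the generic function sends $n$ to $1$", a condition forces $\varphi_n$ exactly when it already commits the value $1$ at $n$. The function names below are illustrative, not standard notation.

```python
def forces_phi(p, n):
    # p forces phi_n exactly when p has already committed the value 1 at n:
    # every generic extension's function extends p.
    return p.get(n) == 1

def forces_not_phi(p, n):
    # p forces NOT phi_n iff no extension of p forces phi_n.  If n were
    # undecided, the extension p + {n: 1} would force phi_n, so p must
    # already decide n as 0.
    return p.get(n) == 0

p = {0: 1, 3: 0}
assert forces_phi(p, 0) and forces_not_phi(p, 3)
# On an undecided position, p forces neither the sentence nor its negation:
assert not forces_phi(p, 5) and not forces_not_phi(p, 5)
```

The last assertion illustrates why forcing a negation is stronger than merely failing to force the sentence.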

A crucial technical point is the definability of the forcing relation. For any formula $\varphi$, the relation "$p \Vdash \varphi$" is definable within the ground model $M$. This means that even though $G$ itself is not an element of $M$, the ground model can reason about what any generic extension will satisfy. Cohen's original presentation defined the forcing relation by recursion on the complexity of formulas, establishing it first for atomic formulas and then propagating through logical connectives.

The central result connecting the forcing relation to truth in generic extensions is the Truth Lemma (also called the Forcing Theorem): for every sentence $\varphi$ in the forcing language and every $M$-generic filter $G$,

$$M[G] \models \varphi \quad \text{if and only if} \quad \exists\, p \in G \text{ such that } p \Vdash \varphi.$$

This tells us that the generic filter $G$ "decides" every sentence — one way or the other — through the conditions it contains. As an immediate corollary, the Generic Model Theorem states that if $M$ is a countable transitive model of ZFC, then $M[G]$ is also a model of ZFC. The axioms of ZFC are verified one by one: the key insight is that the forcing relation preserves the axiom schemes of separation and replacement because it is definable, and the axiom of choice is preserved because every set in $M[G]$ is the image of a set of names lying in $M$, and that set of names can be well-ordered in $M$.

Independence Proofs via Forcing

The most celebrated application of forcing is Cohen's original proof that the Continuum Hypothesis is independent of ZFC, building on Gödel's earlier 1938 result that CH is consistent with ZFC (via the constructible universe $L$, as introduced in The Continuum Hypothesis). To show that $\neg$CH is also consistent, Cohen constructed a forcing extension in which $2^{\aleph_0} \geq \aleph_2$.

The forcing notion used is Cohen forcing: the poset $\mathbb{P}$ consists of all finite partial functions $p \colon \omega \times \omega_2 \to \{0, 1\}$, ordered by reverse inclusion (so $p \leq q$ means $p \supseteq q$). Each condition $p$ specifies finitely many values of a function from $\omega \times \omega_2$ to $\{0, 1\}$. A generic filter $G$ assembles these finite pieces into a total function $f_G \colon \omega \times \omega_2 \to \{0, 1\}$. For each $\alpha < \omega_2$, the slice $f_G(\cdot, \alpha)$ defines a subset of $\omega$ — that is, a real number (in the Cantor space sense). Different ordinals $\alpha \neq \beta$ yield different reals because the dense sets $D_{\alpha,\beta} = \{p \in \mathbb{P} : \exists n,\, p(n, \alpha) \neq p(n, \beta)\}$ belong to $M$ and are met by $G$. Thus $M[G]$ contains at least $\aleph_2$ many distinct reals, which means $2^{\aleph_0} \geq \aleph_2$ in $M[G]$.
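A finite toy version of this argument can be run directly. With three "columns" standing in for three ordinals below $\omega_2$, meeting each separating dense set $D_{a,b}$ forces the column slices apart; the helper names (`meet_separator`, `real`) are illustrative.

```python
def meet_separator(p, a, b):
    """Extend p into D_{a,b}: pick a fresh row n and force a disagreement there."""
    used = {n for (n, _) in p}
    n = max(used, default=-1) + 1
    q = dict(p)
    q[(n, a)], q[(n, b)] = 0, 1
    return q

# Meet every separating dense set for columns 0..2, mimicking what a generic
# filter does for all pairs alpha < beta < omega_2.
cols = [0, 1, 2]
p = {}
for i in range(len(cols)):
    for j in range(i + 1, len(cols)):
        p = meet_separator(p, cols[i], cols[j])

def real(p, a, length):
    """The column-a slice, read as a binary string (undecided positions as 0)."""
    return tuple(p.get((n, a), 0) for n in range(length))

reals = {real(p, a, 3) for a in cols}
assert len(reals) == 3  # the three slices are pairwise distinct
```

In the full construction the same mechanism, applied across all $\aleph_2$ columns, yields $\aleph_2$ pairwise distinct reals in $M[G]$.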

A critical step is showing that forcing with $\mathbb{P}$ preserves cardinals — that $\aleph_1^M$ and $\aleph_2^M$ remain the first and second uncountable cardinals in $M[G]$. This follows from the countable chain condition (c.c.c.): Cohen forcing has the property that every antichain (a set of pairwise incompatible conditions) is countable. Any c.c.c. forcing preserves all cardinals, a fundamental fact proved using a counting argument on nice names.
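Compatibility in Cohen forcing is very concrete: two finite partial functions are compatible iff they agree wherever both are defined, since their union is then a common extension. The sketch below checks a small antichain; it only illustrates the notions of compatibility and antichain, not the c.c.c. proof itself, which rests on the $\Delta$-system lemma.

```python
from itertools import product

def compatible(p, q):
    """Two finite partial functions are compatible iff they agree on the
    common part of their domains (then p | q, their union, extends both)."""
    return all(q.get(k, v) == v for k, v in p.items())

# The 8 total assignments on positions {0, 1, 2} are pairwise incompatible:
# any two of them disagree somewhere, so no condition extends both.
antichain = [dict(zip(range(3), bits)) for bits in product([0, 1], repeat=3)]
assert len(antichain) == 8
assert all(not compatible(p, q)
           for i, p in enumerate(antichain)
           for q in antichain[i + 1:])
```

Antichains of size $2^n$ arise this way for every $n$; the content of the c.c.c. is that no such family can be made uncountable.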

Well beyond CH, forcing has been used to establish the independence of numerous set-theoretic principles. Easton's theorem (1970) shows that the behavior of the power function $\kappa \mapsto 2^\kappa$ at regular cardinals is almost completely unconstrained by ZFC: the only restrictions are that $2^\kappa \geq \kappa^+$ and that $\kappa < \lambda$ implies $2^\kappa \leq 2^\lambda$ (monotonicity), plus König's theorem requiring $\mathrm{cf}(2^\kappa) > \kappa$. Easton achieved this using a class-length product of forcing notions, one for each regular cardinal, carefully arranged to avoid interference. For instance, the pattern $2^{\aleph_0} = 2^{\aleph_1} = \aleph_{\omega+1}$ is realizable, whereas $2^{\aleph_0} = \aleph_\omega$ is outright refutable, since $\mathrm{cf}(\aleph_\omega) = \aleph_0$. Other landmark independence results include the independence of Suslin's hypothesis (whether every dense complete linear order without endpoints satisfying the c.c.c. is isomorphic to $\mathbb{R}$) and the consistency of Martin's axiom with $\neg$CH.

Iterated Forcing and Consistency Results

Many of the most important consistency results cannot be obtained by a single forcing step. Iterated forcing is the technique of performing a transfinite sequence of forcing extensions, one after another, building a chain of models $M = M_0 \subseteq M_1 \subseteq \cdots \subseteq M_\alpha$ for ordinals $\alpha$ up to some limit. At each successor stage $\alpha + 1$, one forces over $M_\alpha$ with a poset $\mathbb{Q}_\alpha \in M_\alpha$; at limit stages, one takes a suitable direct limit. The entire iteration can be described as a single forcing notion in the ground model, called the iteration of the $\mathbb{Q}_\alpha$.

The technical challenge of iterated forcing lies in controlling what happens at limit stages. In a finite support iteration (FSI), conditions at stage $\alpha$ are functions $p$ defined on an initial segment of the ordinals, where $p(\beta)$ is a $\mathbb{P}_\beta$-name for a condition in $\mathbb{Q}_\beta$ (a name in the iteration up to stage $\beta$), and $p(\beta) = \mathbf{1}_{\mathbb{Q}_\beta}$ for all but finitely many $\beta$. FSI preserves the c.c.c. when each iterand $\mathbb{Q}_\alpha$ satisfies it, which makes it ideal for constructing models of Martin's axiom (MA). The consistency of MA with $\neg$CH, originally proved by Solovay and Tennenbaum in 1971, uses an FSI of length $\omega_2$ in which every c.c.c. poset of size $\leq \aleph_1$ is forced with at some stage.

An alternative is countable support iteration (CSI), where conditions are non-trivial on at most countably many coordinates. CSI is essential for iterating proper forcing notions — a broad class introduced by Saharon Shelah in the late 1970s. A forcing notion is proper if it preserves stationary subsets of $[\lambda]^\omega$ for every uncountable $\lambda$. Every c.c.c. forcing is proper, but properness is much more general: it includes forcings that add uncountable objects, collapse cardinals in a controlled manner, or shoot clubs through stationary sets. Shelah's Proper Forcing Axiom (PFA) is the analogue of MA for proper forcings and has far-reaching consequences, including the failure of CH with $2^{\aleph_0} = \aleph_2$.

Iterated forcing has produced a wealth of consistency results in combinatorial set theory. Through carefully designed iterations, one can make the continuum any prescribed regular cardinal, create models where specific combinatorial principles (such as the diamond principle $\diamondsuit$ or the square principle $\square$) hold or fail, and engineer models where cardinal characteristics of the continuum — such as the bounding number $\mathfrak{b}$, the dominating number $\mathfrak{d}$, and the additivity of measure — take on independently specified values. The theory of iterated forcing, particularly the preservation theorems that guarantee which properties survive through a long iteration, remains one of the most active areas of modern set theory.

Boolean-Valued Models

An elegant alternative to the combinatorial approach of forcing posets was developed by Dana Scott and Robert Solovay in the mid-1960s. Instead of constructing a generic filter over a poset, one works with a complete Boolean algebra $\mathcal{B}$ and assigns every set-theoretic sentence a truth value in $\mathcal{B}$, rather than a classical truth value in $\{0, 1\}$.

The construction proceeds as follows. Starting from the universe $V$ of all sets, one builds a Boolean-valued model $V^{\mathcal{B}}$ whose elements are $\mathcal{B}$-valued names (defined by transfinite recursion in a manner analogous to $\mathbb{P}$-names). For any two names $\dot{x}$ and $\dot{y}$, the truth values $\|\dot{x} \in \dot{y}\|$ and $\|\dot{x} = \dot{y}\|$ are elements of $\mathcal{B}$, defined by mutual recursion:

$$\|\dot{x} \in \dot{y}\| = \bigvee_{(\dot{z}, b) \in \dot{y}} b \wedge \|\dot{x} = \dot{z}\|,$$

$$\|\dot{x} = \dot{y}\| = \bigwedge_{(\dot{z}, b) \in \dot{x}} \bigl(b \Rightarrow \|\dot{z} \in \dot{y}\|\bigr) \;\wedge\; \bigwedge_{(\dot{z}, b) \in \dot{y}} \bigl(b \Rightarrow \|\dot{z} \in \dot{x}\|\bigr).$$
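This mutual recursion can be executed verbatim over a small finite Boolean algebra. In the sketch below, $\mathcal{B}$ is the powerset of $\{0,1,2\}$ with join as union, meet as intersection, and $b \Rightarrow c = b^{\mathsf{c}} \cup c$; names are frozensets of (name, element-of-$\mathcal{B}$) pairs, and the function names are illustrative.

```python
TOP = frozenset({0, 1, 2})  # the maximum of B = P({0,1,2})

def implies(b, c):
    """Boolean implication in a powerset algebra: b => c is (complement of b) union c."""
    return (TOP - b) | c

def val_in(x, y):
    """||x in y|| = join over (z, b) in y of  b meet ||x = z||."""
    result = frozenset()
    for (z, b) in y:
        result |= b & val_eq(x, z)
    return result

def val_eq(x, y):
    """||x = y|| = meet over (z, b) in x of (b => ||z in y||),
    met with the symmetric meet over (z, b) in y."""
    result = TOP
    for (z, b) in x:
        result &= implies(b, val_in(z, y))
    for (z, b) in y:
        result &= implies(b, val_in(z, x))
    return result

empty = frozenset()                        # the name for the empty set
x = frozenset({(empty, frozenset({0}))})   # "contains the empty set with value {0}"

assert val_eq(empty, empty) == TOP                  # ||emptyset = emptyset|| = 1
assert val_in(empty, x) == frozenset({0})           # membership holds "to degree {0}"
assert val_eq(x, empty) == TOP - frozenset({0})     # equality holds exactly where membership fails
```

The last two assertions show the characteristic feature of Boolean-valued models: a membership statement need not be outright true or false, and complementary statements receive complementary truth values.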

Every axiom of ZFC receives truth value $\mathbf{1}$ (the maximum of $\mathcal{B}$) in $V^{\mathcal{B}}$, so the Boolean-valued model "thinks" it satisfies ZFC regardless of what $\mathcal{B}$ is. To recover a classical two-valued model from $V^{\mathcal{B}}$, one takes a quotient by an ultrafilter $U$ on $\mathcal{B}$. The quotient $V^{\mathcal{B}} / U$ identifies two names whenever $\|\dot{x} = \dot{y}\| \in U$, and a sentence $\varphi$ holds in the quotient precisely when $\|\varphi\| \in U$. If $U$ is $M$-generic (meeting all dense sets in a countable ground model $M$), the resulting quotient is isomorphic to the generic extension $M[G]$, where $G$ is the generic filter on the corresponding poset. This establishes the equivalence of the Boolean-valued approach and the poset-based approach: they describe the same extensions from different vantage points.

The Boolean-valued perspective has several advantages. It eliminates the need to work with a countable ground model — one can reason about $V^{\mathcal{B}}$ directly over the full universe $V$. It makes the algebraic structure of forcing transparent: products of Boolean algebras correspond to product forcings, quotients correspond to iterated forcing, and homomorphisms between algebras correspond to projections between forcing extensions. For some independence proofs, especially those involving symmetries and automorphisms, the Boolean-valued framework is technically cleaner and more natural. Solovay's celebrated 1970 result that (assuming an inaccessible cardinal) there is a model of ZF + DC in which every set of reals is Lebesgue measurable was originally carried out in the Boolean-valued setting, using the Boolean algebra of random forcing (the measure algebra). The Boolean-valued approach thus provides a powerful algebraic lens through which the combinatorial complexity of forcing becomes a matter of Boolean algebra — making the technique simultaneously more abstract and, for many purposes, more transparent.