The Continuum Hypothesis

CH, GCH, and the question of how many real numbers there are.


The continuum hypothesis is the statement that there is no set whose cardinality lies strictly between that of the natural numbers and that of the real numbers. First conjectured by Georg Cantor in 1878 and enshrined as the very first problem on David Hilbert’s famous list of 1900, it stood for nearly a century as one of the most tantalizing open questions in all of mathematics. Its ultimate resolution — not as true or false, but as independent of the standard axioms of set theory — transformed our understanding of mathematical truth, revealed the inherent limitations of formal systems, and launched an entirely new branch of research into the structure of the set-theoretic universe itself.

Cantor’s Continuum Problem

In 1874, Georg Cantor published his groundbreaking proof that the set of real numbers $\mathbb{R}$ is uncountable: there is no bijection between $\mathbb{N}$ and $\mathbb{R}$. His diagonal argument of 1891 provided an even more elegant demonstration and, more generally, established that for any set $X$, the power set $\mathcal{P}(X)$ has strictly greater cardinality than $X$ itself. Since the set of all subsets of $\mathbb{N}$ can be identified with the set of all infinite binary sequences — which in turn corresponds bijectively to $\mathbb{R}$ — the cardinality of the continuum equals $2^{\aleph_0}$, the cardinal exponentiation of $2$ to the power $\aleph_0$. Cantor’s theorem guarantees that $2^{\aleph_0} > \aleph_0$, but it says nothing about how much larger the continuum is.
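The diagonal mechanism is finitely checkable on any initial segment of a purported enumeration. The following minimal Python sketch (the names are illustrative, not from any source) flips the diagonal bits, producing a prefix that disagrees with the $k$-th listed sequence at position $k$:

```python
def diagonalize(rows, n):
    """Given rows[k][i] = i-th bit of the k-th listed binary sequence,
    return n bits of a sequence that differs from row k at position k."""
    return [1 - rows[k][k] for k in range(n)]

# A sample listing: row k holds the binary digits of k, padded to 8 bits.
rows = [[(k >> i) & 1 for i in range(8)] for k in range(8)]
diag = diagonalize(rows, 8)
assert all(diag[k] != rows[k][k] for k in range(8))
print(diag)  # a prefix of a sequence missed by every listed row
```

No matter how the enumeration is chosen, the diagonal sequence escapes every row, which is exactly why no list indexed by $\mathbb{N}$ can exhaust the infinite binary sequences.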

The continuum hypothesis (CH) is Cantor’s conjecture that there is no intermediate cardinality: every infinite subset of $\mathbb{R}$ is either countable or has the same cardinality as $\mathbb{R}$ itself. In the language of cardinal arithmetic, this is the assertion

$$2^{\aleph_0} = \aleph_1,$$

where $\aleph_1$ is the smallest uncountable cardinal, as developed in Ordinals and Cardinals. Cantor devoted years of intense effort to proving this conjecture, alternating between attempts to prove it true and attempts to prove it false, and the resulting frustration contributed to his severe bouts of depression. In 1900, Hilbert placed the continuum hypothesis first on his celebrated list of 23 problems for the twentieth century, calling it a question “of the highest importance” for the foundations of mathematics.

The generalized continuum hypothesis (GCH) extends CH to all infinite cardinals: for every ordinal $\alpha$,

$$2^{\aleph_\alpha} = \aleph_{\alpha+1}.$$

GCH asserts that the power set operation always produces the next aleph — that there are no “gaps” in the cardinal hierarchy when exponentiation is applied. Early set theorists recognized that GCH, if true, would impose a beautifully simple and rigid structure on cardinal arithmetic, reducing all questions of cardinal exponentiation to the aleph sequence. But the question of whether CH (let alone GCH) follows from the axioms of Zermelo-Fraenkel set theory with the axiom of choice (ZFC) would prove far more subtle than anyone initially imagined. It took the combined genius of Gödel and Cohen, working decades apart, to show that the standard axioms leave the size of the continuum entirely undetermined.
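To make this rigidity concrete: assuming GCH, all of cardinal exponentiation is determined by a standard three-case formula (stated here for orientation). For infinite cardinals $\kappa$ and $\lambda$,

$$\kappa^\lambda = \begin{cases} \kappa & \text{if } \lambda < \text{cf}(\kappa), \\ \kappa^+ & \text{if } \text{cf}(\kappa) \leq \lambda \leq \kappa, \\ \lambda^+ & \text{if } \kappa \leq \lambda. \end{cases}$$

For example, under GCH $\aleph_\omega^{\aleph_0} = \aleph_{\omega+1}$, since $\text{cf}(\aleph_\omega) = \aleph_0$.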

Gödel’s Constructible Universe and Consistency

The first major advance on the continuum problem came in 1938, when Kurt Gödel announced that CH cannot be disproved from ZFC — that is, if ZFC is consistent, then so is ZFC + CH. His proof introduced one of the most important constructions in all of set theory: the constructible universe, denoted $L$.

The constructible universe is built by transfinite recursion. At stage $0$, one begins with the empty set: $L_0 = \varnothing$. At each successor stage $\alpha + 1$, one adds every subset of $L_\alpha$ that is definable by a first-order formula with parameters from $L_\alpha$:

$$L_{\alpha+1} = \text{Def}(L_\alpha) = \bigl\{ \{x \in L_\alpha : L_\alpha \models \varphi(x, \bar{a})\} : \varphi \text{ a formula},\ \bar{a} \in L_\alpha \bigr\}.$$

At limit stages $\lambda$, one takes the union: $L_\lambda = \bigcup_{\alpha < \lambda} L_\alpha$. The constructible universe is then $L = \bigcup_{\alpha \in \text{Ord}} L_\alpha$, the union over all ordinals. The key point is that only definable subsets are admitted at each stage — unlike the full cumulative hierarchy $V_\alpha$, which includes all subsets. This makes $L$ a “thin” inner model: it contains enough sets to satisfy ZFC, but it excludes the more exotic sets that might exist in the full universe $V$.
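Definability over a structure is a finitary notion, so the $\text{Def}$ operator can at least be simulated over a finite level. The toy Python sketch below (representations and names are illustrative assumptions, not standard library code) evaluates first-order formulas over a finite level and collects the subsets they define:

```python
def sat(level, env, phi):
    """Does the structure (level, ∈) satisfy phi under the assignment env?
    Formulas are nested tuples; variables are indices into env;
    ('exists', ...) binds the next index, quantifying over `level` only."""
    tag = phi[0]
    if tag == 'in':     return env[phi[1]] in env[phi[2]]
    if tag == 'eq':     return env[phi[1]] == env[phi[2]]
    if tag == 'not':    return not sat(level, env, phi[1])
    if tag == 'and':    return sat(level, env, phi[1]) and sat(level, env, phi[2])
    if tag == 'exists': return any(sat(level, env + [w], phi[1]) for w in level)
    raise ValueError(f"unknown connective: {tag}")

def Def(level, formulas):
    """Subsets of `level` carved out by formulas with free variable 0."""
    return {frozenset(x for x in level if sat(level, [x], phi))
            for phi in formulas}

# L_2 = {∅, {∅}}; the formula ∃y (y ∈ x) defines the nonempty elements.
empty = frozenset()
L2 = [empty, frozenset([empty])]
nonempty = ('exists', ('in', 1, 0))   # y is variable 1, x is variable 0
print(Def(L2, [nonempty]))            # {frozenset({frozenset()})}
```

One caveat the toy model makes visible: since every element of a finite level can serve as a parameter, every subset of a finite level is definable (via $x = a_1 \vee \dots \vee x = a_k$), so $L_n = V_n$ for finite $n$; the “thinning” of $L$ begins only at infinite stages.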

Gödel proved three fundamental facts about $L$. First, $L$ is a model of ZFC — all the Zermelo-Fraenkel axioms with the axiom of choice hold when the universe is restricted to constructible sets. Second, $L$ satisfies the axiom of constructibility $V = L$, which asserts that every set is constructible. Third, and most importantly for the continuum problem, the GCH holds in $L$. The argument for GCH in $L$ relies on a condensation lemma: every elementary substructure of $L_{\omega_1}$ is isomorphic to some $L_\alpha$ with $\alpha \leq \omega_1$, which limits the number of subsets of $\omega$ that can appear in $L$ to exactly $\aleph_1$.
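In outline, and using the standard fact that $|L_\alpha| = |\alpha|$ for every infinite ordinal $\alpha$: condensation places each constructible real inside a countable stage, so that

$$\mathcal{P}(\omega) \cap L \subseteq L_{\omega_1} \quad \text{and} \quad |L_{\omega_1}| = \aleph_1, \qquad \text{so} \quad L \models 2^{\aleph_0} \leq \aleph_1.$$

Cantor’s theorem supplies the reverse inequality, and replacing $\omega$ by $\omega_\alpha$ gives $\mathcal{P}(\omega_\alpha) \cap L \subseteq L_{\omega_{\alpha+1}}$, hence the full GCH in $L$.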

The consequence for independence is immediate. If ZFC is consistent, then $L$ provides a model of ZFC in which GCH (and therefore CH) holds. In the language of relative consistency:

$$\text{Con}(\text{ZFC}) \implies \text{Con}(\text{ZFC} + \text{GCH}).$$

This means CH cannot be refuted from ZFC alone. However, Gödel himself believed that CH was false — that the axioms of ZFC were simply too weak to settle the question, and that new axioms (perhaps large cardinal axioms) would eventually decide it. He regarded the constructible universe as an artificially restricted model, too “thin” to reflect the true richness of the set-theoretic universe. The question of whether CH could be shown consistent with its negation — whether ZFC + $\neg$CH is also consistent — would remain open for another quarter century.

Cohen’s Forcing and Independence

In 1963, Paul Cohen completed the independence picture by showing that CH cannot be proved from ZFC either. His method, called forcing, is one of the most profound inventions in the history of mathematical logic. Where Gödel had shown how to shrink the universe (by passing to $L$) to make CH true, Cohen showed how to expand the universe by carefully adjoining new sets to make CH false.

The intuition behind forcing is roughly this. One starts with a countable transitive model $M$ of ZFC (a “ground model”) and constructs an extension $M[G]$ by adjoining a new object $G$ — a so-called generic filter over a partially ordered set $\mathbb{P} \in M$. The partial order $\mathbb{P}$ encodes the “approximations” to the desired new set, and the genericity of $G$ ensures that $G$ meets every dense subset of $\mathbb{P}$ that belongs to $M$, making $M[G]$ satisfy ZFC as well. The key technical achievement is a forcing relation $p \Vdash \varphi$ that allows one to determine, while still working inside the ground model $M$, which statements will be true in the extension $M[G]$.
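The countable case of this dense-set picture is an outright ZFC theorem (the Rasiowa-Sikorski lemma): a filter meeting any countably many dense sets can be built by simple recursion. A minimal Python simulation (names and representations are illustrative assumptions), with conditions taken as finite partial functions from $\omega$ to $\{0,1\}$:

```python
def meets(p, n):
    """The dense set D_n: conditions whose domain contains n."""
    return n in p

def extend(p, n):
    """Extend condition p to a condition in D_n (any new bit value works)."""
    return p if n in p else {**p, n: 0}

# Build a descending chain of conditions meeting D_0, D_1, ... in turn;
# the union of the chain decides every bit, i.e. it is a total 0-1 function.
p = {}
for n in range(100):
    p = extend(p, n)
    assert meets(p, n)
print([p[i] for i in range(10)])  # the first bits of the assembled real
```

Genuine genericity is the transfinite analogue: $G$ meets all dense sets lying in the ground model at once, which is precisely what cannot be accomplished from inside $M$ itself.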

To make CH fail, Cohen used a partial order now called Cohen forcing: the set of all finite partial functions from $\omega_2 \times \omega$ to $\{0, 1\}$, ordered by reverse inclusion. Each condition specifies finitely many bits of $\aleph_2$-many new subsets of $\omega$. The generic filter $G$ assembles these finite approximations into $\aleph_2$ distinct subsets of $\omega$ — that is, $\aleph_2$ distinct real numbers that were not in the ground model. Because this partial order satisfies the countable chain condition, no cardinals are collapsed in the extension (the ground model’s $\aleph_1$ and $\aleph_2$ remain cardinals in $M[G]$), and a careful counting of names shows that, provided the ground model satisfies GCH, the resulting model satisfies

$$M[G] \models 2^{\aleph_0} = \aleph_2.$$
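That the new reals are pairwise distinct is a typical density argument. Writing $F = \bigcup G$ for the total function assembled by the generic filter and $x_\beta = \{ n \in \omega : F(\beta, n) = 1 \}$ for $\beta < \omega_2$, one checks that for $\beta \neq \gamma$ the set

$$D_{\beta, \gamma} = \{ p : p(\beta, n) \text{ and } p(\gamma, n) \text{ are both defined and unequal for some } n \}$$

is dense — any finite condition can be extended to disagree at a fresh coordinate $n$ — so genericity yields $x_\beta \neq x_\gamma$.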

Since $\aleph_2 \neq \aleph_1$, this model witnesses $\neg$CH. Combined with Gödel’s result, this establishes the full independence of CH:

$$\text{Con}(\text{ZFC}) \implies \text{Con}(\text{ZFC} + \text{CH}) \quad \text{and} \quad \text{Con}(\text{ZFC}) \implies \text{Con}(\text{ZFC} + \neg\text{CH}).$$

Cohen was awarded the Fields Medal in 1966 for this work — the only time the prize has been given for a contribution to mathematical logic. An alternative formulation of forcing uses Boolean-valued models, developed independently by Scott, Solovay, and Vopěnka in the mid-1960s. In this approach, instead of working with generic filters, one replaces the two truth values $\{0, 1\}$ with a complete Boolean algebra $\mathbb{B}$ and builds a model $V^{\mathbb{B}}$ where every set-theoretic statement receives a truth value in $\mathbb{B}$. The Boolean-valued approach is technically cleaner in some respects and avoids the need to work with countable ground models, though the two frameworks are ultimately equivalent.

Continuum Hypothesis Variants

The independence of CH opened a vast landscape of questions about what values $2^{\aleph_0}$ can consistently take, and more broadly, what the function $\kappa \mapsto 2^\kappa$ can look like across all infinite cardinals. The most sweeping answer for regular cardinals came from William Easton in 1970. Easton’s theorem states that if $F$ is any class function defined on regular cardinals satisfying two necessary constraints — first, $F$ is monotone ($\kappa \leq \lambda$ implies $F(\kappa) \leq F(\lambda)$), and second, $\text{cf}(F(\kappa)) > \kappa$ (a consequence of König’s theorem, which asserts that $\text{cf}(2^\kappa) > \kappa$, and which in particular forces $F(\kappa) > \kappa$) — then there is a cofinality-preserving forcing extension in which $2^\kappa = F(\kappa)$ for all regular $\kappa$. In other words, the behavior of the continuum function at regular cardinals is almost entirely unconstrained by ZFC: monotonicity and König’s theorem are essentially the only restrictions.
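For instance, the assignment (an arbitrary sample, chosen only to illustrate the constraints)

$$F(\aleph_0) = \aleph_2, \qquad F(\aleph_1) = \aleph_{17}, \qquad F(\aleph_2) = \aleph_{\omega+1}$$

is realizable: it is monotone, and each value is a successor cardinal, hence regular and of cofinality larger than the argument. By contrast, $2^{\aleph_0} = \aleph_\omega$ is impossible, since $\text{cf}(\aleph_\omega) = \aleph_0$ would violate König’s theorem.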

The situation at singular cardinals is dramatically different and far more delicate. The singular cardinal hypothesis (SCH) states that if $\kappa$ is a singular strong limit cardinal (meaning $2^\lambda < \kappa$ for all $\lambda < \kappa$), then $2^\kappa = \kappa^+$. SCH captures the expectation that cardinal exponentiation should behave “normally” at singular cardinals. Remarkably, the consistency of the failure of SCH requires large cardinal assumptions — Magidor showed in 1977 that if a supercompact cardinal exists, then SCH can fail, and Silver’s theorem (1974) showed that GCH cannot first fail at a singular cardinal of uncountable cofinality.
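Stated precisely, Silver’s theorem says: if $\kappa$ is a singular cardinal of uncountable cofinality and $2^\lambda = \lambda^+$ for all $\lambda < \kappa$, then $2^\kappa = \kappa^+$. The smallest instance:

$$\text{if } 2^{\aleph_\alpha} = \aleph_{\alpha+1} \text{ for all } \alpha < \omega_1, \text{ then } 2^{\aleph_{\omega_1}} = \aleph_{\omega_1+1}.$$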

The deepest modern tool for studying cardinal arithmetic at singular cardinals is Saharon Shelah’s pcf theory (pcf standing for “possible cofinalities”), developed in the 1980s and 1990s. Pcf theory analyzes the structure of products of regular cardinals modulo an ideal, and it yields striking ZFC-provable bounds on cardinal arithmetic that do not depend on any additional axioms. Shelah’s celebrated result that $\aleph_\omega^{\aleph_0} < \aleph_{\omega_4}$ (assuming $\aleph_\omega$ is a strong limit) shows that ZFC alone imposes nontrivial constraints on cardinal arithmetic at singular cardinals — constraints that have no analogue for regular cardinals. Pcf theory thus reveals an unexpected asymmetry: the continuum function is wild and unconstrained at regular cardinals but subject to deep structural laws at singular ones.

Martin’s Axiom and Cardinal Invariants

Since CH is independent of ZFC, set theorists have investigated axioms that interact with CH in interesting ways. The most prominent of these is Martin’s axiom (MA), formulated by Donald Martin and Robert Solovay in 1970. Martin’s axiom can be stated as follows: for every partial order $\mathbb{P}$ satisfying the countable chain condition (ccc) — meaning every antichain in $\mathbb{P}$ is countable — and every family $\mathcal{D}$ of fewer than $2^{\aleph_0}$ dense subsets of $\mathbb{P}$, there exists a filter $G$ in $\mathbb{P}$ meeting every member of $\mathcal{D}$. When $2^{\aleph_0} = \aleph_1$ (that is, when CH holds), MA is trivially true: the case of countably many dense sets is the Rasiowa-Sikorski lemma, provable in ZFC without any ccc assumption. The interesting case is MA + $\neg$CH, which is consistent with ZFC and asserts that the continuum is large while generic filters exist for substantial families of dense sets. MA functions as a kind of “weak CH”: many combinatorial consequences of CH survive (for example, under MA every union of fewer than continuum-many measure-zero sets has measure zero, generalizing the countable case), while MA + $\neg$CH has strong consequences of its own, such as ruling out Suslin lines, and is compatible with the continuum being any uncountable regular cardinal.

The independence of CH also gave rise to the study of cardinal characteristics of the continuum (also called cardinal invariants), which measure the combinatorial complexity of the real line at a finer scale than mere cardinality. These are cardinals, each defined by a specific combinatorial property, that always lie between $\aleph_1$ and $2^{\aleph_0}$. Under CH they all collapse to $\aleph_1$, but when CH fails their relative positions become a rich subject of investigation. Among the most important are: the bounding number $\mathfrak{b}$, the smallest cardinality of an unbounded family in $(\omega^\omega, \leq^*)$ (where $f \leq^* g$ means $f(n) \leq g(n)$ for all but finitely many $n$); the dominating number $\mathfrak{d}$, the smallest cardinality of a cofinal (dominating) family in the same ordering; the splitting number $\mathfrak{s}$, the smallest cardinality of a splitting family of subsets of $\omega$; and the tower number $\mathfrak{t}$, the smallest cardinality of a tower in $\mathcal{P}(\omega)/\text{fin}$ with no infinite pseudo-intersection.
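These invariants are all uncountable for a concrete diagonal reason: any countable family of functions is $\leq^*$-dominated, so $\mathfrak{b} \geq \aleph_1$. A minimal Python sketch of the standard construction $g(k) = \max_{n \leq k} f_n(k) + 1$ (the names are illustrative):

```python
def dominate(f):
    """Given a countable family f_n = lambda k: f(n, k) of functions
    ω -> ω, return g with f_n <=* g for every n:
    g(k) = max_{n <= k} f(n, k) + 1, so f_n(k) < g(k) whenever k >= n."""
    return lambda k: max(f(n, k) for n in range(k + 1)) + 1

# Sample family: f_n(k) = n * k  (any countable family would do).
f = lambda n, k: n * k
g = dominate(f)
for n in range(10):
    assert all(f(n, k) < g(k) for k in range(n, 50))  # f_n <=* g
```

Since $g$ eventually exceeds every $f_n$, no countable family is unbounded; whether some family of size $\aleph_1$ is already unbounded (that is, whether $\mathfrak{b} = \aleph_1$) is exactly the kind of question ZFC leaves open.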

The relationships between these invariants are summarized in Cichoń’s diagram, a partially ordered diagram showing provable ZFC inequalities among cardinal invariants related to measure and category on the real line. Cichoń’s diagram includes the cardinal characteristics associated with Lebesgue measure (the additivity, covering, uniformity, and cofinality of the null ideal) and the Baire category analogues (the corresponding characteristics for the meager ideal), together with $\mathfrak{b}$ and $\mathfrak{d}$. The diagram displays a web of inequalities such as $\text{add}(\mathcal{N}) \leq \text{cov}(\mathcal{N}) \leq \text{non}(\mathcal{M})$ and $\mathfrak{b} \leq \text{non}(\mathcal{M})$, and a major achievement of modern set theory has been to show, through elaborate iterated forcing constructions, that essentially any assignment of values to these invariants consistent with the diagram’s inequalities can be realized. The complete separation of all independent entries in Cichoń’s diagram (“Cichoń’s maximum”) was achieved by Goldstern, Kellner, and Shelah in 2019 — initially assuming large cardinals, an assumption later removed in joint work with Mejía — resolving a decades-long program. This body of work demonstrates that while CH itself is independent, the fine structure of the continuum — the combinatorial landscape between $\aleph_1$ and $2^{\aleph_0}$ — is a deep and active area of mathematical research, with forcing as its indispensable tool.
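One standard way to draw Cichoń’s diagram, with arrows denoting ZFC-provable $\leq$ (and with $\aleph_1 \leq \text{add}(\mathcal{N})$ below everything and $\text{cof}(\mathcal{N}) \leq 2^{\aleph_0}$ above):

$$\begin{array}{ccccccc}
\text{cov}(\mathcal{N}) & \rightarrow & \text{non}(\mathcal{M}) & \rightarrow & \text{cof}(\mathcal{M}) & \rightarrow & \text{cof}(\mathcal{N}) \\
\uparrow & & \uparrow & & \uparrow & & \uparrow \\
 & & \mathfrak{b} & \rightarrow & \mathfrak{d} & & \\
\uparrow & & \uparrow & & \uparrow & & \uparrow \\
\text{add}(\mathcal{N}) & \rightarrow & \text{add}(\mathcal{M}) & \rightarrow & \text{cov}(\mathcal{M}) & \rightarrow & \text{non}(\mathcal{N})
\end{array}$$

Two further ZFC theorems, $\text{add}(\mathcal{M}) = \min(\mathfrak{b}, \text{cov}(\mathcal{M}))$ and $\text{cof}(\mathcal{M}) = \max(\mathfrak{d}, \text{non}(\mathcal{M}))$, pin down two of the entries outright, which is why “Cichoń’s maximum” separates the remaining ten rather than all twelve.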