Analytic Number Theory
The Riemann zeta function, the Prime Number Theorem, and L-functions.
Analytic number theory is the branch of mathematics that brings the tools of real and complex analysis to bear on questions about the integers — above all, on the distribution of prime numbers. The field was born in the mid-nineteenth century when Bernhard Riemann showed, in a single landmark 1859 paper, that the prime-counting function is controlled by the zeros of a complex-analytic function now called the Riemann zeta function, and it has grown into one of the richest and most active areas of modern mathematics, connecting elementary arithmetic to complex analysis, harmonic analysis, and representation theory. At its heart lies a stunning fact: the seemingly erratic sequence of prime numbers is governed by a precise, beautiful law, and the tools needed to prove it come not from arithmetic at all, but from analysis.
The Riemann Zeta Function
The Riemann zeta function $\zeta(s)$ is the central object of analytic number theory. For a complex number $s$ with real part $\Re(s) > 1$, it is defined by the absolutely convergent Dirichlet series

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.$$
The connection to prime numbers appears immediately. Leonhard Euler, working in the eighteenth century, observed that the fundamental theorem of arithmetic — unique factorization of integers into primes — translates directly into a product formula. Since every positive integer has a unique prime factorization, the sum over all integers splits into a product over all primes $p$:

$$\zeta(s) = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}}.$$
This Euler product encodes all information about primes into the analytic function $\zeta(s)$. When $s = 1$, the sum becomes the harmonic series $\sum_{n \ge 1} 1/n$, which diverges — and from this divergence Euler extracted a new proof that there are infinitely many primes, since if there were only finitely many, the product $\prod_p (1 - p^{-1})^{-1}$ would converge to a finite value.
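The identity between sum and product can be checked numerically. A minimal sketch at $s = 2$, where $\zeta(2) = \pi^2/6$ (the helper names here are ours, not standard library functions):

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of the Dirichlet series for zeta(s)."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(range(p*p, limit + 1, p))
    return [i for i, is_p in enumerate(sieve) if is_p]

def euler_product(s, prime_limit=100_000):
    """Truncated Euler product over primes up to prime_limit."""
    prod = 1.0
    for p in primes_up_to(prime_limit):
        prod *= 1.0 / (1.0 - p**-s)
    return prod

# Both truncations approach zeta(2) = pi^2/6 ≈ 1.644934
print(zeta_partial(2), euler_product(2), math.pi**2 / 6)
```

Both truncations agree with $\pi^2/6$ to several decimal places, a small consistency check of unique factorization.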
Riemann’s 1859 paper Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse (On the number of primes less than a given magnitude) transformed the subject by treating $\zeta(s)$ as a function of a complex variable. The series converges absolutely for $\Re(s) > 1$, but Riemann showed that $\zeta$ extends to a meromorphic function on the entire complex plane, with a single simple pole at $s = 1$ with residue $1$. The key tool is analytic continuation: starting from the Dirichlet series, one uses integral representations and functional equations to extend the function far beyond its original domain.
The central symmetry of $\zeta(s)$ is expressed by its functional equation, which Riemann established using the completed zeta function $\xi(s) = \pi^{-s/2}\,\Gamma(s/2)\,\zeta(s)$. This completed function satisfies the elegant relation

$$\xi(s) = \xi(1 - s),$$
reflecting a deep symmetry about the line $\Re(s) = 1/2$. The gamma factor introduces trivial zeros of $\zeta$ at the negative even integers $s = -2, -4, -6, \dots$, where the gamma function has poles. All other zeros — the non-trivial zeros — lie inside the critical strip $0 < \Re(s) < 1$. Riemann computed the first few non-trivial zeros and noticed they all lie on the critical line $\Re(s) = 1/2$. He conjectured, cautiously, that this is always the case.
The Riemann Hypothesis — that all non-trivial zeros of $\zeta(s)$ satisfy $\Re(s) = 1/2$ — remains unproven, and is widely regarded as the most important unsolved problem in mathematics. It is one of the Clay Millennium Prize Problems. Computational verification has confirmed the hypothesis for the first $10^{13}$ non-trivial zeros, all lying precisely on the critical line, but no proof is in sight. The stakes are high: the Riemann Hypothesis is equivalent to the sharpest possible error bound for the prime-counting function, and its truth (or falsity) would have sweeping consequences throughout number theory and beyond.
Prime Number Theorem and its Proof
The prime-counting function $\pi(x)$ counts the number of primes less than or equal to $x$. The fundamental question — how fast does $\pi(x)$ grow? — occupied mathematicians for centuries. Empirical evidence accumulated by Gauss and Legendre in the late eighteenth century suggested an asymptotic law, and Gauss conjectured that $\pi(x)$ grows roughly like $x/\log x$, or like the logarithmic integral

$$\mathrm{Li}(x) = \int_2^x \frac{dt}{\log t}.$$
The Prime Number Theorem makes this precise: as $x \to \infty$,

$$\pi(x) \sim \frac{x}{\log x},$$

which means $\lim_{x \to \infty} \pi(x)\log x / x = 1$. Equivalently, using the Chebyshev function $\psi(x) = \sum_{p^k \le x} \log p$ (which weights prime powers by their logarithms), the theorem is equivalent to $\psi(x) \sim x$.
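The theorem is easy to probe numerically. A quick sketch, counting primes by sieve and approximating the logarithmic integral by a crude midpoint rule (both helpers are our own illustrative code):

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(range(p*p, limit + 1, p))
    return [i for i, b in enumerate(sieve) if b]

def li(x, steps=10_000):
    """Midpoint-rule approximation of Li(x) = integral from 2 to x of dt/log t."""
    h = (x - 2) / steps
    return sum(h / math.log(2 + (k + 0.5) * h) for k in range(steps))

x = 10**6
pi_x = len(primes_up_to(x))
print(pi_x, x / math.log(x), li(x))
# pi(10^6) = 78498; Li(x) is a far better approximation than x/log x.
```

Already at $x = 10^6$ the logarithmic integral misses $\pi(x)$ by about a hundred, while $x/\log x$ misses by several thousand.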
The Prime Number Theorem was proved independently in 1896 by Jacques Hadamard and Charles-Jean de la Vallée Poussin, using Riemann’s approach. The key step is to show that $\zeta(s)$ has no zeros on the line $\Re(s) = 1$ — the boundary of the critical strip. Specifically, one proves that $\zeta(1 + it) \neq 0$ for all real $t$. A clever trigonometric argument using the inequality $3 + 4\cos\theta + \cos 2\theta \ge 0$ establishes this zero-free region. Once this is known, contour integration via the explicit formula — which expresses $\psi(x)$ as a sum over the zeros of $\zeta$ — yields the asymptotic $\psi(x) \sim x$.
Riemann’s explicit formula is the jewel at the center of this proof. It states, in a precise sense (for $x > 1$ not a prime power):

$$\psi(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho} - \log 2\pi - \frac{1}{2}\log\!\left(1 - x^{-2}\right),$$

where the sum runs over all non-trivial zeros $\rho$ of $\zeta(s)$. This formula shows that the zeros of $\zeta$ act as “harmonics” in the distribution of primes: each zero $\rho = \beta + i\gamma$ contributes an oscillating term $x^{\rho}/\rho$, and the prime distribution is the superposition of all these oscillations. If all zeros satisfy $\Re(\rho) = 1/2$, as the Riemann Hypothesis asserts, then $|x^{\rho}| = \sqrt{x}$ for each term, and one obtains the conditional error bound

$$\psi(x) = x + O\!\left(\sqrt{x}\,\log^2 x\right),$$
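The oscillatory structure can be glimpsed numerically. The sketch below truncates the sum over zeros to the first five, using their known imaginary parts ($14.1347\dots$, $21.0220\dots$, etc.) and assuming, as verified computation shows for these zeros, that $\rho = 1/2 + i\gamma$; conjugate zeros are paired, giving the factor $2\,\Re(\cdot)$, and the tiny $\tfrac12\log(1-x^{-2})$ term is dropped:

```python
import math

def psi(x):
    """Chebyshev function: sum of log p over prime powers p^k <= x."""
    total = 0.0
    for n in range(2, int(x) + 1):
        m, p = n, None
        for d in range(2, math.isqrt(n) + 1):
            if m % d == 0:
                p = d
                while m % d == 0:
                    m //= d
                break
        if p is None:          # n has no divisor up to sqrt(n): n is prime
            total += math.log(n)
        elif m == 1:           # n = p^k is a prime power
            total += math.log(p)
    return total

# Imaginary parts of the first five non-trivial zeros (known numerical values).
GAMMAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def psi_approx(x, gammas=GAMMAS):
    """Truncated explicit formula: x minus the first zero terms minus log(2*pi)."""
    osc = sum((x ** complex(0.5, g) / complex(0.5, g)).real for g in gammas)
    return x - 2 * osc - math.log(2 * math.pi)

print(psi(1000.0), psi_approx(1000.0))
```

Even with only five zeros, the truncated formula tracks $\psi(1000) \approx 997$ to within a few units, and adding zeros sharpens the fit.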
or equivalently $\pi(x) = \mathrm{Li}(x) + O(\sqrt{x}\,\log x)$. Unconditionally, de la Vallée Poussin established a zero-free region of the form $\Re(s) > 1 - \frac{c}{\log(|t| + 2)}$ for an absolute constant $c > 0$, which yields the error term

$$\pi(x) = \mathrm{Li}(x) + O\!\left(x \exp\!\left(-c\sqrt{\log x}\right)\right).$$
Improving this error term — and in particular reducing the exponent $\theta$ in an error bound of the shape $O(x^{\theta + \varepsilon})$ from $1$ toward the conjectured $1/2$ — is one of the central open problems in the field, and any progress is intimately tied to knowledge of the zero-free region for $\zeta(s)$.
Dirichlet L-Functions and Primes in Progressions
The natural generalization asks not just how many primes there are up to $x$, but how many primes fall in a given arithmetic progression $a, a+q, a+2q, \dots$ for integers $a$ and $q$ with $\gcd(a, q) = 1$. The intuitive expectation is that primes are roughly equidistributed among the residue classes modulo $q$ that are coprime to $q$.
This was made precise by Peter Gustav Lejeune Dirichlet in 1837 in his celebrated theorem: if $\gcd(a, q) = 1$, then there are infinitely many primes $p \equiv a \pmod{q}$. Dirichlet’s proof introduced two fundamental innovations that shaped the next century of number theory.
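The expected equidistribution is easy to observe empirically. A small sketch counting primes up to $10^5$ in each residue class for the illustrative modulus $q = 10$, whose coprime residues are $1, 3, 7, 9$:

```python
from collections import Counter

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(range(p*p, limit + 1, p))
    return [i for i, b in enumerate(sieve) if b]

q = 10
counts = Counter(p % q for p in primes_up_to(100_000))
# Only residues coprime to 10 can contain more than one prime
# (2 and 5 are the lone exceptions in their classes).
for a in (1, 3, 7, 9):
    print(a, counts[a])
```

The four counts come out within about one percent of each other, each near $\pi(10^5)/4 \approx 2398$.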
The first innovation is Dirichlet characters: completely multiplicative functions $\chi \colon \mathbb{Z} \to \mathbb{C}$ of period $q$ satisfying $\chi(n) = 0$ when $\gcd(n, q) > 1$ and $\chi(n) \neq 0$ when $\gcd(n, q) = 1$. The characters modulo $q$ form a group of order $\varphi(q)$ under pointwise multiplication, and they satisfy the crucial orthogonality relations:

$$\sum_{\chi \bmod q} \chi(m)\,\overline{\chi(n)} = \begin{cases} \varphi(q) & \text{if } m \equiv n \pmod{q} \text{ and } \gcd(n, q) = 1, \\ 0 & \text{otherwise.} \end{cases}$$
These orthogonality relations allow one to “detect” a residue class: for $\gcd(a, q) = 1$, the indicator of the condition $n \equiv a \pmod{q}$ is expressed as

$$\mathbf{1}_{n \equiv a \,(\mathrm{mod}\,q)} = \frac{1}{\varphi(q)} \sum_{\chi \bmod q} \overline{\chi(a)}\,\chi(n).$$
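Orthogonality can be verified directly for a small modulus. The sketch below builds the four characters mod $5$ from the primitive root $2$ (our own small worked example, not a library routine):

```python
import cmath
from math import gcd

q = 5          # modulus (prime, so the unit group mod q is cyclic)
g = 2          # a primitive root mod 5
phi = 4        # phi(5) = 4

# Discrete logarithm table: n = g^k (mod q)  ->  k
dlog = {pow(g, k, q): k for k in range(phi)}

def chi(j, n):
    """The j-th Dirichlet character mod 5: chi_j(g^k) = exp(2*pi*i*j*k/phi)."""
    if gcd(n, q) != 1:
        return 0
    return cmath.exp(2j * cmath.pi * j * dlog[n % q] / phi)

def char_sum(m, n):
    """Sum over all characters of chi(m) * conj(chi(n))."""
    return sum(chi(j, m) * chi(j, n).conjugate() for j in range(phi))

print(abs(char_sum(3, 3)))   # m ≡ n (mod 5): the sum is phi(q) = 4
print(abs(char_sum(3, 2)))   # m ≢ n (mod 5): the sum is 0
```

The sum is $\varphi(q)$ exactly when the arguments agree mod $q$, which is the detection mechanism behind Dirichlet’s proof.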
The second innovation is the Dirichlet L-function associated to a character $\chi$:

$$L(s, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^s}.$$
Like the zeta function, each $L(s, \chi)$ has an Euler product $L(s, \chi) = \prod_p (1 - \chi(p)p^{-s})^{-1}$ reflecting unique factorization, and extends to an entire function (if $\chi$ is non-principal) or a function with a single simple pole at $s = 1$ (for the principal character $\chi_0$, where $L(s, \chi_0)$ is essentially $\zeta(s)$). By combining the logarithmic derivative of $L(s, \chi)$ with the orthogonality of characters, Dirichlet reduced the count of primes in the progression $a \bmod q$ to a sum over all characters:

$$\sum_{p \equiv a \,(q)} \frac{1}{p^s} = \frac{1}{\varphi(q)} \sum_{\chi \bmod q} \overline{\chi(a)} \sum_{p} \frac{\chi(p)}{p^s}.$$
The key analytical step is proving that $L(1, \chi) \neq 0$ for all non-principal characters $\chi$. For complex characters this follows quickly from the Euler product, but for real characters (those taking values in $\{-1, 0, 1\}$) Dirichlet needed a subtle argument involving class numbers of quadratic forms — a hint of the deep connections between L-functions and algebraic number theory.
The quantitative form, the Siegel-Walfisz theorem, states that for any $A > 0$ and for $q \le (\log x)^A$, one has (writing $\pi(x; q, a)$ for the number of primes $p \le x$ with $p \equiv a \pmod{q}$)

$$\pi(x; q, a) = \frac{\mathrm{Li}(x)}{\varphi(q)} + O\!\left(x \exp\!\left(-c_A \sqrt{\log x}\right)\right),$$
where the constant $c_A$ depends only on $A$. The generalized Riemann hypothesis — that all non-trivial zeros of all Dirichlet L-functions lie on the critical line — would sharpen this dramatically to an error of size $O(x^{1/2 + \varepsilon})$.
The zeros of Dirichlet L-functions are controlled by zero-free regions analogous to those for $\zeta(s)$. A notorious difficulty is the possible existence of a Siegel zero: a real zero of $L(s, \chi)$ very close to $s = 1$ for a real character $\chi$. The non-existence of Siegel zeros is essentially equivalent to improved bounds for primes in progressions, and despite enormous effort, this remains out of reach.
Additive and Multiplicative Problems
Analytic number theory also attacks problems of an additive nature: when can an integer be expressed as a sum of primes, powers, or other special integers? These questions require a different toolkit — in particular, exponential sums and the circle method.
Goldbach’s conjecture, stated in a 1742 letter from Christian Goldbach to Euler, asserts that every even integer greater than $2$ is the sum of two primes. It remains open, despite having been verified computationally for all even numbers up to $4 \times 10^{18}$. The best proven result in this direction is Chen’s theorem (1973), which establishes that every sufficiently large even integer is the sum of a prime and a number with at most two prime factors.
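A brute-force check of the conjecture for small even numbers takes only a few lines (a sketch of the verification idea, of course not a proof):

```python
def prime_table(limit):
    """Boolean table: sieve[n] is True iff n is prime."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(range(p*p, limit + 1, p))
    return sieve

N = 5_000
is_prime = prime_table(N)

def goldbach_pairs(n):
    """Number of ways to write even n as p + q with p <= q both prime."""
    return sum(1 for p in range(2, n // 2 + 1) if is_prime[p] and is_prime[n - p])

assert all(goldbach_pairs(n) > 0 for n in range(4, N + 1, 2))
print(goldbach_pairs(100))  # 100 = 3+97 = 11+89 = 17+83 = 29+71 = 41+59 = 47+53
```

The number of representations in fact grows with $n$, which is what the circle method’s main term (below in the text) predicts.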
The ternary Goldbach conjecture — that every odd number greater than $5$ is the sum of three primes — was resolved in 2013 by Harald Helfgott, who proved it for all odd numbers greater than $5$ without exception. Earlier, Ivan Vinogradov had shown in 1937 that every sufficiently large odd number is the sum of three primes, using his method for bounding exponential sums over primes of the form $\sum_{p \le N} e^{2\pi i \alpha p}$.
The fundamental tool for these additive problems is the Hardy-Littlewood circle method, developed by G. H. Hardy and John Edensor Littlewood in the 1920s. The idea is to express the number of representations of $N$ as a sum of three primes as an integral around the unit circle:

$$R(N) = \int_0^1 S(\alpha)^3\, e^{-2\pi i N \alpha}\, d\alpha, \qquad S(\alpha) = \sum_{p \le N} e^{2\pi i \alpha p},$$
where $S(\alpha)$ is an exponential sum over primes. The circle is divided into major arcs (where $\alpha$ is close to a rational number with small denominator) and minor arcs (the rest). On major arcs, $S(\alpha)$ is well-approximated using knowledge of primes in arithmetic progressions, giving a main term called the singular series $\mathfrak{S}(N)$. On minor arcs, the exponential sum cancels due to equidistribution, giving a negligible error. The singular series is an explicit product over primes that equals $0$ for even $N$ (three-prime Goldbach) and is bounded away from zero for odd $N$, explaining why the conjecture should hold only for odd numbers.
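The major-arc/minor-arc dichotomy is visible numerically: $|S(\alpha)|$ has full size $\pi(N)$ at the rational point $\alpha = 0$, while at a "generic" irrational such as $\alpha = (\sqrt{5}-1)/2$ there is heavy cancellation. A sketch (making no claim about the precise decay rate):

```python
import cmath, math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(range(p*p, limit + 1, p))
    return [i for i, b in enumerate(sieve) if b]

N = 50_000
PRIMES = primes_up_to(N)

def S(alpha):
    """Exponential sum over primes: sum of exp(2*pi*i*alpha*p) for p <= N."""
    return sum(cmath.exp(2j * math.pi * alpha * p) for p in PRIMES)

print(abs(S(0)))                       # = pi(N): no cancellation at a major arc
print(abs(S((math.sqrt(5) - 1) / 2)))  # far smaller: minor-arc cancellation
```

Bounding $|S(\alpha)|$ on the minor arcs uniformly in $\alpha$ is exactly the hard analytic content of Vinogradov’s theorem.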
Waring’s problem asks for the minimum number $g(k)$ such that every positive integer is the sum of at most $g(k)$ perfect $k$-th powers. Lagrange’s four-square theorem ($k = 2$, $g(2) = 4$) is the classical case. For general $k$, it was shown by David Hilbert in 1909 that $g(k)$ is always finite. The exact values are known for all $k$: $g(3) = 9$, $g(4) = 19$, and for large $k$ one has $g(k) = 2^k + \lfloor (3/2)^k \rfloor - 2$. The Hardy-Littlewood method again provides the main tool, though the minor arc estimates require substantial work.
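Lagrange’s theorem is easy to test exhaustively for small integers. A minimal sketch (brute-force search by number of summands):

```python
import itertools

def min_squares(n):
    """Smallest number of positive squares summing to n, searching by count."""
    squares = [i * i for i in range(1, int(n**0.5) + 1)]
    for count in range(1, 5):
        if any(sum(c) == n
               for c in itertools.combinations_with_replacement(squares, count)):
            return count
    return None  # never reached, by Lagrange's four-square theorem

assert all(min_squares(n) is not None for n in range(1, 500))
print(min_squares(7))  # 7 = 4 + 1 + 1 + 1 genuinely needs four squares
```

Numbers of the form $4^a(8b+7)$, such as $7$, are exactly the ones that need all four squares, so the bound $g(2) = 4$ is sharp.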
On the multiplicative side, analytic number theory studies the average behavior of arithmetic functions. The average order of the divisor function $d(n)$ (which counts the divisors of $n$) satisfies

$$\sum_{n \le x} d(n) = x\log x + (2\gamma - 1)x + O(\sqrt{x}),$$
where $\gamma \approx 0.5772$ is the Euler-Mascheroni constant. The error term $O(\sqrt{x})$ is classical (due to Dirichlet), but improving the exponent $1/2$ to the conjectured $1/4 + \varepsilon$ is the Dirichlet divisor problem, still open. The Erdős-Kac theorem (1940) shows that the number of distinct prime factors of a “typical” integer near $n$ is normally distributed with mean and variance both equal to $\log\log n$ — a striking probabilistic statement about a completely deterministic function.
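The average order is quickly checked using the identity $\sum_{n \le x} d(n) = \sum_{m \le x} \lfloor x/m \rfloor$, since each divisor $m$ is counted once for every multiple of $m$ up to $x$:

```python
import math

def divisor_sum(x):
    """Sum of d(n) for n <= x, by counting the multiples of each m <= x."""
    return sum(x // m for m in range(1, x + 1))

GAMMA = 0.5772156649  # Euler-Mascheroni constant (truncated)
x = 100_000
main_term = x * math.log(x) + (2 * GAMMA - 1) * x
print(divisor_sum(x), main_term)  # the two agree to within O(sqrt(x))
```

At $x = 10^5$ the discrepancy is a few hundred at most, comfortably inside Dirichlet’s $O(\sqrt{x})$ bound.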
Modular Forms and L-Functions
The deepest direction of modern analytic number theory leads into the theory of modular forms and their associated L-functions. A modular form is a complex-analytic function on the upper half-plane $\mathbb{H} = \{z : \Im(z) > 0\}$ that transforms in a controlled way under the action of the modular group $\mathrm{SL}_2(\mathbb{Z})$. Specifically, a modular form of weight $k$ for $\mathrm{SL}_2(\mathbb{Z})$ is a holomorphic function $f : \mathbb{H} \to \mathbb{C}$ satisfying

$$f\!\left(\frac{az + b}{cz + d}\right) = (cz + d)^k f(z) \quad \text{for all } \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z}),$$
together with a growth condition at the cusp $i\infty$. If $f$ vanishes at the cusp, it is called a cusp form. Every modular form has a Fourier expansion $f(z) = \sum_{n=0}^{\infty} a_n q^n$, where $q = e^{2\pi i z}$, and the coefficients $a_n$ carry profound arithmetic information.
The L-function of a cusp form $f = \sum_{n=1}^{\infty} a_n q^n$ is defined by

$$L(s, f) = \sum_{n=1}^{\infty} \frac{a_n}{n^s}.$$
These L-functions share all the key properties of the Riemann zeta function and Dirichlet L-functions: they have Euler products, analytic continuations, and functional equations. For an eigenform — a cusp form that is simultaneously an eigenfunction of all the Hecke operators, normalized so that $a_1 = 1$ — the Euler product takes the factored form

$$L(s, f) = \prod_p \frac{1}{1 - a_p p^{-s} + p^{k - 1 - 2s}},$$
where $a_p$ is the Hecke eigenvalue at $p$. The multiplicativity of Hecke eigenvalues, established by Erich Hecke in the 1930s, implies the multiplicativity $a_{mn} = a_m a_n$ for $\gcd(m, n) = 1$, which is precisely what gives the Euler product.
The arithmetic significance of modular forms is best illustrated by the Ramanujan tau function $\tau(n)$, defined by the expansion of the weight-12 cusp form

$$\Delta(z) = q \prod_{n=1}^{\infty} (1 - q^n)^{24} = \sum_{n=1}^{\infty} \tau(n)\, q^n, \qquad q = e^{2\pi i z}.$$

In 1916, Srinivasa Ramanujan conjectured that $|\tau(p)| \le 2p^{11/2}$ for all primes $p$. This was proved in 1974 by Pierre Deligne as a consequence of his proof of the Weil conjectures — a profound connection between the theory of modular forms and algebraic geometry over finite fields, for which Deligne received the Fields Medal.
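The coefficients $\tau(n)$ can be computed by a truncated power-series expansion of the product defining $\Delta$. The sketch below also spot-checks Hecke multiplicativity ($\tau(6) = \tau(2)\tau(3)$) and the Ramanujan bound for small primes:

```python
N = 20  # compute tau(1), ..., tau(N)

# Expand prod_{n>=1} (1 - q^n)^24 as a power series, truncated at degree N.
coeffs = [1] + [0] * N           # start from the constant series 1
for n in range(1, N + 1):
    for _ in range(24):          # multiply by (1 - q^n) twenty-four times
        coeffs = [coeffs[i] - (coeffs[i - n] if i >= n else 0)
                  for i in range(N + 1)]

def tau(n):
    """tau(n) = coefficient of q^n in Delta = q * prod (1 - q^n)^24."""
    return coeffs[n - 1]

print([tau(n) for n in range(1, 8)])
# [1, -24, 252, -1472, 4830, -6048, -16744]
assert tau(6) == tau(2) * tau(3)                             # Hecke multiplicativity
assert all(abs(tau(p)) <= 2 * p**5.5 for p in (2, 3, 5, 7))  # Ramanujan bound
```

Integer polynomial arithmetic suffices here; no floating point enters until the Ramanujan bound is checked.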
The most spectacular application of modular forms to number theory is the proof of Fermat’s Last Theorem. In 1955, Yutaka Taniyama and Goro Shimura conjectured that every elliptic curve over $\mathbb{Q}$ is modular, meaning its L-function equals the L-function of some weight-2 cusp form. In 1986, Ken Ribet proved that the Taniyama-Shimura conjecture (for semistable curves) implies Fermat’s Last Theorem. In 1995, Andrew Wiles, with crucial help from Richard Taylor, proved the semistable case of the conjecture, thereby establishing Fermat’s Last Theorem after 358 years. The full modularity theorem — that every elliptic curve over $\mathbb{Q}$ is modular — was completed in 2001 by Breuil, Conrad, Diamond, and Taylor.
The general philosophy connecting arithmetic objects to automorphic L-functions is the Langlands program, initiated by Robert Langlands in a celebrated 1967 letter to André Weil. The program predicts a vast web of correspondences between Galois representations, automorphic forms, and L-functions. It subsumes class field theory, the modularity theorem, and Dirichlet’s theorem as special cases, and its full realization would amount to a kind of arithmetic Fourier analysis, unifying number theory on a grand scale. Current research — on the geometric Langlands program, on the local and global correspondences for reductive groups, and on applications to Diophantine equations — represents the frontier of analytic and algebraic number theory combined, pointing toward a deep and still-unfolding synthesis of analysis, algebra, and arithmetic.