Elementary Number Theory
Divisibility, congruences, arithmetic functions, and the fundamental theorem of arithmetic.
Elementary number theory is the study of integers through the lens of divisibility, primality, and congruence — questions that are easy to state but often surprisingly deep to answer. It is one of the oldest branches of mathematics, stretching back to ancient Greece, and yet its results underpin modern cryptography, coding theory, and computer science. What makes number theory so compelling is this dual nature: the objects are familiar (whole numbers), yet the landscape they inhabit is vast, structured, and full of mystery.
Divisibility and the Euclidean Algorithm
At the heart of elementary number theory lies a single, simple relation. We say that an integer $d$ divides an integer $n$, written $d \mid n$, if there exists an integer $k$ such that $n = dk$. When this holds, $d$ is called a divisor or factor of $n$, and $n$ is a multiple of $d$. For instance, $6 \mid 42$ because $42 = 6 \cdot 7$, but $5 \nmid 42$ because no integer $k$ satisfies $42 = 5k$.
Divisibility is a partial order on the positive integers: it is reflexive ($a \mid a$), antisymmetric (if $a \mid b$ and $b \mid a$ with $a, b > 0$, then $a = b$), and transitive (if $a \mid b$ and $b \mid c$, then $a \mid c$). These properties, elementary as they are, provide the scaffolding for everything that follows.
The Division Algorithm formalizes the familiar act of long division. Given any integers $a$ and $b$ with $b > 0$, there exist unique integers $q$ (the quotient) and $r$ (the remainder) satisfying
$$a = bq + r, \qquad 0 \le r < b.$$
The uniqueness of $q$ and $r$ is just as important as their existence: it means every integer has a canonical decomposition with respect to any modulus. This theorem was known to ancient mathematicians in various forms, but its precise statement and proof became standard in European mathematics by the eighteenth century.
When we want to understand what two integers have in common, we turn to the greatest common divisor. For integers $a$ and $b$, not both zero, the GCD $\gcd(a, b)$ is the largest positive integer that divides both $a$ and $b$. A fundamental fact, known as Bézout’s identity, asserts that the GCD is always a linear combination of $a$ and $b$: there exist integers $x$ and $y$ such that
$$\gcd(a, b) = ax + by.$$
This is not obvious, but it follows from the Euclidean algorithm, one of the oldest algorithms in mathematics. Described by Euclid in his Elements around 300 BCE, the algorithm repeatedly applies the division algorithm: since $\gcd(a, b) = \gcd(b, r)$ where $a = bq + r$, we reduce the problem to smaller and smaller inputs until the remainder is zero. The last nonzero remainder is the GCD.
For example, to compute $\gcd(252, 198)$:
$$\begin{aligned} 252 &= 1 \cdot 198 + 54,\\ 198 &= 3 \cdot 54 + 36,\\ 54 &= 1 \cdot 36 + 18,\\ 36 &= 2 \cdot 18 + 0. \end{aligned}$$
The last nonzero remainder is $18$, so $\gcd(252, 198) = 18$. The extended Euclidean algorithm traces back through these steps to express $18 = 4 \cdot 252 - 5 \cdot 198$, giving the Bézout coefficients $x = 4$ and $y = -5$. This is not merely a theoretical curiosity: the extended algorithm is the standard method for computing modular inverses, which lie at the core of RSA encryption.
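The back-substitution step is mechanical enough to automate. Here is a minimal sketch in Python (the function name `extended_gcd` is my own choice, not from the text): it returns the GCD together with a pair of Bézout coefficients.

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    # gcd(a, b) = gcd(b, a % b); recurse on the smaller pair,
    # then back-substitute its Bezout coefficients.
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Sample inputs: 252 and 198 (any pair of integers works).
g, x, y = extended_gcd(252, 198)
print(g, x, y)                 # 18 4 -5
assert 252 * x + 198 * y == g  # Bezout's identity holds
```

When $\gcd(a, n) = 1$, the coefficient of $a$ returned by this routine is a multiplicative inverse of $a$ modulo $n$; in Python 3.8+ the same inverse is also available as `pow(a, -1, n)`.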
Two integers are coprime (or relatively prime) if $\gcd(a, b) = 1$. In that case, Bézout’s identity guarantees integers $x, y$ with $ax + by = 1$, which means $ax \equiv 1 \pmod{b}$: $x$ is a multiplicative inverse of $a$ modulo $b$. The complementary notion is the least common multiple $\operatorname{lcm}(a, b)$, the smallest positive integer divisible by both $a$ and $b$. The identity
$$\gcd(a, b) \cdot \operatorname{lcm}(a, b) = |ab|$$
connects these two concepts in an elegant way that will reappear when we discuss prime factorizations.
Congruences and Modular Arithmetic
Congruence is divisibility repackaged for comparison. We say that $a$ is congruent to $b$ modulo $n$, written $a \equiv b \pmod{n}$, if $n \mid (a - b)$, that is, if $a$ and $b$ leave the same remainder when divided by $n$. Carl Friedrich Gauss introduced this notation in his landmark 1801 treatise Disquisitiones Arithmeticae, which effectively founded modern number theory as a unified discipline. Gauss was twenty-four years old when it was published.
Congruence modulo $n$ is an equivalence relation: it is reflexive, symmetric, and transitive. The equivalence class of $a$ is its residue class $\overline{a} = \{\, a + kn : k \in \mathbb{Z} \,\}$. There are exactly $n$ residue classes modulo $n$, and the set of all of them is written $\mathbb{Z}/n\mathbb{Z}$ (or sometimes $\mathbb{Z}_n$). What makes congruences genuinely powerful is that they are compatible with arithmetic: if $a \equiv b \pmod{n}$ and $c \equiv d \pmod{n}$, then
$$a + c \equiv b + d \pmod{n} \qquad \text{and} \qquad ac \equiv bd \pmod{n}.$$
This means $\mathbb{Z}/n\mathbb{Z}$ is a ring — it inherits addition and multiplication from the integers, with all arithmetic “wrapping around” at multiples of $n$.
A linear congruence $ax \equiv b \pmod{n}$ has solutions if and only if $\gcd(a, n) \mid b$. When this condition holds, there are exactly $d = \gcd(a, n)$ distinct solutions modulo $n$, equally spaced by $n/d$. In particular, if $\gcd(a, n) = 1$, the congruence has a unique solution modulo $n$, obtained by multiplying both sides by the modular inverse of $a$.
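This solvability criterion translates directly into a short solver. The following Python sketch (the function name `solve_linear_congruence` is illustrative; `pow(a, -1, n)` for modular inverses requires Python 3.8+) reduces to a unique solution modulo $n/d$ and then lists the $d$ lifts:

```python
from math import gcd

def solve_linear_congruence(a, b, n):
    """All solutions x in [0, n) of a*x = b (mod n), or [] if none exist."""
    d = gcd(a, n)
    if b % d != 0:
        return []                       # no solutions
    # Divide through by d: now gcd(a0, n0) == 1, so a0 is invertible mod n0.
    a0, b0, n0 = a // d, b // d, n // d
    x0 = (b0 * pow(a0, -1, n0)) % n0    # the unique solution mod n0
    return [x0 + k * n0 for k in range(d)]

print(solve_linear_congruence(6, 4, 10))  # [4, 9]: 6*4 = 24 and 6*9 = 54, both = 4 (mod 10)
print(solve_linear_congruence(2, 3, 4))   # []: gcd(2, 4) = 2 does not divide 3
```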
Systems of simultaneous congruences can often be solved using the Chinese Remainder Theorem (CRT). In one of the earliest number-theoretic results from any civilization, problems of this type appear in the Sun Tzu Suanjing (ca. 3rd–5th century CE): “find a number that leaves remainder 2 when divided by 3, remainder 3 when divided by 5, and remainder 2 when divided by 7.” The CRT states that if $n_1, n_2, \ldots, n_k$ are pairwise coprime, then the system
$$x \equiv a_1 \pmod{n_1}, \quad x \equiv a_2 \pmod{n_2}, \quad \ldots, \quad x \equiv a_k \pmod{n_k}$$
has a unique solution modulo $N = n_1 n_2 \cdots n_k$. The proof is constructive: one builds the solution as $x = \sum_{i=1}^{k} a_i N_i y_i$, where $N_i = N/n_i$ and $y_i$ is the inverse of $N_i$ modulo $n_i$. In abstract algebra terms, the CRT asserts an isomorphism of rings $\mathbb{Z}/N\mathbb{Z} \cong \mathbb{Z}/n_1\mathbb{Z} \times \cdots \times \mathbb{Z}/n_k\mathbb{Z}$.
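The constructive proof is itself an algorithm. A sketch in Python (the helper name `crt` is mine; `math.prod` and modular `pow(x, -1, m)` require Python 3.8+), applied to Sun Tzu's original puzzle:

```python
from math import prod

def crt(residues, moduli):
    """Solve x = a_i (mod n_i) for pairwise-coprime moduli; return x mod N."""
    N = prod(moduli)
    x = 0
    for a_i, n_i in zip(residues, moduli):
        N_i = N // n_i                # product of all the other moduli
        y_i = pow(N_i, -1, n_i)       # inverse of N_i modulo n_i
        x += a_i * N_i * y_i          # contributes a_i mod n_i, 0 mod the rest
    return x % N

# Sun Tzu: remainder 2 mod 3, 3 mod 5, 2 mod 7.
print(crt([2, 3, 2], [3, 5, 7]))      # 23
```

Each summand $a_i N_i y_i$ is $\equiv a_i \pmod{n_i}$ and $\equiv 0$ modulo every other modulus, which is why the sum satisfies all the congruences at once.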
The multiplicative structure of $\mathbb{Z}/n\mathbb{Z}$ is governed by Euler’s totient function $\varphi(n)$, which counts the integers from $1$ to $n$ that are coprime to $n$. The set of such residues forms the multiplicative group $(\mathbb{Z}/n\mathbb{Z})^\times$, which has order $\varphi(n)$. Euler’s theorem generalizes Fermat’s Little Theorem: if $\gcd(a, n) = 1$, then
$$a^{\varphi(n)} \equiv 1 \pmod{n}.$$
For prime $p$, this reduces to Fermat’s Little Theorem: $a^{p-1} \equiv 1 \pmod{p}$ for $p \nmid a$, a result Pierre de Fermat stated (without proof) in a letter to Bernard Frénicle de Bessy in 1640. Euler provided the first published proof in 1736. The formula for the totient of a general integer $n$ is
$$\varphi(n) = n \prod_{p \mid n} \left(1 - \frac{1}{p}\right),$$
where the product runs over the distinct prime divisors $p$ of $n$.
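The product formula suggests a direct way to compute $\varphi(n)$ by trial division: each time a new prime divisor $p$ is found, multiply the running result by $(1 - 1/p)$. A sketch in Python (the function name `totient` is my own), followed by a numerical check of Euler's theorem:

```python
def totient(n):
    """Euler's phi(n) via the product formula over distinct prime divisors."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # multiply result by (1 - 1/p)
            while m % p == 0:          # strip this prime entirely
                m //= p
        p += 1
    if m > 1:                          # one prime factor > sqrt may remain
        result -= result // m
    return result

print(totient(10))               # 4: the units mod 10 are {1, 3, 7, 9}
print(pow(3, totient(10), 10))   # 1, illustrating Euler's theorem for a = 3, n = 10
```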
Prime Numbers and the Fundamental Theorem of Arithmetic
A prime number is an integer greater than 1 whose only positive divisors are 1 and itself. All other integers greater than 1 are composite. Primes are the atoms of multiplicative arithmetic — the irreducible building blocks from which every other positive integer is assembled. The first few primes are $2, 3, 5, 7, 11, 13, \ldots$, and their distribution among the integers has fascinated mathematicians since antiquity.
The Fundamental Theorem of Arithmetic asserts that every integer greater than 1 can be written as a product of primes, and that this factorization is unique up to the order of the factors. More precisely: for any integer $n > 1$, there exist primes $p_1 \le p_2 \le \cdots \le p_k$ such that
$$n = p_1 p_2 \cdots p_k,$$
and any two such representations are identical. The existence half is straightforward by strong induction: if $n$ is not prime, it factors as $n = ab$ with $1 < a, b < n$, and each of $a$ and $b$ factors by induction. The uniqueness half requires more care — it uses the crucial fact that if a prime $p$ divides a product $ab$, then $p \mid a$ or $p \mid b$ (sometimes called Euclid’s lemma).
Though the theorem feels obvious, its uniqueness can fail in other algebraic settings. In the ring $\mathbb{Z}[\sqrt{-5}]$, for instance, one has $6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5})$, two distinct factorizations into irreducibles. Understanding why such failures occur, and how to restore uniqueness, was a central motivation for the development of ideal theory by Ernst Kummer and Richard Dedekind in the nineteenth century — and ultimately led to algebraic number theory, which is explored more deeply in later sub-topics of this branch.
The prime factorization of integers gives a formula for the GCD and LCM: if $a = p_1^{\alpha_1} \cdots p_r^{\alpha_r}$ and $b = p_1^{\beta_1} \cdots p_r^{\beta_r}$ (taking $\alpha_i = 0$ or $\beta_i = 0$ for primes not appearing), then
$$\gcd(a, b) = \prod_{i=1}^{r} p_i^{\min(\alpha_i, \beta_i)}, \qquad \operatorname{lcm}(a, b) = \prod_{i=1}^{r} p_i^{\max(\alpha_i, \beta_i)}.$$
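These exponent formulas can be checked directly by factoring. A sketch in Python (the helper names are mine; in practice one computes GCDs with the Euclidean algorithm, since factoring is far more expensive):

```python
from collections import Counter

def factorize(n):
    """Prime factorization of n > 1 as a Counter {prime: exponent}."""
    factors, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:                          # leftover prime factor > sqrt
        factors[n] += 1
    return factors

def gcd_lcm(a, b):
    """GCD and LCM via min/max of prime exponents."""
    fa, fb = factorize(a), factorize(b)
    g = l = 1
    for p in set(fa) | set(fb):
        g *= p ** min(fa[p], fb[p])    # gcd takes the minimum exponent
        l *= p ** max(fa[p], fb[p])    # lcm takes the maximum exponent
    return g, l

print(gcd_lcm(252, 198))               # (18, 2772)
```

Since $\min(\alpha, \beta) + \max(\alpha, \beta) = \alpha + \beta$ at every prime, this picture also re-proves the identity $\gcd(a, b) \cdot \operatorname{lcm}(a, b) = ab$ for positive integers.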
Are there infinitely many primes? Euclid showed there are, in one of the most elegant proofs in all of mathematics. Suppose, for contradiction, that $p_1, p_2, \ldots, p_k$ are all the primes. Form $N = p_1 p_2 \cdots p_k + 1$. Then $N$ is not divisible by any of $p_1, \ldots, p_k$ (division by each leaves remainder 1), so any prime factor of $N$ is a prime not on our list — a contradiction. This argument appears in Book IX of Euclid’s Elements.
The distribution of primes, while irregular locally, becomes predictable on large scales. The prime counting function $\pi(x)$ denotes the number of primes at most $x$. Adrien-Marie Legendre conjectured around 1798 that $\pi(x)$ grows like $x/\ln x$. This was proved independently by Jacques Hadamard and Charles-Jean de la Vallée Poussin in 1896 using complex analysis, giving the celebrated Prime Number Theorem:
$$\lim_{x \to \infty} \frac{\pi(x)}{x/\ln x} = 1.$$
The exact rate of convergence is connected to the zeros of the Riemann zeta function and the Riemann Hypothesis, one of the great open problems of mathematics.
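The theorem is easy to probe numerically with a sieve. A sketch in Python (the sieve of Eratosthenes; helper names are mine) compares $\pi(x)$ with $x/\ln x$ at a few scales:

```python
from math import log

def primes_up_to(x):
    """Sieve of Eratosthenes: list of all primes <= x."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, x + 1, i):   # strike out multiples of i
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for x in (10**3, 10**5):
    pi_x = len(primes_up_to(x))        # pi(1000) = 168, pi(100000) = 9592
    print(x, pi_x, pi_x / (x / log(x)))
```

The printed ratios drift slowly toward 1, consistent with the theorem; the convergence is logarithmically slow, which is why the ratio is still noticeably above 1 at these small scales.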
Primitive Roots and Orders
Every nonzero element $a$ in $\mathbb{Z}/p\mathbb{Z}$ (for prime $p$) generates a cyclic subgroup of the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^\times$. The order of $a$ modulo $p$, written $\operatorname{ord}_p(a)$, is the smallest positive integer $k$ such that $a^k \equiv 1 \pmod{p}$. By Fermat’s Little Theorem, the order always divides $p - 1$.
A primitive root modulo $p$ is an element $g$ whose order equals $p - 1$, meaning the powers $g, g^2, \ldots, g^{p-1}$ form a complete set of nonzero residues modulo $p$. In other words, $(\mathbb{Z}/p\mathbb{Z})^\times$ is cyclic and $g$ is a generator. The existence of primitive roots modulo every prime is a nontrivial theorem, first proved by Euler and later given a clean treatment by Gauss in the Disquisitiones Arithmeticae. For $p = 7$, for instance, $3$ is a primitive root: $3^1 \equiv 3$, $3^2 \equiv 2$, $3^3 \equiv 6$, $3^4 \equiv 4$, $3^5 \equiv 5$, $3^6 \equiv 1$ modulo 7.
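For a small prime, orders and primitive roots can be tabulated by brute force. A sketch in Python (function names mine):

```python
def order(a, p):
    """Multiplicative order of a modulo prime p (requires gcd(a, p) = 1)."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

p = 7
print({a: order(a, p) for a in range(1, p)})
# {1: 1, 2: 3, 3: 6, 4: 3, 5: 6, 6: 2} -- every order divides p - 1 = 6
print([a for a in range(1, p) if order(a, p) == p - 1])
# [3, 5]: the primitive roots modulo 7
```

Note that there are exactly $\varphi(p - 1) = \varphi(6) = 2$ primitive roots, a general fact about cyclic groups.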
The existence of primitive roots extends beyond primes. Primitive roots exist modulo $n$ if and only if $n = 1, 2, 4, p^k$, or $2p^k$ for an odd prime $p$ and $k \ge 1$. For all other moduli, $(\mathbb{Z}/n\mathbb{Z})^\times$ is not cyclic — it decomposes into a product of cyclic groups of smaller orders. Understanding this structure requires the classification of finitely generated abelian groups, a topic developed in abstract algebra.
The notion of order is the key to discrete logarithms. If $g$ is a primitive root modulo $p$ and $a \not\equiv 0 \pmod{p}$, then there exists a unique exponent $k$ with $0 \le k \le p - 2$ satisfying $g^k \equiv a \pmod{p}$. This is called the discrete logarithm of $a$ to the base $g$ modulo $p$, written $\log_g a$. Computing discrete logarithms efficiently is believed to be computationally hard — no polynomial-time algorithm is known for general $p$ — and this hardness assumption underpins the security of the Diffie-Hellman key exchange and related protocols in modern cryptography.
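A brute-force discrete logarithm takes one line per power, which is exactly why it is hopeless at cryptographic sizes. A sketch in Python (function name mine; fine for tiny $p$, and the absence of anything comparably cheap for 2048-bit $p$ is the hardness assumption the text describes):

```python
def discrete_log(g, a, p):
    """Smallest k >= 0 with g**k = a (mod p), by exhaustive search."""
    x = 1
    for k in range(p - 1):
        if x == a % p:
            return k
        x = (x * g) % p                # step to the next power of g
    raise ValueError("a is not in the group generated by g")

print(discrete_log(3, 5, 7))           # 5, since 3**5 = 243 = 5 (mod 7)
```

Better generic algorithms (baby-step giant-step, Pollard's rho) still take roughly $\sqrt{p}$ operations, which remains exponential in the bit length of $p$.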
The structure of orders modulo prime powers is more subtle. The lifting the exponent lemma and Hensel’s lemma (discussed more fully in the context of $p$-adic numbers) give methods for bootstrapping solutions modulo $p$ to solutions modulo higher powers $p^k$. This lifting perspective recurs throughout number theory: a local solution modulo a prime power is a candidate for a global solution over $\mathbb{Z}$, and the extent to which local solutions combine to give global ones is measured by the Chinese Remainder Theorem and, more deeply, by class field theory.
Applications of Modular Arithmetic
The abstract machinery of congruences and orders connects directly to concrete problems in computing, cryptography, and everyday life. Modular arithmetic provides both the theoretical foundation and the practical algorithms for a range of modern technologies.
Primality testing is the first major application. To verify that a large integer $n$ is prime, one cannot afford to trial-divide by all integers up to $\sqrt{n}$ when $n$ has hundreds of digits. Fermat’s test offers a shortcut: if $n$ is prime, then $a^{n-1} \equiv 1 \pmod{n}$ for every $a$ coprime to $n$. A failure means $n$ is composite. However, certain composite numbers — called Carmichael numbers — pass Fermat’s test for every coprime base, with $561 = 3 \cdot 11 \cdot 17$ being the smallest example. The Miller-Rabin primality test, introduced independently by Gary Miller (1976) and Michael Rabin (1980), refines this by checking additional conditions derived from the factorization $n - 1 = 2^s d$ with $d$ odd. A composite $n$ fools Miller-Rabin for at most $1/4$ of all possible bases, making the test highly effective when run with multiple random bases. The deterministic AKS test, published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena in 2002, settled a long-open question by showing that primality can be verified in polynomial time without any unproven assumptions.
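Both the Fermat test and its Carmichael failure mode fit in a few lines. A sketch in Python (the function name `fermat_test` is mine):

```python
import random
from math import gcd

def fermat_test(n, rounds=20):
    """Fermat primality test for n >= 4: False means definitely composite,
    True means 'probably prime' (Carmichael numbers slip through)."""
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False               # a is a Fermat witness: n is composite
    return True

print(fermat_test(97))                 # True: 97 is prime
# 561 = 3 * 11 * 17 is the smallest Carmichael number:
# every base coprime to 561 passes the Fermat test.
print(all(pow(a, 560, 561) == 1 for a in range(2, 561) if gcd(a, 561) == 1))  # True
```

Miller-Rabin closes this loophole by examining the sequence $a^d, a^{2d}, \ldots, a^{2^s d} \pmod{n}$ and rejecting $n$ if a square root of 1 other than $\pm 1$ ever appears.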
RSA cryptography is the most famous application of elementary number theory. To set up an RSA key pair, one chooses two large primes $p$ and $q$, forms $n = pq$, and selects an encryption exponent $e$ coprime to $\varphi(n) = (p - 1)(q - 1)$. The decryption exponent is $d \equiv e^{-1} \pmod{\varphi(n)}$, computed via the extended Euclidean algorithm. Encryption and decryption operate by modular exponentiation: a message $m$ becomes ciphertext $c = m^e \bmod n$, and decryption recovers $m = c^d \bmod n$, valid by Euler’s theorem since $ed \equiv 1 \pmod{\varphi(n)}$ implies $m^{ed} \equiv m \pmod{n}$. The security of RSA rests on the apparent difficulty of factoring $n$ without knowledge of $p$ and $q$ — the integer factorization problem, for which no efficient general algorithm is known. RSA was introduced in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman, though an equivalent system had been discovered earlier (and kept classified) by Clifford Cocks at the UK’s GCHQ in 1973.
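The whole pipeline fits in a few lines with toy-sized primes (61 and 53 below are illustrative only; real deployments use primes of roughly 1024 bits or more, and padding schemes on top of the raw arithmetic):

```python
# Toy RSA key generation with small primes (illustration only).
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: inverse of e mod phi (2753)

m = 65                         # message, must satisfy 0 <= m < n
c = pow(m, e, n)               # encrypt: c = m^e mod n
assert pow(c, d, n) == m       # decrypt: c^d mod n recovers m
print(n, e, d, c)
```

Anyone who can factor $n = 3233$ back into $61 \cdot 53$ can recompute $\varphi(n)$ and hence $d$, which is why the key sizes must make factoring infeasible.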
Check digit systems offer a more quotidian application. The ISBN-10 system for book identification assigns a 10-digit code $d_1 d_2 \cdots d_{10}$ satisfying
$$\sum_{i=1}^{10} i \cdot d_i \equiv 0 \pmod{11}.$$
The final digit is the check digit, chosen to make this congruence hold. If a single digit is transcribed incorrectly, the checksum fails, alerting the reader to an error. Transposition of adjacent digits — the most common human transcription error — is also always detected, because the weights are all different modulo 11. The Luhn algorithm used for credit card numbers is a similar but simpler scheme using modulo 10, though it detects only a subset of transpositions.
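The validation itself is a one-line sum in modular arithmetic. A sketch in Python (function name mine; this assumes the ascending-weight convention $\sum i \cdot d_i \equiv 0 \pmod{11}$, and a final 'X' stands for the digit value 10):

```python
def isbn10_checksum_ok(isbn):
    """Validate ISBN-10: weighted sum of digits must be 0 mod 11."""
    digits = [10 if ch == 'X' else int(ch) for ch in isbn if ch not in '- ']
    return (len(digits) == 10 and
            sum(i * d for i, d in enumerate(digits, 1)) % 11 == 0)

print(isbn10_checksum_ok("0-306-40615-2"))  # True: a valid ISBN-10
print(isbn10_checksum_ok("0-306-40615-3"))  # False: single-digit error caught
print(isbn10_checksum_ok("0-306-46015-2"))  # False: adjacent transposition caught
```

The modulus 11 is prime, so every nonzero weighted change in a single position, and every nonzero difference introduced by swapping adjacent digits, is detected.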
Efficient computation by repeated squaring is the final cornerstone. Computing $a^b \bmod n$ naively requires $b - 1$ multiplications, which is useless when $b$ is a 256-bit exponent. The square-and-multiply (or binary exponentiation) algorithm reduces this to $O(\log b)$ modular multiplications: write $b$ in binary, and at each bit position either square (for a 0 bit) or square-and-multiply (for a 1 bit). For instance, $a^{13}$ is computed as $((a^2 \cdot a)^2)^2 \cdot a$, following the binary expansion $13 = 1101_2$ and requiring only 5 multiplications instead of 12. This algorithm is the computational engine behind RSA, Diffie-Hellman, and virtually every other modular-exponentiation-based protocol. The interplay between the theoretical structures developed in this topic and their efficient algorithmic implementation is what makes elementary number theory not just a historical curiosity, but a living and economically vital field of mathematics.
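A left-to-right implementation makes the bit-by-bit rule explicit. A sketch in Python (function name mine; Python's built-in three-argument `pow` performs the same algorithm in C and is what one would use in practice):

```python
def power_mod(a, b, n):
    """Left-to-right square-and-multiply: a**b mod n in O(log b) multiplications."""
    result = 1
    for bit in bin(b)[2:]:             # binary digits of b, most significant first
        result = (result * result) % n # always square
        if bit == '1':
            result = (result * a) % n  # multiply in a on a 1 bit
    return result

print(power_mod(7, 13, 11), pow(7, 13, 11))  # 2 2: both agree
```

Reducing modulo $n$ after every multiplication keeps the intermediate values below $n^2$, so the method works even when $a^b$ itself would have millions of digits.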
The ideas introduced here — divisibility, primes, congruences, orders — are the foundation on which the rest of number theory is built. Moving deeper into the branch, one encounters quadratic residues and the law of quadratic reciprocity, Diophantine equations such as the Pell equation, and the rich arithmetic of algebraic number fields, all of which extend the elementary machinery developed in this topic to dramatically broader and more powerful settings.