Abstract Algebra
Groups, rings, fields, Galois theory, and homological algebra — the algebra of structures.
Abstract algebra is the study of mathematical structures defined by sets equipped with operations satisfying specified axioms — groups, rings, fields, and modules — rather than the study of numbers or geometric objects directly. Where elementary algebra asks “solve for x,” abstract algebra asks “what kind of structure is x part of, and what can we say about all structures of that kind?” The subject emerged gradually across the nineteenth century, crystallizing in the work of Évariste Galois, Arthur Cayley, Emmy Noether, and Emil Artin into a unified language that underpins nearly every branch of modern mathematics.
Foundations and Basic Algebraic Structures
The starting point of abstract algebra is the recognition that many familiar mathematical objects — the integers, the rational numbers, the symmetries of a geometric shape, the polynomials with real coefficients — share deep structural similarities that can be captured by a small number of axioms. To make this precise we need the language of binary operations. A binary operation on a set S is a function that takes two elements of S and returns a third, always staying inside S (the closure property). From this elementary beginning, we layer on additional requirements.
The most primitive structure is a magma: just a set with a binary operation, no additional constraints. Adding associativity — the requirement that (a · b) · c = a · (b · c) for all a, b, c — gives a semigroup. Associativity is the essential requirement that makes multi-step computation unambiguous: it says that the result of applying the operation repeatedly does not depend on how we parenthesize.
A monoid is a semigroup that additionally possesses an identity element e satisfying e · a = a · e = a for all a. The natural numbers under addition form a monoid with identity 0; the positive integers under multiplication form a monoid with identity 1. The key transition from monoid to group is the addition of inverses: every element must have a partner that “undoes” it.
A group is a set G with a binary operation satisfying four axioms: closure, associativity, the existence of an identity element e, and the existence of an inverse a⁻¹ for every a in G such that a · a⁻¹ = a⁻¹ · a = e. The integers under addition are the prototypical group: identity 0, inverse of n is −n. The nonzero rationals under multiplication form another: identity 1, inverse of q is 1/q. A group is abelian (or commutative) if additionally a · b = b · a for all a, b, a condition named after Niels Henrik Abel, who proved in 1824 that general quintic equations are unsolvable by radicals.
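The four group axioms are finitely checkable for a finite set, so they can be verified mechanically. The sketch below (a hypothetical helper, not from the source) tests the axioms for Z/6Z under addition mod 6, and shows that {1, …, 5} under multiplication mod 6 fails them:

```python
from itertools import product

# Check the group axioms for a finite set with a given binary operation.
def is_group(elements, op):
    elements = list(elements)
    # Closure: the operation must stay inside the set.
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity: (a*b)*c == a*(b*c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity: exactly one two-sided identity element.
    ids = [e for e in elements if all(op(e, a) == a == op(a, e) for a in elements)]
    if len(ids) != 1:
        return False
    e = ids[0]
    # Inverses: every element has a two-sided inverse.
    return all(any(op(a, b) == e == op(b, a) for b in elements) for a in elements)

print(is_group(range(6), lambda a, b: (a + b) % 6))           # True: (Z/6Z, +)
print(is_group(range(1, 6), lambda a, b: (a * b) % 6))        # False: 2*3 ≡ 0 breaks closure
```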
Beyond groups, algebra introduces structures with two interacting operations. A ring is a set R with addition and multiplication such that (R, +) is an abelian group, multiplication is associative, and the two operations satisfy left and right distributivity: a(b + c) = ab + ac and (a + b)c = ac + bc. The integers Z, the real polynomials R[x], and the n × n real matrices are all rings. When multiplication is also commutative, we say the ring is commutative. When every nonzero element has a multiplicative inverse, we say the ring is a division ring; a commutative division ring is a field. The rational numbers Q, the reals R, and the complex numbers C are the canonical fields; the quaternions H are a non-commutative division ring discovered by William Rowan Hamilton in 1843.
Group Theory and Classification
The theory of groups is one of the great success stories of nineteenth- and twentieth-century mathematics: starting from four axioms, mathematicians built a rich classification theory culminating, after a century of effort, in the complete list of finite simple groups.
The first fundamental theorem is Lagrange’s theorem (1771), which states that if H is a subgroup of a finite group G, then |H| divides |G|. The proof rests on the notion of cosets: the left cosets gH partition G into pieces of equal size |H|, so |G| = [G : H] · |H|, where [G : H] is the index of H in G. Lagrange’s theorem immediately implies that the order of any element g — the smallest positive integer n with gⁿ = e — must divide |G|, a result with immediate consequences in number theory (it implies Fermat’s little theorem).
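The coset partition behind Lagrange’s theorem can be computed directly for a small group. A sketch, using S3 realized as permutations of {0, 1, 2} and the order-2 subgroup generated by a transposition:

```python
from itertools import permutations

# Left cosets gH of a subgroup H of S3, illustrating |G| = [G:H] * |H|.
def compose(p, q):
    """Composition of permutations-as-tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))          # S3, order 6
H = [(0, 1, 2), (1, 0, 2)]                # subgroup {identity, swap 0 and 1}

# Each coset gH is the set {g o h : h in H}; collect them as a set of sets.
cosets = {tuple(sorted(compose(g, h) for h in H)) for g in G}
print(len(G), len(H), len(cosets))        # 6 2 3, so 6 = 3 * 2
```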
A subgroup N is normal if gNg⁻¹ = N for all g in G, meaning N is closed under conjugation. Normal subgroups are exactly the kernels of group homomorphisms, and they allow us to form quotient groups G/N whose elements are the cosets gN, with multiplication (aN)(bN) = abN. The First Isomorphism Theorem then asserts that if φ : G → H is a group homomorphism, then G/ker φ ≅ im φ. This is the fundamental tool for understanding group structure: every surjective homomorphism out of G can be factored through a quotient.
Cyclic groups are the simplest: Z/nZ under addition, generated by a single element. Every subgroup of a cyclic group is cyclic, and the subgroups of Z/nZ are in bijection with the divisors of n. The fundamental theorem of finite abelian groups generalizes this completely: every finite abelian group is isomorphic to a direct product of cyclic groups of prime-power order,

G ≅ Z/p₁^a₁ × Z/p₂^a₂ × ⋯ × Z/p_k^a_k,

a classification proved in its modern form by Leopold Kronecker in 1870. This result shows that finite abelian groups are completely understood; the classification of non-abelian groups is far harder.
For non-abelian groups, the key tool is the Sylow theorems, proved by Peter Ludwig Sylow in 1872. If |G| = pᵃm where p is prime and p does not divide m, then a Sylow p-subgroup is a subgroup of order pᵃ. The Sylow theorems state: such subgroups always exist; they are all conjugate to each other; and the number of Sylow p-subgroups, denoted n_p, satisfies n_p ≡ 1 (mod p) and n_p | m. These constraints are powerful enough to classify all groups of many small orders — for instance, every group of order 15 or 35 is cyclic, deduced purely from the arithmetic of Sylow numbers.
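The Sylow counting argument is pure arithmetic and is easy to automate. A sketch for |G| = 15 = 3 · 5: the two congruence-and-divisibility constraints each leave only n_p = 1, so both Sylow subgroups are unique, hence normal, and the group must be cyclic:

```python
# Possible Sylow counts n_p for |G| = p^a * m: n_p | m and n_p ≡ 1 (mod p).
def sylow_counts(p, m):
    return [n for n in range(1, m + 1) if m % n == 0 and n % p == 1]

print(sylow_counts(3, 5))   # [1]: unique (hence normal) Sylow 3-subgroup
print(sylow_counts(5, 3))   # [1]: likewise for p = 5
```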
A group is simple if it has no normal subgroups other than the trivial subgroup and itself — it cannot be “factored” further. Simple groups are the atomic building blocks: the Jordan-Hölder theorem guarantees that every finite group G has a composition series whose factors are simple groups, and those factors (with multiplicity) are uniquely determined by G up to reordering. The classification of all finite simple groups — the CFSG — is one of the longest and most complex theorems in mathematics, announced as complete around 1983 after contributions from hundreds of mathematicians across tens of thousands of journal pages. The finite simple groups fall into a small number of infinite families (cyclic groups of prime order, alternating groups A_n for n ≥ 5, groups of Lie type) plus 26 sporadic exceptions, the largest being the Monster group of order approximately 8 × 10⁵³.
Solvable groups are those whose composition factors are all cyclic of prime order. Galois showed that a polynomial equation is solvable by radicals if and only if its Galois group is solvable — a connection that simultaneously solved a 300-year-old open problem and invented the theory of groups. The Feit-Thompson theorem (1963), which required 255 pages to prove, established that every group of odd order is solvable, a result conjectured by Burnside in 1911 and central to the CFSG strategy.
Ring Theory and Ideals
Rings generalize the arithmetic of the integers to abstract settings, and the theory of ideals plays the role that normal subgroups play in group theory: they are the kernels of ring homomorphisms and the building blocks from which quotient rings are constructed.
An ideal I of a ring R is a nonempty subset closed under addition and under multiplication by arbitrary ring elements: if a, b ∈ I and r ∈ R, then a + b ∈ I, ra ∈ I, and ar ∈ I. In a commutative ring the last two conditions coincide. The simplest ideals are principal ideals (a), generated by a single element. In Z, every ideal is principal: the ideals are exactly the sets nZ for a nonnegative integer n. The quotient ring Z/nZ recovers modular arithmetic mod n.
An ideal P is prime if whenever ab ∈ P then a ∈ P or b ∈ P — generalizing the notion of a prime number. The quotient R/P is an integral domain (no zero divisors) if and only if P is prime. An ideal M is maximal if it is not properly contained in any proper ideal; the quotient R/M is a field if and only if M is maximal. Every maximal ideal is prime, but not conversely: in Z, the ideal (6) is neither prime nor maximal, (5) is maximal (with quotient Z/5Z, a field), and the zero ideal (0) is prime but not maximal.
The question of unique factorization — so familiar for integers — extends to a hierarchy of ring classes. An integral domain is a unique factorization domain (UFD) if every nonzero non-unit factors into irreducibles in a way that is unique up to order and units. Z is a UFD by the fundamental theorem of arithmetic; Z[i] (the Gaussian integers) is also a UFD. A principal ideal domain (PID) is an integral domain in which every ideal is principal; every PID is a UFD, but the polynomial ring Z[x] is a UFD that is not a PID (the ideal (2, x) is not principal). A Euclidean domain is one equipped with a division algorithm (a degree function generalizing the absolute value on Z); every Euclidean domain is a PID. This hierarchy — Euclidean domain ⊂ PID ⊂ UFD ⊂ integral domain — organizes a vast range of examples.
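The Gaussian integers make the Euclidean-domain abstraction concrete: the norm N(a + bi) = a² + b² plays the role of absolute value, and dividing by rounding to the nearest lattice point gives a remainder of smaller norm. A minimal sketch (representing Gaussian integers as Python complex numbers with integer parts):

```python
# Division with remainder in Z[i], then the Euclidean algorithm for gcd.
def gauss_divmod(a, b):
    q = a / b
    qr = complex(round(q.real), round(q.imag))   # nearest Gaussian integer
    return qr, a - qr * b                        # remainder has smaller norm

def gauss_gcd(a, b):
    while b != 0:
        _, r = gauss_divmod(a, b)
        a, b = b, r
    return a

# 5 = (2+i)(2-i) is not prime in Z[i]; its gcd with 2+i recovers that factor.
print(gauss_gcd(complex(5, 0), complex(2, 1)))   # (2+1j)
```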
Emmy Noether transformed ring theory in the 1920s by introducing the ascending chain condition (ACC): a ring is Noetherian if every ascending chain of ideals I₁ ⊆ I₂ ⊆ ⋯ eventually stabilizes. Her Hilbert Basis Theorem (originally proved by David Hilbert in 1890, though Noether gave it its modern form) states that if R is Noetherian, then the polynomial ring R[x] is Noetherian. By induction, R[x₁, …, xₙ] is Noetherian whenever R is — a result that guarantees, among other things, that every ideal of a polynomial ring over a field is finitely generated, so any system of polynomial equations can be replaced by an equivalent finite system. The Lasker-Noether theorem generalizes the primary decomposition of integers: in a Noetherian ring, every ideal can be written as a finite intersection of primary ideals, a result that carries deep geometric content in commutative algebra.
The Chinese Remainder Theorem in its abstract ring-theoretic form states that if I₁, …, Iₙ are pairwise comaximal ideals of R (meaning Iⱼ + Iₖ = R for j ≠ k), then R/(I₁ ∩ ⋯ ∩ Iₙ) ≅ R/I₁ × ⋯ × R/Iₙ. For R = Z this recovers the classical CRT from number theory: the system x ≡ aᵢ (mod nᵢ) has a unique solution mod n₁⋯nₖ whenever the nᵢ are pairwise coprime.
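The classical two-modulus case can be solved constructively with the extended Euclidean algorithm: from n₁u + n₂v = 1 one assembles the solution directly. A sketch:

```python
# Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(a1, n1, a2, n2):
    """Solve x ≡ a1 (mod n1), x ≡ a2 (mod n2) for coprime n1, n2."""
    g, u, v = ext_gcd(n1, n2)
    assert g == 1, "moduli must be coprime"
    # n1*u + n2*v = 1, so a1*n2*v is ≡ a1 mod n1 and ≡ 0 mod n2, etc.
    return (a1 * n2 * v + a2 * n1 * u) % (n1 * n2)

print(crt(2, 3, 3, 5))    # 8: indeed 8 ≡ 2 (mod 3) and 8 ≡ 3 (mod 5)
```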
Field Theory and Galois Theory
Field theory studies the arithmetic of fields and the ways fields can be extended to contain roots of polynomials. Its crown jewel — Galois theory — bridges field extensions and group theory in a correspondence of breathtaking elegance, resolving questions about the solvability of polynomial equations that had been open since the Renaissance.
Given a field K and a polynomial f ∈ K[x], a field extension L/K is a larger field L containing K as a subfield. The degree [L : K] is the dimension of L as a vector space over K. An element α ∈ L is algebraic over K if it satisfies some nonzero polynomial with coefficients in K; it is transcendental otherwise. The real number √2 is algebraic over Q (satisfying x² − 2 = 0), while π and e are transcendental, proved by Ferdinand von Lindemann in 1882 and Charles Hermite in 1873 respectively. For any algebraic element α, the minimal polynomial is the unique monic irreducible polynomial over K of which α is a root, and [K(α) : K] equals its degree.
The tower law governs degrees of successive extensions: if K ⊆ L ⊆ M are fields, then [M : K] = [M : L] · [L : K]. This multiplicativity has immediate consequences. Doubling the cube — constructing ∛2 with compass and straightedge — requires an extension of degree 3 over Q, but all compass-and-straightedge constructions yield degrees that are powers of 2. Since 3 is not a power of 2, the problem is impossible. Similarly, trisecting a 60° angle requires constructing cos 20°, a root of 8x³ − 6x − 1, an irreducible cubic — again impossible by straightedge and compass. Galois theory converts these geometric impossibilities into clean algebraic facts.
The heart of Galois theory is the Galois group of an extension L/K, defined as Gal(L/K) = {automorphisms σ of L with σ(a) = a for all a ∈ K}. An extension is Galois if it is both normal (every irreducible polynomial over K that has one root in L splits completely in L) and separable (all roots of irreducible polynomials are distinct). For a Galois extension, |Gal(L/K)| = [L : K].
The Fundamental Theorem of Galois Theory establishes a perfect correspondence between the lattice of intermediate fields K ⊆ F ⊆ L and the lattice of subgroups of G = Gal(L/K). Concretely: to each intermediate field F associate the subgroup Gal(L/F) fixing F pointwise; to each subgroup H associate the fixed field L^H. This correspondence reverses inclusions, and [L : F] = |Gal(L/F)|. Moreover, F/K is itself Galois if and only if Gal(L/F) is normal in G, with Gal(F/K) ≅ G/Gal(L/F).
The solvability question now becomes purely group-theoretic. A polynomial is solvable by radicals — meaning its roots can be expressed via the four arithmetic operations and nth roots applied to elements of the base field — if and only if its Galois group (the Galois group of its splitting field over the base field) is a solvable group. The symmetric group S₄ is solvable, explaining why quartics are solvable (Ferrari, 1545). The symmetric group S₅ is not solvable (since A₅ is simple and nonabelian), proving that no formula by radicals can exist for the general quintic — the Abel-Ruffini theorem, first proved by Paolo Ruffini in 1799 and completed by Abel in 1824, then recast by Galois in 1832 in a far more precise form.
Finite fields — fields with finitely many elements — are completely classified. For each prime p and positive integer n, there exists exactly one field of order pⁿ up to isomorphism, denoted F_{pⁿ} or GF(pⁿ), and its multiplicative group is cyclic. The Frobenius automorphism x ↦ xᵖ generates the cyclic Galois group Gal(F_{pⁿ}/F_p). Finite fields underlie modern cryptography — the Diffie-Hellman protocol, elliptic curve cryptosystems, and the AES cipher all depend on their arithmetic.
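Cyclicity of the multiplicative group means some single element generates all of F_p* by its powers. A brute-force sketch for p = 7, finding the primitive roots:

```python
# g generates F_p* exactly when its powers g, g^2, ..., g^(p-1) hit every
# nonzero residue mod p.
def is_generator(g, p):
    return {pow(g, k, p) for k in range(1, p)} == set(range(1, p))

print([g for g in range(1, 7) if is_generator(g, 7)])   # [3, 5]
```

For instance the powers of 3 mod 7 run 3, 2, 6, 4, 5, 1, exhausting F₇*, while 2 only generates the subgroup {2, 4, 1} of order 3.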
Module Theory
Modules generalize vector spaces by replacing the coefficient field with an arbitrary ring. This one generalization unifies linear algebra, group representation theory, and the theory of abelian groups under a single framework, and provides the natural setting in which to prove the Jordan normal form theorem from first principles.
A left R-module over a ring R is an abelian group M together with a scalar multiplication R × M → M satisfying the obvious analogues of the vector space axioms: r(m + n) = rm + rn, (r + s)m = rm + sm, (rs)m = r(sm), and 1m = m. When R is a field, modules are exactly vector spaces over that field. When R = Z, modules are exactly abelian groups (Z-scalar multiplication is forced: nm = m + ⋯ + m, n times). When R = F[x] for a field F, an F[x]-module structure on an F-vector space V amounts to choosing an F-linear operator T : V → V (the action of x), giving a polynomial p(x) the meaning of the linear map p(T).
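The F[x]-module viewpoint can be made concrete with a small matrix. A sketch (pure-Python, assuming 2 × 2 integer matrices): let x act as the 90° rotation T, so that T² = −I; then the polynomial x² + 1 acts as the zero map on every vector:

```python
def mat_mul(A, B):
    """Product of 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def poly_apply(coeffs, T, v):
    """Apply p(T) to v, where coeffs = [c0, c1, ...] means c0 + c1*x + ..."""
    P = [[0, 0], [0, 0]]
    power = [[1, 0], [0, 1]]                  # T^0 = I
    for c in coeffs:
        P = [[P[i][j] + c * power[i][j] for j in range(2)] for i in range(2)]
        power = mat_mul(power, T)
    return [sum(P[i][j] * v[j] for j in range(2)) for i in range(2)]

T = [[0, -1], [1, 0]]                         # rotation by 90 degrees, T^2 = -I
print(poly_apply([1, 0, 1], T, [3, 4]))       # [0, 0]: x^2 + 1 annihilates the module
```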
The structure theorem for finitely generated modules over a PID is the central result. If R is a PID and M is a finitely generated R-module, then

M ≅ Rʳ ⊕ R/(d₁) ⊕ R/(d₂) ⊕ ⋯ ⊕ R/(dₖ), with d₁ | d₂ | ⋯ | dₖ,

where r is the rank and the dᵢ are nonzero non-unit elements of R (the invariant factors). Alternatively, M can be written using elementary divisors — the primary decomposition into cyclic modules of prime-power order. For R = Z this recovers the fundamental theorem of finite abelian groups. For R = F[x], applied to the module V with operator T, the invariant factors become the invariant factors of T (determining the rational canonical form) and the elementary divisors become the elementary divisors of T (determining the Jordan normal form). Thus the entire theory of canonical forms of linear operators, seemingly a topic of linear algebra, is a corollary of abstract algebra.
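For R = Z the two forms of the theorem can be converted into each other by elementary number theory. A sketch (hypothetical helpers, not from the source) for the group Z/12 ⊕ Z/18:

```python
from collections import defaultdict

# Elementary divisors: split each Z/n into prime-power cyclic factors.
def elementary_divisors(ns):
    out = []
    for n in ns:
        d = 2
        while n > 1:
            q = 1
            while n % d == 0:       # extract the full power of d dividing n
                n //= d
                q *= d
            if q > 1:
                out.append(q)
            d += 1
    return sorted(out)

# Invariant factors: for each prime, sort its prime powers in decreasing
# order, then multiply across primes position by position.
def invariant_factors(eds):
    by_p = defaultdict(list)
    for q in eds:
        p = min(p for p in range(2, q + 1) if q % p == 0)
        by_p[p].append(q)
    for ps in by_p.values():
        ps.sort(reverse=True)
    k = max(len(v) for v in by_p.values())
    facs = []
    for i in range(k):
        f = 1
        for ps in by_p.values():
            if i < len(ps):
                f *= ps[i]
        facs.append(f)
    return sorted(facs)

print(elementary_divisors([12, 18]))       # [2, 3, 4, 9]
print(invariant_factors([2, 3, 4, 9]))     # [6, 36]: Z/12 + Z/18 = Z/6 + Z/36
```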
A module is free if it has a basis — a linearly independent spanning set — just like a vector space. Over a field, every module (vector space) is free. Over a general ring this fails: Z/nZ as a Z-module is not free (it has torsion). An element m ∈ M is a torsion element if rm = 0 for some nonzero r ∈ R. A module is projective if it is a direct summand of a free module, and injective if every homomorphism into it from a submodule extends to the whole ambient module. These notions, which measure how far a module is from being free, become the foundation of homological algebra.
Homological Algebra
Homological algebra is the machinery for measuring how close exact sequences are to splitting, and more broadly for extracting algebraic invariants from algebraic structures. It emerged from algebraic topology in the 1940s — from the work of Samuel Eilenberg and Saunders Mac Lane — and rapidly became the common language of commutative algebra, algebraic geometry, and representation theory.
A chain complex is a sequence of abelian groups (or modules) Cₙ connected by homomorphisms ∂ₙ : Cₙ → Cₙ₋₁, called boundary maps, satisfying ∂ₙ ∘ ∂ₙ₊₁ = 0, i.e., the image of each map lies in the kernel of the next. The nth homology group of the complex is

Hₙ = ker ∂ₙ / im ∂ₙ₊₁.

The elements of ker ∂ₙ are cycles and those of im ∂ₙ₊₁ are boundaries; the condition ∂ ∘ ∂ = 0 means every boundary is a cycle. Homology measures the extent to which cycles fail to be boundaries — “holes” in the algebraic structure. A complex is exact at position n if ker ∂ₙ = im ∂ₙ₊₁, meaning every cycle is a boundary. A short exact sequence 0 → A → B → C → 0 captures the idea that A embeds into B and C is the quotient — C ≅ B/A.
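Over a field, homology reduces to linear algebra: dim Hₙ = dim ker ∂ₙ − rank ∂ₙ₊₁. A sketch over F₂ for the simplicial circle (three vertices, three edges of a triangle), whose only boundary map is ∂₁; both H₀ and H₁ turn out one-dimensional, detecting one connected component and one loop:

```python
# Rank of a 0/1 matrix over F_2 by Gaussian elimination; rows are bitmasks.
def rank_gf2(rows, width):
    rank = 0
    for col in range(width):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]      # XOR = addition over F_2
        rank += 1
    return rank

# d1 : C1 -> C0; each row is an edge, each bit a vertex.
# Edges (0,1), (1,2), (0,2) of the triangle's boundary.
d1 = [0b011, 0b110, 0b101]
r = rank_gf2(d1[:], 3)
dim_H0 = 3 - r     # C0 / im(d1), since d0 = 0
dim_H1 = 3 - r     # ker(d1) = dim C1 - rank(d1), since d2 = 0
print(dim_H0, dim_H1)   # 1 1
```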
The Snake Lemma and the resulting long exact sequence in homology are the workhorses of the subject: a short exact sequence of chain complexes 0 → A → B → C → 0 gives rise to a long exact sequence

⋯ → Hₙ(A) → Hₙ(B) → Hₙ(C) → Hₙ₋₁(A) → Hₙ₋₁(B) → ⋯,

where the connecting homomorphism δ : Hₙ(C) → Hₙ₋₁(A) encodes how cycles in C can fail to lift to cycles in B.
The derived functors Tor and Ext extract information from projective and injective resolutions. Given modules M and N over a ring R, Tor_n^R(M, N) measures the failure of the tensor product to preserve exactness (i.e., how far M is from being flat), while Ext_R^n(M, N) measures the failure of Hom to preserve exactness (related to extensions of modules). In particular, Ext¹(C, A) classifies extensions 0 → A → B → C → 0 up to equivalence — a beautiful connection between homology and module structure. The projective dimension of M is the length of the shortest projective resolution of M; the global dimension of R is the supremum of the projective dimensions of all R-modules. Serre’s theorem connects global dimension to regularity: a Noetherian local ring is regular (in the geometric sense of having a smooth spectrum) if and only if it has finite global dimension.
Noncommutative Algebra
Most of classical algebra assumes commutativity of multiplication, but many natural objects — matrix rings, group rings, rings of differential operators — are noncommutative. Noncommutative algebra studies these settings and uncovers structure theorems that generalize the commutative case, sometimes with surprising differences.
The Jacobson radical J(R) of a ring R is the intersection of all maximal left ideals — equivalently, the set of elements x such that 1 − rx is a unit for all r ∈ R. The radical measures the “nilpotent” or “non-semisimple” part of the ring: R/J(R) is always semisimple in the sense of having zero Jacobson radical. For finite-dimensional algebras over a field, J(R) is nilpotent: some power J(R)ⁿ = 0.
The Artin-Wedderburn theorem (due to Joseph Wedderburn in 1908 for finite-dimensional algebras and Emil Artin in 1927 in general) is the crowning structure theorem of noncommutative ring theory: a ring R is semisimple (Artinian with J(R) = 0) if and only if

R ≅ M_{n₁}(D₁) × M_{n₂}(D₂) × ⋯ × M_{nₖ}(Dₖ)

for division rings Dᵢ and positive integers nᵢ. Over an algebraically closed field such as C, every finite-dimensional division algebra over the field is the field itself, so the decomposition becomes a product of matrix rings over C. (Wedderburn’s little theorem of 1905 makes the analogous statement in the finite setting: every finite division ring is a field.)
Group rings are the key example. Given a group G and a field K, the group ring K[G] consists of formal linear combinations Σ a_g·g with coefficients a_g ∈ K, multiplied by extending the group multiplication linearly. Understanding K[G] as a ring is precisely the study of representations of G over K. Maschke’s theorem (1898) states that if the characteristic of K does not divide |G|, then K[G] is semisimple; by Artin-Wedderburn, K[G] ≅ M_{n₁}(K) × ⋯ × M_{nₖ}(K) over an algebraically closed field, and the nᵢ are the dimensions of the irreducible representations of G. The number of irreducible representations equals the number of conjugacy classes in G — a beautiful combinatorial fact with profound consequences in character theory.
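Both counts can be checked by brute force for a small group. A sketch for S3: its three conjugacy classes (identity, transpositions, 3-cycles) match its three irreducibles, whose dimensions 1, 1, 2 satisfy 1² + 1² + 2² = 6 = |G|:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations-as-tuples."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
# Conjugacy class of x is {g x g^-1 : g in G}; collect the distinct classes.
classes = {frozenset(compose(compose(g, x), inverse(g)) for g in G) for x in G}
print(len(classes))                     # 3 conjugacy classes
print(1**2 + 1**2 + 2**2 == len(G))     # True: sum of squared dims is |G| = 6
```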
Ore domains and skew fields of fractions extend the commutative notion of a field of fractions to certain noncommutative domains, though the construction requires the Ore condition: for any nonzero a, b ∈ R, there exist a′, b′ ∈ R with aa′ = bb′ ≠ 0 — any two nonzero elements have a common nonzero right multiple. When this condition holds, R embeds into a division ring. The theory of noncommutative localization, developed by Paul Cohn in the 1970s, handles the general case with a more sophisticated universal construction.
Computational Algebra
Abstract algebra is not merely theoretical — many of its concepts admit efficient algorithms that have made algebraic computation a practical reality for solving problems in cryptography, coding theory, and symbolic mathematics.
The cornerstone of computational commutative algebra is the theory of Gröbner bases, introduced by Bruno Buchberger in his 1965 doctoral thesis (and named in honor of his advisor Wolfgang Gröbner). A Gröbner basis for an ideal I ⊆ k[x₁, …, xₙ] is a generating set with a particularly favorable property: the leading monomials of the basis elements generate the same ideal of leading monomials as I does. To define this precisely, one first fixes a monomial ordering — a total order on monomials compatible with multiplication; the two most common are lexicographic order (lex) and graded reverse lexicographic order (grevlex). With a monomial ordering chosen, every polynomial f has a well-defined leading monomial LM(f), and a finite set G = {g₁, …, gₜ} ⊆ I is a Gröbner basis if ⟨LM(g₁), …, LM(gₜ)⟩ = ⟨LM(f) : f ∈ I⟩.
The power of Gröbner bases comes from the multivariate division algorithm: every polynomial f can be reduced to a unique normal form modulo a Gröbner basis G — a remainder none of whose monomials is divisible by any LM(gᵢ). Membership in I is then decidable: f ∈ I if and only if the normal form of f is 0. Moreover, for lex order, Gröbner bases produce a kind of elimination theory: if I ⊆ k[x₁, …, xₙ], then the elimination ideal I ∩ k[x_{j+1}, …, xₙ] is generated by the elements of a lex Gröbner basis that happen to involve only the variables x_{j+1}, …, xₙ — a systematic algebraic analogue of Gaussian elimination that can solve systems of polynomial equations.
Buchberger’s algorithm computes Gröbner bases by iteratively computing and reducing S-polynomials: given two polynomials f and g, the S-polynomial S(f, g) scales each by the smallest monomial that brings both leading terms up to their least common multiple and subtracts, cancelling the leading terms; repeatedly adding nonzero reductions of S-polynomials to the basis until all reductions vanish yields a Gröbner basis. While the worst-case complexity is doubly exponential, practical implementations in computer algebra systems such as Singular, Macaulay2, and SageMath routinely handle problems with dozens of variables and hundreds of polynomials, enabling computational commutative algebra to function as a bridge between abstract theory and explicit computation.
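The division algorithm itself fits in a few lines. A sketch in Q[x, y] with lex order x > y, encoding polynomials as dicts from exponent pairs to Fraction coefficients; the set G = {x − y, y² − 1/2} below is (one can check via Buchberger's criterion) a lex Gröbner basis of the ideal ⟨x² + y² − 1, x − y⟩, so normal forms modulo G decide membership in that ideal:

```python
from fractions import Fraction

def lm(f):
    """Leading monomial: lex order with x > y is just tuple order."""
    return max(f)

def reduce_mod(f, G):
    """Normal form of f modulo G by repeated cancellation of divisible terms."""
    f, result = dict(f), {}
    while f:
        m, c = lm(f), f[lm(f)]
        for g in G:
            gm = lm(g)
            if m[0] >= gm[0] and m[1] >= gm[1]:          # LM(g) divides m
                shift = (m[0] - gm[0], m[1] - gm[1])
                factor = c / g[gm]
                for mono, coeff in g.items():            # subtract factor * shift * g
                    key = (mono[0] + shift[0], mono[1] + shift[1])
                    f[key] = f.get(key, Fraction(0)) - factor * coeff
                    if f[key] == 0:
                        del f[key]
                break
        else:                                            # no LM(g) divides m
            result[m] = c
            del f[m]
    return result

G = [{(1, 0): Fraction(1), (0, 1): Fraction(-1)},        # x - y
     {(0, 2): Fraction(1), (0, 0): Fraction(-1, 2)}]     # y^2 - 1/2
f = {(2, 0): Fraction(1), (0, 2): Fraction(1), (0, 0): Fraction(-1)}  # x^2 + y^2 - 1
print(reduce_mod(f, G))                        # {}: normal form 0, so f is in the ideal
print(reduce_mod({(3, 0): Fraction(1)}, G))    # {(0, 1): Fraction(1, 2)}: NF(x^3) = y/2
```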
Beyond polynomial systems, computational group theory provides algorithms for working with groups given by generators and relations (the word problem) or as permutation groups. The Todd-Coxeter algorithm (1936) enumerates cosets of a subgroup. The Schreier-Sims algorithm (1970s) finds a base and strong generating set for permutation groups of large degree in polynomial time. The BSGS (Baby-Step Giant-Step) algorithm solves the discrete logarithm problem in a cyclic group of order n in O(√n) group operations — a result of foundational importance in public-key cryptography, where the hardness of discrete logarithms in groups such as elliptic curves over finite fields underpins the security of widely deployed protocols.
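Baby-step giant-step trades memory for time: precompute √n "baby" powers of the generator, then take "giant" strides through the target until the two meet. A sketch in (Z/pZ)*, using Python's built-in modular inverse (`pow` with a negative exponent, Python 3.8+):

```python
from math import isqrt

def bsgs(g, h, p, n):
    """Find k with g^k ≡ h (mod p), where g has order n, in O(sqrt(n)) steps."""
    m = isqrt(n) + 1
    baby = {pow(g, j, p): j for j in range(m)}    # baby steps: g^j for j < m
    giant = pow(g, -m, p)                         # g^(-m) mod p
    x = h
    for i in range(m):                            # giant steps: h * g^(-i*m)
        if x in baby:
            return i * m + baby[x]                # g^(i*m + j) = h
        x = x * giant % p
    return None

print(bsgs(3, 5, 7, 6))    # 5, since 3^5 = 243 ≡ 5 (mod 7)
```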