In mathematics, polynomial functions, or polynomials, are an important class of simple and smooth functions. Here, simple means they are constructed using only multiplication and addition. Smooth means they are infinitely differentiable, i.e., they have derivatives of all finite orders.
Because of their simple structure, polynomials are very easy to evaluate, and are used extensively in numerical analysis for polynomial interpolation or to numerically integrate more complex functions.
In linear algebra, the characteristic polynomial of a square matrix encodes several important properties of the matrix. In graph theory the chromatic polynomial of a graph encodes the different ways to vertex color the graph using x colors.
With the advent of computers, polynomials have been replaced by splines in many areas in numerical analysis. Splines are piecewise defined polynomials and provide more flexibility than ordinary polynomials when defining simple and smooth functions. They are used in spline interpolation and computer graphics.
Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. Some polynomials, such as f(x) = x² + 1, do not have any roots among the real numbers. If, however, the set of allowed candidates is expanded to the complex numbers, every (non-constant) polynomial has a root: this is the statement of the fundamental theorem of algebra.
There is a difference between approximating roots and finding concrete closed formulas for them. Formulas for the roots of polynomials of degree up to 4 have been known since the 16th century (see quadratic equation, Gerolamo Cardano, Niccolò Fontana Tartaglia). But formulas for degree 5 eluded researchers for a long time. In 1824, Niels Henrik Abel proved the striking result that there can be no general formula (involving only the arithmetical operations and radicals) for the roots of a polynomial of degree 5 or greater in terms of its coefficients (see Abel-Ruffini theorem). This result marked the start of Galois theory, which engages in a detailed study of relations among roots of polynomials.
The difference engine of Charles Babbage was designed to create large tables of values of logarithms and trigonometric functions automatically, by evaluating approximating polynomials at many points using Newton's difference method.
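The idea behind the difference engine can be sketched as follows (function names and the sample polynomial are illustrative): the n-th forward differences of a degree-n polynomial are constant, so once an initial difference table is built, every further value at equidistant points can be produced using only additions.

```python
def difference_table(values, order):
    """Initial forward differences D[k] = Δ^k f(x0) from the first order+1 values."""
    table = [list(values)]
    for _ in range(order):
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[0] for row in table]

def extend(diffs, count):
    """Generate further values using only additions, as the difference engine did."""
    d = list(diffs)
    out = []
    for _ in range(count):
        out.append(d[0])
        for k in range(len(d) - 1):
            d[k] += d[k + 1]
    return out

# f(x) = x^2 at x = 0, 1, 2, ...; degree 2, so the second differences are constant.
start = [0, 1, 4]                   # f(0), f(1), f(2)
diffs = difference_table(start, 2)  # [0, 1, 2]
print(extend(diffs, 6))             # [0, 1, 4, 9, 16, 25]
```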
For given constants (i.e., numbers) a₀, …, aₙ in some field (such as, but not limited to, R or C) with aₙ non-zero, a polynomial of degree n is a function of the form

f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀.

More concisely, the polynomial can be written in sigma notation as

f(x) = ∑ᵢ₌₀ⁿ aᵢxⁱ.
The constants a₀, …, aₙ are called the coefficients of the polynomial. a₀ is called the constant coefficient and aₙ is called the leading coefficient. When the leading coefficient is 1, the polynomial is called monic or normed.
Each summand aᵢxⁱ of the polynomial is called a term. A polynomial with one, two, or three terms is called a monomial, binomial, or trinomial, respectively.
As an example of a polynomial of low degree, f(x) = −7x³ + 3 is a cubic function with leading coefficient −7 and constant coefficient 3.
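On a computer, a polynomial can be represented simply by its list of coefficients and evaluated from the defining sum. A minimal sketch, using a cubic with leading coefficient −7 and constant coefficient 3 (the remaining coefficients are taken as zero for illustration):

```python
def eval_poly(coeffs, x):
    """Evaluate f(x) = sum of a_i * x^i, with coeffs = [a0, a1, ..., an]."""
    return sum(a * x**i for i, a in enumerate(coeffs))

# f(x) = -7x^3 + 3, stored constant-term first
print(eval_poly([3, 0, 0, -7], 2))   # 3 - 7*8 = -53
```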
The polynomials up to degree n form a vector space of dimension n + 1, which is usually called Πₙ. In this article polynomials are written using a monomial basis (i.e. 1, x, x², …, xⁿ), but it should be mentioned that other bases exist, for example the Chebyshev polynomials, which may be preferable depending on the problem domain.
A root or zero of a polynomial f is a number ζ such that f(ζ) = 0. The fundamental theorem of algebra states that a polynomial of degree n over the complex numbers has exactly n complex roots (not necessarily distinct). Therefore a polynomial can be factorized as

f(x) = aₙ(x − ζ₁)(x − ζ₂) ⋯ (x − ζₙ),

where each ζᵢ is a root of the polynomial f.
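Conversely, multiplying out the linear factors recovers the coefficients from the roots. A small sketch (the helper name is illustrative; coefficients are ordered leading-first):

```python
def from_roots(roots, leading=1):
    """Coefficients (leading-first) of leading * product of (x - r) over the roots."""
    coeffs = [leading]
    for r in roots:
        # multiply the current polynomial by (x - r)
        coeffs = coeffs + [0]
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]
    return coeffs

# roots 1 and 2 give (x - 1)(x - 2) = x^2 - 3x + 2
print(from_roots([1, 2]))   # [1, -3, 2]
```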
The Abel-Ruffini theorem in algebra states that generally there is no closed formula to calculate the roots of a polynomial of degree 5 or higher. Closed formula means a formula constructed using only the coefficients of the polynomial and the operations of addition, multiplication and exponentiation (and their inverse operations).
Polynomials and calculus
One important aspect of calculus is the project of analyzing complicated functions by means of approximating them with polynomials. The culmination of these efforts is Taylor's theorem, which roughly states that every differentiable function locally looks like a polynomial, and the Stone-Weierstrass theorem, which states that every continuous function defined on a compact interval of the real axis can be approximated on the whole interval as closely as desired by a polynomial. Polynomials are also frequently used to interpolate functions.
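To illustrate Taylor's theorem concretely, the partial sums of the exponential series give polynomial approximations of eˣ. A minimal sketch (the function name is illustrative) showing the degree-10 Taylor polynomial at 0 approximating e:

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp at 0: sum of x^k / k! for k = 0..n."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# the error at x = 1 is roughly 1/11!, i.e. about 2.5e-8
print(abs(taylor_exp(1.0, 10) - math.e))
```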
Quotients of polynomials are called rational functions. Piecewise rationals are the only functions that can be evaluated directly on a computer, since typically only the operations of addition, multiplication, division and comparison are implemented in hardware. All the other functions that computers need to evaluate, such as trigonometric functions, logarithms and exponential functions, must then be approximated in software by suitable piecewise rational functions.
Evaluation of polynomials
The fast and numerically stable evaluation of a polynomial for a given x is a very important topic in numerical analysis. Several different algorithms have been developed for this problem. Which algorithm is used for a given polynomial depends on the form of the polynomial and the chosen x.
To evaluate a polynomial in monomial form one can use the Horner scheme. For a polynomial in Chebyshev form the Clenshaw algorithm can be used. If the polynomial has to be evaluated at many equidistant points, Newton's difference method is used.
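The Horner scheme rewrites the monomial form as nested multiplications, f(x) = (…(aₙx + aₙ₋₁)x + …)x + a₀, so that a degree-n polynomial costs only n multiplications and n additions. A minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial in monomial form; coeffs ordered leading-first."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# f(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([2, -6, 2, -1], 3))   # 5
```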
As there is no general closed formula for the roots of a polynomial of degree 5 or higher, root-finding algorithms are used in numerical analysis to approximate them. Approximations for the real roots of a given polynomial can be found using Newton's method, or more efficiently using Laguerre's method, which employs complex arithmetic and can locate all complex roots.
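Newton's method iterates xₖ₊₁ = xₖ − f(xₖ)/f′(xₖ) from a starting guess. A minimal sketch for a real root of a polynomial (the tolerance, iteration cap, and starting point are illustrative choices):

```python
def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Approximate a real root of a polynomial (coeffs leading-first) by Newton's method."""
    n = len(coeffs) - 1
    deriv = [a * (n - i) for i, a in enumerate(coeffs[:-1])]  # coefficients of f'

    def ev(cs, x):
        r = 0.0
        for a in cs:
            r = r * x + a   # Horner evaluation
        return r

    x = x0
    for _ in range(max_iter):
        fpx = ev(deriv, x)
        if fpx == 0:
            break           # flat spot: the iteration cannot proceed
        step = ev(coeffs, x) / fpx
        x -= step
        if abs(step) < tol:
            break
    return x

# root of x^2 - 2 near 1.5: converges to sqrt(2)
print(newton_root([1, 0, -2], 1.5))
```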
In multivariate calculus, polynomials in several variables play an important role. These are the simplest multivariate functions and can be defined using addition and multiplication alone. An example of a polynomial in the variables x, y, and z is

f(x, y, z) = 2x²yz³ − 3y² + 5yz.
The total degree of such a multivariate polynomial is obtained by adding the exponents of the variables in each term and taking the maximum over all terms. The above polynomial f(x, y, z) has total degree 6.
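One convenient machine representation of a multivariate polynomial maps exponent tuples to coefficients, from which the total degree can be read off directly. A sketch (the sample polynomial is illustrative, chosen to have total degree 6):

```python
# Represent a polynomial in x, y, z as {(i, j, k): coefficient} for the term c * x^i y^j z^k.
# Illustrative example: f(x, y, z) = 2x^2*y*z^3 - 3y^2 + 5yz
f = {(2, 1, 3): 2, (0, 2, 0): -3, (0, 1, 1): 5}

def total_degree(poly):
    """Maximum over all terms of the sum of the exponents."""
    return max(sum(expts) for expts in poly)

print(total_degree(f))   # 6
```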
In abstract algebra, one must take care to distinguish between polynomials and polynomial functions. A polynomial f is defined to be a formal expression of the form

f = aₙXⁿ + aₙ₋₁Xⁿ⁻¹ + … + a₁X + a₀,

where the coefficients a₀, …, aₙ are elements of some ring R and X is considered to be a formal symbol. Two polynomials are considered to be equal if and only if the sequences of their coefficients are equal. Polynomials with coefficients in R can be added by simply adding corresponding coefficients and multiplied using the distributive law and the rules

- aX = Xa for all elements a of the ring R,
- XᵏXˡ = Xᵏ⁺ˡ for all natural numbers k and l.
One can then check that the set of all polynomials with coefficients in the ring R forms itself a ring, the ring of polynomials over R, which is denoted by R[X]. If R is commutative, then R[X] is an algebra over R.
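The ring operations on coefficient sequences can be sketched directly: addition is coefficient-wise, and multiplication is forced by the distributive law together with XᵏXˡ = Xᵏ⁺ˡ, which makes it a convolution of the coefficient sequences. A sketch (coefficients ordered constant-term first):

```python
def poly_add(f, g):
    """Add coefficient sequences (constant term first), as in R[X]."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    """Multiply using the distributive law and X^k * X^l = X^(k+l)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# (1 + X) * (1 - X) = 1 - X^2
print(poly_mul([1, 1], [1, -1]))   # [1, 0, -1]
```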
One can think of the ring R[X] as arising from R by adding one new element X to R and only requiring that X commute with all elements of R. In order for R[X] to form a ring, all sums of powers of X have to be included as well. Formation of the polynomial ring, together with forming factor rings by factoring out ideals, are important tools for constructing new rings out of known ones. For instance, the clean construction of finite fields involves the use of those operations, starting out with the field of integers modulo some prime number as the coefficient ring R (see modular arithmetic).
To every polynomial f in R[X], one can associate a polynomial function with domain and range equal to R. One obtains the value of this function for a given argument r by everywhere replacing the symbol X in f's expression by r. The reason that algebraists have to distinguish between polynomials and polynomial functions is that over some rings R (for instance, over finite fields), two different polynomials may give rise to the same polynomial function. This is not the case over the real or complex numbers and therefore analysts don't separate the two concepts.
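A small sketch of this phenomenon over the finite field Z/2Z: the nonzero polynomial X + X² induces the same function as the zero polynomial, since r² = r for both elements of the field.

```python
p = 2  # work over the finite field Z/2Z

def poly_func(coeffs, r, p):
    """Value at r of the polynomial function, coefficients constant-term first, mod p."""
    return sum(a * r**i for i, a in enumerate(coeffs)) % p

f = [0, 1, 1]   # X + X^2, a nonzero polynomial
g = [0]         # the zero polynomial
# As functions on Z/2Z they agree at every point, though f != g as polynomials.
print([poly_func(f, r, p) for r in range(p)])   # [0, 0]
print([poly_func(g, r, p) for r in range(p)])   # [0, 0]
```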
In commutative algebra, one major focus of study is divisibility among polynomials. If R is an integral domain and f and g are polynomials in R[X], it is said that f divides g if there exists a polynomial q in R[X] such that f q = g. One can then show that "every zero gives rise to a linear factor", or more formally: if f is a polynomial in R[X] and r is an element of R such that f(r) = 0, then the polynomial (X − r) divides f. The converse is also true. The quotient can be computed using the Horner scheme.
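The quotient by a linear factor (X − r) indeed falls out of the Horner scheme: the intermediate values of the evaluation at r are exactly the quotient's coefficients, and the final value is the remainder f(r). A sketch:

```python
def divide_linear(coeffs, r):
    """Divide a polynomial (coeffs leading-first) by (X - r) via the Horner scheme.
    Returns (quotient_coeffs, remainder); the remainder is 0 iff r is a root."""
    partials = []
    acc = 0
    for a in coeffs:
        acc = acc * r + a
        partials.append(acc)
    return partials[:-1], partials[-1]

# f = X^2 - 3X + 2 has root 1, so (X - 1) divides it; the quotient is X - 2
print(divide_linear([1, -3, 2], 1))   # ([1, -2], 0)
```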
If F is a field and f and g are polynomials in F[X] with g ≠ 0, then there exist polynomials q and r in F[X] with

f = q g + r

and such that the degree of r is smaller than the degree of g. The polynomials q and r are uniquely determined by f and g. This is called "division with remainder" or "polynomial long division" and shows that the ring F[X] is a Euclidean domain.
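Polynomial long division can be sketched as repeated cancellation of the leading term (float coefficients stand in for a general field; the function name is illustrative):

```python
def poly_divmod(f, g):
    """Long division in F[X]: return (q, r) with f = q*g + r and deg r < deg g.
    Coefficient lists are ordered leading-first."""
    f = list(f)
    q = [0.0] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g) and any(f):
        factor = f[0] / g[0]              # cancel the current leading term
        shift = len(f) - len(g)           # degree of this quotient term
        q[len(q) - 1 - shift] = factor
        for i, b in enumerate(g):
            f[i] -= factor * b
        f.pop(0)                          # leading coefficient is now zero
    return q, f

# (X^3 - 1) / (X - 1) = X^2 + X + 1, remainder 0
q, r = poly_divmod([1.0, 0.0, 0.0, -1.0], [1.0, -1.0])
print(q)   # [1.0, 1.0, 1.0]
```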
Analogously, polynomial "primes" (more correctly, irreducible polynomials) can be defined: polynomials that cannot be factorized into the product of two polynomials of lesser degree. It is not easy to determine whether a given polynomial is irreducible. One can start by checking for linear factors, and then test divisibility by other irreducible polynomials. In some cases Eisenstein's criterion can be used to establish irreducibility directly.
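Eisenstein's criterion itself is mechanical to check: for a prime p, an integer polynomial is irreducible over the rationals if p does not divide the leading coefficient, p divides every other coefficient, and p² does not divide the constant coefficient. A sketch (the function name is illustrative; the caller supplies the prime):

```python
def eisenstein(coeffs, p):
    """Check Eisenstein's criterion at prime p; coeffs ordered leading-first."""
    leading, rest, constant = coeffs[0], coeffs[1:], coeffs[-1]
    return (leading % p != 0
            and all(a % p == 0 for a in rest)
            and constant % (p * p) != 0)

# X^3 + 6X + 3 is irreducible over Q by Eisenstein at p = 3
print(eisenstein([1, 0, 6, 3], 3))   # True
```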
One also speaks of polynomials in several variables, obtained by taking the ring of polynomials of a ring of polynomials: R[X,Y] = (R[X])[Y] = (R[Y])[X]. These are of fundamental importance in algebraic geometry which studies the simultaneous zero sets of several such multivariate polynomials.
Polynomials are frequently used to encode information about some other object. The characteristic polynomial of a matrix or linear operator contains information about the operator's eigenvalues. The minimal polynomial of an algebraic element records the simplest algebraic relation satisfied by that element.
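For a 2×2 matrix the characteristic polynomial can be written down directly as X² − trace(M)·X + det(M). A small illustrative sketch:

```python
def char_poly_2x2(m):
    """Characteristic polynomial det(X*I - M) of a 2x2 matrix:
    X^2 - trace(M)*X + det(M), coefficients ordered leading-first."""
    (a, b), (c, d) = m
    return [1, -(a + d), a * d - b * c]

# The eigenvalues of [[2, 1], [1, 2]] are the roots of X^2 - 4X + 3, i.e. 1 and 3.
print(char_poly_2x2([[2, 1], [1, 2]]))   # [1, -4, 3]
```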
Other related objects studied in abstract algebra are formal power series, which are like polynomials but may have infinite degree, and the rational functions, which are ratios of polynomials.