The Miller-Rabin primality test is a primality test: an algorithm
which determines whether a given number is prime,
similar to the Fermat primality test and the Solovay-Strassen primality test. Its original version, due to G. L. Miller, is deterministic, but it relies on the unproven generalized Riemann hypothesis; M. O. Rabin modified it to obtain an unconditional probabilistic algorithm.
Just like with the Fermat and Solovay-Strassen tests, with the Miller-Rabin test we will rely on an equality or set of equalities that hold true for prime values, and then see whether or not they hold for a number that we want to test for primality.
Let n be an odd prime, then we can write n − 1 as 2^s × d, where s is a positive integer and d is odd -- this is the same as repeatedly factoring 2 out of n − 1. Then, for any a coprime to n, one of the following must be true:
- a^d ≡ 1 (mod n), or
- a^(2^r × d) ≡ −1 (mod n) for some r in the range [0, s − 1]
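The decomposition n − 1 = 2^s × d can be computed by repeated division by 2. A minimal sketch in Python (the function name decompose is our own):

```python
def decompose(m):
    """Write m as 2**s * d with d odd, by dividing out factors of 2."""
    s, d = 0, m
    while d % 2 == 0:
        s += 1
        d //= 2
    return s, d
```

For example, decompose(220) returns (2, 55), since 220 = 2^2 × 55.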
To show that one of these must be true, recall Fermat's little theorem, which states that a^(n − 1) ≡ 1 (mod n) for any a coprime to the prime n.
So, if we keep taking square roots of a^(n − 1), we will either get 1 or −1, since these are the only square roots of 1 modulo a prime. If we get −1 then the second equality holds and we are done.
In the case when we've taken out every power of 2 and the second equality never held, we are left with a^d, the quantity in the first equality, which also must be equal to 1 or −1, as it too is a square root of 1. However, if the second equality never held, then in particular it couldn't have held for r = 0, meaning that a^d ≢ −1 (mod n).
Thus in the case the second equality doesn't hold, the first equality must: a^d ≡ 1 (mod n).
The Miller-Rabin primality test is based on the above equalities. If we want to test to see if n is prime, then if
- a^d ≢ 1 (mod n), and
- a^(2^r × d) ≢ −1 (mod n) for all r in the range [0, s − 1]
then a is called a strong witness for the compositeness of n, and n is certainly composite. Otherwise a is called a strong liar and n is a probable prime.
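The witness/liar distinction can be checked directly from the two conditions above. A sketch in Python, assuming odd n > 2 (the function name is ours; pow(a, d, n) is Python's built-in modular exponentiation):

```python
def is_strong_witness(a, n):
    """Return True if base a proves that odd n > 2 is composite."""
    # write n - 1 as 2**s * d with d odd
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    x = pow(a, d, n)          # a^d mod n
    if x == 1 or x == n - 1:  # first equality, or second with r = 0
        return False          # a is a strong liar
    for _ in range(s - 1):    # remaining exponents r = 1 .. s - 1
        x = pow(x, 2, n)      # square to get a^(2^r * d) mod n
        if x == n - 1:
            return False      # second equality holds: strong liar
    return True               # neither equality held: strong witness
```

For instance, 2 is a strong witness for the composite 9, while for the prime 13 every base is a strong liar.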
Algorithm and running time
The algorithm can be written as follows:
- Inputs: n: a value to test for primality; k: a parameter that determines the number of times to test for primality
- Output: composite if n is composite, otherwise probably prime
- write n − 1 as 2^s × d by factoring powers of 2 from n − 1
- repeat k times:
- pick a randomly in the range [1, n − 1]
- if a^d mod n ≠ 1 and a^(2^r × d) mod n ≠ n − 1 for all r in the range [0, s − 1] then return composite
- return probably prime
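The pseudocode above translates almost line for line into Python. A sketch, with the random base drawn from [2, n − 2] to skip the trivial bases, and with small and even inputs handled separately (an assumption not spelled out in the pseudocode):

```python
import random

def miller_rabin(n, k):
    """Return 'composite' (certain) or 'probably prime'
    (wrong with probability at most 4**-k)."""
    if n < 2 or (n > 3 and n % 2 == 0):
        return "composite"
    if n < 4:
        return "probably prime"   # n is 2 or 3
    # write n - 1 as 2**s * d with d odd
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(k):
        a = random.randrange(2, n - 1)  # random base in [2, n - 2]
        x = pow(a, d, n)                # a^d mod n
        if x == 1 or x == n - 1:
            continue                    # a is a liar this round
        for _ in range(s - 1):
            x = pow(x, 2, n)            # a^(2^r * d) mod n
            if x == n - 1:
                break                   # a is a liar this round
        else:
            return "composite"          # a is a strong witness
    return "probably prime"
```

Note that a return value of composite is always correct, while probably prime can (rarely) be wrong; increasing k shrinks that error probability geometrically.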
Using modular exponentiation by repeated squaring, the running time of this algorithm is O(k × log^3 n), where k is the number of different values of a we test. Furthermore, FFT-based multiplication can push the running time down to Õ(k × log^2 n). Thus the algorithm runs in polynomial time and is efficient.
Like all probabilistic primality tests, there are composite values of n for which some bases a are strong liars, causing the test to claim that n is prime when it is actually composite -- such n are known as strong pseudoprimes to the base a.
The more values of a we test, the better the accuracy of the test. It can be shown that for any odd composite n, at least 3/4 of the values of a are strong witnesses for the compositeness of n -- in particular, a strong witness always exists. Moreover, the set of strong liars is a subset of the set of Euler liars (used in the Solovay-Strassen primality test).
In general usage the actual number of witnesses is much greater than the lower bound. For instance, if we test a 1024-bit odd integer n using the lower bound, we would need to test 44 different values of a to guarantee that the chance a given n is declared prime when it is actually composite is less than 2^−80, at which point the value of n could be used safely for cryptographic purposes. However, in practice we generally need to test only 6 different values of a to achieve this probability. Compare this to 90 iterations for Solovay-Strassen.
Assuming the truth of the generalized Riemann hypothesis, one can prove that, if all the values of a up to 2(ln n)^2 have been tested and n is still pronounced a "probable prime", then it is in fact guaranteed to be prime. This leads to a deterministic primality test that has runtime Õ((log n)^4).
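Under that assumption, the probabilistic test becomes deterministic by simply enumerating every base up to the bound. A sketch in Python, taking the logarithm in 2(ln n)^2 to be the natural logarithm (the function name is ours):

```python
import math

def miller_deterministic(n):
    """Deterministic primality test, correct assuming GRH:
    tests every base a up to 2 * (ln n)**2."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # write n - 1 as 2**s * d with d odd
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    # never test bases outside [2, n - 2]
    bound = min(n - 2, int(2 * math.log(n) ** 2))
    for a in range(2, bound + 1):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue                # a is a strong liar
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break               # a is a strong liar
        else:
            return False            # a is a strong witness
    return True
```

If GRH fails, this function could in principle report a composite as prime, but no such counterexample is known.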