2.5: The Riemann Hypothesis

Definition 2.19

The Riemann zeta function \(\zeta(z)\) is a complex function defined as follows on \(\{z \in \mathbb{C} \mid \mathrm{Re}\, z > 1\}\):

\[\zeta(z) = \sum_{n=1}^{\infty} n^{-z}\]

On other values of \(z \in \mathbb{C}\) it is defined by the analytic continuation of this function (except at \(z = 1\), where it has a simple pole).

Analytic continuation is akin to replacing \(e^x\), where \(x\) is real, by \(e^z\), where \(z\) is complex. Another example is the series \(\sum_{j=0}^{\infty} z^j\). This series converges for \(|z| < 1\) and diverges for \(|z| \geq 1\). But as an analytic function, it can be replaced by \((1-z)^{-1}\) on all of \(\mathbb{C}\) except at the pole \(z = 1\), where it diverges.
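This example can be checked numerically: inside the unit disk the partial sums of \(\sum_{j} z^j\) approach \((1-z)^{-1}\), while \((1-z)^{-1}\) itself remains well defined at points such as \(z = 2\), where the series diverges. A minimal sketch in Python:

```python
# Compare partial sums of the geometric series with its analytic
# continuation 1/(1 - z) at a point inside the unit disk.
def geometric_partial_sum(z, terms):
    """Partial sum of sum_{j=0}^{terms-1} z^j."""
    total, power = 0j, 1 + 0j
    for _ in range(terms):
        total += power
        power *= z
    return total

z = 0.5 + 0.3j                    # inside the unit disk: the series converges
series = geometric_partial_sum(z, 200)
closed_form = 1 / (1 - z)         # the analytic continuation
print(abs(series - closed_form))  # tiny: the two agree where both are defined

# Outside the disk the series diverges, but 1/(1 - z) still makes sense:
print(1 / (1 - 2))                # z = 2: the continuation gives -1.0
```

The point of the uniqueness discussion below is exactly this: \((1-z)^{-1}\) is the only analytic function that can extend the series beyond the disk.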

Recall that an analytic function is a function that is complex differentiable. Equivalently, it is a function that is locally given by a convergent power series. If \(f\) and \(g\) are two analytic continuations to a region \(U\) of a function \(h\) given on a region \(V \subset U\), then the difference \(f-g\) is zero on \(V\); therefore all its power series expansions are zero, and so it must be zero on the entire region \(U\). Hence analytic continuations are unique. That is the reason they are meaningful. For more details, see for example [4, 14].

It is customary to denote the argument of the zeta function by \(s\). We will do so from here on out. Note that \(|n^{-s}| = n^{-\mathrm{Re}\, s}\), and so for \(\mathrm{Re}\, s > 1\) the series is absolutely convergent. At this point, the student should remember – or look up in [23] – the fact that absolutely convergent series can be rearranged arbitrarily without changing the sum. This leads to the following proposition.

Proposition 2.20

For \(\mathrm{Re}\, s > 1\) we have

\[\sum_{n=1}^{\infty} n^{-s} = \prod_{p \text{ prime}} (1-p^{-s})^{-1}\]

There are two common proofs of this formula. It is worth presenting both.


The first proof uses the Fundamental Theorem of Arithmetic. First, we use the geometric series

\[(1-p^{-s})^{-1} = \sum_{k=0}^{\infty} p^{-ks}\]

to rewrite the right-hand side of the Euler product. This gives

\[\prod_{p \text{ prime}} (1-p^{-s})^{-1} = \left(\sum_{k_1=0}^{\infty} p_{1}^{-k_{1}s}\right) \left(\sum_{k_2=0}^{\infty} p_{2}^{-k_{2}s}\right) \left(\sum_{k_3=0}^{\infty} p_{3}^{-k_{3}s}\right) \cdots\]

Re-arranging terms yields

\[\cdots = \sum_{(k_1, k_2, k_3, \dots)} \left(p_{1}^{k_{1}} p_{2}^{k_{2}} p_{3}^{k_{3}} \cdots\right)^{-s}\]

By the Fundamental Theorem of Arithmetic, the expression \(p_{1}^{k_{1}} p_{2}^{k_{2}} p_{3}^{k_{3}} \cdots\) runs through all positive integers exactly once. Thus upon re-arranging again we obtain the left-hand side of Euler's formula.

The second proof, the one that Euler used, employs a sieve method. This time, we start with the left-hand side of the Euler product. If we multiply \(\zeta(s)\) by \(2^{-s}\), we get back precisely the terms with \(n\) even. So

\[(1-2^{-s})\, \zeta(s) = 1 + 3^{-s} + 5^{-s} + \cdots = \sum_{2 \nmid n} n^{-s}\]

Subsequently we multiply this expression by \((1-3^{-s})\). This has the effect of removing the remaining terms where \(n\) is a multiple of \(3\). It follows that eventually

\[(1-p_{l}^{-s}) \cdots (1-p_{1}^{-s})\, \zeta(s) = \sum_{p_{1} \nmid n,\, \dots,\, p_{l} \nmid n} n^{-s}\]

The argument used in Eratosthenes' sieve (Section 1.1) now serves to show that on the right-hand side of the last equation all terms other than \(1\) disappear as \(l\) tends to infinity. Therefore the left-hand side tends to \(1\), which implies the proposition.
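Both proofs can be illustrated numerically: truncating the series and the product at modest bounds already shows the two sides of the proposition agreeing. A sketch in Python for \(s = 2\) (the truncation bounds are arbitrary choices):

```python
# Numerically compare the truncated Dirichlet series for zeta(s)
# with the truncated Euler product, for s = 2.
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

s = 2.0
zeta_sum = sum(n ** -s for n in range(1, 100001))

euler_product = 1.0
for p in primes_up_to(1000):
    euler_product *= 1 / (1 - p ** -s)

print(zeta_sum, euler_product)  # both close to pi^2/6 ≈ 1.64493
```

For \(s = 2\) the common limit is \(\zeta(2) = \pi^2/6\), computed in Section 2.3.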

The most important theorem concerning primes is probably the following, which we state without proof.

Figure 3. On the left, the function \(\int_{2}^{x} \frac{dt}{\ln t}\) in blue, \(\pi(x)\) in red, and \(x/\ln x\) in green. On the right, we have \(\int_{2}^{x} \frac{dt}{\ln t} - x/\ln x\) in blue, \(\pi(x) - x/\ln x\) in red.

Theorem 2.21 (Prime Number Theorem)

Let \(\pi(x)\) denote the prime counting function, that is: the number of primes less than or equal to \(x\), for \(x > 2\).


  1. \(\lim_{x \rightarrow \infty} \frac{\pi(x)}{x/\ln x} = 1\) and
  2. \(\lim_{x \rightarrow \infty} \frac{\pi(x)}{\int_{2}^{x} dt/\ln t} = 1\)

where \(\ln\) is the natural logarithm.

The first estimate is the easier one to prove, the second is the more accurate one. In Figure 3 on the left, we plotted, for \(x \in [2, 1000]\), from top to bottom the functions \(\int_{2}^{x} \frac{dt}{\ln t}\) in blue, \(\pi(x)\) in red, and \(x/\ln x\) in green. In the right-hand figure, we enlarge the domain to \(x \in [2, 100000]\) and plot the difference of these functions with \(x/\ln x\). It now becomes clear that \(\int_{2}^{x} \frac{dt}{\ln t}\) is indeed a much better approximation of \(\pi(x)\). From this figure one may be tempted to conclude that \(\int_{2}^{x} \frac{dt}{\ln t} - \pi(x)\) is always greater than or equal to zero. This, however, is false. It is known that there are infinitely many \(x\) for which \(\int_{2}^{x} \frac{dt}{\ln t} - \pi(x) < 0\). The first such \(x\) is called the Skewes number. Not much is known about this number, except that it is less than \(10^{317}\).
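The comparison made in Figure 3 is easy to reproduce. The sketch below counts primes with a sieve and approximates \(\int_2^x dt/\ln t\) with a simple trapezoidal rule; the step count is an arbitrary accuracy choice.

```python
import math

def prime_pi(x):
    """pi(x) via the sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sum(sieve)

def li_from_2(x, steps=100000):
    """Trapezoidal approximation of the integral of dt/ln(t) from 2 to x."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    total += sum(1 / math.log(2 + i * h) for i in range(1, steps))
    return total * h

x = 10000
print(prime_pi(x))      # 1229
print(x / math.log(x))  # ≈ 1085.7: undercounts
print(li_from_2(x))     # ≈ 1245.1: much closer to 1229
```

Already at \(x = 10^4\) the logarithmic integral is visibly the better approximation, in line with the figure.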

Perhaps the most important open problem in all of mathematics is the following. It concerns the analytic continuation of \(\zeta(s)\) given above.

Conjecture 2.22 (Riemann Hypothesis)

All non-real zeros of \(\zeta(s)\) lie on the line \(\mathrm{Re}\, s = \frac{1}{2}\).

In his only paper on number theory [20], Riemann realized that the hypothesis enabled him to describe detailed properties of the distribution of primes in terms of the location of the non-real zeros of \(\zeta(s)\). This completely unexpected connection between such disparate fields – analytic functions and primes in \(\mathbb{N}\) – spoke to the imagination and led to an enormous interest in the subject. In further research, it has been shown that the hypothesis is also related to other areas of mathematics, such as, for example, the spacings between eigenvalues of random Hermitian matrices [2], and even physics [5, 6].

References for the Riemann Hypothesis giving the best bound for the Prime Number Theorem

Which books cover the proof that Riemann Hypothesis is equivalent to the best error bound for the Prime Number Theorem?

My understanding is that the Riemann Hypothesis is equivalent to the best error bound in the prime number theorem. Von Koch (1901) proved that the Riemann hypothesis is equivalent to the "best possible" bound for the error of the prime number theorem, but Koch's paper is in German, which I cannot read.

Can anyone recommend books or English articles which cover this proof?

Also, did Schoenfeld give an improved version of this argument?

The Riemann Hypothesis

Recipient of the Mathematical Association of America's Beckenbach Book Prize in 2018!

The Riemann hypothesis concerns the prime numbers 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, … Ubiquitous and fundamental in mathematics as they are, it is important and interesting to know as much as possible about these numbers. Simple questions would be: how are the prime numbers distributed among the positive integers? What is the number of prime numbers of 100 digits? Of 1,000 digits? These questions were the starting point of a groundbreaking paper by Bernhard Riemann written in 1859. As an aside in his article, Riemann formulated his now famous hypothesis that so far no one has come close to proving: All nontrivial zeroes of the zeta function lie on the critical line. Hidden behind this at first mysterious phrase lies a whole mathematical universe of prime numbers, infinite sequences, infinite products, and complex functions.

The present book is a first exploration of this fascinating, unknown world. It originated from an online course for mathematically talented secondary school students organized by the authors of this book at the University of Amsterdam. Its aim was to bring the students into contact with challenging university level mathematics and show them what the Riemann Hypothesis is all about and why it is such an important problem in mathematics.

The Riemann Hypothesis

The Basic Library List Committee suggests that undergraduate mathematics libraries consider this book for acquisition.

The Riemann Hypothesis is one of the hardest and most famous problems in mathematics. Its original formulation, which comes from the theory of complex functions, asserts that all non-real zeros of the Riemann zeta function have real part equal to one-half. Because of its technical formulation, it is not easy to talk about the Riemann Hypothesis without assuming knowledge of the complex function theory, but we can exploit its connections to other branches of mathematics. One of the most important is the light it sheds on the distribution of prime numbers. And there are also some elementary conjectures that turn out to be equivalent to the Riemann Hypothesis.

The book under review, which seems to be one of the first books at this level about the Riemann Hypothesis, is aimed at high school and undergraduate students. It focuses mainly on the so-called “explicit formula,” which connects the distribution of the zeros of the Riemann zeta function with the distribution of the prime numbers. The authors set the stage by explaining basic facts about complex numbers and functions, introducing the zeta function and its product formula. They then conduct numerical experiments on the explicit formula, reporting them in several figures.

The book consists of four chapters and four appendices. Each chapter ends with some exercises. The authors provide computer programming code for exercises with computational flavor and full solutions for the rest of them.

The book will be useful for students and teachers to become familiar with the Riemann Hypothesis. It may also be used as a text in a mini-course. The interested reader will not, however, find here all that could be said about the Riemann Hypothesis at this level. I believe that there would have been room in the present book for some related topics, including the many elementary statements known to be equivalent to the Riemann hypothesis, such as an inequality involving the sum-of-divisors function and harmonic numbers.

Mehdi Hassani is a faculty member at the Department of Mathematics, Zanjan University, Iran. His fields of interest are Elementary, Analytic and Probabilistic Number Theory.

1. Prime Numbers
1.1 Primes as elementary building blocks
1.2 Counting Primes
1.3 Using the logarithm to count powers
1.4 Approximations for
1.5 The prime number theorem
1.6 Counting prime powers logarithmically
1.7 The Riemann hypothesis — a look ahead
1.8 Additional exercises

2. The zeta function
2.1 Infinite sums
2.2 Series for well-known functions
2.3 Computation of \(\zeta(2)\)
2.4 Euler's product formula
2.5 Looking back and a glimpse of what is to come
2.6 Additional exercises

3. The Riemann hypothesis
3.1 Euler's discovery of the product formula
3.2 Extending the domain of the zeta function
3.3 A crash course on complex numbers
3.4 Complex functions and powers
3.5 The complex zeta function
3.6 The zeroes of the zeta function
3.7 The hunt for zeta zeroes
3.8 Additional exercises

4. Primes and the Riemann hypothesis
4.1 Riemann's functional equation
4.2 The zeroes of the zeta function
4.3 The explicit formula for \(\psi(x)\)
4.4 Pairing up the non-trivial zeroes
4.5 The prime number theorem
4.6 A proof of the prime number theorem
4.7 The music of the primes
4.8 Looking back
4.9 Additional exercises

Appendix A. Why big primes are useful
Appendix B. Computer support
Appendix C. Further reading and internet surfing
Appendix D. Solutions to the exercises

Here's why we care about attempts to prove the Riemann hypothesis

A famed mathematical enigma is once again in the spotlight.


The Riemann hypothesis, posited in 1859 by German mathematician Bernhard Riemann, is one of the biggest unsolved puzzles in mathematics. The hypothesis, which could unlock the mysteries of prime numbers, has never been proved. But mathematicians are buzzing about a new attempt.

Esteemed mathematician Michael Atiyah took a crack at proving the hypothesis in a lecture at the Heidelberg Laureate Forum in Germany on September 24. Despite the stature of Atiyah — who has won the two most prestigious honors in mathematics, the Fields Medal and the Abel Prize — many researchers have expressed skepticism about the proof. So the Riemann hypothesis remains up for grabs.

Let's break down what the Riemann hypothesis is, and what a confirmed proof — if one is ever found — would mean for mathematics.

What is the Riemann hypothesis?

The Riemann hypothesis is a statement about a mathematical curiosity known as the Riemann zeta function. That function is closely entwined with prime numbers — whole numbers that are evenly divisible only by 1 and themselves. Prime numbers are mysterious: They are scattered in an inscrutable pattern across the number line, making it difficult to predict where each prime number will fall (SN Online: 4/2/08).

But if the Riemann zeta function meets a certain condition, Riemann realized, it would reveal secrets of the prime numbers, such as how many primes exist below a given number. That required condition is the Riemann hypothesis. It conjectures that certain zeros of the function — the points where the function's value equals zero — all lie along a particular line when plotted (SN: 9/27/08, p. 14). If the hypothesis is confirmed, it could help expose a method to the primes' madness.

Why is it so important?

Prime numbers are mathematical VIPs: Like atoms of the periodic table, they are the building blocks for larger numbers. Primes matter for practical purposes, too, as they are important for securing encrypted transmissions sent over the internet. And importantly, a multitude of mathematical papers take the Riemann hypothesis as a given. If this foundational assumption were proved correct, “many results that are believed to be true will be known to be true,” says mathematician Ken Ono of Emory University in Atlanta. “It's a kind of mathematical oracle.”

Haven't people tried to prove this before?

Yep. It's difficult to count the number of attempts, but probably hundreds of researchers have tried their hands at a proof. So far none of the proofs have stood up to scrutiny. The problem is so stubborn that it now has a bounty on its head: The Clay Mathematics Institute has offered up $1 million to anyone who can prove the Riemann hypothesis.

Why is it so difficult to prove?

The Riemann zeta function is a difficult beast to work with. Even defining it is a challenge, Ono says. Furthermore, the function has an infinite number of zeros. If any one of those zeros is not on its expected line, the Riemann hypothesis is wrong. And since there are infinite zeros, manually checking each one won't work. Instead, a proof must show without a doubt that no zero can be an outlier. For difficult mathematical quandaries like the Riemann hypothesis, the bar for acceptance of a proof is extremely high. Verification of such a proof typically requires months or even years of double-checking by other mathematicians before either everyone is convinced, or the proof is deemed flawed.

What will it take to prove the Riemann hypothesis?

Various mathematicians have made some amount of headway toward a proof. Ono likens it to attempting to climb Mount Everest and making it to base camp. While some clever mathematician may eventually be able to finish that climb, Ono says, “there is this belief that the ultimate proof … if one ever is made, will require a different level of mathematics.”

Editorial Reviews


'Throughout the book careful proofs are given for all the results discussed, introducing an impressive range of mathematical tools. Indeed, the main achievement of the work is the way in which it demonstrates how all these diverse subject areas can be brought to bear on the Riemann hypothesis. The exposition is accessible to strong undergraduates, but even specialists will find material here to interest them.' D. R. Heath-Brown, Mathematical Reviews

'This two volume catalogue of many of the various equivalents of the Riemann Hypothesis by Kevin Broughan is a valuable addition to the literature … all in all these two volumes are a must have for anyone interested in the Riemann Hypothesis.' Steven Decke, MAA Reviews

‘The two volumes are a very valuable resource and a fascinating read about a most intriguing problem.' R.S. MacKay, London Mathematical Society Newsletter

‘All in all these books serve as a good introduction to a wide range of mathematics related to the Riemann Hypothesis and make for a valuable contribution to the literature. They are truly encyclopedic and I am sure will entice many a reader to consult some literature quoted and who knows, eventually make an own contribution to the area.’ Pieter Moree, Nieuw Archief voor Wiskunde

‘This book may serve as reference for the Riemann hypothesis and its equivalent formulations or as an inspiration for everyone interested in number theory. It is written in a very readable style and for most parts only assumes basic knowledge from (complex analysis). Thus it may also serve as a (somewhat specific) introduction to analytic number theory.’ J. Mahnkopf, Encyclopedia of Mathematics and its Applications


The Riemann zeta function is defined for complex s with real part greater than 1 by the absolutely convergent infinite series

\[\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}.\]

Leonhard Euler already considered this series in the 1730s for real values of s, in conjunction with his solution to the Basel problem. He also proved that it equals the Euler product

\[\zeta(s) = \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}},\]

where the infinite product extends over all prime numbers p. [2]

The Riemann hypothesis discusses zeros outside the region of convergence of this series and Euler product. To make sense of the hypothesis, it is necessary to analytically continue the function to obtain a form that is valid for all complex s. This is permissible because the zeta function is meromorphic, so its analytic continuation is guaranteed to be unique, and the functional forms obtained are equivalent over their common domains. One begins by showing that the zeta function and the Dirichlet eta function satisfy the relation

\[\left(1 - \frac{2}{2^s}\right)\zeta(s) = \eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^s}.\]

But the series on the right converges not just when the real part of s is greater than one, but more generally whenever s has positive real part. Thus, this alternative series extends the zeta function from Re(s) > 1 to the larger domain Re(s) > 0, excluding the zeros \(s = 1 + 2\pi i n/\log 2\) of \(1 - 2/2^s\), where n is any nonzero integer (see Dirichlet eta function). The zeta function can be extended to these values too by taking limits, giving a finite value for all values of s with positive real part except for the simple pole at s = 1.
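This extension via the alternating series is concrete enough to compute with. The sketch below evaluates \(\zeta(1/2)\), well outside the half-plane of convergence of the original series, by summing the alternating series and dividing by \(1 - 2/2^s\); averaging two consecutive partial sums is a standard acceleration trick for alternating series, not part of the definition.

```python
# Evaluate zeta(s) for 0 < Re(s) < 1 via the Dirichlet eta series:
#   zeta(s) = eta(s) / (1 - 2^(1-s)),  eta(s) = sum (-1)^(n+1) / n^s.
def zeta_via_eta(s, terms=100000):
    partial = 0.0
    for n in range(1, terms + 1):
        partial += (-1) ** (n + 1) / n ** s
    # Averaging the last two partial sums cancels most of the
    # truncation error of a slowly converging alternating series.
    prev = partial - (-1) ** (terms + 1) / terms ** s
    eta = 0.5 * (partial + prev)
    return eta / (1 - 2 ** (1 - s))

print(zeta_via_eta(0.5))  # ≈ -1.4603545, the known value of zeta(1/2)
```

The same recipe with complex exponents is how the first numerical hunts for zeros on the critical line were carried out.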

In the strip 0 < Re(s) < 1 the zeta function satisfies the functional equation

\[\zeta(s) = 2^s \pi^{s-1} \sin\!\left(\frac{\pi s}{2}\right) \Gamma(1-s)\, \zeta(1-s).\]

One may then define ζ(s) for all remaining nonzero complex numbers s ( Re(s) ≤ 0 and s ≠ 0) by applying this equation outside the strip, and letting ζ(s) equal the right-hand side of the equation whenever s has non-positive real part (and s ≠ 0).

If s is a negative even integer then ζ(s) = 0 because the factor sin(πs/2) vanishes; these are the trivial zeros of the zeta function. (If s is a positive even integer this argument does not apply because the zeros of the sine function are cancelled by the poles of the gamma function as it takes negative integer arguments.)

The value ζ(0) = −1/2 is not determined by the functional equation, but is the limiting value of ζ(s) as s approaches zero. The functional equation also implies that the zeta function has no zeros with negative real part other than the trivial zeros, so all non-trivial zeros lie in the critical strip where s has real part between 0 and 1.

… es ist sehr wahrscheinlich, dass alle Wurzeln reell sind. Hiervon wäre allerdings ein strenger Beweis zu wünschen; ich habe indess die Aufsuchung desselben nach einigen flüchtigen vergeblichen Versuchen vorläufig bei Seite gelassen, da er für den nächsten Zweck meiner Untersuchung entbehrlich schien.

… it is very probable that all roots are real. Of course one would wish for a rigorous proof here; I have for the time being, after some fleeting vain attempts, provisionally put aside the search for this, as it appears dispensable for the immediate objective of my investigation.

Riemann's original motivation for studying the zeta function and its zeros was their occurrence in his explicit formula for the number of primes π(x) less than or equal to a given number x, which he published in his 1859 paper "On the Number of Primes Less Than a Given Magnitude". His formula was given in terms of the related function

\[\Pi(x) = \sum_{p^n \le x} \frac{1}{n},\]

which counts the primes and prime powers up to x, counting a prime power \(p^n\) as \(1/n\). The number of primes can be recovered from this function by using the Möbius inversion formula,

\[\pi(x) = \sum_{n=1}^{\infty} \frac{\mu(n)}{n}\, \Pi\!\left(x^{1/n}\right),\]

where μ is the Möbius function. Riemann's formula is then

\[\Pi_0(x) = \operatorname{li}(x) - \sum_{\rho} \operatorname{li}(x^{\rho}) - \log 2 + \int_x^{\infty} \frac{dt}{t(t^2-1)\log t},\]

where the sum is over the nontrivial zeros of the zeta function and where Π0 is a slightly modified version of Π that replaces its value at its points of discontinuity by the average of its upper and lower limits:

The summation in Riemann's formula is not absolutely convergent, but may be evaluated by taking the zeros ρ in order of the absolute value of their imaginary part. The function li occurring in the first term is the (unoffset) logarithmic integral function given by the Cauchy principal value of the divergent integral

\[\operatorname{li}(x) = \int_0^x \frac{dt}{\log t}.\]

The terms \(\operatorname{li}(x^{\rho})\) involving the zeros of the zeta function need some care in their definition, as li has branch points at 0 and 1; they are defined (for x > 1) by analytic continuation in the complex variable ρ in the region Re(ρ) > 0, i.e. they should be considered as \(\operatorname{Ei}(\rho \log x)\). The other terms also correspond to zeros: the dominant term li(x) comes from the pole at s = 1, considered as a zero of multiplicity −1, and the remaining small terms come from the trivial zeros. For some graphs of the sums of the first few terms of this series see Riesel & Göhl (1970) or Zagier (1977).

This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their "expected" positions. Riemann knew that the non-trivial zeros of the zeta function were symmetrically distributed about the line s = 1/2 + it, and he knew that all of its non-trivial zeros must lie in the range 0 ≤ Re(s) ≤ 1. He checked that a few of the zeros lay on the critical line with real part 1/2 and suggested that they all do; this is the Riemann hypothesis.

The practical uses of the Riemann hypothesis include many propositions known to be true under the Riemann hypothesis, and some that can be shown to be equivalent to the Riemann hypothesis.

Distribution of prime numbers

Von Koch (1901) proved that the Riemann hypothesis implies the "best possible" bound for the error of the prime number theorem. A precise version of Koch's result, due to Schoenfeld (1976), says that the Riemann hypothesis implies

\[|\pi(x) - \operatorname{li}(x)| < \frac{\sqrt{x}\, \log x}{8\pi} \qquad \text{for all } x \ge 2657,\]

where π(x) is the prime-counting function, li(x) is the logarithmic integral function, and log(x) is the natural logarithm of x.
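Schoenfeld's conditional bound \(|\pi(x) - \operatorname{li}(x)| < \sqrt{x}\,\log x/(8\pi)\), valid for \(x \ge 2657\) under the Riemann hypothesis, can be sanity-checked for small x, where the hypothesis has long been verified numerically. A sketch (the constant 1.04516 is li(2), added because li is taken from 0; the step count is an arbitrary accuracy choice):

```python
import math

def prime_pi(x):
    """pi(x) via a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sum(sieve)

def li(x, steps=200000):
    """li(x) = li(2) + integral from 2 to x of dt/ln t, with li(2) ≈ 1.04516."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    total += sum(1 / math.log(2 + i * h) for i in range(1, steps))
    return 1.04516 + total * h

for x in [5000, 10000, 100000]:
    error = abs(prime_pi(x) - li(x))
    bound = math.sqrt(x) * math.log(x) / (8 * math.pi)
    print(x, round(error, 1), round(bound, 1))  # error stays below the bound
```

Of course such spot checks only illustrate the inequality; its validity for all x is exactly what is equivalent to RH.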

Schoenfeld (1976) also showed that the Riemann hypothesis implies

\[|\psi(x) - x| < \frac{\sqrt{x}\, \log^2 x}{8\pi} \qquad \text{for all } x \ge 73.2,\]

where ψ(x) is Chebyshev's second function. This is an explicit version of a theorem of Cramér.

Growth of arithmetic functions

The Riemann hypothesis implies strong bounds on the growth of many other arithmetic functions, in addition to the prime-counting function above.

One example involves the Möbius function μ. The statement that the equation

\[\frac{1}{\zeta(s)} = \sum_{n=1}^{\infty} \frac{\mu(n)}{n^s}\]

is valid for every s with real part greater than 1/2, with the sum on the right-hand side converging, is equivalent to the Riemann hypothesis. From this we can also conclude that if the Mertens function is defined by

\[M(x) = \sum_{n \le x} \mu(n),\]

then the claim that

\[M(x) = O\!\left(x^{1/2+\varepsilon}\right)\]

for every positive ε is equivalent to the Riemann hypothesis (J.E. Littlewood, 1912; see for instance: paragraph 14.25 in Titchmarsh (1986)). (For the meaning of these symbols, see Big O notation.) The determinant of the order n Redheffer matrix is equal to M(n), so the Riemann hypothesis can also be stated as a condition on the growth of these determinants. The Riemann hypothesis puts a rather tight bound on the growth of M, since Odlyzko & te Riele (1985) disproved the slightly stronger Mertens conjecture

\[|M(x)| \le \sqrt{x}.\]
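The Mertens function itself is easy to tabulate, and for small x one can watch \(|M(x)|\) staying far below \(\sqrt{x}\). A sketch (the RH-equivalent bound is asymptotic, so this is only an illustration):

```python
def mobius_up_to(n):
    """Compute mu(1..n) with a sieve over primes."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for multiple in range(p, n + 1, p):
                if multiple > p:
                    is_prime[multiple] = False
                mu[multiple] *= -1      # one sign flip per prime factor
            for multiple in range(p * p, n + 1, p * p):
                mu[multiple] = 0        # squarefull numbers get mu = 0
    return mu

N = 10000
mu = mobius_up_to(N)
M = 0
for n in range(1, N + 1):
    M += mu[n]
print(M, N ** 0.5)  # |M(10000)| is tiny compared with sqrt(10000) = 100
```

The huge amount of cancellation in the ±1 values of μ is precisely what the Riemann hypothesis quantifies.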

The Riemann hypothesis is equivalent to many other conjectures about the rate of growth of other arithmetic functions aside from μ(n). A typical example is Robin's theorem, [5] which states that if σ(n) is the divisor function, given by

\[\sigma(n) = \sum_{d \mid n} d,\]

then the inequality

\[\sigma(n) < e^{\gamma}\, n \log \log n\]

holds for all n > 5040 if and only if the Riemann hypothesis is true, where γ is the Euler–Mascheroni constant.
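Robin's criterion is completely elementary to test for any particular n, although no finite check can decide the hypothesis. A sketch over a small range above 5040:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def sigma(n):
    """Sum of the divisors of n, by trial division up to sqrt(n)."""
    total = 0
    for d in range(1, int(n ** 0.5) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

# Robin: for n > 5040, RH holds iff sigma(n) < e^gamma * n * ln(ln(n)).
violations = [n for n in range(5041, 20000)
              if sigma(n) >= math.exp(EULER_GAMMA) * n * math.log(math.log(n))]
print(violations)  # []: no violations in this range
```

The interesting cases are highly composite n such as 10080, where the two sides come remarkably close.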

Another example was found by Jérôme Franel, and extended by Landau (see Franel & Landau (1924)). The Riemann hypothesis is equivalent to several statements showing that the terms of the Farey sequence are fairly regular. One such equivalence is as follows: if F_n is the Farey sequence of order n, beginning with 1/n and up to 1/1, then the claim that for all ε > 0

\[\sum_{i=1}^{m_n} \left| F_n(i) - \frac{i}{m_n} \right| = O\!\left(n^{1/2+\varepsilon}\right)\]

is equivalent to the Riemann hypothesis. Here

\[m_n = \sum_{k=1}^{n} \varphi(k)\]

is the number of terms in the Farey sequence of order n.

For an example from group theory, if g(n) is Landau's function given by the maximal order of elements of the symmetric group Sn of degree n, then Massias, Nicolas & Robin (1988) showed that the Riemann hypothesis is equivalent to the bound

\[\log g(n) < \sqrt{\operatorname{Li}^{-1}(n)}\]

for all sufficiently large n.

Lindelöf hypothesis and growth of the zeta function

The Riemann hypothesis has various weaker consequences as well; one is the Lindelöf hypothesis on the rate of growth of the zeta function on the critical line, which says that, for any ε > 0,

\[\zeta\!\left(\tfrac{1}{2} + it\right) = O\!\left(t^{\varepsilon}\right)\]

as t tends to infinity.

The Riemann hypothesis also implies quite sharp bounds for the growth rate of the zeta function in other regions of the critical strip. For example, it implies that

so the growth rate of ζ(1+it) and its inverse would be known up to a factor of 2. [6]

Large prime gap conjecture

The prime number theorem implies that on average, the gap between the prime p and its successor is log p. However, some gaps between primes may be much larger than the average. Cramér proved that, assuming the Riemann hypothesis, every gap is \(O(\sqrt{p}\, \log p)\). This is a case in which even the best bound that can be proved using the Riemann hypothesis is far weaker than what seems true: Cramér's conjecture implies that every gap is \(O((\log p)^2)\), which, while larger than the average gap, is far smaller than the bound implied by the Riemann hypothesis. Numerical evidence supports Cramér's conjecture. [7]

Analytic criteria equivalent to the Riemann hypothesis

Many statements equivalent to the Riemann hypothesis have been found, though so far none of them have led to much progress in proving (or disproving) it. Some typical examples are as follows. (Others involve the divisor function σ(n).)

The Riesz criterion was given by Riesz (1916), to the effect that the bound

holds for all ε > 0 if and only if the Riemann hypothesis holds.

Nyman (1950) proved that the Riemann hypothesis is true if and only if the space of functions of the form

\[f(x) = \sum_{\nu=1}^{n} c_\nu\, \rho\!\left(\frac{\theta_\nu}{x}\right),\]

where ρ(z) is the fractional part of z, \(0 \le \theta_\nu \le 1\), and

\[\sum_{\nu=1}^{n} c_\nu \theta_\nu = 0,\]

is dense in the Hilbert space \(L^2(0,1)\) of square-integrable functions on the unit interval. Beurling (1955) extended this by showing that the zeta function has no zeros with real part greater than 1/p if and only if this function space is dense in \(L^p(0,1)\).

Salem (1953) showed that the Riemann hypothesis is true if and only if the integral equation

Weil's criterion is the statement that the positivity of a certain function is equivalent to the Riemann hypothesis. Related is Li's criterion, a statement that the positivity of a certain sequence of numbers is equivalent to the Riemann hypothesis.

The Farey sequence provides two equivalences, due to Jerome Franel and Edmund Landau in 1924.

The De Bruijn–Newman constant, denoted by Λ and named after Nicolaas Govert de Bruijn and Charles M. Newman, is defined via the zeros of the function

\[H(\lambda, z) = \int_0^{\infty} e^{\lambda u^2}\, \Phi(u) \cos(uz)\, du,\]

that uses a real parameter λ, a complex variable z and a super-exponentially decaying function defined as

\[\Phi(u) = \sum_{n=1}^{\infty} \left(2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}\right) e^{-\pi n^2 e^{4u}}.\]

Since the Riemann hypothesis is equivalent to the claim that all the zeroes of H(0, z) are real, the Riemann hypothesis is equivalent to the conjecture that Λ ≤ 0 . Brad Rodgers and Terence Tao discovered the equivalence is actually Λ = 0 by proving zero to be the lower bound of the constant. [8] Proving zero is also the upper bound would therefore prove the Riemann hypothesis. As of April 2020 the upper bound is Λ ≤ 0.2 . [9]

Consequences of the generalized Riemann hypothesis

Several applications use the generalized Riemann hypothesis for Dirichlet L-series or zeta functions of number fields rather than just the Riemann hypothesis. Many basic properties of the Riemann zeta function can easily be generalized to all Dirichlet L-series, so it is plausible that a method that proves the Riemann hypothesis for the Riemann zeta function would also work for the generalized Riemann hypothesis for Dirichlet L-functions. Several results first proved using the generalized Riemann hypothesis were later given unconditional proofs without using it, though these were usually much harder. Many of the consequences on the following list are taken from Conrad (2010).

  • In 1913, Grönwall showed that the generalized Riemann hypothesis implies that Gauss's list of imaginary quadratic fields with class number 1 is complete, though Baker, Stark and Heegner later gave unconditional proofs of this without using the generalized Riemann hypothesis.
  • In 1917, Hardy and Littlewood showed that the generalized Riemann hypothesis implies a conjecture of Chebyshev that

\[\lim_{x \to 1^-} \sum_{p} (-1)^{(p+1)/2}\, x^p = -\infty,\]

where the sum is over odd primes p.
  • In 1923 Hardy and Littlewood showed that the generalized Riemann hypothesis implies a weak form of the Goldbach conjecture for odd numbers: that every sufficiently large odd number is the sum of three primes, though in 1937 Vinogradov gave an unconditional proof. In 1997 Deshouillers, Effinger, te Riele, and Zinoviev showed that the generalized Riemann hypothesis implies that every odd number greater than 5 is the sum of three primes. In 2013 Harald Helfgott proved the ternary Goldbach conjecture without the GRH dependence, subject to some extensive calculations completed with the help of David J. Platt.
  • In 1934, Chowla showed that the generalized Riemann hypothesis implies that the first prime in the arithmetic progression a mod m is at most \(Km^2 \log(m)^2\) for some fixed constant K.
  • In 1967, Hooley showed that the generalized Riemann hypothesis implies Artin's conjecture on primitive roots.
  • In 1973, Weinberger showed that the generalized Riemann hypothesis implies that Euler's list of idoneal numbers is complete. He also showed that the generalized Riemann hypothesis for the zeta functions of all algebraic number fields implies that any number field with class number 1 is either Euclidean or an imaginary quadratic number field of discriminant −19, −43, −67, or −163.
  • In 1976, G. Miller showed that the generalized Riemann hypothesis implies that one can test if a number is prime in polynomial time via the Miller test. In 2002, Manindra Agrawal, Neeraj Kayal and Nitin Saxena proved this result unconditionally using the AKS primality test. The generalized Riemann hypothesis can also be used to give sharper estimates for discriminants and class numbers of number fields, and it implies that Ramanujan's integral quadratic form \(x^2 + y^2 + 10z^2\) represents all integers that it represents locally, with exactly 18 exceptions.
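Miller's test checks whether n is a strong probable prime to a set of bases; the GRH-conditional part is that bases \(a < 2(\ln n)^2\) suffice for a deterministic answer (the explicit constant 2 is Eric Bach's later refinement of Miller's bound). A hedched sketch in Python; the unconditional AKS test works quite differently:

```python
import math

def is_strong_probable_prime(n, a):
    """One round of Miller's test: strong probable-prime check of n to base a."""
    d, r = n - 1, 0
    while d % 2 == 0:       # write n - 1 = 2^r * d with d odd
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(r - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def miller_test(n):
    """Deterministic under GRH: try all bases a < 2 * (ln n)^2."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    limit = min(n - 1, int(2 * math.log(n) ** 2))
    return all(is_strong_probable_prime(n, a) for a in range(2, limit + 1))

print([n for n in range(2, 60) if miller_test(n)])  # the primes below 60
```

Composites such as 2047 pass the base-2 round alone, which is why the whole range of bases is needed for a deterministic verdict.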

Excluded middle

Some consequences of the RH are also consequences of its negation, and are thus theorems. In their discussion of the Hecke, Deuring, Mordell, Heilbronn theorem, Ireland & Rosen (1990, p. 359) say

The method of proof here is truly amazing. If the generalized Riemann hypothesis is true, then the theorem is true. If the generalized Riemann hypothesis is false, then the theorem is true. Thus, the theorem is true!! (punctuation in original)

Care should be taken to understand what is meant by saying the generalized Riemann hypothesis is false: one should specify exactly which class of Dirichlet series has a counterexample.

Littlewood's theorem

This concerns the sign of the error in the prime number theorem. It has been computed that π(x) < li(x) for all \(x \le 10^{25}\), and no value of x is known for which π(x) > li(x).

In 1914 Littlewood proved that there are arbitrarily large values of x for which

π(x) > li(x) + (1/3) (√x / log x) log log log x,

and that there are also arbitrarily large values of x for which

π(x) < li(x) − (1/3) (√x / log x) log log log x.
Thus the difference π(x) − li(x) changes sign infinitely many times. Skewes' number is an estimate of the value of x corresponding to the first sign change.

Littlewood's proof is divided into two cases: the RH is assumed false (about half a page of Ingham 1932, Chapt. V), and the RH is assumed true (about a dozen pages). Stanisław Knapowski followed this up with a paper on the number of times the difference π(x) − li(x) changes sign in a given interval.

Gauss's class number conjecture

This is the conjecture (first stated in article 303 of Gauss's Disquisitiones Arithmeticae) that there are only finitely many imaginary quadratic fields with a given class number. One way to prove it would be to show that as the discriminant D → −∞ the class number h(D) → ∞.

The following sequence of theorems involving the Riemann hypothesis is described in Ireland & Rosen 1990, pp. 358–361:

Theorem (Hecke 1918). Let D < 0 be the discriminant of an imaginary quadratic number field K. Assume the generalized Riemann hypothesis for L-functions of all imaginary quadratic Dirichlet characters. Then there is an absolute constant C such that

h(D) > C √|D| / log |D|.

Theorem (Deuring 1933). If the RH is false then h(D) > 1 if |D| is sufficiently large.

Theorem (Mordell 1934). If the RH is false then h(D) → ∞ as D → −∞.

Theorem (Heilbronn 1934). If the generalized RH is false for the L-function of some imaginary quadratic Dirichlet character then h(D) → ∞ as D → −∞.

(In the work of Hecke and Heilbronn, the only L-functions that occur are those attached to imaginary quadratic characters, and it is only for those L-functions that GRH is true or GRH is false is intended; a failure of GRH for the L-function of a cubic Dirichlet character would, strictly speaking, mean GRH is false, but that was not the kind of failure of GRH that Heilbronn had in mind, so his assumption was more restricted than simply GRH is false.)

In 1935, Carl Siegel strengthened the result without using RH or GRH in any way.

Growth of Euler's totient

Dirichlet L-series and other number fields

The Riemann hypothesis can be generalized by replacing the Riemann zeta function by the formally similar, but much more general, global L-functions. In this broader setting, one expects the non-trivial zeros of the global L-functions to have real part 1/2. It is these conjectures, rather than the classical Riemann hypothesis only for the single Riemann zeta function, which account for the true importance of the Riemann hypothesis in mathematics.

The generalized Riemann hypothesis extends the Riemann hypothesis to all Dirichlet L-functions. In particular it implies the conjecture that Siegel zeros (zeros of L-functions between 1/2 and 1) do not exist.

The extended Riemann hypothesis extends the Riemann hypothesis to all Dedekind zeta functions of algebraic number fields. The extended Riemann hypothesis for abelian extensions of the rationals is equivalent to the generalized Riemann hypothesis. The Riemann hypothesis can also be extended to the L-functions of Hecke characters of number fields.

Function fields and zeta functions of varieties over finite fields

Artin (1924) introduced global zeta functions of (quadratic) function fields and conjectured an analogue of the Riemann hypothesis for them, which has been proved by Hasse in the genus 1 case and by Weil (1948) in general. For instance, the fact that the Gauss sum of the quadratic character of a finite field of size q (with q odd) has absolute value √q is actually an instance of the Riemann hypothesis in the function field setting. This led Weil (1949) to conjecture a similar statement for all algebraic varieties; the resulting Weil conjectures were proved by Pierre Deligne (1974, 1980).
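The Gauss-sum statement is easy to check directly. A quick sketch (pure Python; the naming is mine) computes g(χ) = Σ χ(x) e^(2πix/p) for the quadratic character mod an odd prime p and verifies |g(χ)| = √p:

```python
import cmath
import math

def quadratic_gauss_sum(p):
    """Gauss sum of the Legendre-symbol character mod an odd prime p."""
    def chi(x):
        # Legendre symbol (x/p) via Euler's criterion: x^((p-1)/2) mod p
        if x % p == 0:
            return 0
        return 1 if pow(x, (p - 1) // 2, p) == 1 else -1
    return sum(chi(x) * cmath.exp(2j * math.pi * x / p) for x in range(p))

# Example: |g| should equal sqrt(p) for every odd prime p.
g7 = quadratic_gauss_sum(7)
```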

Arithmetic zeta functions of arithmetic schemes and their L-factors

Arithmetic zeta functions generalise the Riemann and Dedekind zeta functions as well as the zeta functions of varieties over finite fields to every arithmetic scheme or a scheme of finite type over the integers. The arithmetic zeta function of a regular connected equidimensional arithmetic scheme of Kronecker dimension n can be factorized into the product of appropriately defined L-factors and an auxiliary factor (Jean-Pierre Serre, 1969–1970). Assuming a functional equation and meromorphic continuation, the generalized Riemann hypothesis for the L-factor states that its zeros inside the critical strip ℜ(s) ∈ (0, n) lie on the central line. Correspondingly, the generalized Riemann hypothesis for the arithmetic zeta function of a regular connected equidimensional arithmetic scheme states that its zeros inside the critical strip lie on the vertical lines ℜ(s) = 1/2, 3/2, …, n − 1/2 and its poles inside the critical strip lie on the vertical lines ℜ(s) = 1, 2, …, n − 1. This is known for schemes in positive characteristic and follows from Pierre Deligne (1974, 1980), but remains entirely unknown in characteristic zero.

Selberg zeta functions

Selberg (1956) introduced the Selberg zeta function of a Riemann surface. These are similar to the Riemann zeta function: they have a functional equation, and an infinite product similar to the Euler product but taken over closed geodesics rather than primes. The Selberg trace formula is the analogue for these functions of the explicit formulas in prime number theory. Selberg proved that the Selberg zeta functions satisfy the analogue of the Riemann hypothesis, with the imaginary parts of their zeros related to the eigenvalues of the Laplacian operator of the Riemann surface.

Ihara zeta functions

The Ihara zeta function of a finite graph is an analogue of the Selberg zeta function, which was first introduced by Yasutaka Ihara in the context of discrete subgroups of the two-by-two p-adic special linear group. A regular finite graph is a Ramanujan graph, a mathematical model of efficient communication networks, if and only if its Ihara zeta function satisfies the analogue of the Riemann hypothesis as was pointed out by T. Sunada.

Montgomery's pair correlation conjecture

Montgomery (1973) suggested the pair correlation conjecture that the correlation functions of the (suitably normalized) zeros of the zeta function should be the same as those of the eigenvalues of a random hermitian matrix. Odlyzko (1987) showed that this is supported by large-scale numerical calculations of these correlation functions.

Montgomery showed that (assuming the Riemann hypothesis) at least 2/3 of all zeros are simple, and a related conjecture is that all zeros of the zeta function are simple (or more generally have no non-trivial integer linear relations between their imaginary parts). Dedekind zeta functions of algebraic number fields, which generalize the Riemann zeta function, often do have multiple complex zeros. [11] This is because the Dedekind zeta functions factorize as a product of powers of Artin L-functions, so zeros of Artin L-functions sometimes give rise to multiple zeros of Dedekind zeta functions. Other examples of zeta functions with multiple zeros are the L-functions of some elliptic curves: these can have multiple zeros at the real point of their critical line; the Birch–Swinnerton-Dyer conjecture predicts that the multiplicity of this zero is the rank of the elliptic curve.

Other zeta functions

There are many other examples of zeta functions with analogues of the Riemann hypothesis, some of which have been proved. Goss zeta functions of function fields have a Riemann hypothesis, proved by Sheats (1998). The main conjecture of Iwasawa theory, proved by Barry Mazur and Andrew Wiles for cyclotomic fields, and Wiles for totally real fields, identifies the zeros of a p-adic L-function with the eigenvalues of an operator, so can be thought of as an analogue of the Hilbert–Pólya conjecture for p-adic L-functions. [12]

Several mathematicians have addressed the Riemann hypothesis, but none of their attempts has yet been accepted as a proof. Watkins (2007) lists some incorrect solutions.

Operator theory

Hilbert and Pólya suggested that one way to derive the Riemann hypothesis would be to find a self-adjoint operator, from the existence of which the statement on the real parts of the zeros of ζ(s) would follow when one applies the criterion on real eigenvalues. Some support for this idea comes from several analogues of the Riemann zeta functions whose zeros correspond to eigenvalues of some operator: the zeros of a zeta function of a variety over a finite field correspond to eigenvalues of a Frobenius element on an étale cohomology group, the zeros of a Selberg zeta function are eigenvalues of a Laplacian operator of a Riemann surface, and the zeros of a p-adic zeta function correspond to eigenvectors of a Galois action on ideal class groups.

Odlyzko (1987) showed that the distribution of the zeros of the Riemann zeta function shares some statistical properties with the eigenvalues of random matrices drawn from the Gaussian unitary ensemble. This gives some support to the Hilbert–Pólya conjecture.

In 1999, Michael Berry and Jonathan Keating conjectured that there is some unknown quantization Ĥ of the classical Hamiltonian H = xp so that ζ(1/2 + iĤ) = 0, and even more strongly, that the Riemann zeros coincide with the spectrum of the operator 1/2 + iĤ. This is in contrast to canonical quantization, which leads to the Heisenberg uncertainty principle σ_x σ_p ≥ ħ/2 and the natural numbers as spectrum of the quantum harmonic oscillator. The crucial point is that the Hamiltonian should be a self-adjoint operator so that the quantization would be a realization of the Hilbert–Pólya program. In connection with this quantum mechanical problem Berry and Connes proposed that the inverse of the potential of the Hamiltonian is connected to the half-derivative of the counting function of the Riemann zeros.

The analogy with the Riemann hypothesis over finite fields suggests that the Hilbert space containing eigenvectors corresponding to the zeros might be some sort of first cohomology group of the spectrum Spec (Z) of the integers. Deninger (1998) described some of the attempts to find such a cohomology theory. [14]

Zagier (1981) constructed a natural space of invariant functions on the upper half plane that has eigenvalues under the Laplacian operator that correspond to zeros of the Riemann zeta function—and remarked that in the unlikely event that one could show the existence of a suitable positive definite inner product on this space, the Riemann hypothesis would follow. Cartier (1982) discussed a related example, where due to a bizarre bug a computer program listed zeros of the Riemann zeta function as eigenvalues of the same Laplacian operator.

Schumayer & Hutchinson (2011) surveyed some of the attempts to construct a suitable physical model related to the Riemann zeta function.

Lee–Yang theorem

The Lee–Yang theorem states that the zeros of certain partition functions in statistical mechanics all lie on a "critical line" with real part equal to 0, and this has led to some speculation about a relationship with the Riemann hypothesis. [15]

Turán's result

Pál Turán (1948) showed that if the partial sums Σ_{n≤N} n^(−s) have no zeros in the half-plane Re(s) > 1 once N is sufficiently large, then the Riemann hypothesis follows. However, Montgomery later showed that these partial sums do have zeros with real part greater than 1, so this approach cannot prove the hypothesis.

Noncommutative geometry

Connes (1999, 2000) has described a relationship between the Riemann hypothesis and noncommutative geometry, and shows that a suitable analog of the Selberg trace formula for the action of the idèle class group on the adèle class space would imply the Riemann hypothesis. Some of these ideas are elaborated in Lapidus (2008).

Hilbert spaces of entire functions

Louis de Branges (1992) showed that the Riemann hypothesis would follow from a positivity condition on a certain Hilbert space of entire functions. However Conrey & Li (2000) showed that the necessary positivity conditions are not satisfied. Despite this obstacle, de Branges has continued to work on an attempted proof of the Riemann hypothesis along the same lines, but this has not been widely accepted by other mathematicians. [16]


Quasicrystals

The Riemann hypothesis implies that the zeros of the zeta function form a quasicrystal, a distribution with discrete support whose Fourier transform also has discrete support. Dyson (2009) suggested trying to prove the Riemann hypothesis by classifying, or at least studying, 1-dimensional quasicrystals.

Arithmetic zeta functions of models of elliptic curves over number fields

When one goes from geometric dimension one, e.g. an algebraic number field, to geometric dimension two, e.g. a regular model of an elliptic curve over a number field, the two-dimensional part of the generalized Riemann hypothesis for the arithmetic zeta function of the model deals with the poles of the zeta function. In dimension one the study of the zeta integral in Tate's thesis does not lead to new important information on the Riemann hypothesis. In contrast, in dimension two work of Ivan Fesenko on a two-dimensional generalisation of Tate's thesis includes an integral representation of a zeta integral closely related to the zeta function. In this new situation, not possible in dimension one, the poles of the zeta function can be studied via the zeta integral and associated adele groups. A related conjecture of Fesenko (2010) on the positivity of the fourth derivative of a boundary function associated to the zeta integral essentially implies the pole part of the generalized Riemann hypothesis. Suzuki (2011) proved that the latter, together with some technical assumptions, implies Fesenko's conjecture.

Multiple zeta functions

Deligne's proof of the Riemann hypothesis over finite fields used the zeta functions of product varieties, whose zeros and poles correspond to sums of zeros and poles of the original zeta function, in order to bound the real parts of the zeros of the original zeta function. By analogy, Kurokawa (1992) introduced multiple zeta functions whose zeros and poles correspond to sums of zeros and poles of the Riemann zeta function. To make the series converge he restricted to sums of zeros or poles all with non-negative imaginary part. So far, the known bounds on the zeros and poles of the multiple zeta functions are not strong enough to give useful estimates for the zeros of the Riemann zeta function.

Number of zeros

The functional equation combined with the argument principle implies that the number of zeros of the zeta function with imaginary part between 0 and T is given by

N(T) = (1/π) Arg(ξ(s))

for s = 1/2 + iT, where the argument is defined by varying it continuously along the line with Im(s) = T, starting with argument 0 at ∞ + iT. This is the sum of a large but well understood term

(1/π) θ(T) + 1, where θ is the Riemann–Siegel theta function,

and a small but rather mysterious term

S(T) = (1/π) Arg ζ(1/2 + iT).
So the density of zeros with imaginary part near T is about log(T)/2π, and the function S describes the small deviations from this. The function S(t) jumps by 1 at each zero of the zeta function, and for t ≥ 8 it decreases monotonically between zeros with derivative close to −(log t)/2π.
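These counting formulas are easy to evaluate numerically. A short sketch (stdlib Python; θ is computed from its standard asymptotic expansion, and the function names are mine) approximates N(T) by θ(T)/π + 1 and reproduces the known count of 29 zeros with imaginary part below 100:

```python
import math

def theta(t):
    """Riemann-Siegel theta function, asymptotic expansion (good for t >> 1)."""
    return (t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8
            + 1 / (48 * t))

def n_smooth(t):
    """Smooth approximation theta(t)/pi + 1 to the zero-counting function N(t);
    the true N(t) differs from this by the small term S(t)."""
    return theta(t) / math.pi + 1

def density(t):
    """Approximate density of zeros with imaginary part near t."""
    return math.log(t / (2 * math.pi)) / (2 * math.pi)
```

For example, n_smooth(100) ≈ 29.006, matching the 29 zeros below height 100, and density(100) ≈ 0.44 zeros per unit height.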

Lower bounds are also known for the number of points where the function S(t) changes sign in a given interval.

Selberg (1946) showed that the average moments of even powers of S are given by

∫₀^T |S(t)|^(2k) dt = ((2k)! / k!) T (log log T)^k / (2π)^(2k) + O(T (log log T)^(k−1/2)).

This suggests that S(T)/(log log T)^(1/2) resembles a Gaussian random variable with mean 0 and variance 2π^2 (Ghosh (1983) proved this fact). In particular |S(T)| is usually somewhere around (log log T)^(1/2), but occasionally much larger. The exact order of growth of S(T) is not known. There has been no unconditional improvement to Riemann's original bound S(T) = O(log T), though the Riemann hypothesis implies the slightly smaller bound S(T) = O(log T/log log T). [6] The true order of magnitude may be somewhat less than this, as random functions with the same distribution as S(T) tend to have growth of order about (log T)^(1/2). In the other direction it cannot be too small: Selberg (1946) showed that S(T) ≠ o((log T)^(1/3)/(log log T)^(7/3)), and assuming the Riemann hypothesis Montgomery showed that S(T) ≠ o((log T)^(1/2)/(log log T)^(1/2)).

Numerical calculations confirm that S grows very slowly: |S(T)| < 1 for T < 280, |S(T)| < 2 for T < 6,800,000, and the largest value of |S(T)| found so far is not much larger than 3. [17]

Riemann's estimate S(T) = O(log T) implies that the gaps between zeros are bounded, and Littlewood improved this slightly, showing that the gaps between their imaginary parts tend to 0.

Theorem of Hadamard and de la Vallée-Poussin

Hadamard (1896) and de la Vallée-Poussin (1896) independently proved that no zeros could lie on the line Re(s) = 1. Together with the functional equation and the fact that there are no zeros with real part greater than 1, this showed that all non-trivial zeros must lie in the interior of the critical strip 0 < Re(s) < 1 . This was a key step in their first proofs of the prime number theorem.

Both of the original proofs that the zeta function has no zeros with real part 1 are similar, and depend on showing that if ζ(1 + it) vanishes, then ζ(1 + 2it) is singular, which is not possible. One way of doing this is by using the inequality

ζ(σ)^3 |ζ(σ + it)|^4 |ζ(σ + 2it)| ≥ 1

for σ > 1, t real, and looking at the limit as σ → 1. This inequality follows by taking the real part of the log of the Euler product to see that

log |ζ(σ + it)| = Σ_{p,n} p^(−nσ) cos(nt log p)/n,

where the sum is over all prime powers p^n, so that

ζ(σ)^3 |ζ(σ + it)|^4 |ζ(σ + 2it)| = exp( Σ_{p,n} p^(−nσ) (3 + 4 cos(nt log p) + cos(2nt log p))/n ),

which is at least 1 because all the terms in the sum are positive, due to the inequality

3 + 4 cos θ + cos 2θ = 2 (1 + cos θ)^2 ≥ 0.

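The key trigonometric fact in this classical argument can be checked mechanically. A tiny sketch (the function name is my own) verifies the identity 3 + 4cos θ + cos 2θ = 2(1 + cos θ)^2, and hence non-negativity, on a grid:

```python
import math

def three_four_one(theta):
    """The trigonometric combination from the Hadamard /
    de la Vallee-Poussin argument."""
    return 3 + 4 * math.cos(theta) + math.cos(2 * theta)

# Since cos(2t) = 2cos(t)^2 - 1, the expression equals 2(1 + cos t)^2 >= 0.
grid = [k * 2 * math.pi / 1000 for k in range(1000)]
min_value = min(three_four_one(t) for t in grid)
```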
Zeros on the critical line

Hardy (1914) and Hardy & Littlewood (1921) showed there are infinitely many zeros on the critical line, by considering moments of certain functions related to the zeta function. Selberg (1942) proved that at least a (small) positive proportion of zeros lie on the line. Levinson (1974) improved this to one-third of the zeros by relating the zeros of the zeta function to those of its derivative, and Conrey (1989) improved this further to two-fifths.

Most zeros lie close to the critical line. More precisely, Bohr & Landau (1914) showed that for any positive ε, the number of zeros with real part at least 1/2 + ε and imaginary part between −T and T is O(T). Combined with the facts that zeros in the critical strip are symmetric about the critical line and that the total number of zeros in the critical strip is Θ(T log T), almost all non-trivial zeros are within a distance ε of the critical line. Ivić (1985) gives several more precise versions of this result, called zero density estimates, which bound the number of zeros in regions with imaginary part at most T and real part at least 1/2 + ε.

Hardy–Littlewood conjectures

These are two conjectures of Hardy and Littlewood on the distance between real zeros of Hardy's function Z(t) and on the density of zeros of Z(t) lying on the interval (0, T].

Selberg's zeta function conjecture

Numerical calculations

The function

π^(−s/2) Γ(s/2) ζ(s)

has the same zeros as the zeta function in the critical strip, and is real on the critical line because of the functional equation, so one can prove the existence of zeros exactly on the real line between two points by checking numerically that the function has opposite signs at these points. Usually one writes

ζ(1/2 + it) = Z(t) e^(−iθ(t)),

where Hardy's Z function and the Riemann–Siegel theta function θ are uniquely defined by this and the condition that they are smooth real functions with θ(0) = 0. By finding many intervals where the function Z changes sign one can show that there are many zeros on the critical line. To verify the Riemann hypothesis up to a given imaginary part T of the zeros, one also has to check that there are no further zeros off the line in this region. This can be done by calculating the total number of zeros in the region using Turing's method and checking that it is the same as the number of zeros found on the line. This allows one to verify the Riemann hypothesis computationally up to any desired value of T (provided all the zeros of the zeta function in this region are simple and on the critical line).
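The sign-change procedure can be sketched end to end in a few dozen lines. The code below is a minimal illustration, not a verification tool: it evaluates ζ on the critical line with Borwein's alternating-series algorithm (adequate only for small t), uses the standard asymptotic expansion for θ, and bisects on a sign change of Z to locate the first zero near t ≈ 14.13:

```python
import cmath
import math

def zeta(s, n=60):
    """Riemann zeta via Borwein's eta-function algorithm; accuracy
    degrades as |Im s| grows, but is ample for small heights."""
    d = [0.0] * (n + 1)
    term, acc = 1.0 / n, 1.0 / n
    d[0] = n * acc
    for i in range(1, n + 1):
        term *= 4.0 * (n + i - 1) * (n - i + 1) / ((2 * i) * (2 * i - 1))
        acc += term
        d[i] = n * acc
    eta = -sum((-1) ** k * (d[k] - d[n]) / (k + 1) ** s for k in range(n)) / d[n]
    return eta / (1 - 2 ** (1 - s))

def theta(t):
    """Riemann-Siegel theta function (asymptotic expansion)."""
    return t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 + 1 / (48 * t)

def Z(t):
    """Hardy's Z function: real (up to rounding), same zeros as zeta
    on the critical line."""
    return (cmath.exp(1j * theta(t)) * zeta(0.5 + 1j * t)).real

def first_zero(lo=14.0, hi=15.0):
    """Bisect on the sign change of Z to locate a critical-line zero."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if Z(lo) * Z(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

Real verifications at large heights use the Riemann–Siegel formula for speed and Turing's method to certify that no zeros off the line were missed; this sketch only demonstrates the sign-change idea.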

Some calculations of zeros of the zeta function are listed below, where the "height" of a zero is the magnitude of its imaginary part, and the height of the nth zero is denoted by γn. So far all zeros that have been checked are on the critical line and are simple. (A multiple zero would cause problems for the zero finding algorithms, which depend on finding sign changes between zeros.) For tables of the zeros, see Haselgrove & Miller (1960) or Odlyzko.

These calculations also verified the work of Gourdon (2004) and others.

Gram points

A Gram point is a point on the critical line 1/2 + it where the zeta function is real and non-zero. Using the expression for the zeta function on the critical line, ζ(1/2 + it) = Z(t)e^(−iθ(t)), where Hardy's function, Z, is real for real t, and θ is the Riemann–Siegel theta function, we see that zeta is real when sin(θ(t)) = 0. This implies that θ(t) is an integer multiple of π, which allows for the location of Gram points to be calculated fairly easily by inverting the formula for θ. They are usually numbered as gn for n = 0, 1, …, where gn is the unique solution of θ(t) = nπ.
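Inverting θ is indeed straightforward numerically, since θ is increasing for t > 2π. The sketch below (stdlib Python; θ uses the standard asymptotic expansion, and the interval defaults are my choices) finds gn by bisection on θ(t) − nπ:

```python
import math

def theta(t):
    """Riemann-Siegel theta function (asymptotic expansion, adequate for t > 7 or so)."""
    return t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 + 1 / (48 * t)

def gram_point(n, lo=9.0, hi=100.0):
    """Solve theta(t) = n*pi by bisection; theta is increasing on [lo, hi]."""
    target = n * math.pi
    for _ in range(80):
        mid = (lo + hi) / 2
        if theta(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

This reproduces the values in the table below, e.g. g0 ≈ 17.846 and g1 ≈ 23.170.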

Gram observed that there was often exactly one zero of the zeta function between any two Gram points; Hutchinson called this observation Gram's law. There are several other closely related statements that are also sometimes called Gram's law: for example, (−1)^n Z(gn) is usually positive, or Z(t) usually has opposite sign at consecutive Gram points. The imaginary parts γn of the first few zeros and the first few Gram points gn are given in the following table

g−1 γ1 g0 γ2 g1 γ3 g2 γ4 g3 γ5 g4 γ6 g5
9.667 14.135 17.846 21.022 23.170 25.011 27.670 30.425 31.718 32.935 35.467 37.586 38.999

The first failure of Gram's law occurs at the 127th zero and the Gram point g126, which are in the "wrong" order.

g124 γ126 g125 g126 γ127 γ128 g127 γ129 g128
279.148 279.229 280.802 282.455 282.465 283.211 284.104 284.836 285.752

A Gram point t is called good if the zeta function is positive at 1/2 + it. The indices of the "bad" Gram points where Z has the "wrong" sign are 126, 134, 195, 211, … (sequence A114856 in the OEIS). A Gram block is an interval bounded by two good Gram points such that all the Gram points between them are bad. A refinement of Gram's law called Rosser's rule, due to Rosser, Yohe & Schoenfeld (1969), says that Gram blocks often have the expected number of zeros in them (the same as the number of Gram intervals), even though some of the individual Gram intervals in the block may not have exactly one zero in them. For example, the interval bounded by g125 and g127 is a Gram block containing a unique bad Gram point g126, and contains the expected number 2 of zeros although neither of its two Gram intervals contains a unique zero. Rosser et al. checked that there were no exceptions to Rosser's rule in the first 3 million zeros, although there are infinitely many exceptions to Rosser's rule over the entire zeta function.

Gram's rule and Rosser's rule both say that in some sense zeros do not stray too far from their expected positions. The distance of a zero from its expected position is controlled by the function S defined above, which grows extremely slowly: its average value is of the order of (log log T)^(1/2), which only reaches 2 for T around 10^24. This means that both rules hold most of the time for small T but eventually break down often. Indeed, Trudgian (2011) showed that both Gram's law and Rosser's rule fail in a positive proportion of cases. To be specific, it is expected that in the long run about 73% of Gram intervals contain exactly one zero, while about 14% contain no zero and about 13% contain two zeros.

Mathematical papers about the Riemann hypothesis tend to be cautiously noncommittal about its truth. Of authors who express an opinion, most of them, such as Riemann (1859) and Bombieri (2000), imply that they expect (or at least hope) that it is true. The few authors who express serious doubt about it include Ivić (2008), who lists some reasons for skepticism, and Littlewood (1962), who flatly states that he believes it false, that there is no evidence for it and no imaginable reason it would be true. The consensus of the survey articles (Bombieri 2000, Conrey 2003, and Sarnak 2005) is that the evidence for it is strong but not overwhelming, so that while it is probably true there is reasonable doubt.

Some of the arguments for and against the Riemann hypothesis are listed by Sarnak (2005), Conrey (2003), and Ivić (2008), and include the following:

  • Several analogues of the Riemann hypothesis have already been proved. The proof of the Riemann hypothesis for varieties over finite fields by Deligne (1974) is possibly the single strongest theoretical reason in favor of the Riemann hypothesis. This provides some evidence for the more general conjecture that all zeta functions associated with automorphic forms satisfy a Riemann hypothesis, which includes the classical Riemann hypothesis as a special case. Similarly Selberg zeta functions satisfy the analogue of the Riemann hypothesis, and are in some ways similar to the Riemann zeta function, having a functional equation and an infinite product expansion analogous to the Euler product expansion. But there are also some major differences; for example, they are not given by Dirichlet series. The Riemann hypothesis for the Goss zeta function was proved by Sheats (1998). In contrast to these positive examples, some Epstein zeta functions do not satisfy the Riemann hypothesis even though they have an infinite number of zeros on the critical line. [6] These functions are quite similar to the Riemann zeta function, and have a Dirichlet series expansion and a functional equation, but the ones known to fail the Riemann hypothesis do not have an Euler product and are not directly related to automorphic representations.
  • At first, the numerical verification that many zeros lie on the line seems strong evidence for it. But analytic number theory has had many conjectures supported by substantial numerical evidence that turned out to be false. See Skewes' number for a notorious example, where the first exception to a plausible conjecture related to the Riemann hypothesis probably occurs around 10^316; a counterexample to the Riemann hypothesis with imaginary part of this size would be far beyond anything that can currently be computed using a direct approach. The problem is that the behavior is often influenced by very slowly increasing functions such as log log T that tend to infinity, but do so so slowly that this cannot be detected by computation. Such functions occur in the theory of the zeta function controlling the behavior of its zeros; for example the function S(T) above has average size around (log log T)^(1/2). As S(T) jumps by at least 2 at any counterexample to the Riemann hypothesis, one might expect any counterexamples to the Riemann hypothesis to start appearing only when S(T) becomes large. It is never much more than 3 as far as it has been calculated, but is known to be unbounded, suggesting that calculations may not have yet reached the region of typical behavior of the zeta function. Denjoy's probabilistic argument for the Riemann hypothesis [19] is based on the observation that if μ(x) is a random sequence of "1"s and "−1"s then, for every ε > 0, the partial sums M(x) = Σ_{n≤x} μ(n) grow no faster than x^(1/2+ε) (with probability 1); since the Riemann hypothesis is equivalent to this bound for the Möbius function μ, it would follow if μ behaved like a random sequence of signs.
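Denjoy-style heuristics are easy to explore numerically. The sketch below (stdlib Python; function names are my own) sieves the Möbius function μ and checks that its partial sums M(x) stay below √x in this small range — consistent with, though of course no evidence for, the bound M(x) = O(x^(1/2+ε)) that is equivalent to the Riemann hypothesis:

```python
def mobius_sieve(limit):
    """Mobius function mu(0..limit): for each prime p, flip the sign of
    multiples of p and zero out multiples of p^2."""
    mu = [1] * (limit + 1)
    mu[0] = 0
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(2 * p, limit + 1, p):
                is_prime[m] = False
            for m in range(p, limit + 1, p):
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

def max_mertens_ratio(limit):
    """max of |M(x)| / sqrt(x) over 2 <= x <= limit, where M(x) = sum of mu(n)."""
    mu = mobius_sieve(limit)
    running, worst = mu[1], 0.0
    for x in range(2, limit + 1):
        running += mu[x]
        worst = max(worst, abs(running) / x ** 0.5)
    return worst
```

The ratio staying below 1 here reflects the (disproved, but only for astronomically large x) Mertens conjecture; RH itself only needs the weaker x^(1/2+ε) bound.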


The Riemann Hypothesis is a problem in mathematics which is currently unsolved.

To explain it to you I will have to lay some groundwork.

First: complex numbers, explained. You may have heard the question asked, "what is the square root of minus one?" Well, maths has an answer and we call it i. i multiplied by i equals -1. If the real number line …, -4, -3, -2, -1, 0, 1, 2, 3, 4, … is represented as a horizontal line, then the numbers …, -4i, -3i, -2i, -i, 0, i, 2i, 3i, 4i, … can be thought of as the vertical axis on this diagram. The whole plane taken together is then called the complex plane. This is a two-dimensional set of numbers.

Every complex number can be represented in the form a + bi. For real numbers, we simply take b = 0.

Next: functions. In mathematics, a function is a black box which, when you put a number into it, spits a different number out. A function is represented by a letter - usually "f". If you put a number x into the function you call f, then what f then spits out is written "f(x)".

In most cases there is a convenient way to express f(x) in terms of x. For example, f(x) = x^2 is a very simple function. Whatever x you put in, you'll get x^2 out. f(1) = 1. f(2) = 4. f(3) = 9. And so on.

You're probably most familiar with real functions, or functions where you put a real number in and always get a real number out. HOWEVER. There's nothing stopping you from putting these weird new complex numbers into a function. For example, if f(x) = x^2 and we let x = i, which is the square root of minus one I mentioned above, then you'll get f(i) = -1. That's just the beginning of what's more generally known as complex functions - where you can put any complex number a + bi in and get (potentially) any complex number out.
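Python's built-in complex type makes this concrete (the literal 1j plays the role of i), so you can try the squaring example yourself:

```python
def f(x):
    """The simple squaring function from the text."""
    return x * x

# Real inputs behave as usual...
values = [f(1), f(2), f(3)]   # 1, 4, 9

# ...and complex inputs are just as welcome: i * i = -1.
i = 1j
fi = f(i)                     # -1, as a complex number

# Any a + bi in, some c + di out: (3 + 4i)^2 = -7 + 24i.
point = f(3 + 4j)
```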

The Riemann Zeta Function is just such a complex function. "Zeta" is a Greek letter which is written "ζ". For any complex number a + bi, ζ(a + bi) will be another complex number, c + di.

The actual description of the Zeta Function is too boringly complicated to explain here.

Now, a zero of a function is (pretty obviously) a point a + bi where f(a + bi) = 0. If f(x) = x^2 then the only zero is obviously at 0, where f(0) = 0. For the Riemann Zeta Function this is more complicated. It basically has two types of zeros: the "trivial" zeros, which occur at all negative even integers, that is, -2, -4, -6, -8, …, and the "nontrivial" zeros, which are all the OTHER ones.

As far as we know, all the nontrivial zeros occur at 1/2 + bi for some b. No others have been found in a lot of looking... but are they ALL like that? The Riemann Hypothesis suggests that they are... but nobody has yet been able to prove it.

Just to understand the$^\dagger$ statement of the problem, you would have to be familiar with complex analysis and analytic number theory. The $\zeta$ function itself is an analytic object from number theory and to understand its significance (just on the surface!) you would have to study it in these realms. Of course it is also a function on $\Bbb C$ after analytic continuation - attained using a functional equation - with a simple pole at $1$, and understanding what this means and how to manipulate the function deftly will mean studying complex analysis.

$^\dagger$I refer to the statement that $\zeta(s)$ has all nontrivial zeros on the critical line. There are actually a lot of equivalent statements that require very little knowledge of complex analysis (you'll still need to pick up a few definitions of arithmetic functions from analytic NT for many of them; these aren't too hard). You can find a lot of equivalences listed here, for example.

Beyond that, to understand the modern approaches to RH and related or generalized conjectures and all of the theory there is surrounding this creature, you must go much further in algebraic number theory at the very least, and travel to many other worlds like modular forms, differential geometry, quantum theory and random matrices, etc. - basically at least a basic knowledge of most advanced subjects in analysis, algebra and geometry, and then especially deeply in pertinent areas.

Everything about the Riemann hypothesis

Today's topic is The Riemann hypothesis.

This recurring thread will be a place to ask questions and discuss famous/well-known/surprising results, clever and elegant proofs, or interesting open problems related to the topic of the week.

Experts in the topic are especially encouraged to contribute and participate in these threads.

Next week's topic will be Galois theory.

These threads will be posted every Wednesday around 12pm UTC-5.

If you have any suggestions for a topic or you want to collaborate in some way in the upcoming threads, please send me a PM.

For previous week's "Everything about X" threads, check out the wiki link here

To kick things off, here is a very brief summary provided by wikipedia and myself:

Named after Bernhard Riemann, the Riemann hypothesis is one of the most famous open problems in mathematics, attracting the interest of both experts and laymen.

In Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse, Riemann studies the behaviour of the prime counting function and presents the now-famous conjecture: the nontrivial zeros of the zeta function have real part 1/2.

The (Generalized) Riemann Hypothesis is famous for implying many results in related areas, inspiring the creation of entire branches of mathematics studied to this day, and having a 1M USD bounty.

The Riemann Hypothesis is very easy to state, but its significance is not so straightforward.

It all boils down to two product formulas for the Riemann Zeta Function. The first is the Euler product of $(1 - p^{-s})^{-1}$ over all primes $p$ (valid for $\mbox{Re}\, s > 1$). It is easy to use this expression to extract prime-related functions, like the Chebyshev Functions, demonstrating that if we know stuff about the Riemann Zeta Function, then we know stuff about primes. On the other hand, the Riemann Zeta Function is meromorphic on the entire complex plane (and we know its only pole), which means that we have all the niceness of entire functions at our disposal. The theory of Complex Analysis can then be used to set up another product formula for the Riemann Zeta Function, known as the Weierstrass Factorization. This, essentially, says that entire functions behave a whole lot like infinite-degree polynomials, including the fact that they are uniquely determined, up to "scale", by their zeros. The Weierstrass Factorization is then the analog of factoring a polynomial by its roots: it's a product of expressions over all the zeros of the zeta function.
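As a numerical sanity check (not a proof), one can compare the Dirichlet series with a truncated Euler product at $s = 2$, where the exact value $\zeta(2) = \pi^2/6$ is known. This is just an illustrative sketch, with truncation limits chosen arbitrarily:

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [p for p, flag in enumerate(is_prime) if flag]

s = 2.0

# Dirichlet series: sum of n^{-s}, truncated at 10^5 terms.
series = sum(n ** -s for n in range(1, 100001))

# Euler product: product of (1 - p^{-s})^{-1} over primes p <= 10^4.
product = 1.0
for p in primes_up_to(10000):
    product /= (1.0 - p ** -s)

# Both truncations converge to zeta(2) = pi^2 / 6.
print(abs(series - math.pi ** 2 / 6) < 1e-4)   # True
print(abs(product - math.pi ** 2 / 6) < 1e-4)  # True
```

The agreement is what the Fundamental Theorem of Arithmetic guarantees: expanding each geometric factor $(1-p^{-s})^{-1}$ and multiplying out produces each $n^{-s}$ exactly once.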

If we go through the manipulations on the Riemann Zeta Function that gave us the Chebyshev function (which is a "smooth" prime-counting function), then we can write the Chebyshev function explicitly in terms of the zeros of the Riemann Zeta Function. This is the Riemann–von Mangoldt Explicit Formula. It is nothing more than an integral transformation of the two product representations of the Riemann Zeta Function. But this integral transformation explicitly gives us the information we seek about primes.

Now, the Functional Equation of the Riemann Zeta Function tells us that, outside a certain region, the only zeros of the Riemann Zeta Function are the negative even integers. But these, asymptotically, contribute nothing to the Chebyshev function and so are trivial. The zeros that really contribute to the growth of the Chebyshev function are the zeros in this certain region. In fact, the form of the Riemann–von Mangoldt Formula is

Chebyshev = (Main Growth Term) + (Decay Term) + (Oscillatory Term)

The "Main Growth Term" comes directly from the pole of the Riemann Zeta Function. The "Decay Term" comes from the trivial zeros. The "Oscillatory Term" comes from the non-trivial zeros. The Oscillatory Term has the chance to contribute nontrivially to the growth of the Chebyshev function, but we would like to say that this does not happen and that the growth of the Chebyshev function is, more or less, completely governed by the "Main Growth Term".
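For reference, here is the standard statement of the explicit formula for the Chebyshev function $\psi(x)$ (valid for $x > 1$ not a prime power; this shape is not spelled out in the post above but matches the three terms just described, up to a constant):

```latex
\psi(x) \;=\; \underbrace{x}_{\text{pole at } s = 1}
\;-\; \underbrace{\sum_{\rho} \frac{x^{\rho}}{\rho}}_{\text{nontrivial zeros}}
\;-\; \underbrace{\tfrac{1}{2}\ln\!\left(1 - x^{-2}\right)}_{\text{trivial zeros}}
\;-\; \ln 2\pi
```

The sum over nontrivial zeros $\rho$ is the Oscillatory Term: each $x^{\rho}$ has modulus $x^{\mbox{Re}\,\rho}$, which is why the real parts of the zeros control the error.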

Now, the nontrivial zeros lie in some region of the complex plane. But the amount that they contribute to the growth of the Chebyshev function through the Oscillatory Term depends on how close to the boundary of this region they live. The Prime Number Theorem, which says that the Chebyshev function does, indeed, grow like the Main Growth Term, follows from proving that there are no zeros on the boundary of this region. But we would like to say that the Oscillatory Term contributes as little as possible to the growth of the Chebyshev function. This happens when the zeros are as far inside the critical region as possible. This is what the Riemann Hypothesis says. It is basically a conjecture on the error between the Chebyshev function and its main asymptotic growth given by the Main Growth Term.
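A quantitative way to phrase this (von Koch's classical equivalence, stated here for reference) is that RH is equivalent to a square-root-sized error term for the Chebyshev function:

```latex
\text{RH} \quad\Longleftrightarrow\quad
\psi(x) = x + O\!\left(\sqrt{x}\,\log^{2} x\right)
```

With each nontrivial zero contributing $x^{\mbox{Re}\,\rho}$ to the Oscillatory Term, pushing every zero to $\mbox{Re}\,\rho = 1/2$ is exactly what makes the error $\sqrt{x}$-sized, the smallest it can possibly be.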

The Riemann Hypothesis, and its generalizations, is assumed for a lot of important results. It is mainly used to control the errors associated with our approximations for the prime counting function. If, say, you want to show that there is a number N so that there are infinitely many primes a distance at most N apart, then having a close and reliable approximation to where the primes are is probably a good thing. Luckily for the Bounded Gaps theorem, the exact Generalized Riemann Hypothesis is not needed; instead you just need that it is true "on average". The Bombieri-Vinogradov Theorem is a sufficient result for this (after some tweaking) and basically says that the Generalized Riemann Hypothesis is true on average, and its statement is a clear statement about the error between the prime counting function and its asymptotic approximation.
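For reference, the standard statement of Bombieri-Vinogradov, with $\psi(x; q, a)$ the Chebyshev function counting prime powers up to $x$ in the class $a \bmod q$, reads: for every $A > 0$ there is a $B = B(A)$ such that

```latex
\sum_{q \le \sqrt{x}/(\log x)^{B}}
\max_{\substack{a \\ (a,q) = 1}}
\left| \psi(x; q, a) - \frac{x}{\varphi(q)} \right|
\;\ll\; \frac{x}{(\log x)^{A}}
```

So the error in the prime number theorem for arithmetic progressions is as small as GRH would predict, on average over moduli $q$ up to nearly $\sqrt{x}$.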

EDIT: I'm not sure if /u/chebushka was referring to my post or the original post's description, but it should be emphasized that the important results generally all depend on the Generalized Riemann Hypothesis, or even the "Grand Riemann Hypothesis", which says that all zeros of all Riemann Zeta-like functions are on the critical line and that their zeros are linearly independent over the rationals. Though the moral of bounding the error is relatively consistent throughout, a lot of the applications involve bounding the error for different types of prime-counting functions that each have their own "Riemann Hypothesis".