 Chernoff bound

In probability theory, the Chernoff bound, named after Herman Chernoff, gives exponentially decreasing bounds on tail distributions of sums of independent random variables. It is sharper than first- or second-moment-based tail bounds such as Markov's inequality or Chebyshev's inequality, which yield only power-law bounds on tail decay.
It is related to the (historically earliest) Bernstein inequalities, and to Hoeffding's inequality.
Let X_{1}, ..., X_{n} be independent Bernoulli random variables, each equal to 1 with probability p > 1/2. Then the probability of simultaneous occurrence of more than n/2 of the events {X_{k} = 1} has an exact value P, where

 P = ∑_{i = ⌊n/2⌋ + 1}^{n} C(n, i) p^{i} (1 − p)^{n − i},

with C(n, i) the binomial coefficient. The Chernoff bound shows that P has the following lower bound:

 P ≥ 1 − e^{−2n (p − 1/2)^{2}}.
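As a quick numeric sanity check, the exact value P (a binomial upper-tail sum) can be compared against the lower bound, here assumed in its standard form 1 − e^{−2n(p − 1/2)²}; this is an illustrative sketch, not part of the original statement:

```python
from math import comb, exp

def exact_majority_prob(n: int, p: float) -> float:
    """Exact P: probability that more than n/2 of n independent
    Bernoulli(p) variables equal 1 (a binomial upper-tail sum)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(n // 2 + 1, n + 1))

def chernoff_lower_bound(n: int, p: float) -> float:
    """Chernoff lower bound on P: 1 - exp(-2 n (p - 1/2)^2)."""
    return 1 - exp(-2 * n * (p - 0.5) ** 2)

# With n = 101 flips of a 60%-biased coin, the exact P dominates the bound.
P = exact_majority_prob(101, 0.6)
lb = chernoff_lower_bound(101, 0.6)
```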
This result admits various generalisations as outlined below. One can encounter many flavours of Chernoff bounds: the original additive form (which gives a bound on the absolute error) or the more practical multiplicative form (which bounds the error relative to the mean).
A motivating example
The simplest case of Chernoff bounds is used to bound the success probability of majority agreement for n independent, equally likely events.
A simple motivating example is to consider a biased coin. One side (say, Heads), is more likely to come up than the other, but you don't know which and would like to find out. The obvious solution is to flip it many times and then choose the side that comes up the most. But how many times do you have to flip it to be confident that you've chosen correctly?
In our example, let X_{i} denote the event that the ith coin flip comes up Heads; suppose that we want to ensure we choose the wrong side with probability at most ε. Then, rearranging the bound above, we must have:

 n ≥ ln(1/ε) / (2 (p − 1/2)^{2}).
If the coin is noticeably biased, say coming up on one side 60% of the time (p = 0.6), then we can guess that side with 95% accuracy (ε = 0.05) after 150 flips (n = 150). If it is 90% biased, then a mere 10 flips suffices. If the coin is only biased a tiny amount, as most real coins are, the number of necessary flips becomes much larger.
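The arithmetic above can be sketched as follows, assuming the error probability is bounded by e^{−2n(p − 1/2)²} as in the previous section:

```python
from math import ceil, log

def flips_needed(p: float, eps: float) -> int:
    """Smallest n with exp(-2 n (p - 1/2)^2) <= eps: after n flips the
    majority side is the wrong one with probability at most eps."""
    return ceil(log(1 / eps) / (2 * (p - 0.5) ** 2))

n60 = flips_needed(0.6, 0.05)   # 60%-biased coin, 95% confidence -> 150
n90 = flips_needed(0.9, 0.05)   # 90%-biased coin -> 10
```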
More practically, the Chernoff bound is used in randomized algorithms (or in computational devices such as quantum computers) to determine a bound on the number of runs necessary to determine a value by majority agreement, up to a specified probability. For example, suppose an algorithm (or machine) A computes the correct value of a function f with probability p > 1/2. If we choose n satisfying the inequality above, the probability that a majority exists and is equal to the correct value is at least 1 − ε, which for small enough ε is quite reliable. If p is a constant, ε diminishes exponentially with growing n, which is what makes algorithms in the complexity class BPP efficient.
Notice that if p is very close to 1/2, the necessary n can become very large. For example, if p = 1/2 + 1/2^{m}, as it might be in some PP algorithms, the result is that n is bounded below by an exponential function in m:

 n ≥ 2^{2m − 1} ln(1/ε).
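With p − 1/2 = 2^{−m}, the formula from the previous paragraph gives n ≥ 2^{2m − 1} ln(1/ε), so the required number of runs roughly quadruples each time m grows by one. A small sketch, under the same assumed bound as above:

```python
from math import ceil, log

def runs_needed(m: int, eps: float) -> int:
    """Runs for reliable majority agreement when the per-run success
    probability is 1/2 + 2**-m: n >= 2**(2m - 1) * ln(1/eps)."""
    return ceil(2 ** (2 * m - 1) * log(1 / eps))
```

For ε = 0.05, runs_needed(1, 0.05) is 6, while runs_needed(10, 0.05) already exceeds a million.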
The first step in the proof of Chernoff bounds
The Chernoff bound for a random variable X, which is the sum of n independent random variables X_{1}, X_{2}, ..., X_{n}, is obtained by applying Markov's inequality to e^{tX} for some well-chosen value of t. This method was first applied by Sergei Bernstein to prove the related Bernstein inequalities.
From Markov's inequality and using independence we can derive the following useful inequality.

For any t > 0,

 Pr(X ≥ a) = Pr(e^{tX} ≥ e^{ta}) ≤ e^{−ta} E[e^{tX}].

In particular, optimizing over t and using independence we obtain

 Pr(X ≥ a) ≤ min_{t > 0} e^{−ta} ∏_{i=1}^{n} E[e^{tX_{i}}].    (+)

Similarly,

 Pr(X ≤ a) = Pr(e^{−tX} ≥ e^{−ta}) ≤ e^{ta} E[e^{−tX}],

and so,

 Pr(X ≤ a) ≤ min_{t > 0} e^{ta} ∏_{i=1}^{n} E[e^{−tX_{i}}].
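The generic recipe (apply Markov's inequality to e^{tX}, then minimise over t) can be checked numerically. The sketch below approximates the minimum over t on a grid for Bernoulli summands; the grid minimum is at least the true optimum, so it is still a valid tail bound:

```python
from math import ceil, comb, exp

def mgf_tail_bound(n: int, p: float, a: float) -> float:
    """min over a grid of t > 0 of e^{-ta} * prod_i E[e^{t X_i}]
    for i.i.d. Bernoulli(p) summands, where E[e^{t X_i}] = 1 - p + p e^t."""
    return min(exp(-t * a) * (1 - p + p * exp(t)) ** n
               for t in (k / 100 for k in range(1, 500)))

def exact_tail(n: int, p: float, a: float) -> float:
    """Exact Pr[X >= a] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(ceil(a), n + 1))
```

For example, for 100 fair coin flips and threshold 70, the grid bound is a few times 10^{-4} while the exact tail is smaller still.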
Precise statements and proofs
Theorem for additive form (absolute error)
The following theorem is due to Wassily Hoeffding and hence is called the Chernoff–Hoeffding theorem.
Assume the random variables X_{1}, ..., X_{n} are i.i.d. Let Pr(X_{i} = 1) = p, Pr(X_{i} = 0) = 1 − p, and ε > 0. Then

 Pr( (1/n) ∑_{i} X_{i} ≥ p + ε ) ≤ e^{−D(p + ε ‖ p) n}

and

 Pr( (1/n) ∑_{i} X_{i} ≤ p − ε ) ≤ e^{−D(p − ε ‖ p) n},

where

 D(x ‖ y) = x ln(x/y) + (1 − x) ln((1 − x)/(1 − y))

is the Kullback–Leibler divergence between Bernoulli distributed random variables with parameters x and y respectively. If p ≥ 1/2, then

 Pr( ∑_{i} X_{i} > np + x ) ≤ e^{−x^{2} / (2np(1 − p))}.
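The theorem is easy to exercise numerically. A sketch, assuming the statement in its standard form Pr[(1/n)∑X_{i} ≥ p + ε] ≤ e^{−n·D(p + ε ‖ p)}:

```python
from math import ceil, comb, exp, log

def kl_bernoulli(x: float, y: float) -> float:
    """Kullback-Leibler divergence D(x || y) between Bernoulli(x) and Bernoulli(y)."""
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

def chernoff_hoeffding_bound(n: int, p: float, eps: float) -> float:
    """Upper bound exp(-n * D(p + eps || p)) on Pr[(1/n) sum X_i >= p + eps]."""
    return exp(-n * kl_bernoulli(p + eps, p))

def exact_upper_tail(n: int, p: float, eps: float) -> float:
    """Exact Pr[(1/n) sum X_i >= p + eps] for i.i.d. Bernoulli(p)."""
    k0 = ceil(n * (p + eps))
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k0, n + 1))
```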
Proof
The proof starts from the general inequality (+) above. Let q = p + ε. Taking a = nq in (+), we obtain:

 Pr( (1/n) ∑_{i} X_{i} ≥ q ) ≤ inf_{t > 0} e^{−tqn} ∏_{i=1}^{n} E[e^{tX_{i}}] = inf_{t > 0} ( e^{−tq} E[e^{tX_{1}}] )^{n}.

Now, knowing that Pr[X_{i} = 1] = p, Pr[X_{i} = 0] = 1 − p, we have

 ( e^{−tq} E[e^{tX_{1}}] )^{n} = ( e^{−tq} (p e^{t} + 1 − p) )^{n}.

Therefore we can easily compute the infimum, using calculus and some logarithms. Differentiating the logarithm of the expression in parentheses gives

 (d/dt) ( −tq + ln(p e^{t} + 1 − p) ) = −q + p e^{t} / (p e^{t} + 1 − p).

Setting the last expression to zero and solving, we have

 p e^{t} (1 − q) = q (1 − p),

so that e^{t} = q(1 − p) / ((1 − q) p).

Thus, t = ln( q(1 − p) / ((1 − q) p) ).

As q = p + ε > p, we see that t > 0, so our bound is satisfied on t. Having solved for t, we can plug back into the equations above to find that

 −tq + ln(p e^{t} + 1 − p) = −q ln(q/p) − (1 − q) ln((1 − q)/(1 − p)) = −D(q ‖ p).

We now have our desired result, that

 Pr( (1/n) ∑_{i} X_{i} ≥ p + ε ) ≤ e^{−D(p + ε ‖ p) n}.
To complete the proof for the symmetric case, we simply define the random variable Y_{i} = 1 − X_{i}, apply the same proof, and plug it into our bound.
Simpler bounds
A simpler bound follows by relaxing the theorem using D(p + ε ‖ p) ≥ 2ε², which follows from the convexity of D(p + ε ‖ p) in ε and the fact that

 (d²/dε²) D(p + ε ‖ p) = 1 / ((p + ε)(1 − p − ε)) ≥ 4.

This results in a special case of Hoeffding's inequality:

 Pr( (1/n) ∑_{i} X_{i} ≥ p + ε ) ≤ e^{−2ε^{2} n}.

Sometimes, the bound D((1 + x)p ‖ p) ≥ x²p/4 for −1/2 ≤ x ≤ 1/2, which is stronger for p < 1/8, is also used.
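The relaxation assumed here, D(p + ε ‖ p) ≥ 2ε² (the Bernoulli case of Pinsker's inequality), can be spot-checked numerically over a grid of valid parameter pairs:

```python
from math import log

def kl_bernoulli(x: float, y: float) -> float:
    """Kullback-Leibler divergence D(x || y) for Bernoulli parameters."""
    return x * log(x / y) + (1 - x) * log((1 - x) / (1 - y))

# Check D(p + eps || p) >= 2 eps^2 on a grid of valid (p, eps) pairs.
relaxation_holds = all(
    kl_bernoulli(p + eps, p) >= 2 * eps ** 2
    for p in (i / 20 for i in range(1, 20))
    for eps in (j / 100 for j in range(1, 100))
    if 0 < p + eps < 1
)
```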
Theorem for multiplicative form of Chernoff bound (relative error)
Let X_{1}, ..., X_{n} be independent random variables taking on values 0 or 1. Further, assume that Pr(X_{i} = 1) = p_{i}. Then, if we let X = ∑_{i=1}^{n} X_{i} and μ = E[X] be the expectation of X, for any δ > 0

 Pr( X > (1 + δ)μ ) < ( e^{δ} / (1 + δ)^{1 + δ} )^{μ}.
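A numeric illustration, assuming the multiplicative form Pr[X > (1 + δ)μ] < (e^{δ}/(1 + δ)^{1+δ})^{μ} and equal p_{i} (so that X is binomial):

```python
from math import comb, exp, floor

def multiplicative_bound(mu: float, delta: float) -> float:
    """(e^delta / (1 + delta)^(1 + delta))^mu, the multiplicative
    Chernoff bound on Pr[X > (1 + delta) mu]."""
    return (exp(delta) / (1 + delta) ** (1 + delta)) ** mu

def exact_strict_tail(n: int, p: float, threshold: float) -> float:
    """Exact Pr[X > threshold] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(floor(threshold) + 1, n + 1))

n, p, delta = 200, 0.3, 0.5
mu = n * p  # = 60, so (1 + delta) * mu = 90
```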
Proof
According to (+),

 Pr( X > (1 + δ)μ )
  ≤ inf_{t > 0} e^{−t(1 + δ)μ} E[ ∏_{i=1}^{n} e^{tX_{i}} ]
  = inf_{t > 0} e^{−t(1 + δ)μ} ∏_{i=1}^{n} E[ e^{tX_{i}} ]
  = inf_{t > 0} e^{−t(1 + δ)μ} ∏_{i=1}^{n} ( p_{i} e^{t} + 1 − p_{i} ).

The third line above follows because e^{tX_{i}} takes the value e^{t} with probability p_{i} and the value 1 with probability 1 − p_{i}. This is identical to the calculation above in the proof of the theorem for additive form (absolute error).

Rewriting p_{i}e^{t} + (1 − p_{i}) as p_{i}(e^{t} − 1) + 1 and recalling that 1 + x ≤ e^{x} (with strict inequality if x > 0), we set x = p_{i}(e^{t} − 1). The same result can be obtained by directly replacing a in the equation for the Chernoff bound with (1 + δ)μ.^{[1]}

Thus,

 Pr( X > (1 + δ)μ ) < e^{−t(1 + δ)μ} ∏_{i=1}^{n} e^{p_{i}(e^{t} − 1)} = e^{−t(1 + δ)μ} e^{(e^{t} − 1)μ},

since μ = ∑_{i} p_{i}. If we simply set t = ln(1 + δ) so that t > 0 for δ > 0, we can substitute and find

 e^{−t(1 + δ)μ} e^{(e^{t} − 1)μ} = e^{δμ} / (1 + δ)^{(1 + δ)μ} = ( e^{δ} / (1 + δ)^{1 + δ} )^{μ}.
This proves the desired result. A similar proof strategy can be used to show that

 Pr[ X < (1 − δ)μ ] < e^{−μδ^{2}/2}.
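The lower-tail bound above can be illustrated numerically in the same way as the upper tail, again taking all p_{i} equal so that X is binomial:

```python
from math import ceil, comb, exp

def lower_tail_bound(mu: float, delta: float) -> float:
    """exp(-mu * delta^2 / 2), the bound on Pr[X < (1 - delta) mu]."""
    return exp(-mu * delta ** 2 / 2)

def exact_lower_tail(n: int, p: float, threshold: float) -> float:
    """Exact Pr[X < threshold] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(0, ceil(threshold)))

n, p, delta = 200, 0.3, 0.5
mu = n * p  # = 60, so (1 - delta) * mu = 30
```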
Better Chernoff bounds for some special cases
We can obtain stronger bounds using simpler proof techniques for some special cases of symmetric random variables.
Let X_{1}, X_{2}, ..., X_{n} be independent random variables, and let X = X_{1} + ⋯ + X_{n} denote their sum.

(a) Pr(X_{i} = 1) = Pr(X_{i} = −1) = 1/2.

Then, for any a > 0,

 Pr( X ≥ a ) ≤ e^{−a^{2}/(2n)},

and therefore also

 Pr( |X| ≥ a ) ≤ 2 e^{−a^{2}/(2n)}.
(b) Pr(X_{i} = 1) = Pr(X_{i} = 0) = 1/2, so that μ = E[X] = n/2.

Then, for any a > 0 and any 0 < δ < 1,

 Pr( X ≥ μ + a ) ≤ e^{−2a^{2}/n},

 Pr( X ≤ μ − a ) ≤ e^{−2a^{2}/n},

 Pr( X ≥ (1 + δ)μ ) ≤ e^{−δ^{2}μ},

 Pr( X ≤ (1 − δ)μ ) ≤ e^{−δ^{2}μ}.
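Case (a) can be checked exactly: for i.i.d. uniform ±1 steps the sum equals 2B − n with B ~ Binomial(n, 1/2). A sketch, assuming the special-case bound Pr(X ≥ a) ≤ e^{−a²/(2n)}:

```python
from math import ceil, comb, exp

def walk_tail(n: int, a: float) -> float:
    """Exact Pr[X_1 + ... + X_n >= a] for i.i.d. uniform +/-1 steps:
    the sum equals 2*B - n where B ~ Binomial(n, 1/2)."""
    k0 = ceil((n + a) / 2)
    return sum(comb(n, i) for i in range(k0, n + 1)) / 2 ** n

n, a = 100, 30
tail = walk_tail(n, a)          # exact tail of the random walk
bound = exp(-a ** 2 / (2 * n))  # e^{-a^2 / 2n}
```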
Applications of the Chernoff bound
Chernoff bounds have very useful applications in set balancing and packet routing in sparse networks.
The set balancing problem arises in the design of statistical experiments. Typically, given the features of each participant in an experiment, we need to divide the participants into two disjoint groups such that each feature is as nearly balanced as possible between the groups. See Mitzenmacher and Upfal (2005) for more on the problem.
Chernoff bounds are also used to obtain tight bounds for permutation routing problems, which reduce network congestion while routing packets in sparse networks. See Mitzenmacher and Upfal (2005) for a thorough treatment of the problem.
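For set balancing, a standard Chernoff-based argument shows that a uniformly random ±1 split of m participants keeps every one of n features' imbalances below √(4m ln n) with high probability. The sketch below uses a hypothetical random 0/1 feature matrix purely for illustration:

```python
import random
from math import log, sqrt

def worst_imbalance(features, signs):
    """Largest |sum of +/-1 group labels over participants with the
    feature|, i.e. how unbalanced the worst feature is between groups."""
    return max(abs(sum(s * f for s, f in zip(signs, row)))
               for row in features)

rng = random.Random(1)
n_features, m_participants = 50, 400

# Hypothetical 0/1 matrix: row i marks which participants have feature i.
features = [[rng.randint(0, 1) for _ in range(m_participants)]
            for _ in range(n_features)]

# Random split: each participant goes to group +1 or -1 with prob. 1/2.
signs = [rng.choice((-1, 1)) for _ in range(m_participants)]

imbalance = worst_imbalance(features, signs)
guarantee = sqrt(4 * m_participants * log(n_features))  # about 79.1 here
```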
Matrix Chernoff bound
Rudolf Ahlswede and Andreas Winter introduced a Chernoff bound for matrix-valued random variables.
See also
 Bernstein inequalities (probability theory)
 Hoeffding's inequality
 Markov's inequality
 Chebyshev's inequality
References
 ^ Refer to the proof above
 Chernoff, H. (1952). "A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations". Annals of Mathematical Statistics 23 (4): 493–507. doi:10.1214/aoms/1177729330. JSTOR 2236576. MR57518. Zbl 0048.11804.
 Hoeffding, W. (1963). "Probability Inequalities for Sums of Bounded Random Variables". Journal of the American Statistical Association 58 (301): 13–30. doi:10.2307/2282952. JSTOR 2282952.
 Chernoff, H. (1981). "A Note on an Inequality Involving the Normal Distribution". The Annals of Probability 9 (3): 533. doi:10.1214/aop/1176994428. JSTOR 2243541. MR614640. Zbl 0457.60014.
 Hagerup, T. (1990). "A guided tour of Chernoff bounds". Information Processing Letters 33 (6): 305. doi:10.1016/0020-0190(90)90214-I.
 Ahlswede, R.; Winter, A. (2003). "Strong Converse for Identification via Quantum Channels". IEEE Transactions on Information Theory 48 (3): 569–579. arXiv:quant-ph/0012127.
 Mitzenmacher, M.; Upfal, E. (2005). Probability and Computing: Randomized Algorithms and Probabilistic Analysis. ISBN 9780521835404. http://books.google.com/books?id=0bAYl6d7hvkC.
 Nielsen, F. (2011). "Chernoff information of exponential families". arXiv:1102.2684 [cs.IT].