Gambler's fallacy

The gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example occurred at the Monte Carlo Casino in 1913)[1], and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.

For example, if a fair coin is tossed repeatedly and tails comes up a larger number of times than expected, a gambler may incorrectly believe that heads is more likely in future tosses.[2] Such an expectation is often described as the event being "due", and it probably arises from everyday experience with nonrandom events (such as a scheduled train that is late, which can be expected to have a greater chance of arriving the later it gets). This is an informal fallacy. It is also known colloquially as the law of averages.

What is true instead is the law of large numbers – in the long run, the average of independent trials tends to approach the expected value, even though individual trials are independent – and regression toward the mean, namely that following a rare extreme event (say, a run of 10 heads), the next event is likely to be less extreme (the next run of heads is likely to be shorter than 10), simply because extreme events are rare.

The gambler's fallacy implicitly involves an assertion of negative correlation between trials of the random process and therefore involves a denial of the exchangeability of outcomes of the random process. In other words, one implicitly assigns a higher chance of occurrence to an event even though from the point of view of "nature" or the "experiment", all such events are equally probable (or distributed in a known way).

The reversal is also a fallacy, in which a gambler may instead decide that tails is more likely, out of a mystical preconception that fate has thus far allowed for a consistent run of tails; the false conclusion being: why change if the odds favor tails? Again, the fallacy is the belief that the "universe" somehow carries a memory of past results which tends to favor or disfavor future outcomes.

The conclusion of this reversed gambler's fallacy may be correct, however, if the empirical evidence suggests that an initial assumption about the probability distribution is false. If a coin is tossed ten times and lands "heads" ten times, the gambler's fallacy would suggest an even-money bet on "tails", while the reverse gambler's fallacy (not to be confused with the inverse gambler's fallacy) would suggest an even-money bet on "heads". In this case, the smart bet is "heads" because the empirical evidence—ten "heads" in a row—suggests that the coin is likely to be biased toward "heads", contradicting the (general) assumption that the coin is fair.

An example: coin-tossing

Simulation of coin tosses: Each frame, a coin is flipped which is red on one side and blue on the other. The result of each flip is added as a colored dot in the corresponding column. As the pie chart shows, the proportion of red versus blue approaches 50-50 (the Law of Large Numbers). But the difference between red and blue does not systematically decrease to zero.
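The behavior described in the figure caption can be sketched in a few lines of Python: the running proportion of heads settles toward 1/2, while the absolute gap between head and tail counts does not systematically shrink. (The seed and sample sizes are illustrative choices, not part of the original animation.)

```python
import random

# Simulate cumulative fair-coin tosses, checking in at several sample sizes.
random.seed(0)
heads = tails = 0
for n in (100, 10_000, 1_000_000):
    while heads + tails < n:
        if random.random() < 0.5:
            heads += 1
        else:
            tails += 1
    # The proportion approaches 0.5, but |heads - tails| need not shrink.
    print(f"n={n:>9}: proportion of heads = {heads/n:.4f}, "
          f"|heads - tails| = {abs(heads - tails)}")
```

The law of large numbers constrains the ratio, not the raw difference, which is exactly the distinction the gambler's fallacy misses.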

The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. With a fair coin, the outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is exactly 1/2 (one in two). It follows that the probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight). In general, if we let A_i be the event that toss i of a fair coin comes up heads, then we have,

\Pr\left(\bigcap_{i=1}^n A_i\right)=\prod_{i=1}^n \Pr(A_i)={1\over2^n}.

Now suppose that we have just tossed four heads in a row, so that if the next coin toss were also to come up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is only 1/32 (one in thirty-two), a believer in the gambler's fallacy might think that this next flip is less likely to be heads than tails. However, this is not correct, and is a manifestation of the gambler's fallacy; the event of 5 heads in a row and the event of "first 4 heads, then a tail" are equally likely, each having probability 1/32. Given that the first four tosses turned up heads, the probability that the next toss is a head is in fact,

\Pr\left(A_5|A_1 \cap A_2 \cap A_3 \cap A_4 \right)=\Pr\left(A_5\right)=\frac{1}{2}.

While the probability of a run of five heads is only 1/32 = 0.03125, that is true only before the coin is first tossed. After the first four tosses, the results are no longer unknown, so their probabilities are 1. The fallacy lies in reasoning that the next toss is more likely to be a tail than a head because of the past tosses – that a run of luck in the past somehow influences the odds in the future.
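The conditional probability above can be checked empirically. A simple sketch: simulate many 5-toss sequences, keep only those whose first four tosses are heads, and observe that the fifth toss is heads about half the time (seed and trial count are arbitrary choices).

```python
import random

# Estimate P(heads on toss 5 | first four tosses were heads) by simulation.
random.seed(1)
runs_of_four = fifth_heads = 0
for _ in range(400_000):
    tosses = [random.random() < 0.5 for _ in range(5)]
    if all(tosses[:4]):              # condition: first four tosses were heads
        runs_of_four += 1
        fifth_heads += tosses[4]     # count when the fifth is also heads
print(f"P(heads on 5th | 4 heads) ≈ {fifth_heads / runs_of_four:.3f}")
```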

Explaining why the probability is 1/2 for a fair coin

We can see from the above that, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20 heads in a row is simply 1/2. This is an application of Bayes' theorem.

This can also be seen without knowing that 20 heads have occurred for certain (without applying Bayes' theorem). Consider the following two probabilities, assuming a fair coin:

  • probability of 20 heads, then 1 tail = 0.5^20 × 0.5 = 0.5^21
  • probability of 20 heads, then 1 head = 0.5^20 × 0.5 = 0.5^21

The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head, are both 1 in 2,097,152. Therefore, it is equally likely to flip 21 heads as it is to flip 20 heads and then 1 tail when flipping a fair coin 21 times. Furthermore, these two outcomes are as likely as any other 21-flip combination (there are 2,097,152 in total); every 21-flip combination has probability 0.5^21, or 1 in 2,097,152. From these observations, there is no reason to assume at any point that a change of luck is warranted based on prior trials (flips), because every outcome observed will always have been as likely as the other outcomes that were not observed for that particular trial, given a fair coin. Therefore, just as Bayes' theorem shows, the result of each trial comes down to the base probability of the fair coin: 1/2.
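The equality of the two probabilities is a one-line computation. A minimal sketch:

```python
# Under a fair coin, "20 heads then a tail" and "21 heads" are the same product.
p = 0.5
p_20_heads_then_tail = p**20 * p     # H...H T  (20 heads, 1 tail)
p_21_heads           = p**20 * p     # H...H H  (21 heads)
assert p_20_heads_then_tail == p_21_heads == 0.5**21
print(f"each sequence: 1 in {round(1 / p_21_heads):,}")   # 1 in 2,097,152
```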

Other examples

There is another way to emphasize the fallacy. As already mentioned, the fallacy is built on the notion that previous failures indicate an increased probability of success on subsequent attempts. In fact, the opposite happens: given a fixed number of attempts, the chance of eventual success shrinks as failures accumulate. Assume a fair 16-sided die, where a win is defined as rolling a 1, and suppose a player is given 16 rolls to obtain at least one win. The low winning odds are chosen to make the change in probability more noticeable. The probability of having at least one win in the 16 rolls, i.e. 1 − P(no ones in 16 rolls), is:

1-\left(\frac{15}{16}\right)^{16} \approx 64.39\%

However, assume now that the first roll was a loss (a 15/16, or 93.75%, chance of that). The player now has only 15 rolls left and, according to the fallacy, should have a higher chance of winning since one loss has occurred. His chances of having at least one win are now:

1-\left(\frac{15}{16}\right)^{15} \approx 62.02\%

Simply by losing one toss, the player's probability of winning dropped by about 2.4 percentage points. By the time this reaches 5 losses (11 rolls left), his probability of winning on one of the remaining rolls will have dropped to about 51%. The player's odds of at least one win in those 16 rolls have not increased given a series of losses; his odds have decreased because he has fewer iterations left in which to win. In other words, the previous losses in no way contribute to the odds of the remaining attempts; there are simply fewer remaining attempts in which to gain a win, which results in a lower probability of obtaining one.

The player becomes more likely to lose in a set number of iterations as he fails to win, and eventually his probability of winning will again equal the probability of winning a single toss, when only one toss is left: 6.25% in this instance.
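The numbers in the example above follow directly from the complement rule. A minimal sketch (the function name is an illustrative choice):

```python
# Probability of at least one win (rolling a 1 on a fair 16-sided die)
# given the number of rolls remaining: 1 - P(all remaining rolls lose).
def p_at_least_one_win(rolls_left: int, p_win: float = 1 / 16) -> float:
    return 1 - (1 - p_win) ** rolls_left

for n in (16, 15, 11, 1):
    print(f"{n:>2} rolls left: {p_at_least_one_win(n):.2%}")
```

The probability decays monotonically as rolls are used up, reaching the single-toss probability of 6.25% on the last roll.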

Some lottery players will choose the same numbers every time, or intentionally change their numbers, but both are equally likely to win any individual lottery draw. Copying the numbers that won the previous lottery draw gives an equal probability, although a rational gambler might attempt to predict other players' choices and then deliberately avoid these numbers. Low numbers (below 31 and especially below 12) are popular because people play birthdays as their so-called lucky numbers; hence a win in which these numbers are over-represented is more likely to result in a shared payout.

A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an aircraft, a man decides to always bring a bomb with him. "The chances of an aircraft having a bomb on it are very small," he reasons, "and certainly the chances of having two are almost none!" A similar example is in the book The World According to Garp when the hero Garp decides to buy a house a moment after a small plane crashes into it, reasoning that the chances of another plane hitting the house have just dropped to zero.

A real-world example is that couples who have had several children of the same sex may come to believe that their next child is more likely to be of the opposite sex, much as Henry VIII of England so desperately hoped for a son. While the Trivers–Willard hypothesis suggests that a woman's likelihood of bearing a male rather than a female child may shift slightly over the course of her life, the chance of either sex remains very close to 50% for each birth, regardless of what the parents may hope for their next child.

The most famous example occurred at the Monte Carlo Casino in the summer of 1913, when the ball fell on black 26 times in a row, an extremely uncommon occurrence (though no more or less common than any of the other 67,108,863 sequences of 26 spins, neglecting the 0 and 00 spots on the wheel). Gamblers lost millions of francs betting against black after the streak began, reasoning incorrectly that the streak was causing an "imbalance" in the randomness of the wheel, and that it had to be followed by a long streak of red.[1]

Non-examples of the fallacy

There are many scenarios where the gambler's fallacy might superficially seem to apply but actually does not. When the outcomes of different events are not independent, the probability of future events can change based on the outcome of past events (see statistical permutation). Formally, the system is said to have memory. An example of this is cards drawn without replacement. If an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank. The odds of drawing another ace, assuming that it was the first card drawn and that there are no jokers, have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%). This type of effect is what allows card counting schemes to work (for example, in the game of blackjack).
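The shift in odds after drawing without replacement can be computed exactly with rational arithmetic. A short sketch:

```python
from fractions import Fraction

# After one ace is removed from a standard 52-card deck, the per-rank
# probabilities for the next draw shift: aces become rarer, other ranks
# relatively more likely. Fraction keeps the arithmetic exact.
p_ace_before = Fraction(4, 52)   # 4 aces among 52 cards ≈ 7.69%
p_ace_after  = Fraction(3, 51)   # 3 aces among 51 cards ≈ 5.88%
p_other_rank = Fraction(4, 51)   # each of the other 12 ranks ≈ 7.84%
print(f"ace before: {float(p_ace_before):.2%}, "
      f"ace after: {float(p_ace_after):.2%}, "
      f"other rank: {float(p_other_rank):.2%}")
```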

Meanwhile, the reversed gambler's fallacy may appear to apply in the story of Joseph Jagger, who hired clerks to record the results of roulette wheels in Monte Carlo. He discovered that one wheel favored nine numbers and won large sums of money until the casino started rebalancing the roulette wheels daily. In this situation, the observation of the wheel's behavior provided information about the physical properties of the wheel rather than its "probability" in some abstract sense, a concept which is the basis of both the gambler's fallacy and its reversal. Even a biased wheel's past results will not affect future results, but the results can provide information about what sort of results the wheel tends to produce. However, if it is known for certain that the wheel is completely fair, then past results provide no information about future ones.

The outcome of future events can be affected if external factors are allowed to change the probability of the events (e.g., changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced player's success may decrease after opposing teams discover his or her weaknesses and exploit them. The player must then attempt to compensate and randomize his strategy. See Game theory.

Many riddles trick the reader into believing that they are an example of the gambler's fallacy, such as the Monty Hall problem.

Non-example: unknown probability of event

When the probability of repeated events is not known, outcomes may not be equally probable. In the case of coin tossing, as a run of heads gets longer and longer, the likelihood that the coin is biased towards heads increases. If one flips a coin 21 times in a row and obtains 21 heads, one might rationally conclude a high probability of bias towards heads, and hence conclude that future flips of this coin are also highly likely to be heads. In fact, Bayesian inference can be used to show that when the long-run proportions of different outcomes are unknown but exchangeable (meaning that the random process from which they are generated may be biased but is equally likely to be biased in any direction), previous observations demonstrate the likely direction of the bias, such that the outcome which has occurred the most in the observed data is the most likely to occur again.[3]
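This Bayesian reasoning can be made concrete with a minimal sketch. Assuming a uniform Beta(1, 1) prior over the coin's unknown heads-probability (an illustrative choice, not a claim from the cited paper), the posterior predictive probability of heads after observing the data is given by Laplace's rule of succession:

```python
# Posterior predictive P(next flip is heads) under a uniform Beta(1, 1)
# prior: (heads observed + 1) / (flips observed + 2), Laplace's rule.
def predictive_p_heads(heads: int, flips: int) -> float:
    return (heads + 1) / (flips + 2)

# After 21 heads in 21 flips, the rational bet is strongly on heads.
print(f"{predictive_p_heads(21, 21):.3f}")   # ≈ 0.957, not 0.5
```

With no observations the predictive probability is the prior mean of 0.5; a long run of heads pushes it toward 1, which is exactly why past results are informative when the bias is unknown.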

Psychology behind the fallacy

Amos Tversky and Daniel Kahneman proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic.[4][5] According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red",[6] so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance;[7] Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones.[8]

The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect.[9]

References

  1. ^ a b Lehrer, Jonah (2009). How We Decide. New York: Houghton Mifflin Harcourt. p. 66. ISBN 978-0-618-62011-1. 
  2. ^ Colman, Andrew (2001). "Gambler's Fallacy - Encyclopedia.com". A Dictionary of Psychology. Oxford University Press. http://www.encyclopedia.com/doc/1O87-gamblersfallacy.html. Retrieved 2007-11-26. 
  3. ^ O'Neill, B. and Puza, B.D. (2004) Dice have no memories but I do: A defence of the reverse gambler's belief. Reprinted in abridged form as O'Neill, B. and Puza, B.D. (2005) In defence of the reverse gambler's belief. The Mathematical Scientist 30(1), pp. 13–16.
  4. ^ Tversky, Amos; Daniel Kahneman (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124–1131. doi:10.1126/science.185.4157.1124. PMID 17835457. 
  5. ^ Tversky, Amos; Daniel Kahneman (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105–110. doi:10.1037/h0031322. 
  6. ^ Tversky & Kahneman, 1974.
  7. ^ Tune, G.S. (1964). "Response preferences: A review of some relevant literature". Psychological Bulletin 61 (4): 286–302. doi:10.1037/h0048618. PMID 14140335. 
  8. ^ Tversky & Kahneman, 1971.
  9. ^ Gilovich, Thomas (1991). How we know what isn't so. New York: The Free Press. pp. 16–19. ISBN 0-02-911706-2. 
