
Talk:Checking whether a coin is fair


PDF


"The graph on the right shows the probability density function of r" - Is this really the case? Isn't the area under the curve in a PDF always 1? This doesn't seem to be the case in the graph. — Preceding unsigned comment added by 213.142.96.115 (talk) 11:59, 22 July 2020 (UTC)[reply]

That indeed is a PDF. The integral from 0 to 1 (the entire probability space) of that function is 1 as per WolframAlpha. 78.1.182.236 (talk) 16:48, 18 April 2022 (UTC)[reply]
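For reference, and assuming the plotted curve is the article's posterior density $f(r) = 1320\,r^7(1-r)^3$ (a Beta(8, 4) density; that specific form is my reading of the article, not stated in this thread), the normalisation can be checked by hand:

$\int_0^1 1320\, r^7 (1-r)^3 \, dr = 1320 \cdot \frac{7!\,3!}{11!} = \frac{1320}{1320} = 1.$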

Correct answer


To check the bias of the coin, given the event of $h$ heads out of $n$ tosses, one has to calculate the probability of such an event occurring given that the coin is fair. Then one has to decide whether the coin is fair based on whether such a probability is acceptable or not. The probability of the event, given a fair coin, is equal to

$\Pr\left(\left|\tfrac{H}{n} - \tfrac{1}{2}\right| \ge d\right) = 2^{-n} \sum_{k\,:\,|k/n - 1/2| \ge d} \binom{n}{k},$

where $d = |h/n - 1/2|$ is the observed deviation. The formula accounts for possible deviation either way as well as deviations larger than observed. For example, if one observes 7 heads out of 10 tosses, the probability of such a deviation from the expected value of .5 is equal to

$2^{-10}\left[\binom{10}{0}+\binom{10}{1}+\binom{10}{2}+\binom{10}{3}+\binom{10}{7}+\binom{10}{8}+\binom{10}{9}+\binom{10}{10}\right] = \frac{352}{1024} \approx 0.34.$

Then one has to decide whether such a probability is acceptable. In the example above, the event is not uncommon, so the coin may well be fair. The fact that the observation may be more probable given a somewhat unfair coin is irrelevant.

(Igny 18:12, 19 September 2005 (UTC))[reply]
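For anyone who wants to check the arithmetic above, here is a minimal Python sketch of the two-sided calculation (illustrative only; the variable names are mine, and the deviation is counted in heads rather than in proportions, which is equivalent):

```python
from math import comb

n, h = 10, 7                       # tosses and observed heads
d = abs(h - n / 2)                 # observed deviation from the expected n/2 = 5 heads

# Probability, under a fair coin, of a deviation at least this large in either direction
p = sum(comb(n, k) for k in range(n + 1) if abs(k - n / 2) >= d) / 2 ** n
print(p)                           # 0.34375 -- not an uncommon event
```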

It seems to me that the article was written by an undergraduate student, possibly following a solution by a poorly informed tutor. Likely both failed the introductory statistics course that semester. Unless there are strong objections, I will edit the article a bit later. (Igny 18:25, 19 September 2005 (UTC))[reply]

Isn't this what you are talking about? : Would you like to write a new section in the article using your method? Does this method assume that the coin is fair to begin with, i.e. fair until proven unfair? Ohanian 13:20, 3 November 2005 (UTC)[reply]

Other applications


The above mathematical analysis for determining if a coin is fair can also be applied to other uses. Some examples of other uses include:


  • Determining the defect rate of a product when subjected to a particular (but well-defined) condition.

Sometimes a product can be very difficult or expensive to make. Furthermore, if testing such products results in their destruction, as few of them as possible should be tested. Using the same analysis, the probability density function of the product defect rate can be found.


  • Two-party polling. If a small-sample poll is taken where there are only two mutually exclusive choices, then this is equivalent to tossing a single biased coin multiple times. The same analysis can therefore be applied to determine the actual voting ratio.


  • Finding the proportion of females in an animal group.

Determining the gender ratio in a large group of an animal species. Provided that the random sample is very small relative to the population, the analysis is similar to determining the probability of obtaining heads in a coin toss.

By Ohanian 01:45, 2005 Mar 26 (UTC)

Accuracy of article


The article manages to say that tossing a coin 10 times and getting 7 heads is enough to say "one may be pretty confident that the coin is indeed biased". No statistician of whatever philosophy would accept that. In fact getting 7 out of 10 from a fair coin is not uncommon.

Most Bayesian statisticians would not accept the argument given through the article. In fact the article has a mixture of classical hypothesis testing and some Bayesian language, and manages to come out with strange conclusions as a result. The talk about a normal curve and the central limit theorem is odd, given that the drawn curve is clearly a beta distribution.

The article on Bayes factors gives some ideas about the way Bayesians might go (and 7 out of 10, like the example there, could in fact minimally increase their belief that the coin was fair), but in fact they base their decision theory not on confidence intervals, but on minimising the expected loss when combining their posterior probability distribution and their loss function (i.e. their prior probability distribution, the likelihood of the evidence and their loss function). In my view this article should either use Bayesian methods fully (best) or frequentist language and analysis (second best). That is of course assuming that the article adds value at all.--Henrygb 01:01, 8 Apr 2005 (UTC)

Right. I'd rewritten portions of this article, but the latter part still doesn't make sense to me. The conclusion of the first part could be radically restated, simply showing the Beta distribution plot as the posterior density based on 7 heads and 3 tails and a uniform prior. I think everyone would agree up to this point. What conclusions to draw from the posterior is an entirely different matter that deserves a better discussion. --MarkSweep 01:43, 8 Apr 2005 (UTC)


The first part, about using Bayesian methods, is mathematically correct, though not well written. The prior should be stated (and justified) first, then the posterior derived. He has the posterior mentioned first, then says what prior he used. Secondly, I would say the strategy of estimating the binomial parameter is a poor one. It's not the best way of answering the question. The best way is to set it up as a Bayesian hypothesis test. Let H1 be the hypothesis that the coin is fair, H2 that it is not. Give them both a prior probability of one half, then compute the Bayes factor.
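Concretely, that test might look like the sketch below (my illustration only; the uniform prior for the bias under H2 is an assumption the comment leaves open):

```python
from math import comb
from scipy.integrate import quad

n, h = 10, 7

# H1: the coin is fair, Pr(heads) = 1/2
like_h1 = comb(n, h) * 0.5 ** n

# H2: the coin has unknown bias p, here given a uniform prior on [0, 1]
like_h2, _ = quad(lambda p: comb(n, h) * p ** h * (1 - p) ** (n - h), 0, 1)

print(like_h1 / like_h2)   # Bayes factor ~1.29: 7 heads in 10 very weakly favours the fair coin
```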

The second part switches to classical methods used for sample size determination. As others have said, it would be best to do that part by Bayesian methods. I would approach it as follows. Bayesians consider inference to be a special case of decision-making under uncertainty, where the actions being contemplated are to report particular values. Here we would have to specify a loss function that defines how much it 'costs' us (in some sense) to toss the coin, versus the 'cost' (consequences) of not knowing. If that loss function can be defined, we can compute the expected reduction in loss corresponding to one more coin toss. If it is positive we should toss again. For a sensible loss function this reduction in loss will eventually be negative and we should stop tossing.

Blaise 18:13, 20 Apr 2005 (UTC)
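A toy version of the stopping rule described in the second paragraph above (a sketch only: quadratic loss for reporting the posterior mean, a uniform Beta(1, 1) prior, and an arbitrary per-toss cost are my assumptions, not taken from the comment):

```python
import random

def post_var(a, b):
    # Variance of a Beta(a, b) posterior = expected quadratic loss when reporting its mean
    return a * b / ((a + b) ** 2 * (a + b + 1))

cost_per_toss = 1e-4      # hypothetical cost of one more toss, on the same scale as the loss
a, b = 1, 1               # uniform Beta(1, 1) prior on the heads probability
tosses = 0

while True:
    p_head = a / (a + b)  # posterior predictive probability of heads
    expected_next = p_head * post_var(a + 1, b) + (1 - p_head) * post_var(a, b + 1)
    if post_var(a, b) - expected_next <= cost_per_toss:
        break             # one more toss is no longer expected to pay for itself
    if random.random() < 0.5:   # simulate tossing a genuinely fair coin
        a += 1
    else:
        b += 1
    tosses += 1

print(tosses, a - 1, b - 1)     # tosses made, heads observed, tails observed
```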


Hello, Please let me respond to this. The problem is basically the interpretation of the results.

Firstly, before getting any further, let me define an "unbiased coin" very loosely as 0.45 < Pr(head) < 0.55. This is a very lousy definition but I need one to make my case.

Now, why were 10 coin tosses chosen in the example section? The answer is simple: to make the factorial calculations easy. 11! (eleven factorial), 7! (seven factorial) and 3! (three factorial) are very easy to calculate on a typical scientific calculator.

If a more reasonable number of coin tosses were chosen, say 10,000, it would be impossible to calculate the factorials using a high-school calculator. This would defeat the whole purpose of the example section, which is to make things clearer to the reader.

Next, the interpretation of the result.

Before even a single toss of the coin is performed, the odds are already stacked against an unbiased coin.

This is because the prior distribution of the coin's probability of heads was ASSUMED to be UNIFORM. Under this distribution the probability that the coin is unbiased (before any coin tossing) is 0.10, while the probability that it is biased is 0.90.

Compare this with a normal person's everyday experience of coins. Most people's experience is that coins are mostly unbiased (even before any coin tossing is performed).

So after tossing the coin 10 times and getting 7 heads and 3 tails, the calculations suggest that the probability that "the toss results came from an unbiased coin" is 13%, and that the probability that "the toss results came from a biased coin" is 87%.

Note that this does NOT say that the coin is biased; merely that the results are more likely to have come from a biased coin GIVEN the UNIFORM prior distribution (where the odds are already stacked against an unbiased coin).

The number of tosses, 10, is a red herring, because the probability calculations are correct (given the prior assumptions).

Let me conclude by giving you this story. Suppose:

An earth astronaut has arrived at a planet in the Alpha Centauri system. He noticed that the local aliens play a game where an irregularly shaped object with a red dot is tossed. He only witnessed the aliens tossing the object 10 times, in which the object landed with the dot facing up 7 times and facing down 3 times. From this, the astronaut did his calculations and concluded that there is only a 13% chance that the object is fair (unbiased).

What would you say about the above story? The mathematical calculations are EXACTLY the same.

Ohanian 08:15, 2005 Apr 8 (UTC)

I would say that before he saw the evidence he had a personal probability of 0.1 that the coin was fair, and after seeing the evidence he had a personal probability of 0.13 that it was fair. In other words, the evidence of 7 out of 10 pointed very slightly towards the coin being fair - as I said above and using Bayes factors would show. But if I had been the astronaut, I doubt I would have started with the same prior probability, and I would have taken into account any costs or consequences of making a wrong decision and how wrong my decision was. --Henrygb 09:16, 8 Apr 2005 (UTC)
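The 0.10 and 0.13 figures quoted above can be reproduced from the Beta posterior (a small Python sketch, assuming the uniform prior and the 0.45–0.55 definition of "unbiased" used earlier in this thread):

```python
from scipy.stats import beta

h, t = 7, 3
posterior = beta(h + 1, t + 1)             # Beta(8, 4): posterior for r under a uniform prior

prior_fair = 0.55 - 0.45                   # prior mass on 0.45 < r < 0.55 under the uniform prior
post_fair = posterior.cdf(0.55) - posterior.cdf(0.45)

print(prior_fair, post_fair)               # 0.10 before tossing, roughly 0.13 afterwards
```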
The Bayesian discussion doesn't describe how most Bayesians would approach the problem. The problem should be framed as a sharp null hypothesis versus a vague alternative hypothesis. That is, the sharp null hypothesis is that the coin is exactly fair, and the vague alternative hypothesis is that the coin has some bias $p \neq 1/2$, with a prior distribution on $p$. One would then calculate the Bayes factor as the probability of obtaining the data given no bias, divided by the integrated (over all $p$) probability of obtaining the data given that the coin has bias $p$, with the weight for a particular bias being given by the prior on $p$.
What the author of this article has done, instead, is to calculate, given that the coin is biased, what the probability of the bias lying in a particular interval is. As a Bayesian hypothesis test this is completely flawed, because it never considers probabilities calculated on the actual null hypothesis of no bias.
See, for example, the articles by Berger and Delampady and Berger and Sellke (access to jstor.org required).
Henrygb's comments about using decision theory (loss function and all that) are correct and appropriate. As it is, the article is seriously misleading, and wrong. Personally, I think the article is redundant...everything in it is better discussed elsewhere in WikiPedia. Bill Jefferys 14:54, 8 September 2006 (UTC)[reply]
I agree. It is also misleading to state that , which is simply WRONG. In fact the author defined for and else. This is not only already an assumption about the coin but also violates the condition . The calculation is therefore wrong, isn't it? Herrkami (talk) 11:30, 26 November 2013 (UTC)[reply]

"How-To" Format Changes/Candidacy for Movement


I've gone through and changed most of the language that made this seem like a how-to guide, and removed the "candidate to be moved to Wikibooks" notice accordingly. I got rid of all the unnecessary "we"'s, "you"'s, "one"'s, et cetera in favor of more straightforward wording.

However, I can't say I've improved the factual accuracy of the article significantly (and I don't have time to research the material in question at the moment), so I've left the corresponding notice up for now. - 68.20.21.191 04:53, 15 Apr 2005 (UTC)

Mergers and acquisitions


Do we need both this article and Checking if a coin is biased?

Blaise 09:10, 28 Apr 2005 (UTC)

What a mess. Those are two incompatible forks of coin flipping. Merging in the changes from ...biased will be "fun". I added a notice to that article, because the present article contains more material and was edited more recently. --MarkSweep 16:35, 28 Apr 2005 (UTC)
Sure. So does that mean we go into "merge into" and "merged with" tags? I think this page should be the final page. But they are more or less the same, so I say merge. HereToHelp 12:37, 13 October 2005 (UTC)[reply]

Has anyone ever done experiments to deduce the fairness of a coin?


I am very disappointed with this article, because it is all about thinking inside a man's mind (all with mathematics, which is also all in man's mind), on how a coin will come up heads or tails when it is tossed a number of times.

It is so easy for today's inventors to just rig up a machine that will flip a specific coin and record its results, heads or tails, and thereby we would know whether a coin is biased or not.

For my part, there cannot ever be a non-biased coin, meaning one that in theory is so completely balanced that no matter how many times the specific coin is tossed, it will return 50% heads and 50% tails.

Besides, there are factors in nature outside the coin itself, which man cannot control at all, and which also affect how the coin will land, heads or tails.

So the article is useless, at least as regards my expectation that it would present results from actual tossings of specific coins a fixed number of times, WHEN today such an experiment is so very easy to carry out! Pachomius2000 (talk) 09:11, 9 November 2017 (UTC)Pachomius[reply]


A friend of mine is a very competent statistician. Some time ago he did a number of experiments with a variety of coins he made, with uneven weights, shapes, etc. He found that there is no such thing as a biased coin. His results are unpublished, although I'd like him to publish them; I'm wondering if anyone has done the same experiment?

Well, a statistician wouldn't be the man for the job; a physicist or computer simulator would. -Grick(talk to me!) 05:14, Jun 16, 2005 (UTC)
How's this? Coin toss simulator Coolgamer 16:35, Jun 22, 2005 (UTC)

Cleanup tag


I've moved this cleanup tag from the article:

While the article could certainly be improved, it's not obvious to me why this tag is warranted. Enchanter 21:28, 30 October 2005 (UTC)[reply]

Wow.


Jaw-droppingly unencyclopedic. This thing needs deletion. --frothT C 07:45, 29 December 2006 (UTC)[reply]

I thought it was a pretty good example of how significance testing can be applied to real life. --M1ss1ontomars2k4 (T | C | @) 01:26, 7 February 2007 (UTC)[reply]

This is an example of an experiment where you fundamentally do not know what will happen. Keep. Ancheta Wis 11:54, 14 February 2007 (UTC)[reply]

Fair coin

Moved here from User talk:Trialsanderrors

I've moved checking if a coin is fair back to that title (and the title perhaps could be improved) and created a new article titled fair coin. The latter topic is much broader than the narrower topic of checking if a coin is fair. Possibly fair coin should redirect to Bernoulli trial; I will think about that. One goes through graduate school reading incessantly about (metaphorically named) "fair coins" and one reads innumerable scholarly papers relying on the concept of "fair coins"; and the topic is not ONLY about statistical hypothesis testing treating only that one hypothesis. Michael Hardy 20:35, 19 February 2007 (UTC)[reply]

Well "Checking if a coin is fair" has no encyclopedic standing, since the title itself implies a how-to. The article needs to expand on the statistical concenpt of a fair coin and the discrepancy between theoretical construct and real world application, but it should be done in conjunction. ~ trialsanderrors 21:03, 19 February 2007 (UTC)[reply]
Your definition at fair coin is incorrect too. A fair coin is a Bernoulli random variable with p = 0.5, a Bernoulli trial is the coin toss, and a Bernoulli process is the series of coin tosses. ~ trialsanderrors 06:15, 20 February 2007 (UTC)[reply]

What the heck is "t"?


At the top of the "Estimator of True Probability" section, there's a little box with an equation in it. Not being a statistician, I have no idea what the "t" in it is supposed to be. There is seemingly no reference to it in the article... 65.183.135.40 (talk) 03:40, 18 February 2008 (UTC)[reply]

See first paragraph of the previous section. h and t are the numbers of heads and tails respectively.77.73.111.214 (talk) 14:48, 19 February 2008 (UTC)[reply]

Problem with standard error formula


Yet another problem here is that the usual standard error formula is not a very good one for estimating binomial proportions. See e.g. Agresti, A. and Coull, B.A. (1998). "Approximate is better than 'exact' for interval estimation of binomial proportions". The American Statistician 52: 119–126. Plf515 (talk) 15:56, 13 March 2008 (UTC) Not sure what's wrong with the citation... the authors are Alan Agresti and B.A. Coull. Plf515 (talk) 15:56, 13 March 2008 (UTC)[reply]
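To see the difference concretely, here is a small Python sketch comparing the usual (Wald) interval with the Agresti–Coull interval for the running example of 7 heads in 10 tosses (illustrative only):

```python
from math import sqrt

h, n, z = 7, 10, 1.96      # 7 heads in 10 tosses, roughly 95% confidence

# Wald ("usual standard error") interval
p = h / n
se = sqrt(p * (1 - p) / n)
wald = (p - z * se, p + z * se)

# Agresti-Coull interval: add z^2/2 pseudo-heads and z^2/2 pseudo-tails before computing
n_ac = n + z ** 2
p_ac = (h + z ** 2 / 2) / n_ac
se_ac = sqrt(p_ac * (1 - p_ac) / n_ac)
agresti_coull = (p_ac - z * se_ac, p_ac + z * se_ac)

print(wald)             # upper limit nearly 1 for this small n
print(agresti_coull)    # pulled towards 0.5; better coverage in general
```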

Understandability


Breaks down when the article starts speaking about priors without saying what they are or linking to their definition. Furthermore, saying that $r$ is a probability, and then using the notation $\mathrm{Pr}(event)$, doesn't help. I just stopped reading; what are events and what are probabilities should be absolutely clear from the notation. —Preceding unsigned comment added by 138.246.7.147 (talk) 09:55, 14 March 2008 (UTC)[reply]

Math error?


In the article, the example is used: "For example, let n=10, h=7, i.e. the coin is tossed 10 times and 7 heads are obtained:"

So why does the n become h+t+1? Isn't that like saying the coin is tossed 11 times? Shouldn't it be h+t? i.e. 10?

Aatombomb (talk) 22:31, 26 March 2008 (UTC)[reply]
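For reference, the +1 comes from the normalising integral of the posterior density, not from an extra toss: with a uniform prior,

$\int_0^1 r^h (1-r)^t \, dr = \frac{h!\,t!}{(h+t+1)!},$

so the normalising constant is $(h+t+1)!/(h!\,t!) = 11!/(7!\,3!)$ for $h=7$, $t=3$, even though the coin was tossed only $h+t=10$ times.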

Merge to Fair coin


Yes... I've seen some older discussions about it above. Still, this article goes into rather too much detail (it gets too close to a school-manual style, I think), while the other is a bit thin. So I think merging would help. - Nabla (talk) 23:34, 7 May 2008 (UTC)[reply]

The situation on these articles is a bit FUBAR. As well as this article and Fair coin, there is also Checking_if_a_coin_is_biased. And the merge discussion has been dragging on for at least five years with no decision made. Is anyone interested in moving this forward? Centrepull (talk) 03:45, 22 December 2009 (UTC)[reply]

This last point (regarding "Checking_if_a_coin_is_biased") has evidently been dealt with, as that name now redirects to this article. Melcombe (talk) 12:35, 11 January 2010 (UTC)[reply]
I have removed the merge template, as no action taken and earlier discussion of difference in context for fair coin seems decisive. Melcombe (talk) 13:20, 11 January 2010 (UTC)[reply]

Change name to "WHETHER" (replacing incorrect "if")


The article should be entitled "Checking whether a coin be fair", not "if". Kiefer.Wolfowitz (talk) 14:37, 24 June 2009 (UTC)[reply]

Whether would be better, but "if" might be OK in an informal American context ...so says one of the books I have. Melcombe (talk) 12:17, 6 July 2009 (UTC)[reply]
Let us strive for proper written English rather than okay informality. Piranian has an example: Go to the window to see "whether it is raining; if it is raining, close the attic window", etc.
George Piranian, "Say it Better", The Mathematical Intelligencer 4(1), 1982.
Kiefer.Wolfowitz (talk) 13:15, 6 July 2009 (UTC)[reply]

Why Checking rather than Testing? The former strikes me as much more informal (and American?). —Tamfang (talk) 16:14, 16 April 2018 (UTC)[reply]

… testing whether that coin, or any coin so minted … —Tamfang (talk) 23:02, 27 May 2023 (UTC)[reply]

Expert tag July 2009


I have added this tag because:

  • as noted above, the Bayesian section does not explain its arguments at all well.
  • the topic of estimating the probability applies equally to both Bayesian and frequentist approaches, but is apparently covered only in the frequentist bit
  • there was an attempt to retitle the main subsections as Bayesian and frequentist, but this requires some rewriting of the starting portions so that they make sense after that change
  • much of the "frequentist" bit seems taken up with the question of finding a sensible sample size, but this should be achievable in a Bayesian context as well
  • the Bayesian bit should at least point to the conjugate distributions and related results.

Melcombe (talk) 12:15, 6 July 2009 (UTC)[reply]

Unclear preamble


The text says "The true probability of obtaining a particular side when a fair coin is tossed is unknown, but the uncertainty is initially represented by the "prior distribution"." in the preamble. I do not understand this. Isn't the probability of getting a particular side with a *fair* coin 50% by definition? Given my (limited) understanding of Bayesian probability, I propose to use the following text instead: "Initially, the true probability of obtaining a particular side when a coin is tossed is unknown, but the uncertainty is represented by the "prior distribution"." --Snipergang (talk) 09:47, 22 December 2015 (UTC)[reply]

False negative rate is 0.5 in the examples (low power)


I was working through the examples and noticed that getting the supplied answers requires low statistical power, (1-β)=0.5, where β = p(false negative error). This doesn't make much sense; in real-world applications, the false negative rate is at least as important as, or perhaps more important than, the false positive rate α. The current example in the article is heavily weighted toward confidently detecting fair coins at the expense of missing a lot of unfair coins.

Confidence should include successfully classifying both fair and unfair coins, not just fair coins.

If you want to be totally unbiased in your confidence, you should consider α and β together with the ratio α/β=1. Confidence is how often you're correct, or C= 1-(α+β); false positive and false negative results are mutually exclusive events and thus their probabilities can be added.

These calculations can be done with G*Power[1], a free software package for working with error, sample sizes, and effect sizes.

Example 1 currently reads:

1. If a maximum error of 0.01 is desired, how many times should the coin be tossed?

n ≥ 2500 at 68.27% level of confidence (Z=1)
n ≥ 10000 at 95.45% level of confidence (Z=2)
n ≥ 27225 at 99.90% level of confidence (Z=3.3)

But really, taking α/β=1 and considering confidence = 1-(α+β), I get:

at 68.27% level of confidence (Z=1)
at 95.45% level of confidence (Z=2)
at 99.90% level of confidence (Z=3.3)


--Dubium et Libertas (talk) 14:42, 15 December 2019 (UTC)[reply]
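A rough way to see the point numerically (a Python sketch using the usual normal approximation; the particular choice α = β below is just an illustration, not a recommendation):

```python
from scipy.stats import norm

E = 0.01                       # desired maximum error of the estimated heads probability
for Z in (1.0, 2.0, 3.3):
    n = (Z / (2 * E)) ** 2     # the article's formula, n = Z^2 / (4 E^2)
    print(Z, round(n))         # 2500, 10000, 27225

# With only the formula above, the power to *detect* a bias of size E is 50% (beta = 0.5).
# Requiring a false negative rate beta as well leads, approximately, to
alpha = beta = 0.0455          # e.g. alpha = beta, matching Z = 2 for the two-sided alpha
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)
n_power = ((z_a + z_b) / (2 * E)) ** 2
print(round(n_power))          # noticeably larger than 10000
```

This matches the objection above: the article's sample sizes control only the false positive rate, and adding an explicit β requirement pushes n up considerably.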

References