The Danger of False Positives
Suppose you are being screened for some rare medical condition. This is simply a screening; the test is not being given because you are in a high-risk group or because you are exhibiting symptoms. For instance, imagine your company is testing every employee, or your doctor is testing everyone who comes in for a routine physical. The test is highly accurate both at giving positive results for people who have the condition (99%, say) and at giving negative results for people who do not (95%). The test comes back positive. How worried should you be? It turns out that in some cases, you should not be overly concerned. In this example, if 1% of the population (people like you: your age, gender, etc.) has the condition, there is only about a 17% chance that you actually have it.
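If you would like to check that figure yourself, here is a minimal Python sketch of the calculation (the function and parameter names are illustrative, not part of the applet):

```python
def posterior_probability(prevalence, sensitivity, specificity):
    """Chance of having the condition given a positive test (Bayes' Theorem).

    prevalence:  fraction of the population with the condition
    sensitivity: P(positive test | has the condition)
    specificity: P(negative test | does not have the condition)
    """
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# The example from the text: 1% prevalence, 99% sensitivity, 95% specificity
print(posterior_probability(0.01, 0.99, 0.95))  # ~0.167, i.e. about a 17% chance
```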
In this applet, you can try out different scenarios of this type. You can set the accuracy of the test (for people who have the condition and for people who do not) and the percentage of the population with the condition. The applet will then tell you the probability that you actually have the condition, given a positive result.
What's the idea? Even though the false-positive rate is small, it applies to a very large group: the many people who do not have the condition. A small fraction of a large number is still a sizable number of false positives (the red area in the figure below). On the other hand, even though the test catches a large percentage of the people who have the condition, that group is small to begin with, so the number of true positives is also small (the green region below). As a result, among all of the people who test positive (red and green together), most do not have the condition.
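To make this concrete with the numbers from the example above: out of 10,000 people screened, about 100 (1%) have the condition, and the test catches roughly 99 of them (the green region). The other 9,900 do not have the condition, yet 5% of them, about 495 people, still test positive (the red area). So of the 99 + 495 = 594 positive results, only 99/594, about 17%, come from people who actually have the condition.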
If you would like to work out these calculations yourself, look up Bayes' Theorem, which gives the mathematical rule for computing conditional probabilities like these.
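For reference, Bayes' Theorem in this setting reads as follows (writing C for "has the condition" and + for "tests positive"):

P(C | +) = P(+ | C) P(C) / [ P(+ | C) P(C) + P(+ | not C) P(not C) ]

Plugging in the example values, P(+ | C) = 0.99, P(C) = 0.01, and P(+ | not C) = 1 - 0.95 = 0.05, gives (0.99 x 0.01) / (0.99 x 0.01 + 0.05 x 0.99) = 0.0099 / 0.0594, or about 0.167: the roughly 17% chance quoted above.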