Statistics Definitions > False Positive and False Negative
What is a False Positive?
A false positive is where you receive a positive result for a test when you should have received a negative result. It’s sometimes called a “false alarm” or “false positive error.” The term is most often used in the medical field, but it can also apply to other arenas (like software testing). Some examples of false positives:
- A pregnancy test is positive, when in fact you aren’t pregnant.
- A cancer screening test comes back positive, but you don’t have the disease.
- A prenatal test comes back positive for Down’s Syndrome, when your fetus does not have the disorder (1).
- Virus software on your computer incorrectly identifies a harmless program as a malicious one.
False positives can be worrisome, especially when it comes to medical tests. Researchers are consistently trying to identify reasons for false positives in order to make tests more sensitive.
A related concept is a false negative, where you receive a negative result when you should have received a positive one. For example, a pregnancy test may come back negative even though you are in fact pregnant.
The False Positive Paradox
If a test for a disease is 99% accurate and you receive a positive result, what are the odds that you actually have the disease?
If you said 99%, you might be surprised to learn you’re wrong. If the disease is very common, your odds may indeed approach 99%. But the rarer the disease, the lower the odds that a positive result means you actually have it, even with the same test. The difference can be quite dramatic. For example, if you test positive for a rare disease (one that affects, say, 1 in 1,000 people), your odds of actually having the disease might be less than 10 percent! The reason involves conditional probability. This article by Stan Brown walks you through the probabilities behind the paradox.
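The conditional-probability arithmetic can be sketched with Bayes’ theorem. The function name below is made up for illustration; the numbers are the ones from the example above (a 1-in-1,000 disease and a test that is 99% accurate, taken here to mean 99% sensitivity and 99% specificity):

```python
# Sketch of the false positive paradox via Bayes' theorem.
# Assumes "99% accurate" means both sensitivity and specificity are 0.99.

def posterior_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test result)."""
    true_positives = prevalence * sensitivity            # sick AND test positive
    false_positives = (1 - prevalence) * (1 - specificity)  # healthy AND test positive
    return true_positives / (true_positives + false_positives)

p = posterior_probability(prevalence=0.001, sensitivity=0.99, specificity=0.99)
print(f"P(disease | positive) = {p:.1%}")  # about 9%, not 99%
```

The healthy population is so much larger than the sick one that its 1% error rate produces far more false positives than there are true positives.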
False Positives and Type I errors
In statistics, a false positive is usually called a Type I error. A Type I error is when you incorrectly reject the null hypothesis. This creates a “false positive” for your research, leading you to believe that your hypothesis (i.e. the alternative hypothesis) is true when in fact it isn’t.
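As a rough sketch of what a Type I error rate means in practice, the simulation below (illustrative numbers only, normal approximation rather than an exact t-test) repeatedly tests a null hypothesis that is true by construction and counts how often it gets wrongly rejected at the 5% level:

```python
# Simulating Type I errors: the null hypothesis "mean = 0" is TRUE here,
# yet the test still rejects it roughly 5% of the time by chance.
import random

random.seed(0)
trials = 20_000
rejections = 0
for _ in range(trials):
    # 30 draws from a distribution whose true mean really is 0
    sample = [random.gauss(0, 1) for _ in range(30)]
    mean = sum(sample) / len(sample)
    var = sum((x - mean) ** 2 for x in sample) / (len(sample) - 1)
    se = (var / len(sample)) ** 0.5
    # 1.96 is the two-sided 5% cutoff under a normal approximation
    if abs(mean / se) > 1.96:
        rejections += 1

print(f"False positive rate: {rejections / trials:.3f}")  # close to 0.05
```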
The Drug Test Paradox and HIV Tests
You take an HIV test that is 99% accurate and the test is positive. What is the probability that you are HIV positive?
- Pretty high: 99%. I’m freaking out.
- Pretty low. Probably about 1 in 100. I’ll sleep on it and then take the test again.
If you answered (1), 99%, you’re wrong. But don’t worry: you aren’t alone. Most people answer the same way. The fact is (assuming you are in a low-risk group), you only have a very slim chance of actually having the virus, even if you test positive on the HIV test. That’s what’s called the drug test paradox.
An HIV test (or any other test for disease, for that matter) isn’t 99% accurate for you; it’s 99% accurate for a population.* Let’s say there are 100,000 people in a population and one person has the HIV virus. That one person with HIV will probably test positive for the virus (with the test’s 99% accuracy). But what about the other 99,999? The test will get it wrong 1% of the time, meaning that out of the 99,999 people who do not have HIV, about 1,000 will test positive.
In other words, if 100,000 people take the test, about 1,001 will test positive but only one will actually have the virus.
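The counting argument can be written out directly, using the same hypothetical population of 100,000 with one infected person and a 99% accurate test:

```python
# Natural-frequency version of the drug test paradox.
# Hypothetical numbers from the example: 100,000 people, 1 infected, 99% accuracy.
population = 100_000
infected = 1
accuracy = 0.99

true_positives = infected  # assume the infected person does test positive
false_positives = round((population - infected) * (1 - accuracy))  # 1% of 99,999

total_positives = true_positives + false_positives
print(f"{total_positives} people test positive; {true_positives} actually infected")
print(f"P(infected | positive) = {true_positives / total_positives:.2%}")
```

About a thousand healthy people test positive for every one true case, so a positive result on its own means roughly a 0.1% chance of infection in this population.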
Don’t worry if this paradox is a little mind-bending. Even physicians get it wrong. There have been several studies that show physicians often alarm patients by informing them they have a much higher risk of a certain disease than is actually indicated by the statistics (see this article in U.S. News).
Peter Donnelly is an Oxford statistician who included the above information in a really fascinating TED Talk about how people are fooled by statistics. If you haven’t seen it, it’s worth a look, especially as he highlights the problem of juries misunderstanding statistics.
*These figures aren’t exactly accurate — the actual prevalence of HIV in a population depends on your lifestyle and other risk factors. At the end of 2008, there were about 1.2 million people with HIV in the U.S. out of a total population of 304,059,724. Additionally, most HIV tests are now 99.9% accurate.
What is a False Negative?
A false negative is where a negative test result is wrong. In other words, you get a negative test result when you should have gotten a positive one. For example, you might take a pregnancy test and it comes back negative (not pregnant) when you are, in fact, pregnant. A false negative on a pregnancy test could be due to taking the test too early, using diluted urine, or checking the results too soon. Just about every medical test comes with the risk of a false negative. For example, a test for cancer might come back negative when in reality you have the disease. False negatives can also happen in other areas, like:
- Quality control in manufacturing; a false negative in this area means that a defective item slips through the cracks.
- In software testing, a false negative means that a test designed to catch something (e.g. a virus) failed to detect it.
- In the Justice System, a false negative occurs when a guilty suspect is found “Not Guilty” and allowed to walk free.
False negatives create two problems. The first is a false sense of security. For example, if your manufacturing line doesn’t catch your defective items, you may think the process is running more effectively than it actually is. The second, more serious issue is that dangerous situations may be missed. For example, a crippling computer virus can wreak havoc if not detected, or an individual with cancer may not receive timely treatment.
False Negatives in Hypothesis Testing
In statistics, a false negative is usually called a Type II error. A Type II error is when you fail to reject the null hypothesis even though it is actually false. In other words, your research misses a real effect that is actually there.
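Assuming the usual definition (a Type II error: failing to reject a false null hypothesis), a small simulation sketch makes the idea concrete. The numbers are illustrative: the true mean is 0.3, so the null hypothesis “mean = 0” is false by construction, yet a small sample often fails to reject it:

```python
# Simulating Type II errors (false negatives): the null "mean = 0" is
# FALSE here (true mean is 0.3), but the test frequently misses it.
import random

random.seed(1)
trials = 10_000
misses = 0
for _ in range(trials):
    sample = [random.gauss(0.3, 1) for _ in range(30)]  # true mean is 0.3, not 0
    mean = sum(sample) / len(sample)
    var = sum((x - mean) ** 2 for x in sample) / (len(sample) - 1)
    se = (var / len(sample)) ** 0.5
    if abs(mean / se) <= 1.96:  # fail to reject the (false) null hypothesis
        misses += 1

print(f"False negative rate: {misses / trials:.2f}")
```

With only 30 observations and a modest effect, the test misses the real difference well over half the time; larger samples shrink this Type II error rate.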