This lesson explores the basic principle of statistical significance and why it is important to understand when performing nearly any statistical test.

## Definition

Some years ago, there was a big hubbub accusing vaccines of causing autism.

(Vaccines are weakened forms of a pathogen given to people to prevent them from getting the full infection. Autism is a pervasive developmental disorder characterized by difficulties with socializing and communication, and it is believed to affect over 1 in 100 people.) The claim is, of course, unfounded, since it has been found that the brain structure of those with autism differs from that of other people. No simple shot could alter a person's brain like that.

However, some people still didn't know this and panicked. My reason for bringing this up is to show why understanding significance tests, along with your statistical tests, is necessary. **Statistically significant** means the relationship in the results did not occur by random chance. Most researchers work with **samples**, defined as sections of a population. A **population** is defined as the complete collection to be studied. Since a researcher is not looking at everyone, there is a possibility that they will collect, by accident, a sample that leads them to erroneous conclusions. To guard against this, nearly all statistical tests look for statistical significance.

Using our autism and vaccine example, after several tests had been conducted, the researchers found that there was no relationship. They could make this assertion with confidence because their results were statistically significant despite using a sample. So, even though they didn’t test every person who had a vaccine, the researchers found no relationship between those who had the vaccine and those who were diagnosed with autism.

## P-Value

When you run a statistical test, you will compute a **p-value**, which is defined as the significance level value. This value is a decimal anywhere between 0 and 1. It tells you how likely it is that you would see results at least this strong if the **null hypothesis**, or the prediction that there is no relationship, were true. The commonly accepted level of the *p*-value for the relationship to be statistically significant is .05.

This means that, 1 in 20 times, your results will come out positive despite there being no actual relationship. In our autism example, some studies will find significant connections between autism and vaccines purely by chance, not because a real statistical relationship exists. Where does this .05 come from? It is derived from the bell curve: by examining the strength of the relationship, you can estimate the likelihood that the result would appear even if there were no relationship.
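As a rough illustration of where such a number comes from, here is a sketch assuming a simple two-tailed *z*-test: the *p*-value is the area of the bell curve's two tails beyond the observed test statistic, computable with Python's standard library.

```python
from statistics import NormalDist


def two_tailed_p_value(z: float) -> float:
    """Area in both tails of the standard normal curve beyond |z|.

    This is the chance of seeing a test statistic at least this
    extreme if the null hypothesis (no relationship) were true.
    """
    return 2 * (1 - NormalDist().cdf(abs(z)))


# A weak relationship (small z) leaves a large tail area,
# so the result is not statistically significant at .05.
print(round(two_tailed_p_value(1.0), 3))  # → 0.317

# A strong relationship (large z) leaves a tiny tail area.
print(round(two_tailed_p_value(3.0), 3))  # → 0.003
```

The exact recipe depends on the test being run; this sketch only shows the general idea that a stronger relationship pushes the statistic further into the tails, shrinking the *p*-value.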

Back to our autism/vaccine example: if I were to make up some *p*-values, they might be .03, .001, and .5. I listed three, and they are all made up. The first two, .03 and .001, would be statistically significant. The .5 would not be statistically significant.
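The comparison itself is just a cutoff check against .05. A minimal sketch in Python, using the three made-up *p*-values above:

```python
ALPHA = 0.05  # the conventional significance level

p_values = [0.03, 0.001, 0.5]  # the made-up values from the example

for p in p_values:
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"p = {p}: {verdict}")
# p = 0.03: significant
# p = 0.001: significant
# p = 0.5: not significant
```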

## Critical Regions

Statistical significance comes from the bell curve. In a statistical test, you are looking to see if there is a relationship between the numbers. This relationship can be in the form of scores being similar, like a correlation, or different, like a *t*-test. When testing for significance, you are checking whether your value falls in the **critical region**, defined as the range of statistical values that allows you to reject the null hypothesis. What this means is that when you perform a statistical test, you will end up with a *p*-value.

If that number falls in the critical region, then your relationship is statistically significant, and you are able to reject the null hypothesis. This is because the critical region tests whether the relationship between your scores is strong enough. If the relationship is weak, then your score will fall outside the critical region and not be statistically significant. The critical region makes up the most extreme 5%, or .05, of the possible responses. If the relationship is weak, then it is likely that any results you get are the result of sampling error. If the relationship is strong, then you can say with some confidence that, even allowing for sampling error, the results are still telling us that there is a relationship.
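To make the "top 5%" concrete, here is a sketch assuming a two-tailed test on the standard normal (bell) curve: the critical region begins at the *z*-value that leaves .05 of the total area split between the two tails.

```python
from statistics import NormalDist

alpha = 0.05
# For a two-tailed test, split alpha between the two tails
# and find the z-score where the upper tail begins.
critical_z = NormalDist().inv_cdf(1 - alpha / 2)
print(round(critical_z, 2))  # → 1.96

# A test statistic beyond ±1.96 falls in the critical region,
# so the null hypothesis can be rejected at the .05 level.
for z in (1.2, 2.5):
    in_critical_region = abs(z) > critical_z
    print(z, "rejects null" if in_critical_region else "fails to reject")
```

For a one-tailed test the whole .05 sits in one tail instead, which moves the cutoff down to roughly 1.64; the choice of tails depends on the hypothesis being tested.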

Why was .05 chosen? Because being wrong 1 in 20 times, and producing a specific type of error, was seen as an acceptable risk. We will now explore these errors.

## Error

What if you do end up with a result that is statistically significant on paper but actually is not? This could be due to a sampling error or a fluke in the numbers. And what if you don't find a relationship when one actually exists? A **type I error** is said to occur when a null hypothesis is incorrectly rejected.

If you remember, a null hypothesis states that there is no relationship. So if you incorrectly reject a null hypothesis, you are effectively saying, 'Yes, there is a relationship,' when there actually isn't one. A type I error is seen as far more harmful and dangerous than the other type of error, which we will get into. This is because scientists will typically make decisions and build further work on the assumption that the finding is right.

This is like stepping out onto a bridge that isn't really there. A **type II error** is said to occur when a researcher incorrectly fails to reject the null hypothesis. This translates to a researcher not finding a relationship when one actually exists. It is like refusing to cross the cliff because you don't think there is a bridge, when there actually is one. A type II error is not seen as being as severe as a type I because researchers could eventually come across the relationship in later testing.
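The 1-in-20 rate of type I errors can be seen directly by simulation. In this sketch (simulated data, not a real study, and a simple *z*-test with known variance), every "experiment" draws both groups from the same population, so the null hypothesis is true by construction; yet roughly 5% of runs still come out significant.

```python
import random
from statistics import NormalDist, mean

random.seed(42)


def z_test_p(a, b):
    """Two-tailed p-value for the difference in means of two samples,
    treating the population standard deviation as known (1.0)."""
    n = len(a)
    se = (1.0 / n + 1.0 / n) ** 0.5  # standard error of the difference
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))


trials, n, false_positives = 2000, 30, 0
for _ in range(trials):
    # Both groups come from the SAME normal distribution:
    # any "significant" result here is a type I error.
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    if z_test_p(group_a, group_b) < 0.05:
        false_positives += 1

print(false_positives / trials)  # close to 0.05, i.e. about 1 in 20
```

Lowering the cutoff below .05 would reduce type I errors but make type II errors (missed real relationships) more likely, which is exactly the trade-off the section describes.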

## Lesson Summary

**Statistically significant** means the relationship in the results did not occur by random chance. Since researchers work with **samples**, defined as sections of the population, and not with **populations**, defined as the complete collections to be studied, there is a chance of a sampling error. To guard against this, a **p-value**, defined as the significance level value, is calculated to ensure that the **null hypothesis**, or the prediction that there is no relationship, is rightfully rejected. The threshold is typically set at .05, based on the **critical region**, defined as the range of statistical values that allows you to reject the null hypothesis.

However, there is some chance that a decision about the null hypothesis will be wrong. A **type I error** is said to occur when a null hypothesis is incorrectly rejected. A **type II error** is said to occur when a null hypothesis is incorrectly not rejected.

## Learning Outcome

After watching this lesson, you should be able to define statistical significance and explain how researchers guard against being misled by sampling errors.