Error bars in statistics. Do you know the difference?

And if you don’t, you’re not alone. Difficulties with the use and interpretation of error bars are not only an issue for undergraduates and Malcolm Gladwell; experienced researchers also get them wrong from time to time. The most common error bars are the range, the standard deviation (SD), the confidence interval (CI) and the standard error (SE). Different bars convey different information.

Descriptive error bars: Range and standard deviation

Standard deviation of different data spreads (same sample size)

Descriptive error bars tell the reader about the spread of the data. The range is simply a measure of the most extreme values (max – min), while the standard deviation is roughly the average (typical) difference between each data point and the overall mean. For roughly normally distributed data, one can expect about two-thirds of the data points to fall within 1 SD of the mean and ~95% to fall within 2 SD. The length of these bars does not necessarily change with the number of observations you have; they only say something about the spread (variability) of the data.
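As a quick check of these rules of thumb, here is a minimal Python sketch (using numpy, on a hypothetical, roughly normal sample; the values 50 and 10 are purely illustrative) that computes the range and SD and counts how many points fall within 1 and 2 SD of the mean:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=50, scale=10, size=200)  # hypothetical sample

data_range = data.max() - data.min()  # range: max - min
sd = data.std(ddof=1)                 # sample standard deviation
mean = data.mean()

within_1sd = np.mean(np.abs(data - mean) <= 1 * sd)  # expect roughly 2/3
within_2sd = np.mean(np.abs(data - mean) <= 2 * sd)  # expect roughly 95%

print(f"range = {data_range:.1f}, SD = {sd:.1f}")
print(f"within 1 SD: {within_1sd:.0%}, within 2 SD: {within_2sd:.0%}")
```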


Inferential error bars: Confidence intervals and standard error (of the mean)


While the widths of the 95% CI and SE bars decrease with increasing sample size, the SD remains relatively unaffected.

Instead of sampling the entire population, we usually collect a number of random observations from the population. Why? In some cases it’s a matter of time and cost, while in other cases, such as when a nurse collects blood samples, well… you’d probably prefer to have some blood left. Because we only have a subsample of the population, we can only present a sample mean. When the sample mean is presented together with either a confidence interval or a standard error, it gives an indication of where you can expect the ‘real’ mean of the whole population (μ) to lie. The more random observations you have, the more likely it is that your sample mean is close to the true mean.
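To see how the different bars behave as the sample grows, here is a small sketch (assuming a hypothetical normal population with μ = 50 and σ = 10, using numpy and scipy) that prints the SD, SE and 95% CI width for increasing sample sizes; the SD stays roughly constant while the SE and CI shrink:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (10, 100, 1000):
    sample = rng.normal(loc=50, scale=10, size=n)  # hypothetical population: mu=50, sigma=10
    sd = sample.std(ddof=1)
    se = sd / np.sqrt(n)  # standard error of the mean
    ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=sample.mean(), scale=se)
    print(f"n={n:5d}  SD={sd:5.2f}  SE={se:5.2f}  95% CI width={ci_high - ci_low:5.2f}")
```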

So what’s the difference between the SE and the confidence interval?
The SE measures the amount of variability in the sample mean. If we collected a new random sample from the same population, our mean would likely not be exactly the same; it would vary from sample to sample. The SE is a measure of how much we would expect the mean to vary, purely by chance.
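One way to see this is by simulation; here is a minimal sketch (again assuming a hypothetical normal population with μ = 50 and σ = 10) that draws many samples of the same size and compares the spread of their means to the SE formula sd/√n:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 25

# Draw many independent samples and record each sample mean
means = [rng.normal(loc=50, scale=10, size=n).mean() for _ in range(10_000)]

# The empirical spread of those means ...
print(f"SD of sample means: {np.std(means, ddof=1):.2f}")
# ... is close to the SE predicted from the population SD: sigma / sqrt(n) = 10 / 5 = 2
print(f"theoretical SE:     {10 / np.sqrt(n):.2f}")
```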


The top portion of this figure presents the population of scores with a mean of 50 (blue dotted line) and a standard deviation of 10. The bottom portion of the figure presents the sample means (shaded circles) and the 95% CIs about each mean (bars) for 20 independent samples from the population.

Many people (even some textbooks) get confidence intervals wrong; you cannot say that you are 95% sure that the true mean is within the confidence limits. Suppose we computed the sample means of all possible samples of size 20 and constructed the 95% CI for the population mean from each of these samples. Then 95% of these intervals would contain the true population mean and 5% would not. You don’t know whether the particular confidence interval you see contains the true mean. The confidence interval you are looking at is just one interval from among a large collection of possible CIs for a given parameter, of which 95% would capture the population parameter.
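This coverage interpretation is easy to verify by simulation; here is a sketch (matching the hypothetical population in the figure above: μ = 50, σ = 10, samples of size 20) that builds a 95% CI from each of many samples and counts how often the interval captures μ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, n, reps = 50, 10, 20, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    low, high = stats.t.interval(0.95, df=n - 1, loc=sample.mean(), scale=se)
    covered += (low <= mu <= high)

print(f"{covered / reps:.1%} of the intervals captured the true mean")  # close to 95%
```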


