And if you don’t, you’re not alone. Difficulties with the use and interpretation of error bars are not just an issue for undergraduates and Malcolm Gladwell; experienced researchers get these wrong from time to time, too. The most common error bars are the range, standard deviation (SD), confidence interval (CI) and standard error (SE). Different bars, different information.
Descriptive error bars: Range and standard deviation
Descriptive error bars tell the reader about the spread of the data. The range is simply the distance between the most extreme values (max – min), while the standard deviation is roughly the average (typical) difference between each data point and the overall mean. For roughly normally distributed data, you can expect about 2/3 of all data points to lie within 1 SD of the mean and ~95% within 2 SD. The length of these bars does not necessarily change with the number of observations you have; it only says something about the spread (variability) of the data.
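A quick way to convince yourself of these properties is to simulate some data and count. The sketch below uses a hypothetical set of normally distributed measurements (the values 100 and 15 are arbitrary choices, not from any real dataset) and checks the 1 SD and 2 SD rules of thumb:

```python
import random
import statistics

random.seed(42)
# Hypothetical measurements: 1000 draws from a normal distribution
data = [random.gauss(100, 15) for _ in range(1000)]

mean = statistics.mean(data)
sd = statistics.stdev(data)          # sample standard deviation
data_range = max(data) - min(data)   # range: max - min

# Fraction of data points within 1 SD and 2 SD of the mean
within_1sd = sum(abs(x - mean) <= sd for x in data) / len(data)
within_2sd = sum(abs(x - mean) <= 2 * sd for x in data) / len(data)

print(f"SD: {sd:.1f}, range: {data_range:.1f}")
print(f"within 1 SD: {within_1sd:.0%}, within 2 SD: {within_2sd:.0%}")
```

With normal-ish data, the two fractions should land near 68% and 95%; the range, by contrast, only ever grows as you add observations, which is one reason it is a cruder summary than the SD.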
Inferential error bars: Confidence intervals and standard error (of the mean)
Instead of measuring the entire population, we usually collect a number of random observations from it. Why? In some cases it’s a matter of time and cost, while other times, such as when the nurse collects blood samples, well… you’d probably prefer to have some blood left. Because we only have a sample of the population, we can only present a sample mean. When the sample mean is presented together with either a confidence interval or a standard error, it gives an indication of where you can expect the ‘real’ mean of the whole population (μ) to lie. The more random observations you have, the more likely it is that your sample mean is close to the true mean.
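That last claim is easy to demonstrate with a simulation. The sketch below builds a hypothetical population with a true mean of 50 (an arbitrary choice for illustration), then repeatedly draws small and large random samples and compares how much their sample means scatter around the truth:

```python
import random
import statistics

random.seed(1)
# Hypothetical population with true mean mu = 50 and SD = 10
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

# 200 sample means from small (n=10) vs large (n=500) random samples
small_means = [statistics.mean(random.sample(population, 10)) for _ in range(200)]
large_means = [statistics.mean(random.sample(population, 500)) for _ in range(200)]

# Larger samples -> sample means cluster more tightly around the true mean
spread_small = statistics.stdev(small_means)
spread_large = statistics.stdev(large_means)
print(f"spread of means, n=10:  {spread_small:.2f}")
print(f"spread of means, n=500: {spread_large:.2f}")
```

The means of the n=500 samples scatter far less than those of the n=10 samples, which is exactly why more random observations make it more likely that your one sample mean sits close to μ.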
So what’s the difference between SE and confidence interval?
SE measures the amount of variability in the sample mean. If we collected a new random sample from the same population, our mean would likely not be exactly the same; it would vary from sample to sample. The SE is a measure of how much we would expect the mean to vary, purely by chance.
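The usual formula is SE = SD / √n, and the sketch below checks it against a brute-force version of the thought experiment above: draw many independent samples, compute each mean, and look at how much those means actually vary. (The population parameters, 50 and 10, are hypothetical values chosen for the demonstration.)

```python
import math
import random
import statistics

random.seed(7)

def sample_se(sample):
    """Standard error of the mean: sample SD divided by sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

# One hypothetical sample of n = 30 measurements
sample = [random.gauss(50, 10) for _ in range(30)]
se = sample_se(sample)

# Empirical check: the SD of many independent sample means should be close
# to the SE computed from a single sample
means = [statistics.mean(random.gauss(50, 10) for _ in range(30))
         for _ in range(2000)]
empirical = statistics.stdev(means)
print(f"analytic SE ~ {se:.2f}, empirical spread of means ~ {empirical:.2f}")
```

Both numbers estimate the same thing: how far, typically, a sample mean lands from the true mean purely by chance. The analytic SE is just the shortcut that lets you get it from a single sample.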
Many people (even some textbooks) get confidence intervals wrong: you cannot say that you are 95% sure that the true mean is within the confidence limits. Suppose we computed the sample means of all possible samples of size 20 and constructed a 95% CI for the population mean from each of them. Then 95% of these intervals would contain the true population mean and 5% would not. You don’t know whether the particular confidence interval you see contains the true mean; it is just one interval from among a large collection of possible CIs, of which 95% would capture the population parameter.
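This repeated-sampling interpretation can be simulated directly. The sketch below assumes a normally distributed population with a hypothetical true mean of 50, builds thousands of 95% CIs from samples of size 20 (using 2.093, the t critical value for 19 degrees of freedom), and counts how often the interval captures the true mean:

```python
import math
import random
import statistics

random.seed(3)
TRUE_MEAN, SD, N = 50.0, 10.0, 20
T_CRIT = 2.093  # t critical value for 95% CI with N - 1 = 19 df

def ci95(sample):
    """95% CI for the mean: mean +/- t * SE."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - T_CRIT * se, m + T_CRIT * se

trials = 5000
hits = 0
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    lo, hi = ci95(sample)
    hits += lo <= TRUE_MEAN <= hi  # did this interval capture the true mean?

coverage = hits / trials
print(f"coverage: {coverage:.1%}")
```

Roughly 95% of the simulated intervals contain the true mean, and 5% miss it entirely; any single interval you are handed could be one of the misses, which is precisely why “95% confidence” describes the procedure, not your one interval.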