One of the topics we are emphasizing in physics lab this week is the importance of understanding the different kinds of uncertainty. There seems to be an almost magical power in taking the average of many measurements and condensing all that uncertain data into one best guess: statistics tells us that the most probable “true” value is the mean of your measurements. In addition, you can quantify your uncertainty by computing the standard error – basically, the standard deviation divided by the square root of the number of measurements – and again, statistics will provide you with a confidence interval: the mean plus or minus the standard error. Even though your measurements will necessarily be imprecise, no matter how careful you are or how expensive your equipment is, the more measurements you make, the smaller that interval gets.

However, there are a couple of caveats. First, the standard error depends on the square root of the number of measurements, so to increase the precision by a factor of 2, you have to quadruple the number of measurements. Second, and more critically, you can only reduce random (statistical) errors this way. You cannot defeat a systematic bias by averaging, no matter how many measurements you make. These flaws in the experimental setup are immune to the magic of statistics. In fact, this is a case where you hope for randomness – that is, you want the errors to be uncorrelated, as opposed to consistently biased one way or the other.
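Both caveats are easy to see in a quick simulation. The sketch below uses made-up numbers (a “true” value of 9.81, a noise level of 0.5, and a constant bias of 0.3 are illustrative choices, not real lab data): quadrupling the number of simulated measurements roughly halves the standard error, while a constant bias survives even ten thousand measurements.

```python
import random
import statistics

# Illustrative, made-up parameters -- not real lab data.
TRUE_VALUE = 9.81   # e.g. g in m/s^2
NOISE = 0.5         # spread of the random, uncorrelated error
BIAS = 0.3          # a constant systematic offset (e.g. a miscalibrated scale)

random.seed(0)

def measure(n, bias=0.0):
    """Simulate n measurements with random noise and an optional fixed bias."""
    return [random.gauss(TRUE_VALUE + bias, NOISE) for _ in range(n)]

def standard_error(samples):
    # standard deviation divided by the square root of the number of measurements
    return statistics.stdev(samples) / len(samples) ** 0.5

# Caveat 1: quadrupling the measurements roughly halves the standard error.
se_100 = standard_error(measure(100))
se_400 = standard_error(measure(400))

# Caveat 2: no number of measurements removes a systematic bias --
# the mean settles near TRUE_VALUE + BIAS, not TRUE_VALUE.
biased = measure(10_000, bias=BIAS)
offset = statistics.mean(biased) - TRUE_VALUE
```

Running this, `se_400` comes out close to half of `se_100`, while `offset` stays near the built-in bias rather than shrinking toward zero.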
Sometimes we express this difference between standard error (the statistical error that remains because you cannot make an infinite number of measurements) and systematic error (an inherent bias in your results) by treating “precision” and “accuracy” as not quite synonymous. Accuracy means the absence of a systematic bias, so the values you measure are close to the true value. Precision, in contrast, means how close together your individual measurements are, which gives you smaller standard errors.
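The distinction can be sketched with two hypothetical instruments measuring the same quantity (all numbers below are invented for illustration): one is precise but inaccurate (a tight spread around the wrong value), the other accurate but imprecise (a wide spread centered on the true value).

```python
import random
import statistics

# Hypothetical illustration with made-up numbers.
TRUE_VALUE = 100.0
random.seed(1)

# Instrument A: precise but inaccurate -- tight spread, constant +0.4 bias.
precise_biased = [random.gauss(TRUE_VALUE + 0.4, 0.05) for _ in range(1000)]

# Instrument B: accurate but imprecise -- no bias, large spread.
accurate_noisy = [random.gauss(TRUE_VALUE, 0.5) for _ in range(1000)]

spread_a = statistics.stdev(precise_biased)   # small spread: high precision
spread_b = statistics.stdev(accurate_noisy)   # large spread: low precision

error_a = abs(statistics.mean(precise_biased) - TRUE_VALUE)  # sits near the bias
error_b = abs(statistics.mean(accurate_noisy) - TRUE_VALUE)  # near zero
```

Instrument A gives you a small standard error that is confidently wrong; instrument B gives you a larger standard error around the right answer. Averaging helps B but never fixes A.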