Precision
Precision is a measure of how reproducible a result is, or how well the result of an experiment is known. The precision of a measurement is referred to as its "uncertainty" and has the same units as the measured value. The result of an experiment would be stated as:

    result = measured value ± uncertainty (units)
The "uncertainty" can be expressed as an "absolute" uncertainty or a "relative" uncertainty and frequently is seen both ways. For example, suppose the result of a length measurement using a meter stick is 14.7 cm. And, further, suppose that as a result of the way the meterstick was marked, it was possible to estimate that the uncertainty in this value was 0.1 cm, then the result of the measurement would be reported as follows:
The "± 0.1 cm" is the uncertainty in this measurement and it is an "absolute" uncertainty. Often, it is more useful to have the "relative" uncertainty expressed because this states how big the uncertainty is compared to the quantity being measured.
So, in this example, the relative uncertainty is:

    relative uncertainty = 0.1 cm / 14.7 cm = 0.007, or about 0.7%
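If it helps to see the arithmetic spelled out, here is a minimal Python sketch of converting an absolute uncertainty to a relative one, using the numbers from the meter-stick example (the variable names are ours, invented for illustration):

    # Values from the meter-stick example above.
    measured_value = 14.7        # cm
    absolute_uncertainty = 0.1   # cm

    # Relative uncertainty is the absolute uncertainty divided by
    # the measured value; it is a pure number (no units).
    relative_uncertainty = absolute_uncertainty / measured_value

    print(f"{measured_value} ± {absolute_uncertainty} cm")
    print(f"relative uncertainty = {relative_uncertainty:.3f}, "
          f"i.e. about {relative_uncertainty:.1%}")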
Please notice that "precision" and "accuracy" are not the same thing! It is possible to have a very precise measurement that has a very large "error" because something was wrong with the measuring device or because the person using it was not using it properly.
Random uncertainties: An example of something that naturally varies is the number of apples on a tree. Suppose that there is an orchard consisting of dwarf apple trees that are as uniform as possible. Even though every effort is made to keep these trees uniform, there will be natural, random variation in the number of apples that mature on each tree, with causes ranging from pollination to insect infestation. The total number of apples on any tree can be counted exactly, but the number varies from tree to tree. It would be very helpful to have a representative number of apples per tree in this orchard. What should that number be? In fact, a single number is really not the answer; we ought to have a range, so that we not only know about how many apples to expect per tree but also have a good measure of just how variable this can be. In other words, the answer to the question will be a result of the form given above and repeated here:

    result = measured value ± uncertainty
In this example the "measured value" would be the AVERAGE of a number of sample counts. The "uncertainty" is usually given by the "sample standard deviation". This quantity is often represented by a lower case Greek letter sigma, σn-1, with the n-1 indicating that this is the "sample" rather than "population" value. When you have a choice with your calculator, use the σn-1 function. On TI-8X series calculators this function is represented by "Sx", and "σ" is reserved for the population standard deviation. No attempt will be made here to describe how to compute the sample standard deviation if your calculator will not do it for you. The equation that follows is presented so that there is no confusion about what value is expected in our laboratory work.
(Where "N" is the number of samples, Xi is a particular sample, "i" represents the position in the list of the results for that sample and X is the average of the individual samples.)
The significance of the standard deviation as a measure of uncertainty is that the range it describes around the average includes a predictable fraction of the samples. So, for example, suppose our sample of trees in the orchard gives as a result of our counting and calculation:

    147 ± 9 apples per tree
The "147" is the average of the number of apples counted on the trees in our sample and the "9" is the sample standard deviation. So the RANGE, 138 → 156, contains 68% of the values used to calculate the average. Another interpretation of this range is that there are 68 chances out of 100 of obtaining a value in this range if one were to count another tree in the orchard. For our purposes in this course, we are going to be a little casual about this 68% and simply refer to this as approximately equivalent to 2/3 or two chances out of three.
If, instead of counting, one is using a tool to make some measurement, then the separate results of repeated measurements are averaged, and the sample standard deviation is calculated in the same way.
Systematic error: This is the result of the measuring device having a built-in error, or of the person using the device not being aware of how to use it properly. This can be something as simple as forgetting to "tare" (set to zero, usually) the electronic scales, or it might be the result of using a cheap ruler on which the inscribed distances are, say, 2.3% too short. These kinds of errors can be very hard to detect. They don't always have an effect on what one is trying to discover, but we often "calibrate" equipment to test for the presence of systematic error, because it can lead to serious problems if undiscovered.
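As a concrete illustration of how a known systematic error might be corrected once calibration reveals it, here is a minimal Python sketch, assuming (as in the example above) a ruler whose inscribed distances are a uniform 2.3% too short; the numbers and function name are invented for illustration:

    # If the inscribed distances are 2.3% too short, each "centimeter"
    # on the ruler is really only 0.977 cm, so raw readings come out
    # too large and must be scaled down.
    SCALE_FACTOR = 1 - 0.023  # true length per indicated length

    def correct_reading(indicated_cm: float) -> float:
        """Convert a raw reading from the faulty ruler to a corrected length."""
        return indicated_cm * SCALE_FACTOR

    raw = 15.0  # cm, as read from the faulty ruler
    print(f"indicated: {raw} cm, corrected: {correct_reading(raw):.2f} cm")

    # Note: this correction removes the systematic error, but the random
    # reading uncertainty (e.g., ± 0.1 cm) is still present.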