Accuracy and precision

November 09, 2013

What do accuracy and precision mean?

When a quantity is measured or calculated, the accuracy of the measurement or the calculated result gives the degree of closeness of the value to the correct value. Accuracy, therefore, describes a property of the result. Precision, on the other hand, quantifies how finely the measurements were taken or how finely the calculations were carried out. Precision says something about the process of measurement or calculation, but nothing about the correctness of the result.

It is often possible to increase the accuracy of a result by increasing the precision of the measuring instrument or of the method of calculation. However, if the manner of taking the measurement or of carrying out the calculation is flawed, increasing the precision will not necessarily increase the accuracy of the result. Furthermore, if the value of a quantity is already known accurately, increasing the precision will not change the value.

What is precision?

When the numerical value of a quantity is presented, its precision is implied by the manner in which the value is written. For instance, 3.14 and 3.1415 are two values of \(\pi\) with different precisions, the latter being more precise. To understand precision, we must first understand the notion of significant digits.

In any representation of a numerical value:

  1. the leftmost nonzero digit is a significant digit;
  2. if there is no decimal point, the rightmost nonzero digit is a significant digit;
  3. if there is a decimal point, the rightmost digit is a significant digit, even when it is zero; and
  4. all of the digits between the leftmost and the rightmost significant digits are significant digits.

Hence, the number of significant digits in a representation gives the precision of the value. The following numbers all have 4 significant digits: 3586, 358.6, 0358.6, 358.0, 35.00, 358600. In the last number, the two trailing zeros are not counted as significant digits by rule 2, even though they may in fact be significant. To resolve this ambiguity, we must either put a decimal point after the rightmost zero or use a floating-point representation; that is, we write the number either as 358600. or as \(3.586 \times 10^5\).
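The four rules above can be sketched as a small Python function that counts the significant digits in a numeric string (the function name and string-based approach are my own, not from the text):

```python
def significant_digits(s: str) -> int:
    """Count significant digits in a numeric string using the four rules."""
    s = s.lstrip("+-")
    digits = s.replace(".", "")
    # Rule 1: the leftmost nonzero digit is the first significant digit.
    start = next((i for i, d in enumerate(digits) if d != "0"), None)
    if start is None:
        return 0  # all zeros: no significant digits under these rules
    if "." in s:
        # Rule 3: with a decimal point, the rightmost digit is significant.
        end = len(digits)
    else:
        # Rule 2: without a decimal point, trailing zeros are not counted.
        end = max(i for i, d in enumerate(digits) if d != "0") + 1
    # Rule 4: every digit between the first and last significant digit counts.
    return end - start

print(significant_digits("358600"))   # 4 by rule 2
print(significant_digits("358600."))  # 6 by rule 3
```

Note how the trailing decimal point in 358600. changes the count from 4 to 6, exactly the ambiguity discussed above.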

Chopping-off or rounding-off?

After a value has been measured or calculated, it is sometimes necessary to reduce the value to the required precision. This may be achieved in two ways: a) chopping-off, or b) rounding-off.

When we reduce precision by chopping-off, we simply discard every digit in the representation to the right of the least significant digit (the rightmost significant digit). For instance, chopping off the six-significant-digit value 48.9864 to four significant digits gives 48.98.
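Chopping-off can be sketched in a few lines of Python (the helper `chop` is my own; a robust version would work on decimal strings, since binary floats cannot represent values like 48.9864 exactly):

```python
import math

def chop(x: float, n: int) -> float:
    """Chop x to n significant digits by discarding the digits to the right."""
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))    # position of the leading digit
    factor = 10 ** (n - 1 - exp)            # shift so n digits sit left of the point
    return math.trunc(x * factor) / factor  # discard the rest, then shift back

print(chop(48.9864, 4))  # 48.98
```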

Although chopping-off is very simple, it is unattractive because accuracy is needlessly lost when the unwanted digits are simply discarded. When we round off a value, however, we remove the unwanted digits after taking into account their effect on the accuracy of the value. To understand this effect, we must first quantify the meaning of accuracy.

What is accuracy?

The number of correct significant digits after the decimal point gives the places of decimals to which an approximation agrees with the accurate value. For instance, the value of \(\pi\) given by the fraction 22/7 is accurate to only 2 places of decimals, because it matches the more accurate value in only the first two digits after the decimal point:

\[\pi = 3.1415\ldots\] \[{22 \over 7} = 3.1428\ldots\]

In general, the accuracy of an approximation is defined as follows:

Definition 1. An approximation \(x\) is said to be accurate to \(k\) places of decimals relative to another number \(y\) if

\[|x - y| \le {1 \over 2} \times 10^{-k}\]
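Definition 1 can be checked mechanically. Here is a Python sketch (the function `accurate_places` is my own naming, not from the text) that finds the largest \(k\) satisfying the bound, applied to the 22/7 example:

```python
import math

def accurate_places(x: float, y: float) -> int:
    """Largest k for which |x - y| <= (1/2) * 10**-k (Definition 1).
    Assumes x != y and |x - y| <= 1/2, so that at least k = 0 holds."""
    diff = abs(x - y)
    if diff == 0:
        raise ValueError("x equals y: accurate to arbitrarily many places")
    k = 0
    while diff <= 0.5 * 10 ** -(k + 1):
        k += 1
    return k

print(accurate_places(22 / 7, math.pi))  # 2, matching the example above
```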

Based on this definition of accuracy, we define rounding-off as follows:

Definition 2. An approximation is said to be rounded-off to \(k\) places of decimals if it contains \(k\) digits after the decimal point and the absolute value of the difference between the original value and the rounded value is not greater than \({1 \over 2} \times 10^{-k}\).

A procedure for rounding-off

A numerical value can be rounded-off by using the following procedure:

  1. First, chop off the value to the required places of decimals, and treat the digits to be discarded as a decimal fraction.

  2. If the value to be discarded is greater than 0.5, increment the least significant digit of the chopped-off value by 1.

  3. If the value to be discarded is less than 0.5, do nothing.

  4. If the value to be discarded equals 0.5 exactly, increment the least significant digit of the chopped-off value by 1 only if it is odd.

For instance, the values 2.4865, 2.4854, 2.4855 and 2.4845 are rounded off to 3 places of decimals as 2.486, 2.485, 2.486 and 2.484, respectively.
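The procedure above is round-half-to-even, which Python's `decimal` module implements directly; the sketch below (my own, working on decimal strings to avoid binary floating-point surprises) reproduces the four examples:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_off(value: str, places: int) -> str:
    """Round a decimal string to the given places using round half to even."""
    quantum = Decimal(1).scaleb(-places)  # e.g. Decimal('0.001') for places=3
    return str(Decimal(value).quantize(quantum, rounding=ROUND_HALF_EVEN))

for v in ["2.4865", "2.4854", "2.4855", "2.4845"]:
    print(v, "->", round_off(v, 3))
```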

In the above procedure, step 4 is important because it reduces the effect of cumulative rounding up or rounding down on the accuracy of the approximation. By splitting the exact halves evenly between rounding up and rounding down, the rounded values are kept close to the correct value when multiple calculations are carried out with rounded numbers.
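A small experiment (my own, not from the text) illustrates the point: rounding many exact halves upward accumulates a bias, while round-half-to-even cancels it:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

# 100 values 0.05, 0.15, ..., 9.95, each ending in an exact half digit.
values = [Decimal(i) / 100 for i in range(5, 1000, 10)]
true_sum = sum(values)  # exactly 500.00

# Round each value to 1 decimal place, then accumulate the rounded values.
round_up_sum = sum(v.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP) for v in values)
half_even_sum = sum(v.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN) for v in values)

print(true_sum, round_up_sum, half_even_sum)  # 500.00 505.0 500.0
```

Always rounding the halves up overshoots the true sum by 100 × 0.05 = 5, whereas step 4 spreads the increments so the errors cancel exactly.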

This post is based on a draft article I wrote between 27 and 30 January 2009 to reinforce my understanding of the text A First Course in Numerical Analysis [McGraw-Hill, 1978] by Anthony Ralston.