Propagation and Compounding of Errors

(modified 04/28/2007)

This page shows how uncertainty in a measured quantity will propagate through a mathematical expression involving that quantity.

Whenever calculations are done using imprecise numbers, the numbers resulting from the calculations are also imprecise. The precision (expressed as the "standard error") of the result of evaluating any function f(x) depends on the precision of x, and on the derivative of the function with respect to x.

When two or more variables appear together in a function f(x,y), the precision of the result depends on:

  1. the precision (standard error) of each variable;
  2. the partial derivative of the function with respect to each variable; and
  3. the degree to which the random fluctuations in the two variables are correlated with each other.

Correlated fluctuations most commonly arise when the two variables are parameters resulting from a curve-fit. A good curve-fitting program should produce the error-correlation between the parameters as well as the standard error of each parameter. (Check out my non-linear least squares curve fitting page.)

If you're interested in how this page does what it does, read the Techie-Stuff section at the bottom of this page.

The sections below perform all the required calculations for a function of one or two variables. Just enter the numbers and their standard errors (and error-correlation, if known), and click the Propagate button.


For a single variable: z=f(x)

1. Enter the measured value of the variable (x) and its standard error of estimate:
x = +/-

2. Enter the expression involving x; for example: 1/(10-x)
z =

3. Click the Propagate button.

The value of the resulting expression, z, and its standard error:
z = +/-
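
For example, entering x = 5 +/- 0.2 with the expression z = 1/(10-x) should report z = 0.2 +/- 0.008 or so: the derivative of 1/(10-x) with respect to x is 1/(10-x)^2, which equals 0.04 at x = 5, and 0.04 * 0.2 = 0.008. (See the Techie-Stuff section for why the derivative enters.)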


For two variables: z=f(x,y)

1. Enter the measured value of the first variable (x) and its standard error of estimate:
x = +/-

2. Enter the measured value of the second variable (y) and its standard error of estimate:
y = +/-

3. Enter the "error-correlation" between the two variables (if known, otherwise use 0):
r =

4. Enter the expression involving x and y; for example: x + 3*y - x*y/10
z =

5. Click the Propagate button.

The value of the resulting expression, z, and its standard error:
z = +/-
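
For example, entering x = 2 +/- 0.1, y = 3 +/- 0.2, r = 0, and the expression z = x + 3*y - x*y/10 should report z = 10.4 +/- 0.56 or so: the partial derivatives are 1 - y/10 = 0.7 and 3 - x/10 = 2.8, so SE(z) = Sqrt( (0.7*0.1)^2 + (2.8*0.2)^2 ), which is about 0.564. (Again, see the Techie-Stuff section for where this comes from.)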


Syntax Rules for Constructing Expressions:

Operators: + - * / and parentheses
Constants: Pi (=3.14...), e (=2.718...), Deg (=180/Pi = 57.2...)
Built-in Functions...
[Unless otherwise indicated, all functions take a single numeric argument, enclosed in parentheses after the name of the function.]
Algebraic: Abs, Sqrt, Power(x,y) [= x raised to the power of y], Fact [factorial]
Transcendental: Exp, Ln [natural], Log10, Log2
Trigonometric: Sin, Cos, Tan, Cot, Sec, Csc
Inverse Trig: ASin, ACos, ATan, ACot, ASec, ACsc
Hyperbolic: SinH, CosH, TanH, CotH, SecH, CscH
Inverse Hyp: ASinH, ACosH, ATanH, ACotH, ASecH, ACscH
Statistical: Norm, ChiSq(x,df), StudT(t,df), FishF(F,df1,df2)
Inverse Stat: ANorm, AChiSq(p,df), AStudT(p,df), AFishF(p,df1,df2)

Note: JavaScript is case-sensitive. Make sure you type function names exactly as you see them above.

Note: The trig functions work in radians. For degrees, multiply or divide by the Deg variable. For example: Sin(30/Deg) will return 0.5, and ATan(1)*Deg will return 45.

Note: The factorial function is implemented for all real numbers. For non-integers its accuracy is about 6 significant figures. For negative integers it returns either a very large number or a division-by-zero error.

Note: The statistical functions Norm and StudT return 2-tail p-values (e.g., Norm(1.96)=0.05), while ChiSq and FishF return 1-tail values. This is consistent with the way these functions are most frequently used.

Note: Some of the functions listed above are not currently implemented in JavaScript, so I have programmed them as user-defined functions. You can see the algorithms by 'viewing the document source' for this page. Feel free to copy them if you find them useful.


Techie-Stuff (for those who may be interested in how this page works)...

My error-propagation web page takes a very general approach, which is valid for addition, multiplication, and any other functional form.

For propagating an error through any function of a single variable, z = F(x), the rule is fairly simple:
The standard error (SE) of z is obtained by multiplying the SE of x by the derivative of F(x) with respect to x (ignoring the sign of the derivative).
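
In symbols (using this page's own notation for functions):

    SE(z) = Abs(dF/dx) * SE(x)

with the derivative evaluated at the measured value of x.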

Now it would be hellishly difficult to have my web page attempt to perform symbolic differentiation of whatever function you typed in. So instead, it obtains a numerical estimate of the derivative of F(x) by the method of "finite differences". It takes the value of x that you provided, adds the standard error you provided, and evaluates the function you typed in at that value. Then it subtracts the standard error from your x value and evaluates the function there as well. The difference between the two function values, divided by the difference between the two x values at which the function was evaluated (which happens to be exactly twice the standard error), is a very good approximation to the derivative. The program takes the absolute value of this ratio, multiplies it by the standard error you provided, and that's the standard error of z that the web page reports. (Actually, the program is able to simplify the formulas a little bit, but basically that's how it's done.)
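
Here is a minimal sketch of that process in JavaScript. The function name propagateOne and its exact structure are my illustration, not this page's actual code; the real work is done in the Propagate1 function mentioned below:

    // Compute z = F(x) and its standard error by central finite differences.
    // expr is a JavaScript expression in the variable "x", e.g. "1/(10-x)".
    function propagateOne(expr, x, seX) {
        var f = function(x) { return eval(expr); };        // let the interpreter parse and evaluate expr
        var slope = (f(x + seX) - f(x - seX)) / (2 * seX); // finite-difference estimate of dF/dx
        return { z: f(x), se: Math.abs(slope) * seX };     // SE(z) = |dF/dx| * SE(x)
    }

For example, propagateOne("1/(10-x)", 5, 0.2) returns z = 0.2 with se of about 0.008, matching the one-variable example earlier on this page.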

For a function of two variables, z = F(x,y), the rule is a little more complicated. If the random errors in x and y are independent (that is, uncorrelated with each other), then the rule is:

  1. Find the partial derivative of F(x,y) with respect to x, multiply this by SE(x), and square the product;
  2. Find the partial derivative of F(x,y) with respect to y, multiply this by SE(y), and square the product;
  3. Add the two squares together;
  4. Take the square root of the sum of the squared products, and that will be SE(z).

I obtain the partial derivatives by the same "finite differences" technique.
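
In symbols, writing Fx and Fy for the two partial derivatives:

    SE(z) = Sqrt( (Fx*SE(x))^2 + (Fy*SE(y))^2 )

And here is a minimal two-variable sketch, in the same illustrative spirit as the one above (propagateTwo is my name, not the page's; the actual code is in the Propagate2 function):

    // Compute z = F(x,y) and its standard error, assuming independent errors.
    // expr is a JavaScript expression in the variables "x" and "y".
    function propagateTwo(expr, x, seX, y, seY) {
        var f = function(x, y) { return eval(expr); };
        var fx = (f(x + seX, y) - f(x - seX, y)) / (2 * seX); // partial derivative dF/dx
        var fy = (f(x, y + seY) - f(x, y - seY)) / (2 * seY); // partial derivative dF/dy
        var a = fx * seX, b = fy * seY;
        return { z: f(x, y), se: Math.sqrt(a * a + b * b) };  // add the two pieces in quadrature
    }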

If the random fluctuations in x and y are correlated with each other (which usually happens only if x and y have been obtained from the same set of measurements, such as, for example, if x and y are two parameters that have been obtained from a curve-fit to a set of measured data), then the formulas are a little more complicated -- you have to add in cross-product terms involving the partial derivatives and the correlation coefficient between the random errors in x and y, as shown below.
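
In symbols, the cross-product term enters under the square root (this is the standard first-order formula; r is the error-correlation you entered above):

    SE(z) = Sqrt( (Fx*SE(x))^2 + (Fy*SE(y))^2 + 2*r*(Fx*SE(x))*(Fy*SE(y)) )

When r = 0 the cross term vanishes and this reduces to the independent-errors rule. In the propagateTwo sketch above, this amounts to returning Math.sqrt(a*a + b*b + 2*r*a*b) instead.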

All this may seem abstract, but it turns out that it is a very general approach -- it automatically accomplishes the same thing as the usual "special case" formulas:

    for sums and differences (z = x + y or z = x - y): SE(z) = Sqrt( SE(x)^2 + SE(y)^2 )
    for products and quotients (z = x*y or z = x/y): SE(z)/z = Sqrt( (SE(x)/x)^2 + (SE(y)/y)^2 )

as well as for any other kind of functional relationship involving x and y. The beauty is that the web page doesn't care how complicated the expression for F(x,y) is. It passes all the work of parsing the expression and evaluating it over to the JavaScript interpreter by using the built-in "eval" function, and gets its derivatives by the finite-differences method. So the programming is not very complicated.
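
As a quick check that the general method matches the special cases: for z = x + y with x = 2 +/- 0.1 and y = 3 +/- 0.2 (independent), both partial derivatives are exactly 1, so the finite-difference method gives SE(z) = Sqrt(0.1^2 + 0.2^2), about 0.2236 -- the same answer the sum formula above gives.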

You can see the JavaScript programming by having your browser show the HTML coding for the web page (go to the View menu and select Source, or Page Source). All the work is done in the two functions called Propagate1 (for handling expressions of only one variable) and Propagate2 (for handling expressions involving two variables). The entire process for a one-variable expression takes only about a half-dozen simple JavaScript statements, and the two-variable case is handled in about 15 simple statements.



Send e-mail to John C. Pezzullo (this page's author) at statpages.org@gmail.com