

A handy approximation for the error function and its inverse

Sergei Winitzki

February 6, 2008

The error function
\[
\operatorname{erf} x \equiv \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt \tag{1}
\]

can be numerically approximated in many ways. In some applications it is useful if not only erf x, but also its inverse function, erf^{-1} x, defined by

\[
x = \frac{2}{\sqrt{\pi}} \int_0^{\operatorname{erf}^{-1} x} e^{-t^2}\, dt = \operatorname{erf}\!\left( \operatorname{erf}^{-1} x \right), \tag{2}
\]

is approximated by a simple formula. The following approximation is then useful,

\[
\operatorname{erf} x \approx \left[ 1 - \exp\!\left( -x^2\, \frac{\frac{4}{\pi} + a x^2}{1 + a x^2} \right) \right]^{1/2}, \tag{3}
\]

where the constant a is
\[
a \equiv \frac{8}{3\pi}\, \frac{\pi - 3}{4 - \pi} \approx \frac{8887}{63473} \approx \frac{7}{50} = 0.14. \tag{4}
\]
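
As a quick numerical sanity check, the following Maple sketch (not part of the original note; the test point x = 1 is an arbitrary choice) evaluates the exact constant (4) and compares Eq. (3) against Maple's built-in erf:

a := 8*(Pi - 3)/(3*Pi*(4 - Pi));   # exact form of the constant in Eq. (4)
evalf(a);                          # approximately 0.1400
f1 := x -> sqrt(1 - exp(-x^2*(4/Pi + a*x^2)/(1 + a*x^2)));   # right-hand side of Eq. (3)
evalf(f1(1));                      # approximate value of erf(1)
evalf(erf(1));                     # built-in erf, for comparison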

This formula can be derived by the method explained in Ref. [1]. One writes an ansatz of the form

\[
\operatorname{erf} x \approx \left[ 1 - \exp\!\left( -\frac{P(x)}{Q(x)} \right) \right]^{1/2}, \tag{5}
\]

where P(x) and Q(x) are unknown polynomials, and finds P, Q such that the known series expansion of erf x at x = 0 and the asymptotic expansion at x = +∞ are reproduced within the first few terms. In this way one obtains explicit coefficients of the polynomials P and Q to any desired degree.
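
As an illustration (this is my own sketch of one matching condition, not the full procedure of Ref. [1]), matching the Taylor expansions of both sides of Eq. (3) at x = 0 already fixes the constant a. To order x^4,
\[
\operatorname{erf}^2 x = \frac{4}{\pi}\left( x^2 - \frac{2}{3} x^4 \right) + O(x^6),
\qquad
1 - \exp\!\left( -x^2\, \frac{\frac{4}{\pi} + a x^2}{1 + a x^2} \right)
= \frac{4}{\pi} x^2 + \left[ a\left( 1 - \frac{4}{\pi} \right) - \frac{8}{\pi^2} \right] x^4 + O(x^6),
\]
and equating the coefficients of x^4 gives a(1 − 4/π) − 8/π² = −8/(3π), whose solution is a = 8(π − 3)/(3π(4 − π)), i.e. Eq. (4). The leading asymptotic decay ∝ e^{−x²} at x → +∞ is reproduced automatically, since the exponent in Eq. (3) tends to −x².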

A numerical evaluation shows that Eq. (3) provides an approximation for erf x correct to better than 4 · 10^{-4} in relative precision, uniformly for all real x ≥ 0, as illustrated in Fig. 1. For negative x one can use the identity

\[
\operatorname{erf}(-x) = -\operatorname{erf} x. \tag{6}
\]

The function in Eq. (3) has the advantage that it can be easily inverted analytically,

\[
\operatorname{erf}^{-1} x \approx \left[ -\frac{2}{\pi a} - \frac{\ln\left(1 - x^2\right)}{2}
+ \sqrt{ \left( \frac{2}{\pi a} + \frac{\ln\left(1 - x^2\right)}{2} \right)^{2} - \frac{1}{a} \ln\left(1 - x^2\right) } \;\right]^{1/2}. \tag{7}
\]

The relative precision of this approximation is better than 4 · 10^{-3}, uniformly for all real x in the interval (0, 1), as illustrated in Fig. 2.
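
A quick round-trip check of Eq. (7) in Maple (a minimal sketch; the test point x = 0.9 is an arbitrary choice, not from the note):

a := 8*(Pi - 3)/(3*Pi*(4 - Pi));
f2 := x -> sqrt(-(2/(Pi*a) + ln(1 - x^2)/2) + sqrt((2/(Pi*a) + ln(1 - x^2)/2)^2 - ln(1 - x^2)/a));   # Eq. (7)
evalf(erf(f2(0.9)));               # should come out close to 0.9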


An improvement of the precision is possible if one chooses the coefficient a ≈ 0.147 instead of the above value a ≈ 0.140. With a ≈ 0.147 the largest relative error of Eq. (3) is about 1.3 · 10^{-4}, and the relative error of Eq. (7) is about 2 · 10^{-3}.
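
To compare the two choices of a numerically, one can scan the relative error of Eq. (3) over a grid of points; here is a minimal Maple sketch (the grid of 400 points on 0 < x ≤ 4 and the helper name maxerr are my own choices):

f1 := (x, a0) -> sqrt(1 - exp(-x^2*(4/Pi + a0*x^2)/(1 + a0*x^2)));   # Eq. (3) with adjustable constant
maxerr := a0 -> max(seq(evalf(abs(1 - f1(k/100, a0)/erf(evalf(k/100)))), k = 1 .. 400));
maxerr(0.140);                     # largest relative error on the grid for a = 0.140
maxerr(0.147);                     # largest relative error on the grid for a = 0.147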

One application of the formula (7) is to Gaussian statistics. Suppose we would like to test that a large data sample x_1, ..., x_N comes from the normal distribution

\[
p(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\!\left( -\frac{x^2}{2\sigma^2} \right). \tag{8}
\]

We can split the total range of x into bins with boundaries b_1, b_2, ... and count the number of sample points that fall into each bin, b_i < x < b_{i+1}. If {x_i} come from the Gaussian distribution, then the expected number of samples within the bin b_i < x < b_{i+1} should be

\[
n(b_i, b_{i+1}) = N \int_{b_i}^{b_{i+1}} p(x)\, dx
= \frac{N}{2} \left( \operatorname{erf} \frac{b_{i+1}}{\sigma\sqrt{2}} - \operatorname{erf} \frac{b_i}{\sigma\sqrt{2}} \right). \tag{9}
\]

To have better statistics, one should choose the bin boundaries so that the expected numbers of samples are equal in all bins. This is achieved if the bin boundaries b_i are chosen according to the formula

\[
b_i = \sigma \sqrt{2}\, \operatorname{erf}^{-1} \frac{2i - B - 1}{B + 1}, \qquad i = 1, \ldots, B, \tag{10}
\]

where (B + 1) is the total number of bins and erf^{-1} is the inverse function to erf x. Using Eq. (7) it is straightforward to compute the bin boundaries b_i.
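
For example, the following Maple sketch (the values σ = 1 and B = 8 and the name inverf_approx are my own illustrative choices) computes the boundaries (10) from the approximate inverse (7), extended to negative arguments through erf^{-1}(−x) = −erf^{-1}(x):

a := 8*(Pi - 3)/(3*Pi*(4 - Pi));
f2 := x -> sqrt(-(2/(Pi*a) + ln(1 - x^2)/2) + sqrt((2/(Pi*a) + ln(1 - x^2)/2)^2 - ln(1 - x^2)/a));   # Eq. (7), valid for 0 < x < 1
inverf_approx := x -> piecewise(x < 0, -f2(-x), x > 0, f2(x), 0);    # odd extension, with erf^{-1}(0) = 0
sigma := 1; B := 8;                # 8 boundaries, i.e. 9 equal-probability bins
boundaries := [seq(evalf(sigma*sqrt(2)*inverf_approx((2*i - B - 1)/(B + 1))), i = 1 .. B)];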

The plots were produced by the following Maple code:

a := 0.14;   # or other value
f1 := x -> sqrt(1 - exp(-x^2*(4/Pi + a*x^2)/(1 + a*x^2)));                                  # Eq. (3)
plot([1 - erf(x)/f1(x)], x = 0 .. 4);                                                       # relative error of Eq. (3)
f2 := x -> sqrt(-(2/Pi/a + ln(1-x^2)/2) + sqrt(-ln(1-x^2)/a + (2/Pi/a + ln(1-x^2)/2)^2));   # Eq. (7)
inverf := x -> solve(erf(y) = x, y);                                                        # reference inverse obtained by solving erf(y) = x
plot(1 - f2(x)/inverf(x), x = 0.01 .. 1);                                                   # relative error of Eq. (7)

The author thanks David W. Cantrell, who discussed the shortcomings of a previous version of this note and suggested a = 0.147 in the Google Groups online forum sci.math in December 2007.

References

[1] S. Winitzki, Uniform approximations for transcendental functions, in Proc. ICCSA-2003, LNCS 2667/2003, p. 962.


Figure 1: Relative error of the approximation (3) is below 0.00035 for all x > 0 (left panel: 0 ≤ x ≤ 4, right panel: 5.1 ≤ x ≤ 5.7). Note that the relative error is not everywhere positive; the plot at right shows that erf x is underestimated at large x.

Figure 2: Relative error of the approximation (7) is below 0.0035 for all x in (0, 1) (left panel: 0 ≤ x ≤ 1, right panel: 0.99 ≤ x ≤ 1). The plot at right shows that the error is largest near x = 1.
