Errors experiment


Experiment For Second Year UOW


    Introductory notes on errors

    Introduction

    No physical measurements are ever exactly correct. Thus we generally make many measurements of a quantity and use the mean as our best estimate of the true value. The question which then needs to be answered is: what is the likelihood that the true value lies within some specified range about our mean value? Thus an estimate of the error of any result is always necessary.

    Errors are not simply mistakes like reading from the wrong end of a vernier or reading a stop watch in tenths of a second when it is really graduated in fifths. Such mistakes have no place in physics. Error in the physical world means uncertainty which one can reduce by being very careful.

    Errors are of two kinds - (1) Systematic and (2) Random.

    Systematic Errors

    Here the result is being altered in a regular determinable way by some unsuspected cause for which no allowance has been made. Thus the room temperature may have altered during the experiment without the experimenter being aware of the change. When he realizes that temperature variations are affecting his results, he can either (1) remove the cause (by thermostatic control), (2) redesign his experiment to reduce the effect of the disturbing factor, (3) measure the change in conditions and allow for this in calculating his new results or (4) convert the systematic error into a random error by rearranging the work, e.g. repeating the experiment at different times of the day and night.

    Since the systematic errors may be constant or vary in some regular way with the value measured, they are not revealed by repeating the experiment, but only become evident when the conditions of the experiment are radically altered or when the physical property which is being measured is determined in an entirely different way. For instance, the value of the electronic charge e measured by Millikan's oil drop experiment was affected for many years by an unsuspected variation in the viscosity of air. The error was only discovered later when another determination of e became possible from x-ray measurements of the spacing of crystal lattices.

    To summarize, a systematic error acts always in the same direction for the same conditions. It is not revealed by repeated experiments and is therefore difficult to spot. Much thought and cunning is needed to design experiments or find corrections which will eliminate it.

    Random Errors

    These are residual, usually small, errors of uncertain origin and irregular occurrence. To be absolutely sure of our conclusions we would have to repeat the experiment from scratch thousands of times, correcting each result for known systematic errors. Suppose for instance, we are measuring the length of a bench about 3 metres long. Our results could then vary from 300.01 cm to 300.07 cm.

    Between 300.00 and 300.01 we would have 0 readings.

    Between 300.01 and 300.02 we might have 5 readings.

    Between 300.02 and 300.03 we might have 50 readings, and so on.

    From these results we construct a histogram (figure 1).

    Figure 1: Histogram
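The binning step described above is easy to sketch in code. Below is a minimal illustration (Python, with made-up readings; the 0.01 cm interval width follows the text), not part of the experiment itself:

```python
import math
from collections import Counter

def bin_readings(readings):
    """Count how many readings fall in each 0.01 cm interval.

    Each reading is assigned to the interval whose lower edge is its
    value rounded down to the next 0.01 below, e.g. 300.024 -> 300.02.
    Exact interval edges are tricky in floating point, but real
    measurements rarely land exactly on an edge.
    """
    counts = Counter()
    for r in readings:
        lower_edge = math.floor(r * 100) / 100
        counts[lower_edge] += 1
    return dict(counts)
```

Plotting the interval counts against the interval edges gives the histogram of figure 1.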

    If we increase the number of measurements and further subdivide the ranges of 0.01 cm intervals into much smaller intervals we finally approximate to a continuous curve known as a frequency distribution or limiting distribution (figure 2). The limiting distribution can never be measured exactly but is rather a theoretical construct.

    Figure 2: Frequency distribution

    From this frequency distribution we note the measurement most frequently occurring is 300.046 cm which we could accept as probably the most accurate or true value. The frequency curve also tells us the relative probability of any particular measurement and the spread of these measurements around the true value. Thus it gives a quantitative idea of the accuracy or reliability of the method of measurement.

    The distribution curve is not necessarily symmetrical about the most frequent value, or mode as it is called. If the curve is asymmetrical the mean or average value, $\bar{x} = \frac{1}{n}\sum_i x_i$, of all n of the readings $x_i$, is not the same as the mode. Nevertheless unless we have sufficient readings to prove asymmetry (which is not often) we usually assume that mean and mode are coincident and the mean value represents the most probable or true value.

    If we took a very large number of measurements, N, where N is allowed to approach infinity, and computed the mean of these, we would expect the mean to be the true value. We will call this true value the population mean, μ, where

    $$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i \qquad (1)$$

    The spread of values in this population is characterised by the population standard deviation, σ, where

    $$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2 \qquad (2)$$

    However, we have to make do with a finite sample of n measurements. From this finite sample we then wish to estimate the true value and also estimate the likelihood that the real true value lies within some specified range of our estimate of the true value. We can calculate for our finite sample the following two quantities:

    Sample mean, $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

    Sample standard deviation, s, defined by

    $$s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2} \qquad (3)$$

    (The n − 1 divisor makes s² an unbiased estimate of σ²; it is the divisor used in the worked example below.)
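These two sample quantities are straightforward to compute. A minimal sketch (Python; your readings will of course come from the lab):

```python
import math

def sample_stats(readings):
    """Return (mean, standard deviation, standard error of the mean).

    The standard deviation uses the n - 1 divisor of equation (3),
    as EXCEL's STDEV does; the standard error is s / sqrt(n).
    """
    n = len(readings)
    mean = sum(readings) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return mean, s, s / math.sqrt(n)
```

For the five brass-mass readings used in the worked example later in these notes (10.13, 10.20, 10.18, 10.17, 10.17 g) this gives a mean of 10.17, s of about 0.0255 and a standard error of about 0.0114.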

    The normal distribution

    Fortunately, when a large number of causes of small errors exist and these errors may be either positive or negative, the distribution of results can be shown to approximate closely to a special symmetric type of distribution called the normal or Gaussian distribution.

    A Gaussian distribution has a symmetrical bell-like shape (figure 3) given by the equation

    $$y = A \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right] \qquad (4)$$

    Figure 3: Gaussian distributions for different standard deviations.

    For the normal distribution, 68% of observations lie between μ − σ and μ + σ, 95% between μ ± 2σ and 99% between μ ± 2.6σ (figure 4). The percentages for other values may be found from tables.

    Figure 4: Gaussian distribution showing the probability of an event.
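If you have a maths library to hand rather than printed tables, these percentages can be computed directly from the error function, using the standard relation that the fraction within k standard deviations is erf(k/√2). A brief sketch with Python's standard library:

```python
import math

def fraction_within(k):
    """Fraction of a normal distribution lying within k standard
    deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))
```

fraction_within(1), fraction_within(2) and fraction_within(2.6) come out at approximately 0.68, 0.95 and 0.99, consistent with the figures quoted above.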

    An observed distribution of measurements will not necessarily follow a normal distribution. If it does it will only approach the shape of a normal distribution when a large number of measurements are available. For a small number of measurements, x̄ is not necessarily equal to μ, nor s to σ. The observed values come closer to the true values as the number of measurements becomes greater.

    Our aim is to determine how far our observed mean value of n measurements, x̄, may differ from the true value μ. Imagine many samples of n measurements. The plot of the distribution of x̄ is narrower than the distribution of the individual x. The distribution of the sample means approximates a normal distribution, even in cases where the distribution of the individual results is not normal.

    Statistical theory predicts that the standard deviation of x̄ is σ/√n, called the standard error of the mean (SE). In practice, of course, we do not know σ and so we use the standard deviation of our sample, s, as an estimate of σ, and s/√n as an estimate of the standard error of the mean.

    For a normally distributed quantity, any value picked at random will have a 68% chance of lying within one standard deviation of the mean. As the sample means themselves are normally distributed, with a standard deviation (of the means) equal to the standard error, there is a 68% chance that our sample mean lies within one standard error of the true mean. Conversely there is a 68% chance that the true value lies within one standard error of our sample mean.
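The claim that sample means scatter with standard deviation σ/√n is easy to check numerically. A small simulation sketch (Python; the population here is artificial, drawn from a normal distribution with σ = 1, so this illustrates the theory rather than any lab data):

```python
import math
import random

def spread_of_sample_means(n=25, trials=2000, seed=1):
    """Standard deviation of the means of many samples of size n,
    drawn from a normal population with mean 0 and sigma 1.
    Theory predicts a value close to sigma / sqrt(n), i.e. 0.2
    for n = 25."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
             for _ in range(trials)]
    grand_mean = sum(means) / trials
    return math.sqrt(sum((m - grand_mean) ** 2 for m in means)
                     / (trials - 1))
```

With n = 25 the result comes out near 1/√25 = 0.2, even though each individual reading scatters with σ = 1.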

    Example: 5 independent determinations of the mass, m, in grams of a piece of brass are:

        i    x_i (g)    x̄ − x_i    (x̄ − x_i)²
        1    10.13       0.04      16 × 10⁻⁴
        2    10.20      −0.03       9 × 10⁻⁴
        3    10.18      −0.01       1 × 10⁻⁴
        4    10.17       0          0
        5    10.17       0          0

    The mean: $\bar{x} = \frac{1}{5}\sum_{i=1}^{5} x_i = 10.17$

    The standard deviation: $s = \sqrt{\frac{1}{5-1}\sum_{i=1}^{5}(x_i - \bar{x})^2} = 0.0255$

    The standard error of the mean: $SE = \frac{s}{\sqrt{5}} = 0.0114$

    The 95% confidence limit is ±0.023 (about twice the standard error), assuming a normal distribution.

    The result is then correctly given as:

    m = (10.17 ± 0.02) g (95% confidence limit)

    Error estimates for single determinations

    The above method should be used whenever possible. However, time constraints will sometimes not allow repeated determinations. To enable a student to give some meaning to his experimental results where the SE is not known the practices listed below may be used.

    1. If an experimental item is stamped with a physical quantity, say 10.2 g, and an independent measurement of the quantity cannot be made, it is a reasonable guess to say that it is correct to the nearest figure. In this case, it would be between 10.15 and 10.25 g, or (10.20 ± 0.05) g.

    2. Digital multimeters used in the laboratory generally have an accuracy of ±0.04% of full scale. This figure however is only applicable for steady readings. When readings wander it is best to observe the range for a few seconds and estimate the most likely value and uncertainty.

    3. If only one determination of a physical quantity can be made in the time available (or for some other reason) and if the accuracy of the measuring device is unknown (e.g. the measurement of the length of a brass rod with a cheap ruler), then a reasonable guess for the accuracy is the limit of reading, or the smallest graduated division.

    Combining Errors

    If two quantities x₁ and x₂ with standard deviations s₁ and s₂ are combined in a formula x = x₁ + x₂ (or x = x₁ − x₂), then the variance s² of x is given by

    $$s^2 = s_1^2 + s_2^2 \qquad (5)$$

    So the error Δx is best taken as

    $$\Delta x = \sqrt{(\Delta x_1)^2 + (\Delta x_2)^2} \qquad (6)$$

    If a formula x = x₁x₂ or x = x₁/x₂ is used to calculate a result, then the variance s² of x is given by

    $$\frac{s^2}{x^2} = \frac{s_1^2}{x_1^2} + \frac{s_2^2}{x_2^2} \qquad (7)$$

    Thus, the fractional error Δx/x is best taken as

    $$\frac{\Delta x}{x} = \sqrt{\frac{(\Delta x_1)^2}{x_1^2} + \frac{(\Delta x_2)^2}{x_2^2}} \qquad (8)$$

    In 1st year practical classes you used a simplification of these formulae - directly adding the errors or fractional errors. This simplification gives too large a result - it supposes that all errors are in the same sense. In truth, the errors are as likely to cancel as to add. The error distribution of a sum or difference is the convolution of the error distributions of the contributing quantities; the convolution of two normal distributions with SDs σ₁ and σ₂ is a normal distribution with SD $\sigma = \sqrt{\sigma_1^2 + \sigma_2^2}$. From this result, the above formulae follow.
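Equations (6) and (8) translate directly into code. A minimal sketch (Python; the function names are ours, chosen for illustration):

```python
import math

def error_of_sum(dx1, dx2):
    """Error of x1 + x2 or x1 - x2: absolute errors added in
    quadrature, as in equation (6)."""
    return math.sqrt(dx1 ** 2 + dx2 ** 2)

def fractional_error_of_product(x1, dx1, x2, dx2):
    """Fractional error of x1 * x2 or x1 / x2: fractional errors
    added in quadrature, as in equation (8)."""
    return math.sqrt((dx1 / x1) ** 2 + (dx2 / x2) ** 2)
```

For example, combining errors of 3 and 4 in a sum gives √(9 + 16) = 5, noticeably less than the 7 that the 1st-year rule of direct addition would give.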

    Graphing

    To determine the experimental error we can apply two approaches. Either we repeat the measurements so we can use the statistical analysis described previously or we take data over a range of some parameter we can easily control. Once we have such a data series we can plot that and fit the data to whatever relationship theory predicts. Generally we will plot our data in such a way that we have a linear relationship and fit the data to a straight line. This might take some manipulation of the measured quantities before we can plot them.

    For example when examining the penetration of gamma radiation through matter we have the relationship,

    $$I = I_0 e^{-\mu x} \qquad (9)$$

    I is the intensity of the radiation (measured by the number of counts we detect), x is the thickness of the particular material the radiation is passing through, and μ is the attenuation coefficient for the material. We can determine μ by plotting the right graph. Plotting the natural logarithm of the number of counts, ln(I), on the y-axis and x on the x-axis, the data should be linear, according to the theory, since ln(I) = ln(I₀) − μx; the slope of the line is −μ.

    When plotting a straight line graph the best approach is to fit a straight line to your data using the method of least squares. In this procedure the line is chosen so that the sum of the squares of the distances (in the y-direction) from each point to the line is minimized. There is an analytic expression that can be used to calculate the slope and y-intercept of the line of best fit this way. This is done when you add a trendline in EXCEL for example. It won't be necessary to calculate this from first principles, you can just use the built in functions in EXCEL (or whatever graphing package you prefer). The error in the slope of the line and the y-intercept can also be determined from how far away the points are from the fitted line. Every time you plot a straight line graph you must include a measure of the uncertainty of the fit.
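For reference, the analytic least-squares expressions, together with the standard errors of the slope and intercept estimated from the scatter about the line (comparable to what EXCEL's LINEST reports), can be written out as follows. This is a sketch in Python rather than EXCEL, under the usual assumption of equal weights for all points:

```python
import math

def fit_line(xs, ys):
    """Least-squares fit of y = m*x + b.

    Returns (m, b, se_m, se_b). The standard errors are estimated
    from the residual scatter of the points about the fitted line,
    with n - 2 degrees of freedom for a two-parameter fit.
    """
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = ybar - m * xbar
    # residual variance about the fitted line
    resid_var = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    se_m = math.sqrt(resid_var / sxx)
    se_b = math.sqrt(resid_var * (1.0 / n + xbar ** 2 / sxx))
    return m, b, se_m, se_b
```

Points lying exactly on a line give zero standard errors; real data will not.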

    Error Exercises

    Part 1: Statistical analysis of sample data

    Look at the data sheet provided. This was taken by measuring background counts with a Geiger-Müller tube and a counter. There are 200 background counts, each taken over a 30 second period. This data will comprise your population. The population has been divided into twenty samples, each of ten readings, by considering each consecutive series of ten readings to be a sample.

    Using the spreadsheet program EXCEL carry out the following analysis. (Look at the separate EXCEL help notes if you require assistance with using a spreadsheet.)

    1. For each sample determine: the mean, standard deviation, standard error, 68% and 95% confidence limits. Use the functions AVERAGE (mean) and STDEV (standard deviation). Then apply the required formula given in these notes for the remaining quantities. (2 marks)

    2. For the population determine the mean and standard deviation. What proportion of your population lies within one and two standard deviations from the mean? Use the FREQUENCY function to sort your data. (1 mark)

    3. Plot a histogram for your population (a column graph using your sorted data; show each count value as a separate column, don't group them together (bins of 1)). Plot on this histogram the normal curve corresponding to your population mean and standard deviation (use equation 4 to generate the data; you can put any convenient value for A. Add this as another series to your chart and then select it and change the chart type to xy scatter.) How well does the normal distribution describe your data? (2 marks)

    4. Determine the proportion of your sample means which lie within one and two standard errors from the population mean. (Note: Each sample has a different standard error.) Comment on how well the sample means and sample standard deviations estimate the population mean and standard deviation. (1 mark)

    Part 2: Combining Errors

    A detector is used to examine the radiation emitted from an unknown source. In one minute it records counts of two specific energies; (3640 ± 120) counts at (1.3 ± 0.1) MeV and (3850 ± 120) counts at (1.2 ± 0.1) MeV. The mass of the active part of the detector is (450 ± 10) g. (All uncertainties are at 95% confidence levels.) Find the absorbed dose (gamma ray energy times number of gammas absorbed divided by the mass of the material), in joules per kilogram due to: (2 marks)

    1. The (1.3 ± 0.1) MeV radiation
    2. The (1.2 ± 0.1) MeV radiation
    3. Both energies.

    Note: 1 MeV = 10⁶ eV and 1 eV = 1.6 × 10⁻¹⁹ J

    Part 3: Errors from straight line graph

    To investigate how gamma rays are absorbed in matter a detector records the number of counts from a source with an increasing thickness of aluminium in between the source and detector. As more aluminium is added the counts drop. From theory we expect a linear dependence if we plot the log of the counts, ln C, versus the thickness of aluminium, x.

    Use EXCEL to plot a graph of the following data (xy scatter, just points, don't join them). Label it fully and fit a straight line to the points. Show the equation of the line on the graph. Use the LINEST function to determine the error in the slope and y-intercept. (2 marks)

    x (cm) (x-axis)    ln(C) (y-axis)
    0.0                7.605
    0.2                7.558
    0.4                7.528
    0.6                7.489
    0.8                7.441
    1.0                7.394
    1.2                7.370

    Modified February 2012
