
Page 1: Experimental Uncertainty


Experimental Uncertainty

Measure of data validity and accuracy.

Representative of the overall result.

Precision and accuracy.

Uncertainty vs. error.

Types of error:

Fixed (systematic).

o Predictable.

o Same for each reading.

o Removed by calibrations and corrections.

o Decreases accuracy.

Random (non-repeatable).

o Unpredictable.

o Different for every reading.

o Cannot be removed.

o Decreases precision.

Sources of errors.

Manufacturing Errors.

Design Inadequacy.

Operating Errors.

Environmental Errors.

Application Errors.

Page 2: Experimental Uncertainty


Uncertainty estimation addresses random errors.

Assumption: the fixed (systematic) error is negligible, ensured by:

o Proper construction and calibration of equipment.

o Good reading and recording of data.

Three steps of uncertainty estimation.

1) Confidence limit.

Unlimited choices in principle; the standard engineering practice is 95%.

2) Uncertainty interval ±Δx.

Should be small, and the error is assumed equally likely to fall on either side of the reading.

Single-sample experiment: ± one-half the smallest scale division.

Population of data (N data points):

N > 25 (normal distribution): Δx = ±2 σ_m (95% confidence).

N < 25 (t-distribution): Δx = ±t_{α/2,ν} σ_m, with α = 0.05 and ν = N − 1.

Mean standard deviation:

σ_m = [ Σ_{i=1}^{N} (x_i − x̄)² / (N(N − 1)) ]^(1/2)

3) Propagation analysis.

Relative uncertainty u = Δx/x.

Multi-variable measurements, e.g. ρ = f(h, P, T):

u_ρ = ±[ (h/ρ · ∂ρ/∂h · u_h)² + (P/ρ · ∂ρ/∂P · u_P)² + (T/ρ · ∂ρ/∂T · u_T)² ]^(1/2)

Results comparison (Exp./Exp. and Exp./Theory).

Overlapping: two results agree when their intervals x̄ ± Δx overlap.

Percentage error: E = |x_app − x_exact| / x_exact × 100%.

Example: E = |3.1 − 3.14159265| / 3.14159265 × 100 ≈ 1.3%.
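As a quick sanity check, the percentage-error comparison above can be scripted in a few lines of Python (the helper name percent_error is illustrative, not part of the original notes):

```python
import math

def percent_error(x_app: float, x_exact: float) -> float:
    """Percent disagreement E = |x_app - x_exact| / |x_exact| * 100."""
    return abs(x_app - x_exact) / abs(x_exact) * 100.0

# Slide example: an approximate value of pi compared with the exact value.
print(f"E = {percent_error(3.1, math.pi):.1f} %")   # about 1.3 %
```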


Page 4: Experimental Uncertainty


Ex. Determine the uncertainty interval of the given mass measurements.

m [kg] = 20.6, 20.9, 21.5, 20.6, 20.9, 21.3, 21.8, 20.7, 21.4, 21.6.

x̄ = (1/N) Σ m_i = (20.6 + 20.9 + … + 21.6)/10 = 21.13 kg

σ_m = [ Σ (m_i − x̄)² / (N(N − 1)) ]^(1/2) = [ 1.76 / (10 × 9) ]^(1/2) = 0.140 kg

α = 0.05,  ν = N − 1 = 9,  t_{α/2,ν} = t_{0.025,9} = 2.262

Δm = t_{0.025,9} σ_m = 2.262 × 0.140 = 0.317 kg

m = 21.1 ± 0.317 kg
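The same numbers can be reproduced with a short Python sketch; NumPy and SciPy are assumed available, with scipy.stats.t standing in for the printed t-table:

```python
import numpy as np
from scipy.stats import t

m = np.array([20.6, 20.9, 21.5, 20.6, 20.9, 21.3, 21.8, 20.7, 21.4, 21.6])  # kg

N = m.size
mean = m.mean()                           # x_bar ~ 21.13 kg
sigma_m = m.std(ddof=1) / np.sqrt(N)      # standard deviation of the mean, ~0.140 kg
t_val = t.ppf(1 - 0.05 / 2, df=N - 1)     # t_{0.025,9} ~ 2.262 (95 % confidence)
dm = t_val * sigma_m                      # uncertainty interval, ~0.32 kg

print(f"m = {mean:.1f} +/- {dm:.2f} kg")  # m = 21.1 +/- 0.32 kg
```

The last digit differs slightly from the hand calculation above, which rounds sigma_m to 0.140 kg before multiplying by t.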

Page 5: Experimental Uncertainty


Ex. ρ = k x² P T⁴, with T = 4.0 ± 0.16 °C, P = 120 ± 3 Pa, x = 2.0 ± 0.02 mm, and k = 3 s²/(m⁴·°C⁴) taken as exact. Determine the density relative uncertainty u_ρ.

Relative uncertainties of the measured quantities:

u_T = 0.16/4.0 = 0.040 (4.0%)

u_P = 3/120 = 0.025 (2.5%)

u_x = 0.02/2.0 = 0.010 (1.0%)

For the power law ρ = k x² P T⁴ the weights in the propagation formula are (x/ρ)(∂ρ/∂x) = 2, (P/ρ)(∂ρ/∂P) = 1, and (T/ρ)(∂ρ/∂T) = 4, so

u_ρ = ±[ (2 u_x)² + (u_P)² + (4 u_T)² ]^(1/2) = ±[ (0.020)² + (0.025)² + (0.160)² ]^(1/2) = ±0.163 = ±16.3%

ρ = k x² P T⁴ = 3 × (0.002 m)² × (120 Pa) × (4.0 °C)⁴ = 0.369 kg/m³

Δρ = u_ρ ρ = 0.163 × 0.369 = 0.060 kg/m³

ρ = 0.369 ± 0.060 kg/m³
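Assuming the values as reconstructed above, the power-law weighting of the relative uncertainties can be checked with a minimal Python sketch:

```python
import math

# Given measurements (k is treated as exact)
x, dx = 2.0, 0.02      # mm
P, dP = 120.0, 3.0     # Pa
T, dT = 4.0, 0.16      # deg C

u_x, u_P, u_T = dx / x, dP / P, dT / T    # 1.0 %, 2.5 %, 4.0 %

# rho = k * x**2 * P * T**4: the exponents 2, 1 and 4 weight the relative uncertainties
u_rho = math.sqrt((2 * u_x) ** 2 + (1 * u_P) ** 2 + (4 * u_T) ** 2)
print(f"u_rho = {100 * u_rho:.1f} %")     # about 16.3 %
```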

Page 6: Experimental Uncertainty


Ex. A velocity v is computed from the measured quantities F = 200 ± 6 N, ρ = 0.369 ± 0.060 kg/m³ (the density obtained in the previous example), and g = 9.8 ± 0.1 m/s², with the constant k = 1 taken as exact. Determine the velocity relative uncertainty u_v.

Relative uncertainties of the measured quantities:

u_F = 6/200 = 0.030 (3.0%)

u_ρ = 0.060/0.369 = 0.163 (16.3%)

u_g = 0.1/9.8 = 0.010 (1.0%)

Propagating these through the relation defining v gives

u_v = 0.102 = 10.2%

Δv = u_v v = 0.102 × 2479.4 = 252.9 m/s

v = 2479.4 ± 252.9 m/s

Page 7: Experimental Uncertainty


Ex. P = k h + P_a, with P_a = 101 ± 1 kPa, h = 50 ± 0.5 m, and k = 9.81 ± 0.15 kN/m³. Determine the pressure relative uncertainty u_P.

Relative uncertainties of the measured quantities:

u_Pa = 1/101 = 0.010 (1.0%)

u_h = 0.5/50 = 0.010 (1.0%)

u_k = 0.15/9.81 = 0.015 (1.5%)

P = k h + P_a = 9.81 × 50 + 101 = 591.5 kPa

With ∂P/∂k = h, ∂P/∂h = k, and ∂P/∂P_a = 1, the propagation formula gives

u_P = ±[ ((k h/P) u_k)² + ((k h/P) u_h)² + ((P_a/P) u_Pa)² ]^(1/2) = ±[ (0.012)² + (0.0083)² + (0.0017)² ]^(1/2) = ±0.015 = ±1.5%

ΔP = u_P P = 0.015 × 591.5 = 8.9 kPa

P = 591.5 ± 8.9 kPa
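A minimal Python check of this propagation, with the partial derivatives of P = k h + P_a written out explicitly (variable names are illustrative):

```python
import math

# Given measurements
k,  dk  = 9.81, 0.15    # kN/m^3
h,  dh  = 50.0, 0.5     # m
Pa, dPa = 101.0, 1.0    # kPa

P = k * h + Pa                              # 591.5 kPa

u_k, u_h, u_Pa = dk / k, dh / h, dPa / Pa   # ~1.5 %, 1.0 %, 1.0 %

# Weight each relative uncertainty by (x_i / P) * dP/dx_i
term_k  = (k * h / P) * u_k                 # dP/dk = h
term_h  = (k * h / P) * u_h                 # dP/dh = k
term_Pa = (Pa / P) * u_Pa                   # dP/dPa = 1

u_P = math.sqrt(term_k ** 2 + term_h ** 2 + term_Pa ** 2)
print(f"u_P = {100 * u_P:.1f} %,  P = {P:.1f} +/- {u_P * P:.1f} kPa")
```

The printed interval comes out a tenth of a kPa above the hand result because the hand calculation rounds u_P to 0.015 before multiplying by P.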

Page 8: Experimental Uncertainty


Uncertainty Analysis

INTRODUCTION: It is important to understand the sources of errors and uncertainties, not only to ensure proper experimental procedure but also to ensure the accuracy and precision of the results. There are two types of errors, which will be explained shortly. Most errors can be determined using a statistical approach. Reference [1] outlines the statistical procedures and computer methods commonly used in analyzing experimental data. The purpose of measurement is to determine the value of a property of interest (the measurand) by means of an experimental procedure. Examples of quantities of interest include: the boiling point of water at 1 atmosphere; the Rockwell hardness number of a given material; the tensile strength of an elastic material; the length of a metal bar.

The objective is to determine a value that is representative of the overall result. The variation of the results is due to the many influence quantities that are not constant, i.e., random errors (which can be modeled statistically).

Uncertainty analysis is a procedure used to quantify data accuracy and validity, and it is useful during experiments.

TYPES OF ERRORS: Fixed Error: Usually called systematic error, it is caused by faults in the measuring instruments or technique. Fixed uncertainty does not necessarily mean that the uncertainty is repeatable; it may be that the uncertainty involves physics that has not been accounted for in the analysis. Fixed errors are the same for each reading and can be eliminated by calibration, e.g., measuring the length of a table with a steel tape that has a kink in it, or measuring the period of a pendulum with a clock that runs too fast. Fixed errors are hard to discover. They affect the accuracy: if an experiment has low fixed error, it is said to be accurate.

Random Error: Different for each reading and associated with unpredictable variations in the conditions under which the experiment is conducted, e.g., changes in room temperature, electrical noise from nearby machinery, imperfect connections, or improper measurements. These kinds of errors are hard to remove but easy to discover. They affect the precision: if an experiment has low random error, it is said to be precise. They are usually quantified by the standard deviation.

The difference between accuracy and precision is illustrated in Figure 1, where the black dots represent data points taken in a measurement of a quantity whose true value is at the center of the circles. When thinking about uncertainty, it is important to understand these associations, so they are worth repeating:

Random uncertainty decreases the precision of an experiment. Systematic uncertainty decreases the accuracy of an experiment.

Page 9: Experimental Uncertainty


Figure 1: A "bull's-eye" plot showing the distinction between precision and accuracy in a measurement.

SOURCES OF ERRORS:

Manufacturing Errors: These errors can be eliminated by calibration.

Design Inadequacy: a] Assuming a linear relation when designing an instrument or a component that is used as a measure of force or pressure, which might not be accurate. b] Assuming that loading and unloading behavior are the same. c] Friction effects on accuracy.

Operating Errors: a] Failure to read indicated values correctly. b] Failure to apply the correct pressure between the measuring device and the object to be measured. c] Failure to apply an instrument squarely to a component.

Environmental Errors: a] Variations of local values such as pressure, temperature, and acceleration lead to errors. b] External disturbances such as vibrations, light reflection, wind, and many others lead to additional errors.

Application Errors: A thermometer and its casing inserted inside a pipe to measure the temperature of a hot flowing fluid will allow heat to escape to the surroundings, and hence the indicated temperature might not be the same as the fluid temperature.

Note that blunders such as calculation errors are not a source of uncertainty; they can always be eliminated completely by careful work. In your laboratory reports, never list misreading the instrument or using the wrong units as a source of uncertainty. Keep in mind that you should always try to keep the error within 5%.

ESTIMATION OF UNCERTAINTY: The estimation of the uncertainty depends on the size of the population of sample points. For a large population, greater than 25 points (n > 25), the normal distribution is usually used, whereas for a small population, fewer than 25 points (n < 25), the t-distribution is used.

Page 10: Experimental Uncertainty


LARGE POPULATION (N > 25): To estimate the random uncertainty in a given set of experimental data, it is required to calculate the standard deviation, which is usually associated with this kind of uncertainty. To illustrate, consider ten students measuring the diameter of a steel ball with a Vernier caliper. It is almost impossible for all the measurements to be identical. The sources of error may include, but are not limited to:

Some students tighten the Vernier caliper more than others, resulting in different readings.

The balls are not perfectly round.

The ball is not centered between the jaws.

The temperature of the ball may change, causing contraction or expansion.

There are two common ways to state the uncertainty of a result: in terms of the standard deviation of the mean, σ_m, or in terms of a percent or fractional uncertainty, u. The relationship between u and σ_m for a quantity of interest x is as follows:

u = σ_m / x̄   (1)

The quantity x_i is reading i, and σ_m is the mean standard deviation, given by

σ_m = [ (1/(n(n − 1))) Σ_{i=1}^{n} (x_i − x̄)² ]^(1/2)   (2)

where n is the number of data points, x_i is the value being measured, and x̄ is the mean of the values under consideration. One way to report a result and its uncertainty is in the form x̄ ± σ_m, with the units placed last. For example, if the average mass of an object is found to be 9.2 g and the uncertainty in the mass is 0.3 g, one would write m = 9.2 ± 0.3 g.

The other way to report a result and its uncertainty is in the form x̄ ± u(%), where u(%) is defined as σ_m/x̄ multiplied by 100. For the above example, one can report the result as "The mass of the object is 9.2 grams with an uncertainty of 3.26%", or m = 9.2 g ± 3.26%.

It is preferred that you report your measurements in the first form, i.e., using σ_m. Keep in mind that σ_m has the same units as x̄, while u is always unitless.
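A minimal NumPy sketch of the two reporting forms (the readings array is made-up data, purely for illustration):

```python
import numpy as np

readings = np.array([9.2, 9.5, 8.9, 9.4, 9.0, 9.3, 9.1, 9.6, 8.8, 9.2])  # g, illustrative

n = readings.size
x_bar = readings.mean()
sigma_m = readings.std(ddof=1) / np.sqrt(n)   # standard deviation of the mean, Eq. (2)
u = sigma_m / x_bar                           # fractional uncertainty, Eq. (1)

print(f"{x_bar:.2f} +/- {sigma_m:.2f} g")     # first form: mean +/- sigma_m
print(f"{x_bar:.2f} g +/- {100 * u:.2f} %")   # second form: percent uncertainty
```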

SMALL POPULATION (N < 25): To estimate the random uncertainty for a small number of experimental data points, it is again required to calculate the standard deviation, which is usually associated with this kind of uncertainty. The uncertainty is

E = t_{α/2,ν} σ_m   (3)

where α is usually taken to be 0.02, which corresponds to 98% confidence, ν is the degrees of freedom, defined as ν = n − 1, n is the number of data points, x_i is the value being measured, and σ_m is the standard deviation of the mean. The result should be reported in the form x̄ ± E, with the units placed last. The value of t_{α/2,ν} is obtained from Table IV of reference [2]. For example, if the average mass of an object is found to be 9.2 g and the error in the mass E_m is 0.3 g, one would write m = 9.2 ± 0.3 g.
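If SciPy is available, the tabulated value of t_{α/2,ν} can also be obtained programmatically; a one-line check using the α = 0.02 quoted above and, as an assumed example, ν = 9:

```python
from scipy.stats import t

alpha, nu = 0.02, 9
print(round(t.ppf(1 - alpha / 2, df=nu), 3))   # t_{0.01,9} = 2.821 for 98 % confidence
```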

Page 11: Experimental Uncertainty


As in the case of a large population, these results can also be reported as x̄ ± E(%), where E(%) is defined as E/x̄ multiplied by 100.

SINGLE-SAMPLE EXPERIMENT: For a single-sample experiment, the uncertainty can be approximated by ±½ of the minimum measuring unit. For example, if five students each measure the weight of a specimen only once and find it to be 9.57 g, then the error is one-half the least unit of the scale (0.01 g). Therefore, the single-sample uncertainty is ±0.005 g and the resulting weight is written as 9.57 ± 0.005 g.

ERROR PROPAGATION: Often we deal with functions of many parameters, each with its own uncertainty. To obtain the effect of these individual uncertainties on the overall value, and to propagate the error through experimental data, one should proceed as follows: estimate the uncertainty interval for each measured quantity (measurand), i.e., the standard deviation σ_m of that quantity; state the confidence limit on each measurement; and incorporate the propagation of uncertainty into the results, as given in Equation (4). The uncertainty propagation of a calculated value R is given by:

u_R = ±[ (x₁/R · ∂R/∂x₁ · u_{x₁})² + (x₂/R · ∂R/∂x₂ · u_{x₂})² + … ]^(1/2)   (4)

where R = R(x₁, x₂, …) is a function of many variables, and the relative uncertainty of each variable is given by

u_{x₁} = σ_{m,1} / x̄₁   (5)

The quantity σ_{m,1} is the error in reading x₁ (and similarly for the other variables). For example, using Equations (3) and (4), the uncertainty of the density ρ = ρ(h, P, T) is given as

u_ρ = ±[ (h/ρ · ∂ρ/∂h · u_h)² + (P/ρ · ∂ρ/∂P · u_P)² + (T/ρ · ∂ρ/∂T · u_T)² ]^(1/2)   (6)

where u_ρ is the resulting relative uncertainty of the density.
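Equation (4) can be applied to any result R(x₁, …, x_n) without deriving the partial derivatives by hand; the sketch below estimates them by central finite differences (the function propagate and its arguments are illustrative, not part of the original notes):

```python
import math
from typing import Callable, Sequence

def propagate(R: Callable[..., float], x: Sequence[float], u: Sequence[float]) -> float:
    """Relative uncertainty of R(x1, ..., xn) per Eq. (4), using numerical derivatives."""
    R0 = R(*x)
    total = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        step = 1e-6 * xi if xi != 0 else 1e-6
        up, down = list(x), list(x)
        up[i], down[i] = xi + step, xi - step
        dRdxi = (R(*up) - R(*down)) / (2 * step)   # central-difference dR/dx_i
        total += (xi / R0 * dRdxi * ui) ** 2       # (x_i/R * dR/dx_i * u_xi)^2
    return math.sqrt(total)

# Example: the hydrostatic relation P = k*h + Pa from the worked example above.
u_P = propagate(lambda k, h, Pa: k * h + Pa, [9.81, 50.0, 101.0], [0.015, 0.010, 0.010])
print(f"u_P = {100 * u_P:.1f} %")                  # about 1.5 %
```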

PERCENTAGE ERROR: In quick measurements we may not always calculate uncertainties for the quantities we measure. In these cases, the best we can state is that two values disagree by some amount. This disagreement is usually presented as a percent of the value of the quantity. For example, if we did not have uncertainties calculated for two density values, we could say only that they disagree by the percentage E = |ρ_exp − ρ_theory| / ρ_theory × 100.

Page 12: Experimental Uncertainty


ERROR REPORTING: When comparing a physical quantity obtained by two different methods, one wants to know whether the results agree. If uncertainties for one or both numbers (expressed by an associated ±E or ±σ_m) have been calculated, one can say that the two numbers agree with each other if they overlap within their uncertainties. For example, if a theory predicts one value for the density of an object and the measured interval, value ± uncertainty, overlaps that prediction, then we can say the two values agree within the experimental uncertainty. But if the measured interval does not overlap the prediction, then we would be forced to admit that the two values do not agree.

In the case of disagreement, the experimenter faces a problem: what effects have not been accounted for? There could be a source of additional random error that has not been appreciated, or, more vexing, there may be a source of systematic error that is fouling the accuracy of the measurement. Generally, sources of random error are easier to track down and rectify; but in so doing, one may uncover other sources of systematic error that were previously invisible! You will often be asked to determine what the dominant source of error is in a particular experiment. In general, this is a subtle problem, as there is no general method for determining systematic error. However, one important clue can be used when comparing measurements with each other, or with theory: if the measured quantity, including the uncertainty calculated from random sources of error, does not overlap with another expected value (either from another experiment or from theory), then you can assume that systematic error dominates the experimental error. This is especially true when comparing against theoretically calculated values, as the theory almost always assumes some simplifications in order to make the calculation tractable (for example, neglecting the weight of a string or assuming that friction is zero). To reiterate: systematic error comes into an experiment when the experimenter neglects some important physics in the analysis.

The general rules for comparing results in lab reports are these: If uncertainties exist, state the quantities with their uncertainty,

and see if they overlap. If they do, they agree. If not, they don't, and you should try to explain why, that is, discuss the physics of the experiment and try to come up with some sources of systematic error.

If uncertainties do not exist, calculate a percent disagreement. If the percent disagreement is less than a few percent, the results are probably in agreement. If the disagreement is more than ten percent, they are probably not in agreement, and you should try to explain why.

REFERENCES

[1] Bevington, Philip R., and D. Keith Robinson, Data Reduction and Error Analysis for the Physical Sciences, 3rd edition, McGraw-Hill, New York, 2003.

[2] Montgomery, D. C., and Runger, G. C., Applied Statistics and Probability for Engineers, 2nd edition, Wiley, 1999.