No Single NDT Method Will Work for All Flaw Detection or Measurement Applications

No single NDT method will work for all flaw detection or measurement applications. Each of the methods has advantages and disadvantages when compared to other methods. The table below summarizes the scientific principles, common uses and the advantages and disadvantages for some of the most often used NDT methods.

Scientific Principles

Penetrant Testing: Penetrant solution is applied to the surface of a precleaned component. The liquid is pulled into surface-breaking defects by capillary action. Excess penetrant material is carefully cleaned from the surface. A developer is applied to pull the trapped penetrant back to the surface, where it spreads out and forms an indication. The indication is much easier to see than the actual defect.

Magnetic Particle Testing: A magnetic field is established in a component made from ferromagnetic material. The magnetic lines of force travel through the material and exit and reenter the material at the poles. Defects such as cracks or voids cannot support as much flux and force some of the flux outside of the part. Magnetic particles distributed over the component will be attracted to areas of flux leakage and produce a visible indication.

Ultrasonic Testing: High-frequency sound waves are sent into a material by use of a transducer. The sound waves travel through the material and are received by the same transducer or a second transducer. The amount of energy transmitted or received and the time at which the energy is received are analyzed to determine the presence of flaws. Changes in material thickness and changes in material properties can also be measured.

Eddy Current Testing: Alternating electrical current is passed through a coil, producing a magnetic field. When the coil is placed near a conductive material, the changing magnetic field induces current flow in the material. These currents travel in closed loops and are called eddy currents. Eddy currents produce their own magnetic field that can be measured and used to find flaws and characterize conductivity, permeability, and dimensional features.

Radiographic Testing: X-rays are used to produce images of objects using film or another detector that is sensitive to radiation. The test object is placed between the radiation source and the detector. The thickness and density of the material that the X-rays must penetrate affect the amount of radiation reaching the detector. This variation in radiation produces an image on the detector that often shows internal features of the test object.

Main Uses

Penetrant Testing: Used to locate cracks, porosity, and other defects that break the surface of a material and have enough volume to trap and hold the penetrant material. Liquid penetrant testing is used to inspect large areas very efficiently and will work on most nonporous materials.

Magnetic Particle Testing: Used to inspect ferromagnetic materials (those that can be magnetized) for defects that result in a transition in the magnetic permeability of the material. Magnetic particle inspection can detect surface and near-surface defects.

Ultrasonic Testing: Used to locate surface and subsurface defects in many materials including metals, plastics, and wood. Ultrasonic inspection is also used to measure the thickness of materials and to otherwise characterize material properties based on sound velocity and attenuation measurements.

Eddy Current Testing: Used to detect surface and near-surface flaws in conductive materials, such as metals. Eddy current inspection is also used to sort materials based on electrical conductivity and magnetic permeability, and to measure the thickness of thin sheets of metal and of nonconductive coatings such as paint.

Radiographic Testing: Used to inspect almost any material for surface and subsurface defects. X-rays can also be used to locate and measure internal features, confirm the location of hidden parts in an assembly, and measure the thickness of materials.

Main Advantages

Penetrant Testing: Large surface areas or large volumes of parts/materials can be inspected rapidly and at low cost. Parts with complex geometry are routinely inspected. Indications are produced directly on the surface of the part, providing a visual image of the discontinuity. Equipment investment is minimal.

Magnetic Particle Testing: Large surface areas of complex parts can be inspected rapidly. Can detect surface and subsurface flaws. Surface preparation is less critical than it is in penetrant inspection. Magnetic particle indications are produced directly on the surface of the part and form an image of the discontinuity. Equipment costs are relatively low.

Ultrasonic Testing: Depth of penetration for flaw detection or measurement is superior to other methods. Only single-sided access is required. Provides distance information. Minimum part preparation is required. Method can be used for much more than just flaw detection.

Eddy Current Testing: Detects surface and near-surface defects. Test probe does not need to contact the part. Method can be used for more than flaw detection. Minimum part preparation is required.

Radiographic Testing: Can be used to inspect virtually all materials. Detects surface and subsurface defects. Ability to inspect complex shapes and multi-layered structures without disassembly. Minimum part preparation is required.

Disadvantages

Penetrant Testing: Detects only surface-breaking defects. Surface preparation is critical, as contaminants can mask defects. Requires a relatively smooth and nonporous surface. Post cleaning is necessary to remove chemicals. Requires multiple operations under controlled conditions. Chemical handling precautions are necessary (toxicity, fire, waste).

Magnetic Particle Testing: Only ferromagnetic materials can be inspected. Proper alignment of the magnetic field and defect is critical. Large currents are needed for very large parts. Requires a relatively smooth surface. Paint or other nonmagnetic coverings adversely affect sensitivity. Demagnetization and post cleaning are usually necessary.

Ultrasonic Testing: Surface must be accessible to the probe and couplant. Skill and training required is more extensive than for most other techniques. Surface finish and roughness can interfere with inspection. Thin parts may be difficult to inspect. Linear defects oriented parallel to the sound beam can go undetected. Reference standards are often needed.

Eddy Current Testing: Only conductive materials can be inspected. Ferromagnetic materials require special treatment to address magnetic permeability. Depth of penetration is limited. Flaws that lie parallel to the inspection probe coil winding direction can go undetected. Skill and training required is more extensive than for most other techniques. Surface finish and roughness may interfere. Reference standards are needed for setup.

Radiographic Testing: Extensive operator training and skill are required. Access to both sides of the structure is usually required. Orientation of the radiation beam to non-volumetric defects is critical. Field inspection of thick sections can be time consuming. Relatively expensive equipment investment is required. Possible radiation hazard for personnel.


NDT Method Selection

Each NDT method has its own set of advantages and disadvantages and, therefore, some are better suited than others for a particular application. The NDT technician or engineer must select the method that will detect the defect or make the measurement with the highest sensitivity and reliability. The cost effectiveness of the technique must also be taken into consideration. The following table provides some guidance in the selection of NDT methods for common flaw detection and measurement applications.

Eddy Current Inspection Formulas

Ohm's Law:

I = V / Z

Where: I = Current (amperes), V = Voltage (volts), Z = Impedance (ohms)

Impedance:

Z = √(R² + X_L²)

Where: Z = Impedance (ohms), R = Resistance (ohms), X_L = Inductive Reactance (ohms)

Phase Angle:

θ = tan⁻¹(X_L / R)

Where: θ = Phase Angle (degrees), X_L = Inductive Reactance (ohms), R = Resistance (ohms)

Magnetic Permeability:

μ = B / H

Where: μ = Magnetic Permeability (henries/meter), B = Magnetic Flux Density (teslas), H = Magnetizing Force (ampere-turns/meter)
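To make these relationships concrete, here is a minimal Python sketch that computes impedance, phase angle, and current from an assumed set of coil values; the resistance, reactance, and drive-voltage numbers are hypothetical and only illustrate the formulas.

```python
import math

# Hypothetical probe-coil values; illustrative only, not from the original text.
R = 20.0      # coil resistance (ohms)
X_L = 35.0    # inductive reactance (ohms)
V = 2.0       # drive voltage (volts)

Z = math.sqrt(R**2 + X_L**2)              # impedance: Z = sqrt(R^2 + XL^2)
theta = math.degrees(math.atan2(X_L, R))  # phase angle: theta = arctan(XL / R)
I = V / Z                                 # Ohm's law with impedance: I = V / Z

print(f"Z = {Z:.1f} ohm, phase angle = {theta:.1f} deg, I = {I * 1000:.1f} mA")
```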

Relative Magnetic Permeability:

μr = μ / μ0

Where: μr = Relative Magnetic Permeability (dimensionless), μ = Any Given Magnetic Permeability (H/m), μ0 = Magnetic Permeability in Free Space (H/m), which is 1.257 × 10⁻⁶ H/m

Conductivity and Resistivity:

σ = 1 / ρ

Where: σ = Electrical Conductivity (siemens/m), ρ = Electrical Resistivity (ohm-m)

Electrical Conductivity (%IACS), when resistivity is known:

σ (%IACS) = (1.7241 × 10⁻⁶ / ρ) × 100

Where: σ (%IACS) = Electrical Conductivity (%IACS), ρ = Electrical Resistivity (ohm-cm)

Electrical Conductivity (%IACS), when conductivity in S/m or S/cm is known:

σ (%IACS) = (σ_S/m / 5.8 × 10⁷) × 100   or   σ (%IACS) = (σ_S/cm / 5.8 × 10⁵) × 100

Where: σ_S/m = Electrical Conductivity (siemens/meter), σ_S/cm = Electrical Conductivity (siemens/cm)
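The conversions to %IACS can be packaged into a couple of small helper functions. The sketch below assumes the 100% IACS reference values used above (5.8 × 10⁷ S/m, or a resistivity of 1.7241 × 10⁻⁶ ohm-cm); the aluminum-alloy resistivity in the example is a nominal illustrative figure.

```python
# Conversions between resistivity, conductivity, and %IACS,
# assuming 100% IACS = 5.8e7 S/m (resistivity 1.7241e-6 ohm-cm).

IACS_S_PER_M = 5.8e7          # conductivity of the IACS standard (S/m)
IACS_RHO_OHM_CM = 1.7241e-6   # resistivity of the IACS standard (ohm-cm)

def iacs_from_conductivity(sigma_s_per_m: float) -> float:
    """Percent IACS from conductivity in S/m."""
    return sigma_s_per_m / IACS_S_PER_M * 100.0

def iacs_from_resistivity(rho_ohm_cm: float) -> float:
    """Percent IACS from resistivity in ohm-cm."""
    return IACS_RHO_OHM_CM / rho_ohm_cm * 100.0

# Example: an aluminum alloy with a resistivity of about 5.0e-6 ohm-cm
rho = 5.0e-6
print(f"{iacs_from_resistivity(rho):.1f} %IACS")   # about 34.5 %IACS
```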

Current Density:

Jx = J0 e^(−x/δ)

Where: Jx = Current Density at depth x (amps/m²), J0 = Current Density at the Surface (amps/m²), e = Base of the Natural Log = 2.71828, x = Distance Below Surface (m), δ = Standard Depth of Penetration (m)

Standard Depth of Penetration, when electrical conductivity (S/m) is known:

δ = 1 / √(π f μ σ)

Where: δ = Standard Depth of Penetration (m), π = 3.14, f = Test Frequency (Hz), μ = Magnetic Permeability (H/m) (1.257 × 10⁻⁶ H/m for nonmagnetic materials), σ = Electrical Conductivity (siemens/m)

Standard Depth of Penetration, when electrical conductivity (%IACS) is known:

In mm: δ = 660 / √(f μr σ)

In inches: δ = 26 / √(f μr σ)

Where: δ = Standard Depth of Penetration (mm or in), f = Test Frequency (Hz), μr = Relative Magnetic Permeability (dimensionless), σ = Electrical Conductivity (%IACS)

Standard Depth of Penetration, when electrical resistivity (ohm-cm) is known:

In mm: δ = 50,330 √(ρ / (f μr))

In inches: δ = 1,980 √(ρ / (f μr))

Where: δ = Standard Depth of Penetration (mm or in), ρ = Electrical Resistivity (ohm-cm), f = Test Frequency (Hz), μr = Relative Magnetic Permeability (dimensionless)

Standard Depth of Penetration Versus Frequency Chart
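A short Python sketch, assuming SI units and the relationships above, shows how the standard depth of penetration and the relative current density at depth fall out of the formulas; the copper conductivity and 100 kHz test frequency are just example values.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space (H/m), about 1.257e-6

def std_depth_of_penetration(freq_hz, sigma_s_per_m, mu_r=1.0):
    """Standard depth of penetration, delta = 1 / sqrt(pi * f * mu * sigma), in meters."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU_0 * sigma_s_per_m)

def relative_current_density(depth_m, delta_m):
    """Eddy current density relative to the surface: Jx / J0 = exp(-x / delta)."""
    return math.exp(-depth_m / delta_m)

# Example: copper (about 5.8e7 S/m, nonmagnetic) inspected at 100 kHz.
sigma = 5.8e7
f = 100e3
delta = std_depth_of_penetration(f, sigma)
print(f"delta = {delta * 1000:.3f} mm")                          # about 0.21 mm
print(f"J at 3*delta = {relative_current_density(3 * delta, delta):.1%} of surface value")
```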

Eddy Current Field Phase Lag

In radians: β = x / δ

In degrees: β = 57.3 (x / δ)

Where: β = Phase Lag (rad or degrees), x = Distance Below Surface (in or mm), δ = Standard Depth of Penetration (in or mm)

When the electrical conductivity (S/m or %IACS) or the electrical resistivity (ohm-cm) is known rather than δ itself, δ is first obtained from the standard depth of penetration formulas above and the phase lag then follows from β = x / δ. For example, with the conductivity σ in S/m the phase lag in radians is β = x √(π f μ σ), where x is in meters, f is the test frequency (Hz), and μ = μr μ0 is the magnetic permeability (H/m).

Standard Depth of Penetration and Phase Angle

Depth      Relative Strength of Eddy Currents     Phase Lag
Surface    e⁰ = 100%                              0 rad = 0°
1δ         e⁻¹ = 37%                              1 rad = 57.3°
2δ         e⁻² = 14%                              2 rad = 114.6°
3δ         e⁻³ = 5%                               3 rad = 171.9°
4δ         e⁻⁴ = 2%                               4 rad = 229.2°
5δ         e⁻⁵ = 0.7%                             5 rad = 286.5°

Material Thickness Requirement for Resistivity or Conductivity Measurement

When measuring resistivity or conductivity, the thickness of the material should be at least 3 times the standard depth of penetration (t ≥ 3δ) to minimize material thickness effects.

Where: t = Material Thickness, δ = Standard Depth of Penetration

Frequency Selection for Thickness Measurement of Thin Materials

Selecting a frequency that produces a standard depth of penetration that exceeds the material thickness by 25% will produce a phase angle of approximately 90° between the liftoff signal and the material thickness change signal.

Frequency Selection for Flaw Detection and Nonconductive Coating Thickness Measurements

Defect Detection: A test frequency that puts the standard depth of penetration at about the expected depth of the defect will provide good phase separation between the defect and liftoff signals.

Nonconductive Coating Thickness Measurement: To minimize effects from the base metal, the highest practical frequency should be used.
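As a rough numerical companion to the table and guidance above, the Python sketch below regenerates the relative-strength and phase-lag values at whole multiples of δ and then rearranges the depth-of-penetration formula to suggest a test frequency for an assumed defect depth; the aluminum-alloy conductivity and 0.5 mm depth are hypothetical.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

# Reproduce the depth / relative-strength / phase-lag table above.
for n in range(6):
    strength = math.exp(-n)       # relative eddy current strength at n standard depths
    lag_deg = math.degrees(n)     # phase lag of n radians expressed in degrees
    print(f"{n} delta: strength = {strength:6.1%}, phase lag = {lag_deg:6.1f} deg")

# Frequency selection for flaw detection: pick f so that delta lands at the
# expected defect depth.  Solving delta = 1 / sqrt(pi * f * mu * sigma) for f:
def freq_for_target_depth(delta_m, sigma_s_per_m, mu_r=1.0):
    return 1.0 / (math.pi * mu_r * MU_0 * sigma_s_per_m * delta_m**2)

# Hypothetical case: a defect expected about 0.5 mm deep in an aluminum alloy (~2.0e7 S/m).
print(f"suggested test frequency ~ {freq_for_target_depth(0.5e-3, 2.0e7) / 1000:.0f} kHz")
```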

Ultrasonic Formulas

Longitudinal Wave Velocity:

V_L = √( E (1 − ν) / (ρ (1 + ν)(1 − 2ν)) )

Where: V_L = Longitudinal Wave Velocity, E = Modulus of Elasticity, ρ = Density, ν = Poisson's Ratio

Shear Wave Velocity:

V_S = √( G / ρ ) = √( E / (2 ρ (1 + ν)) )

Where: V_S = Shear Wave Velocity, E = Modulus of Elasticity, ρ = Density, ν = Poisson's Ratio, G = Shear Modulus

Wavelength:

λ = V / F

Where: λ = Wavelength, V = Velocity, F = Frequency
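As an illustration of the velocity and wavelength formulas, the following Python sketch uses nominal handbook-style values for a carbon steel; the modulus, density, and Poisson's ratio shown are typical round numbers assumed for the example, not values from this document.

```python
import math

def longitudinal_velocity(E, rho, nu):
    """V_L = sqrt(E (1 - nu) / (rho (1 + nu)(1 - 2 nu)))"""
    return math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))

def shear_velocity(E, rho, nu):
    """V_S = sqrt(E / (2 rho (1 + nu))), equivalent to sqrt(G / rho)"""
    return math.sqrt(E / (2 * rho * (1 + nu)))

# Nominal values for a carbon steel (illustrative only).
E = 200e9      # modulus of elasticity (Pa)
rho = 7800.0   # density (kg/m^3)
nu = 0.29      # Poisson's ratio

VL = longitudinal_velocity(E, rho, nu)   # about 5,800 m/s
VS = shear_velocity(E, rho, nu)          # about 3,150 m/s
wavelength = VL / 5e6                    # lambda = V / F at a 5 MHz test frequency
print(f"VL = {VL:.0f} m/s, VS = {VS:.0f} m/s, lambda = {wavelength * 1000:.2f} mm")
```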

Refraction (Snell's Law):

sin θ1 / sin θ2 = V1 / V2

Where: θ1 = Angle of the Incident Wave, θ2 = Angle of the Refracted Wave, V1 = Velocity of the Incident Wave (material 1), V2 = Velocity of the Refracted Wave (material 2)

Acoustic Impedance:

Z = ρ V

Where: Z = Acoustic Impedance, ρ = Density, V = Velocity

Reflection Coefficient:

R = ( (Z2 − Z1) / (Z2 + Z1) )²

Where: R = Reflection Coefficient (fraction of incident energy reflected), Z1 = Acoustic Impedance of Medium 1, Z2 = Acoustic Impedance of Medium 2
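A small Python sketch, using nominal values for water and steel, shows how Snell's law, acoustic impedance, and the reflection coefficient combine for an immersion inspection; the velocities and densities are illustrative handbook-style figures assumed for the example.

```python
import math

def refracted_angle_deg(theta1_deg, v1, v2):
    """Snell's law: sin(theta1) / sin(theta2) = V1 / V2, solved for theta2."""
    s = math.sin(math.radians(theta1_deg)) * v2 / v1
    if s >= 1.0:
        raise ValueError("beyond the critical angle; no refracted wave of this mode")
    return math.degrees(math.asin(s))

def reflection_coefficient(z1, z2):
    """Fraction of incident energy reflected: R = ((Z2 - Z1) / (Z2 + Z1))^2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Water-to-steel immersion example with nominal values.
v_water, rho_water = 1480.0, 1000.0     # m/s, kg/m^3
v_steel, rho_steel = 5900.0, 7800.0

z_water = rho_water * v_water           # acoustic impedance Z = rho * V
z_steel = rho_steel * v_steel

print(f"refracted angle for 10 deg incidence: {refracted_angle_deg(10.0, v_water, v_steel):.1f} deg")
print(f"energy reflected at the interface: {reflection_coefficient(z_water, z_steel):.0%}")
```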

Near Field:

N = D² F / (4 V) = D² / (4 λ)

Where: N = Near Field Length, D = Transducer Diameter, λ = Wavelength, V = Velocity, F = Frequency

Beam Spread Half Angle:

sin(θ/2) = 1.22 V / (D F) = 1.22 λ / D

Where: θ/2 = Beam Spread Half Angle, λ = Wavelength, D = Transducer Diameter, V = Velocity, F = Frequency

Decibel (dB) Gain or Loss:

dB = 20 log (P2 / P1)

Where: dB = Decibels, P1 = Pressure Amplitude 1, P2 = Pressure Amplitude 2
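The near field, beam spread, and dB relationships can be exercised with the short sketch below; the transducer diameter, frequency, material velocity, and echo amplitudes are assumed example values only.

```python
import math

# Hypothetical contact transducer on steel; numbers are illustrative only.
D = 0.0095        # transducer element diameter (m), roughly 3/8 inch
F = 5e6           # test frequency (Hz)
V = 5900.0        # longitudinal velocity in the test material (m/s)

wavelength = V / F
near_field = D**2 * F / (4 * V)                              # N = D^2 F / (4 V)
half_angle = math.degrees(math.asin(1.22 * wavelength / D))  # sin(theta/2) = 1.22 lambda / D

# dB difference between two echo amplitudes (screen heights, volts, etc.)
P1, P2 = 0.8, 0.2
dB = 20 * math.log10(P2 / P1)

print(f"N = {near_field * 1000:.1f} mm, half angle = {half_angle:.1f} deg, dB = {dB:.1f}")
```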

Angle Beam Inspection Calculations

When performing an angle beam inspection, it is important to know where the sound beam is encountering an interface and reflecting. The reflection points are sometimes referred to as nodes. The location of the nodes can be obtained by using trigonometric functions or by using the trig-based formulas given below.

Nodes - surface points where the sound waves reflect.

Skip Distance - surface distance between two successive nodes.

Leg 1 (L1) - sound path in the material to the 1st node.

Leg 2 (L2) - sound path in the material from the 1st to the 2nd node.

θR - angle of the refracted sound wave (refracted angle).

Skip Distance and Surface Distance Formulas

Skip Distance = 2 T tan(θR)

Surface Distance = Sound Path × sin(θR)

Leg 1 and Leg 2 Formulas

Leg 1 = Leg 2 = T / cos(θR)

Flaw Depth Formulas

Flaw Depth (1st Leg) = Sound Path × cos(θR)

Flaw Depth (2nd Leg) = 2 T − (Sound Path × cos(θR))

Where: T = Material Thickness, θR = Refracted Angle, Sound Path = distance traveled along the beam to the reflector
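As a worked example of these angle-beam relationships, the Python sketch below assumes a 60 degree refracted shear wave in a 25 mm plate and an indication at a 60 mm sound path; all of the numbers are hypothetical.

```python
import math

# Hypothetical 60-degree refracted shear wave in a 25 mm thick plate.
T = 25.0               # material thickness (mm)
theta_R = math.radians(60.0)

skip_distance = 2 * T * math.tan(theta_R)       # surface distance for one full V-path
leg = T / math.cos(theta_R)                     # Leg 1 = Leg 2

# Suppose an indication appears at a sound path of 60 mm (second leg, since 60 > leg of 50).
sound_path = 60.0
surface_distance = sound_path * math.sin(theta_R)
if sound_path <= leg:
    flaw_depth = sound_path * math.cos(theta_R)            # 1st leg
else:
    flaw_depth = 2 * T - sound_path * math.cos(theta_R)    # 2nd leg

print(f"skip = {skip_distance:.1f} mm, leg = {leg:.1f} mm")
print(f"surface distance = {surface_distance:.1f} mm, flaw depth = {flaw_depth:.1f} mm")
```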

Standards and Specifications

A standard is something that is established for use as a basis of comparison. There are standards for practically everything that can be measured or evaluated ... from time to materials to processes. Congress created the National Institute of Standards and Technology in 1901 at the start of the industrial revolution to provide the measurements and standards needed to resolve and prevent disputes over trade and to encourage standardization. NIST develops technologies, measurement methods and standards that help US companies compete in the global marketplace. NDT personnel are sometimes required to use calibration standards that are traceable back to a standard held by NIST. This might be a conductivity standard, which can be shown to have the same electrical conductivity as a NIST standard; or it could be a setup standard that was measured with a micrometer that was calibrated using a NIST standard.

A notable development of the twentieth century is the preparation and use of standard specifications to improve the consistency of manufacturing materials and processes, and the resulting products. A specification is a detailed description of how to produce something or how to perform a particular task. Anytime a product is marked as meeting a specification or a contract requires use of a specification, the product or service must meet the requirements of the document. A standard specification is the result of agreement among the involved parties and usually involves acceptance for use by some organization. Standard specifications do not, however, necessarily imply a degree of permanence (like dimensional or volumetric standards), because technical advances in a given field usually call for periodic revisions to the requirements.

Properly prepared, standards can be of great value to industry. Some of the advantages are:

They usually represent the combined knowledge of a large group of individuals, including producers, consumers and other interested parties, and thus reduce the possibility of misinterpretation.

They give the manufacturer a standard of production and, therefore, tend to result in a more uniform process or product.

They lower unit cost by making standard processes and mass production possible.

They permit the consumer to use a specification that has been tried and is enforceable.

They set standards of testing and measurement and hence permit the comparison of results.

The disadvantage of standard specifications is that they tend to "freeze" practices, sometimes based on little data or knowledge, and slow the development of better practices.

Standards always represent an effort by some organized group of people. Any such organization, be it public or private, becomes the standardizing agency. Various levels of these agencies exist, ranging from a single business to local government to national groups to international organizations. The professional and industrial organizations in the United States that lead the development of standards relative to the field of NDT include ASTM International, the Society of Automotive Engineers (SAE), the American Iron and Steel Institute (AISI), the American Welding Society (AWS) and ASME International. Many specifications have also been developed by US government agencies such as the Department of Defense (DOD). However, the US government is downscaling its specification efforts, and many military specifications are being converted to specifications controlled by industry groups. For example, MIL-I-25135 has historically been the controlling document for both military and civilian penetrant material uses. The recent change in military specification management has led to the requirement that the Mil specification be incorporated into SAE's AMS 2644, and industry is transitioning toward the use of this specification.

Generally, the desired tendency is for a given standard to become more uniformly used and accepted. One method of increasing standardization is for a large agency to adopt a standard developed by a smaller one. In the US, thousands of standard specifications are recognized by the American National Standards Institute (ANSI), which is a national, yet private, coordinating agency. At the international level, the International Organization for Standardization (ISO) performs this function. The ISO was formed in 1947 as a non-governmental federation of standardization bodies from over 60 countries. The United States is represented within the ISO by ANSI.

Additional information and links to the standards and specification organizations previously mentioned are provided below.

ASTM International

Founded in 1898, ASTM International is a not-for-profit organization that provides a global forum for the development and publication of voluntary consensus standards for materials, products, systems, and services. Formerly known as the American Society for Testing and Materials, ASTM International provides standards that are accepted and used in research and development, product testing, quality systems, and commercial transactions around the globe. Over 30,000 individuals from 100 nations are members of ASTM International; they include producers, users, consumers, and representatives of government and academia. In over 130 varied industry areas, ASTM standards serve as the basis for manufacturing, procurement, and regulatory activities.

Each year, ASTM publishes the Annual Book of ASTM Standards, which consists of approximately 70 volumes. Most of the NDT related documents can be found in Volume 03.03, Nondestructive Testing, which is under the jurisdiction of ASTM Committee E-7. Each standard practice or guide is the direct responsibility of a subcommittee. For example, document E-94 is the responsibility of Subcommittee E07.01 on Radiology (X and Gamma) Methods. This committee, composed of technical experts from many different industries, must review the document every five years, and if it is not revised, it must be reapproved or withdrawn.

The Society of Automotive Engineers (SAE)

The Society of Automotive Engineers is a professional society that serves as a resource for technical information and expertise used in designing, building, maintaining, and operating self-propelled vehicles for use on land or sea, in air or space. Over 83,000 engineers, business executives, educators, and students from more than 97 countries form the membership, sharing information and exchanging ideas for advancing the engineering of mobility systems. SAE is responsible for developing several different documents for the aerospace community. These documents include Aerospace Standards (AS), Aerospace Material Specifications (AMS), Aerospace Recommended Practices (ARP), Aerospace Information Reports (AIR) and Ground Vehicle Standards (J-Standards). The documents are developed by SAE Committee K, whose members are technical experts from the aerospace community.

ASME International

ASME International was founded in 1880 as the American Society of Mechanical Engineers. It is a nonprofit educational and technical organization serving a worldwide membership of 125,000. ASME maintains and distributes 600 codes and standards used around the world for the design, manufacturing and installation of mechanical devices. One of these codes is the Boiler and Pressure Vessel Code, which controls the design, inspection, and repair of pressure vessels. Inspection plays a big part in keeping these components operating safely.

The American Welding Society (AWS)

The American Welding Society (AWS) was founded in 1919 as a multifaceted, nonprofit organization with a goal to advance the science, technology and application of welding and related joining disciplines. AWS serves 50,000 members worldwide. Membership consists of engineers, scientists, educators, researchers, welders, inspectors, welding foremen, company executives and officers, and sales associates.

The International Organization for Standardization (ISO)

The International Organization for Standardization (ISO) was formed in 1947 as a non-governmental federation of standardization bodies from over 60 countries. The ISO is headquartered in Geneva, Switzerland. The United States is represented by ANSI.

The Air Transport Association (ATA)

Founded by a group of 14 airlines in 1936, the ATA was the first, and today remains the only, trade organization for the principal US airlines. The purpose of the ATA is to support and assist its members by promoting the air transport industry and the safety, cost effectiveness, and technological advancement of its operations; advocating common industry positions before state and local governments; conducting designated industry-wide programs; and assuring governmental and public understanding of all aspects of air transport. There are two ATA documents that serve as guidelines for the training of inspection personnel.

ATA Specification 105, Guidelines for Training and Qualifying Personnel in Non-Destructive Testing Methods. This document serves as a guideline for the development of a training program for personnel who accomplish nondestructive testing tasks. While partially derived from more universal training standards such as ASNT SNT-TC-1A and NAS 410, this document is dedicated to preparing a curriculum for an airline's maintenance training program and qualifying individuals to conduct aircraft inspections.

ATA Specification 107, Visual Inspection Personnel Training and Qualification Guide for FAR Part 121 Air Carriers. This document addresses the training and qualification needs of the aircraft inspection technician and recommends a minimum list of required inspection items.

The Aerospace Industries Association (AIA)

The Aerospace Industries Association represents the nation's major manufacturers of commercial, military and business aircraft, helicopters, aircraft engines, missiles, spacecraft, materials, and related components and equipment. The AIA has been an aerospace industry trade association since 1919. It was originally known as the Aeronautical Chamber of Commerce (ACCA). The AIA is responsible for two NDT related documents, which are:

NAS 410, Certification & Qualification of Nondestructive Test Personnel. This document is widely used in the aerospace industry, as it replaces MIL-STD-410E, Military Standard, Nondestructive Testing Personnel Qualification and Certification.

NAS 999, Nondestructive Inspection of Advanced Composite Structure.

The American National Standards Institute (ANSI)

ANSI is a private, nonprofit organization that administers and coordinates the US voluntary standardization and conformity assessment system. The Institute's mission is to enhance both the global competitiveness of US business and the US quality of life by promoting and facilitating voluntary consensus standards and conformity assessment systems, and safeguarding their integrity.

US Department of Defense Specifications - A list of DOD specifications (Mil Specs, NAV, etc.) was not prepared, since the trend is to move away from their use and more documents are being canceled or made inactive every day. Information on DOD specifications can be found at the following web site:

The Department of Defense Single Stock Point for Military Specifications, Standards and Related Publications

Greek Letters

Greek letters were introduced into mathematics long ago to provide a collection of useful symbols to stand for abstract objects, such as numbers, sets, functions, and spaces. At the time they were introduced, most scholars had been taught some Greek during their education, so the letters were familiar. Today, outside of the Greek community, these symbols may seem quite foreign and difficult to remember. The table below lists all of the letters in the Greek alphabet, upper-case and lower-case, with their names and pronunciations. The lower-case letters are most often used for variables, such as angles and complex numbers, and for functions and formulas, while the upper-case letters more commonly stand for sets and spaces.

CAP / lower   Name & Pronunciation                CAP / lower   Name & Pronunciation

Α α           ALPHA (AL-fuh)                      Ν ν           NU (NOO)

Β β           BETA (BAY-tuh)                      Ξ ξ           XI (KS-EYE)

Γ γ           GAMMA (GAM-uh)                      Ο ο           OMICRON (OM-i-KRON)

Δ δ           DELTA (DEL-tuh)                     Π π           PI (PIE)

Ε ε, ϵ        EPSILON (EP-sil-on); the two lower-case versions are used interchangeably.     Ρ ρ           RHO (ROW)

Ζ ζ           ZETA (ZAY-tuh)                      Σ σ           SIGMA (SIG-muh)

Η η           ETA (AY-tuh)                        Τ τ           TAU (TAU)

Θ θ           THETA (THAY-tuh)                    Υ υ           UPSILON (OOP-si-LON)

Ι ι           IOTA (eye-OH-tuh)                   Φ φ, ϕ        PHI (FEE); the two lower-case versions are used interchangeably.

Κ κ           KAPPA (KAP-uh)                      Χ χ           CHI (K-EYE)

Λ λ           LAMBDA (LAM-duh)                    Ψ ψ           PSI (SIGH)

Μ μ           MU (MYOO)                           Ω ω           OMEGA (oh-MAY-guh)

The Decibel

The equation used to describe the difference in intensity between two ultrasonic or other sound measurements is:

ΔI = 20 log (P2 / P1)

where: ΔI is the difference in sound intensity expressed in decibels (dB), P1 and P2 are two different sound pressure measurements, and the log is to base 10.

What exactly is a decibel? The decibel (dB) is one tenth of a bel, which is a unit of measure that was developed by engineers at Bell Telephone Laboratories and named for Alexander Graham Bell. The dB is a logarithmic unit that describes a ratio of two measurements. The basic equation that describes the difference in decibels between two measurements is:

ΔX = 10 log (X2 / X1)

where: ΔX is the difference in some quantity expressed in decibels, X1 and X2 are two different measured values of X, and the log is to base 10. (Note the factor of two difference between this basic equation for the dB and the one used when making sound measurements. This difference will be explained in the next section.)

Why is the dB unit used?

Use of dB units allows ratios of various sizes to be described using easy-to-work-with numbers. For example, consider the information in the table below.

Ratio between Measurement 1 and 2     Equation                        dB

1/2                                   dB = 10 log (1/2)               -3 dB
1                                     dB = 10 log (1)                 0 dB
2                                     dB = 10 log (2)                 3 dB
10                                    dB = 10 log (10)                10 dB
100                                   dB = 10 log (100)               20 dB
1,000                                 dB = 10 log (1,000)             30 dB
10,000                                dB = 10 log (10,000)            40 dB
100,000                               dB = 10 log (100,000)           50 dB
1,000,000                             dB = 10 log (1,000,000)         60 dB
10,000,000                            dB = 10 log (10,000,000)        70 dB
100,000,000                           dB = 10 log (100,000,000)       80 dB
1,000,000,000                         dB = 10 log (1,000,000,000)     90 dB

From this table it can be seen that ratios from one up to one billion can be represented with a single- or double-digit number. The ease of working with such numbers was particularly important in the days before the advent of the calculator or computer. The focus of this discussion is on using the dB in measuring sound levels, but it is also widely used when measuring power, pressure, voltage and a number of other things.

Use of the dB in Sound Measurements

Sound intensity is defined as the sound power per unit area perpendicular to the wave. Units are typically watts/m² or watts/cm². For sound intensity, the dB equation becomes:

ΔI = 10 log (I2 / I1)

where I1 and I2 are the two sound intensity measurements.

However, the power or intensity of sound is generally not measured directly. Since sound consists of pressure waves, one of the easiest ways to quantify sound is to measure variations in pressure (i.e., the amplitude of the pressure wave). When making ultrasound measurements, a transducer is used, which is basically a small microphone. Transducers, like most other microphones, produce a voltage that is approximately proportional to the sound pressure (P). The power carried by a traveling wave is proportional to the square of its amplitude. Therefore, the equation used to quantify a difference in sound intensity based on a measured difference in sound pressure becomes:

ΔI = 20 log (P2 / P1)

(The factor of 2 is added to the equation because the logarithm of the square of a quantity is equal to 2 times the logarithm of the quantity.)

Since transducers and microphones produce a voltage that is proportional to the sound pressure, the equation can also be written as:

ΔI = 20 log (V2 / V1)

where: ΔI is the change in sound intensity incident on the transducer and V1 and V2 are two different transducer output voltages.
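A few lines of Python illustrate this 20 log relationship between transducer output voltages and the change in sound intensity; the voltage readings are made-up examples.

```python
import math

def delta_i_db(v1: float, v2: float) -> float:
    """Change in sound intensity (dB) from two transducer output voltages:
    delta I = 20 * log10(V2 / V1)."""
    return 20.0 * math.log10(v2 / v1)

# Illustrative readings only.
print(f"{delta_i_db(1.0, 0.5):.1f} dB")    # -6.0 dB: halving the pressure/voltage
print(f"{delta_i_db(1.0, 2.0):.1f} dB")    # +6.0 dB: doubling it
print(f"{delta_i_db(1.0, 10.0):.1f} dB")   # +20.0 dB: a factor of ten
```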

Revising the table to reflect the relationship between the ratio of the measured sound pressure and the change in intensity expressed in dB produces:

Ratio between Measurement 1 and 2     Equation                        dB

1/2                                   dB = 20 log (1/2)               -6 dB
1                                     dB = 20 log (1)                 0 dB
2                                     dB = 20 log (2)                 6 dB
10                                    dB = 20 log (10)                20 dB
100                                   dB = 20 log (100)               40 dB
1,000                                 dB = 20 log (1,000)             60 dB
10,000                                dB = 20 log (10,000)            80 dB
100,000                               dB = 20 log (100,000)           100 dB
1,000,000                             dB = 20 log (1,000,000)         120 dB
10,000,000                            dB = 20 log (10,000,000)        140 dB
100,000,000                           dB = 20 log (100,000,000)       160 dB
1,000,000,000                         dB = 20 log (1,000,000,000)     180 dB

From the table it can be seen that 6 dB equates to a doubling of the sound pressure. Alternately, reducing the sound pressure by a factor of two results in a 6 dB change in intensity.

Absolute" Sound Levels Whenever the decibel unit is used, it always represents the ratio of two values. Therefore, in order to relate different sound intensities it is necessary to choose a standard reference level. The reference sound pressure (corresponding to a sound pressure level of 0 dB) commonly used is that at the threshold of human hearing, which is conventionally taken to be 210 5 Newton per square meter, or 20 micropascals (20 Pa). To avoid confusion with other decibel measures, the term dB(SPL) is used.

Accuracy, Error, Precision, and Uncertainty

Introduction

All measurements of physical quantities are subject to uncertainties in the measurements. Variability in the results of repeated measurements arises because variables that can affect the measurement result are impossible to hold constant. Even if the "circumstances" could be precisely controlled, the result would still have an error associated with it. This is because the scale was manufactured with a certain level of quality, it is often difficult to read the scale perfectly, fractional estimations between scale markings may be made, and so on. Of course, steps can be taken to limit the amount of uncertainty, but it is always there.

In order to interpret data correctly and draw valid conclusions, the uncertainty must be indicated and dealt with properly. For the result of a measurement to have clear meaning, the value cannot consist of the measured value alone. An indication of how precise and accurate the result is must also be included. Thus, the result of any physical measurement has two essential components: (1) a numerical value (in a specified system of units) giving the best estimate possible of the quantity measured, and (2) the degree of uncertainty associated with this estimated value. Uncertainty is a parameter characterizing the range of values within which the value of the measurand can be said to lie within a specified level of confidence. For example, a measurement of the width of a table might yield a result such as 95.3 +/- 0.1 cm. This result is basically communicating that the person making the measurement believes the value to be closest to 95.3 cm, but it could have been 95.2 or 95.4 cm. The uncertainty is a quantitative indication of the quality of the result. It gives an answer to the question, "How well does the result represent the value of the quantity being measured?"

The full formal process of determining the uncertainty of a measurement is an extensive process involving identifying all of the major process and environmental variables and evaluating their effect on the measurement. This process is beyond the scope of this material but is detailed in the ISO Guide to the Expression of Uncertainty in Measurement (GUM) and the corresponding American National Standard ANSI/NCSL Z540-2. However, there are measures for estimating uncertainty, such as standard deviation, that are based entirely on the analysis of experimental data when all of the major sources of variability were sampled in the collection of the data set. The first step in communicating the results of a measurement or group of measurements is to understand the terminology related to measurement quality. It can be confusing, which is partly due to some of the terminology having subtle differences and partly due to the terminology being used wrongly and inconsistently. For example, the term "accuracy" is often used when "trueness" should be used. Using the proper terminology is key to ensuring that results are properly communicated.
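Since the paragraph above points to the standard deviation of repeated measurements as a purely data-based uncertainty estimate, the short Python sketch below shows that calculation for a handful of hypothetical repeat readings, echoing the table-width example; the numbers are illustrative only.

```python
import statistics

# Five hypothetical repeat measurements of a table width (cm); values are illustrative only.
readings = [95.3, 95.2, 95.4, 95.3, 95.3]

mean = statistics.mean(readings)
std_dev = statistics.stdev(readings)   # sample standard deviation (n - 1 in the denominator)

# One common way to report the result of repeated measurements:
print(f"width = {mean:.2f} +/- {std_dev:.2f} cm (1 standard deviation)")
```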

True Value

Since the true value cannot be absolutely determined, in practice an accepted reference value is used. The accepted reference value is usually established by repeatedly measuring some NIST or ISO traceable reference standard. This value is not the reference value that is found published in a reference book. Such published reference values are not "right" answers; they are measurements that have errors associated with them as well and may not be totally representative of the specific sample being measured.

Accuracy and Error

Accuracy is the closeness of agreement between a measured value and the true value. Error is the difference between a measurement and the true value of the measurand (the quantity being measured). Error does not include mistakes. Values that result from reading the wrong value or making some other mistake should be explained and excluded from the data set. Error is what causes values to differ when a measurement is repeated and none of the results can be preferred over the others. Although it is not possible to completely eliminate error in a measurement, it can be controlled and characterized. Often, more effort goes into determining the error or uncertainty in a measurement than into performing the measurement itself.

The total error is usually a combination of systematic error and random error. Many times, results are quoted with two errors: the first is usually the random error, and the second is the systematic error. If only one error is quoted, it is the combined error.

Systematic error tends to shift all measurements in a systematic way so that in the course of a number of measurements the mean value is constantly displaced or varies in a predictable way. The causes may be known or unknown but should always be corrected for when present. For instance, no instrument can ever be calibrated perfectly so when a group of measurements systematically differ from the value of a standard reference specimen, an adjustment in the values should be made. Systematic error can be corrected for only when the "true value" (such as the value assigned to a calibration or reference specimen) is known.

Random error is a component of the total error which, in the course of a number of measurements, varies in an unpredictable way. It is not possible to correct for random error. Random errors can occur for a variety of reasons such as:

Lack of equipment sensitivity. An instrument may not be able to respond to or indicate a change in some quantity that is too small or the observer may not be able to discern the change.

Noise in the measurement. Noise is extraneous disturbances that are unpredictable or random and cannot be completely accounted for.

Imprecise definition. It is difficult to exactly define the dimensions of an object. For example, it is difficult to determine the ends of a crack when measuring its length. Two people will likely pick two different starting and ending points.

Trueness and Bias

Trueness is the closeness of agreement between the average value obtained from a large series of test results and an accepted true value. The terminology is very similar to that used for accuracy, but trueness applies to the average value of a large number of measurements. Bias is the difference between the average value of the large series of measurements and the accepted true value. Bias is equivalent to the total systematic error in the measurement, and a correction to negate the systematic error can be made by adjusting for the bias.

Precision, Repeatability and Reproducibility

Precision is the closeness of agreement between independent measurements of a quantity under the same conditions. It is a measure of how well a measurement can be made without reference to a theoretical or true value. The number of divisions on the scale of the measuring device generally affects the consistency of repeated measurements and, therefore, the precision. Since precision is not based on a true value, there is no bias or systematic error in the value; instead, it depends only on the distribution of random errors. The precision of a measurement is usually indicated by the uncertainty or fractional relative uncertainty of a value.

Repeatability is simply the precision determined under conditions where the same methods and equipment are used by the same operator to make measurements on identical specimens. Reproducibility is simply the precision determined under conditions where the same methods but different equipment are used by different operators to make measurements on identical specimens.

Uncertainty

Uncertainty is the component of a reported value that characterizes the range of values within which the true value is asserted to lie. An uncertainty estimate should address error from all possible effects (both systematic and random) and, therefore, usually is the most appropriate means of expressing the accuracy of results. This is consistent with ISO guidelines. However, in many measurement situations the systematic error is not addressed and only random error is included in the uncertainty measurement. When only random error is included in the uncertainty estimate, it is a reflection of the precision of the measurement.

Summary

Error is the difference between the true value of the measurand and the measured value. The total error is a combination of both systematic error and random error. Trueness is the closeness of agreement between the average value obtained from a large series of test results and the accepted true value. Trueness is largely affected by systematic error. Precision is the closeness of agreement between independent measurements. Precision is largely affected by random error. Accuracy is an expression of the lack of error. Uncertainty characterizes the range of values within which the true value is asserted to lie with some level of confidence.

References:

Royal Society of Chemistry, Analytical Methods Committee Technical Brief, No. 13, September 2003.

ANSI/NCSL Z540-2-1997, U.S. Guide to the Expression of Uncertainty in Measurement, 1st ed., October 1997.

Eurachem/CITAC, Quantifying Uncertainty in Analytical Measurement, 2nd ed., 2000.

NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, 2006

ISO 5725-1, Accuracy (trueness and precision) of measurement methods and results, Part 1: General principles and definitions.

Technical Reports

NDT personnel write technical reports for two primary purposes. Technical reports are used to communicate information to customers, colleagues and managers, and they are used to document the equipment and procedures used in testing or research and the results obtained so that the work can be repeated if necessary or built upon. The content and style of technical reports vary widely depending on the primary purpose and the audience. Many companies and organizations have developed their own standard format.

Qualities of Good Technical Reports

Regardless of the specific format used, all quality technical reports will possess the following qualities:

Accuracy

Great care should be taken to ensure that the information is presented accurately. Make sure values are transferred correctly into the report and calculations are done properly. Since many people proofread right over their own typographical errors, it is often best to have another person proofread the report. Mistakes may cause the reader to doubt other points of the report and reflect poorly on the professionalism of the author.

Objectivity

Data must be evaluated honestly and without bias. Conclusions should be drawn solely from the facts presented. Opinions and conjecture should be clearly identified if included at all. Deficiencies in the testing or the results should be noted. Readers should be informed of all assumptions and probable sources of error if encountered.

Clarity

The author should work to convey an exact meaning to the reader. The text must be clear and unambiguous, mathematical symbols must be fully defined, and the figures and tables must be easily understood. Clarity must be judged from the readers' point of view. Don't assume that readers are familiar with previous work or previous reports. When photographs are included in a report, a scale or some object of standard size should be included in the photograph to help your readers judge the size of the objects shown. Simply stating the magnification of a photograph can cause uncertainty, since the size of a photograph often changes in reproduction.

Conciseness

Most people are fairly busy and will not want to spend any more time than necessary reading a report. Therefore, technical reports should be concisely written. Include all the details needed to fully document and explain the work, but keep it as brief as possible. Conciseness is especially important in the abstract and conclusion sections.

Continuity

Reports should be organized in a logical manner so that it is easy for the reader to follow. It is often helpful to start with an outline of the paper, making good use of headings. The same three-step approach used for developing an effective presentation can be used to develop an effective report:

1) Introduce the subject matter (tell readers what they will be reading about)

2) Provide the detailed information (tell them what you want them to know)

3) Summarize the results and conclusions (re-tell them the main points)

Make sure that information is included in the appropriate section of the report. For example, don't add new information about the procedure followed in the discussion section. Information about the procedure belongs in the procedure section. The discussion section should focus on explaining the results, highlighting significant findings, discussing problems with the data and noting possible sources of error, etc. Be sure not to introduce any new information in the conclusion section. The conclusion section should simply state the conclusions drawn from the work.

Writing Style

A relatively formal writing style should be used when composing technical reports. The personal style of the writer should be secondary to the clear and objective communication of information. Writers should, however, strive to make their reports interesting and enjoyable to read.