Seminar Report on Taguchi Methods




Uploaded to SlideShare by Pulkit Bajaj; published on Nov 10, 2009.



Transcript

Seminar Report on Taguchi's Methods

Submitted by: Ishwar Chander (800982007), Pulkit Bajaj (800982019), Hitesh Bansal (800982006)
Submitted to: Mr. Tarun Nanda (Sr. Lecturer), Department of Mechanical Engineering, Thapar University, Patiala (Deemed University)

CONTENTS

Chapter 1
1.1 Introduction
1.2 Definitions of Quality
1.2.1 Traditional and Taguchi Definitions of Quality
1.3 Taguchi's Quality Philosophy
1.4 Objective of Taguchi Methods
1.5 Eight Steps in Taguchi Methodology

Chapter 2 (Loss Function)
2.1 Taguchi Loss Function
2.2 Variations of the Quadratic Loss Function

Chapter 3 (Analysis of Variation)
3.1 Understanding Variation
3.2 What is ANOVA
3.2.1 No-Way ANOVA
3.2.1.1 Degrees of Freedom
3.2.2 One-Way ANOVA
3.2.3 Two-Way ANOVA
3.3 Example of ANOVA

Chapter 4 (Orthogonal Arrays)
4.1 What is an Array
4.2 History of Arrays
4.3 Introduction to Orthogonal Arrays
4.3.1 Intersecting Many Factors: A Case Study
4.3.1.1 Example of an Orthogonal Array
4.3.2 A Full Factorial Experiment
4.4 Steps in Developing an Orthogonal Array
4.4.1 Selection of factors and/or interactions to be evaluated
4.4.2 Selection of the number of levels for the factors
4.4.3 Selection of the appropriate OA
4.4.4 Assignment of factors and/or interactions to columns
4.4.5 Conduct tests
4.4.6 Analyze results
4.4.7 Confirmation experiment
4.5 Example Experimental Procedure
4.6 Standard Orthogonal Arrays

Chapter 5 (Robust Design)
5.1 What is Robustness
5.2 The Five Primary Tools of the Robustness Strategy
5.2.1 P-Diagram
5.2.2 Quality Measurement
5.2.3 Signal-to-Noise Ratio
5.3 Steps in Robust Parameter Design
5.4 Noise Factors
5.5 OFF-LINE and ON-LINE Quality Control
5.5.1 OFF-LINE Quality Control
5.5.1.1 Product Design
5.5.1.2 Process Design
5.5.2 ON-LINE Quality Control
5.5.2.1 Product Quality Control Method (On-Line Quality Control, Stage 1)
5.5.2.2 Customer Relations (On-Line Quality Control, Stage 2)

References

Preface

When Japan began its reconstruction efforts after World War II, it faced an acute shortage of good-quality raw material, high-quality manufacturing equipment and skilled engineers. The challenge was to produce high-quality products, and to continue improving quality, under these circumstances. The task of developing a methodology to meet this challenge was assigned to Dr. Genichi Taguchi, who at that time was manager in charge of developing certain telecommunication products at the Electrical Communication Laboratories (ECL) of the Nippon Telephone and Telegraph Company (NTT). Through his research in the 1950s and early 1960s, Dr. Taguchi developed the foundations of robust design and validated his basic philosophies by applying them in the development of many products. In recognition of this contribution, Dr. Taguchi received the individual Deming Award in 1962, one of the highest recognitions in the quality field.

CHAPTER 1

1.1 Introduction

Genichi Taguchi attended Kiryu Technical College, where he studied textile engineering. From 1942 to 1945 he served in the Astronomical Department of the Navigation Institute of the Imperial Japanese Navy. After that, he worked in the Ministry of Public Health and Welfare and in the Institute of Statistical Mathematics, Ministry of Education. While working there he was educated by Matosaburo Masuyama on the use of orthogonal arrays and on different experimental design techniques. In 1950 he began working at the newly formed Electrical Communications Laboratory of the Nippon Telephone and Telegraph Company. He stayed there for more than 12 years and was responsible for training engineers to be more effective with their techniques. While he was there, he consulted with many different Japanese companies and wrote his first book on orthogonal arrays. He served as a visiting professor at the Indian Statistical Institute from 1954 to 1955, where he met Sir R. A. Fisher and Walter A. Shewhart. He published the first edition of his two-volume book on experimental design in 1958. He made his first visit to the United States in 1962, as a visiting professor at Princeton University; in the same year he was awarded his PhD by Kyushu University. He developed the concept of the Quality Loss Function in the 1970s and published the third and most current edition of his book on experimental designs.

He revisited the United States in 1980, and from then on his methods spread and became more widely used. Genichi Taguchi made many important contributions during his lifetime, probably the most important of them to the field of quality control, though he also made many important contributions to experimental design. He was the director of the Japanese Academy of Quality and a four-time recipient of the Deming Prize. The term "Taguchi Methods" was coined in the United States. SPC can assist the operator in eliminating special causes of defects, thereby bringing the process under control, but something more is still needed: the continuous improvement of manufacturing processes, so that the production of robust products can be assured. This is where Taguchi comes in; he starts where SPC (temporarily) finishes. He can help with the identification of common causes of variation, the most difficult to determine and eliminate in a process. He attempts to go even further: he tries to make the process and the product robust against their effect (eliminating the effect rather than the cause) at the design stage; indeed, in dealing with uncontrollable (noise) factors, there is no alternative. Even when removal of the effect is impossible, he provides a systematic procedure for controlling the noise (through tolerance design) at minimum cost. When Dr. Taguchi first brought his ideas to America in 1980, he was already well known in Japan for his contributions to quality engineering. His arrival in the U.S. went virtually unnoticed, but by 1984 his ideas had generated so much interest that Ford Motor Company sponsored the first supplier Symposium on Taguchi Methods.

1.2 Definitions of Quality

Dr. Juran (1964): "Fitness for use."
Philip Crosby, the leading promoter of the "zero defects" concept and author of Quality is Free (1979): "Conformance to requirements."
Dr. Deming: "Quality should be aimed at the needs of the consumer, present and future."
The American Society for Quality Control (1983): "The totality of features and characteristics of a product or service that bear on its ability to satisfy given needs."
Russian Encyclopaedia: "The aggregate of properties of a product determining its ability to satisfy the needs it was built to satisfy."
European Organization for Quality Control Glossary (1981): "The totality of features and characteristics of a product and service that bear on its ability to satisfy a given need."

Although these definitions are all different, some common threads run through them:
• Quality is a measure of the extent to which customer requirements and expectations are satisfied.
• Quality is not static, since customer expectations can change.
• Quality involves developing product or service specifications and standards to meet customer needs (quality of design) and then manufacturing products or providing services which satisfy those specifications and standards (quality of conformance).

It is important to note that the quality definitions above do not refer to grade or features. For example, a Honda car has more features and is generally considered a higher-grade car than a Maruti, but that does not mean it is of better quality. A couple with two children may find that a Maruti does a much better job of meeting their requirements in terms of ease of loading and unloading, comfort when the entire family is in the car, gas mileage, maintenance and, of course, the basic cost of the vehicle.

1.2.1 Traditional and Taguchi Definitions of Quality


Traditional definition: The more traditional "goalpost" mentality of what is considered good quality says that a product is either good or it isn't, depending on whether or not it is within the specification range (between the lower and upper specification limits, the goalposts). With this approach, the specification range is more important than the nominal (target) value. But is the product as good as it can be, or should be, just because it is within specification?

Taguchi definition: Taguchi says no to the above definition. He defines quality as deviation from on-target performance. According to him, the quality of a manufactured product is the total loss generated by that product to society from the time it is shipped. This financial loss, or quality loss, is

L(y) = k(y − m)²

where y is the objective characteristic, m is the target value, and k is a constant:

k = cost of a defective product / (tolerance)² = A/Δ²

1.3 Taguchi's Quality Philosophy

Genichi Taguchi's impact on the world quality scene has been far reaching. His quality engineering system has been used successfully by many companies in Japan and elsewhere. He stresses the importance of designing quality into products and processes, rather than depending on the more traditional tools of on-line quality control. Taguchi's approach differs from that of other leading quality gurus in that he focuses more on the engineering aspects of quality than on management philosophy or statistics. Also, Dr. Taguchi uses experimental design primarily as a tool to make products more robust, that is, less sensitive to noise factors: he views experimental design as a tool for reducing the effects of variation on product and process quality characteristics. Earlier applications of experimental design focused more on optimizing average product performance without considering effects on variation.

Taguchi's quality philosophy has seven basic elements:

1. An important dimension of the quality of a manufactured product is the total loss generated by that product to society. At a time when the bottom line appears to be the driving force for so many organizations, it seems strange to see "loss to society" as part of product quality.

2. In a competitive economy, continuous quality improvement and cost reduction are necessary for staying in business. This is a hard lesson to learn. Masaaki Imai (1986) argues very persuasively that the principal difference between Japanese and American management is that American companies look to new technologies and innovation as the major route to improvement, while Japanese companies focus more on "kaizen", gradual improvement in everything they do. Taguchi stresses the use of experimental designs in parameter design as a way of reducing quality costs. He identifies three types of quality costs: R&D costs, manufacturing costs, and operating costs. All three can be reduced through the use of suitable experimental designs.

3. A continuous quality improvement program includes continuous reduction in the variation of product performance characteristics about their target values. Again kaizen, but with the focus on product and process variability. This does not fit the mold of quality as conformance to specification.

4. The customer's loss due to a product's performance variation is often approximately proportional to the square of the deviation of the performance characteristic from its target value. This concept of a quadratic loss function says that any deviation from target results in some loss to the customer, and that large deviations from target result in severe losses.

5. The final quality and cost of a manufactured product are determined to a large extent by the engineering designs of the product and its manufacturing process. This is so simple, and so true. The future belongs to companies which, once they understand the variability of their manufacturing processes using statistical process control, move their quality improvement efforts upstream to product and process design.

6. A product's (or process's) performance variation can be reduced by exploiting the nonlinear effects of the product (or process) parameters on the performance characteristics. This is an important statement because it gets to the heart of off-line QC. Instead of trying to tighten specifications beyond a process's capability, perhaps a change in design can allow specifications to be loosened. As an example, suppose that in a heating process the tolerance on temperature is a function of the heating time in the oven; the tolerance relationship is represented by the band in the time-temperature figure. If a process specification says the heating process is to last 4.5 minutes, the temperature must be held between 354.0 and 355.0 degrees, a tolerance interval 1.0 degree wide. Perhaps the oven cannot hold this tight a tolerance. One solution would be to spend a lot of money on a new oven and new controls. Another possibility would be to change the heating time to, say, 3.5 minutes; then the temperature would need to be held between 358.0 and 360.6 degrees, an interval 2.6 degrees wide. If the oven could hold this tolerance, the most economical decision might be to adjust the specifications. This would make the process less sensitive to variation in oven temperature.

7. Statistically designed experiments can be used to identify the settings of product parameters that reduce performance variation, and hence improve quality, productivity, performance, reliability, and profits. Statistically designed experiments will be the strategic quality weapon of the 1990s.

1.4 Objective of Taguchi Methods

The objective of Taguchi's efforts


is process and product design improvement through the identification of easily controllable factors and their settings which minimize the variation in product response while keeping the mean response on target. By setting those factors at their optimal levels, the product can be made robust to changes in operating and environmental conditions. Thus more stable and higher-quality products can be obtained, and this is achieved during Taguchi's parameter-design stage by removing the bad effect of the cause rather than the cause of the bad effect. Furthermore, since the method is applied in a systematic way at a pre-production stage (off-line), it can greatly reduce the number of time-consuming tests needed to determine cost-effective process conditions, thus saving costs and wasted products.

1.5 Eight Steps in Taguchi Methodology

1. Identify the main function, side effects and failure modes.
2. Identify the noise factors, testing conditions and quality characteristics.
3. Identify the objective function to be optimized.
4. Identify the control factors and their levels.
5. Select the orthogonal array and the matrix experiments.
6. Conduct the matrix experiments.
7. Analyze the data; predict the optimum levels and performance.
8. Perform the verification experiment and plan the future action.

CHAPTER 2

2.1 Taguchi Loss Function

Genichi Taguchi has an unusual definition of product quality: "Quality is the loss a product causes to society after being shipped, other than any losses caused by its intrinsic functions." By "loss" Taguchi refers to two categories:
• loss caused by variability of function;
• loss caused by harmful side effects.

An example of loss caused by variability of function would be an automobile that does not start in cold weather. The car's owner would suffer a loss if he or she had to pay someone to start the car, and the owner's employer loses the services of an employee who is now late for work. An example of a loss caused by a harmful side effect would be the frostbite suffered by the owner of the car which would not start. An unacceptable product which is scrapped or reworked prior to shipment is viewed by Taguchi as a cost to the company, but not as a quality loss.

2.1.1 Comparing the Quality Levels of Sony TV Sets Made in Japan and in San Diego

The front page of the Asahi News on April 17, 1979 compared the quality levels of Sony color TV sets made in Japanese plants and those made in the San Diego, California plant. The quality characteristic used to compare the sets was the color density distribution, which affects color balance. Although all the sets had the same design, most American customers thought that the sets made in the San Diego plant were of lower quality than those made in Japan. The distribution of the quality characteristic of these sets was given in the Asahi News (shown in the figure). The quality characteristics of the TV sets from the Japanese Sony plants are normally distributed around the target value m. If a value of 10 is assigned to the range of the tolerance specification for this objective characteristic, then the standard deviation of this normal distribution can be calculated and is about 10/6. In quality control, the process capability index (Cp) is usually defined as the tolerance specification divided by 6 times the standard deviation of the objective characteristic:

Cp = tolerance / (6 × standard deviation)

Therefore the process capability index of the objective characteristic of the Japanese Sony TV sets is about 1. In addition, the mean value of the distribution of these characteristics is very close to the target value m. On the other hand, a higher percentage of TV sets from San Diego Sony are within the tolerance limits than those from Japanese Sony. However, the color density of the San Diego TV sets is uniformly distributed rather than normally distributed.
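As a quick numerical check, the Cp definition just given can be evaluated directly from the tolerance range and a standard deviation. The sketch below (illustrative only; the function name is mine) uses the 10/6 standard deviation quoted above for the normally distributed Japanese sets, and the standard result that a uniform distribution over a range has standard deviation range/√12, which is the case examined next for the San Diego sets:

```python
import math

def process_capability(tolerance_range: float, sigma: float) -> float:
    """Cp = tolerance range / (6 * standard deviation)."""
    return tolerance_range / (6.0 * sigma)

# Japanese plant: color density roughly normal, sigma ≈ tolerance/6
cp_japan = process_capability(10.0, 10.0 / 6.0)                 # = 1.0

# San Diego plant: color density roughly uniform over the tolerance band,
# so sigma = range/sqrt(12)
cp_san_diego = process_capability(10.0, 10.0 / math.sqrt(12.0))  # ≈ 0.577

print(round(cp_japan, 3), round(cp_san_diego, 3))  # 1.0 0.577
```

The 0.577 value matches the figure derived for the San Diego plant in the text.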
Therefore, the standard deviation of these uniformly distributed characteristics is about 1/√12 of the tolerance specification. Consequently, the process capability index of the San Diego Sony plant is

Cp = tolerance / [6 × (tolerance/√12)] = √12/6 = 0.577

It is obvious that the process capability index of San Diego Sony is much lower than that of Japanese Sony. All products outside the tolerance specifications are considered defective and are not shipped out of the plant; products within the tolerance specifications are assumed to pass and are shipped. As a matter of fact, tolerance specifications are very similar to tests in school, where 60% is usually the dividing line between passing and failing and 100% is the ideal score. In our example of TV sets, the ideal is that the objective characteristic, color density, meets the target value m. The more the color density deviates from the target value, the lower the quality level of the set; if the deviation of color density exceeds the tolerance specification, m ± 5, a set is considered defective. In the case of a school test, 59% or below is failing, while 60% or above is passing. Similarly, the grades between 60% and 100% used in evaluating quality can be classified as follows: 60-69% passing (D), 70-79% fair (C), 80-89% good (B), 90-100% excellent (A). The "grades" D, C, B and A in parentheses are quite commonly used in the United States for categorizing the objective characteristics of products. Thus one can apply this scheme to the classification of the objective characteristic (color density) of these color TV sets, as shown in the figure. One can see that a very high percentage of Japanese Sony TV sets are within grade B, and a very low percentage are within or below grade D. In comparison, the color TV sets from San Diego Sony have about the same


percentage in grades A, B and C. To reduce the difference in process capability indices between Japanese Sony and San Diego Sony (and thus seemingly increase the quality level of the San Diego sets), the latter tried to tighten the tolerance specification to extend only to grade C rather than grade D, as shown in the figure; only products within grades A, B and C were treated as passing. But this approach is faulty: tightening the tolerance specifications because of a low process capability in a production plant is as meaningless as increasing the passing score of school tests from 60% to 70% just because students do not learn well. On the contrary, such a school should consider lowering the passing score for students who do not test as well, rather than raising it. The next section illustrates how to evaluate the functional quality of products meaningfully and correctly.

Quadratic Loss Function

When an objective characteristic y deviates from its target value m, some financial loss will occur. Therefore the financial loss, sometimes referred to simply as quality loss or used as an expression of quality level, can be assumed to be a function of y, which we designate L(y). When y meets the target m, L(y) is at its minimum; generally, the financial loss can be assumed to be zero under this ideal condition:

L(m) = 0    (Equation 2.1)

Since the financial loss is at a minimum at this point, the first derivative of the loss function with respect to y at this point should also be zero:

L′(m) = 0    (Equation 2.2)

If one expands the loss function L(y) in a Taylor series around the target value m and takes Equations (2.1) and (2.2) into consideration, one obtains:

L(y) = L(m) + L′(m)(y − m)/1! + L″(m)(y − m)²/2! + …
     = L″(m)(y − m)²/2! + …

This result is obtained because the constant term L(m) and the first-derivative term L′(m) are both zero. In addition, the third-order and higher-order terms are assumed to be negligible. Thus one can express the loss function as a squared term multiplied by a constant k:

L(y) = k(y − m)²    (Equation 2.3)

When the deviation of the objective characteristic from the target value increases, the corresponding quality loss increases. When the magnitude of the deviation is outside the tolerance specification, the product should be considered defective. Let the cost due to a defective product be A and the corresponding magnitude of the deviation from the target value be Δ. Substituting Δ into the right-hand side of Equation (2.3), one can determine the value of the constant k:

k = cost of a defective product / (tolerance)² = A/Δ²

In the case of the Sony colour TV sets, let the adjustment cost be A = 300 Rs when the colour density is out of the tolerance specification. The value of k is then

k = 300/5² = 12 Rs

and the loss function is L(y) = 12(y − m)². This equation is still valid even when only one unit of product is made. Consider the visitor to the BHEL Heavy Electric Equipment Company in India who was told: "In our company, only one unit of product needs to be made for our nuclear power plant. In fact, it is not really necessary for us to make another unit of product. Since the sample size is only one, the variance is zero. Consequently, the quality loss is zero and it is not really necessary for us to apply statistical approaches to reduce the variance of our product." However, the quality loss function L = k(y − m)² is defined in terms of the mean square deviation of the objective characteristic from its target value, not the variance of the products. Therefore, even when only one product is made, the corresponding quality loss can still be calculated by Equation (2.3). Generally, the mean square deviation of objective characteristics from their target values can be applied to estimate the mean value of the quality loss in Equation (2.3).
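The loss-function arithmetic above, k = A/Δ² and L(y) = k(y − m)², is easy to reproduce. The sketch below (function names are mine) uses the Sony figures from the text, A = 300 Rs and Δ = 5:

```python
def loss_constant(cost_of_defective: float, tolerance: float) -> float:
    """k = A / Δ² (constant of the quadratic loss function)."""
    return cost_of_defective / tolerance**2

def quality_loss(y: float, target: float, k: float) -> float:
    """Quadratic quality loss L(y) = k (y - m)², Equation (2.3)."""
    return k * (y - target)**2

k = loss_constant(300.0, 5.0)   # 300 / 25 = 12 Rs
print(k)                        # 12.0

# Loss grows quadratically with the deviation from target:
for deviation in (0.0, 1.0, 2.5, 5.0):
    print(deviation, quality_loss(deviation, 0.0, k))
# At the tolerance limit (deviation = 5) the loss equals A = 300 Rs.
```

Note that the loss is nonzero well inside the tolerance band, which is exactly how Taguchi's definition differs from the goalpost view.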
One can calculate the mean square deviation from target, σ² (σ² here is not the variance; the term is also called the mean square error), and use it in the loss function. The table below summarizes the quality of the Sony TV sets, where the tolerance specification is 10 and the objective-characteristic data correspond to the figure:

Plant     | Location of mean | Standard deviation | σ²     | Loss L (Rs) | Defective ratio
Japan     | m                | 10/6               | 10²/36 | 33          | 0.27%
San Diego | m                | 10/√12             | 10²/12 | 100         | 0.00%

Substituting σ² into Equation (2.3), one gets:

L = kσ²    (Equation 2.4)

From Equation (2.4) one can evaluate the difference in average quality level between the TV sets from Japanese Sony and those from San Diego Sony, as shown in Table 2.1. From the table it is clear that although the defective ratio of Japanese Sony is higher than that of San Diego Sony, the quality level of the former is three times higher than that of the latter. Assume that one tightens the tolerance specification of the San Diego Sony sets to m ± 10/3, and that these sets remain uniformly distributed after the tolerance specification is tightened. The average quality level of the San Diego Sony sets would then improve to

L = 12[(1/√12)(10)(2/3)]² = 44 Rs

where the value of the loss function is taken as the relative quality level of the product. This average quality level is 56 Rs better than the original (the loss falls from 100 Rs to 44 Rs) but still 11 Rs worse than that of the Japanese Sony sets. In addition, in this type of quality improvement one must adjust the products that fall between the two tolerance limits, m ± 10/3 and m ± 5, to be within m ± 10/3. In the uniform distribution shown in Figure 2.1, 33.3% would need adjustment,


which would cost 300Rs per unit. Therefore each TV set from San Diego Sony would cost an additional100Rs on average for the adjustment: (300)(0.333) = 100Rs Consequently, it is not really a good idea to spend100Rs more to adjust each product, which is worth only 56Rs.A better way is to apply quality managementmethods to improve the quality level of products. 2.2 Variation of the Quadratic Loss Function 1. Nominal thebest type: Whenever the quality characteristic y has a finite target value, usually nonzero, and the quality lossis symmetric on the either side of the target, such quality characteristic called nominalthebest type. This isgiven by equation L(y) =k(ym)² ………………………Equation 1 Color density of a television set and theout put voltage of a power supply circuit are examples of the nominalthebest type quality characteristic. 2.Smallerthebetter type: Some characteristic, such as radiation leakage from a microwave oven, can never takenegative values. Also, their ideal value is equal to zero, and as their value increases, the performance becomesprogressively worse. Such characteristic are called smallerthebetter type quality characteristics. Theresponse time of a computer, leakage current in electronic circuits, and pollution from an automobile areadditional examples of this type of characteristic. The quality loss in such situation can be approximated bythe following function, which is obtain from equation by substituting m = 0 L(y) =ky² This is one side lossfunction because y cannot take negative values.18. 18 3. Largerthebetter type: Some characteristics, such as the bond strength of adhesives, also do not takenegative values. But, zero is there worst value, and as their value becomes larger, the performance becomesprogressively betterthat is, the quality loss becomes progressively smaller. Their ideal value is infinity and atthat point the loss is zero. Such characteristics are called largerthebetter type characteristics. 
It is clear that the reciprocal of such a characteristic has the same qualitative behavior as a smaller-the-better type characteristic. Thus we approximate the loss function for a larger-the-better type characteristic by substituting 1/y for y in Equation 1:

L(y) = k(1/y²)

4. Asymmetric loss function: In certain situations, deviation of the quality characteristic in one direction is much more harmful than in the other direction. In such cases, one can use a different coefficient k for the two directions. Thus, the quality loss would be approximated by the following asymmetric loss function:

L(y) = k1(y − m)²,  y > m
L(y) = k2(y − m)²,  y ≤ m

CHAPTER 3 Introduction to Analysis of Variance

3.1 Understanding variation

The purpose of product or process development is to improve the performance characteristics of the product or process relative to customer needs and expectations. The purpose of experimentation should be to reduce and control the variation of a product or process; subsequently, decisions must be made about which parameters affect the performance of the product or process. Since variation is a large part of the discussion of quality, analysis of variance (ANOVA) will be the statistical method used to interpret experimental data and make the necessary decisions.

3.2 What is ANOVA

ANOVA is a statistically based decision tool for detecting any differences in the average performance of groups of items tested. It is a mathematical technique which breaks total variation down into accountable sources; the total variation is decomposed into its appropriate components. We will start with a very simple case and then build up to more comprehensive situations. Thereafter, we will apply ANOVA to some very specialized experimental situations.

3.2.1 No-way ANOVA

Imagine an engineer is sent to the production line to sample a set of windshield pumps for the purpose of measuring flow rate. The data collected are as follows:

Pump No.             1  2  3  4  5  6  7  8
Flow rate (oz/min)   5  6  8  2  5  4  4  6

(1 oz/min = 0.473 ml/s)
No-way ANOVA breaks the total variation down into only two components:
1. The variation of the average (or mean) of all the data points relative to zero
2. The variation of the individual data points around the average (traditionally called experimental error)

The notation used in the calculation method is:

y = observation, response, or simply data
y_i = ith response; for example, y_3 = 8 oz/min
N = total number of observations
T = sum of all observations
T̄ = average of all observations = T/N = ȳ

In this case N = 8, T = 40 oz/min, and T̄ = 5.0 oz/min.

What is the reason for the variation from pump to pump? The true flow rate is actually unknown; it is only estimated through the use of some flow meter. There will be some unknown measurement error present, but the flow rate will nonetheless be observed and accepted as the pump's performance under the conditions of the test. Also, the pumps were randomly selected. Although the manufacturer produces nominally identical pumps, there will be slight differences from pump to pump, causing pump-to-pump variation in performance (this is the natural variation of the process). It is for this reason that the flow rates of the pumps are not identical.

No-way ANOVA can be illustrated graphically (the figure is not reproduced here). The magnitude of each observation can be represented by a line segment extending from zero to the observation. These line segments can be divided into two portions: one portion attributed to the mean, the other attributed to the error. The error includes the natural process variation and the measurement error. The magnitude of the line segment due to the mean is indicated by extending a line from the average value to zero. The magnitude of the line segment due to error is indicated by the difference of the


average value from each observation.

To calculate the total variation present, we perform a mathematical operation which allows a clearer picture to develop. The magnitude of each line segment can be squared and then summed to provide a measure of the total variation present:

SS_T = total sum of squares = 5² + 6² + 8² + … + 6² = 222.0

The magnitude of the line segment due to the mean can also be squared and summed. Since T̄ = T/N,

SS_m = sum of squares due to the mean = N(T̄)² = T²/N = 40²/8 = 200.0

The portion of the magnitude of each line segment due to error can be squared and summed to provide a measure of the variation around the average value:

SS_e = error sum of squares = Σ (y_i − ȳ)²  (i = 1, …, N)
     = 0² + 1² + 3² + (−3)² + 0² + (−1)² + (−1)² + 1² = 22.0

Note that 222.0 = 200.0 + 22.0. This demonstrates a basic property of ANOVA: the total sum of squares equals the sum of the squares due to the known components,

SS_T = SS_m + SS_e

The formulas for the sums of squares can be written generally as

SS_T = Σ y_i²
SS_m = T²/N
SS_e = Σ (y_i − T̄)²

with the sums running over i = 1, …, N. In the above example the error value was calculated directly, but this is not necessary, since SS_e = SS_T − SS_m.

3.2.1.1 Degrees of Freedom (dof)

To complete the ANOVA calculations, one other element must be considered: degrees of freedom. The concept of dof is to allow 1 dof for each independent comparison that can be made in the data. Only one independent comparison can be made for the mean of all the data (there is only one mean); therefore, only 1 dof is associated with the mean. The concept of dof also applies to the dof associated with the error estimate. With 8 observations, there are 7 independent comparisons that can be made to estimate the variation in the data: data point 1 can be compared to 2, 2 to 3, 3 to 4, and so on, which gives 7 independent comparisons. One of the questions an instructor dreads most from an audience is, "What exactly is degrees of freedom?"
It's not that there's no answer. The mathematical answer is a single phrase: "the rank of a quadratic form." It is one thing to say that degrees of freedom is an index and to describe how to calculate it for certain situations, but none of these pieces of information tells what degrees of freedom means. At the moment, I'm inclined to define degrees of freedom as a way of keeping score. A data set contains a number of observations, say n. They constitute n individual pieces of information. These pieces of information can be used either to estimate parameters (such as the mean) or to estimate variability. In general, each item being estimated costs one degree of freedom; the remaining degrees of freedom are used to estimate variability. All we have to do is count properly.

A single sample: there are n observations and one parameter (the mean) to be estimated, leaving (n − 1) degrees of freedom for estimating variability.

Two samples: there are n1 + n2 observations and two means to be estimated, leaving (n1 + n2 − 2) degrees of freedom for estimating variability.

Let v = dof, v_t = total dof, v_m = dof associated with the mean (always 1 for each sample), and v_e = dof associated with error. Then

v_t = v_m + v_e
8 = 1 + 7

The total dof equals the total number of observations in the data set for this method of ANOVA.

Summary of no-way ANOVA:

Source   SS    dof
Mean     200   1
Error    22    7
Total    222   8

One other statistic calculated from ANOVA is the variance V. The error variance, or just variance, is

V_e = SS_e / v_e = 22/7 = 3.14

Also, the sample standard deviation is s = √V, where

s² = V = Σ (y_i − ȳ)² / (N − 1)

which is essentially V_e = SS_e / v_e. Although this formula is faster than ANOVA for calculating the error variance in this case, ANOVA becomes the faster method as experimental situations grow more complex.
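The no-way decomposition above can be reproduced in a few lines of Python; a minimal sketch (variable names are mine):

```python
# No-way ANOVA on the windshield-pump flow rates (oz/min) from the example.
flow = [5, 6, 8, 2, 5, 4, 4, 6]

N = len(flow)                       # 8 observations
T = sum(flow)                       # grand total T = 40
mean = T / N                        # T-bar = 5.0

SS_T = sum(y * y for y in flow)     # total sum of squares = 222
SS_m = T * T / N                    # sum of squares due to the mean = 200
SS_e = sum((y - mean) ** 2 for y in flow)   # error sum of squares = 22

assert SS_T == SS_m + SS_e          # the basic ANOVA identity SST = SSm + SSe

V_e = SS_e / (N - 1)                # error variance = 22/7, about 3.14
print(SS_T, SS_m, SS_e)             # 222 200.0 22.0
```

The assert mirrors the identity SS_T = SS_m + SS_e demonstrated numerically in the text.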
The error variance is a measure of the variation due to all the uncontrolled parameters, including the measurement error involved in a particular experiment (set of data collected); it is essentially the natural variation of the process.

3.2.2 One-Way ANOVA

This is the next most complex ANOVA to conduct. This situation considers the effect of one controlled parameter upon the performance of a product or process, in contrast to no-way ANOVA, where no parameters were controlled. Again, let us work through an imaginary yet potentially real situation. Imagine the same engineer who sampled the flow rate of windshield pumps is charged with the task of establishing the fluid velocity generated by the windshield washer pumps. If the fluid velocity is too low, the fluid will merely dribble out; if it is too high, the air movement past the windshield will not be able to distribute the cleaning fluid adequately to satisfy the driver. The engineer proposes a test of three different orifice areas to determine which gives a proper fluid velocity. Before the test data are collected, some notation to simplify the mathematical discussion:

A = factor under investigation (outlet orifice area)
A1 = 1st level of orifice area = 0.0015 sq in
A2 = 2nd level of orifice area = 0.0030 sq in
A3 = 3rd level of orifice area = 0.0045 sq in

The same symbols for the level designations are used to denote the sums of responses:

A_i = sum of observations under level A_i
Ā_i = average of observations under level A_i = A_i / n_Ai
T = sum of all observations
T̄ = average of all observations = T/N
n_Ai = number of observations under level A_i
k_A = number of levels of factor A

With this notation in mind, the engineer constructs four pumps with each given orifice area (making 12 to


test for the three levels). The test data are as follows:

Level  Area (sq in)  Velocity (ft/s)       Total
A1     0.0015        2.2  1.9  2.7  2.0    8.8
A2     0.0030        1.5  1.9  1.7  *      5.1
A3     0.0045        0.6  0.7  1.1  0.8    3.2
Grand total                                17.1

* pump dropped and destroyed; no data

A1 = 8.8 ft/s, n_A1 = 4, Ā1 = 2.2 ft/s
A2 = 5.1 ft/s, n_A2 = 3, Ā2 = 1.7 ft/s
A3 = 3.2 ft/s, n_A3 = 4, Ā3 = 0.8 ft/s
k_A = 3, T = 17.1 ft/s, N = 11, T̄ = 1.6 ft/s (1.5545 more precisely)

Sums of squares (one-way ANOVA): two methods can be used to complete the calculations, including the mean or excluding the mean.

Method 1 (including the mean). As before, the total variation can be decomposed into its appropriate components: the variation of the mean of all observations relative to zero (variation due to the mean); the variation of the mean of the observations under each factor level around the average of all observations (variation due to factor A); and the variation of the individual observations around the average of the observations under each level (variation due to error). The calculations are identical to the no-way ANOVA example, except for the component of variation due to factor A, the outlet orifice area.

SS_T = Σ y_i² = 2.2² + 1.9² + … + 0.8² = 31.190
SS_m = N(T̄)² = T²/N = 17.1²/11 = 26.583

(The graphical representation is not reproduced here.)

The magnitude of the segment for each level of factor A is squared and summed. For instance, the length of the line segment due to level A1 is (Ā1 − T̄), and there are four observations under the A1 condition. The same type of information is collected for the other levels of factor A:

SS_A = n_A1(Ā1 − T̄)² + n_A2(Ā2 − T̄)² + n_A3(Ā3 − T̄)²
     = 4(0.64545)² + 3(0.14545)² + 4(0.75454)² = 4.007

The above calculation is tedious and is mathematically equivalent to

SS_A = Σ (A_i²/n_Ai) − T²/N = 8.8²/4 + 5.1²/3 + 3.2²/4 − 17.1²/11 = 4.007

which is the same as above. The variation due to error is
given by

SS_e = Σ_j Σ_i (y_i − Ā_j)²  (j over the k_A levels, i over the n_Aj observations in level j)
     = 0² + (−0.3)² + 0.5² + (−0.2)² + … + 0.3² + 0² = 0.600

The error variation is again based on the method of least squares, but in one-way ANOVA the least squares are evaluated around the average of each level of the controlled factor. The error variation is the uncontrolled variation within the controlled groups. Again the total variation is

SS_T = SS_m + SS_A + SS_e
31.190 = 26.583 + 4.007 + 0.600

Dof (including the mean):

v_t = v_m + v_A + v_e
v_t = 11, v_A = k_A − 1 = 2, v_e = 11 − 1 − 2 = 8

One-way ANOVA summary (Method 1):

Source               SS      dof v  Variance V
Mean (m)             26.583  1      26.583
Factor A             4.007   2      2.004
Uncontrolled error   0.600   8      0.075
Total                31.190  11

In the above table we have been able to estimate the variance for both factor A and the uncontrolled error; this is what will be of interest to us when we design experiments. Also, if you look at the calculations done for Method 1, you will observe that the mean does not affect the calculation of the variation due to factor A or to error in any way. Thus, in most experimental situations the mean has no practical value, with the exception of 'lower is better' situations, where the variation due to the mean is a measure of how far the average is from zero and how successful a factor might be in reducing the average to zero.

Method 2 (excluding the mean). When we exclude the mean from the ANOVA calculations, the total variation is then calculated as: the variation of the average of the observations under each factor level around the average of all observations; and the variation of the individual observations around the average of the observations under each factor level. (Again this can be demonstrated graphically.) The same concept of summing the squares of the magnitudes of the various line segments applies in Method 2 as well:

SS_T = Σ (y_i − T̄)² = 4.607

Mathematically this is equivalent to

SS_T = Σ y_i² − T²/N
This expression, which will be used to define the total variation by this method, is equivalent to (SS_T − SS_m) from the previous calculations. The variation due to factor A and to uncontrolled error is calculated exactly as in Method 1:

SS_A = Σ (A_i²/n_Ai) − T²/N
SS_e = Σ_j Σ_i (y_i − Ā_j)²

Dof (excluding the mean). In Method 1 the dof was v_t = v_m + v_A + v_e, where v_m = 1 (always) and v_t = N. In Method 2 the dof for the mean is subtracted from both sides of that equation, so

v_t = N − 1 = 11 − 1 = 10
v_t = v_A + v_e
10 = (k_A − 1) + v_e, so v_e = 8

One-way ANOVA summary (Method 2):

Source               SS     dof v  Variance V
Factor A             4.007  2      2.004
Uncontrolled error   0.600  8      0.075
Total                4.607  10

The variances for factor A and error are identical in both methods. Method 2, in which the mean is disregarded, is the most popular method. Only when the performance parameter is a 'lower is better' characteristic is the variance due to the mean relevant; it provides a measure of how effective some factor might be in reducing the average to zero.

Let us sum up the above discussion by defining three sums of squares:

Total corrected sum of squares (SS_T): squared deviations of the observations from the overall average.
Error sum of squares (SS_E): squared deviations of the observations from their treatment averages.
Treatment sum of squares (SS_trt): squared deviations of the treatment averages from the overall average (times n).

Dot notation (i = 1, …, a treatments; j = 1, …, n observations per treatment):

y.. = Σ_i Σ_j y_ij,  ȳ.. = y../N (the overall average)
y_i. = Σ_j y_ij,  ȳ_i. = y_i./n (the average within treatment i)

Raw SS = Σ_i Σ_j y_ij²

Total corrected SS:

SS_T = Σ_i Σ_j (y_ij − ȳ..)²

This measures the overall variability in the data; SS_T/(N − 1) is just the sample variance of the whole dataset.

DECOMPOSITION OF SS. What follows is a


derivation of the following equation:

SS_T = SS_trt + SS_E

With all sums over i = 1, …, a and j = 1, …, n:

SS_T = Σ_i Σ_j (y_ij − ȳ..)²
     = Σ_i Σ_j [(y_ij − ȳ_i.) + (ȳ_i. − ȳ..)]²
     = Σ_i Σ_j (y_ij − ȳ_i.)² + Σ_i Σ_j (ȳ_i. − ȳ..)² + Σ_i Σ_j 2(y_ij − ȳ_i.)(ȳ_i. − ȳ..)
     = SS_E + Σ_i n(ȳ_i. − ȳ..)² + Σ_i Σ_j 2(y_ij − ȳ_i.)(ȳ_i. − ȳ..)
     = SS_E + SS_trt + Σ_i Σ_j 2(y_ij − ȳ_i.)(ȳ_i. − ȳ..)
     = SS_E + SS_trt + 0

We must show that the last term is zero:

Σ_i Σ_j 2(y_ij − ȳ_i.)(ȳ_i. − ȳ..) = 2 Σ_i (ȳ_i. − ȳ..) Σ_j (y_ij − ȳ_i.)
= 2 Σ_i (ȳ_i. − ȳ..)(y_i. − n ȳ_i.)
= 2 Σ_i (ȳ_i. − ȳ..)(n ȳ_i. − n ȳ_i.)
= 2 Σ_i (ȳ_i. − ȳ..) · 0 = 0

Hence SS_T = SS_trt + SS_E.

Two-Way ANOVA

Two-way ANOVA is the next highest order of ANOVA to review; there are two controlled parameters in this experimental situation. Let us consider an experimental situation. A student worked at an aluminum casting foundry which manufactured pistons for reciprocating engines. The problem with the process was how to attain the proper hardness (Rockwell B) of the casting for a particular product. Engineers were interested in the effect of copper and magnesium content on casting hardness.
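The two-way sums of squares worked out below can be cross-checked numerically. A minimal Python sketch (the data are the hardness results of the example that follows, with 70 subtracted; the dictionary layout and helper name are mine):

```python
# Two-way ANOVA sums of squares for the casting-hardness example (70 subtracted).
data = {
    ("A1", "B1"): [6, 8], ("A2", "B1"): [3, 4],
    ("A1", "B2"): [7, 8], ("A2", "B2"): [9, 10],
}
all_y = [y for ys in data.values() for y in ys]
N, T = len(all_y), sum(all_y)                 # N = 8, T = 55
CF = T ** 2 / N                               # correction factor T^2/N

SS_T = sum(y ** 2 for y in all_y) - CF        # total variation = 40.875

def level_total(factor, level):
    """Sum of all observations at a given level of factor 'A' or 'B'."""
    return sum(sum(ys) for key, ys in data.items()
               if key[0 if factor == "A" else 1] == level)

A1, A2 = level_total("A", "A1"), level_total("A", "A2")   # 29, 26
B1, B2 = level_total("B", "B1"), level_total("B", "B2")   # 21, 34

SS_A = (A1 - A2) ** 2 / N                     # two-level shortcut: 1.125
SS_B = (B1 - B2) ** 2 / N                     # 21.125
SS_AxB = (sum(sum(ys) ** 2 / len(ys) for ys in data.values())
          - CF - SS_A - SS_B)                 # interaction: 15.125
SS_e = SS_T - SS_A - SS_B - SS_AxB            # error: 3.500
```

Every value matches the hand calculation, including the decomposition SS_T = SS_A + SS_B + SS_A×B + SS_e.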
32 A1 A2 Total B1 6, 8 3, 4 21 B2 7, 8 9, 10 34 Total 29 26 55 Grand Total A1 = 29 A2 = 26 B1 =21 B2=34 T = 55, N=8 n A1 = 4 nB1 = 4 n A2 = 4 nB 2 = 4 Total Variation N T2 SST = ∑ ( yi ) − 2 i =1 N 55 2 =6² + 8² + 3² + + 10² = 40.875 8 Variation due to factor A k A Ai2 T 2 SS A = ∑ n − N i =1 Ai 2 2 2 A1 A2 Ak T2 + + −− − − − − − + − n A1 n A2 n Ak N 29 2 26 2 552SS A = + − = 1.125 4 4 8 Please carry out a mathematical check here which is Numerator 29 + 26 = 55 andDenominator 4 + 4 = 8 If these conditions are not met then the SS A calculation will be wrong. For a two levelexperiment, when the sample sizes are equal, the equation above can be simplified to this special formula:33. 33 ( A1 − A2 ) 2 ( 29 −26) 2 SS A = = = 1.125 N 8 Similarly the variation due to factor B SS B = (B1 −B2)2 = ( 21 − ) 2 34 =21.125 N 8 To calculate the variation due to interaction of factors A and B Let ( A × B) irepresent the sum of data under the ith condition of the combination of factor A and B. Also let c represent thenumber of possible combinations of the interacting factors and n( A×B )i the number of data under thiscondition. c ( A ×B ) 2 T 2 SS A×B ∑ n = i − N −SS A −SS B i = ( A×B ) i 1

Note that when the various combinations are summed, squared, and divided by the number of data pointsfor that combination, the subsequent value also includes the factor main effects which must be subtracted.While using this formula, all lower order interactions and factor effects are to be subtracted. For the exampleproblem: A1 B1 = 14, A1 B2 = 15, A2 B1 = 7 A2 B2 = 19 And no. of possible combinations c = 4 And sincethere are two observations under each combination n( A×B )i = 2 14 2 7 2 15 2 19 2 55 2 SS A× B = + + + − −SS A − SS B = 15.125 2 2 2 2 8 Since SS T = SS A +SS B +SS A×B +SS e SS e = 40.875 −1.125 −21.125−15.125 = 3.50034. 34 Degrees of Freedom (Dof) – Two way ANOVA vt = v A + vB + v A×B + ve vt =N–1 =8–1=7 vA = kA 1=1 vB = k B 1=1 v A×B = (v A )(vB ) = 1 ve = 7 − 1 − 1 − 1 = 4 ANOVA summary Table (Twoway)Source SS Dof v Variance V F A 1.125 1 1.125 1.29 B 21.125 1 21.125 24.14* AxB 15.125 1 15.125 17.29**E 3.500 4 0.875 Total 40.875 7 * at 95% confidence ** at 90% confidence The ANOVA results indicate thatCu by itself has no effect on the resultant hardness, magnesium has a large effect (largest SS) on hardness and


the interaction of Cu and Mg plays a substantial part in determining hardness. A plot of these data can be seen (the figure is to be sketched by hand). In this plot there exist non-parallel lines, which indicate the presence of an interaction: the factor A effect depends on the level of factor B, and vice versa. If the lines were parallel, there would be no interaction, meaning the factor A effect would be the same regardless of the level of factor B.

(Figure: hardness plotted against the factor A levels A1 and A2, with one line for B1 and one for B2.)

Geometrically, there is some information available from the graph that may be useful: the relative magnitudes of the various effects can be seen graphically. The B effect is the largest, the A×B effect the next largest, and the A effect is very small. Indeed,

Ā1 = 29/4 = 7.25,  Ā2 = 26/4 = 6.5
B̄1 = 21/4 = 5.25,  B̄2 = 34/4 = 8.5

(Figure: the A effect, B effect, and A×B effect marked on the interaction plot, using the midpoint of A2B1 and A2B2 and the midpoint on line B1.)

So by plotting the data for each factor we could observe the following: in the first case there is no interaction, because the lines are parallel; in the second case there is some interaction; and in the third case there is a strong interaction.

3.3 EXAMPLE OF ANOVA

During the late 1980s, Modi Xerox had a large base of customers (50 thousand) for this copier model spread over the entire country. Many buyers of these machines earn their livelihood by running copying services. Each of these buyers ultimately serves a very large number of customers (end users). When copy quality is either poor or inconsistent, these customers earn a bad name and their image and business suffer. In the late 1980s the company integrated the total quality management philosophy into its operation and placed the highest focus on customer satisfaction; any problem of field failure was given the highest priority for investigation. The problem of skips was subjected to
Problem Description The pattern of blurred images (skips) observed in the copy isshown in Figure above. It usually occurs between 1060 mm from the lead edge of the paper. Sometimes, on aphotocopy taken on a company letterhead paper, the company logo gets blurred, which is not appreciated bythe customer. This problem was noticed in only onethird of the machines produced by the company, with theremaining twothirds of machines in the field working well without this problem. The inhouse test evaluationrecord also confirmed the problem in only about 15% of the machines produced. Data analysis indicated thatnot all the machines produced were faulty. Therefore, the focus of further investigation was to find out whatwent wrong in the faulty machines or whether there are basic differences between the components used ingood and faulty machines. Preliminary Investigation A copier machine consists of more than 1000components and assemblies. A brainstorming session by the team helped in the identification of 16components suspected to be responsible for the problem of blurred images. Each Suspected component had atleast two possible dimensional characters which could have resulted in the skip symptom. This led to morethan 40 probable causes (40 dimensions arising out of 16 components) for the problem. An attempt was madeby the team to identify the real causes among these 40 probable causes. Ten bad machines were stripped openand various dimensions of these 16 component were measured. It was observed that all the dimensions werewell within specifications Hence, this investigation did not give any clues38. 38 to the problem. Moreover, the time and effort spent in dismantling the faulty machines and checkingvarious dimensions in 16 components was in vain. This gave rise to the thought that conforming tospecification does nor always lead to perfect quality. 
The team needed to think beyond the specification in order to find a solution to the problem.

Taguchi Experiment. An earlier brainstorming session had identified 16 components that were likely to be the cause of this problem. A study of the travel documents of 300 problem machines revealed that on 88% of occasions the problem was solved by replacing one or more of only four parts of the machine. These four parts were from the list of 16 parts identified earlier. They were considered critical, and it was decided to conduct an experiment on these four parts:

(a) Drum shaft
(b) Drum gear
(c) Drum flange
(d) Feed shaft

Two sets of these parts were taken for Experiment I, one from an identified problem machine and one from a problem-free machine. The two levels considered in the experiment were good and bad, 'good' signifying parts from the problem-free machine and 'bad' signifying parts from the problem machine. The factors and levels thus identified are given in the table below. A full factorial experiment would have required 16 trials, whereas the experiment was designed as an L8(2⁷) fractional factorial using a linear graph and orthogonal array (OA) table developed by Taguchi. The linear graph is presented in Fig. 9.24 and the layout in Table 9.14. A master plan for conducting the eight experiments was prepared. The response considered was the number of defective copies (copies exhibiting the skip problem) in a 50-copy run. The master plan along with the responses is presented in Table 9.15.

Analysis and Results. The response considered was the fraction defective (p = d/n). The data were normalized by the transformation sin⁻¹(√p). Analysis of variance (ANOVA) was performed on the normalized data and the


results are presented in the table. F(1,5) at 0.05 = 6.61; F(1,5) at 0.01 = 16.26. The percent contribution of factor A is

ρ_A = (3528 − 32.4) × 100 / 3788 = 92.3

As can be seen from the table, factor A is highly significant (the only significant factor), explaining 92.3% of the total variation. In other words, of the four components studied, the drum shaft alone is the source of the trouble with skips. The problem was now narrowed down to one component from the earlier list of 16 components, giving a ray of hope for moving towards a solution. Further investigations were carried out on the drum shaft design.

Drum Shaft Design. The configuration of the drum shaft is defined by 15 dimensions. A brainstorming session by the team members identified wobbling and increased play in the drum shaft as major causes of this problem. Four dimensions of the drum shaft were suspected of causing wobbling and excessive play. These dimensions in all 20 machines (10 good and 10 bad) were checked and found to be well within specification. Now the question arose as to where the problem lay: definitely not within the specification, so perhaps outside the specification? This led us to think beyond the specification in order to find a solution. As a first step, the dimension patterns of the good and bad machines were compared. The dimension patterns for the four critical dimensions suspected to be the cause of the problem are shown in the figure below. There is not much difference in pattern between good and bad machines with respect to dimensions B, C, and D. Dimension A, that is, the diameter over pin (DOP) dimension of the drum shaft splines, revealed a difference in pattern between the good and bad machines. The DOP of shafts from the 10 problem machines was found to lie in the lower half of the specification range, whereas in the case of the problem-free machines the DOP was found to be always on the
Higher DOP meansgreater tooth thickness of the splines and vice versa. If the DOP in the splines of the drum shaft is on the lowerside, then it will increase the clearance, resulting in more play between the drum shaft and the drum gearassembly .43. 43 Here, the image of the original document is transferred on the photoreceptor drum through a series oflenses and mirrors. The photoreceptor drum is coated with a photoconductor material and it is electricallycharged with positive charge. During the transfer of the image from the document, the whole of the drum areais exposed to light except the area where the image is formed, Due to the exposure to light, the photoconductor material becomes a conductor and the charge is neutralized, except in the image area. This image iscalled ‘latent image’. Subsequently, this image is transferred to paper through toner and developer. During thetransfer of image, the drum should rotate at a uniform speed. Any jerk to the photoreceptor drum duringrotation will cause distortion or blur on the latent image. The photoreceptor drum is driven by the drum shaftanti drum gear assembly. An excessive play between drum shaft and drum gear gets magnified and producesjerks in the photoreceptor unit. Bad machine dimension pattern clearly indicates the possibility of an excessiveplay between drum shaft and drum gear assembly. A sketch of the Photoreceptor assembly is shown belowhere.44. 44 A lower DOP results in Producing a large gap between the drum shaft and drum gear which causesexcessive play in the drum shaft. Technically, excessive play between the drum and drive gear can cause theskip problem This theory was further confirmed when this model (X) was compared with model Y and modelZ where no skip problem was observed. In models Y and Z, the drum shaft and drive gear were integrated intoa single unit. This Probably explains the zero play and no skip defect. The drum and drive gear assembly ofthe three models X, Y, and Z are shown in figure. 
for comparison. For further validation of this point, the play between the drum shaft and drive gear was eliminated by temporarily integrating the system using a drop of Araldite (glue) in 50 problem machines. A test run was performed on all 50 machines and no skip defect was observed. This led to the conclusion that the drum shaft DOP specifications are not fail-safe against skips. It was now felt necessary to arrive at new specifications for DOP to ensure no excessive play between the drum shaft and drive gear. The question arose as to how much play can be permitted. To find an answer, a similar drive system of the very successful two-wheeler Lambretta scooter was studied, and it was found that the play varied between 0.04 and 0.07 mm. To be on the safe side, it was decided to allow a maximum play of only 0.04 mm between the drum shaft and drive gear. These drum shafts are manufactured by subcontractors, so the new specifications were reached by taking into consideration the suppliers' capability of machining these dimensions and a maximum permissible play of 0.04 mm. The old and new specifications for DOP are shown in the figure.

Confirmatory Trial. The implications of the new specifications for the other systems of the machine were examined, and it was found that the change in specification would not create any problems. The 36 worst-affected machines were selected from the field. Drum shafts with the new specifications were made and then fitted on these machines. Test results for these machines showed a total elimination of skip defects. Ultimately,


to give the customer the benefits of the study, 5000 drum shafts with the new specifications were made and incorporated in 5000 existing machines with the old design in the field. A sample performance audit of 800 machines (out of those 5000) in the field was carried out, and none of these 800 machines showed skip problems. This provided confidence that the new design had worked successfully. After that, the new design was implemented fully by releasing the new specification. The rate of occurrence of the skip problem on the assembly line dropped from the previous 13% to less than 0.5%.

Beating the Benchmark

Machine specifications released by Rank Xerox (UK) permit the occurrence of skips up to 10 mm from the lead edge. Earlier specifications of Modi Xerox permitted the occurrence of skips up to 60 mm from the lead edge, but to most customers the loss of information near the lead edge is not acceptable, as company logos are located near the lead edge of letterheads. The exercise was initially taken up to reach the standard of Rank Xerox (skips up to 10 mm). Now the modified design of the drum shaft, evolved through scientific and systematic investigation, has completely eliminated skips and hence has surpassed even the Rank Xerox benchmark of permitting skips up to 10 mm from the lead edge. This is a great accomplishment towards skip-free copies: a problem was completely solved for which no solution was previously available worldwide.

CHAPTER 4

4.1 WHAT IS AN ARRAY

An array's name indicates the number of rows and columns it has, and also the number of levels in each of the columns. Thus the array L4(2³) has four rows and three 2-level columns.

4.2 HISTORY OF ORTHOGONAL ARRAYS

Historically, related methods were developed for agriculture, largely in the UK, around the Second World War; Sir R.A. Fisher was particularly associated with this work.
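As an aside, the L4(2³) array named in Section 4.1 can be written out explicitly, together with a check of the balance property that makes it orthogonal. A sketch (the array is the standard textbook one; the code itself is mine):

```python
from itertools import combinations, product

# L4(2^3): four rows, three columns, each column at two levels (1 and 2).
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

# Orthogonality: for every pair of columns, each of the four level
# combinations (1,1), (1,2), (2,1), (2,2) appears exactly once.
for i, j in combinations(range(3), 2):
    pairs = sorted((row[i], row[j]) for row in L4)
    assert pairs == sorted(product((1, 2), repeat=2))
```

This pairwise balance is what lets each column's effect be estimated independently of the others.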
Here a field area has been divided up into rows and columns; four fertilizers (F1-F4) and four irrigation levels (I1-I4) are represented. Since all combinations are taken, sixteen 'cells' or 'plots' result:

        F1   F2   F3   F4
  I1
  I2
  I3
  I4

The Fisher field experiment is a full factorial experiment, since all 4 x 4 = 16 combinations of the experimental factors, fertilizer and irrigation level, are included. The number of combinations required may not be feasible or economic. To cut down on the number of experimental combinations, a Latin square design of experiment may be used. Here there are three fertilisers (F1-F3), three irrigation levels (I1-I3) and three alternative additives (A1-A3), but only nine instead of the 3 x 3 x 3 = 27 combinations of the full factorial are included:

        F1   F2   F3
  I1    A1   A2   A3
  I2    A2   A3   A1
  I3    A3   A1   A2

These are 'pivotal' combinations, however, that still allow the identification of the best fertiliser, irrigation level and additive, provided there are no serious non-additivities or interactions in the relationship between yield and these control factors. The property of Latin squares that corresponds to this separability is that each of the labels A1, A2, A3 appears exactly once in each row and column. A difference from agricultural applications is that in agriculture the 'noise' or uncontrollable factors that disturb production, such as the weather, also tend to disturb experimentation. In industry, factors that disturb production, or are uneconomic to control in production, can and should be directly manipulated in tests. Our desire is to identify a design or line calibration which can best survive the transient effects in the manufacturing process caused by the uncontrolled factors. We wish to have small piece-to-piece and time-to-time variation associated with this noise. To do this we can force diversity onto the noise conditions by crossing our orthogonal array of controllable factors with a full factorial or orthogonal array of noise factors.
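Crossing an inner array of control factors with an outer array of noise factors can be sketched in a few lines. The arrays below are small hypothetical stand-ins for illustration, not the nine-trial design discussed in the text:

```python
from itertools import product

# Hypothetical inner array: three control-factor rows (stand-ins for the
# nine rows of a real three-level orthogonal array)
control = [("F1", "I1", "A1"), ("F2", "I2", "A3"), ("F3", "I3", "A2")]

# Hypothetical outer array: 2 x 2 = 4 noise combinations forced onto each row
noise = list(product(["cold", "hot"], ["dry", "humid"]))

# Crossing: every control row is evaluated under every noise condition,
# so each row's mean and variation can be judged against the same noise diversity
crossed = [(c, n) for c in control for n in noise]
print(len(crossed))  # 3 control rows x 4 noise conditions = 12 evaluations
```

With the real nine-row inner array the same crossing would give 9 x 4 = 36 evaluations, matching the text.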
Thus, in the example, we evaluate our product for each of the nine trials against the background of four different combinations of noise conditions. We are looking for one of the nine rows of control-factor combinations, or for one of the 'missing' 72 rows (3 x 3 x 3 x 3 = 81; 81 - 9 = 72), which not only gives the correct mean result on average, but also minimises variation away from the mean. To do this, Taguchi introduced the signal-to-noise ratio.

4.3 Introduction to Orthogonal Arrays

Engineers and scientists are often faced with two product or process improvement situations. One development situation is to find a parameter that will improve some performance characteristic to an acceptable and optimum value. This is the most typical situation in most organizations. A second situation is to find a less expensive, alternate design, material, or method which will provide equivalent performance. When searching for improved or equivalent designs, the person typically runs some tests, observes some performance of the product and makes a decision to use or reject the new design. In order to improve the quality of this decision, proper test strategies are utilized. Before describing OAs, let us look at some other test strategies.

The most common test plan is to evaluate the effect of one parameter on product performance; this is typically called a one-factor experiment. It evaluates the effect of one parameter while holding everything else constant. The simplest case of testing the effect of one parameter on performance would be to run a test at two different conditions of that parameter. For example: the effect of cutting speed on the finish of a machined part. Two different cutting speeds could be used and the resultant finish measured to determine which cutting speed gave better results. If the first level, the first cutting speed,
is symbolized by 1 and the second level by 2, the experimental conditions will look like this:

  Trial No.   Factor Level   Test Results
  1           1              *, *
  2           2              *, *

The * symbolizes the value of finish that would be obtained. This sample of two (in this case) could be averaged and compared to the second test. If there happens to be an interaction of this factor with some other factor, then this interaction cannot be studied.

Several Factors One at a Time

If this doesn't work, the next progression is to evaluate the effect of several parameters on product performance, one at a time. Let us assume the experimenter has looked at four different factors A, B, C and D, each evaluated one at a time. The resultant test program may appear like the table below:

              Factor and Factor Level
  Trial No.   A   B   C   D      Test Results
  1           1   1   1   1      *, *
  2           2   1   1   1      *, *
  3           1   2   1   1      *, *
  4           1   1   2   1      *, *
  5           1   1   1   2      *, *

One can see that the first trial is the baseline condition: results of trial 2 can be compared to trial 1 to estimate the effect of factor A on product performance. Similarly, results of trial 3 can be compared to trial 1 to estimate the effect of factor B, and so on. The main limitation of several factors one at a time is that no interaction among the factors studied can be observed. Also, the strategy makes limited use of the data when evaluating factor effects. Of the ten data points we had in the above example, only two were used to compare against two others; the remaining six data points were temporarily ignored. If we try to use all the data points, then the experiment will not remain orthogonal. One main requirement of orthogonality is a balanced experiment, which means an equal number of samples under the various test conditions (an equal number of tests under A1 and A2). For instance, in the above experiment, if all the data under A1 and A2 are averaged and compared, this is not a fair comparison of A1 to A2. Of the four trials under level A1, three were at level B1 and one at level B2. The one trial under A2 was at level B1.
Therefore, if factor B has an effect on the performance, it will be part of the observed effect of factor A, and vice versa. Only when trial 1 is compared to the other trials one at a time are the factor effects orthogonal.

Several Factors All at the Same Time

The most desperate and urgent situation finds the experimenter evaluating the effect of several parameters on performance all at the same time. Here the experimenter hopes that at least one of the changes will improve the situation sufficiently.

              Factor and Factor Level
  Trial No.   A   B   C   D      Test Results
  1           1   1   1   1      *, *
  2           2   2   2   2      *, *

This situation makes separation of the main factor effects impossible, let alone any interaction effects. Some factors may be making a positive effect and some a negative one, but we will not get any hint of this information.

4.3.1 Investigating many factors - a case study

In most problems, preliminary brainstorming will reveal a large number of factors which may influence the output of the process under study. How are the effects of these factors prioritized? The traditional approach is to:
- Isolate what is believed to be the most important factor
- Investigate this factor by itself, ignoring all others
- Make recommendations on changes to this crucial factor
- Move on to the next factor and repeat

This OFAT (one-factor-at-a-time) approach has several critical weaknesses. The factorial approach, in which several factors are studied simultaneously in a balanced manner, is much better. We will try to understand this through an example.

4.3.1.1 Example

A process producing steel springs is generating considerable scrap due to cracking after heat treatment. A study is planned to determine better operating conditions to reduce the cracking problem.
There are several ways to measure cracking: the size of the crack, or the presence or absence of cracks. The response selected was Y: the percentage without cracks in a batch of 100 springs. Three major factors were believed to affect the response: T, steel temperature before quenching; C, carbon content (percent); and O, oil quenching temperature. Levels chosen for the study are:

  Factor   Low (Level 1)   High (Level 2)
  T        1450 °F         1600 °F
  C        0.5%            0.7%
  O        70 °F           120 °F

Classical approach: OFAT

Experiment: four runs at each level of T, with C and O at their low levels.

  Steel Temp.   % springs without cracks   Average
  1450          61  67  68  66             65.5
  1600          79  75  71  77             75.5

Conclusion: increased T reduces cracks by 10%. Problem: how general is this conclusion? Does it depend upon quench temperature? Carbon content? Steel chemistry? Spring type? Analyst? Carrying out similar OFAT experiments for C and O would require a total of 24 observations and provide limited knowledge.

Factorial approach: include all factors in a balanced design

To increase the generality of the conclusions, use a design that involves all eight combinations of the three factors. The treatments for the eight runs are given below:

  Run   C     T      O
  1     0.5   1450   70
  2     0.7   1450   70
  3     0.5   1600   70
  4     0.7   1600   70
  5     0.5   1450   120
  6     0.7   1450   120
  7     0.5   1600   120
  8     0.7   1600   120

The above eight runs constitute a FULL FACTORIAL DESIGN. The design is balanced for every factor: 4 runs have T at 1450 and 4 have T at 1600, and the same is true for C and O.

IMMEDIATE ADVANTAGES
- The effect of each factor can be assessed by comparing the responses from the appropriate sets of four runs.
- More general conclusions.
- 8 runs rather than 24 runs.
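The eight treatment combinations above are just the Cartesian product of the three factor-level lists; a minimal sketch:

```python
from itertools import product

# Factor levels from the spring-cracking study
levels = {"C": [0.5, 0.7], "T": [1450, 1600], "O": [70, 120]}

# All 2^3 = 8 treatment combinations (run order here is systematic,
# not the randomized order one would actually use on the shop floor)
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]

for i, run in enumerate(runs, 1):
    print(i, run)

# Balance check: each level of each factor appears in exactly half the runs
for name, lv in levels.items():
    assert sum(r[name] == lv[0] for r in runs) == len(runs) // 2
```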
The data for the complete factorial experiment are:

  Run   C     T      O     Y
  1     0.5   1450   70    67
  2     0.7   1450   70    61
  3     0.5   1600   70    79
  4     0.7   1600   70    75
  5     0.5   1450   120   59
  6     0.7   1450   120   52
  7     0.5   1600   120   90
  8     0.7   1600   120   87

The main effect of each factor can be estimated by the difference between the average of the responses at the high level and the average of the responses at the low
level. For example, to calculate the O main effect:

  Avg. of responses with O at 70  = (67 + 61 + 79 + 75) / 4 = 70.5
  Avg. of responses with O at 120 = (59 + 52 + 90 + 87) / 4 = 72.0

So the main effect of O is 72.0 - 70.5 = 1.5. [The figure plots average Y against O: about 70.5 at O = 70 and 72 at O = 120.] The apparent conclusion is that changing the oil temperature from 70 to 120 has little effect.

The use of the factorial approach also allows examination of two-factor interactions. For example, we can estimate the effect of factor O at each level of T.

At T = 1450:
  Avg. of responses with O at 70  = (67 + 61) / 2 = 64.0
  Avg. of responses with O at 120 = (59 + 52) / 2 = 55.5
  So the effect of O is 55.5 - 64.0 = -8.5

At T = 1600:
  Avg. of responses with O at 70  = (79 + 75) / 2 = 77.0
  Avg. of responses with O at 120 = (90 + 87) / 2 = 88.5
  So the effect of O is 88.5 - 77.0 = 11.5

The conclusion is that at T = 1450, increasing O decreases the average response by 8.5, whereas at T = 1600, increasing O increases the average response by 11.5. That is, O has a strong effect, but the nature of the effect depends on the value of T. This is called an interaction between T and O in their effect on the response. It is convenient to summarize the four averages corresponding to the four combinations of T and O in a table:

                O = 70   O = 120   Average
  T = 1450     64.0     55.5      59.75
  T = 1600     77.0     88.5      82.75
  Average      70.5     72.0      71.25

[The figure plots response Y against T for O = 70 and O = 120; the lines cross.] When an interaction is present, the lines on the plot will not be parallel, and the effects of the two factors must be considered simultaneously. The lines are added to the plot only to help with the interpretation; we cannot know that the response will increase linearly. The two-way tables of averages and plots for the other factor pairs are:

                C = 0.5   C = 0.7   Average
  T = 1450     63.0      56.5      59.75
  T = 1600     84.5      81.0      82.75
  Average      73.75     68.75     71.25
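The main-effect and interaction figures above can be reproduced from the eight observations; a minimal sketch:

```python
# (C, T, O, Y) rows from the full factorial spring experiment
data = [
    (0.5, 1450, 70, 67), (0.7, 1450, 70, 61),
    (0.5, 1600, 70, 79), (0.7, 1600, 70, 75),
    (0.5, 1450, 120, 59), (0.7, 1450, 120, 52),
    (0.5, 1600, 120, 90), (0.7, 1600, 120, 87),
]

def avg_y(rows):
    return sum(r[3] for r in rows) / len(rows)

# Main effect of O: average at the high level minus average at the low level
o_effect = avg_y([r for r in data if r[2] == 120]) - avg_y([r for r in data if r[2] == 70])

# Effect of O within each level of T exposes the T x O interaction
o_within_t = {}
for t in (1450, 1600):
    sub = [r for r in data if r[1] == t]
    o_within_t[t] = avg_y([r for r in sub if r[2] == 120]) - avg_y([r for r in sub if r[2] == 70])

print(o_effect)    # 1.5: small overall effect
print(o_within_t)  # {1450: -8.5, 1600: 11.5}: strong, direction depends on T
```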
[The figure plots response Y against T for C = 0.5 and C = 0.7; the lines are roughly parallel.]

                O = 70   O = 120   Average
  C = 0.5      73.0     74.5      73.75
  C = 0.7      68.0     69.5      68.75
  Average      70.5     72.0      71.25

Conclusions: C has little effect; there is an interaction between T and O.

Recommendations: run the process with T and O at their high levels to produce about 90% crack-free product (further investigation at other levels might produce more improvement). Choose the level of C so that the lowest cost is realized.

Comparison with OFAT

On the basis of the observed data, we can see that the OFAT approach leads to different conclusions if the factors are considered in the following order:

1. Fix T = 1450 and C = 0.5 and vary O; conclude O = 70 is best (compare runs 1 and 5: Y = 67 vs 59).
2. Fix O = 70 and C = 0.5 and vary T; conclude T = 1600 is best (compare runs 1 and 3: Y = 67 vs 79).
3. Fix O = 70 and T = 1600 and vary C; conclude C = 0.5 is best (compare runs 3 and 4: Y = 79 vs 75).

This approach incorrectly concludes that T = 1600, C = 0.5, O = 70 is the best, whereas the factorial approach concluded that T and O should be at their high levels and C has no effect. Looking at the above experimental situations, it is pertinent to answer a few questions here:
• How can poor utilization of data and non-orthogonal situations be avoided?
• How can interactions be estimated while still having an orthogonal experiment?
The use of full factorial experiments is one possibility, and the use of orthogonal arrays is another.

Better Test Strategies

Let us recall the example we discussed for the two-way ANOVA. A full factorial experiment is shown below, with factor levels and hardness data by trial:
  Trial No.   A   B   Hardness data (RB)
  1           1   1   76, 78
  2           1   2   77, 78
  3           2   1   73, 74
  4           2   2   79, 80

The above is a full factorial experiment and is orthogonal. Note that under level A1, factor B has two data points under the B1 condition and two under the B2 condition. The same is true under level A2, and the same balanced situation holds when looking at the experiment with respect to the two conditions B1 and B2. Because of the balanced arrangement, factor A does not influence the estimate of the effect of factor B, and vice versa. All possible combinations of factors A and B at both their levels are represented in the test matrix. Using this information, both factor and interaction effects can be estimated. The perfect experimental design is a full factorial, with replications, conducted in a random manner. Unfortunately, this type of experimental design may make the number of experimental runs prohibitive, especially if the experiment is conducted on production equipment with lengthy setup times. The number of treatment conditions (TC) for a full factorial experiment is determined by

  TC = l^f

where l is the number of levels and f is the number of factors. Thus for a two-level design, 2² = 4, ..., 2⁵ = 32, and for a three-level design, 3² = 9, 3³ = 27, 3⁴ = 81. If each treatment condition is replicated only once, the number of experimental runs is doubled. Thus, for a three-level design with five factors and one replicate, the number of runs is 2 × 3⁵ = 486. The table below shows a three-factor full factorial design. The design space is composed of seven columns of 1s and 2s, and the design matrix is composed of the three individual factor columns A, B, and C.
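The run counts quoted above follow directly from TC = l^f; a tiny sketch (the function name is my own):

```python
def full_factorial_runs(levels: int, factors: int, replicates: int = 0) -> int:
    """TC = levels ** factors; each replicate repeats every treatment condition once."""
    return levels ** factors * (1 + replicates)

print(full_factorial_runs(2, 7))     # 128 runs for seven two-level factors
print(full_factorial_runs(3, 5, 1))  # 486 runs for five three-level factors with one replicate
```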
The design matrix tells us how to run the treatment conditions:

  Treatment                  Factors
  Condition    A   B   C   AB   AC   BC   ABC   Response
  1            1   1   1   2    2    2    1     *
  2            2   1   1   1    1    2    2     *
  3            1   2   1   1    2    1    2     *
  4            2   2   1   2    1    1    1     *
  5            1   1   2   2    1    1    2     *
  6            2   1   2   1    2    1    1     *
  7            1   2   2   1    1    2    1     *
  8            2   2   2   2    2    2    2     *

Three-factor interactions with a significant effect on the process are rare, and some two-factor interactions will not occur or can be eliminated by using engineering experience and sound judgment. If our engineering judgment shows that there is no three-factor interaction (A×B×C), we could place a factor D in that column and make it part of the design matrix. Of course, we would need a high degree of confidence that factor D does not interact with the other columns. Similarly, we could place a factor E in the column headed B×C if we thought there was no B×C interaction. This approach keeps the number of runs the same and adds factors. Please note that a full factorial experiment is possible only if there are few factors to be investigated; otherwise the matrix becomes too large. Typically, most engineering problems will have five or more factors affecting performance (at least initially). For a seven-factor experiment with each factor at two levels, 2⁷ = 128 experiments need to be conducted. However, usual time and financial limitations preclude the use of full factorial experiments. How can an engineer efficiently (economically) investigate these design factors?

4.3.2 A Full Factorial Experiment

An actual example of an experiment used in an engine plant to investigate the problem of water pump leaks involved seven factors:

  Factor               Level 1      Level 2
  Front cover design   Production   New
  Gasket design        Production   New
  Front bolt torque    Low          High
  Gasket coating       Yes          No
  Pump housing finish  Rough        Smooth
  Rear bolt torque     Low          High
  Torque pattern       Front-rear   Rear-front

If a full factorial were used in this situation, 2⁷ = 128 trials would have to be conducted. [The figure shows the corresponding tree diagram, branching on A1/A2, B1/B2, C1/C2, D1/D2, E1/E2, F1/F2 and G1/G2.]

Efficient Test Strategies

Statisticians have developed more efficient test plans, called fractional factorial experiments (FFEs). FFEs use only a portion of the total combinations to estimate the main factor effects and some, but not all, of the interactions. Certain treatment conditions are chosen to maintain the orthogonality among the various factors and interactions. It is obvious that a 1/8th FFE with only 16 test combinations, or a 1/16th FFE with only 8 combinations, is much more appealing to the experimenter from a time and cost standpoint. Taguchi has developed a family of FFE matrices which can be utilized in various situations. In this situation a possible matrix is an eight-trial OA, labeled the L8 matrix.

L8 OA matrix

              Column No.
  Trial No.   1   2   3   4   5   6   7
  1           1   1   1   1   1   1   1
  2           1   1   1   2   2   2   2
  3           1   2   2   1   1   2   2
  4           1   2   2   2   2   1   1
  5           2   1   2   1   2   1   2
  6           2   1   2   2   1   2   1
  7           2   2   1   1   2   2   1
  8           2   2   1   2   1   1   2

This is a 1/16th FFE which has only 8 of the possible 128 combinations represented. One can observe that there are 7 columns in this array, each of which may have a factor assigned to it. The eight trials provide 7 degrees of freedom (dof) for the entire experiment, allocated to 7 two-level columns, each column with one dof. The array trades all the error dofs for factor dofs and provides the particular test combinations that accommodate that approach. When all columns are assigned a factor, this is known as a saturated design. The levels of factors are designated by 1s and 2s. It can be seen that each column provides 4 tests under level 1 and 4 tests under level 2. This is the feature that provides orthogonality to the experiment, and it is the real power of an OA: the ability to evaluate several factors in a minimum of tests.
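The balance properties just described (four 1s and four 2s in every column, and every pair of columns containing each level combination equally often) can be verified mechanically; a sketch:

```python
from itertools import combinations
from collections import Counter

# The L8 array as tabulated above
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

cols = list(zip(*L8))

# Each column is balanced: four trials at level 1, four at level 2
assert all(Counter(c) == Counter({1: 4, 2: 4}) for c in cols)

# Pairwise orthogonality: every (level_i, level_j) combination occurs exactly twice
for a, b in combinations(cols, 2):
    assert Counter(zip(a, b)) == Counter({(1, 1): 2, (1, 2): 2, (2, 1): 2, (2, 2): 2})

print("L8 is balanced and pairwise orthogonal")
```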
This is called an efficient experiment, since much information is obtained from few trials. The assignment of factors to a saturated-design FFE is not difficult: all columns are assigned a factor. However, experiments which are not fully saturated (when not all columns can be assigned factors) may be more complicated to design.

4.4 Steps in Designing, Conducting and Analyzing an Experiment

The major initial steps are:
1. Selection of factors and/or interactions to be evaluated
2. Selection of the number of levels for the factors
3. Selection of the appropriate OA
4. Assignment of factors and/or interactions to columns
5. Conduct tests
6. Analyze results
7. Confirmation experiment

Steps 1 to 4 concern the actual design of the experiment. Let us try to understand each step.

4.4.1 Selection of factors and/or interactions to evaluate

Several methods are useful for determining which factors to include in initial experiments:
1. Brainstorming
2. Flowcharting
3. Cause-and-effect diagrams

4.4.2 Selection of the Number of Levels

The initial round of experiments should involve many factors at few levels; two levels are recommended to minimize the size of the initial experiment. This is because the dof for a factor is the number of levels minus one; increasing the number of levels increases the total dof in the experiment, which is a direct function of the total number of tests to be conducted. The initial round of experimentation will eliminate many trivial factors from contention, and the few remaining can then be investigated with multiple levels without causing undue inflation in the size of the experiment.

4.4.3 Selection of the OA

Degrees of Freedom: the selection of which OA to use depends upon:
1. The number of factors and interactions of interest
2. The number of levels for the factors of interest

Recall from the
ANOVA analysis: the dof for a factor (say factor A) is

  v_A = k_A - 1

and the dof for an interaction (say A and B) is

  v_AxB = (v_A)(v_B)

The minimum required dof in the experiment is the sum of all the factor and interaction dofs.

Orthogonal Arrays

Two basic kinds of arrays are available:
- Two-level arrays: L4, L8, L12, L16, L32
- Three-level arrays: L9, L18, L27

The number in the array designation gives the number of trials in the array. The total dof available in an array is

  v_LN = N - 1

where N is the number of trials. When an array is selected, the following inequality must be satisfied:

  v_LN ≥ v_required for factors and interactions

Depending upon the number of levels in the factors, a 2-level or a 3-level OA can be selected. If some factors are two-level and some three-level, then whichever is predominant should indicate which kind of OA is selected. Once the decision is made about the right kind of OA, the number of trials for that array must provide an adequate total dof. When the required dof falls between the dofs provided by two OAs, the next larger OA must be chosen.

4.4.4 Assignment of Factors and Interactions

Before getting into the details of using some method of assignment of factors and interactions, a demonstration of a mathematical property of OAs is in order.

Demonstration of Interaction Columns

The simplest OA is an L4, which has the arrangement shown:

              Column No.
  Trial No.   First   Second   Third
  1           1       1        1
  2           1       2        2
  3           2       1        2
  4           2       2        1

Recall the two-way ANOVA problem, and consider an experimental situation. A student worked at an aluminum casting foundry which manufactured pistons for reciprocating engines. The problem with the process was how to attain the proper hardness (Rockwell B) of the casting for a particular product. Engineers were interested in the effect of Cu and Mg content on casting hardness. According to specifications, the copper content could be 3.5 to 4.5% and the magnesium content could be 1.2 to 1.8%. The student ran an experiment to evaluate these factors and conditions simultaneously.
If A = % copper content: A1 = 3.5, A2 = 4.5. If B = % magnesium content: B1 = 1.2, B2 = 1.8. The number of experimental conditions for two two-level factors is given by 2^f = 2² = 4, namely A1B1, A1B2, A2B1 and A2B2. Imagine four different mixes of metal constituents are prepared, castings poured and hardness measured, with two samples measured from each mix. The results look like:

        A1       A2
  B1    76, 78   73, 74
  B2    77, 78   79, 80

To simplify the discussion, 70 points are subtracted from each of the observed values. The transformed results are:

        A1     A2
  B1    6, 8   3, 4
  B2    7, 8   9, 10

Let us adapt this problem to an L4 OA. Factor A can be assigned to column 1 and factor B to column 2. The entire experiment can be shown in an L4 OA as under:

              Column No.                 y data
  Trial No.   Factor A   Factor B   3    (Rb - 70)
  1           1          1          1    6, 8
  2           1          2          2    7, 8
  3           2          1          2    3, 4
  4           2          2          1    9, 10

ANOVA for the L4 OA

The ANOVA for an OA is conducted by calculating the sum of squares for each column. The formula for SS_A is the same as for the two-way ANOVA:

  SS_A = (A1 - A2)² / N

  A1 = 6 + 8 + 7 + 8 = 29
  A2 = 3 + 4 + 9 + 10 = 26
  SS_A = (29 - 26)² / 8 = 1.125

The sum of squares for factor B, column 2, is

  SS_B = (B1 - B2)² / N = (21 - 34)² / 8 = 21.125

Note that the sums of squares for factors A and B are identical to those from the two-way ANOVA. The sum of squares of column 3 is

  SS_3 = (3₁ - 3₂)² / N = (33 - 22)² / 8 = 15.125

Note that the value of SS_3 is equal to SS_AxB. This is not coincidental but is a mathematical property of OAs. The calculation demonstrates that the third column represents the interaction of the factors assigned to columns 1 and 2. This particular L4 example is similar to the two-way ANOVA example and has similar test results. Thus an L4 with two factors assigned to it is equivalent to a full factorial experiment, and its ANOVA is equivalent to a two-way ANOVA, because certain columns represent the interaction of two other columns.
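These column sums of squares can be reproduced mechanically; a minimal sketch:

```python
# Columns 1-3 of the L4 (one tuple per trial) and the two hardness
# readings (Rb - 70) observed in each trial
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
ydata = [(6, 8), (7, 8), (3, 4), (9, 10)]
N = sum(len(y) for y in ydata)  # 8 observations in total

def column_ss(col):
    """SS = (total at level 1 - total at level 2)^2 / N for a two-level column."""
    t1 = sum(sum(y) for row, y in zip(L4, ydata) if row[col] == 1)
    t2 = sum(sum(y) for row, y in zip(L4, ydata) if row[col] == 2)
    return (t1 - t2) ** 2 / N

print(column_ss(0), column_ss(1), column_ss(2))  # 1.125 21.125 15.125
```

Column 3 was never "set" during the experiment, yet its sum of squares equals the A×B interaction sum of squares, which is the property demonstrated above.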
4.4.5 Conducting the experiment

Once the factors are assigned to particular columns of an OA, the test strategy has been set and physical preparation for performing the experiment can begin. Some decisions need to be made regarding the order of testing the various trials. Factors are assigned to columns; trial test conditions are dictated by the rows of the OA. For an L8 OA, one can observe that trial 6 requires the test conditions A2 B1 C2 D1:

  Factors      A   B   AxB   C   AxC   BxC   D
  Column No.   1   2   3     4   5     6     7
  Trial #
  1            1   1   1     1   1     1     1
  2            1   1   1     2   2     2     2
  3            1   2   2     1   1     2     2
  4            1   2   2     2   2     1     1
  5            2   1   2     1   2     1     2
  6            2   1   2     2   1     2     1
  7            2   2   1     1   2     2     1
  8            2   2   1     2   1     1     2

The interaction conditions cannot be controlled when conducting a test, because they are dependent upon the main factor levels; only the analysis is concerned with the interaction columns. Therefore, it is recommended that test sheets be made up which show only the main factor levels required for each trial. This will minimize mistakes in conducting the experiment which might inadvertently destroy the orthogonality.

4.4.6 Analysis of Experimental Results

The simple casting example discussed for the two-way ANOVA is intended to demonstrate another basic property of OAs: the total variation can be accounted for by summing the variation for all columns. Let us put the data from that two-way ANOVA example into an L8 OA. Factor A is assigned to column 1 and B to column 2. The first two trials of the OA represent the A1 B1 condition, which has corresponding results of 6 and 8 in the example. Similarly, trials 3 and 4 represent the A1 B2 condition, which has results of 7 and 8. The complete OA is as shown below, with factors
and interactions in the header columns:

  Factors      A   B   AxB
  Column No.   1   2   3     4   5   6   7    y data (Rb - 70)
  Trial #
  1            1   1   1     1   1   1   1    6
  2            1   1   1     2   2   2   2    8
  3            1   2   2     1   1   2   2    7
  4            1   2   2     2   2   1   1    8
  5            2   1   2     1   2   1   2    3
  6            2   1   2     2   1   2   1    4
  7            2   2   1     1   2   2   1    9
  8            2   2   1     2   1   1   2    10

ANOVA of the Taguchi L8 OA:

  A1 = 6 + 8 + 7 + 8 = 29
  A2 = 3 + 4 + 9 + 10 = 26
  SS_A = (A1 - A2)² / N = (29 - 26)² / 8 = 1.125

The sum of squares for factor B:

  SS_B = (B1 - B2)² / N = (21 - 34)² / 8 = 21.125

The sum of squares of column A×B:

  SS_AxB = (3₁ - 3₂)² / N = (33 - 22)² / 8 = 15.125

Note that the SS for factors A, B and interaction A×B are the same as in the two-way ANOVA calculated earlier. Continuing with the sum-of-squares calculations for the other columns:

  SS_4 = (4₁ - 4₂)² / N = (25 - 30)² / 8 = 3.125
  SS_5 = (5₁ - 5₂)² / N = (27 - 28)² / 8 = 0.125
  SS_6 = (6₁ - 6₂)² / N = (27 - 28)² / 8 = 0.125
  SS_7 = (7₁ - 7₂)² / N = (27 - 28)² / 8 = 0.125
  SS_e = SS_4 + SS_5 + SS_6 + SS_7 = 3.500

The total sum of squares for the unassigned columns equals the SS_e calculated in the two-way ANOVA example. Thus the unassigned columns in an OA represent an estimate of the error variation. Here the particular array selected for the experiment changes the analysis approach slightly: the L4 has two data points per trial, while the L8 has one data point per trial. The L4 OA for the same experiment:

              Column No.                 y data
  Trial No.   Factor A   Factor B   3    (Rb - 70)
  1           1          1          1    6, 8
  2           1          2          2    7, 8
  3           2          1          2    3, 4
  4           2          2          1    9, 10

The error variance of the L4 must come from the repetitions within each trial, but the error variance in the L8 must come from the columns, since there are no repetitions within trials. Note that

  SS_T = Σ SS_columns

This is a demonstration of the property that the total sum of squares is contained within the columns of an OA.

Column estimate of error variance

In the previous example, the unassigned columns were shown to estimate error variance when there was only one test result per trial.
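The column sums of squares and the identity SS_T = Σ SS_columns can be confirmed numerically; a sketch using the L8 data above:

```python
# L8 array and the one hardness result (Rb - 70) per trial
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
y = [6, 8, 7, 8, 3, 4, 9, 10]
N = len(y)

def ss(col):
    """Sum of squares for one two-level column: (sum at 1 - sum at 2)^2 / N."""
    t1 = sum(v for row, v in zip(L8, y) if row[col] == 1)
    t2 = sum(v for row, v in zip(L8, y) if row[col] == 2)
    return (t1 - t2) ** 2 / N

col_ss = [ss(c) for c in range(7)]
ss_total = sum(v * v for v in y) - sum(y) ** 2 / N  # usual total SS about the mean

print(col_ss)    # [1.125, 21.125, 15.125, 3.125, 0.125, 0.125, 0.125]
print(ss_total)  # 40.875, equal to the sum of the seven column SS
```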
This approach of using columns to estimate the error variance can be used even if all the columns have factors assigned to them. Some factors assigned to an experiment will turn out not to be significant at all, even though they were thought to be before experimentation. This is equivalent to saying that the color of a car can affect fuel economy and assigning two different colors to a column (2 levels). Such a column will have a small sum of squares, because it will be estimating error variance rather than any true color effect. When a column effect turns out to be small in an OA, it means any one of the following:
• No factor or interaction was assigned, as in the L8 example discussed above
• A non-significant or very small factor and/or interaction effect
• Canceling factor and/or interaction effects

A fully saturated OA depends on some column effects turning out small relative to others, and uses the small ones as the estimate of error variance.

Pooling estimates of error variance

In the above example there were four unassigned columns, each with one degree of freedom, providing the estimate of error variance. A better estimate is the combination of all four column effects into one overall estimate of error variance with 4 dof. This combining of column effects to better estimate variance is referred to as 'pooling'. The pooling-up strategy entails F-testing the smallest column effect against the next larger one to see if significance exists. If no significance exists, these two effects are pooled and used to test the next larger column effect, and so on until some significant F-ratio is found. The ANOVA table for the experimental problem discussed above will appear like this:

  Source         SS       dof   V        F
  A              1.125    1     1.125
  B              21.125   1     21.125   22.83
  AxB            15.125   1     15.125   16.35
  Col 4          3.125    1     3.125
  Col 5          0.125    1     0.125
  Col 6          0.125    1     0.125
  Col 7          0.125    1     0.125
  T              40.875   7
  Error pooled   4.625    5     0.925
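The pooled error variance and F-ratios in the table can be checked directly (direct computation gives an F for B of about 22.84, so the table's 22.83 appears to truncate rather than round):

```python
# Column sums of squares from the L8 analysis, each with 1 dof
ss = {"A": 1.125, "B": 21.125, "AxB": 15.125,
      "col4": 3.125, "col5": 0.125, "col6": 0.125, "col7": 0.125}

# Pool the five small effects (A and the four unassigned columns) into error
pooled = ["A", "col4", "col5", "col6", "col7"]
ss_e = sum(ss[k] for k in pooled)
dof_e = len(pooled)       # one dof per pooled column
v_e = ss_e / dof_e        # pooled error variance

print(ss_e, v_e)                # 4.625 0.925
print(round(ss["B"] / v_e, 2))  # F for B, about 22.84
print(round(ss["AxB"] / v_e, 2))  # F for AxB, 16.35
```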
In this situation, five smaller column effects have been pooled to form an estimate of error variance with 5 dof associated with that estimate. As a rule of thumb, pooling up to half of the dofs is advisable. Here that rule was exceeded slightly because two of the column effects were substantially larger than the others. The ANOVA summary table can be rewritten to recognize the pooling, as shown below:

  Source         SS       dof   V        F
  B              21.125   1     21.125   22.83
  AxB            15.125   1     15.125   16.35
  Error pooled   4.625    5     0.925
  T              40.875   7

4.4.7 Confirmation experiment

The confirmation experiment is the final step in verifying the conclusions from the previous round of experimentation. The optimum conditions are set for the significant factors and levels, and several tests are made under constant conditions. The average of the confirmation experiment results is compared to the anticipated average based on the factors and levels tested.

4.5 EXAMPLE EXPERIMENTAL PROCEDURE

Popcorn experiment

This example walks through the process of designing an experiment. The scenario is to develop process (cooking) specifications to go on a bag of popcorn. The owner of the company has developed a new hybrid seed which may or may not use the same cooking process recommended for the current seed. One of the processes used by the customer is the hot-oil method, which is addressed in this situation.

Statement of problem and objective of experiment

The goal is to find process factors which influence popcorn quality characteristics relative to the customer's requirements. Characteristics such as:
• unpopped kernels in a batch,
• the fluffiness or volume of the popped corn,
• the color,
• the taste, and
• the crispiness
are typically considered. The objective of the experiment is to find the process conditions which optimize the various quality characteristics to provide improved popping, fluffiness, color, taste and texture.

Measurement Methods
The number of unpopped kernels in a batch can be easily measured, but this assumes that an equal number of uncooked kernels was used in each batch. The fluffiness or volume can be quantified by placing the popped kernels in a measuring cup, which again assumes that an equal number of uncooked kernels was used in each batch. The performance measures of color, taste and texture are somewhat more abstract. These may be addressed by assigning a numerical color rating, taste rating and texture rating to each batch.

[Flowchart – Popcorn Cooking: Heat Oil (Preheat) → Add Corn → Agitation / Venting → Inspect Popped Corn]

[Cause & Effect diagram – Popcorn Experiment: Oil (type, method, amount), Heat (amount, preheat), Agitation, Venting and Pan (material, shape) all feed into High Quality Popcorn]

Popcorn factors and levels:

Factor               Level 1    Level 2
A – Type of Oil      Corn Oil   Peanut Oil
B – Amount of Oil    Low        High
C – Amount of Heat   Medium     High
D – Preheat          No         Yes
E – Agitation        No         Yes
F – Venting          No         Yes
G – Pan Material     Aluminum   Steel
H – Pan Shape        Shallow    Deep

Assignment of factors to columns – L16 OA

The factor list is small enough to fit into an L16 OA at resolution 2 if 2 levels are used for each factor. Using the Taguchi L16 OA, the assignment of factors to the columns can be done by referring to table D2 of the appendix (hard copies distributed already). Factors A to H are assigned to columns 1, 2, 4, 7, 8, 11, 13 and 14. The trial data sheets can then be generated from the factor column assignment.
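As an illustrative aside, the trial data sheets can also be generated programmatically. The sketch below builds the standard two-level L16 array by XOR-ing four basic columns (an assumed construction, used here in place of appendix table D2, that reproduces the array printed below) and looks up the settings for trial 5:

```python
# Build the standard two-level L16 orthogonal array. Basic columns 1, 2, 4, 8
# carry the four independent bits; every other column is the XOR of the basic
# columns in its binary expansion (e.g. column 7 = 1 XOR 2 XOR 4).
def l16_array():
    rows = []
    for t in range(16):
        # Four basic bits; the bit for column 1 changes slowest.
        basic = {1: (t >> 3) & 1, 2: (t >> 2) & 1, 4: (t >> 1) & 1, 8: t & 1}
        row = []
        for col in range(1, 16):
            bit = 0
            for b, val in basic.items():
                if col & b:
                    bit ^= val
            row.append(bit + 1)  # levels are written 1/2
        rows.append(row)
    return rows

oa = l16_array()

# Popcorn factor-to-column assignment and level labels from the text.
columns = {"A": 1, "B": 2, "C": 4, "D": 7, "E": 8, "F": 11, "G": 13, "H": 14}
levels = {
    "A": ("Corn Oil", "Peanut Oil"), "B": ("Low oil", "High oil"),
    "C": ("Medium heat", "High heat"), "D": ("No preheat", "Preheat"),
    "E": ("No agitation", "Agitation"), "F": ("No venting", "Venting"),
    "G": ("Aluminum pan", "Steel pan"), "H": ("Shallow pan", "Deep pan"),
}

# Data sheet for trial 5 (row index 4); matches the trial 5 sheet in the text.
trial5 = [levels[f][oa[4][c - 1] - 1] for f, c in columns.items()]
print(trial5)
```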
Popcorn factors assigned to the L16 OA (columns 3, 5, 6, 9, 10, 12 and 15 are unassigned):

Column:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Factor:  A  B  -  C  -  -  D  E  -  -  F  -  G  H  -

Trial
  1      1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
  2      1  1  1  1  1  1  1  2  2  2  2  2  2  2  2
  3      1  1  1  2  2  2  2  1  1  1  1  2  2  2  2
  4      1  1  1  2  2  2  2  2  2  2  2  1  1  1  1
  5      1  2  2  1  1  2  2  1  1  2  2  1  1  2  2
  6      1  2  2  1  1  2  2  2  2  1  1  2  2  1  1
  7      1  2  2  2  2  1  1  1  1  2  2  2  2  1  1
  8      1  2  2  2  2  1  1  2  2  1  1  1  1  2  2
  9      2  1  2  1  2  1  2  1  2  1  2  1  2  1  2
 10      2  1  2  1  2  1  2  2  1  2  1  2  1  2  1
 11      2  1  2  2  1  2  1  1  2  1  2  2  1  2  1
 12      2  1  2  2  1  2  1  2  1  2  1  1  2  1  2
 13      2  2  1  1  2  2  1  1  2  2  1  1  2  2  1
 14      2  2  1  1  2  2  1  2  1  1  2  2  1  1  2
 15      2  2  1  2  1  1  2  1  2  2  1  2  1  1  2
 16      2  2  1  2  1  1  2  2  1  1  2  1  2  2  1

Trial 5's data sheet will look like:
• Corn Oil
• High amount of oil
• Medium heat
• Preheat before adding popcorn
• No agitation during popping
• Vented during popping
• Aluminum pan
• Deep pan shape

Conducting the experiment: the order may be completely randomized, with one test per trial. A batch of 200 seeds could be made for each trial with the specified test conditions. For each trial, unpopped kernels, fluffiness, color, taste and texture would be noted.

Popcorn experiment interpretation: ANOVA will be used to analyze each performance characteristic separately to determine which factors and levels gave the best result.

Problems:
1. Assign factors A, B, C, D and E, as well as interactions CxD and CxE, to an OA if all factors use two levels.
2. Assign these factors and interactions to an OA: A, B, C, D and E (two levels); AxB, BxC, CxE, AxC, BxD, DxE, AxD, BxE, BxF.

Answer to Problem 1: several possibilities exist.

L8 (Resolution 1):
Column #   1    2    3    4    5    6    7
Option 1   C    D    CxD  E    CxE  A    B
Option 2   D    E    A    C    CxD  CxE  B
Option 3   A    B    C    D    E    CxE  CxD

L16 (Resolution 3):
Column #   1    2    4    8    11   12   15
           A    B    C    D    CxE  CxD  E

4.6 Standard Orthogonal Array

CHAPTER 5 ROBUST DESIGN

5.1 WHAT IS ROBUSTNESS

Robust products work well—close to ideal customer satisfaction—even when produced in real factories and used by real customers under real conditions of use.
All products look good when they are precisely made in a model shop and are tested under carefully controlled laboratory conditions. Only robust products provide consistent customer satisfaction. Robustness also greatly shortens development time by eliminating much of the rework known as build, test, and fix.

Robustness is small variation in performance. For example, Sam and John go to the target range, and each shoots an initial round of 10 shots. Sam has his shots in a tight cluster, which lies outside the bull's-eye. John actually has one shot in the bull's-eye, but his success results only from his hit-or-miss pattern. In this initial round John has one more bull's-eye than Sam, but Sam is the robust shooter. By a simple adjustment of his sights, Sam will move his tight cluster into the bull's-eye for the next round. John faces a much more difficult task: he must improve his control altogether, systematically optimizing his arm position, the tension of his sling, and other critical parameters. Several facts about this example reveal important characteristics of robustness:

(1) Applying the ultimate performance metric to initial performance is often misleading; Sam had no bull's-eyes even though he is an excellent marksman.
(2) Adjustment to the target is usually a simple secondary step.
(3) Reduction of variation is the difficult step.
(4) A metric is needed that recognizes that Sam is a good marksman and that measures his expected performance after he adjusts his sights to the target.

Automobiles give further insight into robustness. Customers do not want a car that is a lemon; they want one that is robust against production variations. A lemon is a car that has excessive production variations that cause great customer dissatisfaction. To overcome this, the production processes have to be more robust so that they produce less variation, and the car design has to be more robust so that its performance is less sensitive to production variations.
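The marksman example can be put in numbers. The sketch below uses made-up shot coordinates: Sam's spread (variation about his own mean point of impact) is small even though his mean is off target, which is exactly what makes him the robust shooter:

```python
# Hypothetical shot coordinates relative to the bull's-eye at (0, 0).
sam = [(4.0, 4.2), (4.1, 4.0), (3.9, 4.1), (4.0, 3.9)]     # tight, off-centre
john = [(0.1, -0.2), (2.5, -1.8), (-2.2, 1.9), (1.7, 2.4)]  # centred, scattered

def mean(points):
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def spread(points):
    # Mean squared distance of the shots from their own mean point of impact.
    mx, my = mean(points)
    return sum((x - mx) ** 2 + (y - my) ** 2 for x, y in points) / len(points)

# Sam's spread is far smaller, so a simple sight adjustment (shifting his
# mean onto the bull's-eye) makes him the better shooter.
print(spread(sam) < spread(john))   # True
```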
Customers also want a car that will start readily in northern Canada in the winter and will not overheat in southern
Arizona during the summer; that is, they want a car that is robust with respect to variations in customer use conditions. Customers would also prefer cars that are as good at 50,000 miles as when new, that is, cars that are robust against time and wear. This example reveals the three sources of undesirable variation (also called noises) in products:

(1) Variations in conditions of use
(2) Production variations
(3) Deterioration (variation with time and use)

5.2 The Robustness Strategy uses five primary tools:

1. P-diagram: used to classify the variables associated with the product into noise, control, signal (input), and response (output) factors.
2. Ideal Function: used to mathematically specify the ideal form of the signal-response relationship, as embodied by the design concept, for making the higher-level system work perfectly.
3. Quadratic Loss Function (also known as the Quality Loss Function): used to quantify the loss incurred by the user due to deviation from target performance.
4. Signal-to-Noise Ratio: used for predicting the field quality through laboratory experiments.
5. Orthogonal Arrays: used for gathering dependable information about control factors (design parameters) with a small number of experiments.

5.2.1 P-diagram

A P-diagram is a must for every development project; it is a way of succinctly defining the development scope. First we identify the signal (input) and response (output) associated with the design concept. For example, in designing the cooling system for a room, the thermostat setting is the signal and the resulting room temperature is the response.

Next, consider the parameters/factors that are beyond the control of the designer; those factors are called noise factors. Outside temperature, opening/closing of windows, and number of occupants are examples of noise factors. Parameters that can be specified by the designer are called control factors.
The number of registers, their locations, the size of the air conditioning unit, and the insulation are examples of control factors.

Ideally, the resulting room temperature should be equal to the set-point temperature. Thus the ideal function here is a straight line of slope one in the signal-response graph. This relationship must hold for all operating conditions. However, the noise factors cause the relationship to deviate from the ideal. The job of the designer is to select appropriate control factors and their settings so that the deviation from the ideal is minimized at a low cost. Such a design is called a minimum sensitivity design or a robust design. It can be achieved by exploiting the nonlinearity of the products/systems. The Robust Design method prescribes a systematic procedure for minimizing design sensitivity, called Parameter Design.

An overwhelming majority of product failures, and the resulting field costs and design iterations, come from ignoring noise factors during the early design stages. The noise factors crop up one by one as surprises in the subsequent product delivery stages, causing costly failures and band-aids. These problems are avoided in the Robust Design method by subjecting the design ideas to noise factors through parameter design.

The next step is to specify the allowed deviation of the parameters from the nominal values. It involves balancing the added cost of tighter tolerances against the benefits to the customer. Similar decisions must be made regarding the selection of different grades of the subsystems and components from available alternatives. The quadratic loss function is very useful for quantifying the impact of these decisions on customers or higher-level systems. The process of balancing the cost is called Tolerance Design. The result of using parameter design followed by tolerance design is successful products at low cost.

5.2.2 Quality Measurement

In quality improvement and design optimization the metric plays a crucial role.
Unfortunately, a single metric does not serve all stages of product delivery. It is common to use the fraction of products outside the specified limits as the measure of quality. Though it is a good measure of the loss due to scrap, it fails miserably as a predictor of customer satisfaction. The quality loss function serves that purpose very well.

Let us define the following variables:

m: target value for a critical product characteristic
±∆0: allowed deviation from the target
A0: loss due to a defective product

Then the quality loss, L, suffered by an average customer due to a product with y as the value of the characteristic is given by:

L = k (y − m)²   where k = A0 / ∆0²

If the output of the factory has a distribution of the critical characteristic with mean µ and variance σ², then the average quality loss per unit of the product is given by:

Q = k [ (µ − m)² + σ² ]

5.2.3 Signal-to-Noise (S/N) Ratios

The product/process/system design phase involves deciding the best values/levels for the control factors. The signal-to-noise (S/N) ratio is an ideal metric for that purpose. The equation for the average quality loss, Q, says that the customer's average quality loss depends on the deviation of the mean from the target and also on the variance. An important class of design optimization problems requires minimization of the variance while keeping the mean on target. Between the mean and the standard deviation, it is typically easy to adjust the mean on target, but reducing the variance is difficult. Therefore, the designer should minimize the variance first and then adjust the mean on target. Most of the available control factors should be used to reduce variance; only one or two control factors are needed for adjusting the mean on target. The design optimization problem can be solved in two steps:

1. Maximize the S/N ratio η, defined as

η = 10 log10 ( µ² / σ² )

This is the step of variance reduction.
2. Adjust the mean on target using a control factor that has no effect on η. Such a factor is called a scaling factor. This is the step of adjusting the mean on target.

One typically looks for one scaling factor to adjust the mean on target during design and another to adjust the mean to compensate for process variation during manufacturing.

5.3 Steps in Robust Parameter Design

Robust parameter design has four main steps:

1. Problem Formulation: This step consists of identifying the main function, developing the P-diagram, defining the ideal function and S/N ratio, and planning the experiments. The experiments involve changing the control, noise and signal factors systematically using orthogonal arrays.
2. Data Collection/Simulation: The experiments may be conducted in hardware or through simulation. It is not necessary to have a full-scale model of the product for the purpose of experimentation; it is sufficient, and more desirable, to have an essential model of the product that adequately captures the design concept. Thus, the experiments can be done more economically.
3. Factor Effects Analysis: The effects of the control factors are calculated in this step and the results are analyzed to select the optimum settings of the control factors.
4. Prediction/Confirmation: In order to validate the optimum conditions we predict the performance of the product design under baseline and optimum settings of the control factors. Then we perform confirmation experiments under these conditions and compare the results with the predictions. If the results of the confirmation experiments agree with the predictions, then we implement the results. Otherwise, the above steps must be iterated.
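The two-step procedure can be sketched numerically (all numbers hypothetical): step 1 picks the control-factor setting with the highest nominal-the-best S/N ratio, and step 2 uses a scaling factor, which leaves the S/N ratio unchanged, to put the mean on target:

```python
import math

target = 100.0
# Observed (mean, standard deviation) for three hypothetical settings of a
# control factor:
settings = {"A1": (80.0, 8.0), "A2": (90.0, 3.0), "A3": (105.0, 7.0)}

def sn_ratio(mu, sigma):
    # Nominal-the-best S/N ratio: eta = 10 log10(mu^2 / sigma^2)
    return 10 * math.log10(mu**2 / sigma**2)

# Step 1: maximize the S/N ratio (variance reduction).
best = max(settings, key=lambda s: sn_ratio(*settings[s]))

# Step 2: adjust the mean with a scaling factor. Scaling multiplies mu and
# sigma by the same amount, so the S/N ratio is unchanged.
mu, sigma = settings[best]
scale = target / mu
mu_adj, sigma_adj = mu * scale, sigma * scale
assert abs(sn_ratio(mu, sigma) - sn_ratio(mu_adj, sigma_adj)) < 1e-9

print(best, round(mu_adj, 1), round(sigma_adj, 2))
```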
5.4 NOISE FACTORS

There are two main aspects to the Taguchi technique. First, the behavior of a product or process is characterized in terms of factors (parameters) that are separated into two types:

1. Controllable (or design) factors: those whose values may be set or easily adjusted by the designer or process engineer.
2. Uncontrollable (or noise) factors: sources of variation often associated with the production or operational environment; overall performance should, ideally, be insensitive to their variation.

Second, the controllable factors are divided into:

1. Those which affect the average level of the response of interest, referred to as target control factors (TCF), sometimes called signal factors.
2. Those which affect the variability in the response, the variability control factors (VCF).
3. Those which affect neither the mean response nor the variability, and can thus be adjusted to fit economic requirements, called the cost factors.

Noise: the variables/factors causing variation which are impossible or difficult to control.
• Outer noise: operating conditions, environment.
• Inner noise: deterioration, manufacturing imperfections.

Taguchi's classification of factors (parameters, variables): controllable factors (CF) split into variability control factors (VCF), target control factors (TCF) and cost control factors; noise factors (NF, uncontrollable) split into internal and external noise.

Purpose: to make the process/product insensitive to the effect of noise factors (NFs).

Procedure:
a. Find the VCFs and their settings which minimize variability.
b. Find the TCFs and their settings which bring the mean response onto target.

It is this concentration on variability which distinguishes the Taguchi approach from traditional tolerance methods or inspection-based quality control. The idea is to reduce variability by changing the variability control factors, while maintaining the required average performance through adjustments to the target control factors.

5.5 OFF-LINE AND ON-LINE QUALITY CONTROL

Western books on quality frequently divide quality systems into two parts:
• Quality of design
• Quality of conformance

Dr. Taguchi refers to these two parts as off-line quality control and on-line quality control, respectively.

5.5.1 Off-line quality control

It is concerned with:
1. Correctly identifying customer needs and expectations,
2. Designing a product which will meet customer expectations,
3. Designing a product which can be consistently and economically manufactured,
4. Developing clear and adequate specifications, standards, procedures and equipment for manufacture.

There are two stages in off-line quality control: the product design stage and the process design stage. During the product design stage a new product is developed or an existing product is modified. The goal here is to design a product which is manufacturable and will meet customer requirements. During the process design stage, production and process engineers develop manufacturing processes to meet the specifications developed during the product design stage. Taguchi developed a three-step approach for assuring quality within each of the two stages of off-line quality control. He called these steps system design, parameter design and tolerance design.

5.5.2 On-line quality control

It is concerned with manufacturing products within the specifications established during product design, using the procedures developed during process design. Taguchi identifies two stages of on-line quality control.

Stage 1: Production quality control methods. It has three forms:
• Process diagnosis and adjustment
• Prediction and correction
• Measurement and action

Stage 2: Customer relations.

5.5.1.1 Product Design (Off-Line Quality Control, Stage 1)

1.
System Design: applying engineering and scientific knowledge to develop a prototype design which meets customer requirements. Initial selections of parts, materials and manufacturing technology are made at this time. The emphasis here is on using the best
available technology to meet customer requirements at lower cost. A key difference between this step in Taguchi's approach and the prototype design step in many western R&D departments is Taguchi's focus on proven technology, low-cost parts and customer requirements, rather than on using the latest technology and exotic or expensive parts.

2. Parameter Design: determination of optimal settings for the product parameters. The goal here is to minimise manufacturing and product lifetime costs by minimising performance variation. This involves making the product design robust, i.e. insensitive to noise factors. A noise factor is an uncontrollable source of variation in the functional characteristics of a product. Taguchi identifies three types of noise factor:
• External noise: variation in environmental conditions such as dust, temperature, humidity or supply voltage.
• Internal noise: mainly deterioration, such as product wear, material aging or other changes in components or materials with time or use.
• Unit-to-unit noise: differences between products built to the same specifications, caused by variability in materials, manufacturing equipment and assembly processes.

3. Tolerance Design: establish tolerances around the target (nominal) values established during parameter design. The goal is to set tolerances as wide as possible (to reduce manufacturing costs) while still keeping the product's functional characteristics within specified bounds.

5.5.1.2 Process Design (Off-Line Quality Control, Stage 2)

1. System design: select the manufacturing process on the basis of knowledge of the product and current manufacturing technology. The focus here is on building to specification using existing machinery and processes whenever possible.
2. Parameter design: determine appropriate levels for the controllable production process parameters. The goal here is to make the process robust, i.e. to minimise the effect of noise on the production process and finished product.
Experimental designs are used during this step.

3. Tolerance design: establish tolerances for the process parameters identified as critical during process parameter design. If the product or process parameter design steps are poorly done, it may be necessary here to tighten tolerances or specify higher-cost materials or better equipment, thus driving up manufacturing costs.

5.5.2.1 Production Quality Control Methods (On-Line QC, Stage 1)

Dr. Taguchi identifies three forms of on-line quality control:
1. Process diagnosis and adjustment: the process is monitored at regular intervals; adjustments and corrections are made as needed.
2. Prediction and correction: a quantitative process parameter is measured at regular intervals, and the data are used to project trends in the process. Whenever the process is found to be too far off target, it is adjusted to correct the situation. This method is also called feedback or feedforward control.
3. Measurement and action: quality by inspection. Every manufactured unit is inspected; defective units are reworked or scrapped. This is the most expensive and least desirable form of production quality control, since it neither prevents defects from occurring nor identifies all defective units.

5.5.2.2 Customer Relations (On-Line QC, Stage 2)

Customer service can involve repair or replacement of defective products, or compensation for losses. The complaint-handling process should be more than a customer relations operation. Information on types of complaints and failures, and on customer perceptions of products, should be promptly fed back to the relevant functions within the organization for corrective action.

References

Books:
• Genichi Taguchi, Taguchi on Robust Technology Development
• Ranjit Roy, A Primer on the Taguchi Method
• Robert H. Lochner & Joseph E. Matar, Designing for Quality
• N. Logothetis, Managing for Total Quality
• Kanishka Bedi, Quality Management
• Besterfield, Total Quality Management

Internet references: www.slideshare.net, www.google.com, www.wikipedia.org, www.scribd.com