

Volume 27, No. 2 Winter 2009

Chair’s Message
by Daksha Chokshi

I hope everyone had a wonderful holiday season and I wish everyone a very happy, healthy, and prosperous 2009! With the start of the New Year, I’d like to look back for a moment and talk about our activities since the last newsletter.

I am delighted that four members of the Statistics Division were named as 2008 ASQ Fellows. The rank of Fellow is one of the highest honors granted by the ASQ to its members. Congratulations to Jonathon Andell, Necip Doganaksoy, Vijay Nair, and Gregory Piepel!

The Fall Technical Conference (FTC) was held in October. The conference was a great success with many accolades given to the organizers, coordinators, speakers, moderators, and presenters of short courses. Great job, everyone! Thanks and congratulations to everyone involved for their hard work and support. I would particularly like to recognize Frank Rossi, our FTC


In This Issue

Chair’s Message . . . 1
Editor’s Corner . . . 1
2008 Youden Address: Sequential Experimentation for Meta-Analyses . . . 4-11
Statistical Thinking: Past, Present, and Future - Panel Session Highlights . . . 11
Ronald Does Receives Hunter Award . . . 12
52nd Annual Fall Technical Conference - Highlights from Statistics Division Event Coordinator . . . 13
Edward Schilling Memorial . . . 14-15
Awards Showcase . . . 16
Statistics Division Standards Committee Report: Meeting in Beijing . . . 17
Session at the 18th Simposio de Estadistica in Cartagena, Colombia . . . 18
Statistical Resources on the Web . . . 18
Call for Papers . . . 19-20
Highlights from FTC Statistical Division Council Meeting Minutes . . . 21
Treasurer’s Report . . . 22
Statistics Division Committee Roster . . . 23

Editor’s Corner
by Ted Allen and Shih-Hsien Tseng

We are hearing exciting things about the upcoming WCQI and FTC. The chair and the chair-elect are helping to dream up fun ways for the division to connect in Minneapolis and Indianapolis. An important concern for them is keeping the history of the division alive in the minds of the next generation. Also, using the core of past Division chairs as the heart of the meetings is a key aspect of the developing strategy.

Here in Ohio and Taiwan, we are seeing firsthand the effects of the global recession on quality-related jobs and industry. For example, a well-respected master black belt we know who is active in ASQ lost her job when her company laid off her entire section. Yet many others are experiencing business as usual. Feel free to share with us your views about coping with the recession and its relationship to ASQ ([email protected] and [email protected]).

Another interest for us is the interaction between ASQ and Asia. A year ago, Thong Ngee Goh won the Hunter Award for his contributions bringing ASQ concepts to the eastern Pacific, with his focus being Singapore. Also, the report in this newsletter about the ISO meeting in Beijing further shows ties with Asian countries. Several Asian nations have respect for the contributions of ASQ members and have sincere interest in participating in Statistics Division functions. Clearly, with ASQ’s traditional focus on manufacturing sectors, building ties with Asia seems only natural. We also welcome thoughts about how ASQ might interact with Asia more, particularly Taiwan and the rest of China.



Criteria for Basic Tools and Mini Paper Columns

Basic Tools
Purpose: To inform/teach the “quality practitioner” about useful techniques that can be easily understood, applied and explained to others.

Criteria:
1. Application oriented/not theory.
2. Non-technical in nature.
3. Techniques that can be understood and applied by non-statisticians.
4. Approximately five pages or less in length (8 1/2” x 11”, typewritten, single spaced).
5. Should be presented in “how to use it” fashion.
6. Should include applicable examples.

Possible Topics:
New SPC techniques
Graphical techniques
Statistical thinking principles
“Rehash” established methods

Mini-Paper
Purpose: To provide insight into application-oriented techniques of significant value to quality professionals.

Criteria:
1. Application oriented.
2. More technical than Basic Tools, but contains no mathematical derivations.
3. Focus is on insight into why a technique is of value.
4. Approximately six to eight pages or less in length (8 1/2” x 11”, typewritten, single spaced). Longer articles may be submitted and published in two parts.
5. Not overly controversial.
6. Should include applicable examples.

General Information
Authors should have a conceptual understanding of the topic and should be willing to answer questions relating to the article through the newsletter. Authors do not have to be members of the Statistics Division.

Submissions may be made at any time to the Statistics Division Newsletter Editor. All articles will be reviewed. The editor reserves the discretionary right to determine which articles are published.

Acceptance of articles does not imply any agreement that a given article will be published.

VISION
• Data Driven Decisions Through Statistical Thinking
• We are the recognized forum that advances data-driven decision making through Statistical Thinking.

MISSION
• Advance data-driven decision making through Statistical Thinking.
• Improve the public’s perception and understanding of statistical methods and data-driven decisions.
• Be the source for the statistical components of the ASQ body of knowledge.
• Support the growth and development of ASQ Statistics Division members.
• Increase the credibility, marketability and influence of ASQ Statistics Division members.

STRATEGIC FOCUS
1. BODY OF KNOWLEDGE
• What is it?
• Where is it?
• How to categorize it?
• Disseminate via Web page
• Keep current
• Partner with HQ
• Goals to understand, organize, make accessible, inventory, gap analysis

2. COMMUNICATION
• Newsletter
• E-Zines
• Align both to vision and mission
• Gap analysis with primary audiences
• Discussion boards
• Promote via E-Zine, conference booths
• Align discussion boards to vision and mission
• Evaluate whether to continue

3. VOICE OF THE CUSTOMER
• Members, other divisions, audiences
• Proactive way to engage (go, see, listen)

4. DATA DRIVEN DECISIONS
• How do we advance?
• Do we broaden the audience?
• AQC session?
• Partnerships?

DESIRED END STATE
• Our members will be proud to be part of the Statistics Division.
• Our Division’s operations will be a model for other organizations.
• We will be a widely influential authority on scientific approaches to quality and productivity improvement.

PRINCIPLES
• Our customers’ needs will be continuously anticipated and met (i.e., customer focused rather than customer driven).
• Our market focus for products and services is weighted as follows:
  • Greatest weight on intermediate level.
  • Nearly as much weight on basic level.
  • Much less weight on advanced level.
• Focus on a few key things.
• Balance short-term and long-term efforts.
• Value diversity (including geographical and occupational) of our membership.
• Be proactive.
• Recognize that we exist for our customers.
• View statistics from the broad perspective of quality management.
• Apply Statistical Thinking ourselves; that is, practice what we preach.
• Uphold professional ethics.
• Continuously improve.

MEETING GROUND RULES
• Respect and listen to all participants.
• No speeches.
• No “side-bar” discussions.
• Decisions by consensus, if possible.
• We will be open and honest, even if it hurts.
• Support your ideas, don’t defend them.
• We will delegate word-smithing to small groups.
• All help facilitate, although we will have a formal leader, facilitator, scribe, and timekeeper (including at breakouts).
• We will rotate scribes.
• We will keep a separate flipchart for To-Do’s.
• Mission, Vision, Principles, Strategy, Ground Rules should be visible.

Disclaimer
The technical content of material published in the ASQ Statistics Division Newsletter may not have been refereed to the same extent as the rigorous refereeing that is undergone for publication in Technometrics or J.Q.T. The objective of this newsletter is to be a forum for new ideas and to be open to differing points of view. The editor will strive to review all articles and to ask other statistics professionals to provide reviews of all content of this newsletter. We encourage readers with differing points of view to write to the editor and request an opportunity to present their views via a letter to the editor. The views expressed in material published in this newsletter represent the views of the author of the material, and may or may not represent the official views of the Statistics Division of ASQ.

[Figure: diagram relating the Division’s Vision, Mission, Strategy, and Principles]


Chair’s Message (continued from page 1)

representative, and Bob Brill, our Short Course chair, for all their time and effort.

The Statistics Division Hospitality Suite had a great turnout, over 60 people! Certificates of recognition and small tokens of appreciation were also presented to our dedicated volunteers for their contributions to the Division. The names are too many to list here but are included in our Awards Showcase.

I would also like to recognize Christine M. Anderson-Cook of Los Alamos National Laboratory, who gave the W.J. Youden Memorial Address at the FTC. Her talk on “Sequential Experimentation for Meta-Analyses” is included in this newsletter. Congratulations also to Ronald Does, winner of the William G. Hunter Award, as well as Jeroen de Mast and Albert Trip, winners of the Lloyd S. Nelson Award. The text of Ronald Does’s Hunter Award acceptance speech is also included.

Our past chairs Gordon Clark, Doug Hlavacek, Roger Hoerl, and Bob Mitchell also deserve special recognition for developing and presenting the “Statistical Thinking: Past, Present, and Future” panel session. This session focused on the development and evolution of statistical thinking and recommended a broad application of statistical thinking principles to continuous improvement strategies for the future. The session was very well received. Highlights are included in this newsletter.

And now on to future events… The upcoming FTC, to take place in Indianapolis, marks a very special milestone for the Statistics Division as we celebrate our 30th anniversary! Sub-committees are already working to plan commemorative events that you will not want to miss. Information currently available about the 2009 FTC is at http://www.asqstatdiv.org/ftc.htm. Please make plans now to participate!

Later this spring, our Chair-Elect, Vijay Nair, will lead the Long-Range Planning meeting. This meeting is held every 3-5 years to assess the current state of division activities and evaluate whether these activities support our long-term direction with the best value to our members. The Division invites you to submit ideas regarding the Statistics Division’s strategies and future plans to Vijay Nair at [email protected].

The 2009 World Conference on Quality and Improvement (WCQI) will be held in Minneapolis on May 18-20. The theme of this year’s conference is “The Culture of Quality: Serving Customers, Organizations, and Communities”. The Statistics Division Council will hold its annual Tactical Planning Meeting on Sunday, May 17th. The Division’s annual business meeting is scheduled for Monday, May 18th. Other division-sponsored events are a booth in the exhibit hall and a hospitality suite, scheduled for both Monday and Tuesday nights. These are all great opportunities to meet the leadership team and share your thoughts. New volunteers with fresh ideas and enthusiasm are always welcome! Information about WCQI is available at http://wcqi.asq.org/.

I am pleased to continue to serve the members of our Statistics Division. Hopefully, you share my excitement and continue to benefit from Division plans and activities. Please don’t hesitate to contact me. I can be reached at [email protected] or (561) 796-8373.

    Best wishes for the coming year!


Membership Committee Invitation
by Brian Sersion

The Membership Committee is looking for a few good members to help with the membership expansion initiative for 2009. Our first project is to develop and publish an ASQ – Statistics Division advertisement for Spring/Summer 2009. The purpose of this advertising campaign is to expand our membership. There are a few other initiatives that I would like to promote during the next year with your help. I anticipate your level of commitment to be a few hours each month for a regular conference call and committee assignments. If you are interested in helping me with the Membership Committee, please e-mail me today at [email protected]. I would also like to take this opportunity to thank all of the members who recently responded to our call for volunteers.


2008 YOUDEN ADDRESS
Sequential Experimentation for Meta-Analyses
Christine Anderson-Cook, Statistical Sciences Group, Los Alamos National Laboratory

Thank you to the conference committee for giving me the opportunity to speak today. It is indeed a great honor to be the second woman in the 35-year history of the Youden Addresses. In preparing to give this address, I read a number of the previous talks and was struck by how many profound ideas and prophetic words they contain. The bar is set very high for me today. I would like to say thank you to the following people who helped me with the many iterations of this talk: Sallie Keller-McNulty (Rice University), David Higdon, Michael Hamada, Aparna Huzurbazar (Los Alamos National Laboratory), Alyson Wilson (Iowa State University), William Woodall (Virginia Tech), Timothy Robinson (University of Wyoming) and my husband, Stan Cook. These are also people who have greatly influenced my thinking on this topic.

Today I am going to talk about data combination and data synthesis in the context of meta-analyses. There was very little in my formal training that helped me prepare to work in this area, and yet it is a topic which is emerging in importance and application. My understanding of meta-analyses has been greatly influenced by my time at Los Alamos, where there is a strong focus on facilitating decision-making by using all available data. Meta-analysis for multiple types of data is an emerging area, with some tools starting to be available, but there is still a great deal of research opportunity in the areas of methodological development as well as for implementation for particular applications. I think that this is an area that Jack Youden would have been interested in, as there are some interesting design of experiment issues to consider. In addition, the idea of systematic bias from different data sources connects well with his interest in measurement systems.

The recent issue of Technometrics included the “Future of Industrial Statistics: A Panel Discussion” (2008). In it there is a wonderful thought-provoking discussion of challenges and opportunities facing industrial statisticians. Bill Meeker said that the “two main technological advances … driving the need for new statistical methodology are the increased availability of large amounts of data and the continuing development of ...[mathematical] models for phenomenon that previously would have been studied primarily through … experimentation”. Where previously our understanding of complex processes and systems may have been driven by empirical data, there is now an increasing emphasis on computer codes which model our understanding based on the underlying science. The combination of these computer codes with physical data, and how to unite multiple sources of data into a single analysis, is the focus of my talk.

I will discuss meta-analysis issues through three examples. The first is a historical example which describes estimating two physical constants: the astronomical unit (the distance from the earth to the sun) and the speed of light. Then I will talk about the estimation of reliability for aging complex systems, where multiple data sources at the system, sub-system and component levels can help with prediction of system reliability. The third example examines the process of estimating some cosmological parameters, which seeks to better understand the history and evolution of the universe. It involves a blending of physical data with complex computer codes. I will also talk about some of the data collection or design of experiments issues in the context of meta-analyses.

So what do we mean by meta-analyses? For many of us, we associate them with medical studies where several clinical trials are combined into a single analysis. A definition from the American Heritage Dictionary describes a meta-analysis as “the process or technique of synthesizing research results by using various statistical methods to retrieve, select, and combine results from previously separate but related studies.” I also found an alternate definition in the Skeptics Dictionary, which describes a meta-analysis as “a type of data analysis in which the results of several studies, none of which need find anything of statistical significance, are lumped together and analyzed as if they were the results of one large study.” While the essence of this second definition sounds quite negative, the spirit of the definition actually has a fair bit that we can take from it. If we have multiple small data sets that are each individually insufficient to answer the question of interest, then by combining them and incorporating engineering or scientific understanding of the process, there should be hope of extracting more from that collection of data compared to just looking at the pieces alone. Therefore, the broader definition that I will be working from today is as follows: “Data combination from various direct and indirect data sources to answer a global question(s) by leveraging knowledge and power through understanding of the connection of the data to each other and to the question of interest.”


Example 1: Estimating Physical Constants
The first example was also presented by Jim Lucas in his Youden address and talks about estimating the astronomical unit (AU), which is the distance from the earth to the sun. The data involve point and interval estimates for the AU from 1895 to 1961, and were first presented in the form shown in Figure 1 by Jack Youden in his Technometrics paper “Enduring Values” (1972). His idea of enduring values is one that frames a good motivation for meta-analyses well. So what did Jack Youden mean by enduring values? What we would really like is that when new data become available, our current estimate with its associated uncertainty range remains consistent with what we have just observed. We may expect that with new information, our bounds of uncertainty may be reduced, but we would hope that our previous knowledge does not just become irrelevant and get replaced. Figure 2 shows how we might ideally expect our knowledge to evolve, with a progression of related values that show a gradual reduction in uncertainty as additional information becomes available.

One of the historical arguments against combining data from different studies has been that it involves some subjectivity about the weighting for different data sets. However, I would argue that just using the most current data also has an implied weighting structure: a weight of 1 for data from the current study, and all other data are assigned a weight of 0. Surely there is opportunity for some middle ground by considering the sensitivity to several potential weighting structures for combining the data. Returning to the AU data in Figure 1, we see that instead of enduring values, we have disposable values. For each of the studies, the point estimate is not contained in the uncertainty range of the previous study, leading us to replace values rather than updating our understanding. This is not the pattern that we want to use as we seek to understand scientific processes. Note that the current estimate is now reported with no uncertainty at all.

Figure 1: Astronomical Unit data from J. Youden’s “Enduring Values” paper.

Figure 2: Ideal evolution of knowledge about a parameter value.
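To make the weighting argument concrete, here is a minimal sketch, with entirely invented numbers, of the two implied weighting structures: the “latest study only” rule (weight 1 on the newest estimate, 0 on everything before it) versus an inverse-variance pooling of all studies, a standard middle-ground choice in meta-analysis.

```python
import numpy as np

# Hypothetical point estimates and standard errors from successive
# studies of the same constant (invented numbers, for illustration).
estimates = np.array([149.47, 149.53, 149.56, 149.598, 149.600])
std_errs = np.array([0.100, 0.070, 0.050, 0.010, 0.004])

# "Disposable values": implicit weight of 1 on the newest study,
# 0 on everything that came before.
latest_only = estimates[-1]

# A middle ground: inverse-variance weights, so precise studies count
# more but earlier work still contributes to the pooled estimate.
w = 1.0 / std_errs**2
pooled = np.sum(w * estimates) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"latest study only: {latest_only:.3f}")
print(f"inverse-variance pooled: {pooled:.3f} +/- {pooled_se:.3f}")
```

With precise recent studies, the pooled answer is dominated by them anyway, so the middle ground costs little; its advantage is that earlier work tempers the estimate when a new study is biased.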

The second historical illustration of estimating physical constants, involving the speed of light, is relatively similar. Jock MacKay and Wayne Oldford present a wonderful description of the historical progress on estimating this constant in a Statistical Science paper (2000), showing the connection of statistical methods to the broader scientific process. What is different about this example is that five different methods were used to estimate this constant. If we look at the estimates grouped by method, we are able to discern some patterns of systematic bias. These patterns could potentially help us to understand the weaknesses of the different approaches, when the data are combined into a single analysis. Interestingly, the current value for the speed of light is not an estimate, but rather a defined quantity in the metric system. A meter is then defined to be the distance traveled by light in 1/299,792,458th of a second. So if our understanding of the speed of light changes, then instead of adjusting this raw speed, we will instead need to pull out our meter sticks and shave them down or glue something else onto them.

So some of the interesting aspects and issues of these first examples are as follows: First, in each of the individual studies, the researchers did a good job of quantifying the variance of their estimates based on their data. However, they did not do a good job of accounting for the potential bias in their data collection methods. This is a little bit like using optimality criteria such as D-, G- or I-optimality in a single data type experiment as the only criteria for measuring the goodness of the design. These measures are predicated on the belief that the underlying assumed model is correct. Some additional broader measures of the goodness of the design should also be considered which examine potential lack of fit of the model. A lot of other aspects of single data type experiments (such as outliers, lack of fit and model correctness) also apply in the meta-analysis setting, but need more careful definition. All of these are much richer concepts in multiple data type experiments.

Second, each of the results reported was based on starting with a clean slate, as there was no leveraging of information from previous studies. In fairness, combining different kinds of data was much more difficult before much of the computational fire-power that is now available to us. However, there was probably also some element of not wanting to combine the data, with the belief that “our lab is correctly calibrated, and the others are not”. In my previous training, the key distinguishing factor separating the frequentist and Bayesian frameworks was the ability to incorporate expert knowledge into an analysis through the use of priors. There is another important advantage of the Bayesian approach for this context, and that is the ability to more simply and appropriately propagate uncertainty from different data types in a single analysis. This is an important consideration for using the Bayesian approach for meta-analyses.

Third, there is a natural ranking of the data in these examples. More recent data should have the advantage of modern measurement equipment for improving the estimation of these physical constants. However, it is easy to see from the examples that although the widths of the uncertainty estimates generally decrease, there are several examples of the technological advantage being superseded by the bias considerations and local calibration of the measurement system.

Example 2: Estimating Reliability of a Complex System
The second example, involving reliability of a complex system, has a different feel. In the Technometrics panel discussion about the future of industrial statistics, Sallie Keller-McNulty, former ASA president and current Dean of Engineering at Rice University, said “The holy grail will be to combine massively heterogeneous information and integrate it across what is known about all of the subsystems into some believable assessment about the performance of the full system”. She made this statement in the context of the James Webb Space Telescope (http://www.jwst.nasa.gov/), which is scheduled for launch in 2013. The need for a meta-analysis for this complex system is critical here, since there will be no opportunity to assemble the entire system for a full system test. Assessments about performance and reliability will need to be based on the appropriate combination of component or sub-system level data and understanding. Once the system is launched there will be little opportunity to fix it, so our understanding needs to adequately and competently represent the system.

Where I encounter the problem of good system reliability estimation and prediction is in conjunction with a population of complex systems (here munitions), where a full-system test is destructive, and hence very expensive and only available in very small numbers. However, there is a wealth of other data, from component or sub-system levels, that is less directly a measure of system reliability, but that we believe is informative. At Los Alamos National Laboratory, we have developed some methodology that is helpful for combining data from multiple data sources into a single analysis, as shown schematically in Figure 3. See Anderson-Cook et al. (2007, 2008) for more details. Various sources of data are combined into one Bayesian analysis, where the building blocks of the analysis are the component reliabilities. By using engineering understanding of how these components are connected, a system reliability estimate can be obtained. For example, for a series system, where all of the modeled components need to function correctly for the system to work, the system reliability estimate is defined as the product of the component reliability estimates. The results of the analysis enable prediction of reliability at both the system and component levels. Since understanding is available at the lower levels, critical components which are driving changes in system reliability can be identified for increased monitoring or maintenance. In addition, there is an opportunity to examine discrepancies between data sources, which can help with our overall understanding of the data and the system.

Figure 3: Schematic of approach to modeling system reliability based on multiple data sources.

Consider the sample series system shown in Figure 4, with 5 components and 8 different types of data available at the system level, the sub-system level and for each of the components. Some of the data are pass/fail observed at different ages, similar to what we would have for reliability estimation using a logistic or probit model. Other components have degradation data which is compared to an operational limit. Component 2 has two different types of data available, which represent alternate measures of component reliability. Based on this collection of data, there are 5 different ways that system reliability can be estimated: directly from the system-level data, or as products of lower-level estimates, for example R_sys = R_1 × R_2 × R_3 × R_4 × R_5, where R_i is the reliability of element i (the sub-system data and the two alternate data types for component 2 supply the remaining combinations). Clearly, the reliability estimate based on the system data itself is the most direct method to estimate system reliability, but if the sample size available from this type of data is very small, then incorporating alternate data from other less direct sources could potentially be advantageous.


    Figure 4: Sample complex system with different available data types.
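As a concrete illustration of the product-of-components idea, here is a minimal sketch, with invented pass/fail counts, of a Bayesian version of the component-level estimator: each component reliability gets a Beta posterior from its own data, and posterior draws are multiplied to give a posterior for the series-system reliability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pass/fail data (successes, trials) for five components
# in a series system; counts are invented for illustration.
component_data = [(48, 50), (95, 100), (29, 30), (60, 62), (40, 40)]

# With a Beta(1, 1) prior, the posterior for a component reliability
# after s successes in n trials is Beta(1 + s, 1 + n - s).
draws = np.column_stack([
    rng.beta(1 + s, 1 + n - s, size=10_000) for s, n in component_data
])

# Series system: every component must work, so each posterior draw of
# system reliability is the product of the five component draws.
system_draws = draws.prod(axis=1)

lo, hi = np.quantile(system_draws, [0.025, 0.975])
print(f"posterior mean system reliability: {system_draws.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

The Monte Carlo product propagates each component's uncertainty into the system-level posterior without any normality approximation, which is the appeal of the Bayesian route noted earlier.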

Since the data combination of lower level data sources implies many more assumptions for the correct estimation of system reliability, we can clearly get ourselves into trouble quite easily if our understanding of the system is not correct. For example, the system model might be missing a component, or there may be connectivity issues between components (components 1 and 2 each work separately, but their combination does not work). There may also be calibration problems, where the test for the lower level data does not match the requirements of that component during a full-system test. In addition, if the data are not collected independently from the different data sources, we could have problems with how to appropriately combine the data to estimate system reliability.

So for the system reliability example, what are some of the issues when we implement a meta-analysis? First, we are dealing with many different types of data, where many of them provide only a partial answer to the primary question of interest. Namely, knowing the reliability of component 1 does not directly allow us to estimate system reliability, but rather this information needs to be combined with engineering and science understanding of the system.

Second, there is a very natural ordering of the data. The system reliability measures are usually viewed as the gold standard for the question of interest, since they are directly measuring the quantity of interest. However, limited quantities of data may make the uncertainty from this data type alone impractically large. The other types of data are likely more abundant, but often have problems of miscalibration and are more dependent on detailed understanding of the system structure. The relative cost of different kinds of data can be quite extreme. Often a single full-system test may cost the equivalent of a hundred or a thousand observations from some of the other data types. This will allow us to collect much more of the lower level data for the same cost as the most direct tests.
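The cost trade-off can be quantified with a back-of-the-envelope calculation. This sketch uses invented reliabilities and the hundred-to-one cost ratio mentioned above to compare the standard error of a direct system estimate against a delta-method approximation for the product of component estimates bought with the same budget; the assumptions are mine, not from the talk.

```python
import numpy as np

# Assumed true component reliabilities (invented) for a 5-component
# series system; one full-system test is taken to cost the same as
# 100 lower-level observations.
p_comp = np.array([0.98, 0.97, 0.99, 0.98, 0.96])
p_sys = p_comp.prod()

# Option A: spend the budget on 10 full-system tests.
se_direct = np.sqrt(p_sys * (1 - p_sys) / 10)

# Option B: the same budget buys 1000 component observations, i.e.
# 200 tests of each component; the delta method approximates the
# standard error of the product of the component estimates.
n = 200
rel_var = (p_comp * (1 - p_comp) / n) / p_comp**2
se_product = p_sys * np.sqrt(rel_var.sum())

print(f"system reliability: {p_sys:.3f}")
print(f"SE from 10 system tests: {se_direct:.3f}")
print(f"SE from 200 tests per component: {se_product:.3f}")
```

Under these assumptions the indirect route is far more precise per dollar, but only if the series-system model and the component tests are trusted, which is exactly the caveat raised above.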

Third, since the system was designed to perform to meet specific standards, expert knowledge is typically available and helpful. If, as in some applications, we are relatively data-poor, then this expert understanding can be particularly beneficial to add to our analysis.

Finally, there may be some components in the system which cannot be tested. Consider again the system described in Figure 4, and suppose that no data are available for evaluating the reliability of component 1. Since four of the five methods for estimating system reliability include R_1, does this mean that we lose all ability to use this component level approach? By incorporating expert knowledge, we can proceed, but now with the understanding that our estimation has a stronger subjective aspect.

Example 3: Estimating Cosmological Parameters
The third example involves estimating cosmological parameters. Cosmology is the branch of astronomy that deals with the general structure and evolution of the universe. There has been considerable excitement and news from the Large Hadron Collider (http://www.lhc.ac.uk/) on the border between Switzerland and France, which is involved in investigation of some aspects of cosmology. In this example, we seek to understand some of the fundamental quantities that guide changes in the universe as well as our ability to measure those changes. The goal is to combine the snapshot of data about our universe which we have been able to observe in the last 20 to 50 years with complex scientific computer codes which model our current understanding of the universe’s evolution from the time of the Big Bang 13.7 billion years ago to the present. If this evolution is well understood, then our understanding of history, the present and future events will be substantially enriched.

The standard model of cosmology includes approximately 70% dark energy, 25% cold dark matter, and only about 5% baryons. Baryonic matter is the ordinary matter composed of protons, neutrons and electrons, which we associate with our daily life experiences. Dark energy and cold dark matter are elements which are theorized to exist because of observed behaviors that are not easily explained otherwise. There are approximately 20 parameters which are required in the various complex computer codes for modeling the evolution. They range from measures which affect our ability to collect data, like optical depth (a measure of transparency, which is related to the fraction of radiation or light that is scattered or absorbed along a viewing path), to fundamental parameters affecting the evolution itself, like Hubble’s constant (the rate of expansion of the universe). To calibrate our current understanding: some of these constants can only be estimated to within +/- 10%, which pales in comparison to our understanding of particle physics constants, which are typically known with 0.1% accuracy. Why is estimating these cosmological parameters so much more difficult than in other areas of physics? Essentially, we cannot observe all of the data needed to measure them directly – we have a few decades’ worth of data to model a 13.7 billion year phenomenon. The extrapolation back through history is facilitated by complex computer codes based on our scientific theories. By considering different values for the cosmological parameters as inputs to these codes, we can obtain estimates of what the state of the universe should be currently. These results can then be compared to what we are able to see with observed data.

There are multiple sources and types of observed data, which focus on different aspects of estimating the cosmological parameters. The Sloan Digital Sky Survey (http://www.sdss.org/), located in New Mexico, provides a detailed 3-dimensional map of over a million galaxies and quasars. Their existence, location and characteristics can be matched to output from the computer models. The Wilkinson Microwave Anisotropy Probe (http://map.gsfc.nasa.gov/) produced the first full-sky map of cosmic microwave background radiation left over from the Big Bang. By comparing the observed data from these and other sources to various combinations of the cosmological parameters input into the complex physics codes, we can obtain Bayesian posterior distributions for plausible values of these parameters that are consistent with the universe as we see it today.

By combining data from different sources in a meta-analysis, we are able to refine our estimates of these parameters, since various data types contribute differently to our understanding of these values. The weaknesses of some data types to predict some cosmological parameters well are mitigated by better estimation of these from other data types. Because of the small time window of observed data, the estimates of the parameters are typically highly dependent on other parameter values and hence are quite correlated. As we add more data types to our analysis, we are able to reduce some of the uncertainties with these parameter estimates by finding the subset of values that are mutually consistent with the different observations.
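The compare-simulator-to-data loop can be sketched in a few lines. This toy example is entirely hypothetical: a one-parameter function stands in for the physics codes, and a simple accept/reject tolerance rule (approximate Bayesian computation) keeps the parameter draws whose simulated output matches an observed summary.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulator(theta):
    # Toy stand-in for a physics code: maps one parameter to a
    # predicted observable summary, with a little simulation noise.
    return 70.0 * theta + rng.normal(0.0, 0.5)

observed = 50.4  # hypothetical observed summary statistic

# Draw candidate parameter values from a prior, run the simulator,
# and keep the draws whose output lands near the observation: a
# simple accept/reject form of approximate Bayesian computation.
prior_draws = rng.uniform(0.5, 1.0, size=20_000)
sims = np.array([simulator(t) for t in prior_draws])
accepted = prior_draws[np.abs(sims - observed) < 1.0]

print(f"accepted {accepted.size} of {prior_draws.size} draws")
print(f"posterior mean of theta: {accepted.mean():.3f}")
```

Adding a second, independent observable would simply tighten the accepted set to the values consistent with both, which is the mutual-consistency effect described above.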

There is one additional wrinkle added to these analyses. Our ability to observe the universe is evolving rapidly. If we examine the data for the 3-dimensional map of the sky from 20 years ago, only a portion of the sky was mapped, and it included only 1100 galaxies. The Sloan telescope is able to map a larger portion of the visible sky and, with improved resolution, can identify over one million galaxies. So even in our brief snapshot of history, the change in data quality is moving very rapidly. The precision and calibration of these measurements is changing quickly even in the short time span of the available data. With each new wave of improved data resolution, the underlying astronomy theory and related computer codes need to evolve to incorporate the new understanding gained.

What are some of the key issues illustrated with this example? First, this is a very high dimensional problem – in terms of the observed responses, the large number of cosmological parameters that we wish to estimate, and the types of data potentially available to include in the meta-analysis. This complexity is both a complication and an advantage. The complication involves the requirements for computational intensity for the analysis. The advantage is that the relative weaknesses of any particular data type can be partially reduced by combining it with other data types which have different areas of strength and weakness.

Second, this example involves both physical data and outputs from computer codes. Physical data typically have stochastic measurement errors, while the results from computer experiments more commonly have systematic deviations from reality because of bias introduced from imperfect or evolving theory. Any analysis which combines these different types of data should recognize these different characteristics and incorporate them appropriately.

Finally, there are substantial constraints on where we can meaningfully explore. Our physical data are restricted to the present time, which represents only a tiny fraction of the history which we seek to model. The computer codes also have limited ranges of applicability because they are bounded by our scientific understanding of the underlying mechanisms driving the evolution.

Recurring Themes in Analyses of Meta-Analyses
In each of the examples presented, the goal of the analysis has been to answer the primary question of interest using all available and relevant information. Even though our knowledge and available data may not be as complete as we might like, our expectation is that we must answer the question based on our best available information. This focus on facilitating good decision-making is both empowering and potentially dangerous. The need to make a decision prompts us into action, but it is essential to carefully specify our underlying assumptions and also to provide caveats where necessary to itemize where there are potential gaps in our knowledge. We also fully expect to update our estimates and understanding as new data become available.

The differences between our data types are again a complication and an opportunity. Our model to answer our primary question of interest needs to be flexible and rich enough to incorporate the different types of direct and indirect data. This is typically quite challenging methodologically. The opportunity comes from the different characteristics of the data across the various types – the weaknesses of one data type might be the strength of another. We can hope to leverage the strengths of all our data to help improve the overall results.

Finally, the existence of discrepancies between data types initially seems to be at best a nuisance. However, by considering the longer term view of the scientific process, we see that studying and modeling the discrepancy between data types can increase our understanding of our problem. Some of the huge steps forward in scientific exploration have come because a researcher noticed a small systematic discrepancy between the observed data and the results suggested by the existing theory. Understanding potential gaps in our knowledge can fuel further investigations as well as suggest new data types to collect.

Graphical methods can be helpful in the process of examining differences between our data types. Roger Hoerl said in his 1995 Youden address that “the popularity of graphical methods is partially due to the ability to stimulate inductive thinking”. This inductive thinking is key to the evolution of our theory and understanding.

By considering discrepancy and incorporating it into our models, we can also more accurately quantify the uncertainty of our estimation to include not only variance related uncertainty, but also bias from our model not being entirely correct. Consider the plot in Figure 5, which shows the reliability estimates for the sample system given in Figure 4. By carefully examining the pattern of reliability from the various estimates, perhaps some calibration problems between different data types can be identified. In addition, by including a discrepancy measure in our model for system reliability, we can hope to more appropriately summarize the true uncertainty in our estimate.

Figure 5: Five system reliability estimates based on different subsets of data for the sample system given in Figure 4.
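One way to see what a discrepancy term buys is a small numerical sketch. The example below, built on simulated measurements of one constant by two hypothetical methods, contrasts naive pooling against a crude random-effects summary that folds the between-method spread into the reported uncertainty; both the data and the adjustment are my own illustration, not the methodology from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated measurements of one constant by two hypothetical methods;
# method B carries a systematic bias of +0.12 (all values invented).
method_a = rng.normal(10.00, 0.05, size=8)
method_b = rng.normal(10.12, 0.05, size=8)

means = np.array([method_a.mean(), method_b.mean()])
sems = np.array([method_a.std(ddof=1), method_b.std(ddof=1)]) / np.sqrt(8)

# Naive pooling treats all 16 points as one sample and understates
# the uncertainty when the methods disagree systematically.
pooled = np.concatenate([method_a, method_b])
naive_se = pooled.std(ddof=1) / np.sqrt(pooled.size)

# A discrepancy-aware summary (a crude random-effects estimate) adds
# a between-method variance component to the reported uncertainty.
between_var = max(means.var(ddof=1) - sems.mean() ** 2, 0.0)
combined = means.mean()
combined_se = np.sqrt((sems.mean() ** 2 + between_var) / 2)

print(f"method means: {np.round(means, 3)}")
print(f"combined estimate: {combined:.3f}")
print(f"naive SE: {naive_se:.3f} vs discrepancy-aware SE: {combined_se:.3f}")
```

The discrepancy-aware interval is wider, which is the honest answer when the systematic difference between methods has not been resolved.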

Sequential Data Collection
Now we consider the design of experiments or data collection portion of this problem. We can think of this as a “design within a design”, where first we need to identify what type of data we need to collect, and then within that data type specify particular combinations of input factors to consider. In the system reliability example, we would first need to decide that we want some additional component 1 data, and then we would need to decide what ages of that component we want to test. For the cosmology example, we would first need to decide that we want some additional data from the Sloan Digital Sky Survey, and then specify what particular data to collect.

Box, Hunter and Hunter (2005) present the idea of iterating between deductive and inductive learning. With deductive thinking, we are examining our current theory and checking whether the observed data match it, while with inductive thinking we are using the discrepancies between our observations and the theory to postulate new versions of the theory. Therefore, the sequential data analysis that I am talking about extends beyond the usual response surface methodology sequential design notion, where we are using stages of experimentation to inform us about where future data should be collected. As statisticians, I think we are quite good at developing methods for the deductive part of this process. However, I feel that we have potential to be more involved in the inductive phase as well, by providing graphical and numerical tools to help illustrate the nature of the discrepancies between our data types in the meta-analysis. This can help guide the evolution of the underlying theory.

So perhaps the ideal path for enduring values presented earlier in Figure 2 was too naïve. Instead of a smoothly evolving reduction in uncertainty, where previous estimates remain consistent with subsequent values, perhaps we need to incorporate the jumps that will result from the evolution of our supporting theory. Figure 6 shows how this pattern might look, with changes in our understanding being illustrated with different colors. Within a particular version of the theory, we might hope to improve the precision of our estimates. But as new theory is developed, then our estimate might “jump” to reflect this change in understanding.

Figure 6: Updated evolution of knowledge for parameter values when changes in underlying theory are included.

Figure 7 shows how we might think about the problem of sequential data collection for a meta-analysis. Earlier we talked about some of the analysis issues that we might encounter with Phase 1 as we perform a first analysis. Sequential data collection considers the decision making that occurs between phases 1 and 2 to determine what would be the best new data to obtain to maximally improve our estimates with a combined analysis of the current data with the newly collected data.

Before being able to find a particular answer to the question of what new data to collect for a particular application, we need to consider what improvement means here. Perhaps the most obvious improvement would be to focus on reducing the variability of our parameter estimates or predicted values. This, of course, is conditional on having the model correctly specified, and is a familiar topic for statisticians. For the speed of light example discussed earlier, this would correspond to collecting more data from one particular study, which would reduce the uncertainty associated with that study.

There are some other aspects of improvement in this broader problem that are also worthy of consideration. We could choose to actively incorporate parameters that capture discrepancies between different data types, and then work to estimate the discrepancy portion of our model more precisely. In familiar terminology, this objective seeks to understand the systematic bias portion of the model better. Again, for the speed of light example, this might look like collecting more data from two or more different methods for estimating that constant, and then working to understand the systematic differences between the methods better.

    Figure 7: Overview of sequential data collection for meta-analyses

Finally, we can look for problems or omissions in the model which cannot currently be detected by our data. We can think of these as “unknown unknowns”, and seeking to improve our results in this area would imply collecting new data types that can test the fundamental assumptions of our model. A simple example of this would be if we are modeling the relationship between a single input and a response, and we assume a straight line model is adequate. If we have only collected data at two values of the input, we have no ability to check whether the straight line model is appropriate, and our data give us no warning signs that the model could be incorrect. Looking for an unknown unknown would mean that we would collect a new type of data, here at a third input value, to check for curvature.
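The straight-line illustration is easy to make concrete. In this hypothetical sketch, two design points fit any line perfectly; only after adding a third input value can the data contradict the linearity assumption.

```python
import numpy as np

# With data at only two input values, a straight line fits perfectly
# no matter what the true relationship is, so curvature is invisible.
x2 = np.array([0.0, 1.0])
y2 = np.array([1.0, 3.0])
line = np.polyfit(x2, y2, 1)  # exact fit, zero residual

# A third input value (a new "type" of data for this model) enables a
# lack-of-fit check: compare the line's prediction at the new point
# with what is actually observed there.
x_new, y_new = 0.5, 2.6  # hypothetical new observation
predicted = np.polyval(line, x_new)

print(f"line predicts {predicted:.2f} at x={x_new}, observed {y_new:.2f}")
print(f"curvature signal (residual): {y_new - predicted:+.2f}")
```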

If we are trying to decide between these three objectives, it will be helpful to think of the decision in two steps: first, determine what proportion of our new budget we want to dedicate to each objective; second, develop a strategy for optimizing what new data to collect within that objective.

Conclusions
I would like to conclude with a few thoughts about what Jack Youden might have thought about this topic. John Gorman was quoted in John Cornell’s 2006 Youden address with the thought that Jack Youden was noted for his “ingenuity and appeal to practitioners”. One of the things that a meta-analysis involving multiple data types can do is help us answer difficult practical questions using the best available information combined in a single analysis.

Jack Youden made substantial contributions in the areas of design of experiments and understanding measurement systems. We can think about measurement systems as capturing the truth in data without systematic bias. Meta-analysis methods consider variations of these research areas, broadened to deal with multiple direct and indirect data sources. In his 2000 Youden address, Geoff Vining said that “Jack Youden was someone who appreciated the fundamental role statistics can play in industry, and even more importantly, he did something about it”. The area of meta-analysis can help facilitate better decision-making by using all available information, and this in turn can help expand the roles that statisticians have in influencing policy and the development of science.

Gerry Hahn said in his 2003 Youden address, “work on important problems – those with the highest impact are not necessarily technically the most challenging”. The area of meta-analysis certainly has the potential for a very high impact, because of its ability to contribute to high level decision-making. The bonus for research statisticians is that there are also some very big technical challenges in this area. This is a research area with both abundant methodological challenges for appropriately combining data and implementation issues for different applications.

I hope that this discussion of meta-analysis has been thought-provoking and has added some interesting and useful ideas to the collection of Youden addresses.

References:
1. Anderson-Cook, C.M., Graves, T., Hamada, M., Hengartner, N., Johnson, V., Reese, C.S., Wilson, A.G. (2007) “Bayesian Stockpile Reliability Methodology for Complex Systems”, Journal of the Military Operations Research Society 12, 25-37.
2. Anderson-Cook, C.M., Graves, T., Hengartner, N., Klamann, R., Wiedlea, A.K., Wilson, A.G., Anderson, G., Lopez, G. (2008) “Reliability Modeling Using Both System Test and Quality Assurance Data”, Journal of the Military Operations Research Society (in press).
3. Box, G.E.P., Hunter, J.S. and Hunter, W.G. (2005) Statistics for Experimenters: Design, Innovation and Discovery. New York: Wiley.
4. Cornell, J.A. (2006) “Youden Address: Remembering Jack Youden”, http://216.171.160.55/documents/newsletters/Winter07StatDiv.pdf
5. Hahn, G.J. (2003) “Youden Address: The Embedded Statistician”, http://216.171.160.55/documents/newsletters/STAT_0104_ocwf394.pdf
6. Hoerl, R. (1995) “Youden Address: Using On-line Process Data to Improve Quality”, http://216.171.160.55/documents/newsletters/Vol%2016%20No%202%20Winter%201996.pdf
7. Steinberg, D.M., editor (2008) “Future of Industrial Statistics: A Panel Discussion”, Technometrics 50, 103-127.
8. Vining, G.G. (2000) “Youden Address: A Call to Action”, http://216.171.160.55/documents/newsletters/Winter%202001.pdf
9. Youden, W.J. (1972) “Enduring Values”, Technometrics 14, 1-11.


Statistical Thinking: Past, Present, and Future
by Bob Mitchell

The Statistics Division sponsored an invited session at the 52nd annual Fall Technical Conference in Mesa, AZ (October 11, 2008) focusing on the development, evolution, and deployment of “Statistical Thinking”.

Doug Hlavacek, immediate Past Chair, moderated the panel discussion; discussants included Roger Hoerl, Gordon Clark and Bob Mitchell – all Past Chairs of the Statistics Division. Roger Hoerl presented the history of Statistical Thinking, explaining that the Statistical Thinking philosophy owes its roots to W. Edwards Deming, though Dr. Deming never used the phrase “Statistical Thinking”. Roger attributes the popularization and dissemination of Statistical Thinking concepts to Ron Snee. Back in 1986, Ron defined statistical thinking as thought processes, not formulas, and clarified the distinction and synergy between statistical thinking and statistical methods. Heero Hacquebord, a student of Deming, began teaching courses in Statistical Thinking for Management in 1987. In 1994 the Statistics Division chartered a tactical planning team to formally develop and deploy “Statistical Thinking”. “Statistical Thinking Everywhere” became the Division vision. An official definition of Statistical Thinking was published in the 1996 Glossary & Tables for Statistical Quality Control, and a Special Publication newsletter was published for its members. In 2000, the tactical team wrote a booklet titled “Improving Performance Through Statistical Thinking”. Many conference presentations and additional publications have been organized to disseminate the message.

Statistical Thinking principles have since become cornerstones of major improvement initiatives such as TQM and Six Sigma. Gordon Clark compared and contrasted SQC to SPC, and offered that Statistical Thinking concepts be used to strengthen and enhance SQC towards a more rigorous process improvement strategy, such as that advocated by Hoerl and Snee in their 2002 book, Statistical Thinking – Improving Business Performance.

Bob Mitchell discussed a case study in one multinational manufacturing company where Statistical Thinking concepts are being re-introduced in its fusion of DMAIC, DFSS, and Lean methodologies. Roger Hoerl wrapped up the presentation portion of the FTC Invited Session with a discussion of a continuous improvement system comprised of process control, process improvement, and product and process redesign. We tend to treat every new improvement method as a “float in the parade” (Jim Buckman, Juran Institute) rather than a system of interconnected improvement processes. Summarizing, statistical thinking concepts are timeless, and an idea whose time has come again.

A panel discussion followed the presentations. Many questions focused on how to further integrate statistical thinking into formal education and training – at the K-12, undergrad, and graduate levels. A copy of the presentation slides (PDF format) is available for viewing and download from the Statistics Division website at www.asqstatdiv.org/powerpoint.htm.

    http://216.171.160.55/documents/newsletters/Winter07StatDiv.pdfhttp://216.171.160.55/documents/newsletters/STAT_0104_ocwf394http://216.171.160.55/documents/newsletters/Vol%2016%20No%2http://216.171.160.55/documents/newsletters/Winter%202001.pdf


    Ronald Does Receives 2008 ASQ Statistics Division’s Hunter Award

    The recipient of the 2008 William G. Hunter Award is Ronald Does. The Statistics Division of the American Society for Quality (ASQ) established the Hunter Award in 1987 in memory of the Division’s founding chair to promote, encourage and acknowledge outstanding accomplishments during a career in the broad field of applied statistics. The attributes that characterize Bill Hunter’s career—consultant, educator for practitioners, communicator, and integrator of statistical thinking into other disciplines—clearly apply as well to Ronald.

    Hunter Award Acceptance Speech by Ronald Does

    Dear fellow statisticians and quality professionals,

    It is exciting to be here, and a real honor to receive the William G. Hunter Award 2008 from the Statistics Division of the ASQ. I never met Bill Hunter; I was not yet involved in industrial statistics before his untimely death in 1986. I started my career in mathematical statistics. After finishing my PhD thesis in 1982, I changed focus to medical statistics and psychometrics. Later, in 1989, I got involved in industrial statistics at Philips Electronics. The first advice I received from my new colleagues was to read the book by Box, Hunter and Hunter. The reason was clear: because I was not familiar with industrial statistics, I had to learn it from authors who were really practicing statisticians. It took them years to write this landmark book.

    After hearing the good news from the chairman of the Hunter Award committee, Bob Mitchell, I read about the legacy of Bill Hunter. Bill was a special statistician and a special person. He was passionate about teaching and applying statistics; it did not matter to him at which level he had to teach. He worked as a statistician in third world countries to make a difference in the lives of less fortunate people. His project in the city garage in Madison is also famous. After reading about his life and scientific work, it has become clear to me what a real honor it is to receive this award.

    I would like to thank Jeroen de Mast for nominating me, as well as Martina Vandebroek, Jaap van den Heuvel, Stefan Steiner and Geoff Vining for their supporting letters.

    For the past 15 years I have been the managing director of the Institute for Business and Industrial Statistics (IBIS), a consultancy firm owned by the University of Amsterdam. The interaction between scientific research and the application of quality technology via our consultancy work is the core operating principle of the institute. This is reflected in the type of people who work for the institute, all of whom are young professionals with strong ambitions in both the academic world and in business and industry. My colleague Jeroen de Mast is one of the most talented young researchers and consultants in the field of quantitative improvement strategies. He also believes it is all-important to train people in profit and non-profit organizations in good research and decision methodologies. During the last 10 years we have trained hundreds of professionals in statistical methods, Six Sigma and statistical thinking at diverse companies in electronics, food, finance, healthcare, plastics and semiconductors.

    One foot in practice, the other in academia: that is how we practice our profession. It is a view strongly advocated by our institute’s inspirator, Søren Bisgaard. I met Søren for the first time at the 1999 ISI satellite conference on industrial statistics in Linköping, Sweden. During this conference, a special workshop was organized to discuss the viability of the idea of forming an applied statistics organization in Europe. This workshop, led by Søren Bisgaard, was attended by about 20 statisticians, several of whom would later play prominent roles in what became the European Network for Business and Industrial Statistics (ENBIS). At the following meeting in Eindhoven, the original mission and vision of ENBIS were discussed and formulated, and a founding board was elected, chaired by Søren Bisgaard. The model was ASQ’s Statistics Division, of which Bill Hunter was the founding chair. I offered to take on the administrative burden via my organization IBIS and to organize a founding kickoff conference in Amsterdam on December 11, 2000. The kickoff conference attracted approximately 80 statisticians and statistical practitioners from all over Europe. ENBIS was officially founded in June 2001 as “an autonomous Society having as its objective the development and improvement of statistical methods, and their application, throughout Europe, all this in the widest sense of the words.” Since the first meeting, membership has grown to about 1300 from nearly all European countries.

    Jeroen and I have worked pro bono for ENBIS because we knew from the start that its mission and vision were unique. The benefits have been really great: it has introduced us to a completely new world, and we have made many friends from all over Europe and abroad. One of them was, as I already mentioned, Søren Bisgaard. Søren is an ASQ Fellow, Shewhart and Box medalist, and, six years ago, winner of the Hunter Award. He has become one of my closest friends and we have a very fruitful cooperation; the interaction between science and application is how our joint articles originate. Currently, we work together with Jaap van den Heuvel (CEO of one of the largest hospitals in the Netherlands) on healthcare quality, just as Bill Hunter did years ago. Some of the apparent conflicts between quality and cost of health care are rooted in confusions about the definition of quality, confusions that may impede progress in solving problems with the health care systems and paralyze the leadership. Although there are crucial differences between manufacturing and health care, we show that definitions and concepts that have evolved in the manufacturing industry may help to explain the economics of quality improvement, and show how we can improve quality while reducing cost in health care. We also study several other important concepts and ideas of modern quality management that have evolved in other application areas and that can be adopted by the health care industry, helping us move ahead without having to relearn the same painful and costly lessons.

    Again, thank you very much. I am deeply honored to receive this award that bears the name of the person who was a real ambassador for statistics.

    Ronald Does


    52nd Annual Fall Technical Conference
    by Frank Rossi

    On behalf of the organizing committee, we would like to thank many of you for making this year’s Fall Technical Conference (FTC) enjoyable and successful. The FTC was held October 9th and 10th at the Hilton Phoenix East in Mesa, Arizona. The conference theme was “Statistics & Quality: Coming to the Table for Growth and Improvement”. Robert Rodriguez of the SAS Institute delivered the plenary address, “Questions and Answers: Statistics and Quality in the Business World”. Christine Anderson-Cook of the Los Alamos National Laboratory delivered the W.J. Youden Memorial Address; her topic was Sequential Experimentation for Meta-Analyses. Ronald Does from the University of Amsterdam received the William G. Hunter Award. The Statistics Division presents this award annually in memory of its founding chair; the award recognizes outstanding accomplishments in the broad field of applied statistics.

    The conference program consisted of 18 sessions comprising 5 invited presentations, 4 panel discussions and 21 contributed papers. The sessions covered a variety of topics including Design and Analysis of Experiments, Statistical Process Control, Process Capability, Data Mining and Bowling. Short courses in CUSUM and EWMA procedures; designing and analyzing mixture experiments; time series analysis and forecasting; and data quality and record linkage techniques were offered in addition to the conference.

    The 2009 Fall Technical Conference will be held at the Hilton Indianapolis in Indianapolis, Indiana, October 8th and 9th. See the call for papers elsewhere in this issue for instructions on submitting an abstract if you wish to give a presentation at the conference.



    Edward Schilling Memorial
    by Dean Neubauer

    Edward Schilling, Professor Emeritus of the John D. Hromi Center for Quality and Applied Statistics at Rochester Institute of Technology (RIT), statistician and internationally recognized expert in statistical quality control, passed away on November 1 after an extended illness.

    Professor Schilling was Chair of the Master’s degree program in Applied and Mathematical Statistics from 1983 to 1992, and Director of the Center for Quality and Applied Statistics from 1992 to 1996. He was a Fellow of the American Society for Quality (ASQ), the American Statistical Association (ASA), and the American Society for Testing and Materials (ASTM), as well as a member of the Institute of Mathematical Statistics (IMS) and the American Economic Association (AEA). He was registered as a professional engineer in California and was an ASQ Certified Quality Engineer (CQE) and an ASQ Certified Reliability Engineer (CRE).

    Prior to joining RIT he was manager of the Lighting Quality Operation for the Lighting Business Group of the General Electric Company. He received his B.A. and M.B.A. degrees from SUNY Buffalo, and his M.S. and Ph.D. degrees in statistics from Rutgers University, where he studied under ASQ Honorary Members Dr. Ellis R. Ott, Dr. Harold F. Dodge and Dr. Mason E. Wescott. He served on the faculties of SUNY Buffalo, Rutgers University and Case Western Reserve University. He had extensive industrial experience in quality engineering at RCA and the Carborundum Co., and in statistical consulting and quality management at General Electric.

    He was awarded many honors for his work in quality control by ASQ, including the 1983 Shewhart Medal for outstanding technical leadership; the 1999 E.L. Grant Award for the development and presentation of educational programs; the 2005 Freund-Marquardt Medal for contributions to management standards; and ASQ’s highest award, the Distinguished Service Medal, in 2002. In addition, he was the only four-time recipient of the Brumbaugh Award, presented by ASQ for the paper published in the preceding year judged to make the largest single contribution to the development of industrial application of quality control. He was also the recipient of two awards named for his former professors: the 1984 Ellis R. Ott Award for contributions to quality management, given by the Metropolitan New York Section of ASQ, and the 1993 H.F. Dodge Award from the ASTM E11 Committee on Quality and Statistics. He received the ASTM Award of Merit in 2002, and he was honored by being invited to give the 1986 W.J. Youden Memorial Address at the Joint ASQ/ASA Annual Fall Technical Conference. In 2006, he accepted an award for lifetime contributions to statistics at the Joint Research Conference on Statistical Quality, Industry and Technology.

    Dr. Schilling published extensively in the field of quality control and statistics. He served as Founding Series Editor for the Marcel Dekker series of books on Quality and Reliability, and he was an associate editor of the fifth edition of Juran’s Quality Handbook. His two books, Acceptance Sampling in Quality Control and Process Quality Control (with E.R. Ott and D.V. Neubauer), are among the leading texts in the field. The second edition of Acceptance Sampling in Quality Control (with D.V. Neubauer) will be published in 2009.

    On a more personal note, Ed, as he was known to all of us, was a very gentle soul who appreciated the honors he received but never called attention to them. Many of us who worked with Ed as a colleague know of his passion for quality and statistics and his love of family and friends. I wrote an article on Ellis R. Ott in 2007 that was published in Quality Digest [1] and asked Ed to contribute his thoughts on Dr. Ott. These words show his appreciation for Dr. Ott’s support and leadership, and how he wanted to treat others:

    “Few students have the good fortune to be part of a small, cohesive department led by so dynamic a person. Ellis loved statistics, and took a great interest in conveying that love to his carefully selected students. His faculty, as well, were outstanding, and each of them had industrial experience. Dr. Ott believed that experience in analyzing real data on real problems was absolutely necessary for an appreciation of the power and universal applicability of statistical methods.

    Who was this man, and what did he believe? In preparing the preface to the second edition of his book, I believe I may have unintentionally answered these questions in the last few lines, which address the Ott approach: “Who is this book for? Not for the close-minded who would use the data as a means to an end, but for the open-minded whose end is what the data means. For statistics as a science has its ultimate meaning only insofar as it is developed and used in the search for truth. The romance of statistics is the dream of reality.” Surely, above all, Ellis Ott was a realist.

    Recently, I asked our publisher to include the following paragraph at the end of the preface as a testimonial to Ed in the Acceptance Sampling in Quality Control, 2nd ed. book [2] that we just finished:

    “Before this text went into publication, Dr. Edward G. Schilling, the book’s principal author, passed away. He left behind many dear friends and loved ones, but also a legacy in the field of acceptance sampling. Dr. Harold F. Dodge, one of Ed’s professors at Rutgers, is known as the father of acceptance sampling due to his pioneering work in the field. Dr. Dodge mentored his young protégé and wrote papers with him on acceptance sampling while Ed was at Rutgers. Little did Dr. Dodge know that Ed would become a pioneer and mentor himself in shaping what the world knows today as modern acceptance sampling. This book is a testimony not only to the work of Dodge, Romig and others, but also, to a larger extent, to the work done by Dr. Schilling and others to shape the field and extend it in ways that the early pioneers had perhaps envisioned but did not pursue. Ed’s work does not lie entirely in the statistical literature; he also played an integral role in the development of acceptance sampling standards with the Department of Defense, ISO TC 69, ANSI/ASQ, and ASTM. For this body of work, Dr. Edward G. Schilling should be known as the father of modern acceptance sampling. As a former student, a colleague at RIT and on the ISO TC 69 and ASTM E11 Committees, and co-author with Ed on two books, I feel that I have lost a very dear friend. Of course, I can’t feel the loss that his family feels in losing a great husband and father, but I feel honored to have known such a great man.”

    The Edward G. Schilling Memorial Endowed Scholarship is being established at RIT. Scholarship contributions may be sent to: RIT Office of Development, P.O. Box 92765, Rochester, NY 14692-8865. Donations may also be made to the Niagara Aerospace Museum, P.O. Box 1181, 345 Third St., Niagara Falls, NY 14303, or to Lutheran Church of Our Savior, 2415 Chili Ave., Chili, NY.

    References:

    1. Neubauer, D.V., “Pl-Ott the Data!”, Quality Digest, May 2007, pp. 43-47.
    2. Schilling, E.G. and D.V. Neubauer, Acceptance Sampling in Quality Control, 2nd ed., CRC Press, Taylor & Francis Group, to be published in January 2009.


    Awards Showcase

    Four members of Statistics Division honored as 2008 ASQ Fellows

    Candidates for ASQ Fellow nomination must be senior members of ASQ with active experience in quality-related positions who have attained distinction in quality-related disciplines. Citations for our new Fellows are below:

    Jonathon Andell: For significant contributions to the quality assurance profession through quality management positions, consulting with Fortune 500 companies, pioneering work in the application of statistical methods to reduce emissions of substances that contribute to ozone layer depletion, and applying the tools of quality to improve other fields.

    Necip Doganaksoy: For his outstanding leadership in high-impact applications of statistics and statistical thinking to further quality improvement in product design, manufacturing, servicing, and business processes; for his numerous contributions to quality, reliability, and product improvement, as evidenced by his publications; and for his many years of service to the profession.

    Vijay Nair: For significant research contributions to quality improvement; for editorial service to ASQ journals; and for leadership in promoting industrial statistics.

    Gregory Piepel: For outstanding contributions to the experimental design and analysis of mixture experiments; for important applications of statistics and quality to nuclear waste immobilization; and for service to the profession.

    2007-2008 Division Volunteers Recognized at FTC Hospitality Event

    The Statistics Division relies on the time and effort generously contributed by our dedicated volunteers. The following people were recognized for their contributions during the previous year:

    Susan Albin – Ott Scholarship Governing Board
    Ted Allen – Nelson Award Committee Chair, Hunter Award Committee Member
    Davis Balestracci – Hunter Award Committee Member
    Nancy Belunis – Ott Scholarship Governing Board
    Soren Bisgaard – Hunter Award Committee Member
    Bob Brill – Fall Technical Conference Short Course Chair
    Galen Britz – Ott Scholarship Governing Board
    Gordon Clark – Youden Address Committee Chair
    John Cornell – Fall Technical Conference Short Course Instructor
    William Guthrie – Youden Address Committee Member
    Lynne Hare – Ott Scholarship Governing Board
    Doug Hlavacek – Ambassador, FTC Student Grants Committee Member
    Stuart Hunter – Ott Scholarship Governing Board
    Bill Meeker – Nelson Award Committee Member
    Bob Mitchell – Hunter Award Committee Chair, Nelson Award Committee Member
    Tom Murphy – Ott Scholarship Governing Board
    Julia O’Neill – Nelson Award Committee Member
    Robert Perry – Ott Scholarship Governing Board, Nelson Award Committee Member
    Lori Pfahler – Youden Address Committee Member
    Greg Piepel – Fall Technical Conference Short Course Instructor
    Paul Prew – FTC Student Grants Committee Chair
    Frank Rossi – Fall Technical Conference Program Committee Representative
    Susan Schall – Hunter Award Committee Member
    Brian Sersion – Newsletter Editor
    Ronald Snee – Ott Scholarship Governing Board
    Jennifer Van Mullekom – Youden Address Committee Member

    Congratulations to all our Award winners!



    Statistics Division Standards Committee Report: Meeting in Beijing
    by Mark E. Johnson

    The ISO TC69 30th annual meeting was held in Beijing, China, on October 11-17, 2008. The primary purpose of the meeting was to create resolutions regarding TC69 (which governs standards on applications of statistical methods) activities, to provide the impetus for subsequent work progress. Each subcommittee is responsible for maintaining existing standards and producing new ones as appropriate. My subcommittee, SC1, is presently deeply involved in the revision to ISO 3534-3, Statistics — Vocabulary and Symbols — Part 3: Design of Experiments, which is presently out for ballot as a Committee Draft (CD). In anticipation of the comments, revisions were made during the working group sessions, and an interim meeting in Orlando is planned for January 19-21, 2009. As convener of the group, I am responsible for progressing this document.

    The host nation chose the Lake Side Hotel at the Leisure City Convention Center complex as the venue, a new facility built as part of the Olympics preparations. Over sixty international experts registered from fourteen countries, an attendance level somewhat lower than the recent European-hosted meetings, probably owing to the international financial concerns and the remoteness of the meeting for non-Asian member nations. Delegates represented Canada, China, Denmark, France, Germany, India, Italy, Japan, Korea, Malaysia, the Republic of South Africa, Slovakia, the UK, and the USA.

    The traditional Chairman Advisor Group (CAG) meeting took place on Sunday afternoon, October 12, for final preparations for the upcoming week’s sessions. The CAG consists of each country’s Head of Delegation and the Chair of each subcommittee. The subcommittees are SC1, SC4, SC5, SC6 and SC7, covering terminology and symbols (SC1), applications of statistical methods in product management (SC4), acceptance sampling (SC5), measurement methods and results (SC6), and the newest subcommittee on applications of statistical and related techniques for the implementation of Six Sigma (SC7). At the CAG meeting, Japan provided an agenda for a workshop scheduled during the week on a proposal for the creation of a new subcommittee (SC8) on “Statistics and Related Methodology for New Technology Development.” The topical area appears to resemble Design for Six Sigma (DFSS), since the workshop to be held as part of the meeting prominently mentions the notions of customer satisfaction surveys, voice of the customer, quality function deployment and robust parameter design. With a small collection of experts to review the documents, the proposed new subcommittee poses a challenge for the member countries to provide adequate support. On the other hand, this topical area and that of SC7 on Six Sigma may attract significant new participation to ISO/TC69. ASQ members interested in participating in this new subcommittee are encouraged to contact Mike Manteuffel ([email protected]) or Mark Johnson ([email protected]) about joining the US TC69 delegation.

    The US delegation met for dinner on Sunday night in a small but elegant banquet room. This kick-off dinner for the arriving US delegation has become a popular tradition in which to renew acquaintances, to plan for possible challenges ahead during the week, and to receive a West Texas pep talk from our Chairman Rudy Kittlitz, who has done an outstanding job of leading the US delegation. The US delegation at Beijing included R. Kittlitz (retired, du Pont), M. Boulanger (JISC Consulting), M.E. Johnson (U. Central Florida), A. Rainosek (U. South Alabama), J. Kim (3M Corp.), H. Wadsworth (retired, Georgia Institute of Technology), M. Manteuffel (ASQ), J. Amundson (ASQ) and Nien-Fang (NIST).

    Of particular interest to the Statistics Division may be the work of SC7 on Six Sigma methodology and applications. In its first year of existence, the subcommittee produced an interesting document, ISO TR 29901, “Selected Illustrations of Full Factorial Experiments with Four Factors,” with project leader M. Boulanger (US) and working group participants F. Boulanger (France), C. Harris (UK), J. Granveldt (Denmark), M. Johnson (US), and H. Shah (US). This document considers six distinct applications of 2⁴ full factorial experiments, with complete problem descriptions and analyses with software output. In the past year, two new technical guideline documents have been under development and are nearing completion. Another document on design of experiments is ISO TR 12845, “Selected Illustrations of Fractional Factorial Screening Experiments,” for which the comments from the (approved) ballot were handled during the meeting. The other document is “Selected Illustrations of Gage R&R Studies, Part 1: Continuous Variables.”
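    For readers who want to see what such a design looks like, here is a minimal sketch in R (our own illustration, not taken from ISO TR 29901; the factor names A through D and the ±1 coding are placeholder conventions):

    # Illustrative sketch only: build a 2^4 full factorial design in base R,
    # giving 16 runs, one for each combination of four two-level factors.
    design <- expand.grid(A = c(-1, 1), B = c(-1, 1),
                          C = c(-1, 1), D = c(-1, 1))
    nrow(design)   # 16
    # With a response y recorded for each run, main effects and interactions
    # could be estimated with lm(y ~ A * B * C * D, data = design).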

    The ballot for the Japanese proposal for a new subcommittee, “Statistics and Related Methodology for New Technology Development,” will proceed per a resolution approved at the closing TC69 plenary, which was held on October 17, 2008. Considerable progress was reported at the plenary, and the meeting as a whole was deemed a success. The venue was outside the sixth ring of Beijing, which successfully trapped the delegates on site and thus facilitated continued discussions in the evening. The next international meeting of ISO TC69 is to be hosted by Malaysia in Kuala Lumpur in June 2009 (tentatively June 20-26).

    My participation in ISO standards work would not be possible without the continuing support of the ASQ Statistics Division, for which I am very appreciative.




    Session at the 18th Simposio de Estadística in Cartagena, Colombia
    by Bill Woodall

    The Statistics Division supported travel for Kwok Tsui of Georgia Tech and me to present a session at the 18th Simposio de Estadística, held in Cartagena, Colombia, from August 11-15, 2008. This meeting was sponsored by the National University of Colombia and the International Society for Business and Industrial Statistics (ISBIS). There were nearly 400 attendees, many of them students.

    Kwok spoke on “Modeling of Disease Spread Simulation and Surveillance” while I gave “An Overview of Health-Related Monitoring”. The two-hour session was attended by roughly fifty participants, a good crowd considering that the bulk of the conference presentations were given in Spanish.

    Overall, we found the statisticians in Colombia to be very interested in quality-related topics such as design of experiments and statistical process control. There were, for example, student presentations on Six Sigma applications, process capability, and multivariate control charting.

    Stu Hunter gave the conference inaugural address, “100 Years of Industrial Statistics”, which was very well received. In the reception that followed, the number of requests for photographs with him was astounding. Bill Meeker gave several tutorials on “Experiences and Pitfalls in Reliability Data Analysis”. The opening and closing comments by Geoff Vining, representing ISBIS, were courageously delivered in Spanish.

    Our visit to the city of Cartagena was quite enjoyable, although the weather was hot. The restaurants were excellent. Our hotel, the Hotel Caribe, had several sloths and a large lizard moving freely in the tall trees of its courtyard, several small deer, and a monkey. This is not something one sees too often at conference hotels!

    The support of the Statistics Division for this session was greatly appreciated. For more information on the conference talks and participants, see http://www.ciencias.unal.edu.co/estadistica/simposio/index.html.

    Statistical Resources on the Web: The R Project for Statistical Computing (http://www.r-project.org/)
    by Mindy Hotchkiss

    This new feature in the Statistics Division newsletter will highlight resources available on the web of particular interest to industrial statisticians and quality and reliability engineers. Please feel free to contact me at [email protected] with any comments, or if you know of any particularly useful sites or tools that you would like to recommend.

    R is a free statistical computing and graphics package available at http://www.r-project.org/. R is part of the Free Software Foundation’s GNU project (http://www.gnu.org/) and provides an open-source option for statistical analysis and methodological research. It is widely used in the academic and biomedical communities, but less well known in the industrial arena.

    R does have a steeper learning curve than commercial point-and-click software for statistical analyses, since it is command-oriented rather than menu-oriented. R is actually a language and computing environment, rather than a typical analysis package, and this requires the user to write code. (Don’t stop reading yet.)

    R code is very similar to S (for those familiar with S) and is a fully functional, object-oriented programming language with considerable flexibility. R has become the programming language of choice for many statistical researchers, both because of its usefulness for simulations and because of its potential for extension: users can develop their own new functions and modules and make them available to others.
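    To give a flavor of the language, the following sketch (our own made-up example with invented data, not drawn from the R manuals) computes Shewhart individuals control limits and plots them using only base R functions:

    # Individuals (X) chart limits from a moving range, base R only.
    x <- c(10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4)  # invented data
    mr <- abs(diff(x))                 # lag-1 moving ranges
    center <- mean(x)
    sigma.hat <- mean(mr) / 1.128      # d2 constant for subgroups of size 2
    ucl <- center + 3 * sigma.hat
    lcl <- center - 3 * sigma.hat
    plot(x, type = "b", ylim = range(x, lcl, ucl),
         ylab = "Individual value", main = "Individuals chart (sketch)")
    abline(h = c(lcl, center, ucl), lty = c(2, 1, 2))

    Everything used here (diff, mean, plot, abline) ships with base R; no add-on packages are required.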

    Several manuals are available in HTML and PDF form to help new users get off the ground, including An Introduction to R, The R Language Definition (draft), and R Data Import/Export. Additional guidance on Writing R Extensions is available to help turn users into developers, for those who want to learn to create their own packages. Further reference material about the internals of R is also available, as is an avid online user community with a variety of active mailing lists and its own annual useR! conference. Various power users have assembled a slew of unofficial documentation, also available on the R website (click on Other under Documentation, then Contributed Documentation), much of it in the 100+ page range.

    The latest version of R, 2.8.1, was made available in December 2008.




    CALL FOR PAPERS
    53rd Annual Fall Technical Conference

    Quality and Statistics: Accelerating to Higher Performance

    October 8th & 9th, 2009
    Hilton Indianapolis / Indianapolis, IN

    Co-sponsored by:

    American Society for Quality: Chemical and Process Industry Division and Statistics Division
    American Statistical Association: Section on Physical and Engineering Sciences and Quality & Productivity Section

    We invite you to submit papers for presentation at the 53rd Fall Technical Conference, to be held October 8-9, 2009 in Indianapolis, IN. The Fall Technical Conference has long been a forum for both statistics and quality and is co-sponsored by the American Society for Quality (Chemical and Process Industry Division and Statistics Division) and the American Statistical Association (Section on Physical and Engineering Sciences and Section on Quality and Productivity). The goal of this conference is to engage researchers and practitioners in a dialogue that leads to more effective use of statistics to improve quality.

    If you are interested in presenting an applied or expository paper in any of the three parallel sessions (Statistics, Quality Control, or Tutorial/Case Studies), contact any of the committee members listed below, preferably by e-mail. Work should be strongly justified by application to a problem in engineering, manufacturing, the process/chemical industry, the physical sciences, or a service industry. The mathematical level of the papers may range from basic to that of the Journal of Quality Technology or Technometrics. Please note which level of audience is targeted (Introductory, Intermediate, or Advanced) so the committee can pair papers appropriately and plan a balanced program.

    The program committee welcomes any suggestions for special session topics or speakers. If you have ideas, please contact one of the program committee members listed below.

    Abstract Submission Deadline is February 27, 2009



    Committee Members:

    Q&P: Don McCormack, SAS Institute, (512) 916-8060, [email protected]
    SPES: Cheryl Dingus, Battelle Memorial Institute, (614) …, [email protected]
    STAT: Frank Rossi (Chair), Kraft Foods, (847) 646-5196, [email protected]
    CPID: Flor Castillo, Dow Chemical, (979) …, [email protected]

    It is important to follow the abstract format (provided below). Papers are selected based on subject matter, technical correctness, usefulness, interest, clarity, and readability.

    Abstract Format (use only a single page, please)

    Title of Presentation

    First Author, Second Author, Third Author (for each author, provide):
    affiliation
    phone number (day)
    fax number
    paper mail address
    e-mail address

    Presenter: Name of presenter

    Keywords: Include 3 to 5 key words or phrases

    Purpose: One sentence. To derive, prove, synthesize, review, present, inform, encourage, motivate, enlighten,exemplify, highlight, etc.

    Abstract

    The abstract should include the following 3 components:

    1. Motivation or Background:
    2. Description: Describe the work done.
    3. Significance: Are there improvements, applications, new abilities, new points of view, etc.? How will the status quo be changed?

    Session Preference (choose one):          Target Audience (choose one):
    ____ Statistics                           ____ Introductory/Practitioner
    ____ Quality Control                      ____ Intermediate
    ____ Tutorial/Case Study                  ____ Advanced/Theoretical


    ASQ Statistics Division Council Meeting Minutes: Highlights

    ASQ Statistics Division at 2008 Fall Technical Conference
    October 8, 2008 - Phoenix, AZ

    Participants: Daksha Chokshi, Doug Hlavacek, Vijay Nair, Bill Rodebaugh, Mark Kiel, Bob Mitchell, Frank Rossi, Gordon Clark, Geoff Vining, and Ron Snee

    Agenda:
    • Sponsored Events
    • Special Projects / Ongoing Activities
    • Awards / Scholarships / Fellows
    • Miscellaneous

    Sponsored Events
    • FTC 2008 (Phoenix) – Attendance at short courses was good. Program structure and overall attendance are similar to last year. “Statistical Thinking” panel session ready to go.
    • FTC 2009 (Indianapolis) – Program theme selected. Members will get the call for papers. Discussion of how to involve local statisticians. Planning underway for Statistics Division 30th Anniversary.
    • WCQI 2009 (Minneapolis) – Papers submitted for review by Technical Planning Committee. 4-5 papers from the division this year; no tutorials from any division.
    • WCQI 2010 – Considering conference-within-a-conference format.
    • ISBIS 2008 – Newsletter article submitted by Bill Woodall summarizing the experience.

    Special Projects / Ongoing Activities
    • Upcoming Challenges: Statistics & Statisticians – Ron Snee made a presentation on the “Future of the Statistics Profession”. Some thoughts to consider:
      a) Role of statisticians as leaders
      b) How can we better identify the statistical needs of industry?
      c) How can organizations use statistical thinking to create competitive advantage?
      The plan is to put together a team to address these topics jointly with other ASQ and ASA sections. Ron Snee and Roger Hoerl will be involved. Potential activities include sessions at conferences for discussion, workshops, and using the newsletter for continuing discussion.
    • Body of Knowledge – Reviewed goals of this initiative. The idea is to organize the website and improve access for members, then identify gaps. Logistics discussed.
    • How-to Series – Next topic and author are tentatively identified; working on logistics.
    • Newsletter – Fall 2008 issue now available. Beginning work on Winter 2009 issue. Nov 30 is the submission deadline.
    • Special Publications – Identifying content for the next in the series.

    Awards / Scholarships / Fellows
    • Four members of Statistics Division named ASQ Fellows – Vijay Nair, Jonathon Andell, Necip Doganaksoy, and Greg Piepel.

    Miscellaneous
    • Mark Johnson, Standards Chair, is on his way to Beijing to attend the ISO TC69 meeting. Thanks to the Division for the support!
    • We may have an opportunity to develop a Statistics stream at the inaugural Canadian Quality Conference. More to come.




    Treasurer’s Report

    [Table: 2008-2009 Budget vs. July-October Actual, income and expenses (continued)]